SAP HANA deployment guide

This deployment guide shows you how to deploy an SAP HANA system on Google Cloud by using Cloud Deployment Manager and a configuration file template to define your installation. The guide helps you configure the Compute Engine virtual machines (VMs), the persistent disks, and the Linux operating system to achieve the best performance for your SAP HANA system. The Deployment Manager template incorporates best practices from both Compute Engine and SAP.

Use this guide to deploy either a single-host scale-up or a multi-host scale-out SAP HANA system that does not include standby hosts.

If you need to include SAP HANA automatic host failover, use the SAP HANA Scale-Out System with Host Auto-Failover Deployment Guide instead.

If you need to deploy SAP HANA in a Linux high-availability cluster, use one of the SAP HANA high-availability cluster deployment guides instead.

Prerequisites

Before you begin, make sure that you meet the following prerequisites:

  • You have a Google Cloud account and project.
  • Virtual Private Cloud networking is set up with firewall rules or other methods to control access to your VMs.
  • You have access to the SAP HANA installation media.

  • If OS Login is enabled in your project metadata and you are deploying scale-out nodes, you need to disable OS Login temporarily until your deployment is complete. For deployment purposes, this procedure configures SSH keys in instance metadata. When OS Login is enabled, metadata-based SSH key configurations are disabled and the deployment fails. After deployment is complete, you can enable OS Login again (see the example after this list).

    For more information, see the OS Login documentation.
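
For example, one way to temporarily disable OS Login for the duration of the deployment is to set the enable-oslogin value in the project metadata; this is a minimal sketch that assumes OS Login is controlled through project metadata rather than an organization policy:

# Disable OS Login at the project level before you start the deployment.
gcloud compute project-info add-metadata --metadata enable-oslogin=FALSE

# Re-enable OS Login after the deployment is complete.
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE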

Setting up your Google account

You need a Google account to work with Google Cloud.

  1. Sign up for a Google account if you don't already have one.
  2. Log in to the Google Cloud Console, and create a new project.
  3. Enable your billing account.
  4. Configure SSH keys so that you can connect to your Compute Engine instances over SSH. You can create a new key pair by using the ssh-keygen utility, or let the gcloud command-line tool generate one for you.
  5. Use the gcloud command-line tool or the Cloud Console to add the SSH keys to your project metadata, as shown in the example after this list. This gives you access to any Compute Engine instance created within this project, except for instances that explicitly disable project-wide SSH keys.
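
For example, steps 4 and 5 might look like the following commands; the key file name and USERNAME are placeholders, and setting the ssh-keys metadata value replaces any existing project-wide keys, so include any existing keys in the value as well:

# Create an SSH key pair; the file name is an arbitrary example.
ssh-keygen -t rsa -f ~/.ssh/sap-deploy-key -C USERNAME

# Add the public key to the project metadata. This overwrites the existing
# project-wide ssh-keys value, so include any keys that are already defined.
gcloud compute project-info add-metadata \
    --metadata ssh-keys="USERNAME:$(cat ~/.ssh/sap-deploy-key.pub)"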

Creating a network

For security purposes, create a new network. You can control who has access by adding firewall rules or by using another access control method.

If your project has a default VPC network, don't use it. Instead, create your own VPC network so that the only firewall rules in effect are those that you create explicitly.

During deployment, VM instances typically require access to the internet to download Google's monitoring agent. If you are using one of the SAP-certified Linux images that are available from Google Cloud, the VM instance also requires access to the internet in order to register the license and to access OS vendor repositories. A configuration with a NAT gateway and with VM network tags supports this access, even if the target VMs do not have external IPs.

To set up networking:

  1. Go to Cloud Shell.

  2. To create a new network in the custom subnetworks mode, run:

    gcloud compute networks create NETWORK_NAME --subnet-mode custom

    Replace NETWORK_NAME with the name of the new network. The network name can contain only lowercase characters, digits, and the dash character (-).

    Specify --subnet-mode custom to avoid using the default auto mode, which automatically creates a subnet in each Compute Engine region. For more information, see Subnet creation mode.

  3. Create a subnetwork, and specify the region and IP range:

    gcloud compute networks subnets create SUBNETWORK_NAME \
            --network NETWORK_NAME --region REGION --range RANGE

    Replace the following:

    • SUBNETWORK_NAME: the name of the new subnetwork.
    • NETWORK_NAME: the name of the network you created in the previous step.
    • REGION: the region where you want the subnetwork.
    • RANGE: the IP address range, specified in CIDR format, such as 10.1.0.0/24. If you plan to add more than one subnetwork, assign non-overlapping CIDR IP ranges for each subnetwork in the network. Note that each subnetwork and its internal IP ranges are mapped to a single region.
  4. Optionally, repeat the previous step to add more subnetworks. (For a concrete example of steps 2 and 3, see the commands after this list.)
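
For example, the following commands create a network and a subnetwork similar to the ones used in the sample configuration file later in this guide; the names and the IP range are illustrative only:

gcloud compute networks create example-network --subnet-mode custom

gcloud compute networks subnets create example-subnet-us-central1 \
    --network example-network --region us-central1 --range 10.1.0.0/24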

Setting up a NAT gateway

If you need to create one or more VMs without public IP addresses, you need to use network address translation (NAT) to enable the VMs to access the internet. Use Cloud NAT, a Google Cloud distributed, software-defined managed service that lets VMs send outbound packets to the internet and receive any corresponding established inbound response packets. Alternatively, you can set up a separate VM as a NAT gateway.

To create a Cloud NAT instance for your project, see Using Cloud NAT.

After you configure Cloud NAT for your project, your VM instances can securely access the internet without a public IP address.
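
For example, a minimal Cloud NAT setup for the network that you created earlier might look like the following; the router and gateway names are arbitrary examples:

gcloud compute routers create example-router \
    --network NETWORK_NAME --region REGION

gcloud compute routers nats create example-nat-gateway \
    --router example-router --region REGION \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges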

Adding firewall rules

By default, an implied firewall rule blocks incoming connections from outside your Virtual Private Cloud (VPC) network. To allow incoming connections, set up a firewall rule for your VM. After an incoming connection is established with a VM, traffic is permitted in both directions over that connection.

You can also create a firewall rule to allow external access to specified ports, or to restrict access between VMs on the same network. If the default VPC network type is used, some additional default rules also apply, such as the default-allow-internal rule, which allows connectivity between VMs on the same network on all ports.

Depending on the IT policy that is applicable to your environment, you might need to isolate or otherwise restrict connectivity to your database host, which you can do by creating firewall rules.

Depending on your scenario, you can create firewall rules to allow access for:

  • The default SAP ports that are listed in TCP/IP of All SAP Products.
  • Connections from your computer or your corporate network environment to your Compute Engine VM instance. If you are unsure of what IP address to use, talk to your company's network administrator.
  • Communication between VMs in the SAP HANA subnetwork, including communication between nodes in an SAP HANA scale-out system or communication between the database server and application servers in a 3-tier architecture. You can enable communication between VMs by creating a firewall rule to allow traffic that originates from within the subnetwork.
  • SSH connections to your VM instance, including SSH from the browser.
  • Connection to your VM by using a third-party tool in Linux. Create a rule to allow access for the tool through your firewall.

To create a firewall rule:

Console

  1. In the Cloud Console, go to the Firewall rules page.

  2. At the top of the page, click Create firewall rule.

    • In the Network field, select the network where your VM is located.
    • In the Targets field, specify the resources on Google Cloud that this rule applies to. For example, specify All instances in the network. Or, to limit the rule to specific instances on Google Cloud, enter tags in Specified target tags.
    • In the Source filter field, select one of the following:
      • IP ranges to allow incoming traffic from specific IP addresses. Specify the range of IP addresses in the Source IP ranges field.
      • Subnets to allow incoming traffic from a particular subnetwork. Specify the subnetwork name in the following Subnets field. You can use this option to allow access between the VMs in a 3-tier or scale-out configuration.
    • In the Protocols and ports section, select Specified protocols and ports and enter tcp:[PORT_NUMBER].
  3. Click Create to create your firewall rule.

gcloud

Create a firewall rule by using the following command:

$ gcloud compute firewall-rules create FIREWALL_NAME \
    --direction=INGRESS --priority=1000 \
    --network=NETWORK_NAME --action=ALLOW --rules=PROTOCOL:PORT \
    --source-ranges=IP_RANGE --target-tags=NETWORK_TAGS
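
For example, the following command allows TCP, UDP, and ICMP traffic between VMs whose source addresses are in the example subnetwork range used earlier in this guide; the rule name and the range are placeholders for your own values:

$ gcloud compute firewall-rules create example-allow-internal \
    --direction=INGRESS --priority=1000 \
    --network=NETWORK_NAME --action=ALLOW --rules=tcp,udp,icmp \
    --source-ranges=10.1.0.0/24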

Creating a Cloud Storage bucket for the SAP HANA installation files

The installation files that contain the SAP HANA binaries must be stored in a Cloud Storage bucket before you can use Deployment Manager to install SAP HANA. Deployment Manager expects the files in the file formats provided by SAP. Depending on your version of SAP HANA, the file format might be a .zip file or .exe and .rar files.

To download the SAP HANA installation files, create a bucket, and upload the files to the bucket:

  1. From SAP Software Downloads, download all parts of the Linux x86_64 distribution of SAP HANA Platform Edition 1.0 or 2.0, as well as any applicable revision upgrades to your local drive.

    If your SAP Support Portal account does not allow access to the software and you believe that you should be entitled to the software, contact the SAP Global Support Customer Interaction Center.

  2. Use the Cloud Console to create a Cloud Storage bucket for storing the SAP HANA installation files. The bucket name must be unique across Google Cloud. (For a command-line alternative to steps 2 through 4, see the gsutil example after this list.)

    • During bucket creation, choose Standard for your storage class.
  3. Configure bucket permissions. By default, as owner of the bucket, you have read-write access to the bucket. To give access to other principals, see Using IAM permissions.

  4. In the Cloud Console, on the Cloud Storage bucket page, choose Upload Files to upload the SAP HANA software and any upgrade revision files to your bucket from your local media.

  5. Note the name of the bucket that you uploaded the binaries to. You need to use it later when you install SAP HANA.
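
If you prefer the command line, you can also create the bucket and upload the files by using gsutil; the bucket name, region, and local path are placeholders:

# Create a Standard-class bucket. The bucket name must be globally unique.
gsutil mb -c standard -l [REGION] gs://[BUCKET_NAME]

# Upload the SAP HANA installation files from your local drive.
gsutil cp [LOCAL_PATH_TO_INSTALLATION_FILES]/* gs://[BUCKET_NAME]/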

Creating a VM with SAP HANA installed

The following instructions use Deployment Manager to install SAP HANA on one or more VM instances with all of the persistent disks that SAP HANA requires. You define the values for the installation in a Deployment Manager configuration file template.

Deployment Manager treats your SAP HANA system and all of the VMs, disks, and other resources that are created for the SAP HANA system as a single entity called a deployment. You can view all of the deployments for your Google Cloud project on the Deployment Manager Deployments page.

The following instructions use Cloud Shell, but are generally applicable to the Cloud SDK.

  1. Confirm that your current quotas for resources such as persistent disks and CPUs are sufficient for the SAP HANA system you are about to install. If your quotas are insufficient, deployment fails. For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.

  2. Open the Cloud Shell or, if you installed the Cloud SDK on your local workstation, open a terminal.

  3. Download the template.yaml configuration file template to your working directory by entering the following command in the Cloud Shell or Cloud SDK:

    wget https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/latest/dm-templates/sap_hana/template.yaml
  4. Optionally, rename the template.yaml file to identify the configuration it defines.

  5. Open the template.yaml file in the Cloud Shell code editor or, if you are using the Cloud SDK, in the text editor of your choice.

    To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.

  6. In the template.yaml file, update the following property values by replacing the brackets and their contents with the values for your installation.

    Some of the property values that you specify for the SAP HANA system, such as [SID] or [PASSWORD], are subject to rules that are defined by SAP. For more information, see the Parameter Reference in the SAP HANA Server Installation and Update Guide.

    If you want to create a VM instance without installing SAP HANA, delete all of the lines that begin with sap_hana_.

    • type: String. Specifies the location, type, and version of the Deployment Manager template to use during deployment.

      The YAML file includes two type specifications, one of which is commented out. The type specification that is active by default specifies the template version as latest. The type specification that is commented out specifies a specific template version with a timestamp.

      If you need all of your deployments to use the same template version, use the type specification that includes the timestamp.

    • instanceName: String. The name of the VM instance for the SAP HANA master host. The name can contain only lowercase letters, numbers, and hyphens. If any other characters are used, such as an underscore ("_") or a capital letter, deployment fails. The VM instances for any worker hosts use the same name with a "w" and the host number appended.
    • instanceType: String. The type of Compute Engine virtual machine that you run SAP HANA on. If you need a custom VM type, specify a predefined VM type with a number of vCPUs that is closest to, but still larger than, the number you need. After deployment is complete, modify the number of vCPUs and the amount of memory.
    • zone: String. The zone in which you deploy your SAP HANA system. The zone must be in the region that you selected for your subnetwork.
    • subnetwork: String. The name of the subnetwork that you created in a previous step. If you are deploying to a shared VPC, specify this value as [SHAREDVPC_PROJECT]/[SUBNETWORK]. For example, myproject/network1.
    • linuxImage: String. The name of the Linux operating-system image or image family that you are using with SAP HANA. To specify an image family, add the prefix family/ to the family name. For example, family/rhel-7-4-sap or family/sles-12-sp2-sap. To specify a specific image, specify only the image name. For the list of available image families, see the Images page in the Cloud Console.
    • linuxImageProject: String. The Google Cloud project that contains the image you are going to use. This project might be your own project or a Google Cloud image project, such as rhel-sap-cloud or suse-sap-cloud. For a list of Google Cloud image projects, see the Images page in the Compute Engine documentation.
    • sap_hana_deployment_bucket: String. The name of the Cloud Storage bucket in your project that contains the SAP HANA installation and revision files that you uploaded in a previous step. Any upgrade revision files in the bucket are applied to SAP HANA during the deployment process.
    • sap_hana_sid: String. The SAP HANA system ID. The ID must consist of three alphanumeric characters and begin with a letter. All letters must be uppercase.
    • sap_hana_instance_number: Integer. The instance number, 0 to 99, of the SAP HANA system. The default is 0.
    • sap_hana_sidadm_password: String. A temporary password for the operating system administrator. Passwords must be at least eight characters and include at least one uppercase letter, one lowercase letter, and one number.
    • sap_hana_system_password: String. A temporary password for the database superuser. Passwords must be at least eight characters and include at least one uppercase letter, one lowercase letter, and one number.
    • sap_hana_scaleout_nodes: Integer. The number of additional SAP HANA worker hosts that you need. The worker hosts are in addition to the primary SAP HANA instance. For example, if you specify 3, four SAP HANA instances are deployed in a scale-out cluster.
    • networkTag: String. Optional. A network tag that represents your VM instance for firewall or routing purposes. If you specify publicIP: No and do not specify a network tag, be sure to provide another means of access to the internet.
    • publicIP: Boolean. Optional. Determines whether a public IP address is added to your VM instance. The default is Yes.

    The following example shows a completed configuration file that directs Deployment Manager to deploy a scale-out SAP HANA system with a master SAP HANA instance and three worker hosts, each on an n2-highmem-32 virtual machine. The system runs on the SLES 15 SP2 operating system.

    resources:
    - name: sap_hana
      type: https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/latest/dm-templates/sap_hana/sap_hana.py
      #
      # By default, this configuration file uses the latest release of the deployment
      # scripts for SAP on Google Cloud.  To fix your deployments to a specific release
      # of the scripts, comment out the type property above and uncomment the type property below.
      #
      # type: https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/yyyymmddhhmm/dm-templates/sap_hana/sap_hana.py
      #
      properties:
        instanceName: example-vm
        instanceType: n2-highmem-32
        zone: us-central1-f
        subnetwork: example-subnet-us-central1
        linuxImage: family/sles-15-sp2-sap
        linuxImageProject: suse-sap-cloud
        sap_hana_deployment_bucket: mybucketname
        sap_hana_sid: ABC
        sap_hana_instance_number: 00
        sap_hana_sidadm_password: TempPa55word
        sap_hana_system_password: TempPa55word
        sap_hana_scaleout_nodes: 3
  7. Create the instances:

    gcloud deployment-manager deployments create [DEPLOYMENT-NAME] --config [TEMPLATE-NAME].yaml
    

    The preceding command invokes Deployment Manager, which deploys the VMs and persistent disks. Deployment Manager then calls another script that configures the operating system and installs SAP HANA. A concrete example of this command, using the sample values from the preceding configuration file, follows these steps.

    While Deployment Manager has control, status messages are written to the Cloud Shell. After the scripts are invoked, status messages are written to Logging and are viewable in the Cloud Console, as described in Checking the Logging logs.

    Time to completion can vary, but the entire process usually takes less than 30 minutes.
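
For example, if you kept the default file name and choose hana-example as an arbitrary deployment name, the command looks similar to the following:

gcloud deployment-manager deployments create hana-example --config template.yaml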

Verifying deployment

  1. Open Cloud Logging to check for errors and monitor the progress of the installation.

  2. On the Resources tab, select Global as your logging resource.

    • If "INSTANCE DEPLOYMENT COMPLETE" is displayed for all VMs, Deployment Manager processing is complete and you can proceed to the next step.
    • If you see a quota error:

      1. On the IAM & admin Quotas page, increase any of your quotas that do not meet the SAP HANA requirements that are listed in the SAP HANA Planning Guide.
      2. On the Deployment Manager Deployments page, delete the deployment to clean up the VMs and persistent disks from the failed installation.
      3. Rerun the Deployment Manager.

  3. After the SAP HANA system deploys without errors, connect to your VM by using SSH. From the Compute Engine VM instances page, you can click the SSH button for your VM instance, or you can use your preferred SSH method.

  4. Change to the root user.

    sudo su -
  5. At the command prompt, enter df -h. Ensure that you see output similar to the following, with the volumes and sizes that you expect. The following example is from the master node of a scale-out system that has three worker nodes.

    example-vm:~ # df -h
    Filesystem                        Size  Used Avail Use% Mounted on
    devtmpfs                          126G  8.0K  126G   1% /dev
    tmpfs                             189G     0  189G   0% /dev/shm
    tmpfs                             126G   18M  126G   1% /run
    tmpfs                             126G     0  126G   0% /sys/fs/cgroup
    /dev/sda3                          30G  5.4G   25G  18% /
    /dev/sda2                          20M  2.9M   18M  15% /boot/efi
    /dev/mapper/vg_hana-shared        251G   50G  201G  20% /hana/shared
    /dev/mapper/vg_hana-sap            32G  282M   32G   1% /usr/sap
    /dev/mapper/vg_hana-data          426G  7.4G  419G   2% /hana/data
    /dev/mapper/vg_hana-log           125G  4.3G  121G   4% /hana/log
    /dev/mapper/vg_hanabackup-backup  2.0T  2.1G  2.0T   1% /hanabackup
    tmpfs                              26G     0   26G   0% /run/user/473
    tmpfs                              26G     0   26G   0% /run/user/900
    tmpfs                              26G     0   26G   0% /run/user/1003
  6. Change to the SAP admin user. Replace [SID] with the sap_hana_sid value that you specified in the configuration file template, in lowercase letters.

    su - [SID]adm
    
  7. Ensure that SAP HANA services, such as hdbnameserver, hdbindexserver, and others, are running on the instance by entering the following command:

    HDB info
    

If any of the validation steps show that the installation failed, resolve any errors, delete the deployment from the Deployments page, and then recreate the instances, as described in the last step of the previous section.

Installing SAP HANA Studio on a Compute Engine Windows VM

You can connect to SAP HANA from an instance of SAP HANA Studio that is outside of Google Cloud or from an instance that runs on Google Cloud. To connect, you might need to enable network access from SAP HANA Studio to the target VMs.

To install SAP HANA Studio on a Windows VM on Google Cloud, use the following procedure.

  1. Use the Cloud Shell to invoke the following commands.

    export NETWORK_NAME="[YOUR_NETWORK_NAME]"
    export REGION="[YOUR_REGION]"
    export ZONE="[YOUR_ZONE]"
    export SUBNET="[YOUR_SUBNETWORK_NAME]"
    export SOURCE_IP_RANGE="[YOUR_WORKSTATION_IP]"
    gcloud compute instances create saphanastudio --zone=$ZONE \
    --machine-type=n1-standard-2  --subnet=$SUBNET --tags=hanastudio \
    --image-family=windows-2016  --image-project=windows-cloud \
    --boot-disk-size=100 --boot-disk-type=pd-standard \
    --boot-disk-device-name=saphanastudio
    gcloud compute firewall-rules create ${NETWORK_NAME}-allow-rdp \
    --network=$NETWORK_NAME --allow=tcp:3389 --source-ranges=$SOURCE_IP_RANGE \
    --target-tags=hanastudio

    The above commands set variables for the current Cloud Shell session, create a Windows server in the subnetwork that you created earlier, and create a firewall rule that allows access from your local workstation to the instance through the Remote Desktop Protocol (RDP).

  2. Install SAP HANA Studio on this server.

    1. Upload the SAP HANA Studio installation files and the SAPCAR extraction tool to a Cloud Storage bucket in your Google Cloud project.
    2. Connect to the new Windows VM by using RDP or your preferred method.
    3. In Windows, with administrator permissions, open the Cloud SDK Shell or other command-line interface.
    4. Copy the SAP HANA Studio installation files and the SAPCAR extraction tool from the storage bucket to the VM by entering the gsutil cp command in the command interface. For example:

      gsutil cp gs://[SOURCE_BUCKET]/IMC_STUDIO2_232_0-80000323.SAR C:\[TARGET_DIRECTORY] &
      gsutil cp gs://[SOURCE_BUCKET]/SAPCAR_1014-80000938.EXE C:\[TARGET_DIRECTORY]
      
    5. Change the directory to your target directory.

      cd C:\[TARGET_DIRECTORY]
      
    6. Run the SAPCAR program to extract the SAP HANA Studio installation file.

      SAPCAR_1014-80000938.EXE -xvf IMC_STUDIO2_232_0-80000323.SAR
      
    7. Run the extracted hdbinst program to install SAP HANA Studio.

Optional: enable SAP HANA Fast Restart

Google Cloud recommends enabling SAP HANA Fast Restart for each instance of SAP HANA, especially for larger instances.

As configured by Deployment Manager, the operating system and kernel settings already support SAP HANA Fast Restart. You need to define the tmpfs file system and configure SAP HANA.

For the complete authoritative instructions for SAP HANA Fast Restart, see the SAP HANA Fast Restart Option documentation.

Configure the tmpfs file system

After the host VMs and the base SAP HANA systems are successfully deployed, you need to create and mount directories for the NUMA nodes in the tmpfs file system.

Display the NUMA topology of your VM

Before you can map the required tmpfs file system, you need to know how many NUMA nodes your VM has. To display the available NUMA nodes on a Compute Engine VM, enter the following command:

lscpu | grep NUMA

For example, an m2-ultramem-208 VM type has four NUMA nodes, numbered 0-3, as shown in the following example:

NUMA node(s):        4
NUMA node0 CPU(s):   0-25,104-129
NUMA node1 CPU(s):   26-51,130-155
NUMA node2 CPU(s):   52-77,156-181
NUMA node3 CPU(s):   78-103,182-207

Create the NUMA node directories

Create a directory for each NUMA node in your VM and set the permissions.

For example, for four NUMA nodes that are numbered 0-3:

mkdir -pv /hana/tmpfs{0..3}/SID
chown -R SIDadm:sapsys /hana/tmpfs*/SID
chmod 777 -R /hana/tmpfs*/SID

Mount the NUMA node directories to tmpfs

Mount the tmpfs file system directories and specify a NUMA node preference for each with mpol=prefer:

mount tmpfsSID0 -t tmpfs -o mpol=prefer:0 /hana/tmpfs0/SID
mount tmpfsSID1 -t tmpfs -o mpol=prefer:1 /hana/tmpfs1/SID
mount tmpfsSID2 -t tmpfs -o mpol=prefer:2 /hana/tmpfs2/SID
mount tmpfsSID3 -t tmpfs -o mpol=prefer:3 /hana/tmpfs3/SID

Update /etc/fstab

To ensure that the mount points are available after an operating system reboot, add entries into the file system table, /etc/fstab:

tmpfsSID0 /hana/tmpfs0/SID tmpfs rw,relatime,mpol=prefer:0
tmpfsSID1 /hana/tmpfs1/SID tmpfs rw,relatime,mpol=prefer:1
tmpfsSID2 /hana/tmpfs2/SID tmpfs rw,relatime,mpol=prefer:2
tmpfsSID3 /hana/tmpfs3/SID tmpfs rw,relatime,mpol=prefer:3

Optional: set limits on memory usage

The tmpfs file system can grow and shrink dynamically.

To limit the memory used by the tmpfs file system, you can set a size limit for a NUMA node volume with the size option. For example:

mount tmpfsSID0 -t tmpfs -o mpol=prefer:0,size=250G /hana/tmpfs0/SID

You can also limit overall tmpfs memory usage for all NUMA nodes for a given SAP HANA instance and a given server node by setting the persistent_memory_global_allocation_limit parameter in the [memorymanager] section of the global.ini file.
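
For example, the following global.ini entry sets such a limit; the value shown is only an illustration (the parameter is specified in MB), so check the SAP documentation for an appropriate value for your system:

[memorymanager]
persistent_memory_global_allocation_limit = 250000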

SAP HANA configuration for Fast Restart

To configure SAP HANA for Fast Restart, update the global.ini file and specify the tables to store in persistent memory.

Update the [persistence] section in the global.ini file

Configure the [persistence] section in the SAP HANA global.ini file to reference the tmpfs locations. Separate each tmpfs location with a semicolon:

[persistence]
basepath_datavolumes = /hana/data
basepath_logvolumes = /hana/log
basepath_persistent_memory_volumes = /hana/tmpfs0/SID;/hana/tmpfs1/SID;/hana/tmpfs2/SID;/hana/tmpfs3/SID

The preceding example specifies four memory volumes, one for each of the four NUMA nodes of the m2-ultramem-208 VM type. If you were running on the m2-ultramem-416 VM type, you would need to configure eight memory volumes (0..7).

Restart SAP HANA after modifying the global.ini file.
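
For example, you can restart SAP HANA as the [SID]adm operating system user:

HDB stop
HDB start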

SAP HANA can now use the tmpfs location as persistent memory space.

Specify the tables to store in persistent memory

Specify the column tables or partitions that you want to store in persistent memory.

For example, to turn on persistent memory for an existing table, run the following SQL statement:

ALTER TABLE exampletable persistent memory ON immediate CASCADE

To change the default for new tables, add the table_default parameter to the [persistent_memory] section of the indexserver.ini file. For example:

[persistent_memory]
table_default = ON

For more information about controlling persistent memory for columns and tables, and about the monitoring views that provide detailed information, see SAP HANA Persistent Memory.

Setting up the Google monitoring agent for SAP HANA

Optionally, you can set up the Google monitoring agent for SAP HANA, which collects metrics from SAP HANA and sends them to Cloud Monitoring. Cloud Monitoring allows you to create dashboards for your metrics, set up custom alerts based on metric thresholds, and more. For more information on setting up and configuring the Google monitoring agent for SAP HANA, see the SAP HANA Monitoring Agent User Guide.

Connecting to SAP HANA

If you deployed your SAP HANA instances without an external IP address, you can connect to them only through the bastion instance by using SSH or through the Windows Server VM by using SAP HANA Studio.

  • To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instances, by using an SSH client of your choice (see the example after this list).

  • To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After you connect, install SAP HANA Studio if you have not already done so, and then access your SAP HANA database.
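
For example, for the bastion option, you can use the SSH ProxyJump feature to reach an SAP HANA VM through the bastion host; the user names and IP addresses are placeholders:

ssh -J [USERNAME]@[BASTION_EXTERNAL_IP] [USERNAME]@[HANA_VM_INTERNAL_IP]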

Performing post-deployment tasks

Before using your SAP HANA instance, we recommend that you perform the following post-deployment steps. For more information, see SAP HANA Installation and Update Guide.

  1. Change the temporary passwords for the SAP HANA system administrator and database superuser. For example:

    sudo passwd [SID]adm

    See Reset the SYSTEM User Password of the System Database, or see the hdbsql example after this list.

  2. Install your permanent SAP HANA license. If you do not, SAP HANA might go into database lockdown after the temporary license expires.

    For more information from SAP about managing your SAP HANA licenses, see License Keys for the SAP HANA Database.

  3. Update the SAP HANA software with the latest patches.

  4. Install any additional components such as Application Function Libraries (AFL) or Smart Data Access (SDA).

  5. Configure and back up your new SAP HANA database. For more information, see the SAP HANA operations guide.
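
As an example for step 1, you can change the SYSTEM user password of the system database by using hdbsql as the [SID]adm user; the instance number 00 matches the sample configuration in this guide, and the password values are placeholders:

hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p [CURRENT_PASSWORD] \
  "ALTER USER SYSTEM PASSWORD \"[NEW_PASSWORD]\" NO FORCE_FIRST_PASSWORD_CHANGE"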

What's next

  • If you need to use NetApp Cloud Volumes Service for Google Cloud instead of persistent disks for your SAP HANA directories, see the NetApp Cloud Volumes Service deployment information in the SAP HANA planning guide.
  • For more information about VM administration and monitoring, see the SAP HANA Operations Guide.