SAP HANA scale-out system with host auto-failover deployment guide

This guide shows you how to use Cloud Deployment Manager to deploy a SAP HANA scale-out system that includes the SAP HANA host auto-failover fault-recovery solution on SUSE Linux Enterprise Server (SLES). By using Deployment Manager, you can deploy a system that meets SAP support requirements and adheres to both SAP and Compute Engine best practices. The resulting SAP HANA system includes a master host, up to 15 worker hosts, and up to 3 standby hosts, all within a single Compute Engine zone.

If you do not need the host auto-failover feature, do not use this guide; instead, use the SAP HANA Deployment Guide. If you need to deploy a Linux high-availability cluster for a single-host, scale-up SAP HANA system, use the SAP HANA High Availability Cluster on SLES Deployment Guide.

This guide is intended for advanced SAP HANA users who are familiar with SAP scale-out configurations that include standby hosts for high availability, as well as with network file systems.

Prerequisites

Before you create the SAP HANA high availability scale-out system, make sure that the following prerequisites are met:

  • You or your organization has a GCP account and you have created a project for the SAP HANA deployment. For information about creating GCP accounts and projects, see Setting up your Google account in the SAP HANA Deployment Guide.
  • The SAP HANA installation media is stored in a Cloud Storage bucket that is available in your deployment project and region. For information about how to upload SAP HANA installation media to a Cloud Storage bucket, see Creating a Cloud Storage bucket in the SAP HANA Deployment Guide.
  • You have an NFS solution, such as the managed Cloud Filestore solution, for sharing the SAP HANA /hana/shared and /hanabackup volumes among the hosts in the scale-out SAP HANA system. You must specify the mount points of the NFS servers in the Deployment Manager configuration file before you can deploy the system. To deploy Cloud Filestore NFS servers, see Creating instances, or see the sketch that follows this list.
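
For reference, the following is a minimal sketch of creating a Cloud Filestore instance with a file share for the /hana/shared volume. The instance name, zone, tier, capacity, and network are hypothetical, and depending on your Cloud SDK version the filestore command group might be available only under gcloud beta; see the Cloud Filestore documentation for current options and sizing guidance.

    # A minimal sketch, not a sizing recommendation. Replace the hypothetical
    # instance name, zone, tier, capacity, and network with values for your deployment.
    gcloud filestore instances create hana-shared-nfs \
            --zone=us-central1-f \
            --tier=STANDARD \
            --file-share=name=hana_shared_nfs,capacity=1TB \
            --network=name=example-network

    # After the instance is created, note its IP address. The NFS mount point that
    # you specify in the configuration file has the form [FILESTORE_IP]:/hana_shared_nfs.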

Creating a network

For security purposes, create a new network. You can control who has access by adding firewall rules or by using another access control method.

If your project has a default VPC network, don't use it. Instead, create your own VPC network so that the only firewall rules in effect are those that you create explicitly.

During deployment, VM instances typically require access to the internet to download Google's monitoring agent. If you are using one of the SAP-certified Linux images that are available from GCP, the VM instance also requires access to the internet in order to register the license and to access OS vendor repositories. A configuration with a NAT gateway and with VM network tags supports this access, even if the target VMs do not have external IPs.

To set up networking:

  1. Go to Cloud Shell.

    Go to Cloud Shell

  2. To create a new network in the custom subnetworks mode, run:

    gcloud compute networks create [YOUR_NETWORK_NAME] --subnet-mode custom

    where [YOUR_NETWORK_NAME] is the name of the new network. The network name can contain only lowercase characters, digits, and the dash character (-).

    Specify --subnet-mode custom to avoid using the default auto mode, which automatically creates a subnet in each Compute Engine region. For more information, see Subnet creation mode.

  3. Create a subnetwork, and specify the region and IP range:

    gcloud compute networks subnets create [YOUR_SUBNETWORK_NAME] \
            --network [YOUR_NETWORK_NAME] --region [YOUR_REGION] --range [YOUR_RANGE]

    where:

    • [YOUR_SUBNETWORK_NAME] is the name of the new subnetwork.
    • [YOUR_NETWORK_NAME] is the name of the network you created in the previous step.
    • [YOUR_REGION] is the region where you want the subnetwork.
    • [YOUR_RANGE] is the IP address range, specified in CIDR format, such as 10.1.0.0/24. If you plan to add more than one subnetwork, assign non-overlapping CIDR IP ranges for each subnetwork in the network. Note that each subnetwork and its internal IP ranges are mapped to a single region.
  4. Optionally, repeat the previous step and add additional subnetworks.
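
For example, the following commands create a network and a subnetwork, including the example-sub-network-sap subnetwork name that appears in the example configuration file later in this guide. The network name, region, and IP range are hypothetical; substitute your own values:

    gcloud compute networks create example-network --subnet-mode custom
    gcloud compute networks subnets create example-sub-network-sap \
            --network example-network --region us-central1 --range 10.1.0.0/24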

Setting up a NAT gateway

If you intend to create one or more VMs that will not have public IP addresses, you must create a NAT gateway so that your VMs can access the Internet to download Google's monitoring agent.

If you intend to assign an external public IP address to your VM, you can skip this step.

To create a NAT gateway:

  1. Create a VM to act as the NAT gateway in the subnet you just created:

    gcloud compute instances create [YOUR_VM_NAME] --can-ip-forward \
            --zone [YOUR_ZONE]  --image-family [YOUR_IMAGE_FAMILY] \
            --image-project [YOUR_IMAGE_PROJECT] \
            --machine-type=[YOUR_MACHINE_TYPE] --subnet [YOUR_SUBNETWORK_NAME] \
            --metadata startup-script="sysctl -w net.ipv4.ip_forward=1; iptables \
            -t nat -A POSTROUTING -o eth0 -j MASQUERADE" --tags [YOUR_VM_TAG]

    where:

    • [YOUR_VM_NAME] is the name of the VM that you are creating and want to use for the NAT gateway.
    • [YOUR_ZONE] is the zone where you want the VM.
    • [YOUR_IMAGE_FAMILY] and [YOUR_IMAGE_PROJECT] specify the image you want to use for the NAT gateway.
    • [YOUR_MACHINE_TYPE] is any supported machine type. If you expect high network traffic, choose a machine type that has at least eight virtual CPUs.
    • [YOUR_SUBNETWORK_NAME] is the name of the subnetwork where you want the VM.
    • [YOUR_VM_TAG] is a tag that is applied to the VM you are creating. If you use this VM as a bastion host, this tag is used to apply the related firewall rule only to this VM.
  2. Create a route that is tagged so that traffic passes through the NAT VM instead of the default Internet gateway:

    gcloud compute routes create [YOUR_ROUTE_NAME] \
            --network [YOUR_NETWORK_NAME] --destination-range 0.0.0.0/0 \
            --next-hop-instance [YOUR_VM_NAME] --next-hop-instance-zone \
            [YOUR_ZONE] --tags [YOUR_TAG_NAME] --priority 800

    where:

    • [YOUR_ROUTE_NAME] is the name of the route you are creating.
    • [YOUR_NETWORK_NAME] is the network you created.
    • [YOUR_VM_NAME] is the VM you are using for your NAT gateway.
    • [YOUR_ZONE] is the zone where the VM is located.
    • [YOUR_TAG_NAME] is the tag on the route that directs traffic through the NAT VM.
  3. If you also want to use the NAT gateway VM as a bastion host, run the following command. This command creates a firewall rule that allows inbound SSH access to this instance from the Internet:

    gcloud compute firewall-rules create allow-ssh --network [YOUR_NETWORK_NAME] \
            --allow tcp:22 --source-ranges 0.0.0.0/0 --target-tags "[YOUR_VM_TAG]"

    where:

    • [YOUR_NETWORK_NAME] is the network you created.
    • [YOUR_VM_TAG] is the tag you specified when you created the NAT gateway VM. This tag is used so this firewall rule applies only to the VM that hosts the NAT gateway, and not to all VMs in the network.
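
Optionally, before you continue, you can confirm that the route and the SSH firewall rule were created by describing them from Cloud Shell. The allow-ssh rule exists only if you performed the bastion host step:

    gcloud compute routes describe [YOUR_ROUTE_NAME]
    gcloud compute firewall-rules describe allow-ssh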

Adding firewall rules

By default, an implied firewall rule blocks incoming connections from outside your Virtual Private Cloud (VPC) network. To allow incoming connections, set up a firewall rule for your VM. After an incoming connection is established with a VM, traffic is permitted in both directions over that connection.

You can create a firewall rule to allow external access to specified ports, or to restrict access between VMs on the same network. If the default VPC network type is used, some additional default rules also apply, such as the default-allow-internal rule, which allows connectivity between VMs on the same network on all ports.

Depending on the IT policy that is applicable to your environment, you might need to isolate or otherwise restrict connectivity to your database host, which you can do by creating firewall rules.

Depending on your scenario, you can create firewall rules to allow access for:

  • The default SAP ports that are listed in TCP/IP Ports of All SAP Products.
  • Connections from your computer or your corporate network environment to your Compute Engine VM instance. If you are unsure of what IP address to use, talk to your company's network administrator.
  • Communication between VMs when, for example, your database server and application server are running on different VMs. To enable communication between VMs, you must create a firewall rule to allow traffic that originates from the subnetwork.
  • SSH connections to your VM instance, including SSH from the browser.
  • Connection to your VM by using a third-party tool in Linux. Create a rule to allow access for the tool through your firewall.

To create a firewall rule:

  1. In the GCP Console, go to the Firewall rules page.

    OPEN FIREWALL RULES

  2. At the top of the page, click Create firewall rule.

    • In the Network field, select the network where your VM is located.
    • In the Targets field, specify the resources on GCP that this rule applies to. For example, specify All instances in the network. Or, to limit the rule to specific instances on GCP, enter tags in Specified target tags.
    • In the Source filter field, select one of the following:
      • IP ranges to allow incoming traffic from specific IP addresses. Specify the range of IP addresses in the Source IP ranges field.
      • Subnets to allow incoming traffic from a particular subnetwork. Specify the subnetwork name in the Subnets field. You can use this option to allow access between the VMs in a 3-tier or scale-out configuration.
    • In the Protocols and ports section, select Specified protocols and ports and enter tcp:[PORT_NUMBER].
  3. Click Create to create your firewall rule.
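
You can also create firewall rules from Cloud Shell instead of the GCP Console. For example, the following is a minimal sketch of a rule that allows traffic between the VMs in a subnetwork; the rule name, network name, and source range are hypothetical:

    # Allows TCP, UDP, and ICMP traffic between VMs in the 10.1.0.0/24 subnetwork.
    # Replace the rule name, network, and CIDR range with your own values.
    gcloud compute firewall-rules create allow-internal-sap \
            --network example-network --allow tcp,udp,icmp \
            --source-ranges 10.1.0.0/24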

Creating a SAP HANA scale-out system with standby hosts

In the following instructions, you complete the following actions:

  • Create the SAP HANA system by invoking Deployment Manager with a configuration file template that you complete.
  • Verify deployment.
  • Test the standby host(s) by simulating a host failure.

Some of the steps in the following instructions use Cloud Shell to enter the gcloud commands. If you have the latest version of Cloud SDK installed, you can enter the gcloud commands from a local terminal instead.

Define and create the SAP HANA system

In the following steps, you download and complete a Deployment Manager configuration file template and invoke Deployment Manager, which deploys the VMs, persistent disks, and SAP HANA instances.

  1. Confirm that your current quotas for project resources, such as persistent disks and CPUs, are sufficient for the SAP HANA system you are about to install. If your quotas are insufficient, deployment fails. For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.

    Go to the quotas page

  2. Open Cloud Shell.

    Go to Cloud Shell

  3. Download the template.yaml configuration file template for the SAP HANA high-availability scale-out system to your working directory:

    wget https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_scaleout/template.yaml
    
  4. Optionally, rename the template.yaml file to identify the configuration it defines. For example, you could use a file name like hana2sp3rev30-scaleout.yaml.

  5. Open the template.yaml file in the Cloud Shell code editor.

    To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.

  6. In the template.yaml file, update the following property values by replacing the brackets and their contents with the values for your installation. For example, you might replace "[ZONE]" with "us-central1-f".

    Property | Data type | Description
    instanceName | String | The name of the VM instance for the SAP HANA master host. The name can contain only lowercase letters, numbers, and hyphens. The VM instances for the worker and standby hosts use the same name with a "w" and the host number appended to the name.
    instanceType | String | The type of Compute Engine virtual machine that you want to run SAP HANA on.
    zone | String | The zone in which your SAP HANA system runs. The zone must be in the region that you selected for your subnetwork.
    subnetwork | String | The name of the subnetwork you created in a previous step. If you are deploying to a shared VPC, specify this value as [SHAREDVPC_PROJECT]/[SUBNETWORK]. For example, myproject/network1.
    linuxImage | String | The name of the Linux operating-system image or image family that you are using with SAP HANA. To specify an image family, add the prefix family/ to the family name. For example, family/sles-12-sp3-sap. To specify a specific image, specify only the image name. For the list of available image families, see the Images page in the GCP Console.
    linuxImageProject | String | The GCP project that contains the image you are going to use. This project might be your own project or a GCP image project. For SLES, specify suse-sap-cloud. For a list of GCP image projects, see the Images page in the Compute Engine documentation.
    sap_hana_deployment_bucket | String | The name of the Cloud Storage bucket in your project that contains the SAP HANA installation files that you uploaded in a previous step.
    sap_hana_sid | String | The SAP HANA system ID. The ID must consist of 3 alphanumeric characters and begin with a letter. All letters must be uppercase.
    sap_hana_instance_number | Integer | The instance number, 0 to 99, of the SAP HANA system. The default is 0.
    sap_hana_sidadm_password | String | The password for the operating system administrator. Passwords must be at least 8 characters and include at least 1 uppercase letter, 1 lowercase letter, and 1 number.
    sap_hana_system_password | String | The password for the database superuser. Passwords must be at least 8 characters and include at least 1 uppercase letter, 1 lowercase letter, and 1 number.
    sap_hana_worker_nodes | Integer | The number of additional SAP HANA worker hosts that you need. You can specify 1 to 15 worker hosts. The default value is 1.
    sap_hana_standby_nodes | Integer | The number of additional SAP HANA standby hosts that you need. You can specify 1 to 3 standby hosts. The default value is 1.
    sap_hana_shared_nfs | String | The NFS mount point for the /hana/shared volume. For example, 10.151.91.122:/hana_shared_nfs.
    sap_hana_backup_nfs | String | The NFS mount point for the /hanabackup volume. For example, 10.216.41.122:/hana_backup_nfs.
    networkTag | String | Optional. One or more comma-separated network tags that represent your VM instance for firewall or routing purposes. If you specify publicIP: No and do not specify a network tag, be sure to provide another means of access to the internet.
    publicIP | Boolean | Optional. Determines whether a public IP address is added to your VM instance. The default is Yes.
    sap_hana_double_volume_size | Integer | Optional. Doubles the HANA volume size. Useful if you want to deploy multiple SAP HANA instances or a disaster-recovery SAP HANA instance on the same VM. By default, the volume size is automatically calculated to be the minimum size required for your memory footprint, while still meeting the SAP certification and support requirements.
    sap_hana_sidadm_uid | Integer | Optional. Overrides the default value of the [SID]adm user ID. The default value is 900. You can change this to a different value for consistency within your SAP landscape.
    sap_hana_sapsys_gid | Integer | Optional. Overrides the default group ID for sapsys. The default is 79.
    sap_deployment_debug | Boolean | Optional. If this value is set to Yes, the deployment generates verbose deployment logs. Do not turn this setting on unless a Google support engineer asks you to enable debugging.
    post_deployment_script | String | Optional. The URL or storage location of a script to run after the deployment is complete. The script should be hosted on a web server or in a Cloud Storage bucket. Begin the value with http://, https://, or gs://. Note that this script is executed on all VMs that the template creates. If you want to run it only on the master instance, add a check at the top of your script.

    The following example shows a completed configuration file that deploys an SAP HANA scale-out system with three worker hosts and one standby host in the us-central1-f zone. Each host is installed on an n1-highmem-32 VM that is running the SLES 12 SP2 operating system. All of the hosts access the /hana/shared and /hanabackup volumes through two NFS server instances.

    imports:
    - path: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_scaleout/sap_hana_scaleout.py
    
    resources:
    - name: sap_hana_ha_scaleout
      type: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_scaleout/sap_hana_scaleout.py
      properties:
        instanceName: hana-scaleout-w-failover
        instanceType: n1-highmem-32
        zone: us-central1-f
        subnetwork: example-sub-network-sap
        linuxImage: family/sles-12-sp2-sap
        linuxImageProject: suse-sap-cloud
        sap_hana_deployment_bucket: hana2-sp3-rev30
        sap_hana_sid: HF0
        sap_hana_instance_number: 00
        sap_hana_sidadm_password: Google123
        sap_hana_system_password: Google123
        sap_hana_worker_nodes: 3
        sap_hana_standby_nodes: 1
        sap_hana_shared_nfs: 10.151.91.122:/hana_shared_nfs
        sap_hana_backup_nfs: 10.216.41.122:/hana_backup_nfs
    
  7. Create the instances:

    gcloud deployment-manager deployments create [DEPLOYMENT_NAME] --config [TEMPLATE_NAME].yaml
    

    The above command invokes Deployment Manager, which deploys the VMs, downloads the SAP HANA software from your storage bucket, and installs SAP HANA, all according to the specifications in your template.yaml file. The process can take many minutes to complete.
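
    For example, if you renamed the configuration file to hana2sp3rev30-scaleout.yaml as suggested earlier, the command might look like the following; the deployment name is hypothetical:

    gcloud deployment-manager deployments create hana-scaleout-deployment \
            --config hana2sp3rev30-scaleout.yaml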

Verifying deployment

To verify the deployment, you check the deployment logs in Stackdriver Logging, check the disks and services on the VMs of the master and worker hosts, display the system in SAP HANA Studio, and test the takeover by a standby host.

Check the deployment logs

  1. Open Stackdriver Logging to monitor the progress of the installation and check for errors.

    Go to Stackdriver Logging

  2. Select Global from the resources list box. If "INSTANCE DEPLOYMENT COMPLETE" is displayed for all VMs, Deployment Manager processing is complete.

    Screenshot of the Stackdriver Logging display.
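
You can also check for the completion message from Cloud Shell instead of the GCP Console. The following is a minimal sketch; the filter and limit are assumptions, so adjust them as needed:

    gcloud logging read 'textPayload:"INSTANCE DEPLOYMENT COMPLETE"' \
            --limit 10 --format 'value(timestamp, textPayload)'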

Connect to the VMs to check the disks and SAP HANA services

After deployment is complete, confirm that the disks and SAP HANA services have deployed properly by checking the disks and services of the master host and one worker host.

  1. On the Compute Engine VM instances page, connect to the VM of the master host and the VM of one worker host by clicking the SSH button on the row of each of the two VM instances.

    Go to VM instances

    When connecting to the worker host, make sure that you aren't connecting to a standby host. The standby hosts use the same naming convention as the worker hosts, but have the highest numbered worker-host suffix before the first takeover. For example, if you have three worker hosts and one standby host, before the first takeover the standby host has a suffix of "w4".

  2. In each terminal window, change to the root user.

    sudo su -
  3. In each terminal window, display the disk file system.

    df -h

    On the master host, you should see output similar to the following.

    hana-scaleout-w-failover:~ # df -h
      Filesystem                       Size  Used Avail Use% Mounted on
      devtmpfs                         103G     0  103G   0% /dev
      tmpfs                            103G     0  103G   0% /dev/shm
      tmpfs                            103G   18M  103G   1% /run
      tmpfs                            103G     0  103G   0% /sys/fs/cgroup
      /dev/sda3                         44G  2.3G   40G   6% /
      /dev/sda2                        200M  660K  200M   1% /boot/efi
      10.113.193.146:/hana_shared_nfs 1007G   40G  917G   5% /hana/shared
      172.26.41.90:/hana_backup_nfs    2.0T   80M  1.9T   1% /hanabackup
      /dev/mapper/vg_hana-data         1.5T  6.7G  1.5T   1% /hana/data/HF0/mnt00001
      /dev/mapper/vg_hana-log          256G  5.2G  251G   2% /hana/log/HF0/mnt00001
      tmpfs                             21G     0   21G   0% /run/user/900
      tmpfs                             21G     0   21G   0% /run/user/489
      tmpfs                             21G     0   21G   0% /run/user/1001

    On the worker host, notice that the /hana/data and /hana/log directories have different mounts. On a standby host, the data and log directories are not mounted until the standby host takes over for a failed host.

    hana-scaleout-w-failoverw2:~ # df -h
      Filesystem                       Size  Used Avail Use% Mounted on
      devtmpfs                         103G     0  103G   0% /dev
      tmpfs                            103G     0  103G   0% /dev/shm
      tmpfs                            103G   18M  103G   1% /run
      tmpfs                            103G     0  103G   0% /sys/fs/cgroup
      /dev/sda3                         44G  2.3G   40G   6% /
      /dev/sda2                        200M  660K  200M   1% /boot/efi
      10.113.193.146:/hana_shared_nfs 1007G   40G  917G   5% /hana/shared
      172.26.41.90:/hana_backup_nfs    2.0T   80M  1.9T   1% /hanabackup
      tmpfs                             21G     0   21G   0% /run/user/0
      /dev/mapper/vg_hana-data         1.5T  291M  1.5T   1% /hana/data/HF0/mnt00003
      /dev/mapper/vg_hana-log          256G  2.1G  254G   1% /hana/log/HF0/mnt00003
      tmpfs                             21G     0   21G   0% /run/user/1001

  4. In each terminal window, change to the SAP HANA operating system user. Replace [SID] with the sap_hana_sid value that you specified in the configuration file template, using lowercase letters.

    su - [SID]adm
    
  5. In each terminal window, ensure that SAP HANA services, such as hdbnameserver, hdbindexserver, and others, are running on the instance.

    HDB info
    

    On the master host, you should see output similar to the following:

    hf0adm@hana-scaleout-w-failover:/usr/sap/HF0/HDB00> HDB info
      USER       PID  PPID %CPU    VSZ   RSS COMMAND
      hf0adm   19179 19178  0.4  14244  4128 -sh
      hf0adm   19237 19179  0.0  13392  3444  _ /bin/sh /usr/sap/HF0/HDB00/HDB info
      hf0adm   19268 19237  0.0  36884  2996      _ ps fx -U hf0adm -o user,pid,ppid,pcpu,vsz,rss,
      hf0adm   10350     1  0.0  22752  2924 sapstart pf=/hana/shared/HF0/profile/HF0_HDB00_hana-sc
      hf0adm   10358 10350  0.0 229216 49092  _ /usr/sap/HF0/HDB00/hana-scaleout-w-failover/trace/
      hf0adm   10374 10358 14.0 15907624 11835064      _ hdbnameserver
      hf0adm   10566 10358  0.5 5237888 427576      _ hdbcompileserver
      hf0adm   10568 10358 64.6 15150420 13585664      _ hdbpreprocessor
      hf0adm   10608 10358 15.3 16239280 12293788      _ hdbindexserver -port 30003
      hf0adm   10610 10358  1.3 5982992 1295864      _ hdbxsengine -port 30007
      hf0adm   11029 10358  0.6 5539856 443452      _ hdbwebdispatcher
      hf0adm   10224     1  0.0 562984 29324 /usr/sap/HF0/HDB00/exe/sapstartsrv pf=/hana/shared/HF0
      hf0adm   10086     1  0.0  36868  4640 /usr/lib/systemd/systemd --user
      hf0adm   10091 10086  0.0  86144  1780  _ (sd-pam)

    On a worker host, you should see output similar to the following:

    hf0adm@hana-scaleout-w-failoverw2:/usr/sap/HF0/HDB00> HDB info
      USER       PID  PPID %CPU    VSZ   RSS COMMAND
      hf0adm   14537 14536  0.1  14244  4060 -sh
      hf0adm   14595 14537  0.0  13392  3460  _ /bin/sh /usr/sap/HF0/HDB00/HDB info
      hf0adm   14626 14595  0.0  36884  3000      _ ps fx -U hf0adm -o user,pid,ppid,pcpu,vsz,rss,
      hf0adm    8138     1  0.0  22756  2980 sapstart pf=/hana/shared/HF0/profile/HF0_HDB00_hana-sc
      hf0adm    8147  8138  0.0 229216 49260  _ /usr/sap/HF0/HDB00/hana-scaleout-w-failoverw2/trac
      hf0adm    8163  8147  0.7 5695208 669864      _ hdbnameserver
      hf0adm    8377  8147  0.4 4320596 373216      _ hdbcompileserver
      hf0adm    8379  8147  0.5 4996000 433976      _ hdbpreprocessor
      hf0adm    8419  8147  2.2 6607404 1885664      _ hdbindexserver -port 30003
      hf0adm    8605  8147  0.6 4753424 428688      _ hdbwebdispatcher
      hf0adm    8057     1  0.0 497336 28788 /hana/shared/HF0/HDB00/exe/sapstartsrv pf=/hana/shared

Connect SAP HANA Studio

  1. Connect to the master SAP HANA host from SAP HANA Studio.

    You can connect from an instance of SAP HANA Studio that is outside of GCP or from an instance on GCP. You might need to enable network access between the target VMs and SAP HANA Studio.

    To use SAP HANA Studio on GCP and enable access to the SAP HANA system, see Installing SAP HANA Studio on a Compute Engine Windows VM.

  2. In SAP HANA Studio, click the Landscape tab on the default system administration panel. You should see a display similar to the following example.

    Screenshot of the Landscape view in SAP HANA Studio
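
If you don't have SAP HANA Studio available, you can get a similar overview of the hosts and their services from the command line. The following is a minimal sketch that you run as the [SID]adm user on any host in the system; the instance number 00 comes from the example configuration, so substitute your own:

    sapcontrol -nr 00 -function GetSystemInstanceList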

If any of the validation steps show that the installation failed:

  1. Correct the error.
  2. On the Deployments page, delete the deployment.
  3. Rerun your deployment.

Performing a failover test

After you have confirmed that the SAP HANA system deployed successfully, test the failover function.

The following instructions trigger a failover by switching to the SAP HANA operating system user and entering the HDB stop command. The HDB stop command initiates a graceful shutdown of SAP HANA and detaches the disks from the host, which enables a relatively quick failover.

To perform a failover test:

  1. Connect to the VM of a worker host by using SSH. You can connect from the Compute Engine VM instances page by clicking the SSH button for each VM instance, or you can use your preferred SSH method.

    Go to VM instances

  2. Change to the SAP HANA operating system user. In the following example, replace "[SID]" with the SID that you defined for your system.

    su - [SID]adm
  3. Simulate a failure by stopping SAP HANA:

    HDB stop

    The HDB stop command initiates a shutdown of SAP HANA, which triggers a failover. During the failover, the disks are detached from the failed host and reattached to the standby host. The failed host is restarted and becomes a standby host.

  4. After allowing time for the takeover to complete, reconnect to the host that took over for the failed host by using SSH.

  5. Change to the root user.

    sudo su -
  6. Display the disk file system of the host that took over for the failed host.

    df -h

    You should see output similar to the following. The /hana/data and /hana/log directories from the failed host are now mounted to the host that took over.

    hana-scaleout-w-failoverw4:~ # df -h
      Filesystem                       Size  Used Avail Use% Mounted on
      devtmpfs                         103G     0  103G   0% /dev
      tmpfs                            103G     0  103G   0% /dev/shm
      tmpfs                            103G   18M  103G   1% /run
      tmpfs                            103G     0  103G   0% /sys/fs/cgroup
      /dev/sda3                         44G  2.3G   40G   6% /
      /dev/sda2                        200M  660K  200M   1% /boot/efi
      10.113.193.146:/hana_shared_nfs 1007G   40G  917G   5% /hana/shared
      172.26.41.90:/hana_backup_nfs    2.0T   80M  1.9T   1% /hanabackup
      tmpfs                             21G     0   21G   0% /run/user/0
      /dev/mapper/vg_hana-data         1.5T  483M  1.5T   1% /hana/data/HF0/mnt00003
      /dev/mapper/vg_hana-log          256G  2.1G  254G   1% /hana/log/HF0/mnt00003
      tmpfs                             21G     0   21G   0% /run/user/1001

  7. In SAP HANA Studio, open the Landscape view of the SAP HANA system to confirm that the failover was successful:

    • The status of the hosts involved in the failover should be INFO.
    • The Index Server Role (Actual) column should show the failed host as the new standby host.

    Screenshot of the Landscape view in SAP HANA Studio
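
As an alternative to SAP HANA Studio, you can check the host roles from the command line by using the landscapeHostConfiguration.py script that is included with SAP HANA. The following is a minimal sketch that you run as the [SID]adm user; the path uses the SID and instance number from the example configuration, so substitute your own values:

    cd /usr/sap/HF0/HDB00/exe/python_support
    python landscapeHostConfiguration.py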

Completing the NAT gateway installation

If you created a NAT gateway, complete the following steps in Cloud Shell.

  1. In Cloud Shell, add network tags to all of the VM instances in the deployment, including the master, worker, and standby hosts:

    export NETWORK_NAME="[YOUR_NETWORK_NAME]"
    export TAG="[YOUR_TAG_TEXT]"
    gcloud compute instances add-tags "[VM_NAME]" --tags="$TAG" --zone=[VM_ZONE]

    Repeat the add-tags command for each VM instance in the scale-out system.

  2. In Cloud Shell, delete the external IP address of each VM instance:

    gcloud compute instances delete-access-config "[VM_NAME]" --access-config-name "external-nat" --zone=[VM_ZONE]

    Repeat the command for each VM instance in the scale-out system.
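
Because a scale-out system can have many hosts, you might prefer to tag the hosts and delete their external IPs in a loop. The following is a minimal sketch that assumes the example deployment from this guide (a master named hana-scaleout-w-failover with three worker hosts and one standby host, all in us-central1-f); adjust the host names, tag, and zone for your deployment:

    # Hypothetical values; replace them with the host names, tag, and zone of your deployment.
    # The tag must match the tag that you used on the route to the NAT gateway.
    ZONE="us-central1-f"
    TAG="[YOUR_TAG_NAME]"
    for VM in hana-scaleout-w-failover{,w1,w2,w3,w4}; do
      gcloud compute instances add-tags "$VM" --tags="$TAG" --zone="$ZONE"
      gcloud compute instances delete-access-config "$VM" \
          --access-config-name "external-nat" --zone="$ZONE"
    done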

Setting up Google's monitoring agent for SAP HANA

Optionally, you can set up Google's monitoring agent for SAP HANA, which collects metrics from SAP HANA and sends them to Stackdriver Monitoring. Stackdriver Monitoring allows you to create dashboards for your metrics, set up custom alerts based on metric thresholds, and more. For more information on setting up and configuring Google's monitoring agent for SAP HANA, see the SAP HANA Monitoring Agent User Guide.

Connecting to SAP HANA

Note that if you deployed the SAP HANA VMs without external IP addresses, you can connect to the SAP HANA instances only through the bastion instance by using SSH or through the Windows Server VM by using SAP HANA Studio.

  • To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instance(s) by using an SSH client of your choice.

  • To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After you connect, manually install SAP HANA Studio and access your SAP HANA database.
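
For example, to reach an SAP HANA host through the bastion instance, you might first SSH to the bastion VM and then SSH from there to the host over its internal IP address. The following is a minimal sketch with hypothetical names, zone, and address:

    # Connect to the bastion host (the NAT gateway VM in this guide).
    gcloud compute ssh [BASTION_VM_NAME] --zone=us-central1-f

    # From the bastion, connect to an SAP HANA host by its internal IP address.
    ssh [USERNAME]@10.1.0.5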

Performing post-deployment tasks

Before using your SAP HANA instance, we recommend that you perform the following post-deployment steps. For more information, see SAP HANA Installation and Update Guide.

  1. Update the SAP HANA software with the latest patches.

  2. Install any additional components, such as the Application Function Library (AFL) or Smart Data Access (SDA).

  3. Configure and back up your new SAP HANA database. For more information, see the SAP HANA operations guide.
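
For example, you can trigger a full file-based data backup with hdbsql. The following is a minimal sketch, run as the [SID]adm user, that assumes the multitenant layout, SID, and instance number from the example configuration; substitute your own values, confirm that the backup destination points to the /hanabackup volume, and consider backint-based backups for production systems:

    # Backs up the HF0 tenant database from the system database.
    # [SYSTEM_PASSWORD] is the database superuser password from the configuration file.
    hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p [SYSTEM_PASSWORD] \
        "BACKUP DATA FOR HF0 USING FILE ('full_backup')"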

What's next
