Automated SAP HANA HA deployment with load-balancer VIP implementation

This guide shows you how to automate the deployment of SAP HANA in a Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES) high-availability (HA) cluster that uses Internal TCP/UDP Load Balancing to manage the virtual IP (VIP) address.

The guide uses Cloud Deployment Manager to deploy two Compute Engine virtual machines (VMs), two single-host SAP HANA scale-up systems, a virtual IP address (VIP) with a load-balancer implementation, and an OS-based HA cluster, all according to best practices from Google Cloud, SAP, and the OS vendor.

One of the SAP HANA systems functions as the primary, active system and the other functions as a secondary, standby system. You deploy both SAP HANA systems within the same region, ideally in different zones.

Overview of a high-availability Linux cluster for a single-node SAP HANA scale-up system

The deployed cluster includes the following functions and features:

  • The Pacemaker high-availability cluster resource manager.
  • A Google Cloud fencing mechanism.
  • A virtual IP (VIP) that uses a Layer 4 TCP internal load balancer implementation, including:
    • A reservation of the IP address that you select for the VIP
    • Two Compute Engine instance groups
    • A TCP internal load balancer
    • A Compute Engine health check
  • In RHEL HA clusters:
    • The Red Hat high-availability pattern.
    • The Red Hat resource agent and fencing packages.
  • In SLES HA clusters:
    • The SUSE high-availability pattern.
    • The SUSE SAPHanaSR resource agent package.
  • Synchronous system replication.
  • Memory preload.
  • Automatic restart of the failed instance as the new secondary instance.

To deploy an SAP HANA system without a Linux high-availability cluster or standby hosts, use the SAP HANA Deployment Guide.

This guide is intended for advanced SAP HANA users who are familiar with Linux high-availability configurations for SAP HANA.

Prerequisites

Before you create the SAP HANA high-availability cluster, make sure that the following prerequisites are met:

  • You or your organization has a Google Cloud account and you have created a project for the SAP HANA deployment. For information about creating Google Cloud accounts and projects, see Setting up your Google account in the SAP HANA Deployment Guide.
  • The SAP HANA installation media is stored in a Cloud Storage bucket that is available in your deployment project and region. For information about how to upload SAP HANA installation media to a Cloud Storage bucket, see Downloading SAP HANA in the SAP HANA Deployment Guide.
  • If you are using VPC internal DNS, the value of the VmDnsSetting variable in your project metadata must be either GlobalOnly or ZonalPreferred to enable the resolution of the node names across zones. The default setting of VmDnsSetting is ZonalOnly. For more information, see the internal DNS documentation for your VPC network.
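
    For example, you can set the variable in your project metadata by using a command like the following:

    gcloud compute project-info add-metadata \
        --metadata VmDnsSetting=GlobalOnly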

Creating a network

For security purposes, create a new network. You can control who has access by adding firewall rules or by using another access control method.

If your project has a default VPC network, don't use it. Instead, create your own VPC network so that the only firewall rules in effect are those that you create explicitly.

During deployment, VM instances typically require access to the internet to download Google's monitoring agent. If you are using one of the SAP-certified Linux images that are available from Google Cloud, the VM instance also requires access to the internet in order to register the license and to access OS vendor repositories. A configuration with a NAT gateway and with VM network tags supports this access, even if the target VMs do not have external IPs.

To set up networking:

  1. Go to Cloud Shell.


  2. To create a new network in the custom subnetworks mode, run:

    gcloud compute networks create [YOUR_NETWORK_NAME] --subnet-mode custom

    where [YOUR_NETWORK_NAME] is the name of the new network. The network name can contain only lowercase characters, digits, and the dash character (-).

    Specify --subnet-mode custom to avoid using the default auto mode, which automatically creates a subnet in each Compute Engine region. For more information, see Subnet creation mode.

  3. Create a subnetwork, and specify the region and IP range:

    gcloud compute networks subnets create [YOUR_SUBNETWORK_NAME] \
            --network [YOUR_NETWORK_NAME] --region [YOUR_REGION] --range [YOUR_RANGE]

    where:

    • [YOUR_SUBNETWORK_NAME] is the new subnetwork.
    • [YOUR_NETWORK_NAME] is the name of the network you created in the previous step.
    • [YOUR_REGION] is the region where you want the subnetwork.
    • [YOUR_RANGE] is the IP address range, specified in CIDR format, such as 10.1.0.0/24. If you plan to add more than one subnetwork, assign non-overlapping CIDR IP ranges for each subnetwork in the network. Note that each subnetwork and its internal IP ranges are mapped to a single region.
  4. Optionally, repeat the previous step and add additional subnetworks.
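
For example, the following commands create the network and subnetwork that the example configuration files later in this guide assume; the 10.0.0.0/24 range is a placeholder that also accommodates the example VIP address:

    gcloud compute networks create example-network --subnet-mode custom
    gcloud compute networks subnets create example-subnet-us-central1 \
            --network example-network --region us-central1 --range 10.0.0.0/24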

Setting up a NAT gateway

If you need to create one or more VMs without public IP addresses, you need to use network address translation (NAT) to enable the VMs to access the internet. Use Cloud NAT, a Google Cloud distributed, software-defined managed service that lets VMs send outbound packets to the internet and receive any corresponding established inbound response packets. Alternatively, you can set up a separate VM as a NAT gateway.

To create a Cloud NAT instance for your project, see Using Cloud NAT.

After you configure Cloud NAT for your project, your VM instances can securely access the internet without a public IP address.
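
For example, the following sketch sets up a Cloud Router and a Cloud NAT gateway for the example network that is used later in this guide; the router and gateway names are placeholders:

    gcloud compute routers create example-router \
            --network example-network --region us-central1
    gcloud compute routers nats create example-nat-gateway \
            --router=example-router --region=us-central1 \
            --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges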

Adding firewall rules

By default, an implied firewall rule blocks incoming connections from outside your Virtual Private Cloud (VPC) network. To allow incoming connections, set up a firewall rule for your VM. After an incoming connection is established with a VM, traffic is permitted in both directions over that connection.

HA clusters for SAP HANA require at least two firewall rules, one that allows the Compute Engine health check to check the health of the cluster nodes and another that allows the cluster nodes to communicate with each other.

If you are not using a shared VPC network, you need to create the firewall rule for the communication between the nodes, but not for the health checks. The Deployment Manager template creates the firewall rule for the health checks, which you can modify after deployment is complete, if needed.

If you are using a shared VPC network, a network administrator needs to create both firewall rules in the host project.

You can also create a firewall rule to allow external access to specified ports, or to restrict access between VMs on the same network. If you use the default VPC network, some additional default rules also apply, such as the default-allow-internal rule, which allows connectivity between VMs on the same network on all ports.

Depending on the IT policy that is applicable to your environment, you might need to isolate or otherwise restrict connectivity to your database host, which you can do by creating firewall rules.

Depending on your scenario, you can create firewall rules to allow access for:

  • The default SAP ports that are listed in TCP/IP of All SAP Products.
  • Connections from your computer or your corporate network environment to your Compute Engine VM instance. If you are unsure of what IP address to use, talk to your company's network administrator.
  • SSH connections to your VM instance, including SSH from the browser.
  • Connection to your VM by using a third-party tool in Linux. Create a rule to allow access for the tool through your firewall.

To create a firewall rule:

Console

  1. In the Cloud Console, go to the Firewall rules page.


  2. At the top of the page, click Create firewall rule.

    • In the Network field, select the network where your VM is located.
    • In the Targets field, specify the resources on Google Cloud that this rule applies to. For example, specify All instances in the network. Or, to limit the rule to specific instances on Google Cloud, enter tags in Specified target tags.
    • In the Source filter field, select one of the following:
      • IP ranges to allow incoming traffic from specific IP addresses. Specify the range of IP addresses in the Source IP ranges field.
      • Subnets to allow incoming traffic from a particular subnetwork. Specify the subnetwork name in the following Subnets field. You can use this option to allow access between the VMs in a 3-tier or scale-out configuration.
    • In the Protocols and ports section, select Specified protocols and ports and enter tcp:[PORT_NUMBER].
  3. Click Create to create your firewall rule.

gcloud

Create a firewall rule by using the following command:

$ gcloud compute firewall-rules create firewall-name \
    --direction=INGRESS --priority=1000 \
    --network=network-name --action=ALLOW --rules=protocol:port \
    --source-ranges=ip-range --target-tags=network-tags
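
For example, if you need to create the health check firewall rule yourself, such as in a shared VPC host project, the rule might look like the following sketch. The rule name is a placeholder; 35.191.0.0/16 and 130.211.0.0/22 are the source ranges that Compute Engine health checks use, and the target tag matches the networkTag value in the example configuration files later in this guide:

$ gcloud compute firewall-rules create fw-allow-health-checks \
    --direction=INGRESS --priority=1000 \
    --network=example-network --action=ALLOW --rules=tcp \
    --source-ranges=35.191.0.0/16,130.211.0.0/22 \
    --target-tags=hana-ha-ntwk-tag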

Creating a high-availability Linux cluster with SAP HANA installed

The following instructions use the Cloud Deployment Manager to create a RHEL or SLES cluster with two SAP HANA systems: a primary single-host SAP HANA system on one VM instance and a standby SAP HANA system on another VM instance in the same Compute Engine region. The SAP HANA systems use synchronous system replication and the standby system preloads the replicated data.

You define configuration options for the SAP HANA high-availability cluster in a Deployment Manager configuration file template.

The following instructions use the Cloud Shell, but are generally applicable to the Cloud SDK.

  1. Confirm that your current quotas for resources such as persistent disks and CPUs are sufficient for the SAP HANA systems you are about to install. If your quotas are insufficient, deployment fails. For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.


  2. Open the Cloud Shell or, if you installed the Cloud SDK on your local workstation, open a terminal.


  3. Download the template.yaml configuration file template for the SAP HANA high-availability cluster to your working directory by entering the following command in the Cloud Shell or Cloud SDK:

    $ wget https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/latest/dm-templates/sap_hana_ha_ilb/template.yaml
  4. Optionally, rename the template.yaml file to identify the configuration it defines.

  5. Open the template.yaml file in the Cloud Shell code editor or, if you are using the Cloud SDK, the text editor of your choice.

    To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.

  6. In the template.yaml file, update the property values by replacing the brackets and their contents with the values for your installation. The properties are described in the following table.

    To create the VM instances without installing SAP HANA, delete or comment out all of the lines that begin with sap_hana_.

    The following list describes each configuration property and its data type.

    type (String)
      Specifies the location, type, and version of the Deployment Manager template to use during deployment.
      The YAML file includes two type specifications, one of which is commented out. The type specification that is active by default specifies the template version as latest. The type specification that is commented out specifies a specific template version with a timestamp.
      If you need all of your deployments to use the same template version, use the type specification that includes the timestamp.

    primaryInstanceName (String)
      The name of the VM instance for the primary SAP HANA system. Use only lowercase letters, numbers, and hyphens in the name.

    secondaryInstanceName (String)
      The name of the VM instance for the secondary SAP HANA system. Use only lowercase letters, numbers, and hyphens in the name.

    primaryZone (String)
      The zone in which the primary SAP HANA system is deployed. The primary and secondary zones must be in the same region.

    secondaryZone (String)
      The zone in which the secondary SAP HANA system is deployed. The primary and secondary zones must be in the same region.

    instanceType (String)
      The type of Compute Engine virtual machine that you need to run SAP HANA on. If you need a custom VM type, specify a predefined VM type with a number of vCPUs that is closest to, but larger than, the number you need. After deployment is complete, modify the number of vCPUs and the amount of memory.

    network (String)
      The name of the network in which to create the load balancer that manages the VIP.
      If you are using a shared VPC network, you must add the ID of the host project as a parent directory of the network name. For example, host-project-id/network-name.

    subnetwork (String)
      The name of the subnetwork that you are using for your HA cluster.
      If you are using a shared VPC network, you must add the ID of the host project as a parent directory of the subnetwork name. For example, host-project-id/subnetwork-name.

    linuxImage (String)
      The name of the Linux operating-system image or image family that you are using with SAP HANA. To specify an image family, add the prefix family/ to the family name. For example, family/rhel-8-2-sap-ha or family/sles-15-sp2-sap. To specify a specific image, specify only the image name. For the list of available image families, see the Images page in the Cloud Console.

    linuxImageProject (String)
      The Google Cloud project that contains the image you are going to use. This project might be your own project or a Google Cloud image project. For RHEL, specify rhel-sap-cloud. For SLES, specify suse-sap-cloud. For a list of Google Cloud image projects, see the Images page in the Compute Engine documentation.

    sap_hana_deployment_bucket (String)
      The name of the Cloud Storage bucket in your project that contains the SAP HANA installation files that you uploaded in a previous step.

    sap_hana_sid (String)
      The SAP HANA system ID. The ID must consist of three alphanumeric characters and begin with a letter. All letters must be uppercase.

    sap_hana_instance_number (Integer)
      The instance number, 0 to 99, of the SAP HANA system. The default is 0.

    sap_hana_sidadm_password (String)
      A temporary password for the operating system administrator to use during deployment. When deployment is complete, change the password. Passwords must be at least eight characters long and include at least one uppercase letter, one lowercase letter, and one number.

    sap_hana_system_password (String)
      A temporary password for the database superuser. When deployment is complete, change the password. Passwords must be at least eight characters long and include at least one uppercase letter, one lowercase letter, and one number.

    sap_hana_scaleout_nodes (Integer)
      The number of additional SAP HANA worker hosts that you need. Specify 0; scale-out hosts are currently not supported in high-availability configurations.

    sap_vip (String)
      The IP address to use for your VIP. The IP address must be within the range of IP addresses that are assigned to your subnetwork. The Deployment Manager template reserves this IP address for you. In an active HA cluster, this IP address is always assigned to the active SAP HANA instance.

    primaryInstanceGroupName (String)
      Defines the name of the unmanaged instance group for the primary node. If you omit this parameter, the default name is ig- followed by the primary instance name; for example, ig-example-ha-vm1.

    secondaryInstanceGroupName (String)
      Defines the name of the unmanaged instance group for the secondary node. If you omit this parameter, the default name is ig- followed by the secondary instance name; for example, ig-example-ha-vm2.

    loadBalancerName (String)
      Defines the name of the TCP internal load balancer.

    Under each of the following tabs is an example of a completed configuration file that directs Deployment Manager to deploy two n2-highmem-32 VMs for a high-availability cluster: the primary SAP HANA system in the us-central1-a zone and the secondary SAP HANA system in the us-central1-c zone. Startup scripts then install SAP HANA on each VM, configure SAP HANA system replication, and configure the Linux HA cluster.

    The template file references the sap_hana_ha_ilb folder in the cloudsapdeploy Cloud Storage bucket, which contains the scripts for an SAP HANA HA deployment that uses a load-balancer VIP implementation.

    Click the tab for RHEL or SLES for an example of how to specify the operating system in the configuration file.

    RHEL

    resources:
    - name: sap_hana_ha
      type: https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/latest/dm-templates/sap_hana_ha_ilb/sap_hana_ha.py
      #
      # By default, this configuration file uses the latest release of the deployment
      # scripts for SAP on Google Cloud.  To fix your deployments to a specific release
      # of the scripts, comment out the type property above and uncomment the type property below.
      #
      # type: https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/202103310846/dm-templates/sap_hana_ha_ilb/sap_hana_ha.py
      #
      properties:
        primaryInstanceName: example-ha-vm1
        secondaryInstanceName: example-ha-vm2
        primaryZone: us-central1-a
        secondaryZone: us-central1-c
        instanceType: n2-highmem-32
        network: example-network
        subnetwork: example-subnet-us-central1
        linuxImage: family/rhel-8-2-sap-ha
        linuxImageProject: rhel-sap-cloud
        # SAP HANA parameters
        sap_hana_deployment_bucket: my-hana-bucket
        sap_hana_sid: HA1
        sap_hana_instance_number: 00
        sap_hana_sidadm_password: TempPa55word
        sap_hana_system_password: TempPa55word
        # VIP parameters
        sap_vip: 10.0.0.100
        primaryInstanceGroupName: ig-example-ha-vm1
        secondaryInstanceGroupName: ig-example-ha-vm2
        loadBalancerName: lb-ha1
        # Additional optional properties
        networkTag: hana-ha-ntwk-tag
        serviceAccount: sap-deploy-example@example-project-123456.iam.gserviceaccount.com

    SLES

    resources:
    - name: sap_hana_ha
      type: https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/latest/dm-templates/sap_hana_ha_ilb/sap_hana_ha.py
      #
      # By default, this configuration file uses the latest release of the deployment
      # scripts for SAP on Google Cloud.  To fix your deployments to a specific release
      # of the scripts, comment out the type property above and uncomment the type property below.
      #
      # type: https://storage.googleapis.com/cloudsapdeploy/deploymentmanager/202103310846/dm-templates/sap_hana_ha_ilb/sap_hana_ha.py
      #
      properties:
        primaryInstanceName: example-ha-vm1
        secondaryInstanceName: example-ha-vm2
        primaryZone: us-central1-a
        secondaryZone: us-central1-c
        instanceType: n2-highmem-32
        network: example-network
        subnetwork: example-subnet-us-central1
        linuxImage: family/sles-15-sp1-sap
        linuxImageProject: suse-sap-cloud
        # SAP HANA parameters
        sap_hana_deployment_bucket: my-hana-bucket
        sap_hana_sid: HA1
        sap_hana_instance_number: 00
        sap_hana_sidadm_password: TempPa55word
        sap_hana_system_password: TempPa55word
        # VIP parameters
        sap_vip: 10.0.0.100
        primaryInstanceGroupName: ig-example-ha-vm1
        secondaryInstanceGroupName: ig-example-ha-vm2
        loadBalancerName: lb-ha1
        # Additional optional properties
        networkTag: hana-ha-ntwk-tag
        serviceAccount: sap-deploy-example@example-project-123456.iam.gserviceaccount.com
  7. Create the instances:

    $ gcloud deployment-manager deployments create deployment-name --config template-name.yaml

    This command invokes Deployment Manager, which sets up the Google Cloud infrastructure and then hands control over to a script that installs and configures SAP HANA and the HA cluster.

    While Deployment Manager has control, status messages are written to the Cloud Shell. After the script is invoked, status messages are written to Logging and are viewable in the Cloud Console, as described in Checking the logs.

    Time to completion can vary, but the entire process usually takes less than 30 minutes.
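
    For example, if you kept the default template file name, the command might look like the following; the deployment name sap-hana-ha-example is a placeholder:

    $ gcloud deployment-manager deployments create sap-hana-ha-example --config template.yaml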

Verifying the deployment of your HANA HA system

Verifying an SAP HANA HA cluster involves several different procedures:

  • Checking the logs
  • Checking the configuration of the VM and the SAP HANA installation
  • Checking the load balancer and the health of the instance groups
  • Checking the SAP HANA system using SAP HANA Studio
  • Performing a failover test

Checking the logs

  1. Open Logging to check for errors and monitor the progress of the installation.

    Go to Cloud Logging

  2. On the Logs Explorer page, select Global as your resource in the Query builder and run the query.

    • If "--- Finished" is displayed for both VM instances, proceed to the next step.


    • If the logs contain errors:

      1. Correct the errors.

        For quota errors, go to the IAM & admin Quotas page in the Cloud Console and increase any quotas that do not meet the SAP HANA requirements that are listed in the SAP HANA Planning Guide.

      2. On the Deployment Manager Deployments page in the Cloud Console, delete the deployment to clean up all of the infrastructure from the failed installation.

      3. Rerun the Deployment Manager.
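
As an alternative to the Logs Explorer, you can check for the completion message from the command line. The following is a sketch; the exact filter that matches your deployment's log entries is an assumption:

    gcloud logging read 'resource.type="global" AND textPayload:"--- Finished"' \
        --limit=10 --format="value(textPayload)"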

Checking the configuration of the VM and the SAP HANA installation

  1. After the SAP HANA system deploys without errors, connect to each VM by using SSH. From the Compute Engine VM instances page, you can click the SSH button for each VM instance, or you can use your preferred SSH method.


  2. Change to the root user.

    sudo su -
  3. At the command prompt, enter df -h. Ensure that you see output that includes the /hana directories, such as /hana/data.

    example-ha-vm1:~ # df -h
    Filesystem                        Size  Used Avail Use% Mounted on
    devtmpfs                          308G  8.0K  308G   1% /dev
    tmpfs                             461G   54M  461G   1% /dev/shm
    tmpfs                             308G  250M  307G   1% /run
    tmpfs                             308G     0  308G   0% /sys/fs/cgroup
    /dev/sda3                          30G  3.2G   27G  11% /
    /dev/sda2                          20M  3.6M   17M  18% /boot/efi
    /dev/mapper/vg_hana-shared        614G   50G  565G   9% /hana/shared
    /dev/mapper/vg_hana-sap            32G  278M   32G   1% /usr/sap
    /dev/mapper/vg_hana-data          951G  7.9G  943G   1% /hana/data
    /dev/mapper/vg_hana-log           307G  5.5G  302G   2% /hana/log
    /dev/mapper/vg_hanabackup-backup  1.3T  409G  840G  33% /hanabackup
  4. Check the status of the new cluster by entering the status command that is specific to your operating system:

    RHEL

    pcs status

    SLES

    crm status

    You should see results similar to the following example, in which both VM instances are started and example-ha-vm1 is the active primary instance:

    RHEL

    example-ha-vm1:~ # pcs status
    Cluster name: hacluster
    Cluster Summary:
    * Stack: corosync
    * Current DC: example-ha-vm1 (version 2.0.3-5.el8_2.3-4b1f869f0f) - partition with quorum
    * Last updated: Fri Mar 19 21:09:23 2021
    * Last change:  Fri Mar 19 21:08:27 2021 by root via crm_attribute on example-ha-vm1
    * 2 nodes configured
    * 8 resource instances configured
    
    Node List:
    * Online: [ example-ha-vm1 example-ha-vm2 ]
    
    Full List of Resources:
    * STONITH-example-ha-vm1    (stonith:fence_gce):    Started example-ha-vm2
    * STONITH-example-ha-vm2    (stonith:fence_gce):    Started example-ha-vm1
    * Resource Group: g-primary:
      * rsc_healthcheck_HA1 (service:haproxy):  Started example-ha-vm1
      * rsc_vip_HA1_00  (ocf::heartbeat:IPaddr2):   Started example-ha-vm1
    * Clone Set: SAPHanaTopology_HA1_00-clone [SAPHanaTopology_HA1_00]:
      * Started: [ example-ha-vm1 example-ha-vm2 ]
    * Clone Set: SAPHana_HA1_00-clone [SAPHana_HA1_00] (promotable):
      * Masters: [ example-ha-vm1 ]
      * Slaves: [ example-ha-vm2 ]
    
    Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled

    SLES

    example-ha-vm1:~ # crm status
    Stack: corosync
    Current DC: example-ha-vm1 (version 2.0.1+20190417.13d370ca9-3.15.1-2.0.1+20190417.13d370ca9) - partition with quorum
    Last updated: Thu Nov 19 16:34:14 2020
    Last change: Thu Nov 19 16:34:04 2020 by root via crm_attribute on example-ha-vm1
    
    2 nodes configured
    8 resources configured
    
    Online: [ example-ha-vm1 example-ha-vm2 ]
    
    Full list of resources:
    
    STONITH-example-ha-vm1 (stonith:external/gcpstonith):  Started example-ha-vm2
    STONITH-example-ha-vm2 (stonith:external/gcpstonith):  Started example-ha-vm1
    Resource Group: g-primary
       rsc_vip_int-primary        (ocf::heartbeat:IPaddr2):       Started example-ha-vm1
       rsc_vip_hc-primary (ocf::heartbeat:anything):      Started example-ha-vm1
    Clone Set: cln_SAPHanaTopology_HA1_HDB00 [rsc_SAPHanaTopology_HA1_HDB00]
       Started: [ example-ha-vm1 example-ha-vm2 ]
    Clone Set: msl_SAPHana_HA1_HDB00 [rsc_SAPHana_HA1_HDB00] (promotable)
       Masters: [ example-ha-vm1 ]
       Slaves: [ example-ha-vm2 ]
  5. Change to the SAP admin user, replacing [SID] in the following command with the sap_hana_sid value that you specified in the configuration file template. Specify [SID] in lowercase, because the operating system user name is the lowercase SID followed by adm:

    su - [SID]adm
    
  6. Ensure that the SAP HANA services, such as hdbnameserver, hdbindexserver, and others, are running on the instance by entering the following command:

    HDB info
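
    Optionally, while you are still logged in as the SAP admin user on the primary host, you can check the system replication status from the command line. A minimal sketch, assuming the example SID HA1 and instance number 00:

    python /usr/sap/HA1/HDB00/exe/python_support/systemReplicationStatus.py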
    

Checking the load balancer and the health of the instance groups

To confirm that the load balancer and health check were set up correctly, check the load balancer and instance groups in the Cloud Console.

  1. Open the Load balancing page in the Cloud Console:

    Go to Cloud Load Balancing

  2. In the list of load balancers, confirm that a load balancer was created for your HA cluster.

  3. On the Load balancer details page, in the Backend section, check the Healthy column under Instance group. Confirm that one of the instance groups shows "1/1" and the other shows "0/1". After a failover, the healthy indicator, "1/1", switches to the new active instance group.

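You can also check the health of the backend instance groups from the command line. The following is a sketch; it assumes that the backend service uses the load balancer name from the example configuration files and is in the us-central1 region:

    gcloud compute backend-services get-health lb-ha1 --region=us-central1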

Checking the SAP HANA system using SAP HANA Studio

  1. Connect to the HANA system by using SAP HANA Studio. When defining the connection, specify the following values:

    • On the Specify System panel, specify the floating IP address as the Host Name.
    • On the Connection Properties panel, for database user authentication, specify the database superuser name and the password that you specified for the sap_hana_system_password property in the template.yaml file.

    For information from SAP about installing SAP HANA Studio, see SAP HANA Studio Installation and Update Guide.

  2. After SAP HANA Studio is connected to your HANA HA system, display the system overview by double-clicking the system name in the navigation pane on the left side of the window.


  3. Under General Information on the Overview tab, confirm that:

    • The Operational Status shows "All services started".
    • The System Replication Status shows "All services are active and in sync".


  4. Confirm the replication mode by clicking the System Replication Status link under General Information. Synchronous replication is indicated by SYNCMEM in the REPLICATION_MODE column on the System Replication tab.


If any of the validation steps show that the installation failed:

  1. Resolve the errors.
  2. Delete the deployment from the Deployments page.
  3. Recreate the instances, as described in the last step of the previous section.

Performing a failover test

To perform a failover test:

  1. Connect to the primary VM by using SSH. You can connect from the Compute Engine VM instances page by clicking the SSH button for the primary VM instance, or you can use your preferred SSH method.

  2. At the command prompt, enter the following command:

    sudo ip link set eth0 down

    The ip link set eth0 down command triggers a failover by severing communications with the primary host.

  3. Reconnect to either host using SSH and change to the root user.

  4. Confirm that the primary host is now active on the VM that used to contain the secondary host. Automatic restart is enabled in the cluster, so the stopped host will restart and assume the role of secondary host.

    • On RHEL, enter pcs status
    • On SLES, enter crm status

    The following examples show that the roles on each host have switched.

    RHEL

    [root@example-ha-vm1 ~]# pcs status
    Cluster name: hacluster
    Cluster Summary:
      * Stack: corosync
      * Current DC: example-ha-vm1 (version 2.0.3-5.el8_2.3-4b1f869f0f) - partition with quorum
      * Last updated: Fri Mar 19 21:22:07 2021
      * Last change:  Fri Mar 19 21:21:28 2021 by root via crm_attribute on example-ha-vm2
      * 2 nodes configured
      * 8 resource instances configured
    
    Node List:
      * Online: [ example-ha-vm1 example-ha-vm2 ]
    
    Full List of Resources:
      * STONITH-example-ha-vm1  (stonith:fence_gce):    Started example-ha-vm2
      * STONITH-example-ha-vm2  (stonith:fence_gce):    Started example-ha-vm1
      * Resource Group: g-primary:
        * rsc_healthcheck_HA1   (service:haproxy):  Started example-ha-vm2
        * rsc_vip_HA1_00    (ocf::heartbeat:IPaddr2):   Started example-ha-vm2
      * Clone Set: SAPHanaTopology_HA1_00-clone [SAPHanaTopology_HA1_00]:
        * Started: [ example-ha-vm1 example-ha-vm2 ]
      * Clone Set: SAPHana_HA1_00-clone [SAPHana_HA1_00] (promotable):
        * Masters: [ example-ha-vm2 ]
        * Slaves: [ example-ha-vm1 ]
    

    SLES

    Screenshot of the crm status output showing that the primary and secondary hosts switched VMs

  5. On the Load balancer details page in the console, confirm that the new active primary instance shows "1/1" in the Healthy column. Refresh the page, if necessary.



  6. In SAP HANA Studio, confirm that you are still connected to the system by double-clicking the system entry in the navigation pane to refresh the system information.

  7. Click the System Replication Status link to confirm that the primary and secondary hosts have switched roles and are active.


Setting up the Google monitoring agent for SAP HANA

Optionally, you can set up the Google monitoring agent for SAP HANA, which collects metrics from SAP HANA and sends them to Monitoring. Monitoring lets you create dashboards for your metrics, set up custom alerts based on metric thresholds, and more.

To monitor an HA cluster, install the monitoring agent on a VM instance outside of the cluster. Specify the floating IP address of the cluster as the IP address of the host instance to monitor.

For more information on setting up and configuring the Google monitoring agent for SAP HANA, see the SAP HANA Monitoring Agent User Guide.

Connecting to SAP HANA

Note that because these instructions don't use an external IP address for SAP HANA, you can connect to the SAP HANA instances only through the bastion instance by using SSH, or through the Windows Server instance by using SAP HANA Studio.

  • To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instance(s) by using an SSH client of your choice, as shown in the sketch after this list.

  • To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After connection, manually install SAP HANA Studio and access your SAP HANA database.
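
For example, the following is a minimal sketch of connecting through a bastion host; the instance name example-bastion is a placeholder, and example-ha-vm1 is resolved through VPC internal DNS:

    # From your workstation or Cloud Shell, connect to the bastion host:
    gcloud compute ssh example-bastion --zone=us-central1-a

    # From the bastion host, connect to a SAP HANA instance by its internal host name:
    ssh example-ha-vm1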

Performing post-deployment tasks

Before using your SAP HANA instance, we recommend that you perform the following post-deployment steps. For more information, see the SAP HANA Installation and Update Guide.

  1. Change the temporary passwords for the SAP HANA system administrator and database superuser, as shown in the sketch after this list.

  2. Update the SAP HANA software with the latest patches.

  3. Install any additional components such as Application Function Libraries (AFL) or Smart Data Access (SDA).

  4. Configure and back up your new SAP HANA database. For more information, see the SAP HANA operations guide.
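
For step 1, the following is a minimal sketch that assumes the example SID HA1 and instance number 00 from the example configuration files; the new password values are placeholders:

    # As root on each VM, change the OS password for the SAP admin user.
    # The user name is the lowercase SID followed by "adm":
    passwd ha1adm

    # As the SAP admin user, change the database superuser password by using hdbsql.
    # -i is the instance number, -d the database, -u the database user:
    hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p TempPa55word \
      "ALTER USER SYSTEM PASSWORD \"NewStrongPa55w0rd\""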

What's next