HA cluster configuration guide for SAP HANA on SLES

This guide shows you how to deploy and configure a performance-optimized SUSE Linux Enterprise Server (SLES) high-availability (HA) cluster for an SAP HANA scale-up system on Google Cloud.

This guide includes the steps for:

  • Configuring Internal TCP/UDP Load Balancing to reroute traffic in the event of a failure
  • Configuring a Pacemaker cluster on SLES to manage the SAP systems and other resources during a failover

This guide also includes steps for configuring SAP HANA system replication, but refer to the SAP documentation for the definitive instructions.

To deploy an SAP HANA system without a Linux high-availability cluster or standby hosts, use the SAP HANA deployment guide.

To configure an HA cluster for SAP HANA on Red Hat Enterprise Linux (RHEL), see the HA cluster configuration guide for SAP HANA scale-up on RHEL.

This guide is intended for advanced SAP HANA users who are familiar with Linux high-availability configurations for SAP HANA.

The system that this guide deploys

Following this guide, you will deploy two SAP HANA instances and set up an HA cluster on SLES. You deploy each SAP HANA instance on a Compute Engine VM in a different zone within the same region. A high-availability installation of SAP NetWeaver is not covered in this guide.

Overview of a high-availability Linux cluster for a single-node SAP HANA scale-up system

The deployed cluster includes the following functions and features:

  • Two host VMs, each with an instance of SAP HANA
  • Synchronous SAP HANA system replication
  • The Pacemaker high-availability cluster resource manager
  • A STONITH fencing mechanism
  • Automatic restart of the failed instance as the new secondary instance

This guide has you use the Cloud Deployment Manager templates that are provided by Google Cloud to deploy the Compute Engine virtual machines (VMs) and the SAP HANA instances, which ensures that the VMs and the base SAP HANA systems meet SAP supportability requirements and conform to current best practices.

SAP HANA Studio is used in this guide to test SAP HANA system replication. You can use SAP HANA Cockpit instead, if you prefer. For information about installing SAP HANA Studio, see:

Prerequisites

Before you create the SAP HANA high availability cluster, make sure that the following prerequisites are met:

  • You or your organization has a Google Cloud account and you have created a project for the SAP HANA deployment. For information about creating Google Cloud accounts and projects, see Setting up your Google account in the SAP HANA Deployment Guide.
  • The SAP HANA installation media is stored in a Cloud Storage bucket that is available in your deployment project and region. For information about how to upload SAP HANA installation media to a Cloud Storage bucket, see Downloading SAP HANA in the SAP HANA Deployment Guide.
  • If you are using VPC internal DNS, the value of the VmDnsSetting variable in your project metadata must be either GlobalOnly or ZonalPreferred to enable the resolution of the node names across zones. The default setting of VmDnsSetting is ZonalOnly. For more information, see:
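
If you need to change this setting, you can update the project metadata from Cloud Shell. The following command is a minimal sketch that sets the value to GlobalOnly; ZonalPreferred also enables cross-zone name resolution:

$ gcloud compute project-info add-metadata --metadata VmDnsSetting=GlobalOnly

You can review the current project metadata, including VmDnsSetting, with gcloud compute project-info describe.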

Creating a network

For security purposes, create a new network. You can control who has access by adding firewall rules or by using another access control method.

If your project has a default VPC network, don't use it. Instead, create your own VPC network so that the only firewall rules in effect are those that you create explicitly.

During deployment, VM instances typically require access to the internet to download Google's monitoring agent. If you are using one of the SAP-certified Linux images that are available from Google Cloud, the VM instance also requires access to the internet in order to register the license and to access OS vendor repositories. A configuration with a NAT gateway and with VM network tags supports this access, even if the target VMs do not have external IPs.

To set up networking:

  1. Go to Cloud Shell.

    Go to Cloud Shell

  2. To create a new network in the custom subnetworks mode, run:

    gcloud compute networks create [YOUR_NETWORK_NAME] --subnet-mode custom

    where [YOUR_NETWORK_NAME] is the name of the new network. The network name can contain only lowercase characters, digits, and the dash character (-).

    Specify --subnet-mode custom to avoid using the default auto mode, which automatically creates a subnet in each Compute Engine region. For more information, see Subnet creation mode.

  3. Create a subnetwork, and specify the region and IP range:

    gcloud compute networks subnets create [YOUR_SUBNETWORK_NAME] \
            --network [YOUR_NETWORK_NAME] --region [YOUR_REGION] --range [YOUR_RANGE]

    where:

    • [YOUR_SUBNETWORK_NAME] is the new subnetwork.
    • [YOUR_NETWORK_NAME] is the name of the network you created in the previous step.
    • [YOUR_REGION] is the region where you want the subnetwork.
    • [YOUR_RANGE] is the IP address range, specified in CIDR format, such as 10.1.0.0/24. If you plan to add more than one subnetwork, assign non-overlapping CIDR IP ranges for each subnetwork in the network. Note that each subnetwork and its internal IP ranges are mapped to a single region.
  4. Optionally, repeat the previous step and add additional subnetworks.
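
For example, using the sample network name and subnetwork name that appear later in this guide, the two commands might look like the following. The region and IP range shown here are placeholders for illustration only:

gcloud compute networks create example-network --subnet-mode custom
gcloud compute networks subnets create example-subnet-us-central1 \
        --network example-network --region us-central1 --range 10.1.0.0/24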

Setting up a NAT gateway

If you intend to create one or more VMs that will not have public IP addresses, you must create a NAT gateway so that your VMs can access the Internet to download Google's monitoring agent.

If you intend to assign an external public IP address to your VM, you can skip this step.

To create a NAT gateway:

  1. Create a VM to act as the NAT gateway in the subnet you just created:

    gcloud compute instances create [YOUR_VM_NAME] --can-ip-forward \
            --zone [YOUR_ZONE]  --image-family [YOUR_IMAGE_FAMILY] \
            --image-project [YOUR_IMAGE_PROJECT] \
            --machine-type=[YOUR_MACHINE_TYPE] --subnet [YOUR_SUBNETWORK_NAME] \
            --metadata startup-script="sysctl -w net.ipv4.ip_forward=1; iptables \
            -t nat -A POSTROUTING -o eth0 -j MASQUERADE" --tags [YOUR_VM_TAG]

    where:

    • [YOUR_VM_NAME] is the name of the VM that you are creating to use as the NAT gateway.
    • [YOUR_ZONE] is the zone where you want the VM.
    • [YOUR_IMAGE_FAMILY] and [YOUR_IMAGE_PROJECT] specify the image you want to use for the NAT gateway.
    • [YOUR_MACHINE_TYPE] is any supported machine type. If you expect high network traffic, choose a machine type that has at least eight virtual CPUs.
    • [YOUR_SUBNETWORK_NAME] is the name of the subnetwork where you want the VM.
    • [YOUR_VM_TAG] is a tag that is applied to the VM you are creating. If you use this VM as a bastion host, this tag is used to apply the related firewall rule only to this VM.
  2. Create a route that is tagged so that traffic passes through the NAT VM instead of the default Internet gateway:

    gcloud compute routes create [YOUR_ROUTE_NAME] \
            --network [YOUR_NETWORK_NAME] --destination-range 0.0.0.0/0 \
            --next-hop-instance [YOUR_VM_NAME] --next-hop-instance-zone \
            [YOUR_ZONE] --tags [YOUR_TAG_NAME] --priority 800

    where:

    • [YOUR_ROUTE_NAME] is the name of the route you are creating.
    • [YOUR_NETWORK_NAME] is the network you created.
    • [YOUR_VM_NAME] is the VM you are using for your NAT gateway.
    • [YOUR_ZONE] is the zone where the VM is located.
    • [YOUR_TAG_NAME] is the tag on the route that directs traffic through the NAT VM.
  3. If you also want to use the NAT gateway VM as a bastion host, run the following command. This command creates a firewall rule that allows inbound SSH access to this instance from the Internet:

    gcloud compute firewall-rules create allow-ssh --network [YOUR_NETWORK_NAME] --allow tcp:22 --source-ranges 0.0.0.0/0 --target-tags "[YOUR_VM_TAG]"

    where:

    • [YOUR_NETWORK_NAME] is the network you created.
    • [YOUR_VM_TAG] is the tag you specified when you created the NAT gateway VM. This tag is used so this firewall rule applies only to the VM that hosts the NAT gateway, and not to all VMs in the network.

Adding firewall rules

By default, an implied firewall rule blocks incoming connections from outside your Virtual Private Cloud (VPC) network. To allow incoming connections, set up a firewall rule for your VM. After an incoming connection is established with a VM, traffic is permitted in both directions over that connection.

You can also create a firewall rule to allow external access to specified ports, or to restrict access between VMs on the same network. If the default VPC network type is used, some additional default rules also apply, such as the default-allow-internal rule, which allows connectivity between VMs on the same network on all ports.

Depending on the IT policy that is applicable to your environment, you might need to isolate or otherwise restrict connectivity to your database host, which you can do by creating firewall rules.

Depending on your scenario, you can create firewall rules to allow access for:

  • The default SAP ports that are listed in TCP/IP of All SAP Products.
  • Connections from your computer or your corporate network environment to your Compute Engine VM instance. If you are unsure of what IP address to use, talk to your company's network administrator.
  • Communication between VMs when, for example, your database server and application server are running on different VMs. To enable communication between VMs, you must create a firewall rule to allow traffic that originates from the subnetwork.

To create a firewall rule:

Console

  1. In the Cloud Console, go to the Firewall rules page.

    OPEN FIREWALL RULES

  2. At the top of the page, click Create firewall rule.

    • In the Network field, select the network where your VM is located.
    • In the Targets field, specify the resources on Google Cloud that this rule applies to. For example, specify All instances in the network. Or, to limit the rule to specific instances on Google Cloud, enter tags in Specified target tags.
    • In the Source filter field, select one of the following:
      • IP ranges to allow incoming traffic from specific IP addresses. Specify the range of IP addresses in the Source IP ranges field.
      • Subnets to allow incoming traffic from a particular subnetwork. Specify the subnetwork name in the following Subnets field. You can use this option to allow access between the VMs in a 3-tier or scale-out configuration.
    • In the Protocols and ports section, select Specified protocols and ports and enter tcp:[PORT_NUMBER].
  3. Click Create to create your firewall rule.

gcloud

Create a firewall rule by using the following command:

$ gcloud compute firewall-rules create firewall-name \
--direction=INGRESS --priority=1000 \
--network=network-name --action=ALLOW --rules=protocol:port \
--source-ranges ip-range --target-tags=network-tags
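
For example, to allow SQL client connections from a corporate network range to the SAP HANA hosts, a rule might look like the following. The rule name, network, source range, and target tag are placeholders, and the port assumes instance number 22 with the 3NN15 SQL port convention that appears later in this guide:

$ gcloud compute firewall-rules create allow-hana-sql \
--direction=INGRESS --priority=1000 \
--network=example-network --action=ALLOW --rules=tcp:32215 \
--source-ranges 192.0.2.0/24 --target-tags=cluster-ntwk-tag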

Deploying the VMs and SAP HANA

Before you begin configuring the HA cluster, you define and deploy the VM instances and SAP HANA systems that will serve as the primary and secondary nodes in your HA cluster.

To define and deploy the systems, you use the same Cloud Deployment Manager template that you use to deploy an SAP HANA system in the SAP HANA deployment guide.

However, to deploy two systems instead of one, you need to add the definition for the second system to the configuration file by copying and pasting the definition of the first system. After you create the second definition, you need to change the resource and instance names in the second definition. To protect against a zonal failure, specify a different zone in the same region. All other property values in the two definitions stay the same.

After the SAP HANA systems have deployed successfully, you define and configure the HA cluster.

The following instructions use the Cloud Shell, but are generally applicable to the Cloud SDK.

  1. Confirm that your current quotas for resources such as persistent disks and CPUs are sufficient for the SAP HANA systems you are about to install. If your quotas are insufficient, deployment fails. For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.

    Go to the quotas page

  2. Open the Cloud Shell or, if you installed the Cloud SDK on your local workstation, open a terminal.

    Go to the Cloud Shell

  3. Download the template.yaml configuration file template for the SAP HANA high-availability cluster to your working directory by entering the following command in the Cloud Shell or Cloud SDK:

    wget https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana/template.yaml
  4. Optionally, rename the template.yaml file to identify the configuration it defines.

  5. Open the template.yaml file in the Cloud Shell code editor or, if you are using the Cloud SDK, the text editor of your choice.

    To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.

  6. In the template.yaml file, complete the definition of the first VM and SAP HANA system. Specify the property values by replacing the brackets and their contents with the values for your installation. The properties are described in the following table.

    To create the VM instances without installing SAP HANA, delete or comment out all of the lines that begin with sap_hana_.

    Property Data type Description
    instanceName String The name of the VM instance currently being defined. Specify different names in the primary and secondary VM definitions. Names must be specified in lowercase letters, numbers, or hyphens.
    instanceType String The type of Compute Engine virtual machine that you need to run SAP HANA on. If you need a custom VM type, specify a predefined VM type with a number of vCPUs that is closest to, but larger than, the number you need. After deployment is complete, modify the number of vCPUs and the amount of memory.
    zone String The Google Cloud zone in which to deploy the VM instance that you are defining. Specify different zones in the same region for the primary and secondary VM definitions. The zones must be in the same region that you selected for your subnet.
    subnetwork String The name of the subnetwork you created in a previous step. If you are deploying to a shared VPC, specify this value as [SHAREDVPC_PROJECT]/[SUBNETWORK]. For example, myproject/network1.
    linuxImage String The name of the Linux operating-system image or image family that you are using with SAP HANA. To specify an image family, add the prefix family/ to the family name. For example, family/sles-15-sp1-sap. To specify a specific image, specify only the image name. For the list of available images and families, see the Images page in Cloud Console.
    linuxImageProject String The Google Cloud project that contains the image you are going to use. This project might be your own project or a Google Cloud image project, such as suse-sap-cloud. For more information about GCP image projects, see the Images page in the Compute Engine documentation.
    sap_hana_deployment_bucket String The name of the GCP storage bucket in your project that contains the SAP HANA installation and revision files that you uploaded in a previous step. Any upgrade revision files in the bucket are applied to SAP HANA during the deployment process.
    sap_hana_sid String The SAP HANA system ID (SID). The ID must consist of three alphanumeric characters and begin with a letter. All letters must be uppercase.
    sap_hana_instance_number Integer The instance number, 0 to 99, of the SAP HANA system. The default is 0.
    sap_hana_sidadm_password String The password for the operating system (OS) administrator. Passwords must be at least eight characters and include at least one uppercase letter, one lowercase letter, and one number.
    sap_hana_system_password String The password for the database superuser. Passwords must be at least 8 characters and include at least one uppercase letter, one lowercase letter, and one number.
    sap_hana_sidadm_uid Integer The default value for the sidadm user ID is 900 to avoid user-created groups conflicting with SAP HANA. You can change this to a different value if you need to.
    sap_hana_sapsys_gid Integer The default group ID for sapsys is 79. You can specify a different value to meet your requirements.
    sap_hana_scaleout_nodes Integer Specify 0. These instructions are for scale-up SAP HANA systems only.
    networkTag String A network tag that represents your VM instance for firewall or routing purposes. If you specify `publicIP: No` and do not specify a network tag, be sure to provide another means of access to the internet.
    publicIP Boolean Optional. Determines whether a public IP address is added to your VM instance. The default is Yes.
    serviceAccount String Optional. Specifies a service account to be used by the host VMs and by the programs that run on the host VMs. Specify the member email account of the service account. For example, svc-acct-name@project-id.iam.gserviceaccount.com. By default, the Compute Engine default service account is used. For more information, see Identity and access management for SAP programs on Google Cloud.
  7. Create the definition of the second VM and SAP HANA system by copying the definition of the first and pasting the copy after the first definition. See the example following these steps.

  8. In the definition of the second system, specify different values for the following properties than you specified in the first definition:

    • name
    • instanceName
    • zone
  9. Create the instances:

    gcloud deployment-manager deployments create deployment-name --config template-name.yaml

    The above command invokes the Deployment Manager, which deploys the VMs, downloads the SAP HANA software from your storage bucket, and installs SAP HANA, all according to the specifications in your template.yaml file.

    Deployment processing consists of two stages. In the first stage, Deployment Manager writes its status to the console. In the second stage, the deployment scripts write their status to Cloud Logging.
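
    For example, if you name the deployment hana-ha (an example name) and keep the default file name, the command looks like the following, and you can check the high-level status of the deployment while it runs:

    gcloud deployment-manager deployments create hana-ha --config template.yaml

    gcloud deployment-manager deployments describe hana-ha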

Example of a complete template.yaml configuration file

The following example shows a completed template.yaml configuration file that deploys two VM instances with an SAP HANA system installed.

The file contains the definitions of two resources to deploy: sap_hana_primary and sap_hana_secondary. Each resource definition contains the definitions for a VM and a SAP HANA instance.

The sap_hana_secondary resource definition was created by copying and pasting the first definition, and then modifying the values of name, instanceName, and zone properties. All other property values in the two resource definitions are the same.

The properties networkTag, serviceAccount, sap_hana_sidadm_uid, and sap_hana_sapsys_gid are from the Advanced Options section of the configuration file template. The properties sap_hana_sidadm_uid and sap_hana_sapsys_gid are included to show their default values, which are used because the properties are commented out.

imports:
- path: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana/sap_hana.py

resources:
- name: sap_hana_primary
  type: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana/sap_hana.py
  properties:
    instanceName: hana-ha-vm-1
    instanceType: n2-highmem-32
    zone: us-central1-a
    subnetwork: example-subnet-us-central1
    linuxImage: family/sles-15-sp1-sap
    linuxImageProject: suse-sap-cloud
    sap_hana_deployment_bucket: hana2-sp4-rev46
    sap_hana_sid: HA1
    sap_hana_instance_number: 22
    sap_hana_sidadm_password: Google123
    sap_hana_system_password: Google123
    sap_hana_scaleout_nodes: 0
    networkTag: cluster-ntwk-tag
    serviceAccount: limited-roles@example-project-123456.iam.gserviceaccount.com
    # sap_hana_sidadm_uid: 900
    # sap_hana_sapsys_gid: 79

- name: sap_hana_secondary
  type: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana/sap_hana.py
  properties:
    instanceName: hana-ha-vm-2
    instanceType: n2-highmem-32
    zone: us-central1-c
    subnetwork: example-subnet-us-central1
    linuxImage: family/sles-15-sp1-sap
    linuxImageProject: suse-sap-cloud
    sap_hana_deployment_bucket: hana2-sp4-rev46
    sap_hana_sid: HA1
    sap_hana_instance_number: 22
    sap_hana_sidadm_password: Google123
    sap_hana_system_password: Google123
    sap_hana_scaleout_nodes: 0
    networkTag: cluster-ntwk-tag
    serviceAccount: limited-roles@example-project-123456.iam.gserviceaccount.com
    # sap_hana_sidadm_uid: 900
    # sap_hana_sapsys_gid: 79

Create firewall rules that allow access to the host VMs

If you haven't done so already, create firewall rules that allow access to each host VM from the following sources:

  • For configuration purposes, your local workstation, a bastion host, or a jump server
  • For access between the cluster nodes, the other host VMs in the HA cluster

When you create VPC firewall rules, you specify the network tags that you defined in the template.yaml configuration file to designate your host VMs as the target for the rule.

To verify deployment, define a rule to allow SSH connections on port 22 from a bastion host or your local workstation.

For access between the cluster nodes, add a firewall rule that allows all connection types on any port from other VMs in the same subnetwork.

Make sure that the firewall rules for verifying deployment and for intra-cluster communication are created before proceeding to the next section. For instructions, see Adding firewall rules.
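
As a sketch, reusing the example network name, subnetwork range, and network tag that are used elsewhere in this guide, the two rules might look like the following. The rule names and the workstation source range are placeholders, and the protocol list covers the common protocols; adjust it to your requirements:

$ gcloud compute firewall-rules create allow-ssh-to-hana-nodes \
    --network example-network --direction INGRESS --action ALLOW \
    --rules tcp:22 --source-ranges 192.0.2.0/24 --target-tags cluster-ntwk-tag
$ gcloud compute firewall-rules create allow-cluster-internal \
    --network example-network --direction INGRESS --action ALLOW \
    --rules tcp,udp,icmp --source-ranges 10.1.0.0/24 --target-tags cluster-ntwk-tag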

Verifying the deployment of the VMs and SAP HANA

Before you begin configuring the HA cluster, verify that the VMs and SAP HANA were deployed correctly by checking the logs, the OS directory mapping, and the SAP HANA installation.

Checking the logs

  1. Open Cloud Logging to check for errors and monitor the progress of the installation.

    Go to Cloud Logging

  2. On the Resources tab, select Global as your logging resource.

    • If "INSTANCE DEPLOYMENT COMPLETE" is displayed, Deployment Manager processing is complete and you can proceed to the next step.
    • If you see a quota error:

      1. On the IAM & Admin Quotas page, increase any of your quotas that do not meet the SAP HANA requirements that are listed in the SAP HANA Planning Guide.
      2. On the Deployment Manager Deployments page, delete the deployment to clean up the VMs and persistent disks from the failed installation.
      3. Rerun the Deployment Manager.


Checking the configuration of the VMs and SAP HANA

  1. After the SAP HANA system deploys without errors, connect to each VM by using SSH. From the Compute Engine VM instances page, you can click the SSH button for each VM instance, or you can use your preferred SSH method.


  2. Change to the root user.

    $ sudo su -
  3. At the command prompt, enter df -h. On each VM, ensure that you see the /hana directories, such as /hana/data.

    Filesystem                        Size  Used Avail Use% Mounted on
    /dev/sda2                          30G  4.0G   26G  14% /
    devtmpfs                          126G     0  126G   0% /dev
    tmpfs                             126G     0  126G   0% /dev/shm
    tmpfs                             126G   17M  126G   1% /run
    tmpfs                             126G     0  126G   0% /sys/fs/cgroup
    /dev/sda1                         200M  9.7M  191M   5% /boot/efi
    /dev/mapper/vg_hana-shared        251G   49G  203G  20% /hana/shared
    /dev/mapper/vg_hana-sap            32G  240M   32G   1% /usr/sap
    /dev/mapper/vg_hana-data          426G  7.0G  419G   2% /hana/data
    /dev/mapper/vg_hana-log           125G  4.2G  121G   4% /hana/log
    /dev/mapper/vg_hanabackup-backup  512G   33M  512G   1% /hanabackup
    tmpfs                              26G     0   26G   0% /run/user/900
    tmpfs                              26G     0   26G   0% /run/user/899
    tmpfs                              26G     0   26G   0% /run/user/1000
  4. Change to the SAP admin user by replacing sid in the following command with the system ID that you specified in the configuration file template.

    # su - sidadm
  5. Ensure that the SAP HANA services, such as hdbnameserver, hdbindexserver, and others, are running on the instance by entering the following command:

    > HDB info

Disable SAP HANA autostart

For each SAP HANA instance in the cluster, make sure that SAP HANA autostart is disabled. For failovers, Pacemaker manages the starting and stopping of the SAP HANA instances in a cluster.

  1. On each host as sidadm, stop SAP HANA:

    > HDB stop
  2. On each host, open the SAP HANA profile by using an editor, such as vi:

    vi /usr/sap/SID/SYS/profile/SID_HDBinst_num_host_name
  3. Set the Autostart property to 0:

    Autostart=0
  4. Save the profile.

  5. On each host as sidadm, start SAP HANA:

    > HDB start

Configure SSH keys on the primary and secondary VMs

The SAP HANA secure store (SSFS) keys need to be synchronized between the hosts in the HA cluster. To simplify the synchronization, and to allow files like backups to be copied between the hosts in the HA cluster, these instructions have you create root SSH connections between the two hosts.

Your organization is likely to have guidelines that govern internal network communications. If necessary, after deployment is complete you can remove the metadata from the VMs and the keys from the authorized_keys file.

If setting up direct SSH connections does not comply with your organization's guidelines, you can synchronize the SSFS keys and transfer files by using other methods, such as:

To enable SSH connections between the primary and secondary instances, follow these steps.

  1. On the primary host VM:

    1. SSH into the VM.

    2. Switch to root:

    $ sudo su -
    3. As root, generate an SSH key.

      # ssh-keygen
    4. Update the secondary VM's metadata with information about the SSH key of the primary VM.

      # gcloud compute instances add-metadata secondary-host-name \
      --metadata "ssh-keys=$(whoami):$(cat ~/.ssh/id_rsa.pub)" --zone secondary-zone
    5. Authorize the primary VM to itself:

      # cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  2. On the secondary host VM:

    1. SSH into the VM.

    2. Switch to root:

    $ sudo su -
    3. As root, generate an SSH key.

      # ssh-keygen
    4. Update the primary VM's metadata with information about the SSH key of the secondary VM.

      # gcloud compute instances add-metadata primary-host-name \
      --metadata "ssh-keys=$(whoami):$(cat ~/.ssh/id_rsa.pub)" --zone primary-zone
    5. Authorize the secondary VM to itself:

      # cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    6. Confirm that the SSH keys are set up properly by opening an SSH connection from the secondary system to the primary system.

      # ssh primary-host-name
  3. On the primary host VM as root, confirm the connection by opening an SSH connection to the secondary host VM:

    # ssh secondary-host-name

Create an SAP HANA database user for monitoring the cluster state

  1. On the primary host, as sidadm, sign into the SAP HANA database interactive terminal:

    > hdbsql -u system -p "system-password" -i inst_num
  2. In the interactive terminal, create the slehasync database user:

    => CREATE USER slehasync PASSWORD "hdb-user-password";
    => GRANT DATA ADMIN TO slehasync;
    => ALTER USER slehasync DISABLE PASSWORD LIFETIME;
  3. As sidadm, define the SLEHALOC user key in the SAP HANA secure user store (hdbuserstore):

    > PATH="$PATH:/usr/sap/SID/HDBinst_num/exe"
    > hdbuserstore SET SLEHALOC localhost:3inst_num15 slehasync hdb-user-password
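
  4. Optionally, as sidadm, confirm that the key was stored. The hdbuserstore LIST command displays the environment and user name for the key, but not the password:

     > hdbuserstore LIST SLEHALOC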

Back up the databases

Create backups of your databases to initiate database logging for SAP HANA system replication and create a recovery point.

If you have multiple tenant databases in an MDC configuration, back up each tenant database.

The Deployment Manager template uses /hanabackup/data/SID as the default backup directory.

To create backups of new SAP HANA databases:

  1. On the primary host, switch to sidadm. Depending on your OS image, the command might be different.

    sudo -i -u sidadm
  2. Create database backups:

    • For an SAP HANA single-database-container system:

      > hdbsql -t -u system -p system-password -i inst_num \
        "backup data using file ('full')"

      The following example shows a successful response from a new SAP HANA system:

      0 rows affected (overall time 18.416058 sec; server time 18.414209 sec)
    • For an SAP HANA multi-database-container system (MDC), create a backup of the system database as well as any tenant databases:

      > hdbsql -t -d SYSTEMDB -u system -p system-password -i inst_num \
        "backup data using file ('full')"
      > hdbsql -t -d SID -u system -p system-password -i inst_num \
        "backup data using file ('full')"

    The following example shows a successful response from a new SAP HANA system:

    0 rows affected (overall time 16.590498 sec; server time 16.588806 sec)
  3. Confirm that the logging mode is set to normal:

    > hdbsql -u system -p system-password -i inst_num \
      "select value from "SYS"."M_INIFILE_CONTENTS" where key='log_mode'"

    You should see:

    VALUE
    "normal"

Enable SAP HANA system replication

As a part of enabling SAP HANA system replication, you need to copy the data and key files for the SAP HANA secure stores on the file system (SSFS) from the primary host to the secondary host. The method that this procedure uses to copy the files is just one possible method that you can use.

  1. On the primary host as sidadm, enable system replication:

    > hdbnsutil -sr_enable --name=primary-host-name
  2. On the secondary host:

    1. As sidadm, stop SAP HANA:

      > HDB stop
    2. As root, archive the existing SSFS data and key files:

      # cd /usr/sap/SID/SYS/global/security/rsecssfs/
      # mv data/SSFS_SID.DAT data/SSFS_SID.DAT-ARC
      # mv key/SSFS_SID.KEY key/SSFS_SID.KEY-ARC
    3. Copy the data file from the primary host:

      # scp -o StrictHostKeyChecking=no \
      primary-host-name:/usr/sap/SID/SYS/global/security/rsecssfs/data/SSFS_SID.DAT \
      /usr/sap/SID/SYS/global/security/rsecssfs/data/SSFS_SID.DAT
    4. Copy the key file from the primary host:

      # scp -o StrictHostKeyChecking=no \
      primary-host-name:/usr/sap/SID/SYS/global/security/rsecssfs/key/SSFS_SID.KEY \
      /usr/sap/SID/SYS/global/security/rsecssfs/key/SSFS_SID.KEY
    5. Update ownership of the files:

      # chown sidadm:sapsys /usr/sap/SID/SYS/global/security/rsecssfs/data/SSFS_SID.DAT
      # chown sidadm:sapsys /usr/sap/SID/SYS/global/security/rsecssfs/key/SSFS_SID.KEY
    6. Update permissions for the files:

      # chmod 644 /usr/sap/SID/SYS/global/security/rsecssfs/data/SSFS_SID.DAT
      # chmod 640 /usr/sap/SID/SYS/global/security/rsecssfs/key/SSFS_SID.KEY
    7. As sidadm, register the secondary SAP HANA system with SAP HANA system replication:

      > hdbnsutil -sr_register --remoteHost=primary-host-name --remoteInstance=inst_num \
      --replicationMode=syncmem --operationMode=logreplay --name=secondary-host-name
    8. As sidadm, start SAP HANA:

      > HDB start

Validating system replication

On the primary host as sidadm, confirm that SAP HANA system replication is active by running the following python script:

$ python $DIR_INSTANCE/exe/python_support/systemReplicationStatus.py

If replication is set up properly, among other indicators, the following values are displayed for the xsengine, nameserver, and indexserver services:

  • The Secondary Active Status is YES
  • The Replication Status is ACTIVE

Also, the overall system replication status shows ACTIVE.

Configure the Cloud Load Balancing failover support

The Internal TCP/UDP Load Balancing service with failover support routes traffic to the active host in an SAP HANA cluster based on a health check service. This offers protection in an Active/Passive configuration and can be extended to support an Active/Active (Read-enabled secondary) configuration.

Reserve an IP address for the virtual IP

The virtual IP (VIP) address, which is sometimes referred to as a floating IP address, follows the active SAP HANA system. The load balancer routes traffic that is sent to the VIP to the VM that is currently hosting the active SAP HANA system.

  1. Open Cloud Shell:

    Go to Cloud Shell

  2. Reserve an IP address for the virtual IP. This is the IP address that applications use to access SAP HANA. If you omit the --addresses flag, an IP address in the specified subnet is chosen for you:

    $ gcloud compute addresses create vip-name \
      --region cluster-region --subnet cluster-subnet \
      --addresses vip-address

    For more information about reserving a static IP, see Reserving a static internal IP address.

  3. Confirm IP address reservation:

    $ gcloud compute addresses describe vip-name \
      --region cluster-region

    You should see output similar to the following example:

    address: 10.0.0.19
    addressType: INTERNAL
    creationTimestamp: '2020-05-20T14:19:03.109-07:00'
    description: ''
    id: '8961491304398200872'
    kind: compute#address
    name: vip-for-hana-ha
    networkTier: PREMIUM
    purpose: GCE_ENDPOINT
    region: https://www.googleapis.com/compute/v1/projects/example-project-123456/regions/us-central1
    selfLink: https://www.googleapis.com/compute/v1/projects/example-project-123456/regions/us-central1/addresses/vip-for-hana-ha
    status: RESERVED
    subnetwork: https://www.googleapis.com/compute/v1/projects/example-project-123456/regions/us-central1/subnetworks/example-subnet-us-central1

Create instance groups for your host VMs

  1. In Cloud Shell, create two unmanaged instance groups and assign the primary host VM to one and the secondary host VM to the other:

    $ gcloud compute instance-groups unmanaged create primary-ig-name \
      --zone=primary-zone
    $ gcloud compute instance-groups unmanaged add-instances primary-ig-name \
      --zone=primary-zone \
      --instances=primary-host-name
    $ gcloud compute instance-groups unmanaged create secondary-ig-name \
      --zone=secondary-zone
    $ gcloud compute instance-groups unmanaged add-instances secondary-ig-name \
      --zone=secondary-zone \
      --instances=secondary-host-name
    
  2. Confirm the creation of the instance groups:

    $ gcloud compute instance-groups unmanaged list

    You should see output similar to the following example:

    NAME          ZONE           NETWORK          NETWORK_PROJECT        MANAGED  INSTANCES
    hana-ha-ig-1  us-central1-a  example-network  example-project-123456 No       1
    hana-ha-ig-2  us-central1-c  example-network  example-project-123456 No       1

Create a Compute Engine health check

  1. In Cloud Shell, create the health check. For the port used by the health check, choose a port that is in the private range, 49152-65535, to avoid clashing with other services. The check-interval and timeout values are slightly longer than the defaults so as to increase failover tolerance during Compute Engine live migration events. You can adjust the values, if necessary:

    $ gcloud compute health-checks create tcp health-check-name --port=healthcheck-port-num \
      --proxy-header=NONE --check-interval=10 --timeout=10 --unhealthy-threshold=2 \
      --healthy-threshold=2
  2. Confirm the creation of the health check:

    $ gcloud compute health-checks describe health-check-name

    You should see output similar to the following example:

    checkIntervalSec: 10
    creationTimestamp: '2020-05-20T21:03:06.924-07:00'
    healthyThreshold: 2
    id: '4963070308818371477'
    kind: compute#healthCheck
    name: hana-health-check
    selfLink: https://www.googleapis.com/compute/v1/projects/example-project-123456/global/healthChecks/hana-health-check
    tcpHealthCheck:
     port: 60000
     portSpecification: USE_FIXED_PORT
     proxyHeader: NONE
    timeoutSec: 10
    type: TCP
    unhealthyThreshold: 2

Create a firewall rule for the health checks

Define a firewall rule for a port in the private range that allows access to your host VMs from the IP ranges that are used by Compute Engine health checks, 35.191.0.0/16 and 130.211.0.0/22. For more information, see Creating firewall rules for health checks.

  1. If you don't already have one, create a firewall rule to allow the health checks:

    $ gcloud compute firewall-rules create rule-name \
      --network network-name \
      --action ALLOW \
      --direction INGRESS \
      --source-ranges 35.191.0.0/16,130.211.0.0/22 \
      --target-tags network-tags \
      --rules tcp:hlth-chk-port-num

    For example:

    gcloud compute firewall-rules create fw-allow-health-checks \
    --network example-network \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 35.191.0.0/16,130.211.0.0/22 \
    --target-tags cluster-ntwk-tag \
    --rules tcp:60000

Configure the load balancer and failover group

  1. Create the load balancer backend service:

    $ gcloud compute backend-services create backend-service-name \
      --load-balancing-scheme internal \
      --health-checks health-check-name \
      --no-connection-drain-on-failover \
      --drop-traffic-if-unhealthy \
      --failover-ratio 1.0 \
      --region cluster-region \
      --global-health-checks
  2. Add the primary instance group to the backend service:

    $ gcloud compute backend-services add-backend backend-service-name \
      --instance-group primary-ig-name \
      --instance-group-zone primary-zone \
      --region cluster-region
  3. Add the secondary, failover instance group to the backend service:

    $ gcloud compute backend-services add-backend backend-service-name \
      --instance-group secondary-ig-name \
      --instance-group-zone secondary-zone \
      --failover \
      --region cluster-region
  4. Create a forwarding rule. For the IP address, specify the IP address that you reserved for the VIP:

    $ gcloud compute forwarding-rules create rule-name \
      --load-balancing-scheme internal \
      --address vip-address \
      --subnet cluster-subnet \
      --region cluster-region \
      --backend-service backend-service-name \
      --ports ALL

Test the load balancer configuration

Even though your backend instance groups won't register as healthy until later, you can test the load balancer configuration by setting up a listener to respond to the health checks. After setting up a listener, if the load balancer is configured correctly, the status of the backend instance groups changes to healthy.

The following sections present different methods that you can use to test the configuration.

Testing the load balancer with the socat utility

You can use the socat utility to temporarily listen on the health check port. You need to install the socat utility anyway, because you use it later when you configure cluster resources.

  1. On both host VMs as root, install the socat utility:

    # zypper install -y socat

  2. Start a socat process to listen for 60 seconds on the health check port:

    # timeout 60s socat - TCP-LISTEN:hlth-chk-port-num,fork

  3. In Cloud Shell, after waiting a few seconds for the health check to detect the listener, check the health of your backend instance groups:

    $ gcloud compute backend-services get-health backend-service-name \
      --region cluster-region

    You should see output similar to the following:

    ---
    backend: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-a/instanceGroups/hana-ha-ig-1
    status:
     healthStatus:
     - healthState: HEALTHY
       instance: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-a/instances/hana-ha-vm-1
       ipAddress: 10.0.0.35
       port: 80
     kind: compute#backendServiceGroupHealth
    ---
    backend: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-c/instanceGroups/hana-ha-ig-2
    status:
     healthStatus:
     - healthState: HEALTHY
       instance: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-c/instances/hana-ha-vm-2
       ipAddress: 10.0.0.34
       port: 80
     kind: compute#backendServiceGroupHealth

Testing the load balancer using port 22

If port 22 is open for SSH connections on your host VMs, you can temporarily edit the health checker to use port 22, which has a listener that can respond to the health checker.

To temporarily use port 22, follow these steps:

  1. Click your health check in the console:

    Go to Health checks page

  2. Click Edit.

  3. In the Port field, change the port number to 22.

  4. Click Save and wait a minute or two.

  5. In Cloud Shell, check the health of your backend instance groups:

    $ gcloud compute backend-services get-health backend-service-name \
      --region cluster-region

    You should see output similar to the following:

    ---
    backend: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-a/instanceGroups/hana-ha-ig-1
    status:
     healthStatus:
     - healthState: HEALTHY
       instance: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-a/instances/hana-ha-vm-1
       ipAddress: 10.0.0.35
       port: 80
     kind: compute#backendServiceGroupHealth
    ---
    backend: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-c/instanceGroups/hana-ha-ig-2
    status:
     healthStatus:
     - healthState: HEALTHY
       instance: https://www.googleapis.com/compute/v1/projects/example-project-123456/zones/us-central1-c/instances/hana-ha-vm-2
       ipAddress: 10.0.0.34
       port: 80
     kind: compute#backendServiceGroupHealth
  6. When you are done, change the health check port number back to the original port number.
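
If you prefer to make this temporary change from the command line, you can update the health check port with gcloud and then restore it when you finish. The following sketch assumes the placeholder names used earlier in this guide:

$ gcloud compute health-checks update tcp health-check-name --port=22

$ gcloud compute health-checks update tcp health-check-name --port=healthcheck-port-num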

Set up Pacemaker

The following procedure configures the SUSE implementation of a Pacemaker cluster on Compute Engine VMs for SAP HANA.

For more information about configuring high-availability clusters on SLES, see the SUSE Linux Enterprise High Availability Extension documentation for your version of SLES.

Download the resource agent scripts

As root on both the primary and secondary hosts, download the required resource agent scripts:

# mkdir -p /usr/lib64/stonith/plugins/external
# curl https://storage.googleapis.com/sapdeploy/pacemaker-gcp/gcpstonith -o /usr/lib64/stonith/plugins/external/gcpstonith
# chmod +x /usr/lib64/stonith/plugins/external/gcpstonith

Create the Corosync configuration files

  1. Create a Corosync configuration file on the primary host:

    1. Create the following file:

      /etc/corosync/corosync.conf
    2. In the corosync.conf file on the primary host, add the following configuration, replacing the placeholder variable text with your values:

      totem {
       version: 2
       secauth: off
       crypto_hash: sha1
       crypto_cipher: aes256
       cluster_name: hacluster
       clear_node_high_bit: yes
       token: 20000
       token_retransmits_before_loss_const: 10
       join: 60
       consensus: 24000
       max_messages:  20
       transport: udpu
       interface {
         ringnumber: 0
         bindnetaddr: static-ip-of-hdb-on-this-host
         mcastport: 5405
         ttl: 1
       }
      }
      logging {
       fileline:  off
       to_stderr: no
       to_logfile: no
       logfile: /var/log/cluster/corosync.log
       to_syslog: yes
       debug: off
       timestamp: on
       logger_subsys {
         subsys: QUORUM
         debug: off
       }
      }
      nodelist {
       node {
         ring0_addr: this-host-name
         nodeid: 1
       }
       node {
         ring0_addr: other-host-name
         nodeid: 2
       }
      }
      quorum {
       provider: corosync_votequorum
       expected_votes: 2
       two_node: 1
      }
  2. Create a Corosync configuration file on the secondary host by repeating the same steps that you used for the primary host. Except for the static IP address of the HDB in the bindnetaddr property and the order of the host names in the nodelist, the configuration file property values are the same on each host.

Initialize the cluster

  1. On the primary host as root:

    1. Initialize the cluster:

      # corosync-keygen
      # ha-cluster-init -y csync2
    2. Start Pacemaker on the primary host:

      # systemctl enable pacemaker
      # systemctl start pacemaker
  2. On the secondary host as root:

    1. Join the secondary host to the cluster that is initialized on the primary host:

      # ha-cluster-join -y -c primary-host-name csync2
    2. Start Pacemaker on the secondary host:

      # systemctl enable pacemaker
      # systemctl start pacemaker
  3. On either host as root, confirm that the cluster shows both nodes:

    # crm_mon -s

    You should see output similar to the following:

    CLUSTER OK: 2 nodes online, 0 resources configured

Configure the cluster

To configure the cluster, you define general cluster properties and the cluster primitive resources.

Enable maintenance mode

  1. On either host as root, put the cluster in maintenance mode:

    # crm configure property maintenance-mode="true"

Configure the general cluster properties

  1. On the primary host:

    1. Set the general cluster properties:

      # crm configure property no-quorum-policy="stop"
      # crm configure property startup-fencing="true"
      # crm configure property stonith-timeout="300s"
      # crm configure property stonith-enabled="true"
      # crm configure rsc_defaults resource-stickiness="1000"
      # crm configure rsc_defaults migration-threshold="5000"
      # crm configure op_defaults timeout="600"
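
    2. Optionally, review the values that you set by displaying the cluster configuration:

      # crm configure show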

Create the STONITH primitive resources

  1. On the primary host as root:

    1. Add STONITH resources:

      # crm configure primitive STONITH-"primary-host-name" stonith:external/gcpstonith \
          op monitor interval="300s" timeout="60s" on-fail="restart" \
          op start interval="0" timeout="60s" on-fail="restart" \
          params instance_name="primary-host-name" gcloud_path="/usr/bin/gcloud" logging="yes"
      # crm configure primitive STONITH-"secondary-host-name" stonith:external/gcpstonith \
          op monitor interval="300s" timeout="60s" on-fail="restart" \
          op start interval="0" timeout="60s" on-fail="restart" \
          params instance_name="secondary-host-name" gcloud_path="/usr/bin/gcloud" logging="yes"
      # crm configure location LOC_STONITH_"primary-host-name" \
          STONITH-"primary-host-name" -inf: "primary-host-name"
      # crm configure location LOC_STONITH_"secondary-host-name" \
          STONITH-"secondary-host-name" -inf: "secondary-host-name"
    2. To configure the VIP address in the operating system, create a local cluster IP resource for the VIP address that you reserved earlier:

      # crm configure primitive rsc_vip_int-primary IPaddr2 \
          params ip=vip-address cidr_netmask=32 nic="eth0" op monitor interval=10s

Create a primitive resource for the helper health-check service

The load balancer uses a listener on the health-check port of each host to determine where the primary instance of the SAP HANA cluster is running.

To manage the listeners in the cluster, you create a resource for the listener.

These instructions use the socat utility as the listener.

  1. On both hosts as root, install the socat utility:

    # zypper in -y socat
  2. On the primary host:

    1. Create a resource for the helper health-check service:

      # crm configure primitive rsc_healthcheck-primary anything \
          params binfile="/usr/bin/socat" \
          cmdline_options="-U TCP-LISTEN:healthcheck-port-num,backlog=10,fork,reuseaddr /dev/null" \
          op monitor timeout=20s interval=10 \
          op_params depth=0

Group the VIP and the helper health-check service resources

Group the VIP and helper health-check service resources:

# crm configure group g-primary rsc_vip_int-primary rsc_healthcheck-primary

Create the SAPHanaTopology primitive resource

You define the SAPHanaTopology primitive resource in a temporary configuration file, which you then upload to Corosync.

On the primary host as root:

  1. Create a temporary configuration file for the SAPHanaTopology configuration parameters:

    # vi /tmp/cluster.tmp
  2. Copy and paste the SAPHanaTopology resource definitions into the /tmp/cluster.tmp file:

    primitive rsc_SAPHanaTopology_SID_HDBinst_num ocf:suse:SAPHanaTopology \
     operations \$id="rsc_sap2_SID_HDBinst_num-operations" \
     op monitor interval="10" timeout="600" \
     op start interval="0" timeout="600" \
     op stop interval="0" timeout="300" \
     params SID="SID" InstanceNumber="inst_num"
    
    clone cln_SAPHanaTopology_SID_HDBinst_num rsc_SAPHanaTopology_SID_HDBinst_num \
     meta is-managed="true" clone-node-max="1" target-role="Started" interleave="true"
  3. Edit the /tmp/cluster.tmp file to replace the variable text with the SID and instance number for your SAP HANA system.

  4. On the primary as root, load the contents of the /tmp/cluster.tmp file into Corosync:

    # crm configure load update /tmp/cluster.tmp

Create the SAPHana primitive resource

You define the SAPHana primitive resource by using the same method that you used for the SAPHanaTopology resource: in a temporary configuration file, which you then upload to Corosync.

  1. Replace the temporary configuration file:

    # rm /tmp/cluster.tmp
    # vi /tmp/cluster.tmp
  2. Copy and paste the SAPHana resource definitions into the /tmp/cluster.tmp file:

    primitive rsc_SAPHana_SID_HDBinst_num ocf:suse:SAPHana \
     operations \$id="rsc_sap_SID_HDBinst_num-operations" \
     op start interval="0" timeout="3600" \
     op stop interval="0" timeout="3600" \
     op promote interval="0" timeout="3600" \
     op monitor interval="10" role="Master" timeout="700" \
     op monitor interval="11" role="Slave" timeout="700" \
     params SID="SID" InstanceNumber="inst_num" PREFER_SITE_TAKEOVER="true" \
     DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="true"
    
    ms msl_SAPHana_SID_HDBinst_num rsc_SAPHana_SID_HDBinst_num \
     meta is-managed="true" notify="true" clone-max="2" clone-node-max="1" \
     target-role="Started" interleave="true"
    
    colocation col_saphana_ip_SID_HDBinst_num 4000: g-primary:Started \
     msl_SAPHana_SID_HDBinst_num:Master
    order ord_SAPHana_SID_HDBinst_num Optional: cln_SAPHanaTopology_SID_HDBinst_num \
     msl_SAPHana_SID_HDBinst_num
  3. On the primary as root, load the contents of the /tmp/cluster.tmp file into Corosync:

    # crm configure load update /tmp/cluster.tmp

Confirm SAP HANA system replication is active

  1. On the primary host, as sidadm, sign into the SAP HANA database interactive terminal:

    > hdbsql -u system -p "system-password" -i inst_num
  2. In the interactive terminal, check replication status:

    => select distinct REPLICATION_STATUS from SYS.M_SERVICE_REPLICATION

    The REPLICATION_STATUS should be "ACTIVE".

Alternatively, you can check the replication status by running the following python script as sidadm:

> python $DIR_INSTANCE/exe/python_support/systemReplicationStatus.py

Activate the cluster

  1. On the primary host as root, take the cluster out of maintenance mode:

    # crm configure property maintenance-mode="false"

    If you receive a prompt that asks you to remove "maintenance", enter y.

  2. Wait 15 seconds and then on the primary host as root, check the status of the cluster:

    # crm status

    The following example shows the status of an active, properly configured cluster:

    Stack: corosync
    Current DC: hana-ha-vm-1 (version 2.0.1+20190417.13d370ca9-3.9.1-2.0.1+20190417.13d370ca9) - partition with quorum
    Last updated: Sun Jun  7 00:36:56 2020
    Last change: Sun Jun  7 00:36:53 2020 by root via crm_attribute on hana-ha-vm-1
    
    2 nodes configured
    8 resources configured
    
    Online: [ hana-ha-vm-1 hana-ha-vm-2 ]
    
    Full list of resources:
    
    STONITH-hana-ha-vm-1   (stonith:external/gcpstonith):  Started hana-ha-vm-2
    STONITH-hana-ha-vm-2   (stonith:external/gcpstonith):  Started hana-ha-vm-1
    Clone Set: cln_SAPHanaTopology_HA1_HDB22 [rsc_SAPHanaTopology_HA1_HDB22]
        Started: [ hana-ha-vm-1 hana-ha-vm-2 ]
    Resource Group: g-primary
        rsc_vip_int-primary        (ocf::heartbeat:IPaddr2):       Started hana-ha-vm-1
        rsc_healthcheck-primary        (ocf::heartbeat:anything):      Started hana-ha-vm-1
    Clone Set: msl_SAPHana_HA1_HDB22 [rsc_SAPHana_HA1_HDB22] (promotable)
        Masters: [ hana-ha-vm-1 ]
        Slaves: [ hana-ha-vm-2 ]

Test failover

Test your cluster by simulating a failure on the primary host. Use a test system or run the test on your production system before you release the system for use.

Back up the system before the test.

You can simulate a failure in a variety of ways, including:

  • HDB stop
  • HDB kill
  • shutdown -r (on the master node)
  • ip link set eth0 down
  • echo c > /proc/sysrq-trigger

These instructions use ip link set eth0 down to take the network interface offline, because it validates both failover and fencing.

  1. On the master host, as root, take the network interface offline:

    # ip link set eth0 down
  2. Follow the progress of the failover in Logging:

    Go to Logging


  3. Reconnect to either host using SSH and change to the root user.

  4. Enter crm status to confirm that the primary SAP HANA instance is now running on the VM that previously hosted the secondary instance. Automatic restart is enabled in the cluster, so the stopped host restarts and assumes the role of secondary host, as shown in the following example.

    Stack: corosync
    Current DC: hana-ha-vm-2 (version 2.0.1+20190417.13d370ca9-3.9.1-2.0.1+20190417.13d370ca9) - partition with quorum
    Last updated: Fri Jun 12 16:46:07 2020
    Last change: Fri Jun 12 16:46:07 2020 by root via crm_attribute on hana-ha-vm-2
    
    2 nodes configured
    8 resources configured
    
    Online: [ hana-ha-vm-1 hana-ha-vm-2 ]
    
    Full list of resources:
    
    STONITH-hana-ha-vm-1   (stonith:external/gcpstonith):  Started hana-ha-vm-2
    STONITH-hana-ha-vm-2   (stonith:external/gcpstonith):  Started hana-ha-vm-1
    Clone Set: cln_SAPHanaTopology_HA1_HDB22 [rsc_SAPHanaTopology_HA1_HDB22]
        Started: [ hana-ha-vm-1 hana-ha-vm-2 ]
    Resource Group: g-primary
        rsc_vip_int-primary        (ocf::heartbeat:IPaddr2):       Started hana-ha-vm-2
        rsc_healthcheck-primary        (ocf::heartbeat:anything):      Started hana-ha-vm-2
    Clone Set: msl_SAPHana_HA1_HDB22 [rsc_SAPHana_HA1_HDB22] (promotable)
        Masters: [ hana-ha-vm-2 ]
        Slaves: [ hana-ha-vm-1 ]

Troubleshooting

You can find the logs in the following directories:

  • /var/log/pacemaker.log
  • /var/log/cluster/corosync.log
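
When you are diagnosing a cluster problem, the following commands, run as root on either node, give a quick snapshot of the cluster state and recent messages. These are suggested starting points rather than an exhaustive list:

# crm_mon -1 -r
# journalctl -u pacemaker --since "1 hour ago"
# grep -i error /var/log/cluster/corosync.log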

Completing the NAT gateway installation

If you created a NAT gateway, complete the following steps.

  1. Add tags to all instances:

    export NETWORK_NAME="[YOUR_NETWORK_NAME]"
    export TAG="[YOUR_TAG_TEXT]"
    gcloud compute instances add-tags "[PRIMARY_VM_NAME]" --tags="$TAG" --zone=[PRIMARY_VM_ZONE]
    gcloud compute instances add-tags "[SECONDARY_VM_NAME]" --tags="$TAG" --zone=[SECONDARY_VM_ZONE]
  2. Delete external IPs:

    gcloud compute instances delete-access-config "[PRIMARY_VM_NAME]" --access-config-name "external-nat" --zone=[PRIMARY_VM_ZONE]
    gcloud compute instances delete-access-config "[SECONDARY_VM_NAME]" --access-config-name "external-nat" --zone=[SECONDARY_VM_ZONE]

Connecting to SAP HANA

If the host VMs don't have an external IP address for SAP HANA, you can connect to the SAP HANA instances only through the bastion instance by using SSH or through the Windows Server instance by using SAP HANA Studio.

  • To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instance(s) by using an SSH client of your choice.

  • To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After connection, manually install SAP HANA Studio and access your SAP HANA database.
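
For example, assuming a bastion host named bastion-vm-name in zone bastion-zone (placeholders for your own values), you might first connect to the bastion host:

$ gcloud compute ssh bastion-vm-name --zone bastion-zone

and then open an SSH connection from the bastion host to a cluster node by its internal host name:

$ ssh hana-ha-vm-1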

Performing post-deployment tasks

Before using your SAP HANA instance, we recommend that you configure and back up your new SAP HANA database.

For more information:

What's next

See the following resources for more information:

  • Automated SAP HANA System Replication in Scale-Up in pacemaker cluster
  • SAP on Google Cloud: High availability
  • SAP HANA high availability and disaster recovery planning guide
  • For more information about VM administration and monitoring, see the SAP HANA Operations Guide