Automated SAP HANA HA deployment on SLES with load-balancer VIP implementation

This guide shows you how to automate the deployment of SAP HANA in a performance-optimized SUSE Linux Enterprise Server (SLES) high-availability (HA) cluster that uses Internal TCP/UDP Load Balancing to manage the virtual IP (VIP) address.

The guide uses Cloud Deployment Manager to deploy two Compute Engine virtual machines (VMs), two SAP HANA scale-up systems, a virtual IP address (VIP) with a load balancer implementation, and an OS-based HA cluster, all according to the best practices from Google Cloud, SAP, and the OS vendor.

One of the SAP HANA systems functions as the primary, active system and the other functions as a secondary, standby system. You deploy both SAP HANA systems within the same region, ideally in different zones.

Overview of a high-availability Linux cluster for a single-node SAP HANA scale-up system

The deployed cluster includes the following functions and features:

  • The Pacemaker high-availability cluster resource manager.
  • A Google Cloud fencing mechanism.
  • A virtual IP (VIP) that uses a Layer 4 TCP internal load balancer implementation, including:
    • A reservation of the IP address that you select for the VIP
    • Two Compute Engine instance groups
    • A TCP internal load balancer
    • A Compute Engine health check
  • The SUSE high-availability pattern.
  • The SUSE SAPHanaSR resource agent package.
  • Synchronous system replication.
  • Memory preload.
  • Automatic restart of the failed instance as the new secondary instance.

To deploy an SAP HANA system without a Linux high-availability cluster or standby hosts, use the SAP HANA Deployment Guide.

This guide is intended for advanced SAP HANA users who are familiar with Linux high-availability configurations for SAP HANA.

Prerequisites

Before you create the SAP HANA high availability cluster, make sure that the following prerequisites are met:

  • You or your organization has a Google Cloud account and you have created a project for the SAP HANA deployment. For information about creating Google Cloud accounts and projects, see Setting up your Google account in the SAP HANA Deployment Guide.
  • The SAP HANA installation media is stored in a Cloud Storage bucket that is available in your deployment project and region. For information about how to upload SAP HANA installation media to a Cloud Storage bucket, see Downloading SAP HANA in the SAP HANA Deployment Guide.
  • If you are using VPC internal DNS, the value of the VmDnsSetting variable in your project metadata must be either GlobalOnly or ZonalPreferred to enable the resolution of the node names across zones. The default setting of VmDnsSetting is ZonalOnly.
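
For example, to set this value as project metadata, you can run a command similar to the following. This is a sketch with a placeholder project ID; choose GlobalOnly or ZonalPreferred as appropriate for your environment:

    gcloud compute project-info add-metadata --project [YOUR_PROJECT_ID] \
            --metadata VmDnsSetting=GlobalOnly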

Creating a network

For security purposes, create a new network. You can control who has access by adding firewall rules or by using another access control method.

If your project has a default VPC network, don't use it. Instead, create your own VPC network so that the only firewall rules in effect are those that you create explicitly.

During deployment, VM instances typically require access to the internet to download Google's monitoring agent. If you are using one of the SAP-certified Linux images that are available from Google Cloud, the VM instance also requires access to the internet in order to register the license and to access OS vendor repositories. A configuration with a NAT gateway and with VM network tags supports this access, even if the target VMs do not have external IPs.

To set up networking:

  1. Go to Cloud Shell.

  2. To create a new network in the custom subnetworks mode, run:

    gcloud compute networks create [YOUR_NETWORK_NAME] --subnet-mode custom

    where [YOUR_NETWORK_NAME] is the name of the new network. The network name can contain only lowercase characters, digits, and the dash character (-).

    Specify --subnet-mode custom to avoid using the default auto mode, which automatically creates a subnet in each Compute Engine region. For more information, see Subnet creation mode.

  3. Create a subnetwork, and specify the region and IP range:

    gcloud compute networks subnets create [YOUR_SUBNETWORK_NAME] \
            --network [YOUR_NETWORK_NAME] --region [YOUR_REGION] --range [YOUR_RANGE]

    where:

    • [YOUR_SUBNETWORK_NAME] is the new subnetwork.
    • [YOUR_NETWORK_NAME] is the name of the network you created in the previous step.
    • [YOUR_REGION] is the region where you want the subnetwork.
    • [YOUR_RANGE] is the IP address range, specified in CIDR format, such as 10.1.0.0/24. If you plan to add more than one subnetwork, assign non-overlapping CIDR IP ranges for each subnetwork in the network. Note that each subnetwork and its internal IP ranges are mapped to a single region.
  4. Optionally, repeat the previous step and add additional subnetworks.
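
To confirm that the network and subnetworks were created as expected, you can optionally list them. This check is not part of the deployment itself:

    gcloud compute networks subnets list --network [YOUR_NETWORK_NAME]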

Setting up a NAT gateway

If you intend to create one or more VMs that will not have public IP addresses, you must create a NAT gateway so that your VMs can access the Internet to download Google's monitoring agent.

If you intend to assign an external public IP address to your VM, you can skip this step.

To create a NAT gateway:

  1. Create a VM to act as the NAT gateway in the subnet you just created:

    gcloud compute instances create [YOUR_VM_NAME] --can-ip-forward \
            --zone [YOUR_ZONE]  --image-family [YOUR_IMAGE_FAMILY] \
            --image-project [YOUR_IMAGE_PROJECT] \
            --machine-type=[YOUR_MACHINE_TYPE] --subnet [YOUR_SUBNETWORK_NAME] \
            --metadata startup-script="sysctl -w net.ipv4.ip_forward=1; iptables \
            -t nat -A POSTROUTING -o eth0 -j MASQUERADE" --tags [YOUR_VM_TAG]

    where:

    • [YOUR_VM_NAME] is the name of the VM that you are creating to use for the NAT gateway.
    • [YOUR_ZONE] is the zone where you want the VM.
    • [YOUR_IMAGE_FAMILY] and [YOUR_IMAGE_PROJECT] specify the image you want to use for the NAT gateway.
    • [YOUR_MACHINE_TYPE] is any supported machine type. If you expect high network traffic, choose a machine type that has at least eight virtual CPUs.
    • [YOUR_SUBNETWORK_NAME] is the name of the subnetwork where you want the VM.
    • [YOUR_VM_TAG] is a tag that is applied to the VM you are creating. If you use this VM as a bastion host, this tag is used to apply the related firewall rule only to this VM.
  2. Create a route that is tagged so that traffic passes through the NAT VM instead of the default Internet gateway:

    gcloud compute routes create [YOUR_ROUTE_NAME] \
            --network [YOUR_NETWORK_NAME] --destination-range 0.0.0.0/0 \
            --next-hop-instance [YOUR_VM_NAME] --next-hop-instance-zone \
            [YOUR_ZONE] --tags [YOUR_TAG_NAME] --priority 800

    where:

    • [YOUR_ROUTE_NAME] is the name of the route you are creating.
    • [YOUR_NETWORK_NAME] is the network you created.
    • [YOUR_VM_NAME] is the VM you are using for your NAT gateway.
    • [YOUR_ZONE] is the zone where the VM is located.
    • [YOUR_TAG_NAME] is the tag on the route that directs traffic through the NAT VM.
  3. If you also want to use the NAT gateway VM as a bastion host, run the following command. This command creates a firewall rule that allows inbound SSH access to this instance from the Internet:

    gcloud compute firewall-rules create allow-ssh --network [YOUR_NETWORK_NAME] --allow tcp:22 --source-ranges 0.0.0.0/0 --target-tags "[YOUR_VM_TAG]"

    where:

    • [YOUR_NETWORK_NAME] is the network you created.
    • [YOUR_VM_TAG] is the tag you specified when you created the NAT gateway VM. This tag is used so this firewall rule applies only to the VM that hosts the NAT gateway, and not to all VMs in the network.
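
To verify that the route and the firewall rule are in place before you continue, you can optionally list them, filtered to the network you created:

    gcloud compute routes list --filter="network:[YOUR_NETWORK_NAME]"
    gcloud compute firewall-rules list --filter="network:[YOUR_NETWORK_NAME]"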

Adding firewall rules

By default, an implied firewall rule blocks incoming connections from outside your Virtual Private Cloud (VPC) network. To allow incoming connections, set up a firewall rule for your VM. After an incoming connection is established with a VM, traffic is permitted in both directions over that connection.

HA clusters for SAP HANA require at least two firewall rules, one that allows the Compute Engine health check to check the health of the cluster nodes and another that allows the cluster nodes to communicate with each other.

If you are not using a shared VPC network, you need to create the firewall rule for the communication between the nodes, but not for the health checks. The Deployment Manager template creates the firewall rule for the health checks, which you can modify after deployment is complete, if needed.

If you are using a shared VPC network, a network administrator needs to create both firewall rules in the host project.

You can also create a firewall rule to allow external access to specified ports, or to restrict access between VMs on the same network. If the default VPC network type is used, some additional default rules also apply, such as the default-allow-internal rule, which allows connectivity between VMs on the same network on all ports.

Depending on the IT policy that is applicable to your environment, you might need to isolate or otherwise restrict connectivity to your database host, which you can do by creating firewall rules.

Depending on your scenario, you can create firewall rules to allow access for:

  • The default SAP ports that are listed in TCP/IP of All SAP Products.
  • Connections from your computer or your corporate network environment to your Compute Engine VM instance. If you are unsure of what IP address to use, talk to your company's network administrator.
  • SSH connections to your VM instance, including SSH from the browser.
  • Connection to your VM by using a third-party tool in Linux. Create a rule to allow access for the tool through your firewall.

To create a firewall rule:

Console

  1. In the Cloud Console, go to the Firewall rules page.

  2. At the top of the page, click Create firewall rule.

    • In the Network field, select the network where your VM is located.
    • In the Targets field, specify the resources on Google Cloud that this rule applies to. For example, specify All instances in the network. Or, to limit the rule to specific instances, enter tags in Specified target tags.
    • In the Source filter field, select one of the following:
      • IP ranges to allow incoming traffic from specific IP addresses. Specify the range of IP addresses in the Source IP ranges field.
      • Subnets to allow incoming traffic from a particular subnetwork. Specify the subnetwork name in the following Subnets field. You can use this option to allow access between the VMs in a 3-tier or scale-out configuration.
    • In the Protocols and ports section, select Specified protocols and ports and enter tcp:[PORT_NUMBER].
  3. Click Create to create your firewall rule.

gcloud

Create a firewall rule by using the following command:

$ gcloud compute firewall-rules create firewall-name \
    --direction=INGRESS --priority=1000 \
    --network=network-name --action=ALLOW --rules=protocol:port \
    --source-ranges=ip-range --target-tags=network-tags
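
For example, rules similar to the following two would cover the requirements that are described earlier in this section: one rule that admits the Compute Engine health check probes, which originate from the documented ranges 35.191.0.0/16 and 130.211.0.0/22, and one rule that allows the cluster nodes to communicate with each other. The rule names, network tag, and subnetwork range are illustrative:

    $ gcloud compute firewall-rules create allow-health-checks \
        --direction=INGRESS --priority=1000 --network=[YOUR_NETWORK_NAME] \
        --action=ALLOW --rules=tcp \
        --source-ranges=35.191.0.0/16,130.211.0.0/22 \
        --target-tags=[YOUR_NETWORK_TAG]

    $ gcloud compute firewall-rules create allow-cluster-internal \
        --direction=INGRESS --priority=1000 --network=[YOUR_NETWORK_NAME] \
        --action=ALLOW --rules=all \
        --source-ranges=[YOUR_SUBNETWORK_RANGE]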

Creating a high-availability Linux cluster with SAP HANA installed

The following instructions use the Cloud Deployment Manager to create a SLES Linux cluster with two SAP HANA systems: a primary single-host SAP HANA system on one VM instance and a standby SAP HANA system on another VM instance in the same Compute Engine region. The SAP HANA systems use synchronous system replication and the standby system preloads the replicated data.

You define configuration options for the SAP HANA high-availability cluster in a Deployment Manager configuration file template.

The following instructions use the Cloud Shell, but are generally applicable to the Cloud SDK.

  1. Confirm that your current quotas for resources such as persistent disks and CPUs are sufficient for the SAP HANA systems you are about to install. You can check your quotas on the IAM & admin Quotas page in the Cloud Console. If your quotas are insufficient, deployment fails. For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.

  2. Open the Cloud Shell or, if you installed the Cloud SDK on your local workstation, open a terminal.

  3. Download the template.yaml configuration file template for the SAP HANA high-availability cluster to your working directory by entering the following command in the Cloud Shell or Cloud SDK:

    $ wget https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_ha_ilb/template.yaml
  4. Optionally, rename the template.yaml file to identify the configuration it defines.

  5. Open the template.yaml file in the Cloud Shell code editor or, if you are using the Cloud SDK, the text editor of your choice.

    To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.

  6. In the template.yaml file, update the property values by replacing the brackets and their contents with the values for your installation. The properties are described in the following table.

    To create the VM instances without installing SAP HANA, delete or comment out all of the lines that begin with sap_hana_.

    Property Data type Description
    primaryInstanceName String The name of the VM instance for the primary SAP HANA system. Specify the name in lowercase letters, numbers, or hyphens.
    secondaryInstanceName String The name of the VM instance for the secondary SAP HANA system. Specify the name in lowercase letters, numbers, or hyphens.
    primaryZone String The zone in which the primary SAP HANA system is deployed. The primary and secondary zones must be in the same region.
    secondaryZone String The zone in which the secondary SAP HANA system will be deployed. The primary and secondary zones must be in the same region.
    instanceType String The type of Compute Engine virtual machine that you need to run SAP HANA on. If you need a custom VM type, specify a predefined VM type with a number of vCPUs that is closest to the number you need while still being larger. After deployment is complete, modify the number of vCPUs and the amount of memory.
    network String The name of the network in which to create the load balancer that manages the VIP.

    If you are using a shared VPC network, you must add the ID of the host project as a parent directory of the network name. For example, host-project-id/network-name.

    subnetwork String The name of the subnetwork that you are using for your HA cluster.

    If you are using a shared VPC network, you must add the ID of the host project as a parent directory of the subnetwork name. For example, host-project-id/subnetwork-name.

    linuxImage String The name of the Linux operating system image or image family that you are using with SAP HANA. To specify an image family, add the prefix family/ to the family name. For example, family/sles-12-sp3-sap. To specify an individual image, specify only the image name. For the list of available image families, see the Images page in the Cloud console.
    linuxImageProject String The Google Cloud project that contains the image you are going to use. This project might be your own project or a Google Cloud image project. For SLES, specify suse-sap-cloud. For a list of GCP image projects, see the Images page in the Compute Engine documentation.
    sap_hana_deployment_bucket String The name of the GCP storage bucket in your project that contains the SAP HANA installation files that you uploaded in a previous step.
    sap_hana_sid String The SAP HANA system ID. The ID must consist of three alphanumeric characters and begin with a letter. All letters must be uppercase.
    sap_hana_instance_number Integer The instance number, 0 to 99, of the SAP HANA system. The default is 0.
    sap_hana_sidadm_password String A temporary password for the operating system administrator to be used during deployment. When deployment is complete, change the password. Passwords must be at least eight characters and include at least one uppercase letter, one lowercase letter, and one number.
    sap_hana_system_password String A temporary password for the database superuser. When deployment is complete, change the password. Passwords must be at least 8 characters and include at least one uppercase letter, one lowercase letter, and one number.
    sap_hana_scaleout_nodes Integer The number of additional SAP HANA worker hosts that you need. Specify 0, because scale-out hosts are not currently supported in high-availability configurations.
    sap_vip String The IP address that you are going to use for your VIP. The IP address must be within the range of IP addresses that are assigned to your subnetwork. The Deployment Manager template reserves this IP address for you. In an active HA cluster, this IP address is always assigned to the active SAP HANA instance.
    primaryInstanceGroupName String Defines the name of the unmanaged instance group for the primary node. If you omit the parameter, the default name is ig-primaryInstanceName.
    secondaryInstanceGroupName String Defines the name of the unmanaged instance group for the secondary node. If you omit this parameter, the default name is ig-secondaryInstanceName.
    loadBalancerName String Defines the name of the TCP internal load balancer.

    The following example shows a completed configuration file, which directs the Deployment Manager to deploy a high-availability cluster with the primary SAP HANA system installed in the us-central1-a zone and the secondary SAP HANA system installed in the us-central1-c zone. Both systems will be installed on n1-highmem-96 VMs that are running the SLES 15 SP1 operating system.

    The template file references the sap_hana_ha_ilb folder in the Google Cloud sapdeploy bucket, which contains the scripts for an SAP HANA HA deployment that uses a load balancer VIP implementation.

    imports:
    - path: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_ha_ilb/sap_hana_ha.py

    resources:
    - name: sap_hana_ha
      type: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_ha_ilb/sap_hana_ha.py
      properties:
        primaryInstanceName: example-ha-vm1
        secondaryInstanceName: example-ha-vm2
        primaryZone: us-central1-a
        secondaryZone: us-central1-c
        instanceType: n1-highmem-96
        network: example-network
        subnetwork: example-subnet-us-central1
        linuxImage: family/sles-15-sp1-sap
        linuxImageProject: suse-sap-cloud
        # SAP HANA parameters
        sap_hana_deployment_bucket: hana2-sp4-rev46
        sap_hana_sid: HA1
        sap_hana_instance_number: 00
        sap_hana_sidadm_password: TempPa55word
        sap_hana_system_password: TempPa55word
        # VIP parameters
        sap_vip: 10.0.0.100
        primaryInstanceGroupName: ig-example-ha-vm1
        secondaryInstanceGroupName: ig-example-ha-vm2
        loadBalancerName: lb-ha1
        # Additional optional properties
        networkTag: hana-ha-ntwk-tag
        serviceAccount: sap-deploy-example@example-project-123456.iam.gserviceaccount.com

  7. Create the instances:

    $ gcloud deployment-manager deployments create deployment-name --config template-name.yaml

    The above command invokes the Deployment Manager, which sets up the Google Cloud infrastructure and then hands control over to a script that installs and configures SAP HANA and the HA cluster.

    While Deployment Manager has control, status messages are written to the Cloud Shell. After the script is invoked, status messages are written to Logging and are viewable in the Cloud Console, as described in Checking the Logging logs.

    Time to completion can vary, but the entire process usually takes less than 30 minutes.
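
To check on the deployment from the command line while it runs, or to review its resources afterward, you can optionally query Deployment Manager directly, where deployment-name matches the name you used in the create command:

    $ gcloud deployment-manager deployments describe deployment-name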

Verifying the deployment of your HANA HA system

Verifying an SAP HANA HA cluster involves several different procedures:

  • Checking Logging
  • Checking the configuration of the VM and the SAP HANA installation
  • Checking the load balancer and the health of the instance groups
  • Checking the SAP HANA system using SAP HANA Studio
  • Performing a failover test

Checking the logs

  1. Open Logging to check for errors and monitor the progress of the installation.

  2. On the Logs Explorer page, select Global as your resource in the Query builder and run the query.

    • If "--- Finished" is displayed for both VM instances, proceed to the next step.

      Logging display.

    • If the logs contain errors:

      1. Correct the errors.

        For quota errors, go to the IAM & admin Quotas page in the Cloud Console and increase any quotas that do not meet the SAP HANA requirements that are listed in the SAP HANA Planning Guide.

      2. On the Deployment Manager Deployments page in the Cloud Console, delete the deployment to clean up all of the infrastructure from the failed installation.

      3. Rerun the Deployment Manager.
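
As an alternative to the Logs Explorer, you can run a similar query from the Cloud Shell. The following sketch assumes that the completion message appears in the textPayload field of the log entries:

    $ gcloud logging read 'resource.type="global" AND textPayload:"Finished"' \
        --limit=50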

Checking the configuration of the VM and the SAP HANA installation

  1. After the SAP HANA system deploys without errors, connect to each VM by using SSH. From the Compute Engine VM instances page, you can click the SSH button for each VM instance, or you can use your preferred SSH method.

    SSH button on Compute Engine VM instances page.

  2. Change to the root user.

    sudo su -
  3. At the command prompt, enter df -h. Ensure that you see output that includes the /hana directories, such as /hana/data.

    example-ha-vm1:~ # df -h
    Filesystem                        Size  Used Avail Use% Mounted on
    devtmpfs                          308G  8.0K  308G   1% /dev
    tmpfs                             461G   54M  461G   1% /dev/shm
    tmpfs                             308G  250M  307G   1% /run
    tmpfs                             308G     0  308G   0% /sys/fs/cgroup
    /dev/sda3                          30G  3.2G   27G  11% /
    /dev/sda2                          20M  3.6M   17M  18% /boot/efi
    /dev/mapper/vg_hana-shared        614G   50G  565G   9% /hana/shared
    /dev/mapper/vg_hana-sap            32G  278M   32G   1% /usr/sap
    /dev/mapper/vg_hana-data          951G  7.9G  943G   1% /hana/data
    /dev/mapper/vg_hana-log           307G  5.5G  302G   2% /hana/log
    /dev/mapper/vg_hanabackup-backup  1.3T  409G  840G  33% /hanabackup
  4. Check the status of the new cluster:

    crm status
    

    You should see results similar to the following example, in which both VM instances are started and example-ha-vm1 is the active primary instance:

    example-ha-vm1:~ # crm status
    Stack: corosync
    Current DC: example-ha-vm1 (version 2.0.1+20190417.13d370ca9-3.15.1-2.0.1+20190417.13d370ca9) - partition with quorum
    Last updated: Thu Nov 19 16:34:14 2020
    Last change: Thu Nov 19 16:34:04 2020 by root via crm_attribute on example-ha-vm1
    
    2 nodes configured
    8 resources configured
    
    Online: [ example-ha-vm1 example-ha-vm2 ]
    
    Full list of resources:
    
     STONITH-example-ha-vm1 (stonith:external/gcpstonith):  Started example-ha-vm2
     STONITH-example-ha-vm2 (stonith:external/gcpstonith):  Started example-ha-vm1
     Resource Group: g-primary
         rsc_vip_int-primary        (ocf::heartbeat:IPaddr2):       Started example-ha-vm1
         rsc_vip_hc-primary (ocf::heartbeat:anything):      Started example-ha-vm1
     Clone Set: cln_SAPHanaTopology_HA1_HDB00 [rsc_SAPHanaTopology_HA1_HDB00]
         Started: [ example-ha-vm1 example-ha-vm2 ]
     Clone Set: msl_SAPHana_HA1_HDB00 [rsc_SAPHana_HA1_HDB00] (promotable)
         Masters: [ example-ha-vm1 ]
         Slaves: [ example-ha-vm2 ]
    
  5. Change to the SAP admin user by replacing [SID] in the following command with the sap_hana_sid value that you specified in the configuration file template. Use lowercase letters for [SID] in this command, because the operating system user name is lowercase.

    su - [SID]adm
    
  6. Ensure that the SAP HANA services, such as hdbnameserver, hdbindexserver, and others, are running on the instance by entering the following command:

    HDB info
    
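
While you are still logged in as the SAP admin user, you can also confirm the system replication state from the command line. On the primary host, the following SAP utility reports the replication mode and site details; the exact output varies by SAP HANA release:

    hdbnsutil -sr_state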

Checking the load balancer and the health of the instance groups

To confirm that the load balancer and health check were set up correctly, check the load balancer and instance groups in the Cloud Console.

  1. Open the Load balancing page in the Cloud Console.

  2. In the list of load balancers, confirm that a load balancer was created for your HA cluster.

  3. On the Load balancer details page, in the Backend section, check the Healthy column under Instance group. Confirm that one of the instance groups shows "1/1" and the other shows "0/1". After a failover, the "1/1" healthy indicator moves to the new active instance group.

    Shows the load balancer details page with the active primary instance group indicated by "1/1" and the inactive secondary indicated by "0/1".
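
You can also perform this health check from the command line by querying the backend service that the deployment created for the load balancer. The names below are placeholders; you can find the actual backend service name by listing the backend services in your region:

    $ gcloud compute backend-services list --regions=[YOUR_REGION]
    $ gcloud compute backend-services get-health [BACKEND_SERVICE_NAME] \
        --region=[YOUR_REGION]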

Checking the SAP HANA system using SAP HANA Studio

  1. Connect to the HANA system by using SAP HANA Studio. When defining the connection, specify the following values:

    • On the Specify System panel, specify the floating IP address as the Host Name.
    • On the Connection Properties panel, for database user authentication, specify the database superuser name and the password that you specified for the sap_hana_system_password property in the template.yaml file.

    For information from SAP about installing SAP HANA Studio, see SAP HANA Studio Installation and Update Guide.

  2. After SAP HANA Studio is connected to your HANA HA system, display the system overview by double-clicking the system name in the navigation pane on the left side of the window.

    Screenshot of the navigation pane in SAP HANA Studio

  3. Under General Information on the Overview tab, confirm that:

    • The Operational Status shows "All services started".
    • The System Replication Status shows "All services are active and in sync".

    Screenshot of the Overview tab in SAP HANA Studio

  4. Confirm the replication mode by clicking the System Replication Status link under General Information. Synchronous replication is indicated by SYNCMEM in the REPLICATION_MODE column on the System Replication tab.

    Screenshot of the System Replication Status tab in SAP HANA Studio

If any of the validation steps show that the installation failed:

  1. Resolve the errors.
  2. Delete the deployment from the Deployments page.
  3. Recreate the instances, as described in the last step of the previous section.

Performing a failover test

To perform a failover test:

  1. Connect to the primary VM by using SSH. You can connect from the Compute Engine VM instances page by clicking the SSH button for each VM instance, or you can use your preferred SSH method.

  2. At the command prompt, enter the following command:

    sudo ip link set eth0 down

    The ip link set eth0 down command triggers a failover by severing communications with the primary host.

  3. Reconnect to either host using SSH and change to the root user.

  4. Enter crm status to confirm that the primary host is now active on the VM that used to contain the secondary host. Automatic restart is enabled in the cluster, so the stopped host will restart and assume the role of secondary host, as shown in the following screenshot.

    Screenshot of the crm status output showing that the primary and secondary hosts switched VMs

  5. On the Load balancer details page in the console, confirm that the new active primary instance shows "1/1" in the Healthy column. Refresh the page, if necessary.

    For example:

    Shows the load balancer details page with the "ig-example-ha-vm2" instance showing "1/1" in the Healthy column.

  6. In SAP HANA Studio, confirm that you are still connected to the system by double-clicking the system entry in the navigation pane to refresh the system information.

  7. Click the System Replication Status link to confirm that the primary and secondary hosts have switched roles and are active.

    Screenshot of the System Replication Status tab in SAP HANA Studio
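
If you want to watch the cluster transition in real time while you run the test, you can leave a continuously refreshing monitor running on the surviving node before you trigger the failover. crm_mon is part of the Pacemaker tooling that the deployment installs:

    crm_mon -r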

Completing the NAT gateway installation

If you created a NAT gateway, complete the following steps.

  1. Add tags to all instances, including the worker hosts:

    export NETWORK_NAME="[YOUR_NETWORK_NAME]"
    export TAG="[YOUR_TAG_TEXT]"
    gcloud compute instances add-tags "[PRIMARY_VM_NAME]" --tags="$TAG" --zone=[PRIMARY_VM_ZONE]
    gcloud compute instances add-tags "[SECONDARY_VM_NAME]" --tags="$TAG" --zone=[SECONDARY_VM_ZONE]
  2. Delete external IPs:

    gcloud compute instances delete-access-config "[PRIMARY_VM_NAME]" --access-config-name "external-nat" --zone=[PRIMARY_VM_ZONE]
    gcloud compute instances delete-access-config "[SECONDARY_VM_NAME]" --access-config-name "external-nat" --zone=[SECONDARY_VM_ZONE]

Setting up the Google monitoring agent for SAP HANA

Optionally, you can set up the Google monitoring agent for SAP HANA, which collects metrics from SAP HANA and sends them to Monitoring. Monitoring lets you create dashboards for your metrics, set up custom alerts based on metric thresholds, and more.

To monitor an HA cluster, install the monitoring agent on a VM instance outside of the cluster. Specify the floating IP address of the cluster as the IP address of the host instance to monitor.

For more information on setting up and configuring the Google monitoring agent for SAP HANA, see the SAP HANA Monitoring Agent User Guide.

Connecting to SAP HANA

Note that because these instructions don't use an external IP address for SAP HANA, you can connect to the SAP HANA instances only through the bastion instance by using SSH or through the Windows Server instance by using SAP HANA Studio.

  • To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instance(s) by using an SSH client of your choice.

  • To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After connection, manually install SAP HANA Studio and access your SAP HANA database.
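
For example, a minimal two-hop SSH connection through the bastion host might look like the following. The names and zone are placeholders; the second hop uses the internal name of the HANA node, which resolves over the VPC internal DNS:

    # First hop: from your workstation to the bastion host
    gcloud compute ssh [BASTION_VM_NAME] --zone=[BASTION_ZONE]

    # Second hop: from the bastion host to a HANA node over the internal network
    ssh [PRIMARY_VM_NAME]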

Performing post-deployment tasks

Before using your SAP HANA instance, we recommend that you perform the following post-deployment steps. For more information, see SAP HANA Installation and Update Guide.

  1. Change the temporary passwords for the SAP HANA system administrator and database superuser, as shown in the sketch after this list.

  2. Update the SAP HANA software with the latest patches.

  3. Install any additional components such as Application Function Libraries (AFL) or Smart Data Access (SDA).

  4. Configure and back up your new SAP HANA database. For more information, see the SAP HANA operations guide.
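
For the password changes in step 1, a minimal sketch follows. It uses the example SID, instance number, and temporary passwords from the configuration file shown earlier; substitute your own values and choose new passwords that meet your security policy:

    # As root on the primary host, change the OS password for the SIDadm user
    passwd ha1adm

    # As the SIDadm user, change the database SYSTEM user password with hdbsql
    hdbsql -n localhost -i 00 -u SYSTEM -p "TempPa55word" \
        "ALTER USER SYSTEM PASSWORD Example1NewPassword"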

What's next