This guide shows you how to automate the deployment of SAP HANA in a Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES) high-availability (HA) cluster that uses an internal passthrough Network Load Balancer to manage the virtual IP (VIP) address.
The guide uses Terraform to deploy two Compute Engine virtual machines (VMs), two SAP HANA scale-up systems, a virtual IP address (VIP) with an internal passthrough Network Load Balancer implementation, and an OS-based HA cluster, all according to the best practices from Google Cloud, SAP, and the OS vendor.
One of the SAP HANA systems functions as the primary, active system and the other functions as a secondary, standby system. You deploy both SAP HANA systems within the same region, ideally in different zones.
The deployed cluster includes the following functions and features:
- The Pacemaker high-availability cluster resource manager.
- A Google Cloud fencing mechanism.
- A virtual IP (VIP) that uses a level 4 TCP internal load balancer implementation, including:
  - A reservation of the IP address that you select for the VIP.
  - Two Compute Engine instance groups.
  - A TCP internal load balancer.
  - A Compute Engine health check.
- In RHEL HA clusters:
  - The Red Hat high-availability pattern.
  - The Red Hat resource agent and fencing packages.
- In SLES HA clusters:
  - The SUSE high-availability pattern.
  - The SUSE SAPHanaSR resource agent package.
- Synchronous system replication.
- Memory preload.
- Automatic restart of the failed instance as the new secondary instance.
If you need a scale-out system with standby hosts for SAP HANA automatic host failover, then see the Terraform: SAP HANA scale-out system with host auto-failover deployment guide instead.
To deploy an SAP HANA system without a Linux high-availability cluster or standby hosts, use the Terraform: SAP HANA Deployment Guide.
This guide is intended for advanced SAP HANA users who are familiar with Linux high-availability configurations for SAP HANA.
Prerequisites
Before you create the SAP HANA high availability cluster, make sure that the following prerequisites are met:
- You have read the SAP HANA planning guide and the SAP HANA high-availability planning guide.
- You or your organization has a Google Cloud account and you have created a project for the SAP HANA deployment. For information about creating Google Cloud accounts and projects, see Setting up your Google account.
- If you require your SAP workload to run in compliance with data residency, access control, support personnel, or regulatory requirements, then you must create the required Assured Workloads folder. For more information, see Compliance and sovereign controls for SAP on Google Cloud.
- The SAP HANA installation media is stored in a Cloud Storage bucket that is available in your deployment project and region. For information about how to upload SAP HANA installation media to a Cloud Storage bucket, see Creating a Cloud Storage bucket for the SAP HANA installation files.
- If OS Login is enabled in your project metadata, you need to disable OS Login temporarily until your deployment is complete. For deployment purposes, this procedure configures SSH keys in instance metadata. When OS Login is enabled, metadata-based SSH key configurations are disabled, and this deployment fails. After deployment is complete, you can enable OS Login again.
- If you are using VPC internal DNS, the value of the vmDnsSetting variable in your project metadata must be either GlobalOnly or ZonalPreferred to enable the resolution of the node names across zones. The default setting of vmDnsSetting is ZonalOnly. Example gcloud commands for the OS Login and internal DNS settings follow this list.
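The following commands are one hedged way to make both project-metadata changes from the command line. PROJECT_ID is a placeholder, and the VmDnsSetting key capitalization follows the Compute Engine internal DNS documentation; verify both against your own project before running them, and re-enable OS Login after the deployment completes.

# Temporarily disable OS Login at the project level for the deployment.
gcloud compute project-info add-metadata --project=PROJECT_ID \
    --metadata=enable-oslogin=FALSE

# Allow cross-zone resolution of VM names over VPC internal DNS.
gcloud compute project-info add-metadata --project=PROJECT_ID \
    --metadata=VmDnsSetting=GlobalOnly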
Creating a network
For security purposes, create a new network. You can control who has access by adding firewall rules or by using another access control method.
If your project has a default VPC network, don't use it. Instead, create your own VPC network so that the only firewall rules in effect are those that you create explicitly.
During deployment, VM instances typically require access to the internet to download Google Cloud's Agent for SAP. If you are using one of the SAP-certified Linux images that are available from Google Cloud, the VM instance also requires access to the internet in order to register the license and to access OS vendor repositories. A configuration with a NAT gateway and with VM network tags supports this access, even if the target VMs do not have external IPs.
To create a VPC network for your project, complete the following steps:
- Create a custom mode network. For more information, see Creating a custom mode network.
- Create a subnetwork, and specify the region and IP range. For more information, see Adding subnets. Example gcloud commands for both steps follow this list.
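As a sketch of the two steps above, using the example names example-network and example-subnet-us-central1 that appear later in this guide (the region and IP range are assumptions that you would adapt):

# Create a custom mode VPC network.
gcloud compute networks create example-network --subnet-mode=custom

# Add a subnetwork with the region and IP range for the SAP HANA nodes.
gcloud compute networks subnets create example-subnet-us-central1 \
    --network=example-network --region=us-central1 --range=10.1.0.0/24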
Setting up a NAT gateway
If you need to create one or more VMs without public IP addresses, you need to use network address translation (NAT) to enable the VMs to access the internet. Use Cloud NAT, a Google Cloud distributed, software-defined managed service that lets VMs send outbound packets to the internet and receive any corresponding established inbound response packets. Alternatively, you can set up a separate VM as a NAT gateway.
To create a Cloud NAT instance for your project, see Using Cloud NAT.
After you configure Cloud NAT for your project, your VM instances can securely access the internet without a public IP address.
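For reference, a minimal Cloud NAT setup for the example network consists of a Cloud Router and a NAT configuration, similar to the following sketch; the names and region are assumptions:

# Create a Cloud Router in the region of your subnetwork.
gcloud compute routers create example-router \
    --network=example-network --region=us-central1

# Add a Cloud NAT configuration that serves all subnets in the region.
gcloud compute routers nats create example-nat \
    --router=example-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges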
Adding firewall rules
By default, an implied firewall rule blocks incoming connections from outside your Virtual Private Cloud (VPC) network. To allow incoming connections, set up a firewall rule for your VM. After an incoming connection is established with a VM, traffic is permitted in both directions over that connection.
HA clusters for SAP HANA require at least two firewall rules: one that allows the Compute Engine health check to check the health of the cluster nodes, and another that allows the cluster nodes to communicate with each other.

If you are not using a shared VPC network, you need to create the firewall rule for the communication between the nodes, but not for the health checks. The Terraform configuration file creates the firewall rule for the health checks, which you can modify after deployment is complete, if needed.
If you are using a shared VPC network, a network administrator needs to create both firewall rules in the host project.
You can also create a firewall rule to allow external access to specified ports, or to restrict access between VMs on the same network. If the default VPC network type is used, some additional default rules also apply, such as the default-allow-internal rule, which allows connectivity between VMs on the same network on all ports.
Depending on the IT policy that is applicable to your environment, you might need to isolate or otherwise restrict connectivity to your database host, which you can do by creating firewall rules.
Depending on your scenario, you can create firewall rules to allow access for:
- The default SAP ports that are listed in TCP/IP of All SAP Products.
- Connections from your computer or your corporate network environment to your Compute Engine VM instance. If you are unsure of what IP address to use, talk to your company's network administrator.
- SSH connections to your VM instance, including SSH-in-browser.
- Connection to your VM by using a third-party tool in Linux. Create a rule to allow access for the tool through your firewall.
To create the firewall rules for your project, see Creating firewall rules.
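As a hedged illustration of the two rules that this section describes, assuming the example network, the hana-ha-ntwk-tag network tag, and a node subnet of 10.1.0.0/24 (130.211.0.0/22 and 35.191.0.0/16 are the documented source ranges for Compute Engine health checks; the health check rule shown here is mainly relevant when a network administrator must create it in a shared VPC host project, because the Terraform configuration otherwise creates it for you):

# Allow Compute Engine health check probes to reach the cluster nodes.
gcloud compute firewall-rules create example-allow-health-checks \
    --network=example-network --direction=INGRESS \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=hana-ha-ntwk-tag --allow=tcp

# Allow the cluster nodes to communicate with each other on all ports.
gcloud compute firewall-rules create example-allow-cluster-nodes \
    --network=example-network --direction=INGRESS \
    --source-ranges=10.1.0.0/24 --allow=tcp,udp,icmp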
Creating a high-availability Linux cluster with SAP HANA installed
The following instructions use the Terraform configuration file to create a RHEL or SLES cluster with two SAP HANA systems, a primary single-host SAP HANA system on one VM instance and a standby SAP HANA system on another VM instance in the same Compute Engine region. The SAP HANA systems use synchronous system replication and the standby system preloads the replicated data.
You define configuration options for the SAP HANA high-availability cluster in a Terraform configuration file.
Confirm that your current quotas for resources such as persistent disks and CPUs are sufficient for the SAP HANA systems you are about to install. If your quotas are insufficient, then your deployment fails.
For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.
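If you want to check your current regional quota usage from the command line before you deploy, one option, assuming the us-central1 region used in the examples, is:

# List quota metrics, current usage, and limits for the target region.
gcloud compute regions describe us-central1 --flatten=quotas \
    --format="table(quotas.metric,quotas.usage,quotas.limit)"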
Open the Cloud Shell.
Download the sap_hana_ha.tf configuration file for the SAP HANA high-availability cluster to your working directory:

$ wget https://storage.googleapis.com/cloudsapdeploy/terraform/latest/terraform/sap_hana_ha/terraform/sap_hana_ha.tf

Open the sap_hana_ha.tf file in the Cloud Shell code editor.

To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.
In the sap_hana_ha.tf file, update the argument values by replacing the contents inside the double quotation marks with the values for your installation. The arguments are described in the following list, with the data type of each argument in parentheses.

- source (String): Specifies the location and version of the Terraform module to use during deployment. The sap_hana_ha.tf configuration file includes two instances of the source argument: one that is active and one that is included as a comment. The source argument that is active by default specifies latest as the module version. The second instance of the source argument, which by default is deactivated by a leading # character, specifies a timestamp that identifies a module version. If you need all of your deployments to use the same module version, then remove the leading # character from the source argument that specifies the version timestamp and add it to the source argument that specifies latest.
- project_id (String): Specify the ID of your Google Cloud project in which you are deploying this system. For example, my-project-x.
- machine_type (String): Specify the type of Compute Engine virtual machine (VM) on which you need to run your SAP system. If you need a custom VM type, then specify a predefined VM type with a number of vCPUs that is closest to the number you need while still being larger. After deployment is complete, modify the number of vCPUs and the amount of memory. For example, n1-highmem-32.
- network (String): Specify the name of the network in which you need to create the load balancer that manages the VIP. If you are using a shared VPC network, you must add the ID of the host project as a parent directory of the network name. For example, HOST_PROJECT_ID/NETWORK_NAME.
- subnetwork (String): Specify the name of the subnetwork that you created in a previous step. If you are deploying to a shared VPC, then specify this value as SHARED_VPC_PROJECT_ID/SUBNETWORK. For example, myproject/network1.
- linux_image (String): Specify the name of the Linux operating system image on which you want to deploy your SAP system. For example, rhel-9-2-sap-ha or sles-15-sp5-sap. For the list of available operating system images, see the Images page in the Google Cloud console.
- linux_image_project (String): Specify the Google Cloud project that contains the image that you have specified for the argument linux_image. This project might be your own project or a Google Cloud image project. For a Compute Engine image, specify either rhel-sap-cloud or suse-sap-cloud. To find the image project for your operating system, see Operating system details.
- primary_instance_name (String): Specify a name of the VM instance for the primary SAP HANA system. The name can contain lowercase letters, numbers, or hyphens.
- primary_zone (String): Specify a zone in which the primary SAP HANA system is deployed. The primary and secondary zones must be in the same region. For example, us-east1-c.
- secondary_instance_name (String): Specify a name of the VM instance for the secondary SAP HANA system. The name can contain lowercase letters, numbers, or hyphens.
- secondary_zone (String): Specify a zone in which the secondary SAP HANA system is deployed. The primary and secondary zones must be in the same region. For example, us-east1-b.
- sap_hana_deployment_bucket (String): To automatically install SAP HANA on the deployed VMs, specify the path of the Cloud Storage bucket that contains the SAP HANA installation files. Do not include gs:// in the path; include only the bucket name and the names of any folders. For example, my-bucket-name/my-folder. The Cloud Storage bucket must exist in the Google Cloud project that you specify for the project_id argument.
- sap_hana_sid (String): To automatically install SAP HANA on the deployed VMs, specify the SAP HANA system ID. The ID must consist of three alphanumeric characters and begin with a letter. All letters must be in uppercase. For example, ED1.
- sap_hana_instance_number (Integer): Optional. Specify the instance number, 0 to 99, of the SAP HANA system. The default is 0.
- sap_hana_sidadm_password (String): To automatically install SAP HANA on the deployed VMs, specify a temporary SIDadm password for the installation scripts to use during deployment. The password must contain at least 8 characters and include at least one uppercase letter, one lowercase letter, and a number. Instead of specifying the password as plain text, we recommend that you use a secret. For more information, see Password management.
- sap_hana_sidadm_password_secret (String): Optional. If you are using Secret Manager to store the SIDadm password, then specify the Name of the secret that corresponds to this password. In Secret Manager, make sure that the Secret value, which is the password, contains at least 8 characters and includes at least one uppercase letter, one lowercase letter, and a number. For more information, see Password management.
- sap_hana_system_password (String): To automatically install SAP HANA on the deployed VMs, specify a temporary database superuser password for the installation scripts to use during deployment. The password must contain at least 8 characters and include at least one uppercase letter, one lowercase letter, and a number. Instead of specifying the password as plain text, we recommend that you use a secret. For more information, see Password management.
- sap_hana_system_password_secret (String): Optional. If you are using Secret Manager to store the database superuser password, then specify the Name of the secret that corresponds to this password. In Secret Manager, make sure that the Secret value, which is the password, contains at least 8 characters and includes at least one uppercase letter, one lowercase letter, and a number. For more information, see Password management.
- sap_hana_double_volume_size (Boolean): Optional. To double the HANA volume size, specify true. This argument is useful when you want to deploy multiple SAP HANA instances or a disaster-recovery SAP HANA instance on the same VM. By default, the volume size is automatically calculated to be the minimum size required for the size of your VM, while still meeting the SAP certification and support requirements. The default value is false.
- sap_hana_backup_size (Integer): Optional. Specify the size of the /hanabackup volume in GB. If you don't specify this argument or set it to 0, then the installation script provisions the Compute Engine instance with a HANA backup volume of two times the total memory.
- sap_hana_sidadm_uid (Integer): Optional. Specify a value to override the default value of the SID_LCadm user ID. The default value is 900. You can change this to a different value for consistency within your SAP landscape.
- sap_hana_sapsys_gid (Integer): Optional. Overrides the default group ID for sapsys. The default value is 79.
- sap_vip (String): Specify the IP address that you are going to use for your VIP. The IP address must be within the range of IP addresses that are assigned to your subnetwork. The Terraform configuration file reserves this IP address for you.
- primary_instance_group_name (String): Optional. Specify the name of the unmanaged instance group for the primary node. The default name is ig-PRIMARY_INSTANCE_NAME.
- secondary_instance_group_name (String): Optional. Specify the name of the unmanaged instance group for the secondary node. The default name is ig-SECONDARY_INSTANCE_NAME.
- loadbalancer_name (String): Optional. Specify the name of the internal passthrough Network Load Balancer. The default name is lb-SAP_HANA_SID-ilb.
- network_tags (String): Optional. Specify one or more comma-separated network tags that you want to associate with your VM instances for firewall or routing purposes. A network tag for the ILB components is automatically added to the VM's network tags.
- nic_type (String): Optional. Specify the network interface to use with the VM instance. You can specify the value GVNIC or VIRTIO_NET. To use a Google Virtual NIC (gVNIC), you need to specify an OS image that supports gVNIC as the value for the linux_image argument. For the OS image list, see Operating system details. If you do not specify a value for this argument, then the network interface is automatically selected based on the machine type that you specify for the machine_type argument. This argument is available in sap_hana_ha module version 202302060649 or later.
- disk_type (String): Optional. Specify the default type of Persistent Disk or Hyperdisk volume that you want to deploy for the SAP data and log volumes in your deployment. For information about the default disk deployment performed by the Terraform configurations provided by Google Cloud, see Disk deployment by Terraform. The following are valid values for this argument: pd-ssd, pd-balanced, hyperdisk-extreme, hyperdisk-balanced, and pd-extreme. In SAP HANA scale-up deployments, a separate Balanced Persistent Disk is also deployed for the /hana/shared directory. You can override this default disk type and the associated default disk size and default IOPS using some advanced arguments. For more information, navigate to your working directory, then run the terraform init command, and then see the /.terraform/modules/sap_hana_ha/variables.tf file. Before you use these arguments in production, make sure to test them in a non-production environment.
- use_single_shared_data_log_disk (Boolean): Optional. The default value is false, which directs Terraform to deploy a separate persistent disk or Hyperdisk for each of the following SAP volumes: /hana/data, /hana/log, /hana/shared, and /usr/sap. To mount these SAP volumes on the same persistent disk or Hyperdisk, specify true.
- enable_data_striping (Boolean): Optional. This argument lets you deploy the /hana/data volume on two disks. The default value is false, which directs Terraform to deploy a single disk for hosting your /hana/data volume. This argument is available in sap_hana_ha module version 1.3.674800406 or later.
- include_backup_disk (Boolean): Optional. This argument is applicable to SAP HANA scale-up deployments. The default value is true, which directs Terraform to deploy a separate disk to host the /hanabackup directory. The disk type is determined by the backup_disk_type argument. The size of this disk is determined by the sap_hana_backup_size argument. If you set the value for include_backup_disk as false, then no disk is deployed for the /hanabackup directory.
- backup_disk_type (String): Optional. For scale-up deployments, specify the type of Persistent Disk or Hyperdisk that you want to deploy for the /hanabackup volume. For information about the default disk deployment performed by the Terraform configurations provided by Google Cloud, see Disk deployment by Terraform. The following are the valid values for this argument: pd-ssd, pd-balanced, pd-standard, hyperdisk-extreme, hyperdisk-balanced, and pd-extreme. This argument is available in sap_hana_ha module version 202307061058 or later.
- enable_fast_restart (Boolean): Optional. This argument determines whether or not the SAP HANA Fast Restart option is enabled for your deployment. The default value is true. Google Cloud strongly recommends enabling the SAP HANA Fast Restart option. This argument is available in sap_hana_ha module version 202309280828 or later.
- public_ip (Boolean): Optional. Determines whether or not a public IP address is added to your VM instance. The default value is true.
- service_account (String): Optional. Specify the email address of a user-managed service account to be used by the host VMs and by the programs that run on the host VMs. For example, svc-acct-name@project-id.iam.gserviceaccount.com. If you specify this argument without a value, or omit it, then the installation script uses the Compute Engine default service account. For more information, see Identity and access management for SAP programs on Google Cloud.
- sap_deployment_debug (Boolean): Optional. Only when Cloud Customer Care asks you to enable debugging for your deployment, specify true, which makes the deployment generate verbose deployment logs. The default value is false.
- primary_reservation_name (String): Optional. To use a specific Compute Engine VM reservation for provisioning the VM instance that hosts your HA cluster's primary SAP HANA instance, specify the name of the reservation. By default, the installation script selects any available Compute Engine reservation based on the following conditions. For a reservation to be usable, regardless of whether you specify a name or the installation script selects it automatically, the reservation must be set with the following:
  - The specificReservationRequired option is set to true or, in the Google Cloud console, the Select specific reservation option is selected.
  - Some Compute Engine machine types support CPU platforms that are not covered by the SAP certification of the machine type. If the target reservation is for any of the following machine types, then the reservation must specify the minimum CPU platforms as indicated:
    - n1-highmem-32: Intel Broadwell
    - n1-highmem-64: Intel Broadwell
    - n1-highmem-96: Intel Skylake
    - m1-megamem-96: Intel Skylake
  The minimum CPU platforms for all of the other machine types that are certified by SAP for use on Google Cloud conform to the SAP minimum CPU requirement.
- secondary_reservation_name (String): Optional. To use a specific Compute Engine VM reservation for provisioning the VM instance that hosts your HA cluster's secondary SAP HANA instance, specify the name of the reservation. By default, the installation script selects any available Compute Engine reservation based on the following conditions. For a reservation to be usable, regardless of whether you specify a name or the installation script selects it automatically, the reservation must be set with the following:
  - The specificReservationRequired option is set to true or, in the Google Cloud console, the Select specific reservation option is selected.
  - Some Compute Engine machine types support CPU platforms that are not covered by the SAP certification of the machine type. If the target reservation is for any of the following machine types, then the reservation must specify the minimum CPU platforms as indicated:
    - n1-highmem-32: Intel Broadwell
    - n1-highmem-64: Intel Broadwell
    - n1-highmem-96: Intel Skylake
    - m1-megamem-96: Intel Skylake
  The minimum CPU platforms for all of the other machine types that are certified by SAP for use on Google Cloud conform to the SAP minimum CPU requirement.
- primary_static_ip (String): Optional. Specify a valid static IP address for the primary VM instance in your high-availability cluster. If you don't specify one, then an IP address is automatically generated for your VM instance. For example, 128.10.10.10. This argument is available in sap_hana_ha module version 202306120959 or later.
- secondary_static_ip (String): Optional. Specify a valid static IP address for the secondary VM instance in your high-availability cluster. If you don't specify one, then an IP address is automatically generated for your VM instance. For example, 128.11.11.11. This argument is available in sap_hana_ha module version 202306120959 or later.
- can_ip_forward (Boolean): Specify whether sending and receiving of packets with non-matching source or destination IPs is allowed, which enables a VM to act like a router. The default value is true. If you only intend to use Google's internal load balancers to manage virtual IPs for the deployed VMs, then set the value to false. An internal load balancer is automatically deployed as part of the high availability templates.

The following examples show a completed configuration file that defines a high-availability cluster for SAP HANA. The cluster uses an internal passthrough Network Load Balancer to manage the VIP.
Terraform deploys the Google Cloud resources that are defined in the configuration file and then scripts take over to configure the operating system, install SAP HANA, configure replication, and configure the Linux HA cluster.
Click RHEL or SLES to see the example that is specific to your operating system. For clarity, comments in the configuration file are omitted in the examples.

RHEL
# ...
module "sap_hana_ha" {
  source = "https://storage.googleapis.com/cloudsapdeploy/terraform/latest/terraform/sap_hana_ha/sap_hana_ha_module.zip"
  #
  # By default, this source file uses the latest release of the terraform module
  # for SAP on Google Cloud. To fix your deployments to a specific release
  # of the module, comment out the source argument above and uncomment the source argument below.
  #
  # source = "https://storage.googleapis.com/cloudsapdeploy/terraform/YYYYMMDDHHMM/terraform/sap_hana_ha/sap_hana_ha_module.zip"
  #
  # ...
  #
  project_id                    = "example-project-123456"
  machine_type                  = "n2-highmem-32"
  network                       = "example-network"
  subnetwork                    = "example-subnet-us-central1"
  linux_image                   = "rhel-8-4-sap-ha"
  linux_image_project           = "rhel-sap-cloud"
  primary_instance_name         = "example-ha-vm1"
  primary_zone                  = "us-central1-a"
  secondary_instance_name       = "example-ha-vm2"
  secondary_zone                = "us-central1-c"
  # ...
  sap_hana_deployment_bucket    = "my-hana-bucket"
  sap_hana_sid                  = "HA1"
  sap_hana_instance_number      = 00
  sap_hana_sidadm_password      = "TempPa55word"
  sap_hana_system_password      = "TempPa55word"
  # ...
  sap_vip                       = "10.0.0.100"
  primary_instance_group_name   = "ig-example-ha-vm1"
  secondary_instance_group_name = "ig-example-ha-vm2"
  loadbalancer_name             = "lb-ha1"
  # ...
  network_tags                  = "hana-ha-ntwk-tag"
  service_account               = "sap-deploy-example@example-project-123456.iam.gserviceaccount.com"
  primary_static_ip             = "10.0.0.1"
  secondary_static_ip           = "10.0.0.2"
  enable_fast_restart           = true
  # ...
}
SLES
# ...
module "sap_hana_ha" {
  source = "https://storage.googleapis.com/cloudsapdeploy/terraform/latest/terraform/sap_hana_ha/sap_hana_ha_module.zip"
  #
  # By default, this source file uses the latest release of the terraform module
  # for SAP on Google Cloud. To fix your deployments to a specific release
  # of the module, comment out the source argument above and uncomment the source argument below.
  #
  # source = "https://storage.googleapis.com/cloudsapdeploy/terraform/YYYYMMDDHHMM/terraform/sap_hana_ha/sap_hana_ha_module.zip"
  #
  # ...
  #
  project_id                    = "example-project-123456"
  machine_type                  = "n2-highmem-32"
  network                       = "example-network"
  subnetwork                    = "example-subnet-us-central1"
  linux_image                   = "sles-15-sp3-sap"
  linux_image_project           = "suse-sap-cloud"
  primary_instance_name         = "example-ha-vm1"
  primary_zone                  = "us-central1-a"
  secondary_instance_name       = "example-ha-vm2"
  secondary_zone                = "us-central1-c"
  # ...
  sap_hana_deployment_bucket    = "my-hana-bucket"
  sap_hana_sid                  = "HA1"
  sap_hana_instance_number      = 00
  sap_hana_sidadm_password      = "TempPa55word"
  sap_hana_system_password      = "TempPa55word"
  # ...
  sap_vip                       = "10.0.0.100"
  primary_instance_group_name   = "ig-example-ha-vm1"
  secondary_instance_group_name = "ig-example-ha-vm2"
  loadbalancer_name             = "lb-ha1"
  # ...
  network_tags                  = "hana-ha-ntwk-tag"
  service_account               = "sap-deploy-example@example-project-123456.iam.gserviceaccount.com"
  primary_static_ip             = "10.0.0.1"
  secondary_static_ip           = "10.0.0.2"
  enable_fast_restart           = true
  # ...
}
Initialize your current working directory and download the Terraform provider plugin and module files for Google Cloud:
terraform init
The terraform init command prepares your working directory for other Terraform commands.

To force a refresh of the provider plugin and configuration files in your working directory, specify the --upgrade flag. If the --upgrade flag is omitted and you don't make any changes in your working directory, Terraform uses the locally cached copies, even if latest is specified in the source URL.

terraform init --upgrade
Optionally, create the Terraform execution plan:
terraform plan
The terraform plan command shows the changes required by your current configuration. If you skip this step, the terraform apply command automatically creates a new plan and prompts you to approve it.

Apply the execution plan:

terraform apply

When you are prompted to approve the actions, enter yes.

The terraform apply command sets up the Google Cloud infrastructure and then hands control over to a script that configures the HA cluster and installs SAP HANA according to the arguments defined in the Terraform configuration file.

While Terraform has control, status messages are written to the Cloud Shell. After the scripts are invoked, status messages are written to Logging and are viewable in the Google Cloud console, as described in Check the logs.
Verifying the deployment of your HANA HA system
Verifying an SAP HANA HA cluster involves several different procedures:
- Checking Logging
- Checking the configuration of the VM and the SAP HANA installation
- Checking the cluster configuration
- Checking the load balancer and the health of the instance groups
- Checking the SAP HANA system using SAP HANA Studio
- Performing a failover test
Check the logs
In the Google Cloud console, open Cloud Logging to monitor installation progress and check for errors.
Filter the logs:
Logs Explorer
In the Logs Explorer page, go to the Query pane.
From the Resource drop-down menu, select Global, and then click Add.
If you don't see the Global option, then in the query editor, enter the following query:
resource.type="global" "Deployment"
Click Run query.
Legacy Logs Viewer
- In the Legacy Logs Viewer page, from the basic selector menu, select Global as your logging resource.
Analyze the filtered logs:
- If "--- Finished" is displayed, then the deployment processing is complete and you can proceed to the next step.
- If you see a quota error:
  - On the IAM & Admin Quotas page, increase any of your quotas that do not meet the SAP HANA requirements that are listed in the SAP HANA planning guide.
  - Open Cloud Shell.
  - Go to your working directory and delete the deployment to clean up the VMs and persistent disks from the failed installation:
    terraform destroy
  - When you are prompted to approve the action, enter yes.
  - Rerun your deployment.
Check the configuration of the VM and the SAP HANA installation
After the SAP HANA system deploys without errors, connect to each VM by using SSH. From the Compute Engine VM instances page, you can click the SSH button for each VM instance, or you can use your preferred SSH method.
Change to the root user.
sudo su -
At the command prompt, enter df -h. Ensure that you see output that includes the /hana directories, such as /hana/data.

RHEL
[root@example-ha-vm1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          126G     0  126G   0% /dev
tmpfs                             126G   54M  126G   1% /dev/shm
tmpfs                             126G   25M  126G   1% /run
tmpfs                             126G     0  126G   0% /sys/fs/cgroup
/dev/sda2                          30G  5.4G   25G  18% /
/dev/sda1                         200M  6.9M  193M   4% /boot/efi
/dev/mapper/vg_hana-shared        251G   52G  200G  21% /hana/shared
/dev/mapper/vg_hana-sap            32G  477M   32G   2% /usr/sap
/dev/mapper/vg_hana-data          426G  9.8G  417G   3% /hana/data
/dev/mapper/vg_hana-log           125G  7.0G  118G   6% /hana/log
/dev/mapper/vg_hanabackup-backup  512G  9.3G  503G   2% /hanabackup
tmpfs                              26G     0   26G   0% /run/user/900
tmpfs                              26G     0   26G   0% /run/user/899
tmpfs                              26G     0   26G   0% /run/user/1003
SLES
example-ha-vm1:~ # df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          126G  8.0K  126G   1% /dev
tmpfs                             189G   54M  189G   1% /dev/shm
tmpfs                             126G   34M  126G   1% /run
tmpfs                             126G     0  126G   0% /sys/fs/cgroup
/dev/sda3                          30G  5.4G   25G  18% /
/dev/sda2                          20M  2.9M   18M  15% /boot/efi
/dev/mapper/vg_hana-shared        251G   50G  202G  20% /hana/shared
/dev/mapper/vg_hana-sap            32G  281M   32G   1% /usr/sap
/dev/mapper/vg_hana-data          426G  8.0G  418G   2% /hana/data
/dev/mapper/vg_hana-log           125G  4.3G  121G   4% /hana/log
/dev/mapper/vg_hanabackup-backup  512G  6.4G  506G   2% /hanabackup
tmpfs                              26G     0   26G   0% /run/user/473
tmpfs                              26G     0   26G   0% /run/user/900
tmpfs                              26G     0   26G   0% /run/user/0
tmpfs                              26G     0   26G   0% /run/user/1003
Check the status of the new cluster by entering the status command that is specific to your operating system:
RHEL
pcs status
SLES
crm status
You should see results similar to the following example, in which both VM instances are started and example-ha-vm1 is the active primary instance:

RHEL
[root@example-ha-vm1 ~]# pcs status
Cluster name: hacluster
Cluster Summary:
  * Stack: corosync
  * Current DC: example-ha-vm1 (version 2.0.3-5.el8_2.4-4b1f869f0f) - partition with quorum
  * Last updated: Wed Jul 7 23:05:11 2021
  * Last change:  Wed Jul 7 23:04:43 2021 by root via crm_attribute on example-ha-vm2
  * 2 nodes configured
  * 8 resource instances configured
Node List:
  * Online: [ example-ha-vm1 example-ha-vm2 ]
Full List of Resources:
  * STONITH-example-ha-vm1 (stonith:fence_gce): Started example-ha-vm2
  * STONITH-example-ha-vm2 (stonith:fence_gce): Started example-ha-vm1
  * Resource Group: g-primary:
    * rsc_healthcheck_HA1 (service:haproxy): Started example-ha-vm2
    * rsc_vip_HA1_00 (ocf::heartbeat:IPaddr2): Started example-ha-vm2
  * Clone Set: SAPHanaTopology_HA1_00-clone [SAPHanaTopology_HA1_00]:
    * Started: [ example-ha-vm1 example-ha-vm2 ]
  * Clone Set: SAPHana_HA1_00-clone [SAPHana_HA1_00] (promotable):
    * Masters: [ example-ha-vm2 ]
    * Slaves: [ example-ha-vm1 ]
Failed Resource Actions:
  * rsc_healthcheck_HA1_start_0 on example-ha-vm1 'error' (1): call=29, status='complete', exitreason='', last-rc-change='2021-07-07 21:07:35Z', queued=0ms, exec=2097ms
  * SAPHana_HA1_00_monitor_61000 on example-ha-vm1 'not running' (7): call=44, status='complete', exitreason='', last-rc-change='2021-07-07 21:09:49Z', queued=0ms, exec=0ms
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
SLES
example-ha-vm1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: example-ha-vm1 (version 2.0.4+20200616.2deceaa3a-3.9.1-2.0.4+20200616.2deceaa3a) - partition with quorum
  * Last updated: Wed Jul 7 22:57:59 2021
  * Last change:  Wed Jul 7 22:57:03 2021 by root via crm_attribute on example-ha-vm1
  * 2 nodes configured
  * 8 resource instances configured
Node List:
  * Online: [ example-ha-vm1 example-ha-vm2 ]
Full List of Resources:
  * STONITH-example-ha-vm1 (stonith:fence_gce): Started example-ha-vm2
  * STONITH-example-ha-vm2 (stonith:fence_gce): Started example-ha-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary (ocf::heartbeat:IPaddr2): Started example-ha-vm1
    * rsc_vip_hc-primary (ocf::heartbeat:anything): Started example-ha-vm1
  * Clone Set: cln_SAPHanaTopology_HA1_HDB00 [rsc_SAPHanaTopology_HA1_HDB00]:
    * Started: [ example-ha-vm1 example-ha-vm2 ]
  * Clone Set: msl_SAPHana_HA1_HDB00 [rsc_SAPHana_HA1_HDB00] (promotable):
    * Masters: [ example-ha-vm1 ]
    * Slaves: [ example-ha-vm2 ]
Change to the SAP admin user by replacing SID_LC in the following command with the sap_hana_sid value that you specified in the sap_hana_ha.tf file. The SID_LC value must be in lowercase.

su - SID_LCadm
Ensure that the SAP HANA services, such as hdbnameserver, hdbindexserver, and others, are running on the instance by entering the following command:

HDB info
If you are using RHEL for SAP 9.0 or later, then make sure that the packages chkconfig and compat-openssl11 are installed on your VM instance; an example install command follows.

For more information from SAP, see SAP Note 3108316 - Red Hat Enterprise Linux 9.x: Installation and Configuration.
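If either package is missing, a typical way to add both on RHEL 9 is with dnf; the package names are taken from the requirement above, and availability depends on your subscribed repositories:

# Install the packages required for SAP HANA on RHEL 9.
sudo dnf install -y chkconfig compat-openssl11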
Check your cluster configuration
Check the parameter settings of your cluster. Check both the settings that are displayed by your cluster software and the parameter settings in the cluster configuration file. Compare your settings to the settings in the examples below, which were created by the automation scripts that are used in this guide.
Click on the tab for your operating system.
RHEL
Display your cluster resource configurations:
pcs config show
The following example shows the resource configurations that are created by the automation scripts on RHEL 8.1 and later.
If you are running RHEL 7.7 or earlier, the resource definition Clone: SAPHana_HA1_00-clone does not include Meta Attrs: promotable=true.

Cluster Name: hacluster
Corosync Nodes:
 example-rha-vm1 example-rha-vm2
Pacemaker Nodes:
 example-rha-vm1 example-rha-vm2

Resources:
 Group: g-primary
  Resource: rsc_healthcheck_HA1 (class=service type=haproxy)
   Operations: monitor interval=10s timeout=20s (rsc_healthcheck_HA1-monitor-interval-10s)
               start interval=0s timeout=100 (rsc_healthcheck_HA1-start-interval-0s)
               stop interval=0s timeout=100 (rsc_healthcheck_HA1-stop-interval-0s)
  Resource: rsc_vip_HA1_00 (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: cidr_netmask=32 ip=10.128.15.100 nic=eth0
   Operations: monitor interval=3600s timeout=60s (rsc_vip_HA1_00-monitor-interval-3600s)
               start interval=0s timeout=20s (rsc_vip_HA1_00-start-interval-0s)
               stop interval=0s timeout=20s (rsc_vip_HA1_00-stop-interval-0s)
 Clone: SAPHanaTopology_HA1_00-clone
  Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
  Resource: SAPHanaTopology_HA1_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
   Attributes: InstanceNumber=00 SID=HA1
   Operations: methods interval=0s timeout=5 (SAPHanaTopology_HA1_00-methods-interval-0s)
               monitor interval=10 timeout=600 (SAPHanaTopology_HA1_00-monitor-interval-10)
               reload interval=0s timeout=5 (SAPHanaTopology_HA1_00-reload-interval-0s)
               start interval=0s timeout=600 (SAPHanaTopology_HA1_00-start-interval-0s)
               stop interval=0s timeout=300 (SAPHanaTopology_HA1_00-stop-interval-0s)
 Clone: SAPHana_HA1_00-clone
  Meta Attrs: promotable=true
  Resource: SAPHana_HA1_00 (class=ocf provider=heartbeat type=SAPHana)
   Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=7200 InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=HA1
   Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
   Operations: demote interval=0s timeout=3600 (SAPHana_HA1_00-demote-interval-0s)
               methods interval=0s timeout=5 (SAPHana_HA1_00-methods-interval-0s)
               monitor interval=61 role=Slave timeout=700 (SAPHana_HA1_00-monitor-interval-61)
               monitor interval=59 role=Master timeout=700 (SAPHana_HA1_00-monitor-interval-59)
               promote interval=0s timeout=3600 (SAPHana_HA1_00-promote-interval-0s)
               reload interval=0s timeout=5 (SAPHana_HA1_00-reload-interval-0s)
               start interval=0s timeout=3600 (SAPHana_HA1_00-start-interval-0s)
               stop interval=0s timeout=3600 (SAPHana_HA1_00-stop-interval-0s)

Stonith Devices:
 Resource: STONITH-example-rha-vm1 (class=stonith type=fence_gce)
  Attributes: pcmk_delay_max=30 pcmk_monitor_retries=4 pcmk_reboot_timeout=300 port=example-rha-vm1 project=example-project-123456 zone=us-central1-a
  Operations: monitor interval=300s timeout=120s (STONITH-example-rha-vm1-monitor-interval-300s)
              start interval=0 timeout=60s (STONITH-example-rha-vm1-start-interval-0)
 Resource: STONITH-example-rha-vm2 (class=stonith type=fence_gce)
  Attributes: pcmk_monitor_retries=4 pcmk_reboot_timeout=300 port=example-rha-vm2 project=example-project-123456 zone=us-central1-c
  Operations: monitor interval=300s timeout=120s (STONITH-example-rha-vm2-monitor-interval-300s)
              start interval=0 timeout=60s (STONITH-example-rha-vm2-start-interval-0)
Fencing Levels:

Location Constraints:
  Resource: STONITH-example-rha-vm1
    Disabled on: example-rha-vm1 (score:-INFINITY) (id:location-STONITH-example-rha-vm1-example-rha-vm1--INFINITY)
  Resource: STONITH-example-rha-vm2
    Disabled on: example-rha-vm2 (score:-INFINITY) (id:location-STONITH-example-rha-vm2-example-rha-vm2--INFINITY)
Ordering Constraints:
  start SAPHanaTopology_HA1_00-clone then start SAPHana_HA1_00-clone (kind:Mandatory) (non-symmetrical) (id:order-SAPHanaTopology_HA1_00-clone-SAPHana_HA1_00-clone-mandatory)
Colocation Constraints:
  g-primary with SAPHana_HA1_00-clone (score:4000) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-g-primary-SAPHana_HA1_00-clone-4000)
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 migration-threshold=5000
 resource-stickiness=1000
Operations Defaults:
 timeout=600s

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: hacluster
 dc-version: 2.0.2-3.el8_1.2-744a30d655
 have-watchdog: false
 stonith-enabled: true
 stonith-timeout: 300s

Quorum:
  Options:
Display your cluster configuration file, corosync.conf:

cat /etc/corosync/corosync.conf
The following example shows the parameters that the automation scripts set for RHEL 8.1 and later.
If you are using RHEL 7.7 or earlier, the value of transport: is udpu instead of knet:

totem {
  version: 2
  cluster_name: hacluster
  transport: knet
  join: 60
  max_messages: 20
  token: 20000
  token_retransmits_before_loss_const: 10
  crypto_cipher: aes256
  crypto_hash: sha256
}
nodelist {
  node {
    ring0_addr: example-rha-vm1
    name: example-rha-vm1
    nodeid: 1
  }
  node {
    ring0_addr: example-rha-vm2
    name: example-rha-vm2
    nodeid: 2
  }
}
quorum {
  provider: corosync_votequorum
  two_node: 1
}
logging {
  to_logfile: yes
  logfile: /var/log/cluster/corosync.log
  to_syslog: yes
  timestamp: on
}
SLES
Display your cluster resource configurations:
crm config show
The automation scripts that are used by this guide create the resource configurations that are shown in the following example:
node 1: example-ha-vm1 \
        attributes hana_ha1_op_mode=logreplay lpa_ha1_lpt=1635380335 hana_ha1_srmode=syncmem hana_ha1_vhost=example-ha-vm1 hana_ha1_remoteHost=example-ha-vm2 hana_ha1_site=example-ha-vm1
node 2: example-ha-vm2 \
        attributes lpa_ha1_lpt=30 hana_ha1_op_mode=logreplay hana_ha1_vhost=example-ha-vm2 hana_ha1_site=example-ha-vm2 hana_ha1_srmode=syncmem hana_ha1_remoteHost=example-ha-vm1
primitive STONITH-example-ha-vm1 stonith:fence_gce \
        op monitor interval=300s timeout=120s \
        op start interval=0 timeout=60s \
        params port=example-ha-vm1 zone="us-central1-a" project="example-project-123456" pcmk_reboot_timeout=300 pcmk_monitor_retries=4 pcmk_delay_max=30
primitive STONITH-example-ha-vm2 stonith:fence_gce \
        op monitor interval=300s timeout=120s \
        op start interval=0 timeout=60s \
        params port=example-ha-vm2 zone="us-central1-c" project="example-project-123456" pcmk_reboot_timeout=300 pcmk_monitor_retries=4
primitive rsc_SAPHanaTopology_HA1_HDB00 ocf:suse:SAPHanaTopology \
        operations $id=rsc_sap2_HA1_HDB00-operations \
        op monitor interval=10 timeout=600 \
        op start interval=0 timeout=600 \
        op stop interval=0 timeout=300 \
        params SID=HA1 InstanceNumber=00
primitive rsc_SAPHana_HA1_HDB00 ocf:suse:SAPHana \
        operations $id=rsc_sap_HA1_HDB00-operations \
        op start interval=0 timeout=3600 \
        op stop interval=0 timeout=3600 \
        op promote interval=0 timeout=3600 \
        op demote interval=0 timeout=3600 \
        op monitor interval=60 role=Master timeout=700 \
        op monitor interval=61 role=Slave timeout=700 \
        params SID=HA1 InstanceNumber=00 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true
primitive rsc_vip_hc-primary anything \
        params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:60000,backlog=10,fork,reuseaddr /dev/null" \
        op monitor timeout=20s interval=10s \
        op_params depth=0
primitive rsc_vip_int-primary IPaddr2 \
        params ip=10.128.15.101 cidr_netmask=32 nic=eth0 \
        op monitor interval=3600s timeout=60s
group g-primary rsc_vip_int-primary rsc_vip_hc-primary
ms msl_SAPHana_HA1_HDB00 rsc_SAPHana_HA1_HDB00 \
        meta notify=true clone-max=2 clone-node-max=1 target-role=Started interleave=true
clone cln_SAPHanaTopology_HA1_HDB00 rsc_SAPHanaTopology_HA1_HDB00 \
        meta clone-node-max=1 target-role=Started interleave=true
location LOC_STONITH_example-ha-vm1 STONITH-example-ha-vm1 -inf: example-ha-vm1
location LOC_STONITH_example-ha-vm2 STONITH-example-ha-vm2 -inf: example-ha-vm2
colocation col_saphana_ip_HA1_HDB00 4000: g-primary:Started msl_SAPHana_HA1_HDB00:Master
order ord_SAPHana_HA1_HDB00 Optional: cln_SAPHanaTopology_HA1_HDB00 msl_SAPHana_HA1_HDB00
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version="1.1.24+20210811.f5abda0ee-3.18.1-1.1.24+20210811.f5abda0ee" \
        cluster-infrastructure=corosync \
        cluster-name=hacluster \
        maintenance-mode=false \
        stonith-timeout=300s \
        stonith-enabled=true
rsc_defaults rsc-options: \
        resource-stickiness=1000 \
        migration-threshold=5000
op_defaults op-options: \
        timeout=600
Display your cluster configuration file, corosync.conf:

cat /etc/corosync/corosync.conf
The automation scripts that are used by this guide specify parameter settings in the corosync.conf file as shown in the following example:

totem {
  version: 2
  secauth: off
  crypto_hash: sha1
  crypto_cipher: aes256
  cluster_name: hacluster
  clear_node_high_bit: yes
  token: 20000
  token_retransmits_before_loss_const: 10
  join: 60
  max_messages: 20
  transport: udpu
  interface {
    ringnumber: 0
    bindnetaddr: 10.128.1.63
    mcastport: 5405
    ttl: 1
  }
}
logging {
  fileline: off
  to_stderr: no
  to_logfile: no
  logfile: /var/log/cluster/corosync.log
  to_syslog: yes
  debug: off
  timestamp: on
  logger_subsys {
    subsys: QUORUM
    debug: off
  }
}
nodelist {
  node {
    ring0_addr: example-ha-vm1
    nodeid: 1
  }
  node {
    ring0_addr: example-ha-vm2
    nodeid: 2
  }
}
quorum {
  provider: corosync_votequorum
  expected_votes: 2
  two_node: 1
}
Check the load balancer and the health of the instance groups
To confirm that the load balancer and health check were set up correctly, check the load balancer and instance groups in the Google Cloud console.
Open the Load balancing page in the Google Cloud console:
In the list of load balancers, confirm that a load balancer was created for your HA cluster.
On the Load balancer details page, in the Backend section, check the Healthy column under Instance group and confirm that one of the instance groups shows "1/1" and the other shows "0/1". After a failover, the healthy indicator, "1/1", switches to the new active instance group.
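You can also check backend health from the command line. This sketch assumes the us-central1 region used in the examples; replace BACKEND_SERVICE_NAME with the name of the regional backend service that the deployment created, which you can look up with the first command:

# List the regional backend services in the deployment region.
gcloud compute backend-services list --regions=us-central1

# Show which backend instance group currently reports as healthy.
gcloud compute backend-services get-health BACKEND_SERVICE_NAME --region=us-central1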
Check the SAP HANA system using SAP HANA Studio
You can use either SAP HANA Cockpit or SAP HANA Studio to monitor and manage your SAP HANA systems in a high-availability cluster.
Connect to the HANA system by using SAP HANA Studio. When defining the connection, specify the following values:
- On the Specify System panel, specify the floating IP address as the Host Name.
- On the Connection Properties panel, for database user authentication, specify the database superuser name and the password that you specified for the sap_hana_system_password argument in the sap_hana_ha.tf file.
For information from SAP about installing SAP HANA Studio, see SAP HANA Studio Installation and Update Guide.
After SAP HANA Studio is connected to your HANA HA system, display the system overview by double-clicking the system name in the navigation pane on the left side of the window.
Under General Information on the Overview tab, confirm that:
- The Operational Status shows "All services started".
- The System Replication Status shows "All services are active and in sync".
Confirm the replication mode by clicking the System Replication Status link under General Information. Synchronous replication is indicated by SYNCMEM in the REPLICATION_MODE column on the System Replication tab.
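If you prefer the command line to SAP HANA Studio, one common way to confirm the replication mode is to query the monitoring view that backs the System Replication tab. This is a sketch only; run it on the primary host and adjust the instance number, user, and password to your system:

# Query the replication mode and status from the primary system.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p "SYSTEM_PASSWORD" \
  "SELECT HOST, PORT, REPLICATION_MODE, REPLICATION_STATUS FROM M_SERVICE_REPLICATION"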
Clean up and retry deployment
If any of the deployment verification steps in the preceding sections show that the installation wasn't successful, then you must undo your deployment and retry it by completing the following steps:
Resolve any errors to ensure that your deployment doesn't fail again for the same reason. For information about checking the logs, or resolving quota related errors, see Check the logs.
Open Cloud Shell or, if you installed the Google Cloud CLI on your local workstation, then open a terminal.
Go to the directory that contains the Terraform configuration file that you used for this deployment.
Delete all resources that are part of your deployment by running the following command:
terraform destroy
When you are prompted to approve the action, enter yes.

Retry your deployment as instructed earlier in this guide.
Perform a failover test
To perform a failover test, complete the following steps:
Connect to the primary VM by using SSH. You can connect from the Compute Engine VM instances page by clicking the SSH button for each VM instance, or you can use your preferred SSH method.
At the command prompt, enter the following command:
sudo ip link set eth0 down
The ip link set eth0 down command triggers a failover by severing communications with the primary host.

Reconnect to either host using SSH and change to the root user.
Confirm that the primary host is now active on the VM that used to contain the secondary host. Automatic restart is enabled in the cluster, so the stopped host will restart and assume the role of secondary host.
RHEL
pcs status
SLES
crm status
The following examples show that the roles on each host have switched.
RHEL
[root@example-ha-vm1 ~]# pcs status
Cluster name: hacluster
Cluster Summary:
  * Stack: corosync
  * Current DC: example-ha-vm1 (version 2.0.3-5.el8_2.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Mar 19 21:22:07 2021
  * Last change:  Fri Mar 19 21:21:28 2021 by root via crm_attribute on example-ha-vm2
  * 2 nodes configured
  * 8 resource instances configured
Node List:
  * Online: [ example-ha-vm1 example-ha-vm2 ]
Full List of Resources:
  * STONITH-example-ha-vm1 (stonith:fence_gce): Started example-ha-vm2
  * STONITH-example-ha-vm2 (stonith:fence_gce): Started example-ha-vm1
  * Resource Group: g-primary:
    * rsc_healthcheck_HA1 (service:haproxy): Started example-ha-vm2
    * rsc_vip_HA1_00 (ocf::heartbeat:IPaddr2): Started example-ha-vm2
  * Clone Set: SAPHanaTopology_HA1_00-clone [SAPHanaTopology_HA1_00]:
    * Started: [ example-ha-vm1 example-ha-vm2 ]
  * Clone Set: SAPHana_HA1_00-clone [SAPHana_HA1_00] (promotable):
    * Masters: [ example-ha-vm2 ]
    * Slaves: [ example-ha-vm1 ]
SLES
example-ha-vm2:~ #
Cluster Summary:
  * Stack: corosync
  * Current DC: example-ha-vm2 (version 2.0.4+20200616.2deceaa3a-3.9.1-2.0.4+20200616.2deceaa3a) - partition with quorum
  * Last updated: Thu Jul 8 17:33:44 2021
  * Last change:  Thu Jul 8 17:33:07 2021 by root via crm_attribute on example-ha-vm2
  * 2 nodes configured
  * 8 resource instances configured
Node List:
  * Online: [ example-ha-vm1 example-ha-vm2 ]
Full List of Resources:
  * STONITH-example-ha-vm1 (stonith:fence_gce): Started example-ha-vm2
  * STONITH-example-ha-vm2 (stonith:fence_gce): Started example-ha-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary (ocf::heartbeat:IPaddr2): Started example-ha-vm2
    * rsc_vip_hc-primary (ocf::heartbeat:anything): Started example-ha-vm2
  * Clone Set: cln_SAPHanaTopology_HA1_HDB00 [rsc_SAPHanaTopology_HA1_HDB00]:
    * Started: [ example-ha-vm1 example-ha-vm2 ]
  * Clone Set: msl_SAPHana_HA1_HDB00 [rsc_SAPHana_HA1_HDB00] (promotable):
    * Masters: [ example-ha-vm2 ]
    * Slaves: [ example-ha-vm1 ]
On the Load balancer details page in the console, confirm that the new active primary instance shows "1/1" in the Healthy column. Refresh the page, if necessary.
For example:
In SAP HANA Studio, confirm that you are still connected to the system by double-clicking the system entry in the navigation pane to refresh the system information.
Click the System Replication Status link to confirm that the primary and secondary hosts have switched hosts and are active.
Validate your installation of Google Cloud's Agent for SAP
After you have deployed a VM and installed your SAP system, validate that Google Cloud's Agent for SAP is functioning properly.
Verify that Google Cloud's Agent for SAP is running
To verify that the agent is running, follow these steps:
Establish an SSH connection with your host VM instance.
Run the following command:
systemctl status google-cloud-sap-agent
If the agent is functioning properly, then the output contains active (running). For example:

google-cloud-sap-agent.service - Google Cloud Agent for SAP
  Loaded: loaded (/usr/lib/systemd/system/google-cloud-sap-agent.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-12-02 07:21:42 UTC; 4 days ago
  Main PID: 1337673 (google-cloud-sa)
  Tasks: 9 (limit: 100427)
  Memory: 22.4 M (max: 1.0G limit: 1.0G)
  CGroup: /system.slice/google-cloud-sap-agent.service
          └─1337673 /usr/bin/google-cloud-sap-agent
If the agent isn't running, then restart the agent.
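For example, you can restart the agent's systemd service and then check its status again:

# Restart Google Cloud's Agent for SAP and verify that it comes back up.
sudo systemctl restart google-cloud-sap-agent
sudo systemctl status google-cloud-sap-agent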
Verify that SAP Host Agent is receiving metrics
To verify that the infrastructure metrics are collected by Google Cloud's Agent for SAP and sent correctly to the SAP Host Agent, follow these steps:
- In your SAP system, enter transaction ST06. In the overview pane, check the availability and content of the following fields for the correct end-to-end setup of the SAP and Google monitoring infrastructure:
  - Cloud Provider: Google Cloud Platform
  - Enhanced Monitoring Access: TRUE
  - Enhanced Monitoring Details: ACTIVE
Set up monitoring for SAP HANA
Optionally, you can monitor your SAP HANA instances using Google Cloud's Agent for SAP. From version 2.0, you can configure the agent to collect the SAP HANA monitoring metrics and send them to Cloud Monitoring. Cloud Monitoring allows you to create dashboards to visualize these metrics, set up alerts based on metric thresholds, and more.
To monitor an HA cluster using Google Cloud's Agent for SAP, make sure to follow the guidance given in High-availability configuration for the agent. For more information about the collection of SAP HANA monitoring metrics using Google Cloud's Agent for SAP, see SAP HANA monitoring metrics collection.
Connect to SAP HANA
Note that because these instructions don't use an external IP address for SAP HANA, you can only connect to the SAP HANA instances through the bastion instance by using SSH, or through the Windows server by using SAP HANA Studio.
To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instance(s) by using an SSH client of your choice.
To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After connection, manually install SAP HANA Studio and access your SAP HANA database.
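For example, assuming a bastion VM named bastion-vm in the same VPC as the HANA nodes (the names and zone are placeholders), a typical two-hop SSH session looks like the following:

# Connect to the bastion host.
gcloud compute ssh bastion-vm --zone=us-central1-a

# From the bastion, connect to a HANA node over its internal hostname or IP.
ssh example-ha-vm1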
Configure HANA Active/Active (Read Enabled)
Starting with SAP HANA 2.0 SPS1, you can configure HANA Active/Active (Read Enabled) in a Pacemaker cluster. For instructions, see:
- Configure HANA Active/Active (Read Enabled) in a SUSE Pacemaker cluster
- Configure HANA Active/Active (Read Enabled) in a Red Hat Pacemaker cluster
Performing post-deployment tasks
Before using your SAP HANA instance, we recommend that you perform the following post-deployment steps. For more information, see SAP HANA Installation and Update Guide.
Change the temporary passwords for the SAP HANA system administrator and database superuser.
Update the SAP HANA software with the latest patches.
If your SAP HANA system is deployed on a VirtIO network interface, then we recommend that you ensure the value of the TCP parameter /proc/sys/net/ipv4/tcp_limit_output_bytes is set to 1048576. This modification helps improve the overall network throughput on the VirtIO network interface without affecting the network latency. An example command follows this list.

Install any additional components such as Application Function Libraries (AFL) or Smart Data Access (SDA).
Configure and back up your new SAP HANA database. For more information, see the SAP HANA operations guide.
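For the TCP parameter recommendation in the list above, a minimal way to apply and persist the setting is shown below; the file name under /etc/sysctl.d/ is an assumption:

# Apply the recommended value immediately.
sudo sysctl -w net.ipv4.tcp_limit_output_bytes=1048576

# Persist the setting across reboots.
echo "net.ipv4.tcp_limit_output_bytes = 1048576" | sudo tee /etc/sysctl.d/99-sap-hana-net.conf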
Evaluate your SAP HANA workload
To automate continuous validation checks for your SAP HANA high-availability workloads running on Google Cloud, you can use Workload Manager.
Workload Manager allows you to automatically scan and evaluate your SAP HANA high-availability workloads against best practices from SAP, Google Cloud, and OS vendors. This helps improve the quality, performance, and reliability of your workloads.
For information about the best practices that Workload Manager supports for evaluating SAP HANA high-availability workloads running on Google Cloud, see Workload Manager best practices for SAP. For information about creating and running an evaluation using Workload Manager, see Create and run an evaluation.
What's next
- For more information about VM administration and monitoring, see the SAP HANA Operations Guide.