This guide shows you how to deploy a performance-optimized SUSE Linux Enterprise Server (SLES) cluster for a single-host SAP HANA scale-up system on Google Cloud Platform (GCP). You complete a configuration file template and use Cloud Deployment Manager to deploy a cluster and two SAP HANA systems that incorporate best practices from Compute Engine and SAP.
One of the SAP HANA systems functions as the primary, active system and the other functions as a secondary, standby system. Each SAP HANA system is deployed on its own Compute Engine VM, each in a different zone within the same region.
The deployed cluster includes the following functions and features:
- The Pacemaker high-availability cluster resource manager.
- A GCP fencing mechanism.
- The SUSE high-availability pattern.
- The SUSE SAPHanaSR resource agent package.
- Synchronous system replication.
- Memory preload.
- Automatic restart of the failed instance as the new secondary instance.
If you need a scale-out system with standby hosts for SAP HANA automatic host failover, use the SAP HANA Scale-Out System with SAP HANA Host Auto-Failover Deployment Guide instead.
To deploy an SAP HANA system without a Linux high-availability cluster or standby hosts, use the SAP HANA Deployment Guide.
This guide is intended for advanced SAP HANA users who are familiar with Linux high-availability configurations for SAP HANA.
Before you create the SAP HANA high availability cluster, make sure that the following prerequisites are met:
- You or your organization has a GCP account and you have created a project for the SAP HANA deployment. For information about creating GCP accounts and projects, see Setting up your Google account in the SAP HANA Deployment Guide.
- The SAP HANA installation media is stored in a Cloud Storage bucket that is available in your deployment project and region. For information about how to upload SAP HANA installation media to a Cloud Storage bucket, see Downloading SAP HANA in the SAP HANA Deployment Guide.
Creating a network
When you create a project, a default network is created for the project. However, for security purposes, you should create a new network and control who has access by adding firewall rules or by using another access control method.
During deployment, VM instances typically require access to the internet to download Google's monitoring agent. If you are using one of the SUSE Linux Enterprise Server images that are available from Compute Engine, the VM instance also requires access to register the license and to access the SUSE repositories.
To set up networking:
Go to Cloud Shell.
To create a new network in the custom subnetworks mode, run:
gcloud compute networks create [YOUR_NETWORK_NAME] --subnet-mode custom
[YOUR_NETWORK_NAME] is the name of the new network. The network name can contain only lowercase characters, digits, and the dash character (-).
Make sure to specify the custom flag instead of using an automatic subnetwork. An automatic subnetwork always has the same assigned IP address range, which can cause issues if you have multiple subnetworks and want to use VPN.
Create a subnetwork, and specify the region and IP range:
gcloud compute networks subnets create [YOUR_SUBNETWORK_NAME] --network [YOUR_NETWORK_NAME] --region [YOUR_REGION] --range [YOUR_RANGE]
[YOUR_SUBNETWORK_NAME] is the name of the new subnetwork.
[YOUR_NETWORK_NAME] is the name of the network you created in the previous step.
[YOUR_REGION] is the region where you want the subnetwork.
[YOUR_RANGE] is the IP address range, specified in CIDR format, such as 10.1.0.0/24. If you plan to add more than one subnetwork, assign non-overlapping CIDR IP ranges to each subnetwork in the network. Note that each subnetwork and its internal IP ranges are mapped to a single region.
Optionally, repeat the previous step and add additional subnetworks.
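Before you create additional subnetworks, you may want to confirm that your candidate CIDR ranges do not overlap. The following Bash sketch is a hypothetical helper (not part of gcloud) that compares two IPv4 CIDR ranges:

```shell
#!/usr/bin/env bash
# Minimal sketch: check that two candidate IPv4 CIDR ranges do not overlap
# before you create subnetworks with them. Helper names are hypothetical.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Return success (0) if the two CIDR ranges overlap.
cidrs_overlap() {
  local net1="${1%/*}" len1="${1#*/}" net2="${2%/*}" len2="${2#*/}"
  local start1 start2 end1 end2
  start1=$(ip_to_int "$net1"); end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(ip_to_int "$net2"); end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  (( start1 <= end2 && start2 <= end1 ))
}

cidrs_overlap "10.1.0.0/24" "10.2.0.0/24" && echo "overlap" || echo "ok"   # prints "ok"
cidrs_overlap "10.1.0.0/16" "10.1.5.0/24" && echo "overlap" || echo "ok"   # prints "overlap"
```

Run the check for each new range against every subnetwork already in the network before calling gcloud compute networks subnets create.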
Setting up a NAT gateway
If you intend to create one or more VMs that will not have public IP addresses, you must create a NAT gateway so that your VMs can access the Internet to download Google's monitoring agent.
If you intend to assign an external public IP address to your VM, you can skip this step.
To create a NAT gateway:
Create a VM to act as the NAT gateway in the subnet you just created:
gcloud compute instances create [YOUR_VM_NAME] --can-ip-forward \
    --zone [YOUR_ZONE] --image-family [YOUR_IMAGE_FAMILY] \
    --image-project [YOUR_IMAGE_PROJECT] \
    --machine-type=[YOUR_MACHINE_TYPE] --subnet [YOUR_SUBNETWORK_NAME] \
    --metadata startup-script="sysctl -w net.ipv4.ip_forward=1; iptables \
    -t nat -A POSTROUTING -o eth0 -j MASQUERADE" --tags [YOUR_VM_TAG]
[YOUR_VM_NAME] is the name of the VM that you are creating and want to use for the NAT gateway.
[YOUR_ZONE] is the zone where you want the VM.
[YOUR_IMAGE_FAMILY] and [YOUR_IMAGE_PROJECT] specify the image you want to use for the NAT gateway.
[YOUR_MACHINE_TYPE] is any supported machine type. If you expect high network traffic, choose a machine type that has at least eight virtual CPUs.
[YOUR_SUBNETWORK_NAME] is the name of the subnetwork where you want the VM.
[YOUR_VM_TAG] is a tag that is applied to the VM you are creating. If you use this VM as a bastion host, this tag is used to apply the related firewall rule only to this VM.
Create a route that is tagged so that traffic passes through the NAT VM instead of the default Internet gateway:
gcloud compute routes create [YOUR_ROUTE_NAME] \
    --network [YOUR_NETWORK_NAME] --destination-range 0.0.0.0/0 \
    --next-hop-instance [YOUR_VM_NAME] \
    --next-hop-instance-zone [YOUR_ZONE] \
    --tags [YOUR_TAG_NAME] --priority 800
[YOUR_ROUTE_NAME] is the name of the route you are creating.
[YOUR_NETWORK_NAME] is the network you created.
[YOUR_VM_NAME] is the VM you are using for your NAT gateway.
[YOUR_ZONE] is the zone where the VM is located.
[YOUR_TAG_NAME] is the tag on the route that directs traffic through the NAT VM.
If you also want to use the NAT gateway VM as a bastion host, run the following command. This command creates a firewall rule that allows inbound SSH access to this instance from the Internet:
gcloud compute firewall-rules create allow-ssh --network [YOUR_NETWORK_NAME] --allow tcp:22 --source-ranges 0.0.0.0/0 --target-tags "[YOUR_VM_TAG]"
[YOUR_NETWORK_NAME] is the network you created.
[YOUR_VM_TAG] is the tag you specified when you created the NAT gateway VM. This tag is used so that this firewall rule applies only to the VM that hosts the NAT gateway, and not to all VMs in the network.
Creating a high-availability Linux cluster with SAP HANA installed
The following instructions use the Cloud Deployment Manager to create a SLES Linux cluster with two SAP HANA systems: a primary single-host SAP HANA system on one VM instance and a standby SAP HANA system on another VM instance in the same Compute Engine region. The SAP HANA systems use synchronous system replication and the standby system preloads the replicated data.
You define configuration options for the SAP HANA high-availability cluster in a Cloud Deployment Manager configuration file template.
The following instructions use the Cloud Shell, but are generally applicable to the Cloud SDK.
Confirm that your current quotas for resources such as persistent disks and CPUs are sufficient for the SAP HANA systems you are about to install. If your quotas are insufficient, deployment fails. For the SAP HANA quota requirements, see Pricing and quota considerations for SAP HANA.
Open the Cloud Shell or, if you installed the Cloud SDK on your local workstation, open a terminal.
Download the template.yaml configuration file template for the SAP HANA high-availability cluster to your working directory by entering the following command in the Cloud Shell or Cloud SDK:
Optionally, rename the template.yaml file to identify the configuration it defines.
Open the template.yaml file in the Cloud Shell code editor or, if you are using the Cloud SDK, in the text editor of your choice.
To open the Cloud Shell code editor, click the pencil icon in the upper right corner of the Cloud Shell terminal window.
In the template.yaml file, update the property values by replacing the brackets and their contents with the values for your installation. The properties are described in the following table.
To create the VM instances without installing SAP HANA, delete or comment out all of the lines that begin with sap_hana_.
| Property | Data type | Description |
|---|---|---|
| primaryInstanceName | String | The name of the VM instance for the primary SAP HANA system. Specify the name in lowercase letters, numbers, or hyphens. |
| secondaryInstanceName | String | The name of the VM instance for the secondary SAP HANA system. Specify the name in lowercase letters, numbers, or hyphens. |
| primaryZone | String | The zone in which the primary SAP HANA system is deployed. The primary and secondary zones must be in the same region. |
| secondaryZone | String | The zone in which the secondary SAP HANA system is deployed. The primary and secondary zones must be in the same region. |
| instanceType | String | The type of Compute Engine virtual machine that you want to run SAP HANA on. |
| subnetwork | String | The name of the subnetwork that you created in a previous step. If you are deploying to a shared VPC, specify this value as [SHAREDVPC_PROJECT]/[SUBNETWORK]. For example, myproject/network1. |
| linuxImage | String | The name of the Linux operating-system image or image family that you are using with SAP HANA. To specify an image family, add the prefix family/ to the family name. For example, family/sles-12-sp3-sap. To specify a specific image, specify only the image name. For the list of available image families, see the Images page in the Cloud console. |
| linuxImageProject | String | The GCP project that contains the image you are going to use. This project might be your own project or a GCP image project. For SLES, specify suse-sap-cloud. For a list of GCP image projects, see the Images page in the Compute Engine documentation. |
| sap_hana_deployment_bucket | String | The name of the Cloud Storage bucket in your project that contains the SAP HANA installation files that you uploaded in a previous step. |
| sap_hana_sid | String | The SAP HANA system ID. The ID must consist of three alphanumeric characters and begin with a letter. All letters must be uppercase. |
| sap_hana_instance_number | Integer | The instance number, 0 to 99, of the SAP HANA system. The default is 0. |
| sap_hana_sidadm_password | String | The password for the operating system administrator. Passwords must be at least eight characters and include at least one uppercase letter, one lowercase letter, and one number. |
| sap_hana_system_password | String | The password for the database superuser. Passwords must be at least eight characters and include at least one uppercase letter, one lowercase letter, and one number. |
| sap_hana_scaleout_nodes | Integer | The number of additional SAP HANA worker hosts that you need. Specify 0, because scale-out hosts are not currently supported in high-availability configurations. |
| sap_vip | String | The floating IP address that is always assigned to the active SAP HANA instance. The IP address must be within the range of IP addresses that are assigned to your subnetwork. |
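Deployment fails late if a property value violates one of these format rules, so it can be worth checking values before you start. The following Bash sketch is a hypothetical pre-flight check (not part of the Deployment Manager templates) for the SID and password rules described above:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight checks for template.yaml values; not part of the
# Deployment Manager templates.

# SID: exactly three alphanumeric characters, beginning with a letter,
# with all letters uppercase.
valid_sid() {
  case "$1" in
    [[:upper:]][[:upper:][:digit:]][[:upper:][:digit:]]) return 0 ;;
    *) return 1 ;;
  esac
}

# Passwords: at least eight characters, with at least one uppercase letter,
# one lowercase letter, and one number.
valid_password() {
  local pw="$1"
  [ "${#pw}" -ge 8 ] || return 1
  case "$pw" in *[[:upper:]]*) ;; *) return 1 ;; esac
  case "$pw" in *[[:lower:]]*) ;; *) return 1 ;; esac
  case "$pw" in *[[:digit:]]*) ;; *) return 1 ;; esac
  return 0
}

valid_sid "HA1" && echo "SID ok"                  # prints "SID ok"
valid_password "Google123" && echo "password ok"  # prints "password ok"
```

Running these checks locally before editing template.yaml catches the most common formatting mistakes without waiting on a failed deployment.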
The following example shows a completed configuration file, which directs the Cloud Deployment Manager to deploy a high-availability cluster with the primary SAP HANA system installed in the us-central1-c zone and the secondary SAP HANA system installed in the us-central1-f zone. Both systems are installed on n1-highmem-96 VMs that are running the SLES 12 SP2 operating system.
imports:
- path: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_ha/sap_hana_ha.py

resources:
- name: sap_hana_ha
  type: https://storage.googleapis.com/sapdeploy/dm-templates/sap_hana_ha/sap_hana_ha.py
  properties:
    primaryInstanceName: example-ha-vm1
    secondaryInstanceName: example-ha-vm2
    primaryZone: us-central1-c
    secondaryZone: us-central1-f
    instanceType: n1-highmem-96
    subnetwork: example-ha-subnetwork
    linuxImage: family/sles-12-sp2-sap
    linuxImageProject: suse-sap-cloud
    sap_hana_deployment_bucket: hana2sp3rev30
    sap_hana_sid: HA1
    sap_hana_instance_number: 00
    sap_hana_sidadm_password: Google123
    sap_hana_system_password: Google123
    sap_hana_scaleout_nodes: 0
    sap_vip: 10.1.0.24
Create the instances:
gcloud deployment-manager deployments create [DEPLOYMENT-NAME] --config [TEMPLATE-NAME].yaml
The above command invokes the Cloud Deployment Manager, which deploys the VMs, downloads the SAP HANA software from your storage bucket, and installs SAP HANA, all according to the specifications in your template.yaml file. The process takes approximately 10 to 15 minutes to complete. To check the progress of your deployment, follow the steps in the next section.
Verifying the deployment of your HANA HA system
Verifying an SAP HANA HA cluster involves several different procedures:
- Checking the Stackdriver logs
- Checking the configuration of the VM and the SAP HANA installation
- Checking the SAP HANA system using SAP HANA Studio
- Performing a failover test
Checking the Stackdriver logs
Open Stackdriver Logging to check for errors and monitor the progress of the installation.
On the Resources tab, select Global as your logging resource.
When "INSTANCE DEPLOYMENT COMPLETE" is displayed, Cloud Deployment Manager processing is complete and you can proceed to the next step.
- If you see a quota error:
- On the IAM & admin Quotas page, increase any of your quotas that do not meet the SAP HANA requirements that are listed in the SAP HANA Planning Guide.
- On the Cloud Deployment Manager Deployments page, delete the deployment to clean up the VMs and persistent disks from the failed installation.
- Rerun the Cloud Deployment Manager.
Checking the configuration of the VM and the SAP HANA installation
After the SAP HANA system deploys without errors, connect to each VM by using SSH. From the Compute Engine VM instances page, you can click the SSH button for each VM instance, or you can use your preferred SSH method.
Change to the root user.
sudo su -
At the command prompt, enter df -h. Ensure that you see output that includes the /hana directories.
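If you want to script this check, you can test each expected mount point directly. The directory names in the following sketch are assumptions based on a default SAP HANA layout; adjust them to match your df -h output:

```shell
#!/usr/bin/env bash
# Sketch: report whether the expected SAP HANA directories are mounted.
# The directory list is an assumption based on a default SAP HANA layout.
check_hana_mounts() {
  local dir
  for dir in /hana/shared /hana/data /hana/log; do
    if mountpoint -q "$dir" 2>/dev/null; then
      echo "mounted: $dir"
    else
      echo "NOT mounted: $dir"
    fi
  done
}

check_hana_mounts
```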
Check the status of the new cluster by issuing the following command:

crm status
The results should show example-ha-vm1 as the master and example-ha-vm2 as the slave.
Change to the SAP admin user by replacing [SID] in the following command with the sap_hana_sid value that you specified in the configuration file template. Specify [SID] in lowercase letters.
su - [SID]adm
Ensure that the SAP HANA services, such as hdbindexserver, are running on the instance by entering the following command:

HDB info
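As a scripted alternative, you can test for the core SAP HANA processes directly. The process names in this sketch are assumptions based on a standard SAP HANA installation:

```shell
#!/usr/bin/env bash
# Sketch: confirm that core SAP HANA processes are running on this host.
# The process names are assumptions based on a standard SAP HANA installation.
check_hana_processes() {
  local proc
  for proc in hdbnameserver hdbindexserver; do
    if pgrep -x "$proc" >/dev/null 2>&1; then
      echo "running: $proc"
    else
      echo "NOT running: $proc"
    fi
  done
}

check_hana_processes
```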
Checking the SAP HANA system using SAP HANA Studio
Connect to the HANA system by using SAP HANA Studio. When defining the connection, specify the following values:
- On the Specify System panel, specify the floating IP address as the Host Name.
- On the Connection Properties panel, for database user authentication, specify the database superuser name and the password that you specified for the sap_hana_system_password property in the template.yaml file.
For information from SAP about installing SAP HANA Studio, see SAP HANA Studio Installation and Update Guide.
After SAP HANA Studio is connected to your HANA HA system, display the system overview by double-clicking the system name in the navigation pane on the left side of the window.
Under General Information on the Overview tab, confirm that:
- The Operational Status shows "All services started".
- The System Replication Status shows "All services are active and in sync".
Confirm the replication mode by clicking the System Replication Status link under General Information. Synchronous replication is indicated by SYNCMEM in the REPLICATION_MODE column on the System Replication tab.
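If you prefer a command-line check to SAP HANA Studio, the replication mode is also exposed through SQL in the SYS.M_SERVICE_REPLICATION monitoring view. The following fragment is a sketch, assuming the SYSTEM superuser and instance number 00 from the example configuration; run it as the [SID]adm user on the primary host:

```shell
# Sketch: query the replication mode over SQL instead of SAP HANA Studio.
# Assumes instance number 00 and the SYSTEM superuser from the example
# configuration; run as the [SID]adm user on the primary host.
query_replication_mode() {
  hdbsql -i 00 -u SYSTEM -p "$1" \
    "SELECT HOST, PORT, REPLICATION_MODE, REPLICATION_STATUS FROM SYS.M_SERVICE_REPLICATION"
}

# Usage (on the primary host):
# query_replication_mode "[SYSTEM_PASSWORD]"
```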
If any of the validation steps show that the installation failed:
- Resolve the errors.
- Delete the deployment from the Deployments page.
- Recreate the instances, as described in the last step of the previous section.
Performing a failover test
To perform a failover test:
Connect to the primary VM by using SSH. You can connect from the Compute Engine VM instances page by clicking the SSH button for each VM instance, or you can use your preferred SSH method.
Change to the root user.
sudo su -
At the command prompt, enter the following command:
ifconfig eth0 down
The ifconfig eth0 down command triggers a failover by severing communications with the primary host.
Follow the progress of the failover in Stackdriver Logging:
The following example shows the log entries for a successful failover:
Reconnect to either host using SSH and change to the root user.
Enter crm status to confirm that the primary host is now active on the VM that used to contain the secondary host. Because automatic restart is enabled in the cluster, the stopped host restarts and assumes the role of secondary host, as shown in the following screenshot.
In SAP HANA Studio, confirm that you are still connected to the system by double-clicking the system entry in the navigation pane to refresh the system information.
Click the System Replication Status link to confirm that the primary and secondary hosts have switched roles and are active.
Completing the NAT gateway installation
If you created a NAT gateway, complete the following steps.
Add tags to all instances, including the worker hosts:
export NETWORK_NAME="[YOUR_NETWORK_NAME]"
export TAG="[YOUR_TAG_TEXT]"
gcloud compute instances add-tags "[PRIMARY_VM_NAME]" --tags="$TAG" --zone=[PRIMARY_VM_ZONE]
gcloud compute instances add-tags "[SECONDARY_VM_NAME]" --tags="$TAG" --zone=[SECONDARY_VM_ZONE]
Delete external IPs:
gcloud compute instances delete-access-config "[PRIMARY_VM_NAME]" --access-config-name "external-nat" --zone=[PRIMARY_VM_ZONE]
gcloud compute instances delete-access-config "[SECONDARY_VM_NAME]" --access-config-name "external-nat" --zone=[SECONDARY_VM_ZONE]
Setting up Google's monitoring agent for SAP HANA
Optionally, you can set up Google's monitoring agent for SAP HANA, which collects metrics from SAP HANA and sends them to Stackdriver Monitoring. Stackdriver Monitoring allows you to create dashboards for your metrics, set up custom alerts based on metric thresholds, and more.
To monitor an HA cluster, you can install the monitoring agent on either a VM instance outside of the cluster or on each VM instance in the cluster.
If you install the monitoring agent on a VM instance outside of the HA cluster, you specify the floating IP address of the cluster as the IP address of the host instance to monitor. If you install the monitoring agent on each VM in the cluster, when configuring each monitoring agent, you specify the local IP address of the VM instance that is hosting the monitoring agent.
For more information on setting up and configuring Google's monitoring agent for SAP HANA, see the SAP HANA Monitoring Agent User Guide.
Connecting to SAP HANA
Note that because these instructions don't use an external IP address for SAP HANA, you can connect to the SAP HANA instances only through the bastion instance by using SSH, or through the Windows Server instance by using SAP HANA Studio.
To connect to SAP HANA through the bastion instance, connect to the bastion host, and then to the SAP HANA instance(s) by using an SSH client of your choice.
To connect to the SAP HANA database through SAP HANA Studio, use a remote desktop client to connect to the Windows Server instance. After connection, manually install SAP HANA Studio and access your SAP HANA database.
Performing post-deployment tasks
Before using your SAP HANA instance, we recommend that you perform the following post-deployment steps. For more information, see the SAP HANA Installation and Update Guide.
Update the SAP HANA software with the latest patches.
Install any additional components such as Application Function Libraries (AFL) or Smart Data Access (SDA).
Configure and back up your new SAP HANA database. For more information, see the SAP HANA Operations Guide.
- For more information about VM administration and monitoring, see the SAP HANA Operations Guide.