SAP HANA Planning Guide

This guide provides an overview of what is required to run SAP HANA on Google Cloud Platform (GCP), along with details that you can use when planning the implementation of a new SAP HANA system.

For details about how to deploy SAP HANA on GCP, see the SAP HANA Deployment Guide.

About SAP HANA on GCP

SAP HANA is an in-memory, column-oriented, relational database that provides high-performance analytics and real-time data processing. At the core of this real-time data platform is the SAP HANA database. Customers can take advantage of GCP's easy provisioning and its highly scalable, redundant infrastructure to run their business-critical workloads. GCP provides a set of physical assets, such as computers and hard disk drives, and virtual resources, such as Compute Engine virtual machines (VMs), located in Google data centers around the world.

When you deploy SAP HANA on GCP, you deploy to virtual machines running on Compute Engine. Compute Engine VMs provide persistent disks, which function similarly to physical disks in a desktop or a server, but are automatically managed for you by Compute Engine to ensure data redundancy and optimized performance.

GCP basics

GCP consists of many cloud-based services and products. When running SAP products on GCP, you mainly use the IaaS-based services offered through Compute Engine and Cloud Storage, as well as some platform-wide features and tools.

See the GCP platform overview for important concepts and terminology. This guide duplicates some information from the overview for convenience and context.

For an overview of considerations that enterprise-scale organizations should take into account when running on GCP, see best practices for enterprise organizations.

Interacting with GCP

GCP offers three main ways to interact with the platform, and your resources, in the cloud:

  • The Google Cloud Platform Console, which is a web-based user interface.
  • The gcloud command-line tool, which provides a superset of the functionality that the GCP Console offers (see the example commands after this list).
  • Client libraries, which provide APIs for accessing services and managing resources. Client libraries are useful when building your own tools.
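
For example, if you have the Cloud SDK installed, the following gcloud commands list and inspect Compute Engine VM instances in your current project. The instance name and zone shown are placeholders, not values from this guide:

    # List all Compute Engine VM instances in the currently configured project.
    gcloud compute instances list

    # Show details, including attached disks, for one instance in a specific zone.
    gcloud compute instances describe example-hana-vm --zone us-central1-a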

GCP services

SAP deployments typically utilize some or all of the following GCP services:

  • VPC Networking: Connects your VM instances to each other and to the internet. Each instance is a member of either a legacy network with a single global IP range, or of a recommended subnet network, where the instance belongs to a single subnetwork that is part of a larger network. Note that a network cannot span GCP projects, but a GCP project can have multiple networks.
  • Compute Engine: Creates and manages VMs with your choice of operating system and software stack.
  • Persistent disks: Available as either standard hard disk drives (HDD) or solid-state drives (SSD).
  • Google Cloud Platform Console: Browser-based user interface for managing your Compute Engine resources.
  • Cloud Deployment Manager: Lets you use a template to describe all of the Compute Engine resources and instances that you need. You don't have to individually create and configure the resources or figure out dependencies, because Deployment Manager does that for you.
  • Cloud Storage: You can copy your SAP HANA database backups to Cloud Storage for added durability and reliability, with replication.
  • Stackdriver Monitoring: Provides visibility into the deployment, performance, uptime, and health of Compute Engine, networking, and persistent disks. Stackdriver collects metrics, events, and metadata from GCP and uses them to generate insights through dashboards, charts, and alerts. You can monitor the compute metrics at no cost through Stackdriver Monitoring.
  • Cloud IAM: Provides unified control over permissions for GCP resources. You control who can perform control-plane operations on your VMs, including creating, modifying, and deleting VMs and persistent disks, and creating and modifying networks.
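
As an illustration of the VPC Networking entry above, the following gcloud commands sketch one way to create a custom-mode network and a single subnetwork for an SAP HANA deployment. The network name, subnetwork name, region, and IP range are placeholder assumptions; adjust them to your own environment:

    # Create a network that uses custom (manually defined) subnetworks.
    gcloud compute networks create example-hana-network --subnet-mode custom

    # Create one subnetwork in the region where you plan to deploy SAP HANA.
    gcloud compute networks subnets create example-hana-subnet \
        --network example-hana-network \
        --region us-central1 \
        --range 10.0.0.0/24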

Pricing and quotas

You can use the pricing calculator to estimate your usage costs. For more pricing information, see Compute Engine pricing, Cloud Storage pricing, and Stackdriver pricing.

GCP resources are subject to quotas. If you plan to use high-CPU or high-memory machines, you might need to request additional quota. For more information, see Compute Engine resource quotas.
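
To check your current quota and usage before requesting an increase, you can, for example, inspect the region that you plan to deploy in by using gcloud. The region name below is a placeholder:

    # Show per-region quotas (CPUs, SSD persistent disk, and so on) and current usage.
    gcloud compute regions describe us-central1

    # Show project-wide quotas and usage.
    gcloud compute project-info describe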

Resource requirements

VM types

The following list shows the VM types and operating systems that are certified by SAP for production use on GCP in both single-node installations and, except for the n1-ultramem-160 machine type, scale-out installations. Scale-out installations can include up to 15 worker nodes, for a total of 16 nodes.

Custom machine types are also certified by SAP, but with certain limitations. The number of vCPUs must be between 32 and 64, and the ratio of memory to vCPUs must be 6.5 GB per vCPU.
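
For example, a custom machine type with 32 vCPUs must have 32 x 6.5 GB = 208 GB of memory. The following is a minimal sketch of creating such a VM with gcloud; the instance name, zone, and image family are placeholder assumptions, not values from this guide:

    # 32 vCPUs at 6.5 GB per vCPU = 208 GB of memory.
    gcloud compute instances create example-hana-custom \
        --zone us-central1-a \
        --custom-cpu 32 \
        --custom-memory 208GB \
        --image-family sles-12-sp2-sap \
        --image-project suse-sap-cloud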

For more information about different instance types and their use cases, see machine types.

For information about the regions and zones a particular machine type is available in, see Available regions & zones.

  • n1-highmem-32: 32 vCPUs, 208 GB memory, Intel Broadwell CPU platform. Operating systems: RHEL 7.3 for SAP HANA, RHEL 7.4 for SAP HANA, SLES 11 SP4, SLES 12 SP1, SLES 12 SP1 for SAP, SLES 12 SP2, SLES 12 SP2 for SAP.
  • n1-highmem-64: 64 vCPUs, 416 GB memory, Intel Broadwell CPU platform. Operating systems: RHEL 7.3 for SAP HANA, RHEL 7.4 for SAP HANA, SLES 11 SP4, SLES 12 SP1, SLES 12 SP1 for SAP, SLES 12 SP2, SLES 12 SP2 for SAP.
  • n1-highmem-96: 96 vCPUs, 624 GB memory, Intel Skylake CPU platform. Operating systems: RHEL 7.3 for SAP HANA, RHEL 7.4 for SAP HANA, SLES 11 SP4, SLES 12 SP1, SLES 12 SP1 for SAP, SLES 12 SP2, SLES 12 SP2 for SAP.
  • n1-megamem-96: 96 vCPUs, 1,433 GB memory, Intel Skylake CPU platform. Operating systems: RHEL 7.3 for SAP HANA, RHEL 7.4 for SAP HANA, SLES 11 SP4, SLES 12 SP1, SLES 12 SP1 for SAP, SLES 12 SP2, SLES 12 SP2 for SAP.
  • n1-ultramem-160 (scale-up only): 160 vCPUs, up to 3,844 GB memory, Intel Broadwell CPU platform. Operating systems: RHEL 7.4 for SAP HANA, SLES 12 SP1, SLES 12 SP1 for SAP, SLES 12 SP2, SLES 12 SP2 for SAP, SLES 12 SP3, SLES 12 SP3 for SAP.
  • Custom machine types: 32 to 64 vCPUs, 6.5 GB memory per vCPU, Intel Broadwell CPU platform. Operating systems: RHEL 7.4 for SAP HANA, SLES 12 SP1, SLES 12 SP1 for SAP, SLES 12 SP2, SLES 12 SP2 for SAP, SLES 12 SP3, SLES 12 SP3 for SAP.
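
As a sketch of how one of the certified configurations might be provisioned, the following gcloud command creates an n1-highmem-32 VM on the Intel Broadwell platform without an external IP address. The instance name, zone, image family, and image project are assumptions for a SLES 12 SP2 for SAP public image; substitute whichever certified image you choose:

    gcloud compute instances create example-hana-vm \
        --zone us-central1-a \
        --machine-type n1-highmem-32 \
        --min-cpu-platform "Intel Broadwell" \
        --image-family sles-12-sp2-sap \
        --image-project suse-sap-cloud \
        --no-address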

Storage configuration

SAP HANA is an in-memory database, so data is mostly stored and processed in memory. Protection against data loss is provided by saving the data to a persistent storage location.

To achieve optimal performance, the storage solution used for SAP HANA data and log volumes should meet SAP's storage KPIs. Google has worked with SAP to certify SSD persistent disks for use as the storage solution for SAP HANA workloads, as long as you use one of the supported VM types. VMs with 32 or more vCPUs and a 1.7 TiB volume for data and log files can achieve up to 400 MB/sec for writes, and 800 MB/sec for reads.
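
For example, a 1,700 GB SSD persistent disk for the SAP HANA data and log volumes could be created and attached as follows; the disk name, instance name, and zone are placeholders:

    # Create the SSD persistent disk for /hana/data and /hana/log.
    gcloud compute disks create example-hana-data-log \
        --zone us-central1-a \
        --size 1700GB \
        --type pd-ssd

    # Attach the disk to the SAP HANA VM. Partitioning, formatting, and
    # mounting are done inside the guest operating system afterward.
    gcloud compute instances attach-disk example-hana-vm \
        --disk example-hana-data-log \
        --zone us-central1-a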

Storage for backups

Storage for SAP HANA backup is configured with standard HDD persistent disks. Standard HDD persistent disks are efficient and economical for handling sequential read-write operations, but are not optimized to handle high rates of random input-output operations per second (IOPS). SAP HANA uses sequential IO with large blocks to back up the database. Standard HDD persistent disks provide a low-cost, high-performance option for this scenario.
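
A standard HDD persistent disk for the backup volume can be created in the same way. The 448 GB size shown here matches the standard persistent disk value for n1-highmem-32 in the quota table later in this guide; the disk name, instance name, and zone are placeholders:

    gcloud compute disks create example-hana-backup \
        --zone us-central1-a \
        --size 448GB \
        --type pd-standard

    gcloud compute instances attach-disk example-hana-vm \
        --disk example-hana-backup \
        --zone us-central1-a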

The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput as well as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it easier to recover your database if necessary.

If you use SAP HANA dynamic tiering, the backup storage must be large enough to hold both the in-memory data and the data that is managed on disk by the dynamic tiering server.

SAP HANA dynamic tiering

SAP HANA dynamic tiering is certified by SAP for use in production environments on GCP. SAP HANA dynamic tiering extends SAP HANA data storage by storing data that is infrequently accessed on disk instead of in memory.

For more information, see SAP HANA Dynamic Tiering on GCP.

Memory configuration

See the supported VM types table.

Operating system selection

SAP HANA is certified to run on GCP on the Linux operating systems listed in the VM types table earlier in this guide.

You can use a Linux image that GCP provides and maintains (a public image), or you can provide and maintain your own Linux image (a custom image).

Use a custom image if the version of the SAP-certified operating system that you require is not available from GCP as a public image. The following steps, which are described in detail in Importing Boot Disk Images to Compute Engine, summarize the procedure for using a custom image:

  1. Prepare your boot disk so it can boot within the GCP Compute Engine environment and so you can access it after it boots.
  2. Create and compress the boot disk image file.
  3. Upload the image file to Cloud Storage and import the image to Compute Engine as a new custom image.
  4. Use the imported image to create a virtual machine instance and make sure it boots properly.
  5. Optimize the image and install the Linux Guest Environment so that your imported operating system image can communicate with the metadata server and use additional Compute Engine features.

After your custom image is ready, you can use it when creating VMs for your SAP HANA system.
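
The following is a minimal sketch of steps 3 and 4 of the preceding procedure, assuming that you have already prepared and compressed the boot disk image. The bucket name, image name, instance name, and zone are placeholder assumptions:

    # Step 3: upload the compressed image file and import it as a custom image.
    gsutil cp /path/to/example-image.tar.gz gs://example-image-bucket/
    gcloud compute images create example-custom-sles \
        --source-uri gs://example-image-bucket/example-image.tar.gz

    # Step 4: create a test VM from the imported image and verify that it boots.
    gcloud compute instances create example-image-test \
        --zone us-central1-a \
        --machine-type n1-standard-2 \
        --image example-custom-sles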

If you are moving a RHEL operating system from an on-premises installation to GCP, you need to add Red Hat Cloud Access to your Red Hat subscription. For more information, see Red Hat Cloud Access.

For more information about the operating system images that GCP provides, see Images.

For more information about importing an operating system into GCP as a custom image, see Importing Boot Disk Images to Compute Engine.

For more information about the operating systems that SAP HANA supports, see the SAP documentation.

Pricing and quota considerations for SAP HANA

You are responsible for the costs incurred for using the resources created by following this guide. Use the pricing calculator to help estimate your actual costs.

Quotas

If you have a new GCP account, or if you haven't previously requested an increased quota, you probably need to request a quota increase to complete this guide. View your existing quota and compare it with the values in the following table to see what increase to ask for. You can then request a quota-limit increase.

The following table shows quota values for single-node, scale-up SAP HANA systems by VM instance type. If you host SAP HANA Studio on GCP or use a NAT gateway and bastion host, add the values shown in the table to your total quota requirement.

Instance Type vCPUs Memory Standard PD SSD PD
n1-highmem-32 32 208 GB 448 GB 1,700 GB
n1-highmem-64 64 416 GB 864 GB 1,700 GB
n1-highmem-96 96 624 GB 1,280 GB 1,968 GB
n1-megamem-96 96 1,433 GB 2,898 GB 4,121 GB
n1-ultramem-160 160 3,844 GB 7,720 GB 10,148 GB
Bastion/NAT gateway 1 3.75 GB 8 GB 0 GB
SAP HANA Studio 1 3.75 GB 50 GB 0 GB

Licensing

Running SAP HANA on GCP requires you to bring your own license (BYOL).

For more information from SAP about managing your SAP HANA licenses, see License Keys for the SAP HANA Database.

Deployment architecture

SAP HANA on GCP supports single-node and multi-node architectures.

Single-node architecture

The following diagram shows the single-node architecture. In the diagram, notice both the deployment on GCP and the disk layout. You can use Cloud Storage to store copies of the local backups that are kept in /hanabackup. The /hanabackup mount should be sized equal to or greater than the data mount.

Deployment Layout

Notice that the VM for SAP HANA has no public IP address, which means that it cannot be reached from an external network. Instead, the deployment uses a NAT bastion host for outbound access and SAP HANA Studio for accessing the SAP HANA database. The SAP HANA Studio instance and the bastion host are deployed in the same subnetwork as the SAP HANA instance.

You provision a Windows host on which you install SAP HANA Studio. The instance is placed in the same subnetwork, with firewall rules that enable you to connect to the SAP HANA database from SAP HANA Studio.
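
A hedged example of such a firewall rule is shown below. It assumes SAP HANA instance number 00 (SQL ports 30013 and 30015) and reuses the placeholder network name and subnetwork range from the earlier examples in this guide; adjust the ports and source range to your own installation:

    gcloud compute firewall-rules create example-allow-hana-studio \
        --network example-hana-network \
        --allow tcp:30013,tcp:30015 \
        --source-ranges 10.0.0.0/24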

You deploy SAP HANA using a single-node, scale-up architecture that has the following components:

  • One Compute Engine instance for the SAP HANA database, with SSD persistent disks larger than 1.7 TB and network bandwidth of up to 16 Gbps. The SSD persistent disk is partitioned and mounted at /hana/data to host the data.

  • An optional, but recommended, subnetwork with a custom topology and IP ranges in the GCP region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork for SAP HANA.

  • An optional, but recommended, Internet gateway configured for outbound Internet access for your SAP HANA and other instances. This guide assumes you are using this gateway.

  • Compute Engine firewall rules restricting access to instances.

  • Persistent disk for backup of SAP HANA database.

  • Compute Engine VM, n1-standard-2, with a Windows OS to host SAP HANA Studio.

  • Compute Engine VM, n1-standard-1, to serve as a bastion host (see the example route configuration after this list).

  • Automated SAP HANA database installation with a configuration file that you create from a template.

  • SAP HANA Studio.
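
The following sketch shows one way the bastion host and NAT routing described above might be set up with gcloud. All names, the zone, the image, and the route priority are placeholder assumptions; the bastion also needs NAT configured in the guest (for example, an iptables MASQUERADE rule), which is not shown here:

    # Create the n1-standard-1 bastion host with IP forwarding enabled.
    gcloud compute instances create example-nat-bastion \
        --zone us-central1-a \
        --machine-type n1-standard-1 \
        --can-ip-forward \
        --image-family debian-9 \
        --image-project debian-cloud

    # Route outbound internet traffic from tagged, no-external-IP instances
    # (such as the SAP HANA VM) through the bastion host.
    gcloud compute routes create example-no-ip-internet-route \
        --network example-hana-network \
        --destination-range 0.0.0.0/0 \
        --next-hop-instance example-nat-bastion \
        --next-hop-instance-zone us-central1-a \
        --tags no-ip \
        --priority 800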

Multi-node architecture

The following diagram shows the multi-node architecture that this guide describes.

Multi-node architecture diagram.

As the workload demand increases, especially when using OLAP, a multi-node, scale-out architecture can distribute the load across all nodes.

The scale-out architecture consists of one master node and a number of worker nodes, which are interconnected through a network with a capacity of up to 16 Gbps. Each node has its own /hana/data, /hana/log, and /usr/sap volumes on SSD persistent disks, which provide consistent, high-IOPS I/O. The master node also serves as an NFS master for the /hana/shared and /hanabackup volumes, which are mounted on each worker node. Compute Engine's live migration and automatic instance restart features help provide high availability. For more information, see the SAP HANA Operations Guide.
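
For example, each worker node could mount the NFS exports from the master node with /etc/fstab entries like the following. The hostname example-hana-master is a placeholder, and the exports must already be configured on the master node:

    # /etc/fstab entries on each worker node (placeholder master hostname).
    example-hana-master:/hana/shared   /hana/shared   nfs  defaults,nofail  0 0
    example-hana-master:/hanabackup    /hanabackup    nfs  defaults,nofail  0 0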

In the event of a live migration or an automatic instance restart, the /hana/shared and /hanabackup volumes, which are backed by persistent storage, come back online as soon as the instance is up.

Backing up directly to Cloud Storage buckets by using Cloud Storage FUSE is currently not supported. However, if you want to use Cloud Storage for your backups, you can mount a Cloud Storage bucket at /hanabackup_gcs and create a script that periodically moves your backups from /hanabackup to /hanabackup_gcs.
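
A minimal sketch of that workaround follows, assuming a placeholder bucket named example-hana-backups and the Cloud Storage FUSE (gcsfuse) tool installed on the VM:

    # Mount the Cloud Storage bucket at /hanabackup_gcs by using Cloud Storage FUSE.
    mkdir -p /hanabackup_gcs
    gcsfuse example-hana-backups /hanabackup_gcs

    # Example root crontab entry: move completed backups from the local backup
    # volume to the mounted bucket every night at 01:00.
    0 1 * * * mv /hanabackup/* /hanabackup_gcs/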

You deploy SAP HANA on a multi-node scale-out architecture with the following components:

  • One Compute Engine instance of a type referenced in VM types, deployed by using the setup script that is detailed later in this guide. The script creates the additional required VM instances and automatically adds the required SSD and HDD storage to each host, mounted in the correct locations. It also creates a central NFS location for the shared SAP HANA binaries and the /hanabackup location, and finally installs SAP HANA across the entire cluster automatically.

  • An optional, but recommended, subnetwork with a custom topology and IP ranges in the GCP region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork if you prefer.

  • Optional Internet gateway configured for outbound Internet access for your SAP HANA instance and other instances. This guide assumes you are using this gateway.

  • Optional Compute Engine VM: n1-standard-2 with Windows operating system to host SAP HANA Studio.

  • Optional Compute Engine VM: n1-standard-1 as a bastion host.

  • Compute Engine firewall rules to restrict access to Compute Engine instances.

Support

Google Cloud Platform customers with Gold or Platinum Support can request assistance with SAP HANA provisioning and configuration questions on Compute Engine. You can find additional information about support options at the Google Cloud Platform Support page.

You can also contact SAP support for SAP-related issues. SAP does the initial evaluation of the support ticket and transfers it to the Google queue if SAP determines that it is an infrastructure issue.

What's next

For details about how to deploy SAP HANA on GCP, see the SAP HANA Deployment Guide.