SAP HANA planning guide

This guide provides an overview of what is required to run SAP HANA on Google Cloud, and provides details that you can use when planning the implementation of a new SAP HANA system.

For details about how to deploy SAP HANA on Google Cloud, see the SAP HANA Deployment Guide.

About SAP HANA on Google Cloud

SAP HANA is an in-memory, column-oriented, relational database that provides high-performance analytics and real-time data processing. You can take advantage of the easy provisioning, high scalability, and redundancy of Google Cloud infrastructure to run your business-critical workloads. Google Cloud provides a set of physical assets, such as computers and hard disk drives, and virtual resources, such as Compute Engine virtual machines (VMs), located in Google data centers around the world.

When you deploy SAP HANA on Google Cloud, you deploy to virtual machines running on Compute Engine. Compute Engine VMs provide persistent disks, which function similarly to physical disks in a desktop or a server, but are automatically managed for you by Compute Engine to ensure data redundancy and optimized performance.

Google Cloud basics

Google Cloud consists of many cloud-based services and products. When running SAP products on Google Cloud, you mainly use the IaaS-based services offered through Compute Engine and Cloud Storage, as well as some platform-wide features and tools.

See the Google Cloud platform overview for important concepts and terminology. This guide duplicates some information from the overview for convenience and context.

For an overview of considerations that enterprise-scale organizations should take into account when running on Google Cloud, see best practices for enterprise organizations.

Interacting with Google Cloud

Google Cloud offers three main ways to interact with the platform, and your resources, in the cloud:

  • The Google Cloud Console, which is a web-based user interface.
  • The gcloud command-line tool, which provides a superset of the functionality that Cloud Console offers.
  • Client libraries, which provide APIs for accessing services and management of resources. Client libraries are useful when building your own tools.
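As a quick illustration of the command-line option, the following commands authenticate, select a project, and list the Compute Engine VMs in it. The project ID is a placeholder:

```shell
# Authenticate and point the gcloud tool at a project.
gcloud auth login
gcloud config set project my-sap-project

# List the Compute Engine VM instances in the project.
gcloud compute instances list
```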

Google Cloud services

SAP deployments typically utilize some or all of the following Google Cloud services:

| Service | Description |
|---|---|
| VPC Networking | Connects your VM instances to each other and to the internet. Each instance is a member of either a legacy network with a single global IP range, or a recommended subnet network, where the instance is a member of a single subnetwork that is a member of a larger network. A network cannot span Google Cloud projects, but a Google Cloud project can have multiple networks. |
| Compute Engine | Creates and manages VMs with your choice of operating system and software stack. |
| Persistent disks | Persistent disks are available as either standard hard disk drives (HDD) or solid-state drives (SSD). |
| Google Cloud Console | Browser-based tool for managing Compute Engine resources. Use a template to describe all of the Compute Engine resources and instances you need. You don't have to individually create and configure the resources or figure out dependencies, because the Cloud Console does that for you. |
| Cloud Storage | You can back up your SAP database backups to Cloud Storage for added durability and reliability, with replication. |
| Cloud Monitoring | Provides visibility into the deployment, performance, uptime, and health of Compute Engine, the network, and persistent disks. Monitoring collects metrics, events, and metadata from Google Cloud and uses these to generate insights through dashboards, charts, and alerts. You can monitor compute metrics at no cost through Monitoring. |
| IAM | Provides unified control over permissions for Google Cloud resources. Control who can perform control-plane operations on your VMs, including creating, modifying, and deleting VMs and persistent disks, and creating and modifying networks. |

Pricing and quotas

You can use the pricing calculator to estimate your usage costs. For more pricing information, see Compute Engine pricing, Cloud Storage pricing, and Google Cloud's operations suite pricing.

Google Cloud resources are subject to quotas. If you plan to use high-CPU or high-memory machines, you might need to request additional quota. For more information, see Compute Engine resource quotas.
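As a sketch, you can review your current limits and usage before requesting a quota increase; the describe output for a region includes a quotas section. The region name here is an example:

```shell
# The output includes a quotas list with metric, limit, and usage fields,
# for example CPUS and SSD_TOTAL_GB.
gcloud compute regions describe us-central1
```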

Resource requirements

Certified machine types for SAP HANA

The following table shows the Google Cloud machine types that are certified by SAP for production use. The machine types include both Compute Engine virtual machines (VMs) and Bare Metal Solution bare-metal machines.

Except where noted in the table, SAP supports the machine types in both single-host (scale-up) and multi-host (scale-out) installations. Scale-out installations can include up to 15 worker hosts, for a total of 16 hosts.

Custom configurations of the general-purpose n1- and n2-highmem VM types are also certified by SAP. For more information, see Certified custom VM types for SAP HANA.

For the operating systems that are certified for use with HANA on each machine type, see Certified operating systems for SAP HANA.

For more information about different Compute Engine VM types and their use cases, see machine types.

Some machine types are not available in all Google Cloud regions. To check the regional availability of a Compute Engine virtual machine, see Available regions & zones. For Bare Metal Solution machines that are certified for SAP HANA, see Regional availability of Bare Metal Solution machines for SAP HANA.

SAP lists the certified machine types for SAP HANA in the SAP HANA Hardware Directory.

The SAPS numbers for each machine type can be found on the Certifications for SAP page.

| Machine type | vCPU | Memory | Operating system | CPU platform | Notes |
|---|---|---|---|---|---|
| N1 high-memory, general-purpose VM types | | | | | |
| n1-highmem-32 | 32 | 208 GB | RHEL, SUSE | Intel Broadwell | NetApp CVS-Performance certified for scale up. |
| n1-highmem-64 | 64 | 416 GB | RHEL, SUSE | Intel Broadwell | NetApp CVS-Performance certified for scale up. |
| n1-highmem-96 | 96 | 624 GB | RHEL, SUSE | Intel Skylake | NetApp CVS-Performance certified for scale up. |
| N2 high-memory, general-purpose VM types | | | | | |
| n2-highmem-32 | 32 | Up to 256 GB | RHEL, SUSE | Intel Cascade Lake | Scale up only. NetApp CVS-Performance certified for scale up. |
| n2-highmem-48 | 48 | Up to 384 GB | RHEL, SUSE | Intel Cascade Lake | Scale up only. NetApp CVS-Performance certified for scale up. |
| n2-highmem-64 | 64 | Up to 512 GB | RHEL, SUSE | Intel Cascade Lake | Scale up only. NetApp CVS-Performance certified for scale up. |
| n2-highmem-80 | 80 | Up to 640 GB | RHEL, SUSE | Intel Cascade Lake | Scale up only. NetApp CVS-Performance certified for scale up. |
| M1 memory-optimized VM types | | | | | |
| m1-megamem-96 | 96 | 1,433 GB | RHEL, SUSE | Intel Skylake | NetApp CVS-Performance certified for scale up. |
| m1-ultramem-40 | 40 | Up to 961 GB | RHEL, SUSE | Intel Broadwell | Scale up only. OLTP workloads only. NetApp CVS-Performance certified for scale up. |
| m1-ultramem-80 | 80 | Up to 1,922 GB | RHEL, SUSE | Intel Broadwell | Scale up only. OLTP workloads only. NetApp CVS-Performance certified for scale up. |
| m1-ultramem-160 | 160 | Up to 3,844 GB | RHEL, SUSE | Intel Broadwell | OLAP workloads certified for scale up and scale out up to 16 nodes. OLTP workloads certified for scale up only. NetApp CVS-Performance certified for scale up only. |
| M2 memory-optimized VM types | | | | | |
| m2-megamem-416 | 416 | Up to 5,888 GB | RHEL, SUSE | Intel Cascade Lake | OLAP workloads certified for scale up and scale out up to 16 nodes. OLTP workloads certified for scale up only. NetApp CVS-Performance certified for scale up only. |
| m2-ultramem-208 | 208 | Up to 5,888 GB | RHEL, SUSE | Intel Cascade Lake | Scale up only. OLTP workloads only. NetApp CVS-Performance certified for scale up. |
| m2-ultramem-416 | 416 | Up to 11,776 GB | RHEL, SUSE | Intel Cascade Lake-SP | OLAP workloads are certified with workload-based sizing for scale up or scale out up to 16 nodes. OLTP workloads are certified for scale up or scale out up to 4 nodes. Certification for OLTP scale out includes SAP S/4HANA; for scale out with S/4HANA, see SAP Note 2408419. NetApp CVS-Performance is certified with scale up only. |
| O2 memory-optimized Bare Metal Solution machine types | | | | | |
| o2-ultramem-672-metal | 672 | Up to 18 TB | RHEL, SUSE | Intel Cascade Lake | 12 sockets. Scale up in a three-tier architecture only. OLTP workloads only. Standard sizing. |
| o2-ultramem-896-metal | 896 | Up to 24 TB | RHEL, SUSE | Intel Cascade Lake | 16 sockets. Scale up in a three-tier architecture only. OLTP workloads only. Standard sizing. |

Certified custom VM types for SAP HANA

The following table shows the customizable Compute Engine virtual machine (VM) types that are certified by SAP for production use of SAP HANA on Google Cloud.

SAP certifies only a subset of the custom VM type configurations that Compute Engine supports.

Custom VM configurations are subject to customization rules that are defined by Compute Engine. The rules differ depending on which machine type you are customizing. For complete customization rules, see Creating a VM Instance with a Custom Machine Type.

| Base Google Cloud instance type | vCPU | Memory (GB) | Operating system | CPU platform |
|---|---|---|---|---|
| N1-highmem | A number of vCPUs from 32 to 64 that is evenly divisible by 2 | 6.5 GB for each vCPU | RHEL, SUSE | Intel Broadwell |
| N2-highmem (scale up only) | A number of vCPUs from 32 to 64 that is evenly divisible by 4 | 8 GB for each vCPU | RHEL, SUSE | Intel Cascade Lake |
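As a sketch, creating a custom N1 VM at the certified ratio of 6.5 GB of memory per vCPU might look like the following. The instance name, zone, and image choice are placeholder assumptions:

```shell
# Create a custom N1 VM with 48 vCPUs and 312 GB of memory (48 x 6.5 GB).
gcloud compute instances create hana-custom-vm \
    --zone=us-central1-a \
    --custom-cpu=48 \
    --custom-memory=312GB \
    --image-family=sles-15-sp2-sap \
    --image-project=suse-sap-cloud
```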

Regional availability of Bare Metal Solution machines for SAP HANA

The following table shows the current Google Cloud regions that support SAP HANA on Bare Metal Solution.

| Region | Location |
|---|---|
| europe-west3 | Frankfurt, Germany, Europe |
| europe-west4 | Eemshaven, Netherlands, Europe |
| us-central1 | Council Bluffs, Iowa, USA, North America |
| us-east4 | Ashburn, Virginia, USA, North America |
| us-west2 | Los Angeles, California, USA, North America |

If you do not see the region that you need in the preceding table, contact Google Cloud Sales.

Memory configuration

Your memory configuration options are determined by the Compute Engine VM instance type that you choose. For more information, see the supported VM types table.

Certified operating systems for SAP HANA

The following table shows the Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) operating systems that are certified by SAP for production use with SAP HANA on Google Cloud.

Except where noted in the table, each operating system is supported with SAP HANA on all certified Compute Engine VM types.

For information about the current support status of each operating system and which operating systems are available from Google Cloud, see Operating system support for SAP HANA on GCP.

For information from SAP about which operating systems SAP supports with SAP HANA on Google Cloud, see the SAP HANA Hardware Directory.

The following table does not include:

  • Certified operating system versions that are no longer in mainstream support.
  • Operating system versions that are not specific to SAP.

| Operating system | Version | Unsupported machine types |
|---|---|---|
| RHEL for SAP | 7.6 | |
| RHEL for SAP | 7.7 | |
| RHEL for SAP | 8.1 | |
| SLES for SAP | 12 SP3 | m1-megamem, n1-highmem, o2-ultramem |
| SLES for SAP | 12 SP4 | |
| SLES for SAP | 12 SP5 | |
| SLES for SAP | 15 | |
| SLES for SAP | 15 SP1 | |
| SLES for SAP | 15 SP2 | o2-ultramem |

Custom operating system images

You can use a Linux image that Google Cloud provides and maintains (a public image) or you can provide and maintain your own Linux image (a custom image).

Use a custom image if the version of the SAP-certified operating system that you require is not available from Google Cloud as a public image. The following steps, which are described in detail in Importing Boot Disk Images to Compute Engine, summarize the procedure for using a custom image:

  1. Prepare your boot disk so it can boot within the Google Cloud Compute Engine environment and so you can access it after it boots.
  2. Create and compress the boot disk image file.
  3. Upload the image file to Cloud Storage and import the image to Compute Engine as a new custom image.
  4. Use the imported image to create a virtual machine instance and make sure it boots properly.
  5. Optimize the image and install the Linux Guest Environment so that your imported operating system image can communicate with the metadata server and use additional Compute Engine features.

After your custom image is ready, you can use it when creating VMs for your SAP HANA system.

If you are moving a RHEL operating system from an on-premises installation to Google Cloud, you need to add Red Hat Cloud Access to your Red Hat subscription. For more information, see Red Hat Cloud Access.

For more information about the operating system images that Google Cloud provides, see Images.

For more information about importing an operating system into Google Cloud as a custom image, see Importing Boot Disk Images to Compute Engine.

For more information about the operating systems that SAP HANA supports, see:

OS clocksource on Compute Engine VMs

The default OS clocksource is kvm-clock for SLES and TSC for RHEL images.

Changing the OS clocksource is not necessary when SAP HANA is running on a Compute Engine VM. There is no difference in performance when using kvm-clock or TSC as the clocksource for Compute Engine VMs with SAP HANA.

If you need to change the OS clocksource to TSC, connect to your VM by using SSH and issue the following commands:

# Set the clocksource for the running system
echo "tsc" | sudo tee /sys/devices/system/clocksource/*/current_clocksource

# Back up the GRUB defaults, then add the setting to the kernel command
# line so that it persists across reboots
sudo cp /etc/default/grub /etc/default/grub.backup
sudo sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
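To confirm that the change took effect, you can read the active clocksource back. The `clocksource0` device name assumes a single clocksource device, which is typical on Compute Engine VMs:

```shell
# Prints the name of the clocksource currently in use, for example "tsc".
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
```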

Persistent disk storage

For persistent block storage, you can attach Compute Engine persistent disks when you create your VMs or add them to your VMs later.

Compute Engine offers different types of persistent disks based on either solid-state drive (SSD) technology or standard hard-disk drive technology. Each type has different performance characteristics. Google Cloud manages the underlying hardware of persistent disks to ensure data redundancy and to optimize performance.

For performance reasons, the SAP HANA /hana/data and /hana/log volumes require SSD-based persistent disks. SSD-based persistent disks include the SSD (pd-ssd) and balanced (pd-balanced) persistent disk types.

For the boot disk and other SAP HANA volumes that do not need the same high performance that the /hana/data and /hana/log volumes do, you can use the following disk types in a production instance of SAP HANA:

  • For the /shared volume, you can either map it to the same SSD-based persistent disk as the /hana/data and /hana/log volumes or, if you map it to its own disk, you can use a pd-balanced persistent disk.
  • If you save your backups to a persistent disk, use a standard persistent disk (pd-standard) for the /hanabackup volume.
  • When you create the host VM, use a pd-balanced persistent disk for the boot disk.
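For example, an SSD persistent disk for the /hana/data and /hana/log volumes could be created and attached with the gcloud tool. The disk, VM, and zone names are placeholders, and the size matches the minimum for an n1-highmem-32 host:

```shell
# Create an SSD persistent disk for the data and log volumes.
gcloud compute disks create hana-data-log-disk \
    --size=834GB \
    --type=pd-ssd \
    --zone=us-central1-a

# Attach the disk to an existing VM.
gcloud compute instances attach-disk hana-vm \
    --disk=hana-data-log-disk \
    --zone=us-central1-a
```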
The following figure shows approximate performance numbers for different persistent disks in suggested architectures for SAP HANA on Google Cloud. The actual numbers you might see in a similar configuration are likely to differ for a variety of reasons, including improvements made by Compute Engine over time.

Two SAP HANA systems are shown: the left one has `/hana/shared` on its own
balanced persistent disk and `/hana/data` and `/hana/log` on together on an SSD
persistent disk. The other system has `/hana/data`, `/hana/log`, and
`/hana/shared` together on a single SSD persistent disk, which is the
recommended architecture.

In the configuration on the left in the preceding figure, the /hana/data and /hana/log volumes are on an SSD persistent disk and the /hana/shared volume, which doesn't require as high performance, is on a balanced persistent disk, which costs less than an SSD persistent disk.

In the configuration on the right, the /hana/data, /hana/log, and /hana/shared volumes are all on a single SSD disk. This provides slightly better performance with one less disk to manage than the split model, where the /hana/shared volume is by itself on a balanced persistent disk. Persistent disks are located independently from your VMs, so you can detach or move persistent disks to keep your data, even after you delete your VMs.

In the Cloud Console, you can see the persistent disks that are attached to your VM instances under Additional disks on the VM instance details page for each VM instance.

For more information about the different types of Compute Engine persistent disks, their performance characteristics, and how to work with them, see the Compute Engine documentation:

Minimum sizes for SSD and balanced persistent disks

Within limits, which are described in Block storage performance, the performance of SSD persistent disks and balanced persistent disks increases as the size of the disk and the number of vCPUs increase.

The following table shows the recommended sizes for SSD and balanced persistent disks in a production environment for each certified Compute Engine VM type. The sizes assume that the /hana/data, /hana/log, and /hana/shared volumes are all mapped to the disk. If your system is particularly performance sensitive, using pd-ssd is recommended for the best performance.

At a minimum, SAP HANA requires a sustained throughput of 400 MB per second for reads and writes, which is what an 834 GB pd-ssd or a 1,429 GB pd-balanced provides. The sizes listed in the table for each VM type are the persistent disk sizes that provide the SAP HANA performance that was required for the certification of that VM type.

As the persistent disk sizes in the table increase to accommodate the larger machine memory and data sizes, the throughput also increases up to the architectural limits that are described in Block storage performance.

| Compute Engine VM type | pd-ssd (GB) | pd-balanced (GB) |
|---|---|---|
| n1-highmem-32 | 834 | 1,429 |
| n1-highmem-64 | 1,155 | 1,980 |
| n1-highmem-96 | 1,716 | 2,942 |
| n2-highmem-32 | 834 | 1,429 |
| n2-highmem-48 | 1,068 | 1,831 |
| n2-highmem-64 | 1,414 | 2,424 |
| n2-highmem-80 | 1,760 | 3,017 |
| m1-megamem-96 | 3,287 | 4,286 |
| m1-ultramem-40 | 2,626 | 4,286 |
| m1-ultramem-80 | 3,874 | 4,286 |
| m1-ultramem-160 | 6,180 | 6,180 |
| m2-megamem-416 | 8,667 | 8,667 |
| m2-ultramem-208 | 8,667 | 8,667 |
| m2-ultramem-416 | 15,766 | 15,766 |

Determining persistent disk size

Calculate the amount of persistent disk storage that you need for the SAP HANA volumes based on the amount of memory that your selected Compute Engine VM type contains.

Persistent disk size requirements for scale-up systems

For SAP HANA scale-up systems, use the following formulas for each volume:

  • /hana/data: 1.2 x memory
  • /hana/log: either 0.5 x memory (adjusted to be a multiple of 64, if necessary) or 512 GB, whichever is smaller
  • /hana/shared: either 1 x memory or 1,024 GB, whichever is smaller
  • /usr/sap: 32 GB
  • /hanabackup: 2 x memory, optional allocation
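As a worked sketch of these formulas, the following shell arithmetic sizes the volumes for a host with 256 GB of memory, such as an n2-highmem-32. Integer arithmetic truncates the 1.2 multiplier to 307 GB:

```shell
mem=256                                   # memory of the VM type, in GB

data=$(( mem * 12 / 10 ))                 # /hana/data: 1.2 x memory
log=$(( mem / 2 ))                        # /hana/log: 0.5 x memory...
log=$(( (log + 63) / 64 * 64 ))           # ...rounded up to a multiple of 64...
if [ "$log" -gt 512 ]; then log=512; fi   # ...and capped at 512 GB
shared=$(( mem < 1024 ? mem : 1024 ))     # /hana/shared: min(memory, 1,024 GB)
usrsap=32                                 # /usr/sap: fixed 32 GB

total=$(( data + log + shared + usrsap ))
echo "data=${data} log=${log} shared=${shared} usrsap=${usrsap} total=${total}"
```

For this VM type, the total comes to 723 GB, which matches the n2-highmem-32 example later in this guide.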
Persistent disk size requirements for scale-out systems

For SAP HANA scale-out systems, use the same formulas as for SAP HANA scale-up systems for the /hana/data and /hana/log volumes. For the /hana/shared volume, calculate the persistent disk size based on the number of worker hosts in your deployment: for every four worker hosts, increase the disk size by 1 x memory. For example:

  • From 1 to 4 worker hosts: 1 x memory
  • From 5 to 8 worker hosts: 2 x memory
  • From 9 to 12 worker hosts: 3 x memory
  • From 13 to 16 worker hosts: 4 x memory
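The /hana/shared rule above is a ceiling division by four, which can be sketched as:

```shell
mem=1433       # memory per host in GB (for example, m1-megamem-96)
workers=6      # number of worker hosts

# Round the worker count up to the next multiple of four, then multiply.
shared=$(( (workers + 3) / 4 * mem ))
echo "/hana/shared: ${shared} GB"
```

With 6 worker hosts, this yields 2 x memory, or 2,866 GB.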

To determine your overall storage quota requirements for SAP HANA scale-out systems, you need to total up the disk sizes for each type of disk that is used with all of the hosts in the scale-out system. For example, if you put /hana/data and /hana/log on pd-ssd persistent disks, but /hana/shared on a pd-balanced persistent disk, then you need separate totals for pd-ssd and pd-balanced to request separate quotas.

For an SAP HANA scale-out system with host auto-failover, you only need to calculate the persistent disk size for the master and worker hosts. The standby hosts do not have their own /hana/data, /hana/log, and /usr/sap volumes. If a host fails, SAP HANA automatic failover unmounts the /hana/data, /hana/log, and /usr/sap volumes from the failed host and mounts them on the standby host. The /hana/shared and /hanabackup volumes for a standby host are mounted on a separately deployed NFS solution.

Minimum persistent disk size for performance

Select a persistent disk size that is no smaller than the minimum size that is listed for your persistent disk type in Minimum sizes for SSD and balanced persistent disks.

For example, if you are running SAP HANA on an n2-highmem-32 VM instance, which has 256 GB of memory, your total storage requirement for the SAP HANA volumes is 723 GB. However, if you use an SSD persistent disk, the required minimum size is 834 GB, so you need to size your persistent disk at 834 GB or larger.

Apply any excess persistent disk storage to the /hana/data volume.

For information from SAP about sizing for SAP HANA, see Sizing SAP HANA.

Persistent disks deployed by the Deployment Manager templates

When you deploy an SAP HANA system by using the Cloud Deployment Manager scripts that Google Cloud provides, Deployment Manager allocates two persistent disks for SAP HANA:

  • A single SSD persistent disk for the /hana/data, /hana/log, /usr/sap, and /hana/shared directories
  • Optionally, a standard HDD persistent disk for the /hanabackup directory

Deployment Manager maps each of the SAP HANA /hana/data, /hana/log, /usr/sap, and /hana/shared directories to its own logical volume for easy resizing, and places them all in a single volume group on the SSD persistent disk.

Deployment Manager maps the /hanabackup directory to a logical volume in a separate volume group, which it then maps to a standard HDD persistent disk.

The following example shows how Deployment Manager maps the volumes for SAP HANA on a Compute Engine n2-highmem-32 VM, which has 256 GB of memory.

In the example, the vg_hana volume group is mapped to a single 834 GB SSD persistent disk, which is the required minimum size. With 256 GB of memory, the SAP HANA volumes require only about 723 GB of storage in total. To use all of the storage on the persistent disk, Deployment Manager allocated the excess disk space to the data volume. Deployment Manager sized the backup volume at 512 GB, double the memory, and mapped it to a standard persistent disk of the same size.

hana-ssd-example:~ # lvs
  LV     VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data   vg_hana       -wi-ao---- 426.00g
  log    vg_hana       -wi-ao---- 125.00g
  sap    vg_hana       -wi-ao----  32.00g
  shared vg_hana       -wi-ao---- 251.00g
  backup vg_hanabackup -wi-ao---- 512.00g

The sizes of your volumes for the same VM type might differ slightly from what is shown in the example.
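Because each directory is its own logical volume, growing one after adding persistent disk capacity is a two-step operation. A sketch, assuming the volume names from the preceding example and an XFS file system:

```shell
# Grow the data logical volume by 100 GB.
sudo lvextend -L +100G /dev/vg_hana/data

# Grow the file system to use the new space.
sudo xfs_growfs /hana/data
```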

Optional persistent disk storage for backups

When storing SAP HANA backups on a persistent disk, use standard HDD persistent disks. Standard HDD persistent disks are efficient and economical for handling sequential read-write operations, but are not optimized to handle high rates of random input-output operations per second (IOPS). SAP HANA uses sequential IO with large blocks to back up the database. Standard HDD persistent disks provide a low-cost, high-performance option for this scenario.

The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput as well as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it easier to recover your database if necessary.

To make the SAP HANA backups available as a regional resource for disaster recovery, you can use Compute Engine persistent disk snapshots. You can schedule snapshots to regularly and automatically back up your persistent disk. For more information, see Persistent disk snapshots.
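As a sketch, a daily snapshot schedule could be created and attached to the backup disk with the gcloud tool. The policy, disk, region, and zone names are placeholders:

```shell
# Create a snapshot schedule: one snapshot per day, kept for 14 days.
gcloud compute resource-policies create snapshot-schedule daily-hanabackup \
    --region=us-central1 \
    --max-retention-days=14 \
    --daily-schedule \
    --start-time=22:00

# Attach the schedule to the persistent disk that holds /hanabackup.
gcloud compute disks add-resource-policies hana-backup-disk \
    --resource-policies=daily-hanabackup \
    --zone=us-central1-a
```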

If you use SAP HANA dynamic tiering, the backup storage must be large enough to hold both the in-memory data and the data that is managed on disk by the dynamic tiering server.

You can use other mechanisms for storing SAP HANA backups. If you use the Cloud Storage Backint agent for SAP HANA, you can back up SAP HANA directly to a Cloud Storage bucket, which makes the use of a persistent disk for storing backups optional.

SAP HANA dynamic tiering

SAP HANA dynamic tiering is certified by SAP for use in production environments on Google Cloud. SAP HANA dynamic tiering extends SAP HANA data storage by storing data that is infrequently accessed on disk instead of in memory.

For more information, see SAP HANA Dynamic Tiering on Google Cloud.

SAP HANA Fast Restart option

For SAP HANA 2.0 SP04 and later, Google Cloud recommends the SAP HANA Fast Restart option.

SAP HANA Fast Restart reduces restart times in the event that SAP HANA terminates, but the operating system remains running. To reduce the restart time, SAP HANA leverages the SAP HANA persistent memory functionality to preserve MAIN data fragments of column store tables in DRAM that is mapped to the tmpfs file system.

Additionally, on VMs in the M2 family of Compute Engine memory-optimized VM types, SAP HANA Fast Restart improves recovery time if uncorrectable errors occur in memory. For more information, see Memory-error recovery with Fast Restart on Compute Engine VMs.

For information about configuring SAP HANA Fast Restart, see the configuration information in the deployment guide for your SAP HANA deployment scenario. For example, for a scale-up or scale-out without standby nodes deployment scenario, see Configuring SAP HANA Fast Restart.

Required OS settings for SAP HANA Fast Restart

To use SAP HANA Fast Restart, your operating system must be tuned as required by SAP.

If you use the Deployment Manager templates that Google Cloud provides, the templates configure the kernel settings for you.

If you don't use the Deployment Manager templates, SAP provides guidance for configuring both the RHEL and SLES operating systems for SAP HANA. For SAP HANA Fast Restart, pay particular attention to setting numa_balancing and transparent_hugepage correctly.
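You can inspect both settings directly. SAP's guidance for SAP HANA hosts is to disable automatic NUMA balancing and transparent hugepages:

```shell
# 0 means automatic NUMA balancing is disabled.
cat /proc/sys/kernel/numa_balancing

# The bracketed value is the active mode; [never] disables transparent hugepages.
cat /sys/kernel/mm/transparent_hugepage/enabled
```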

If you use RHEL, use the sap-hana tuned profile, if it is available. For the configuration steps, see:

If you use SLES, use the saptune tool from SUSE to apply the required configuration. To apply all of the recommended SAP HANA settings, including both of the preceding kernel parameters, specify the following saptune command:

saptune solution apply HANA

For more information about configuring SLES for SAP HANA, see:

Memory-error recovery with Fast Restart on Compute Engine VMs

Enabling SAP HANA Fast Restart on VMs in the M2 family of Compute Engine memory-optimized VM types reduces the time it takes SAP HANA to recover from uncorrectable memory errors.

By leveraging Intel processor capabilities, the M2 VM types can keep running when uncorrectable errors occur in the memory subsystem. If SAP HANA Fast Restart is enabled when the memory error occurs, the affected SAP HANA process restarts, but the whole database doesn't need to be reloaded, only the affected file block.

VM types that support memory-error recovery

Currently, the following Compute Engine VM types support memory-error recovery:

  • m2-megamem-416
  • m2-ultramem-208
  • m2-ultramem-416
Required operating systems for memory-error recovery

With the required kernel patches, the following operating systems support memory-error recovery with SAP HANA Fast Restart:

  • SUSE Linux Enterprise Server (SLES) for SAP, 12 SP3 or later.
    • Included in Compute Engine public images with a build date of v202103* or later.
    • If you need to apply the latest kernel patches to an existing deployment, follow your standard update process. For example, issue the following commands:
      • sudo zypper refresh
      • sudo zypper update
  • Red Hat Enterprise Linux (RHEL) for SAP, 8.4 or later. (Coming soon)

File server options

The file server options for SAP HANA on Google Cloud include Filestore and NetApp Cloud Volumes Service for Google Cloud.

For more information about all of the file server options for SAP on Google Cloud, see File sharing solutions for SAP on Google Cloud.

Filestore

For the /hana/shared volume only, you can use Filestore. However, with Filestore basic service tiers, all SAP HANA hosts that share the storage must be within the same Google Cloud zone because a Filestore instance is a zonal resource. This is particularly relevant for shared volumes in a scale-out configuration, where the compute nodes for the scale-out system must reside in the same zone for optimal latency. For more information, see Components in an SAP HANA scale-out system on Google Cloud.

NetApp Cloud Volumes Service for Google Cloud

NetApp Cloud Volumes Service for Google Cloud is a fully managed, cloud-native data service platform that you can use to create an NFS file system for SAP HANA scale-up systems on all Compute Engine instance types that are certified for SAP HANA.

NetApp Cloud Volumes Service offers two service types: CVS and CVS-Performance. The CVS-Performance service type offers different service levels. With SAP HANA, you must use the NetApp Cloud Volumes Service CVS-Performance (NetApp CVS-Performance) service type and the Extreme service level.

Support for NetApp CVS-Performance in scale-out deployments is limited to specific Compute Engine instance types, as noted in the table in Certified VM types for SAP HANA.

With NetApp CVS-Performance, you can place all of the SAP HANA directories, including /hana/data and /hana/log, in shared storage, instead of using Compute Engine persistent disks. With most other shared storage systems, you can place only the /hana/shared directory in shared storage.

SAP support for NetApp CVS-Performance on Google Cloud is listed in the SAP HANA Hardware Directory.

Regional availability of NetApp CVS-Performance for SAP HANA

Your NetApp CVS-Performance volumes must be in the same region as your host VM instances.

NetApp CVS-Performance support for SAP HANA is not available in every region where NetApp CVS-Performance itself is available.

You can use NetApp CVS-Performance with SAP HANA in the following Google Cloud regions:

| Region | Location |
|---|---|
| europe-west4 | Eemshaven, Netherlands, Europe |
| us-east4 | Ashburn, Northern Virginia, USA |
| us-west2 | Los Angeles, California, USA |

If you are interested in running SAP HANA with NetApp CVS-Performance in a Google Cloud region that is not listed above, contact sales.

NFS protocol support

NetApp CVS-Performance supports the NFSv3 and NFSv4.1 protocols with SAP HANA on Google Cloud.

NFSv3 is recommended for volumes that are configured to allow multiple TCP connections. NFSv4.1 is not yet supported with multiple TCP connections.

Volume requirements for NetApp Cloud Volumes Service with SAP HANA

The NetApp CVS-Performance volumes must be in the same region as the host VM instances.

For the /hana/data and /hana/log volumes, the Extreme service level of NetApp CVS-Performance is required. You can use the Premium service level for the /hana/shared directory if it is in a separate volume from the /hana/data and /hana/log directories.

For the best performance with SAP HANA systems that are larger than 1 TB, create separate volumes for /hana/data, /hana/log, and /hana/shared.

To meet SAP HANA performance requirements, the following minimum volume sizes are required when running SAP HANA with NetApp CVS-Performance:

| Directory | Minimum size |
|---|---|
| /hana/shared | 1 TB |
| /hana/log | 2.5 TB |
| /hana/data | 4 TB |

Adjust the size of your volumes to meet your throughput requirements. The minimum throughput rate for the Extreme service level is 128 MB per second for each 1 TB, so 4 TB of disk space provides 512 MB per second. Provisioning more disk space for the /hana/data volume can reduce startup times. For the /hana/data volume, we recommend either 1.5 times the size of your memory or 4 TB, whichever is greater.

The minimum size for the /hanabackup volume is determined by your backup strategy. You can also use the Cloud Storage Backint agent for SAP HANA to back up the database directly to Cloud Storage.

Deploying an SAP HANA system with NetApp CVS-Performance

To deploy NetApp CVS-Performance with SAP HANA on Google Cloud, you need to deploy your VMs and install SAP HANA first. You can use the Deployment Manager templates that Google Cloud provides to deploy the VMs and SAP HANA, or you can create the VM instances and install SAP HANA manually.

If you use the Deployment Manager templates, the VMs are deployed with the /hana/data and /hana/log volumes mapped to persistent disks. After you mount the NetApp CVS-Performance volumes to the VMs, you need to copy the contents of the persistent disks over, as described in the following steps.

To deploy SAP HANA with NetApp CVS-Performance by using the Deployment Manager templates that Google Cloud provides:

  1. Deploy SAP HANA with persistent disks by using the Cloud Deployment Manager templates that Google Cloud provides, following the instructions in the SAP HANA Deployment Guide.
  2. Create your NetApp CVS-Performance volumes. For complete NetApp instructions, see NetApp Cloud Volumes Service for Google Cloud documentation.

  3. Mount NetApp CVS-Performance to a temporary mount point by using the mount command with the following settings:

    mount -t nfs -o options server:path mountpoint

    For options, use the following settings:

    rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

    The option vers=3 selects NFSv3. The option nconnect=16 enables multiple TCP connections.

  4. Stop SAP HANA and any related services that are using the attached persistent disk volumes.

  5. Copy the contents of the persistent disk volumes to the corresponding NetApp CVS-Performance volumes.

  6. Detach the persistent disks.

  7. Remount the NetApp CVS-Performance volumes to the permanent mount points by updating the /etc/fstab with the following settings:

    server:path   /mountpoint   nfs   options   0 0

    For options, use the following settings:

    rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

    For more information about updating the /etc/fstab file, see the nfs page in the Linux File Formats manual.
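For example, assuming a volume exported at the hypothetical address 10.0.0.5:/hana-data-vol, a complete /etc/fstab entry would look like the following (the server IP and export path are placeholders for your own values):

```
10.0.0.5:/hana-data-vol   /hana/data   nfs   rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock   0 0
```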

  8. For the best performance, update the fileio section of the SAP HANA global.ini file with the following suggested settings:

    Parameter Value
    async_read_submit on
    async_write_submit_active on
    async_write_submit_blocks all
    max_parallel_io_requests 128
    max_parallel_io_requests[data] 128
    max_parallel_io_requests[log] 128
    num_completion_queues 4
    num_completion_queues[data] 4
    num_completion_queues[log] 4
    num_submit_queues 8
    num_submit_queues[data] 8
    num_submit_queues[log] 8
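Assuming these parameters belong in the [fileio] section of global.ini, the resulting section might look like the following sketch; verify the exact key syntax against the SAP HANA documentation for your release:

```
[fileio]
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all
max_parallel_io_requests = 128
max_parallel_io_requests[data] = 128
max_parallel_io_requests[log] = 128
num_completion_queues = 4
num_completion_queues[data] = 4
num_completion_queues[log] = 4
num_submit_queues = 8
num_submit_queues[data] = 8
num_submit_queues[log] = 8
```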
  9. Restart SAP HANA.

  10. After confirming that everything works as expected, delete the persistent disks to avoid being charged for them.

User identification and resource access

When planning security for an SAP deployment on Google Cloud, you must identify:

  • The user accounts and applications that need access to the Google Cloud resources in your Google Cloud project
  • The specific Google Cloud resources in your project that each user needs to access

You must add each user to your project by adding their Google account ID to the project as a principal. For an application program that uses Google Cloud resources, you create a service account, which provides a user identity for the program within your project.

Compute Engine VMs have their own service account. Any programs that run on a VM can use the VM service account, as long as the VM service account has the resource permissions that the program needs.

After you identify the Google Cloud resources that each user needs to use, you grant each user permission to use each resource by assigning resource-specific roles to the user. Review the predefined roles that IAM provides for each resource, and assign roles to each user that provide just enough permissions to complete the user's tasks or functions and no more.

If you need more granular or restrictive control over permissions than the predefined IAM roles provide, you can create custom roles.

For more information about the IAM roles that SAP programs need on Google Cloud, see Identity and access management for SAP programs on Google Cloud.

For an overview of identity and access management for SAP on Google Cloud, see Identity and access management overview for SAP on Google Cloud.

Pricing and quota considerations for SAP HANA

You are responsible for the costs incurred for using the resources created by following this deployment guide. Use the pricing calculator to help estimate your actual costs.

Quotas

SAP HANA requires more CPU and memory than many workloads on Google Cloud. If you have a new Google Cloud account, or if you haven't yet requested an increased quota, you need to request one before you can deploy SAP HANA.

The following table shows quota values for single-host, scale-up SAP HANA systems by VM instance type.

For a scale-out SAP HANA system or multiple scale-up systems, you need to include the total resource amounts for all systems. For guidance on determining the storage requirements for scale-out systems, see Determining persistent disk size.

View your existing quota and compare it with your resource (CPU, memory, and storage) requirements to determine what increase to ask for. You can then request a quota-limit increase.

Instance type vCPUs Memory Standard PD SSD PD Balanced PD
n1-highmem-32 32 208 GB 448 GB 834 GB 1,429 GB
n1-highmem-64 64 416 GB 864 GB 1,155 GB 1,980 GB
n1-highmem-96 96 624 GB 1,280 GB 1,716 GB 3,264 GB
n2-highmem-32 32 256 GB 544 GB 834 GB 1,429 GB
n2-highmem-48 48 384 GB 800 GB 1,068 GB 1,830 GB
n2-highmem-64 64 512 GB 1,056 GB 1,414 GB 2,422 GB
n2-highmem-80 80 640 GB 1,312 GB 1,760 GB 2,860 GB
m1-megamem-96 96 1,433 GB 2,898 GB 3,287 GB 3,287 GB
m1-ultramem-40 40 961 GB 1,954 GB 2,626 GB 2,900 GB
m1-ultramem-80 80 1,922 GB 3,876 GB 3,874 GB 3,874 GB
m1-ultramem-160 160 3,844 GB 7,720 GB 6,180 GB 6,180 GB
m2-megamem-416 416 5,888 GB 11,832 GB 8,667 GB 8,667 GB
m2-ultramem-208 208 5,888 GB 11,832 GB 8,667 GB 8,667 GB
m2-ultramem-416 416 11,766 GB 23,564 GB 15,766 GB 15,766 GB
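Because a quota request must cover the sum across all hosts, a small helper can total the per-instance values from the table above. This sketch is illustrative only; the three-host count is a hypothetical scale-out deployment of n2-highmem-64 instances:

```python
# Per-instance quota values for n2-highmem-64, from the table above.
N2_HIGHMEM_64 = {"cpu": 64, "memory_gb": 512, "ssd_pd_gb": 1414}

def total_quota(per_instance: dict, host_count: int) -> dict:
    """Sum per-instance quota values across all hosts in the deployment."""
    return {resource: value * host_count for resource, value in per_instance.items()}

# Hypothetical 3-host scale-out system (1 master + 2 workers).
print(total_quota(N2_HIGHMEM_64, 3))
# {'cpu': 192, 'memory_gb': 1536, 'ssd_pd_gb': 4242}
```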

Licensing

Running SAP HANA on Google Cloud requires you to bring your own license (BYOL).

For more information from SAP about managing your SAP HANA licenses, see License Keys for the SAP HANA Database.

Deployment architectures

SAP HANA on Google Cloud supports scale-up and scale-out architectures.

Scale-up architecture

The following diagram shows the scale-up architecture. In the diagram, notice both the deployment on Google Cloud and the disk layout. You can use Cloud Storage to store copies of the local backups in /hanabackup. Size this volume equal to or greater than the data volume.

Scale-up architecture

On Google Cloud, an SAP HANA single-host, scale-up architecture can include the following components:

  • One Compute Engine instance for the SAP HANA database with a network bandwidth of up to 16 Gbps.

  • The following Compute Engine persistent disks:

    • For the /hana/data and /hana/log volumes, you can use either of the following persistent-disk configurations:

      • One partitioned SSD-based persistent disk for both volumes
      • Two SSD-based persistent disks, one for each volume

      For performance, you must size SSD-based persistent disks according to the table in Minimum sizes for SSD and balanced persistent disks.

    • One balanced persistent disk for the boot disk.

    • Optionally, a standard persistent disk for the backup of SAP HANA database.

  • Compute Engine firewall rules restricting access to instances.

  • An optional, but recommended, subnetwork with a custom topology and IP ranges in the Google Cloud region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork for SAP HANA.

  • Optional components:

If you provision your SAP HANA system without a public IP, it cannot connect directly to resources through the public internet, so you need to provide an indirect method for access.

  • Configure Private Google Access so that your VM can access the Google Cloud APIs.

  • Use Cloud NAT or configure a VM as a NAT gateway to access the public internet.

  • For administrative purposes, you can use TCP forwarding to connect to the systems. For information about using Identity-Aware Proxy for TCP forwarding, see Using IAP for TCP forwarding.

  • Use a Compute Engine VM that is configured as a bastion host to access the public internet.

Deploying scale-up systems with Deployment Manager

Google Cloud provides Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA single-host scale-up systems.

You can use the Deployment Manager scripts to deploy either a standalone scale-up system or a scale-up system in a Linux high-availability cluster. The scripts deploy the VMs, persistent disks, SAP HANA, and, in the case of the Linux HA cluster, the required HA components.

The Deployment Manager scripts do not deploy the following system components:

  • The network and subnetwork
  • Firewall rules
  • NAT gateways, bastion hosts, or their VMs
  • SAP HANA Studio or its VM

Scale-out architectures

The scale-out architecture consists of one master host, a number of worker hosts, and, optionally, one or more standby hosts. The hosts are interconnected through a network that supports sending data between hosts at rates of up to 16 Gbps.

As the workload demand increases, especially when using OLAP, a multi-host, scale-out architecture can distribute the load across all hosts.

The following diagram shows a scale-out architecture on Google Cloud.

Scale-out architecture diagram.

Standby hosts support the SAP HANA host auto-failover fault recovery solution. For more information about host auto-failover on Google Cloud, see the SAP HANA High Availability and Disaster Recovery Planning Guide.

The following diagram shows a scale-out architecture with host auto-failover on Google Cloud.

Scale-out with host auto-failover architecture diagram.

Disk structures for SAP HANA scale-out systems on Google Cloud

Except for standby hosts, each host has its own /hana/data, /hana/log, and, usually, /usr/sap volumes on SSD persistent disks, which provide consistently high IOPS. The master host also serves as an NFS master for the /hana/shared and /hanabackup volumes, which are mounted on each worker and standby host.

For a standby host, the /hana/data and /hana/log volumes are not mounted until a takeover occurs.

High availability for SAP HANA scale-out systems on Google Cloud

The following features help ensure the high availability of an SAP HANA scale-out system:

  • Compute Engine live migration
  • Compute Engine automatic instance restart
  • SAP HANA host auto-failover with up to three SAP HANA standby hosts

For more information about high availability options on Google Cloud, see the SAP HANA High Availability and Disaster Recovery Planning Guide.

In the event of a live migration or automatic instance restart, the persistent-disk-backed /hana/shared and /hanabackup volumes are available again as soon as the instance is back up.

If you are using a standby host, in the event of a failure, SAP HANA host auto-failover unmounts the /hana/data and /hana/log volumes from the failed host and mounts them on the standby host.

Components in an SAP HANA scale-out system on Google Cloud

A multi-host SAP HANA scale-out architecture on Google Cloud contains the following components:

  • 1 Compute Engine VM instance for each SAP HANA host in the system, including 1 master host, up to 15 worker hosts, and up to 3 optional standby hosts.

    Each VM uses the same Compute Engine machine type. For the machine types that are supported by SAP HANA, see VM types.

  • The following Compute Engine persistent disks:

    • Each VM must include an SSD persistent disk, mounted in the correct location.
    • Optionally, if you are not deploying an SAP HANA host auto-failover system, a standard persistent disk for the /hanabackup local volume per VM.
  • A separately deployed NFS solution for sharing the /hana/shared and the /hanabackup volumes with the worker and standby hosts. You can use Filestore or another NFS solution.

  • Compute Engine firewall rules or other network access controls that restrict access to your Compute Engine instances while allowing communication between the instances and any other distributed or remote resources that your SAP HANA system requires.

  • An optional, but recommended, subnetwork with a custom topology and IP ranges in the Google Cloud region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork if you prefer.

  • Optional components:

If you provision your SAP HANA system without a public IP, it cannot connect directly to resources through the public internet, so you need to provide an indirect method for access.

  • Configure Private Google Access so that your VM can access the Google Cloud APIs.

  • Use Cloud NAT or configure a VM as a NAT gateway to access the public internet.

  • For administrative purposes, you can use TCP forwarding to connect to the systems. For information about using Identity-Aware Proxy for TCP forwarding, see Using IAP for TCP forwarding.

  • Use a Compute Engine VM that is configured as a bastion host to access the public internet.

Deploying scale-out systems with Deployment Manager

Google Cloud provides Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA multi-host scale-out systems.

The Deployment Manager scripts can deploy the VMs, persistent disks, and SAP HANA. The scripts also mount the NFS solution on the VMs.

The Deployment Manager scripts do not deploy the following system components:

  • The network and subnetwork
  • The NFS solution
  • Firewall rules
  • NAT gateways, bastion hosts, or their VMs
  • SAP HANA Studio or its VM

Support

For issues with Google Cloud infrastructure or services, contact Customer Care. You can find contact information on the Support Overview page in the Google Cloud Console. If Customer Care determines that a problem resides in your SAP systems, you are referred to SAP Support.

For SAP product-related issues, log your support request with SAP support. SAP evaluates the support ticket and, if it appears to be a Google Cloud infrastructure issue, transfers the ticket to the Google Cloud component BC-OP-LNX-GOOGLE or BC-OP-NT-GOOGLE.

Support requirements

Before you can receive support for SAP systems and the Google Cloud infrastructure and services that they use, you must meet the minimum support plan requirements.

For more information about the minimum support requirements for SAP on Google Cloud, see:

What's next