SAP HANA planning guide

This guide provides an overview of what is required to run SAP HANA on Google Cloud, along with details that you can use when planning the implementation of a new SAP HANA system.

For details about how to deploy SAP HANA on GCP, see the SAP HANA Deployment Guide.

About SAP HANA on Google Cloud

SAP HANA is an in-memory, column-oriented, relational database that provides high-performance analytics and real-time data processing. At the core of this real-time data platform is the SAP HANA database. Customers can take advantage of easy provisioning and the highly scalable, redundant GCP infrastructure to run their business-critical workloads. GCP provides a set of physical assets, such as computers and hard disk drives, and virtual resources, such as Compute Engine virtual machines (VMs), located in Google data centers around the world.

When you deploy SAP HANA on GCP, you deploy to virtual machines running on Compute Engine. Compute Engine VMs provide persistent disks, which function similarly to physical disks in a desktop or a server, but are automatically managed for you by Compute Engine to ensure data redundancy and optimized performance.

Google Cloud basics

Google Cloud consists of many cloud-based services and products. When running SAP products on Google Cloud, you mainly use the IaaS-based services offered through Compute Engine and Cloud Storage, as well as some platform-wide features, such as networking and tools.

See the Google Cloud platform overview for important concepts and terminology. This guide duplicates some information from the overview for convenience and context.

For an overview of considerations that enterprise-scale organizations should take into account when running on Google Cloud, see best practices for enterprise organizations.

Interacting with Google Cloud

Google Cloud offers three main ways to interact with the platform, and your resources, in the cloud:

  • The Google Cloud Console, which is a web-based user interface.
  • The gcloud command-line tool, which provides a superset of the functionality that Cloud Console offers.
  • Client libraries, which provide APIs for accessing services and managing resources. Client libraries are useful when building your own tools.

GCP services

SAP deployments typically utilize some or all of the following Google Cloud services:

Service Description
VPC Networking Connects your VM instances to each other and to the Internet. Each instance is a member of either a legacy network with a single global IP range, or a recommended subnet network, where the instance is a member of a single subnetwork that is a member of a larger network. Note that a network cannot span Google Cloud projects, but a Google Cloud project can have multiple networks.
Compute Engine Creates and manages VMs with your choice of operating system and software stack.
Persistent disks Durable block storage for your VMs, available as either standard hard disk drives (HDD) or solid-state drives (SSD).
Google Cloud Console Browser-based tool for managing Compute Engine resources.
Cloud Deployment Manager Deployment tool that uses a template to describe all of the Compute Engine resources and instances that you need. You don't have to individually create and configure the resources or figure out dependencies, because Deployment Manager does that for you.
Cloud Storage You can store your SAP database backups in Cloud Storage for added durability and reliability, with replication.
Cloud Monitoring Provides visibility into the deployment, performance, uptime, and health of Compute Engine, network, and persistent disks.

Monitoring collects metrics, events, and metadata from Google Cloud and uses these to generate insights through dashboards, charts, and alerts. You can monitor the compute metrics at no cost through Monitoring.
IAM Provides unified control over permissions for Google Cloud resources. Control who can perform control-plane operations on your VMs, including creating, modifying, and deleting VMs and persistent disks, and creating and modifying networks.

Pricing and quotas

You can use the pricing calculator to estimate your usage costs. For more pricing information, see Compute Engine pricing, Cloud Storage pricing, and Google Cloud's operations suite pricing.

Google Cloud resources are subject to quotas. If you plan to use high-CPU or high-memory machines, you might need to request additional quota. For more information, see Compute Engine resource quotas.

Resource requirements

Certified machine types for SAP HANA

The following table shows the Google Cloud machine types that are certified by SAP for production use. The machine types include both Compute Engine virtual machines (VMs) and Bare Metal Solution bare-metal machines.

Except where noted in the table, SAP supports the machine types in both single-host (scale-up) and multi-host (scale-out) installations. Scale-out installations can include up to 15 worker hosts, for a total of 16 hosts.

Custom configurations of the general-purpose n1- and n2-highmem VM types are also certified by SAP. For more information, see Certified custom VM types for SAP HANA.

For the operating systems that are certified for use with HANA on each machine type, see Certified operating systems for SAP HANA.

For more information about different Compute Engine VM types and their use cases, see machine types.

Some machine types are not available in all Google Cloud regions. To check the regional availability of a Compute Engine virtual machine, see Available regions & zones. For Bare Metal Solution machines that are certified for SAP HANA, see Regional availability of Bare Metal Solution machines for SAP HANA.

SAP lists the certified machine types for SAP HANA in the SAP HANA Hardware Directory.

The SAPS numbers for each machine type can be found on the Certifications for SAP page.

Machine types vCPU Memory (GB) Operating system CPU platform Notes
N1 high-memory, general-purpose VM types
n1-highmem-32 32 208 RHEL, SUSE Intel Broadwell NetApp CVS-Performance certified for scale up.
n1-highmem-64 64 416 RHEL, SUSE Intel Broadwell NetApp CVS-Performance certified for scale up.
n1-highmem-96 96 624 RHEL, SUSE Intel Skylake NetApp CVS-Performance certified for scale up.
N2 high-memory, general-purpose VM types
n2-highmem-32 32 Up to 256 RHEL, SUSE Intel Cascade Lake Scale up only,
NetApp CVS-Performance certified for scale up.
n2-highmem-48 48 Up to 384 RHEL, SUSE Intel Cascade Lake Scale up only,
NetApp CVS-Performance certified for scale up.
n2-highmem-64 64 Up to 512 RHEL, SUSE Intel Cascade Lake Scale up only,
NetApp CVS-Performance certified for scale up.
n2-highmem-80 80 Up to 640 RHEL, SUSE Intel Cascade Lake Scale up only,
NetApp CVS-Performance certified for scale up.
M1 memory-optimized VM types
m1-megamem-96 96 1,433 RHEL, SUSE Intel Skylake NetApp CVS-Performance certified for scale up.
m1-ultramem-40 40 Up to 961 RHEL, SUSE Intel Broadwell Scale up only,
OLTP workloads only,
NetApp CVS-Performance certified for scale up.
m1-ultramem-80 80 Up to 1,922 RHEL, SUSE Intel Broadwell Scale up only,
OLTP workloads only,
NetApp CVS-Performance certified for scale up.
m1-ultramem-160 160 Up to 3,844 RHEL, SUSE Intel Broadwell OLAP workloads certified for scale up and scale out up to 16 nodes.
OLTP workloads certified for scale up only.
NetApp CVS-Performance certified for scale up only.
M2 memory-optimized VM types
m2-megamem-416 416 Up to 5,888 RHEL, SUSE Intel Cascade Lake OLAP workloads certified for scale up and scale out up to 16 nodes.
OLTP workloads certified for scale up only.
NetApp CVS-Performance certified for scale up only.
m2-ultramem-208 208 Up to 5,888 RHEL, SUSE Intel Cascade Lake Scale up only.
OLTP workloads only.
NetApp CVS-Performance certified for scale up.
m2-ultramem-416 416 Up to 11,776 RHEL, SUSE Intel Cascade Lake-SP OLAP workloads are certified for scale up with workload-based sizing.
OLTP workloads are certified for scale up or scale out up to 4 nodes.
Certification for OLTP scale out includes SAP S/4HANA.
NetApp CVS-Performance is supported with scale up or scale out.
For scale out with S/4HANA, see SAP Note 2408419.
O2 memory-optimized Bare Metal Solution machine types
o2-ultramem-672-metal 672 Up to 18 TB RHEL, SUSE Intel Cascade Lake 12 sockets.
Scale up in a three-tier architecture only.
OLTP workloads only,
Standard sizing.
o2-ultramem-896-metal 896 Up to 24 TB RHEL, SUSE Intel Cascade Lake 16 sockets.
Scale up in a three-tier architecture only.
OLTP workloads only,
Standard sizing.

Certified custom VM types for SAP HANA

The following table shows the customizable Compute Engine virtual machine (VM) types that are certified by SAP for production use of SAP HANA on Google Cloud.

SAP certifies only a subset of the custom VM type configurations that Compute Engine supports.

Custom VM configurations are subject to customization rules that are defined by Compute Engine. The rules differ depending on which machine type you are customizing. For complete customization rules, see Creating a VM Instance with a Custom Machine Type.

Base Google Cloud instance type vCPU Memory (GB) Operating system CPU platform
N1-highmem A number of vCPUs from 32 to 64 that is evenly divisible by 2. 6.5 GB for each vCPU RHEL, SUSE Intel Broadwell
N2-highmem (Scale up only) A number of vCPUs from 32 to 64 that is evenly divisible by 4. 8 GB for each vCPU RHEL, SUSE Intel Cascade Lake

Regional availability of Bare Metal Solution machines for SAP HANA

The following table shows the current Google Cloud regions that support SAP HANA on Bare Metal Solution.

Region Location
europe-west3 Frankfurt, Germany, Europe
europe-west4 Eemshaven, Netherlands, Europe
us-east4 Ashburn, Virginia, USA, North America
us-west2 Los Angeles, California, USA, North America

If you do not see the region that you need in the preceding table, contact Google Cloud Sales.

Memory configuration

Your memory configuration options are determined by the Compute Engine VM instance type that you choose. For more information, see the supported VM types table.

SAP HANA Fast Restart memory configuration

Google Cloud recommends the SAP HANA Fast Restart option.

If you implement the Fast Restart option, you need to map and understand the host environment's non-uniform memory access (NUMA) topology. SAP HANA self-optimizes its memory access and process allocation based on a system's NUMA topology.

For more information from SAP, see SAP HANA Fast Restart Option.
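On Linux, you can inspect the NUMA topology that SAP HANA sees through sysfs. The following commands are a sketch and assume a standard Linux kernel with sysfs mounted:

```shell
# List the NUMA nodes that the kernel has brought online,
# for example "0" on a single-node VM or "0-1" on a two-node machine.
cat /sys/devices/system/node/online

# Show which CPUs belong to each NUMA node.
for node in /sys/devices/system/node/node*; do
  echo "${node##*/}: CPUs $(cat "$node/cpulist")"
done
```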

Certified operating systems for SAP HANA

The following table shows the Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) operating systems that are certified by SAP for production use with SAP HANA on Google Cloud.

Except where noted in the table, each operating system is supported with SAP HANA on all certified Compute Engine VM types.

For information about the current support status of each operating system and which operating systems are available from Google Cloud, see Operating system support for SAP HANA on GCP.

For information from SAP about which operating systems SAP supports with SAP HANA on Google Cloud, see the SAP HANA Hardware Directory.

The following table does not include:

  • Certified operating system versions that are no longer in mainstream support.
  • Operating system versions that are not specific to SAP.
Operating system Version Unsupported machine types
RHEL for SAP 7.3 custom, m1-ultramem, m2-megamem, m2-ultramem, n2-highmem, o2-ultramem
RHEL for SAP 7.4 m2-ultramem, o2-ultramem
RHEL for SAP 7.6 None
RHEL for SAP 7.7 None
RHEL for SAP 8.1 None
SLES for SAP 12 SP3 m1-megamem, n1-highmem, o2-ultramem
SLES for SAP 12 SP4 None
SLES for SAP 12 SP5 None
SLES for SAP 15 None
SLES for SAP 15 SP1 None
SLES for SAP 15 SP2 o2-ultramem

Custom operating system images

You can use a Linux image that GCP provides and maintains (a public image) or you can provide and maintain your own Linux image (a custom image).

Use a custom image if the version of the SAP-certified operating system that you require is not available from GCP as a public image. The following steps, which are described in detail in Importing Boot Disk Images to Compute Engine, summarize the procedure for using a custom image:

  1. Prepare your boot disk so it can boot within the GCP Compute Engine environment and so you can access it after it boots.
  2. Create and compress the boot disk image file.
  3. Upload the image file to Cloud Storage and import the image to Compute Engine as a new custom image.
  4. Use the imported image to create a virtual machine instance and make sure it boots properly.
  5. Optimize the image and install the Linux Guest Environment so that your imported operating system image can communicate with the metadata server and use additional Compute Engine features.

After your custom image is ready, you can use it when creating VMs for your SAP HANA system.

If you are moving a RHEL operating system from an on-premises installation to GCP, you need to add Red Hat Cloud Access to your Red Hat subscription. For more information, see Red Hat Cloud Access.

For more information about the operating system images that GCP provides, see Images.

For more information about importing an operating system into GCP as a custom image, see Importing Boot Disk Images to Compute Engine.

For more information about the operating systems that SAP HANA supports, see the SAP HANA Hardware Directory.

OS clocksource on Compute Engine VMs

The default OS clocksource is kvm-clock for SLES and TSC for RHEL images.

Changing the OS clocksource is not necessary when SAP HANA is running on a Compute Engine VM. There is no difference in performance when using kvm-clock or TSC as the clocksource for Compute Engine VMs with SAP HANA.

If you need to change the OS clocksource to TSC, SSH into your VM and issue the following commands:

# Switch the active clocksource immediately, without a reboot:
echo "tsc" | sudo tee /sys/devices/system/clocksource/*/current_clocksource
# Back up the GRUB configuration defaults:
sudo cp /etc/default/grub /etc/default/grub.backup
# Append clocksource=tsc to the kernel boot parameters:
sudo sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub
# Regenerate the GRUB configuration so the setting persists across reboots:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
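To confirm which clocksource is active before and after making the change, you can read the same sysfs files (the paths shown are the standard Linux locations):

```shell
# Print the clocksource the kernel is currently using, such as "tsc" or "kvm-clock".
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Print every clocksource this machine supports.
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```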

Persistent disk storage

For persistent block storage, you can attach Compute Engine persistent disks when you create your VMs or add them to your VMs later.

Compute Engine offers different types of persistent disks based on either solid-state drive (SSD) technology or standard hard-disk drive technology. Each type has different performance characteristics. Google Cloud manages the underlying hardware of persistent disks to ensure data redundancy and to optimize performance.

For performance reasons, the SAP HANA /hana/data and /hana/log volumes require SSD-based persistent disks. SSD-based persistent disks include the SSD (pd-ssd) and balanced (pd-balanced) persistent disk types.

For the boot disk and other SAP HANA volumes that do not need the same high performance that the /hana/data and /hana/log volumes do, you can use the following disk types in a production instance of SAP HANA:

  • For the /hana/shared volume, you can either map it to the same SSD-based persistent disk as the /hana/data and /hana/log volumes or, if you map it to its own disk, use a pd-balanced persistent disk.
  • If you save your backups to a persistent disk, use a standard persistent disk (pd-standard) for the /hanabackup volume.
  • When you create the host VM, use a pd-balanced persistent disk for the boot disk.
The following figure shows approximate performance numbers for different persistent disks in suggested architectures for SAP HANA on Google Cloud. The actual numbers you might see in a similar configuration are likely to differ for a variety of reasons, including improvements made by Compute Engine over time.

Two SAP HANA systems are shown: the left one has `/hana/shared` on its own
balanced persistent disk and `/hana/data` and `/hana/log` together on an SSD
persistent disk. The other system has `/hana/data`, `/hana/log`, and
`/hana/shared` together on a single SSD persistent disk, which is the
recommended architecture.

In the configuration on the left in the preceding figure, the /hana/data and /hana/log volumes are on an SSD persistent disk and the /hana/shared volume, which doesn't require as high performance, is on a balanced persistent disk, which costs less than an SSD persistent disk.

In the configuration on the right, the /hana/data, /hana/log, and /hana/shared volumes are all on a single SSD disk. This provides slightly better performance with one less disk to manage than the split model, where the /hana/shared volume is by itself on a balanced persistent disk. Persistent disks are located independently from your VMs, so you can detach or move persistent disks to keep your data, even after you delete your VMs.

In the Cloud Console, you can see the persistent disks that are attached to your VM instances under Additional disks on the VM instance details page for each VM instance.

For more information about the different types of Compute Engine persistent disks, their performance characteristics, and how to work with them, see the Compute Engine documentation.

Minimum sizes for SSD and balanced persistent disks

Within limits, which are described in Block storage performance, the performance of SSD persistent disks and balanced persistent disks increases as the size of the disk and the number of vCPUs increase.

The following table shows the recommended sizes for SSD and balanced persistent disks in a production environment for each certified Compute Engine VM type. The sizes assume that the /hana/data, /hana/log and /hana/shared volumes are all mapped to the disk. If your system is particularly performance sensitive, using pd-ssd is recommended for the best performance.

At a minimum, SAP HANA requires a sustained throughput of 400 MB per second for reads and writes, which is what an 834 GB pd-ssd or a 1,429 GB pd-balanced provides. The sizes listed in the table for each VM type are the persistent disk sizes that provide the SAP HANA performance that was required for the certification of that VM type.

As the persistent disk sizes in the table increase to accommodate the larger machine memory and data sizes, the throughput also increases up to the architectural limits that are described in Block storage performance.

Compute Engine VM type pd-ssd pd-balanced
n1-highmem-32 834 1,429
n1-highmem-64 1,155 1,980
n1-highmem-96 1,716 2,942
n2-highmem-32 834 1,429
n2-highmem-48 1,068 1,831
n2-highmem-64 1,414 2,424
n2-highmem-80 1,760 3,017
m1-megamem-96 3,287 4,286
m1-ultramem-40 2,626 4,286
m1-ultramem-80 3,874 4,286
m1-ultramem-160 6,180 6,180
m2-megamem-416 8,667 8,667
m2-ultramem-208 8,667 8,667
m2-ultramem-416 15,766 15,766
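As a rough cross-check of the 400 MB per second requirement, persistent disk throughput scales with disk size. Assuming per-GB throughput rates of roughly 0.48 MB/s for pd-ssd and 0.28 MB/s for pd-balanced (rates taken from Compute Engine block storage documentation, and subject to change), the smallest sizes in the table work out as follows:

```shell
# Approximate sustained throughput at the minimum certified disk sizes,
# assuming 0.48 MB/s per GB (pd-ssd) and 0.28 MB/s per GB (pd-balanced).
# These per-GB rates are assumptions; check current Compute Engine limits.
pd_ssd_gb=834
pd_balanced_gb=1429
echo "pd-ssd ${pd_ssd_gb} GB:       $(( pd_ssd_gb * 48 / 100 )) MB/s"
echo "pd-balanced ${pd_balanced_gb} GB: $(( pd_balanced_gb * 28 / 100 )) MB/s"
```

Both sizes work out to the 400 MB per second baseline that SAP HANA requires.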

Determining persistent disk size

For SAP HANA scale-up systems, calculate the amount of persistent disk storage that you need for the SAP HANA volumes based on the amount of memory that your selected Compute Engine VM type contains. Use the following formulas for each volume:

  • /hana/data: 1.2 x memory
  • /hana/log: either 0.5 x memory (adjusted to a multiple of 64, if necessary) or 512 GB, whichever is smaller
  • /hana/shared: either 1 x memory or 1,024 GB, whichever is smaller
  • /usr/sap: 32 GB
  • /hanabackup: 2 x memory

Select a persistent disk size that is no smaller than the minimum size that is listed for your persistent disk type in Minimum sizes for SSD and balanced persistent disks.

For example, if you are running SAP HANA on an n2-highmem-32 VM instance, which has 256 GB of memory, your total storage requirement for the SAP HANA volumes is 723 GB. However, if you use an SSD persistent disk, the required minimum size is 834 GB, so you need to size your persistent disk at 834 GB or larger.

Apply any excess persistent disk storage to the /hana/data volume.
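The formulas above can be sketched as a small script. The variable names are illustrative; for 256 GB of memory, the script reproduces the 723 GB total from the n2-highmem-32 example:

```shell
# Sizing sketch for SAP HANA scale-up volumes, per the formulas above.
mem_gb=256                                      # n2-highmem-32 memory

data_gb=$(( mem_gb * 12 / 10 ))                 # /hana/data: 1.2 x memory
log_gb=$(( mem_gb / 2 ))                        # /hana/log: 0.5 x memory ...
[ "$log_gb" -gt 512 ] && log_gb=512             # ... capped at 512 GB ...
log_gb=$(( (log_gb + 63) / 64 * 64 ))           # ... adjusted to a multiple of 64
shared_gb=$(( mem_gb < 1024 ? mem_gb : 1024 ))  # /hana/shared: min(memory, 1,024 GB)
usrsap_gb=32                                    # /usr/sap: fixed 32 GB
backup_gb=$(( mem_gb * 2 ))                     # /hanabackup: 2 x memory

total_gb=$(( data_gb + log_gb + shared_gb + usrsap_gb ))
echo "Total for data, log, shared, and usr/sap: ${total_gb} GB"
echo "Separate backup volume: ${backup_gb} GB"
```

Because the 723 GB total is below the 834 GB pd-ssd minimum, you would size the disk at 834 GB and apply the excess to /hana/data.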

For information from SAP about sizing for SAP HANA, see Sizing SAP HANA.

Persistent disks deployed by the Deployment Manager templates

When you deploy an SAP HANA system by using the Cloud Deployment Manager scripts that Google Cloud provides, Cloud Deployment Manager allocates two persistent disks for SAP HANA:

  • A single SSD persistent disk for the /hana/data, /hana/log, /usr/sap, and /hana/shared directories
  • A standard HDD persistent disk for the /hanabackup directory

Cloud Deployment Manager maps each of the SAP HANA /hana/data, /hana/log, /usr/sap, and /hana/shared directories to its own logical volume for easy resizing, and groups the four logical volumes in a single volume group on the SSD persistent disk.

Deployment Manager maps the /hanabackup directory to a logical volume in a separate volume group, which it then maps to a standard HDD persistent disk.

The following example shows how Deployment Manager maps the volumes for SAP HANA on a Compute Engine n2-highmem-32 VM, which has 256 GB of memory.

In the example, the vg_hana volume group is mapped to a single 834 GB SSD persistent disk, which is the required minimum size. With 256 GB of memory, the SAP HANA volumes require only about 723 GB of storage in total. To use all of the storage on the persistent disk, Deployment Manager allocates the excess disk space to the data volume. Deployment Manager sizes the backup volume at 512 GB, double the memory, and maps it to a standard persistent disk of the same size.

hana-ssd-example:~ # lvs
  LV     VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data   vg_hana       -wi-ao---- 426.00g
  log    vg_hana       -wi-ao---- 125.00g
  sap    vg_hana       -wi-ao----  32.00g
  shared vg_hana       -wi-ao---- 251.00g
  backup vg_hanabackup -wi-ao---- 512.00g

The sizes of your volumes for the same VM type might differ slightly from what is shown in the example.

Storage for backups

Storage for SAP HANA backup is configured with standard HDD persistent disks. Standard HDD persistent disks are efficient and economical for handling sequential read-write operations, but are not optimized to handle high rates of random input-output operations per second (IOPS). SAP HANA uses sequential IO with large blocks to back up the database. Standard HDD persistent disks provide a low-cost, high-performance option for this scenario.

The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput as well as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it easier to recover your database if necessary.

If you use SAP HANA dynamic tiering, the backup storage must be large enough to hold both the in-memory data and the data that is managed on disk by the dynamic tiering server.

If you use the Cloud Storage Backint agent for SAP HANA, you can back up SAP HANA directly to a Cloud Storage bucket, which makes the use of a persistent disk for storing backups optional.

SAP HANA dynamic tiering

SAP HANA dynamic tiering is certified by SAP for use in production environments on GCP. SAP HANA dynamic tiering extends SAP HANA data storage by storing data that is infrequently accessed on disk instead of in memory.

For more information, see SAP HANA Dynamic Tiering on Google Cloud.

SAP HANA Fast Restart option

For SAP HANA 2.0 SP04 and later, Google Cloud recommends the SAP HANA Fast Restart option.

To implement the Fast Restart option, see SAP HANA Fast Restart Option in the SAP HANA documentation.

If you deploy an SAP HANA system by using the Cloud Deployment Manager template that Google Cloud provides, you need to create and mount the TMPFS filesystem after the host VM and the base SAP HANA system are successfully deployed.

For more information about memory allocation for SAP HANA Fast Restart, see SAP HANA Fast Restart memory configuration.

File server options

The file server options for SAP HANA on Google Cloud include Filestore and NetApp Cloud Volumes Service for Google Cloud.

For more information about all of the file server options for SAP on Google Cloud, see File sharing solutions for SAP on Google Cloud.

Filestore

For the /hana/shared volume only, you can use Filestore. However, with Filestore basic service tiers, all SAP HANA hosts that share the storage must be within the same Google Cloud zone because a Filestore instance is a zonal resource. This is particularly relevant for shared volumes in a scale-out configuration, where the compute nodes for the scale-out system must reside in the same zone for optimal latency. For more information, see Components in a SAP HANA scale-out system on Google Cloud.

NetApp Cloud Volumes Service for Google Cloud

NetApp Cloud Volumes Service for Google Cloud is a fully-managed, cloud-native data service platform that you can use to create an NFS file system for SAP HANA scale-up systems on all Compute Engine instance types that are certified for SAP HANA.

NetApp Cloud Volumes Service offers two service types: CVS and CVS-Performance. The CVS-Performance service type offers different service levels. You must use the NetApp Cloud Volumes Service CVS-Performance (NetApp CVS-Performance) service type and the Extreme service level with SAP HANA.

Support for NetApp CVS-Performance in scale-out deployments is limited to specific Compute Engine instance types, as noted in the table in Certified machine types for SAP HANA.

With NetApp CVS-Performance, you can place all of the SAP HANA directories, including /hana/data and /hana/log, in shared storage, instead of using Compute Engine persistent disks. With most other shared storage systems, you can place only the /hana/shared directory in shared storage.

SAP support for NetApp CVS-Performance on Google Cloud is listed in the SAP HANA Hardware Directory.

Regional availability of NetApp CVS-Performance for SAP HANA

Your NetApp CVS-Performance volumes must be in the same region as your host VM instances.

Support for SAP HANA is not available in every region where NetApp CVS-Performance itself is available.

You can use NetApp CVS-Performance with SAP HANA in the following Google Cloud regions:

Region Location
europe-west4 Eemshaven, Netherlands, Europe
us-east4 Ashburn, Northern Virginia, USA
us-west2 Los Angeles, California, USA

If you are interested in running SAP HANA with NetApp CVS-Performance in a Google Cloud region that is not listed above, contact sales.

NFS protocol support

NetApp CVS-Performance supports the NFSv3 and NFSv4.1 protocols with SAP HANA on Google Cloud.

NFSv3 is recommended for volumes that are configured to allow multiple TCP connections. NFSv4.1 is not yet supported with multiple TCP connections.

Volume requirements for NetApp Cloud Volumes Service with SAP HANA

The NetApp CVS-Performance volumes must be in the same region as the host VM instances.

For the /hana/data and /hana/log volumes, the Extreme service level of NetApp CVS-Performance is required. You can use the Premium service level for the /hana/shared directory if it is in a separate volume from the /hana/data and /hana/log directories.

For the best performance with SAP HANA systems that are larger than 1 TB, create separate volumes for /hana/data, /hana/log, and /hana/shared.

To meet SAP HANA performance requirements, the following minimum volume sizes are required when running SAP HANA with NetApp CVS-Performance:

Directory Minimum size
/hana/shared 1 TB
/hana/log 2.5 TB
/hana/data 4 TB

Adjust the size of your volumes to meet your throughput requirements. The minimum throughput rate for the Extreme service level is 128 MB per second for each 1 TB, so the throughput for 4 TB of disk space is 512 MB per second. Provisioning more disk space for the /hana/data volume can reduce startup times. For the /hana/data volume, we recommend either 1.5 times the size of your memory or 4 TB, whichever is greater.
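The throughput scaling described in the preceding paragraph can be expressed directly, since the Extreme service level provides 128 MB per second for each provisioned TB:

```shell
# Extreme service level: 128 MB/s of throughput per TB of capacity.
vol_tb=4
echo "Throughput for ${vol_tb} TB: $(( vol_tb * 128 )) MB/s"
```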

The minimum size for the /hanabackup volume is determined by your backup strategy. You can also use the Cloud Storage Backint agent for SAP HANA to back up the database directly into Cloud Storage.

Deploying an SAP HANA system with NetApp CVS-Performance

To deploy NetApp CVS-Performance with SAP HANA on Google Cloud, you need to deploy your VMs and install SAP HANA first. You can use the Deployment Manager templates that Google Cloud provides to deploy the VMs and SAP HANA, or you can create the VM instances and install SAP HANA manually.

If you use the Deployment Manager templates, the VMs are deployed with the /hana/data and /hana/log volumes mapped to persistent disks. After you mount the NetApp CVS-Performance volumes to the VMs, you need to copy the contents of the persistent disks over, as described in the following steps.

To deploy SAP HANA with NetApp CVS-Performance by using the Deployment Manager templates that Google Cloud provides:

  1. Deploy SAP HANA with persistent disks by using the Cloud Deployment Manager templates that Google Cloud provides, following the instructions in the SAP HANA deployment guide.
  2. Create your NetApp CVS-Performance volumes. For complete NetApp instructions, see NetApp Cloud Volumes Service for Google Cloud documentation.

  3. Mount NetApp CVS-Performance to a temporary mount point by using the mount command with the following settings:

    mount -t nfs -o options server:path mountpoint

    For options, use the following settings:

    rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

    The option vers=3 indicates NFSv3. The option nconnect=16 specifies support for multiple TCP connections.

  4. Stop SAP HANA and any related services that are using the attached persistent disk volumes.

  5. Copy the contents of the persistent disk volumes to the corresponding NetApp CVS-Performance volumes.

  6. Detach the persistent disks.

  7. Remount the NetApp CVS-Performance volumes to the permanent mount points by updating the /etc/fstab with the following settings:

    server:path   /mountpoint   nfs   options   0 0

    For options, use the following settings:

    rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

    For more information about updating the /etc/fstab file, see the nfs page in the Linux File Formats manual.

  8. For the best performance, update the fileio category in the SAP HANA global.ini file with the following suggested settings:

    Parameter Value
    async_read_submit on
    async_write_submit_active on
    async_write_submit_blocks all
    max_parallel_io_requests 128
    max_parallel_io_requests[data] 128
    max_parallel_io_requests[log] 128
    num_completion_queues 4
    num_completion_queues[data] 4
    num_completion_queues[log] 4
    num_submit_queues 8
    num_submit_queues[data] 8
    num_submit_queues[log] 8
  9. Restart SAP HANA.

  10. After confirming that everything works as expected, delete the persistent disks to avoid being charged for them.
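The fileio settings from step 8, expressed as a global.ini fragment. This is a sketch: verify the section name and parameter names against the SAP HANA documentation for your release before applying it.

```ini
[fileio]
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all
max_parallel_io_requests = 128
max_parallel_io_requests[data] = 128
max_parallel_io_requests[log] = 128
num_completion_queues = 4
num_completion_queues[data] = 4
num_completion_queues[log] = 4
num_submit_queues = 8
num_submit_queues[data] = 8
num_submit_queues[log] = 8
```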

User identification and resource access

When planning security for an SAP deployment on Google Cloud, you must identify:

  • The user accounts and applications that need access to the Google Cloud resources in your Google Cloud project
  • The specific Google Cloud resources in your project that each user needs to access

You must add each user to your project by adding their Google account ID to the project as a member. For an application program that uses Google Cloud resources, you create a service account, which provides a user identity for the program within your project.

Compute Engine VMs have their own service account. Any program that runs on a VM can use the VM's service account, as long as the service account has the resource permissions that the program needs.

After you identify the Google Cloud resources that each user needs to use, you grant each user permission to use each resource by assigning resource-specific roles to the user. Review the predefined roles that IAM provides for each resource, and assign roles to each user that provide just enough permissions to complete the user's tasks or functions and no more.

If you need more granular or restrictive control over permissions than the predefined IAM roles provide, you can create custom roles.

For more information about the IAM roles that SAP programs need on Google Cloud, see Identity and access management for SAP programs on Google Cloud.

For an overview of identity and access management for SAP on Google Cloud, see Identity and access management overview for SAP on Google Cloud.

Pricing and quota considerations for SAP HANA

You are responsible for the costs incurred for using the resources created by following this deployment guide. Use the pricing calculator to help estimate your actual costs.

Quotas

If you have a new GCP account, or if you haven't yet requested an increased quota, you most likely need to request one before you can deploy SAP HANA. View your existing quota, compare it with the values in the following table, and then request a quota-limit increase for any shortfall.

The following table shows quota values for single-host, scale-up SAP HANA systems by VM instance type. If you host SAP HANA Studio on GCP or use a NAT gateway and bastion host, add the values shown in the table to your total quota requirement.

Instance type        vCPUs  Memory     Standard PD  SSD PD     Balanced PD
n1-highmem-32        32     208 GB     448 GB       834 GB     1,429 GB
n1-highmem-64        64     416 GB     864 GB       1,155 GB   1,980 GB
n1-highmem-96        96     624 GB     1,280 GB     1,716 GB   3,264 GB
n2-highmem-32        32     256 GB     544 GB       834 GB     1,429 GB
n2-highmem-48        48     384 GB     800 GB       1,068 GB   1,830 GB
n2-highmem-64        64     512 GB     1,056 GB     1,414 GB   2,422 GB
n2-highmem-80        80     640 GB     1,312 GB     1,760 GB   2,860 GB
m1-megamem-96        96     1,433 GB   2,898 GB     3,287 GB   3,287 GB
m1-ultramem-40       40     961 GB     1,954 GB     2,626 GB   2,900 GB
m1-ultramem-80       80     1,922 GB   3,876 GB     3,874 GB   3,874 GB
m1-ultramem-160      160    3,844 GB   7,720 GB     6,180 GB   6,180 GB
m2-megamem-416       416    5,888 GB   11,832 GB    8,667 GB   8,667 GB
m2-ultramem-208      208    5,888 GB   11,832 GB    8,667 GB   8,667 GB
m2-ultramem-416      416    11,766 GB  23,564 GB    15,766 GB  15,766 GB
Bastion/NAT gateway  1      3.75 GB    8 GB         0 GB       0 GB
SAP HANA Studio      1      3.75 GB    50 GB        0 GB       0 GB
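
As an illustration of adding the table values together, the following Python sketch totals the quota requirement for a hypothetical deployment of one n2-highmem-64 instance plus the bastion host and SAP HANA Studio VMs (the instance choice is an example; the values are copied from the table above):

```python
# Quota values from the table above, one row per VM:
# (vCPUs, memory GB, standard PD GB, SSD PD GB, balanced PD GB)
quotas = {
    "n2-highmem-64":       (64, 512.0, 1056, 1414, 2422),
    "Bastion/NAT gateway": (1, 3.75, 8, 0, 0),
    "SAP HANA Studio":     (1, 3.75, 50, 0, 0),
}

# Sum each column to get the total quota to request.
totals = [sum(col) for col in zip(*quotas.values())]
cpus, memory_gb, std_pd_gb, ssd_pd_gb, bal_pd_gb = totals
print(f"vCPUs={cpus}, memory={memory_gb} GB, standard PD={std_pd_gb} GB, "
      f"SSD PD={ssd_pd_gb} GB, balanced PD={bal_pd_gb} GB")
```

Compare each total against your current quota in the region where you plan to deploy.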

Note: Currently, the `m2-megamem-416` Compute Engine instance type is certified by SAP only if the data and log volumes are stored on NetApp Cloud Volumes Service for Google Cloud, so no persistent disk storage is required.

Licensing

Running SAP HANA on GCP requires you to bring your own license (BYOL).

For more information from SAP about managing your SAP HANA licenses, see License Keys for the SAP HANA Database.

Deployment architecture

SAP HANA on GCP supports single-host and multi-host architectures.

Single-host architecture

The following diagram shows the single-host architecture. In the diagram, notice both the deployment on GCP and the disk layout. You can use Cloud Storage to back up the local backups that are stored in /hanabackup. The /hanabackup mount should be sized equal to or greater than the data mount.
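
The backup-sizing rule (/hanabackup at least as large as the data mount) can be expressed as a quick check; the sizes below are hypothetical:

```python
def backup_mount_ok(data_gb: float, backup_gb: float) -> bool:
    """Return True if /hanabackup is at least as large as /hana/data."""
    return backup_gb >= data_gb

# Hypothetical volume sizes in GB:
print(backup_mount_ok(data_gb=834, backup_gb=1024))  # True
print(backup_mount_ok(data_gb=834, backup_gb=512))   # False
```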

Deployment Layout

Notice that the VM for SAP HANA has no public IP, which means it cannot be reached from an external network. Instead, the deployment uses a NAT bastion host and SAP HANA Studio for accessing SAP HANA. The SAP HANA Studio instance and the bastion host are deployed in the same subnetwork as the SAP HANA instance.

You provision a Windows host on which you install SAP HANA Studio, with the instance placed in the same subnetwork, and with firewall rules that enable you to connect to the SAP HANA database from SAP HANA Studio.

You deploy SAP HANA using a single-host, scale-up architecture that has the following components:

  • One Compute Engine instance for the SAP HANA database, with an 834 GB or larger SSD persistent disk, and a network bandwidth of up to 16 Gbps. The SSD persistent disk is partitioned and mounted to /hana/data and /hana/log to host the data and logs.

  • An optional, but recommended, subnetwork with a custom topology and IP ranges in the GCP region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork for SAP HANA.

  • An optional, but recommended, Internet gateway configured for outbound Internet access for your SAP HANA and other instances. This guide assumes you are using this gateway.

  • Compute Engine firewall rules restricting access to instances.

  • A persistent disk for backups of the SAP HANA database.

  • A Compute Engine VM, n1-standard-2, with a Windows OS to host SAP HANA Studio.

  • A Compute Engine VM, n1-standard-1, to serve as a bastion host.

  • Automated SAP HANA database installation with a configuration file that you create from a template.

  • SAP HANA Studio.

Deploying scale-up systems with Deployment Manager

Google Cloud provides Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA single-host scale-up systems.

The Deployment Manager scripts can be used for the following scenarios:

  • A standalone, single-host scale-up system
  • A single-host scale-up system in a Linux high-availability (HA) cluster

The Deployment Manager scripts deploy the VMs, persistent disks, SAP HANA, and, in the case of the Linux HA cluster, the required HA components.

The Deployment Manager scripts do not deploy the following system components:

  • The network and subnetwork
  • Firewall rules
  • NAT gateways, bastion hosts, or their VMs
  • SAP HANA Studio or its VM

Multi-host architecture

The following diagram shows a multi-host architecture on Google Cloud.

Multi-host architecture diagram.

As the workload demand increases, especially when using OLAP, a multi-host, scale-out architecture can distribute the load across all hosts.

The scale-out architecture consists of one master host, a number of worker hosts, and, optionally, one or more standby hosts. The hosts are interconnected through a network that supports sending data between hosts at rates of up to 16 Gbps.

Standby hosts support the SAP HANA host auto-failover fault recovery solution. For more information about host auto-failover on Google Cloud, see the SAP HANA High Availability and Disaster Recovery Planning Guide.

Disk structures for SAP HANA scale-out systems on Google Cloud

Except for standby hosts, each host has its own /hana/data, /hana/log, and, usually, /usr/sap volumes on SSD persistent disks, which provide consistent, high-IOPS I/O. The master host also serves as the NFS master for the /hana/shared and /hanabackup volumes, which are mounted on each worker and standby host.
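
As an illustration of the shared NFS volumes, the corresponding /etc/fstab entries on a worker or standby host might look like the following. The host name hana-master and the mount options are hypothetical and depend on your setup:

```
# Hypothetical NFS mounts of the master host's shared volumes
hana-master:/hana/shared  /hana/shared  nfs  defaults,nofail  0 0
hana-master:/hanabackup   /hanabackup   nfs  defaults,nofail  0 0
```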

For a standby host, the /hana/data and /hana/log volumes are not mounted until a takeover occurs.

High availability for SAP HANA scale-out systems on Google Cloud

The following features help ensure the high availability of an SAP HANA scale-out system:

  • Compute Engine live migration
  • Compute Engine automatic instance restart
  • SAP HANA host auto-failover with up to three SAP HANA standby hosts

For more information about high availability options on Google Cloud, see the SAP HANA High Availability and Disaster Recovery Planning Guide.

In the event of a live migration or automatic instance restart, the /hana/shared and /hanabackup volumes, which are backed by persistent disks that preserve their data, are back online as soon as the instance is up.

If you are using a standby host, then in the event of a failure, SAP HANA host auto-failover unmounts the /hana/data and /hana/log volumes from the failed host and mounts them on the standby host.

Components in an SAP HANA scale-out system on Google Cloud

A multi-host SAP HANA scale-out architecture on Google Cloud contains the following components:

  • 1 Compute Engine VM instance for each SAP HANA host in the system, including 1 master host, up to 15 worker hosts, and up to 3 optional standby hosts.

    Each VM uses the same Compute Engine machine type. For the machine types that are supported by SAP HANA, see VM types.

    Each VM must include SSD and HDD storage, mounted in the correct location.

  • A separately deployed NFS solution for sharing the /hana/shared and the /hanabackup volumes with the worker and standby hosts. You can use Filestore or another NFS solution.

  • An optional, but recommended, subnetwork with a custom topology and IP ranges in the GCP region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork if you prefer.

  • Optionally, an internet gateway configured for outbound internet access for your SAP HANA instance and other instances.

  • Optionally, a Compute Engine n1-standard-2 VM with the Windows operating system installed to host SAP HANA Studio.

  • Optionally, a Compute Engine n1-standard-1 VM for a bastion host.

  • Compute Engine firewall rules or other network access controls that restrict access to your Compute Engine instances while allowing communication between the instances and any other distributed or remote resources that your SAP HANA system requires.
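
The host-count limits described above can be captured in a small validation helper; this is an illustrative sketch, not an official constraint checker:

```python
def valid_scale_out_topology(workers: int, standbys: int) -> bool:
    """Check the multi-host limits described above: exactly one master host,
    at least 1 and up to 15 worker hosts, and up to 3 optional standby hosts."""
    return 1 <= workers <= 15 and 0 <= standbys <= 3

print(valid_scale_out_topology(workers=4, standbys=1))   # True
print(valid_scale_out_topology(workers=16, standbys=0))  # False
```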

Deploying scale-out systems with Deployment Manager

Google Cloud provides Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA multi-host scale-out systems.

The Deployment Manager scripts can deploy the VMs, persistent disks, and SAP HANA. The scripts also mount the NFS solution on the VMs.

The Deployment Manager scripts do not deploy the following system components:

  • The network and subnetwork
  • The NFS solution
  • Firewall rules
  • NAT gateways, bastion hosts, or their VMs
  • SAP HANA Studio or its VM

Support

For issues with Google Cloud infrastructure or services, contact Customer Care. You can find contact information on the Support Overview page in the Google Cloud Console. If Customer Care determines that a problem resides in your SAP systems, you are referred to SAP Support.

For SAP product-related issues, log your support request with SAP support. SAP evaluates the support ticket and, if it appears to be a Google Cloud infrastructure issue, transfers the ticket to the Google Cloud component BC-OP-LNX-GOOGLE or BC-OP-NT-GOOGLE.

Support requirements

Before you can receive support for SAP systems and the Google Cloud infrastructure and services that they use, you must meet the minimum support plan requirements.

For more information about the minimum support requirements for SAP on Google Cloud, see:

What's next