This guide provides an overview of what is required to run SAP HANA on Google Cloud, and provides details that you can use when planning the implementation of a new SAP HANA system.
For details about how to deploy SAP HANA on Google Cloud, see the deployment guide for your scenario:
- Single-host scale-up and multi-host scale-out deployments
- Scale-out deployments with host auto-failover
- Scale-up high-availability cluster configurations
- Scale-out high-availability cluster configurations
About SAP HANA on Google Cloud
SAP HANA is an in-memory, column-oriented, relational database that provides high-performance analytics and real-time data processing. You can take advantage of easy provisioning and the highly scalable, redundant infrastructure of Google Cloud to run your business-critical workloads. Google Cloud provides a set of physical assets, such as computers and hard disk drives, and virtual resources, such as Compute Engine virtual machines (VMs), located in Google data centers around the world.
When you deploy SAP HANA on Google Cloud, you deploy to virtual machines running on Compute Engine. Compute Engine VMs provide persistent disks, which function similarly to physical disks in a desktop or a server, but are automatically managed for you by Compute Engine to ensure data redundancy and optimized performance.
Google Cloud basics
Google Cloud consists of many cloud-based services and products. When running SAP products on Google Cloud, you mainly use the IaaS-based services offered through Compute Engine and Cloud Storage, as well as some platform-wide features, such as tools.
See the Google Cloud platform overview for important concepts and terminology. This guide duplicates some information from the overview for convenience and context.
For an overview of considerations that enterprise-scale organizations should take into account when running on Google Cloud, see the Google Cloud Architecture Framework.
Interacting with Google Cloud
Google Cloud offers three main ways to interact with the platform, and your resources, in the cloud:
- The Google Cloud console, which is a web-based user interface.
- The `gcloud` command-line tool, which provides a superset of the functionality that the Google Cloud console offers.
- Client libraries, which provide APIs for accessing services and managing resources. Client libraries are useful when building your own tools.
Google Cloud services
SAP deployments typically utilize some or all of the following Google Cloud services:
Service | Description |
---|---|
VPC Networking | Connects your VM instances to each other and to the Internet. Each instance is a member of either a legacy network with a single global IP range, or a recommended subnet network, where the instance is a member of a single subnetwork that is a member of a larger network. Note that a network cannot span Google Cloud projects, but a Google Cloud project can have multiple networks. |
Compute Engine | Creates and manages VMs with your choice of operating system and software stack. |
Persistent Disk and Hyperdisk | Network-attached block storage for your VMs. You can use Compute Engine Persistent Disk and Hyperdisk volumes as the block storage for your SAP HANA data, log, backup, and boot volumes. |
Google Cloud console | Web-based user interface for creating and managing Google Cloud resources, including Compute Engine VMs, disks, and networks. |
Cloud Storage | Object storage where you can keep your SAP HANA database backups for added durability and reliability, with replication. |
Cloud Monitoring | Provides visibility into the deployment, performance, uptime, and health of Compute Engine, networks, and persistent disks. Monitoring collects metrics, events, and metadata from Google Cloud and uses these to generate insights through dashboards, charts, and alerts. You can monitor the compute metrics at no cost through Monitoring. |
IAM | Provides unified control over permissions for Google Cloud resources. Control who can perform control-plane operations on your VMs, including creating, modifying, and deleting VMs and persistent disks, and creating and modifying networks. |
Pricing and quotas
You can use the pricing calculator to estimate your usage costs. For more pricing information, see Compute Engine pricing, Cloud Storage pricing, and Google Cloud's operations suite pricing.
Google Cloud resources are subject to quotas. If you plan to use high-CPU or high-memory machines, you might need to request additional quota. For more information, see Compute Engine resource quotas.
Compliance and sovereign controls
If your SAP workload must comply with data residency, access control, support personnel, or regulatory requirements, then plan to use Assured Workloads, a service that helps you run secure and compliant workloads on Google Cloud without compromising the quality of your cloud experience. For more information, see Compliance and sovereign controls for SAP on Google Cloud.
Resource requirements
Certified machine types for SAP HANA
For SAP HANA, SAP certifies only a subset of the machine types that are available from Google Cloud.
The machine types that SAP certifies for SAP HANA include both Compute Engine virtual machines (VMs) and Bare Metal Solution bare-metal machines.
Custom configurations of the general-purpose n1-highmem and n2-highmem VM types are also certified by SAP. For more information, see Certified custom machine types for SAP HANA.
For the operating systems that are certified for use with HANA on each machine type, see Certified operating systems for SAP HANA.
Some machine types are not available in all Google Cloud regions. To check the regional availability of a Compute Engine virtual machine, see Available regions & zones. For Bare Metal Solution machines that are certified for SAP HANA, see Regional availability of Bare Metal Solution machines for SAP HANA.
SAP lists the certified machine types for SAP HANA in the Certified and Supported SAP HANA Hardware Directory.
For more information about different Compute Engine VM types and their use cases, see machine types.
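To confirm where a given machine type is offered, you can also query Compute Engine directly with the gcloud CLI. A minimal sketch (the machine type name is just an example from this guide):

```shell
# List the zones where the m3-megamem-128 machine type is available.
gcloud compute machine-types list \
    --filter="name=m3-megamem-128" \
    --format="value(zone)"
```

This requires an authenticated gcloud session and a configured project.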
Certified Compute Engine VMs for SAP HANA
The following table shows all of the Compute Engine machine types that are certified by SAP for production use of SAP HANA.
The table does not include the machine types that SAP certifies for SAP Business One on SAP HANA. For the machine types that SAP certifies for SAP HANA with SAP Business One, see Certified SAP applications on Google Cloud.
Machine types | vCPUs | Memory | Operating system | CPU platform | Application type | Notes |
---|---|---|---|---|---|---|
N1 high-memory, general-purpose VM types | | | | | | |
n1-highmem-32 | 32 | 208 GB | RHEL, SUSE | Intel Broadwell | OLAP or OLTP | Block storage: Compute Engine persistent disks or, for scale up only, NetApp CVS-Performance. |
n1-highmem-64 | 64 | 416 GB | RHEL, SUSE | Intel Broadwell | OLAP or OLTP | Block storage: Compute Engine persistent disks or, for scale up only, NetApp CVS-Performance. |
n1-highmem-96 | 96 | 624 GB | RHEL, SUSE | Intel Skylake | OLAP or OLTP | Block storage: Compute Engine persistent disks or, for scale up only, NetApp CVS-Performance. |
N2 high-memory, general-purpose VM types | | | | | | |
n2-highmem-32 | 32 | 256 GB | RHEL, SUSE | Intel Ice Lake, Intel Cascade Lake | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
n2-highmem-48 | 48 | 384 GB | RHEL, SUSE | Intel Ice Lake, Intel Cascade Lake | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
n2-highmem-64 | 64 | 512 GB | RHEL, SUSE | Intel Ice Lake, Intel Cascade Lake | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
n2-highmem-80 | 80 | 640 GB | RHEL, SUSE | Intel Ice Lake, Intel Cascade Lake | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
n2-highmem-96 | 96 | 768 GB | RHEL, SUSE | Intel Ice Lake | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
n2-highmem-128 | 128 | 864 GB | RHEL, SUSE | Intel Ice Lake | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
C3 general-purpose VM types | | | | | | |
c3-standard-44 | 44 | 176 GB | RHEL, SUSE | Intel Sapphire Rapids | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
c3-highmem-44 | 44 | 352 GB | RHEL, SUSE | Intel Sapphire Rapids | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
c3-highmem-88 | 88 | 704 GB | RHEL, SUSE | Intel Sapphire Rapids | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
c3-highmem-176 | 176 | 1,408 GB | RHEL, SUSE | Intel Sapphire Rapids | OLAP or OLTP | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
M1 memory-optimized VM types | | | | | | |
m1-megamem-96 | 96 | 1,433 GB | RHEL, SUSE | Intel Skylake | OLAP or OLTP | OLAP: scale up or scale out up to 16 nodes. OLTP: scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or, for OLTP scale up only, NetApp CVS-Performance. |
m1-ultramem-40 | 40 | 961 GB | RHEL, SUSE | Intel Broadwell | OLTP only | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
m1-ultramem-80 | 80 | 1,922 GB | RHEL, SUSE | Intel Broadwell | OLTP only | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
m1-ultramem-160 | 160 | 3,844 GB | RHEL, SUSE | Intel Broadwell | OLAP or OLTP | 2 TB OLAP workloads certified for scale up and scale out up to 16 nodes. Up to 4 TB OLAP workloads supported with workload-based sizing. OLTP workloads certified for scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or, for OLTP scale up only, NetApp CVS-Performance. |
M2 memory-optimized VM types | | | | | | |
m2-megamem-416 | 416 | 5,888 GB | RHEL, SUSE | Intel Cascade Lake | OLAP or OLTP | OLAP workloads certified for scale up and scale out up to 16 nodes. OLTP workloads certified for scale up or scale out up to 4 nodes. Certification for OLTP scale out includes SAP S/4HANA; for scale out with S/4HANA, see SAP Note 2408419. Block storage: Compute Engine persistent disks, Hyperdisks, or, for scale up only, NetApp CVS-Performance. |
m2-ultramem-208 | 208 | 5,888 GB | RHEL, SUSE | Intel Cascade Lake | OLTP only | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
m2-ultramem-416 | 416 | 11,776 GB | RHEL, SUSE | Intel Cascade Lake | OLAP or OLTP | OLAP workloads certified with workload-based sizing for scale up or scale out up to 16 nodes. OLTP workloads certified for scale up or scale out up to 4 nodes. Certification for OLTP scale out includes SAP S/4HANA; for scale out with S/4HANA, see SAP Note 2408419. Block storage: Compute Engine persistent disks, Hyperdisks, or, for scale up only, NetApp CVS-Performance. |
m2-hypermem-416 | 416 | 8,832 GB | RHEL, SUSE | Intel Cascade Lake | OLTP only | OLTP workloads certified for scale up or scale out up to 4 nodes. Certification for OLTP scale out includes SAP S/4HANA; for scale out with S/4HANA, see SAP Note 2408419. Block storage: Compute Engine persistent disks, Hyperdisks, or, for scale up only, NetApp CVS-Performance. |
M3 memory-optimized VM types | | | | | | |
m3-ultramem-32 | 32 | 976 GB | RHEL, SUSE | Intel Ice Lake | OLTP only | Scale up only. Block storage: Compute Engine persistent disks or NetApp CVS-Performance. |
m3-ultramem-64 | 64 | 1,952 GB | RHEL, SUSE | Intel Ice Lake | OLTP only | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
m3-ultramem-128 | 128 | 3,904 GB | RHEL, SUSE | Intel Ice Lake | OLAP or OLTP | OLAP workloads certified with workload-based sizing for scale up. OLTP workloads certified for scale up. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
m3-megamem-64 | 64 | 976 GB | RHEL, SUSE | Intel Ice Lake | OLAP only | Scale up only. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance. |
m3-megamem-128 | 128 | 1,952 GB | RHEL, SUSE | Intel Ice Lake | OLAP only | Scale up or scale out up to 16 nodes. Block storage: Compute Engine persistent disks, Hyperdisks, or NetApp CVS-Performance (scale up only). |
Certified Bare Metal Solution machines for SAP HANA
The following table shows Bare Metal Solution machines that are certified by SAP for SAP HANA exclusively in a three-tier architecture.
To see which regions these certified machine types are available in, see Regional availability of Bare Metal Solution machines for SAP HANA.
Bare Metal Solution machine type | CPU cores | vCPUs | Sockets | Memory | CPU platform | Operating system | Application type | Notes |
---|---|---|---|---|---|---|---|---|
O2 memory-optimized Bare Metal Solution machine types | | | | | | | | |
o2-ultramem-672-metal | 336 | 672 | 12 | 18 TB | Intel Cascade Lake | RHEL, SUSE | OLTP only | Scale up in a three-tier architecture only. Standard sizing. |
o2-ultramem-896-metal | 448 | 896 | 16 | 24 TB | Intel Cascade Lake | RHEL, SUSE | OLTP only | Scale up in a three-tier architecture only. Standard sizing. |
Certified custom machine types for SAP HANA
The following table shows the Compute Engine custom machine types that are certified by SAP for production use of SAP HANA on Google Cloud.
SAP certifies only a subset of the custom machine types that are available from Compute Engine.
Custom machine types are subject to customization rules that are defined by Compute Engine. The rules differ depending on which machine type you are customizing. For complete customization rules, see Creating a custom VM instance.
Base machine type | vCPUs | Memory (GB) | Operating system | CPU platforms |
---|---|---|---|---|
N1-highmem | A number of vCPUs from 32 to 64 that is evenly divisible by 2 | 6.5 GB for each vCPU | RHEL, SUSE | Intel Broadwell |
N2-highmem (scale up only) | A number of vCPUs from 32 to 80 that is evenly divisible by 4 | Up to 8 GB per vCPU | RHEL, SUSE | Intel Ice Lake, Intel Cascade Lake |
Regional availability of Bare Metal Solution machines for SAP HANA
The following table shows the current Google Cloud regions that support SAP HANA on Bare Metal Solution.
Region | Location |
---|---|
europe-west3 | Frankfurt, Germany, Europe |
europe-west4 | Eemshaven, Netherlands, Europe |
us-central1 | Council Bluffs, Iowa, USA, North America |
us-east4 | Ashburn, Virginia, USA, North America |
If you do not see the region that you need in the preceding table, contact Google Cloud Sales.
Memory configuration
Your memory configuration options are determined by the Compute Engine machine type that you choose. For more information, see the Certified machine types for SAP HANA table.
Network configuration
Your Compute Engine VM network capabilities are determined by its machine family, and not by its network interface (NIC) or IP address.
Based on its machine type, your VM instance is capable of 2-32 Gbps network throughput. Certain machine types also support throughputs up to 100 Gbps, which requires the use of the Google Virtual NIC (gVNIC) interface type with a Tier_1 network configuration. The ability to achieve these throughput rates is further dependent on the traffic direction and the type of the destination IP address.
Compute Engine VM network interfaces are backed by redundant and resilient network infrastructure using physical and software-defined network components. These interfaces inherit the redundancy and resiliency of the underlying platform. Multiple virtual NICs can be used for traffic separation, but that offers no additional resilience or performance benefits.
A single NIC provides the performance needed for SAP HANA deployments on Compute Engine. However, your use case, security requirements, or preferences might call for additional interfaces to separate traffic, such as internet traffic, internal SAP HANA System Replication traffic, or other flows that benefit from specific network policy rules. We recommend that you use the traffic encryption offered by the application, and that you secure network access with a least-privilege firewall policy.
Consider the need for traffic separation early on as part of your network design and allocate additional NICs when you deploy VMs. You must attach each network interface to a different Virtual Private Cloud network. Your choice for the number of network interfaces depends on the level of isolation that you require, with up to 8 interfaces allowed for VMs with 8 vCPUs or more.
For example, you might define a Virtual Private Cloud network for your SAP HANA SQL application clients (SAP NetWeaver application servers, custom applications, etc.), and a separate network for inter-server traffic, such as SAP HANA System Replication. Consider that too many segments might complicate management and troubleshooting of network issues. If you change your mind later, then you can use Compute Engine machine images to recreate your VM instance while retaining all associated configuration, metadata and data.
For more information, see networking overview for VMs, multiple network interfaces and VM network bandwidth.
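As a sketch of this kind of traffic separation, the following command creates a VM with two gVNIC interfaces in separate VPC networks, one for SQL clients and one for SAP HANA System Replication. All resource names, the zone, and the machine type are illustrative placeholders, not values from this guide:

```shell
# Hypothetical example: a VM with two NICs in different VPC networks,
# each using the gVNIC interface type. Every name here is a placeholder.
gcloud compute instances create hana-vm-1 \
    --zone=us-central1-a \
    --machine-type=m3-megamem-128 \
    --network-interface=network=client-vpc,subnet=client-subnet,nic-type=GVNIC \
    --network-interface=network=hsr-vpc,subnet=hsr-subnet,nic-type=GVNIC
```

Remember that each `--network-interface` must reference a different VPC network, and that the NIC count you choose at creation time cannot be changed without recreating the VM.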
Certified operating systems for SAP HANA
The following table shows the Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) operating systems that are certified by SAP for production use with SAP HANA on Google Cloud.
Except where noted in the table, each operating system is supported with SAP HANA on all certified Compute Engine VM types.
For information about the current support status of each operating system and which operating systems are available from Google Cloud, see Operating system support for SAP HANA on Google Cloud.
For information from SAP about which operating systems SAP supports with SAP HANA on Google Cloud, go to Certified and Supported SAP HANA Hardware Directory, click on the required machine type, and then see Operating System.
The following table does not include:
- Certified operating system versions that are no longer in mainstream support.
- Operating system versions that are not specific to SAP.
Operating system | Version | Unsupported machine types |
---|---|---|
RHEL for SAP | 9.0 | |
RHEL for SAP | 8.8 | |
RHEL for SAP | 8.6 | |
RHEL for SAP | 8.4 | |
RHEL for SAP | 8.2 | |
RHEL for SAP | 8.1 | m3-ultramem, m3-megamem, c3-standard, c3-highmem |
RHEL for SAP | 7.9 | |
RHEL for SAP | 7.7 | m3-ultramem, m3-megamem, c3-standard, c3-highmem |
SLES for SAP | 15 SP5 | |
SLES for SAP | 15 SP4 | |
SLES for SAP | 15 SP3 | |
SLES for SAP | 15 SP2 | o2-ultramem |
SLES for SAP | 15 SP1 | m3-ultramem, m3-megamem, c3-standard, c3-highmem |
SLES for SAP | 12 SP5 | |
Custom operating system images
You can use a Linux image that Google Cloud provides and maintains (a public image) or you can provide and maintain your own Linux image (a custom image).
Use a custom image if the version of the SAP-certified operating system that you require is not available from Google Cloud as a public image. The following steps, which are described in detail in Importing Boot Disk Images to Compute Engine, summarize the procedure for using a custom image:
- Prepare your boot disk so it can boot within the Google Cloud Compute Engine environment and so you can access it after it boots.
- Create and compress the boot disk image file.
- Upload the image file to Cloud Storage and import the image to Compute Engine as a new custom image.
- Use the imported image to create a virtual machine instance and make sure it boots properly.
- Optimize the image and install the Linux Guest Environment so that your imported operating system image can communicate with the metadata server and use additional Compute Engine features.
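Assuming your prepared boot disk image file has already been uploaded to Cloud Storage, the import and test-boot steps might look like the following with the gcloud CLI. The bucket, file, image, and instance names are placeholders; note that `gcloud compute images import` also installs the guest environment for you:

```shell
# Hypothetical example: import an uploaded boot disk image as a custom
# image, then boot a test VM from it. All names are placeholders.
gcloud compute images import rhel8-sap-custom \
    --source-file=gs://my-bucket/rhel8-sap.vmdk \
    --os=rhel-8

gcloud compute instances create hana-image-test \
    --zone=us-central1-a \
    --image=rhel8-sap-custom
```

This requires an authenticated gcloud session, a configured project, and the permissions described in Importing Boot Disk Images to Compute Engine.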
After your custom image is ready, you can use it when creating VMs for your SAP HANA system.
If you are moving a RHEL operating system from an on-premises installation to Google Cloud, you need to add Red Hat Cloud Access to your Red Hat subscription. For more information, see Red Hat Cloud Access.
For more information about the operating system images that Google Cloud provides, see Images.
For more information about importing an operating system into Google Cloud as a custom image, see Importing Boot Disk Images to Compute Engine.
For more information about the operating systems that SAP HANA supports, see:
- Certified and Supported SAP HANA Hardware Directory
- SAP Note 2235581 - SAP HANA: Supported Operating Systems
OS clocksource on Compute Engine VMs
The default OS clocksource is kvm-clock for SLES and TSC for RHEL images.
Changing the OS clocksource is not necessary when SAP HANA is running on a Compute Engine VM. There is no difference in performance when using kvm-clock or TSC as the clocksource for Compute Engine VMs with SAP HANA.
If you need to change the OS clocksource to TSC, SSH into your VM and issue the following commands:
```shell
echo "tsc" | sudo tee /sys/devices/system/clocksource/*/current_clocksource
sudo cp /etc/default/grub /etc/default/grub.backup
sudo sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```
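To see what the `sed` command above does before running it against your real configuration, you can try it on a throwaway copy of a sample grub defaults file (the sample content here is illustrative):

```shell
# Demonstrate the sed edit on a scratch file: it inserts
# " clocksource=tsc" before the second double quote on the
# GRUB_CMDLINE_LINUX line, i.e. at the end of the kernel command line.
tmp=$(mktemp)
printf '%s\n' 'GRUB_TIMEOUT=5' 'GRUB_CMDLINE_LINUX="ro quiet"' > "$tmp"
sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' "$tmp"
grep GRUB_CMDLINE_LINUX "$tmp"
# prints: GRUB_CMDLINE_LINUX="ro quiet clocksource=tsc"
```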
Persistent disk storage
For persistent block storage, you can attach Compute Engine persistent disks or Hyperdisks when you create your VMs or add them to your VMs later.
Supported disk types
Compute Engine offers different types of persistent disks and Hyperdisks based on either solid-state drive (SSD) technology or standard hard disk drive (HDD) technology. Each type has different performance characteristics. Google Cloud manages the underlying hardware of the disks to ensure data redundancy and to optimize performance.
For performance reasons, the SAP HANA /hana/data
and /hana/log
volumes
require SSD-based persistent disks or Hyperdisks. SSD-based Compute Engine
persistent disks and Hyperdisks that are certified by SAP for use with SAP HANA
include the following types:
- Balanced persistent disk (`pd-balanced`)
  - Provides cost-effective and reliable block storage.
  - Use balanced persistent disk as the recommended solution for the `/hana/shared`, `/hanabackup`, and `/usr/sap` volumes, and the boot disk.
  - Supports PD Async Replication that you can use for cross-region active-passive disaster recovery. For more information, see Disaster recovery using PD Async Replication.
- SSD persistent disk (`pd-ssd`)
  - Provides reliable, high-performance block storage.
  - Supports PD Async Replication that you can use for cross-region active-passive disaster recovery. For more information, see Disaster recovery using PD Async Replication.
- Hyperdisk Extreme (`hyperdisk-extreme`)
  - Provides higher maximum IOPS and throughput options than SSD persistent disk.
  - You select the performance you need by provisioning IOPS, which also determines your throughput. For more information, see Throughput.
  - You can use Hyperdisk Extreme for the `/hana/log` and `/hana/data` volumes when you require the highest performance. For a list of the machine types that support Hyperdisk Extreme, see Machine type support.
  - To enable the best performance from Hyperdisk Extreme for SAP HANA, update your SAP HANA system properties as recommended in Hyperdisk Extreme performance.
- Extreme persistent disk (`pd-extreme`)
  - While extreme persistent disk is certified for use with SAP HANA, we recommend that you use Hyperdisk Extreme instead, which provides greater performance. If you want to use extreme persistent disks, then make sure to provision the disks in accordance with the information in Minimum sizes for SSD-based persistent disks and Hyperdisks.
For the boot disk and other SAP HANA volumes, you can use the following disk types:
- For the `/hana/shared`, `/hanabackup`, or `/usr/sap` volume, when you map it to its own disk, you can use a balanced persistent disk (`pd-balanced`).
- If you save your backups to a disk, then we recommend that you use a balanced persistent disk (`pd-balanced`). If you want to reduce costs, then you can use a standard HDD persistent disk (`pd-standard`) instead. Make sure that your VM type supports the disk type that you choose.
- When you create the host VM, use a balanced persistent disk for the boot disk.
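When you provision these disks with the gcloud CLI, the disk type and, for Hyperdisk Extreme, the IOPS are set at creation time. A sketch, using the m3-ultramem-64 sizes from the tables later in this guide (the disk names and zone are placeholders):

```shell
# Hypothetical example: a Hyperdisk Extreme volume for /hana/data with
# provisioned IOPS, and a balanced persistent disk for /hana/shared.
# Disk names and zone are placeholders.
gcloud compute disks create hana-data-disk \
    --zone=us-central1-a \
    --type=hyperdisk-extreme \
    --size=2342GB \
    --provisioned-iops=10000

gcloud compute disks create hana-shared-disk \
    --zone=us-central1-a \
    --type=pd-balanced \
    --size=1024GB
```

You then attach the disks to the VM with `gcloud compute instances attach-disk` before creating file systems and mount points.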
Supported disk layouts
The following figure shows the disk storage layout in suggested architectures for SAP HANA on Google Cloud.
In the preceding figure, the configuration on the left uses a split-disk layout.
The /hana/data
and /hana/log
volumes are on separate Hyperdisks and the
/hana/shared
and /usr/sap
volumes, which don't require as high a
performance, are on individual balanced persistent disks, which cost less than a
Hyperdisk Extreme.
The configuration on the right uses a unified-disk layout, where the
/hana/data
, /hana/log
, /hana/shared
, and /usr/sap
volumes are all
mounted on a single Hyperdisk Extreme.
Persistent disks and Hyperdisks are located independently from your VMs, so you can detach or move these disks to keep your data, even after you delete your VMs.
In the Google Cloud console, you can see the persistent disks and Hyperdisks that are attached to your VM instances under Additional disks on the VM instance details page for each VM instance.
For more information about the different types of Compute Engine Persistent Disk and Hyperdisk volumes, their performance characteristics, and how to work with them, see the following documentation:
- Storage options
- About Hyperdisks
- Block storage performance
- Other factors that affect performance
- Adding or resizing zonal persistent disks
- Creating persistent disk snapshots
- Migrating existing SAP HANA Persistent Disk volumes to Hyperdisk Extreme volumes
Minimum sizes for SSD-based persistent disks and Hyperdisks
When you size certain Compute Engine SSD-based persistent disks for SAP HANA, you need to account for not only the storage requirements of your SAP HANA instance, but also for the performance of the persistent disk.
Within limits, the performance of an SSD or balanced persistent disk increases as the size of the disk and the number of vCPUs increase. If an SSD or balanced persistent disk is too small, then it might not provide the performance that SAP HANA requires.
The performance of Hyperdisk Extreme is not affected by disk size. Its performance is determined by the IOPS that you provision. For information about the performance of Hyperdisk Extreme, see About Hyperdisks.
A 550 GB SSD persistent disk or a 943 GB balanced persistent disk provides a sustained throughput of 400 MB per second for reads and writes, which is the minimum throughput required for SAP HANA. For general information about persistent disk performance, see Block storage performance.
The following table shows the recommended sizes for SSD persistent disk (pd-ssd
),
balanced persistent disk (pd-balanced
), and Hyperdisk Extreme (hyperdisk-extreme
)
to meet SAP HANA performance requirements in a
production environment for each Compute Engine
machine type that is certified for SAP HANA. The minimum sizes for Hyperdisk
Extreme, which
are based solely on the amount of memory, are included in the table
for reference.
The sizes in the following table assume that you are mounting all the SAP HANA volumes on individual disks.
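The volume sizes in these tables can be derived from the standard SAP HANA volume-sizing rules: `/hana/data` is 1.2 × memory, `/hana/log` is memory/2 capped at 512 GB, `/hana/shared` is memory capped at 1,024 GB, and `/usr/sap` is a fixed 32 GB. A sketch of that arithmetic, assuming those formulas (some entries in the `pd-balanced` table are larger than the formula result because the disk must also meet the minimum size for throughput):

```shell
# Sketch: compute SAP HANA volume sizes (GB) from VM memory (GB),
# using the sizing formulas described above. Integer math floors the
# 1.2 x memory value, matching the tables in this guide.
hana_volume_sizes() {
  local mem_gb=$1
  local data=$(( mem_gb * 12 / 10 ))            # /hana/data = 1.2 x memory
  local log=$(( mem_gb / 2 ))                   # /hana/log = memory / 2 ...
  if (( log > 512 )); then log=512; fi          # ...capped at 512 GB
  local shared=$mem_gb                          # /hana/shared = memory ...
  if (( shared > 1024 )); then shared=1024; fi  # ...capped at 1,024 GB
  local usr_sap=32                              # /usr/sap is a fixed 32 GB
  echo "data=${data} log=${log} shared=${shared} usr_sap=${usr_sap} total=$(( data + log + shared + usr_sap ))"
}

hana_volume_sizes 512    # n2-highmem-64 (512 GB memory)
hana_volume_sizes 1922   # m1-ultramem-80 (1,922 GB memory)
```

For example, for 512 GB of memory this yields data=614, log=256, shared=512, and a total of 1,414 GB, matching the n2-highmem-64 row in the `pd-ssd` table.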
`pd-balanced` sizes

Compute Engine VM type | /hana/data size (GB) | /hana/log size (GB) | /hana/shared size (GB) | /usr/sap size (GB) | Total size (GB) |
---|---|---|---|---|---|
n1-highmem-32 | 599 | 104 | 208 | 32 | 943 |
n1-highmem-64 | 499 | 208 | 416 | 32 | 1,155 |
n1-highmem-96 | 748 | 312 | 624 | 32 | 1,716 |
n2-highmem-32 | 527 | 128 | 256 | 32 | 943 |
n2-highmem-48 | 460 | 192 | 384 | 32 | 1,068 |
n2-highmem-64 | 614 | 256 | 512 | 32 | 1,414 |
n2-highmem-80 | 768 | 320 | 640 | 32 | 1,760 |
n2-highmem-96 | 921 | 384 | 768 | 32 | 2,105 |
n2-highmem-128 | 1,036 | 432 | 864 | 32 | 2,364 |
c3-standard-44 | 647 | 88 | 176 | 32 | 943 |
c3-highmem-44 | 422 | 176 | 352 | 32 | 982 |
c3-highmem-88 | 844 | 352 | 704 | 32 | 1,932 |
c3-highmem-176 | 1,689 | 512 | 1,024 | 32 | 3,257 |
m1-megamem-96 | 1,719 | 512 | 1,024 | 32 | 3,287 |
m1-ultramem-40 | 1,153 | 480 | 961 | 32 | 2,626 |
m1-ultramem-80 | 2,306 | 512 | 1,024 | 32 | 3,874 |
m1-ultramem-160 | 4,612 | 512 | 1,024 | 32 | 6,180 |
m2-megamem-416 | 7,065 | 512 | 1,024 | 32 | 8,633 |
m2-ultramem-208 | 7,065 | 512 | 1,024 | 32 | 8,633 |
m2-ultramem-416 | 14,092 | 512 | 1,024 | 32 | 15,660 |
m2-hypermem-416 | 10,598 | 512 | 1,024 | 32 | 12,166 |
m3-ultramem-32 | 1,171 | 488 | 976 | 32 | 2,667 |
m3-ultramem-64 | 2,342 | 512 | 1,024 | 32 | 3,910 |
m3-ultramem-128 | 4,684 | 512 | 1,024 | 32 | 6,252 |
m3-megamem-64 | 1,171 | 488 | 976 | 32 | 2,667 |
m3-megamem-128 | 2,342 | 512 | 1,024 | 32 | 3,910 |
`pd-ssd` sizes

Compute Engine VM type | /hana/data size (GB) | /hana/log size (GB) | /hana/shared size (GB) | /usr/sap size (GB) | Total size (GB) |
---|---|---|---|---|---|
n1-highmem-32 | 249 | 104 | 208 | 32 | 593 |
n1-highmem-64 | 499 | 208 | 416 | 32 | 1,155 |
n1-highmem-96 | 748 | 312 | 624 | 32 | 1,716 |
n2-highmem-32 | 307 | 128 | 256 | 32 | 723 |
n2-highmem-48 | 460 | 192 | 384 | 32 | 1,068 |
n2-highmem-64 | 614 | 256 | 512 | 32 | 1,414 |
n2-highmem-80 | 768 | 320 | 640 | 32 | 1,760 |
n2-highmem-96 | 921 | 384 | 768 | 32 | 2,105 |
n2-highmem-128 | 1,036 | 432 | 864 | 32 | 2,364 |
c3-standard-44 | 254 | 88 | 176 | 32 | 550 |
c3-highmem-44 | 422 | 176 | 352 | 32 | 982 |
c3-highmem-88 | 844 | 352 | 704 | 32 | 1,932 |
c3-highmem-176 | 1,689 | 512 | 1,024 | 32 | 3,257 |
m1-megamem-96 | 1,719 | 512 | 1,024 | 32 | 3,287 |
m1-ultramem-40 | 1,153 | 480 | 961 | 32 | 2,626 |
m1-ultramem-80 | 2,306 | 512 | 1,024 | 32 | 3,874 |
m1-ultramem-160 | 4,612 | 512 | 1,024 | 32 | 6,180 |
m2-megamem-416 | 7,065 | 512 | 1,024 | 32 | 8,633 |
m2-ultramem-208 | 7,065 | 512 | 1,024 | 32 | 8,633 |
m2-ultramem-416 | 14,092 | 512 | 1,024 | 32 | 15,660 |
m2-hypermem-416 | 10,598 | 512 | 1,024 | 32 | 12,166 |
m3-ultramem-32 | 1,171 | 488 | 976 | 32 | 2,667 |
m3-ultramem-64 | 2,342 | 512 | 1,024 | 32 | 3,910 |
m3-ultramem-128 | 4,684 | 512 | 1,024 | 32 | 6,252 |
m3-megamem-64 | 1,171 | 488 | 976 | 32 | 2,667 |
m3-megamem-128 | 2,342 | 512 | 1,024 | 32 | 3,910 |
`hyperdisk-extreme` sizes
When you use Hyperdisk Extreme to host the /hana/data
and /hana/log
volumes, make sure to host the
/hana/shared
and /usr/sap
volumes on separate
balanced persistent disks. This is because the /hana/shared
and
/usr/sap
volumes don't require as high a performance as the
data and log volumes.
Compute Engine VM type | /hana/data size (GB) and IOPS | /hana/log size (GB) and IOPS | /hana/shared size (GB) | /usr/sap size (GB) | Total size (GB) |
---|---|---|---|---|---|
n2-highmem-80 | 768 GB with 10,000 IOPS | 320 GB with 10,000 IOPS | 640 | 32 | 1,760 |
n2-highmem-96 | 921 GB with 10,000 IOPS | 384 GB with 10,000 IOPS | 768 | 32 | 2,105 |
n2-highmem-128 | 1,036 GB with 10,000 IOPS | 432 GB with 10,000 IOPS | 864 | 32 | 2,364 |
c3-highmem-88 | 844 GB with 10,000 IOPS | 352 GB with 10,000 IOPS | 704 | 32 | 1,932 |
c3-highmem-176 | 1,689 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 3,257 |
m1-megamem-96 | 1,719 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 3,287 |
m1-ultramem-80 | 2,306 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 3,874 |
m1-ultramem-160 | 4,612 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 6,180 |
m2-megamem-416 | 7,065 GB with 14,130 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 8,633 |
m2-ultramem-208 | 7,065 GB with 14,130 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 8,633 |
m2-ultramem-416 | 14,092 GB with 28,184 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 15,660 |
m2-hypermem-416 | 10,598 GB with 21,196 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 12,166 |
m3-ultramem-64 | 2,342 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 3,910 |
m3-ultramem-128 | 4,684 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 6,252 |
m3-megamem-64 | 1,171 GB with 10,000 IOPS | 488 GB with 10,000 IOPS | 976 | 32 | 2,667 |
m3-megamem-128 | 2,342 GB with 10,000 IOPS | 512 GB with 10,000 IOPS | 1,024 | 32 | 3,910 |
Disk sizes for mounting all SAP HANA volumes on a single disk
The sizes in the following table assume that you are using a single disk to host the `/hana/data`, `/hana/log`, `/hana/shared`, and `/usr/sap` volumes.
Compute Engine VM type | pd-balanced size (GB) | pd-ssd size (GB) | hyperdisk-extreme size (GB) and IOPS |
---|---|---|---|
n1-highmem-32 | 943 | 593 | Not applicable (N/A) |
n1-highmem-64 | 1,155 | 1,155 | N/A |
n1-highmem-96 | 1,716 | 1,716 | N/A |
n2-highmem-32 | 943 | 723 | N/A |
n2-highmem-48 | 1,068 | 1,068 | N/A |
n2-highmem-64 | 1,414 | 1,414 | N/A |
n2-highmem-80 | 1,760 | 1,760 | 1,760 GB with 20,000 IOPS |
n2-highmem-96 | 2,105 | 2,105 | 2,105 GB with 20,000 IOPS |
n2-highmem-128 | 2,364 | 2,364 | 2,364 GB with 20,000 IOPS |
c3-standard-44 | 943 | 550 | N/A |
c3-highmem-44 | 982 | 982 | N/A |
c3-highmem-88 | 1,932 | 1,932 | 1,932 |
c3-highmem-176 | 3,257 | 3,257 | 3,257 |
m1-megamem-96 | 3,287 | 3,287 | 3,287 GB with 20,000 IOPS |
m1-ultramem-40 | 2,626 | 2,626 | N/A |
m1-ultramem-80 | 3,874 | 3,874 | 3,874 GB with 20,000 IOPS |
m1-ultramem-160 | 6,180 | 6,180 | 6,180 GB with 20,000 IOPS |
m2-megamem-416 | 8,633 | 8,633 | 8,633 GB with 24,130 IOPS |
m2-ultramem-208 | 8,633 | 8,633 | 8,633 GB with 24,130 IOPS |
m2-ultramem-416 | 15,660 | 15,660 | 15,660 GB with 38,184 IOPS |
m2-hypermem-416 | 12,166 | 12,166 | 12,166 GB with 31,196 IOPS |
m3-ultramem-32 | 2,667 | 2,667 | N/A |
m3-ultramem-64 | 3,910 | 3,910 | 3,910 GB with 20,000 IOPS |
m3-ultramem-128 | 6,252 | 6,252 | 6,252 GB with 20,000 IOPS |
m3-megamem-64 | 2,667 | 2,667 | 2,667 GB with 20,000 IOPS |
m3-megamem-128 | 3,910 | 3,910 | 3,910 GB with 20,000 IOPS |
Determining persistent disk or Hyperdisk size
Calculate the amount of persistent disk storage that you need for the SAP HANA volumes based on the amount of memory that your selected Compute Engine machine type contains.
The following guidance about disk sizes refers to the minimum sizes that Google Cloud and SAP recommend for your deployments to balance both performance and total cost of ownership. Disk sizes can be increased up to the limit that the underlying disk types support. For information about the required minimum disk sizes, see Minimum sizes for SSD-based persistent disks.
Persistent disk size requirements for scale-up systems
For SAP HANA scale-up systems, use the following formulas for each volume:
- `/hana/data`: 1.2 x memory
- `/hana/log`: either 0.5 x memory or 512 GB, whichever is smaller
- `/hana/shared`: either 1 x memory or 1,024 GB, whichever is smaller
- `/usr/sap`: 32 GB
- `/hanabackup`: 2 x memory, optional allocation
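As a quick sanity check, the scale-up formulas can be sketched in code. The following Python helper is illustrative only; the function name and return shape are our own, not part of any Google Cloud tooling:

```python
def scale_up_volume_sizes_gb(memory_gb: float) -> dict:
    """Apply the scale-up sizing formulas to a machine's memory, in GB."""
    return {
        "/hana/data": 1.2 * memory_gb,
        "/hana/log": min(0.5 * memory_gb, 512),
        "/hana/shared": min(1.0 * memory_gb, 1024),
        "/usr/sap": 32,
        "/hanabackup": 2 * memory_gb,  # optional allocation
    }

# Example: n2-highmem-32 has 256 GB of memory, giving roughly
# 307 GB data, 128 GB log, 256 GB shared, and 32 GB /usr/sap.
print(scale_up_volume_sizes_gb(256))
```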
Persistent disk size requirements for scale-out systems
For SAP HANA scale-out systems, use the same formulas as for SAP HANA scale-up systems for the `/hana/data`, `/hana/log`, and `/usr/sap` volumes. For the `/hana/shared` volume, calculate the persistent disk or Hyperdisk size based on the number of worker hosts in your deployment: for every four worker hosts, increase the disk size by 1 x memory or 1 TB, whichever is smaller. For example:
- From 1 to 4 worker hosts: 1 x memory, or 1 TB, whichever is smaller
- From 5 to 8 worker hosts: 2 x memory, or 2 TB, whichever is smaller
- From 9 to 12 worker hosts: 3 x memory, or 3 TB, whichever is smaller
- From 13 to 16 worker hosts: 4 x memory, or 4 TB, whichever is smaller
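The `/hana/shared` rule can be written the same way (an illustrative sketch; the helper name is ours):

```python
import math

def hana_shared_size_gb(memory_gb: float, worker_hosts: int) -> float:
    """For every block of four worker hosts, add min(memory, 1 TB)."""
    blocks = math.ceil(worker_hosts / 4)
    return blocks * min(memory_gb, 1024)

# 6 worker hosts on a machine with 1,433 GB of memory falls in the
# 5-to-8-workers band: 2 x min(1433, 1024) = 2,048 GB.
print(hana_shared_size_gb(1433, 6))
```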
To determine your overall storage quota requirements for SAP HANA scale-out systems, total the disk sizes for each type of disk that is used with all of the hosts in the scale-out system. For example, if you put `/hana/data` and `/hana/log` on `pd-ssd` persistent disks, but `/hana/shared` and `/usr/sap` on `pd-balanced` persistent disks, then you need separate totals for `pd-ssd` and `pd-balanced` so that you can request separate quotas.
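For instance, the per-disk-type totals might be tallied like this (a sketch with hypothetical host sizes in GB, not a sizing tool):

```python
# Hypothetical two-host layout: /hana/data and /hana/log on pd-ssd,
# /hana/shared and /usr/sap on pd-balanced, all sizes in GB.
host_disks = [
    {"pd-ssd": 1719 + 512, "pd-balanced": 1024 + 32},  # host 1
    {"pd-ssd": 1719 + 512, "pd-balanced": 1024 + 32},  # host 2
]

totals: dict[str, int] = {}
for disks in host_disks:
    for disk_type, size in disks.items():
        totals[disk_type] = totals.get(disk_type, 0) + size

# One quota request per disk type.
print(totals)
```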
For an SAP HANA scale-out system with host auto-failover, you only need to calculate the persistent disk size for the master and worker hosts. The standby hosts do not have their own `/hana/data`, `/hana/log`, and `/usr/sap` volumes. If a host fails, then SAP HANA automatic failover unmounts the `/hana/data`, `/hana/log`, and `/usr/sap` volumes from the failed host and mounts them on a standby host. The `/hana/shared` and `/hanabackup` volumes for a standby host are mounted on a separately deployed NFS solution.
Allocating additional persistent disk storage
Select a persistent disk or Hyperdisk size that is no smaller than the minimum size that is listed for your persistent disk or Hyperdisk type in Minimum sizes for SSD and balanced persistent disks and Hyperdisks.
If you are using SSD or balanced persistent disks, then the minimum size might be determined by SAP HANA performance requirements rather than SAP HANA storage requirements.
For example, if you are running SAP HANA on an `n2-highmem-32` VM instance, which has 256 GB of memory, then your total storage requirement for the SAP HANA volumes is 723 GB: 307 GB for the data volume, 128 GB for the log volume, 256 GB for the shared volume, and 32 GB for the `/usr/sap` volume. However, if you use a balanced persistent disk, then the required minimum size is 943 GB, where the additional 220 GB is allocated to the data volume to meet the required performance. Therefore, if you use an `n2-highmem-32` VM instance with balanced persistent disks to run SAP HANA, then you need to provision persistent disk storage of 943 GB or more.
Apply any additional persistent disk storage to the `/hana/data` volume.
For information from SAP about sizing for SAP HANA, see Sizing SAP HANA.
Hyperdisk Extreme performance
Hyperdisk Extreme provides higher maximum IOPS and throughput options for the `/hana/log` and `/hana/data` volumes than the other SSD-based persistent disks. For more information about provisioning IOPS for Hyperdisk Extreme, see Throughput.
Unlike with other SSD-based persistent disks, when you use Hyperdisk Extreme with SAP HANA, you don't need to worry about performance when you size the Hyperdisk. The sizing for Hyperdisk Extreme is based solely on the storage requirements of SAP HANA. For more information about sizing persistent disks or Hyperdisks, see Determining persistent disk size.
While using Hyperdisk Extreme with SAP HANA, to enable the best performance, we recommend that you update your SAP HANA system properties as follows:
- Update your `global.ini` file:
  - In the `fileio` section, set `num_completion_queues = 12`
  - In the `fileio` section, set `num_submit_queues = 12`
- Update your `indexserver.ini` file:
  - In the `parallel` section, set `tables_preloaded_in_parallel = 32`
  - In the `global` section, set `load_table_numa_aware = true`
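Expressed as configuration file fragments, these recommendations correspond to entries like the following (a sketch showing only the listed parameters; the rest of each file stays unchanged):

```ini
; global.ini
[fileio]
num_completion_queues = 12
num_submit_queues = 12

; indexserver.ini
[parallel]
tables_preloaded_in_parallel = 32

[global]
load_table_numa_aware = true
```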
The number of IOPS that you provision when you create a Hyperdisk Extreme determines its maximum throughput. The following formula can be used as a starting point. It provides a minimum of 2500 MB/s throughput (256 KB per IOPS * 10,000 IOPS) and more for larger machine types with larger disks.
- When using the default deployment with separate disks for `/hana/log` and `/hana/data`:
  - IOPS for the data disk: `maximum(10,000, size of data disk in GB * 2)`
  - IOPS for the log disk: `maximum(10,000, size of log disk in GB * 2)`
- When a single disk is used for `/hana/data`, `/hana/log`, `/hana/shared`, and `/usr/sap`:
  - IOPS for the disk: `maximum(10,000, size of data disk in GB * 2) + maximum(10,000, size of log disk in GB * 2)`
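This starting-point formula can be sketched as follows (an illustrative helper only; the names are ours, not part of any Google Cloud tooling):

```python
def hyperdisk_iops(data_gb: int, log_gb: int, single_disk: bool = False) -> dict:
    """Starting-point IOPS: maximum(10,000, disk size in GB * 2) per volume."""
    data_iops = max(10_000, data_gb * 2)
    log_iops = max(10_000, log_gb * 2)
    if single_disk:
        # One disk for /hana/data, /hana/log, /hana/shared, and /usr/sap.
        return {"disk": data_iops + log_iops}
    return {"data": data_iops, "log": log_iops}

# m2-ultramem-416: a 14,092 GB data disk works out to 28,184 IOPS,
# while a 512 GB log disk stays at the 10,000 IOPS floor.
print(hyperdisk_iops(14092, 512))
```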
The maximum number of IOPS that you can provision can differ depending on the machine type you are using. For a list of the machine types that support Hyperdisk Extreme, as well as the maximum IOPS and throughput that Hyperdisk Extreme can provide with each machine type, see Machine type support.
Persistent disks and Hyperdisks deployed by the deployment automation scripts
When you deploy an SAP HANA system using the Terraform configurations that Google Cloud provides, the deployment script allocates persistent disks or Hyperdisks for the SAP volumes as follows:
- By default, separate disks are deployed for each of the following directories: `/hana/data`, `/hana/log`, `/hana/shared`, and `/usr/sap`. Optionally, you can choose to deploy a single-disk layout where a single persistent disk or Hyperdisk hosts these SAP directories. Also, for SAP HANA scale-out deployments, the `/hana/shared` directory is hosted by an NFS solution.
- Optionally, a disk for the `/hanabackup` directory.
The following example shows how Terraform maps the volumes for SAP HANA on a Compute Engine `n2-highmem-32` VM, which has 256 GB of memory.

    hana-ssd-example:~ # lvs
      LV     VG             Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      data   vg_hana_data   -wi-ao---- 308.00g
      log    vg_hana_log    -wi-ao---- 128.00g
      shared vg_hana_shared -wi-ao---- 256.00g
      usrsap vg_hana_usrsap -wi-ao----  32.00g
      backup vg_hanabackup  -wi-ao---- 512.00g
The sizes of your volumes for the same machine type might differ slightly from what is shown in this example.
When you use the Deployment Manager templates that Google Cloud provides for SAP HANA, or when you opt to deploy a single-disk layout using the Terraform configurations, the deployment script maps the SAP HANA `/hana/data`, `/hana/log`, `/usr/sap`, and `/hana/shared` directories each to its own logical volume for easy resizing, and maps them to the SSD-based persistent disk or Hyperdisk in a single volume group. Terraform or Deployment Manager maps the `/hanabackup` directory to a logical volume in a separate volume group, which it then maps to a balanced persistent disk (`pd-balanced`).
Optional persistent disk storage for backups
When storing SAP HANA backups on a disk, we recommend that you use a balanced persistent disk (`pd-balanced`). If you want to reduce costs, you can use a standard HDD persistent disk (`pd-standard`). However, use a balanced persistent disk when higher throughput or concurrency is needed.
The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput as well as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it easier to recover your database if necessary.
To make the SAP HANA backups available as a regional resource for disaster recovery, you can use Compute Engine persistent disk snapshots. You can schedule snapshots to regularly and automatically back up your persistent disk. For more information, see Persistent disk snapshots.
If you use SAP HANA dynamic tiering, then the backup storage must be large enough to hold both the in-memory data and the data that is managed on disk by the dynamic tiering server.
You can use other mechanisms for storing SAP HANA backups. If you use the Cloud Storage Backint agent for SAP HANA, then you can back up SAP HANA directly to a Cloud Storage bucket, which makes the use of a persistent disk for storing backups optional.
SAP HANA dynamic tiering
SAP HANA dynamic tiering is certified by SAP for use in production environments on Google Cloud. SAP HANA dynamic tiering extends SAP HANA data storage by storing data that is infrequently accessed on disk instead of in memory.
For more information, see SAP HANA Dynamic Tiering on Google Cloud.
SAP HANA Fast Restart option
For SAP HANA 2.0 SP04 and later, Google Cloud strongly recommends the SAP HANA Fast Restart option.
SAP HANA Fast Restart reduces restart time in the event that SAP HANA terminates but the operating system remains running. To reduce the restart time, SAP HANA leverages the SAP HANA persistent memory functionality to preserve MAIN data fragments of column store tables in DRAM that is mapped to the `tmpfs` file system.
Additionally, on VMs in the M2 and M3 families of Compute Engine memory-optimized machine types, SAP HANA Fast Restart improves recovery time if uncorrectable errors occur in memory. For more information, see Memory-error recovery with Fast Restart on Compute Engine VMs.
For information about configuring SAP HANA Fast Restart, see the configuration information in the deployment guide for your SAP HANA deployment scenario. For example, for a scale-up or scale-out without standby nodes deployment scenario, see Configuring SAP HANA Fast Restart.
Required OS settings for SAP HANA Fast Restart
To use SAP HANA Fast Restart, your operating system must be tuned as required by SAP.
If you use the Terraform configuration files or Deployment Manager templates that Google Cloud provides, then the kernel settings are set for you.
If you don't use the deployment files that Google Cloud provides, SAP provides guidance for configuring both the RHEL and SLES operating systems for SAP HANA. For SAP HANA Fast Restart, pay particular attention to setting `numa_balancing` and `transparent_hugepage` correctly. If you use RHEL, use the `sap-hana` tuned profile, if it is available.
For the configuration steps, see:
- SAP note 2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
- SAP note 2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
- SAP note 3108302 - SAP HANA DB: Recommended OS Settings for RHEL 9
If you use SLES, use the saptune
tool from SUSE to apply the required
configuration. To apply all of the recommended SAP HANA settings, including
both of the preceding kernel parameters, specify the following
saptune
command:
saptune solution apply HANA
For more information about configuring SLES for SAP HANA, see:
- SAP note 2205917 - SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications 12
- SAP note 2684254 - SAP HANA DB: Recommended OS settings for SLES 15 / SLES for SAP Applications 15
Memory-error recovery with Fast Restart on Compute Engine VMs
Enabling SAP HANA Fast Restart on VMs in the M2 and M3 families of Compute Engine memory-optimized machine types reduces the time it takes SAP HANA to recover from uncorrectable memory errors.
By leveraging Intel processor capabilities, the M2 and M3 machine types can keep running when uncorrectable errors occur in the memory subsystem. If SAP HANA Fast Restart is enabled when the memory error occurs, the affected SAP HANA process restarts, but the whole database doesn't need to be reloaded, only the affected file block.
Machine types that support memory-error recovery
The following Compute Engine machine types support memory-error recovery:
m3-ultramem-32
m3-ultramem-64
m3-ultramem-128
m3-megamem-64
m3-megamem-128
m2-ultramem-208
m2-ultramem-416
m2-megamem-416
m2-hypermem-416
Required operating systems for memory-error recovery
With the required kernel patches, the following operating systems support memory-error recovery with SAP HANA Fast Restart:
- SUSE Linux Enterprise Server (SLES) for SAP, 12 SP3 or later.
- Included in Compute Engine public images with a build date of v202103* or later.
- If you need to apply the latest kernel patches to an existing deployment, follow your standard update process. For example, issue the following commands:
  - `sudo zypper refresh`
  - `sudo zypper update`
- Red Hat Enterprise Linux (RHEL) for SAP, 8.4 or later.
File server options
The file server options for SAP HANA on Google Cloud include Filestore and NetApp Cloud Volumes Service for Google Cloud.
For more information about all of the file server options for SAP on Google Cloud, see File sharing solutions for SAP on Google Cloud.
Filestore
For the `/hana/shared` volume in a single-zone scale-out configuration, we suggest using the Filestore Basic service tier, as it is designed for zonal resources. For scenarios where additional resilience is required, you can use Filestore Enterprise. For more information, see Components in an SAP HANA scale-out system on Google Cloud.
NetApp Cloud Volumes Service for Google Cloud
NetApp Cloud Volumes Service for Google Cloud is a fully-managed, cloud-native data service platform that you can use to create an NFS file system for SAP HANA scale-up systems on all Compute Engine instance types that are certified for SAP HANA.
NetApp Cloud Volumes Service offers two service types: CVS and CVS-Performance. The CVS-Performance service type offers different service levels. You must use the NetApp Cloud Volumes Service CVS-Performance (NetApp CVS-Performance) service type and the Extreme service level with SAP HANA.
Support for NetApp CVS-Performance in scale-out deployments is limited to specific Compute Engine instance types, as noted in the table in Certified machine types for SAP HANA.
With NetApp CVS-Performance, you can place all of the SAP HANA directories, including `/hana/data` and `/hana/log`, in shared storage, instead of using Compute Engine persistent disks. With most other shared storage systems, you can place only the `/hana/shared` directory in shared storage.
SAP support for NetApp CVS-Performance on Google Cloud is listed in the Certified and Supported SAP HANA Hardware Directory.
Regional availability of NetApp CVS-Performance for SAP HANA
Your NetApp CVS-Performance volumes must be in the same region as your host VM instances.
NetApp CVS-Performance support for SAP HANA is not available in every region where NetApp CVS-Performance itself is available.
You can use NetApp CVS-Performance with SAP HANA in the following Google Cloud regions:
Region | Location |
---|---|
europe-west4 | Eemshaven, Netherlands, Europe |
us-east4 | Ashburn, Northern Virginia, USA |
us-west2 | Los Angeles, California, USA |
If you are interested in running SAP HANA with NetApp CVS-Performance in a Google Cloud region that is not listed above, contact sales.
NFS protocol support
NetApp CVS-Performance supports the NFSv3 and NFSv4.1 protocols with SAP HANA on Google Cloud.
NFSv3 is recommended for volumes that are configured to allow multiple TCP connections. NFSv4.1 is not yet supported with multiple TCP connections.
Volume requirements for NetApp Cloud Volumes Service with SAP HANA
For the `/hana/data` and `/hana/log` volumes, the Extreme service level of NetApp CVS-Performance is required. You can use the Premium service level for the `/hana/shared` directory if it is in a separate volume from the `/hana/data` and `/hana/log` directories.
For the best performance with SAP HANA systems that are larger than 1 TB, create separate volumes for `/hana/data`, `/hana/log`, and `/hana/shared`.
To meet SAP HANA performance requirements, the following minimum volume sizes are required when running SAP HANA with NetApp CVS-Performance:
Directory | Minimum size |
---|---|
| 1 TB |
| 2.5 TB |
| 4 TB |
Adjust the size of your volumes to meet your throughput requirements. The minimum throughput rate for the Extreme service level is 128 MB per second for each 1 TB, so the throughput for 4 TB of disk space is 512 MB per second.
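That throughput rule is simple enough to check in code (an illustrative sketch; the function name is ours):

```python
def extreme_throughput_mb_per_s(volume_tb: float) -> float:
    """Extreme service level: 128 MB/s of throughput per 1 TB provisioned."""
    return 128 * volume_tb

print(extreme_throughput_mb_per_s(4.0))  # 512.0 MB/s for a 4 TB volume
```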
Provisioning more disk space for the `/hana/data` volume can reduce startup times. For the `/hana/data` volume, we recommend either 1.5 times the size of your memory or 4 TB, whichever is greater.
The minimum size for the `/hanabackup` volume is determined by your backup strategy. You can also use the Cloud Storage Backint agent for SAP HANA to back up the database directly into Cloud Storage.
Deploying an SAP HANA system with NetApp CVS-Performance
To deploy NetApp CVS-Performance with SAP HANA on Google Cloud, you need to deploy your VMs and install SAP HANA first. You can use the Terraform configuration files or Deployment Manager templates that Google Cloud provides to deploy the VMs and SAP HANA, or you can create the VM instances and install SAP HANA manually.
If you use the Terraform configuration files or Deployment Manager templates, then the VMs are deployed with the `/hana/data` and `/hana/log` volumes mapped to persistent disks. After you mount the NetApp CVS-Performance volumes to the VMs, you need to copy the contents of the persistent disks over, as described in the following steps.
To deploy SAP HANA with NetApp CVS-Performance by using the deployment files that Google Cloud provides:
Deploy SAP HANA with persistent disks by following the instructions of your choice:
Create your NetApp CVS-Performance volumes. For complete NetApp instructions, see NetApp Cloud Volumes Service for Google Cloud documentation.
Mount NetApp CVS-Performance to a temporary mount point by using the `mount` command with the following settings:

    mount -t nfs -o options server:path mountpoint

For options, use the following settings:

    rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

The option `vers=3` indicates NFSv3. The option `nconnect=16` specifies support for multiple TCP connections.

Stop SAP HANA and any related services that are using the attached persistent disk volumes.
Copy the contents of the persistent disk volumes to the corresponding NetApp CVS-Performance volumes.
Detach the persistent disks.
Remount the NetApp CVS-Performance volumes to the permanent mount points by updating the `/etc/fstab` file with the following settings:

    server:path /mountpoint nfs options 0 0

For options, use the following settings:

    rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

For more information about updating the `/etc/fstab` file, see the `nfs` page in the Linux File Formats manual.

For the best performance, update the `fileio` category in the SAP HANA `global.ini` file with the following suggested settings:

Parameter | Value |
---|---|
`async_read_submit` | `on` |
`async_write_submit_active` | `on` |
`async_write_submit_blocks` | `all` |
`max_parallel_io_requests` | `128` |
`max_parallel_io_requests[data]` | `128` |
`max_parallel_io_requests[log]` | `128` |
`num_completion_queues` | `4` |
`num_completion_queues[data]` | `4` |
`num_completion_queues[log]` | `4` |
`num_submit_queues` | `8` |
`num_submit_queues[data]` | `8` |
`num_submit_queues[log]` | `8` |
Restart SAP HANA.
After confirming that everything works as expected, delete the persistent disks to avoid being charged for them.
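Collected into configuration-file form, the suggested fileio settings from the steps above would look like the following sketch of a `global.ini` fragment (only the listed parameters are shown; the rest of your file stays unchanged):

```ini
[fileio]
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all
max_parallel_io_requests = 128
max_parallel_io_requests[data] = 128
max_parallel_io_requests[log] = 128
num_completion_queues = 4
num_completion_queues[data] = 4
num_completion_queues[log] = 4
num_submit_queues = 8
num_submit_queues[data] = 8
num_submit_queues[log] = 8
```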
User identification and resource access
When planning security for an SAP deployment on Google Cloud, you must identify:
- The user accounts and applications that need access to the Google Cloud resources in your Google Cloud project
- The specific Google Cloud resources in your project that each user needs to access
You must add each user to your project by adding their Google account ID to the project as a principal. For an application program that uses Google Cloud resources, you create a service account, which provides a user identity for the program within your project.
Compute Engine VMs have their own service account. Any programs that run on a VM can use the VM service account, as long as the VM service account has the resource permissions that the program needs.
After you identify the Google Cloud resources that each user needs to use, you grant each user permission to use each resource by assigning resource-specific roles to the user. Review the predefined roles that IAM provides for each resource, and assign roles to each user that provide just enough permissions to complete the user's tasks or functions and no more.
If you need more granular or restrictive control over permissions than the predefined IAM roles provide, you can create custom roles.
For more information about the IAM roles that SAP programs need on Google Cloud, see Identity and access management for SAP programs on Google Cloud.
For an overview of identity and access management for SAP on Google Cloud, see Identity and access management overview for SAP on Google Cloud.
Pricing and quota considerations for SAP HANA
You are responsible for the costs incurred for using the resources created by following this deployment guide. Use the pricing calculator to help estimate your actual costs.
Quotas
SAP HANA requires more CPU and memory than many workloads on Google Cloud. If you have a new Google Cloud account, or if you haven't yet requested an increased quota, then you need to request a quota increase before you can deploy SAP HANA.
The following table shows quota values for single-host, scale-up SAP HANA systems by VM instance type.
For a scale-out SAP HANA system or multiple scale-up systems, you need to include the total resource amounts for all systems. For guidance on determining the storage requirements for scale-out systems, see Determining persistent disk size.
View your existing quota, and compare with your resource (CPU, memory, and storage) requirements to see what increase to ask for. You can then request a quota-limit increase.
While extreme persistent disk (`pd-extreme`) is still certified for use with SAP HANA, we recommend that you use Hyperdisk Extreme instead, which provides greater performance. If you want to use extreme persistent disks, then you must provision them using the Hyperdisk Extreme sizes.
Instance type | vCPU | Memory | Standard PD (Optional) | SSD PD | Balanced PD | Hyperdisk Extreme |
---|---|---|---|---|---|---|
n1-highmem-32 | 32 | 208 GB | 448 GB | 593 GB | 943 GB | Not applicable (N/A) |
n1-highmem-64 | 64 | 416 GB | 864 GB | 1,155 GB | 1,155 GB | N/A |
n1-highmem-96 | 96 | 624 GB | 1,280 GB | 1,716 GB | 1,716 GB | N/A |
n2-highmem-32 | 32 | 256 GB | 544 GB | 723 GB | 943 GB | N/A |
n2-highmem-48 | 48 | 384 GB | 800 GB | 1,068 GB | 1,068 GB | N/A |
n2-highmem-64 | 64 | 512 GB | 1,056 GB | 1,414 GB | 1,414 GB | N/A |
n2-highmem-80 | 80 | 640 GB | 1,312 GB | 1,760 GB | 1,760 GB | 1,760 GB |
n2-highmem-96 | 96 | 768 GB | 1,568 GB | 2,105 GB | 2,105 GB | 2,105 GB |
n2-highmem-128 | 128 | 864 GB | 1,760 GB | 2,364 GB | 2,364 GB | 2,364 GB |
c3-standard-44 | 44 | 176 GB | N/A | 507 GB | 507 GB | N/A |
c3-highmem-44 | 44 | 352 GB | N/A | 982 GB | 982 GB | N/A |
c3-highmem-88 | 88 | 704 GB | N/A | 1,932 GB | 1,932 GB | 1,932 GB |
c3-highmem-176 | 176 | 1,408 GB | N/A | 3,257 GB | 3,257 GB | 3,257 GB |
m1-megamem-96 | 96 | 1,433 GB | 2,898 GB | 3,287 GB | 3,287 GB | 3,287 GB |
m1-ultramem-40 | 40 | 961 GB | 1,954 GB | 2,626 GB | 2,626 GB | N/A |
m1-ultramem-80 | 80 | 1,922 GB | 3,876 GB | 3,874 GB | 3,874 GB | 3,874 GB |
m1-ultramem-160 | 160 | 3,844 GB | 7,720 GB | 6,180 GB | 6,180 GB | 6,180 GB |
m2-megamem-416 | 416 | 5,888 GB | 11,832 GB | 8,633 GB | 8,633 GB | 8,633 GB |
m2-ultramem-208 | 208 | 5,888 GB | 11,832 GB | 8,633 GB | 8,633 GB | 8,633 GB |
m2-ultramem-416 | 416 | 11,766 GB | 23,564 GB | 15,660 GB | 15,660 GB | 15,660 GB |
m2-hypermem-416 | 416 | 8,832 GB | 17,696 GB | 12,166 GB | 12,166 GB | 12,166 GB |
m3-ultramem-32 | 32 | 976 GB | N/A | 2,667 GB | 2,667 GB | N/A |
m3-ultramem-64 | 64 | 1,952 GB | N/A | 3,910 GB | 3,910 GB | 3,910 GB |
m3-ultramem-128 | 128 | 3,904 GB | N/A | 6,252 GB | 6,252 GB | 6,252 GB |
m3-megamem-64 | 64 | 976 GB | N/A | 2,667 GB | 2,667 GB | 2,667 GB |
m3-megamem-128 | 128 | 1,952 GB | N/A | 3,910 GB | 3,910 GB | 3,910 GB |
Licensing
Running SAP HANA on Google Cloud requires you to bring your own license (BYOL).
For more information from SAP about managing your SAP HANA licenses, see License Keys for the SAP HANA Database.
Deployment architectures
On Google Cloud, you can deploy SAP HANA in scale-up and scale-out architectures.
Scale-up architecture
The following diagram shows the scale-up architecture. In the diagram, notice both the deployment on Google Cloud and the disk layout. You can use Cloud Storage to back up your local backups available in `/hanabackup`. This mount must be sized equal to or greater than the data mount.
On Google Cloud, an SAP HANA single-host, scale-up architecture can include the following components:
One Compute Engine instance for the SAP HANA database with a network bandwidth of up to 32 Gbps, or up to 100 Gbps on selected machine types using high-bandwidth networking. For information about the machine types that are certified for use with SAP HANA, see Certified machine types for SAP HANA.
SSD-based Compute Engine Persistent Disk or Hyperdisk volumes, as follows:

One disk for each of the following directories: `/hana/data`, `/hana/log`, `/hana/shared`, and `/usr/sap`. For information about disk recommendations for SAP HANA, see Persistent disk storage. For optimum SAP performance, the Persistent Disk or Hyperdisk volumes must be sized according to the table in Minimum sizes for SSD-based Persistent Disks and Hyperdisks.

One balanced persistent disk for the boot disk.
Optionally, a disk for the backup of the SAP HANA database.
Compute Engine firewall rules restricting access to instances.
Google Cloud's Agent for SAP. From version 2.0, you can configure this agent to collect the SAP HANA monitoring metrics, which enables you to monitor your SAP HANA instances.
An optional, but recommended, subnetwork with a custom topology and IP ranges in the Google Cloud region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork for SAP HANA.
Optional components:
- The Cloud Storage Backint agent for SAP HANA to backup directly to Cloud Storage.
- SAP HANA Cockpit or SAP HANA Studio on a small Compute Engine VM.
If you provision your SAP HANA system without a public IP, then it cannot connect directly to resources through the public internet. Therefore, you need to provide an indirect method for internet access using the following options:
Configure Private Google Access so that your VM can access the Google Cloud APIs.
Use Cloud NAT or configure a VM as a NAT gateway to access the public internet.
For administrative purposes, you can use TCP forwarding to connect to the systems. For information about using Identity-Aware Proxy for TCP forwarding, see Using IAP for TCP forwarding.
Use a Compute Engine VM that is configured as a bastion host to access the public internet.
Scale-out architectures
The scale-out architecture consists of one master host, a number of worker hosts, and, optionally, one or more standby hosts. The hosts are interconnected through a network that supports sending data between hosts at rates of up to 32 Gbps, or up to 100 Gbps on selected machine types using high-bandwidth networking.
As the workload demand increases, especially when using Online Analytical Processing (OLAP), a multi-host scale-out architecture can distribute the load across all hosts.
The following diagram shows a scale-out architecture for SAP HANA on Google Cloud:
Standby hosts support the SAP HANA host auto-failover fault recovery solution. For more information about host auto-failover on Google Cloud, see SAP HANA host auto-failover on Google Cloud.
The following diagram shows a scale-out architecture with host auto-failover on Google Cloud.
Disk structures for SAP HANA scale-out systems on Google Cloud
Except for the standby hosts, each host has its own `/hana/data`, `/hana/log`, and, usually, `/usr/sap` volumes on SSD-based persistent disks or Hyperdisks, which provide consistent, high-IOPS I/O services. The master host also serves as an NFS master for the `/hana/shared` and `/hanabackup` volumes, and this NFS master is mounted on each worker and standby host.
For a standby host, the `/hana/data` and `/hana/log` volumes are not mounted until a takeover occurs.
Components in an SAP HANA scale-out system on Google Cloud
A multi-host SAP HANA scale-out architecture on Google Cloud contains the following components:
1 Compute Engine VM instance for each SAP HANA host in the system, including 1 master host, up to 15 worker hosts, and up to 3 optional standby hosts.
Each VM uses the same Compute Engine machine type. For information about the machine types that are certified for use with SAP HANA, see Certified machine types for SAP HANA.
SSD-based Persistent Disk or Hyperdisk volumes, as follows:

- Each VM must include a disk, mounted in the correct location.
- Optionally, if you are not deploying an SAP HANA host auto-failover system, a disk for the `/hanabackup` local volume for each VM instance.

A separately deployed NFS solution for sharing the `/hana/shared` and `/hanabackup` volumes with the worker and standby hosts. You can use Filestore or another NFS solution.

Compute Engine firewall rules or other network access controls that restrict access to your Compute Engine instances while allowing communication between the instances and any other distributed or remote resources that your SAP HANA system requires.
Google Cloud's Agent for SAP. From version 2.0, you can configure this agent to collect the SAP HANA monitoring metrics, which enables you to monitor your SAP HANA instances.
An optional, but recommended, subnetwork with a custom topology and IP ranges in the Google Cloud region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork if you prefer.
Optional components:
- The Cloud Storage Backint agent for SAP HANA to store SAP HANA backups directly in a Cloud Storage bucket, and retrieve as required.
- SAP HANA Cockpit or SAP HANA Studio on a small Compute Engine VM.
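As an illustration, one of the per-host data disks described above might be declared in Terraform roughly as follows. This is a sketch only: the resource names, disk size, zone, and the referenced VM instance are assumptions, not values prescribed by this guide.

```hcl
# Sketch only: names, size, and zone are placeholders, and the
# google_compute_instance.hana_worker_1 resource is assumed to be
# defined elsewhere in the same configuration.
resource "google_compute_disk" "hana_data" {
  name = "hana-worker-1-data"
  type = "pd-ssd" # or a Hyperdisk type on machine types that support it
  zone = "us-central1-b"
  size = 500 # GB; size the disk according to SAP HANA sizing guidance
}

# Attach the disk to the worker VM.
resource "google_compute_attached_disk" "hana_data_attach" {
  disk     = google_compute_disk.hana_data.id
  instance = google_compute_instance.hana_worker_1.id
}
```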
If you provision your SAP HANA system without a public IP address, then it cannot connect directly to resources over the public internet. Therefore, you need to provide an indirect method for internet access by using the following options:
- Configure Private Google Access so that your VMs can access the Google Cloud APIs.
- Use Cloud NAT or configure a VM as a NAT gateway to access the public internet.
- For administrative purposes, use TCP forwarding to connect to the systems. For information about using Identity-Aware Proxy for TCP forwarding, see Using IAP for TCP forwarding.
- Use a Compute Engine VM that is configured as a bastion host to access the public internet.
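For example, Private Google Access and Cloud NAT can both be enabled in Terraform. The following is a minimal sketch, in which the network, subnetwork, region, and resource names are assumptions:

```hcl
# Sketch only: network, subnetwork, region, and CIDR values are placeholders.
resource "google_compute_subnetwork" "sap_subnet" {
  name                     = "sap-subnet"
  ip_cidr_range            = "10.10.0.0/24"
  region                   = "us-central1"
  network                  = "sap-vpc"
  private_ip_google_access = true # lets VMs without public IPs reach Google APIs
}

resource "google_compute_router" "sap_router" {
  name    = "sap-router"
  region  = "us-central1"
  network = "sap-vpc"
}

# Cloud NAT provides outbound internet access for VMs without public IPs.
resource "google_compute_router_nat" "sap_nat" {
  name                               = "sap-nat"
  router                             = google_compute_router.sap_router.name
  region                             = "us-central1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```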
High availability for SAP HANA systems on Google Cloud
To design a high-availability configuration for SAP HANA on Google Cloud, you can use a combination of Google Cloud, SAP, and OS-native features.
For information about the high-availability options, see SAP HANA high-availability planning guide.
Automation for SAP HANA deployments
Google Cloud provides Terraform configuration files and Deployment Manager templates that you can use to automate the deployment of Google Cloud infrastructure and, optionally, SAP HANA.
The deployment automation options that Google Cloud provides support the following SAP HANA deployment scenarios:
- Scale up
- Scale up in a two-node high-availability cluster
- Scale out without standby nodes
- Scale out without standby nodes in a high-availability cluster
- Scale out with SAP HANA host auto-failover standby nodes
For more information about the automation for the scale-up or scale-out deployment scenarios, see:
Automating the deployment of the SAP HANA instance
Optionally, you can include the installation of SAP HANA with the automated deployment of the Google Cloud infrastructure.
The installation scripts that Google Cloud provides install SAP HANA after the infrastructure is deployed.
If any problems prevent the installation of an SAP HANA instance, the infrastructure is usually still deployed and configured. You can then either use the deployed infrastructure and install SAP HANA manually or delete the infrastructure, correct the problem, and rerun the deployment automation until the SAP HANA instance is installed successfully.
When you use the installation scripts that Google Cloud provides to install SAP HANA, you must provide values for certain parameters. If you omit these parameters or don't specify valid values for all of them, then the installation script fails to install the SAP HANA instance on the deployed infrastructure.
When you use the Terraform configuration files that Google Cloud provides to install SAP HANA, you must provide valid values for the following arguments: `sap_hana_deployment_bucket`, `sap_hana_sid`, `sap_hana_sidadm_uid`, `sap_hana_sidadm_password`, and `sap_hana_system_password`. For more information about the Terraform arguments, see Terraform: SAP HANA scale-up deployment guide.
When you use the Deployment Manager templates that Google Cloud provides to install SAP HANA, you must provide valid values for the following configuration parameters: `sap_hana_deployment_bucket`, `sap_hana_sid`, `sap_hana_instance_number`, `sap_hana_sidadm_password`, `sap_hana_system_password`, and `sap_hana_scaleout_nodes`. For more information about the Deployment Manager properties, see Deployment Manager: SAP HANA scale-up deployment guide.
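In a Terraform configuration file, these arguments might appear roughly as follows. This is a sketch only: the module source, bucket path, SID, UID, and variable names are placeholders, not values prescribed by this guide.

```hcl
module "sap_hana" {
  source = "..." # path or URL of the Google-provided SAP HANA Terraform module

  # Required arguments for automated SAP HANA installation (values are examples):
  sap_hana_deployment_bucket = "my-bucket/hana-media"
  sap_hana_sid               = "HDB"
  sap_hana_sidadm_uid        = 900
  sap_hana_sidadm_password   = var.sap_hana_sidadm_password
  sap_hana_system_password   = var.sap_hana_system_password
}
```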
Password management
To automate the installation of SAP HANA on the deployed Compute Engine VMs, you must specify the passwords for the `SIDadm` user and the database user. You can specify these passwords in your Terraform configuration file in the following ways:
- (Recommended) To provide the passwords to the installation scripts securely, you can create secrets using Secret Manager, which is a charged service of Google Cloud, and then specify the names of the secrets as values for the `sap_hana_sidadm_password_secret` and `sap_hana_system_password_secret` arguments. For information about Secret Manager pricing, see Secret Manager pricing.
- Alternatively, you can specify the passwords in plain text in the `sap_hana_sidadm_password` and `sap_hana_system_password` arguments.
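As a sketch of the recommended option, a secret can be created with the Google Terraform provider's `google_secret_manager_secret` resource and then referenced by name. The secret ID and variable name below are placeholders:

```hcl
# Sketch only: the secret_id is a placeholder name.
resource "google_secret_manager_secret" "hana_sidadm_password" {
  secret_id = "hana-sidadm-password"
  replication {
    auto {} # older provider versions use `automatic = true` instead
  }
}

# Store the password value as a secret version.
resource "google_secret_manager_secret_version" "hana_sidadm_password" {
  secret      = google_secret_manager_secret.hana_sidadm_password.id
  secret_data = var.sap_hana_sidadm_password
}
```

You would then pass the secret's name as the value of the `sap_hana_sidadm_password_secret` argument, and create an analogous secret for the database user's password.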
Custom VMs and automated deployments
The Terraform configuration files and Deployment Manager templates do not support the specification of Compute Engine custom machine types.
If you need to use a custom VM type, deploy a small predefined VM type first and, after the deployment is complete, customize the VM as needed.
For more information about modifying VMs, see Modifying VM configurations for SAP systems.
Deployment automation for scale-up systems
Google Cloud provides Terraform configuration files and Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA single-host scale-up systems.
The Terraform or Deployment Manager scripts can be used for the following scenarios:
- A stand-alone, scale-up SAP HANA system. See the Terraform or Deployment Manager deployment guide.
- A scale-up SAP HANA system with active and standby instances in a Linux high-availability cluster. See the Terraform or Deployment Manager deployment guide.
The Terraform or Deployment Manager scripts can deploy the VMs, persistent disks, SAP HANA, and, in the case of the Linux HA cluster, the required HA components.
The Deployment Manager scripts do not deploy the following system components:
- The network and subnetwork
- Firewall rules
- NAT gateways, bastion hosts, or their VMs
- SAP HANA Studio or its VM
Except for SAP HANA Studio and its VM, you can use Terraform to deploy all of these system components.
For information about creating these components, see the Prerequisites section in the following guides:
- Terraform: SAP HANA scale-up deployment guide
- Deployment Manager: SAP HANA scale-up deployment guide
Deployment automation for scale-out systems
Google Cloud provides Terraform configuration files and Deployment Manager configuration templates that you can use to automate the deployment of the SAP HANA multi-host scale-out systems.
- To deploy a scale-out system that does not include SAP HANA host auto-failover, see Terraform: SAP HANA Deployment Guide or Deployment Manager: SAP HANA Deployment Guide.
- To deploy a scale-out system without standby hosts in a Linux high-availability cluster, see Terraform: SAP HANA high-availability cluster configuration guide.
- To deploy a scale-out system that includes standby hosts, see Terraform: SAP HANA Scale-Out System with Host Auto-Failover Deployment Guide or Deployment Manager: SAP HANA Scale-Out System with Host Auto-Failover Deployment Guide.
The Terraform configuration or Deployment Manager templates can deploy the VMs, persistent disks, and SAP HANA. They can also map NFS mount points to the SAP HANA shared and backup volumes. For multi-host scale-out deployments, the Terraform configuration or Deployment Manager template can also deploy new Filestore instances to host the SAP HANA shared and backup volumes.
The Deployment Manager scripts do not deploy the following system components:
- The network and subnetwork
- Firewall rules
- NAT gateways, bastion hosts, or their VMs
- SAP HANA Studio or its VM
Except for SAP HANA Studio and its VM, you can use Terraform to deploy all of these system components.
File sharing solutions for multi-host scale-out deployments
The Terraform configuration that Google Cloud provides for SAP HANA multi-host scale-out deployments, by default, creates NFS exports for the `/hana/shared` and `/hanabackup` volumes on the primary SAP HANA VM instance and shares these volumes with the worker nodes.
However, if you want to use an NFS solution for sharing the `/hana/shared` and `/hanabackup` volumes with your worker hosts, then you can use one of the following options:
- To associate an existing NFS solution that you have deployed on Google Cloud, specify the NFS mount points of the `/hana/shared` and `/hanabackup` volumes in the `sap_hana_shared_nfs` and `sap_hana_backup_nfs` arguments, respectively, in your Terraform configuration file.
- To deploy new Filestore instances and associate their file shares with the `/hana/shared` and `/hanabackup` volumes, define a `google_filestore_instance` resource and then specify the names of the file shares in the `sap_hana_shared_nfs_resource` and `sap_hana_backup_nfs_resource` arguments, respectively, in your Terraform configuration file.
To view an example, see sample configuration.
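As a rough illustration of the second option, a `google_filestore_instance` resource might look like the following. The instance name, location, tier, share name, and capacity are assumptions:

```hcl
# Sketch only: all values shown are placeholders.
resource "google_filestore_instance" "hana_shared" {
  name     = "hana-shared-nfs"
  location = "us-central1-b"
  tier     = "BASIC_SSD"

  file_shares {
    name        = "hanashared"
    capacity_gb = 2560 # minimum capacity for the BASIC_SSD tier
  }

  networks {
    network = "default"
    modes   = ["MODE_IPV4"]
  }
}
```

The file share defined by this resource would then be referenced in the `sap_hana_shared_nfs_resource` argument, with a second instance defined analogously for the backup volume.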
Support
For issues with Google Cloud infrastructure or services, contact Customer Care. You can find contact information on the Support Overview page in the Google Cloud console. If Customer Care determines that a problem resides in your SAP systems, you are referred to SAP Support.
For SAP product-related issues, log your support request with SAP support. SAP evaluates the support ticket and, if it appears to be a Google Cloud infrastructure issue, transfers the ticket to the Google Cloud component `BC-OP-LNX-GOOGLE` or `BC-OP-NT-GOOGLE`.
Support requirements
Before you can receive support for SAP systems and the Google Cloud infrastructure and services that they use, you must meet the minimum support plan requirements.
For more information about the minimum support requirements for SAP on Google Cloud, see:
- Getting support for SAP on Google Cloud
- SAP Note 2456406 - SAP on Google Cloud Platform: Support Prerequisites (An SAP user account is required)
What's next
- For more information from SAP about SAP HANA dynamic tiering, see SAP HANA Dynamic Tiering.