This guide provides an overview of what is required to run SAP HANA on Google Cloud, along with details that you can use when planning the implementation of a new SAP HANA system.
For details about how to deploy SAP HANA on GCP, see the SAP HANA Deployment Guide.
About SAP HANA on Google Cloud
SAP HANA is an in-memory, column-oriented, relational database that provides high-performance analytics and real-time data processing. At the core of this real-time data platform is the SAP HANA database. Customers can take advantage of easy provisioning and the highly scalable, redundant GCP infrastructure to run their business-critical workloads. GCP provides a set of physical assets, such as computers and hard disk drives, and virtual resources, such as Compute Engine virtual machines (VMs), located in Google data centers around the world.
When you deploy SAP HANA on GCP, you deploy to virtual machines running on Compute Engine. Compute Engine VMs provide persistent disks, which function similarly to physical disks in a desktop or a server, but are automatically managed for you by Compute Engine to ensure data redundancy and optimized performance.
Google Cloud basics
Google Cloud consists of many cloud-based services and products. When running SAP products on Google Cloud, you mainly use the IaaS-based services offered through Compute Engine and Cloud Storage, as well as some platform-wide features and tools.
See the Google Cloud platform overview for important concepts and terminology. This guide duplicates some information from the overview for convenience and context.
For an overview of considerations that enterprise-scale organizations should take into account when running on Google Cloud, see best practices for enterprise organizations.
Interacting with Google Cloud
Google Cloud offers three main ways to interact with the platform, and your resources, in the cloud:
- The Google Cloud Console, which is a web-based user interface.
- The `gcloud` command-line tool, which provides a superset of the functionality that the Cloud Console offers.
- Client libraries, which provide APIs for accessing services and managing resources. Client libraries are useful when building your own tools.
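As a sketch of the command-line option, the following example composes `gcloud` command lines for two common VM-management tasks. The project, zone, and instance names are hypothetical, and running the printed commands requires an installed, authenticated gcloud CLI:

```shell
#!/usr/bin/env bash
# Compose gcloud command lines for common Compute Engine tasks.
# The project, zone, and instance names below are hypothetical examples.

# Build the command that lists all VMs in a project.
list_vms_cmd() {
  echo "gcloud compute instances list --project=$1"
}

# Build the command that shows details for one VM.
describe_vm_cmd() {
  echo "gcloud compute instances describe $1 --zone=$2 --project=$3"
}

list_vms_cmd my-sap-project
describe_vm_cmd hana-vm-1 us-central1-a my-sap-project
```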
GCP services
SAP deployments typically utilize some or all of the following Google Cloud services:
Service | Description |
---|---|
VPC Networking | Connects your VM instances to each other and to the Internet. Each instance is a member of either a legacy network with a single global IP range, or a recommended subnet network, where the instance is a member of a single subnetwork that is a member of a larger network. Note that a network cannot span Google Cloud projects, but a Google Cloud project can have multiple networks. |
Compute Engine | Creates and manages VMs with your choice of operating system and software stack. |
Persistent disks | Persistent disks are available as either standard hard disk drives (HDD) or solid-state drives (SSD). |
Google Cloud Console | Browser-based tool for managing Compute Engine resources. |
Cloud Deployment Manager | Automates the creation and management of Google Cloud resources. Use a template to describe all of the Compute Engine resources and instances you need. You don't have to individually create and configure the resources or figure out dependencies, because Deployment Manager does that for you. |
Cloud Storage | You can store your SAP database backups in Cloud Storage for added durability and reliability, with replication. |
Cloud Monitoring | Provides visibility into the deployment, performance, uptime, and health of Compute Engine, network, and persistent disks. Monitoring collects metrics, events, and metadata from Google Cloud and uses these to generate insights through dashboards, charts, and alerts. You can monitor the compute metrics at no cost through Monitoring. |
IAM | Provides unified control over permissions for Google Cloud resources. Control who can perform control-plane operations on your VMs, including creating, modifying, and deleting VMs and persistent disks, and creating and modifying networks. |
Pricing and quotas
You can use the pricing calculator to estimate your usage costs. For more pricing information, see Compute Engine pricing, Cloud Storage pricing, and Google Cloud's operations suite pricing.
Google Cloud resources are subject to quotas. If you plan to use high-CPU or high-memory machines, you might need to request additional quota. For more information, see Compute Engine resource quotas.
Resource requirements
Certified VM types for SAP HANA
The following table shows the Compute Engine virtual machine (VM) types that are certified by SAP for production use on Google Cloud. Except where noted in the table, SAP supports the VM types in both single-host (scale-up) and multi-host (scale-out) installations. Scale-out installations can include up to 15 worker hosts, for a total of 16 hosts.
Custom configurations of the general-purpose n1- and n2-highmem VM types are also certified by SAP. For more information, see Certified custom VM types for SAP HANA.
For the operating systems that are certified for use with HANA on each VM type, see Certified operating systems for SAP HANA.
For more information about different VM types and their use cases, see machine types.
Some VM types might not be available in all Google Cloud regions. To confirm that a machine type is available in a region, see Available regions & zones.
SAP lists the certified VM instance types for SAP HANA in the SAP HANA Hardware Directory.
Google Cloud instance type | vCPU | Memory (GB) | Operating system | CPU platform | Notes |
---|---|---|---|---|---|
N1 high-memory, general-purpose machine types | |||||
n1-highmem-32 | 32 | 208 | RHEL, SUSE | Intel Broadwell | NetApp Cloud Volumes Service certified for scale up. |
n1-highmem-64 | 64 | 416 | RHEL, SUSE | Intel Broadwell | NetApp Cloud Volumes Service certified for scale up. |
n1-highmem-96 | 96 | 624 | RHEL, SUSE | Intel Skylake | NetApp Cloud Volumes Service certified for scale up. |
N2 high-memory, general-purpose machine types | |||||
n2-highmem-32 | 32 | Up to 256 | RHEL, SUSE | Intel Cascade Lake | Scale up only, NetApp Cloud Volumes Service certified for scale up. |
n2-highmem-48 | 48 | Up to 384 | RHEL, SUSE | Intel Cascade Lake | Scale up only, NetApp Cloud Volumes Service certified for scale up. |
n2-highmem-64 | 64 | Up to 512 | RHEL, SUSE | Intel Cascade Lake | Scale up only, NetApp Cloud Volumes Service certified for scale up. |
n2-highmem-80 | 80 | Up to 640 | RHEL, SUSE | Intel Cascade Lake | Scale up only, NetApp Cloud Volumes Service certified for scale up. |
M1 memory-optimized machine types | |||||
m1-megamem-96 | 96 | 1,433 | RHEL, SUSE | Intel Skylake | NetApp Cloud Volumes Service certified for scale up. |
m1-ultramem-40 | 40 | Up to 961 | RHEL, SUSE | Intel Broadwell | Scale up only, OLTP workloads only, NetApp Cloud Volumes Service certified for scale up. |
m1-ultramem-80 | 80 | Up to 1,922 | RHEL, SUSE | Intel Broadwell | Scale up only, OLTP workloads only, NetApp Cloud Volumes Service certified for scale up. |
m1-ultramem-160 | 160 | Up to 3,844 | RHEL, SUSE | Intel Broadwell | NetApp Cloud Volumes Service certified for scale up. |
M2 memory-optimized machine types | |||||
m2-megamem-416 | 416 | Up to 5,888 | RHEL, SUSE | Intel Cascade Lake | Scale up only. OLAP workloads are currently certified only with the /hana/data and /hana/log volumes stored in NetApp Cloud Volumes Service. OLTP workloads can use either Compute Engine persistent disks or NetApp Cloud Volumes Service. |
m2-ultramem-208 | 208 | Up to 5,888 | RHEL, SUSE | Intel Cascade Lake | Scale up only, OLTP workloads only, NetApp Cloud Volumes Service certified for scale up. |
m2-ultramem-416 | 416 | Up to 11,776 | RHEL, SUSE | Intel Cascade Lake-SP | Scale up or scale out up to 4 nodes. OLTP workloads only, including S/4HANA. NetApp Cloud Volumes Service is supported with scale up or scale out. For scale out with S/4HANA, see SAP Note 2408419. |
Certified custom VM types for SAP HANA
The following table shows the customizable Compute Engine virtual machine (VM) types that are certified by SAP for production use of SAP HANA on Google Cloud.
SAP certifies only a subset of the custom VM type configurations that Compute Engine supports.
Custom VM configurations are subject to customization rules that are defined by Compute Engine. The rules differ depending on which machine type you are customizing. For complete customization rules, see Creating a VM Instance with a Custom Machine Type.
Base Google Cloud instance type | vCPU | Memory (GB) | Operating system | CPU platform |
---|---|---|---|---|
N1-highmem | A number of vCPUs from 32 to 64 that is evenly divisible by 2. | 6.5 GB for each vCPU | RHEL, SUSE | Intel Broadwell |
N2-highmem (Scale up only) | A number of vCPUs from 32 to 64 that is evenly divisible by 4. | 8 GB for each vCPU | RHEL, SUSE | Intel Cascade Lake |
Storage configuration
SAP HANA is an in-memory database, so most data is stored and processed in memory; however, SAP HANA protects against data loss by also saving the data to a persistent storage location.
Persistent disk storage
For persistent block storage, you can attach Compute Engine persistent disks when you create your VMs or add them to your VMs later.
Compute Engine offers different types of persistent disks. Each type has different performance characteristics. Google Cloud manages the underlying hardware of persistent disks to ensure data redundancy and to optimize performance.
The types of persistent disks that you can use with SAP HANA are determined by both SAP performance requirements and the performance requirements of your workload.
For a production instance of SAP HANA, you can use the following recommended persistent disk configurations as the starting point for performance tuning:
- For both the `/hana/data` and `/hana/log` volumes, use a single SSD persistent disk (`pd-ssd`) that is at least 834 GB in size. SSD persistent disks are backed by solid-state drives (SSD).
- For the `/hana/shared` volume, use the same SSD persistent disk as the `/hana/data` and `/hana/log` volumes or, if you map it to its own disk, use a balanced persistent disk (`pd-balanced`). Balanced persistent disks balance cost and performance, and are backed by SSD.
- If you save your backups to a persistent disk, use a standard persistent disk (`pd-standard`) for the `/hanabackup` volume. Standard persistent disks are backed by standard hard disk drives.
- When you create the host VM, use a balanced persistent disk for the boot disk.
Within limits, the performance of an SSD disk scales with the size of the disk and the number of vCPUs, which is why SSD disks that are used for the `/hana/data` and `/hana/log` volumes must be at least 834 GB in size and the VM instance must have at least 32 vCPUs. This configuration provides a sustained throughput of up to 400 MB per second for reads and writes.
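That scaling can be sketched as a quick estimator. The figures here are assumptions, not from this guide: they use the commonly documented pd-ssd rate of 0.48 MB/s of sustained throughput per GB, capped at 400 MB/s for this VM class; check the current Compute Engine documentation for your machine type:

```shell
#!/usr/bin/env bash
# Estimate sustained pd-ssd throughput (MB/s) for a given disk size in GB.
# Assumes 0.48 MB/s per GB, capped at 400 MB/s; both figures are assumptions
# based on commonly documented pd-ssd scaling, not values from this guide.
pd_ssd_throughput_mbps() {
  local size_gb=$1
  local t=$(( size_gb * 48 / 100 ))   # 0.48 MB/s per GB, integer math
  if [ "$t" -gt 400 ]; then t=400; fi
  echo "$t"
}

pd_ssd_throughput_mbps 834   # the 834 GB minimum reaches the 400 MB/s cap
pd_ssd_throughput_mbps 500   # a smaller disk gets proportionally less
```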
In the configuration on the left in the preceding figure, the `/hana/data` and `/hana/log` volumes are on an SSD persistent disk and the `/hana/shared` volume, which doesn't require as high performance, is on a balanced persistent disk, which costs less than an SSD persistent disk.

In the configuration on the right, the `/hana/data`, `/hana/log`, and `/hana/shared` volumes are all on a single SSD disk. This provides slightly better performance with one less disk to manage than the split model, where the `/hana/shared` volume is by itself on a balanced persistent disk.
For non-production instances of SAP HANA, such as instances that are used for development or quality assurance, or for instances that are running workloads that do not require high performance, you can use a balanced persistent disk for the `/hana/data` and `/hana/log` volumes.
In the Cloud Console, you can see the persistent disks that are attached to your VM instances under Additional disks on the VM instance details page for each VM instance.
For more information about the different types of Compute Engine persistent disks, their performance characteristics, and how to work with them, see the Compute Engine documentation:
- Storage options
- Block storage performance
- Other factors that affect performance
- Adding or resizing zonal persistent disks
- Creating persistent disk snapshots
Persistent disks deployed by the Deployment Manager templates
If you deploy an SAP HANA system by using the Cloud Deployment Manager scripts that Google Cloud provides, Cloud Deployment Manager allocates an SSD persistent disk that is, at a minimum, 834 GB. If your SAP HANA system requires more persistent storage, Cloud Deployment Manager automatically adjusts the sizing of the persistent disks.
Cloud Deployment Manager maps the SAP HANA `data`, `log`, `sap`, and `shared` directories to the single SSD persistent disk in a single Linux volume group. Each directory is mapped to its own logical volume for easy resizing.
In the following example, the `vg_hana` volume group is mapped to a single 834 GB SSD persistent disk. The `vg_hanabackup` volume group is mapped to a standard HDD persistent disk. The sizes of your volumes might differ slightly from what is shown in the example.
```
hana-ssd-example:~ # lvs
  LV     VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data   vg_hana       -wi-ao---- 496.00g
  log    vg_hana       -wi-ao---- 102.00g
  sap    vg_hana       -wi-ao----  32.00g
  shared vg_hana       -wi-ao---- 204.00g
  backup vg_hanabackup -wi-ao---- 416.00g
```
Storage for backups
Storage for SAP HANA backup is configured with standard HDD persistent disks. Standard HDD persistent disks are efficient and economical for handling sequential read-write operations, but are not optimized to handle high rates of random input-output operations per second (IOPS). SAP HANA uses sequential IO with large blocks to back up the database. Standard HDD persistent disks provide a low-cost, high-performance option for this scenario.
The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput as well as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it easier to recover your database if necessary.
If you use SAP HANA dynamic tiering, the backup storage must be large enough to hold both the in-memory data and the data that is managed on disk by the dynamic tiering server.
If you use the Cloud Storage Backint agent for SAP HANA, you can back up SAP HANA directly to a Cloud Storage bucket, which makes the use of a persistent disk for storing backups optional.
SAP HANA dynamic tiering
SAP HANA dynamic tiering is certified by SAP for use in production environments on GCP. SAP HANA dynamic tiering extends SAP HANA data storage by storing data that is infrequently accessed on disk instead of in memory.
For more information, see SAP HANA Dynamic Tiering on Google Cloud.
NetApp Cloud Volumes Service for Google Cloud
NetApp Cloud Volumes Service for Google Cloud is a SAP-certified, fully-managed, cloud-native data service platform that you can use to create an NFS file system for SAP HANA scale-up systems on all Compute Engine instance types that are certified for SAP HANA.
Support for NetApp Cloud Volumes Service in scale-out deployments is limited to specific Compute Engine instance types, as noted in the table in Certified VM types for SAP HANA.
With NetApp Cloud Volumes Service, you can place all of the SAP HANA directories, including `/hana/data` and `/hana/log`, in shared storage, instead of using Compute Engine persistent disks. With most other shared storage systems, you can place only the `/hana/shared` directory in shared storage.
SAP support for NetApp Cloud Volumes Service on Google Cloud is listed in the SAP HANA Hardware Directory.
Regional availability of NetApp Cloud Volumes Service for SAP HANA
Your NetApp Cloud Volumes Service volumes must be in the same region as your host VM instances.
NetApp Cloud Volumes Service support for SAP HANA is not available in every region where NetApp Cloud Volumes Service itself is available.
You can use NetApp Cloud Volumes Service with SAP HANA in the following Google Cloud regions:
Region | Location |
---|---|
`europe-west4` | Eemshaven, Netherlands, Europe |
`us-east4` | Ashburn, Northern Virginia, USA |
`us-west2` | Los Angeles, California, USA |
If you are interested in running SAP HANA with NetApp Cloud Volumes Service in a Google Cloud region that is not listed above, contact sales.
NFS protocol support
NetApp Cloud Volumes Service supports the NFSv3 and NFSv4.1 protocols with SAP HANA on Google Cloud.
NFSv3 is recommended for volumes that are configured to allow multiple TCP connections. NFSv4.1 is not yet supported with multiple TCP connections.
Volume requirements for NetApp Cloud Volumes Service with SAP HANA
The NetApp Cloud Volumes Service volumes must be in the same region as the host VM instances.
For the `/hana/data` and `/hana/log` volumes, the Extreme service level of NetApp Cloud Volumes Service is required. You can use the Premium service level for the `/hana/shared` directory if it is in a separate volume from the `/hana/data` and `/hana/log` directories.

For the best performance with SAP HANA systems that are larger than 1 TB, create separate volumes for `/hana/data`, `/hana/log`, and `/hana/shared`.
To meet SAP HANA performance requirements, the following minimum volume sizes are required when running SAP HANA with NetApp Cloud Volumes Service:
Directory | Minimum size |
---|---|
| 1 TB |
| 2.5 TB |
| 4 TB |
Adjust the size of your volumes to meet your throughput requirements. The minimum throughput rate for the Extreme service level is 128 MB per second for each 1 TB, so the throughput for 4 TB of disk space is 512 MB per second.
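The 128 MB/s-per-TB rule can be sketched as a quick sizing helper; this is a simple arithmetic aid, not an official sizing tool:

```shell
#!/usr/bin/env bash
# Minimum throughput (MB/s) for a NetApp CVS Extreme-level volume,
# at the stated rate of 128 MB/s for each 1 TB provisioned.
extreme_throughput_mbps() {
  echo $(( $1 * 128 ))   # $1 = volume size in TB
}

extreme_throughput_mbps 4    # 4 TB -> 512 MB/s, as in the text
extreme_throughput_mbps 1    # 1 TB -> 128 MB/s
```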
Provisioning more disk space for the `/hana/data` volume can reduce startup times. For the `/hana/data` volume, we recommend either 1.5 times the size of your memory or 4 TB, whichever is greater.
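That recommendation, the greater of 1.5 × memory or 4 TB, can be sketched as:

```shell
#!/usr/bin/env bash
# Recommended /hana/data volume size in GB: the greater of 1.5x the VM's
# memory or 4 TB (4096 GB), per the recommendation above.
hana_data_size_gb() {
  local mem_gb=$1
  local size=$(( mem_gb * 3 / 2 ))     # 1.5x memory, integer math
  if [ "$size" -lt 4096 ]; then size=4096; fi
  echo "$size"
}

hana_data_size_gb 1433   # m1-megamem-96 memory: 1.5x is under 4 TB, so 4096
hana_data_size_gb 5888   # m2-megamem-416 memory: 1.5x wins, so 8832
```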
The minimum size for the `/hanabackup` volume is determined by your backup strategy. You can also use the Cloud Storage Backint agent for SAP HANA to back up the database directly into Cloud Storage.
Deploying a SAP HANA system with NetApp Cloud Volumes Service
To deploy NetApp Cloud Volumes Service with SAP HANA on Google Cloud, you need to deploy your VMs and install SAP HANA first. You can use the Deployment Manager templates that Google Cloud provides to deploy the VMs and SAP HANA, or you can create the VM instances and install SAP HANA manually.
If you use the Deployment Manager templates, the VMs are deployed with the `/hana/data` and `/hana/log` volumes mapped to persistent disks. After you mount the NetApp Cloud Volumes Service volumes to the VMs, you need to copy the contents of the persistent disks over, as described in the following steps.
To deploy SAP HANA with NetApp Cloud Volumes Service by using the Deployment Manager templates that Google Cloud provides:
1. Deploy SAP HANA with persistent disks by using the Cloud Deployment Manager templates that Google Cloud provides, following the instructions in the SAP HANA deployment guide.
2. Create your NetApp Cloud Volumes Service volumes. For complete NetApp instructions, see the NetApp Cloud Volumes Service for Google Cloud documentation.
3. Mount the NetApp Cloud Volumes Service volumes to a temporary mount point by using the `mount` command with the following settings:

        mount -t nfs -o options server:path mountpoint

    For options, use the following settings:

        rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

    The option `vers=3` indicates NFSv3. The option `nconnect=16` specifies support for multiple TCP connections.
4. Stop SAP HANA and any related services that are using the attached persistent disk volumes.
5. Copy the contents of the persistent disk volumes to the corresponding NetApp Cloud Volumes Service volumes.
6. Detach the persistent disks.
7. Remount the NetApp Cloud Volumes Service volumes to the permanent mount points by updating `/etc/fstab` with the following settings:

        server:path /mountpoint nfs options 0 0

    For options, use the following settings:

        rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock

    For more information about updating the `/etc/fstab` file, see the `nfs` page in the Linux File Formats manual.
8. For the best performance, update the `fileio` category in the SAP HANA `global.ini` file with the following suggested settings:

    Parameter | Value |
    ---|---|
    `async_read_submit` | `on` |
    `async_write_submit_active` | `on` |
    `async_write_submit_blocks` | `all` |
    `max_parallel_io_requests` | `128` |
    `max_parallel_io_requests[data]` | `128` |
    `max_parallel_io_requests[log]` | `128` |
    `num_completion_queues` | `4` |
    `num_completion_queues[data]` | `4` |
    `num_completion_queues[log]` | `4` |
    `num_submit_queues` | `8` |
    `num_submit_queues[data]` | `8` |
    `num_submit_queues[log]` | `8` |
9. Restart SAP HANA.
10. After confirming that everything works as expected, delete the persistent disks to avoid being charged for them.
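As a concrete illustration of the remount step, an `/etc/fstab` entry might look like the following. The server address and volume path are hypothetical examples, not values from this guide:

```
10.0.0.10:/hana-data-vol /hana/data nfs rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,tcp,nconnect=16,noatime,nolock 0 0
```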
Filestore
For the `/hana/shared` volume only, you can use Filestore. However, with Filestore, all SAP HANA hosts that share the storage must be within the same Google Cloud zone.
Memory configuration
See the table in Certified VM types for SAP HANA.
Certified operating systems for SAP HANA
SAP HANA runs on either the Red Hat Enterprise Linux (RHEL) operating system or the SUSE Linux Enterprise Server (SLES) operating system.
The following table shows the RHEL and SLES operating systems that are certified by SAP for production use with SAP HANA on Google Cloud.
Except where noted in the table, each operating system is supported with SAP HANA on all certified Compute Engine VM types.
For information about the current support status of each operating system and which operating systems are available from Google Cloud, see Operating system support for SAP HANA on GCP.
For information from SAP about which operating systems SAP supports with SAP HANA on Google Cloud, see the SAP HANA Hardware Directory.
The following table does not include:
- Certified operating system versions that are no longer in mainstream support.
- Operating system versions that are not specific to SAP.
Operating system | Version | Unsupported VM types |
---|---|---|
RHEL for SAP | 7.3 | n2-highmem, m1-ultramem, m2-megamem, m2-ultramem, custom |
| 7.4 | m2-ultramem |
| 7.6 | |
| 7.7 | |
| 8.1 | |
SLES for SAP | 12 SP3 | n1-highmem, m1-megamem |
| 12 SP4 | |
| 12 SP5 | |
| 15 | |
| 15 SP1 | |
| 15 SP2 | |
Custom operating system images
You can use a Linux image that GCP provides and maintains (a public image) or you can provide and maintain your own Linux image (a custom image).
Use a custom image if the version of the SAP-certified operating system that you require is not available from GCP as a public image. The following steps, which are described in detail in Importing Boot Disk Images to Compute Engine, summarize the procedure for using a custom image:
- Prepare your boot disk so it can boot within the GCP Compute Engine environment and so you can access it after it boots.
- Create and compress the boot disk image file.
- Upload the image file to Cloud Storage and import the image to Compute Engine as a new custom image.
- Use the imported image to create a virtual machine instance and make sure it boots properly.
- Optimize the image and install the Linux Guest Environment so that your imported operating system image can communicate with the metadata server and use additional Compute Engine features.
After your custom image is ready, you can use it when creating VMs for your SAP HANA system.
If you are moving a RHEL operating system from an on-premises installation to GCP, you need to add Red Hat Cloud Access to your Red Hat subscription. For more information, see Red Hat Cloud Access.
For more information about the operating system images that GCP provides, see Images.
For more information about importing an operating system into GCP as a custom image, see Importing Boot Disk Images to Compute Engine.
For more information about the operating systems that SAP HANA supports, see:
User identification and resource access
When planning security for an SAP deployment on Google Cloud, you must identify:
- The user accounts and applications that need access to the Google Cloud resources in your Google Cloud project
- The specific Google Cloud resources in your project that each user needs to access
You must add each user to your project by adding their Google account ID to the project as a member. For an application program that uses Google Cloud resources, you create a service account, which provides a user identity for the program within your project.
Compute Engine VMs have their own service account. Any programs that run on a VM can use the VM service account, as long as the VM service account has the resource permissions that the program needs.
After you identify the Google Cloud resources that each user needs to use, you grant each user permission to use each resource by assigning resource-specific roles to the user. Review the predefined roles that IAM provides for each resource, and assign roles to each user that provide just enough permissions to complete the user's tasks or functions and no more.
If you need more granular or restrictive control over permissions than the predefined IAM roles provide, you can create custom roles.
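As a sketch of granting a predefined role, the following composes the `gcloud` command line for an IAM policy binding. The project ID, user, and role are hypothetical examples; run the printed command with an authenticated gcloud CLI:

```shell
#!/usr/bin/env bash
# Compose the gcloud command that grants a predefined IAM role to a user.
# The project ID, member, and role below are hypothetical examples.
grant_role_cmd() {
  local project="$1" member="$2" role="$3"
  echo "gcloud projects add-iam-policy-binding ${project} --member=user:${member} --role=${role}"
}

grant_role_cmd my-sap-project basis-admin@example.com roles/compute.instanceAdmin.v1
```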
For more information about the IAM roles that SAP programs need on Google Cloud, see Identity and access management for SAP programs on Google Cloud.
For an overview of identity and access management for SAP on Google Cloud, see Identity and access management overview for SAP on Google Cloud.
Pricing and quota considerations for SAP HANA
You are responsible for the costs incurred for using the resources created by following this deployment guide. Use the pricing calculator to help estimate your actual costs.
Quotas
If you have a new GCP account, or if you haven't yet requested an increased quota, you need to request one before you can deploy SAP HANA. View your existing quota and compare it with the following table to determine what increase to ask for. You can then request a quota-limit increase.
The following table shows quota values for single-host, scale-up SAP HANA systems by VM instance type. If you host SAP HANA Studio on GCP or use a NAT gateway and bastion host, add the values shown in the table to your total quota requirement.
Instance type | CPU | Memory | Standard PD | SSD PD |
---|---|---|---|---|
n1-highmem-32 | 32 | 208 GB | 448 GB | 834 GB |
n1-highmem-64 | 64 | 416 GB | 864 GB | 1,280 GB |
n1-highmem-96 | 96 | 624 GB | 1,280 GB | 1,904 GB |
n2-highmem-32 | 32 | 256 GB | 544 GB | 834 GB |
n2-highmem-48 | 48 | 384 GB | 800 GB | 1,184 GB |
n2-highmem-64 | 64 | 512 GB | 1,056 GB | 1,568 GB |
n2-highmem-80 | 80 | 640 GB | 1,312 GB | 1,952 GB |
m1-megamem-96 | 96 | 1,433 GB | 2,898 GB | 3,717 GB |
m1-ultramem-40 | 40 | 961 GB | 1,954 GB | 2,914 GB |
m1-ultramem-80 | 80 | 1,922 GB | 3,876 GB | 4,451 GB |
m1-ultramem-160 | 160 | 3,844 GB | 7,720 GB | 7,334 GB |
m2-megamem-416 | 416 | 5,888 GB | OLAP: Not applicable (see note) OLTP: 11,832 GB | OLAP: Not applicable (see note) OLTP: 10,442 GB |
m2-ultramem-208 | 208 | 5,888 GB | 11,832 GB | 10,442 GB |
m2-ultramem-416 | 416 | 11,776 GB | 23,564 GB | 19,217 GB |
Bastion/NAT gateway | 1 | 3.75 GB | 8 GB | 0 GB |
SAP HANA Studio | 1 | 3.75 GB | 50 GB | 0 GB |
Note: Currently, the `m2-megamem-416` Compute Engine instance type is certified by SAP only if the data and log volumes are stored on NetApp Cloud Volumes Service for Google Cloud, so no persistent disk storage is required.
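As a worked example of combining rows from the table, an `n2-highmem-64` host plus a bastion/NAT gateway and an SAP HANA Studio VM add up as follows; this is a convenience check, not an official calculator:

```shell
#!/usr/bin/env bash
# Sum the quota requirements for an n2-highmem-64 HANA host plus a
# bastion/NAT gateway and an SAP HANA Studio VM (values from the table above).
total_cpu=$(( 64 + 1 + 1 ))
total_std_pd_gb=$(( 1056 + 8 + 50 ))
total_ssd_pd_gb=$(( 1568 + 0 + 0 ))

echo "vCPUs: ${total_cpu}, Standard PD: ${total_std_pd_gb} GB, SSD PD: ${total_ssd_pd_gb} GB"
```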
Licensing
Running SAP HANA on GCP requires you to bring your own license (BYOL).
For more information from SAP about managing your SAP HANA licenses, see License Keys for the SAP HANA Database.
Deployment architecture
SAP HANA on GCP supports single-host and multi-host architectures.
Single-host architecture
The following diagram shows the single-host architecture. In the diagram, notice both the deployment on GCP and the disk layout. You can use Cloud Storage to back up the local backups that are stored in `/hanabackup`. This mount should be sized equal to or greater than the data mount.
Notice that the VM for SAP HANA has no public IP, which means it cannot be reached from an external network. Instead, the deployment uses a NAT bastion host and SAP HANA Studio for accessing SAP HANA. The SAP HANA Studio instance and the bastion host are deployed in the same subnetwork as the SAP HANA instance.
You provision a Windows host on which you install SAP HANA Studio, with the instance placed in the same subnetwork, and with firewall rules that enable you to connect to the SAP HANA database from SAP HANA Studio.
You deploy SAP HANA using a single-host, scale-up architecture that has the following components:
- One Compute Engine instance for the SAP HANA database, with an 834 GB or larger SSD persistent disk, and a network bandwidth of up to 16 Gbps. The SSD persistent disk is partitioned and mounted to `/hana/data` and `/hana/log` to host the data and logs.
- An optional, but recommended, subnetwork with a custom topology and IP ranges in the GCP region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork for SAP HANA.
- An optional, but recommended, internet gateway configured for outbound internet access for your SAP HANA and other instances. This guide assumes you are using this gateway.
- Compute Engine firewall rules restricting access to instances.
- A persistent disk for backups of the SAP HANA database.
- A Compute Engine VM, `n1-standard-2`, with a Windows OS to host SAP HANA Studio.
- A Compute Engine VM, `n1-standard-1`, as a bastion host.
- An automated SAP HANA database installation with a configuration file that you create from a template.
- SAP HANA Studio.
Deploying scale-up systems with Deployment Manager
Google Cloud provides Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA single-host scale-up systems.
The Deployment Manager scripts can be used for the following scenarios:
- A stand-alone, scale-up SAP HANA system
- An active and standby scale-up SAP HANA system on a Linux high-availability cluster
The Deployment Manager scripts can deploy the VMs, persistent disks, SAP HANA, and, in the case of the Linux HA cluster, the required HA components.
The Deployment Manager scripts do not deploy the following system components:
- The network and subnetwork
- Firewall rules
- NAT gateways, bastion hosts, or their VMs
- SAP HANA Studio or its VM
Multi-host architecture
The following diagram shows a multi-host architecture on Google Cloud.
As the workload demand increases, especially when using OLAP, a multi-host, scale-out architecture can distribute the load across all hosts.
The scale-out architecture consists of one master host, a number of worker hosts, and, optionally, one or more standby hosts. The hosts are interconnected through a network that supports sending data between hosts at rates of up to 16 Gbps.
Standby hosts support the SAP HANA host auto-failover fault recovery solution. For more information about host auto-failover on Google Cloud, see the SAP HANA High Availability and Disaster Recovery Planning Guide.
Disk structures for SAP HANA scale-out systems on Google Cloud
Except for standby hosts, each host has its own `/hana/data`, `/hana/log`, and, usually, `/usr/sap` volumes on SSD persistent disks, which provide consistent, high-IOPS IO services. The master host also serves as an NFS master for the `/hana/shared` and `/hanabackup` volumes, which are mounted on each worker and standby host.
For a standby host, the `/hana/data` and `/hana/log` volumes are not mounted until a takeover occurs.
High availability for SAP HANA scale-out systems on Google Cloud
The following features help ensure the high availability of a SAP HANA scale-out system:
- Compute Engine live migration
- Compute Engine automatic instance restart
- SAP HANA host auto-failover with up to three SAP HANA standby hosts
For more information about high availability options on Google Cloud, see the SAP HANA High Availability and Disaster Recovery Planning Guide.
In the event of a live migration or automatic instance restart, the `/hana/shared` and `/hanabackup` volumes, which are protected by persistent storage, can be back online as soon as the instance is up.
If you are using a standby host, in the event of a failure, SAP HANA host auto-failover unmounts the `/hana/data` and `/hana/log` volumes from the failed host and mounts them on the standby host.
Components in a SAP HANA scale-out system on Google Cloud
A multi-host SAP HANA scale-out architecture on Google Cloud contains the following components:
- One Compute Engine VM instance for each SAP HANA host in the system, including one master host, up to 15 worker hosts, and up to three optional standby hosts. Each VM uses the same Compute Engine machine type. For the machine types that are supported by SAP HANA, see Certified VM types for SAP HANA. Each VM must include SSD and HDD storage, mounted in the correct location.
- A separately deployed NFS solution for sharing the `/hana/shared` and the `/hanabackup` volumes with the worker and standby hosts. You can use Filestore or another NFS solution.
- An optional, but recommended, subnetwork with a custom topology and IP ranges in the GCP region of your choice. The SAP HANA database and the other Compute Engine instances are launched within this subnetwork. You can use an existing subnetwork if you prefer.
- Optionally, an internet gateway configured for outbound internet access for your SAP HANA instance and other instances.
- Optionally, a Compute Engine `n1-standard-2` VM with the Windows operating system installed to host SAP HANA Studio.
- Optionally, a Compute Engine `n1-standard-1` VM for a bastion host.
- Compute Engine firewall rules or other network access controls that restrict access to your Compute Engine instances while allowing communication between the instances and any other distributed or remote resources that your SAP HANA system requires.
Deploying scale-out systems with Deployment Manager
Google Cloud provides Deployment Manager configuration templates that you can use to automate the deployment of SAP HANA multi-host scale-out systems.
- To deploy a scale-out system that does not include SAP HANA host auto-failover, see the SAP HANA Deployment Guide.
The Deployment Manager scripts can deploy the VMs, persistent disks, and SAP HANA. The script also mounts the NFS solution to the VMs.
The Deployment Manager scripts do not deploy the following system components:
- The network and subnetwork
- The NFS solution
- Firewall rules
- NAT gateways, bastion hosts, or their VMs
- SAP HANA Studio or its VM
Support
For issues with Google Cloud infrastructure or services, contact Google Cloud Support. You can find contact information on the Support Overview page in the Google Cloud Console. If Google Cloud Support determines that a problem resides in your SAP systems, you are referred to SAP Support.
For SAP product-related issues, log your support request with SAP support. SAP evaluates the support ticket and, if it appears to be a Google Cloud infrastructure issue, transfers the ticket to the Google Cloud component BC-OP-LNX-GOOGLE or BC-OP-NT-GOOGLE.
Support requirements
Before you can receive support for SAP systems and the Google Cloud infrastructure and services that they use, you must meet the minimum support plan requirements.
For more information about the minimum support requirements for SAP on Google Cloud, see:
- Getting support for SAP on Google Cloud
- SAP Note 2456406 (An SAP user account is required)
What's next
- For more information from SAP about SAP HANA dynamic tiering, see SAP HANA Dynamic Tiering.