Workload Manager best practices for SAP

This document lists the best practices that Workload Manager supports for evaluating SAP workloads running on Google Cloud. To learn about Workload Manager, see Product overview.

Best practices for SAP workloads

The following list shows the Workload Manager best practices for evaluating SAP workloads that run on Google Cloud.

Note that to enable Workload Manager to evaluate your SAP workloads, you must set up Google Cloud's Agent for SAP on the host VMs.

Each of the following entries lists the best practice's category, its name and description, and its severity.
Category: SAP General
Set up Google Cloud's Agent for SAP on VMs that run SAP applications

Google Cloud's Agent for SAP is required for SAP support on any VM that runs an SAP system.

For more information, see Google Cloud's Agent for SAP planning guide.
Severity: Medium
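As a rough illustration of this prerequisite, the following sketch installs the agent package and confirms that its service is running. It assumes the Google Cloud package repository is already configured on the VM; the OS-specific repository setup is covered in the agent installation guide.

```bash
# Minimal sketch: install Google Cloud's Agent for SAP and verify the service.
# Assumes the Google Cloud package repository is already configured on the VM.
sudo yum install -y google-cloud-sap-agent    # RHEL; on SLES: sudo zypper install google-cloud-sap-agent
sudo systemctl enable --now google-cloud-sap-agent
systemctl status google-cloud-sap-agent       # should report "active (running)"
```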
Category: SAP HANA
SAP HANA: Use a certified OS

To receive support from SAP and Google Cloud for SAP HANA on a Compute Engine VM, you must use an operating system version that is certified by SAP and Google Cloud for use with SAP HANA.

For more information, see OS support for SAP HANA on Google Cloud.
Severity: Critical
Category: SAP HANA
SAP HANA: Use a certified custom VM type

To receive support from SAP and Google Cloud for SAP HANA on a Compute Engine custom VM, you must use a custom VM type that is certified by SAP and Google Cloud for use with SAP HANA.

For more information, see Certified custom machine types for SAP HANA.
Severity: Critical
Category: SAP HANA
SAP HANA: Map the SAP HANA data and log volumes to the same type of SSD-based persistent disk

For performance reasons, the SAP HANA /hana/data and /hana/log volumes must be mapped to the same type of SSD-based persistent disk. You can map both volumes to the same single persistent disk or, if the same persistent disk type is used for each, you can map each volume to a separate persistent disk.

For more information, see the SAP HANA planning guide.
Severity: Critical
Category: SAP HANA
SAP HANA: Use a certified VM type

To receive support from SAP and Google Cloud for SAP HANA on a Compute Engine VM, you must use a VM type that is certified by SAP and Google Cloud for use with SAP HANA.

For more information, see Certified Compute Engine VMs for SAP HANA.
Severity: Critical
Category: SAP NetWeaver
SAP NetWeaver: Use a certified OS

To receive support from SAP and Google Cloud for SAP NetWeaver on a Compute Engine VM, you must use an operating system version that is certified by SAP and Google Cloud for use with SAP NetWeaver.

For more information, see OS support for SAP NetWeaver on Google Cloud.
Severity: Critical
Category: SAP HANA
SAP HANA: Minimum allowable sizes for SSD-based persistent disk options

For block storage, SAP HANA requires a minimum throughput of 400 MB per second. If you are using SSD or balanced persistent disks, use the minimum size for that persistent disk type to provide the necessary throughput. If you are using extreme persistent disks, provision a minimum of 20,000 IOPS.

For more information, see Persistent disk storage in the SAP HANA planning guide.
Severity: Critical
Category: SAP High Availability
Corosync: Use the recommended value for the join parameter

In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, the Corosync join parameter must be configured correctly to conform to Google Cloud best practices.

For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
Severity: High
Category: SAP High Availability
Corosync: Use the recommended value for the max_messages parameter

In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, to avoid message flooding between cluster nodes during token processing, set the Corosync max_messages parameter to a value of 20.

For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
Severity: High
Category: SAP High Availability
Corosync: Use the recommended value for the token_retransmits_before_loss_const parameter

In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the Corosync parameter token_retransmits_before_loss_const to a value of 10 or more to conform to Google Cloud best practices.

For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
Severity: Critical
Category: SAP High Availability
Corosync: Use the recommended value for the token parameter for high availability

In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the value of the Corosync token parameter to the recommended timeout value of 20000 to conform to the Google Cloud best practice for failure detection.

For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
Severity: Critical
Category: SAP High Availability
Corosync: Set the transport protocol correctly

In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the Corosync transport protocol as appropriate for your operating system. For Red Hat Enterprise Linux (RHEL) 8 and later, set the parameter to knet. For other supported operating systems, the expected value is udpu.

For more information, see the high-availability configuration guide for your operating system.
Severity: Critical
Category: SAP High Availability
Corosync: Use the recommended value for the consensus parameter for high availability

In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, Corosync sets the default value of the consensus parameter to 1.2 times the value of the token parameter. We recommend that you don't modify this value. If you do change the default, make sure that it is at least 1.2 times the token value.

For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
Severity: Medium
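Taken together, the preceding Corosync entries correspond to totem values similar to the following sketch. Treat it as an illustration rather than a complete corosync.conf: the join value of 60 is an assumption taken from the Google Cloud high-availability guides, and the nodelist and quorum sections of your file remain unchanged.

```
# Sketch of the totem section in /etc/corosync/corosync.conf for a
# Pacemaker cluster on RHEL 8 or later; on other supported operating
# systems, use "transport: udpu".
totem {
    version: 2
    token: 20000
    token_retransmits_before_loss_const: 10
    join: 60
    consensus: 24000
    max_messages: 20
    transport: knet
}
```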
Category: SAP High Availability
Pacemaker: Set pcmk_delay_max on the fencing device cluster resource

To avoid fence race conditions in Linux Pacemaker high-availability clusters for SAP, the pcmk_delay_max parameter must be specified with a value of 30 or greater in the definition of the fencing resource.

For more information, see Special Options for Fencing Resources.
Severity: Critical
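A hedged sketch of how this can look with the RHEL pcs syntax and the fence_gce agent; the resource name, instance name, zone, and project are placeholders, and the SLES crm syntax differs.

```bash
# Minimal sketch: create a fencing resource with pcmk_delay_max set to 30
# to avoid a fence race in a two-node cluster.
pcs stonith create STONITH-hana-vm-1 fence_gce \
    port=hana-vm-1 zone=us-central1-a project=example-project \
    pcmk_delay_max=30 \
    op monitor interval=300s timeout=120s
```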
Category: SAP High Availability
Pacemaker: Use the recommended value for the timeout parameter

The definition of the SAP HANA resource in a Linux Pacemaker HA cluster contains a timeout value for the stop, start, promote, and demote operations. For Linux Pacemaker HA clusters for SAP on Google Cloud, we recommend a value of at least 3600 seconds for each operation.

For more information, see the high-availability configuration guide for your operating system.
Severity: Critical
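A minimal sketch using the RHEL pcs syntax; the SAPHana resource name is a placeholder, and the SLES crm syntax differs.

```bash
# Raise the lifecycle operation timeouts on an existing SAPHana resource
# to the recommended 3600 seconds.
pcs resource update SAPHana_HA1_00 \
    op start timeout=3600 op stop timeout=3600 \
    op promote timeout=3600 op demote timeout=3600
```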
Category: SAP High Availability
Pacemaker: Set the high-availability cluster migration-threshold to the recommended value for SAP HANA

To migrate the SAP HANA resource to a new cluster node in the event of a failure in a Linux Pacemaker high-availability cluster, the SAP HANA resource definition must specify the migration-threshold parameter with the recommended value of 5000. This parameter determines the number of failures after which the resource fails over and the cluster node is marked ineligible to host the SAP HANA resource.

For more information, see the high-availability configuration guide for your operating system.
Severity: High
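A minimal sketch using the RHEL pcs syntax; the resource name is a placeholder, and the SLES crm syntax differs.

```bash
# Set the recommended migration-threshold on the SAP HANA resource.
pcs resource meta SAPHana_HA1_00 migration-threshold=5000
```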
Category: SAP High Availability
Pacemaker: Update the resource location preference constraints

A Linux Pacemaker HA cluster contains a location preference constraint that has been set on one or more resources. For Linux Pacemaker HA clusters for SAP on Google Cloud, we recommend removing location preference constraints so that the cluster software is not forced to place resources on a specific node. Such constraints are typically left behind when a resource is manually moved between nodes in the cluster.

For more information, see the high-availability configuration guide for your operating system.
Severity: Critical
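A minimal sketch using the RHEL pcs syntax; the resource name is a placeholder. Clearing the resource removes the constraint that a manual move leaves behind.

```bash
# List constraints, then remove the location constraint left behind by a
# manual resource move.
pcs constraint
pcs resource clear SAPHana_HA1_00
```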
Category: SAP High Availability
Pacemaker: Deactivate maintenance mode

To allow a Linux Pacemaker high-availability cluster configuration to monitor and manage its application resources, the cluster nodes that host those resources must not be in maintenance mode.

For more information, see the high-availability configuration guide for your operating system.
Severity: Critical
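A minimal sketch using the RHEL pcs syntax; on SLES, use the equivalent crm shell command.

```bash
# Take the cluster out of maintenance mode so that Pacemaker resumes
# monitoring and managing resources, then verify the property.
pcs property set maintenance-mode=false
pcs property list --all | grep maintenance-mode
```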
Category: SAP High Availability
Pacemaker: Use the recommended value for the topology monitor setting

A Linux Pacemaker HA cluster contains a SAP HANA topology resource that includes a monitor operation, which has an interval value and a timeout value. For Linux Pacemaker HA clusters for SAP on Google Cloud, we recommend a value between 10 and 60 seconds for the interval, and a value of 600 seconds for the timeout.

Severity: Critical
Category: SAP General
Enable automatic restart for SAP workloads

To ensure that the VM restarts automatically in the event of a failure, enable the Compute Engine automatic restart policy for any VM that is running an SAP workload.

For more information, see Set VM host maintenance policy.
Severity: Critical
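A minimal sketch using gcloud; the VM name and zone are placeholders.

```bash
# Enable automatic restart on an existing VM and verify the scheduling settings.
gcloud compute instances set-scheduling hana-vm-1 \
    --zone=us-central1-a --restart-on-failure
gcloud compute instances describe hana-vm-1 \
    --zone=us-central1-a --format="yaml(scheduling)"
```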
Category: SAP HANA
SAP HANA: Enable SAP HANA Fast Restart

Compute Engine includes functionality based on Intel's Memory RAS features that can significantly reduce the impact of memory errors that would otherwise cause VM crashes. When combined with SAP HANA's Fast Restart capability (available since SAP HANA 2.0 SPS 04), SAP HANA systems can recover from such failure events. This configuration is recommended for all memory-optimized machine families.

For more information, see SAP HANA Fast Restart option.
Severity: Critical
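As an illustration of what the configuration involves, the following sketch sets up tmpfs file systems for SAP HANA Fast Restart on a VM with two NUMA nodes; the SID, node count, and sizing are placeholders, and the exact steps are in the Fast Restart documentation.

```bash
# Create one tmpfs mount per NUMA node and make the mounts persistent.
sudo mkdir -p /hana/tmpfs0/HA1 /hana/tmpfs1/HA1
echo "tmpfsHA10 /hana/tmpfs0/HA1 tmpfs rw,relatime,mpol=prefer:0 0 0" | sudo tee -a /etc/fstab
echo "tmpfsHA11 /hana/tmpfs1/HA1 tmpfs rw,relatime,mpol=prefer:1 0 0" | sudo tee -a /etc/fstab
sudo mount -a
# Then point SAP HANA at the tmpfs volumes, for example with hdbsql:
#   ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM')
#     SET ('persistence','basepath_persistent_memory_volumes') =
#     '/hana/tmpfs0/HA1;/hana/tmpfs1/HA1' WITH RECONFIGURE;
```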
Category: SAP General
Set VM maintenance policy to MIGRATE for SAP workloads

To prevent any platform maintenance events from stopping or restarting a VM that is running SAP workloads, the onHostMaintenance parameter for the VM must be set to the recommended option MIGRATE.

For more information, see Set VM host maintenance policy.
Severity: Critical
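A minimal sketch using gcloud; the VM name and zone are placeholders.

```bash
# Set the host maintenance policy to MIGRATE and verify the setting.
gcloud compute instances set-scheduling hana-vm-1 \
    --zone=us-central1-a --maintenance-policy=MIGRATE
gcloud compute instances describe hana-vm-1 \
    --zone=us-central1-a --format="value(scheduling.onHostMaintenance)"
```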
Category: SAP High Availability
High Availability: Set the system replication hook for SAP HANA

In an SAP HANA high-availability configuration, implement the system replication hook provided by the operating system vendor. Without the hook, the replication state of SAP HANA System Replication can be reported incorrectly to the Linux Pacemaker cluster.

For more information, see the high-availability configuration guide for your operating system.
Severity: Critical
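As an illustration, the following SLES-flavored sketch shows how the SAPHanaSR hook is typically registered in SAP HANA's global.ini. The provider name, path, and any required sudoers entries depend on your operating system and resource-agent package, so follow the guide for your OS.

```
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[trace]
ha_dr_saphanasr = info
```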
Category: SAP High Availability
High Availability: Ensure multi-zonal setup for SAP HANA

To ensure resiliency of an SAP HANA high-availability configuration, the primary and secondary nodes must exist in different zones in the same region.

For more information, see the SAP HANA planning guide.
Severity: Medium
Category: SAP NetWeaver
SAP NetWeaver: Use a certified custom VM type

To receive support from SAP and Google Cloud for SAP NetWeaver on a Compute Engine custom VM, you must use a custom VM type that is certified by SAP and Google Cloud for use with SAP NetWeaver.

For more information, see Certified machines in the SAP NetWeaver planning guide.
Severity: Critical
Category: SAP NetWeaver
SAP NetWeaver: Use a certified VM type

To receive support from SAP and Google Cloud for SAP NetWeaver on a Compute Engine VM, you must use a VM type that is certified by SAP and Google Cloud for use with SAP NetWeaver.

For more information, see Machine types in the SAP NetWeaver planning guide.
Severity: Critical
Category: SAP Security
SAP HANA Security: Enable encryption for data and log backups

Encryption protects backups from unauthorized access by encrypting the backup data before it is transferred to the backup location. This means that even if an unauthorized user gains access to the backup data, they cannot read it without the decryption key. This applies to both file-based backups and backups created using third-party backup tools. We recommend that you enable backup encryption in the SAP HANA system.

For more information, see the system backup encryption statement in the SAP HANA reference guide.
Severity: Medium
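A minimal sketch using hdbsql; the instance number, database name, and user are placeholders, and encryption root keys must already be managed as described in the SAP HANA documentation.

```bash
# Enable backup encryption for a tenant database.
hdbsql -i 00 -d HA1 -u SYSTEM \
  "ALTER SYSTEM BACKUP ENCRYPTION ON"
```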
Category: SAP Security
SAP HANA Security: Users with DEVELOPMENT privileges in production environment

At least one user or role has the DEVELOPMENT privilege in the production database. Google Cloud recommends that you have no users with this privilege.

For more information, see DEVELOPMENT privilege section in SAP HANA security checklists and recommendations.
Severity: Medium
Category: SAP Security
SAP HANA Security: Unchanged initial password

The force_first_password_change parameter in SAP HANA specifies whether users are required to change their password after they are created. We recommend that you enable the force_first_password_change parameter.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
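A minimal sketch using hdbsql; the instance number, database name, and user are placeholders. The same pattern applies to the other password policy parameters in this section, such as last_used_passwords, maximum_invalid_connect_attempts, and password_lock_time.

```bash
# Enable forced password change after user creation.
hdbsql -i 00 -d HA1 -u SYSTEM \
  "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
     SET ('password policy','force_first_password_change') = 'true'
     WITH RECONFIGURE"
```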
Category: SAP Security
SAP HANA Security: Users with SAP_INTERNAL_HANA_SUPPORT privileges in production environment

At least one user has the SAP_INTERNAL_HANA_SUPPORT role. This is an internal role that enables low-level access to data. It should be assigned only to administrators or support staff at the request of SAP Development, and only during an active SAP support request.

For more information, see SAP_INTERNAL_HANA_SUPPORT role in SAP HANA security checklists and recommendations.
Severity: Medium
Category: SAP Security
SAP HANA Security: Prevention of password reuse

Password reuse is a common security vulnerability. The last_used_passwords parameter in SAP HANA prevents users from reusing their most recent passwords. The parameter specifies the number of past passwords that a user is not allowed to use when changing their current password. We recommend that you set last_used_passwords to 5 or higher.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Encryption status of the log volume

Encryption protects SAP HANA logs from unauthorized access. One way to do this is to encrypt the logs at the operating system level. SAP HANA also supports encryption in the persistence layer, which can provide additional security. We recommend that you encrypt log volumes.

For more information, see recommendations for data encryption in SAP HANA security checklists and recommendations.
Severity: Medium
Category: SAP Security
SAP HANA Security: Maximum invalid connection attempts

The maximum_invalid_connect_attempts parameter in SAP HANA specifies the maximum number of failed login attempts; the user is locked as soon as this number is reached. We recommend that you set maximum_invalid_connect_attempts to 6 or higher.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Maximum password lifetime

The maximum_password_lifetime parameter in SAP HANA specifies the number of days after which a user's password expires. The parameter enforces a periodic change of user passwords. We recommend that you set the maximum_password_lifetime parameter to 182 or lower.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Maximum unused initial password lifetime

The initial password is only meant to serve a temporary purpose. The maximum_unused_initial_password_lifetime parameter in SAP HANA specifies the number of days for which the initial password, or any password set by a user administrator for a user, is valid. We recommend that you set maximum_unused_initial_password_lifetime to 7 or lower.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Maximum unused productive password lifetime

The maximum_unused_productive_password_lifetime parameter in SAP HANA specifies the number of days after which a password expires if the user has not logged in. It helps in reducing the risk of compromised accounts due to prolonged inactivity of the user account. We recommend that you set maximum_unused_productive_password_lifetime to 365 or lower.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Minimal password length

The minimal_password_length parameter in SAP HANA specifies the minimum number of characters that a password must contain.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Minimum password lifetime

The minimum_password_lifetime parameter in SAP HANA specifies the minimum number of days that must elapse before a user can change their password.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Password expire warning time

This check evaluates the number of days before a password expires that the user receives a notification.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Password layout

The password_layout parameter in SAP HANA specifies the character types that the password must contain; at least one character of each selected character type is required.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Password lock time

The password_lock_time parameter in SAP HANA specifies the number of minutes for which a user is locked after the maximum number of failed login attempts. We recommend that you set password_lock_time to 1440 or higher.

For more information, see password policy configuration options in the SAP HANA One security guide.
Severity: Medium
Category: SAP Security
SAP HANA Security: Encryption status of the persistent data volume

We recommend that you protect SAP HANA data from unauthorized access. One way to do this is to encrypt the data at the operating system level. SAP HANA also supports encryption in the persistence layer, which can provide additional security. We recommend that you encrypt data volumes.

For more information, see recommendations for data encryption in SAP HANA security checklists and recommendations.
Severity: Medium
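A minimal sketch using hdbsql, with the same placeholder assumptions as the earlier backup encryption example; the analogous statement for the log volume entry above is ALTER SYSTEM LOG ENCRYPTION ON.

```bash
# Enable data volume (persistence) encryption for a tenant database.
hdbsql -i 00 -d HA1 -u SYSTEM \
  "ALTER SYSTEM PERSISTENCE ENCRYPTION ON"
```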
Category: SAP Security
SAP HANA Security: HANA versions affected by CVE-2019-0357

CVE-2019-0357 is a vulnerability that allows database users with administrator privileges to run operating system commands as root on particular SAP HANA versions.

For more information, see SAP security note for CVE-2019-0357.
Severity: Medium
Category: SAP Security
SAP HANA Security: Users with debug privileges in production environment

At least one user has the DEBUG or ATTACH DEBUGGER privilege in the system. We recommend that no users have this privilege.

For more information, see recommendations for database users in SAP HANA security checklists and recommendations.
Severity: Medium
Category: SAP Security
SAP HANA Security: Restricted senders in system replication configuration

When the system replication listen interface is set to .global, restrict communication to trusted hosts by configuring the allowed_sender parameter.

For more information, see recommendations for network configurations in SAP HANA security checklists and recommendations.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Enable data and log compression

Data and log compression can be used for the initial full data shipping, the subsequent delta data shipping, and the continuous log shipping. Data and log compression can be configured to reduce the amount of traffic between systems, especially over long distances (for example, when using the ASYNC replication mode).

For more information, see Data and Log Compression in the SAP HANA System Replication guide.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Set logshipping_async_buffer_size on the primary site

If system replication is disconnected during a full data shipment, then replication has to start from scratch. To reduce the risk of buffer-full situations, you can increase the logshipping_async_buffer_size parameter to 1 GB on the primary site.

For more information, see SAP HANA System Replication in the SAP Knowledge Base.
Severity: Medium
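A minimal sketch using hdbsql on the primary site; the instance number and user are placeholders, and the value is specified in bytes (1 GB = 1073741824).

```bash
# Increase the asynchronous log shipping buffer to 1 GB.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM')
     SET ('system_replication','logshipping_async_buffer_size') = '1073741824'
     WITH RECONFIGURE"
```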
Category: SAP HANA Insights
SAP HANA Insights: Use the recommended value for the datashipping_parallel_channels parameter

The SAP HANA parameter datashipping_parallel_channels defines the number of network channels used by full or delta datashipping. The default value is 4, which means that four network channels are used to ship data.

For more information, see datashipping_parallel_channels in the SAP HANA Administration Guide.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Use the recommended value for the logshipping_max_retention_size parameter

In the context of logreplay operation modes, the logshipping_max_retention_size SAP HANA parameter defines the maximum amount of redo log that is kept on the primary site for synchronization with the secondary site (default: 1 TB). If the underlying file system isn't large enough to hold the configured retention size, in the worst case the file system can fill up and the primary site comes to a standstill.

For more information, see SAP HANA System Replication in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check the status of the HANA license

A permanent license key is required to operate an SAP HANA system. If a permanent license key expires, a (second) temporary license key is generated automatically and is valid for 28 days.

For more information, see License Keys for SAP HANA Database in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check the status of log_mode

If log_mode is set to 'normal', HANA creates regular log backups, allowing for point-in-time recovery (restoring up to the moment before a failure). If log_mode is set to 'overwrite', no log backups are created; you can only recover the database to the last data backup.

For more information, see Log Modes in the SAP HANA Administration Guide.
Severity: Medium
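A minimal sketch using hdbsql; placeholders as in the earlier examples. A change to log_mode takes effect only after the database is restarted, and switching to normal should be followed by a full data backup; see the SAP HANA Administration Guide for the exact steps.

```bash
# Check the current log_mode, then set it to normal if needed.
hdbsql -i 00 -d HA1 -u SYSTEM \
  "SELECT LAYER_NAME, VALUE FROM M_INIFILE_CONTENTS
     WHERE FILE_NAME = 'global.ini' AND SECTION = 'persistence' AND KEY = 'log_mode'"
hdbsql -i 00 -d HA1 -u SYSTEM \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM')
     SET ('persistence','log_mode') = 'normal' WITH RECONFIGURE"
```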
Category: SAP HANA Insights
SAP HANA Insights: Regular backup catalog housekeeping is needed to improve backup performance

The backup catalog can grow quite large over time, especially if it is not regularly cleaned up. This can lead to performance problems and can make it difficult to find the backups that are needed.

For more information, see SAP HANA multiple issue caused by large Log Backups due to large Backup Catalog size in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check for appropriate configuration of the automatic_reorg_threshold parameter

The automatic_reorg_threshold parameter specifies when automatic reorganization of row store tables is triggered. If the value is set to the default of 30, automatic reorganization is not triggered as often as it could be.

For more information, see Incorrect SAP HANA Alert 71: 'Row store fragmentation' in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check for appropriate configuration of the log_disk_usage_reclaim_threshold parameter

If the log partition file system disk usage ('usedDiskSpace' as a percentage of 'totalDiskSpace') is above the specified threshold, the logger automatically triggers an internal 'log release' (0 = disabled). By default, the logger keeps all free log segments cached for reuse; segments are removed only if a reclaim is triggered explicitly by using 'ALTER SYSTEM RECLAIM LOG' or if a 'DiskFull' or 'LogFull' event occurs at the logger level. You can use this threshold parameter to trigger the reclaim internally before a 'DiskFull' or 'LogFull' situation occurs.

For more information, see log_disk_usage_reclaim_threshold in the SAP HANA Configuration Parameter Reference.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Verify the time of the last table consistency check

Regular consistency checks are required to detect hidden corruptions as early as possible.

For more information, see SAP HANA Consistency Checks and Corruptions in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check for appropriate configuration of garbage collection parameters

In databases with an allocation limit of more than 235 GB, the gc_unused_memory_threshold_rel and gc_unused_memory_threshold_abs parameters have to be configured. These parameters help reduce the risk of hiccups (for example, due to MemoryReclaim waits) when garbage collection happens reactively.

For more information, see SAP HANA Garbage Collection in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check for appropriate configuration of the max_cpuload_for_parallel_merge parameter

By default, multiple auto merges (up to num_merge_threads) of different tables or partitions can run up to a CPU utilization limit of 45%. As soon as this limit is exceeded, at most one auto merge runs at any time. In the worst case, this can result in an increased auto merge backlog even though sufficient system resources for handling parallel auto merges are still available. In this case, consider increasing this parameter to a value that is higher than the usual CPU utilization but lower than a critical limit at which auto merges would introduce resource bottlenecks.

For more information, see SAP HANA Delta Merges in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check for appropriate configuration of the parallel_merge_threads parameter

If parallel_merge_threads is set to a specific value, this value is used for parallelism while token_per_table defines the number of consumed tokens.

For more information, see SAP HANA Delta Merges in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Verify default and worker stack size parameters

The thread stack parameters default_stack_size_kb and worker_stack_size_kb determine the amount of stack memory that a newly created thread can access.

For more information, see Indexserver Crash Due to STACK OVERFLOW in Evaluator::ExpressionParser in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check that all hosts in a scale-out environment have a consistent OS version and kernel version

In a scale-out SAP HANA environment, maintaining consistent OS and kernel versions across all nodes of the system is crucial for optimal performance and stability.

For more information, see SAP HANA: Supported Operating Systems in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Insights: Check that all hosts in a scale-out environment have a consistent time zone

In a scale-out SAP HANA environment, a consistent time zone across all hosts is crucial for system stability.

For more information, see Check HANA DB for DST switch in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Performance: Check for appropriate configuration of the tables_preloaded_in_parallel parameter in X4 VMs

The tables_preloaded_in_parallel parameter lets you control the number of tables loaded in parallel after you start your SAP HANA system, providing flexibility for performance optimization. We recommend a minimum value of 32.

For more information, see SAP HANA Loads and Unloads in the SAP Knowledge Base.
Severity: Medium
Category: SAP HANA Insights
SAP HANA Performance: Enable the load_table_numa_aware parameter

To improve the performance of NUMA-based SAP HANA systems, enable the load_table_numa_aware parameter. When this parameter is enabled, SAP HANA optimizes data placement across NUMA nodes during table loading.

For more information, see SAP HANA Non-Uniform Memory Access (NUMA) in the SAP Knowledge Base.
Severity: Medium
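A minimal sketch using hdbsql; placeholders as in the earlier examples. The 'parallel' section shown here is an assumption; confirm the parameter's file and section for your SAP HANA revision in the SAP documentation.

```bash
# Enable NUMA-aware table loading.
hdbsql -i 00 -d HA1 -u SYSTEM \
  "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
     SET ('parallel','load_table_numa_aware') = 'true' WITH RECONFIGURE"
```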
Category: SAP General
SAP General: Configure OS settings for X4 instances

To ensure that X4 instances are optimized to support SAP workloads, you must run the command-line utility provided by Google Cloud's Agent for SAP to verify that the OS configuration matches best practice recommendations.

For more information, see Post-deployment tasks in the SAP HANA planning guide.
Severity: Medium