SAP HANA operations guide

This guide provides instructions for operating SAP HANA systems deployed on Google Cloud by following the Terraform: SAP HANA scale-up deployment guide. Note that this guide is not intended to replace the standard SAP documentation.

Administering an SAP HANA system on Google Cloud

This section shows how to perform administrative tasks typically required to operate an SAP HANA system, including information about starting, stopping, and cloning systems.

Starting and stopping instances

You can stop one or more SAP HANA hosts at any time. Stopping an instance shuts it down; if the shutdown doesn't complete within 2 minutes, the instance is forced to halt. As a best practice, stop SAP HANA on the instance before you stop the instance itself.

Stopping a VM

Stopping a virtual machine (VM) instance causes Compute Engine to send the ACPI power-off signal to the instance. You are not billed for the Compute Engine instance after the instance is stopped. If you have persistent disks attached to the instance, the disks are not deleted and you will be charged for them.

If the data on the persistent disk is important, you can either keep the disk or create a snapshot of the persistent disk and delete the disk to save on costs. You can create another disk from the snapshot when you need the data again.
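The snapshot-then-delete approach can be sketched with gcloud commands. The disk name and zone below are hypothetical placeholders, and the leading `echo` makes this a dry run that only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch: snapshot a disk you no longer need attached, then
# delete it to save costs. DISK and ZONE are hypothetical; remove the
# leading "echo" to actually execute the commands.
DISK="hana-vm-backup00001"
ZONE="europe-west4-a"
echo gcloud compute snapshots create "${DISK}-snapshot" \
  --source-disk="${DISK}" --source-disk-zone="${ZONE}"
echo gcloud compute disks delete "${DISK}" --zone="${ZONE}" --quiet
```

When you need the data again, create a new disk from the snapshot as described later in this guide.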

To stop an instance:

  1. In the Google Cloud console, go to the VM Instances page.

  2. Select one or more instances that you want to stop.

  3. At the top of the VM instances page, click STOP.

For more information, see Stopping an instance.

Restarting a VM

  1. In the Google Cloud console, go to the VM Instances page.

  2. Select the instances that you want to restart.

  3. At the top of the page, click START to restart the instances.

For more information, see Restarting an instance.
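If you prefer the command line, the same stop and start operations can be performed with gcloud. The instance name and zone here are placeholders, and the leading `echo` makes this a dry run:

```shell
#!/bin/sh
# Dry-run sketch of stopping and starting a VM with gcloud.
# INSTANCE and ZONE are hypothetical; remove "echo" to execute.
INSTANCE="hana-vm"
ZONE="europe-west4-a"
echo gcloud compute instances stop "${INSTANCE}" --zone="${ZONE}"
echo gcloud compute instances start "${INSTANCE}" --zone="${ZONE}"
```

As with the console steps, stop SAP HANA on the instance before stopping the VM.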

Modifying a VM

You can change various attributes of a VM, including the VM type, after the VM is deployed. Some changes might require you to restore your SAP system from backups, while others only require you to restart the VM.

For more information, see Modifying VM configurations for SAP systems.
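One common modification, changing the machine type, can be sketched with gcloud as follows. The VM must be stopped before its machine type can change; the instance name, zone, and target machine type below are hypothetical, and the leading `echo` makes this a dry run:

```shell
#!/bin/sh
# Dry-run sketch: change the machine type of a stopped VM.
# INSTANCE, ZONE, and the machine type are hypothetical;
# remove "echo" to execute.
INSTANCE="hana-vm"
ZONE="europe-west4-a"
echo gcloud compute instances stop "${INSTANCE}" --zone="${ZONE}"
echo gcloud compute instances set-machine-type "${INSTANCE}" \
  --zone="${ZONE}" --machine-type=m2-ultramem-416
echo gcloud compute instances start "${INSTANCE}" --zone="${ZONE}"
```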

Creating a snapshot of SAP HANA

To generate a point-in-time backup of your persistent disk, you can create a snapshot. Compute Engine redundantly stores multiple copies of each snapshot across multiple locations with automatic checksums to ensure the integrity of your data.

To create a snapshot, follow the Compute Engine instructions for creating snapshots. Pay careful attention to the preparation steps before creating a consistent snapshot, such as flushing the disk buffers to disk, to make sure that the snapshot is consistent.

Snapshots are useful for the following use cases:

  • Provide an easy, software-independent, and cost-effective data backup solution. Back up your data, log, backup, and shared disks with snapshots. Schedule a daily snapshot of these disks for point-in-time backups of your entire dataset. After the first snapshot, subsequent snapshots store only the incremental block changes, which helps save costs.
  • Migrate to a different storage type. Compute Engine offers different types of persistent disks, including types backed by standard (magnetic) storage and types backed by solid-state drive (SSD) storage. Each has different cost and performance characteristics. For example, use a standard type for your backup volume and an SSD-based type for the /hana/log and /hana/data volumes, because those volumes require higher performance. To migrate between storage types, snapshot the volume, then create a new volume from the snapshot and select a different storage type.
  • Migrate SAP HANA to another region or zone. Use snapshots to move your SAP HANA system from one zone to another zone in the same region, or to another region. Snapshots can be used globally within Google Cloud to create disks in another zone or region. To move to another region or zone, create a snapshot of your disks, including the root disk, and then create the virtual machines in your target zone or region with disks created from those snapshots.
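The region or zone migration described above can be sketched with gcloud. All names and zones below are hypothetical, and the leading `echo` makes this a dry run:

```shell
#!/bin/sh
# Dry-run sketch: move a disk to another zone via a snapshot.
# All names and zones are hypothetical; remove "echo" to execute.
DISK="hana-vm-data00001"
SRC_ZONE="europe-west4-a"
DST_ZONE="europe-west4-b"
echo gcloud compute snapshots create "${DISK}-snapshot" \
  --source-disk="${DISK}" --source-disk-zone="${SRC_ZONE}"
echo gcloud compute disks create "${DISK}-copy" \
  --source-snapshot="${DISK}-snapshot" --zone="${DST_ZONE}"
```

Repeat this pattern for each disk, including the root disk, then create VMs in the target zone that use the new disks.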

Migrating existing SAP HANA Persistent Disk volumes to Hyperdisk Extreme volumes

You can migrate existing Persistent Disk volumes to Hyperdisk Extreme volumes for your SAP HANA systems running on Google Cloud. Hyperdisk Extreme provides better performance for SAP HANA than the SSD-based Persistent Disk types.

To migrate your Persistent Disk volumes to Hyperdisk Extreme volumes, you use persistent disk snapshots together with the SAP HANA Fast Restart option. Fast Restart reduces downtime when switching disk types by eliminating the need to wait for tables to reload into memory. However, you should still account for the time needed to reload row store and binary large object (BLOB) data.

Although the migration process requires minimal downtime, the actual duration of the downtime depends on the time taken to complete the following tasks:

  • Creating snapshots. To reduce downtime while taking snapshots, you can take snapshots of the disks before the planned migration activity, and then take some additional snapshots closer to the activity, which results in a smaller difference between the snapshots.
  • Creating Hyperdisk Extreme volumes using the snapshots of your Persistent Disk volumes.
  • Reloading SAP HANA tables into SAP HANA memory.

In the event of a problem during the migration, you can revert to the existing disks because they are unaffected by this procedure and remain available until you delete them.

Before you begin

Before you migrate your SAP HANA Persistent Disk volumes to Hyperdisk Extreme volumes, make sure the following conditions are met:

  • SAP HANA is running on a certified Compute Engine VM type that supports Hyperdisk Extreme.
  • SAP HANA data and log use separate persistent disks for /hana/data and /hana/log volumes.
  • Linux Logical Volume Manager (LVM) is used for SAP HANA storage persistence. While direct storage can be used, it requires explicit device remapping through the /etc/fstab file.
  • SAP HANA Fast Restart is enabled for your SAP HANA system. For more information about how to enable SAP HANA Fast Restart, see Enabling SAP HANA Fast Restart.
  • A valid backup of the SAP HANA database is available. This backup can be used to restore the database, if required.
  • If the target VM instance is part of a high-availability cluster, then make sure that the cluster is in maintenance mode.
  • The SAP HANA database is up and running.
  • The tmpfs file system is fully loaded with the content of the MAIN data fragments. To view the file system utilization, run the following command: df -Th.

    You should see output similar to the following:

    #  df -Th
    Filesystem                        Type      Size  Used Avail Use% Mounted on
    ...
    /dev/mapper/vg_hana_shared-shared xfs       1.0T   56G  968G   6% /hana/shared
    /dev/mapper/vg_hana_data-data     xfs        14T  5.7T  8.2T  41% /hana/data
    /dev/mapper/vg_hana_log-log       xfs       512G  7.2G  505G   2% /hana/log
    /dev/mapper/vg_hana_usrsap-usrsap xfs        32G  276M   32G   1% /usr/sap
    tmpfsDB10                         tmpfs     5.7T  800G  4.9T  14% /hana/tmpfs0/DB1
    tmpfsDB11                         tmpfs     5.7T  796G  4.9T  14% /hana/tmpfs1/DB1
    tmpfsDB12                         tmpfs     5.7T  783G  4.9T  14% /hana/tmpfs2/DB1
    tmpfsDB13                         tmpfs     5.7T  780G  4.9T  14% /hana/tmpfs3/DB1
    tmpfsDB14                         tmpfs     5.7T  816G  4.9T  15% /hana/tmpfs4/DB1
    tmpfsDB15                         tmpfs     5.7T  780G  4.9T  14% /hana/tmpfs5/DB1
    tmpfsDB16                         tmpfs     5.7T  816G  4.9T  15% /hana/tmpfs6/DB1
    tmpfsDB17                         tmpfs     5.7T  780G  4.9T  14% /hana/tmpfs7/DB1
    

Migrate Persistent Disk volumes to Hyperdisk Extreme volumes

This section describes how to migrate the disk type of two persistent disks for the /hana/data and /hana/log volumes from Persistent Disk (pd-ssd) to Hyperdisk Extreme.

To illustrate the migration process, the following example configuration is used:

  • Machine type: m2-ultramem-416 (12 TB memory, 416 vCPUs)
  • SAP HANA scale-up system deployed using the Google Cloud document Terraform: SAP HANA scale-up deployment guide.
    • OS: SLES for SAP 15 SP1
    • SAP HANA: HANA 2 SPS06, Patch 63
    • Default disk type: pd-ssd
    • /hana/data and /hana/log volumes are mounted on separate disks and are built using LVM and XFS
    • SAP HANA Fast Restart is enabled and about 6 TB of data is loaded into the database

To migrate Persistent Disk volumes to Hyperdisk Extreme volumes, perform the following steps:

  1. Stop the SAP HANA database by using one of the following commands:

    HDB stop
    

    Or,

    sapcontrol -nr INSTANCE_NUMBER -function StopSystem HDB
    

    Replace INSTANCE_NUMBER with the instance number for your SAP HANA system.

    For more information, see the SAP document Starting and Stopping SAP HANA Systems.

  2. Unmount the /hana/data and /hana/log file systems:

    umount /hana/data
    umount /hana/log
    
  3. Determine the names of data and log persistent disks by using one of the following methods:

    • Run the following command:

      ls -l /dev/disk/by-id/
      

      The output shows the mapping of disk names to devices:

      ...
      lrwxrwxrwx 1 root root  9 May 18 20:14 google-hana-vm-data00001 -> ../../sdb
      lrwxrwxrwx 1 root root  9 May 18 20:14 google-hana-vm-log00001 -> ../../sdc
      ...
      
    • Run the following gcloud compute command:

      gcloud compute instances describe INSTANCE_NAME --zone=ZONE
      
      

      Replace the following:

      • INSTANCE_NAME: the name of the VM instance.
      • ZONE: the zone of the VM instance.

      The output shows the VM instance details including the associated disk information:

      gcloud compute instances describe hana-vm --zone europe-west4-a
      ...
      disks:
      - autoDelete: false
      deviceName: hana-vm-shared00001
      diskSizeGb: '1024'
      - autoDelete: false
      deviceName: hana-vm-usrsap00001
      diskSizeGb: '32'
      - autoDelete: false
      deviceName: hana-vm-data00001
      diskSizeGb: '14093'
      - autoDelete: false
      deviceName: hana-vm-log00001
      diskSizeGb: '512'
      
      
    • In the Google Cloud console, go to the Compute Engine VM instances page, and click the VM name. The Storage section displays the associated disk information.

  4. Create snapshots of the data and log persistent disks:

    gcloud compute snapshots create DATA_DISK-snapshot \
      --project=PROJECT_ID \
      --source-disk-zone=SOURCE_DISK_ZONE  \
      --source-disk=DATA_DISK
    gcloud compute snapshots create LOG_DISK-snapshot \
      --project=PROJECT_ID \
      --source-disk-zone=SOURCE_DISK_ZONE \
      --source-disk=LOG_DISK
    

    Replace the following:

    • DATA_DISK: the name of the data persistent disk from which you need to create a snapshot. This name is prefixed to the data volume snapshot.
    • LOG_DISK: the name of the log persistent disk from which you need to create a snapshot. This name is prefixed to the log volume snapshot.
    • PROJECT_ID: the ID of the project.
    • SOURCE_DISK_ZONE: the zone of the persistent disk from which you need to create a snapshot.

    For more information about creating snapshots, see Create and manage disk snapshots.

  5. Create new Hyperdisk Extreme disks for /hana/data and /hana/log volumes based on the snapshots:

    gcloud compute disks create DATA_DISK-hdx \
         --project=PROJECT_ID \
         --zone=ZONE \
         --type=hyperdisk-extreme \
         --provisioned-iops=IOPS_DATA_DISK \
         --source-snapshot=DATA_DISK-snapshot
     gcloud compute disks create LOG_DISK-hdx \
         --project=PROJECT_ID \
         --zone=ZONE \
         --type=hyperdisk-extreme \
         --provisioned-iops=IOPS_LOG_DISK \
         --source-snapshot=LOG_DISK-snapshot
    

    Replace the following:

    • DATA_DISK: the name of the original persistent disk data volume that is prefixed to the Hyperdisk Extreme data volume and snapshot of the data volume.
    • LOG_DISK: the name of the original persistent disk log volume that is prefixed to the Hyperdisk Extreme log volume and snapshot of the log volume.
    • PROJECT_ID: the ID of the project.
    • ZONE: the zone where you need to create Hyperdisk Extreme disks.
    • IOPS_DATA_DISK: the provisioned IOPS of the Hyperdisk Extreme disk for data volume. You set IOPS according to your performance requirements.
    • IOPS_LOG_DISK: the provisioned IOPS of the Hyperdisk Extreme disk for log volume. You set IOPS according to your performance requirements.

      For information about the minimum IOPS for Hyperdisk Extreme volumes attached to your instance type, see Minimum sizes for SSD-based persistent disks and Hyperdisks.

    For more information about restoring from a snapshot, see Restore from a snapshot.

  6. Detach the old SAP HANA persistent disks:

    gcloud compute instances detach-disk INSTANCE_NAME \
      --disk=DATA_DISK \
      --zone=ZONE
    gcloud compute instances detach-disk INSTANCE_NAME \
      --disk=LOG_DISK \
      --zone=ZONE
    

    Replace the following:

    • INSTANCE_NAME: the name of the VM instance.
    • DATA_DISK: the name of the data volume persistent disk to detach.
    • LOG_DISK: the name of the log volume persistent disk to detach.
    • ZONE: the zone where the persistent disks exist.
  7. Attach the new Hyperdisk Extreme disks:

    gcloud compute instances attach-disk INSTANCE_NAME \
        --disk=DATA_DISK-hdx \
        --zone=ZONE
    gcloud compute instances attach-disk INSTANCE_NAME \
       --disk=LOG_DISK-hdx \
        --zone=ZONE
    

    Replace the following:

    • INSTANCE_NAME: the name of the VM instance.
    • DATA_DISK: the name of the data volume Hyperdisk Extreme to attach.
    • LOG_DISK: the name of the log volume Hyperdisk Extreme to attach.
    • ZONE: the zone where the new Hyperdisk Extreme disks exist.
  8. To mount the new volumes, perform the following steps as the root user or by using sudo:

    1. Remove all device mapping definitions to avoid LVM device mapping conflicts:

      dmsetup remove_all
      
    2. Scan all disks for volume groups, rebuild caches, and create missing volumes (including LVM):

      vgscan -v --mknodes
      

      You should see output similar to the following:

      Scanning all devices to initialize lvmetad.
      Reading volume groups from cache.
      Found volume group "vg_hana_data" using metadata type lvm2
      Found volume group "vg_hana_shared" using metadata type lvm2
      Found volume group "vg_hana_log" using metadata type lvm2
      Found volume group "vg_hana_usrsap" using metadata type lvm2
      
    3. Activate the volume groups:

      vgchange -ay
      

      You should see output similar to the following:

       1 logical volume(s) in volume group "vg_hana_data" now active
       1 logical volume(s) in volume group "vg_hana_shared" now active
       1 logical volume(s) in volume group "vg_hana_log" now active
       1 logical volume(s) in volume group "vg_hana_usrsap" now active
      
    4. Scan for logical volumes:

      lvscan
      

      You should see output similar to the following:

      ACTIVE            '/dev/vg_hana_data/data' [13.76 TiB] inherit
      ACTIVE            '/dev/vg_hana_shared/shared' [1024.00 GiB] inherit
      ACTIVE            '/dev/vg_hana_log/log' [512.00 GiB] inherit
      ACTIVE            '/dev/vg_hana_usrsap/usrsap' [32.00 GiB] inherit
      
    5. Mount the disks:

      mount -av
      

      You should see output similar to the following:

      /                        : ignored
      /boot/efi                : already mounted
      /hana/shared             : already mounted
      /hana/data               : already mounted
      /hana/log                : already mounted
      /usr/sap                 : already mounted
      swap                     : ignored
      /hana/tmpfs0/DB1         : already mounted
      /hana/tmpfs1/DB1         : already mounted
      /hana/tmpfs2/DB1         : already mounted
      /hana/tmpfs3/DB1         : already mounted
      /hana/tmpfs4/DB1         : already mounted
      /hana/tmpfs5/DB1         : already mounted
      /hana/tmpfs6/DB1         : already mounted
      /hana/tmpfs7/DB1         : already mounted
      
  9. Verify the new volumes:

    • Verify the file systems utilization:

      df -Th
      

      You should see output similar to the following:

      Filesystem                        Type      Size  Used Avail Use% Mounted on
      ...
      /dev/mapper/vg_hana_shared-shared xfs       1.0T   56G  968G   6% /hana/shared
      /dev/mapper/vg_hana_usrsap-usrsap xfs        32G  277M   32G   1% /usr/sap
      tmpfsDB10                         tmpfs     5.7T  784G  4.9T  14% /hana/tmpfs0/DB1
      tmpfsDB11                         tmpfs     5.7T  783G  4.9T  14% /hana/tmpfs1/DB1
      tmpfsDB12                         tmpfs     5.7T  783G  4.9T  14% /hana/tmpfs2/DB1
      tmpfsDB13                         tmpfs     5.7T  782G  4.9T  14% /hana/tmpfs3/DB1
      tmpfsDB14                         tmpfs     5.7T  783G  4.9T  14% /hana/tmpfs4/DB1
      tmpfsDB15                         tmpfs     5.7T  783G  4.9T  14% /hana/tmpfs5/DB1
      tmpfsDB16                         tmpfs     5.7T  783G  4.9T  14% /hana/tmpfs6/DB1
      tmpfsDB17                         tmpfs     5.7T  782G  4.9T  14% /hana/tmpfs7/DB1
      /dev/mapper/vg_hana_log-log       xfs       512G  7.2G  505G   2% /hana/log
      /dev/mapper/vg_hana_data-data     xfs        14T  5.7T  8.2T  41% /hana/data
      
    • Verify that the devices are linked to the new volumes:

      lsblk
      

      You should see output similar to the following:

      NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      ...
      sdd                       8:48   0    1T  0 disk
      └─vg_hana_shared-shared 254:0    0 1024G  0 lvm  /hana/shared
      sde                       8:64   0   32G  0 disk
      └─vg_hana_usrsap-usrsap 254:3    0   32G  0 lvm  /usr/sap
      sdf                       8:80   0 13.8T  0 disk
      └─vg_hana_data-data     254:1    0 13.8T  0 lvm  /hana/data
      sdg                       8:96   0  512G  0 disk
      └─vg_hana_log-log       254:2    0  512G  0 lvm  /hana/log
      
  10. Start the SAP HANA instance by using one of the following commands:

    HDB start
    

    Or,

    sapcontrol -nr INSTANCE_NUMBER -function StartSystem HDB
    

    Replace INSTANCE_NUMBER with the instance number for your SAP HANA system.

    For more information, see Starting and Stopping SAP HANA Systems.

Fallback

If the disk migration fails, you can use the original disks as a fallback option because they contain the data as it existed before the migration procedure began.

To restore the original state, perform the following steps:

  1. Stop the VM instance.
  2. Detach the newly created Hyperdisk Extreme volumes.
  3. Reattach the original disks to the VM instance.
  4. Start the VM instance.
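The fallback steps above can be sketched as gcloud commands. All names are hypothetical placeholders based on the earlier example, and the leading `echo` makes this a dry run:

```shell
#!/bin/sh
# Dry-run sketch of the fallback: stop the VM, swap the Hyperdisk
# Extreme volumes for the original disks, then restart the VM.
# All names are hypothetical; remove "echo" to execute.
INSTANCE="hana-vm"
ZONE="europe-west4-a"
echo gcloud compute instances stop "${INSTANCE}" --zone="${ZONE}"
echo gcloud compute instances detach-disk "${INSTANCE}" \
  --disk=hana-vm-data00001-hdx --zone="${ZONE}"
echo gcloud compute instances detach-disk "${INSTANCE}" \
  --disk=hana-vm-log00001-hdx --zone="${ZONE}"
echo gcloud compute instances attach-disk "${INSTANCE}" \
  --disk=hana-vm-data00001 --zone="${ZONE}"
echo gcloud compute instances attach-disk "${INSTANCE}" \
  --disk=hana-vm-log00001 --zone="${ZONE}"
echo gcloud compute instances start "${INSTANCE}" --zone="${ZONE}"
```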

Changing the disk settings

You can change the provisioned IOPS or throughput, or increase the size, of Hyperdisk volumes only once every 4 hours. If you attempt to modify the disk again before 4 hours have passed, then you receive a rate-limit error message such as Cannot update provisioned throughput due to being rate limited. To resolve the error, wait 4 hours after your last modification before attempting to modify the disk again.

Use this procedure only in emergencies when you can't wait for 4 hours to adjust the disk size, provisioned IOPS, or throughput of the Hyperdisk volumes.

To change the disk settings, perform the following steps:

  1. Stop your SAP HANA instance by running one of the following commands:

    • HDB stop
    • sapcontrol -nr INSTANCE_NUMBER -function StopSystem HDB

    Replace INSTANCE_NUMBER with the instance number for your SAP HANA system.

    For more information, see Starting and Stopping SAP HANA Systems.

  2. Create a snapshot or image of your existing disk:

    Snapshot-based backup

      gcloud compute snapshots create SNAPSHOT_NAME \
          --project=PROJECT_NAME \
          --source-disk=SOURCE_DISK_NAME \
          --source-disk-zone=ZONE \
          --storage-location=LOCATION
    

    Replace the following:

    • SNAPSHOT_NAME: name of the snapshot that you want to create.
    • PROJECT_NAME: the name of your Google Cloud project.
    • SOURCE_DISK_NAME: the source disk used to create the snapshot.
    • ZONE: zone of the source disk to operate on.
    • LOCATION: Cloud Storage location, either regional or multi-regional, where snapshot content is to be stored.

      For more information, see Create and manage disk snapshots.

    Image-based backup

      gcloud compute images create IMAGE_NAME \
          --project=PROJECT_NAME \
          --source-disk=SOURCE_DISK_NAME \
          --source-disk-zone=ZONE \
          --storage-location=LOCATION
    

    Replace the following:

    • IMAGE_NAME: name of the disk image that you want to create.
    • PROJECT_NAME: the name of your Google Cloud project.
    • SOURCE_DISK_NAME: the source disk used to create the image.
    • ZONE: zone of the source disk to operate on.
    • LOCATION: Cloud Storage location, either regional or multi-regional, where image content is to be stored.

      For more information, see Create custom images.

  3. Create a new disk from the snapshot or image.

    For Hyperdisk volumes, make sure to specify the disk size, IOPS, and throughput to meet your workload requirements. For more information about provisioning IOPS and throughput for Hyperdisk, see About IOPS and throughput provisioning for Hyperdisk.

    From a snapshot

      gcloud compute disks create NEW_DISK_NAME \
          --project=PROJECT_NAME \
          --type=DISK_TYPE \
          --size=DISK_SIZE \
          --zone=ZONE \
          --source-snapshot=SOURCE_SNAPSHOT_NAME \
          --provisioned-iops=IOPS \
          --provisioned-throughput=THROUGHPUT
    

    Replace the following:

    • NEW_DISK_NAME: name of the disk that you want to create.
    • PROJECT_NAME: the name of your Google Cloud project.
    • DISK_TYPE: the type of disk to create.
    • DISK_SIZE: size of the disk.
    • ZONE: zone of the disks to create.
    • SOURCE_SNAPSHOT_NAME: the source snapshot used to create the disks.
    • IOPS: provisioned IOPS of disk to create.
    • THROUGHPUT: provisioned throughput of disk to create.

    From an image

        gcloud compute disks create NEW_DISK_NAME \
            --project=PROJECT_NAME \
            --type=DISK_TYPE \
            --size=DISK_SIZE \
            --zone=ZONE \
            --image=SOURCE_IMAGE_NAME \
            --image-project=IMAGE_PROJECT_NAME \
            --provisioned-iops=IOPS \
            --provisioned-throughput=THROUGHPUT
    

    Replace the following:

    • NEW_DISK_NAME: name of the disk that you want to create.
    • PROJECT_NAME: the name of your Google Cloud project.
    • DISK_TYPE: the type of disk to create.
    • DISK_SIZE: size of the disk.
    • ZONE: zone of the disks to create.
    • SOURCE_IMAGE_NAME: the source image to apply to the disks being created.
    • IMAGE_PROJECT_NAME: the Google Cloud project against which all image and image family references are going to be resolved.
    • IOPS: provisioned IOPS of disk to create.
    • THROUGHPUT: provisioned throughput of disk to create.

    For more information, see gcloud compute disks create.

  4. Detach the existing disk from your SAP HANA system:

    gcloud compute instances detach-disk INSTANCE_NAME \
        --disk OLD_DISK_NAME \
        --zone ZONE \
        --project PROJECT_NAME
    

    Replace the following:

    • INSTANCE_NAME: name of the instance to operate on.
    • OLD_DISK_NAME: the disk to detach by its resource name.
    • ZONE: zone of the instance to operate on.
    • PROJECT_NAME: the name of your Google Cloud project.

    For more information, see gcloud compute instances detach-disk.

  5. Attach the new disk to your SAP HANA system:

    gcloud compute instances attach-disk INSTANCE_NAME \
        --disk NEW_DISK_NAME \
        --zone ZONE \
        --project PROJECT_NAME
    

    Replace the following:

    • INSTANCE_NAME: name of the instance to operate on.
    • NEW_DISK_NAME: the name of the disk to attach to the instance.
    • ZONE: zone of the instance to operate on.
    • PROJECT_NAME: the name of your Google Cloud project.

    For more information, see gcloud compute instances attach-disk.

  6. Validate that the mount points are attached correctly:

      lsblk
    

    You should see output similar to the following:

        NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
        ...
        sdd                       8:48   0    1T  0 disk
        └─vg_hana_shared-shared 254:0    0 1024G  0 lvm  /hana/shared
        sde                       8:64   0   32G  0 disk
        └─vg_hana_usrsap-usrsap 254:3    0   32G  0 lvm  /usr/sap
        sdf                       8:80   0 13.8T  0 disk
        └─vg_hana_data-data     254:1    0 13.8T  0 lvm  /hana/data
        sdg                       8:96   0  512G  0 disk
        └─vg_hana_log-log       254:2    0  512G  0 lvm  /hana/log
    
  7. Start your SAP HANA instance by running one of the following commands:

    • HDB start
    • sapcontrol -nr INSTANCE_NUMBER -function StartSystem HDB

    Replace INSTANCE_NUMBER with the instance number for your SAP HANA system.

    For more information, see Starting and Stopping SAP HANA Systems.

  8. Validate the disk size, IOPS, and throughput of your new Hyperdisk volume:

    gcloud compute disks describe DISK_NAME \
        --zone ZONE \
        --project PROJECT_NAME
    

    Replace the following:

    • DISK_NAME: name of the disk to describe.
    • ZONE: zone of the disk to describe.
    • PROJECT_NAME: the name of your Google Cloud project.

    For more information, see gcloud compute disks describe.
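To check just the fields you changed, you can add a --format filter to the describe command. The disk name and zone here are hypothetical, and the leading `echo` makes this a dry run:

```shell
#!/bin/sh
# Dry-run sketch: print only the size, provisioned IOPS, and
# provisioned throughput of a Hyperdisk volume.
# DISK and ZONE are hypothetical; remove "echo" to execute.
DISK="hana-vm-data00001-hdx"
ZONE="europe-west4-a"
echo gcloud compute disks describe "${DISK}" --zone="${ZONE}" \
  --format="value(sizeGb,provisionedIops,provisionedThroughput)"
```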

Cloning your SAP HANA system

You can create snapshots of an existing SAP HANA system on Google Cloud to create an exact clone of the system.

To clone a single-host SAP HANA system:

  1. Create a snapshot of your data and backup disks.

  2. Create new disks using the snapshots.

  3. In the Google Cloud console, go to the VM Instances page.

  4. Click the instance to clone to open the instance detail page, and then click Clone.

  5. Attach the disks that were created from the snapshots.

To clone a multi-host SAP HANA system:

  1. Provision a new SAP HANA system with the same configuration as the SAP HANA system you want to clone.

  2. Perform a data backup of the original system.

  3. Restore the backup of the original system into the new system.

Installing and updating the gcloud CLI

After a VM is deployed for SAP HANA and the operating system is installed, an up-to-date Google Cloud CLI is required for various purposes, such as transferring files to and from Cloud Storage, interacting with network services, and so forth.

If you follow the instructions in the SAP HANA deployment guide, then the gcloud CLI is installed automatically for you.

However, if you bring your own operating system to Google Cloud as a custom image or you are using an older public image provided by Google Cloud, then you might need to install or update the gcloud CLI yourself.

To check if the gcloud CLI is installed and whether updates are available, open a terminal or command prompt and enter:

 gcloud version

If the command is not recognized, the gcloud CLI is not installed.
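A quick way to test for the gcloud CLI in a script is to check whether the binary is on the PATH; a minimal sketch:

```shell
#!/bin/sh
# Check whether the gcloud CLI is available on this system.
if command -v gcloud >/dev/null 2>&1; then
  echo "gcloud CLI found at: $(command -v gcloud)"
else
  echo "gcloud CLI not found"
fi
```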

To install the gcloud CLI, follow the instructions in Installing the gcloud CLI.

To replace version 140 or earlier of the SLES-integrated gcloud CLI:

  1. Log into the VM by using ssh.

  2. Switch to the super user:

     sudo su
    
  3. Enter the following commands:

     bash <(curl -s https://dl.google.com/dl/cloudsdk/channels/rapid/install_google_cloud_sdk.bash) --disable-prompts --install-dir=/usr/local
     update-alternatives --install /usr/bin/gsutil gsutil /usr/local/google-cloud-sdk/bin/gsutil 1 --force
     update-alternatives --install /usr/bin/gcloud gcloud /usr/local/google-cloud-sdk/bin/gcloud 1 --force
     gcloud --quiet compute instances list
    

Enabling SAP HANA Fast Restart

Google Cloud strongly recommends enabling SAP HANA Fast Restart for each instance of SAP HANA, especially for larger instances. SAP HANA Fast Restart reduces restart time in the event that SAP HANA terminates, but the operating system remains running.

As configured by the automation scripts that Google Cloud provides, the operating system and kernel settings already support SAP HANA Fast Restart. You need to define the tmpfs file system and configure SAP HANA.

To define the tmpfs file system and configure SAP HANA, you can follow the manual steps in this guide or use the automation script that Google Cloud provides to enable SAP HANA Fast Restart.

For the complete authoritative instructions for SAP HANA Fast Restart, see the SAP HANA Fast Restart Option documentation.

Manual steps

Configure the tmpfs file system

After the host VMs and the base SAP HANA systems are successfully deployed, you need to create and mount directories for the NUMA nodes in the tmpfs file system.

Display the NUMA topology of your VM

Before you can map the required tmpfs file system, you need to know how many NUMA nodes your VM has. To display the available NUMA nodes on a Compute Engine VM, enter the following command:

lscpu | grep NUMA

For example, an m2-ultramem-208 VM type has four NUMA nodes, numbered 0-3, as shown in the following example:

NUMA node(s):        4
NUMA node0 CPU(s):   0-25,104-129
NUMA node1 CPU(s):   26-51,130-155
NUMA node2 CPU(s):   52-77,156-181
NUMA node3 CPU(s):   78-103,182-207
Create the NUMA node directories

Create a directory for each NUMA node in your VM and set the permissions.

For example, for four NUMA nodes that are numbered 0-3:

mkdir -pv /hana/tmpfs{0..3}/SID
chown -R SID_LCadm:sapsys /hana/tmpfs*/SID
chmod 777 -R /hana/tmpfs*/SID
Mount the NUMA node directories to tmpfs

Mount the tmpfs file system directories and specify a NUMA node preference for each with mpol=prefer:

In the following commands, replace SID with the SID of your SAP HANA system, using uppercase letters:

mount tmpfsSID0 -t tmpfs -o mpol=prefer:0 /hana/tmpfs0/SID
mount tmpfsSID1 -t tmpfs -o mpol=prefer:1 /hana/tmpfs1/SID
mount tmpfsSID2 -t tmpfs -o mpol=prefer:2 /hana/tmpfs2/SID
mount tmpfsSID3 -t tmpfs -o mpol=prefer:3 /hana/tmpfs3/SID
Update /etc/fstab

To ensure that the mount points are available after an operating system reboot, add entries into the file system table, /etc/fstab:

tmpfsSID0 /hana/tmpfs0/SID tmpfs rw,relatime,mpol=prefer:0
tmpfsSID1 /hana/tmpfs1/SID tmpfs rw,relatime,mpol=prefer:1
tmpfsSID2 /hana/tmpfs2/SID tmpfs rw,relatime,mpol=prefer:2
tmpfsSID3 /hana/tmpfs3/SID tmpfs rw,relatime,mpol=prefer:3

Optional: set limits on memory usage

The tmpfs file system can grow and shrink dynamically.

To limit the memory used by the tmpfs file system, you can set a size limit for a NUMA node volume with the size option. For example:

mount tmpfsSID0 -t tmpfs -o mpol=prefer:0,size=250G /hana/tmpfs0/SID

You can also limit overall tmpfs memory usage for all NUMA nodes for a given SAP HANA instance and a given server node by setting the persistent_memory_global_allocation_limit parameter in the [memorymanager] section of the global.ini file.
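For example, a global.ini fragment that caps the total persistent memory allocation might look like the following. The parameter name comes from the text above; the value shown is purely illustrative, so check the SAP documentation for the expected unit and appropriate sizing:

```ini
[memorymanager]
persistent_memory_global_allocation_limit = 716800
```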

SAP HANA configuration for Fast Restart

To configure SAP HANA for Fast Restart, update the global.ini file and specify the tables to store in persistent memory.

Update the [persistence] section in the global.ini file

Configure the [persistence] section in the SAP HANA global.ini file to reference the tmpfs locations. Separate each tmpfs location with a semicolon:

[persistence]
basepath_datavolumes = /hana/data
basepath_logvolumes = /hana/log
basepath_persistent_memory_volumes = /hana/tmpfs0/SID;/hana/tmpfs1/SID;/hana/tmpfs2/SID;/hana/tmpfs3/SID

The preceding example specifies four memory volumes for four NUMA nodes, which corresponds to the m2-ultramem-208 VM type. If you were running on an m2-ultramem-416 VM, then you would need to configure eight memory volumes (0..7).

Restart SAP HANA after modifying the global.ini file.

SAP HANA can now use the tmpfs location as persistent memory space.

Specify the tables to store in persistent memory

Specify specific column tables or partitions to store in persistent memory.

For example, to turn on persistent memory for an existing table, execute the SQL query:

ALTER TABLE exampletable PERSISTENT MEMORY ON IMMEDIATE CASCADE;

To change the default for new tables, add the table_default parameter in the [persistent_memory] section of the indexserver.ini file. For example:

[persistent_memory]
table_default = ON

For more information about how to control persistent memory for columns and tables, and about the monitoring views that provide detailed information, see SAP HANA Persistent Memory.

Automated steps

The automation script that Google Cloud provides to enable SAP HANA Fast Restart makes changes to directories /hana/tmpfs*, file /etc/fstab, and SAP HANA configuration. When you run the script, you might need to perform additional steps depending on whether this is the initial deployment of your SAP HANA system or you are resizing your machine to a different NUMA size.

For the initial deployment of your SAP HANA system, or when resizing the machine to increase the number of NUMA nodes, make sure that SAP HANA is running during the execution of the automation script that Google Cloud provides to enable SAP HANA Fast Restart.

When you resize your machine to decrease the number of NUMA nodes, make sure that SAP HANA is stopped during the execution of the automation script that Google Cloud provides to enable SAP HANA Fast Restart. After the script is executed, you need to manually update the SAP HANA configuration to complete the SAP HANA Fast Restart setup. For more information, see SAP HANA configuration for Fast Restart.

To enable SAP HANA Fast Restart, follow these steps:

  1. Establish an SSH connection with your host VM.

  2. Switch to root:

    sudo su -

  3. Download the sap_lib_hdbfr.sh script:

    wget https://storage.googleapis.com/cloudsapdeploy/terraform/latest/terraform/lib/sap_lib_hdbfr.sh
  4. Make the file executable:

    chmod +x sap_lib_hdbfr.sh
  5. Verify that the script has no errors:

    vi sap_lib_hdbfr.sh
    ./sap_lib_hdbfr.sh -help

    If the command returns an error, contact Cloud Customer Care. For more information about contacting Customer Care, see Getting support for SAP on Google Cloud.

  6. Run the script after replacing the SAP HANA system ID (SID) and the password for the SYSTEM user of the SAP HANA database. To provide the password securely, we recommend that you use a secret in Secret Manager.

    Run the script by using the name of a secret in Secret Manager. This secret must exist in the Google Cloud project that contains your host VM instance.

    sudo ./sap_lib_hdbfr.sh -h 'SID' -s SECRET_NAME 

    Replace the following:

    • SID: specify the SID with uppercase letters. For example, AHA.
    • SECRET_NAME: specify the name of the secret that corresponds to the password for the SYSTEM user of the SAP HANA database. This secret must exist in the Google Cloud project that contains your host VM instance.

    Alternatively, you can run the script using a plain-text password. This is not recommended, because the password is recorded in the command-line history of your VM. If you do use a plain-text password, then change your password after SAP HANA Fast Restart is enabled.

    sudo ./sap_lib_hdbfr.sh -h 'SID' -p 'PASSWORD'

    Replace the following:

    • SID: specify the SID with uppercase letters. For example, AHA.
    • PASSWORD: specify the password for the SYSTEM user of the SAP HANA database.

For a successful initial run, you should see an output similar to the following:

INFO - Script is running in standalone mode
ls: cannot access '/hana/tmpfs*': No such file or directory
INFO - Setting up HANA Fast Restart for system 'TST/00'.
INFO - Number of NUMA nodes is 2
INFO - Number of directories /hana/tmpfs* is 0
INFO - HANA version 2.57
INFO - No directories /hana/tmpfs* exist. Assuming initial setup.
INFO - Creating 2 directories /hana/tmpfs* and mounting them
INFO - Adding /hana/tmpfs* entries to /etc/fstab. Copy is in /etc/fstab.20220625_030839
INFO - Updating the HANA configuration.
INFO - Running command: select * from dummy
DUMMY
"X"
1 row selected (overall time 4124 usec; server time 130 usec)

INFO - Running command: ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_persistent_memory_volumes') = '/hana/tmpfs0/TST;/hana/tmpfs1/TST;'
0 rows affected (overall time 3570 usec; server time 2239 usec)

INFO - Running command: ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistent_memory', 'table_unload_action') = 'retain';
0 rows affected (overall time 4308 usec; server time 2441 usec)

INFO - Running command: ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('persistent_memory', 'table_default') = 'ON';
0 rows affected (overall time 3422 usec; server time 2152 usec)

Setting up your SAP support channel with SAProuter

If you need to allow an SAP support engineer to access your SAP HANA systems on Google Cloud, you can do so using SAProuter. Follow these steps:

  1. Launch the Compute Engine VM instance on which you will install the SAProuter software, and assign an external IP address so that the instance has internet access.

  2. Create a new, static external IP address and then assign this IP address to the instance.

  3. Create and configure a SAProuter-specific firewall rule in your network. In this rule, allow only the required inbound and outbound access to the SAP support network for the SAProuter instance.

    Limit the inbound and outbound access to a specific IP address that SAP provides for you to connect to, along with TCP port 3299. Add a target tag to your firewall rule and enter your instance name. This ensures that the firewall rule applies only to the new instance. See the firewall rules documentation for additional details about creating and configuring firewall rules.

  4. Install the SAProuter software, following SAP Note 1628296, and create a saprouttab file that allows access from SAP to your SAP HANA systems on Google Cloud.

  5. Set up the connection with SAP. For your internet connection, use Secure Network Communication. For more information, see SAP Remote Support – Help.
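As a sketch, the firewall rule from step 3 could also be created from the command line. All names and the source range below are placeholders; the command is built as a string so that you can review it before running it:

```shell
# Build the firewall-rule command (sketch; the rule name, network, tag, and
# source range are placeholders -- substitute the IP address that SAP provides).
SAP_SOURCE_RANGE="203.0.113.10/32"
CMD="gcloud compute firewall-rules create allow-saprouter \
  --network=example-network \
  --allow=tcp:3299 \
  --source-ranges=${SAP_SOURCE_RANGE} \
  --target-tags=saprouter-vm"
echo "$CMD"
```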

Configuring your network

You provision your SAP HANA system by using VMs in a Google Cloud virtual network. Google Cloud uses state-of-the-art, software-defined networking and distributed-systems technologies to host and deliver your services around the world.

For SAP HANA, create a non-default network with non-overlapping CIDR IP address ranges for each subnetwork in the network. Note that each subnetwork and its internal IP address ranges are mapped to a single region.

A subnetwork spans all of the zones in the region where it is created. However, when you create a VM instance, you specify a zone and a subnetwork for the VM. For example, you can create one set of instances in subnetwork1 and in zone1 of region1 and another set of instances in subnetwork2 and in zone2 of region1, depending on your needs.

A new network has no firewall rules and hence no network access. You should create firewall rules that open access to your SAP HANA instances based on a minimum privilege model. The firewall rules apply to the entire network and can also be configured to apply to specific target instances by using the tagging mechanism.

Routes are global, not regional, resources that are attached to a single network. User-created routes apply to all instances in a network. This means you can add a route that forwards traffic from instance to instance within the same network, even across subnetworks, without requiring external IP addresses.

For your SAP HANA instance, launch the instance with no external IP address and configure another VM as a NAT gateway for external access. This configuration requires you to add your NAT gateway as a route for your SAP HANA instance. This procedure is described in the deployment guide.

Security

The following sections discuss security operations.

Minimum privilege model

Your first line of defense is to restrict who can reach the instance by using firewalls. By creating firewall rules, you can restrict all traffic to a network or target machines on a given set of ports to specific source IP addresses. You should follow the minimum-privilege model to restrict access to the specific IP addresses, protocols, and ports that need access. For example, you should always set up a bastion host, and allow SSH into your SAP HANA system only from that host.

Configuration changes

You should configure your SAP HANA system and the operating system with the recommended security settings. For example, make sure that only the relevant network ports are open for access, harden the operating system on which you run SAP HANA, and so on.

Refer to the following SAP notes (SAP user account required):

Disabling unneeded SAP HANA Services

If you do not require SAP HANA Extended Application Services (SAP HANA XS), disable the service. Refer to SAP note 1697613: Removing the SAP HANA XS Classic Engine service from the topology.

After the service has been disabled, remove all the TCP ports that were opened for the service. In Google Cloud, this means editing your firewall rules for your network to remove these ports from the access list.

Audit logging

Cloud Audit Logs consists of two log streams, admin activity and data access, both of which are automatically generated by Google Cloud. These can help you answer the questions, "Who did what, where, and when?" in your Google Cloud project.

Admin activity logs contain log entries for API calls or administrative actions that modify the configuration or metadata of a service or project. This log is always enabled and is visible by all project members.

Data access logs contain log entries for API calls that create, modify, or read user-provided data managed by a service, such as data stored in a database service. This type of logging is disabled by default in your project; after you enable it, the logs are accessible to you through Cloud Logging or through your activity feed.
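For example, you can review recent admin activity entries from the command line. The following sketch builds a gcloud logging read command for review; the filter uses the standard audit-log name for Admin Activity logs:

```shell
# Build a command that reads recent Admin Activity audit log entries
# (sketch; run the printed command in your own project).
CMD='gcloud logging read "logName:cloudaudit.googleapis.com%2Factivity" --limit=10 --freshness=1d'
echo "$CMD"
```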

Securing a Cloud Storage bucket

If you use Cloud Storage to host your backups for your data and log, make sure you use TLS (HTTPS) while sending data to Cloud Storage from your instances to protect data in transit. Cloud Storage automatically encrypts data at rest. You can specify your own encryption keys if you have your own key-management system.

Refer to the following additional security resources for your SAP HANA environment on Google Cloud:

High availability for SAP HANA on Google Cloud

Google Cloud provides a variety of options for ensuring high availability for your SAP HANA system, including the Compute Engine live migration and automatic restart features. These features, along with the high monthly uptime percentage of Compute Engine VMs, might make paying for and maintaining standby systems unnecessary.

However, if required, you can deploy a multi-host scale-out system that includes standby hosts for SAP HANA Host Auto-failover, or you can deploy a scale-up system with a standby SAP HANA instance in a high-availability Linux cluster.

For more information about the high availability options for SAP HANA on Google Cloud, see the SAP HANA high-availability planning guide.

Enable the SAP HANA HA/DR provider hook

SUSE recommends that you enable the SAP HANA HA/DR provider hooks, which allow SAP HANA to send out notifications for certain events and improve failure detection. The SAP HANA HA/DR provider hooks require SAP HANA 2.0 SPS 03 or a later version.

On both the primary and secondary site, complete the following steps:

  1. As root or SID_LCadm, open the global.ini file for editing:

    > vi /hana/shared/SID/global/hdb/custom/config/global.ini
  2. Add the following definitions to the global.ini file:

    Scale-up

    [ha_dr_provider_SAPHanaSR]
    provider = SAPHanaSR
    path = /usr/share/SAPHanaSR/
    execution_order = 1
    
    [ha_dr_provider_suschksrv]
    provider = susChkSrv
    path = /usr/share/SAPHanaSR/
    execution_order = 3
    action_on_lost = stop
    
    [trace]
    ha_dr_saphanasr = info

    Scale-out

    [ha_dr_provider_saphanasrmultitarget]
    provider = SAPHanaSrMultiTarget
    path = /usr/share/SAPHanaSR-ScaleOut/
    execution_order = 1
    
    [ha_dr_provider_sustkover]
    provider = susTkOver
    path = /usr/share/SAPHanaSR-ScaleOut/
    execution_order = 2
    sustkover_timeout = 30
    
    [ha_dr_provider_suschksrv]
    provider = susChkSrv
    path = /usr/share/SAPHanaSR-ScaleOut/
    execution_order = 3
    action_on_lost = stop
    
    [trace]
    ha_dr_saphanasrmultitarget = info
    ha_dr_sustkover = info

  3. As root, create a custom configuration file in the /etc/sudoers.d directory by running the following command. This new configuration file allows the SID_LCadm user to access the cluster node attributes when the srConnectionChanged() hook method is called.

    > sudo visudo -f /etc/sudoers.d/SAPHanaSR
  4. In the /etc/sudoers.d/SAPHanaSR file, add the following text:

    Scale-up

    Replace the following:

    • SITE_A: the site name of the primary SAP HANA server
    • SITE_B: the site name of the secondary SAP HANA server
    • SID_LC: the SID, specified using lowercase letters
    To view the site names, you can execute the command crm_mon -A1 | grep site, as the root user, on either the SAP HANA primary server or the secondary server.
    Cmnd_Alias SOK_SITEA = /usr/sbin/crm_attribute -n hana_SID_LC_site_srHook_SITE_A -v SOK -t crm_config -s SAPHanaSR
    Cmnd_Alias SFAIL_SITEA = /usr/sbin/crm_attribute -n hana_SID_LC_site_srHook_SITE_A -v SFAIL -t crm_config -s SAPHanaSR
    Cmnd_Alias SOK_SITEB = /usr/sbin/crm_attribute -n hana_SID_LC_site_srHook_SITE_B -v SOK -t crm_config -s SAPHanaSR
    Cmnd_Alias SFAIL_SITEB = /usr/sbin/crm_attribute -n hana_SID_LC_site_srHook_SITE_B -v SFAIL -t crm_config -s SAPHanaSR
    SID_LCadm ALL=(ALL) NOPASSWD: SOK_SITEA, SFAIL_SITEA, SOK_SITEB, SFAIL_SITEB

    Scale-out

    Replace SID_LC with the SID in lowercase letters.

    SID_LCadm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_SID_LC_site_srHook_*
    SID_LCadm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_SID_LC_gsh *
    SID_LCadm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=SID_LC *

  5. In your /etc/sudoers file, make sure that the following text is included:

    • For SLES for SAP 15 SP3 and higher:

      @includedir /etc/sudoers.d

    • For versions up to SLES for SAP 15 SP2:

      #includedir /etc/sudoers.d

      Note that the # in this text is part of the syntax and does not mean that the line is a comment.

  6. Set Pacemaker into maintenance mode:

    > crm configure property maintenance-mode=true

  7. Apply the changes:

    HANA SPS4 and above

    As SID_LCadm, load the changes on both the primary and secondary master SAP HANA nodes:

    > hdbnsutil -reloadHADRProviders

    Use one of the following options to avoid or minimize the downtime of your primary site:

    Option 1

    As SID_LCadm, restart the secondary site:

    > HDB restart

    Option 2

    Perform a controlled failover from the primary to the secondary site.

    HANA SPS3

    As SID_LCadm, restart both the primary and secondary SAP HANA systems:

    > HDB restart

  8. Unset Pacemaker from maintenance mode:

    > crm configure property maintenance-mode=false

  9. After you complete the cluster configuration for SAP HANA, you can verify that the hook functions correctly during a failover test as described in Troubleshooting the SAPHanaSR python hook and HA cluster takeover takes too long on HANA indexserver failure.

Disaster recovery

The SAP HANA system provides several high availability features to make sure that your SAP HANA database can withstand failures at the software or infrastructure level. Among these features are SAP HANA system replication and SAP HANA backups, both of which Google Cloud supports.

For more information about SAP HANA backups, see Backup and recovery.

For more information about system replication, see the SAP HANA disaster recovery planning guide.

Backup and recovery

Backups are vital for protecting your system of record (your database). Because SAP HANA is an in-memory database, creating backups regularly and implementing a proper backup strategy help you recover your SAP HANA database after data corruption or data loss caused by an unplanned outage or a failure in your infrastructure. SAP HANA provides built-in backup and recovery features to help you do this. You can use Google Cloud services such as Cloud Storage as the backup destination for SAP HANA backups.

You can also enable the Backint feature of Google Cloud's Agent for SAP so that you can use Cloud Storage directly for backups and recoveries.

For information about backup and recovery recommendations for SAP HANA systems running on Compute Engine bare metal instances such as X4, see Backup and recovery for SAP HANA on bare metal instances.

This document assumes you are familiar with SAP HANA backup and recovery, along with the following SAP service Notes:

Using Compute Engine Persistent Disk volumes and Cloud Storage for backups

If you followed the Terraform based deployment instructions provided by Google Cloud to deploy your SAP HANA system, then you have an SAP HANA installation with a /hanabackup directory hosted on a Balanced Persistent Disk volume.

To create your online database backups in the /hanabackup directory, you use standard SAP tools such as SAP HANA Studio, SAP HANA Cockpit, SAP ABAP transaction DB13, or SAP HANA SQL statements. You then save the completed backup by uploading it to a Cloud Storage bucket, from which you can download it when you need to recover your SAP HANA system.

Using Compute Engine to create backups and disk snapshots

You can use Compute Engine for SAP HANA backups, and you also have the option of backing up the entire disk hosting your SAP HANA data and log volumes using standard disk snapshots.

If you followed the instructions in the deployment guide, then you have an SAP HANA installation with a /hanabackup directory for your online database backups. You can take disk snapshots of the /hanabackup volume to maintain point-in-time copies of your SAP HANA data and log backups.

An advantage of standard disk snapshots is that they are incremental, where each subsequent backup only stores incremental block changes instead of creating an entirely new backup. Compute Engine redundantly stores multiple copies of each snapshot across multiple locations with automatic checksums to ensure the integrity of your data.
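For example, a snapshot of the disk that backs your backup volume could be created as follows. The disk name and zone are placeholders; the command is printed for review rather than executed:

```shell
# Build a disk-snapshot command (sketch; substitute your disk name and zone).
SNAP_NAME="hanabackup-$(date +%Y%m%d)"
CMD="gcloud compute disks snapshot example-hanabackup-disk \
  --zone=us-central1-a --snapshot-names=${SNAP_NAME}"
echo "$CMD"
```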

The following is an illustration of incremental backups:

Snapshot diagram

Cloud Storage as your backup destination

Cloud Storage is a good choice to use as your backup destination for SAP HANA because it provides high durability and availability of data.

Cloud Storage is an object store for files of any type or format. It has virtually unlimited storage and you don't have to worry about provisioning it or adding more capacity to it. An object in Cloud Storage consists of file data and its associated metadata, and can be up to 5 TB in size. A Cloud Storage bucket can store any number of objects.

With Cloud Storage, your data is stored in multiple locations, which provides high durability and high availability. When you upload your data to Cloud Storage or copy your data within it, Cloud Storage reports the action as successful only if object redundancy is achieved.

The following table shows the storage options offered by Cloud Storage:

Data read/write frequency The recommended Cloud Storage option
Frequent reads or writes Choose the Standard storage class for databases that are in use, as they might frequently access Cloud Storage for writing and reading backup files.
Infrequent reads or writes Choose Nearline or Coldline storage for infrequently accessed data, such as archived backups that need to be maintained following your organization's retention policy. Nearline is a good choice for backed-up data that you plan to access at most once a month, while Coldline is better for data that has very low probability of being accessed, such as once a year at most.
Archival data Choose Archive storage for your long-term archival data. Archive is a good choice for data that you need to retain a copy of for an extended period of time, but which you don't intend to access more than once a year. For example, use Archive storage for backups that you need to retain for a long term to satisfy regulatory requirements. Consider replacing your tape-based backup solution with Archive.

When you plan your usage of these storage options, start with the frequently accessed tier and age your backup data into the infrequent-access tiers. Backups are generally accessed less often as they get older. The probability of needing a backup that is 3 years old is extremely low, and you can move such a backup into the Archive tier to save on costs. For information about Cloud Storage costs, see Cloud Storage pricing.
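You can automate this aging strategy with an object lifecycle configuration on your backup bucket. The following is a minimal sketch that moves objects to Nearline after 30 days and to Archive after 365 days; the age thresholds are illustrative:

```json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 365}
    }
  ]
}
```

You could apply such a configuration with gsutil lifecycle set lifecycle.json gs://BUCKET_NAME, replacing BUCKET_NAME with your backup bucket.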

Cloud Storage compared to tape backup

The conventional, on-premises backup destination is tape. Cloud Storage has many benefits over tape, including the ability to automatically store backups "offsite" from the source system, because data in Cloud Storage is replicated across multiple facilities. This also means that the backups stored in Cloud Storage are highly available.

Another key difference is the speed of restoring backups when you need to use them. If you need to create a new SAP HANA system from a backup, or restore an existing system from a backup, then Cloud Storage provides faster access to your data, which helps you build the system faster.

Backint feature of Google Cloud's Agent for SAP

You can use Cloud Storage directly for backups and recoveries for both on-premises and cloud installations by using the SAP-certified Backint feature of Google Cloud's Agent for SAP.

For more information about this feature, see Backint based backup and recovery for SAP HANA.

Back up and recover SAP HANA by using Backint

The following sections provide information about how you can back up and recover SAP HANA by using the Backint feature of Google Cloud's Agent for SAP.

Triggering data and delta backups

To trigger a backup for the SAP HANA data volume and send it to Cloud Storage using the Backint feature of Google Cloud's Agent for SAP, you can use SAP HANA Studio, SAP HANA Cockpit, SAP HANA SQL, or the DBA Cockpit.

The following are SAP HANA SQL statements for triggering data backups:

  • To create a full backup for the system database:

    BACKUP DATA USING BACKINT ('BACKUP_NAME');

    Replace BACKUP_NAME with the name that you want to set for the backup.

  • To create a full backup for a tenant database:

    BACKUP DATA FOR TENANT_SID USING BACKINT ('BACKUP_NAME');

    Replace TENANT_SID with the SID of the tenant database.

  • To create differential and incremental backups:

    BACKUP DATA BACKUP_TYPE USING BACKINT ('BACKUP_NAME');
    BACKUP DATA BACKUP_TYPE FOR TENANT_SID USING BACKINT ('BACKUP_NAME');
    

    Replace BACKUP_TYPE with DIFFERENTIAL or INCREMENTAL, depending on the type of backup that you want to create.

There are multiple options that you can use while triggering data backups. For information about these options, see the SAP HANA SQL reference guide BACKUP DATA Statement (Backup and Recovery).

For more information about data and delta backups, see the SAP documents Data Backups and Delta Backups.

Triggering log backups

To trigger a backup for the SAP HANA log volume and send it to Cloud Storage using the Backint feature of Google Cloud's Agent for SAP, complete the following steps:

  1. Create a full database backup. For instructions, see the SAP documentation for your SAP HANA version.
  2. In the SAP HANA global.ini file, set the parameter catalog_backup_using_backint to yes.

Make sure that the log mode for your SAP HANA system is normal, which is the default value. If the log mode is set to overwrite, then the SAP HANA database disables the creation of log backups.
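As a sketch, the resulting global.ini fragment might look like the following; verify the section name against the SAP documentation for your SAP HANA version:

```ini
[backup]
catalog_backup_using_backint = yes
```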

For more information about log backups, see the SAP document Log Backups.

Querying the backup catalog

The SAP HANA backup catalog is a vital part of the backup and recovery operations. It contains information about the backups created for the SAP HANA database.

To query the backup catalog for information about backups of a tenant database, complete the following steps:

  1. Take the tenant database offline.
  2. On the system database, run the following SQL statement:

    BACKUP COMPLETE LIST DATA FOR TENANT_SID;

    Alternatively, to query for a specific point in time, run the following SQL statement:

    BACKUP LIST DATA FOR TENANT_SID UNTIL TIMESTAMP 'YYYY-MM-DD';

    The statement creates the strategyOutput.xml file in the following directory: /usr/sap/SID/HDBINSTANCE_NUMBER/HOST_NAME/trace/DB_TENANT_SID.

For information about the BACKUP LIST DATA statement, see the SAP HANA SQL reference guide BACKUP DATA Statement (Backup and Recovery). For information about the backup catalog, see the SAP document Backup Catalog.

Recovering a database

When you perform a recovery using a multi-streamed data backup, SAP HANA uses the same number of channels that were used when the backup was created. For more information, see the SAP document Prerequisites: Recovery Using Multistreamed Backups.

To restore an SAP HANA database backup that you created using the Backint feature of Google Cloud's Agent for SAP, SAP HANA provides the RECOVER DATA and RECOVER DATABASE SQL statements.

Both SQL statements restore backups from the Cloud Storage bucket that you specified for the bucket parameter in your PARAMETERS.json file, unless you've specified a bucket for the recover_bucket parameter.

The following are sample SQL statements for recovering an SAP HANA database using a backup that you created using the Backint feature of Google Cloud's Agent for SAP:

  • To recover a tenant database by specifying the backup filename:

    RECOVER DATA FOR TENANT_SID USING BACKINT('BACKUP_NAME') CLEAR LOG;
  • To recover a tenant database by specifying the backup ID:

    RECOVER DATA FOR TENANT_SID USING BACKUP_ID BACKUP_ID CLEAR LOG;

    Replace BACKUP_ID with the ID of the required backup.

  • To recover a tenant database by specifying the backup ID when you need to use the backup of the SAP HANA backup catalog, which is stored in your Cloud Storage bucket:

    RECOVER DATA FOR TENANT_SID USING BACKUP_ID BACKUP_ID USING CATALOG BACKINT CLEAR LOG;
  • To recover a tenant database to a specific point in time or to a specific log position:

    RECOVER DATABASE FOR TENANT_SID UNTIL TIMESTAMP 'YYYY-MM-DD HH:MM:SS' CHECK ACCESS USING BACKINT;
  • To recover a tenant database using a backup from an external database:

    RECOVER DATABASE FOR TENANT_SID UNTIL TIMESTAMP 'YYYY-MM-DD HH:MM:SS' CLEAR LOG USING SOURCE 'SOURCE_TENANT_SID@SOURCE_SID' USING CATALOG BACKINT CHECK ACCESS USING BACKINT

    Replace the following:

    • SOURCE_TENANT_SID: the SID of the source tenant database
    • SOURCE_SID: the SID of the SAP system where the source tenant database exists

If you need to recover an SAP HANA database when the SAP HANA backup catalog is not available in the backup stored in your Cloud Storage bucket, then follow the instructions given in the SAP Note 3227931 - Recover a HANA DB From Backint Without a HANA Backup Catalog.

Managing identity and access to backups

When you use Cloud Storage or Compute Engine to back up your SAP HANA data, access to those backups is controlled by Identity and Access Management (IAM). This feature gives admins the ability to authorize who can take action on specific resources. IAM provides you with centralized control and visibility for managing all of your Google Cloud resources, including your backups.

IAM also provides a full audit trail of permission authorizations, removals, and delegations, which is surfaced automatically for your admins. This lets you configure policies that monitor access to the data in your backups, allowing you to complete the full access-control cycle for your data. IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.

To grant a principal access to your backups in Cloud Storage:

  1. In the Google Cloud console, go to the IAM & Admin page:

    Go to IAM & Admin

  2. Specify the user to whom you want to grant access, and then assign the role Storage > Storage Object Creator:

    IAM screencap
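Alternatively, you can grant the same role from the command line. The following sketch uses placeholder user and bucket names, and prints the command for review:

```shell
# Grant the Storage Object Creator role on a bucket (sketch; EMAIL and BUCKET
# are placeholders).
EMAIL="user@example.com"
BUCKET="my-hanabackup-bucket"
CMD="gsutil iam ch user:${EMAIL}:roles/storage.objectCreator gs://${BUCKET}"
echo "$CMD"
```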

How to create file system based backups for SAP HANA

SAP HANA systems deployed on Google Cloud by using the deployment guide are configured with a set of Persistent Disk or Hyperdisk volumes that serve as an NFS-mounted backup destination. SAP HANA backups are first stored on these local disks, after which you copy them to Cloud Storage for long-term storage. You can either copy the backups to Cloud Storage manually or schedule the copy by using a crontab entry.
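For example, a crontab entry that schedules the copy nightly at 01:00 might look like the following sketch; gs://my-hanabackup-bucket is a placeholder bucket name:

```
0 1 * * * gsutil -m rsync -r /hanabackup gs://my-hanabackup-bucket
```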

If you are using the Backint feature of Google Cloud's Agent for SAP, then you back up to and recover from a Cloud Storage bucket directly, thereby negating the need for persistent disk storage for backups.

To start or schedule the SAP HANA data backups, you can use SAP HANA Studio, SQL commands, or the DBA Cockpit. Log backups are written automatically unless disabled. The following screenshot shows an example:

Backups screencap

Configuring SAP HANA global.ini

If you followed the deployment guide instructions, then the SAP HANA global.ini configuration file is customized so that database backups are stored in /hanabackup/data/ and automatic log archival files are stored in /hanabackup/log/. The following is an example of how the global.ini file looks:

[persistence]
basepath_datavolumes = /hana/data
basepath_logvolumes = /hana/log
basepath_databackup = /hanabackup/data
basepath_logbackup = /hanabackup/log

[system_information]
usage = production

To customize the global.ini configuration file for the Backint feature of Google Cloud's Agent for SAP, see Configure SAP HANA for the Backint feature.

Notes for scale-out deployments

In a scale-out implementation, a high-availability solution that uses live migration and automatic restart works in the same way as in a single-host setup. The main difference is that the /hana/shared volume is NFS-mounted to all the worker hosts and is mastered on the SAP HANA master host. In the event of a live migration or automatic restart of the master host, the NFS volume is briefly inaccessible. When the master host restarts, the NFS volume resumes functioning on all hosts shortly afterward, and normal operations resume automatically.

The SAP HANA backup volume, /hanabackup, must be available on all hosts during backup and recovery operations. In the event of failure, you must verify that /hanabackup is mounted on all hosts and remount any that are not. When you choose to copy the backup set to another volume or Cloud Storage, run the copy on the master host to achieve better I/O performance and reduce network usage. To simplify the backup and recovery process, you can use Cloud Storage Fuse to mount the Cloud Storage bucket on each host.
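For example, the Cloud Storage FUSE mount on each host could be prepared as follows; the bucket name is a placeholder, and the command is printed for review:

```shell
# Build a gcsfuse mount command for the backup bucket (sketch; BUCKET is a
# placeholder).
BUCKET="my-hanabackup-bucket"
CMD="gcsfuse --implicit-dirs ${BUCKET} /hanabackup"
echo "$CMD"
```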

Scale-out performance is only as good as your data distribution: the better the data is distributed, the better your query performance. This requires that you know your data well, understand how it is consumed, and design table distribution and partitioning accordingly. For more information, see SAP Note 2081591 - FAQ: SAP HANA Table Distribution.

Gcloud Python

Gcloud Python is an idiomatic Python client library that you can use to access Google Cloud services. This guide uses Gcloud Python to copy your SAP HANA database backups to and from Cloud Storage.

If you followed the deployment guide instructions, the Gcloud Python libraries are already available on the Compute Engine instances.

The libraries are open source and allow you to operate on your Cloud Storage bucket to store and retrieve backup data.

To list the available backups, run the following command, which lists the objects in your Cloud Storage bucket:

python 2>/dev/null - <<EOF
from google.cloud import storage

storage_client = storage.Client()
# Replace <bucket_name> with the name of your backup bucket.
bucket = storage_client.get_bucket("<bucket_name>")
for fileblob in bucket.list_blobs():
    print(fileblob.name)
EOF

For complete details about Gcloud Python, see the storage client library reference documentation.

Example backup and restore

The following sections illustrate the procedures that you might follow for typical backup and restore tasks using SAP HANA Studio.

Example backup creation
  1. In the SAP HANA Backup Editor, select Open Backup Wizard.

    Backup Wizard

    1. Select File as the destination type. This backs up the database to files in the specified file system.
    2. Specify the backup destination, /hanabackup/data/SID, and the backup prefix. Replace SID with the system ID of your SAP system.
    3. Click Next.
  2. Click Finish in the confirmation form to start the backup.

  3. When the backup starts, a status window displays the progress of your backup. Wait for the backup to complete.

    Backup Progress

    When the backup is complete, the backup summary displays a Finished message.

  4. Sign in to your SAP HANA system and verify that the backups are available at the expected locations in the file system. For example:

    Backup List1 Backup List2

  5. Push or synchronize the backup files from the /hanabackup file system to Cloud Storage. The following sample Python script pushes the data from /hanabackup/data and /hanabackup/log to the bucket that is used for backups, using object names of the form NODE_NAME/data/YYYY/MM/DD/HH/BACKUP_FILE_NAME or NODE_NAME/log/YYYY/MM/DD/HH/BACKUP_FILE_NAME. This lets you identify backup files by the time at which they were copied. Run this Gcloud Python script from your operating system's bash prompt:

    python 2>/dev/null - <<EOF
    import os
    import socket
    from datetime import datetime
    from google.cloud import storage

    storage_client = storage.Client()
    # Object names include the upload hour so that backups can be identified by time.
    current_hour = datetime.today().strftime('%Y/%m/%d/%H')
    hostname = socket.gethostname()
    bucket = storage_client.get_bucket("hanabackup")
    # Upload the complete data backups.
    for subdir, dirs, files in os.walk('/hanabackup/data/H2D/'):
        for file in files:
            backupfilename = os.path.join(subdir, file)
            if 'COMPLETE_DATA_BACKUP' in backupfilename:
                only_filename = backupfilename.split('/')[-1]
                backup_file = hostname + '/data/' + current_hour + '/' + only_filename
                bucket.blob(backup_file).upload_from_filename(filename=backupfilename)
    # Upload the log backups. SAP HANA prefixes log backup files with log_backup.
    for subdir, dirs, files in os.walk('/hanabackup/log/H2D/'):
        for file in files:
            backupfilename = os.path.join(subdir, file)
            if 'log_backup' in backupfilename:
                only_filename = backupfilename.split('/')[-1]
                backup_file = hostname + '/log/' + current_hour + '/' + only_filename
                bucket.blob(backup_file).upload_from_filename(filename=backupfilename)
    EOF
    
  6. Use either the Gcloud Python libraries or the Google Cloud console to list the backup data.
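The object-naming scheme that the upload script in step 5 uses can be factored into a small helper. The following sketch is illustrative only (the function name and arguments are not part of any Google Cloud library); it builds the NODE_NAME/data-or-log/YYYY/MM/DD/HH/BACKUP_FILE_NAME object name:

```python
# Sketch: compose the Cloud Storage object name used by the backup upload
# script: NODE_NAME/<data|log>/YYYY/MM/DD/HH/BACKUP_FILE_NAME.
from datetime import datetime

def backup_object_name(hostname, kind, local_path, when):
    # kind is 'data' or 'log'; when is the time at which the copy runs.
    hour = when.strftime('%Y/%m/%d/%H')
    filename = local_path.split('/')[-1]
    return '/'.join([hostname, kind, hour, filename])

name = backup_object_name('hana-node-1', 'data',
                          '/hanabackup/data/H2D/COMPLETE_DATA_BACKUP_databackup_0_1',
                          datetime(2024, 5, 17, 9))
print(name)  # hana-node-1/data/2024/05/17/09/COMPLETE_DATA_BACKUP_databackup_0_1
```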

Example restoration of backup
  1. If the backup files are not available in the /hanabackup directory but are available in Cloud Storage, then download the files from Cloud Storage by running the following script from your operating system's bash prompt:

    python - <<EOF
    from google.cloud import storage

    storage_client = storage.Client()
    bucket = storage_client.get_bucket("hanabackup")
    for fileblob in bucket.list_blobs():
        fname = fileblob.name.split('/')[-1]
        # Use 1 GiB chunks when downloading large backup files.
        fileblob.chunk_size = 1 << 30
        if 'log' in fname:
            fileblob.download_to_filename('/hanabackup/log/H2D/' + fname)
        else:
            fileblob.download_to_filename('/hanabackup/data/H2D/' + fname)
    EOF
    
  2. To recover the SAP HANA database, click Backup and Recovery > Recover System:

    Recover system

  3. Click Next.

  4. Specify the location of your backups in your local file system and click Add.

  5. Click Next.

  6. Select Recover without the backup catalog:

    Recover Nocat

  7. Click Next.

  8. Select File as the destination type, then specify the location of the backup files and the correct prefix for your backup. If you followed the Example backup creation procedure, then remember that COMPLETE_DATA_BACKUP was set as the prefix.

  9. Click Next twice.

  10. Click Finish to start the recovery.

  11. When recovery completes, resume normal operations and remove backup files from the /hanabackup/data/SID/* directories.
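Removing old backup files after a successful recovery can also be scripted. The following sketch deletes files older than a retention period from a backup directory; the directory path, the function name, and the seven-day retention value are illustrative assumptions, so adjust them to your own retention policy:

```python
# Sketch: delete backup files older than a retention period.
import os
import time

def purge_old_backups(directory, retention_days=7):
    """Remove files in directory older than retention_days; return their names."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    if not os.path.isdir(directory):
        return removed
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        # Only plain files are removed; subdirectories are left untouched.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed

# Example (illustrative path; substitute your SID):
# purge_old_backups('/hanabackup/data/H2D', retention_days=7)
```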

What's next

You might find the following standard SAP documents helpful:

You might also find the following Google Cloud documents useful: