SAP HANA operations guide

This guide provides instructions for operating SAP HANA systems deployed on Google Cloud Platform (GCP) by following the SAP HANA on GCP deployment guide. Note that this guide is not intended to replace any of the standard SAP documentation.

Administering an SAP HANA system on GCP

This section shows how to perform administrative tasks typically required to operate an SAP HANA system, including information about starting, stopping, and cloning systems.

Starting and stopping instances

You can stop one or more SAP HANA hosts at any time. Stopping an instance shuts down the instance; if the shutdown doesn't complete within 2 minutes, the instance is forced to halt. As a best practice, stop SAP HANA on the instance before you stop the instance itself.

Stopping a VM

Stopping a virtual machine (VM) instance causes Compute Engine to send the ACPI power-off signal to the instance. You are not billed for the Compute Engine instance after the instance is stopped. If you have persistent disks attached to the instance, the disks are not deleted and you will be charged for them.

If the data on the persistent disk is important, you can either keep the disk or create a snapshot of the persistent disk and delete the disk to save on costs. You can create another disk from the snapshot when you need the data again.

To stop an instance:

  1. In the Google Cloud Platform Console, go to the VM Instances page.

  2. Select one or more instances that you want to stop.

  3. At the top of the VM Instances page, click Stop.

For more information, see Stopping an instance.
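
You can also stop an instance from the command line with the gcloud CLI. A minimal sketch, assuming a hypothetical instance named hana-vm-1 in zone us-central1-a:

 # Stop the VM; billing for the instance stops, but attached persistent disks remain.
 gcloud compute instances stop hana-vm-1 --zone=us-central1-a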

Restarting a VM

  1. In the GCP Console, go to the VM Instances page.

  2. Select the instances that you want to restart.

  3. At the top right-hand side of the page, click Start to restart the instances.

For more information, see Restarting an instance.
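
You can also start a stopped instance from the command line; a sketch with the same hypothetical names:

 # Start a stopped VM in its original zone.
 gcloud compute instances start hana-vm-1 --zone=us-central1-a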

Creating a snapshot of SAP HANA

To generate a point-in-time backup of your persistent disk, you can create a snapshot. Compute Engine redundantly stores multiple copies of each snapshot across multiple locations with automatic checksums to ensure the integrity of your data.

To create a snapshot, follow the Compute Engine instructions for creating snapshots. Pay careful attention to the preparation steps before creating a consistent snapshot, such as flushing the disk buffers to disk, to make sure that the snapshot is consistent.
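
For example, after flushing the file system buffers on the VM (for example, with sync), a snapshot of a hypothetical data disk might be created as follows:

 # Create a point-in-time snapshot of the disk.
 gcloud compute disks snapshot hana-data-disk --zone=us-central1-a --snapshot-names=hana-data-snap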

Snapshots are useful for the following use cases:

Provide an easy, software-independent, and cost-effective data backup solution: Back up your data, log, backup, and shared disks with snapshots. Schedule a daily snapshot of these disks for point-in-time backups of your entire dataset. After the first snapshot, subsequent snapshots store only the incremental block changes, which helps save costs.
Migrate to a different storage type: Persistent disks come in two storage types, standard (magnetic) and SSD, which have different cost and performance characteristics. For example, use standard for your backup volume, and use SSD for your log and data volumes, because they require higher performance. To migrate between storage types, snapshot the volume, create a new volume from the snapshot, and select a different storage type.
Migrate SAP HANA to another region or zone: Use snapshots to move your SAP HANA system from one zone to another zone in the same region, or even to another region. Snapshots can be used globally within GCP to create disks in another zone or region. To move to another region or zone, create a snapshot of your disks, including the root disk, and then create VMs in your desired zone or region with disks created from those snapshots.

Cloning your SAP HANA system

You can create snapshots of an existing SAP HANA system on GCP to create an exact clone of the system.

To clone a single-host SAP HANA system:

  1. Create a snapshot of your data and backup disks.

  2. Create new disks using the snapshots. (A command-line sketch of steps 1 and 2 follows this procedure.)

  3. In the Google Cloud Platform Console, go to the VM Instances page.

  4. Click the instance to clone to open the instance detail page, and then click Clone.

  5. Attach the disks that were created from the snapshots.
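
As referenced in step 2, the snapshot and disk steps might look like the following; all disk and snapshot names are hypothetical:

 # Snapshot the data and backup disks of the source system.
 gcloud compute disks snapshot hana-data-disk hana-backup-disk --zone=us-central1-a --snapshot-names=clone-data-snap,clone-backup-snap
 # Create new disks for the clone from the snapshots.
 gcloud compute disks create clone-data-disk --zone=us-central1-a --source-snapshot=clone-data-snap
 gcloud compute disks create clone-backup-disk --zone=us-central1-a --source-snapshot=clone-backup-snap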

To clone a multi-host SAP HANA system:

  1. Provision a new SAP HANA system with the same configuration as the SAP HANA system you want to clone.

  2. Perform a data backup of the original system.

  3. Restore the backup of the original system into the new system.

Installing and updating the Cloud SDK

After a VM is deployed for SAP HANA and the operating system is installed, an up-to-date Cloud SDK is required for various purposes, such as transferring files to and from Cloud Storage, interacting with network services, and so forth.

If you follow the instructions in the SAP HANA deployment guide, the Cloud SDK is installed automatically for you.

However, if you bring your own operating system to GCP as a custom image or you are using an older public image provided by GCP, you might need to install or update the Cloud SDK yourself.

To check if the Cloud SDK is installed and whether updates are available, open a terminal or command prompt and enter:

 gcloud version

If the command is not recognized, the Cloud SDK is not installed.

To install the Cloud SDK, follow the instructions in the Cloud SDK quickstarts.

To replace version 140 or earlier of the SLES-integrated Cloud SDK:

  1. Log in to the VM by using SSH.

  2. Switch to the super user:

     sudo su
    
  3. Enter the following commands:

     # Download and install the Cloud SDK noninteractively into /usr/local.
     bash <(curl -s https://dl.google.com/dl/cloudsdk/channels/rapid/install_google_cloud_sdk.bash) --disable-prompts --install-dir=/usr/local
     # Point the system-wide gsutil and gcloud commands at the new installation.
     update-alternatives --install /usr/bin/gsutil gsutil /usr/local/google-cloud-sdk/bin/gsutil 1 --force
     update-alternatives --install /usr/bin/gcloud gcloud /usr/local/google-cloud-sdk/bin/gcloud 1 --force
     # Verify the installation by listing your project's Compute Engine instances.
     gcloud --quiet compute instances list
    

Setting up your SAP support channel with SAProuter

If you need to allow an SAP support engineer to access your SAP HANA systems on GCP, you can do so using SAProuter. Follow these steps:

  1. Launch the Compute Engine VM instance that the SAProuter software will be installed on, and assign an external IP address so the instance has internet access.

  2. Create a new static external IP address and assign it to the instance.

  3. Create and configure a firewall rule for SAProuter in your network. In this rule, allow only the required inbound and outbound access between the SAP support network and the SAProuter instance.

    Limit the inbound and outbound access to the specific IP address that SAP provides for you to connect to, along with TCP port 3299. Add a target tag to your firewall rule and enter your instance name; this ensures that the firewall rule applies only to the new instance. See the firewall rules documentation for additional details about creating and configuring firewall rules. (Example gcloud commands for steps 2 and 3 are sketched after this list.)

  4. Install the SAProuter software, following SAP Note 1628296, and create a saprouttab file that allows access from SAP to your SAP HANA systems on GCP.

  5. Set up the connection with SAP. For your internet connection, use Secure Network Communication. For more information, see SAP Remote Support – Help.
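
As referenced in step 3, the static IP address and firewall rule might be created as follows. The names, network, and region are hypothetical, and SAP_IP_ADDRESS stands for the IP address that SAP provides:

 # Reserve a static external IP address for the SAProuter instance.
 gcloud compute addresses create saprouter-ip --region=us-central1
 # Allow inbound TCP 3299 only from the IP address that SAP provides.
 gcloud compute firewall-rules create allow-saprouter-inbound --network=my-network --direction=INGRESS --allow=tcp:3299 --source-ranges=SAP_IP_ADDRESS/32 --target-tags=saprouter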

Configuring your network

You provision your SAP HANA system by using VMs in a GCP virtual network. GCP uses state-of-the-art, software-defined networking and distributed-systems technologies to host and deliver your services around the world.

For SAP HANA, create a non-default network with non-overlapping CIDR IP address ranges for each subnetwork in the network. Note that each subnetwork and its internal IP address ranges are mapped to a single region.

A subnetwork spans all of the zones in the region where it is created. However, when you create a VM instance, you specify a zone and a subnetwork for the VM. For example, you can create one set of instances in subnetwork1 and in zone1 of region1 and another set of instances in subnetwork2 and in zone2 of region1, depending on your needs.
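
As a sketch, the network and a subnetwork might be created with the following commands; the names, region, and range are hypothetical:

 # Create a network with custom (non-automatic) subnetworks.
 gcloud compute networks create hana-network --subnet-mode=custom
 # Create a subnetwork with a non-overlapping CIDR range in one region.
 gcloud compute networks subnets create hana-subnet-1 --network=hana-network --region=us-central1 --range=10.0.1.0/24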

A new network has no firewall rules and hence no network access. You should create firewall rules that open access to your SAP HANA instances based on a minimum privilege model. The firewall rules apply to the entire network and can also be configured to apply to specific target instances by using the tagging mechanism.

Routes are global, not regional, resources that are attached to a single network. User-created routes apply to all instances in a network. This means you can add a route that forwards traffic from instance to instance within the same network, even across subnetworks, without requiring external IP addresses.

For your SAP HANA instance, launch the instance with no external IP address and configure another VM as a NAT gateway for external access. This configuration requires you to add your NAT gateway as a route for your SAP HANA instance. This procedure is described in the deployment guide.
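
The route through the NAT gateway might be added like the following; the route, network, instance, and tag names are hypothetical:

 # Route outbound traffic from instances tagged no-ip through the NAT gateway VM.
 gcloud compute routes create no-ip-internet-route --network=hana-network --destination-range=0.0.0.0/0 --next-hop-instance=nat-gateway --next-hop-instance-zone=us-central1-a --tags=no-ip --priority=800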

Security

The following sections discuss security operations.

Minimum privilege model

Your first line of defense is to restrict who can reach the instance by using firewalls. By creating firewall rules, you can restrict all traffic to a network or target machines on a given set of ports to specific source IP addresses. You should follow the minimum-privilege model to restrict access to the specific IP addresses, protocols, and ports that need access. For example, you should always set up a bastion host, and allow SSH into your SAP HANA system only from that host.
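
For example, SSH might be allowed only from a bastion host with a rule like the following; the network, tag, and bastion address are hypothetical:

 # Allow SSH to SAP HANA hosts only from the bastion host's internal IP address.
 gcloud compute firewall-rules create allow-ssh-from-bastion --network=hana-network --allow=tcp:22 --source-ranges=10.0.1.5/32 --target-tags=hana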

Configuration changes

You should configure your SAP HANA system and the operating system with the recommended security settings. For example, make sure that only the relevant network ports are whitelisted for access, harden the operating system on which SAP HANA runs, and so on.

Refer to the following SAP notes:

Disabling unneeded SAP HANA services

If you do not require SAP HANA Extended Application Services (SAP HANA XS), disable the service. Refer to SAP note 1697613: Remove XS Engine out of SAP HANA database.

After the service has been disabled, remove all the TCP ports that were opened for the service. In GCP, this means editing your firewall rules for your network to remove these ports from the whitelist.

Audit logging

Cloud Audit Logs consists of two log streams, admin activity and data access, both of which are automatically generated by GCP. These can help you answer the questions, "Who did what, where, and when?" in your Google Cloud Platform project.

Admin activity logs contain log entries for API calls or administrative actions that modify the configuration or metadata of a service or project. This log is always enabled and is visible to all project members.

Data access logs contain log entries for API calls that create, modify, or read user-provided data managed by a service, such as data stored in a database service. This type of logging is enabled by default in your project and is accessible to you through Stackdriver Logging, or through your activity feed.
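
For example, you might list recent audit log entries with the gcloud CLI; the filter shown is a simple substring match and is illustrative only:

 # Read the ten most recent Cloud Audit Logs entries in the current project.
 gcloud logging read 'logName:"cloudaudit.googleapis.com"' --limit=10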

Securing a Cloud Storage bucket

If you use Cloud Storage to host the backups of your data and logs, make sure that you use TLS (HTTPS) when sending data to Cloud Storage from your instances to protect data in transit. Cloud Storage automatically encrypts data at rest. You can specify your own encryption keys if you have your own key-management system.

Refer to the Cloud Storage security documentation for best practices for Cloud Storage.

Refer to the following additional security resources for your SAP HANA environment on GCP:

High availability for SAP HANA on GCP

GCP provides a variety of options for ensuring high availability for your SAP HANA system, including the Compute Engine live migration and automatic restart features. These features, along with the high monthly uptime percentage of Compute Engine VMs, might make paying for and maintaining standby systems unnecessary.

However, if required, you can deploy a multi-host scale-out system that includes standby hosts for SAP HANA Host Auto-failover, or you can deploy a scale-up system with a standby SAP HANA instance in a high-availability Linux cluster.

For more information about the high availability options for SAP HANA on GCP, see SAP HANA high-availability and disaster recovery planning guide.

Disaster recovery

The SAP HANA system provides several high availability features to make sure that your SAP HANA database can withstand failures at the software or infrastructure level. Among these features are SAP HANA system replication and SAP HANA backups, both of which GCP supports.

For more information about SAP HANA backups, see Backup and recovery.

For more information about system replication, see the SAP HANA high-availability and disaster recovery planning guide.

Backup and recovery

Backups are vital for protecting your system of record (your database). Because SAP HANA is an in-memory database, you should create regular backups so that you can recover from data corruption. The SAP HANA system provides native backup and recovery features to help you do this, and you can use GCP services such as Cloud Storage as the backup destination for SAP HANA backups.

You can also install the Cloud Storage Backint agent for SAP HANA so that you can use Cloud Storage directly for backups and recoveries.

This document assumes you are familiar with SAP HANA backup and recovery, along with the following SAP service Notes:

Using Compute Engine persistent disks and Cloud Storage for backups

If you followed the deployment instructions, you have an SAP HANA installation with a /hanabackup directory, which is backed by a standard persistent disk. You use the standard SAP tools to create your online database backups in the /hanabackup directory. Finally, you save the completed backup by uploading it to a Cloud Storage bucket, from which you can download the backup when you need to recover.
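
For example, you might copy a completed backup set to a hypothetical bucket with gsutil, and copy it back when you need to recover:

 # Upload the on-disk backups to Cloud Storage.
 gsutil -m cp -r /hanabackup/data /hanabackup/log gs://my-hana-backups/
 # Later, download a backup set when you need to recover.
 gsutil -m cp -r gs://my-hana-backups/data /hanabackup/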

Using Compute Engine to create backups and disk snapshots

You can use Compute Engine for SAP HANA backups, and you also have the option of backing up the entire disk hosting your data and log using persistent-disk snapshots.

If you followed the instructions in the deployment guide, you have an SAP HANA installation with a /hanabackup directory for your online database backups. You can use that same directory to store snapshots of the backup volume and maintain a point-in-time backup of your data and log.

An advantage of snapshots is that they are incremental, where each subsequent backup only stores incremental block changes instead of creating an entirely new backup. Compute Engine redundantly stores multiple copies of each snapshot across multiple locations with automatic checksums to ensure the integrity of your data.

Here is an illustration of the incremental backups:

Snapshot diagram

Cloud Storage as your backup destination

Cloud Storage is a good choice to use as your backup destination for SAP HANA because it provides high durability and availability of data.

Cloud Storage is an object store for files of any type or format. It has virtually unlimited storage and you do not have to worry about provisioning it or adding more capacity to it. An object in Cloud Storage consists of file data and its associated metadata, and can be up to 5 TB in size. A Cloud Storage bucket can store any number of objects.

With Cloud Storage, your data is stored in multiple locations, which provides high durability and high availability. When you upload your data to Cloud Storage or copy your data within it, Cloud Storage reports the action as successful only if object redundancy is achieved.

Cloud Storage offers different storage classes, which you can choose based on how frequently you need to access your data:

Frequent access: Choose the Multi-Regional or Regional storage class for backups that you access multiple times in a month. Multi-Regional is useful for disaster recovery because its data is stored redundantly in at least two regions separated by at least 100 miles within the multi-regional location of the bucket. Note that data stored as Multi-Regional storage can be placed only in multi-regional locations, such as the United States, the European Union, or Asia, not in specific regional locations such as us-central1 or asia-east1.

Infrequent access: Choose Nearline or Coldline storage for infrequently accessed data. Nearline is a good choice for backed-up data that you plan to access at most once a month, while Coldline is better for data that has a very low probability of access, perhaps once a year at most. Consider replacing your tape-based backup solution with Nearline or Coldline.

When you plan your usage of these storage options, start with the frequently accessed tier and age your backup data into the infrequent-access tiers. Backups are generally needed more rarely as they get older. The probability of needing a backup that is three years old is extremely low, so you can age such a backup into the Coldline tier to save on costs, which are currently seven-tenths of a cent per GB per month.
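
Aging backups into colder tiers can be automated with an object lifecycle configuration on the bucket. A minimal sketch, assuming a hypothetical bucket and a 365-day threshold:

# Move backup objects older than 365 days to the Coldline storage class.
cat > lifecycle.json <<'CONFIG'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 365}
    }
  ]
}
CONFIG
gsutil lifecycle set lifecycle.json gs://my-hana-backups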

Cloud Storage compared to tape backup

The traditional, on-premises backup destination is tape. Cloud Storage has many benefits over tape, including the ability to automatically store backups "offsite" from the source system, since data in Cloud Storage is replicated across multiple facilities. This also means that the backups stored in Cloud Storage are highly available.

Another key difference is the speed of restoring backups when you need to use them. If you need to create a new SAP HANA system from backup or restore an existing system from backups, Cloud Storage provides faster access to your data and helps you build the system faster.

Cloud Storage Backint agent for SAP HANA

You can use Cloud Storage directly for backups and recoveries for both on-premises and cloud installations by using the SAP-certified Cloud Storage Backint agent for SAP HANA (Backint agent).

The Backint agent is integrated with SAP HANA so that you can store and retrieve backups directly from Cloud Storage by using the native SAP backup and recovery functions.

When you use the Backint agent, you don't need to use persistent disk storage for backups.

For installation instructions for the Backint agent, see the Cloud Storage Backint agent for SAP HANA installation guide.

For more information about the SAP certification of the Backint agent, see:

Storing backups in Cloud Storage buckets

Use multi-regional buckets unless you are certain that the bucket will be used for backups and recoveries only by SAP HANA instances that run in the same region as the bucket.

If you are running SAP HANA on GCP, create your bucket in the same GCP project as the SAP HANA instances that will use the bucket so you can use default application authentication.
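
For example, a multi-regional bucket might be created in your project as follows; the bucket name is hypothetical and must be globally unique:

 # Create a bucket in the US multi-region.
 gsutil mb -l US gs://my-hana-backups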

Multistreaming data backups with the Backint agent

The Backint agent supports multistreaming for backups larger than 128 GB.

Multistreaming is useful for increasing throughput and backing up databases that are larger than 5 TB, the maximum size for a single object stored in Cloud Storage.

The optimum number of channels that you use for multistreaming depends on the Cloud Storage bucket type you are using and the environment in which SAP HANA is running. Also consider the throughput capability of the data disk attached to your HANA instance, as well as the bandwidth your administrator allocates for backup activities.

You can adjust the throughput by changing the number of streams, or limit throughput by using the #RATE_LIMIT_MB parameter in parameters.txt, the Backint agent configuration file.

For a multi-regional bucket, start with 8 channels by setting the parallel_data_backup_backint_channels parameter to 8 in the SAP HANA global.ini configuration file.

For a regional bucket, start with 12 channels by setting the parallel_data_backup_backint_channels parameter in the global.ini file to 12.

Adjust the number of channels as necessary to meet your backup performance objectives.

As stated in the SAP HANA documentation, each additional channel requires an I/O buffer of 512 MB. Specify the size of the I/O buffer by setting the data_backup_buffer_size parameter in the backup section of the global.ini file appropriately. For more information about the effect of the I/O buffer size on backup times, see SAP Note 2657261.
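
For example, the backup section of global.ini might look like the following for eight channels; the buffer value of 4096 MB (8 × 512 MB) is illustrative:

[backup]
parallel_data_backup_backint_channels = 8
data_backup_buffer_size = 4096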

For more information about multistreaming, in the SAP HANA Administration Guide that is specific to your SAP HANA version, see Multistreaming Data Backups with Third-Party Backup Tools.

Authentication for the Backint agent

GCP uses a service account to authenticate a Backint agent to a Cloud Storage bucket.

When SAP HANA is running on GCP

If both SAP HANA and the Cloud Storage bucket are in the same GCP project, the Backint agent can use your project's default service account to authenticate with Cloud Storage, which saves you the trouble of setting up the authentication yourself.

If you can't use the default service account because your company uses a custom authentication method, you can create a GCP service account for the Backint agent, create a service account key, and assign the Storage Object Admin role to the service account so that the Backint agent can create and delete backups in the Cloud Storage bucket.

When SAP HANA is not running on GCP

If you are running SAP HANA on-premises or on another cloud platform, you need to create a GCP service account for the Backint agent. Store the service account key on the SAP HANA VM, and assign the role of Storage Object Admin to the service account so that the Backint agent can create and delete the backups in the Cloud Storage bucket.

Create the new service account in the GCP project that owns the Cloud Storage bucket that the Backint agent will use.
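
As a sketch, the service account setup might use the following commands; the account name, project, key path, and bucket are hypothetical:

 # Create the service account in the project that owns the bucket.
 gcloud iam service-accounts create backint-agent --project=my-project
 # Create a key file to store on the SAP HANA host.
 gcloud iam service-accounts keys create /path/to/backint-key.json --iam-account=backint-agent@my-project.iam.gserviceaccount.com
 # Grant the Storage Object Admin role on the backup bucket.
 gsutil iam ch serviceAccount:backint-agent@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin gs://my-hana-backups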

When using this configuration method, rotate your keys regularly as a best practice to protect against unauthorized access.

Automatic updates for the Backint agent

The automatic update function of the Backint agent requires the SAP HANA VM to support remote HTTP requests to https://www.googleapis.com/.

If a new version is available, it is downloaded and installed prior to running the Backint agent.

Configuration options for the Backint agent

You can specify a number of options for the Backint agent in the parameters.txt configuration file.

Specify each parameter on a new line. Separate parameters and values with a space.

#BUCKET bucket-name
A required parameter that specifies the name of the Cloud Storage bucket that the Backint agent writes to and reads from. The Backint agent creates backup objects with the storage class of the bucket and supports all storage classes. The Backint agent uses Compute Engine default encryption to encrypt data at rest.

#SERVICE_ACCOUNT path/to/key/file
An optional parameter that specifies the fully qualified path to the JSON-encoded GCP service account key when Compute Engine default authentication is not used. Specifying #SERVICE_ACCOUNT directs the Backint agent to use the key when authenticating to the Cloud Storage service. The Compute Engine default authentication is recommended.

#DISABLE_COMPRESSION
An optional parameter that disables the default, on-the-fly compression that is applied when the Backint agent writes backups to the Cloud Storage bucket. Compression reduces the cost of storing backups in Cloud Storage, but requires more CPU processing during backup operations. Regardless of this setting, the Backint agent supports both compressed and uncompressed backup files during a restore operation.

#CHUNK_SIZE_MB MB
An optional parameter that controls the size of the HTTPS requests to Cloud Storage during backup or restore operations. The default chunk size is 100 MB, which means that a single HTTP request stream to or from Cloud Storage is kept open until 100 MB of data is transferred. 100 MB provides a good balance between throughput and reliability for most users. Because the Backint agent retries failed HTTP requests multiple times before failing an operation, smaller chunk sizes result in less data that needs to be retransmitted if a request fails. Larger chunk sizes can improve throughput, but require more memory and more time to resend data in the event of a request failure.

#RATE_LIMIT_MB MB
An optional parameter that sets an upper limit on the outbound bandwidth to Compute Engine during backup or restore operations. By default, GCP does not limit network bandwidth for the Backint agent. When this parameter is set, throughput might vary, but does not exceed the specified limit.
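
Putting these together, a parameters.txt file might look like the following; the bucket name and values are hypothetical:

#BUCKET my-hana-backups
#CHUNK_SIZE_MB 100
#RATE_LIMIT_MB 70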

Logging for the Backint agent

In addition to the logs kept by SAP HANA in backup.log, the Backint agent writes operational and communication-error events to log files in the logs subdirectory in /usr/sap/SID/SYS/global/hdb/opt/backint/backint-gcs.

When the size of a log file reaches 10 MB, the Backint agent rotates the log files.

If necessary, you can edit the Backint agent logging configuration in /usr/sap/SID/SYS/global/hdb/opt/backint/backint-gcs/logging.properties.

The Backint agent also supports Stackdriver Logging. To enable Logging, see the Cloud Storage Backint agent for SAP HANA installation guide.

Managing identity and access to backups

When you use Cloud Storage or Compute Engine to back up your SAP HANA data, access to those backups is controlled by Cloud Identity and Access Management (Cloud IAM). This feature gives administrators the ability to authorize who can take action on specific resources, and provides centralized control of and visibility into all of your GCP resources, including your backups.

Cloud IAM also maintains a full audit trail: the granting, removal, and delegation of permissions is surfaced automatically for your admins. This lets you configure policies that monitor access to the data in your backups, so that you can complete the full access-control cycle with your data. Cloud IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.

To grant access to backups in Cloud Storage:

  1. In the GCP Console, go to the IAM & Admin page:

  2. Specify the user you are granting access to, and assign the role Storage > Storage Object Creator. (A command-line equivalent is sketched after these steps.)

    IAM screencap
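
The same grant might be made from the command line with gsutil; the user and bucket names are hypothetical:

 # Grant the Storage Object Creator role on the backup bucket.
 gsutil iam ch user:admin@example.com:roles/storage.objectCreator gs://my-hana-backups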

How to make backups

SAP HANA systems provisioned on GCP by using the deployment guide are configured with a set of persistent disk volumes to be used as an NFS-mounted backup destination. SAP HANA backups are first stored on these local persistent disk volumes and should then be copied to Cloud Storage for long-term storage. You can either manually copy the backups to Cloud Storage or schedule the copy in a crontab.
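
For example, the scheduled copy might be a crontab entry that synchronizes the backup directory to a hypothetical bucket nightly at 02:00:

 # Run at 02:00 every day: mirror /hanabackup into the backup bucket.
 0 2 * * * gsutil -m rsync -r /hanabackup gs://my-hana-backups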

If you are using the Cloud Storage Backint agent for SAP HANA, you back up to and recover from a Cloud Storage bucket directly. Persistent disk storage is not required.

You can use SAP HANA Studio, SQL commands, or the DBA Cockpit to start or schedule SAP HANA data backups. Log backups are written automatically unless disabled. The following screenshot shows an example:

Backups screencap

Configuring SAP HANA global.ini

If you followed the deployment guide instructions, the SAP HANA global.ini configuration file is customized so that database backups are stored in /hanabackup/data/ and automatic log archival files are stored in /hanabackup/log/, as follows:

[persistence]
basepath_datavolumes = /hana/data
basepath_logvolumes = /hana/log
basepath_databackup = /hanabackup/data
basepath_logbackup = /hanabackup/log

[system_information]
usage = production

To customize the global.ini configuration file for the Cloud Storage Backint agent for SAP HANA, see the Cloud Storage Backint agent for SAP HANA deployment guide.

Notes for scale-out deployments

In a scale-out implementation, a high-availability solution that uses live migration and automatic restart works in the same way as in a single-host setup. The main difference is that the /hana/shared volume is NFS-mounted on all of the worker hosts and mastered on the SAP HANA master host. If the master host is live-migrated or automatically restarted, the NFS volume is briefly inaccessible. When the master host has restarted, the NFS volume soon functions again on all hosts, and normal operation resumes automatically.

The backup volume, /hanabackup, must be available on all hosts during backup and recovery operations. In the event of a failure, verify that /hanabackup is mounted on all hosts, and remount it on any host where it is not. When you copy the backup set to another volume or to Cloud Storage, run the copy on the master host to achieve better I/O performance and to reduce network usage. To simplify the backup and recovery process, you can use Cloud Storage FUSE to mount the Cloud Storage bucket on each host.
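
For example, with Cloud Storage FUSE installed, each host might mount a hypothetical backup bucket like this:

 # Mount the bucket at /mnt/gcs-backup (the directory must already exist).
 gcsfuse my-hana-backups /mnt/gcs-backup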

Scale-out performance is only as good as your data distribution: the better the data is distributed, the better your query performance will be. This requires that you know your data well, understand how the data is consumed, and design table distribution and partitioning accordingly. Refer to SAP Note 2081591.

Gcloud Python

Gcloud Python is an idiomatic Python client that you can use to access GCP services. This guide uses Gcloud Python to perform backup and restore operations to and from Cloud Storage for your SAP HANA database backups.

If you followed the deployment guide instructions, Gcloud Python libraries are already available in the Compute Engine instances.

The libraries are open source and allow you to operate on your Cloud Storage bucket to store and retrieve backup data.

You can run the following command to list the backup objects available in your Cloud Storage bucket:

python 2>/dev/null - <<EOF
from google.cloud import storage
# Authenticate with default application credentials and open the bucket.
storage_client = storage.Client()
bucket = storage_client.get_bucket("<bucket_name>")
# Print the name of every object stored in the bucket.
blobs = bucket.list_blobs()
for fileblob in blobs:
    print(fileblob.name)
EOF

For complete details about Gcloud Python, see the storage client library reference documentation.

Backup example

Here are the steps you might take for a typical backup task, using SAP HANA Studio as an example:

  1. In the SAP HANA Backup Editor, select Open Backup Wizard.

    Backup Wizard

    1. Select File as the destination type. This backs up the database to files in the specified file system.
    2. Specify the backup destination, /hanabackup/data/[SID], and the backup prefix. Replace [SID] with the proper SAP SID.
    3. Click Next.
  2. Click Finish in the confirmation form to start the backup.

  3. When the backup starts, a status window displays the progress of your backup. Wait for the backup to complete.

    Backup Progress

    When the backup is complete, the backup summary displays a "Finished" message.

  4. Sign in to your SAP HANA system and verify that the backups are available at the expected locations in the file system. For example:

    Backup List1 Backup List2

  5. Push or synchronize the backup files from the /hanabackup file system to Cloud Storage. The following sample Python script pushes the data from /hanabackup/data and /hanabackup/log to the bucket used for backups, in the form [NODENAME]/data/YYYY/MM/DD/HH/[BACKUP_FILE_NAME] or [NODENAME]/log/YYYY/MM/DD/HH/[BACKUP_FILE_NAME]. This allows you to identify backup files based on the time at which they were copied. Run this Gcloud Python script at your operating system's bash prompt:

    python 2>/dev/null - <<EOF
    import os
    import socket
    from datetime import datetime
    from google.cloud import storage
    # The bucket name "hanabackup" and the SID "H2D" are examples; adjust
    # them for your system.
    storage_client = storage.Client()
    today = datetime.today()
    current_hour = today.strftime('%Y/%m/%d/%H')
    hostname = socket.gethostname()
    bucket = storage_client.get_bucket("hanabackup")
    # Upload the complete data backups.
    for subdir, dirs, files in os.walk('/hanabackup/data/H2D/'):
        for file in files:
            backupfilename = os.path.join(subdir, file)
            if 'COMPLETE_DATA_BACKUP' in backupfilename:
                only_filename = backupfilename.split('/')[-1]
                backup_file = hostname + '/data/' + current_hour + '/' + only_filename
                blob = bucket.blob(backup_file)
                blob.upload_from_filename(filename=backupfilename)
    # Upload the log backups, whose file names contain 'log_backup'.
    # (Filtering on 'COMPLETE_DATA_BACKUP' here would match no log files.)
    for subdir, dirs, files in os.walk('/hanabackup/log/H2D/'):
        for file in files:
            backupfilename = os.path.join(subdir, file)
            if 'log_backup' in backupfilename:
                only_filename = backupfilename.split('/')[-1]
                backup_file = hostname + '/log/' + current_hour + '/' + only_filename
                blob = bucket.blob(backup_file)
                blob.upload_from_filename(filename=backupfilename)
    EOF
    
  6. Use either the Gcloud Python libraries or GCP Console to list the backup data.

Restore example

To restore your SAP HANA database from a backup:

  1. If the backup files are not already available in the /hanabackup file system but are in Cloud Storage, download the files from Cloud Storage by running the following script at your operating system's bash prompt:

    python - <<EOF
    from google.cloud import storage
    # Authenticate with default application credentials and open the bucket.
    storage_client = storage.Client()
    bucket = storage_client.get_bucket("hanabackup")
    # Download every backup object, routing log backups and data backups
    # to their respective directories.
    for fileblob in bucket.list_blobs():
        fname = str(fileblob.name).split('/')[-1]
        # Use a 1 GiB chunk size to speed up large downloads.
        fileblob.chunk_size = 1 << 30
        if 'log' in fname:
            fileblob.download_to_filename('/hanabackup/log/H2D/' + fname)
        else:
            fileblob.download_to_filename('/hanabackup/data/H2D/' + fname)
    EOF
    
  2. To recover the SAP HANA database, click Backup and Recovery > Recover System:

    Recover system

  3. Click Next.

  4. Specify the location of your backups in your local filesystem and click Add.

  5. Click Next.

  6. Select Recover without the backup catalog:

    Recover Nocat

  7. Click Next.

  8. Select File as the destination type, then specify the location of the backup files and the correct prefix for your backup. (In the backup example, remember that you used COMPLETE_DATA_BACKUP as the prefix.)

  9. Click Next twice.

  10. Click Finish to start the recovery.

  11. When recovery completes, resume normal operations and remove backup files from the /hanabackup/data/[SID]/* directories.

What's next

You might find the following standard SAP documents helpful:

You might also find the following GCP documents useful:
