Restoring in a single region

This page describes how to restore Cassandra in a single region.

In a single-region deployment, Apigee hybrid runs in a single data center or region. If your deployment contains multiple Apigee organizations, the restore process restores data for all of the organizations; in a multi-organization setup, you cannot restore a specific organization.

Before you begin

Before you start restoring Cassandra, consider the following points:
  • Delete the Apigee hybrid installation from all the existing clusters or regions, including cert-manager and Anthos Service Mesh (ASM).
  • Install cert-manager and ASM as you would for a new Apigee hybrid installation, and ensure that the newly installed cert-manager and ASM versions match those of the original hybrid installation.
  • If you want to preserve an existing setup for troubleshooting and root cause analysis (RCA), delete all the org and env components from the Kubernetes cluster except the Apigee controller, and retain the cluster. The cluster still contains the existing Apigee datastore (Cassandra), which you can use for troubleshooting. Then create a new Kubernetes cluster and restore Cassandra into the new cluster.
  • If your hybrid installation was set up with multiple organizations, get the overrides files for each organization before proceeding. Then add the restore configuration described in step 3 to only one of the overrides files. Don't add the restore configuration to any other overrides file.
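For the multi-organization case, the idea is that exactly one overrides file ends up carrying the restore block. A minimal sketch, with hypothetical file names (your actual overrides files have more properties, as shown in step 3):

```yaml
# overrides-org1.yaml -- the ONE file that carries the restore configuration
cassandra:
  restore:
    enabled: true
    snapshotTimestamp: "20210203213003"

# overrides-org2.yaml, overrides-org3.yaml, ... -- no restore block at all
```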

Restoring a region from a backup

Depending on your configuration, the Cassandra backup resides either in Cloud Storage or on a remote server. In either case, perform the following steps to restore it:
  1. Verify the hybrid version.
    apigeectl version
    Ensure that this version is the same version that created the backup files in storage.
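As a sketch, this check can be scripted. The version strings below are illustrative placeholders, not output from a real cluster; in practice, take them from `apigeectl version` and from your backup records:

```shell
installed_version="1.8.3"   # placeholder for the output of `apigeectl version`
backup_version="1.8.3"      # placeholder for the version recorded when the backup was taken

# Restore only proceeds safely when the two versions match.
if [ "$installed_version" = "$backup_version" ]; then
  echo "versions match"
else
  echo "version mismatch: install apigeectl $backup_version before restoring"
fi
```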
  2. Ensure that the Kubernetes cluster that you are restoring to doesn't have any prior Apigee hybrid installation.
  3. Open your overrides.yaml file and set the restore snapshotTimestamp property to the timestamp of the backup you want to restore. For example:


        namespace: YOUR_RESTORE_NAMESPACE # Use the namespace as in your original cluster.
        cassandra:
          restore:
            enabled: true
            snapshotTimestamp: TIMESTAMP


        namespace: apigee
        cassandra:
          storage:
            type: gcepd
            capacity: 50Gi
            gcepd:
              replicationType: regional-pd
          auth:
            default:
              password: "abc123"
            admin:
              password: "abc234"
            ddl:
              password: "abc345"
            dml:
              password: "abc456"
          nodeSelector:
            key: cloud.google.com/gke-nodepool
            value: apigee-data
          backup:
            enabled: true
            serviceAccountPath: "/Users/myhome/.ssh/my-cassandra-backup-sa.json"
            dbStorageBucket: "gs://myname-cassandra-backup"
            schedule: "45 23 * * 6"
          restore:
            enabled: true
            cloudProvider: "GCP"
            snapshotTimestamp: "20210203213003"


    Property descriptions:

    namespace
      Namespace for restore. Use the namespace as in your original cluster.

    restore.snapshotTimestamp
      The timestamp of the backup snapshot to restore. To check which timestamps can be used, go to the dbStorageBucket and look at the files present in the bucket. Each file name contains a timestamp value. For example: backup_20210203213003_apigee-cassandra-default-0.tgz

      Here, 20210203213003 is the snapshotTimestamp value you would use if you wanted to restore the backups created at that point in time.
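Since each backup file name embeds its timestamp, you can recover candidate snapshotTimestamp values from a bucket listing. The listing command itself is shown as a comment; the parsing below is a sketch using the example file name from above:

```shell
# List the backup files in the bucket first, e.g.:
#   gsutil ls gs://myname-cassandra-backup
# Each name has the form backup_<timestamp>_<pod>.tgz; strip the prefix and the
# pod suffix to recover the snapshotTimestamp value.
file="backup_20210203213003_apigee-cassandra-default-0.tgz"  # example name from above
ts="${file#backup_}"   # drop the leading "backup_"
ts="${ts%%_*}"         # keep everything up to the next underscore
echo "$ts"             # prints 20210203213003
```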

  4. Create a new hybrid runtime deployment. This will create a new Cassandra cluster and begin restoring the backup data into the cluster:
    ${APIGEECTL_HOME}/apigeectl init  -f overrides/overrides.yaml
    ${APIGEECTL_HOME}/apigeectl check-ready -f overrides/overrides.yaml
    ${APIGEECTL_HOME}/apigeectl apply -f overrides/overrides.yaml --restore
    ${APIGEECTL_HOME}/apigeectl check-ready -f overrides/overrides.yaml
  5. Verify the restoration job progress:

    kubectl get apigeeds -n apigee

    Ensure that the apigeeds datastore and all the other pods are up and running.
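To script the readiness check, you can read the datastore's state field. The jsonpath expression and the "running" value are assumptions about the apigeeds status schema; verify them against your cluster:

```shell
# In a live cluster you would capture the state with something like:
#   state="$(kubectl get apigeeds -n apigee -o jsonpath='{.items[0].status.state}')"
# The value below stands in for that output so the logic can be shown.
state="running"
if [ "$state" = "running" ]; then
  echo "datastore ready"
else
  echo "datastore not ready yet: $state"
fi
```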