This page describes how to restore Cassandra in a single region.
In a single region deployment, Apigee hybrid is deployed in a single data center or region. If your deployment contains multiple Apigee organizations, the restore process restores data for all of them; in a multi-organization setup, you cannot restore a specific organization.
Restoring a region from a backup
- Update the Cassandra restore details in the `overrides.yaml` file (a filled-in sketch appears after this procedure):

  ```yaml
  namespace: YOUR_RESTORE_NAMESPACE # Use the same namespace as in your original cluster.

  cassandra:
    hostNetwork: false
    ...
    restore:
      enabled: true
      serviceAccountPath: "SA_JSON_FILE_PATH"
      dbStorageBucket: "CLOUD_STORAGE_BUCKET_NAME"
      cloudProvider: "GCP" # required verbatim "GCP" (all caps)
      snapshotTimestamp: "TIMESTAMP"
    ...
    backup:
      enabled: false
    ...
  ```

  Where:

  | Property | Description |
  | --- | --- |
  | `namespace` (`YOUR_RESTORE_NAMESPACE`) | The namespace for the restore. Use the same namespace as in your original cluster. |
  | `cassandra:hostNetwork` | `hostNetwork` is required and should always be set to `false`. |
  | `restore:enabled` | Restore is disabled by default. You must set this property to `true`. |
  | `restore:serviceAccountPath` (`SA_JSON_FILE_PATH`) | The path on your filesystem to the service account you created for the backup. |
  | `restore:dbStorageBucket` (`CLOUD_STORAGE_BUCKET_NAME`) | The name of the Google Cloud Storage bucket that stores the backup archives to be used for data restoration. |
  | `restore:cloudProvider` (`GCP`) | The `cloudProvider: "GCP"` property is required, verbatim and in all caps. |
  | `restore:snapshotTimestamp` (`TIMESTAMP`) | The timestamp of the backup snapshot to restore. To check which timestamps can be used, go to the `dbStorageBucket` and look at the files present in the bucket. Each file name contains a timestamp value, for example `backup_20210203213003_apigee-cassandra-default-0.tgz`, where `20210203213003` is the `snapshotTimestamp` value you would use to restore the backups created at that point in time. |
  | `backup:enabled` | Set this property to `false` if it was previously set to `true`. |
- If you do not have a clean cluster to start with, follow the Decommission a hybrid region for Helm documentation to bring your existing hybrid installation into a clean state (you can leave cert-manager installed). This brings you to the same state as if you had followed the Helm runtime setup guide up to the beginning of Step 11.
- Verify there are no pods remaining in the Apigee namespace (a wait-loop sketch appears after this procedure):

  ```bash
  kubectl get pods -n APIGEE_NAMESPACE
  ```
- If you are using CSI backup, make sure that you can see the volume snapshots you want to use for the restoration process (a readiness-check sketch appears after this procedure):

  ```bash
  kubectl get volumesnapshot -n APIGEE_NAMESPACE
  ```
- Install all hybrid components one by one as described in Step 10: Install Apigee hybrid using Helm (an installation sketch appears after this procedure). Note that the `apigee-cassandra-restore` pod is created once you run the command to install the `datastore` component, but it only moves to the `running` state after you install the `apigee-org` component.
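For reference, the following is a minimal sketch of what the restore configuration described in the first step above might look like with the placeholders filled in. The namespace `apigee`, the service account path, and the bucket name are hypothetical values; the timestamp reuses the example file name shown in the table above. Substitute the values from your own backup setup.

```yaml
# Minimal sketch of a filled-in restore configuration (hypothetical values).
namespace: apigee                                      # Hypothetical; use the namespace from your original cluster.

cassandra:
  hostNetwork: false
  restore:
    enabled: true
    serviceAccountPath: "/opt/apigee/backup-sa.json"   # Hypothetical path to the backup service account key.
    dbStorageBucket: "example-apigee-backups"          # Hypothetical Cloud Storage bucket name.
    cloudProvider: "GCP"                               # Required verbatim, all caps.
    snapshotTimestamp: "20210203213003"                # Timestamp taken from a backup file name in the bucket.
  backup:
    enabled: false
```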
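When you verify that no pods remain in the Apigee namespace, decommissioned pods can take a while to terminate. The sketch below is one way to wait for that; it assumes you have exported `APIGEE_NAMESPACE` as an environment variable and simply wraps the `kubectl get pods` check from the step above in a loop.

```bash
# Assumes APIGEE_NAMESPACE is set, for example: export APIGEE_NAMESPACE=apigee
# Loop until no pods remain in the Apigee namespace.
while [ "$(kubectl get pods -n "$APIGEE_NAMESPACE" --no-headers 2>/dev/null | wc -l)" -gt 0 ]; do
  echo "Waiting for pods in $APIGEE_NAMESPACE to terminate..."
  sleep 10
done
echo "No pods remaining in $APIGEE_NAMESPACE."
```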
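If you are using CSI backup, the readiness-check sketch below is one way to list the volume snapshots together with their readiness status. It assumes `APIGEE_NAMESPACE` is set and relies only on standard `kubectl` output formatting, not on any Apigee-specific tooling.

```bash
# List volume snapshots with name, readiness, and creation time.
kubectl get volumesnapshot -n "$APIGEE_NAMESPACE" \
  -o custom-columns=NAME:.metadata.name,READY:.status.readyToUse,CREATED:.metadata.creationTimestamp
```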
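As an illustration of the installation step, the sketch below installs the `datastore` component with the restore overrides and then watches for the `apigee-cassandra-restore` pod. The `helm upgrade` invocation mirrors the command shown at the end of this page; the chart path, release name, and overrides file name are assumptions you should adapt to your installation, and the remaining hybrid components still need to be installed as described in the installation guide.

```bash
# Install (or upgrade) the datastore component using the restore overrides.
helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace "$APIGEE_NAMESPACE" \
  --atomic \
  -f overrides-restore.yaml

# Hypothetical way to watch the restore pod. It reaches the running state
# only after the apigee-org component has been installed.
kubectl get pods -n "$APIGEE_NAMESPACE" --watch | grep apigee-cassandra-restore
```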
See Cassandra backup overview for more details on Cassandra backup and restore.
Verify the restoration job progress and confirm that `apigeeds` and all the other pods are up (a log-tailing sketch appears after this list):

- Check `apigeeds`:

  ```bash
  kubectl get apigeeds -n APIGEE_NAMESPACE
  ```

- Check all other pods:

  ```bash
  kubectl get pods -n APIGEE_NAMESPACE
  ```
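To follow the restore job itself, one hypothetical approach is to tail the logs of the `apigee-cassandra-restore` pod. The pod name is generated at runtime, so the sketch below looks it up first; it assumes `APIGEE_NAMESPACE` is set and that exactly one restore pod is present.

```bash
# Look up the generated apigee-cassandra-restore pod (hypothetical approach).
RESTORE_POD="$(kubectl get pods -n "$APIGEE_NAMESPACE" -o name | grep apigee-cassandra-restore)"

# Follow the restore logs until the job completes.
kubectl logs -n "$APIGEE_NAMESPACE" -f "$RESTORE_POD"
```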
Upon successful completion of the restore, and after confirming that the runtime components are healthy, we recommend configuring a backup on the cluster (a sketch of a backup stanza appears after this list):

- Remove the `restore` configuration from the `overrides-restore.yaml` file.
- Add the `backup` configuration to the `overrides-restore.yaml` file.
- Apply the `backup` configuration with the following command:

  ```bash
  helm upgrade datastore apigee-datastore/ \
    --namespace APIGEE_NAMESPACE \
    --atomic \
    -f overrides-restore.yaml
  ```
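Once the `restore` configuration has been removed, the `backup` stanza you add back will look roughly like the sketch below. The `schedule` value is a hypothetical cron expression, and the placeholders are the same ones used for the restore; see the Cassandra backup overview for the authoritative list of backup properties.

```yaml
cassandra:
  backup:
    enabled: true
    serviceAccountPath: "SA_JSON_FILE_PATH"
    dbStorageBucket: "CLOUD_STORAGE_BUCKET_NAME"
    cloudProvider: "GCP"
    schedule: "0 2 * * *"   # Hypothetical cron schedule: daily at 02:00.
```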