Cassandra backup and recovery

This section discusses how to configure data backup and recovery for the Apache Cassandra database ring installed in the Apigee hybrid runtime plane. See also Cassandra datastore.

Why Cassandra backups are important

Backups are an important measure of protection against disaster scenarios. Each backup is a consistent snapshot of the Cassandra data that existed at the time the backup was created, including the schema and metadata within the Cassandra cluster. Backups allow you to restore an Apigee hybrid instance to a previously known good state, a crucial step in getting a hybrid instance back online after a disaster. Depending on the size of the hybrid instance, a backup set can contain one or many backup files. For instance, if the hybrid instance has three Cassandra nodes, the backup set contains three backup files; if it has thirty Cassandra nodes, the backup set contains thirty backup files. As a result, the time to restore a hybrid instance from a backup depends on how many backup files in the backup set need to be restored.
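With the archive naming shown later in this topic, such a three-node backup set consists of one archive per Cassandra node, all sharing the same backup timestamp (the timestamp shown here is illustrative):

backup_20210203213003_apigee-cassandra-default-0.tgz
backup_20210203213003_apigee-cassandra-default-1.tgz
backup_20210203213003_apigee-cassandra-default-2.tgz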

What you need to know about Cassandra backups

Cassandra is a replicated database that is configured to have at least three copies of your data in each region or data center. Cassandra uses streaming replication and read repairs to maintain the data replicas in each region or data center at any given point.

In hybrid, Cassandra backups are not enabled by default. It's a good practice, however, to enable Cassandra backups in case your data is lost due to a catastrophic failure. Cassandra backups are intended for disaster recovery, not for restoring data lost through accidental deletion.

Backups are created according to the schedule set in your overrides.yaml file. See the Scheduling Cassandra backups section for instructions on how to schedule backups. Once a backup schedule has been applied to your hybrid cluster, a Kubernetes backup job is executed periodically according to that schedule. The job triggers a backup script on each Cassandra node in your hybrid cluster that collects all of the data on the node, creates an archive file of the data, and sends the archive to the Google Cloud Storage bucket specified in the Cassandra backup configuration in your overrides.yaml file.
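The schedule value uses standard crontab syntax (minute, hour, day of month, month, day of week). As a quick reference, the two schedule values that appear later in this topic break down as follows:

0 2 * * *      # every day at 02:00 (the default schedule)
45 23 * * 6    # every Saturday at 23:45 (the value used in the configuration example)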

What is backed up?

The backup configuration described in this topic backs up the following entities:

  • Cassandra schema including the user schema (Apigee keyspace definitions)
  • Cassandra partition token information per node
  • A snapshot of the Cassandra data

Where is backup data stored?

Backed up data is stored in a Google Cloud Storage bucket that you must create. Bucket creation and configuration is covered in this topic.
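The exact bucket configuration depends on your environment, but the following is a minimal sketch using the gsutil CLI. The bucket name and location are placeholders, and the lifecycle rule is one way to implement the 15-day retention recommended later in this topic:

# Create the bucket (name and location are placeholders).
gsutil mb -l us-central1 gs://myname-cassandra-backup

# One way to apply 15-day retention: a lifecycle rule that deletes
# objects older than 15 days. Adjust to your own retention requirements.
cat > lifecycle.json <<EOF
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 15}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://myname-cassandra-backup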

Scheduling Cassandra backups

Backups are scheduled as cron jobs in the runtime plane. To schedule Cassandra backups:

  1. From the hybrid installation root directory, run the following create-service-account command to create a Google Cloud service account (SA) with the standard roles/storage.objectAdmin role. This role allows the backup job to write backup data to Cloud Storage:
    ./tools/create-service-account --env prod \
      --profile apigee-cassandra \
      --name CASSANDRA_BACKUP_SA_NAME \
      --dir OUTPUT_DIR

    Where:

    • CASSANDRA_BACKUP_SA_NAME is the name of the service account.
    • OUTPUT_DIR is the directory where the tool will output the service account .json file.

    For example:
    ./tools/create-service-account --env prod \
      --profile apigee-cassandra \
      --name my-cassandra-backup-sa  --dir ./service-accounts
    For more information about Google Cloud service accounts, see Creating and managing service accounts.
  2. The create-service-account command saves a JSON file containing the service account private key to the directory specified by the --dir option. You will need the path to this file in the following steps.
  3. Create a Cloud Storage bucket. Specify a reasonable data retention policy for the bucket. Apigee recommends a data retention policy of 15 days.
  4. Open your overrides.yaml file.
  5. Add the following cassandra.backup properties to enable backup. Do not remove any of the properties that are already configured.

    Parameters

    cassandra:
      ...
    
      backup:
        enabled: true
        serviceAccountPath: SA_JSON_FILE_PATH
        dbStorageBucket: CLOUD_STORAGE_BUCKET_PATH
        schedule: BACKUP_SCHEDULE_CODE
    
      ...
      

    Example

    ...
    
    cassandra:
      storage:
        type: gcepd
        capacity: 50Gi
        gcepd:
          replicationType: regional-pd
      sslRootCAPath: "/Users/myhome/ssh/cassandra.crt"
      sslCertPath: "/Users/myhome/ssh/cassandra.crt"
      sslKeyPath: "/Users/myhome/ssh/cassandra.key"
      auth:
        default:
          password: "abc123"
        admin:
          password: "abc234"
        ddl:
          password: "abc345"
        dml:
          password: "abc456"
      nodeSelector:
        key: cloud.google.com/gke-nodepool
        value: apigee-data
      backup:
        enabled: true
        serviceAccountPath: "/Users/myhome/.ssh/my-cassandra-backup-sa.json"
        dbStorageBucket: "gs://myname-cassandra-backup"
        schedule: "45 23 * * 6"
    
      ... 
  6. Where:
    • backup:enabled: Backup is disabled by default. You must set this property to true.
    • backup:serviceAccountPath (SA_JSON_FILE_PATH): The path on your filesystem to the service account JSON file that was downloaded when you ran ./tools/create-service-account.
    • backup:dbStorageBucket (CLOUD_STORAGE_BUCKET_PATH): The Cloud Storage bucket path, in the format gs://BUCKET_NAME. The gs:// prefix is required.
    • backup:schedule (BACKUP_SCHEDULE_CODE): The time when the backup starts, specified in standard crontab syntax. Default: 0 2 * * *

  7. Apply the configuration changes to your cluster. For example:
    ./apigeectl apply -f overrides.yaml
  8. Verify the backup job. For example:
    kubectl get cronjob -n apigee
    NAME                      SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    apigee-cassandra-backup   33 * * * *   False     0        <none>          94s
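After a scheduled run completes, you can also confirm that the backup archives arrived in the Cloud Storage bucket. For example, assuming the gsutil CLI and the example bucket name used above:

gsutil ls -r gs://myname-cassandra-backup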

Restoring backups

Use backups to restore the Apigee infrastructure from the ground up after a catastrophic failure, such as all data in the Apigee hybrid instance being irrecoverably lost in a disaster. Be aware that if you have a multi-region deployment, recovery from a single-region failure should be done by decommissioning the impacted region and then expanding back into it, not by restoring a backup. Apigee Cassandra deployments and operational architecture provide redundancy and fault tolerance within a single region. Because Apigee hybrid is recommended to be deployed in a multi-region configuration for production use cases, a region failure can be recovered from one of the other live regions using the documented region decommissioning and expansion procedures instead of restoring from a backup.

Do not use backups to recover from the following:

  • Cassandra node failures.
  • Accidental deletion of data like apps, developers, and api_credentials.
  • One or more regions going down in a multi-region deployment.

Considerations when restoring:

  • Downtime: There will be downtime for the duration of the restoration.
  • Data loss: There will be data loss between the last valid backup and the time the restoration is complete.
  • Restoration time: Restoration time depends on the size of the data and cluster.
  • Cherry picking data: You cannot select only specific data to restore. Restoration restores the entire backup you select.

Restoration takes your data from the backup location and restores the data into a new Cassandra cluster with the same number of nodes. No cluster data is taken from the old Cassandra cluster.

The restoration instructions below are for single-region deployments that use Google Cloud Storage for backups. For other deployment types, see the restore documentation for that configuration.

To restore Cassandra backups:

  1. Create a new Kubernetes cluster with a new namespace in which to restore the hybrid runtime deployment. You cannot use the same cluster and namespace that you used for the original hybrid installation.
  2. In the root hybrid installation directory, create a new overrides-restore.yaml file.
  3. Copy the complete configuration from your original overrides.yaml file into the new overrides-restore.yaml file. For example:
    cp ./overrides.yaml ./overrides-restore.yaml
  4. In the new overrides-restore.yaml file, add the namespace and restore elements, and make sure the TLS authentication credentials in the cassandra properties are correct.

    Parameters

        namespace: YOUR_RESTORE_NAMESPACE
        cassandra:
          ...
          sslRootCAPath: PATH_TO_ROOT_CA_FILE
          sslCertPath: PATH_TO_SSL_CERT_FILE
          sslKeyPath: PATH_TO_SSL_KEY_FILE
          ...
          restore:
            enabled: true
            snapshotTimestamp: TIMESTAMP
            serviceAccountPath: SA_JSON_FILE_PATH
            dbStorageBucket: CLOUD_STORAGE_BUCKET_PATH
            image:
              pullPolicy: Always
          ...
    

    Example

        ...
    
        namespace: cassandra-restore
    
        cassandra:
          storage:
            type: gcepd
            capacity: 50Gi
            gcepd:
              replicationType: regional-pd
          sslRootCAPath: "/Users/myhome/ssh/cassandra.crt"
          sslCertPath: "/Users/myhome/ssh/cassandra.crt"
          sslKeyPath: "/Users/myhome/ssh/cassandra.key"
          auth:
            default:
              password: "abc123"
            admin:
              password: "abc234"
            ddl:
              password: "abc345"
            dml:
              password: "abc456"
          nodeSelector:
            key: cloud.google.com/gke-nodepool
            value: apigee-data
    
          restore:
            enabled: true
            snapshotTimestamp: "20210203213003"
            serviceAccountPath: "/Users/myhome/.ssh/my_cassandra_backup.json"
            dbStorageBucket: "gs://myname-cassandra-backup"
            image:
              pullPolicy: Always
    
        ...
    

    Where:

    • sslRootCAPath, sslCertPath, sslKeyPath (PATH_TO_ROOT_CA_FILE, PATH_TO_SSL_CERT_FILE, PATH_TO_SSL_KEY_FILE): Use the same TLS auth credentials you used to create the original Cassandra database.
    • namespace (YOUR_RESTORE_NAMESPACE): The name of the new namespace you created in step 1 for the new Cassandra cluster. Do not use the same namespace you used for your original cluster.
    • restore:enabled: Restore is disabled by default. You must set this property to true.
    • restore:snapshotTimestamp (TIMESTAMP): The timestamp of the backup snapshot to restore. To check which timestamps can be used, go to the dbStorageBucket and look at the files that are present in the bucket. Each file name contains a timestamp value, for example:

      backup_20210203213003_apigee-cassandra-default-0.tgz

      Here, 20210203213003 is the snapshotTimestamp value you would use to restore the backups created at that point in time.
    • restore:serviceAccountPath (SA_JSON_FILE_PATH): The path on your filesystem to the service account you created for the backup.
    • restore:dbStorageBucket (CLOUD_STORAGE_BUCKET_PATH): The Cloud Storage bucket path where your backup data is stored, in the format gs://BUCKET_NAME. The gs:// prefix is required.

  5. Change the app label on any Cassandra nodes in the old namespace by executing the following command:
    kubectl label pods --overwrite --namespace=OLD_NAMESPACE -l app=apigee-cassandra app=apigee-cassandra-old
    
  6. Create a new hybrid runtime deployment. This will create a new Cassandra cluster and begin restoring the backup data into the cluster:
    ./apigeectl init -f overrides-restore.yaml
    
    ./apigeectl apply -f overrides-restore.yaml
    
  7. Once the restoration is complete, the traffic must be switched to use the Cassandra cluster in the new namespace. Run the following commands to switch the traffic:

    kubectl get rs -n OLD_NAMESPACE # look for the 'apigee-connect' replicaset
    
    kubectl patch rs -n OLD_NAMESPACE APIGEE_CONNECT_RS_NAME -p '{"spec":{"replicas" : 0}}'
    
  8. Once the traffic switch is complete, you can reconfigure backups on the restored cluster by removing the restore configuration and adding the backup configuration to the overrides-restore.yaml file. Replace YOUR_RESTORE_NAMESPACE with the new namespace name created in step 1.
    namespace: YOUR_RESTORE_NAMESPACE
    cassandra:
      ...
      backup:
        enabled: true
        serviceAccountPath: SA_JSON_FILE_PATH
        dbStorageBucket: CLOUD_STORAGE_BUCKET_PATH
        schedule: BACKUP_SCHEDULE_CODE
      ...
    

    Then apply the backup configuration with the following command:

    ./apigeectl apply -f overrides-restore.yaml
    

Viewing the restore logs

You can check the restore job logs and grep for errors to make sure the restore log has no errors.
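For example, using the restore pod name from the sample output below (your pod name will differ):

kubectl logs apigee-cassandra-restore-b4lgf | grep -i error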

Verify the restore completed

Use the following command to check if the restore operation completed:

kubectl get pods

The output is similar to the following:

NAME                           READY     STATUS      RESTARTS   AGE
apigee-cassandra-default-0     1/1       Running     0          1h
apigee-cassandra-default-1     1/1       Running     0          1h
apigee-cassandra-default-2     1/1       Running     0          59m
apigee-cassandra-restore-b4lgf 0/1       Completed   0          51m

View the restore logs

Use the following command to view the restore logs:

kubectl logs -f apigee-cassandra-restore-b4lgf

The output is similar to the following:

Restore Logs:

Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
to download file gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1/backup_20190405011309_schema.tgz
INFO: download successfully extracted the backup files from gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
finished downloading schema.cql
to create schema from 10.32.0.28

Warnings :
dclocal_read_repair_chance table option has been deprecated and will be removed in version 4.0

dclocal_read_repair_chance table option has been deprecated and will be removed in version 4.0


Warnings :
dclocal_read_repair_chance table option has been deprecated and will be removed in version 4.0

dclocal_read_repair_chance table option has been deprecated and will be removed in version 4.0

INFO: the schema has been restored
starting apigee-cassandra-default-0 in default
starting apigee-cassandra-default-1 in default
starting apigee-cassandra-default-2 in default
84 95 106
waiting on waiting nodes $pid to finish  84
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
INFO: restore downloaded  tarball and extracted the file from  gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
INFO: restore downloaded  tarball and extracted the file from  gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
INFO: restore downloaded  tarball and extracted the file from  gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
INFO  12:02:28 Configuration location: file:/etc/cassandra/cassandra.yaml
…...

INFO  12:02:41 [Stream #e013ee80-5863-11e9-8458-353e9e3cb7f9] All sessions completed

Summary statistics:
   Connections per host    : 3
   Total files transferred : 2
   Total bytes transferred : 0.378KiB
   Total duration          : 5048 ms
   Average transfer rate   : 0.074KiB/s
   Peak transfer rate      : 0.075KiB/s

progress: [/10.32.1.1.6]0:1/1 100% 1:1/1 100% [/10.32.0.28]1:1/1 100% 0:1/1 100% [/10.32.3.220]0:1/1 100% 1:1/1 100% total: 100% 0.000KiB/s (avg: 0.074KiB/s)
INFO  12:02:41 [Stream #e013ee80-5863-11e9-8458-353e9e3cb7f9] All sessions completed
progress: [/10.32.1.1.6]0:1/1 100% 1:1/1 100% [/10.32.0.28]1:1/1 100% 0:1/1 100% [/10.32.3.220]0:1/1 100% 1:1/1 100% total: 100% 0.000KiB/s (avg: 0.074KiB/s)
INFO  12:02:41 [Stream #e013ee80-5863-11e9-8458-353e9e3cb7f9] All sessions completed
INFO  12:02:41 [Stream #e013ee80-5863-11e9-8458-353e9e3cb7f9] All sessions completed
INFO: ./apigee/data/cassandra/data/ks1/user-9fbae960571411e99652c7b15b2db6cc restored successfully
INFO: Restore 20190405011309 completed
INFO: ./apigee/data/cassandra/data/ks1/user-9fbae960571411e99652c7b15b2db6cc restored successfully
INFO: Restore 20190405011309 completed
waiting on waiting nodes $pid to finish  106
Restore finished

Verify backup job

You can also verify the backup job after the backup cronjob has been scheduled. Once a backup run starts, you should see something like the following:

kubectl get pods

The output is similar to the following:

NAME                                       READY     STATUS      RESTARTS   AGE
apigee-cassandra-default-0                 1/1       Running     0          2h
apigee-cassandra-default-1                 1/1       Running     0          2h
apigee-cassandra-default-2                 1/1       Running     0          2h
apigee-cassandra-backup-1.6451.680-pff6s   0/1       Running     0          54s

Check the backup logs

The backup job:

  • Creates a schema.cql file.
  • Uploads it to your storage bucket.
  • Triggers each node to back up its data and upload it at the same time.
  • Waits until all of the data is uploaded.

To view the backup logs:

kubectl logs -f apigee-cassandra-backup-1.6451.680-pff6s

The output is similar to the following:

myusername-macbookpro:cassandra-backup-utility myusername$ kubectl logs -f apigee-cassandra-backup-1.64577680-f9sc4
starting apigee-cassandra-default-0 in default
starting apigee-cassandra-default-1 in default
starting apigee-cassandra-default-2 in default
35 46 57
waiting on process  35
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
Requested creating snapshot(s) for [all keyspaces] with snapshot name [20190406190808] and options {skipFlush=false}
Snapshot directory: 20190406190808
INFO: backup created cassandra snapshot 20190406190808
tar: Removing leading `/' from member names
/apigee/data/cassandra/data/ks1/mytest3-37bc2df0587811e98e8d875b0ed64754/snapshots/
/apigee/data/cassandra/data/ks1/mytest3-37bc2df0587811e98e8d875b0ed64754/snapshots/20190406190808/
/apigee/data/cassandra/data/ks1/mytest3-37bc2df0587811e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Data.db
Requested creating snapshot(s) for [all keyspaces] with snapshot name [20190406190808] and options {skipFlush=false}
Requested creating snapshot(s) for [all keyspaces] with snapshot name [20190406190808] and options {skipFlush=false}
Snapshot directory: 20190406190808
INFO: backup created cassandra snapshot 20190406190808
tar: Removing leading `/' from member names
/apigee/data/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/snapshots/
/apigee/data/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/snapshots/20190406190808/
/apigee/data/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/snapshots/20190406190808/manifest.json
/apigee/data/cassandra/data/system/prepared_statements-18a9c2576a0c3841ba718cd529849fef/snapshots/
/apigee/data/cassandra/data/system/prepared_statements-18a9c2576a0c3841ba718cd529849fef/snapshots/20190406190808/
/apigee/data/cassandra/data/system/prepared_statements-18a9c2576a0c3841ba718cd529849fef/snapshots/20190406190808/manifest.json
/apigee/data/cassandra/data/system/range_xfers-55d764384e553f8b9f6e676d4af3976d/snapshots/
/apigee/data/cassandra/data/system/range_xfers-55d764384e553f8b9f6e676d4af3976d/snapshots/20190406190808/
/apigee/data/cassandra/data/system/range_xfers-55d764384e553f8b9f6e676d4af3976d/snapshots/20190406190808/manifest.json
/apigee/data/cassandra/data/system/peer_events-59dfeaea8db2334191ef109974d81484/snapshots/
/apigee/data/cassandra/data/system/peer_events-59dfeaea8db2334191ef109974d81484/snapshots/20190406190808/
/apigee/data/cassandra/data/system/peer_events-59dfeaea8db2334191ef109974d81484/snapshots/20190406190808/manifest.json
/apigee/data/cassandra/data/system/built_views-4b3c50a9ea873d7691016dbc9c38494a/snapshots/
/apigee/data/cassandra/data/system/built_views-4b3c50a9ea873d7691016dbc9c38494a/snapshots/20190406190808/
/apigee/data/cassandra/data/system/built_views-4b3c50a9ea873d7691016dbc9c38494a/snapshots/20190406190808/manifest.json
……
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-Filter.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-CompressionInfo.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-Index.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-Statistics.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-Data.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Index.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Statistics.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-TOC.txt
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-Statistics.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Summary.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Filter.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-Summary.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-Index.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/manifest.json
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-Filter.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-2-big-Digest.crc32
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-Summary.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-Data.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-TOC.txt
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/schema.cql
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-CompressionInfo.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Digest.crc32
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-TOC.txt
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-Data.db
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-3-big-Digest.crc32
/apigee/data/cassandra/data/ks2/user-d6d39d70586311e98e8d875b0ed64754/snapshots/20190406190808/mc-1-big-CompressionInfo.db
……
/tmp/tokens.txt
/ [1 files][    0.0 B/    0.0 B]
Operation completed over 1 objects.
/ [1 files][    0.0 B/    0.0 B]
Operation completed over 1 objects.
INFO: backup created tarball and transferred the file to gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
INFO: removing cassandra snapshot
INFO: backup created tarball and transferred the file to gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
INFO: removing cassandra snapshot
Requested clearing snapshot(s) for [all keyspaces]
INFO: Backup 20190406190808 completed
waiting on process  46
Requested clearing snapshot(s) for [all keyspaces]
INFO: Backup 20190406190808 completed
Requested clearing snapshot(s) for [all keyspaces]
waiting on process  57
INFO: Backup 20190406190808 completed
waiting result
to get schema from 10.32.0.28
INFO: /tmp/schema.cql has been generated
Activated service account credentials for: [apigee-cassandra-backup-svc@gce-myusername.iam.gserviceaccount.com]
tar: removing leading '/' from member names
tmp/schema.cql
Copying from <STDIN>...
/ [1 files][    0.0 B/    0.0 B]
Operation completed over 1 objects.
INFO: backup created tarball and transferred the file to gs://gce-myusername-apigee-cassandra-backup/apigeecluster/dc-1
finished uploading schema.cql

Validation

Validate your restoration by getting the IP address of your ingress and sending some test traffic to it.

  1. Get the EXTERNAL-IP with the following command:
    kubectl get svc -n istio-system
    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
    istio-ingressgateway       LoadBalancer   10.11.123.45   34.56.78.90   15021:32225/TCP,80:32208/TCP,443:31942/TCP,15012:32689/TCP,15443:31936/TCP   1d
    
  2. On the command line, get or refresh your gcloud authentication credentials, as the following example shows:

    TOKEN=$(gcloud auth print-access-token)
  3. Test the ingress using an existing API proxy deployed in your Apigee environment, along with an API key or access token:
    curl -v https://EXTERNAL_IP/BASE_PATH -H 'Host: ENV_GROUP_HOSTALIAS'

    For example:

    curl -v 'https://example.com/oauth/client_credential' \
      --resolve example.com:443:34.56.78.90 \
      -H "Authorization: Bearer $TOKEN"

DNS configuration for the new cluster and traffic cutover

Once you are satisfied with the validation, redirect traffic to the new cluster by changing the DNS entry to point to the new ingress EXTERNAL-IP address.
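For example, if the DNS record for your environment group hostname is managed in Cloud DNS, the cutover might look like the following sketch (the zone, record name, TTL, and old address are placeholders; 34.56.78.90 is the EXTERNAL-IP from the validation example above):

# Start a Cloud DNS change transaction for the zone that holds the record.
gcloud dns record-sets transaction start --zone=my-zone

# Remove the A record that points at the old ingress IP.
gcloud dns record-sets transaction remove --zone=my-zone \
  --name=api.example.com. --type=A --ttl=300 "OLD_EXTERNAL_IP"

# Add an A record that points at the new ingress EXTERNAL-IP.
gcloud dns record-sets transaction add --zone=my-zone \
  --name=api.example.com. --type=A --ttl=300 "34.56.78.90"

# Execute the transaction to apply the change.
gcloud dns record-sets transaction execute --zone=my-zone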