Decommission a hybrid region

This guide explains the procedure for decommissioning a region in a multi-region environment.

Decommission a hybrid region

  1. Set the Kubernetes context to the cluster in the region that needs to be decommissioned.

    List your current contexts to see the context name for each cluster:

    kubectl config get-contexts

    Set the context to the cluster and region you want to decommission:

    kubectl config use-context CONTEXT_NAME

    Where CONTEXT_NAME is the context name for the cluster and region.

    For example:

    kubectl config get-contexts
      CURRENT  NAME                                                CLUSTER                                             AUTHINFO                                           NAMESPACE
               gke_example-org-1_us-central1_example-cluster-1     gke_example-org-1_us-central1_example-cluster-1     gke_example-org-1_us-central1_example-cluster-1    apigee
      *        gke_example-org-1_us-central1_example-cluster-2     gke_example-org-1_us-central1_example-cluster-2     gke_example-org-1_us-central1_example-cluster-2    apigee
               gke_example-org-1_us-west1_example-cluster-2        gke_example-org-1_us-west1_example-cluster-2        gke_example-org-1_us-west1_example-cluster-2       apigee
    
    kubectl config use-context gke_example-org-1_us-west1_example-cluster-2
  2. Validate that all pods in the region are in a Running or Completed state:
    kubectl get pods -n apigee
    kubectl get pods -n apigee-system
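
    All pods should show a STATUS of Running or Completed. As an optional extra check, the following commands (a sketch; the field selector uses the standard Kubernetes pod phases, where Succeeded is the phase behind the Completed status) list only pods in neither of those phases, so any output indicates a pod that needs attention. Note that this does not flag pods that are Running but not ready:

    kubectl get pods -n apigee --field-selector=status.phase!=Running,status.phase!=Succeeded
    kubectl get pods -n apigee-system --field-selector=status.phase!=Running,status.phase!=Succeeded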
  3. Validate the component releases using Helm:
    helm -n apigee list
    helm -n apigee-system list

    For example:

    helm -n apigee list
      NAME              NAMESPACE REVISION  UPDATED                                 STATUS    CHART                         APP VERSION
      datastore         apigee    2         2024-03-29 17:08:07.917848253 +0000 UTC	deployed  apigee-datastore-1.12.0       1.12.0
      ingress-manager   apigee    2         2024-03-29 17:21:02.917333616 +0000 UTC	deployed  apigee-ingress-manager-1.12.0 1.12.0
      redis             apigee    2         2024-03-29 17:19:51.143728084 +0000 UTC	deployed  apigee-redis-1.12.0           1.12.0
      telemetry         apigee    2         2024-03-29 17:16:09.883885403 +0000 UTC	deployed  apigee-telemetry-1.12.0       1.12.0
      exampleorg        apigee    2         2024-03-29 17:21:50.899855344 +0000 UTC	deployed  apigee-org-1.12.0             1.12.0
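
    By default, helm list shows only deployed and failed releases. If a release you expect is missing from the output, the --all flag lists releases in every state (for example, pending or uninstalling):

    helm -n apigee list --all
    helm -n apigee-system list --all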
  4. Validate the status of the Cassandra cluster.

    List the Cassandra pods:

    kubectl get pods -n APIGEE_NAMESPACE -l app=apigee-cassandra

    For example:

    kubectl get pods -n apigee -l app=apigee-cassandra
      NAME                          READY    STATUS     RESTARTS    AGE
      apigee-cassandra-default-0    1/1      Running    0           2h
      apigee-cassandra-default-1    1/1      Running    0           2h
      apigee-cassandra-default-2    1/1      Running    0           2h
      apigee-cassandra-default-3    1/1      Running    0           16m
      apigee-cassandra-default-4    1/1      Running    0           14m
      apigee-cassandra-default-5    1/1      Running    0           13m
      apigee-cassandra-default-6    1/1      Running    0           9m
      apigee-cassandra-default-7    1/1      Running    0           9m
      apigee-cassandra-default-8    1/1      Running    0           8m
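
    As an additional check before deleting anything, you can confirm that every Cassandra node in this region reports UN (Up/Normal) in the nodetool status output (a sketch; replace JMX_USER and JMX_PASSWORD with your Cassandra JMX credentials, as in the final step of this procedure):

    kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u JMX_USER -pw JMX_PASSWORD status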
  5. Delete the Apigee instance in the context you just selected:

    Delete the components one at a time.

    helm -n apigee delete datastore
    
    helm -n apigee delete telemetry
    
    helm -n apigee delete ingress-manager
    
    helm -n apigee delete redis
    
    helm -n apigee delete ORG_NAME
    

    Repeat the following command for every environment:

    helm -n apigee delete ENV_NAME
    

    Repeat the following command for every environment group:

    helm -n apigee delete ENV_GROUP_NAME
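
    If you have many environments and environment groups, a small shell loop such as the following (a sketch; ENV_1, ENV_2, GROUP_1, and GROUP_2 are placeholder release names) runs the same per-environment and per-group deletions:

    # Delete each environment and environment group release in turn.
    for RELEASE in ENV_1 ENV_2 GROUP_1 GROUP_2; do
      helm -n apigee delete "$RELEASE"
    done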
    
    Finally, delete the operator release from the apigee-system namespace:

    helm -n apigee-system delete operator
    
  6. Verify there are no pods remaining in the Apigee namespaces:
    kubectl get pods -n apigee
    kubectl get pods -n apigee-system
    kubectl get pods -n cert-manager
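
    Once all releases are deleted, the apigee and apigee-system namespaces should report an empty pod list. For example:

    kubectl get pods -n apigee
      No resources found in apigee namespace.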
  7. Set the context to each of the remaining regions and verify that the Cassandra datacenter for the decommissioned region has been removed from the ring. The output should not show details for the removed datacenter.
    kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u JMX_USER -pw JMX_PASSWORD status
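
    To quickly confirm which datacenters remain in the ring, you can filter the nodetool output for the datacenter headers (a sketch; the pod name and JMX credentials are the same ones used in the command above). Only the datacenters for the remaining regions should be listed:

    kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u JMX_USER -pw JMX_PASSWORD status | grep "Datacenter:"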