Upgrading a multi-cluster GKE environment with Ingress for Anthos

This tutorial shows how to upgrade a multi-cluster Google Kubernetes Engine (GKE) environment using Ingress. This tutorial is a continuation of the multi-cluster GKE upgrades using Ingress document, which explains the process, architecture, and terms in more detail; we recommend that you read that concept document before starting this tutorial. This document is intended for Google Cloud administrators who are responsible for maintaining fleets of GKE clusters.

We recommend auto-upgrading your GKE clusters. Auto-upgrade is a fully managed way of keeping your clusters (control plane and nodes) updated on a release schedule that Google Cloud determines, with no intervention from the operator. However, if you want more control over how and when clusters are upgraded, this tutorial walks through a method of upgrading multiple clusters that all run your apps, using Ingress to drain one cluster at a time so that it can be upgraded.

Architecture

This tutorial uses the following architecture. There are a total of three clusters: two clusters (blue and green) act as identical clusters with the same app deployed, and one cluster (ingress-config) acts as the control plane cluster that configures Ingress. In this tutorial, you deploy a sample app called Online Boutique on the two app clusters (the blue and green clusters).

The Online Boutique app is a sample microservices app that consists of 10 microservices simulating an ecommerce site. The app consists of a web frontend (that clients can access) and several backend Services, for example, a shopping cart, a product catalog, and a recommendation service.

Architecture of two identical clusters and one control plane cluster.

Objectives

  • Create three GKE clusters and register them to Hub.
  • Configure one GKE cluster (ingress-config) as the central configuration cluster.
  • Deploy Online Boutique as a sample app to the other GKE clusters.
  • Configure Ingress to send client traffic to the Online Boutique app that is running on both app clusters.
  • Set up a load generator to the Online Boutique app and configure monitoring.
  • Remove (drain) one app cluster from the multi-cluster Ingress and upgrade the drained cluster.
  • Shift traffic back to the upgraded cluster using Ingress.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Compute Engine
  • Cloud Load Balancing

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. This tutorial requires you to set up Ingress so that the following requirements are met:
    • Two or more clusters running the same apps, with the same namespaces, Deployments, and Services on all clusters.
    • Auto-upgrade is turned off for all clusters (see the sketch following this list).
    • The clusters use GKE static version 1.14.10-gke.17 or higher, or are in the rapid or regular release channel.
    • The clusters are VPC-native clusters that use alias IP address ranges.
    • HTTP load balancing is enabled (it is enabled by default).
    • The gcloud tool version is 281.0.0 or higher. The GKE cluster registration steps depend on this version or higher.
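
    To check the last two requirements from Cloud Shell, you can run something like the following sketch. The cluster, zone, and node-pool names are placeholders (default-pool is an assumption); substitute your own values:

    # Confirm the gcloud version (must be 281.0.0 or higher).
    gcloud --version

    # Turn off node auto-upgrade for one node pool of one cluster.
    # CLUSTER_NAME, ZONE, and default-pool are placeholder values.
    gcloud container node-pools update default-pool \
        --cluster=CLUSTER_NAME --zone=ZONE \
        --no-enable-autoupgrade
    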
  2. In the Cloud Console, on the project selector page, select or create a Cloud project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  4. In the Cloud Console, activate Cloud Shell.

    Activate Cloud Shell

  5. Set your default project:

    export PROJECT=$(gcloud info --format='value(config.project)')
    gcloud config set project ${PROJECT}
    
  6. Enable the GKE, Hub, and multiclusteringress APIs:

    gcloud services enable container.googleapis.com gkehub.googleapis.com multiclusteringress.googleapis.com
    

Setting up the environment

  1. In Cloud Shell, clone the repository to get the files for this tutorial:

    cd ${HOME}
    git clone https://github.com/ameer00/gke-multicluster-upgrades.git
    
  2. Create a WORKDIR directory:

    cd gke-multicluster-upgrades
    export WORKDIR=$(pwd)
    

Creating and registering GKE clusters to Hub

In this section, you create three GKE clusters and register them to Anthos Hub.

Create GKE clusters

  1. In Cloud Shell, create three GKE clusters:

    gcloud container clusters create ingress-config --zone us-west1-a \
        --num-nodes=3 --enable-ip-alias --async
    gcloud container clusters create blue --zone us-west1-b --num-nodes=4 \
        --enable-ip-alias --async
    gcloud container clusters create green --zone us-west1-c --num-nodes=4 \
        --enable-ip-alias
    

    For the purposes of this tutorial, you create the clusters in three different zones of the us-west1 region: us-west1-a, us-west1-b, and us-west1-c. For more information about zones and regions, see Geography and regions.

  2. Wait a few minutes until all clusters are successfully created. Ensure that the clusters are running:

    gcloud container clusters list
    

    The output is similar to the following:

    NAME            LOCATION    MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
    ingress-config  us-west1-a  1.14.10-gke.24  35.203.165.186  n1-standard-1  1.14.10-gke.24  3          RUNNING
    blue            us-west1-b  1.14.10-gke.24  34.82.76.141    n1-standard-1  1.14.10-gke.24  4          RUNNING
    green           us-west1-c  1.14.10-gke.17  35.197.46.200   n1-standard-1  1.14.10-gke.17  4          RUNNING
    
  3. Create a kubeconfig file and connect to all clusters to generate entries in the kubeconfig file:

    touch gke-upgrade-kubeconfig
    export KUBECONFIG=gke-upgrade-kubeconfig
    gcloud container clusters get-credentials ingress-config \
        --zone us-west1-a --project ${PROJECT}
    gcloud container clusters get-credentials blue --zone us-west1-b \
        --project ${PROJECT}
    gcloud container clusters get-credentials green --zone us-west1-c \
        --project ${PROJECT}
    

    The kubeconfig file holds the authentication for the clusters: it contains a user and context for each cluster. After you create the kubeconfig file, you can quickly switch context between clusters.
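
    For example, you can list the contexts and run a command against any one cluster without changing your current context; this is only an illustrative check:

    # List the contexts stored in the kubeconfig file.
    kubectl config get-contexts -o name

    # Run a command against one cluster by naming its context explicitly.
    kubectl --context ${BLUE_CLUSTER} get nodes
    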

  4. Verify that you have three clusters in the kubeconfig file:

    kubectl config view -ojson | jq -r '.clusters[].name'
    

    The output is the following:

    gke_gke-multicluster-upgrades_us-west1-a_ingress-config
    gke_gke-multicluster-upgrades_us-west1-b_blue
    gke_gke-multicluster-upgrades_us-west1-c_green
    
  5. Get the context for the three clusters for later use:

    export INGRESS_CONFIG_CLUSTER=$(kubectl config view -ojson | jq \
        -r '.clusters[].name' | grep ingress-config)
    export BLUE_CLUSTER=$(kubectl config view -ojson | jq \
        -r '.clusters[].name' | grep blue)
    export GREEN_CLUSTER=$(kubectl config view -ojson | jq \
        -r '.clusters[].name' | grep green)
    echo -e "${INGRESS_CONFIG_CLUSTER}\n${BLUE_CLUSTER}\n${GREEN_CLUSTER}"
    

    The output is the following:

    gke_gke-multicluster-upgrades_us-west1-a_ingress-config
    gke_gke-multicluster-upgrades_us-west1-b_blue
    gke_gke-multicluster-upgrades_us-west1-c_green
    

Register GKE clusters to Hub

Hub lets you operate your Kubernetes clusters in hybrid environments. Hub also lets registered clusters use advanced GKE features such as Ingress. To register a GKE cluster to Hub, you create a Google Cloud service account and then grant the required Identity and Access Management (IAM) role to that service account.

  1. In Cloud Shell, create a service account and download its credentials:

    gcloud iam service-accounts create ingress-svc-acct
    gcloud projects add-iam-policy-binding ${PROJECT} \
        --member="serviceAccount:ingress-svc-acct@${PROJECT}.iam.gserviceaccount.com" \
        --role="roles/gkehub.connect"
    gcloud iam service-accounts keys \
        create ${WORKDIR}/ingress-svc-acct.json \
        --iam-account=ingress-svc-acct@${PROJECT}.iam.gserviceaccount.com
    
  2. Get the URIs from the GKE clusters:

    export INGRESS_CONFIG_URI=$(gcloud container clusters list --uri | grep ingress-config)
    export BLUE_URI=$(gcloud container clusters list --uri | grep blue)
    export GREEN_URI=$(gcloud container clusters list --uri | grep green)
    echo -e "${INGRESS_CONFIG_URI}\n${BLUE_URI}\n${GREEN_URI}"
    

    The output is the following:

    https://container.googleapis.com/v1/projects/gke-multicluster-upgrades/zones/us-west1-a/clusters/ingress-config
    https://container.googleapis.com/v1/projects/gke-multicluster-upgrades/zones/us-west1-b/clusters/blue
    https://container.googleapis.com/v1/projects/gke-multicluster-upgrades/zones/us-west1-c/clusters/green
    
  3. Register the three clusters to the Hub:

    gcloud container hub memberships register ingress-config \
        --project=${PROJECT} \
        --gke-uri=${INGRESS_CONFIG_URI} \
        --service-account-key-file=${WORKDIR}/ingress-svc-acct.json
    
    gcloud container hub memberships register blue \
        --project=${PROJECT} \
        --gke-uri=${BLUE_URI} \
        --service-account-key-file=${WORKDIR}/ingress-svc-acct.json
    
    gcloud container hub memberships register green \
        --project=${PROJECT} \
        --gke-uri=${GREEN_URI} \
        --service-account-key-file=${WORKDIR}/ingress-svc-acct.json
    
  4. Verify that the clusters are registered:

    gcloud container hub memberships list
    

    The output is similar to the following:

    NAME            EXTERNAL_ID
    blue            d40521d9-693f-11ea-a26c-42010a8a0010
    green           d3027ecd-693f-11ea-ad5f-42010a8a00a9
    ingress-config  bb778338-693f-11ea-a053-42010a8a016a
    
  5. Configure the ingress-config cluster as the configuration cluster for Ingress by enabling the multiclusteringress feature through the Hub:

    gcloud alpha container hub ingress enable \
        --config-membership=projects/${PROJECT}/locations/global/memberships/ingress-config
    

    The preceding command adds the MulticlusterIngress and MulticlusterService CustomResourceDefinitions (CRDs) to the config cluster. The command takes a few minutes to complete; wait for it to finish before proceeding to the next step.
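
    Optionally, you can confirm that the CRDs are present on the config cluster. The grep pattern below is a loose match, because the exact CRD names are an implementation detail:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} get crds | grep -i multicluster
    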

  6. Verify that the ingress-config cluster was successfully configured for Ingress:

    watch gcloud alpha container hub ingress describe
    

    Wait until the output is similar to the following:

    createTime: '2020-03-18T18:13:46.530713607Z'
    
    featureState:
      details:
        code: OK
        description: Multicluster Ingress requires Anthos license enablement. Unlicensed
          usage is unrestricted for the MCI Beta API. Note that licensing will be enforced
          for use of the Generally Available MCI API.
      detailsByMembership:
        projects/960583074711/locations/global/memberships/blue:
          code: OK
        projects/960583074711/locations/global/memberships/green:
          code: OK
        projects/960583074711/locations/global/memberships/ingress-config:
          code: OK
      lifecycleState: ENABLED
    multiclusteringressFeatureSpec:
      configMembership: projects/gke-multicluster-upgrades/locations/global/memberships/ingress-config
    name: projects/gke-multicluster-upgrades/locations/global/features/multiclusteringress
    

    To exit the watch command, press Control+C.

Deploy the Online Boutique app to the blue and green clusters

  1. In Cloud Shell, deploy the Online Boutique app to the blue and green clusters:

    kubectl --context ${BLUE_CLUSTER} apply -f ${WORKDIR}/hipster-shop
    kubectl --context ${GREEN_CLUSTER} apply -f ${WORKDIR}/hipster-shop
    
  2. Wait a few minutes and ensure that all Pods in the blue and green clusters have a Running status:

    kubectl --context ${BLUE_CLUSTER} get pods
    kubectl --context ${GREEN_CLUSTER} get pods
    

    The output is similar to the following:

    NAME                                     READY   STATUS    RESTARTS   AGE
    adservice-86f5dfb995-nlm5w               1/1     Running   0          10m
    cartservice-76cf9686b6-rxf7b             1/1     Running   0          10m
    checkoutservice-7766b946f5-qszvc         1/1     Running   0          10m
    currencyservice-76975c7847-vmwn7         1/1     Running   0          10m
    emailservice-c55cd96f-74rxs              1/1     Running   0          10m
    frontend-f4b7cd95-lk4k8                  1/1     Running   0          10m
    loadgenerator-6784bc5f77-bkx4c           1/1     Running   0          10m
    paymentservice-696f796545-8sjp5          1/1     Running   0          10m
    productcatalogservice-7975f8588c-blrbq   1/1     Running   0          10m
    recommendationservice-6d44459d79-xxb8w   1/1     Running   0          10m
    redis-cart-6448dcbdcc-8qcb4              1/1     Running   0          10m
    shippingservice-767f84b88c-8f26h         1/1     Running   0          10m
    

Configure multi-cluster Ingress

In this section, you create a multi-cluster Ingress that sends traffic for the Online Boutique frontend to both the blue and green clusters. You use Cloud Load Balancing to create a load balancer that uses the frontend Services in both the blue and green clusters as backends. To create the load balancer, you need two resources: a MultiClusterIngress and one or more MultiClusterServices. MultiClusterIngress and MultiClusterService objects are the multi-cluster analogs of the Kubernetes Ingress and Service resources used in the single-cluster context.
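
For orientation, here is a minimal sketch of what a MultiClusterIngress manifest such as mci.yaml might contain, reconstructed from the spec that you inspect later in this section. The apiVersion and namespace shown are assumptions; match them to the CRDs installed in your cluster and to the repository's actual file:

    # Sketch only; compare against the mci.yaml file in the cloned repository.
    apiVersion: networking.gke.io/v1   # assumed; check your installed CRD versions
    kind: MultiClusterIngress
    metadata:
      name: frontend-multicluster-ingress
      namespace: default               # assumed namespace
    spec:
      template:
        spec:
          backend:
            serviceName: frontend-multicluster-svc
            servicePort: 80
    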

  1. In Cloud Shell, deploy the MulticlusterIngress resource to the ingress-config cluster:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} apply -f mci.yaml
    

    The output is the following:

    multiclusteringress.networking.gke.io/frontend-multicluster-ingress created
    
  2. Deploy the MulticlusterService resource to the ingress-config cluster:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} apply -f mcs-blue-green.yaml
    

    The output is the following:

    multiclusterservice.networking.gke.io/frontend-multicluster-svc created
    
  3. To compare the two resources, do the following:

    • Inspect the MulticlusterIngress resource:

      kubectl --context ${INGRESS_CONFIG_CLUSTER} get multiclusteringress -o yaml
      

      The output is the following:

      spec:
        template:
          spec:
            backend:
              serviceName: frontend-multicluster-svc
              servicePort: 80
      

      The MulticlusterIngress resource is similar to the Kubernetes Ingress resource except that the serviceName specification points to a MulticlusterService resource.

    • Inspect the MulticlusterService resource:

      kubectl --context ${INGRESS_CONFIG_CLUSTER} get multiclusterservice -o yaml
      

      The output is the following:

      spec:
        clusters:
        - link: us-west1-b/blue
        - link: us-west1-c/green
        template:
          spec:
            ports:
            - name: web
              port: 80
              protocol: TCP
              targetPort: 8080
            selector:
              app: frontend
      

      The MulticlusterService resource is similar to a Kubernetes Service resource, except it has a clusters specification. The clusters value is the list of registered clusters where the MulticlusterService resource is created.
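
      Behind the scenes, Ingress creates derived Services in each cluster listed in the clusters specification, and those derived Services back the load balancer's network endpoint groups. You can look for them in the app clusters; the mci- name prefix used here is an observed implementation detail, not a guaranteed contract:

      kubectl --context ${BLUE_CLUSTER} get services | grep mci-
      kubectl --context ${GREEN_CLUSTER} get services | grep mci-
      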

    • Verify that the MulticlusterIngress resource created a load balancer with a backend service pointing to the MulticlusterService resource:

      watch kubectl --context ${INGRESS_CONFIG_CLUSTER} get multiclusteringress -o jsonpath="{.items[].status.VIP}"
      

      Wait until the output is similar to the following:

      34.107.246.9
      

      Make a note of the virtual IP address (VIP).

      To exit the watch command, press Control+C.

  4. To access the Online Boutique app, in your browser, go to the virtual IP address.

    If you see a 404 error message, wait a bit longer and refresh the page until you see the Online Boutique frontend.

    Browse through the app and make a few test purchases. At this point, the Online Boutique app is deployed to both blue and green clusters and the MulticlusterIngress resource is set up to send traffic to both clusters.
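
    Instead of refreshing the browser manually, you can poll the VIP from Cloud Shell until the load balancer starts returning HTTP 200 responses. This loop is an optional convenience, not a required step:

    # Replace VIP with the virtual IP address that you noted earlier.
    until [[ "$(curl -s -o /dev/null -w '%{http_code}' http://VIP)" == "200" ]]; do
        echo "Waiting for the load balancer to become ready..."
        sleep 10
    done
    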

Setting up the load generator

In this section, you set up a loadgenerator Deployment that generates client traffic to the Cloud Load Balancing VIP. Initially, traffic is sent to both the blue and green clusters because the MulticlusterService resource is set up to send traffic to both clusters. Later, you configure the MulticlusterService resource to send traffic to a single cluster.

  1. In Cloud Shell, get the Cloud Load Balancing VIP:

    export GCLB_VIP=$(kubectl --context ${INGRESS_CONFIG_CLUSTER} get multiclusteringress -o json | jq -r '.items[].status.VIP')
    echo ${GCLB_VIP}
    

    The output is similar to the following:

    34.107.246.9
    
  2. Configure the loadgenerator Service to send client traffic to Cloud Load Balancing:

    sed -i 's/GCLB_VIP/'${GCLB_VIP}'/g' ${WORKDIR}/load-generator/loadgenerator.yaml
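
    Optionally, confirm that the GCLB_VIP placeholder was replaced with the address:

    grep ${GCLB_VIP} ${WORKDIR}/load-generator/loadgenerator.yaml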
    
  3. Deploy the loadgenerator in the ingress-config cluster:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} apply -f ${WORKDIR}/load-generator
    
  4. Verify that the loadgenerator Pods in the ingress-config cluster all have a status of Running:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} get pods
    

    The output is similar to the following:

    NAME                             READY   STATUS    RESTARTS   AGE
    loadgenerator-5498cbcb86-hqscp   1/1     Running   0          53s
    loadgenerator-5498cbcb86-m2z2z   1/1     Running   0          53s
    loadgenerator-5498cbcb86-p56qb   1/1     Running   0          53s
    

    If any of the Pods don't have a status of Running, wait a few minutes and then run the command again.

Monitoring traffic

In this section, you monitor the traffic to the Online Boutique app using the Cloud Console.

In the previous section, you set up a loadgenerator Deployment that simulates Online Boutique client traffic by accessing the app through the Cloud Load Balancing VIP. The resulting load balancer metrics are available in the Cloud Console. You set up monitoring first so that you can observe traffic while you drain clusters for upgrades (described in the next section).

  1. In Cloud Shell, get the name of the forwarding rule for the frontend-multicluster-ingress MulticlusterIngress resource:

    export INGRESS_LB_RULE=$(gcloud compute forwarding-rules list | grep frontend-multicluster-ingress | awk '{print $4}')
    echo ${INGRESS_LB_RULE}
    

    The output is similar to the following:

    mci-h8zu63-default-frontend-multicluster-ingress
    
  2. Generate the URL for the app:

    echo "https://console.cloud.google.com/net-services/loadbalancing/details/http/${INGRESS_LB_RULE}?project=${PROJECT}&tab=monitoring&duration=PT1H"
    

    The output is similar to the following:

    https://console.cloud.google.com/net-services/loadbalancing/details/http/mci-h8zu63-default-frontend-multicluster-ingress?project=gke-multicluster-upgrades&tab=monitoring&duration=PT1H
    
  3. In a browser, go to the URL generated by the preceding command. In the Backend drop-down list, select the backend that begins with mci.

    Selecting the backend.

    The traffic to the Online Boutique app goes to both the blue and green clusters (indicated by the two zones that the clusters are in). The timeline metrics chart shows traffic going to both backends. In the Backend column, the k8s1- values indicate that the network endpoint groups (NEGs) for the two frontend MulticlusterServices are running in the blue and green clusters.

    Timeline metrics chart showing traffic flowing to both backends.

Draining and upgrading the green cluster

In this section, you drain the green cluster. Draining a cluster means removing it from the load balancing pool. After you drain the green cluster, all client traffic destined for the Online Boutique app goes to the blue cluster. You can monitor this process as described in the previous section. After the cluster is drained, you can upgrade it, and then put it back in the load balancing pool. You repeat these steps to upgrade the other cluster (not shown in this tutorial).

To drain the green cluster, you update the MulticlusterService resource in the ingress-config cluster and remove the green cluster from the clusters specification, as sketched in the following manifest.
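
Based on the MulticlusterService spec that you inspected earlier, the mcs-blue.yaml file in the repository presumably matches mcs-blue-green.yaml except that its clusters list names only the blue cluster. The following is a sketch, not the repository's exact file; the apiVersion and namespace are assumptions:

    # Sketch only; compare against the mcs-blue.yaml file in the cloned repository.
    apiVersion: networking.gke.io/v1   # assumed; check your installed CRD versions
    kind: MultiClusterService
    metadata:
      name: frontend-multicluster-svc
      namespace: default               # assumed namespace
    spec:
      clusters:
      - link: us-west1-b/blue          # the green cluster is omitted
      template:
        spec:
          ports:
          - name: web
            port: 80
            protocol: TCP
            targetPort: 8080
          selector:
            app: frontend
    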

Drain the green cluster

  1. In Cloud Shell, update the MulticlusterService resource in the ingress-config cluster:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} apply -f mcs-blue.yaml
    
  2. Verify that you only have the blue cluster in the clusters specification:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} get multiclusterservice -o json | jq '.items[].spec.clusters'
    

    The output is the following:

    {
        "link": "us-west1-b/blue"
    }

    Only the blue cluster is listed in the clusters specification, so only the blue cluster is in the load balancing pool.

  3. You can see the Cloud Load Balancing metrics in the Cloud Console. Generate the URL:

    echo "https://console.cloud.google.com/net-services/loadbalancing/details/http/${INGRESS_LB_RULE}?project=${PROJECT}&tab=monitoring&duration=PT1H"
    
  4. In a browser, go to the URL generated from the previous command.

    The chart shows that only the blue cluster is receiving traffic for the Online Boutique app.

    Traffic flowing to the blue cluster only.

Upgrade the green cluster

Now that the green cluster is no longer receiving any client traffic, you can upgrade the cluster (control plane and nodes).

  1. In Cloud Shell, get the current version of the clusters:

    gcloud container clusters list
    

    The output is similar to the following:

    NAME            LOCATION    MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
    ingress-config  us-west1-a  1.14.10-gke.24  35.203.165.186  n1-standard-1  1.14.10-gke.24  3          RUNNING
    blue            us-west1-b  1.14.10-gke.24  34.82.76.141    n1-standard-1  1.14.10-gke.24  4          RUNNING
    green           us-west1-c  1.14.10-gke.24  35.197.46.200   n1-standard-1  1.14.10-gke.24  4          RUNNING
    
    

    Your cluster versions might be different depending on when you complete this tutorial.

  2. Get the list of available control plane versions (validMasterVersions) in the zone:

    gcloud container get-server-config --zone us-west1-c --format=json | jq '.validMasterVersions'
    

    The output is similar to the following:

    [
      "1.15.9-gke.22",
      "1.15.9-gke.12",
      "1.15.9-gke.9",
      "1.15.9-gke.8",
      "1.15.8-gke.3",
      "1.14.10-gke.27",
      "1.14.10-gke.24",
      "1.14.10-gke.22",
      "1.14.10-gke.21",
      "1.14.10-gke.17",
      "1.13.12-gke.30"
    ]
    
  3. Get the list of available node versions (validNodeVersions) in the zone:

    gcloud container get-server-config --zone us-west1-c --format=json | jq '.validNodeVersions[0:20]'
    

    The output is similar to the following:

    [
      "1.15.9-gke.22",
      "1.15.9-gke.12",
      "1.15.9-gke.9",
      "1.15.9-gke.8",
      "1.15.8-gke.3",
      "1.15.8-gke.2",
      "1.15.7-gke.23",
      "1.15.7-gke.2",
      "1.15.4-gke.22",
      "1.14.10-gke.27",
      "1.14.10-gke.24",
      "1.14.10-gke.22",
      "1.14.10-gke.21",
      "1.14.10-gke.17",
      "1.14.10-gke.0",
      "1.14.9-gke.23",
      "1.14.9-gke.2",
      "1.14.9-gke.0",
      "1.14.8-gke.33",
      "1.14.8-gke.21"
    ]
    
  4. Set an environment variable to a control plane and node version that is higher than the current version of the green cluster:

    export UPGRADE_VERSION="1.16.13-gke.401"
    

    This tutorial uses version 1.16.13-gke.401. Your available versions might be different depending on when you complete this tutorial; pick one of the available versions from your lists. For more information about upgrading, see upgrading clusters and node pools. We recommend picking the same version for the control plane and all nodes.
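
    If you prefer to select the version programmatically, a sketch like the following picks the first valid control plane version in the zone. It assumes that get-server-config lists versions newest-first, so verify the result before you upgrade:

    export UPGRADE_VERSION=$(gcloud container get-server-config \
        --zone us-west1-c --format=json | jq -r '.validMasterVersions[0]')
    echo ${UPGRADE_VERSION}
    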

  5. Upgrade the control plane of the green cluster:

    gcloud container clusters upgrade green \
        --zone us-west1-c --master --cluster-version ${UPGRADE_VERSION}
    

    To confirm the upgrade, press Y.

    This process takes a few minutes to complete. Wait until the upgrade is complete before proceeding.

    After the update is complete, the output is the following:

    Updated
    [https://container.googleapis.com/v1/projects/gke-multicluster-upgrades/zones/us-west1-c/clusters/green].
    
  6. Upgrade the nodes in the green cluster:

    gcloud container clusters upgrade green \
        --zone=us-west1-c --node-pool=default-pool \
        --cluster-version ${UPGRADE_VERSION}
    

    To confirm the update, press Y.

    This process takes a few minutes to complete. Wait until the node upgrade is complete before proceeding.

    After the upgrade is complete, the output is the following:

    Upgrading green... Done with 4 out of 4 nodes (100.0%): 4 succeeded...done.
    Updated [https://container.googleapis.com/v1/projects/gke-multicluster-upgrades/zones/us-west1-c/clusters/green].
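
    While the node upgrade runs, you can watch the nodes drain and re-register at the new version from the cluster itself; the VERSION column reports each node's kubelet version:

    watch kubectl --context ${GREEN_CLUSTER} get nodes

    To exit the watch command, press Control+C.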
    
  7. Verify that the green cluster is upgraded:

    gcloud container clusters list
    

    The output is similar to the following:

    NAME            LOCATION    MASTER_VERSION   MASTER_IP       MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
    ingress-config  us-west1-a  1.14.10-gke.24   35.203.165.186  n1-standard-1  1.14.10-gke.24   3          RUNNING
    blue            us-west1-b  1.14.10-gke.24   34.82.76.141    n1-standard-1  1.14.10-gke.24   4          RUNNING
    green           us-west1-c  1.16.13-gke.401  35.197.46.200   n1-standard-1  1.16.13-gke.401  4          RUNNING
    

Add the green cluster to the load balancing pool

In this section, you add the green cluster back into the load balancing pool.

  1. In Cloud Shell, verify that all Online Boutique app deployments are running on the green cluster before you add it back to the load balancing pool:

    kubectl --context ${GREEN_CLUSTER} get pods
    

    The output is similar to the following:

    NAME                                     READY   STATUS    RESTARTS   AGE
    adservice-86f5dfb995-2b25h               1/1     Running   0          16m
    cartservice-76cf9686b6-ws7b7             1/1     Running   1          13m
    checkoutservice-7766b946f5-6fhjh         1/1     Running   0          9m50s
    currencyservice-76975c7847-rf8r7         1/1     Running   0          13m
    emailservice-c55cd96f-pht8h              1/1     Running   0          13m
    frontend-f4b7cd95-wxdsh                  1/1     Running   0          13m
    loadgenerator-6784bc5f77-6b4cd           1/1     Running   6          6m34s
    paymentservice-696f796545-9wrl7          1/1     Running   0          9m49s
    productcatalogservice-7975f8588c-kbm5k   1/1     Running   0          6m33s
    recommendationservice-6d44459d79-km8vm   1/1     Running   0          9m49s
    redis-cart-6448dcbdcc-sjg69              1/1     Running   0          13m
    shippingservice-767f84b88c-gh9m4         1/1     Running   0          9m49s
    
  2. Update the MulticlusterService resource to add the green cluster back to the load balancing pool:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} apply -f mcs-blue-green.yaml
    
  3. Verify that you have both blue and green clusters in the clusters specification:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} get multiclusterservice -o json | jq '.items[].spec.clusters'
    

    The output is the following:

    {
        "link": "us-west1-b/blue"
    },
    {
        "link": "us-west1-c/green"
    }
    

    The blue and green clusters are now in the clusters specification.

  4. Cloud Load Balancing metrics are available in the Cloud Console. Generate the URL:

    echo "https://console.cloud.google.com/net-services/loadbalancing/details/http/${INGRESS_LB_RULE}?project=${PROJECT}&tab=monitoring&duration=PT1H"
    
  5. In a browser, go to the URL generated by the previous command.

    The chart shows that both the blue and green clusters are receiving traffic for the Online Boutique app.

    Both clusters receiving traffic.

    Congratulations. You successfully upgraded a GKE cluster in a multi-cluster architecture using Ingress.

  6. To upgrade the blue cluster, repeat the process that you used to drain and upgrade the green cluster, replacing green with blue throughout; the drain step is sketched below.
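
    For example, the drain step for the blue cluster would apply a MulticlusterService manifest whose clusters list names only the green cluster. The repository might not include such a file; the mcs-green.yaml filename below is hypothetical, so create the manifest yourself (mirroring mcs-blue.yaml) if needed:

    # mcs-green.yaml is a hypothetical manifest that lists only us-west1-c/green.
    kubectl --context ${INGRESS_CONFIG_CLUSTER} apply -f mcs-green.yaml
    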

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial:

The easiest way to eliminate billing is to delete the Cloud project that you created for the tutorial. Alternatively, you can delete the individual resources.

Delete the clusters

  1. In Cloud Shell, unregister and delete the blue and green clusters:

    gcloud container hub memberships unregister green \
        --project=${PROJECT} \
        --gke-uri=${GREEN_URI}
    
    gcloud container clusters delete green --zone us-west1-c --quiet
    
    gcloud container hub memberships unregister blue \
        --project=${PROJECT} \
        --gke-uri=${BLUE_URI}
    
    gcloud container clusters delete blue --zone us-west1-b --quiet
    
  2. Delete the MulticlusterIngress resource from the ingress-config cluster:

    kubectl --context ${INGRESS_CONFIG_CLUSTER} delete -f mci.yaml
    

    This command deletes the Cloud Load Balancing resources from the project.

  3. Unregister and delete the ingress-config cluster:

    gcloud container hub memberships unregister ingress-config \
        --project=${PROJECT} \
        --gke-uri=${INGRESS_CONFIG_URI}
    gcloud container clusters delete ingress-config --zone us-west1-a --quiet
    
  4. Verify that all clusters are deleted:

    gcloud container clusters list
    

    The command returns no clusters, because all of the clusters have been deleted.
  5. Reset the kubeconfig file:

    unset KUBECONFIG
    
  6. Remove the WORKDIR folder:

    cd ${HOME}
    rm -rf ${WORKDIR}
    

Delete the project

  1. In the Cloud Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project that you want to delete and then click Delete.
  3. In the dialog, type the project ID and then click Shut down to delete the project.

What's next