Implementing deployment and testing strategies on GKE

Getting new software releases to your users without interrupting their experience is difficult. How do you upgrade an application that's in active use? If you take an application offline for an update, what happens if something goes wrong? Application downtime is expensive, so most enterprises strive to minimize or eliminate it.

This tutorial uses Google Kubernetes Engine (GKE) to walk through several software deployment strategies (recreate, rolling update, and blue/green) and testing strategies (canary, shadow, and A/B). For more information about these strategies, see Application deployment and testing strategies.

Testing patterns such as canary, shadow, and A/B testing require that you split traffic between multiple services that are deployed on the GKE cluster. To split traffic, you use Istio. Istio creates a network of deployed services that provides capabilities such as load balancing, traffic routing, service-to-service authentication, and monitoring. No changes to the application code are required to run Istio.

This tutorial is intended for system administrators and DevOps engineers who define and implement release and deployment strategies for various applications, systems, and frameworks. This tutorial assumes that you're familiar with core Kubernetes concepts.

Example workload

This tutorial uses a Spring Boot application that is deployed on GKE. This application is packaged as container images that represent the current (app:current) and new (app:new) versions of the application. In the following sections, the Kubernetes deployments use these container images.

The application version information is exposed through the /version endpoint. In the following sections, you use this endpoint to monitor the version of the deployed application:

curl http://ip-address-current/version
{"id":1,"content":"current"}

curl http://ip-address-new/version
{"id":2,"content":"new"}

Replace the following:

  • ip-address-current: The IP address for the current application version.
  • ip-address-new: The IP address for the new application version.

Objectives

Each of the deployment and testing patterns discussed in this tutorial has the following objectives:

  • Create a GKE cluster.
  • Deploy a sample application.
  • Verify that the application is serving traffic.
  • Deploy or test a new version of the application by using a specific method.
  • Verify that the new version is deployed.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Compute Engine

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. In the Cloud Console, go to the project selector page.

    Go to the project selector page

  2. Select or create a Cloud project.

  3. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  4. In the Cloud Console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Cloud SDK already installed, including the gcloud command-line tool, and with values already set for your current project. It can take a few seconds for the session to initialize.

  5. Export the following environment variables:
      export PROJECT=$(gcloud config get-value project)
      export ZONE="us-central1-a"
      export CLUSTER="test-cluster"

    These environment variables are used to create a cluster named test-cluster in the us-central1-a zone. For more information, see Geography and regions.
  6. Enable the APIs for Compute Engine, Google Kubernetes Engine, Container Analysis, and Container Registry:
      gcloud services enable \
          compute.googleapis.com \
          container.googleapis.com \
          containeranalysis.googleapis.com \
          containerregistry.googleapis.com

Creating a GKE cluster

  • Create a GKE cluster with Istio enabled:

    gcloud beta container clusters create $CLUSTER --zone $ZONE \
        --addons Istio \
        --istio-config auth=MTLS_PERMISSIVE
    

Cloning the Git repository

  1. Clone the code repository:

    git clone \
        https://github.com/GoogleCloudPlatform/kubernetes-deployment-patterns
    
  2. Change to the working directory:

    cd kubernetes-deployment-patterns
    

Testing your container images

This tutorial uses pre-built container images for the current and new versions of the application. The images are publicly available in Container Registry under gcr.io/cloud-solutions-images. The Git repository that accompanies this tutorial includes the source code that was used to build the container images.

To verify that the container images work as intended, follow these steps:

  1. Pull the container image app:current from the Container Registry and run the container:

    docker run --name curr -d \
        -p 9001:8080 gcr.io/cloud-solutions-images/app:current && \
        while ! curl -s http://localhost:9001/version; \
        do sleep 5; done
    

    The output is similar to the following:

    [...]
    {"id":1,"content":"current"}
    
  2. Pull the container image app:new from the Container Registry and run the container:

    docker run --name new -d \
        -p 9002:8080 gcr.io/cloud-solutions-images/app:new && \
        while ! curl -s http://localhost:9002/version; \
        do sleep 5; done
    

    The output is similar to the following:

    [...]
    {"id":2,"content":"new"}
    
  3. Stop the running containers:

    docker rm -f curr && docker rm -f new
    

Deploying the new application version

In this section, you use different deployment patterns (recreate, rolling update, and blue/green) to deploy a new application version (app:new) that replaces the current version (app:current). For each pattern, you deploy the current version, test it, and then deploy the new version. After each pattern, you delete the resources that you used in that deployment and try the next deployment method.

Perform a recreate deployment

With a recreate deployment, you terminate the current version of your application and then roll out the new version, as the following diagram shows.

The flow of a recreate deployment.

To try this pattern, you perform the following steps:

  • Deploy the current version of the application (app:current) on the GKE cluster.
  • Test the deployed application by generating traffic against it.
  • Deploy a new version of the application (app:new).
  • Verify that the current version is terminated and that the traffic switched from app:current to app:new.
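
The repository expresses this pattern through the Deployment's strategy field. The following is a minimal sketch of what a manifest like recreate/deployment-old.yaml is likely configured as; the replica count and label values are illustrative assumptions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 3                  # assumed replica count
      strategy:
        type: Recreate             # terminate all old Pods before creating new ones
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
          - name: app
            image: gcr.io/cloud-solutions-images/app:current
            ports:
            - containerPort: 8080

With type: Recreate, Kubernetes scales the old ReplicaSet down to zero before it creates the new one, which is why downtime occurs during the update.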

Deploy the current version

  1. Deploy the current application version:

    kubectl apply -f recreate/deployment-old.yaml
    

    A Kubernetes deployment named app is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app
    

    The output is similar to the following:

    [...]
    deployment "app" successfully rolled out
    
  3. Create a Kubernetes service that serves external traffic:

    kubectl apply -f recreate/service.yaml
    
  4. Verify that the service is created successfully:

    kubectl get svc/app -w
    

    Before proceeding, check that an external IP address is allocated, as indicated by output similar to the following:

    NAME  TYPE          CLUSTER-IP    EXTERNAL-IP      PORT(S)         AGE
    app   LoadBalancer  10.11.254.22  <pending>        8080:30343/TCP  36s
    app   LoadBalancer  10.11.254.22  [EXTERNAL_IP]    8080:30343/TCP  37s
    

    To end the watch loop, press CTRL+C.

Test the deployment

  1. In Cloud Shell, click Open a new tab to start a new Cloud Shell session.
  2. Get the load balancer IP address:

    SERVICE_IP=$(kubectl get svc app \
        -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  3. Generate traffic against the deployed application:

    while(true); do \
        curl "http://${SERVICE_IP}:8080/version"; echo; sleep 2; done
    

    After a few seconds, the output is similar to the following:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    [...]
    

    Keep the curl command running. You revisit this session after you deploy the new version.

Deploy the new version

  1. In your original Cloud Shell session, deploy the new application version:

    kubectl apply -f recreate/deployment-new.yaml
    
  2. In the terminal session where you ran the curl command, monitor the output to verify that the traffic switched.

    Old replicas are terminated, and then new replicas with the updated version are created. Downtime occurs during the update. After the update finishes, the new version serves requests.

    The output shows the initial, intermediate, and final states of the update:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    
    curl: (7) Failed to connect to
    [EXTERNAL_IP] port 8080:
    Connection refused
    
    {"id":2,"content":"new"}
    {"id":2,"content":"new"}
    {"id":2,"content":"new"}
    

    To stop the curl command, press CTRL+C.

Clean up the resources

  1. In your original Cloud Shell session, clean up the deployment and service resources used in this example:

    kubectl delete -f recreate/ --ignore-not-found
    
  2. Close the other Cloud Shell session.

Perform a rolling update deployment

With a rolling update deployment, you replace the current version of your application by gradually rolling out the new version, as the following diagram shows.

The flow of a rolling update deployment.

To try this pattern, you perform the following steps:

  • Deploy the current version of the application (app:current) on the GKE cluster.
  • Test the deployed application by generating traffic against it.
  • Deploy a new version of the application (app:new).
  • Verify that the traffic switched from app:current to app:new.
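
The gradual replacement is controlled by the Deployment's rollingUpdate settings and a readiness probe. The following is a minimal sketch of the stanzas that a manifest like rollingupdate/deployment-new.yaml is likely to contain; the replica count, surge values, and probe path are illustrative assumptions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 3                  # assumed; minimum serving Pods = replicas - maxUnavailable
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1              # at most one Pod above the desired count at a time
          maxUnavailable: 1        # at most one Pod can be unavailable during the update
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
          - name: app
            image: gcr.io/cloud-solutions-images/app:new
            ports:
            - containerPort: 8080
            readinessProbe:        # gates traffic until the new Pod responds
              httpGet:
                path: /version     # assumed probe path
                port: 8080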

Deploy the current version

  1. Deploy the current application version:

    kubectl apply -f rollingupdate/deployment-old.yaml
    

    A Kubernetes deployment named app is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app
    

    The output is similar to the following:

    [...]
    deployment "app" successfully rolled out
    
  3. Create a Kubernetes service to serve external traffic:

    kubectl apply -f rollingupdate/service.yaml
    
  4. Verify that the service is created successfully:

    kubectl get svc/app -w
    

    Before proceeding, wait for an external IP address to be allocated, as indicated by output similar to the following:

    NAME  TYPE          CLUSTER-IP    EXTERNAL-IP      PORT(S)         AGE
    app   LoadBalancer  10.11.254.22  <pending>        8080:30343/TCP  36s
    app   LoadBalancer  10.11.254.22  [EXTERNAL_IP]    8080:30343/TCP  37s
    

    To end the watch loop, press CTRL+C.

Test the deployment

  1. In Cloud Shell, click Open a new tab to start a new Cloud Shell session.
  2. Get the load balancer IP address:

    SERVICE_IP=$(kubectl get svc app \
        -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  3. Generate traffic against the deployed application:

    while(true); do \
        curl "http://${SERVICE_IP}:8080/version"; echo; sleep 2; done
    

    After a few seconds, the output is similar to the following:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    [...]
    

    Keep the curl command running. You revisit this session after you deploy the new version.

Deploy the new version

  1. In your original Cloud Shell session, deploy the new application version:

    kubectl apply -f rollingupdate/deployment-new.yaml
    
  2. In the terminal session where you ran the curl command, monitor the responses to verify that the traffic switched.

    The rolling update configuration ensures that at least two Pods serve client traffic at any time (calculated as the number of replicas minus maxUnavailable), and that at most one Pod above the desired replica count is created at a time (defined by maxSurge).

    Readiness probes verify that the new Pods are ready to serve traffic before requests are routed to them, and liveness probes restart Pods that become unhealthy. During the update, the active Pods serve the incoming requests. After all replicas are updated, the new application version serves all requests.

    The output shows the initial, intermediate, and final states of the update:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":2,"content":"new"}
    
    {"id":2,"content":"new"}
    {"id":2,"content":"new"}
    {"id":2,"content":"new"}
    

    To stop the curl command, press CTRL+C.

Clean up the resources

  1. In your original Cloud Shell session, clean up the deployment and service resources used in this example:

    kubectl delete -f rollingupdate/ --ignore-not-found
    
  2. Close the other Cloud Shell session.

Perform a blue/green deployment

With a blue/green deployment, you release the new version of your application alongside the current version. After you conduct appropriate tests, you switch the traffic to the new version, as the following diagram shows.

The flow of a blue/green deployment.

To try this pattern, you perform the following steps:

  • Deploy the current version of the application (app:current) on the GKE cluster.
  • Test the deployed application by generating traffic against it.
  • Deploy a new version of the application (app:new) alongside the current version.
  • Switch the traffic from app:current to app:new at the load balancer layer by updating the service selector.
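
In this pattern, the switch is a change to the Service's selector rather than to the Deployments. The following is a minimal sketch of the Service, assuming the Pods carry a version label; the single selector field is what differs between bluegreen/service-old.yaml and bluegreen/service-new.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: app
    spec:
      type: LoadBalancer
      ports:
      - port: 8080
        targetPort: 8080
      selector:
        app: app
        version: "01"    # service-old.yaml; service-new.yaml selects version: "02"

Because both Deployments keep running, you can roll back instantly by reapplying the old selector.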

Deploy the current version (blue deployment)

  1. Deploy the current application version:

    kubectl apply -f bluegreen/deployment-old.yaml
    

    A Kubernetes deployment named app-01 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-01
    

    The output is similar to the following:

    [...]
    deployment "app-01" successfully rolled out
    
  3. Create a Kubernetes service to serve external traffic:

    kubectl apply -f bluegreen/service-old.yaml
    
  4. Verify that the service is created successfully:

    kubectl get svc/app -w
    

    Before proceeding, wait for an external IP address to be allocated, as indicated by output similar to the following:

    NAME  TYPE          CLUSTER-IP    EXTERNAL-IP      PORT(S)         AGE
    app   LoadBalancer  10.11.254.22  <pending>        8080:30343/TCP  36s
    app   LoadBalancer  10.11.254.22  [EXTERNAL_IP]    8080:30343/TCP  37s
    

    To end the watch loop, press CTRL+C.

Test the deployment

  1. In Cloud Shell, click Open a new tab to start a new Cloud Shell session.
  2. Get the load balancer IP address:

    SERVICE_IP=$(kubectl get svc app \
        -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  3. Generate traffic against the deployed application:

    while(true); \
        do curl "http://${SERVICE_IP}:8080/version"; echo; sleep 2; done
    

    After a few seconds, the output is similar to the following:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    [...]
    

    Keep the curl command running. You revisit this session after you deploy the new version.

Deploy the new version (green deployment)

  1. In your original Cloud Shell session, deploy the new application version:

    kubectl apply -f bluegreen/deployment-new.yaml
    

    A Kubernetes deployment named app-02 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-02
    

    The output is similar to the following:

    [...]
    deployment "app-02" successfully rolled out
    

Switch traffic from the blue deployment to the green deployment

  1. Update the service selector to point to the new version:

    kubectl apply -f bluegreen/service-new.yaml
    
  2. In the terminal session where you ran the curl command, monitor the responses to verify that the traffic switched. After the service is updated, the requests are routed to the new version.

    The output shows the initial and final states of the update:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    
    {"id":2,"content":"new"}
    {"id":2,"content":"new"}
    {"id":2,"content":"new"}
    

    To stop the curl command, press CTRL+C.

Clean up the resources

  1. In your original Cloud Shell session, clean up the deployment and service resources used in this example:

    kubectl delete -f bluegreen/ --ignore-not-found
    
  2. Close the other Cloud Shell session.

Testing the new application version

In this section, you deploy a new application version (app:new) alongside the current version (app:current). You then use different testing patterns (canary, A/B, and shadow) to test the new version before it fully replaces the current one.

Perform a canary test

With a canary test, you partially roll out the new version of your application to a subset of users and evaluate its performance against a baseline deployment, as the following diagram shows.

The flow of a canary test.

To try this pattern, you perform the following steps:

  • Deploy the current version of the application (app:current) on the GKE cluster.
  • Use Istio resources to serve external traffic.
  • Deploy a new version of the application (app:new) alongside the current version.
  • Use Istio to split and route traffic between the two versions based on predefined weights.

Deploy the current version

  1. Deploy the current version of the application:

    kubectl apply -f canary/deployment-old.yaml
    

    A Kubernetes deployment named app-01 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-01
    

    The output is similar to the following:

    [...]
    deployment "app-01" successfully rolled out
    

    Following Istio's recommendations for Pods and services, this deployment applies the version label 01 to the Pods to identify the version of the application.

  3. Deploy the Istio resources:

    kubectl apply \
        -f canary/gateway.yaml \
        -f canary/virtualservice.yaml
    

    An Istio ingress gateway and a virtual service are created. The ingress gateway describes a load balancer that operates at the edge of the service mesh. The gateway handles all incoming HTTP/TCP connections and forwards them to a virtual service, which defines a set of routing rules that direct traffic inside the service mesh. For more information, see how Istio routes ingress traffic.
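
The following is a minimal sketch of what canary/gateway.yaml and canary/virtualservice.yaml might contain; the resource names, hosts, and port numbers are illustrative assumptions:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: app-gateway
    spec:
      selector:
        istio: ingressgateway      # bind to Istio's default ingress gateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: app
    spec:
      hosts:
      - "*"
      gateways:
      - app-gateway
      http:
      - route:
        - destination:
            host: app              # the Kubernetes service in front of the Pods
            port:
              number: 8080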

Test the deployment

  1. In Cloud Shell, click Open a new tab to start a new Cloud Shell session.
  2. Get the Istio ingress gateway IP address:

    SERVICE_IP=$(kubectl get service istio-ingressgateway \
        -n istio-system \
        -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  3. Generate traffic against the deployed application:

    while(true); \
        do curl "http://${SERVICE_IP}/version"; echo; sleep 2; done
    

    The output is similar to the following:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    [...]
    

    Keep the curl command running. You revisit this session after the new version is deployed.

Deploy the new version (canary)

  1. In your original Cloud Shell session, deploy the new application version:

    kubectl apply -f canary/deployment-new.yaml
    

    A Kubernetes deployment named app-02 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-02
    

    The output is similar to the following:

    [...]
    deployment "app-02" successfully rolled out
    

Split the traffic

A canary deployment consists of gradually shifting production traffic from one version to another. In an 80-20 split, 80% of the requests go to app:current, and 20% go to app:new.
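
The following is a minimal sketch of how such a split might be expressed in canary/destinationrule.yaml and canary/virtualservice-split.yaml; the subset names and label values are illustrative assumptions:

    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: app
    spec:
      host: app
      subsets:
      - name: v1
        labels:
          version: "01"    # Pods from the app-01 deployment
      - name: v2
        labels:
          version: "02"    # Pods from the app-02 deployment
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: app
    spec:
      hosts:
      - "*"
      gateways:
      - app-gateway
      http:
      - route:
        - destination:
            host: app
            subset: v1
          weight: 80       # 80% of requests go to app:current
        - destination:
            host: app
            subset: v2
          weight: 20       # 20% of requests go to app:new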

To split the traffic, perform the following steps:

  1. Update the virtual service that you created earlier:

    kubectl apply \
        -f canary/destinationrule.yaml \
        -f canary/virtualservice-split.yaml
    

    The destinationrule.yaml and the virtualservice-split.yaml files are configured to enforce an 80-20 traffic split between the application versions. For more information, see how Istio splits traffic between versions.

  2. In the terminal session where you ran the curl command, monitor the responses.

    The output is similar to the following:

    [...]
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":2,"content":"new"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":2,"content":"new"}
    [...]
    

    To stop the curl command, press CTRL+C.

Clean up the resources

  1. In your original Cloud Shell session, clean up the deployment and service resources used in this example:

    kubectl delete -f canary/ --ignore-not-found
    
  2. Close the other Cloud Shell session.

Perform an A/B test

With an A/B test, you release the new version of your application to a subset of users defined by specific conditions (for example, location, browser version, or user agent) and then test a theory or hypothesis, as the following diagram shows.

The flow of an A/B test.

To try this pattern, you perform the following steps:

  • Deploy the current version of the application (app:current) on the GKE cluster.
  • Deploy a new version of the application (app:new) alongside the current version.
  • Use Istio to route incoming requests that have the username test in the request's cookie to app:new. All other requests are routed to app:current.
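
The following is a minimal sketch of the cookie-based rule that ab/virtualservice-split.yaml might contain; the regular expression and subset names are illustrative assumptions:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: app
    spec:
      hosts:
      - "*"
      gateways:
      - app-gateway
      http:
      - match:
        - headers:
            cookie:
              regex: "^(.*?;)?(user=test)(;.*)?$"   # assumed match for user=test
        route:
        - destination:
            host: app
            subset: v2     # requests with the cookie go to app:new
      - route:
        - destination:
            host: app
            subset: v1     # all other requests go to app:current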

Deploy the current version

  1. Deploy the current application version:

    kubectl apply -f ab/deployment-old.yaml
    

    A Kubernetes deployment named app-01 with the version label 01 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-01
    

    The output is similar to the following:

    [...]
    deployment "app-01" successfully rolled out
    
  3. Deploy the Istio resources:

    kubectl apply -f ab/gateway.yaml -f ab/virtualservice.yaml
    

Test the deployment

  1. Get the Istio ingress gateway IP address:

    SERVICE_IP=$(kubectl get service istio-ingressgateway \
        -n istio-system \
        -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  2. Send a request to the application:

    curl "http://${SERVICE_IP}/version"
    

    The output is similar to the following:

    {"id":1,"content":"current"}
    

Deploy the new version

  1. Deploy the new version of the application:

    kubectl apply -f ab/deployment-new.yaml
    

    A Kubernetes deployment named app-02 with the version label 02 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-02
    

    The output is similar to the following:

    [...]
    deployment "app-02" successfully rolled out
    

Split the traffic

  1. Split the traffic based on the username received in the request's cookie:

    kubectl apply \
        -f ab/destinationrule.yaml \
        -f ab/virtualservice-split.yaml
    

    All requests in which the user is identified as test are routed to the new application version.

  2. Send a request to the application in which user is identified as test:

    curl --cookie "user=test" "http://${SERVICE_IP}/version"
    

    The output is similar to the following:

    {"id":2,"content":"new"}
    
  3. Send a request without the cookie:

    curl "http://${SERVICE_IP}/version"
    

    The output is similar to the following:

    {"id":1,"content":"current"}
    

Clean up the resources

  1. In your original Cloud Shell session, clean up the deployment and service resources used in this example:

    kubectl delete -f ab/ --ignore-not-found
    
  2. Close the other Cloud Shell session.

Perform a shadow test

With a shadow test, you test the new version of your application by mirroring user traffic from the current application version without impacting the user requests, as the following diagram shows.

The flow of a shadow test.

To try this pattern, you perform the following steps:

  • Deploy the current version of the application (app:current) on the GKE cluster.
  • Deploy a new version of the application (app:new) alongside the current version.
  • Use Istio to mirror all incoming requests to app:new.
  • Verify that the traffic is mirrored to app:new and does not impact any end-user requests to app:current.
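
The following is a minimal sketch of the mirroring rule that shadow/virtualservice-mirror.yaml might contain; the subset names are illustrative assumptions:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: app
    spec:
      hosts:
      - "*"
      gateways:
      - app-gateway
      http:
      - route:
        - destination:
            host: app
            subset: v1     # production traffic continues to go to app:current
        mirror:
          host: app
          subset: v2       # each request is also copied, fire-and-forget, to app:new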

Deploy the current version

  1. Deploy the current version of the application:

    kubectl apply -f shadow/deployment-old.yaml
    

    A Kubernetes deployment named app-01 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-01
    

    The output is similar to the following:

    [...]
    deployment "app-01" successfully rolled out
    
  3. Deploy the Istio resources:

    kubectl apply \
        -f shadow/gateway.yaml \
        -f shadow/virtualservice.yaml
    

Test the deployment

  1. In Cloud Shell, click Open a new tab to start a new Cloud Shell session.
  2. Get the Istio ingress gateway IP address:

    SERVICE_IP=$(kubectl get service istio-ingressgateway \
        -n istio-system \
        -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  3. Generate traffic against the deployed application:

    while(true); \
        do curl "http://${SERVICE_IP}/version"; echo; sleep 2; done
    

    The output is similar to the following:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    [...]
    

    Keep the curl command running. You revisit this session after you deploy the new version.

Deploy the new version

  1. In your original Cloud Shell session, deploy the new application version:

    kubectl apply -f shadow/deployment-new.yaml
    

    A Kubernetes deployment named app-02 is created.

  2. Verify that the deployment is created successfully:

    kubectl rollout status deploy app-02
    

    The output is similar to the following:

    [...]
    deployment "app-02" successfully rolled out
    

Set up traffic mirroring

  1. Set up traffic mirroring by updating the Istio resources:

    kubectl apply -f shadow/virtualservice-mirror.yaml
    

    In the terminal where you ran the curl command, you see that app:current still serves requests, as the following output indicates:

    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    {"id":1,"content":"current"}
    [...]
    
  2. Check the new deployment logs to ensure that the traffic is mirrored:

    kubectl logs -f --tail=3 deployment/app-02
    

    The output is similar to the following:

    2019-07-15 11:03:12.927  INFO 1 --- [nio-8080-exec-5] com.google.springboot.SpringBootDemo :
    Serving request from version 2
    2019-07-15 11:03:13.345  INFO 1 --- [nio-8080-exec-7] com.google.springboot.SpringBootDemo :
    Serving request from version 2
    2019-07-15 11:03:13.752  INFO 1 --- [nio-8080-exec-8] com.google.springboot.SpringBootDemo :
    Serving request from version 2
    [...]
    

    The traffic is mirrored asynchronously and out of band from the production traffic, which means that Istio prioritizes the production path: it returns responses from the production service without waiting for responses from the shadow service. Also, only requests are mirrored; responses from the shadow service are discarded.

    To stop the curl command, press CTRL+C.

Clean up the resources

  1. In your original Cloud Shell session, clean up the deployment and service resources used in this example:

    kubectl delete -f shadow/ --ignore-not-found
    
  2. Close the other Cloud Shell session.

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial:

Delete the Google Cloud project

The easiest way to eliminate billing is to delete the project you created for the tutorial.

  1. In the Cloud Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project that you want to delete and then click Delete.
  3. In the dialog, type the project ID and then click Shut down to delete the project.

Delete the resources

If you want to keep the Google Cloud project you used in this tutorial, delete the individual resources:

  1. Delete the GKE cluster:

    gcloud container clusters delete $CLUSTER --zone $ZONE --async
    
  2. Delete downloaded code, artifacts, and other dependencies:

    cd .. && rm -rf kubernetes-deployment-patterns
    
  3. If you built the application container images and pushed them to your own project, delete them from Container Registry:

    gcloud container images list-tags gcr.io/$PROJECT/app \
        --format 'value(digest)' | \
        xargs -I {} gcloud container images delete \
        --force-delete-tags --quiet \
        gcr.io/${PROJECT}/app@sha256:{}
    

What's next