Supporting your migration with Istio mesh expansion: Tutorial


This tutorial shows how to initialize and configure a service mesh to support a feature-by-feature migration from an on-premises data center to Google Cloud. The tutorial and its accompanying conceptual article are intended for sysadmins, developers, and engineers who want to use a service mesh that dynamically routes traffic either to the source environment or to Google Cloud.

A service mesh can greatly reduce the complexity of both migration and refactoring because it decouples the network functions from the service functions. It also reduces the network operational complexity because it provides load-balancing, traffic management, monitoring, and observability.

This tutorial is intended to help you migrate from a non-Google Cloud environment (such as on-premises or another cloud provider) to Google Cloud. Such migrations have a layer of network complexity because you have to set up a secure communication channel between the non-Google Cloud environment and the Google Cloud environment.

The following diagram shows how you can use a service mesh to route traffic either to microservices running in the source environment or to Google Cloud:

Service mesh to route traffic either to microservices running in the legacy environment or to Google Cloud.

In the preceding diagram, the service mesh routes traffic either to microservices running in the source environment or to Google Cloud.

In this tutorial, you use the following software:

  • Terraform, to provision the source and target environments
  • Docker and Docker Compose, to run the example workload in the source environment
  • Istio, to build and expand the service mesh
  • Kiali, to visualize the service mesh

Objectives

  • Initialize an environment simulating the on-premises data center.
  • Deploy example workloads to the on-premises data center.
  • Test the workloads running on the on-premises data center.
  • Configure the destination environment on Google Cloud.
  • Migrate the workload from the on-premises data center to the destination environment.
  • Test the workloads running in the destination environment.
  • Retire the on-premises data center.

Costs

This tutorial uses billable components of Google Cloud, including:

  • Compute Engine
  • Google Kubernetes Engine (GKE)

Use the Pricing Calculator to generate a cost estimate based on your projected usage.

Before you begin

  1. Create a Google Cloud organization, or use one that you already have. You also need the Project Creator role for the organization.
  2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Clean up.

Prepare your environment

You perform most of the steps for this tutorial in Cloud Shell.

  1. Open Cloud Shell.

  2. Change the working directory to the ${HOME} directory:

      cd "${HOME}"
    
  3. Clone the Git repository, which contains the scripts and the manifest files to deploy and configure the example workload:

      git clone https://github.com/GoogleCloudPlatform/solutions-istio-mesh-expansion-migration
    
  4. Authenticate with Application Default Credentials (ADC):

      gcloud auth application-default login
    

    In the output, you see the path to the Application Default Credentials file:

    Credentials saved to file:
    [/tmp/tmp.T5Qae7XwAO/application_default_credentials.json]
    

    These credentials will be used by any library that requests ADC.

    Make a note of the path to the Application Default Credentials file.

  5. Initialize the environment variables:

    APPLICATION_DEFAULT_CREDENTIALS_PATH=APPLICATION_DEFAULT_CREDENTIALS_PATH
    BILLING_ACCOUNT_ID=BILLING_ACCOUNT_ID
    DEFAULT_FOLDER=DEFAULT_FOLDER
    DEFAULT_PROJECT=DEFAULT_PROJECT
    DEFAULT_REGION=DEFAULT_REGION
    DEFAULT_ZONE=DEFAULT_ZONE
    GKE_CLUSTER_NAME=istio-migration
    TUTORIAL_DIRECTORY_PATH="$(pwd)"/solutions-istio-mesh-expansion-migration
    ORGANIZATION_ID=ORGANIZATION_ID
    

    Replace the following:

  • APPLICATION_DEFAULT_CREDENTIALS_PATH: the path to the ADC file from the previous step.
  • BILLING_ACCOUNT_ID: the ID of the billing account to use.
  • DEFAULT_FOLDER: the ID of the Google Cloud folder to create the Cloud project in. If you want Terraform to create the Cloud project directly under the Google Cloud organization, leave this string empty.
  • DEFAULT_PROJECT: the ID of the Google Cloud project in which to provision the resources for this tutorial. Terraform creates this project for you when it provisions the environment.
  • DEFAULT_REGION: the default region where resources are provisioned.
  • DEFAULT_ZONE: the default zone where resources are provisioned.
  • ORGANIZATION_ID: the ID of your Google Cloud organization.
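Before you continue, you can optionally confirm that the variables are set. The following helper is not part of the tutorial scripts; it only checks that each named variable is non-empty. DEFAULT_FOLDER is excluded because it can legitimately be empty:

```shell
# Helper (not part of the tutorial scripts): verify that each named
# environment variable is set and non-empty before running later steps.
require_vars() {
  local missing=0 name
  for name in "$@"; do
    # ${!name} is bash indirect expansion: the value of the variable
    # whose name is stored in $name.
    if [ -z "${!name:-}" ]; then
      echo "Missing: ${name}" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example:
# require_vars APPLICATION_DEFAULT_CREDENTIALS_PATH BILLING_ACCOUNT_ID \
#   DEFAULT_PROJECT DEFAULT_REGION DEFAULT_ZONE ORGANIZATION_ID \
#   && echo "All required variables are set"
```

If the function reports a missing variable, set it before proceeding.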

The example workload

In this tutorial, you use the Bookinfo app, which is a 4-tier, polyglot microservices app that shows information about books. This app is designed to run on Kubernetes, but you deploy it on a Compute Engine instance using Docker and Docker Compose. With Docker Compose, you describe multi-container apps using YAML descriptors. You can then start the app by executing a single command.
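For illustration, a Docker Compose descriptor for two of the Bookinfo services might look like the following sketch. The image tags and the trimmed structure are assumptions for this example, not the repository's actual descriptor:

```yaml
# Hypothetical, trimmed docker-compose.yaml sketch (image tags are
# illustrative, not the tutorial repository's actual descriptor).
version: "3"
services:
  productpage:
    image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
    ports:
      - "9080:9080"
  details:
    image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
```

With a descriptor like this in place, a single `docker-compose up --detach` command starts all the containers that it describes.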

Although this example workload is already containerized, this approach also applies to non-containerized services. In such cases, you can add a modernization phase where you containerize services that you intend to migrate.

The Bookinfo app has four microservice components:

  • productpage: Calls the details, ratings, and reviews microservices to populate the book information page
  • details: Serves information about books
  • reviews: Contains book reviews
  • ratings: Returns book ranking information to accompany a book review
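Each of these microservices serves plain HTTP. The following helper builds the URL for each service, assuming the upstream Bookinfo conventions (every service listens on port 9080, and details, reviews, and ratings take a book ID in the path); it is a sketch for exploration, not part of the tutorial scripts:

```shell
# Sketch (not part of the tutorial scripts): build the URL for a Bookinfo
# microservice, assuming the upstream Bookinfo port and path conventions.
bookinfo_url() {
  local host="$1" service="$2" book_id="${3:-0}"
  case "$service" in
    productpage) echo "http://${host}:9080/productpage" ;;
    details)     echo "http://${host}:9080/details/${book_id}" ;;
    reviews)     echo "http://${host}:9080/reviews/${book_id}" ;;
    ratings)     echo "http://${host}:9080/ratings/${book_id}" ;;
    *) echo "unknown service: ${service}" >&2; return 1 ;;
  esac
}

# Example:
# bookinfo_url 10.0.0.5 details   # → http://10.0.0.5:9080/details/0
```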

Provision your environments

In this section, you provision the following environments for this tutorial:

  • An environment simulating the source on-premises data center
  • An environment simulating the migration destination

In this tutorial, both environments run in Google Cloud. This approach simplifies the setup process because there is only one bootstrapping phase. You automatically provision the source and target environments using Terraform.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Initialize the Terraform backend configuration:

    scripts/init.sh \
      --application-credentials "${APPLICATION_DEFAULT_CREDENTIALS_PATH}" \
      --billing-account-id "${BILLING_ACCOUNT_ID}" \
      --default-folder "${DEFAULT_FOLDER}" \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}" \
      --organization-id "${ORGANIZATION_ID}"
    

    The init.sh script does the following:

    • Generates the descriptors to configure the Terraform backend.
    • Initializes the Terraform working directory.
  3. Change the working directory to the terraform directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"/terraform
    
  4. Apply the changes with Terraform:

    terraform apply
    

    When prompted, review the proposed changes and confirm by answering yes.

    The output is similar to the following:

    Apply complete! Resources: 27 added, 0 changed, 0 destroyed
    

By applying the proposed changes with Terraform, you automate the following tasks:

  • Creating firewall rules to allow external access to the microservices and the database, and to allow inter-node communication.
  • Creating and enabling a service account for the Compute Engine instances to use. We recommend limiting the service account to just the roles and access permissions that are required to run the app. For this tutorial, the service account for Compute Engine instances only requires the Compute Viewer role (roles/compute.viewer), which provides read-only access to Compute Engine resources.
  • Provisioning and configuring a Compute Engine instance to host the workloads to migrate, as a source environment. When you configure the Compute Engine instance, you provide a startup script that installs Docker, Docker Compose, and Dnsmasq.
  • Creating and enabling a service account for the GKE cluster nodes to use. We recommend limiting this service account to just the roles and access permissions that are required to run the app, such as the minimal logging and monitoring roles that GKE nodes need.
  • Provisioning and configuring a GKE cluster to host the workloads, as a target environment.

    To provision the GKE clusters, Terraform uses the kubernetes-engine Terraform module.
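A module invocation of this kind generally has the following shape. This is a hedged sketch with hypothetical variable names and ranges, not the tutorial's actual Terraform configuration:

```hcl
# Hypothetical sketch of a kubernetes-engine module invocation
# (variable names, range names, and the cluster name are illustrative).
module "gke" {
  source     = "terraform-google-modules/kubernetes-engine/google"
  project_id = var.google_project_id
  name       = "istio-migration"
  region     = var.google_default_region
  network    = "default"
  subnetwork = "default"

  ip_range_pods     = "pods-range"
  ip_range_services = "services-range"
}
```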

Deploy the workload in the source environment

In this tutorial, you deploy the Istio Bookinfo App as the workload to migrate. The following diagram shows the target architecture of the source environment:

Target architecture for the source environment.

In the preceding diagram, the client accesses the example workload running on Compute Engine. Note: In this tutorial, clients connect directly to a single Compute Engine instance to reduce complexity.

In a real production environment, this direct connection is unlikely because you need a load-balancing layer to run multiple instances of a workload.

To deploy the workload, run the following steps.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Deploy the workloads in the Compute Engine instances:

    scripts/workloads.sh \
      --deploy-with "COMPOSE" \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}"
    

    The workloads.sh script does the following:

    • Configures the default project, region, and zone.
    • Copies the Docker Compose descriptors to the Compute Engine instances.
    • Deploys the example workload using Docker Compose.

    In the output, you see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://[COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP]:9080/productpage
    

    Make a note of the IP address COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP because you use it in a later step.

Test your deployment in the source environment

In this section, you test the example workload that you configured.

  • Open a browser and go to the following URL, where COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP is the IP address from the previous step:

    http://COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP:9080/productpage
    

    A Bookinfo page is displayed with details about books and relevant ratings, as shown in the following image:

    Books and ratings page in source environment.
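If you prefer the command line, you can run the same check with curl. The following helper is a sketch, not part of the tutorial scripts:

```shell
# Sketch (not part of the tutorial scripts): request a URL and report
# whether it returns HTTP 200.
check_endpoint() {
  local url="$1"
  local code
  # -w '%{http_code}' makes curl print only the HTTP status code.
  code="$(curl -s -o /dev/null -w '%{http_code}' "$url")"
  if [ "$code" = "200" ]; then
    echo "OK: $url"
  else
    echo "FAIL ($code): $url" >&2
    return 1
  fi
}

# Example (substitute the IP address that you noted earlier):
# check_endpoint "http://COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP:9080/productpage"
```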

Configure Istio

In this section, you configure the destination environment in Google Cloud by installing Istio and then by using Istio to expose the example workload. The following diagram shows the target architecture of the environment:

Destination environment with Istio installed.

As shown in the preceding diagram, Istio exposes the workload that's running in Compute Engine.

Install Istio

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Install Istio:

    scripts/install-istio.sh \
      --cluster-name "${GKE_CLUSTER_NAME}" \
      --google-cloud-project "${DEFAULT_PROJECT}" \
      --cluster-region "${DEFAULT_REGION}"
    

    The install-istio.sh script does the following:

    • Downloads the Istio distribution.
    • Installs Istio in the target environment GKE cluster.
    • Deploys a Gateway to expose the services in the service mesh.
    • Configures Istio to allow the expansion of the service mesh to the Compute Engine instances simulating the source environment.
    • Installs service mesh monitoring and visualization tools, such as Kiali.

      When this command is done, the console shows a confirmation of installation. The output is similar to the following:

      ✔ Istio core installed
      ✔ Istiod installed
      ✔ Ingress gateways installed
      ✔ Egress gateways installed
      ✔ Installation complete
      
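Before you continue, you can optionally verify that the Istio control plane pods are Ready. The following helper is a hypothetical sketch, not part of the tutorial scripts:

```shell
# Hypothetical helper (not part of the tutorial scripts): report whether
# every pod in the istio-system namespace is Ready.
istio_ready() {
  local ns="${1:-istio-system}"
  # Print one line per pod: "<pod-name> <Ready-condition-status>",
  # then fail if any pod is not Ready.
  kubectl get pods -n "$ns" \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
    | awk '$2 != "True" { not_ready++ } END { exit not_ready > 0 }'
}

# Example:
# istio_ready && echo "Istio control plane is ready"
```

If the function returns a nonzero status, wait a moment and retry before moving to the next section.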

Configure Istio mesh expansion

In this section, you connect the Compute Engine instance that simulates the source environment to the service mesh.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Install and configure Istio on the Compute Engine instance:

    scripts/compute-engine-mesh-expansion-setup.sh \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}"
    

    The compute-engine-mesh-expansion-setup.sh script does the following:

    • Installs Istio on the source environment Compute Engine instances.
    • Starts the Istio service on the Compute Engine instances.

Expose the workload

In this section, you register the workloads that are running in the Compute Engine instance, which simulates the source environment, with the Istio service mesh.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Expose the workloads:

    scripts/workloads.sh \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}" \
      --expose-with "ISTIO_COMPUTE_ENGINE"
    

    The workloads.sh script does the following:

    • Configures the default project, region, and zone.
    • Enables automatic sidecar injection to avoid manual edits to your deployment descriptors.
    • Registers the workloads running in the Compute Engine instance with the mesh by configuring the WorkloadEntry resources and the corresponding Services.
    • Deploys the ServiceEntries to allow traffic to the Compute Engine metadata server and to Cloud APIs.
    • Deploys the Virtual Services to route traffic from the Istio Gateway to the productpage instance running in the Compute Engine instance.

      In the output, you see a confirmation of the deployment and how you can access it. The output is similar to the following:

      You can access the workload by loading http://[ISTIO_INGRESS_GATEWAY_EXTERNAL_IP]/productpage
      

      Make a note of the ISTIO_INGRESS_GATEWAY_EXTERNAL_IP IP address because you use it in a later step.
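The WorkloadEntry and Service pair that the script configures for a microservice generally has the following shape. This is a hedged sketch; the names, labels, and address are illustrative, not the script's actual manifests:

```yaml
# Hypothetical sketch: a WorkloadEntry that registers a workload running in
# Compute Engine with the mesh, and the Service that selects it
# (names, labels, and the address are illustrative).
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: productpage-compute-engine
spec:
  address: 10.128.0.2
  labels:
    app: productpage
    version: compute-engine
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
spec:
  selector:
    app: productpage
  ports:
    - name: http
      port: 9080
      targetPort: 9080
```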

Test the Istio mesh expansion

In this section, you test the example workload running in the Compute Engine instance that you used Istio to expose.

  • Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

    As shown in the following image, a page is displayed with information about books and relevant ratings:

    Books and ratings page that displays after Istio mesh expansion test.

Visualize the service mesh

In this section, you use Kiali to show a visual representation of the service mesh.

  1. Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/kiali/console/graph/namespaces/?edges=requestDistribution&graphType=versionedApp&namespaces=default%2Cistio-system&idleNodes=false&duration=60&refresh=15000&operationNodes=false&idleEdges=false&injectServiceNodes=true&layout=dagre
    

    The Kiali service dashboard is displayed.

  2. In Cloud Shell, run a request multiple times for the main page of the example workload:

    ISTIO_INGRESS_GATEWAY_EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    
    for i in {1..10000}; do curl -s -o /dev/null -w "$(date --iso-8601=ns) - %{http_code}\n"  http://"${ISTIO_INGRESS_GATEWAY_EXTERNAL_IP}"/productpage; done
    

    The command generates traffic to the Bookinfo app. In the output, you see a list of the timestamp of the HTTP requests to the productpage service, and the HTTP return code of each request (200 OK in this case). The output is similar to the following:

    2021-06-09T10:16:15,355323181+00:00 - 200
    2021-06-09T10:16:15,355323182+00:00 - 200
    2021-06-09T10:16:15,355323183+00:00 - 200
    [...]
    

    In the Kiali service dashboard, you see a diagram of the current mesh, with traffic routed to services running in Compute Engine. All of the traffic is routed from the istio-ingressgateway to the productpage microservice that runs on the Compute Engine instance with the compute-engine version label, and to the kiali service to visualize the service mesh.

    You don't see the other services in the graph (details, reviews, and ratings) because the productpage microservice that runs in Compute Engine connects directly to the other microservices running in Compute Engine. The productpage microservice does not go through the service mesh.

    If you want all the traffic to go through the service mesh, you need to reconfigure the workloads running in Compute Engine to point to the services in the service mesh, instead of connecting to them directly.

    If you don't see the following diagram on the Kiali dashboard, refresh the page.

    Kiali dashboard page.

    As shown in the preceding diagram, the Kiali dashboard shows traffic routed to services that are running in Compute Engine.

  3. In Cloud Shell, to stop the traffic generation command, press Control+C.

Migrate the workload

In this section, you migrate the components of the example workload from the Compute Engine instances to the GKE cluster. For each microservice of the example workload, you do the following:

  • Deploy an instance of the microservice in the GKE cluster.
  • Start routing traffic to both the microservice instances running in Compute Engine and running in GKE.

The following diagram shows the target architecture of the system for this section:

Target architecture with traffic routed to microservice instances in Compute Engine and GKE.

As shown in the preceding diagram, Cloud Load Balancing routes traffic to the Istio gateway, and then Istio either routes the traffic to services that are running in Compute Engine, or to services running in GKE.

To migrate the components of the example workload, do the following.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Deploy the workloads in the target environment:

    scripts/workloads.sh \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}" \
      --deploy-with "GKE"
    

    The workloads.sh script deploys the example workload in the GKE cluster.

    In the output, you see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://[ISTIO_INGRESS_GATEWAY_EXTERNAL_IP]/productpage
    

    The Service automatically selects both the example workload running in the Compute Engine instances and the ones running in the GKE cluster.

Test the workload running in Compute Engine and GKE

In this section, you test the example workload that you deployed in Compute Engine and GKE.

  • Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

    As shown in the following image, a Bookinfo page is displayed with information about books and relevant ratings:

    Books and ratings page shown after testing the example workload deployed in Compute Engine and GKE.

    Because you deployed the same version of the example workload in the Compute Engine instances and in the GKE cluster, the output is the same as in the previous test.

Visualize the service mesh

In this section, you use Kiali to show a visual representation of the service mesh.

  1. Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

      http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/kiali/console/graph/namespaces/?edges=requestDistribution&graphType=versionedApp&namespaces=default%2Cistio-system&idleNodes=false&duration=60&refresh=15000&operationNodes=false&idleEdges=false&injectServiceNodes=true&layout=dagre
    
  2. In Cloud Shell, run a request multiple times for the main page of the example workload:

    ISTIO_INGRESS_GATEWAY_EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    
    for i in {1..10000}; do curl -s -o /dev/null -w "$(date --iso-8601=ns) - %{http_code}\n"  http://"${ISTIO_INGRESS_GATEWAY_EXTERNAL_IP}"/productpage; done
    

    With this command, you generate traffic to the Bookinfo app. The expected output is a list of the date of the HTTP request to the productpage service, and the HTTP return code of each request (200 OK in this case). The output is similar to the following:

    2021-06-09T10:16:15,355323181+00:00 - 200
    2021-06-09T10:16:15,355323182+00:00 - 200
    2021-06-09T10:16:15,355323183+00:00 - 200
    [...]
    
  3. In the Kiali service dashboard, you see a diagram of the current mesh, with traffic routed to services that are running in Compute Engine and GKE.

    Each microservice instance has a label that indicates its revision. For instances running in Compute Engine, the label is compute-engine. For instances running in GKE, the label is a version string, for example, v1 or v3. Instances running in Compute Engine connect directly to the other instances in Compute Engine and don't go through the service mesh. Therefore, you don't see the traffic that goes from the instances running in Compute Engine to other instances.

    If you want all the traffic to go through the service mesh, you need to reconfigure the workloads running in Compute Engine to point to the services in the service mesh, instead of connecting to them directly.

    The traffic is almost equally split between the services running in Compute Engine and the services running in GKE. The service evenly balances traffic between the instances of a service that it selects, whether they are running in the GKE cluster or in the Compute Engine instances.

    You can configure the load-balancing policy for the revisions of each service with the DestinationRule resource. If you don't see the following diagram on the Kiali dashboard, refresh the page.

    Kiali dashboard page.

    As shown in the preceding diagram, the Kiali dashboard shows traffic routed to services that are running in Compute Engine and to services running in GKE.

  4. In Cloud Shell, to stop the traffic generation command, press Control+C.
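A DestinationRule of the kind this section refers to generally has the following shape. This is a hedged sketch; the host, subset names, and labels are illustrative, not the tutorial's actual manifests:

```yaml
# Hypothetical sketch: a DestinationRule that names the two revisions of a
# service and sets a load-balancing policy (host, subset names, and
# labels are illustrative).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
    - name: compute-engine
      labels:
        version: compute-engine
    - name: gke
      labels:
        version: v1
```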

Route traffic to the GKE cluster only

In this section, you route traffic to the service instances running in the GKE cluster only. For each service of the example workload, you delete the WorkloadEntry that points to the service running in Compute Engine, so that the service only selects the microservice instances running in the GKE cluster.

The following diagram shows the target architecture of the system:

Target architecture with traffic routed to GKE cluster.

In this architecture, traffic is routed only to the GKE cluster.

To apply this routing, run the following steps.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Expose the workloads in the target environment only:

    scripts/workloads.sh \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}" \
      --expose-with "GKE_ONLY"
    

    The workloads.sh script deletes from the GKE cluster the WorkloadEntries that point to the microservice instances running in Compute Engine.

    You see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://[ISTIO_INGRESS_GATEWAY_EXTERNAL_IP]/productpage
    
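The effect of GKE_ONLY mode can be illustrated with the following sketch. It is a hypothetical helper, not the tutorial script itself:

```shell
# Hypothetical illustration (not the tutorial script itself): delete every
# WorkloadEntry in a namespace so that each Service selects only the pods
# that run in the GKE cluster.
remove_workload_entries() {
  local ns="${1:-default}"
  kubectl get workloadentries -n "$ns" -o name \
    | while read -r entry; do
        kubectl delete -n "$ns" "$entry"
      done
}

# Example:
# remove_workload_entries default
```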

Test the workload running in GKE

In this section, you test the example workload that you deployed in GKE.

  • Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

    As shown in the following image, a Bookinfo page is displayed with information about books and relevant ratings:

    Books and ratings page shown after testing the example workload in GKE.

Visualize the service mesh

In this section, you use Kiali to show a visual representation of the service mesh.

  1. Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/kiali/console/graph/namespaces/?edges=requestDistribution&graphType=versionedApp&namespaces=default%2Cistio-system&idleNodes=false&duration=60&refresh=15000&operationNodes=false&idleEdges=false&injectServiceNodes=true&layout=dagre
    
  2. In Cloud Shell, run a request multiple times for the main page of the example workload:

    ISTIO_INGRESS_GATEWAY_EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    
    for i in {1..10000}; do curl -s -o /dev/null -w "$(date --iso-8601=ns) - %{http_code}\n"  http://"${ISTIO_INGRESS_GATEWAY_EXTERNAL_IP}"/productpage; done
    

    This command generates traffic to the Bookinfo app. The expected output is a list of the date of the HTTP request to the productpage service, and the HTTP return code of each request (200 OK in this case). The output is similar to the following:

    2021-06-09T10:16:15,355323181+00:00 - 200
    2021-06-09T10:16:15,355323182+00:00 - 200
    2021-06-09T10:16:15,355323183+00:00 - 200
    [...]
    
  3. In the Kiali service dashboard, you see a diagram of the current mesh with traffic routed to services that are running in GKE. You deployed two instances of each microservice: one runs in the Compute Engine instance, and the other runs in the GKE cluster. However, because you removed the WorkloadEntries that point to the microservice instances running in Compute Engine, the services select only the microservice instances running in the GKE cluster.

    If you don't see the following diagram on the Kiali dashboard, refresh the page.

    Microservice instances.

    As shown in the preceding diagram, the Kiali dashboard shows the traffic that's routed to services that are running in GKE.

  4. In Cloud Shell, to stop the traffic generation command, press Control+C.

Retire the source environment

Because all the traffic is routed to the GKE cluster, you can now stop the instances of the workload running in Compute Engine.

The following diagram shows the target architecture of the system for this section:

Source environment with no workload instances running in Compute Engine.

As shown in the preceding diagram, Istio routes traffic to services that are running in GKE only, and the workloads running in Compute Engine are retired.

Run the following steps to retire the source environment.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"
    
  2. Expose the workloads in the target environment only:

    scripts/workloads.sh \
      --default-project "${DEFAULT_PROJECT}" \
      --default-region "${DEFAULT_REGION}" \
      --default-zone "${DEFAULT_ZONE}" \
      --deploy-with "GKE_ONLY"
    

    The workloads.sh script stops the containers running in the Compute Engine instances.
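Stopping the source workloads amounts to running docker-compose down on the instance, as in the following hedged sketch. The instance name and the Compose file path are hypothetical; the tutorial script performs the equivalent steps for you:

```shell
# Hedged sketch (not the tutorial script itself): stop the containers on a
# source Compute Engine instance over SSH. The instance name and the
# Compose file path are hypothetical.
stop_source_workloads() {
  local instance="$1"
  gcloud compute ssh "$instance" --zone "${DEFAULT_ZONE}" \
    --command 'docker-compose --file docker-compose.yaml down'
}

# Example:
# stop_source_workloads bookinfo-instance
```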

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the individual resources

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${TUTORIAL_DIRECTORY_PATH}"/terraform
    
  2. Delete the resources that you provisioned:

    terraform destroy -auto-approve
    

What's next