Deploy your migration with Istio mesh expansion

Last reviewed 2023-11-02 UTC

This document describes how to initialize and configure a service mesh to support a feature-by-feature migration from an on-premises data center to Google Cloud. It assumes that you're familiar with the associated reference architecture. It's intended for administrators, developers, and engineers who want to use a service mesh that dynamically routes traffic either to the source environment or to Google Cloud.

This deployment guide is intended to help you migrate from a non-Google Cloud environment (such as on-premises or another cloud provider) to Google Cloud. Such migrations have a layer of network complexity because you have to set up a secure communication channel between the non-Google Cloud environment and the Google Cloud environment.

Architecture

The following diagram shows how you can use a service mesh to route traffic either to microservices running in the source environment or to Google Cloud:

Architecture that uses a service mesh to route traffic either to microservices running in the legacy environment or to Google Cloud.

In the diagram, Istio provides a service mesh that links the microservices of the application, and an Istio gateway exposes the mesh. Containers running on Google Kubernetes Engine (GKE) define the boundaries of each microservice. For more information, see Support your migration with Istio mesh expansion.

In this deployment, you use the following software:

  • Terraform, to provision the source and target environments
  • Docker and Docker Compose, to deploy the example workload in the source environment
  • Istio, to implement the service mesh
  • Kiali, to visualize the service mesh

The example workload

In this deployment, you use the Bookinfo app, which is a 4-tier, polyglot microservices app that shows information about books. This app is designed to run on Kubernetes, but you deploy it on a Compute Engine instance by using Docker and Docker Compose. With Docker Compose, you describe multi-container apps using YAML descriptors. You can then start the app by executing a single command.
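For illustration, the following is a minimal sketch of a Docker Compose descriptor for two of the four Bookinfo services. The image names are the Bookinfo samples published on Docker Hub, and the tag is an assumption; the descriptors in the repository are the authoritative versions:

    # Abbreviated sketch: only two of the four Bookinfo services are shown.
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      productpage:
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0   # tag is an assumption
        ports:
          - "9080:9080"   # the page that clients load
      details:
        image: docker.io/istio/examples-bookinfo-details-v1:1.17.0       # tag is an assumption
    EOF
    docker-compose up -d   # start the app with a single command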

Although this example workload is already containerized, this approach also applies to non-containerized services. In such cases, you can add a modernization phase where you containerize services that you intend to migrate.

The Bookinfo app has four microservice components:

  • productpage: Calls the details and reviews microservices to populate the book information page
  • details: Serves information about books
  • reviews: Contains book reviews
  • ratings: Returns book ranking information to accompany a book review

To demonstrate Istio and its features, the authors and maintainers of the Bookinfo app have implemented multiple versions of some of these components. In this deployment, you deploy only one version of each component.

Objectives

  • Initialize an environment that simulates the on-premises data center.
  • Deploy and test example workloads on the on-premises data center.
  • Configure the target environment on Google Cloud.
  • Migrate the workload from the on-premises data center to the target environment.
  • Test the workloads running in the target environment.
  • Retire the on-premises data center.

Costs

In this document, you use the following billable components of Google Cloud:

  • Compute Engine
  • Google Kubernetes Engine (GKE)
  • Cloud Load Balancing

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Prepare your environment

You perform most of the steps for this deployment in Cloud Shell.

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. In Cloud Shell, check the amount of free space that you have:

    df -h
    

    To complete this deployment, you need about 200 MB of free space.

  3. Change the working directory to the ${HOME} directory:

    cd "${HOME}"
    
  4. Clone the Git repository, which contains the scripts and the manifest files to deploy and configure the example workload:

    git clone https://github.com/GoogleCloudPlatform/solutions-istio-mesh-expansion-migration
    
  5. Authenticate with Application Default Credentials (ADC):

    gcloud auth application-default login
    

    The output shows the path to the Application Default Credentials file:

    Credentials saved to file:
    [/tmp/tmp.T5Qae7XwAO/application_default_credentials.json]
    

    Make a note of the path to the Application Default Credentials file. These credentials will be used by any library that requests ADC.

  6. Initialize the environment variables:

    APPLICATION_DEFAULT_CREDENTIALS_PATH=APPLICATION_DEFAULT_CREDENTIALS_PATH
    BILLING_ACCOUNT_ID=BILLING_ACCOUNT_ID
    DEFAULT_FOLDER=DEFAULT_FOLDER
    DEFAULT_PROJECT=DEFAULT_PROJECT
    DEFAULT_REGION=DEFAULT_REGION
    DEFAULT_ZONE=DEFAULT_ZONE
    GKE_CLUSTER_NAME=istio-migration
    DEPLOYMENT_DIRECTORY_PATH="$(pwd)"/solutions-istio-mesh-expansion-migration
    ORGANIZATION_ID=ORGANIZATION_ID
    

    Replace the following:

    • APPLICATION_DEFAULT_CREDENTIALS_PATH: the path to the ADC file from the previous step.
    • BILLING_ACCOUNT_ID: the ID of the billing account to use.
    • DEFAULT_FOLDER: the ID of the Google Cloud folder to create the Google Cloud project in. If you want Terraform to create the Google Cloud project directly under the Google Cloud organization, leave this string empty.
    • DEFAULT_PROJECT: the ID of the Google Cloud project in which to provision the resources for this deployment. Terraform creates this project for you when it provisions the environment.
    • DEFAULT_REGION: the default region where resources are provisioned.
    • DEFAULT_ZONE: the default zone where resources are provisioned.
    • ORGANIZATION_ID: the ID of your Google Cloud organization.
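    For example, with hypothetical sample values (yours will differ; the ADC path is the one from the earlier output):

    APPLICATION_DEFAULT_CREDENTIALS_PATH=/tmp/tmp.T5Qae7XwAO/application_default_credentials.json
    BILLING_ACCOUNT_ID=0X0X0X-0X0X0X-0X0X0X    # hypothetical billing account ID
    DEFAULT_FOLDER=""                          # empty: create the project directly under the organization
    DEFAULT_PROJECT=istio-migration-example    # hypothetical project ID
    DEFAULT_REGION=us-central1
    DEFAULT_ZONE=us-central1-a
    ORGANIZATION_ID=000000000000               # hypothetical organization ID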

Provision your environments

In this section, you provision the following environments for this deployment:

  • An environment that simulates the source on-premises data center.
  • An environment that simulates the migration target.

In this deployment, both environments run in Google Cloud. This approach helps to simplify the setup process because there is only one bootstrapping phase. You automatically provision the source and target environments using Terraform.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Initialize the Terraform backend configuration:

    scripts/init.sh \
    --application-credentials "${APPLICATION_DEFAULT_CREDENTIALS_PATH}" \
    --billing-account-id "${BILLING_ACCOUNT_ID}" \
    --default-folder "${DEFAULT_FOLDER}" \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}" \
    --organization-id "${ORGANIZATION_ID}"
    

    The init.sh script does the following:

    • Generates the descriptors to configure the Terraform backend.
    • Initializes the Terraform working directory.
  3. Change the working directory to the terraform directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"/terraform
    
  4. Apply the changes with Terraform:

    terraform apply
    
  5. When prompted, review the proposed changes and confirm by entering yes.

    The output is similar to the following:

    Apply complete! Resources: 27 added, 0 changed, 0 destroyed
    

By applying the proposed changes with Terraform, you automate the following tasks:

  • Creating firewall rules to allow external access to the microservices and to the database, and to allow inter-node communication.
  • Creating and enabling a service account for the Compute Engine instances to use. We recommend that you limit the service account to only the roles and access permissions that are required to run the app. For this deployment, the service account for Compute Engine instances only requires the Compute Viewer role (roles/compute.viewer). This role provides read-only access to Compute Engine resources.
  • Provisioning and configuring a Compute Engine instance to host the workloads to migrate, as a source environment. When you configure the Compute Engine instance, you provide a startup script that installs Docker, Docker Compose, and Dnsmasq.
  • Creating and enabling a service account for the GKE cluster nodes to use, as a target environment. We recommend that you limit the service account to only the roles and access permissions that are required to run the app. For this deployment, the GKE node service account needs only the roles that let nodes write logs and metrics, such as Logs Writer (roles/logging.logWriter), Monitoring Metric Writer (roles/monitoring.metricWriter), and Monitoring Viewer (roles/monitoring.viewer).
  • Provisioning and configuring a GKE cluster to host the workloads, as a target environment. To provision the GKE clusters, Terraform uses the kubernetes-engine Terraform module.
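To spot-check what Terraform created, you can list some of the provisioned resources from Cloud Shell:

    # List the Compute Engine instances, the firewall rules, and the GKE cluster
    gcloud compute instances list --project "${DEFAULT_PROJECT}"
    gcloud compute firewall-rules list --project "${DEFAULT_PROJECT}"
    gcloud container clusters list --project "${DEFAULT_PROJECT}"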

Deploy the workload in the source environment

In this deployment, you deploy the Istio Bookinfo App as the workload to migrate. The following diagram shows the architecture of the source environment:

Target architecture for the source environment.

In the diagram, clients access the example workload that's running on Compute Engine. To reduce complexity in this example, clients connect directly to a single Compute Engine instance. In a production environment, this direct connection is unlikely because you need a load-balancing layer to run multiple instances of a workload.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Deploy the workloads in the Compute Engine instances:

    scripts/workloads.sh \
    --deploy-with "COMPOSE" \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}"
    

    The workloads.sh script does the following:

    • Configures the default project, region, and zone.
    • Copies the Docker Compose descriptors to the Compute Engine instances.
    • Deploys the example workload using Docker Compose.

    If you didn't previously create an SSH key file to authenticate with Compute Engine instances, the gcloud CLI prompts you to generate one.

    In the output, you see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP:9080/productpage
    

    In the output, COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP is the IP address where the workload is served. Make a note of the IP address, because you use it in a later step.

Test your deployment in the source environment

  • Open a browser and go to the following URL, where COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP is the IP address from the previous step:

    http://COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP:9080/productpage
    

A Bookinfo page is displayed with details about books and relevant ratings.
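You can also test from Cloud Shell. The following sketch fetches the page and extracts its HTML title, which for the Bookinfo app is Simple Bookstore App:

    curl -s http://COMPUTE_ENGINE_PRODUCTPAGE_EXTERNAL_IP:9080/productpage \
        | grep -o "<title>.*</title>"

The expected output is <title>Simple Bookstore App</title>.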

Configure Istio

In this section, you configure the target environment in Google Cloud by installing Istio, and then you use Istio to expose the example workload. The following diagram shows the architecture of the target environment:

Destination environment with Istio installed.

In the diagram, Istio exposes the workload that's running in Compute Engine.

Install Istio

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Install Istio:

    scripts/install-istio.sh \
    --cluster-name "${GKE_CLUSTER_NAME}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --cluster-region "${DEFAULT_REGION}"
    

    The install-istio.sh script does the following:

    • Downloads the Istio distribution.
    • Installs Istio in the target environment GKE cluster.
    • Deploys a Gateway to expose the services in the service mesh.
    • Configures Istio to allow the expansion of the service mesh to the Compute Engine instances that are simulating the source environment.
    • Installs service mesh monitoring and visualization tools, such as Kiali.

    When this command finishes running, the console shows a confirmation of installation. The output is similar to the following:

    ✔ Istio core installed
    ✔ Istiod installed
    ✔ Ingress gateways installed
    ✔ Egress gateways installed
    ✔ Installation complete
    
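    To confirm that the Istio control plane and the gateways are running, you can check the pods in the istio-system namespace:

    kubectl get pods -n istio-system

    Expect the istiod, istio-ingressgateway, and istio-egressgateway pods to be in the Running state.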

Configure Istio mesh expansion

In this section, you connect the Compute Engine instance that simulates the source environment to the service mesh. The service mesh handles the connections to the microservices in the legacy environment that you migrate to the target environment. In this phase, the service mesh is empty and waiting for services to be registered; it isn't receiving any production traffic yet.
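Istio represents a workload that runs outside the cluster, such as a Compute Engine instance, as a WorkloadEntry resource. The scripts in this deployment create the necessary resources for you; the following is only a minimal sketch of what such a resource looks like, with a hypothetical name and address:

    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1beta1
    kind: WorkloadEntry
    metadata:
      name: productpage-compute-engine   # hypothetical name
      namespace: default
    spec:
      address: 10.128.0.2                # hypothetical internal IP of the Compute Engine instance
      labels:
        app: productpage
        version: compute-engine          # the version label that Kiali displays later
    EOF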

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Install and configure Istio on the Compute Engine instance:

    scripts/compute-engine-mesh-expansion-setup.sh \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}"
    

    The compute-engine-mesh-expansion-setup.sh script does the following:

    • Installs Istio on the source environment Compute Engine instances.
    • Starts the Istio service on the Compute Engine instances.

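    You can verify that the proxy is running on the instance by checking the istio systemd service, which is how Istio's mesh expansion packaging starts the proxy. This sketch assumes the instance name; look up the actual name with gcloud compute instances list:

    gcloud compute ssh SOURCE_ENVIRONMENT_INSTANCE_NAME \
        --zone "${DEFAULT_ZONE}" \
        --command "sudo systemctl is-active istio"

    SOURCE_ENVIRONMENT_INSTANCE_NAME is a hypothetical placeholder; the expected output is active.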
Expose the workload

In this section, you register the workloads that are running in the Compute Engine instance that simulates the source environment with the Istio service mesh.

The workloads.sh script that you run in this section sets up routing rules to split production traffic between services running in the legacy environment and services running in the target environment, using the service mesh. Because traffic routing inside the service mesh is transparent to clients, they won't know that the routing configuration has changed.
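In Istio, this kind of split is expressed as weighted routes in a VirtualService. The following sketch is an illustration only, with hypothetical host and subset names; the manifests that the workloads.sh script applies are the authoritative configuration:

    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: productpage              # hypothetical name
    spec:
      hosts:
      - productpage
      http:
      - route:
        - destination:
            host: productpage
            subset: compute-engine   # hypothetical subset: the instances in the source environment
          weight: 50
        - destination:
            host: productpage
            subset: v1               # hypothetical subset: the instances in the GKE cluster
          weight: 50
    EOF

A matching DestinationRule defines the subsets by their version labels.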

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Expose the workloads:

    scripts/workloads.sh \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}" \
    --expose-with "ISTIO_COMPUTE_ENGINE"
    
    The workloads.sh script registers the workloads that are running in the Compute Engine instance with the service mesh and sets up the routing rules that expose them through the Istio ingress gateway.

    In the output, you see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

    In the output, ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address where the workload is served. Make a note of the IP address because you use it in a later step.

Test the Istio mesh expansion

In this section, you test the example workload that's running in the Compute Engine instance and that you exposed by using Istio.

  • Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

The source environment entry point (which connects to the Compute Engine instance) is still available at this stage. When you migrate a production environment, we recommend that you gradually redirect traffic to the target environment by updating the configuration of the load-balancing layer.

Visualize the service mesh

In this section, you use Kiali to show a visual representation of the service mesh.

  1. Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/kiali/console/graph/namespaces/?edges=requestDistribution&graphType=versionedApp&namespaces=default%2Cistio-system&idleNodes=false&duration=60&refresh=15000&operationNodes=false&idleEdges=false&injectServiceNodes=true&layout=dagre
    

    The Kiali service dashboard is displayed.

  2. In Cloud Shell, run a request multiple times for the main page of the example workload:

    ISTIO_INGRESS_GATEWAY_EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    
    for i in {1..10000}; do curl -s -o /dev/null -w "$(date --iso-8601=ns) - %{http_code}\n"  http://"${ISTIO_INGRESS_GATEWAY_EXTERNAL_IP}"/productpage; done
    

    The command generates traffic to the Bookinfo app. The output shows a list of timestamps, one for each HTTP request to the productpage service, along with the HTTP return code of each request (200 in this case).

    The output is similar to the following:

    2021-06-09T10:16:15,355323181+00:00 - 200
    2021-06-09T10:16:15,355323182+00:00 - 200
    2021-06-09T10:16:15,355323183+00:00 - 200
    [...]
    

    The command takes time to complete, so you can leave it running and proceed to the next step.

  3. In the Kiali service dashboard, you see a diagram of the current mesh, with traffic routed to services running in Compute Engine. All of the traffic is routed from the istio-ingressgateway to the productpage microservice that runs on the Compute Engine instance with the compute-engine version label, and to the kiali service to visualize the service mesh.

    You don't see the other services in the graph (details, reviews, and ratings) because the productpage microservice that runs in Compute Engine connects directly to the other microservices that run in Compute Engine. Those direct connections don't go through the service mesh.

    If you want all the traffic to go through the service mesh, you need to reconfigure the workloads running in Compute Engine to point to the services in the service mesh, instead of connecting to them directly.

    If you don't see the following diagram on the Kiali dashboard, refresh the page.


    The diagram on the Kiali dashboard shows that traffic is routed to services that are running in Compute Engine.

  4. In Cloud Shell, to stop the traffic generation command, press Control+C.

Migrate the workload

In this section, you migrate the components of the example workload from the Compute Engine instances to the GKE cluster. For each microservice of the example workload, you do the following:

  • Deploy an instance of the microservice in the GKE cluster.
  • Start routing traffic to both the microservice instances that run in Compute Engine and the instances that run in GKE.

The following diagram shows the architecture of the system for this section:

Target architecture with traffic routed to microservice instances in Compute Engine and GKE.

In the diagram, Cloud Load Balancing routes traffic to the Istio gateway, and then Istio routes the traffic either to services that are running in Compute Engine or to services that are running in GKE.

To migrate the components of the example workload, do the following:

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Deploy the workloads in the target environment:

    scripts/workloads.sh \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}" \
    --deploy-with "GKE"
    

    The workloads.sh script deploys an instance of each microservice of the example workload in the GKE cluster and configures the routing rules to split traffic between the instances that run in Compute Engine and the instances that run in GKE.

    You see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

The service mesh routes traffic to both the example workloads running in the Compute Engine instances and the ones running in the GKE cluster.

Test the workload running in Compute Engine and GKE

In this section, you test the example workload that you deployed in Compute Engine and GKE.

  • Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

    A Bookinfo page is displayed with information about books and relevant ratings.

Because you deployed the same version of the example workload in the Compute Engine instance and in the GKE cluster, the output is the same as in the previous test.

Visualize the service mesh

In this section, you use Kiali to show a visual representation of the service mesh.

  1. Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/kiali/console/graph/namespaces/?edges=requestDistribution&graphType=versionedApp&namespaces=default%2Cistio-system&idleNodes=false&duration=60&refresh=15000&operationNodes=false&idleEdges=false&injectServiceNodes=true&layout=dagre
    
  2. In Cloud Shell, run a request multiple times for the main page of the example workload:

    ISTIO_INGRESS_GATEWAY_EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    for i in {1..10000}; do curl -s -o /dev/null -w "$(date --iso-8601=ns) - %{http_code}\n"  http://"${ISTIO_INGRESS_GATEWAY_EXTERNAL_IP}"/productpage; done
    

    The command generates traffic to the Bookinfo app. The expected output is a list of the timestamps of the HTTP requests to the productpage service, and the HTTP return code of each request (200 in this case). The output is similar to the following:

    2021-06-09T10:16:15,355323181+00:00 - 200
    2021-06-09T10:16:15,355323182+00:00 - 200
    2021-06-09T10:16:15,355323183+00:00 - 200
    [...]
    
  3. In the Kiali service dashboard, you see a diagram of the current mesh, with traffic routed to services that are running in Compute Engine and GKE.

    Each instance of a microservice has a label that identifies its version:

    • Instances that are running in Compute Engine have the compute-engine label.
    • Instances that are running in GKE have a version label, for example, v1 or v3.

    Instances that are running in Compute Engine connect directly to the other instances in Compute Engine without going through the service mesh. Therefore, you don't see the traffic that goes from the instances running in Compute Engine to other instances.

    If you don't see the following diagram on the Kiali dashboard, refresh the page.


    The diagram on the Kiali dashboard shows the traffic that's routed to services that are running in Compute Engine and to services running in GKE.

  4. In Cloud Shell, to stop the traffic generation command, press Control+C.

Route traffic to the GKE cluster only

In this section, you route traffic only to the workload instances that are running in the GKE cluster. For each service of the example workload, you delete the WorkloadEntry reference that points to the instance running in Compute Engine. After the deletion, each service selects only the microservice instances that are running in the GKE cluster, so traffic is routed to the GKE cluster only. The following diagram shows the architecture of the system for this section:

Target architecture with traffic routed to the GKE cluster.

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Expose the workloads in the target environment only:

    scripts/workloads.sh \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}" \
    --expose-with "GKE_ONLY"
    

    The workloads.sh script deletes the WorkloadEntry references that point to the microservice instances that are running in Compute Engine from the GKE cluster.

    You see a confirmation of the deployment and how you can access it. The output is similar to the following:

    You can access the workload by loading http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

The service entry uses workloadSelector to automatically select the example workload that's running in the GKE cluster.
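For reference, a ServiceEntry that uses workloadSelector selects its workload instances by label instead of listing fixed endpoints, which is why deleting the WorkloadEntry references is enough to drain the Compute Engine environment. The following is a minimal sketch with hypothetical names:

    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: ratings                  # hypothetical name
    spec:
      hosts:
      - ratings.default.svc.cluster.local
      location: MESH_INTERNAL
      resolution: STATIC
      ports:
      - number: 9080
        name: http
        protocol: HTTP
      workloadSelector:
        labels:
          app: ratings               # selects pods and WorkloadEntry resources with this label
    EOF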

Test the workload running in GKE

  • Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/productpage
    

    A Bookinfo page is displayed with information about books and relevant ratings.

Visualize the service mesh

In this section, you use Kiali to show a visual representation of the service mesh.

  1. Open a browser and go to the following URL, where ISTIO_INGRESS_GATEWAY_EXTERNAL_IP is the IP address from the previous step:

    http://ISTIO_INGRESS_GATEWAY_EXTERNAL_IP/kiali/console/graph/namespaces/?edges=requestDistribution&graphType=versionedApp&namespaces=default%2Cistio-system&idleNodes=false&duration=60&refresh=15000&operationNodes=false&idleEdges=false&injectServiceNodes=true&layout=dagre
    
  2. In Cloud Shell, run a request multiple times for the main page of the example workload:

    ISTIO_INGRESS_GATEWAY_EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    for i in {1..10000}; do curl -s -o /dev/null -w "$(date --iso-8601=ns) - %{http_code}\n"  http://"${ISTIO_INGRESS_GATEWAY_EXTERNAL_IP}"/productpage; done
    

    This command generates traffic to the Bookinfo app. The expected output is a list of the timestamps of the HTTP requests to the productpage service, and the HTTP return code of each request (200 in this case). The output is similar to the following:

    2021-06-09T10:16:15,355323181+00:00 - 200
    2021-06-09T10:16:15,355323182+00:00 - 200
    2021-06-09T10:16:15,355323183+00:00 - 200
    [...]
    
  3. The Kiali service dashboard shows a diagram of the current mesh with traffic routed to services that are running in GKE. You deployed two instances of each microservice: one runs in the Compute Engine instance, and the other runs in the GKE cluster. However, because you removed the WorkloadEntry references that point to the microservice instances that run in Compute Engine, the services only select the microservice instances that are running in the GKE cluster.

    If you don't see the following diagram on the Kiali dashboard, refresh the page:


    The diagram on the Kiali dashboard shows the traffic that's routed to services that are running in GKE.

  4. In Cloud Shell, to stop the traffic generation command, press Control+C.

Retire the source environment

Because all the traffic is now routed to the GKE cluster, you can stop the instances of the workload that are running in Compute Engine.

During a production migration, keep the source data center ready for your rollback strategy. We recommend that you start retiring the source data center only when you're sure that the new solution is working as expected and that all backup and fault-tolerance mechanisms are in place.

The following diagram shows the architecture of the system for this section:

Source environment with no workload instances running in Compute Engine.

In the diagram, Istio routes traffic to services that are running in GKE only, and the workloads that are running in Compute Engine are retired.

To retire the source environment, do the following:

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"
    
  2. Deploy the workloads in the target environment only:

    scripts/workloads.sh \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}" \
    --deploy-with "GKE_ONLY"
    

    The workloads.sh script stops the containers running in the Compute Engine instances.
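    To confirm that the containers are stopped, you can list the running containers on the instance. This sketch assumes the instance name; SOURCE_ENVIRONMENT_INSTANCE_NAME is a hypothetical placeholder:

    gcloud compute ssh SOURCE_ENVIRONMENT_INSTANCE_NAME \
        --zone "${DEFAULT_ZONE}" \
        --command "docker ps"

    Expect no Bookinfo containers in the output.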

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the individual resources

  1. In Cloud Shell, change the working directory to the repository directory:

    cd "${DEPLOYMENT_DIRECTORY_PATH}"/terraform
    
  2. Delete the resources that you provisioned:

    terraform destroy -auto-approve
    

What's next