Building a multi-cluster service mesh on GKE with shared control-plane, single-VPC architecture

This tutorial describes how to deploy apps across multiple Kubernetes clusters by using an Istio multi-cluster service mesh. An Istio multi-cluster service mesh lets services that are running on multiple Kubernetes clusters securely communicate with one another. The Kubernetes clusters can run anywhere, even on different cloud platforms, for example, Google Kubernetes Engine (GKE) clusters running in Google Cloud or a Kubernetes cluster running in an on-premises data center.

Istio is an open source implementation of a service mesh that lets you discover, dynamically route to, and securely connect to Services running on Kubernetes clusters. Istio also provides a policy-driven framework for routing, load-balancing, throttling, telemetry, circuit-breaking, authenticating, and authorizing service calls in the mesh with few or no changes in your application code. When Istio is installed in a Kubernetes cluster, it uses the Kubernetes Service registry to automatically discover and create a service mesh of interconnected services (or microservices) that are running in the cluster.

The components of Istio can be described in terms of a data plane and a control plane. The data plane is responsible for carrying the network traffic in the mesh, and the control plane is responsible for configuring and managing the mesh:

  • Data plane: Istio participates in Service communication by injecting a sidecar container to run a proxy in each of the Pods for the Services in the mesh. The sidecar runs an intelligent proxy called Envoy to provide routing, security, and observability for the Services.
  • Control plane: A component called Pilot is responsible for providing configuration to the Envoy sidecars. Certificates are assigned to each Service by a component called Citadel.

Microservices running in one Kubernetes cluster might need to talk to microservices running in other Kubernetes clusters. For example, microservices might need to talk across regions and environments, or microservice owners might maintain their own Kubernetes clusters. Istio lets you create a service mesh beyond a single Kubernetes cluster to include microservices running in remote clusters and even external microservices running in VMs, outside of Kubernetes.

Istio provides two main configurations for multi-cluster deployments: a shared control plane, where a single Istio control plane manages every cluster in the mesh, and replicated control planes, where each cluster runs its own control plane. This tutorial uses the shared control-plane configuration.

The shared Istio control-plane configuration uses an Istio control plane running on one of the clusters. The control plane's Pilot manages Service configuration on the local and remote clusters and configures the Envoy sidecars for all the clusters. This configuration results in a single Istio service mesh that encompasses Services running in multiple Kubernetes clusters. In a shared control-plane configuration, the Pilot running in the Istio control-plane cluster must have IP address access to all Pods running in every cluster.

You can achieve the Pod-to-Pod IP address connectivity between clusters in one of two ways:

  1. You create all clusters in a flat network, for example, a single VPC, with firewall rules that allow Pod-to-Pod IP address connectivity between clusters. The clusters might also exist in VPN-connected networks. In either case, the Pilot Pod in the control cluster can reach all Pods in remote clusters directly through their Pod IP addresses.
  2. Your clusters are not in a single network and rely on a different mechanism for the Pilot Pod in the control cluster to reach Pods in remote clusters. In this scenario, the Pilot Pod cannot reach other Pods directly by Pod IP address, so you use Istio ingress gateways to provide network connectivity between clusters in disparate networks.

In this tutorial, you deploy Istio in two GKE clusters using a shared control-plane architecture in a single VPC network. The Pilot Pod in the Istio control-plane cluster will have direct IP address connectivity to Pods running in the remote cluster. For this tutorial, you use a 10-tier demo microservices app called Online Boutique that is split across two GKE clusters. The microservices are written in different programming languages. To see the language for each microservice, see the README page.

You build the following architecture inside a Google Cloud project.

Architecture that deploys 10 microservices across two clusters.

In this architecture, you have a control cluster and a remote cluster. The Istio control plane is deployed on the control cluster. The clusters communicate with various microservices both locally (in the same cluster) and nonlocally (in the other cluster).

Objectives

  • Create two GKE clusters, control and remote, in a single VPC.
  • Install Istio in multi-cluster mode on both GKE clusters, and deploy the Istio control plane on the control cluster.
  • Install the Online Boutique app split across both clusters.
  • Observe the expanded service mesh in both clusters.

Costs

This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine (GKE) and Compute Engine.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Enable the GKE and Cloud Source Repositories APIs.

    Enable the APIs

Setting up your environment

You run all the terminal commands in this tutorial from Cloud Shell.

  1. In the Google Cloud Console, open Cloud Shell:

    OPEN Cloud Shell

  2. Download the required files for this tutorial by cloning the following GitHub repository:

    cd $HOME
    git clone https://github.com/GoogleCloudPlatform/istio-multicluster-gke.git
    
  3. Make the repository folder your $WORKDIR folder from which you do all the tasks related to this tutorial:

    cd $HOME/istio-multicluster-gke
    WORKDIR=$(pwd)
    

    You can delete the folder when you finish the tutorial.

  4. Install kubectx/kubens:

    git clone https://github.com/ahmetb/kubectx $WORKDIR/kubectx
    export PATH=$PATH:$WORKDIR/kubectx
    

    These tools make it easier to work with multiple Kubernetes clusters by switching contexts or namespaces.
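
    For example, after you rename the cluster contexts later in this tutorial, you can switch between clusters and namespaces with single commands. The following sketch is only an illustration; the control context and the online-boutique namespace are created in later sections:

    kubectx                     # list all contexts in the kubeconfig file
    kubectx control             # switch the active context to the control cluster
    kubens online-boutique      # set the default namespace for kubectl commands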

Creating GKE clusters

In this section, you create two GKE clusters in the default VPC with alias IP addresses enabled. With alias IP ranges, GKE clusters can allocate IP addresses from a CIDR block known to Google Cloud. This configuration makes Pod IP addresses natively routable within the VPC, which lets Pods in different clusters have direct IP address connectivity.
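
The clusters don't exist yet, but after you create them in the next step you can optionally confirm that alias IP ranges (VPC-native networking) are enabled. The following sketch assumes the cluster name and zone used for the control cluster in this tutorial:

    # Prints "True" when the cluster uses alias IP ranges.
    gcloud container clusters describe control --zone us-west2-a \
        --format="value(ipAllocationPolicy.useIpAliases)"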

  1. In Cloud Shell, create two GKE clusters: one cluster named control where the Istio control plane is installed, and a second cluster named remote to be added to the Istio multi-cluster service mesh. You create both clusters in the default VPC but in different regions.

    gcloud container clusters create control --zone us-west2-a --username "admin" \
        --machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
        --num-nodes "4" --network "default" --enable-stackdriver-kubernetes --enable-ip-alias --async
    
    gcloud container clusters create remote --zone us-central1-f --username "admin" \
        --machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
        --num-nodes "4" --network "default" --enable-stackdriver-kubernetes --enable-ip-alias
    
  2. Wait a few minutes until both clusters are created. Verify that the status for each cluster is RUNNING:

    gcloud container clusters list
    

    The output is similar to the following:

    NAME     LOCATION       MASTER_VERSION    MASTER_IP        MACHINE_TYPE   NODE_VERSION      NUM_NODES  STATUS
    remote   us-central1-f  1.16.15-gke.6000  104.197.183.119  n1-standard-2  1.16.15-gke.6000  4          RUNNING
    control  us-west2-a     1.16.15-gke.6000  34.94.180.21     n1-standard-2  1.16.15-gke.6000  4          RUNNING
    
  3. Set the KUBECONFIG variable to use a new kubeconfig file for this tutorial:

    touch ${WORKDIR}/istiokubecfg
    export KUBECONFIG=${WORKDIR}/istiokubecfg
    
  4. Connect to both clusters to generate entries in the kubeconfig file:

    export PROJECT_ID=$(gcloud info --format='value(config.project)')
    gcloud container clusters get-credentials control --zone us-west2-a --project ${PROJECT_ID}
    gcloud container clusters get-credentials remote --zone us-central1-f --project ${PROJECT_ID}
    

    A kubeconfig file is used for authentication to clusters. After you create the kubeconfig file, you can quickly switch context between clusters.
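
    For example, you can confirm that the new kubeconfig file contains an entry for each cluster:

    kubectl config get-contexts -o name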

  5. Use kubectx to rename the context names for convenience:

    kubectx control=gke_${PROJECT_ID}_us-west2-a_control
    kubectx remote=gke_${PROJECT_ID}_us-central1-f_remote
    

Configuring networking

In this section, you configure the VPC networking, including firewall rules, to let Pods in both clusters have direct IP address connectivity. When you enable alias IP ranges for GKE clusters, two secondary IP ranges are created on the subnet for each cluster. The subnet's primary IP range is used for node IP addresses, and the two secondary ranges are used for Pod and Service IP addresses.

  1. In Cloud Shell, inspect the secondary IP addresses for both clusters:

    gcloud compute networks subnets describe default --region=us-west2 --format=json | jq '.secondaryIpRanges[]'
    gcloud compute networks subnets describe default --region=us-central1 --format=json | jq '.secondaryIpRanges[]'
    

    For the control cluster, the output is similar to the following:

    {
      "ipCidrRange": "10.56.0.0/14",
      "rangeName": "gke-control-pods-47496b0c"
    }
    {
      "ipCidrRange": "10.0.0.0/20",
      "rangeName": "gke-control-services-47496b0c"
    }
    

    For the remote cluster, the output is similar to the following:

    {
      "ipCidrRange": "10.0.16.0/20",
      "rangeName": "gke-remote-services-66101313"
    }
    {
      "ipCidrRange": "10.60.0.0/14",
      "rangeName": "gke-remote-pods-66101313"
    }
    
  2. Store each cluster's Pod IP range and each subnet's primary IP address range in variables for later use:

    CONTROL_POD_CIDR=$(gcloud container clusters describe control --zone us-west2-a --format=json | jq -r '.clusterIpv4Cidr')
    REMOTE_POD_CIDR=$(gcloud container clusters describe remote --zone us-central1-f --format=json | jq -r '.clusterIpv4Cidr')
    CONTROL_PRIMARY_CIDR=$(gcloud compute networks subnets describe default --region=us-west2 --format=json | jq -r '.ipCidrRange')
    REMOTE_PRIMARY_CIDR=$(gcloud compute networks subnets describe default --region=us-central1 --format=json | jq -r '.ipCidrRange')
    
  3. Create a variable that lists all four IP CIDR ranges:

    ALL_CLUSTER_CIDRS=$CONTROL_POD_CIDR,$REMOTE_POD_CIDR,$CONTROL_PRIMARY_CIDR,$REMOTE_PRIMARY_CIDR
    

    You need all the nodes to be able to communicate with each other and with the Pod CIDR ranges.

  4. Store the network tags for the cluster nodes in a variable:

    ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --format=json | jq -r '.[].tags.items[0]' | uniq | awk -vORS=, '{ print $1 }' | sed 's/,$/\n/')
    

    You use these network tags later in firewall rules.
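
    Optionally, print both variables to confirm that the pipelines produced comma-separated lists before you use them in the firewall rule:

    echo ${ALL_CLUSTER_CIDRS}
    echo ${ALL_CLUSTER_NETTAGS}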

  5. Create a firewall rule that allows traffic between the clusters' Pod CIDR ranges and nodes:

    gcloud compute firewall-rules create istio-multicluster-rule \
        --allow=tcp,udp,icmp,esp,ah,sctp \
        --direction=INGRESS \
        --priority=900 \
        --source-ranges="${ALL_CLUSTER_CIDRS}" \
        --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
    
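Optionally, you can verify the flat-network setup end to end before installing Istio. The following sketch (not a required part of this tutorial) starts a temporary busybox Pod in each cluster and pings a Pod in the other cluster by its Pod IP address, which the firewall rule you just created allows:

    kubectl --context remote run test-remote --image=busybox --restart=Never -- sleep 3600
    kubectl --context control run test-control --image=busybox --restart=Never -- sleep 3600
    kubectl --context remote wait --for=condition=Ready pod/test-remote
    kubectl --context control wait --for=condition=Ready pod/test-control
    REMOTE_POD_IP=$(kubectl --context remote get pod test-remote -o jsonpath='{.status.podIP}')
    kubectl --context control exec test-control -- ping -c 3 ${REMOTE_POD_IP}
    # Clean up the test Pods when you're done.
    kubectl --context control delete pod test-control
    kubectl --context remote delete pod test-remote
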

Installing Istio on both GKE clusters

In this section, you use istioctl and the IstioOperator API to install and configure Istio in a multi-cluster configuration on both GKE clusters.

  1. In Cloud Shell, download Istio:

    cd ${WORKDIR}
    export ISTIO_VER=1.8.2
    curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VER} TARGET_ARCH=x86_64 sh -
    

    In production, we recommend that you pin Istio to a specific version (that is, use the version number of a known and tested release) to ensure consistent behavior.

A multi-cluster service mesh deployment requires that you establish trust between all clusters in the mesh. You can use a common root certificate authority (CA) to generate intermediate certificates for the multiple clusters. By default, each cluster's Istio CA creates its own self-signed certificates, so the clusters don't share a root of trust. The root CA is used by all workloads within a mesh as the root of trust. Each Istio CA uses an intermediate CA signing key and certificate that is signed by the root CA. When multiple Istio CAs exist within a mesh, this establishes a hierarchy of trust among the CAs. The Istio repository comes with sample certificates that you can use for educational purposes only. In each cluster, you create the istio-system namespace and a secret called cacerts that contains these certificates.

  1. Create the istio-system namespace and a cacerts secret in both GKE clusters:

    kubectl --context control create namespace istio-system
    kubectl --context control create secret generic cacerts -n istio-system \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/ca-cert.pem \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/ca-key.pem \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/root-cert.pem \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/cert-chain.pem
    kubectl --context remote create namespace istio-system
    kubectl --context remote create secret generic cacerts -n istio-system \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/ca-cert.pem \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/ca-key.pem \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/root-cert.pem \
      --from-file=${WORKDIR}/istio-$ISTIO_VER/samples/certs/cert-chain.pem
    
  2. Create the Istio configuration for the control cluster:

    cat <<EOF > ${WORKDIR}/istio_control.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: istio-control
    spec:
      values:
        global:
          meshID: mesh1
          multiCluster:
            clusterName: control
          network: network1
      components:
        ingressGateways:
          - name: istio-eastwestgateway
            label:
              istio: eastwestgateway
              app: istio-eastwestgateway
              topology.istio.io/network: network1
            enabled: true
            k8s:
              env:
                # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
                - name: ISTIO_META_ROUTER_MODE
                  value: "sni-dnat"
                # traffic through this gateway should be routed inside the network
                - name: ISTIO_META_REQUESTED_NETWORK_VIEW
                  value: network1
              serviceAnnotations:
                cloud.google.com/load-balancer-type: "Internal"
                networking.gke.io/internal-load-balancer-allow-global-access: "true"
              service:
                ports:
                  - name: status-port
                    port: 15021
                    targetPort: 15021
                  - name: tls
                    port: 15443
                    targetPort: 15443
                  - name: tls-istiod
                    port: 15012
                    targetPort: 15012
                  - name: tls-webhook
                    port: 15017
                    targetPort: 15017
    EOF
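
    Optionally, before you install, you can render the manifests that this IstioOperator configuration produces and review them locally. The following sketch applies nothing to the cluster:

    ${WORKDIR}/istio-${ISTIO_VER}/bin/istioctl manifest generate \
        -f ${WORKDIR}/istio_control.yaml | less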
    
  3. Install the Istio control plane on the control cluster:

    ${WORKDIR}/istio-${ISTIO_VER}/bin/istioctl install --context control -f ${WORKDIR}/istio_control.yaml
    

    Enter y to proceed with the installation. Istio takes 2 to 3 minutes to install.

  4. Verify that all Istio deployments are running:

    kubectl --context control get pods -n istio-system
    

    Istio is ready when all of the Pods are either running or completed.

    The output is similar to the following:

    NAME                                     READY   STATUS    RESTARTS   AGE
    istio-eastwestgateway-69b49b9785-f5msp   1/1     Running   0          9m11s
    istio-ingressgateway-5c65f88f66-rdd2r    1/1     Running   0          52m
    istiod-6c56d9cbd8-k9klt                  1/1     Running   0          52m
    
  5. Wait until the istio-eastwestgateway Service is assigned an external IP address, and then inspect that address:

    kubectl --context control get svc istio-eastwestgateway -n istio-system
    

    The output is similar to the following:

    NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                           AGE
    istio-eastwestgateway   LoadBalancer   10.60.7.19   10.168.0.6    15021:31557/TCP,15443:30306/TCP,15012:31791/TCP,15017:32341/TCP   14m
    

    Note that the EXTERNAL-IP value is an internal load balancer IP address. Because both clusters are in the same VPC, they can reach the east-west gateway through an internal load balancer, and the gateway doesn't need to be exposed publicly.

  6. The remote cluster's Envoy proxies must be able to communicate with the istiod Service in the control cluster. Expose the istiod Service in the control cluster through the istio-eastwestgateway by creating a Gateway and a VirtualService:

    kubectl --context control apply -f ${WORKDIR}/istio-${ISTIO_VER}/samples/multicluster/expose-istiod.yaml
    

    The output is similar to the following:

    gateway.networking.istio.io/istiod-gateway created
    virtualservice.networking.istio.io/istiod-vs created
    

    In the next step, you give the Istio control plane in the control cluster access to the remote cluster's API server. This access does the following:

    1. Lets the control plane authenticate connection requests from workloads running in the remote cluster. Without API server access, the control plane rejects the requests.
    2. Enables discovery of Service endpoints running in the remote cluster.
  7. Give the Istio control plane in the control cluster access to the remote cluster's API server by creating a remote secret:

    ${WORKDIR}/istio-${ISTIO_VER}/bin/istioctl x create-remote-secret \
    --context=remote \
    --name=remote | \
    kubectl apply -f - --context=control
    

    The output is similar to the following:

    secret/istio-remote-secret-remote created
    

    The preceding command configures the control cluster with permissions to access resources in the remote cluster.

    The Istio control plane requires access to all clusters in the mesh to discover Services, endpoints, and Pod attributes. The istioctl x create-remote-secret command generates a kubeconfig file for the remote cluster by using the credentials of a service account that has the minimal required role-based access control (RBAC) permissions, and it stores that kubeconfig as a secret in the control cluster.
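
    The Istio configuration for the remote cluster, which you create in the next step, references a DISCOVERY_ADDRESS variable. That address must be reachable from the remote cluster and point at the control plane. The following sketch assumes that you use the internal load balancer IP address of the istio-eastwestgateway Service that you inspected earlier:

    export DISCOVERY_ADDRESS=$(kubectl --context control -n istio-system get svc istio-eastwestgateway \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo ${DISCOVERY_ADDRESS}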

  8. Create the Istio configuration for the remote cluster:

    cat <<EOF > ${WORKDIR}/istio_remote.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      profile: remote
      values:
        global:
          meshID: mesh1
          multiCluster:
            clusterName: remote
          network: network1
          remotePilotAddress: ${DISCOVERY_ADDRESS}
    EOF
    
  9. Install Istio on the remote cluster:

    ${WORKDIR}/istio-${ISTIO_VER}/bin/istioctl install --context=remote -f ${WORKDIR}/istio_remote.yaml
    

    Enter y to proceed with the installation. Istio takes 2 to 3 minutes to install.

  10. Inspect the Istio deployment in the remote cluster:

    kubectl --context remote -n istio-system get pods
    

    The output is similar to the following:

    NAME                                   READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-c68779485-4fpmm   1/1     Running   0          3m50s
    istiod-866555cf49-w6qsx                1/1     Running   0          3m57s
    

    The two clusters are now set up, with the control cluster providing the Istio control plane to the remote cluster. In the next section, you deploy a sample application that is split between the two clusters.
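
    Optionally, you can check which Envoy proxies are connected to the shared control plane by running istioctl proxy-status against the control cluster. Proxies from the remote cluster appear in this list as workloads with sidecars are deployed there in the next section:

    ${WORKDIR}/istio-${ISTIO_VER}/bin/istioctl --context control proxy-status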

Deploying the Online Boutique app

In this section, you install the Online Boutique application to both clusters.

  1. In Cloud Shell, create the online-boutique namespace in both clusters:

    for cluster in $(kubectx);
    do
        kubectx $cluster;
        kubectl create namespace online-boutique;
    done
    
  2. Label the online-boutique namespace in both clusters for automatic Istio sidecar proxy injection:

    for cluster in $(kubectx);
    do
        kubectx $cluster;
        kubectl label namespace online-boutique istio-injection=enabled
    done
    

    This step ensures that all Pods that are created in the online-boutique namespace have the Envoy sidecar container deployed.
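
    Optionally, confirm that the label is present in both clusters:

    kubectl --context control get namespace online-boutique --show-labels
    kubectl --context remote get namespace online-boutique --show-labels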

  3. Install the Online Boutique app resources in the control cluster:

    kubectl --context control -n online-boutique apply -f $WORKDIR/istio-single-controlplane/single-network/control
    

    The output is similar to the following:

    deployment.apps/emailservice created
    deployment.apps/checkoutservice created
    deployment.apps/frontend created
    deployment.apps/paymentservice created
    deployment.apps/productcatalogservice created
    deployment.apps/currencyservice created
    deployment.apps/shippingservice created
    deployment.apps/adservice created
    gateway.networking.istio.io/frontend-gateway created
    virtualservice.networking.istio.io/frontend-ingress created
    serviceentry.networking.istio.io/currency-provider-external created
    virtualservice.networking.istio.io/frontend created
    serviceentry.networking.istio.io/whitelist-egress-googleapis created
    service/emailservice created
    service/checkoutservice created
    service/recommendationservice created
    service/frontend created
    service/paymentservice created
    service/productcatalogservice created
    service/cartservice created
    service/currencyservice created
    service/shippingservice created
    service/redis-cart created
    service/adservice created
    
  4. Install the Online Boutique app resources in the remote cluster:

    kubectl --context remote -n online-boutique apply -f $WORKDIR/istio-single-controlplane/single-network/remote
    

    The output is similar to the following:

    deployment.apps/loadgenerator created
    deployment.apps/cartservice created
    deployment.apps/recommendationservice created
    service/emailservice created
    service/checkoutservice created
    service/recommendationservice created
    service/frontend created
    service/paymentservice created
    service/productcatalogservice created
    service/cartservice created
    service/currencyservice created
    service/shippingservice created
    service/redis-cart created
    service/adservice created
    
  5. Verify that all workloads are running in the control cluster:

    kubectl --context control -n online-boutique get pods
    

    The output is similar to the following:

    NAME                                     READY   STATUS    RESTARTS   AGE
    adservice-5db678b487-zs6g9               2/2     Running   0          69s
    checkoutservice-b5b8858c7-djnzk          2/2     Running   0          70s
    currencyservice-954d8c5f-mv7kh           2/2     Running   0          70s
    emailservice-5c5555556b-9jk59            2/2     Running   0          71s
    frontend-6fbb48ffc6-gmnsv                2/2     Running   0          70s
    paymentservice-5684b97df7-l9ccn          2/2     Running   0          70s
    productcatalogservice-55479b967c-vqw6w   2/2     Running   0          70s
    shippingservice-59bd8c7b8c-wln4v         2/2     Running   0          69s
    
  6. Verify that all workloads are running in the remote cluster:

    kubectl --context remote -n online-boutique get pods
    

    The output is similar to the following:

    NAME                                     READY   STATUS    RESTARTS   AGE
    cartservice-5db5d5c5f9-vvwgx             2/2     Running   0          63s
    loadgenerator-8bcfd68db-gmlfk            2/2     Running   0          63s
    recommendationservice-859c7c66d5-f2x9m   2/2     Running   0          63s
    

The output in the previous two steps shows that the Online Boutique app microservices are split between the control and remote clusters.
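
Optionally, you can confirm that sidecars in the control cluster see endpoints that run only in the remote cluster. The following sketch assumes that the frontend Deployment carries the app=frontend label, as in the upstream Online Boutique manifests; the cartservice endpoints that it prints should be Pod IP addresses from the remote cluster's Pod range (10.60.0.0/14 in the earlier example output):

    FRONTEND_POD=$(kubectl --context control -n online-boutique get pod \
        -l app=frontend -o jsonpath='{.items[0].metadata.name}')
    ${WORKDIR}/istio-${ISTIO_VER}/bin/istioctl --context control -n online-boutique \
        proxy-config endpoints ${FRONTEND_POD} | grep cartservice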

Accessing the Online Boutique app

  1. In Cloud Shell, get the Istio ingress gateway IP address for the control cluster:

    kubectl --context control get -n istio-system service istio-ingressgateway -o json | jq -r '.status.loadBalancer.ingress[0].ip'
    

    The output shows the Istio ingress gateway IP address.
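
    Optionally, you can check from the command line that the frontend responds before opening it in a browser. The following sketch requests the home page through the ingress gateway and prints the HTTP status code, which should be 200:

    INGRESS_IP=$(kubectl --context control -n istio-system get svc istio-ingressgateway \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s -o /dev/null -w "%{http_code}\n" http://${INGRESS_IP}/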

  2. Paste the Istio ingress gateway IP address into a web browser address bar and load the page. You see the Online Boutique home page.

    Online Boutique home page showing pictures of various products such as bikes, cameras, typewriters, and so on.

    Navigate around the app, browse various products, place them in your cart, check out, and so on.

    You see that the Online Boutique app is fully functional, even though it runs across two Kubernetes clusters in two different regions.

Monitoring the service mesh

You can use Kiali to visualize the service mesh. Kiali is a service-mesh observability tool. To use Kiali, you install Prometheus and Kiali on the control cluster.

  1. Deploy the Prometheus and Kiali add-ons to the control cluster:

    kubectl --context control apply -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/addons/prometheus.yaml
    kubectl --context control apply -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/addons/kiali.yaml
    

    If the Kiali installation fails with errors, run the same command again; the failures are typically caused by timing issues that are resolved on a retry.

  2. Ensure that the Prometheus and Kiali Pods are running:

    kubectl --context control get pods -n istio-system
    

    The output is similar to the following:

    kiali-cb9fb9554-x2r5z                    1/1     Running   0          7m46s
    prometheus-6d87d85c88-zv8cr              2/2     Running   0          3m57s
    
  3. In Cloud Shell, expose Kiali on the control cluster:

    kubectl --context control port-forward svc/kiali 8080:20001 -n istio-system > /dev/null &
    
  4. Open the Kiali web interface on the control cluster. Select Web Preview, and then select Preview on port 8080.

  5. At the Kiali login prompt, log in with username admin and password admin.

  6. From the menu, select Graph.

  7. From the Select a namespace drop-down list, select online-boutique.

  8. From the menu under Graph, select Service graph.

  9. Optionally, from the Display menu, select Traffic Animation to see loadgenerator generating traffic to your app.

    Traffic animation that shows the `loadgenerator` generating traffic to your app.

The preceding image shows a shared Istio service mesh for microservices that are spread across two clusters. As new microservices are added to either cluster, they are automatically discovered and added to the mesh. The flat network gives you the flexibility of running multiple clusters while letting you govern all microservices through a single, shared control plane.

Cleaning up

After you've finished the tutorial, clean up the resources you created on Google Cloud so you won't be billed for them in the future. The following sections describe how to delete these resources.

Delete the project

  1. In the Cloud Console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Reset kubeconfig

  • Unset the KUBECONFIG variable:

    unset KUBECONFIG
    rm ${WORKDIR}/istiokubecfg
    

What's next