Provision managed Anthos Service Mesh

Managed Anthos Service Mesh is a Google-managed service mesh that you only need to enable. Google handles the reliability, upgrades, scaling, and security for you in a backward-compatible manner.

This page shows you how to use the fleet feature API to set up managed Anthos Service Mesh.

When you enable managed Anthos Service Mesh using the fleet API:

  • Google applies the recommended control plane configuration
  • Google enables automatic data plane management
  • Your cluster is enrolled in an Anthos Service Mesh release channel based on your Google Kubernetes Engine (GKE) cluster's release channel, and your control plane and data plane are kept up to date with each new release.
  • Google enables endpoint discovery and cross-cluster load balancing throughout your service mesh with default settings, although you must create firewall rules.

Use this onboarding path if you want:

  • To use gcloud to configure managed Anthos Service Mesh using Google Cloud APIs and IAM.
  • To configure Anthos Service Mesh using the same APIs as other fleet features.
  • To automatically get the recommended configuration of Anthos Service Mesh for each of your clusters.

Prerequisites

As a starting point, this guide assumes that you meet the following requirements.

Requirements

  • One or more clusters with a supported version of GKE, in one of the supported regions.
  • Ensure that your cluster has enough capacity for the required components that managed Anthos Service Mesh installs in the cluster.
    • The mdp-controller deployment in kube-system namespace requests cpu: 50m, memory: 128Mi.
    • The istio-cni-node daemonset in kube-system namespace requests cpu: 100m, memory: 100Mi on each node.
  • Ensure that the client machine that you provision managed Anthos Service Mesh from has network connectivity to the API server.
  • Your clusters must be registered to a fleet. This is included in the instructions, or can be done separately before provisioning.
  • Your project must have the Service Mesh fleet feature enabled. This is included in the instructions or can be done separately.
  • GKE Autopilot is only supported with GKE version 1.21.3+.

  • Istio CNI is required and installed by default when provisioning managed Anthos Service Mesh.

  • Managed Anthos Service Mesh can use multiple GKE clusters in a single-project single-network environment or a multi-project single-network environment.

    • If you join clusters that are not in the same project, they must be registered to the same fleet host project, and the clusters must be in a shared VPC configuration together on the same network.
    • For a single-project multi-cluster environment, the fleet project can be the same as the cluster project. For more information about fleets, see Fleets Overview.
    • For a multi-project environment, we recommend that you host the fleet in a separate project from the cluster projects. If your organizational policies and existing configuration allow it, we recommend that you use the shared VPC project as the fleet host project. For more information, see Setting up clusters with Shared VPC.
    • If your organization uses VPC Service Controls and you are provisioning managed Anthos Service Mesh on GKE clusters, then you must also follow the steps in VPC Service Controls for Anthos Service Mesh.
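The per-node capacity requirement above can be sanity-checked before provisioning. The following sketch runs the check against a hypothetical snapshot of node allocatable resources (node names and values are made up, and are assumed to be pre-converted to millicores and MiB); on a live cluster you could build a similar snapshot from `kubectl get nodes`.

```shell
# Hypothetical allocatable snapshot (NAME, CPU in millicores, memory in MiB).
cat <<'EOF' > /tmp/node_alloc.txt
NAME     CPU_M  MEM_MI
node-a   940    2850
node-b   60     90
EOF
# istio-cni-node requests 100m CPU and 100Mi memory on every node.
awk 'NR > 1 { print $1, (($2 >= 100 && $3 >= 100) ? "ok" : "INSUFFICIENT") }' /tmp/node_alloc.txt
```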

Limitations

We recommend that you review the list of managed Anthos Service Mesh supported features and limitations. In particular, note the following:

  • The IstioOperator API isn't supported since its main purpose is to control in-cluster components.

  • Enabling managed Anthos Service Mesh with the fleet API will use Mesh CA. If your service mesh deployment involves regulated workloads or requires Certificate Authority Service (CA Service), then follow Provision managed Anthos Service Mesh using asmcli.

  • Migrations from managed Anthos Service Mesh provisioned with asmcli to managed Anthos Service Mesh with the fleet API are not supported. Similarly, switching a fleet API configuration from --management manual to --management automatic is not supported.

  • For GKE Autopilot clusters, cross-project setup is only supported with GKE 1.23 or later.

  • For GKE Autopilot clusters, to stay within the GKE Autopilot resource limits, the default proxy resource requests and limits are set to 500m CPU and 512 MiB memory. You can override the default values by using custom injection.

  • The actual features available to managed Anthos Service Mesh depend on the release channel. For more information, review the full list of managed Anthos Service Mesh supported features and limitations.

  • During the provisioning process for a managed control plane, Istio CRDs corresponding to the selected channel are provisioned in the specified cluster. If there are existing Istio CRDs in the cluster, they will be overwritten.

  • Istio CNI is not compatible with GKE Sandbox. Managed Anthos Service Mesh on Autopilot, therefore, does not work with GKE Sandbox since managed Istio CNI is required.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Configure gcloud (even if you are using Cloud Shell).
    1. Authenticate with the Google Cloud CLI, where FLEET_PROJECT_ID is the ID of your fleet host project. Generally, the FLEET_PROJECT_ID is created by default and has the same name as the project.

             gcloud auth login --project FLEET_PROJECT_ID

    2. Update the components:

             gcloud components update

  5. Enable the required APIs on your fleet host project.

      gcloud services enable mesh.googleapis.com \
          --project=FLEET_PROJECT_ID

Enabling mesh.googleapis.com enables the following APIs:

API | Purpose | Can be disabled
meshconfig.googleapis.com | Anthos Service Mesh uses the Mesh Configuration API to relay configuration data from your mesh to Google Cloud. Additionally, enabling the Mesh Configuration API allows you to access the Anthos Service Mesh pages in the Google Cloud console and to use the Anthos Service Mesh certificate authority (Mesh CA). | No
meshca.googleapis.com | Related to the Anthos Service Mesh certificate authority used by managed Anthos Service Mesh. | No
container.googleapis.com | Required to create Google Kubernetes Engine (GKE) clusters. | No
gkehub.googleapis.com | Required to manage the mesh as a fleet. | No
monitoring.googleapis.com | Required to capture telemetry for mesh workloads. | No
stackdriver.googleapis.com | Required to use the Services UI. | No
opsconfigmonitoring.googleapis.com | Required to use the Services UI for off-Google Cloud clusters. | No
connectgateway.googleapis.com | Required so that the managed Anthos Service Mesh control plane can access mesh workloads. | Yes*
trafficdirector.googleapis.com | Enables a highly available and scalable managed control plane. | Yes*
networkservices.googleapis.com | Enables a highly available and scalable managed control plane. | Yes*
networksecurity.googleapis.com | Enables a highly available and scalable managed control plane. | Yes*
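To confirm which of these APIs are active, you can compare the enabled-services list against the required set. The sketch below uses a hypothetical snapshot saved to a file; on a real project you could produce the snapshot with `gcloud services list --enabled --format='value(config.name)' --project FLEET_PROJECT_ID`.

```shell
# Hypothetical snapshot of enabled services on the fleet host project.
cat <<'EOF' > /tmp/enabled.txt
container.googleapis.com
gkehub.googleapis.com
mesh.googleapis.com
meshconfig.googleapis.com
monitoring.googleapis.com
EOF
# Check a few required APIs against the snapshot.
for api in mesh.googleapis.com meshconfig.googleapis.com meshca.googleapis.com; do
  if grep -qx "$api" /tmp/enabled.txt; then echo "$api: enabled"; else echo "$api: MISSING"; fi
done
```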

Configure managed Anthos Service Mesh

The steps required to provision managed Anthos Service Mesh using the fleet API depend on whether you prefer to enable it by default for new fleet clusters or to enable it per cluster.

Configure for your fleet

If you have enabled Google Kubernetes Engine (GKE) Enterprise edition, you can enable managed Anthos Service Mesh as a default configuration for your fleet. This means that every new GKE on Google Cloud cluster registered during cluster creation will have managed Anthos Service Mesh enabled on the cluster. You can find out more about fleet default configuration in Manage fleet-level features.

To enable fleet-level defaults for managed Anthos Service Mesh, complete the following steps:

Console

  1. In the Google Cloud console, go to the Feature Manager page.

    Go to Feature Manager

  2. In the Service Mesh pane, click Configure.

  3. Review the settings that are inherited by all new clusters that you create in the Google Cloud console and register to the fleet.

  4. To apply these settings, click Configure.

  5. In the confirmation dialog, click Confirm.

  6. Optional: Sync existing clusters to the default settings:

    1. In the Clusters in the fleet list, select the clusters that you want to sync. You can only select clusters that have Anthos Service Mesh installed.
    2. Click Sync to fleet settings and click Confirm in the confirmation dialog that appears. This operation can take a few minutes to complete.

gcloud

To configure fleet-level defaults using the Google Cloud CLI, configure the following settings:

  • Fleet-level settings

    • Create a mesh.yaml file that only contains the single line management: automatic:

      echo "management: automatic" > mesh.yaml
      
    • Enable Anthos Service Mesh for your fleet:

      gcloud container fleet mesh enable --project FLEET_PROJECT_ID \
          --fleet-default-member-config mesh.yaml
      

      If you see the following error, then you need to enable GKE Enterprise.

      ERROR: (gcloud.container.fleet.mesh.enable) FAILED_PRECONDITION: The
      [anthos.googleapis.com] service is required for this operation and is not
      enabled for the project [PROJECT_NUMBER]. Please use the Google Developers
      Console to enable it.: failed precondition
      
  • Cluster-level settings

    • When you're ready to create clusters to use with Anthos Service Mesh, create and register them in a single step with Google Cloud CLI to use the default configuration. For example:

      gcloud container clusters create-auto CLUSTER_NAME \
          --fleet-project FLEET_PROJECT_ID \
          --location=LOCATION
      

      You can get the project number for your fleet project by running the following command:

      gcloud projects list --filter="FLEET_PROJECT_ID" --format="value(PROJECT_NUMBER)"
      

      The --location flag is the compute zone or region (such as us-central1-a or us-central1) for the cluster.

    • If your cluster's project differs from your fleet host project, you must allow Anthos Service Mesh service accounts in the fleet project to access the cluster project, and enable required APIs on the cluster project. You only need to do this once for each cluster project.

      Grant service accounts in the fleet project permission to access the cluster project:

      gcloud projects add-iam-policy-binding "CLUSTER_PROJECT_ID"  \
          --member "serviceAccount:service-FLEET_PROJECT_NUMBER@gcp-sa-servicemesh.iam.gserviceaccount.com" \
          --role roles/anthosservicemesh.serviceAgent
      

      Enable the Mesh API on the cluster's project:

      gcloud services enable mesh.googleapis.com \
        --project=CLUSTER_PROJECT_ID
      

      Replace CLUSTER_PROJECT_ID with the unique identifier of your cluster project. If you created your cluster in the same project as your fleet, then the CLUSTER_PROJECT_ID is the same as the FLEET_PROJECT_ID.
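The service agent member string in the IAM binding above can be assembled from the fleet project number. The number 746296320118 below is a hypothetical example (it matches the sample output later on this page); on a real project you could derive it with `gcloud projects describe FLEET_PROJECT_ID --format='value(projectNumber)'`.

```shell
# Hypothetical fleet project number; substitute your own.
FLEET_PROJECT_NUMBER=746296320118
# Build the member string expected by gcloud projects add-iam-policy-binding.
MEMBER="serviceAccount:service-${FLEET_PROJECT_NUMBER}@gcp-sa-servicemesh.iam.gserviceaccount.com"
echo "$MEMBER"
```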

Proceed to Verify the control plane has been provisioned.

Configure per cluster

Use the following steps to configure managed Anthos Service Mesh for each cluster in your mesh individually.

Enable the Anthos Service Mesh fleet feature

Enable Anthos Service Mesh on the fleet project. Note that if you plan to register multiple clusters, enabling Anthos Service Mesh happens at the fleet level, so you only have to run this command once.

gcloud container fleet mesh enable --project FLEET_PROJECT_ID

Register clusters to a fleet

  1. Register a GKE cluster using fleet workload identity. The --location flag is the compute zone or region (such as us-central1-a or us-central1) for the cluster.

    gcloud container clusters update CLUSTER_NAME \
      --location CLUSTER_LOCATION \
      --fleet-project FLEET_PROJECT_ID
    
  2. Verify your cluster is registered:

    gcloud container fleet memberships list --project FLEET_PROJECT_ID
    

    Example output:

    NAME                 EXTERNAL_ID                           LOCATION
    cluster-1            1d8e255d-2b55-4df9-8793-0435461a2cbc  us-central1
    

    Make note of the MEMBERSHIP_NAME, as you will need it when you enable automatic management.

  3. If your cluster's project differs from your fleet host project, you must allow Anthos Service Mesh service accounts in the fleet project to access the cluster project, and enable required APIs on the cluster project. You only need to do this once for each cluster project.

    If you previously used asmcli to configure managed Anthos Service Mesh for this combination of cluster and fleet projects, then these changes have already been applied and you don't have to run the following commands.

    Grant service accounts in the fleet project permission to access the cluster project:

    gcloud projects add-iam-policy-binding "CLUSTER_PROJECT_ID" \
      --member "serviceAccount:service-FLEET_PROJECT_NUMBER@gcp-sa-servicemesh.iam.gserviceaccount.com" \
      --role roles/anthosservicemesh.serviceAgent
    

    Enable the Mesh API on the cluster's project:

    gcloud services enable mesh.googleapis.com \
      --project=CLUSTER_PROJECT_ID
    
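The MEMBERSHIP_NAME from step 2 can be extracted from the `memberships list` output mechanically. The sketch below parses the sample output saved to a file; on a real project you could pipe the live `gcloud container fleet memberships list` command through the same awk expression.

```shell
# Sample output from step 2, saved to a file for parsing.
cat <<'EOF' > /tmp/memberships.txt
NAME                 EXTERNAL_ID                           LOCATION
cluster-1            1d8e255d-2b55-4df9-8793-0435461a2cbc  us-central1
EOF
# Pull the membership name from the first data row (skipping the header).
MEMBERSHIP_NAME=$(awk 'NR == 2 { print $1 }' /tmp/memberships.txt)
echo "$MEMBERSHIP_NAME"
```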

Enable automatic management

Run the following command to enable automatic management:

  gcloud container fleet mesh update \
     --management automatic \
     --memberships MEMBERSHIP_NAME \
     --project FLEET_PROJECT_ID \
     --location MEMBERSHIP_LOCATION

where:

  • MEMBERSHIP_NAME is the membership name listed when you verified that your cluster was registered to the fleet.
  • MEMBERSHIP_LOCATION is the location of your membership (either a region, or global).

    If you recently created the membership using the command in this guide, this should be the region of your cluster. If you have a zonal cluster, use the region corresponding to the cluster's zone. For example, if you have a zonal cluster in us-central1-c, then use the value us-central1.

    This value may be global if you registered prior to May 2023, or if you specified the global location when registering the membership. You can check your membership's location with gcloud container fleet memberships list --project FLEET_PROJECT_ID.
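The zone-to-region mapping described above can be sketched in shell, assuming GKE zone names always have the form REGION plus a single trailing -SUFFIX (for example, us-central1-c):

```shell
# Hypothetical zonal cluster location.
CLUSTER_LOCATION=us-central1-c
# Strip the trailing zone suffix to get the region.
MEMBERSHIP_LOCATION=${CLUSTER_LOCATION%-*}
echo "$MEMBERSHIP_LOCATION"
```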

Verify the control plane has been provisioned

After a few minutes, verify that the control plane status is ACTIVE:

gcloud container fleet mesh describe --project FLEET_PROJECT_ID

The output is similar to:

...
membershipSpecs:
  projects/746296320118/locations/us-central1/memberships/demo-cluster-1:
    mesh:
      management: MANAGEMENT_AUTOMATIC
membershipStates:
  projects/746296320118/locations/us-central1/memberships/demo-cluster-1:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed'
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: 'Revision(s) ready for use: asm-managed.'
...

Take note of the revision label in the details field, for example, asm-managed in the provided output. If you are using revision labels, then you need to set this label before you Deploy applications. If you are using default injection labels, then you don't need to set this label.
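The state and revision label can also be pulled out of the `fleet mesh describe` output with standard text tools. This sketch parses a fragment of the sample output above saved to a file; on a real project you could pipe the live command instead.

```shell
# Fragment of the sample describe output, saved for parsing.
cat <<'EOF' > /tmp/mesh-describe.txt
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed'
        state: ACTIVE
EOF
# Extract the control plane state and the revision label.
STATE=$(awk '/state:/ { print $2; exit }' /tmp/mesh-describe.txt)
REVISION=$(sed -n "s/.*'Ready: \(.*\)'.*/\1/p" /tmp/mesh-describe.txt)
echo "$STATE $REVISION"
```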

Configure kubectl to point to the cluster

The following sections involve running kubectl commands against each one of your clusters. Before proceeding through the following sections, run the following command for each of your clusters to configure kubectl to point to the cluster.

gcloud container clusters get-credentials CLUSTER_NAME \
      --location CLUSTER_LOCATION \
      --project CLUSTER_PROJECT_ID
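With several clusters, it can help to generate the get-credentials command for each one from a simple list. In this sketch the cluster names, locations, and CLUSTER_PROJECT_ID are placeholders, and the commands are printed rather than executed so the sketch is safe to dry-run.

```shell
# Hypothetical cluster list: one "name location" pair per line.
cat <<'EOF' > /tmp/clusters.txt
cluster-1 us-central1
cluster-2 us-east1-b
EOF
# Print one get-credentials command per cluster.
while read -r name location; do
  echo "gcloud container clusters get-credentials $name --location $location --project CLUSTER_PROJECT_ID"
done < /tmp/clusters.txt | tee /tmp/get-creds.txt
```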

Note that an ingress gateway isn't automatically deployed with the control plane. Decoupling the deployment of the ingress gateway and control plane allows you to more easily manage your gateways in a production environment. If the cluster needs an ingress gateway or an egress gateway, see Deploy gateways. To enable other optional features, see Enabling optional features on managed Anthos Service Mesh.

Managed data plane

If you use managed Anthos Service Mesh, Google fully manages upgrades of your proxies unless you disable it at the namespace, workload, or revision level.

With the managed data plane, the sidecar proxies and injected gateways are automatically updated in conjunction with the managed control plane by restarting workloads to re-inject new versions of the proxy. This normally completes 1-2 weeks after the managed control plane is upgraded.

If disabled, proxy management is driven by the natural lifecycle of the pods in the cluster and must be manually triggered by the user to control the update rate.

The managed data plane upgrades proxies by evicting pods that are running earlier versions of the proxy. The evictions are done gradually, honoring the pod disruption budget and controlling the rate of change.

The managed data plane doesn't manage the following:

  • Uninjected pods
  • Manually injected pods
  • Jobs
  • StatefulSets
  • DaemonSets

Disable the managed data plane (optional)

If you are provisioning managed Anthos Service Mesh on a new cluster, then you can disable the managed data plane completely, or for individual namespaces or pods. The managed data plane will continue to be disabled for existing clusters where it was disabled by default or manually.

To disable the managed data plane at the cluster level and revert back to managing the sidecar proxies yourself, change the annotation:

kubectl annotate --overwrite controlplanerevision -n istio-system \
REVISION_LABEL \
  mesh.cloud.google.com/proxy='{"managed":"false"}'

To disable the managed data plane for a namespace:

kubectl annotate --overwrite namespace NAMESPACE \
  mesh.cloud.google.com/proxy='{"managed":"false"}'

To disable the managed data plane for a pod:

kubectl annotate --overwrite pod POD_NAME \
  mesh.cloud.google.com/proxy='{"managed":"false"}'

Enable maintenance notifications

You can request to be notified about upcoming managed data plane maintenance up to a week before maintenance is scheduled. Maintenance notifications are not sent by default. You must also Configure a GKE maintenance window before you can receive notifications. When enabled, notifications are sent at least two days before the upgrade operation.

To opt in to managed data plane maintenance notifications:

  1. Go to the Communication page.

    Go to the Communication page

  2. In the Anthos Service Mesh Upgrade row, under the Email column, select the radio button to turn maintenance notifications ON.

Each user who wants to receive notifications must opt in separately. If you want to set an email filter for these notifications, the subject line is:

Upcoming upgrade for your Anthos Service Mesh cluster "CLUSTER_LOCATION/CLUSTER_NAME".

The following example shows a typical managed data plane maintenance notification:

Subject Line: Upcoming upgrade for your ASM cluster "<location/cluster-name>"

Dear Anthos Service Mesh user,

The Anthos Service Mesh components in your cluster ${instance_id} (https://console.cloud.google.com/kubernetes/clusters/details/${instance_id}/details?project=${project_id}) are scheduled to upgrade on ${scheduled_date_human_readable} at ${scheduled_time_human_readable}.

You can check the release notes (https://cloud.google.com/service-mesh/docs/release-notes) to learn about the new update.

In the event that this maintenance gets canceled, you'll receive another email.

Sincerely,

The Anthos Service Mesh Team

(c) 2022 Google LLC 1600 Amphitheater Parkway, Mountain View, CA 94043 You have received this announcement to update you about important changes to Google Cloud Platform or your account. You can opt out of maintenance window notifications by editing your user preferences: https://console.cloud.google.com/user-preferences/communication?project=${project_id}

Configure endpoint discovery (only for multi-cluster installations)

Before you continue, you should have already configured managed Anthos Service Mesh on each cluster as described in the previous steps. There is no need to indicate that a cluster is a primary cluster; this is the default behavior.

Additionally, ensure you have downloaded asmcli (only if you wish to verify your configuration with the sample application) and set the project and cluster variables.

Public clusters

Configure endpoint discovery between public clusters

Enabling managed Anthos Service Mesh with the fleet API will enable endpoint discovery for this cluster. However, you must open firewall ports. To disable endpoint discovery for one or more clusters, see the instructions to disable it in Endpoint discovery between public clusters with declarative API.

Private clusters

Configure endpoint discovery between private clusters

Enabling managed Anthos Service Mesh with the fleet API will enable endpoint discovery for this cluster. However, you must open firewall ports. To disable endpoint discovery for one or more clusters, see the instructions to disable it in Endpoint discovery between private clusters with declarative API.

For an example application with two clusters, see HelloWorld service example.

Deploy applications

If you have more than one cluster in the fleet using managed Anthos Service Mesh, then ensure endpoint discovery or firewall ports are configured as intended before proceeding and deploying applications.

To deploy applications, use either the label corresponding to the channel you configured during installation or istio-injection=enabled if you are using default injection labels.

Default injection label

kubectl label namespace NAMESPACE istio-injection=enabled istio.io/rev- --overwrite

Revision label

Before you deploy applications, remove any previous istio-injection labels from their namespaces and set the istio.io/rev=REVISION_LABEL label instead.

This is the revision label you identified when you verified the control plane. Replace REVISION_LABEL with the applicable label: asm-managed-rapid for Rapid, asm-managed for Regular, or asm-managed-stable for Stable.

The revision label corresponds to a release channel:

Revision label Channel
asm-managed Regular
asm-managed-rapid Rapid
asm-managed-stable Stable
kubectl label namespace NAMESPACE istio-injection- istio.io/rev=REVISION_LABEL --overwrite
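The channel-to-label mapping in the table above can be expressed as a small helper. This is a sketch; "Regular" is just an example input.

```shell
# Hypothetical channel value; substitute your cluster's release channel.
CHANNEL=Regular
# Map the release channel to its revision label (per the table above).
case "$CHANNEL" in
  Rapid)   REVISION_LABEL=asm-managed-rapid ;;
  Regular) REVISION_LABEL=asm-managed ;;
  Stable)  REVISION_LABEL=asm-managed-stable ;;
esac
echo "istio.io/rev=$REVISION_LABEL"
```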

At this point, you have successfully configured managed Anthos Service Mesh. If you have any existing workloads in labeled namespaces, then restart them so they get proxies injected.

You are now ready to deploy your applications, or you can deploy the Bookinfo sample application.

If you deploy an application in a multi-cluster setup, replicate the Kubernetes and control plane configuration in all clusters, unless you plan to limit that particular config to a subset of clusters. The configuration applied to a particular cluster is the source of truth for that cluster.

Customize injection (optional)

Per-pod configuration is available to override the default proxy settings on individual pods. To use it, add an istio-proxy container to your pod. The sidecar injector treats any configuration defined here as an override of the default injection template.

For example, the following configuration customizes a variety of settings, including lowering the CPU requests, adding a volume mount, and adding a preStop hook:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: hello
    image: alpine
  - name: istio-proxy
    image: auto
    resources:
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "200m"
        memory: "256Mi"
    volumeMounts:
    - mountPath: /etc/certs
      name: certs
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "10"]
  volumes:
  - name: certs
    secret:
      secretName: istio-certs

In general, any field in a pod can be set. However, care must be taken for certain fields:

  • Kubernetes requires the image field to be set before the injection has run. While you can set a specific image to override the default one, it is recommended to set the image to auto which will cause the sidecar injector to automatically select the image to use.
  • Some fields in containers depend on related settings. For example, the CPU request must not exceed the CPU limit. If the two fields aren't configured consistently, the pod may fail to start.
  • Kubernetes lets you set both requests and limits for resources in your PodSpec. GKE Autopilot only considers requests. For more information, see Setting resource limits in Autopilot.

Additionally, certain fields are configurable by annotations on the pod, although it is recommended to use the above approach to customizing settings. Additional care must be taken for certain annotations:

  • For GKE Standard, if sidecar.istio.io/proxyCPU is set, make sure to explicitly set sidecar.istio.io/proxyCPULimit. Otherwise the sidecar's CPU limit will be set as unlimited.
  • For GKE Standard, if sidecar.istio.io/proxyMemory is set, make sure to explicitly set sidecar.istio.io/proxyMemoryLimit. Otherwise the sidecar's memory limit will be set as unlimited.
  • For GKE Autopilot, configuring resource requests and limits by using annotations might overprovision resources. Use the image template approach instead to avoid this. See Resource modification examples in Autopilot.

For example, see the below resources annotation configuration:

spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyCPU: "200m"
        sidecar.istio.io/proxyCPULimit: "200m"
        sidecar.istio.io/proxyMemory: "256Mi"
        sidecar.istio.io/proxyMemoryLimit: "256Mi"

Verify control plane metrics

You can view the version of the control plane and data plane in Metrics Explorer.

To verify that your configuration works correctly:

  1. In the Google Cloud console, view the control plane metrics:

    Go to Metrics Explorer

  2. Choose your workspace and add a custom query using the following parameters:

    • Resource type: Kubernetes Container
    • Metric: Proxy Clients
    • Filter: container_name="cr-REVISION_LABEL"
    • Group By: revision label and proxy_version label
    • Aggregator: sum
    • Period: 1 minute

    When you run Anthos Service Mesh with both a Google-managed and an in-cluster control plane, you can tell the metrics apart by their container name. For example, managed metrics have container_name="cr-asm-managed", while unmanaged metrics have container_name="discovery". To display metrics from both, remove the Filter on container_name="cr-asm-managed".

  3. Verify the control plane version and proxy version by inspecting the following fields in Metrics Explorer:

    • The revision field indicates the control plane version.
    • The proxy_version field indicates the proxy version.
    • The value field indicates the number of connected proxies.

    For the current channel to Anthos Service Mesh version mapping, see Anthos Service Mesh versions per channel.

Migrate applications to managed Anthos Service Mesh

To migrate applications from in-cluster Anthos Service Mesh to managed Anthos Service Mesh, perform the following steps:

  1. Replace the current namespace label. The steps required depend on whether you wish to use default injection labels (for example, istio-injection=enabled) or the revision label.

    Default injection label

    1. Run the following command to move the default tag to the managed revision:

      istioctl tag set default --revision REVISION_LABEL
      
    2. Run the following command to label the namespace using istio-injection=enabled, if it wasn't already:

      kubectl label namespace NAMESPACE istio-injection=enabled istio.io/rev- \
      --overwrite
      

    Revision label

    If you used the istio.io/rev=REVISION_LABEL label, then run the following command:

    kubectl label namespace NAMESPACE istio-injection- istio.io/rev=REVISION_LABEL \
        --overwrite
    
  2. Perform a rolling upgrade of deployments in the namespace:

    kubectl rollout restart deployment -n NAMESPACE
    
  3. Test your application to verify that the workloads function correctly.

  4. If you have workloads in other namespaces, repeat the previous steps for each namespace.

  5. If you deployed the application in a multi-cluster setup, replicate the Kubernetes and Istio configuration in all clusters, unless there is a desire to limit that configuration to a subset of clusters only. The configuration applied to a particular cluster is the source of truth for that cluster.

  6. Check that the metrics appear as expected by following the steps in Verify control plane metrics.
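Steps 1 and 2 can be scripted when you have many namespaces to migrate. In this sketch the namespace names are hypothetical and the commands are echoed rather than run, so it is safe to dry-run; remove the echo to apply it for real.

```shell
# Hypothetical namespaces to migrate; asm-managed is the Regular-channel label.
for ns in frontend backend; do
  echo "kubectl label namespace $ns istio-injection- istio.io/rev=asm-managed --overwrite"
  echo "kubectl rollout restart deployment -n $ns"
done | tee /tmp/migrate-cmds.txt
```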

If you are satisfied that your application works as expected, you can remove the in-cluster istiod after you switch all namespaces to the managed control plane, or keep it as a backup; istiod automatically scales down to use fewer resources. To remove it, skip to Delete old control plane.

If you encounter problems, you can identify and resolve them by using the information in Resolving managed control plane issues and if necessary, roll back to the previous version.

Delete old control plane

After you install and confirm that all namespaces use the Google-managed control plane, you can delete the old control plane.

kubectl delete Service,Deployment,HorizontalPodAutoscaler,PodDisruptionBudget istiod -n istio-system --ignore-not-found=true

If you used istioctl kube-inject instead of automatic injection, or if you installed additional gateways, check the metrics for the control plane, and verify that the number of connected endpoints is zero.

Roll back

Perform the following steps if you need to roll back to the previous control plane version:

  1. Update workloads to be injected with the previous version of the control plane. In the following command, the revision value asm-191-1 is used only as an example. Replace the example value with the revision label of your previous control plane.

    kubectl label namespace NAMESPACE istio-injection- istio.io/rev=asm-191-1 --overwrite
    
  2. Restart the Pods to trigger re-injection so the proxies have the previous version:

    kubectl rollout restart deployment -n NAMESPACE
    

The managed control plane will automatically scale to zero and not use any resource when not in use. The mutating webhooks and provisioning will remain and do not affect cluster behavior.

The gateway is now set to the asm-managed revision. To roll it back, run the following command, which re-deploys the gateway pointing back to your in-cluster control plane:

kubectl -n istio-system rollout undo deploy istio-ingressgateway

Expect this output on success:

deployment.apps/istio-ingressgateway rolled back

Uninstall Anthos Service Mesh

Managed control plane auto-scales to zero when no namespaces are using it. For detailed steps, see Uninstall Anthos Service Mesh.

Troubleshooting

To identify and resolve problems when using managed control plane, see Resolving managed control plane issues.

What's next