Overview
Managed Anthos Service Mesh with asmcli is a managed control plane and a managed data plane that you simply configure. Google handles their reliability, upgrades, scaling, and security for you in a backward-compatible manner. This guide explains how to set up or migrate applications to managed Anthos Service Mesh in a single-cluster or multi-cluster configuration with asmcli.
To learn about the supported features and limitations of managed Anthos Service Mesh, see Managed Anthos Service Mesh supported features.
Prerequisites
As a starting point, this guide assumes that you have:
- A Cloud project
- A Cloud Billing account
- Obtained the required permissions to provision managed Anthos Service Mesh
- The asmcli installation tool, kpt, and the other tools specified in Install the required tools
For faster provisioning, your clusters must have Workload Identity enabled. If Workload Identity isn't enabled, provisioning enables it automatically.
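If you prefer to enable Workload Identity yourself ahead of time rather than letting provisioning do it, you can use the standard GKE update command. A minimal sketch, assuming an existing Standard cluster and the usual PROJECT_ID.svc.id.goog workload pool naming:
# Enable Workload Identity on an existing cluster (optional; provisioning can
# enable it for you if you skip this).
gcloud container clusters update CLUSTER_NAME \
    --zone CLUSTER_LOCATION \
    --project PROJECT_ID \
    --workload-pool=PROJECT_ID.svc.id.goog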
Requirements
- One or more clusters with a supported version of GKE, in one of the supported regions.
- Ensure that the client machine that you provision managed Anthos Service Mesh from has network connectivity to the API server.
- Your clusters must be registered to a fleet. You can do this separately before provisioning, or as part of provisioning by passing the --enable-registration and --fleet-id flags. Your project must also have the Service Mesh fleet feature enabled. You can enable it as part of provisioning by passing --enable-gcp-components, or by running the following command:
  gcloud container fleet mesh enable --project=FLEET_PROJECT_ID
  where FLEET_PROJECT_ID is the project ID of the fleet host project. For example commands that check registration and the fleet feature, see the sketch after this requirements list.
GKE Autopilot is only supported with GKE version 1.21.3+. CNI will be installed and managed by Google.
Managed Anthos Service Mesh can use multiple GKE clusters in a single-project single-network environment or a multi-project single-network environment.
- If you join clusters that are not in the same project, they must be registered to the same fleet host project, and the clusters must be in a shared VPC configuration together on the same network.
- For a single-project multi-cluster environment, the fleet project can be the same as the cluster project. For more information about fleets, see Fleets Overview.
- For a multi-project environment, we recommend that you host the fleet in a separate project from the cluster projects. If your organizational policies and existing configuration allow it, we recommend that you use the shared VPC project as the fleet host project. For more information, see Setting up clusters with Shared VPC.
- If your organization uses VPC Service Controls and you are provisioning managed Anthos Service Mesh on GKE clusters with a release greater than or equal to 1.22.1-gke.10, you might need to take additional configuration steps:
  - If you are provisioning managed Anthos Service Mesh on the regular or stable release channel, you must use the additional --use-vpcsc flag when applying the managed control plane and follow the VPC Service Controls (preview) guide. Otherwise, the provisioning fails security controls.
  - If you are provisioning managed Anthos Service Mesh on the rapid release channel, you do not need to use the additional --use-vpcsc flag when applying the managed control plane, but you do need to follow the VPC Service Controls (GA) guide.
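Before provisioning, it can help to confirm that the cluster is already registered and that the Service Mesh fleet feature is enabled. A minimal sketch using standard gcloud commands (the fleet feature check is the same gcloud container fleet mesh describe command used later in this guide):
# List fleet memberships in the fleet host project; your cluster should appear
# here if it is already registered.
gcloud container fleet memberships list --project FLEET_PROJECT_ID

# Check whether the Service Mesh fleet feature is enabled and view its state.
gcloud container fleet mesh describe --project FLEET_PROJECT_ID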
Limitations
We recommend that you review the list of managed Anthos Service Mesh supported features and limitations. In particular, note the following:
The IstioOperator API isn't supported since its main purpose is to control in-cluster components.
Migrations from managed Anthos Service Mesh with asmcli to Anthos Service Mesh with the fleet API are not supported. Similarly, configuring managed Anthos Service Mesh with the fleet API from --management manual to --management automatic is not supported.
For GKE Autopilot clusters, cross-project setup is only supported with GKE 1.23 or later.
For GKE Autopilot clusters, to fit within GKE Autopilot resource limits, the default proxy resource requests and limits are set to 500m CPU and 512 MiB memory. You can override the default values by using custom injection.
The actual features available to managed Anthos Service Mesh depend on the release channel. For more information, review the full list of managed Anthos Service Mesh supported features and limitations.
During the provisioning process for a managed control plane, Istio CRDs corresponding to the selected channel are provisioned in the specified cluster. If there are existing Istio CRDs in the cluster, they will be overwritten.
Istio CNI is not compatible with GKE Sandbox. Managed Anthos Service Mesh on Autopilot, therefore, does not work with GKE Sandbox since managed Istio CNI is required.
The asmcli tool must have access to the Google Kubernetes Engine (GKE) endpoint. You can configure access through a "jump" server, such as a Compute Engine VM within the Virtual Private Cloud (VPC) that grants specific access.
Before you begin
Configure gcloud
Do the following steps even if you are using Cloud Shell.
Authenticate with the Google Cloud CLI:
gcloud auth login --project PROJECT_ID
Update the components:
gcloud components update
Configure kubectl to point to the cluster:
gcloud container clusters get-credentials CLUSTER_NAME \
    --zone CLUSTER_LOCATION \
    --project PROJECT_ID
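As an optional sanity check before continuing, you can confirm that kubectl is now pointing at the intended cluster; this quick sketch is not part of the official flow:
# Show which context kubectl is currently using.
kubectl config current-context

# Confirm that the client machine can reach the cluster's API server.
kubectl get nodes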
Download the installation tool
Download the latest version of the tool to the current working directory:
curl https://storage.googleapis.com/csm-artifacts/asm/asmcli > asmcli
Make the tool executable:
chmod +x asmcli
Configure each cluster
Use the following steps to configure managed Anthos Service Mesh for each cluster in your mesh.
Apply the managed control plane
Before you apply the managed control plane, you must select a release channel.
Run the installation tool for each cluster that will use managed Anthos Service Mesh. We recommend that you include both of the following options:
- --enable-registration --fleet_id FLEET_PROJECT_ID: These two flags register the cluster to a fleet, where FLEET_PROJECT_ID is the project ID of the fleet host project. In a single-project setup, FLEET_PROJECT_ID is the same as PROJECT_ID because the fleet host project and the cluster project are the same. In more complex configurations, such as multi-project, we recommend using a separate fleet host project.
- --enable-all: This flag enables both the required components and registration.
If your organization enforces VPC Service Controls for your project, you must configure an additional flag: --use-vpcsc. Otherwise, the installation will fail security controls. Support for the VPC Service Controls feature is available in the Regular and Rapid channels.
The asmcli tool configures the managed control plane directly, using tools and logic inside the CLI tool. Use the instructions below for your preferred CA.
Certificate Authorities
Select a Certificate Authority to use for your mesh.
Mesh CA
Run the following command to install the control plane with default features and Mesh CA. Enter your values in the provided placeholders. Replace RELEASE_CHANNEL with the appropriate channel: regular, stable, or rapid.
./asmcli install \
-p PROJECT_ID \
-l LOCATION \
-n CLUSTER_NAME \
--fleet_id FLEET_PROJECT_ID \
--managed \
--verbose \
--output_dir DIR_PATH \
--enable-all \
--channel RELEASE_CHANNEL
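For illustration only, a filled-in invocation might look like the following. All of the values here (the project my-project, zone us-central1-c, cluster cluster-1, and output directory) are hypothetical placeholders, not values from this guide:
# Example invocation with hypothetical values; substitute your own.
./asmcli install \
  -p my-project \
  -l us-central1-c \
  -n cluster-1 \
  --fleet_id my-project \
  --managed \
  --verbose \
  --output_dir ./asm-output \
  --enable-all \
  --channel regular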
CA Service
- Follow the steps in Configure Certificate Authority Service.
- Run the following command to install the control plane with default features and Certificate Authority Service. Enter your values in the provided placeholders. Replace RELEASE_CHANNEL with the appropriate channel: regular, stable, or rapid.
./asmcli install \
-p PROJECT_ID \
-l LOCATION \
-n CLUSTER_NAME \
--fleet_id FLEET_PROJECT_ID \
--managed \
--verbose \
--output_dir DIR_PATH \
--enable-all \
--channel RELEASE_CHANNEL \
--ca gcp_cas \
--ca_pool pool_name
The tool downloads all the files for configuring the managed control plane to the specified --output_dir, and installs the istioctl tool and sample applications. The steps in this guide assume that you run istioctl from the --output_dir location that you specified when running asmcli install, with istioctl present in its <Istio release dir>/bin subdirectory.
If you rerun asmcli on the same cluster, it overwrites the existing control plane configuration. Be sure to specify the same options and flags if you want the same configuration.
Verify the control plane has been provisioned
The asmcli tool creates a ControlPlaneRevision custom resource in the cluster. This resource's status is updated when the managed control plane is provisioned or fails provisioning.
Inspect the status of the resource. Replace NAME with the value corresponding to your channel: asm-managed, asm-managed-rapid, or asm-managed-stable.
kubectl describe controlplanerevision NAME -n istio-system
The output is similar to:
Name: asm-managed
…
Status:
  Conditions:
    Last Transition Time:  2021-08-05T18:56:32Z
    Message:               The provisioning process has completed successfully
    Reason:                Provisioned
    Status:                True
    Type:                  Reconciled
    Last Transition Time:  2021-08-05T18:56:32Z
    Message:               Provisioning has finished
    Reason:                ProvisioningFinished
    Status:                True
    Type:                  ProvisioningFinished
    Last Transition Time:  2021-08-05T18:56:32Z
    Message:               Provisioning has not stalled
    Reason:                NotStalled
    Status:                False
    Type:                  Stalled
The Reconciled condition determines whether the managed control plane is running correctly. If true, the control plane is running successfully. Stalled determines whether the managed control plane provisioning process has encountered an error. If Stalled is true, the Message field contains more information about the specific error. See Stalled codes for more information about possible errors.
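If you want to block in a script until provisioning completes instead of polling kubectl describe, kubectl wait can watch the same Reconciled condition. A minimal sketch, assuming the regular channel's asm-managed revision name:
# Wait up to 15 minutes for the managed control plane to report Reconciled=True.
kubectl wait --for=condition=Reconciled \
    controlplanerevision/asm-managed \
    -n istio-system \
    --timeout=15m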
Zero-touch upgrades
Once the managed control plane is installed, Google will automatically upgrade it when new releases or patches become available.
Managed data plane
If you use managed Anthos Service Mesh, Google fully manages upgrades of your proxies unless you disable it at the namespace, workload, or revision level. This only applies to new clusters. Existing clusters may not have the managed data plane enabled.
With the managed data plane, the sidecar proxies and injected gateways are automatically updated in conjunction with the managed control plane by restarting workloads to re-inject new versions of the proxy. This normally completes 1-2 weeks after the managed control plane is upgraded.
If disabled, proxy management is driven by the natural lifecycle of the pods in the cluster and must be manually triggered by the user to control the update rate.
The managed data plane upgrades proxies by evicting pods that are running older versions of the proxy. The evictions are done gradually, honoring the pod disruption budget and controlling the rate of change.
Note that the managed data plane requires the Istio Container Network Interface (CNI) plugin, which is enabled by default when you deploy the managed control plane.
The managed data plane doesn't manage the following:
- Uninjected pods
- Manually injected pods
- Jobs
- StatefulSets
- DaemonSets
If you have provisioned managed Anthos Service Mesh on an older cluster, you can enable data plane management for the entire cluster:
kubectl annotate --overwrite controlplanerevision -n istio-system \
REVISION_LABEL \
mesh.cloud.google.com/proxy='{"managed":"true"}'
Alternatively, you can enable the managed data plane selectively for a specific control plane revision, namespace, or pod by annotating it with the same annotation. If you control individual components selectively, then the order of precedence is control plane revision, then namespace, then pod.
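For example, to opt in a single namespace rather than the entire cluster, you can apply the same annotation at the namespace level; a sketch, assuming a namespace named NAMESPACE:
# Enable the managed data plane for one namespace only.
kubectl annotate --overwrite namespace NAMESPACE \
    mesh.cloud.google.com/proxy='{"managed":"true"}'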
It could take up to ten minutes for the service to be ready to manage the proxies in the cluster. Run the following command to check the status:
gcloud container fleet mesh describe --project FLEET_PROJECT_ID
Expected output
membershipStates:
projects/PROJECT_NUMBER/locations/global/memberships/CLUSTER_NAME:
servicemesh:
dataPlaneManagement:
details:
- code: OK
details: Service is running.
state: ACTIVE
state:
code: OK
description: 'Revision(s) ready for use: asm-managed-rapid.'
If the service does not become ready within ten minutes, see Managed data plane status for next steps.
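If you want a narrower view while waiting, gcloud's output formatting can restrict the describe output to the membership states. A small sketch using standard gcloud --format syntax (the field name comes from the expected output shown above):
# Print only the membership states, including dataPlaneManagement, as YAML.
gcloud container fleet mesh describe --project FLEET_PROJECT_ID \
    --format="yaml(membershipStates)"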
Disable the managed data plane (optional)
If you are provisioning managed Anthos Service Mesh on a new cluster, then you can disable the managed data plane completely, or for individual namespaces or pods. The managed data plane will continue to be disabled for existing clusters where it was disabled by default or manually.
To disable the managed data plane at the cluster level and revert back to managing the sidecar proxies yourself, change the annotation:
kubectl annotate --overwrite controlplanerevision -n istio-system \
REVISION_LABEL \
mesh.cloud.google.com/proxy='{"managed":"false"}'
To disable the managed data plane for a namespace:
kubectl annotate --overwrite namespace NAMESPACE \
mesh.cloud.google.com/proxy='{"managed":"false"}'
To disable the managed data plane for a pod:
kubectl annotate --overwrite pod POD_NAME \
mesh.cloud.google.com/proxy='{"managed":"false"}'
Enable maintenance notifications
You can request to be notified about upcoming managed data plane maintenance up to a week before maintenance is scheduled. Maintenance notifications are not sent by default. You must also Configure a GKE maintenance window before you can receive notifications.
To opt in to managed data plane maintenance notifications:
Go to the Communication page.
In the Anthos Service Mesh Upgrade row, under the Email column, select the radio button to turn maintenance notifications ON.
Each user that wants to receive notifications must opt in separately. If you want to set an email filter for these notifications, the subject line is: Upcoming upgrade for your Anthos Service Mesh cluster "CLUSTER_LOCATION/CLUSTER_NAME".
The following example shows a typical managed data plane maintenance notification:
Subject Line: Upcoming upgrade for your ASM cluster "<location/cluster-name>"
Dear Anthos Service Mesh user,
The Anthos Service Mesh components in your cluster ${instance_id} (https://console.cloud.google.com/kubernetes/clusters/details/${instance_id}/details?project=${project_id}) are scheduled to upgrade on ${scheduled_date_human_readable} at ${scheduled_time_human_readable}.
You can check the release notes (https://cloud.google.com/service-mesh/docs/release-notes) to learn about the new update.
In the event that this maintenance gets canceled, you'll receive another email.
Sincerely,
The Anthos Service Mesh Team
(c) 2022 Google LLC 1600 Amphitheater Parkway, Mountain View, CA 94043 You have received this announcement to update you about important changes to Google Cloud Platform or your account. You can opt out of maintenance window notifications by editing your user preferences: https://console.cloud.google.com/user-preferences/communication?project=${project_id}
Configure endpoint discovery (only for multi-cluster installations)
Before you continue, you should have already configured managed Anthos Service Mesh on each cluster as described in the previous steps. There is no need to indicate that a cluster is a primary cluster; this is the default behavior.
Additionally, ensure you have downloaded asmcli (only if you wish to verify your configuration with the sample application) and set the project and cluster variables.
Public clusters
Configure endpoint discovery between public clusters
If you are operating on public clusters (non-private clusters), you can either Configure endpoint discovery between public clusters or more simply Enable endpoint discovery between public clusters.
Private clusters
Configure endpoint discovery between private clusters
When using GKE private clusters, you must configure the cluster control plane endpoint to be the public endpoint instead of the private endpoint. Please refer to Configure endpoint discovery between private clusters.
For an example application with two clusters, see HelloWorld service example.
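As an illustrative aid, endpoint discovery between two clusters is typically wired up with asmcli's create-mesh subcommand; the exact flow is covered in the guides linked above, and the cluster identifiers below (project/location/name triplets) are placeholders. A sketch, assuming two clusters registered to the same fleet:
# Hypothetical example: enable endpoint discovery across two clusters in one fleet.
./asmcli create-mesh \
    FLEET_PROJECT_ID \
    PROJECT_ID_1/LOCATION_1/CLUSTER_NAME_1 \
    PROJECT_ID_2/LOCATION_2/CLUSTER_NAME_2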
Deploy applications
To deploy applications, use either the label corresponding to the channel you configured during installation, or istio-injection=enabled if you are using default injection labels.
Default injection label
kubectl label namespace NAMESPACE istio-injection=enabled istio.io/rev- --overwrite
Revision label
Before you deploy applications, remove any previous istio-injection labels from their namespaces and set the istio.io/rev=REVISION_LABEL label instead. Replace REVISION_LABEL with the applicable label: asm-managed-rapid for Rapid, asm-managed for Regular, or asm-managed-stable for Stable.
The revision label corresponds to a release channel:
Revision label | Channel |
---|---|
asm-managed | Regular |
asm-managed-rapid | Rapid |
asm-managed-stable | Stable |
kubectl label namespace NAMESPACE istio-injection- istio.io/rev=REVISION_LABEL --overwrite
At this point, you have successfully configured the managed Anthos Service Mesh control plane. If you have any existing workloads in labeled namespaces, restart them so that they get proxies injected.
You are now ready to deploy your applications, or you can deploy the Bookinfo sample application.
If you deploy an application in a multi-cluster setup, replicate the Kubernetes and control plane configuration in all clusters, unless you plan to limit that particular config to a subset of clusters. The configuration applied to a particular cluster is the source of truth for that cluster.
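As a quick optional check that injection is working, you can restart a deployment in a labeled namespace and confirm that new pods include the istio-proxy sidecar container; a minimal sketch, assuming a namespace NAMESPACE:
# Restart workloads so new pods are injected, then list container names per pod.
kubectl rollout restart deployment -n NAMESPACE
kubectl get pods -n NAMESPACE \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'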
Customize injection (optional)
Per-pod configuration is available to override the default proxy settings on individual pods. This is done by adding an istio-proxy container to your pod. The sidecar injection treats any configuration defined here as an override to the default injection template.
For example, the following configuration customizes a variety of settings,
including lowering the CPU requests, adding a volume mount, and adding a
preStop
hook:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: hello
    image: alpine
  - name: istio-proxy
    image: auto
    resources:
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "200m"
        memory: "256Mi"
    volumeMounts:
    - mountPath: /etc/certs
      name: certs
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "10"]
  volumes:
  - name: certs
    secret:
      secretName: istio-certs
In general, any field in a pod can be set. However, care must be taken for certain fields:
- Kubernetes requires the image field to be set before the injection has run. While you can set a specific image to override the default one, we recommend setting the image to auto, which causes the sidecar injector to automatically select the image to use.
- Some fields in containers are dependent on related settings. For example, the CPU request must be less than the CPU limit. If both fields are not properly configured, the pod may fail to start.
- Kubernetes lets you set both requests and limits for resources in your PodSpec. GKE Autopilot only considers requests. For more information, see Setting resource limits in Autopilot.
Additionally, certain fields are configurable by annotations on the pod, although it is recommended to use the above approach to customizing settings. Additional care must be taken for certain annotations:
- For GKE Standard, if sidecar.istio.io/proxyCPU is set, make sure to explicitly set sidecar.istio.io/proxyCPULimit. Otherwise, the sidecar's CPU limit will be set as unlimited.
- For GKE Standard, if sidecar.istio.io/proxyMemory is set, make sure to explicitly set sidecar.istio.io/proxyMemoryLimit. Otherwise, the sidecar's memory limit will be set as unlimited.
- For GKE Autopilot, configuring resource requests and limits using annotations might overprovision resources. Use the image template approach to avoid this. See Resource modification examples in Autopilot.
For example, see the following resource annotation configuration:
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyCPU: "200m"
        sidecar.istio.io/proxyCPULimit: "200m"
        sidecar.istio.io/proxyMemory: "256Mi"
        sidecar.istio.io/proxyMemoryLimit: "256Mi"
Verify control plane metrics
You can view the version of the control plane and data plane in Metrics Explorer.
To verify that your configuration works correctly:
In the Google Cloud console, view the control plane metrics:
Choose your workspace and add a custom query using the following parameters:
- Resource type: Kubernetes Container
- Metric: Proxy Clients
- Filter: container_name="cr-REVISION_LABEL"
- Group By: revision label and proxy_version label
- Aggregator: sum
- Period: 1 minute
When you run Anthos Service Mesh with both a Google-managed and an in-cluster control plane, you can tell the metrics apart by their container name. For example, managed metrics have container_name="cr-asm-managed", while unmanaged metrics have container_name="discovery". To display metrics from both, remove the filter on container_name="cr-asm-managed".
Verify the control plane version and proxy version by inspecting the following fields in Metrics Explorer:
- The revision field indicates the control plane version.
- The proxy_version field indicates the proxy version.
- The value field indicates the number of connected proxies.
For the current channel to Anthos Service Mesh version mapping, see Anthos Service Mesh versions per channel.
Migrate applications to managed Anthos Service Mesh
Prepare for migration
To prepare to migrate applications from in-cluster Anthos Service Mesh to managed Anthos Service Mesh, perform the following steps:
Run the tool as indicated in the Apply the managed control plane section.
(Optional) If you want to use the Google-managed data plane, enable data plane management:
kubectl annotate --overwrite controlplanerevision REVISION_TAG \
    mesh.cloud.google.com/proxy='{"managed":"true"}'
Migrate applications
To migrate applications from in-cluster Anthos Service Mesh to managed Anthos Service Mesh, perform the following steps:
Replace the current namespace label. The steps required depend on whether you wish to use default injection labels (for example, istio-injection=enabled) or the revision label.
Default injection label
Run the following command to move the default tag to the managed revision:
istioctl tag set default --revision REVISION_LABEL
Run the following command to label the namespace using istio-injection=enabled, if it wasn't already:
kubectl label namespace NAMESPACE istio-injection=enabled istio.io/rev- \
    --overwrite
Revision label
If you used the istio.io/rev=REVISION_LABEL label, then run the following command:
kubectl label namespace NAMESPACE istio-injection- istio.io/rev=REVISION_LABEL \
    --overwrite
Perform a rolling upgrade of deployments in the namespace:
kubectl rollout restart deployment -n NAMESPACE
Test your application to verify that the workloads function correctly.
If you have workloads in other namespaces, repeat the previous steps for each namespace.
If you deployed the application in a multi-cluster setup, replicate the Kubernetes and Istio configuration in all clusters, unless you want to limit that configuration to a subset of clusters. The configuration applied to a particular cluster is the source of truth for that cluster.
Check that the metrics appear as expected by following the steps in Verify control plane metrics.
If you are satisfied that your application works as expected, you can remove the in-cluster istiod after you switch all namespaces to the managed control plane, or keep it as a backup; istiod will automatically scale down to use fewer resources. To remove it, skip to Delete old control plane.
If you encounter problems, you can identify and resolve them by using the information in Resolving managed control plane issues and if necessary, roll back to the previous version.
Delete old control plane
After you install and confirm that all namespaces use the Google-managed control plane, you can delete the old control plane.
kubectl delete Service,Deployment,HorizontalPodAutoscaler,PodDisruptionBudget istiod -n istio-system --ignore-not-found=true
If you used istioctl kube-inject
instead of automatic injection, or if
you installed additional gateways, check the metrics for the control plane,
and verify that the number of connected endpoints is zero.
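After running the delete command, an optional quick check can confirm that no in-cluster istiod resources remain in istio-system; a sketch using resource short names only:
# List any remaining istiod resources; an empty result means the old control
# plane components have been removed.
kubectl -n istio-system get deploy,svc,hpa,pdb | grep istiod || echo "No istiod resources found"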
Roll back
Perform the following steps if you need to roll back to the previous control plane version:
Update workloads to be injected with the previous version of the control plane. In the following command, the revision value asm-191-1 is used only as an example. Replace the example value with the revision label of your previous control plane.
kubectl label namespace NAMESPACE istio-injection- istio.io/rev=asm-191-1 --overwrite
Restart the Pods to trigger re-injection so the proxies have the previous version:
kubectl rollout restart deployment -n NAMESPACE
The managed control plane will automatically scale to zero and not use any resource when not in use. The mutating webhooks and provisioning will remain and do not affect cluster behavior.
The gateway is now set to the asm-managed revision. To roll back, re-run the Anthos Service Mesh install command, which will re-deploy the gateway pointing back to your in-cluster control plane:
kubectl -n istio-system rollout undo deploy istio-ingressgateway
Expect this output on success:
deployment.apps/istio-ingressgateway rolled back
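As an optional check after the rollback, istioctl (run from the --output_dir noted earlier) can show which control plane instance each proxy is connected to, so you can confirm that workloads are back on the in-cluster istiod; a sketch using the same placeholder path as earlier in this guide:
# Show each proxy's sync status and the istiod instance it is connected to.
./<Istio release dir>/bin/istioctl proxy-status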
Uninstall
Managed control plane auto-scales to zero when no namespaces are using it. For detailed steps, see Uninstall Anthos Service Mesh.
Troubleshooting
To identify and resolve problems when using managed control plane, see Resolving managed control plane issues.
What's next?
- Learn about release channels.
- Migrate from IstioOperator.
- Migrate a gateway to managed control plane.
- Learn how to Enable optional managed Anthos Service Mesh features.