This document describes how Google Kubernetes Engine (GKE) administrators can install Anthos Service Mesh and migrate workloads currently running with an Istio service mesh. The deployed Anthos Service Mesh configuration includes Cloud Monitoring for telemetry and Anthos Service Mesh certificate authority (Mesh CA) for managed, high-availability mesh certificate management. Gateways, virtual services, and other mesh configurations that define your mesh topology are preserved in the migration.
This process covers a single-cluster installation. For a multi-cluster mesh installation, see Setting up a multi-cluster mesh on GKE, which includes steps for how to add clusters to Anthos Service Mesh post-installation.
To complete the steps in this document, you must be using Istio 1.7 or later with a GKE cluster. Anthos Service Mesh does not support Helm for installation or configuration. We recommend that mesh administrators use the IstioOperator API for mesh configurations. This process might introduce downtime for your application while switching certificate authorities, so we recommend that you perform this process during a scheduled maintenance window.
Anthos Service Mesh uses the same Istio and Envoy APIs to configure your mesh, so no change to existing resources is necessary.
A few implementation differences after migration are as follows:
The Istio control plane is replaced with an Anthos Service Mesh control plane.
The Citadel certificate authority is removed, and certificates are managed by a Google Cloud Mesh CA service.
Telemetry is sent to Cloud Logging and Cloud Monitoring. Dashboards and SLO management are available in the Google Cloud console.
If you have a customized IstioOperator resource, the script can take that as an input. Your open source Istio installation (version 1.7 or later) is migrated to Anthos Service Mesh version 1.10 with Mesh CA. If you have a different version of Istio, require a different version of Anthos Service Mesh, or want to deploy Anthos Service Mesh with a Google-managed control plane, see Preparing to migrate from Istio.
Prerequisites
The following prerequisites are required to complete this guide:
You have a GKE cluster with Istio version 1.7 or later installed. If you do not have a GKE cluster, or you would like to test this guide on a new (test) cluster first, follow the steps in the Appendix to create a new GKE cluster with Istio version 1.7 or later deployed with a test application.
You use Cloud Shell to perform the steps in this guide, because the steps in this guide are tested only on Cloud Shell.
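Before starting, you can confirm that the version requirement is met with a small shell check like the following. This is a sketch: meets_minimum is a hypothetical helper (not part of the migrate-to-asm tool) that compares version strings using sort's natural version ordering; feed it the version reported by istioctl version.

```shell
# Sketch: gate the migration on the Istio 1.7 minimum. meets_minimum is a
# hypothetical helper that relies on sort -V (natural version ordering).
meets_minimum() {
  # Succeeds when $1 >= 1.7.0: after a version sort, the smallest value
  # must still be 1.7.0 itself.
  [ "$(printf '%s\n' "1.7.0" "$1" | sort -V | head -n1)" = "1.7.0" ]
}

for v in 1.6.14 1.7.4 1.9.5; do
  if meets_minimum "$v"; then
    echo "$v: eligible for migration"
  else
    echo "$v: upgrade Istio before migrating"
  fi
done
```

For example, 1.6.14 fails the check while 1.7.4 and 1.9.5 pass it.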
Objectives
In this guide, you choose a migration path: either a one-step scripted migration or a step-by-step scripted migration.
For more information, see Choose a migration path.
To get answers to frequently asked questions about this migration, see Migrating from Istio 1.7 or later to Anthos Service Mesh and Mesh CA FAQ.
Before you begin
For this guide, you need administrative access to a GKE cluster with Istio installed. To observe how your application behaves during the migration process, we recommend that you first perform this process with a cluster in a development or staging environment.
Anthos Service Mesh has the following requirements. You can fulfill them manually, or you can let the provided tools enable the dependencies on your behalf during the pre-installation process.
Enable the following Google Cloud APIs:
container.googleapis.com
meshca.googleapis.com
meshconfig.googleapis.com
gkehub.googleapis.com
stackdriver.googleapis.com
Enable Workload Identity and Stackdriver for your GKE cluster.
Label your cluster to enable the Service user interface.
Obtain cluster admin rights on the Kubernetes cluster.
Register your cluster to a fleet.
Enable the servicemesh feature on the fleet.
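If you prefer to satisfy the API requirement manually rather than with the tool's --enable-dependencies flag, all five services can be enabled in a single gcloud call. The following sketch only prints the command, so it is side-effect free; run the printed line in Cloud Shell against your project.

```shell
# APIs required by Anthos Service Mesh, as listed above. The snippet prints
# the gcloud command instead of running it, so it has no side effects.
APIS="container.googleapis.com meshca.googleapis.com meshconfig.googleapis.com gkehub.googleapis.com stackdriver.googleapis.com"
echo "gcloud services enable ${APIS}"
```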
Set up your environment
To set up your environment, follow these steps:
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console page, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed, and with values already set for your current project. It can take a few seconds for the session to initialize.
Create the environment variables used in this guide:
export PROJECT_ID=PROJECT_ID
gcloud config set project ${PROJECT_ID}
export PROJECT_NUM=$(gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)')
export CLUSTER_NAME=GKE_CLUSTER_NAME
export CLUSTER_LOCATION=GKE_CLUSTER_REGION_OR_ZONE
Create a WORKDIR folder:
mkdir -p migrate-to-asm-working-dir && cd migrate-to-asm-working-dir && export WORKDIR=`pwd`
Create a KUBECONFIG file for this guide:
touch asm-kubeconfig && export KUBECONFIG=`pwd`/asm-kubeconfig
Connect to your GKE cluster:
Zonal clusters
gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --zone ${CLUSTER_LOCATION}
Regional clusters
gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --region ${CLUSTER_LOCATION}
Download the migration script:
curl -LO https://storage.googleapis.com/csm-artifacts/asm/migrate-to-asm
chmod +x ./migrate-to-asm
Choose a migration path
You can choose one of two paths to migrate to Anthos Service Mesh. Choose only one of these two strategies, and then proceed to that section:
One-step migration to Anthos Service Mesh. As the name suggests, you can perform all the required steps to migrate to Anthos Service Mesh by using a single command. This might be beneficial if you have many clusters and you need a fast and easy way to upgrade them to Anthos Service Mesh. However, this method might result in application downtime.
Step-by-step migration to Anthos Service Mesh. This method provides you with more control over each step and helps you understand exactly what is required to migrate to Anthos Service Mesh.
One-step migration to Anthos Service Mesh
In this section, you migrate your current Istio version 1.7 (or later) installation to Anthos Service Mesh version 1.10. This section lets you perform the migration by running a single step. If you would like to perform the migration by running a series of steps, see the Step-by-step migration to Anthos Service Mesh section.
To migrate to Anthos Service Mesh, run the following command. With any command, you can use the --dry-run flag to print commands instead of running them, or you can use the --verbose flag to print commands as the script runs them. If you have previously configured dependencies, as noted in the Before you begin section, you can omit the --enable-dependencies flag.
No custom resource
Do not use a custom IstioOperator resource:
./migrate-to-asm migrate \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID \
    --enable-dependencies \
    --verbose
Use custom resource
Use a custom IstioOperator resource:
export ISTIO_OPERATOR_FILEPATH=PATH_OF_ISTIO_OPERATOR_YAML_FILE
./migrate-to-asm migrate \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID \
    --enable-dependencies \
    --custom_overlay ${ISTIO_OPERATOR_FILEPATH} \
    --verbose
This command performs the following steps:
- Ensures that the installed Istio version is 1.7 or later.
- Enables Workload Identity on the cluster. Workload Identity is required for Mesh CA. You do not need to enable the GKE Metadata Server on existing node pools.
- Enables the required APIs for Anthos Service Mesh.
- Registers the cluster to a fleet.
- Updates the cluster with the required labels.
- Evaluates whether a Google-managed control plane is better for the specified cluster.
- Deploys Anthos Service Mesh with the optimal control plane configuration.
- Relabels all Istio-enabled namespaces with the required Anthos Service Mesh label.
- Restarts the workloads in all Anthos Service Mesh-enabled namespaces so that workloads get the new Anthos Service Mesh proxies.
- Removes the Istio control plane.
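After the command completes, you can spot-check the result. The following sketch assumes kubectl access to the migrated cluster; when no cluster is reachable, it prints the verification commands instead of running them.

```shell
# Sketch: verify the one-step migration. The -L flags add label columns so
# you can see the istio-injection and istio.io/rev labels per namespace.
CHECKS='kubectl get namespaces -L istio-injection -L istio.io/rev
kubectl get pods -n istio-system'
if kubectl get namespaces >/dev/null 2>&1; then
  kubectl get namespaces -L istio-injection -L istio.io/rev
  kubectl get pods -n istio-system
else
  # No cluster access here; print the commands to run in Cloud Shell.
  echo "${CHECKS}"
fi
```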
Step-by-step migration to Anthos Service Mesh
In this section, you migrate your Istio version 1.7 (or later) installation to Anthos Service Mesh version 1.10. This section lets you perform the migration by running a series of steps. If you would like to perform the migration in a single step, see the One-step migration to Anthos Service Mesh section.
The following steps are required to migrate to Anthos Service Mesh:
- Perform a pre-migration step to validate and prepare the cluster and environment for migration to Anthos Service Mesh.
- Install Anthos Service Mesh as a canary control plane alongside an existing Istio control plane and prepare workloads.
- Test workloads on Anthos Service Mesh and relabel namespaces for Anthos Service Mesh sidecar injection.
- Access and inspect Anthos Service Mesh dashboards.
- Clean up Istio artifacts or roll back to an existing Istio version.
Perform pre-migration step
The pre-migration step performs the following actions:
It validates that the project and cluster information is correct and that the installed Istio version is compatible with migration.
It backs up the configuration for the default gateway and the labels for the current Istio service mesh.
If the --enable-dependencies flag is used, it enables the dependencies on your behalf; otherwise, it verifies that the dependencies are enabled.
The pre-migration script creates a new folder (or overwrites an existing folder) called configuration_backup in the current directory.
To perform the pre-migration step, run the following command:
Dependencies
Enable dependencies:
./migrate-to-asm pre-migrate \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID \
    --enable-dependencies
No dependencies
Do not enable dependencies:
./migrate-to-asm pre-migrate \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID
The output is similar to the following:
migrate-to-asm: Checking installation tool dependencies...
migrate-to-asm: Checking for $PROJECT_ID...
migrate-to-asm: Confirming cluster information for $PROJECT_ID/$LOCATION/$CLUSTER_NAME...
migrate-to-asm: Confirming node pool requirements for $PROJECT_ID/$LOCATION/$CLUSTER_NAME...
migrate-to-asm: Checking existing Istio version(s)...
migrate-to-asm:   1.9.5
migrate-to-asm: No version issues found.
migrate-to-asm: Enabling required APIs...
migrate-to-asm:
migrate-to-asm: APIs enabled.
migrate-to-asm: Enabling the service mesh feature...
migrate-to-asm:
migrate-to-asm: The service mesh feature is already enabled.
migrate-to-asm: Enabling Stackdriver on $LOCATION/$CLUSTER_NAME...
Updating $CLUSTER_NAME...
.........................done.
Updated [https://container.googleapis.com/v1/projects/$PROJECT_ID/zones/$LOCATION/clusters/$CLUSTER_NAME].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/$LOCATION/$CLUSTER_NAME?project=$PROJECT_ID
migrate-to-asm:
migrate-to-asm: Stackdriver enabled.
migrate-to-asm: Querying for core/account...
migrate-to-asm: Binding user@example.com to cluster admin role...
migrate-to-asm:
migrate-to-asm:
migrate-to-asm: Successfully bound to cluster admin role.
migrate-to-asm: Initializing meshconfig API...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100     3    0     3    0     0      6      0 --:--:-- --:--:-- --:--:--     6
migrate-to-asm:
migrate-to-asm: Finished pre-migration!
Install Anthos Service Mesh and prepare workloads
This step does the following:
- It checks for the presence of the configuration_backup folder; if the folder is not present, it aborts to ensure that the pre-migration tool ran successfully.
- It installs and configures an Anthos Service Mesh control plane based on an analysis of the cluster and mesh configuration.
- It uses the custom IstioOperator resource if one is provided. If you have custom gateways or multiple gateways that you configured by using an IstioOperator resource, use the same resource in this step.
To skip the analysis and force the tool to install an unmanaged control plane running with your cluster's resources, add the --no-mcp flag to your command.
You can choose one of three paths when installing Anthos Service Mesh:

Option 1: Without a custom IstioOperator resource. You can install Anthos Service Mesh without a custom resource. Using this option installs the default configuration of Istio and updates the default istio-ingressgateway in place.

Option 2: With the --no-gateways option. When installing Anthos Service Mesh without a custom IstioOperator resource, you can also use the --no-gateways option to not update the default istio-ingressgateway in place. If you use this option, you must upgrade the gateways manually after installation.

Option 3: With a custom IstioOperator resource. You can install Anthos Service Mesh with a custom IstioOperator resource. If you deployed Istio by using a custom IstioOperator resource, we recommend that you use the same IstioOperator resource when installing Anthos Service Mesh.
To install Anthos Service Mesh, run one of the following commands:
Option 1
Upgrade the default istio-ingressgateway in place:
./migrate-to-asm install-asm \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID
Option 2
Do not upgrade the default istio-ingressgateway in place:
./migrate-to-asm install-asm \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID \
    --no-gateways
Option 3
Upgrade gateways in place with a custom IstioOperator resource:
export ISTIO_OPERATOR_FILEPATH=PATH_OF_ISTIO_OPERATOR_YAML_FILE
./migrate-to-asm install-asm \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID \
    --custom-overlay ${ISTIO_OPERATOR_FILEPATH}
The output is similar to the following:
migrate-to-asm: Checking installation tool dependencies...
migrate-to-asm: Checking for $PROJECT_ID...
migrate-to-asm: Fetching/writing Google Cloud credentials to kubeconfig file...
Fetching cluster endpoint and auth data.
kubeconfig entry generated for $CLUSTER_NAME.
migrate-to-asm:
migrate-to-asm: Verifying connectivity (20s)...
migrate-to-asm: kubeconfig set to $PROJECT_ID/$LOCATION/$CLUSTER_NAME...
migrate-to-asm: Configuring kpt package...
asm/ set 20 field(s) of setter "gcloud.container.cluster" to value "$CLUSTER_NAME"
asm/ set 28 field(s) of setter "gcloud.core.project" to value "$PROJECT_ID"
asm/ set 2 field(s) of setter "gcloud.project.projectNumber" to value "42"
asm/ set 5 field(s) of setter "gcloud.project.environProjectNumber" to value "42"
asm/ set 20 field(s) of setter "gcloud.compute.location" to value "$LOCATION"
asm/ set 1 field(s) of setter "gcloud.compute.network" to value "$PROJECT_ID-default"
asm/ set 6 field(s) of setter "anthos.servicemesh.rev" to value "asm-1102-2"
asm/ set 5 field(s) of setter "anthos.servicemesh.tag" to value "1.10.2-asm.2"
asm/ set 4 field(s) of setter "anthos.servicemesh.hubTrustDomain" to value "$PROJECT_ID.svc.id.goog"
asm/ set 2 field(s) of setter "anthos.servicemesh.hub-idp-url" to value "https://container.googleapis.com/v1/projects/$PROJECT_ID/locations/$LOCATION/clusters/$CLUSTER_NAME"
asm/ set 4 field(s) of setter "anthos.servicemesh.trustDomainAliases" to value "$PROJECT_ID.svc.id.goog"
migrate-to-asm: Configured.
migrate-to-asm: Installing Anthos Service Mesh control plane...
- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod-asm-1102-2
✔ Istiod installed
- Processing resources for CNI, Ingress gateways.
- Processing resources for CNI, Ingress gateways. Waiting for Deployment/istio-system/istio-ingressgateway
✔ CNI installed
- Processing resources for Ingress gateways.
  Waiting for Deployment/istio-system/istio-ingressgateway
✔ Ingress gateways installed
- Pruning removed resources
migrate-to-asm:
migrate-to-asm: namespace/asm-system created
customresourcedefinition.apiextensions.k8s.io/canonicalservices.anthos.cloud.google.com configured
role.rbac.authorization.k8s.io/canonical-service-leader-election-role created
clusterrole.rbac.authorization.k8s.io/canonical-service-manager-role configured
clusterrole.rbac.authorization.k8s.io/canonical-service-metrics-reader unchanged
serviceaccount/canonical-service-account created
rolebinding.rbac.authorization.k8s.io/canonical-service-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/canonical-service-manager-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/canonical-service-proxy-rolebinding unchanged
service/canonical-service-controller-manager-metrics-service created
deployment.apps/canonical-service-controller-manager created
deployment.apps/canonical-service-controller-manager condition met
migrate-to-asm:
migrate-to-asm:
migrate-to-asm: *******
migrate-to-asm: Control plane installation complete!
Re-inject workloads and check application behavior
The Anthos Service Mesh control plane is now ready to handle workloads, but the existing Istio control plane is still managing existing workloads. To migrate those workloads, you need to relabel Kubernetes namespaces that are currently labeled for Istio injection with the Anthos Service Mesh revision label. You then need to restart workloads in those namespaces. You can do this manually (see the Note in step 1) or in one step by using the tool.
The relabel step does the following:
- It finds all the namespaces that currently use an Istio injection label.
- It relabels those namespaces with istio.io/rev=asm-1102-2.
- It restarts the workloads in those namespaces.
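The relabel step's per-namespace actions can also be done by hand with kubectl. The following sketch prints the equivalent commands for a hypothetical namespace (default): the trailing hyphen in istio-injection- removes the old injection label, and the istio.io/rev value must match the installed revision (asm-1102-2 here).

```shell
# Sketch: manual equivalent of the relabel step for one namespace.
# NS is a placeholder; the printed commands are what you would run per namespace.
NS=default
RELABEL="kubectl label namespace ${NS} istio-injection- istio.io/rev=asm-1102-2 --overwrite
kubectl -n ${NS} rollout restart deployment"
echo "${RELABEL}"
```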
To re-inject the workloads, follow these steps:
Relabel all Istio-enabled namespaces and restart the workloads by running the following command:
./migrate-to-asm relabel \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID
The output is similar to the following:
migrate-to-asm: Checking installation tool dependencies...
migrate-to-asm: Checking for $PROJECT_ID...
migrate-to-asm: Fetching/writing Google Cloud credentials to kubeconfig file...
Fetching cluster endpoint and auth data.
kubeconfig entry generated for $CLUSTER_NAME.
migrate-to-asm:
migrate-to-asm: Verifying connectivity (20s)...
migrate-to-asm: kubeconfig set to $PROJECT_ID/$LOCATION/$CLUSTER_NAME...
******
migrate-to-asm: Installation of Anthos Service Mesh has completed. Migration will continue
migrate-to-asm: by relabeling and restarting workloads in the following namespaces:
migrate-to-asm:     namespace/default
migrate-to-asm: Continue with migration? (Y/n) Y
migrate-to-asm: Relabeling namespace/default...
namespace/default labeled
migrate-to-asm: Restarting workloads in namespace/default and waiting for them to become available (max 5 min)...
deployment.apps/frontend restarted
deployment.apps/backend restarted
deployment.apps/frontend condition met
deployment.apps/backend condition met
migrate-to-asm: *******
migrate-to-asm: Finished restarting workloads!
Wait until all Deployments are restarted, and then check the version of the data plane by running the following command:
istioctl version
The output is similar to the following:
client version: 1.8.0
pilot version: 1.9.5
istiod version: 1.10.2-asm.2
data plane version: 1.10.2-asm.2 (14 proxies)
Verify that the applications are functioning properly after restarting.
Access Anthos Service Mesh dashboards
In this section, you go to the Anthos Service Mesh dashboards and make sure that you are receiving the golden signals for all Services. You should also be able to see your application topology.
In the Google Cloud console, go to the Anthos Service Mesh page.
You should be able to view the metrics and topology for your Services.
To learn more about Anthos Service Mesh dashboards, see Exploring Anthos Service Mesh in the Google Cloud console.
Finalize your migration
Before you finalize your migration, ensure that all your applications are functioning properly. After you finalize your migration, you cannot roll back to the existing Istio version. Finalizing your migration performs the following steps:
- It validates that all of the running proxies in the cluster are using Anthos Service Mesh.
- It removes unused Istio components from the cluster. This step is irreversible.
To finalize your migration to Anthos Service Mesh, run the following command:
./migrate-to-asm finalize \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID
The output is similar to the following:
migrate-to-asm: Checking installation tool dependencies...
migrate-to-asm: Checking for asm-scriptaro-oss...
migrate-to-asm: All proxies running Anthos Service Mesh!
Remove previous control plane resources? (Y/n)
migrate-to-asm: ****
migrate-to-asm: Previous Istio control plane has been removed.
Roll back to an existing Istio version
Run the rollback step to relabel namespaces with the previous Istio injection label, restart workloads, and roll back gateway changes. Afterwards, the tool removes any Anthos Service Mesh components deployed in the cluster.
You need to manually revert any dependencies enabled by the pre-migration step.
To roll back to Istio, run the following command:
./migrate-to-asm rollback \
    --cluster_location $CLUSTER_LOCATION \
    --cluster-name $CLUSTER_NAME \
    --project-id $PROJECT_ID
The output is similar to the following:
migrate-to-asm: Checking installation tool dependencies...
migrate-to-asm: Checking for $PROJECT_ID...
******
migrate-to-asm: Rolling back migration by relabeling and restarting workloads
migrate-to-asm: in the following namespaces:
migrate-to-asm:     namespace/default
migrate-to-asm: Continue with rollback? (Y/n)
migrate-to-asm: Relabeling namespace/default...
namespace/default labeled
migrate-to-asm: Restarting workloads in namespace/default and waiting for them to become available (max 5 min)...
deployment.apps/frontend restarted
deployment.apps/backend restarted
deployment.apps/frontend condition met
deployment.apps/backend condition met
migrate-to-asm: *******
migrate-to-asm: Finished restarting workloads!
service/istio-ingressgateway configured
deployment.apps/istio-ingressgateway configured
There are still 14 proxies pointing to the control plane revision asm-1102-2
istio-ingressgateway-66c85975d-2gt8c.istio-system
istio-ingressgateway-66c85975d-jdd96.istio-system
...
frontend-685dcb78d6-9l45j.default
If you proceed with the uninstall, these proxies will become detached from any control plane and will not function correctly.
Removed HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
Removed HorizontalPodAutoscaler:istio-system:istiod-asm-1102-2.
...
Removed ClusterRoleBinding::mdp-controller.
✔ Uninstall complete
namespace "asm-system" deleted
migrate-to-asm: ****
migrate-to-asm: Anthos Service Mesh has been uninstalled from the cluster.
Appendix
Create a GKE cluster with Istio installed
In this section, you deploy a GKE cluster with Istio enabled. You can use a private or a non-private GKE cluster. Private GKE clusters must have a public GKE endpoint. You also verify the Istio installation.
If you already have an existing GKE cluster, you can skip the creation step, but make sure that you can access the cluster through the KUBECONFIG file. The context that this guide uses is defined in the variable ${CLUSTER_CTX}. You can set the context of your cluster to this variable.
Create the environment variables used in this guide:
# Enter your project ID
export PROJECT_ID=PROJECT_ID
gcloud config set project ${PROJECT_ID}
export PROJECT_NUM=$(gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)')
export CLUSTER_NAME=GKE_CLUSTER_NAME
export CLUSTER_LOCATION=GKE_CLUSTER_REGION_OR_ZONE
export CLUSTER_CTX=gke_${PROJECT_ID}_${CLUSTER_LOCATION}_${CLUSTER_NAME}
# Must be versions 1.7 through 1.10 and must be of the form major.minor.patch,
# for example 1.7.4 or 1.9.5
export ISTIO_VERSION=ISTIO_VERSION
Create a GKE cluster. The following commands create a non-private cluster; you can also perform these steps with a private GKE cluster that has a public endpoint. Istio is installed on the cluster in the next section.
Zonal clusters
gcloud container clusters create ${CLUSTER_NAME} \
    --project ${PROJECT_ID} \
    --zone ${CLUSTER_LOCATION} \
    --machine-type "e2-standard-4" \
    --num-nodes "4" --min-nodes "2" --max-nodes "5" \
    --enable-ip-alias --enable-autoscaling
Regional clusters
gcloud container clusters create ${CLUSTER_NAME} \
    --project ${PROJECT_ID} \
    --region ${CLUSTER_LOCATION} \
    --machine-type "e2-standard-4" \
    --num-nodes "4" --min-nodes "2" --max-nodes "5" \
    --enable-ip-alias --enable-autoscaling
Confirm that the cluster is RUNNING:
gcloud container clusters list
The output is similar to the following:
NAME      LOCATION    MASTER_VERSION    MASTER_IP      MACHINE_TYPE   NODE_VERSION      NUM_NODES  STATUS
gke-east  us-east1-b  1.19.10-gke.1600  34.73.171.206  e2-standard-4  1.19.10-gke.1600  4          RUNNING
Connect to the cluster:
Zonal clusters
gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --zone ${CLUSTER_LOCATION}
Regional clusters
gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --region ${CLUSTER_LOCATION}
Remember to unset your KUBECONFIG variable at the end.
Install Istio
In this section, you deploy Istio (version 1.7 or later) to the GKE cluster.
Download Istio:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} TARGET_ARCH=x86_64 sh -
Install Istio by using the istioctl command-line tool. Choose one of the following options:
- Option 1: without a custom IstioOperator resource
- Option 2: with a custom IstioOperator resource
Option 1
Without a custom IstioOperator resource:
./istio-${ISTIO_VERSION}/bin/istioctl install --set profile=default -y
The output is similar to the following:
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Option 2
With a custom IstioOperator resource:
cat <<EOF > istio-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-operator
spec:
  components:
    base:
      enabled: true
    ingressGateways:
    - enabled: true
      k8s:
        env:
        - name: TERMINATION_DRAIN_DURATION_SECONDS
          value: "10"
        hpaSpec:
          maxReplicas: 10
          metrics:
          - resource:
              name: cpu
              targetAverageUtilization: 80
            type: Resource
          minReplicas: 2
        resources:
          limits:
            cpu: "4"
            memory: 8Gi
          requests:
            cpu: "2"
            memory: 4Gi
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: tls
            port: 15443
            targetPort: 15443
      name: istio-ingressgateway
    - enabled: true
      k8s:
        env:
        - name: TERMINATION_DRAIN_DURATION_SECONDS
          value: "10"
        hpaSpec:
          maxReplicas: 10
          minReplicas: 2
        resources:
          limits:
            cpu: "4"
            memory: 8Gi
          requests:
            cpu: "2"
            memory: 4Gi
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: tls
            port: 15443
            targetPort: 15443
      label:
        istio: istio-api-ingressgateway
      name: istio-api-ingressgateway
  meshConfig:
    defaultConfig:
      tracing:
        sampling: 1
        zipkin:
          address: jaeger-collector.observability.svc.cluster.local:9411
    enableTracing: true
EOF
./istio-${ISTIO_VERSION}/bin/istioctl install -f istio-operator.yaml -y
The output is similar to the following:
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Ensure that Istio Services and Pods are deployed and running:
kubectl --context=${CLUSTER_CTX} -n istio-system get services,pods
The output is similar to the following:
Option 1
Without a custom IstioOperator resource:

NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                      AGE
service/istio-ingressgateway   LoadBalancer   10.64.5.113    <pending>     15021:31285/TCP,80:31740/TCP,443:30753/TCP,15443:31246/TCP   33s
service/istiod                 ClusterIP      10.64.15.184   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP                45s

NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-ingressgateway-6f44d6745b-22q9h   1/1     Running   0          34s
pod/istiod-b89f5cc6-nhsrc                   1/1     Running   0          48s
Option 2
With a custom IstioOperator resource:

NAME                               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                                      AGE
service/istio-api-ingressgateway   LoadBalancer   10.100.0.84    104.196.26.108   15021:32489/TCP,80:30083/TCP,443:30565/TCP,15443:30705/TCP   76s
service/istio-ingressgateway       LoadBalancer   10.100.3.221   34.139.111.125   15021:30966/TCP,80:31557/TCP,443:31016/TCP,15443:31574/TCP   75s
service/istiod                     ClusterIP      10.100.13.72   <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP                        86s

NAME                                            READY   STATUS    RESTARTS   AGE
pod/istio-api-ingressgateway-79978ddc65-hslbv   1/1     Running   0          61s
pod/istio-api-ingressgateway-79978ddc65-z92w8   1/1     Running   0          77s
pod/istio-ingressgateway-fb47c4859-pkdn7        1/1     Running   0          60s
pod/istio-ingressgateway-fb47c4859-t2pfq        1/1     Running   0          77s
pod/istiod-9445656d7-fxk9j                      1/1     Running   0          89s
Deploy Online Boutique
In this section, you deploy a sample microservices-based application called Online Boutique to the GKE cluster. Online Boutique is deployed in an Istio-enabled namespace. You verify that the application is working and that Istio is injecting the sidecar proxies to every Pod.
If you already have existing clusters with applications, you can skip creating a new namespace and deploying Online Boutique. You can follow the same process for all namespaces in the Install Anthos Service Mesh and prepare workloads section.
Deploy Online Boutique to the GKE cluster:
kpt pkg get \
    https://github.com/GoogleCloudPlatform/microservices-demo.git/release \
    online-boutique
kubectl --context=${CLUSTER_CTX} create namespace online-boutique
kubectl --context=${CLUSTER_CTX} label namespace online-boutique istio-injection=enabled
kubectl --context=${CLUSTER_CTX} -n online-boutique apply -f online-boutique
Wait until all Deployments are ready:
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment adservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment checkoutservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment currencyservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment emailservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment frontend
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment paymentservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment productcatalogservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment shippingservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment cartservice
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment loadgenerator
kubectl --context=${CLUSTER_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment recommendationservice
Ensure that there are two containers per Pod—the application container and the Istio sidecar proxy that Istio automatically injects into the Pod:
kubectl --context=${CLUSTER_CTX} -n online-boutique get pods
The output is similar to the following:
NAME                                     READY   STATUS    RESTARTS   AGE
adservice-7cbc9bd9-t92k4                 2/2     Running   0          3m21s
cartservice-d7db78c66-5qfmt              2/2     Running   1          3m23s
checkoutservice-784bfc794f-j8rl5         2/2     Running   0          3m26s
currencyservice-5898885559-lkwg4         2/2     Running   0          3m23s
emailservice-6bd8b47657-llvgv            2/2     Running   0          3m27s
frontend-764c5c755f-9wf97                2/2     Running   0          3m25s
loadgenerator-84cbcd768c-5pdbr           2/2     Running   3          3m23s
paymentservice-6c676df669-s779c          2/2     Running   0          3m25s
productcatalogservice-7fcf4f8cc-hvf5x    2/2     Running   0          3m24s
recommendationservice-79f5f4bbf5-6st24   2/2     Running   0          3m26s
redis-cart-74594bd569-pfhkz              2/2     Running   0          3m22s
shippingservice-b5879cdbf-5z7m5          2/2     Running   0          3m22s
You can also check the sidecar Envoy proxy image from any one of the Pods to confirm which Istio version of the Envoy proxies is deployed:
export FRONTEND_POD=$(kubectl get pod -n online-boutique -l app=frontend --context=${CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}')
kubectl --context=${CLUSTER_CTX} get pods ${FRONTEND_POD} -n online-boutique -o json | jq '.status.containerStatuses[].image'
The output is similar to the following:
"docker.io/istio/proxyv2:1.7.4"
"gcr.io/google-samples/microservices-demo/frontend:v0.3.4"
Access the application by navigating to the IP address of the istio-ingressgateway Service:
kubectl --context=${CLUSTER_CTX} -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
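With that IP address, you can smoke-test the frontend from the command line. This is a sketch that reuses the CLUSTER_CTX variable from earlier; when the IP cannot be retrieved (for example, when there is no cluster access), it prints the request it would have sent instead of making it.

```shell
# Sketch: smoke-test Online Boutique through the ingress gateway.
GATEWAY_IP=$(kubectl --context=${CLUSTER_CTX} -n istio-system \
  get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)

if [ -n "${GATEWAY_IP}" ]; then
  # A healthy frontend returns HTTP 200.
  curl -s -o /dev/null -w "%{http_code}\n" "http://${GATEWAY_IP}/"
else
  # No cluster access; print the request to run in Cloud Shell.
  echo "curl -s -o /dev/null -w '%{http_code}' http://GATEWAY_IP/"
fi
```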
What's next
- To get answers to frequently asked questions about this migration, see Migrating from Istio 1.7 or later to Anthos Service Mesh and Mesh CA FAQ.