
Add Compute Engine virtual machines to Anthos Service Mesh

This page describes how to add Compute Engine virtual machines (VMs) to Anthos Service Mesh on Google Kubernetes Engine (GKE). This page shows how to install Anthos Service Mesh 1.10.4 with the option that prepares the cluster for adding a VM.

  • If you have Anthos Service Mesh 1.9 or a 1.10 patch release already installed, this page shows how to upgrade to Anthos Service Mesh 1.10.4 with the required option for adding a VM.

  • If you have Anthos Service Mesh 1.9, and you don't want to upgrade, see the Anthos Service Mesh 1.9 guide for instructions on adding VMs to Anthos Service Mesh 1.9.

  • If you have an earlier version of Anthos Service Mesh, you must first upgrade Anthos Service Mesh to 1.9 or later.

This page provides the command line for installing the in-cluster control plane.

Introduction

Anthos Service Mesh lets you manage, observe, and secure services running on Managed Instance Groups (MIGs) together with services running on Google Kubernetes Engine (GKE) clusters in the mesh. You can do the following with the Compute Engine instances in your mesh:

  • Manage traffic.
  • Enforce mTLS.
  • Apply access control to service traffic.
  • Securely access Google Cloud services.
  • Collect metrics, logging, and tracing data.
  • Monitor services using the Google Cloud Console.

This enables legacy applications that aren't suitable or ready for containerization to take advantage of Anthos Service Mesh features, and lets you integrate those workloads with the rest of your services.

How it works

Anthos Service Mesh provides two related Custom Resource Definitions (CRDs) to represent virtual machine workloads:

  • WorkloadGroup represents a logical group of virtual machine workloads that share common properties. This is similar to a Deployment in Kubernetes.
  • WorkloadEntry represents a single instance of a virtual machine workload. This is similar to a Pod in Kubernetes.

A Service can then select the WorkloadGroup, and Anthos Service Mesh routes traffic to the VM instances in the same way it routes traffic to Pods. This lets the VM act like any other workload in the mesh.

You create a Compute Engine instance template for each Compute Engine instance group, which specifies a service proxy agent for each Compute Engine instance in that group. During installation, the agent bootstraps the service proxy, sets up traffic interception, and monitors the health of the service proxy during the Compute Engine instance's lifetime. The proxy connects with the Anthos Service Mesh control plane, then automatically registers each Compute Engine instance as a WorkloadEntry for the corresponding WorkloadGroup. This lets Anthos Service Mesh treat each instance as a service endpoint, like the Kubernetes Pods in the cluster. You can also create Kubernetes Services for VM workloads, as you would for Kubernetes Pods.
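For illustration, an auto-registered WorkloadEntry created by the control plane might look like the following sketch. All names, namespaces, and addresses here are hypothetical; the control plane generates the actual resource for you.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadEntry
metadata:
  # The control plane generates the name; this one is hypothetical.
  name: ratings-vm-10.128.0.5
  namespace: vm-workloads
spec:
  address: 10.128.0.5          # internal IP of the Compute Engine instance
  labels:
    app.kubernetes.io/name: ratings-vm
  serviceAccount: ratings-vm-sa
```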

To scale out the number of workloads on Compute Engine instances, starting from a minimum MIG size of zero, see Autoscaling groups of instances.

The service proxy agent relies on VM Manager to ensure that the agent is installed on each VM in the MIG. For more information about instance groups and VM management, see Managed instance groups (MIGs) and VM Manager.

Supported Linux distributions

Supported OS versions:

  • Debian 10
  • Debian 9
  • CentOS 8
  • CentOS 7

See Debian support or CentOS support for more information on OS distributions.

Limitations

  • The mesh control plane must be Anthos Service Mesh 1.9 or higher.
  • Only Compute Engine managed instance groups created from a Compute Engine instance template are supported.
  • The cluster and VMs must be on the same network, in the same project, and use a single network interface.
  • You can use this feature without an Anthos subscription, but certain UI elements and features in Google Cloud Console are only available to Anthos subscribers. For information about what is available to subscribers and non-subscribers, see Anthos and Anthos Service Mesh UI differences.

Prerequisites

Before you begin:

Review the Cloud project, Anthos licensing, and general requirements described in Prerequisites.

Cluster requirements

Before continuing, make sure that your cluster meets the GKE requirements. In addition, Anthos Service Mesh VM support requires that:

  • You specify Mesh CA as the certificate authority (CA) when you install Anthos Service Mesh.
  • You don't override Stackdriver for telemetry. Stackdriver is configured by default when you install Anthos Service Mesh.
  • Your cluster is registered to a fleet. However, if the cluster isn't registered, the VM installation process registers the cluster in your specified project.

Getting started

Follow the steps in Get started.

If you don't have Anthos Service Mesh installed, continue with the next section. If you have an existing Anthos Service Mesh installation, follow the steps in Existing installations.

New installation

Set up the Anthos Service Mesh cluster for VMs by preparing the Anthos Service Mesh 1.10 control plane.

The following command shows how to install the Anthos Service Mesh in-cluster control plane with --option vm that prepares the control plane for adding VMs.

./asmcli install \
  --project_id PROJECT_ID \
  --cluster_name CLUSTER_NAME \
  --cluster_location CLUSTER_LOCATION \
  --output_dir DIR_PATH \
  --enable_all \
  --ca mesh_ca \
  --option vm
  • --project_id, --cluster_name, and --cluster_location Specify the project ID that the cluster is in, the cluster name, and either the cluster zone or region.
  • --output_dir Include this option to specify a directory where asmcli downloads the anthos-service-mesh package and extracts the installation file, which contains istioctl, samples, and manifests. Otherwise asmcli downloads the files to a tmp directory. You can specify either a relative path or a full path. The environment variable $PWD doesn't work here.
  • --enable_all Allows the script to:
    • Grant required IAM permissions.
    • Enable the required Google APIs.
    • Set a label on the cluster that identifies the mesh.
    • Register the cluster to the fleet if it isn't already registered.

  • --ca mesh_ca Use Mesh CA as the certificate authority. asmcli configures Mesh CA to use fleet workload identity.
  • --option vm Prepares the cluster for including a VM in the service mesh.

If you have existing workloads running on your cluster, redeploy the workloads, and then come back to this page to add your VMs.

Existing installations

If Anthos Service Mesh has already been installed on your cluster, do the following steps:

  1. Register your cluster to the fleet if you haven't already done so.

  2. Run the following command to verify that your Anthos Service Mesh installation is ready for VM workloads.

    ./asmcli experimental vm prepare-cluster \
        --project_id PROJECT_ID \
        --cluster_name CLUSTER_NAME \
        --cluster_location CLUSTER_LOCATION
    

    On success, the command outputs the following:

    The cluster is ready for adding VM workloads.
    Please follow the Anthos Service Mesh for Compute Engine VM user guide to add
    Compute Engine VMs to your mesh.
    

The command does the following:

  1. Enable VM Auto Registration: This is done by setting the PILOT_ENABLE_WORKLOAD_ENTRY_AUTOREGISTRATION and PILOT_ENABLE_CROSS_CLUSTER_WORKLOAD_ENTRY variables to true. When this is enabled, new VM instances will register with the WorkloadGroup and new WorkloadEntry CRs will be created to route traffic to the VMs. All Anthos Service Mesh 1.9+ control planes installed with asmcli will have VM auto registration enabled by default.

  2. Install an expansion gateway: This gateway, named the east-west gateway, is defined in the Anthos Service Mesh config package. It also exposes the control plane to the VMs.

  3. Install the IdentityProvider CRD and register a Google IdentityProvider CR to enable VMs to authenticate to Anthos Service Mesh control plane and securely communicate with the rest of the service mesh.

  4. Register the cluster to a fleet and enable workload identity, if you use --enable_all or --enable_registration in the asmcli script.

  5. Enable the Service Mesh feature within the fleet. This feature will manage the policies necessary to allow VMs to securely communicate with the mesh.
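As a sketch of how you might double-check the first item above, you can verify the auto-registration variables on the istiod deployment. The kubectl command in the comments and the sample env listing below are assumptions for illustration, not output guaranteed by asmcli.

```shell
#!/bin/bash
# In practice you could pull the env list from the istiod deployment, e.g.:
#   kubectl -n istio-system get deploy -l app=istiod \
#     -o jsonpath='{.items[0].spec.template.spec.containers[0].env}'
# Here we check a sample (hypothetical) env listing instead.
env_json='[{"name":"PILOT_ENABLE_WORKLOAD_ENTRY_AUTOREGISTRATION","value":"true"},{"name":"PILOT_ENABLE_CROSS_CLUSTER_WORKLOAD_ENTRY","value":"true"}]'

for var in PILOT_ENABLE_WORKLOAD_ENTRY_AUTOREGISTRATION PILOT_ENABLE_CROSS_CLUSTER_WORKLOAD_ENTRY; do
  # Each variable must be present and set to "true" for VM auto registration.
  if echo "${env_json}" | grep -q "\"${var}\",\"value\":\"true\""; then
    echo "${var} enabled"
  fi
done
```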

Install ingress gateways

Anthos Service Mesh gives you the option to deploy and manage gateways as part of your service mesh. A gateway describes a load balancer operating at the edge of the mesh that receives incoming or outgoing HTTP/TCP connections. Gateways are Envoy proxies that provide you with fine-grained control over traffic entering and leaving the mesh.

  1. Create a namespace for the ingress gateway if you don't already have one. Gateways are user workloads, and as a best practice, they shouldn't be deployed in the control plane namespace. Replace GATEWAY_NAMESPACE with the name of your namespace.

    kubectl create namespace GATEWAY_NAMESPACE
    
  2. Enable auto-injection on the gateway by applying a revision label on the gateway namespace. The revision label is used by the sidecar injector webhook to associate injected proxies with a particular control plane revision. The revision label that you use depends on whether you deployed managed Anthos Service Mesh or the in-cluster control plane.

    1. Use the following command to locate the revision label on istiod:

      kubectl -n istio-system get pods -l app=istiod --show-labels
      

      The output looks similar to the following:

      NAME                                READY   STATUS    RESTARTS   AGE   LABELS
      istiod-asm-1104-14-5788d57586-bljj4   1/1     Running   0          23h   app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=5788d57586
      istiod-asm-1104-14-5788d57586-vsklm   1/1     Running   1          23h   app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=5788d57586
      

      In the output, under the LABELS column, note the value of the istiod revision label, which follows the prefix istio.io/rev=. In this example, the value is asm-1104-14.
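If you prefer to capture the revision label in a variable rather than reading it by eye, a small sketch like the following works on labels in the format shown above. The sample labels string is copied from the example output; the piped kubectl command in the comment is one way you might feed it real data.

```shell
#!/bin/bash
# Sample LABELS value from the example output above. In practice you could
# derive it from:
#   kubectl -n istio-system get pods -l app=istiod --show-labels --no-headers
labels='app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=5788d57586'

# Extract the value that follows "istio.io/rev=" up to the next comma.
REVISION=$(echo "${labels}" | sed -n 's/.*istio\.io\/rev=\([^,]*\).*/\1/p')
echo "${REVISION}"   # prints asm-1104-14
```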

    2. Apply the revision label to the namespace. In the following command, REVISION is the value of the istiod revision label that you noted in the previous step.

      kubectl label namespace GATEWAY_NAMESPACE istio-injection- istio.io/rev=REVISION --overwrite
      

    You can ignore the message "istio-injection not found" in the output. That means that the namespace didn't previously have the istio-injection label, which you should expect in new installations of Anthos Service Mesh or new deployments. Because auto-injection fails if a namespace has both the istio-injection and the revision label, all kubectl label commands in the Anthos Service Mesh documentation include removing the istio-injection label.

  3. Change to the directory that you specified in --output_dir.

  4. You can deploy the example ingress gateway configuration located in the samples/gateways/istio-ingressgateway/ directory as is, or modify it as needed.

    kubectl apply -n GATEWAY_NAMESPACE -f samples/gateways/istio-ingressgateway
    

Learn more about best practices for gateways.

Add your VMs

In this section, you add Compute Engine instances to your mesh based on an instance template that you create with gcloud. gcloud generates only the configuration necessary for the service proxy agent. To include more configuration in your instance template, see the gcloud reference guide.

To add VMs to your mesh, use the following steps:

  1. Set the following environment variables to use in later steps. Set these variables for each VM workload:

    • WORKLOAD_NAME is the name of the workload the VM is part of, which must be a compliant DNS-1123 subdomain consisting of lower case alphanumeric characters.
    • WORKLOAD_VERSION is the version of the workload the VM is part of. Optional.
    • WORKLOAD_SERVICE_ACCOUNT is the GCP service account that the VM runs as.
    • WORKLOAD_NAMESPACE is the namespace for the workload.
    • ASM_INSTANCE_TEMPLATE is the name of the instance template to be created. Compute Engine instance template names can't contain underscores.
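As a quick sketch, the example values below (all hypothetical) show one way to set these variables and check that WORKLOAD_NAME satisfies the lowercase-alphanumeric naming requirement:

```shell
#!/bin/bash
# Hypothetical example values; substitute your own.
WORKLOAD_NAME="ratings-vm"                    # must be DNS-1123 compliant
WORKLOAD_VERSION="v1"                         # optional
WORKLOAD_SERVICE_ACCOUNT="ratings-vm-sa"
WORKLOAD_NAMESPACE="vm-workloads"
ASM_INSTANCE_TEMPLATE="ratings-vm-template"   # underscores are not allowed

# Simple sanity check: lowercase alphanumerics and hyphens only,
# starting and ending with an alphanumeric character.
if [[ "${WORKLOAD_NAME}" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$ ]]; then
  echo "WORKLOAD_NAME looks valid"
else
  echo "WORKLOAD_NAME is not DNS-1123 compliant" >&2
  exit 1
fi
```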
  2. Create the namespace for the VM workloads if it doesn't already exist:

    kubectl create ns WORKLOAD_NAMESPACE
    
  3. Label the namespace with the control plane revision.

    For an example of how to find the control plane revision shown as REVISION in the following example, see Deploying and redeploying workloads.

    kubectl label ns WORKLOAD_NAMESPACE istio-injection- istio.io/rev=REVISION --overwrite
    
  4. Create the WorkloadGroup for the VMs to be registered:

    kubectl apply -f - << EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: WorkloadGroup
    metadata:
      name: WORKLOAD_NAME
      namespace: WORKLOAD_NAMESPACE
    spec:
      metadata:
        labels:
          app.kubernetes.io/name: WORKLOAD_NAME
          app.kubernetes.io/version: WORKLOAD_VERSION
        annotations:
          security.cloud.google.com/IdentityProvider: google
      template:
        serviceAccount: WORKLOAD_SERVICE_ACCOUNT
    EOF
    
    Field descriptions:

    • name: The name of the workload the VM is part of.
    • namespace: The namespace the workload is part of.
    • app.kubernetes.io/name and app.kubernetes.io/version: The recommended labels for Kubernetes applications. You can use your own labels for your VM workloads.
    • serviceAccount: The service account identity used by the VM and project, which is used as part of the workload's identity in the SPIFFE format. For more information, see Service accounts.
    • security.cloud.google.com/IdentityProvider: The identity provider that the VM uses, which must already be registered in your cluster. For Compute Engine VMs, set it to google. The IdentityProvider tells the control plane how to authenticate the VM's credentials and where to extract the VM's service account.
  5. Use the gcloud beta compute instance-templates create command with the --mesh flag to create an instance template for your Anthos Service Mesh Compute Engine instances.

    gcloud verifies cluster prerequisites, adds VM labels for Anthos Service Mesh, generates the custom metadata configuration for the service proxy agent and creates a new instance template.

    If your instance template includes a startup script that requires network connectivity, the script should be resilient to transient network connectivity issues. See the demo application for an example of how to add resiliency against temporary network disruption.

    For more information about creating instance templates, see Creating instance templates.

    gcloud beta compute instance-templates create \
    ASM_INSTANCE_TEMPLATE \
    --mesh gke-cluster=CLUSTER_LOCATION/CLUSTER_NAME,workload=WORKLOAD_NAMESPACE/WORKLOAD_NAME \
    --project PROJECT_ID
    
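As a sketch of the resilience advice above, a startup script can wrap its network-dependent commands in a small retry helper like the following. The helper name, attempt count, and fixed backoff are illustrative assumptions, not part of the generated template.

```shell
#!/bin/bash
# Sketch: retry helper for network-dependent startup-script steps.
retry() {
  local attempts=$1; shift
  local n=0
  until "$@"; do
    n=$((n+1))
    if (( n >= attempts )); then
      echo "command failed after ${attempts} attempts" >&2
      return 1
    fi
    sleep 1   # fixed backoff; consider exponential backoff in production
  done
}

# Example usage in a startup script (placeholder URL):
#   retry 5 curl -sfLO https://example.com/app-config.tar.gz

# Quick self-check: a command that succeeds needs no retries.
retry 3 true && echo "retry helper ready"
```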
  6. Set the following environment variables for each MIG that you create:

    • INSTANCE_GROUP_NAME is the name of the Compute Engine instance group to create.
    • ASM_INSTANCE_TEMPLATE is the name of the instance template that you created in the previous step.
    • INSTANCE_GROUP_ZONE is the zone of the Compute Engine instance group to be created.
    • PROJECT_ID is the project ID that the cluster was created in.
    • SIZE is the size of the instance group to be created. It can be changed after the instance group is created.
    • WORKLOAD_NAME is the name of the workload the VM is part of.
    • WORKLOAD_NAMESPACE is the namespace for the workload.
  7. Create a Managed Instance Group for the VM workloads, using the variables created in the previous steps:

    gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
    --template ASM_INSTANCE_TEMPLATE \
    --zone=INSTANCE_GROUP_ZONE \
    --project=PROJECT_ID \
    --size=SIZE
    

    To scale out the number of workloads on Compute Engine instances, starting from a zonal or regional MIG size of zero, see Autoscaling groups of instances. For more information about creating groups, see gcloud compute instance-groups managed create.

    When your instance starts, it will automatically authenticate with the Anthos Service Mesh control plane on your cluster and the control plane will register each VM as a WorkloadEntry.

  8. When the VM instance in the MIG finishes starting up, you can view the registered VMs in the workload namespace by using the following command:

    kubectl get workloadentry -n WORKLOAD_NAMESPACE
    
  9. Add a Kubernetes Service to expose VM workloads added above. Be sure to have the service select the corresponding label on the VM WorkloadGroup registered above for correct traffic routing.

    The following example creates a Kubernetes service named WORKLOAD_NAME in the namespace WORKLOAD_NAMESPACE that exposes VM workloads with the app.kubernetes.io/name: WORKLOAD_NAME label under HTTP port 80.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: WORKLOAD_NAME
      namespace: WORKLOAD_NAMESPACE
      labels:
        asm_resource_type: VM
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app.kubernetes.io/name: WORKLOAD_NAME
    EOF
    

    For more details on how to create a Kubernetes service, see https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service.

  10. To use a sample application on your VM, see Deploy a sample application.

Redeploy workloads after an in-cluster control plane upgrade

If you upgraded Anthos Service Mesh in the previous section, and you have workloads running on your cluster, switch them to the new control plane.

For VM workloads, create a new instance template and perform a rolling update to the VMs in your MIG:

  1. Use the following command to locate the revision label on istiod:

    kubectl -n istio-system get pods -l app=istiod --show-labels
    

    The output from the command is similar to the following. Note that the output for migrations differs slightly from the output for upgrades. The following example output is from a migration.

    NAME                                  READY   STATUS    RESTARTS   AGE   LABELS
    istiod-7744bc8dd7-qhlss               1/1     Running   0          49m   app=istiod,istio.io/rev=default,istio=pilot,pod-template-hash=7744bc8dd7
    istiod-asm-1104-14-85d86774f7-flrt2   1/1     Running   0          26m   app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=85d86774f7
    istiod-asm-1104-14-85d86774f7-tcwtn   1/1     Running   0          26m   app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=85d86774f7

    1. In the output, under the LABELS column, note the value of the istiod revision label for the new version, which follows the prefix istio.io/rev=. In this example, the value is asm-1104-14.

    2. Also note the value in the revision label for the old istiod version. You need this to delete the old version of istiod when you finish moving workloads to the new version. In the example output, the value in the revision label for the old version of istiod is default.

  2. Add the revision label to a namespace and remove the istio-injection label (if it exists). In the following command, change REVISION to the value that matches the new revision of istiod.

    kubectl label namespace NAMESPACE istio.io/rev=REVISION istio-injection- --overwrite

    If you see "istio-injection not found" in the output, you can ignore it. That means that the namespace didn't previously have the istio-injection label. Because auto-injection fails if a namespace has both the istio-injection and the revision label, all kubectl label commands in the Anthos Service Mesh documentation include removing the istio-injection label.

  3. Create a new instance template using gcloud. If you had an instance template for the same workload, include the same configuration in the new template.

    gcloud beta compute instance-templates create NEW_ASM_INSTANCE_TEMPLATE \
    --mesh gke-cluster=CLUSTER_LOCATION/CLUSTER_NAME,workload=WORKLOAD_NAMESPACE/WORKLOAD_NAME \
    --project PROJECT_ID
    
  4. Perform a rolling-update to your existing MIG for the workload.

    For more information, see Starting a basic rolling update.

    gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_ASM_INSTANCE_TEMPLATE \
    --zone=INSTANCE_GROUP_ZONE
    
  5. Test the VM workload to ensure that it is working as expected.

Upgrade VM applications

If you have any updates to your application, including changes to the WorkloadGroup and/or changes to your instance template configuration, a new instance template is required to update the MIG of your VM workloads.

After you apply the WorkloadGroup change or create the new source instance template, create a new instance template for Anthos Service Mesh and perform a rolling update to the VMs in your MIG.

  1. Create a new instance template using gcloud.

    gcloud beta compute instance-templates create NEW_ASM_INSTANCE_TEMPLATE \
    --mesh gke-cluster=CLUSTER_LOCATION/CLUSTER_NAME,workload=WORKLOAD_NAMESPACE/WORKLOAD_NAME \
    --project PROJECT_ID
    
  2. Perform a rolling-update to your existing MIG for the workload. For more information about how to use MIG rolling update, see Starting a basic rolling update.

    gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_ASM_INSTANCE_TEMPLATE \
    --zone=INSTANCE_GROUP_ZONE
    
  3. Test the VM workload to ensure it works as expected.

Deploy a sample application

To demonstrate that your new mesh configuration is working correctly, you can install the Bookinfo sample application. This example runs a MySQL database on the VM and the ratings service reads the ratings values from the database.

Install Bookinfo on the cluster

Use the following steps to deploy the Bookinfo application's services with the sidecar proxies injected alongside each service. The Bookinfo application is deployed in the default namespace.

  1. On the command line on the computer where you installed Anthos Service Mesh, go to the root of the Anthos Service Mesh installation directory that you created in the Downloading the script step.

  2. To enable automatic sidecar injection, you need the revision label on istiod.

    Use the following command to locate the label on istiod, which contains the revision label value to use in later steps:

    kubectl -n istio-system get pods -l app=istiod --show-labels
    

    The output looks similar to the following:

    NAME                                READY   STATUS    RESTARTS   AGE   LABELS
    istiod-asm-1104-14-5788d57586-bljj4   1/1     Running   0          23h   app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=5788d57586
    istiod-asm-1104-14-5788d57586-vsklm   1/1     Running   1          23h   app=istiod,istio.io/rev=asm-1104-14,istio=istiod,pod-template-hash=5788d57586
    

    In the output, under the LABELS column, note the value of the istiod revision label, which follows the prefix istio.io/rev=. In this example, the value is asm-1104-14.

  3. Apply the revision label to the default namespace.

    In the following command, REVISION is the value of the istiod revision label that you noted in the previous step.

    kubectl label namespace default istio-injection- istio.io/rev=REVISION --overwrite
    

    You can ignore the message "istio-injection not found" in the output. That means that the namespace didn't previously have the istio-injection label, which you should expect in new installations of Anthos Service Mesh or new deployments. Because auto-injection fails if a namespace has both the istio-injection and the revision label, all kubectl label commands in the Anthos Service Mesh documentation include removing the istio-injection label.

  4. Deploy your application to the default namespace using kubectl:

    kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    
  5. Confirm that the application has been deployed correctly by running the following commands:

    kubectl get services
    

    Expected output:

    NAME                       CLUSTER-IP   EXTERNAL-IP         PORT(S)              AGE
    details                    10.0.0.31    <none>        9080/TCP             6m
    kubernetes                 10.0.0.1     <none>        443/TCP              7d
    productpage                10.0.0.120   <none>        9080/TCP             6m
    ratings                    10.0.0.15    <none>        9080/TCP             6m
    reviews                    10.0.0.170   <none>        9080/TCP             6m

    and

    kubectl get pod
    

    Expected output:

    NAME                                        READY     STATUS    RESTARTS   AGE
    details-v1-1520924117-48z17                 2/2       Running   0          6m
    productpage-v1-560495357-jk1lz              2/2       Running   0          6m
    ratings-v1-734492171-rnr5l                  2/2       Running   0          6m
    reviews-v1-874083890-f0qf0                  2/2       Running   0          6m
    reviews-v2-1343845940-b34q5                 2/2       Running   0          6m
    reviews-v3-1813607990-8ch52                 2/2       Running   0          6m
  6. Finally, define the ingress gateway routing for the application:

    kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    

    Expected output:

    gateway.networking.istio.io/bookinfo-gateway created
    virtualservice.networking.istio.io/bookinfo created
  7. Confirm that the product page is accessible. In the following command, GATEWAY_NAMESPACE is the namespace of your Istio Gateway.

    export INGRESS_HOST=$(kubectl -n GATEWAY_NAMESPACE get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export INGRESS_PORT=$(kubectl -n GATEWAY_NAMESPACE get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
    export GATEWAY_URL="${INGRESS_HOST}:${INGRESS_PORT}"
    curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
    

    Expected output:

    <title>Simple Bookstore App</title>
    

Create Compute Engine instances and install MySQL

In this step, you will create a Compute Engine instance template for the MySQL instance running on the VM. For more detailed steps, see Bookinfo with a Virtual Machine.

  1. Create a startup script that installs MySQL and adds a ratings database on startup. Note that if you are using CentOS, it can take up to 10 minutes for mariadb-server to be ready.

    Debian

    cat << "EOF" > init-mysql
    #!/bin/bash
    
    # Wait until Envoy is ready before installing mysql
    while true; do
      rt=$(curl -s 127.0.0.1:15000/ready)
      if [[ $? -eq 0 ]] && [[ "${rt}" == "LIVE" ]]; then
        echo "envoy is ready"
        break
      fi
      sleep 1
    done
    
    # Wait until DNS is ready before installing mysql
    while true; do
      curl -I productpage.default.svc:9080
      if [[ $? -eq 0 ]]; then
        echo "dns is ready"
        break
      fi
      sleep 1
    done
    
    sudo apt-get update && sudo apt-get install -y mariadb-server
    
    sudo sed -i '/bind-address/c\bind-address  = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf
    
    cat <<EOD | sudo mysql
    # Grant access to root
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
    
    # Grant root access to other IPs
    CREATE USER 'root'@'%' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
    FLUSH PRIVILEGES;
    quit;
    EOD
    
    sudo systemctl restart mysql
    
    curl -LO https://raw.githubusercontent.com/istio/istio/release-1.10/samples/bookinfo/src/mysql/mysqldb-init.sql
    
    mysql -u root -ppassword < mysqldb-init.sql
    EOF
    

    CentOS

    cat << "EOF" > init-mysql
    #!/bin/bash
    
    # Wait until Envoy is ready before installing mysql
    while true; do
      rt=$(curl -s 127.0.0.1:15000/ready)
      if [[ $? -eq 0 ]] && [[ "${rt}" == "LIVE" ]]; then
        echo "envoy is ready"
        break
      fi
      sleep 1
    done
    
    # Wait until DNS is ready before installing mysql
    while true; do
      curl -I productpage.default.svc:9080
      if [[ $? -eq 0 ]]; then
        echo "dns is ready"
        break
      fi
      sleep 1
    done
    
    sudo yum update -y && sudo yum install -y mariadb-server
    
    # Wait until mysql is ready
    while true; do
      rt=$(which mysql)
      if [[ ! -z "${rt}" ]]; then
        echo "mysql is ready"
        break
      fi
      sleep 1
    done
    
    sudo sed -i '/bind-address/c\bind-address  = 0.0.0.0' /etc/my.cnf.d/mariadb-server.cnf
    
    sudo systemctl restart mariadb
    
    cat > grantaccess.sql << EOD
    
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
    
    CREATE USER 'root'@'%' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
    FLUSH PRIVILEGES;
    EOD
    
    until sudo mysql < grantaccess.sql; do
       sleep 1
    done
    
    sudo systemctl restart mariadb
    
    curl -LO https://raw.githubusercontent.com/istio/istio/release-1.10/samples/bookinfo/src/mysql/mysqldb-init.sql
    
    mysql -u root -ppassword < mysqldb-init.sql
    EOF
    
  2. Create a WorkloadGroup for the MySQL workload:

    kubectl apply -f - << EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: WorkloadGroup
    metadata:
      name: mysql
      namespace: default
    spec:
      metadata:
        labels:
          app.kubernetes.io/name: mysql
        annotations:
          security.cloud.google.com/IdentityProvider: google
      template:
        serviceAccount: WORKLOAD_SERVICE_ACCOUNT
    EOF
    
  3. Use gcloud to create a new instance template to prepare the instances for your mesh and include the startup script created above.

    Debian

    gcloud beta compute instance-templates create asm-mysql-instance-template \
    --mesh gke-cluster=CLUSTER_LOCATION/CLUSTER_NAME,workload=default/mysql \
    --project PROJECT_ID \
    --metadata-from-file=startup-script=init-mysql \
    --image-project=debian-cloud --image-family=debian-10 --boot-disk-size=10GB
    

    CentOS

    gcloud beta compute instance-templates create asm-mysql-instance-template \
    --mesh gke-cluster=CLUSTER_LOCATION/CLUSTER_NAME,workload=default/mysql \
    --project PROJECT_ID \
    --metadata-from-file=startup-script=init-mysql \
    --image-project=centos-cloud --image-family=centos-8 --boot-disk-size=20GB
    
  4. Create a Compute Engine MIG using the newly created instance template.

    gcloud compute instance-groups managed create mysql-instance \
    --template asm-mysql-instance-template \
    --zone=us-central1-c \
    --project=PROJECT_ID \
    --size=1
    

Create a service

Create a Kubernetes service for the MySQL service by using the following command:

kubectl apply -f - << EOF
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
  labels:
    asm_resource_type: VM
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app.kubernetes.io/name: mysql
EOF

Use the Anthos UI dashboard

To see the new VM-based service you've created, click Anthos > Service Mesh from the main left navigation bar. It will display a table of the services running in your mesh. The service you added should appear in the table, with a Type value of VM and some high-level metrics. To see more telemetry from your VM-based service, click the service name, which will display the service-level dashboard.

For more information on how to use the Anthos UI dashboard, see Exploring Anthos Service Mesh in the Cloud Console.

Manage traffic to the VM workloads

You can change the networking rules to control how traffic flows in and out of your VMs.

Control traffic to a new ratings service (Pod to VM)

Create another ratings service in Bookinfo that uses the MySQL instance created above as its data source, and specify a routing rule that forces the reviews service to use the new ratings service.

  1. Create a new ratings service that uses the MySQL instance.

    kubectl apply -f - << EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ratings-v2-mysql-vm
      labels:
        app: ratings
        version: v2-mysql-vm
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ratings
          version: v2-mysql-vm
      template:
        metadata:
          labels:
            app: ratings
            version: v2-mysql-vm
        spec:
          serviceAccountName: bookinfo-ratings
          containers:
          - name: ratings
            image: docker.io/istio/examples-bookinfo-ratings-v2:1.16.2
            imagePullPolicy: IfNotPresent
            env:
              - name: DB_TYPE
                value: "mysql"
              - name: MYSQL_DB_HOST
                value: mysql.default.svc.cluster.local
              - name: MYSQL_DB_PORT
                value: "3306"
              - name: MYSQL_DB_USER
                value: root
              - name: MYSQL_DB_PASSWORD
                value: password
            ports:
            - containerPort: 9080
    EOF
    
  2. Create a routing rule.

    kubectl apply -f - << EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v3
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
    spec:
      hosts:
      - ratings
      http:
      - route:
        - destination:
            host: ratings
            subset: v2-mysql-vm
    EOF
    
  3. Apply destination rules for the created services.

    kubectl apply -f - << EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: reviews
    spec:
      host: reviews
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
      - name: v3
        labels:
          version: v3
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: ratings
    spec:
      host: ratings
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
      - name: v2-mysql
        labels:
          version: v2-mysql
      - name: v2-mysql-vm
        labels:
          version: v2-mysql-vm
    EOF
    
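After applying the rules, you can confirm that the routing configuration is in place with standard kubectl reads (the resource names match those created above):

```shell
# Confirm the routing resources exist: reviews should route to subset v3,
# and ratings to the v2-mysql-vm subset backed by the VM workload
kubectl get virtualservice reviews ratings -n default
kubectl get destinationrule reviews ratings -n default
```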

Validating the application deployment

To see if the BookInfo application is working, you need to send traffic to the ingress gateway.

  • If you installed Anthos Service Mesh on GKE, get the external IP address of the ingress gateway that you created in previous steps:

    kubectl get svc istio-ingressgateway -n GATEWAY_NAMESPACE
    

    Output:

    NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
    istio-ingressgateway   LoadBalancer   10.19.247.233   35.239.7.64   80:31380/TCP,443:31390/TCP,31400:31400/TCP   27m

    In this example, the IP address of the ingress service is 35.239.7.64.
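For the steps that follow, it can be convenient to capture the gateway address in a variable instead of copying it by hand. This is a jsonpath read against the same service; substitute your gateway namespace:

```shell
# Store the ingress gateway's external IP for use in place of EXTERNAL_IP
EXTERNAL_IP=$(kubectl get svc istio-ingressgateway -n GATEWAY_NAMESPACE \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "${EXTERNAL_IP}"
```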

Trying the application

  1. Check that the BookInfo app is running with curl:

    curl -I http://EXTERNAL_IP/productpage
    

    If the response shows 200, it means the application is working properly with Anthos Service Mesh.

  2. To view the BookInfo web page, enter the following address in your browser:

    http://EXTERNAL_IP/productpage
    
  3. Verify on the Bookinfo application homepage that it is showing five stars from Reviewer1 and four stars from Reviewer2.

Enforcing security on the VM workloads

Enforcing security on the VM workloads is the same as enforcing security on the Kubernetes workloads. For more information, see Istio security.

After you complete the previous steps, your Compute Engine VM will have a Google-issued workload certificate. In the certificate, the SubjectAlternativeName value shows the VM's Anthos workload identity in the form spiffe://<workload_identity_pool>/ns/WORKLOAD_NAMESPACE/sa/WORKLOAD_SERVICE_ACCOUNT.

For more information, see workload identity pool.
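As an illustration of how that identity is composed, the SPIFFE ID concatenates the workload identity pool (typically PROJECT_ID.svc.id.goog for a GKE cluster) with the workload's namespace and service account. The values below are placeholders, not real resources:

```shell
# Placeholder values -- substitute your own project, namespace, and service account
WORKLOAD_IDENTITY_POOL="PROJECT_ID.svc.id.goog"
WORKLOAD_NAMESPACE="default"
WORKLOAD_SERVICE_ACCOUNT="WORKLOAD_SERVICE_ACCOUNT"

# The identity encoded in the certificate's SubjectAlternativeName field
SPIFFE_ID="spiffe://${WORKLOAD_IDENTITY_POOL}/ns/${WORKLOAD_NAMESPACE}/sa/${WORKLOAD_SERVICE_ACCOUNT}"
echo "${SPIFFE_ID}"
```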

Enable mTLS strict mode for the mesh

Apply the following YAML to enforce strict mTLS mesh-wide.

kubectl apply -f - << EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
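One way to spot-check STRICT mode is to send plaintext traffic from a client that has no sidecar; the mesh should reject it. This is a sketch only: it assumes a namespace (here called no-sidecar) where automatic sidecar injection is disabled, and uses the public curlimages/curl image:

```shell
# A client without an Envoy sidecar cannot present a mesh certificate, so a
# plaintext request to an in-mesh service should now fail (e.g. connection reset)
kubectl run mtls-test -n no-sidecar --rm -i --restart=Never \
    --image=curlimages/curl -- \
    curl -sS --max-time 5 http://productpage.default.svc.cluster.local:9080/productpage
```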

Authorization for service-to-service traffic

Use AuthorizationPolicy to control access between the applications on your Compute Engine VM and other mesh workloads (e.g., on the GKE cluster).

Example: Deny Kubernetes workloads to access Compute Engine VMs

The following authorization policy denies the Kubernetes ratings workload access to the Compute Engine VM workloads that serve the ratings MySQL server.

  kubectl apply -f - << EOF
  apiVersion: security.istio.io/v1beta1
  kind: AuthorizationPolicy
  metadata:
    name: mysql-deny
    namespace: default
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: mysql
    action: DENY
    rules:
    - from:
      - source:
          principals: ["cluster.local/ns/default/sa/bookinfo-ratings"]
  EOF

After applying the example AuthorizationPolicy, you should see a Ratings service is currently unavailable error message in the book reviews section on the product page.
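When you finish testing, you can delete the policy to restore access; calls from the bookinfo-ratings service account should then succeed again:

```shell
# Remove the DENY policy applied above
kubectl delete authorizationpolicy mysql-deny -n default
```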

Installing the Cloud Monitoring Agent

You can install the Cloud Monitoring agent to collect and monitor system and application metrics from your VM instances. This lets you monitor key metrics such as CPU and memory utilization on the VM.

For more information, see Cloud Monitoring Agent documentation.
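As a sketch for a Debian-based VM, the Monitoring agent is installed from Google's package repository; consult the agent documentation for the current, supported procedure and for other distributions:

```shell
# On the VM: add Google's agent package repository, then install and start
# the Monitoring agent (package name: stackdriver-agent)
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh
sudo apt-get update
sudo apt-get install -y stackdriver-agent
sudo service stackdriver-agent start
```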

Troubleshooting

For troubleshooting tips, see Troubleshooting VM support.