Using Istio with Google Compute Engine

Istio is an open source framework for connecting, monitoring, and securing microservices. It lets you create a network, or mesh, of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special Envoy sidecar proxy to each of your app's pods. The Envoy proxy intercepts all network communication between microservices, and is configured and managed using Istio’s control plane functionality.

Currently, Istio's control plane can only be installed on Kubernetes implementations such as Google Kubernetes Engine (GKE), but its mesh expansion feature means that you can add services running on non-Kubernetes platforms, including services running on Compute Engine VMs, to the service mesh. This lets you manage the Kubernetes and virtual machine (VM) services as a single mesh. This tutorial shows you how to configure Istio to use mesh expansion, and how to configure Compute Engine VM instances so that they can be added to an Istio mesh. It assumes that you already have an existing Istio installation on GKE.

For more information about Istio and how it works, see the Istio documentation. If you're interested in how the mesh expansion configuration used in this tutorial works, see How it works, though you don't need to read that section to complete the tutorial.

Objectives

  • Update an existing Istio installation on GKE to use mesh expansion.
  • Configure Compute Engine VM instances to join an Istio service mesh.
  • Run an Istio mesh service on a Compute Engine VM instance.


Costs

This tutorial uses billable components of Google Cloud Platform including Compute Engine.

New GCP users might be eligible for a free trial.

Before you begin

  • You must have an existing Istio installation on GKE. For more information, see Installing Istio on a GKE cluster.
  • You must have the BookInfo sample app installed and running, again, as described in Installing Istio on a GKE cluster, and have istioctl in your PATH.
  • You must have enough IP address and backend service quota to run four internal load balancers (one load balancer and one IP address each) and BookInfo's ingress service (one load balancer and one IP address) from the previous tutorial.
  • Make sure your kubectl context is set to your Istio cluster.

    kubectl config current-context              # Display the current context
    kubectl config use-context [CLUSTER_NAME]   # Set the default context to [CLUSTER_NAME]
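If you script your setup, a small guard can abort early when kubectl points at the wrong cluster. The following sketch is illustrative only: the kubectl stub and the istio-cluster context name are invented so the logic runs anywhere; remove the stub and substitute your own context name for real use.

```shell
# Hypothetical guard: proceed only if the active kubectl context is the Istio
# cluster. The stub below stands in for real kubectl so this sketch runs
# anywhere; delete it (and use your own context name) in real use.
kubectl() { echo "istio-cluster"; }

expected_context="istio-cluster"
current_context=$(kubectl config current-context)
if [ "$current_context" = "$expected_context" ]; then
  echo "context ok: $current_context"
else
  echo "wrong context: $current_context (expected $expected_context)" >&2
  exit 1
fi
```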

Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set the defaults:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone [COMPUTE_ENGINE_ZONE]

Setting up the mesh for expansion

The first step when adding non-Google Kubernetes Engine services to an Istio mesh is to configure the Istio installation itself and generate the configuration files that the Compute Engine VM instances will use. Your Istio download includes a script that helps with this on GKE; you'll find it in /install/tools/. Follow these steps on the machine where you have your Istio installation directory and your cluster credentials. This is your cluster admin machine.

  1. Use the provided mesh-expansion deployment to set up internal load balancers (ILBs) for Pilot, Mixer, the Istio Certificate Authority, and the GKE Cloud DNS server. This ensures that the VM instances can access these services.

    kubectl apply -f install/kubernetes/mesh-expansion.yaml
  2. Confirm that the services are up and running and that all the ILBs have EXTERNAL-IP values (you may have to wait a minute):

    $ kubectl -n istio-system get services
    NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                  AGE
    istio-ca-ilb      LoadBalancer                              8060:32174/TCP                                           3m
    istio-egress      ClusterIP                   <none>        80/TCP                                                   7m
    istio-ingress     LoadBalancer                              80:31822/TCP,443:30637/TCP                               7m
    istio-mixer       ClusterIP                   <none>        9091/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP   8m
    istio-pilot       ClusterIP                   <none>        8080/TCP,443/TCP                                         7m
    istio-pilot-ilb   LoadBalancer                              8080:30064/TCP                                           3m
    mixer-ilb         LoadBalancer                              9091:31278/TCP                                           3m
  3. From your Istio installation directory, use the helper script to generate the Istio cluster.env configuration to be deployed in the VM instances, specifying your own cluster name. This file contains the cluster IP address ranges to intercept.

    install/tools/ generateClusterEnv [CLUSTER_NAME]

    This creates a file with a single line like this:

    $ cat cluster.env
  4. Now use the same script to generate the Cloud DNS configuration file to be used on the VM instances. This allows apps on the Compute Engine VM to resolve cluster service names by using dnsmasq; those requests are then intercepted by the sidecar and forwarded.

    install/tools/ generateDnsmasq

    Example generated file:

    $ cat kubedns
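The generated dnsmasq file from step 4 isn't reproduced above. As a purely hypothetical illustration of dnsmasq's server=/domain/address syntax (the addresses below are invented; treat your generated file as authoritative), such a file forwards cluster-internal DNS suffixes to the cluster's DNS ILB:

```
# Hypothetical example only; the generated kubedns file is authoritative.
# Forward cluster-internal suffixes to the cluster DNS ILB (address invented).
server=/svc.cluster.local/10.128.0.9
server=/cluster.local/10.128.0.9
```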
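Rather than re-running the step 2 check by hand, you can script the wait for EXTERNAL-IP values. This is a sketch: the kubectl stub returns canned rows with invented addresses so it runs anywhere; remove the stub to poll your real cluster, and wrap the check in a loop with a sleep if you want it to block until ready.

```shell
# Sketch: list any mesh-expansion ILBs whose EXTERNAL-IP is still <pending>.
# The kubectl stub below returns canned output (addresses invented); delete it
# to query the real cluster.
kubectl() {
  printf '%s\n' \
    'NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE' \
    'istio-ca-ilb      LoadBalancer   10.23.252.2    10.128.0.8    8060:32174/TCP   3m' \
    'istio-pilot-ilb   LoadBalancer   10.23.245.21   10.128.0.6    8080:30064/TCP   3m' \
    'mixer-ilb         LoadBalancer   10.23.249.10   10.128.0.7    9091:31278/TCP   3m'
}

# Column 4 is EXTERNAL-IP; LoadBalancer rows still showing <pending> aren't ready.
pending=$(kubectl -n istio-system get services |
  awk '$2 == "LoadBalancer" && $4 == "<pending>" { print $1 }')
if [ -z "$pending" ]; then
  echo "all ILBs ready"
else
  echo "still pending: $pending"
fi
```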

Setting up a mesh expansion VM

After you've set up the mesh and generated the relevant configuration files, the next step is to configure the Compute Engine VM instances to join the mesh, including copying your generated files to the VMs. For tutorial purposes, you can use the provided script again to copy files and configure the machine. However, when adding your own VMs to a real mesh app, follow the steps manually so that you can integrate the instances into your own workflows and provisioning. For detailed steps, see Istio's Mesh Expansion guide, and see the script that runs on each VM in /install/tools/.

  1. First ensure you have a Compute Engine VM to use as a mesh expansion machine in the same project and network as your Istio installation. If you don't have one already, create one:

    gcloud compute instances create istio-vm
  2. Istio can administer services across multiple GKE namespaces. In this example, you'll put the VM service (even though it isn't on GKE) in the vm namespace because that's where the provided BookInfo routing rules look for it. Using different namespaces like this helps you keep your VM services separate from your regular GKE services. To use a non-default namespace for a mesh expansion machine, you need to specify this before running the setup scripts. In your Istio installation directory on your cluster admin machine, first set the SERVICE_NAMESPACE variable:

    export SERVICE_NAMESPACE=vm
    Then create the namespace:

    kubectl create namespace $SERVICE_NAMESPACE
  3. Still on the cluster admin machine, run the following command with the setup script. This does the following:

    • Copies the generated files and VM setup script to the VM
    • Configures and verifies Cloud DNS settings so that the VM can connect to Istio components
    • Copies Istio auth secrets to the VM
    • Installs Istio Debian files on the VM, including the Istio sidecar proxy
    install/tools/ machineSetup istio-vm
  4. SSH into the Compute Engine VM using the gcloud command-line tool or any of the other options from its VM Instance Details console page, which you'll find linked from your VM instances page:

    gcloud compute ssh istio-vm
  5. On the Compute Engine VM, verify that the configured machine can access services running in the GKE cluster. For example, if you are running the BookInfo example from Installing Istio on GKE in the Kubernetes Engine cluster, you should be able to access the productpage service with curl from the VM, as in the following example:

    $ curl -v -w "\n" http://productpage.default.svc.cluster.local:9080/api/v1/products/0/ratings
    *   Trying
    * Connected to productpage.default.svc.cluster.local ( port 9080 (#0)
    > GET /api/v1/products/0/ratings HTTP/1.1
    > Host: productpage.default.svc.cluster.local:9080
    > User-Agent: curl/7.47.0
    > Accept: */*
    < HTTP/1.1 200 OK
    < content-type: application/json
    < content-length: 54
    < server: envoy
    < date: Sun, 15 Oct 2017 00:04:49 GMT
    < x-envoy-upstream-service-time: 17

    * Connection #0 to host productpage.default.svc.cluster.local left intact
    {"ratings": {"Reviewer2": 4, "Reviewer1": 5}, "id": 0}
  6. Note the use of default in the productpage URL here: the BookInfo example was created in the default namespace in the previous tutorial. If you chose a different namespace, substitute it here.

  7. Again on the VM, check that Istio processes are running:

    $ sudo systemctl status istio-auth-node-agent
    istio-auth-node-agent.service - istio-auth-node-agent: The Istio auth node agent
      Loaded: loaded (/lib/systemd/system/istio-auth-node-agent.service; disabled; vendor preset: enabled)
      Active: active (running) since Fri 2017-10-13 21:32:29 UTC; 9s ago
    Main PID: 6941 (node_agent)
      Tasks: 5
      Memory: 5.9M
      CPU: 92ms
      CGroup: /system.slice/istio-auth-node-agent.service
              └─6941 /usr/local/istio/bin/node_agent --logtostderr
    Oct 13 21:32:29 demo-vm-1 systemd[1]: Started istio-auth-node-agent: The Istio auth node agent.
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.469314    6941 main.go:66] Starting Node Agent
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.469365    6941 nodeagent.go:96] Node Agent starts successfully.
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.483324    6941 nodeagent.go:112] Sending CSR (retrial #0) ...
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.862575    6941 nodeagent.go:128] CSR is approved successfully. Will renew cert in 29m59.137732603s
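If you want to script this verification, `systemctl is-active` gives a one-word answer per service. The sketch below stubs systemctl with canned output so it runs anywhere; remove the stub to check the real services on istio-vm. The service names istio and istio-auth-node-agent are the ones used elsewhere in this tutorial.

```shell
# Sketch: confirm the Istio sidecar services are active on the VM.
# The stub below stands in for real systemctl so this runs anywhere;
# delete it to query the actual services.
systemctl() { echo "active"; }   # stub for: systemctl is-active <service>

for svc in istio istio-auth-node-agent; do
  state=$(systemctl is-active "$svc")
  echo "$svc: $state"
done
```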

Running a service on a mesh expansion machine

The following example shows how to run a service on a mesh expansion machine. In this example, you'll use the VM you configured in the last section to extend the BookInfo example from Installing Istio on GKE with a MySQL ratings database running on a Compute Engine VM.

  1. Ensure that istio-vm is configured as a mesh expansion VM for the cluster where BookInfo is running, as described above.

  2. Install a MySQL server on the Compute Engine VM:

    sudo apt-get update && sudo apt-get install --no-install-recommends -y mariadb-server
  3. For the purpose of this tutorial (don't do this in real life), configure the MySQL server so that user "root" has the password "password":

    sudo mysql -e "grant all privileges on *.* to 'root'@'localhost' identified by 'password'; flush privileges"
  4. Then use the provided mysqldb-init.sql schema to set up the BookInfo ratings database.

    curl| mysql -u root --password=password -h
  5. Next, register the new service with your Istio installation by using istioctl. First, get the primary internal IP address for the VM (you'll see it on its VM instance details page in the console, or use hostname --ip-address). Then on your cluster admin machine, run the following command, substituting the appropriate IP address:

    $ istioctl register -n vm mysqldb [VM_IP] 3306
    I1014 22:54:12.176972   18162 register.go:44] Registering for service 'mysqldb' ip '', ports list [{3306 mysql}]
  6. Still on your cluster admin machine, update the BookInfo deployment with a version of the ratings service that uses the MySQL database, and add routing rules to send traffic to it.

    $ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql-vm.yaml)
    deployment "ratings-v2-mysql-vm" created
    $ kubectl get pods -lapp=ratings
    NAME                                   READY     STATUS    RESTARTS   AGE
    ratings-v1-3016823457-mqbfx            2/2       Running   0          24m
    ratings-v2-mysql-vm-1319199664-9jxkp   2/2       Running   0          19s
    $ istioctl create -f samples/bookinfo/kube/route-rule-ratings-mysql-vm.yaml
    Created config route-rule/default/ratings-test-v2-mysql-vm at revision 4398
    Created config route-rule/default/reviews-test-ratings-v2-vm at revision 4399
  7. Finally, back on the Compute Engine VM, configure istio-vm's Istio proxy sidecar to intercept traffic on the relevant port (3306 in our example, as specified when registering the service). This is configured in /var/lib/istio/envoy/sidecar.env by adding the following three lines to the file.

    $ sudo vi /var/lib/istio/envoy/sidecar.env

    You need to restart the sidecar after changing the configuration.

    sudo systemctl restart istio
  8. After you've done all this, the BookInfo app's ratings service uses the new mesh expansion database. Try changing values in the ratings database on the VM and see them come up in the BookInfo app's product pages.

    $ mysql -u root -h --password=password test -e "select * from ratings"
    +----------+--------+
    | ReviewID | Rating |
    +----------+--------+
    |        1 |      5 |
    |        2 |      4 |
    +----------+--------+
    # Change to 1 star:
    $ mysql -u root --password=password test -e "update ratings set rating=1 where reviewid=1"

    Updated ratings in UI

How it works

When the app on the VM instance makes a request to another service in the mesh, it must resolve the service name to the service IP (the cluster IP, which is the virtual IP assigned to the service). In Istio's mesh expansion VM configuration, the VM app uses dnsmasq to resolve the name; dnsmasq redirects all .cluster.local addresses to the cluster's DNS server in GKE. The dnsmasq setup also adds itself as a nameserver in resolv.conf and, if necessary, configures DHCP so that the entry is restored after each DHCP resolve.
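The name-routing rule described above can be modeled as a toy shell function. This only illustrates the suffix decision; real dnsmasq does the actual forwarding, and the labels below are invented:

```shell
# Toy model of the dnsmasq suffix rule: names under .cluster.local go to the
# cluster's DNS server, everything else to the VM's default resolver.
# Labels are illustrative only; this is not actual dnsmasq behavior.
resolve_via() {
  case "$1" in
    *.cluster.local) echo "cluster-dns" ;;
    *)               echo "default-resolver" ;;
  esac
}

resolve_via productpage.default.svc.cluster.local   # cluster-dns
resolve_via example.com                             # default-resolver
```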

When the app actually makes the request, the Istio VM setup uses iptables rules to redirect the request through the Envoy sidecar proxy. The proxy then connects to the Istio Pilot service to get the list of endpoints, and forwards the request to the appropriate mesh endpoint after applying any routing rules.
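The outbound interception decision can likewise be sketched as a toy function: traffic destined for the cluster's service IP range (the ranges recorded in cluster.env) is redirected through Envoy, while other traffic goes out directly. The real mechanism is iptables rules installed by the VM setup; the addresses below are invented, and the string-prefix test is a crude stand-in for a proper CIDR match:

```shell
# Toy model of outbound interception: destinations in the cluster's service IP
# range go through the Envoy sidecar; everything else goes direct. The real
# mechanism is iptables REDIRECT rules; the range below is invented and the
# prefix test is only a crude stand-in for a CIDR match.
service_cidr_prefix="10.23.24"

route_for_ip() {
  case "$1" in
    "$service_cidr_prefix"*) echo "via-envoy" ;;
    *)                       echo "direct" ;;
  esac
}

route_for_ip 10.23.245.21   # via-envoy
route_for_ip 8.8.8.8        # direct
```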

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

If you don't want to continue exploring the BookInfo app in What's next, do the following:

  1. Delete the various internal load balancers used by the example:

    kubectl -n istio-system delete service --all
    kubectl -n kube-system delete service dns-ilb
  2. Wait until all the load balancers are deleted by watching the output of the following command:

    gcloud compute forwarding-rules list
  3. Delete the container cluster:

    gcloud container clusters delete [CLUSTER_NAME]
  4. Delete the database VM:

    gcloud compute instances delete istio-vm

What's next

The Istio site contains more guides and samples with fully working examples that you can experiment with. These include:

  • Intelligent routing. This example shows how to use Istio's various traffic management capabilities with BookInfo, including the routing rules used in the last section of this tutorial.

  • In-Depth Telemetry. This example shows how to get uniform metrics, logs, and traces across BookInfo's services by using Istio Mixer and the Istio sidecar proxy.
