Using Istio with Google Compute Engine

Istio is an open source framework for connecting, monitoring, and securing microservices. It lets you create a network or "mesh" of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes to service code. You add Istio support to services by deploying a special Envoy sidecar proxy to each of your application's pods. The Envoy proxy intercepts all network communication between microservices, and is configured and managed using Istio's control plane functionality.

Currently, Istio's control plane can only be installed on Kubernetes implementations such as Google Kubernetes Engine, but its mesh expansion feature lets you add services running on non-Kubernetes platforms to the service mesh, including services running on Compute Engine VMs. This lets you manage the Kubernetes and VM services as a single mesh. This tutorial shows you how to configure Istio for mesh expansion, and how to configure Compute Engine VMs so that they can be added to an Istio mesh. It assumes that you already have an existing Istio installation on Kubernetes Engine.

You can find out much more about Istio and how it works on its own website at istio.io. If you're interested in finding out how the mesh expansion configuration used in this tutorial works, see How it works, though you don't need to read this to complete the tutorial.

Objectives

  • Update an existing Istio on Kubernetes Engine installation to use mesh expansion
  • Configure Compute Engine VMs to join an Istio service mesh
  • Run an Istio mesh service on a Compute Engine VM

Costs

This tutorial uses billable components of Cloud Platform, including Compute Engine.

New Cloud Platform users might be eligible for a free trial.

Before you begin

  • You must have an existing Istio installation on Kubernetes Engine: you can find out how to do this and the relevant setup and requirements in Installing Istio on Google Kubernetes Engine.
  • You must have the BookInfo sample application installed and running, again as described in Installing Istio on Google Kubernetes Engine, and have istioctl in your PATH.
  • You must have enough IP address and backend service quota to run four internal load balancers (one load balancer and one IP address each) and BookInfo's ingress service (one load balancer and one IP address) from the previous tutorial.
  • Make sure your kubectl context is set to your Istio cluster.

    kubectl config current-context              # Display the current context
    kubectl config use-context [CLUSTER_NAME]   # Set the default context to [CLUSTER_NAME]
    

Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set default configuration values by running the following commands:
    gcloud config set project [PROJECT_ID]
    gcloud config set compute/zone us-central1-b
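
You can confirm the defaults at any time by listing your active configuration:

    gcloud config list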

Setting up the mesh for expansion

The first step in adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself and generate the configuration files that the Compute Engine VMs will use. Your Istio download includes a script to help with this on Kubernetes Engine: you'll find it in install/tools/setupMeshEx.sh. Follow these steps on the machine where you have your Istio installation directory and your cluster credentials: this is your cluster admin machine.

  1. Use the provided mesh-expansion deployment to set up Internal Load Balancers for Pilot, Mixer, Istio Certificate Authority, and the Kubernetes DNS server. This ensures that these services can be accessed by the VMs.

    kubectl apply -f install/kubernetes/mesh-expansion.yaml
    
  2. Confirm that the services are up and running and that all the ILBs have EXTERNAL-IP values (you may have to wait a minute) before proceeding to the next step:

    $ kubectl -n istio-system get services
    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                  AGE
    istio-ca-ilb      LoadBalancer   10.47.245.69    10.150.0.9       8060:32174/TCP                                           3m
    istio-egress      ClusterIP      10.47.252.251   <none>           80/TCP                                                   7m
    istio-ingress     LoadBalancer   10.47.254.41    35.197.249.113   80:31822/TCP,443:30637/TCP                               7m
    istio-mixer       ClusterIP      10.47.244.179   <none>           9091/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP   8m
    istio-pilot       ClusterIP      10.47.241.19    <none>           8080/TCP,443/TCP                                         7m
    istio-pilot-ilb   LoadBalancer   10.47.243.136   10.150.0.6       8080:30064/TCP                                           3m
    mixer-ilb         LoadBalancer   10.47.242.213   10.150.0.8       9091:31278/TCP                                           3m
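
    If you'd rather script the wait than re-run the command by hand, here's a minimal sketch that polls until no load balancer is still pending:

    # Loop until no service in istio-system still shows <pending>
    # in its EXTERNAL-IP column.
    while kubectl -n istio-system get services | grep -q '<pending>'; do
      echo "Waiting for internal load balancer IPs..."
      sleep 10
    done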

  3. From your Istio installation directory, use the helper script to generate the Istio cluster.env configuration to be deployed in the VMs, specifying your own cluster name. This file contains the cluster IP address ranges to intercept.

    install/tools/setupMeshEx.sh generateClusterEnv [CLUSTER_NAME]
    

    This creates a file with a single line like this:

    $ cat cluster.env
    ISTIO_SERVICE_CIDR=10.63.240.0/20
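
    If you want to sanity-check this value, the same range appears in the cluster description. The servicesIpv4Cidr field name here is an assumption based on current gcloud output, so verify it if your SDK version differs:

    gcloud container clusters describe [CLUSTER_NAME] --format "value(servicesIpv4Cidr)"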

  4. Now use the same script to generate the DNS configuration file to be used on the VMs. This allows apps on the Compute Engine VM to resolve cluster service names using dnsmasq, with the resulting traffic intercepted by the sidecar and forwarded.

    install/tools/setupMeshEx.sh generateDnsmasq
    

    Example generated file:

    $ cat kubedns
    server=/svc.cluster.local/10.150.0.7
    address=/istio-mixer/10.150.0.8
    address=/istio-pilot/10.150.0.6
    address=/istio-ca/10.150.0.9
    address=/istio-mixer.istio-system/10.150.0.8
    address=/istio-pilot.istio-system/10.150.0.6
    address=/istio-ca.istio-system/10.150.0.9

Setting up a mesh expansion VM

Once you've set up the mesh and generated the relevant configuration files, the next step is to configure the Compute Engine VMs to join the mesh, which includes copying your generated files to the VMs. For tutorial purposes, you can use the provided setupMeshEx.sh script again to copy the files and configure the machine. However, when adding your own VMs to a real mesh application, you should follow the steps manually so that you can integrate them into your own workflows and provisioning. You can find the detailed steps in Istio's Mesh Expansion guide, and see the script that setupMeshEx.sh runs on each VM in install/tools/setupIstioVM.sh.

  1. First ensure you have a Compute Engine VM to use as a mesh expansion machine in the same project and network as your Istio installation. If you don't have one already, create one:

    gcloud compute instances create istio-vm
    
  2. Istio can administer services across multiple Kubernetes namespaces: in this example, you'll put the VM service (even though it's not on Kubernetes!) in the vm namespace, as that's where the provided BookInfo routing rules will look for it. Using different namespaces like this helps you keep your VM services separate from your regular Kubernetes services. To use a non-default namespace for a mesh expansion machine, you need to specify this before running the setup scripts. In your Istio installation directory on your cluster admin machine, first set the SERVICE_NAMESPACE variable:

    export SERVICE_NAMESPACE=vm
    

    Then create the namespace:

    kubectl create namespace $SERVICE_NAMESPACE
    
  3. Still on the cluster admin machine, run the setupMeshEx.sh machineSetup command shown after this list. The script does the following:

    • Copies the generated files and VM setup script to the VM
    • Configures and verifies DNS settings so that the VM can connect to Istio components
    • Copies Istio auth secrets to the VM
    • Installs Istio Debian files on the VM, including the Istio sidecar proxy

    install/tools/setupMeshEx.sh machineSetup istio-vm
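
    For reference, the first of these steps looks roughly like the following when done by hand. This is a sketch only; Istio's Mesh Expansion guide has the authoritative manual procedure:

    # Copy the generated configuration files and the per-VM setup script
    # to the mesh expansion VM (machineSetup also copies the Istio auth
    # secrets and installs the Istio Debian packages there).
    gcloud compute scp install/tools/setupIstioVM.sh cluster.env kubedns istio-vm:~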

  4. SSH into the Compute Engine VM using gcloud, or use any of the other connection options on the VM's instance details page in the console, which is linked from your VM instances page:

    gcloud compute ssh istio-vm
    
  5. On the Compute Engine VM, verify that the configured machine can access services running in the Kubernetes Engine cluster. For example, if you are running the BookInfo example from Installing Istio on Google Kubernetes Engine in the Kubernetes Engine cluster, you should be able to access the productpage service with curl from the VM, as in the following example:

    $ curl -v -w "\n" http://productpage.default.svc.cluster.local:9080/api/v1/products/0/ratings
    *   Trying 10.63.251.156...
    * Connected to productpage.default.svc.cluster.local (10.63.251.156) port 9080 (#0)
    > GET /api/v1/products/0/ratings HTTP/1.1
    > Host: productpage.default.svc.cluster.local:9080
    > User-Agent: curl/7.47.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: application/json
    < content-length: 54
    < server: envoy
    < date: Sun, 15 Oct 2017 00:04:49 GMT
    < x-envoy-upstream-service-time: 17
    <
    * Connection #0 to host productpage.default.svc.cluster.local left intact
    {"ratings": {"Reviewer2": 4, "Reviewer1": 5}, "id": 0}

    Note the use of default in the product page URL here: the BookInfo example was created in the default namespace in the previous tutorial. If you used a different namespace, substitute it here.

  6. Again on the VM, check that Istio processes are running:

    $ sudo systemctl status istio-auth-node-agent
    istio-auth-node-agent.service - istio-auth-node-agent: The Istio auth node agent
      Loaded: loaded (/lib/systemd/system/istio-auth-node-agent.service; disabled; vendor preset: enabled)
      Active: active (running) since Fri 2017-10-13 21:32:29 UTC; 9s ago
      Docs: http://istio.io/
    Main PID: 6941 (node_agent)
      Tasks: 5
      Memory: 5.9M
      CPU: 92ms
      CGroup: /system.slice/istio-auth-node-agent.service
              └─6941 /usr/local/istio/bin/node_agent --logtostderr

    Oct 13 21:32:29 demo-vm-1 systemd[1]: Started istio-auth-node-agent: The Istio auth node agent.
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.469314 6941 main.go:66] Starting Node Agent
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.469365 6941 nodeagent.go:96] Node Agent starts successfully.
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.483324 6941 nodeagent.go:112] Sending CSR (retrial #0) ...
    Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.862575 6941 nodeagent.go:128] CSR is approved successfully. Will renew cert in 29m59.137732603s
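
    The sidecar proxy itself runs as a separate istio service, which you can check the same way. This is the same unit you'll restart later after changing its configuration:

    sudo systemctl status istio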

Running a service on a mesh expansion machine

Now let's look at what you need to do to run a service on a mesh expansion machine. In this example, you'll extend the BookInfo example from Installing Istio on Google Kubernetes Engine with a MySQL ratings database running on a Compute Engine VM, using the VM you configured in the last section.

  1. Ensure that istio-vm has been configured as a mesh expansion VM for the cluster where BookInfo is running, as described above.

  2. Install a MySQL server on the Compute Engine VM:

    sudo apt-get update && sudo apt-get install --no-install-recommends -y mariadb-server
    
  3. For the purpose of this tutorial (don't do this in real life!), configure the MySQL server so that user "root" has the password "password":

    sudo mysql -e "grant all privileges on *.* to 'root'@'localhost' identified by 'password'; flush privileges"
    
  4. Then use the provided mysqldb-init.sql schema to set up the BookInfo ratings database:

    curl https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root --password=password -h 127.0.0.1
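
    To confirm that the schema loaded, you can list the tables it creates in the test database (the same database queried in the final step of this tutorial):

    mysql -u root --password=password test -e "show tables"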
    
  5. Next, register the new service with your Istio installation using istioctl. First, get the primary internal IP address of the VM (you'll find it on the VM's instance details page in the console, or run hostname --ip-address on the VM). Then, on your cluster admin machine, run the following, substituting the appropriate IP address:

    $ istioctl register -n vm mysqldb 10.150.0.5 3306
    I1014 22:54:12.176972   18162 register.go:44] Registering for service 'mysqldb' ip '10.150.0.5', ports list [{3306 mysql}]
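
    Registration typically materializes the VM as a Kubernetes service with matching endpoints, so you can inspect the result with kubectl; treat the exact resource types as an assumption if your Istio release behaves differently:

    kubectl -n vm get service,endpoints mysqldb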

  6. Still on your cluster admin machine, update the BookInfo deployment with a version of the ratings service that uses the MySQL database, and apply the routing rules that send traffic to it.

    $ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql-vm.yaml)
    deployment "ratings-v2-mysql-vm" created

    $ kubectl get pods -lapp=ratings
    NAME                                   READY     STATUS    RESTARTS   AGE
    ratings-v1-3016823457-mqbfx            2/2       Running   0          24m
    ratings-v2-mysql-vm-1319199664-9jxkp   2/2       Running   0          19s

    $ istioctl create -f samples/bookinfo/kube/route-rule-ratings-mysql-vm.yaml
    Created config route-rule/default/ratings-test-v2-mysql-vm at revision 4398
    Created config route-rule/default/reviews-test-ratings-v2-vm at revision 4399
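
    If you want to confirm that the rules took effect, you can list them. This assumes your istioctl release supports the get subcommand for route rules:

    istioctl get routerules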

  7. Finally, back on the Compute Engine VM, configure istio-vm's Istio sidecar proxy to intercept traffic on the relevant port (3306 in this example, as specified when you registered the service). Do this by adding the following three lines to /var/lib/istio/envoy/sidecar.env.

    $ sudo vi /var/lib/istio/envoy/sidecar.env
    ...
    ISTIO_INBOUND_PORTS=3306
    ISTIO_SERVICE=mysqldb
    ISTIO_NAMESPACE=vm
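
    If you're provisioning the VM from a script rather than editing interactively, appending the same three lines with tee works just as well:

    # Append the sidecar's service identity and intercepted ports
    # non-interactively (equivalent to the manual edit above).
    printf 'ISTIO_INBOUND_PORTS=3306\nISTIO_SERVICE=mysqldb\nISTIO_NAMESPACE=vm\n' \
      | sudo tee -a /var/lib/istio/envoy/sidecar.env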
    

    You need to restart the sidecar after changing the configuration:

    sudo systemctl restart istio
    
  8. After you've done all this, the BookInfo application's ratings service should use the new mesh expansion database. Try changing values in the ratings database on the VM and watch them appear on the BookInfo app's product pages:

    $ mysql -u root -h 127.0.0.1 --password=password test -e "select * from ratings"
    +----------+--------+
    | ReviewID | Rating |
    +----------+--------+
    |        1 |      5 |
    |        2 |      4 |
    +----------+--------+
    # Change to 1 star:
    $ mysql -u root --password=password test -e "update ratings set rating=1 where reviewid=1"
    

    The updated one-star rating then appears in the BookInfo app's UI.

How it works

When the application on the VM wants to make a request to another service in the mesh, it must resolve the service name using DNS, getting back the service IP (also called the cluster IP: the virtual IP assigned to the service). In Istio's mesh expansion VM configuration, the VM application uses dnsmasq to resolve the name; dnsmasq redirects all .cluster.local addresses to Kubernetes. The dnsmasq setup also adds 127.0.0.1 to resolv.conf and, if necessary, configures the DHCP client to re-add it after each DHCP lease renewal.

When the application actually makes the request, the Istio VM setup uses iptables to redirect the request through the Envoy proxy. The proxy then connects to the Istio Pilot service to get the list of endpoints, applies the relevant routing rules, and forwards the request to the appropriate mesh endpoint.
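
As a simplified illustration only (the real rules installed by the Istio setup are more extensive, and the Envoy capture port shown here is Istio's conventional 15001 rather than a value taken from this tutorial), the redirection amounts to something like:

    # Send traffic bound for the cluster's service CIDR (the
    # ISTIO_SERVICE_CIDR from cluster.env) through the local Envoy proxy.
    sudo iptables -t nat -A OUTPUT -p tcp -d 10.63.240.0/20 -j REDIRECT --to-ports 15001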

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

If you don't want to continue exploring the BookInfo app in What's next, do the following:

  1. Delete the various internal load balancers used by the example:

    kubectl -n istio-system delete service --all
    kubectl -n kube-system delete service dns-ilb
    
  2. Wait until all the load balancers are deleted by watching the output of the following command:

    gcloud compute forwarding-rules list
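
    For example, with the watch utility:

    watch gcloud compute forwarding-rules list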
    
  3. Delete the container cluster:

    gcloud container clusters delete [CLUSTER_NAME]
    
  4. Delete the database VM:

    gcloud compute instances delete istio-vm
    

What's next

The Istio site contains more guides and samples with fully working example uses for Istio that you can experiment with. These include:

  • Intelligent Routing: This example shows how to use Istio's various traffic management capabilities with BookInfo, including the routing rules used in the last section of this tutorial.

  • In-Depth Telemetry: This example demonstrates how to get uniform metrics, logs, and traces across BookInfo's services using Istio Mixer and the Istio sidecar proxy.
