
Installing Istio

This page explains how to install Istio in your GKE On-Prem cluster.

Overview

Istio is an open source framework for connecting, monitoring, and securing microservices, including services running on GKE On-Prem. It lets you create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special Envoy sidecar proxy to each of your application's Pods. The Envoy proxy intercepts all network communication between microservices, and is configured and managed using Istio's control plane functionality.

This guide shows you how to install and configure Istio on GKE On-Prem and deploy a demo Istio-enabled multi-service application.

Before you begin

Ensure that you've installed Cloud SDK.

Install Helm

To install Istio, we recommend using Helm with one of Istio's configuration profiles.

If you haven't installed Helm already, follow the instructions in the Helm README to install the helm v2.x binary on the machine where you have your cluster credentials. Note that Helm v3 isn't currently supported.
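Since Helm v3 isn't supported, it's worth confirming that the binary on your PATH is a v2 release. A minimal sketch (the version string and the `is_helm_v2` helper are illustrative, not part of Helm; the real command is shown as a comment):

```shell
# Check that the installed helm client is v2.x, since Helm v3 isn't supported.
# `helm version --client --short` prints a string like "Client: v2.14.3+g0e7f3b6".
is_helm_v2() {
  case "$(cat)" in
    *v2.*) echo "helm v2: OK" ;;
    *)     echo "unsupported helm version" ;;
  esac
}

# On your cluster admin machine you would run:
#   helm version --client --short | is_helm_v2
# Simulated here with an example version string:
result=$(echo 'Client: v2.14.3+g0e7f3b6' | is_helm_v2)
echo "$result"
```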

Permissions and credentials

  1. Ensure that you have kubectl credentials for the GKE On-Prem user cluster where you want to install Istio. Note that Istio can be installed only on a GKE On-Prem user cluster, not an admin cluster.

  2. Grant cluster admin permissions to the current user. You need these permissions to create the necessary role-based access control (RBAC) rules for Istio:

    kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user="$(gcloud config get-value core/account)"
    

    Although you can run the demo app without granting cluster admin permissions, the permissions are required if you want to access telemetry data and other Istio features.
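If you're unsure whether the binding took effect, `kubectl auth can-i` can confirm it. A minimal sketch (the `check_admin` helper is a hypothetical name; the cluster command is shown as a comment):

```shell
# `kubectl auth can-i '*' '*' --all-namespaces` prints "yes" when the current
# user can perform any action, i.e. effectively has cluster-admin.
check_admin() {
  if [ "$(cat)" = "yes" ]; then
    echo "cluster-admin: OK"
  else
    echo "missing cluster-admin"
  fi
}

# Against your user cluster you would run:
#   kubectl auth can-i '*' '*' --all-namespaces | check_admin
# Simulated here with the expected answer:
result=$(echo "yes" | check_admin)
echo "$result"
```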

Download Istio

For GKE On-Prem we recommend using Istio version 1.1.13.

Follow these steps on the same machine where you have your cluster credentials: this is your cluster admin machine.

  1. Download and expand the Istio 1.1.13 package into your current directory using the following command:

    curl -L https://github.com/istio/istio/releases/download/1.1.13/istio-1.1.13-linux.tar.gz | tar xz
    

    The installation directory contains:

    • Installation .yaml files for Kubernetes in install/
    • Sample applications in samples/
    • The istioctl client binary in the bin/ directory. istioctl is used when manually injecting Envoy as a sidecar proxy and for creating routing rules and policies.
    • The istio.VERSION configuration file
  2. Change to your installation root directory and add istioctl to your PATH:

    cd istio-1.1.13
    export PATH=$PATH:${PWD}/bin
    

Set up the namespace and certificate

Still on your cluster admin machine, create the istio-system namespace for the control plane components:

kubectl create namespace istio-system

Then copy the root certificate that Citadel needs into istio-system. This step is required for GKE On-Prem clusters:

kubectl get secret istio-ca-secret --namespace=kube-system --export -o yaml | kubectl apply --validate=false --namespace=istio-system -f -

Install Istio

Now you're ready to install Istio. Istio is installed in the istio-system namespace you just created, and can manage microservices from all other namespaces. The installation includes Istio core components, tools, and samples.

  1. Ensure that you're in the Istio installation's root directory.

  2. Install the Istio Custom Resource Definitions (CRDs):

    helm template install/kubernetes/helm/istio-init \
      --name istio-init --namespace istio-system | kubectl apply -f -
    
  3. Wait a few seconds for all the CRDs to be committed to the Kubernetes API server.

  4. Install Istio with the default profile. Although you can choose another profile, we recommend the default profile for production deployments.

    helm template install/kubernetes/helm/istio \
      --name istio --namespace istio-system | kubectl apply -f -
    

    This deploys the core Istio components:

    • Istio-Pilot, which is responsible for service discovery and for configuring the Envoy sidecar proxies in an Istio service mesh.
    • The Mixer components Istio-Policy and Istio-Telemetry, which enforce usage policies and gather telemetry data across the service mesh.
    • Istio-Ingressgateway, which provides an ingress point for traffic from outside the cluster.
    • Istio-Citadel, which automates key and certificate management for Istio.
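Before moving on from step 3 to step 4, it can help to confirm that the CRDs were registered by counting them. A minimal sketch (the `count_istio_crds` helper and the sample CRD names are illustrative; with the default profile, istio-init registers 53 CRDs, though the exact count can vary by profile and Istio version):

```shell
# Count CRDs whose API group contains "istio.io", reading `kubectl get crds`
# output on stdin.
count_istio_crds() { grep -c 'istio\.io'; }

# Against your cluster you would run:
#   kubectl get crds | count_istio_crds
# Simulated here with two example CRD names and one unrelated line:
n=$(printf 'gateways.networking.istio.io\npolicies.authentication.istio.io\nfoo.example.com\n' | count_istio_crds)
echo "$n"
```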

Verify Istio installation

  1. Ensure that the following Kubernetes Services are deployed: istio-citadel, istio-pilot, istio-ingressgateway, istio-policy, and istio-telemetry (other deployed Services also appear in the output):

    kubectl get service -n istio-system
    
    Output:
    NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                               AGE
    ...
    istio-citadel              ClusterIP      10.19.253.95    <none>        8060/TCP,9093/TCP                                                     37s
    istio-galley               ClusterIP      10.19.245.2     <none>        443/TCP,15014/TCP,9901/TCP                                            37s
    istio-ingressgateway       LoadBalancer   10.19.247.233   <pending>     80:31380/TCP,443:31390/TCP,31400:31400/TCP                            40s
    istio-pilot                ClusterIP      10.19.243.14    <none>        15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP   38s
    istio-policy               ClusterIP      10.19.254.117   <none>        9091/TCP,15004/TCP,9093/TCP                                           39s
    istio-sidecar-injector     ClusterIP      10.19.248.228   <none>        443/TCP                                                               37s
    istio-statsd-prom-bridge   ClusterIP      10.19.252.35    <none>        9102/TCP,9125/UDP                                                     39s
    istio-telemetry            ClusterIP      10.19.250.11    <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                 39s
    ...
  2. Ensure the corresponding Kubernetes Pods are deployed and all containers are up and running: istio-pilot-*, istio-policy-*, istio-telemetry-*, istio-ingressgateway-*, and istio-citadel-*.

    kubectl get pods -n istio-system
    
    Output:
    NAME                                        READY     STATUS      RESTARTS   AGE
    istio-citadel-54f4678f86-4549b              1/1       Running     0          12m
    istio-cleanup-secrets-5pl77                 0/1       Completed   0          12m
    istio-galley-7bd8b5f88f-nhwlc               1/1       Running     0          12m
    istio-ingressgateway-665699c874-l62rg       1/1       Running     0          12m
    istio-pilot-68cbbcd65d-l5298                2/2       Running     0          12m
    istio-policy-7c5b5bb744-k6vm9               2/2       Running     0          12m
    istio-security-post-install-g9l9p           0/1       Completed   3          12m
    istio-sidecar-injector-85ccf84984-2hpfm     1/1       Running     0          12m
    istio-telemetry-5b6c57fffc-9j4dc            2/2       Running     0          12m
    istio-tracing-77f9f94b98-jv8vh              1/1       Running     0          12m
    prometheus-7456f56c96-7hrk5                 1/1       Running     0          12m
    ...
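The Pod check above can be scripted so that any Pod that is neither Running nor Completed is flagged. A minimal sketch (the `not_ready` helper name and the sample output are illustrative; the cluster command is shown as a comment):

```shell
# Print the name of any Pod that is neither Running nor Completed.
# Reads `kubectl get pods` output (including its header line) on stdin.
not_ready() { awk 'NR>1 && $3 != "Running" && $3 != "Completed" {print $1}'; }

# Against your cluster you would run:
#   kubectl get pods -n istio-system | not_ready
# Simulated here with example output containing one unhealthy Pod:
bad=$(printf 'NAME READY STATUS RESTARTS AGE\npilot-1 2/2 Running 0 12m\ntelemetry-1 1/2 CrashLoopBackOff 4 12m\n' | not_ready)
echo "$bad"
```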

Configure an external IP address

The default Istio installation assumes that an external IP address is automatically allocated for LoadBalancer Services. This isn't the case in GKE On-Prem clusters, so you need to allocate an IP address manually for the Istio ingress gateway.

To configure an external IP address, follow one of the sections below, depending on your cluster's load balancing mode:

Integrated load balancing mode

  1. Open the istio-ingressgateway Service's configuration:

    kubectl edit svc -n istio-system istio-ingressgateway
    

    The configuration for the istio-ingressgateway Service opens in your shell's default text editor.

  2. In the file, add the following line under the specification (spec) block:

    loadBalancerIP: <your static external IP address>
    

    For example:

    spec:
      loadBalancerIP: 203.0.113.1
    
  3. Save the file.
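If you prefer a non-interactive alternative to kubectl edit, the same field can be set with kubectl patch. A minimal sketch (203.0.113.1 is an example address; the cluster command is shown as a comment):

```shell
# Patch loadBalancerIP into the istio-ingressgateway Service spec directly.
IP=203.0.113.1   # example static external IP; use your own reserved address
PATCH="{\"spec\":{\"loadBalancerIP\":\"${IP}\"}}"

# Against your cluster you would run:
#   kubectl patch svc istio-ingressgateway -n istio-system -p "$PATCH"
echo "$PATCH"
```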

Manual load balancing mode

To expose a service of type NodePort with a VIP on your selected load balancer, you need to find out the nodePort values first:

  1. View the istio-ingressgateway Service's configuration in your shell:

    kubectl get svc -n istio-system istio-ingressgateway -o yaml
    

    Each of the ports for Istio's gateways is displayed. The command output might look like this:

     ...
     ports:
     - name: status-port
       nodePort: 30391
       port: 15020
       protocol: TCP
       targetPort: 15020
     - name: http2
       nodePort: 31380
       port: 80
       protocol: TCP
       targetPort: 80
     - name: https
       nodePort: 31390
       port: 443
       protocol: TCP
       targetPort: 443
     - name: tcp
       nodePort: 31400
       port: 31400
       protocol: TCP
       targetPort: 31400
     - name: https-kiali
       nodePort: 31073
       port: 15029
       protocol: TCP
       targetPort: 15029
     - name: https-prometheus
       nodePort: 30253
       port: 15030
       protocol: TCP
       targetPort: 15030
     - name: https-grafana
       nodePort: 30050
       port: 15031
       protocol: TCP
       targetPort: 15031
     - name: https-tracing
       nodePort: 31204
       port: 15032
       protocol: TCP
       targetPort: 15032
     - name: tls
       nodePort: 30158
       port: 15443
       protocol: TCP
       targetPort: 15443
     ...
  2. Expose these ports through your load balancer.

    For example, the service port named http2 has port 80 and nodePort 31380. Suppose the node addresses for your user cluster are 192.168.0.10, 192.168.0.11, and 192.168.0.12, and your load balancer's VIP is 203.0.113.1.

    Configure your load balancer so that traffic sent to 203.0.113.1:80 is forwarded to 192.168.0.10:31380, 192.168.0.11:31380, or 192.168.0.12:31380. You can choose which service ports to expose on this VIP.
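The VIP-to-node mapping described above can be generated mechanically for any service port. A minimal sketch using the example addresses from this section (the `gen_rules` helper is illustrative, and the output format is not specific to any particular load balancer):

```shell
# Generate the VIP -> node:nodePort forwarding rules for one service port.
VIP=203.0.113.1                                 # example load balancer VIP
NODES="192.168.0.10 192.168.0.11 192.168.0.12"  # example user cluster nodes
PORT=80                                         # service port (http2)
NODE_PORT=31380                                 # its nodePort from the previous step

gen_rules() {
  for node in $NODES; do
    echo "${VIP}:${PORT} -> ${node}:${NODE_PORT}"
  done
}

rules=$(gen_rules)
echo "$rules"
```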

Deploy the sample application

Once Istio is installed and all its components are running, you can try deploying one of the sample applications provided with the installation. In this tutorial, we'll install BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings - all managed using Istio. You can find the source code and all the other files used in this example in your Istio installation's samples/bookinfo directory.

Following these steps deploys the BookInfo application's services in an Istio-enabled environment, with Envoy sidecar proxies injected alongside each service to provide Istio functionality.

  1. Ensure you're still in the root of the Istio installation directory on your cluster admin machine.

  2. Deploy the application using kubectl apply and istioctl kube-inject. The kube-inject command updates the BookInfo deployment so that a sidecar is deployed in each application Pod along with the service.

    kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
    
  3. Confirm that the application has been deployed correctly by running the following commands:

    kubectl get services
    Output:
    NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
    details                    10.0.0.31    <none>        9080/TCP             6m
    kubernetes                 10.0.0.1     <none>        443/TCP              7d
    productpage                10.0.0.120   <none>        9080/TCP             6m
    ratings                    10.0.0.15    <none>        9080/TCP             6m
    reviews                    10.0.0.170   <none>        9080/TCP             6m

    and

    kubectl get pods
    Output:
    NAME                                        READY     STATUS    RESTARTS   AGE
    details-v1-1520924117-48z17                 2/2       Running   0          6m
    productpage-v1-560495357-jk1lz              2/2       Running   0          6m
    ratings-v1-734492171-rnr5l                  2/2       Running   0          6m
    reviews-v1-874083890-f0qf0                  2/2       Running   0          6m
    reviews-v2-1343845940-b34q5                 2/2       Running   0          6m
    reviews-v3-1813607990-8ch52                 2/2       Running   0          6m
  4. Finally, define the ingress gateway routing for the application:

    kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    

Validate the application deployment

Now that it's deployed, let's see the BookInfo application in action. You already know the external IP address for the ingress gateway as you configured it earlier. So, for example, if you used 203.0.113.1 for your external IP:

export GATEWAY_URL=203.0.113.1

Trying the application

  1. Check that the BookInfo app is running with curl:

    curl -I http://${GATEWAY_URL}/productpage
    

    If the response shows 200, it means the application is working properly with Istio.

  2. Now point your browser to http://$GATEWAY_URL/productpage to view the BookInfo web page. If you refresh the page several times, you should see different versions of the reviews shown on the product page, presented in a round-robin style (red stars, black stars, no stars), since we haven't yet used Istio to control version routing.
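The curl check in step 1 can be scripted by extracting the status code from the response headers. A minimal sketch (the `http_status` helper and the sample status line are illustrative; the real request is shown as a comment):

```shell
# Extract the HTTP status code from `curl -I` response headers.
http_status() { awk 'NR==1 {print $2}'; }

# Against your deployment you would run:
#   curl -sI "http://${GATEWAY_URL}/productpage" | http_status
# Simulated here with an example status line:
status=$(printf 'HTTP/1.1 200 OK\r\nserver: istio-envoy\r\n' | http_status)
echo "$status"
```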

Deploying your own application

If you want to try deploying one of your own applications, follow the same procedure with your own deployment YAML: Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2 for all its HTTP traffic, because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 requests for routing.

You can either use kube-inject to add the sidecars when deploying the application, as in our example, or enable Istio's automatic sidecar injection for the namespace where your application is running.
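For the automatic-injection option, enabling injection amounts to a single namespace label. A minimal sketch (`default` is an example namespace; the command is echoed rather than run here):

```shell
# Automatic sidecar injection is enabled per namespace with a label.
NS=default   # example: the namespace where your application runs
CMD="kubectl label namespace ${NS} istio-injection=enabled"

# On your cluster admin machine you would run:
#   $CMD
echo "$CMD"
```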

Uninstalling

  1. Use the following command to uninstall the Istio components:

    helm template install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl delete -f -
    
  2. Then delete the istio-system namespace:

    kubectl delete namespace istio-system
    

What's next?

Learn more about Istio on the Istio site and in the Google Cloud Platform Istio documentation.