Using Istio to load-balance internal gRPC services

This tutorial shows you how to set up Internal TCP/UDP Load Balancing using Istio for gRPC services that are running on Google Kubernetes Engine (GKE). This setup lets other resources in your VPC network communicate with gRPC services by using a private, internal (RFC 1918) IP address, while Istio takes care of routing and load-balancing requests across the Kubernetes Pods that are running the gRPC services.

This tutorial assumes a basic knowledge of gRPC and GKE or Kubernetes.

Introduction

Istio provides ingress gateways for managing traffic that's entering the service mesh. Load balancers direct traffic from clients running outside the service mesh to the Istio ingress gateway. To allow traffic from clients in the internal VPC network, use Google Cloud Internal TCP/UDP Load Balancing. Internal TCP/UDP Load Balancing performs layer 4 (transport layer) load balancing across the nodes in the GKE cluster. The Istio ingress gateway receives the traffic and performs layer 7 (application layer) load balancing, distributing traffic to services in the Istio service mesh by using rules defined in virtual services and destination rules.

The sample gRPC service used in this tutorial returns a response header that contains the name of the Kubernetes Pod that handled the request. Using this information, you can see that load balancing by the Istio ingress gateway distributes requests made by a client over a single connection to multiple Kubernetes Pods in the GKE cluster.
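
To make these layer 7 rules concrete, the following is a minimal sketch of an Istio VirtualService that routes gRPC requests arriving at an ingress gateway to a backend Kubernetes Service. The object, gateway, and service names are placeholders for illustration; the tutorial applies the sample repository's own manifests later.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-grpc-service        # hypothetical name, not the tutorial's manifest
spec:
  hosts:
  - "*"                             # match any requested host name
  gateways:
  - example-gateway                 # bind the routing rules to an ingress Gateway
  http:                             # gRPC traffic is routed with HTTP/2 route rules
  - route:
    - destination:
        host: example-grpc-service  # Kubernetes Service backing the gRPC server
        port:
          number: 8080

A DestinationRule for the same host then controls how the ingress gateway's Envoy proxy load-balances requests across the Service's endpoints; the one used by the sample application is shown in the Examining the source code section.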

Objectives

  • Create a GKE cluster.
  • Install Istio and configure the ingress gateway to use Internal TCP/UDP Load Balancing.
  • Deploy a sample gRPC service.
  • Verify internal connectivity.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Compute Engine
  • Cloud Build
  • Container Registry

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Clean up.

Before you begin

  1. Sign in to your Google Account, or if you don't have one, sign up for a new account.
  2. In the Google Cloud console, go to the project selector page.

  3. Select or create a Google Cloud project.

  4. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.

Initializing the environment

  1. In the Google Cloud console, from the Select a project drop-down, select the project you want to use.

  2. Open Cloud Shell:

    You use Cloud Shell to run all the commands in this tutorial.

  3. Enable the Cloud Build and Google Kubernetes Engine APIs:

    gcloud services enable \
        cloudbuild.googleapis.com \
        container.googleapis.com
    
  4. Clone the Git repository containing the sample gRPC service and switch to the working directory:

    git clone https://github.com/GoogleCloudPlatform/istio-samples.git
    cd istio-samples/sample-apps/grpc-greeter-go
    

Create a GKE cluster with Istio

  1. In Cloud Shell, create a GKE cluster:

    gcloud container clusters create grpc-istio-ilb-tutorial \
        --enable-ip-alias \
        --enable-stackdriver-kubernetes \
        --num-nodes 4 \
        --release-channel regular \
        --scopes cloud-platform \
        --zone us-central1-b
    

    This tutorial uses the us-central1-b zone. You can change the zone to suit your needs.

  2. Grant yourself cluster administrator rights:

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole cluster-admin \
        --user $(gcloud config get-value core/account)
    

    You must have the permissions defined in the cluster-admin Kubernetes cluster role in order to install Istio.

  3. Download and execute a script that installs the istioctl command-line tool:

    export ISTIO_VERSION=1.9.2
    curl -sL https://istio.io/downloadIstioctl | sh -
    

    The script installs the istioctl tool to the directory $HOME/.istioctl/bin.

  4. Create an Istio operator manifest:

    cat << EOF > istio-operator.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      annotations:
        config.kubernetes.io/local-config: "true"
    spec:
      profile: empty
      hub: gcr.io/istio-release
      components:
        base:
          enabled: true
        pilot:
          enabled: true
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
          k8s:
            serviceAnnotations:
              networking.gke.io/load-balancer-type: Internal
              networking.gke.io/internal-load-balancer-allow-global-access: "true"
    EOF
    

    The networking.gke.io/load-balancer-type: Internal annotation on the ingress gateway Service tells GKE to provision resources for Internal TCP/UDP Load Balancing in front of the Istio ingress gateway Pods. If you remove this annotation, GKE instead provisions resources for external TCP/UDP network load balancing.

    The networking.gke.io/internal-load-balancer-allow-global-access: "true" annotation allows clients in any region of your VPC network to access the internal TCP/UDP load balancer. Remove this annotation if you want to allow access only from clients in the same region as the load balancer.

  5. Install Istio in the GKE cluster:

    $HOME/.istioctl/bin/istioctl install -y --filename istio-operator.yaml
    
  6. Check the status of the IP address assignment for the istio-ingressgateway Kubernetes Service:

    kubectl get services istio-ingressgateway -n istio-system --watch
    

    Wait until the EXTERNAL-IP value changes from <pending> to an IP address. This value is the internal IP address of the internal TCP/UDP load balancer. Press Control+C to stop waiting.

Create a TLS certificate for the Istio ingress gateway

  1. In Cloud Shell, create a TLS certificate and private key to allow TLS termination by the Istio ingress gateway:

    openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 365 \
      -keyout privkey.pem -out cert.pem -extensions san \
      -config \
      <(echo "[req]";
        echo distinguished_name=req;
        echo "[san]";
        echo subjectAltName=DNS:grpc.example.com
        ) \
      -subj '/CN=grpc.example.com'
    
  2. Create a Kubernetes Secret to store the TLS certificate and private key:

    kubectl -n istio-system create secret tls istio-ingressgateway-certs \
        --key privkey.pem --cert cert.pem \
        --dry-run=client --output yaml | kubectl apply -f -
    
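
The ingress gateway uses this Secret to terminate TLS for the gRPC service. The actual configuration lives in the repository's greeter-istio-gateway.yaml manifest, which you apply in the next section. Purely as a sketch, a Gateway object that terminates TLS with a Secret like the one created above could look like the following; the object name and the credentialName-based TLS setup are assumptions for illustration, not necessarily what the repository's manifest uses.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-grpc-gateway                      # hypothetical name
spec:
  selector:
    istio: ingressgateway                          # select the istio-ingressgateway Pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                                 # terminate TLS at the ingress gateway
      credentialName: istio-ingressgateway-certs   # the Secret created in the previous step
    hosts:
    - grpc.example.com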

Install the sample application

The next step is to build a container image for the sample gRPC service and deploy it to your GKE cluster. The sample gRPC service consists of a client component, referred to as the gRPC client, and a server component, referred to as the gRPC server.

  1. In Cloud Shell, enable automatic Istio sidecar injection in the default namespace:

    kubectl label namespace default istio-injection=enabled
    
  2. Use Cloud Build to create a container image for the gRPC server:

    gcloud builds submit server -t gcr.io/$GOOGLE_CLOUD_PROJECT/grpc-greeter-go-server
    
  3. Create the Kubernetes Deployment and Service objects for the gRPC server:

    envsubst < manifests/greeter-k8s.template.yaml | kubectl apply -f -
    
  4. Verify that the ClusterIP Kubernetes Service has been created and that the Pods are running:

    kubectl get services,pods
    

    The output looks similar to this:

    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    service/greeter      ClusterIP   10.0.18.67   <none>        8080/TCP   11s
    service/kubernetes   ClusterIP   10.0.16.1    <none>        443/TCP    1h

    NAME                          READY   STATUS    RESTARTS   AGE
    pod/greeter-844cffd75-7hpcb   2/2     Running   0          56s
    pod/greeter-844cffd75-ffccl   2/2     Running   0          56s
    pod/greeter-844cffd75-zww6h   2/2     Running   0          56s

    The Pods show 2/2 in the READY column. This output means that for each Pod, both the gRPC server container and the Envoy sidecar container are running.

  5. Create the Istio gateway, virtual service, and destination rule objects for the gRPC server:

    kubectl apply -f manifests/greeter-istio-gateway.yaml \
        -f manifests/greeter-istio-virtualservice.yaml \
        -f manifests/greeter-istio-destinationrule.yaml
    
  6. Verify that all three objects were created successfully:

    kubectl get gateway,virtualservice,destinationrule
    

    The output looks similar to this:

    NAME                                  AGE
    gateway.networking.istio.io/greeter   1m
    NAME                                          GATEWAYS    HOSTS   AGE
    virtualservice.networking.istio.io/greeter    [greeter]   [*]     1m

    NAME                                          HOST      AGE
    destinationrule.networking.istio.io/greeter   greeter   1m

Verify internal connectivity

Internal TCP/UDP Load Balancing is regional, so you verify connectivity from a Compute Engine VM in the same region as the GKE cluster. This tutorial uses a VM in the same zone as the cluster, us-central1-b.

  1. In Cloud Shell, use Cloud Build to create a container image for the gRPC client:

    gcloud builds submit client \
        -t gcr.io/$GOOGLE_CLOUD_PROJECT/grpc-greeter-go-client
    
  2. Create a Compute Engine virtual machine (VM) instance:

    gcloud compute instances create grpc-istio-ilb-tutorial-client-vm \
        --image-project cos-cloud \
        --image-family cos-stable \
        --scopes https://www.googleapis.com/auth/devstorage.read_only \
        --zone us-central1-b
    

    The devstorage.read_only scope is required in order to download container images from Container Registry.

  3. Store the internal TCP/UDP load balancer IP address in a file called ilb-ip.txt:

    kubectl -n istio-system get services istio-ingressgateway \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}' > ilb-ip.txt
    
  4. Copy the self-signed TLS certificate and the file containing the internal TCP/UDP load balancer IP address to the VM instance:

    gcloud compute scp cert.pem ilb-ip.txt grpc-istio-ilb-tutorial-client-vm:~ \
        --zone us-central1-b
    
  5. Use SSH to connect to the VM instance:

    gcloud compute ssh grpc-istio-ilb-tutorial-client-vm --zone us-central1-b
    
  6. On the VM, query the Compute Engine metadata server for the project ID and store it in the GOOGLE_CLOUD_PROJECT environment variable:

    GOOGLE_CLOUD_PROJECT=$(curl -sH "Metadata-Flavor: Google" \
        http://metadata.google.internal/computeMetadata/v1/project/project-id)
    

    In Cloud Shell, the environment variable GOOGLE_CLOUD_PROJECT is set by default, but that isn't the case in Compute Engine VM instances.

  7. Fetch Container Registry credentials for use with the docker command:

    docker-credential-gcr configure-docker --registries gcr.io
    
  8. Run the gRPC client container image:

    docker run --rm -v $(pwd)/cert.pem:/data/cert.pem \
        --add-host grpc.example.com:$(cat ilb-ip.txt) \
        gcr.io/$GOOGLE_CLOUD_PROJECT/grpc-greeter-go-client \
        --address=grpc.example.com:443
    

    The output looks similar to this:

    2019/03/27 15:12:53 Hello world from greeter-844cffd75-ffccl
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-zww6h
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-7hpcb
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-ffccl
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-zww6h
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-7hpcb
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-ffccl
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-zww6h
    2019/03/27 15:12:53 Hello world from greeter-844cffd75-7hpcb

    This output shows that gRPC unary requests are handled by the gRPC server pods (named greeter-*) according to the loadBalancer configuration in the Istio destination rule—in this case, ROUND_ROBIN.

  9. Leave the SSH session:

    exit
    

Examining the source code

To understand the load balancing configuration better, you can look at the sample application's source code.

For example, to understand the messages printed by the gRPC client, look at the Go source code for the server and the client. When the gRPC server handles a request, it looks up the hostname and adds it as a response header called hostname. Because the server is running in a Kubernetes Pod, the hostname is the name of the Pod.

hostname, err := os.Hostname()
if err != nil {
	log.Printf("Unable to get hostname %v", err)
}
if hostname != "" {
	grpc.SendHeader(ctx, metadata.Pairs("hostname", hostname))
}

When the gRPC client receives a response from the server, it gets the hostname header and prints it to the console.

if len(header["hostname"]) > 0 {
	hostname = header["hostname"][0]
}
log.Printf("%s from %s", r.Message, hostname)

To understand the Kubernetes Pod names printed to the console by the gRPC client, look at the Istio configuration for the gRPC server. Note that the DestinationRule object specifies ROUND_ROBIN as the loadBalancer algorithm. This algorithm is the reason that incoming requests rotate among the Pods in the Kubernetes Deployment.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: greeter
spec:
  host: greeter
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL
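
If you want to experiment with a different algorithm, change the value of the simple field in the loadBalancer setting and reapply the manifest with kubectl apply. The following variation is a sketch, not part of the tutorial's manifests; it would prefer the endpoints with the fewest active connections instead of rotating through them:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: greeter
spec:
  host: greeter
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN   # assumption for illustration; the tutorial uses ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL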

Troubleshooting

If you run into problems with this tutorial, we recommend that you review these documents:

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial:

  1. In Cloud Shell, delete the GKE cluster:

    gcloud container clusters delete grpc-istio-ilb-tutorial \
        --zone us-central1-b --quiet --async
    
  2. Delete the images in Container Registry:

    gcloud container images delete gcr.io/$GOOGLE_CLOUD_PROJECT/grpc-greeter-go-client \
        --force-delete-tags --quiet
    gcloud container images delete gcr.io/$GOOGLE_CLOUD_PROJECT/grpc-greeter-go-server \
        --force-delete-tags --quiet
    
  3. Delete the Compute Engine instance:

    gcloud compute instances delete grpc-istio-ilb-tutorial-client-vm \
        --zone us-central1-b --quiet
    

What's next