Setting up Traffic Director with Google Kubernetes Engine

This guide shows you how to configure Google Kubernetes Engine (GKE) or Kubernetes pod hosts, along with the load balancing components that Traffic Director requires.

Before you follow the instructions in this guide, review Preparing for Traffic Director setup and make sure that you have completed the prerequisites.

You can configure Traffic Director using the Compute Engine load balancing SDK or REST APIs. See the load balancing API and gcloud references.

Configuring GKE/Kubernetes clusters for Traffic Director

This section describes the required steps to enable GKE/Kubernetes clusters to work with Traffic Director.

Creating the GKE cluster

GKE clusters must meet the following requirements:

  • Network endpoint group (NEG) support must be enabled. For more information and examples, refer to Standalone network endpoint groups. The standalone NEGs feature is Generally Available for Traffic Director.
  • The cluster node instances' service account must have permission to access the Traffic Director API. For more information on the required permissions, refer to Enabling the service account to access the Traffic Director API.
  • The containers must have access to the Traffic Director API, which is protected by OAuth authentication. For more information, refer to host configuration.
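
As a quick sanity check on the access-scope requirement, you can inspect the `oauthScopes` field in the output of `gcloud container clusters describe`. The sketch below runs the check against a hardcoded sample value rather than a live cluster, so treat the `SCOPES` variable as a stand-in for the real command's output:

```shell
# Stand-in for the oauthScopes value; on a live cluster, populate this from
# `gcloud container clusters describe traffic-director-cluster --zone us-central1-a`.
SCOPES='https://www.googleapis.com/auth/cloud-platform'

# The cloud-platform scope covers the API access that this guide relies on.
case "$SCOPES" in
  *cloud-platform*) echo "scope OK" ;;
  *) echo "missing cloud-platform scope" ;;
esac
```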

The following example shows how to create a GKE cluster called traffic-director-cluster in the us-central1-a zone.

Console

To create a cluster using Cloud Console, perform the following steps:

  1. Go to the Kubernetes Engine menu in Cloud Console.

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.

  4. Customize the template if necessary. The following fields are required:

    • Name: Enter traffic-director-cluster.
    • Location type: Zonal.
    • Zone: us-central1-a.
    • node pool:
      • Cluster size: The number of nodes to create in the cluster. You must have available resource quota for the nodes and their resources (such as firewall rules and routes).
      • Machine type: Compute Engine machine type to use for the instances. Each machine type is billed differently. The default machine type is n1-standard-1. For machine type pricing information, refer to the [Compute Engine pricing page](/compute/pricing#machinetype).
      • Click More options, scroll down to Access scopes and click Allow full access to all Cloud APIs. Click Save.
  5. Click Create.

After you create a cluster in Cloud Console, you need to configure kubectl to interact with the cluster. To learn more, refer to Generating a kubeconfig entry.

gcloud

gcloud container clusters create traffic-director-cluster \
  --zone us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --enable-ip-alias

Getting the required GKE cluster privileges

For GKE, switch to the cluster you just created by issuing the following command. This points kubectl to the correct cluster.

gcloud container clusters get-credentials traffic-director-cluster \
    --zone us-central1-a

Configuring GKE/Kubernetes services

This section shows how to prepare Kubernetes deployment specifications to work with Traffic Director. This consists of configuring services with NEGs as well as injecting sidecar proxies into pods that require access to the services managed by Traffic Director.

Configure firewall rules

To allow health checks to verify that the backend pods are running, you must configure a firewall rule that allows traffic from the health checker IP address ranges.

Console

  1. Go to the Firewall rules page in the Google Cloud Console.
  2. Click Create firewall rule.
  3. On the Create a firewall rule page, supply the following information:
    • Name: Provide a name for the rule. For this example, use fw-allow-health-checks.
    • Network: Choose a VPC network.
    • Priority: Enter a number for the priority. Lower numbers have higher priorities. Be sure that the firewall rule has a higher priority than other rules that might deny ingress traffic.
    • Direction of traffic: Choose ingress.
    • Action on match: Choose allow.
    • Targets: Choose All instances in the network.
    • Source filter: Choose IP ranges.
    • Source IP ranges: 35.191.0.0/16,130.211.0.0/22
    • Allowed protocols and ports: Use tcp. TCP is the underlying protocol for all health check protocols.
    • Click Create.

gcloud

  1. Use the following gcloud command to create a firewall rule named fw-allow-health-checks that allows incoming connections from the health checker ranges to instances in your network. Replace NETWORK_NAME with the name of your network.

    gcloud compute firewall-rules create fw-allow-health-checks \
        --network NETWORK_NAME \
        --action ALLOW \
        --direction INGRESS \
        --source-ranges 35.191.0.0/16,130.211.0.0/22 \
        --rules tcp

To learn more, see Configuring firewall rules for health checks.

Configuring GKE/Kubernetes services with NEGs

The first step in configuring GKE/Kubernetes services with NEGs is to expose the services that need to be managed by Traffic Director. To be exposed through NEGs, each Kubernetes service specification must have the following annotation, matching the port that you want to expose.

...
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"80":{}}}'

For each service, a standalone NEG is created, containing endpoints that are the pods' IP addresses and ports. For more information and examples, refer to Standalone network endpoint groups.
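
For illustration, the annotation sits in a Service manifest like the following sketch. The name, selector, and target port here are assumptions for demonstration only; they are not taken from the sample configuration used later:

```yaml
# Hypothetical Service exposing port 80 through a standalone NEG.
# The name, selector, and targetPort are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: service-test
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"80":{}}}'
spec:
  type: ClusterIP
  selector:
    run: app1
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```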

For demonstration purposes, you can deploy a sample service that serves its hostname over HTTP on port 80:

wget -q -O - \
https://storage.googleapis.com/traffic-director/demo/trafficdirector_service_sample.yaml \
| kubectl apply -f -

Verify that the new service hostname is created and the application pod is running:

kubectl get svc

This returns:

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service-test     ClusterIP   10.71.9.71   <none>        80/TCP    41m
[..skip..]

kubectl get pods

This returns:

NAME                        READY     STATUS    RESTARTS   AGE
app1-6db459dcb9-zvfg2       1/1       Running   0          6m
[..skip..]

Recording the NEG name

Find the NEG created from the example above and record the NEG's name.

Console

To view a list of network endpoint groups, go to the Network Endpoint Groups page in the Google Cloud Console.

gcloud

gcloud beta compute network-endpoint-groups list

This returns:

NAME                                           LOCATION       ENDPOINT_TYPE   SIZE
k8s1-7e91271e-default-service-test-80-a652810c  us-central1-a  GCE_VM_IP_PORT  1

Save the NEG name in the NEG_NAME variable, for example:

NEG_NAME=$(gcloud beta compute network-endpoint-groups list \
| grep service-test | awk '{print $1}')
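
If you want to adapt the grep/awk pipeline, you can exercise it against the sample listing shown above. In this sketch, the live command output is replaced with a hardcoded sample string:

```shell
# Stand-in for the output of `gcloud beta compute network-endpoint-groups list`.
SAMPLE_LIST='NAME                                            LOCATION       ENDPOINT_TYPE   SIZE
k8s1-7e91271e-default-service-test-80-a652810c  us-central1-a  GCE_VM_IP_PORT  1'

# Same pipeline as above: keep the row for service-test, take the first column.
NEG_NAME=$(printf '%s\n' "$SAMPLE_LIST" | grep service-test | awk '{print $1}')
echo "$NEG_NAME"
```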

Configuring Google Cloud load balancing components for Traffic Director

The instructions in this section ensure that GKE services are accessible on the service VIP load balanced by Traffic Director, using a load balancing configuration similar to other Google Cloud Load Balancing products.

You must configure the following components:

  • A health check
  • A global backend service with NEG backends
  • A route rule, which includes the URL map, the target HTTP proxy, and the global forwarding rule

The Traffic Director configuration example that follows makes these assumptions:

  1. The NEGs and all other resources are created in the default (auto mode) network, in the zone us-central1-a.
  2. The NEG name for the cluster is stored in the ${NEG_NAME} variable.

Creating the health check

Create the health check.

Console

  1. Go to the Health checks page in the Google Cloud Console.
  2. Click Create Health Check.
  3. For the name, enter td-gke-health-check.
  4. For the protocol, select HTTP.
  5. Click Create.

gcloud

gcloud compute health-checks create http td-gke-health-check \
    --use-serving-port

Creating the backend service

Create a global backend service with a load balancing scheme of INTERNAL_SELF_MANAGED. In the Cloud Console, the load balancing scheme is set implicitly. Add the health check to the backend service.

Console

  1. Go to the Traffic Director page in the Cloud Console.

  2. On the Services tab, click Create Service.

  3. Click Continue.

  4. For the service name, enter td-gke-service.

  5. Select your NEG.

  6. Set the Maximum RPS to 5.

  7. Select your health check, or click Create another health check and make sure to select HTTP as the protocol.

  8. Click Create.

gcloud

gcloud compute backend-services create td-gke-service \
    --global \
    --health-checks td-gke-health-check \
    --load-balancing-scheme INTERNAL_SELF_MANAGED

Add backend NEGs to the backend service.

gcloud compute backend-services add-backend td-gke-service \
    --global \
    --network-endpoint-group ${NEG_NAME} \
    --network-endpoint-group-zone us-central1-a \
    --balancing-mode RATE \
    --max-rate-per-endpoint 5

Creating the route rule

Use these instructions to create the route rule, forwarding rule, and internal IP address for your Traffic Director configuration.

Traffic sent to the internal IP address is intercepted by the Envoy proxy and sent to the appropriate service according to the host and path rules.

The forwarding rule is created as a global forwarding rule with the load-balancing-scheme set to INTERNAL_SELF_MANAGED.

You can set the address of your forwarding rule to 0.0.0.0. If you do, traffic is routed based on the HTTP hostname and path information configured in the URL map, regardless of the actual IP address that the hostname resolves to. In this case, the URLs (hostname plus URL path) of your services, as configured in the host rules, must be unique within your service mesh configuration. That is, you cannot have two different services, with different sets of backends, that both use the same hostname and path combination.

Alternatively, you can enable routing based on the actual destination VIP of the service. If you configure the VIP of your service as an address parameter of the forwarding rule, only requests destined to this IP address are routed based on the HTTP parameters specified in the URL map.

Console

In the console, the target proxy is combined with the forwarding rule. When you create the forwarding rule, Google Cloud automatically creates a target HTTP proxy and attaches it to the URL map.

The route rule consists of the forwarding rule and the host and path rules (also known as the URL map).

  1. Go to the Traffic Director page in the Cloud Console.

  2. On the Policies tab, click Create Routing Rule.

  3. Enter a policy name.

  4. Click Add forwarding rule.

  5. For the forwarding rule name, enter td-gke-forwarding-rule.

  6. Select your network.

  7. Select your Internal IP.

  8. Click Save.

  9. Optionally, add custom host and path rules or leave the path rules as the defaults and set the host to service-test.

  10. Click Save.

gcloud

  1. Create a URL map that uses the backend service.

    gcloud compute url-maps create td-gke-url-map \
       --default-service td-gke-service
    
  2. Create a URL map path matcher and a host rule to route traffic for your service based on hostname and a path. This example uses service-test as the service name and a default path matcher that matches all path requests for this host (/*). service-test is also the configured name of the Kubernetes service that is used in the sample configuration above.

    gcloud compute url-maps add-path-matcher td-gke-url-map \
       --default-service td-gke-service \
       --path-matcher-name td-gke-path-matcher
    
    gcloud compute url-maps add-host-rule td-gke-url-map \
       --hosts service-test \
       --path-matcher-name td-gke-path-matcher
    
  3. Create the target HTTP proxy.

    gcloud compute target-http-proxies create td-gke-proxy \
       --url-map td-gke-url-map
    
  4. Create the forwarding rule.

    gcloud compute forwarding-rules create td-gke-forwarding-rule \
      --global \
      --load-balancing-scheme=INTERNAL_SELF_MANAGED \
      --address=0.0.0.0 \
      --target-http-proxy=td-gke-proxy \
      --ports 80 --network default
    
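After step 2, you can inspect the result with `gcloud compute url-maps describe td-gke-url-map`. With the configuration above, the output should resemble the following abridged sketch; the real output includes additional fields such as `creationTimestamp` and `fingerprint`, and full resource URIs rather than the shortened paths shown here:

```yaml
# Abridged shape of the URL map created above (illustrative only).
name: td-gke-url-map
defaultService: .../backendServices/td-gke-service
hostRules:
- hosts:
  - service-test
  pathMatcher: td-gke-path-matcher
pathMatchers:
- name: td-gke-path-matcher
  defaultService: .../backendServices/td-gke-service
```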

At this point, Traffic Director is configured to load balance traffic for the services specified in the URL map across backends in the network endpoint group.

Depending on how your microservices are distributed on your network, you might need to add more forwarding rules or more host and path rules to the URL map.

Verifying the configuration by deploying a sample client

This section shows how to reach Traffic Director backends from a client application.

To demonstrate functionality, you can deploy a sample pod running Busybox. The pod has access to service-test, which was created in the previous section and receives traffic that is load balanced by Traffic Director.

Injecting a sidecar proxy into GKE/Kubernetes pods

To access a service managed by Traffic Director, a pod must have an xDS API-compatible sidecar proxy installed.

The following steps provide a reference configuration for adding sidecar injection to an application deployment.

  1. Download the reference specification from https://storage.googleapis.com/traffic-director/trafficdirector_istio_sidecar.yaml.
  2. Modify your application deployment specification by adding two more container specifications from the trafficdirector_istio_sidecar.yaml file:
    1. Sidecar proxy container that sets up the Envoy bootstrap configuration and starts Envoy (see the example of Istio-proxy container defined in trafficdirector_istio_sidecar.yaml).
    2. Init container that sets up the required netfilter rules for interception, and runs with the permissions required to modify netfilter (see the example of initContainers defined in trafficdirector_istio_sidecar.yaml).
  3. (Optional) Specify the IP address range for traffic to be intercepted by a sidecar proxy. For this, replace the '-i "*"' parameter in the args section of the initContainers definition with an IP address of the service for which traffic is redirected. By default, all traffic is intercepted.
    1. For example, you can specify -i "10.0.0.0/24" to redirect traffic only for the 10.0.0.0/24 range.
  4. Re-deploy pods using the new deployment specification.
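
As a hedged sketch of step 3, the relevant part of the init container definition might look like the following after narrowing the intercepted range. The flag names and values here follow the common Istio proxy-init convention and are assumptions; verify them against the trafficdirector_istio_sidecar.yaml file you downloaded rather than treating this as authoritative:

```yaml
# Illustrative initContainers fragment; check all flags and the image
# reference against the downloaded trafficdirector_istio_sidecar.yaml.
initContainers:
- name: istio-init
  image: istio/proxy_init   # use the image and tag from the reference file
  args:
  - -p
  - "15001"        # port that Envoy listens on for intercepted traffic
  - -u
  - "1337"         # UID of the sidecar proxy, excluded from interception
  - -i
  - "10.0.0.0/24"  # intercept only traffic destined for this range
  securityContext:
    capabilities:
      add:
      - NET_ADMIN  # required to modify netfilter rules
```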

In this example, you deploy a Busybox client with an Istio-proxy sidecar and init containers added to the deployment using the reference specification:

wget -q -O - \
https://storage.googleapis.com/traffic-director/demo/trafficdirector_client_sample.yaml \
| kubectl apply -f -

The Busybox pod has two containers running. The first container is the client based on the Busybox image and the second container is the Envoy proxy injected as a sidecar. You can get more information about the pod by running the following command:

kubectl describe pods -l run=client

Reaching the backend service

Once configured, applications on pods that have a sidecar proxy injected can access services managed by Traffic Director. To verify the configuration, you can access a shell on one of the containers.

If you used the demo configuration provided in this guide, you can execute the following verification command to make sure that the hostname of the serving pod is returned.

# Get name of the pod with busybox.
BUSYBOX_POD=$(kubectl get po -l run=client -o=jsonpath='{.items[0].metadata.name}')

# Command to execute that tests connectivity to the service service-test.
TEST_CMD="wget -q -O - service-test; echo"

# Execute the test command on the pod.
kubectl exec -it $BUSYBOX_POD -c busybox -- /bin/sh -c "$TEST_CMD"

Understanding traffic interception by the sidecar proxy

Note that, in this example, when the Busybox client makes requests to the backend service, each request is proxied by the sidecar proxy.

This demonstration application uses the Envoy proxy. Because of that, the client sees 'server: envoy' in the headers of server responses.

To confirm this, use the following commands:

# Get the name of the pod with Busybox.
BUSYBOX_POD=$(kubectl get po -l run=client -o=jsonpath='{.items[0].metadata.name}')

# Command to send a request to service-test and output server response headers.
TEST_CMD="wget -S --spider service-test; echo"

# Execute the test command on the pod.
kubectl exec -it $BUSYBOX_POD -c busybox -- /bin/sh -c "$TEST_CMD"

In this example, you created a forwarding rule using the VIP address 0.0.0.0. This means that Traffic Director forwards requests to the backend based on the Host header only. In this case, the destination IP address can be any address, as long as the request's Host header matches the host defined in the URL map, service-test.

To confirm that, run the following test commands:

# Get name of the pod with Busybox.
BUSYBOX_POD=$(kubectl get po -l run=client -o=jsonpath='{.items[0].metadata.name}')

# Command to send a request to service-test setting the Host header and using a random IP address.
TEST_CMD="wget -q --header 'Host: service-test' -O - 1.2.3.4; echo"

# Execute the test command on the pod.
kubectl exec -it $BUSYBOX_POD -c busybox -- /bin/sh -c "$TEST_CMD"