Overview
In a service mesh, your application code doesn't need to know about your networking configuration. Instead, your applications communicate over a data plane, which is configured by a control plane that handles service networking. In this guide, Traffic Director is your control plane and the Envoy sidecar proxies are your data plane.
The Envoy sidecar injector makes it easy to add Envoy sidecar proxies to your Google Kubernetes Engine Pods. When the Envoy sidecar injector adds a proxy, it also sets that proxy up to handle application traffic and connect to Traffic Director for configuration.
The guide walks you through a simple setup of Traffic Director with Google Kubernetes Engine. These steps provide the foundation that you can extend to advanced use cases, such as a service mesh that extends across multiple Google Kubernetes Engine clusters and, potentially, Compute Engine VMs.
The setup process involves:
- Creating a GKE cluster for your workloads.
- Installing the Envoy sidecar injector and enabling injection.
- Deploying a sample client and verifying injection.
- Deploying a Kubernetes service for testing.
- Configuring Traffic Director with Cloud Load Balancing components to route traffic to the test service.
- Verifying the configuration by sending a request from the sample client to the test service.
Prerequisites
Before you follow the instructions in this guide, review Preparing for Traffic Director setup and make sure that you have completed the prerequisite tasks described in that document.
Creating a GKE cluster for your workloads
GKE clusters must meet the following requirements to support Traffic Director:
- Network endpoint groups support must be enabled. For more information and examples, refer to Standalone network endpoint groups.
- The service account for your GKE nodes/pods must have permission to access the Traffic Director API. For more information on the required permissions, refer back to Enabling the service account to access the Traffic Director API.
Creating the GKE cluster
Create a GKE cluster called traffic-director-cluster in the us-central1-a zone.
gcloud container clusters create traffic-director-cluster \
  --zone us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --enable-ip-alias
Pointing kubectl to the newly created cluster
Change the current context for kubectl to the newly created cluster by issuing the following command:

gcloud container clusters get-credentials traffic-director-cluster \
  --zone us-central1-a
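Optionally, before continuing, you can confirm that kubectl now points at the new cluster. These are standard kubectl checks and assume the cluster created above.

# Confirm the active context and that the cluster's nodes are reachable.
kubectl config current-context
kubectl get nodes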
Installing the Envoy sidecar injector
The following sections provide instructions for installing the Envoy sidecar injector. When the sidecar injector is enabled, it automatically deploys sidecar proxies for both new and existing Google Kubernetes Engine workloads. Because the Envoy sidecar injector runs inside your GKE cluster, you need to install it once on each cluster if you are using Traffic Director to support a multi-cluster service mesh.
Downloading the sidecar injector
Download and extract the Envoy sidecar injector.
wget https://storage.googleapis.com/traffic-director/td-sidecar-injector.tgz
tar -xzvf td-sidecar-injector.tgz
cd td-sidecar-injector
Configuring the sidecar injector
Configure the sidecar injector by editing specs/01-configmap.yaml to:
- Populate TRAFFICDIRECTOR_GCP_PROJECT_NUMBER by replacing your-project-here with the project number of your project. The project number is a numeric identifier for your project. For information about obtaining a list of all your projects, see Identifying projects.
- Populate TRAFFICDIRECTOR_NETWORK_NAME by replacing your-network-here with the Google Cloud Virtual Private Cloud network name that you want to use with Traffic Director. Make note of this VPC network name, because you will need it later when you configure Traffic Director. One way to fill in both values from the command line is shown after this list.
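If you prefer to fill in these values from the command line rather than editing the file by hand, commands like the following work. The project ID (my-project) and the network name (default) below are placeholders for your own values.

# Look up the numeric project number for your project ID (replace my-project).
PROJECT_NUMBER=$(gcloud projects describe my-project --format="value(projectNumber)")

# Replace the placeholders in the ConfigMap in place; adjust the network name as needed.
sed -i "s/your-project-here/${PROJECT_NUMBER}/g" specs/01-configmap.yaml
sed -i "s/your-network-here/default/g" specs/01-configmap.yaml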
You can also optionally enable logging and tracing for each proxy that is injected automatically. For more information on these configurations, review Configuring additional attributes for sidecar proxies.
When you use the sidecar injector, the value of TRAFFICDIRECTOR_ACCESS_LOG_PATH can only be set to a file in the directory /etc/istio/proxy/. For example, the path /etc/istio/proxy/access.log is a valid location.
Note that TRAFFICDIRECTOR_INTERCEPTION_PORT should not be configured in this ConfigMap, because it is already configured by the sidecar injector.
Configuring TLS for the sidecar injector
This section shows you how to configure TLS for the sidecar injector.
The sidecar injector uses a Kubernetes mutating admission webhook to inject proxies when new pods are created. This webhook is an HTTPS endpoint, so you need to provide a key and certificate for TLS.
You can create a private key and a self-signed certificate using openssl to secure the sidecar injector. If you already have your own private key and a certificate signed by a trusted certificate authority (CA), you can skip the next step.
CN=istio-sidecar-injector.istio-control.svc

openssl req \
  -x509 \
  -newkey rsa:4096 \
  -keyout key.pem \
  -out cert.pem \
  -days 365 \
  -nodes \
  -subj "/CN=${CN}"

cp cert.pem ca-cert.pem
This example openssl command outputs a private 4096-bit RSA key to key.pem and a self-signed certificate in X.509 format to cert.pem. Because the certificate is self-signed, it is copied to ca-cert.pem and treated as the certificate of the signing CA as well. The certificate remains valid for 365 days and does not require a passphrase. For more information about certificate creation and signing, refer to the Kubernetes documentation about Certificate Signing Requests.
The steps in this section must be repeated annually to regenerate and re-apply new keys and certificates before they expire.
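To see when your current certificate expires, you can inspect it with openssl; this assumes the cert.pem created above.

# Print the expiration date of the self-signed certificate.
openssl x509 -in cert.pem -noout -enddate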
After you have your key and certificates, you must create a Kubernetes secret and update the sidecar injector's webhook.
Create the namespace under which the Kubernetes secret should be created:
kubectl apply -f specs/00-namespaces.yaml
Create the secret for the sidecar injector.
kubectl create secret generic istio-sidecar-injector -n istio-control \
  --from-file=key.pem \
  --from-file=cert.pem \
  --from-file=ca-cert.pem
Modify the caBundle of the sidecar injection webhook named istio-sidecar-injector-istio-control in specs/02-injector.yaml:

CA_BUNDLE=$(cat cert.pem | base64 | tr -d '\n')
sed -i "s/caBundle:.*/caBundle:\ ${CA_BUNDLE}/g" specs/02-injector.yaml
Installing the sidecar injector to your GKE cluster
Deploy the sidecar injector.
kubectl apply -f specs/
Verify that the sidecar injector is running.
kubectl get pods -A | grep sidecar-injector
This returns output similar to the following:
istio-control   istio-sidecar-injector-6b475bfdf9-79965   1/1   Running   0   11s
istio-control   istio-sidecar-injector-6b475bfdf9-vntjd   1/1   Running   0   11s
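Optionally, you can also confirm that the mutating admission webhook that performs injection was created. The webhook name comes from specs/02-injector.yaml, which you edited earlier.

kubectl get mutatingwebhookconfigurations | grep istio-sidecar-injector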
Enabling sidecar injection
The following command enables injection for the default namespace. The sidecar injector injects sidecar containers into pods created under this namespace:
kubectl label namespace default istio-injection=enabled
You can verify that the default namespace is properly enabled by running the following command:
kubectl get namespace -L istio-injection
This should return:
NAME            STATUS   AGE     ISTIO-INJECTION
default         Active   7d16h   enabled
istio-control   Active   7d15h
istio-system    Active   7d15h
Deploying a sample client and verifying injection
This section shows how to deploy a sample pod running Busybox, which provides a simple interface for reaching a test service. In a real deployment, you would deploy your own client application instead.
kubectl create -f demo/client_sample.yaml
The Busybox pod consists of two containers. The first container is the client based on the Busybox image and the second container is the Envoy proxy injected by the sidecar injector. You can get more information about the pod by running the following command:
kubectl describe pods -l run=client
This should return:
…
Init Containers:
  # istio-init sets up traffic interception for the pod.
  istio-init:
  …
Containers:
  # busybox is the client container that runs application code.
  busybox:
  …
  # istio-proxy is the container that runs the injected Envoy proxy.
  istio-proxy:
  …
Deploying a Kubernetes service for testing
The following sections provide instructions for setting up a test service that you use later in this guide to provide end-to-end verification of your setup.
Configuring GKE services with NEGs
GKE services must be exposed through network endpoint groups (NEGs) so that you can configure them as backends of a Traffic Director backend service. Add the NEG annotation to your Kubernetes service specification and choose a name (by replacing NEG-NAME in the sample below) so that you can find it easily later. You need the name when you attach the NEG to your Traffic Director backend service. For more information on annotating NEGs, see Naming NEGs.
...
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "NEG-NAME"}}}'
spec:
  ports:
  - port: 80
    name: service-test
    protocol: TCP
    targetPort: 8000
This annotation creates a standalone NEG containing endpoints corresponding with the IP addresses and ports of the service's pods. For more information and examples, refer to Standalone network endpoint groups.
The following sample service includes the NEG annotation. The service serves the hostname over HTTP on port 80. Use the following command to get the service and deploy it to your GKE cluster.
wget -q -O - \
  https://storage.googleapis.com/traffic-director/demo/trafficdirector_service_sample.yaml \
  | kubectl apply -f -
Verify that the new service is created and the application pod is running:
kubectl get svc
The output should be similar to the following:
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service-test   ClusterIP   10.71.9.71   none          80/TCP    41m
[..skip..]
Verify that the application pod associated with this service is running:
kubectl get pods

This returns:

NAME                    READY   STATUS    RESTARTS   AGE
app1-6db459dcb9-zvfg2   2/2     Running   0          6m
[..skip..]
Saving the NEG's name
Find the NEG created from the example above and record its name for Traffic Director configuration in the next section.
gcloud compute network-endpoint-groups list
This returns the following:
NAME       LOCATION        ENDPOINT_TYPE    SIZE
NEG-NAME   us-central1-a   GCE_VM_IP_PORT   1
Save the NEG's name in the NEG_NAME variable:
NEG_NAME=$(gcloud compute network-endpoint-groups list \
  | grep service-test | awk '{print $1}')
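Optionally, you can inspect the NEG to confirm that it contains an endpoint for the application pod. The zone matches the cluster you created earlier.

# Show the NEG's details and list its endpoints.
gcloud compute network-endpoint-groups describe ${NEG_NAME} \
  --zone us-central1-a
gcloud compute network-endpoint-groups list-network-endpoints ${NEG_NAME} \
  --zone us-central1-a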
Configuring Traffic Director with Cloud Load Balancing components
This section configures Traffic Director using Compute Engine load balancing resources. This enables the sample client's sidecar proxy to receive configuration from Traffic Director. Outbound requests from the sample client are handled by the sidecar proxy and routed to the test service.
You must configure the following components:
- A health check. For more information on health checks, read Health Check Concepts and Creating Health Checks.
- A backend service. For more information on backend services, read Backend Services.
- A routing rule map. This includes creating a forwarding rule, a target HTTP proxy and a URL map. For more information, read Using forwarding rules for Traffic Director, Using target proxies for Traffic Director, and Using URL maps.
Creating the health check and firewall rule
Console
- Go to the Health checks page in the Google Cloud Console.
- Click Create Health Check.
- For the name, enter td-gke-health-check.
- For the protocol, select HTTP.
- Click Create.
- Go to the Firewall page in the Google Cloud Console.
- Click Create firewall rule.
- On the Create a firewall rule page, supply the following information:
  - Name: Provide a name for the rule. For this example, use fw-allow-health-checks.
  - Network: Choose a VPC network.
  - Priority: Enter a number for the priority. Lower numbers have higher priorities. Be sure that the firewall rule has a higher priority than other rules that might deny ingress traffic.
  - Direction of traffic: Choose ingress.
  - Action on match: Choose allow.
  - Targets: Choose All instances in the network.
  - Source filter: Choose IP ranges.
  - Source IP ranges: 35.191.0.0/16,130.211.0.0/22
  - Allowed protocols and ports: Use tcp. TCP is the underlying protocol for all health check protocols.
- Click Create.
gcloud
Create the health check.
gcloud compute health-checks create http td-gke-health-check \
  --use-serving-port
Create the firewall rule to allow the health checker IP address ranges.
gcloud compute firewall-rules create fw-allow-health-checks \
  --action ALLOW \
  --direction INGRESS \
  --source-ranges 35.191.0.0/16,130.211.0.0/22 \
  --rules tcp
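Optionally, you can confirm that both resources were created:

gcloud compute health-checks describe td-gke-health-check
gcloud compute firewall-rules describe fw-allow-health-checks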
Creating the backend service
Create a global backend service with a load balancing scheme of INTERNAL_SELF_MANAGED. In the Cloud Console, the load balancing scheme is set implicitly. Add the health check to the backend service.
Console
- Go to the Traffic Director page in the Cloud Console.
- On the Services tab, click Create Service.
- Click Continue.
- For the service name, enter td-gke-service.
- Under Backend type, select Network endpoint groups.
- Select the network endpoint group you created.
- Set the Maximum RPS to 5.
- Click Done.
- Under Health check, select td-gke-health-check, which is the health check you created.
- Click Continue.
gcloud
Create the backend service and associate the health check with the backend service.
gcloud compute backend-services create td-gke-service \
  --global \
  --health-checks td-gke-health-check \
  --load-balancing-scheme INTERNAL_SELF_MANAGED
Add the previously created NEG as a backend to the backend service.
gcloud compute backend-services add-backend td-gke-service \
  --global \
  --network-endpoint-group ${NEG_NAME} \
  --network-endpoint-group-zone us-central1-a \
  --balancing-mode RATE \
  --max-rate-per-endpoint 5
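After the health checker has had a chance to probe the endpoints (this can take a minute or two), you can optionally check backend health. An endpoint reported as HEALTHY indicates that the health check and the firewall rule are working.

gcloud compute backend-services get-health td-gke-service \
  --global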
Creating the routing rule map
The routing rule map defines how Traffic Director routes traffic in your mesh. As part of the routing rule map, you configure a virtual IP (VIP) address and a set of associated traffic management rules, such as host-based routing. When an application sends a request to the VIP, the attached Envoy sidecar proxy does the following:
- Intercepts the request.
- Evaluates it according to the traffic management rules in the URL map.
- Selects a backend service based on the hostname in the request.
- Chooses a backend or endpoint associated with the selected backend service.
- Sends traffic to that backend or endpoint.
Console
In the console, the target proxy is combined with the forwarding rule. When you create the forwarding rule, Google Cloud automatically creates a target HTTP proxy and attaches it to the URL map.
The routing rule consists of the forwarding rule and the host and path rules (also known as the URL map).
- Go to the Traffic Director page in the Cloud Console.
- Click Routing rule maps.
- Click Create Routing Rule.
- Enter td-gke-url-map as the Name of the URL map.
- Click Add forwarding rule.
- For the forwarding rule name, enter td-gke-forwarding-rule.
- Select your network.
- Select your Internal IP.
- Click Save.
- Optionally, add custom host and path rules or leave the path rules as the defaults.
- Set the host to service-test.
- Click Save.
gcloud
Create a URL map that uses td-gke-service as the default backend service.

gcloud compute url-maps create td-gke-url-map \
  --default-service td-gke-service
Create a URL map path matcher and a host rule to route traffic for your service based on hostname and a path. This example uses service-test as the service name and a default path matcher that matches all path requests for this host (/*).

gcloud compute url-maps add-path-matcher td-gke-url-map \
  --default-service td-gke-service \
  --path-matcher-name td-gke-path-matcher

gcloud compute url-maps add-host-rule td-gke-url-map \
  --hosts service-test \
  --path-matcher-name td-gke-path-matcher
Create the target HTTP proxy.
gcloud compute target-http-proxies create td-gke-proxy \
  --url-map td-gke-url-map
Create the forwarding rule.
gcloud compute forwarding-rules create td-gke-forwarding-rule \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --address=0.0.0.0 \
  --target-http-proxy=td-gke-proxy \
  --ports 80 \
  --network default
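Optionally, you can confirm the forwarding rule and its 0.0.0.0 VIP:

gcloud compute forwarding-rules describe td-gke-forwarding-rule \
  --global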
At this point, Traffic Director configures your sidecar proxies to route requests that specify the service-test hostname to backends of td-gke-service. In this case, those backends are endpoints in the network endpoint group associated with the Kubernetes test service that you deployed earlier.
Verifying the configuration
This section shows how to verify that traffic sent from the sample Busybox client is routed to your service-test Kubernetes service. To send a test request, you can access a shell on one of the containers and execute the following verification command. A service-test Pod should return the hostname of the serving pod.
# Get the name of the pod running Busybox.
BUSYBOX_POD=$(kubectl get po -l run=client -o=jsonpath='{.items[0].metadata.name}')

# Command to execute that tests connectivity to the service service-test.
TEST_CMD="wget -q -O - service-test; echo"

# Execute the test command on the pod.
kubectl exec -it $BUSYBOX_POD -c busybox -- /bin/sh -c "$TEST_CMD"
Here's how the configuration is verified:
- The sample client sent a request that specified the service-test hostname.
- The sample client has an Envoy sidecar proxy that was injected by the Envoy sidecar injector.
- The sidecar proxy intercepted the request.
- Because you configured 0.0.0.0 as the VIP in your routing rule map, Envoy inspected the request's hostname.
- Using the URL map, Envoy matched the service-test hostname to the td-gke-service Traffic Director service.
- Envoy chose an endpoint from the network endpoint group associated with td-gke-service.
- Envoy sent the request to a pod associated with the service-test Kubernetes service.
What's next
Depending on how your microservices are distributed on your network, you might need to add more forwarding rules or more host and path rules to the URL map. For more information on forwarding rules and URL maps, read the following documents:
- Using forwarding rules
- Forwarding rules REST resource
- Global forwarding rules REST resource
- Using URL maps
- URL Map REST resource
- For more sidecar injector configuration options, see Options for Google Kubernetes Engine Pod setup with automatic Envoy injection.