Traffic Director overview

Traffic Director is a managed control plane for application networking. It lets you deliver global, highly available services with advanced application networking capabilities such as traffic management and observability.

As the number of services and microservices in your deployment grows, you typically start to encounter common application networking challenges:

  • How do I make my services resilient?
  • How do I get traffic to my services and how do services know about, and communicate with, each other?
  • How do I understand what is happening when my services are communicating with each other?
  • How do I update my services without risking an outage?
  • How do I manage the infrastructure that makes this all possible?
Services need to communicate with each other

Traffic Director helps you solve these types of challenges in a modern, service-based deployment. Best of all, it relies on Google Cloud-managed infrastructure, so you get best-in-class capabilities without having to manage your own infrastructure. You focus on shipping application code that solves your business problems while letting Traffic Director manage application networking complexities.

Traffic Director for service mesh

A common pattern for solving application networking challenges is to use a service mesh. Traffic Director supports service mesh, as well as many other deployment patterns that fit your needs.

A typical service mesh

Typical service mesh

In a typical service mesh:

  • You deploy your services to a Kubernetes cluster.
  • Each of the services' Pods has a dedicated proxy (usually Envoy) running as a sidecar proxy.
  • Each sidecar proxy talks to networking infrastructure (a control plane) that is installed in your cluster. The control plane tells the sidecar proxies about services, endpoints, and policies in your service mesh.
  • When a Pod sends or receives a request, the request goes to the Pod's sidecar proxy. The sidecar proxy handles the request, for example, sending it to its intended destination.

In the diagrams in this document, proxies are represented by the six-sided pink icons. The control plane is connected to each proxy and provides information that the proxies need to handle requests. Arrows between boxes show traffic flows. For example, application code in Service A sends a request. The proxy handles the request and forwards it to Service B.

This model enables you to move networking logic out of your application code. You can focus on delivering business value while letting your infrastructure take care of application networking.

How Traffic Director is different

Traffic Director works similarly to that model, but it differs in important ways. It all starts with the fact that Traffic Director is a Google Cloud-managed service. You don't install it, it doesn't run in your cluster, and you don't need to maintain it.

In the following diagram, Traffic Director is the control plane. There are four services in this Kubernetes cluster, each with sidecar proxies that are connected to Traffic Director. Traffic Director provides the information the proxies need to route requests. For example, application code on a Pod belonging to Service A sends a request. The sidecar proxy running alongside this Pod handles the request and routes it to a Pod belonging to Service B.

An example of a service mesh with Traffic Director

Beyond service mesh

Traffic Director supports more types of deployments than a typical service mesh.

Multi-cluster Kubernetes

With Traffic Director, you get application networking that works across Kubernetes clusters. In the following diagram, Traffic Director provides the control plane for Kubernetes clusters in us-central1 and europe-west1. Requests can be routed among the three services in us-central1, among the two services in europe-west1, and between services in the two clusters.

An example of multi-cluster Kubernetes with Traffic Director

Your service mesh can extend across multiple Kubernetes clusters, in multiple Google Cloud regions. Services in one cluster can talk to services in another cluster. And you can even have services that consist of Pods in multiple clusters.

With Traffic Director's proximity-based global load balancing, requests destined for Service B go to the nearest Pod that can serve the request. You also get seamless failover: if a Pod is down, the request automatically fails over to another Pod that can serve the request, even if that Pod is in a different Kubernetes cluster.

Virtual machines

Kubernetes is becoming increasingly popular, but many workloads are deployed to virtual machines (VMs). Traffic Director solves application networking for these workloads too: your VM-based workloads interoperate easily with your Kubernetes-based workloads.

In the following diagram, traffic enters your deployment through External HTTP(S) Load Balancing. It is routed to Service A in the Kubernetes cluster in asia-southeast1 and to Service D on a VM in europe-west1.

An example of VMs and Kubernetes with Traffic Director

Google provides a seamless mechanism to set up VM-based workloads with Traffic Director. You add a flag to your Compute Engine VM instance template, and Google handles the infrastructure setup, including installing and configuring the proxies that deliver application networking capabilities.
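As an illustration, the flag-based setup might look like the following gcloud command. The template name is hypothetical, and flag availability can vary by release track, so check the setup guides for the exact invocation:

```shell
# Hypothetical template name. The --service-proxy flag asks Compute Engine to
# install and configure the Envoy sidecar proxy on VMs created from this template.
gcloud beta compute instance-templates create td-vm-template \
    --service-proxy=enabled \
    --machine-type=e2-standard-2 \
    --image-family=debian-11 \
    --image-project=debian-cloud
```

VMs created from this template then come up with a proxy that is already connected to Traffic Director.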

Proxyless gRPC

gRPC is a feature-rich open source RPC framework that you can use to write high-performance microservices. With Traffic Director, you can easily bring application networking capabilities, such as service discovery, load balancing, and traffic management, to your gRPC applications.

In the following diagram, gRPC applications route traffic to services based in Kubernetes clusters in one region and to services running on VMs in different regions. Two of the services include sidecar proxies and the others are proxyless.

An example of proxyless gRPC applications with Traffic Director

Traffic Director supports proxyless gRPC services. These are services that use a recent version of the open source gRPC library that supports the xDS APIs. This means that your gRPC applications can connect to Traffic Director using the same xDS APIs that Envoy uses.

After they are connected, the gRPC library takes care of application networking functionality such as service discovery, load balancing and traffic management. This happens natively in gRPC, so service proxies are not required — that's why they're called proxyless gRPC applications.
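From the application's perspective, this can be sketched as follows. The bootstrap file path, client binary, and service name below are hypothetical; the `GRPC_XDS_BOOTSTRAP` environment variable and the `xds:` name resolution scheme come from the open source gRPC xDS support:

```shell
# Hypothetical path. The bootstrap file tells the gRPC library where the xDS
# control plane (Traffic Director) is and how to authenticate to it.
export GRPC_XDS_BOOTSTRAP=/etc/td-grpc-bootstrap.json

# Instead of a host:port, the client targets the service with the xds scheme.
# The gRPC library resolves the name through Traffic Director and load-balances
# natively, with no sidecar proxy in the request path.
./my-grpc-client --target="xds:///my-service"
```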

Ingress and gateways

For many use cases, you need to handle traffic that originates from clients that aren't configured by Traffic Director. For example, you may need to ingress public internet traffic to your microservices. You might also want to configure a load balancer as a reverse proxy that handles traffic from a client before sending it on to a destination.

In the following diagram, an external HTTP(S) load balancer enables ingress for external clients, with traffic routed to services in a Kubernetes cluster. An internal HTTP(S) load balancer routes internal traffic to the service running on the VM.

Traffic Director with Cloud Load Balancing for ingress

Traffic Director works with Google Cloud Load Balancing to provide a managed ingress experience. You set up an external or internal load balancer and then configure that load balancer to send traffic to your microservices. In the preceding diagram, public internet clients reach your services through External HTTP(S) Load Balancing. Clients such as microservices that reside on your Virtual Private Cloud (VPC) network use Internal HTTP(S) Load Balancing to reach your services.

In the following diagram, a VM in the europe-west1 region runs a proxy that acts as a gateway to three services that are not running proxies. Traffic from both an external HTTP(S) load balancer and an internal HTTP(S) load balancer is routed to the gateway and then to the three services.

Traffic Director used to configure a gateway

For some use cases, you may want to set up Traffic Director to configure a gateway. This gateway is essentially a reverse proxy, typically Envoy running on one or more VMs, that listens for inbound requests, handles them, and sends them to a destination. The destination may be in any Google Cloud region or Google Kubernetes Engine cluster. It can even be a destination outside of Google Cloud that is reachable from Google Cloud by using hybrid connectivity.

Multi-environment

Whether you have services in Google Cloud, on-premises, in other clouds, or all of these, your fundamental application networking challenges remain the same: How do you get traffic to these services? And how do these services communicate with each other?

In the following diagram, Traffic Director routes traffic from services running in Google Cloud to Service G, running in another public cloud, and to Service E and Service F, both running in an on-premises data center. Service A, Service B, and Service C use Envoy as a sidecar proxy, while Service D is a proxyless gRPC service.

Traffic Director used for communication across environments

When you use Traffic Director, you can send requests to destinations outside of Google Cloud. This enables you to use Cloud Interconnect or Cloud VPN to privately route traffic from services inside of Google Cloud to services or gateways in other environments.

Setting up Traffic Director

Setting up Traffic Director consists of two steps. After you complete the setup process, your infrastructure handles application networking and Traffic Director keeps everything up-to-date based on changes to your deployment.

Deploy your applications

First, you deploy your application code to containers or VMs. We provide mechanisms that allow you to easily add application networking infrastructure (typically Envoy proxies) to your VM instances and Pods. This infrastructure is set up to talk to Traffic Director and learn about your services.

Configure Traffic Director

Next, you configure your global services and define how traffic should be handled. You can use the Google Cloud console, the gcloud CLI, the Traffic Director API, or other tooling such as Terraform to configure Traffic Director.
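For example, a minimal gcloud configuration might create a health check, a global backend service with the INTERNAL_SELF_MANAGED load-balancing scheme, and the routing resources that Traffic Director serves to your proxies. Resource names here are hypothetical, and this is a sketch rather than a complete sequence; see the setup guides for current steps:

```shell
# Hypothetical resource names throughout.
gcloud compute health-checks create http td-health-check \
    --use-serving-port

# INTERNAL_SELF_MANAGED marks this backend service as managed by Traffic Director.
gcloud compute backend-services create my-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --health-checks=td-health-check

# The URL map and target proxy define how requests are routed.
gcloud compute url-maps create td-url-map \
    --default-service=my-service

gcloud compute target-http-proxies create td-proxy \
    --url-map=td-url-map

# The forwarding rule's address is matched by the proxies that Traffic Director
# configures; it is not programmed into your VPC routing.
gcloud compute forwarding-rules create td-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --target-http-proxy=td-proxy \
    --ports=80 \
    --network=default
```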

After you complete these steps, Traffic Director is ready to configure your application networking infrastructure.

Infrastructure handles application networking

When an application sends a request to my-service, your application networking infrastructure (for example, an Envoy sidecar proxy) handles the request according to information received from Traffic Director. This enables a request for my-service to be seamlessly routed to an application instance that is able to receive the request.

Monitoring and continuous updates

Traffic Director monitors the application instances that constitute your services. This enables Traffic Director to discover whether a service is healthy or whether a service's capacity has changed, for example, when a new Kubernetes Pod is created. Based on this information, Traffic Director continuously updates your application networking infrastructure.

Features

Traffic Director's features deliver application networking capabilities to your microservices. Some highlights are discussed in this section.

Fully managed control plane, health checking, and load balancing

You want to spend your time delivering business value, not managing infrastructure. Traffic Director is a fully managed solution with an uptime SLA, so you don't have to install, configure, or update infrastructure. You benefit from the same infrastructure that Google uses for health checking and global load balancing.

Built on open source products

Traffic Director uses the same control plane (xDS) APIs that popular open source projects such as Envoy and Istio use. See the xDS control plane APIs page to view supported API versions.

The infrastructure that delivers application networking capabilities — either Envoy or gRPC, depending on your use case — is also open source so you don't need to worry about being locked in to proprietary infrastructure.

Scale

From one-off application networking solutions to massive service mesh deployments with thousands of services, Traffic Director is built to meet your scaling requirements.

Service discovery and tracking your endpoints and backends

When your application sends a request to my-service, the request is handled seamlessly by your infrastructure and sent to the correct destination. Your application doesn't need to know anything about IP addresses, protocols, or other networking complexities.

Global load balancing and failover

Traffic Director uses Google's global load balancing and health checking to optimally balance traffic based on backend proximity, health and capacity. You improve your service availability by having traffic automatically fail over to healthy backends with capacity.

Traffic management

Advanced traffic management, including routing and request manipulation (based on hostname, path, headers, cookies, and more), enables you to determine how traffic flows between your services. You can also apply actions such as retries, redirects, and weight-based traffic splitting for canary deployments. Advanced patterns such as fault injection, traffic mirroring, and outlier detection enable DevOps use cases that improve your resiliency.

Observability

Your application networking infrastructure collects telemetry information, such as metrics, logs, and traces, which can be aggregated centrally in Cloud Operations. After the telemetry is collected, you can gain insights and create alerts so that you're notified if anything goes wrong.

What's next

To learn more about Traffic Director:

  • Read about the wide variety of features that Traffic Director offers.
  • If you're ready to get started, read one of our setup guides.