Traffic Director features

Traffic Director helps you run microservices in a global service mesh. The mesh handles networking for your microservices so that you can write application code that doesn't need to know about underlying networking complexities. This separation of application logic from networking logic allows you to improve your development velocity, increase service availability, and introduce modern DevOps practices to your organization.

Your service mesh consists of your applications, an xDS v2-compatible data plane (generally the open source Envoy proxy), and Traffic Director as your mesh control plane.

This document summarizes the features available in Traffic Director.

Fully managed control plane for service mesh

Traffic Director is a managed, highly available control plane service that runs in Google Cloud. You don't need to install or update your control plane, so you have one less component to manage in your service mesh infrastructure.

Platforms to run mesh services

You can run applications on the following platforms and adopt them into a global service mesh that is configured by Traffic Director.

Feature | Supported
Compute Engine virtual machines (VMs) | ✔
Google Kubernetes Engine container instances | ✔
Kubernetes on Compute Engine container instances | ✔

Service management

Services in a mesh configured by Traffic Director benefit from service discovery, backend autoscaling, and endpoint auto-registration:

  • When an application in your mesh wants to reach another application, it can call on that service by name. This is referred to as service discovery.

  • These services are backed by instances that run your application code. These instances scale up or down dynamically based on your needs.

  • As new instances are created or removed, they need to be associated with your service. This is referred to as endpoint registration.

Feature | Supported
Automated deployment of sidecar proxies for Compute Engine VMs | ✔
Automated injection of sidecar proxies for Google Kubernetes Engine Pods | ✔
Service discovery based on hostname | ✔
Instance autoscaling based on CPU utilization | ✔
Instance autoscaling based on traffic load/serving capacity (Compute Engine VMs in MIGs only) | ✔
Instance autohealing based on configurable health checks | ✔
Automatic endpoint registration for Compute Engine VM instances | ✔
Automatic endpoint registration for GKE container instances/pods | ✔
API to programmatically add or remove endpoints | ✔
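
The last row in the table corresponds to the Compute Engine network endpoint group (NEG) APIs. Here is a minimal sketch of programmatic endpoint registration using the google-api-python-client library; the project, zone, NEG, instance, and address values are placeholders.

```python
# Attach an endpoint to a zonal network endpoint group (NEG) that backs a
# Traffic Director service. Requires google-api-python-client and
# Application Default Credentials; all resource names are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

operation = compute.networkEndpointGroups().attachNetworkEndpoints(
    project='my-project',                 # placeholder project ID
    zone='us-central1-a',                 # placeholder zone
    networkEndpointGroup='payments-neg',  # placeholder NEG name
    body={
        'networkEndpoints': [{
            'instance': 'payments-vm-1',  # VM hosting the new endpoint
            'ipAddress': '10.0.0.5',
            'port': 8080,
        }]
    }).execute()
print(operation['name'])  # name of the zonal operation to poll
```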

Endpoints for your data plane traffic

Microservices use the data plane to reach services both inside and outside of your mesh. Traffic Director enables you to separate application logic from networking logic, so all your application needs to do is send requests to the data plane (for example, the sidecar proxy running alongside the application). The data plane takes care of sending each request to the right endpoint.
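
As a simplified illustration of this separation, the sketch below shows the application's side of the exchange. The service hostname and path are hypothetical, and a Traffic Director-managed sidecar proxy is assumed to intercept the VM's or Pod's outbound traffic.

```python
# The application addresses the service by name and speaks plain HTTP.
# The sidecar proxy intercepts the request and handles endpoint selection,
# load balancing, retries, and failover. Hostname and path are hypothetical.
import http.client

conn = http.client.HTTPConnection('payments.example.internal')
conn.request('GET', '/v1/balance')
response = conn.getresponse()
print(response.status, response.read())
conn.close()
```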

In the table below, applications described as being in the mesh are those applications that communicate with other services using the Traffic Director-managed data plane. Those applications can send traffic to in-mesh services, as well as services outside of the mesh.

Feature | Supported
VM-based applications in the mesh | ✔
Container-based applications in the mesh | ✔
VM-based applications outside of the mesh | ✔
Container-based applications outside of the mesh | ✔

Data plane topologies

In the service mesh model, your applications communicate using a data plane. This data plane often consists of sidecar proxies deployed alongside your applications. Traffic Director is highly flexible and supports data plane topologies that fit your service networking needs.

Feature | Supported
Sidecar proxies running alongside applications | ✔
Proxyless gRPC services | ✔
Middle proxies between two applications in a mesh | ✔
Edge proxies at the boundary of your mesh | ✔
Mesh spanning multiple GKE clusters and/or Compute Engine VMs in multiple regions | ✔
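
For the proxyless gRPC row above, a minimal client sketch follows. It assumes gRPC 1.30.0 or later, a GRPC_XDS_BOOTSTRAP file that points the gRPC library at Traffic Director, and hypothetical generated stubs for an echo service; with the xds scheme, the gRPC library itself acts as the data plane.

```python
# Proxyless gRPC client: the xds:/// target tells gRPC to fetch routing and
# endpoint configuration from Traffic Director directly, with no sidecar
# proxy in the request path. The service name and the generated modules
# (echo_pb2, echo_pb2_grpc) are hypothetical.
import grpc
import echo_pb2
import echo_pb2_grpc

channel = grpc.insecure_channel('xds:///echo-service')  # service name, not host:port
stub = echo_pb2_grpc.EchoStub(channel)
reply = stub.Echo(echo_pb2.EchoRequest(message='ping'))
print(reply.message)
```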

Programmatic, API-driven configuration

All configuration is exposed out of the box through REST APIs and the Google Cloud Console, so you can automate changes across large teams and manage configuration programmatically.

Feature | Supported
REST APIs | ✔
Google Cloud Console | ✔
gcloud command-line interface | ✔
Deployment Manager | ✔
Terraform support | ✔
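
For example, here is a hedged sketch of reading and patching a Traffic Director backend service through the REST API with the google-api-python-client library; the project and resource names are placeholders, and authentication is assumed to come from Application Default Credentials.

```python
# Read a global backend service, then patch one field programmatically.
# Resource names are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

svc = compute.backendServices().get(
    project='my-project', backendService='payments-service').execute()
print(svc['loadBalancingScheme'])  # INTERNAL_SELF_MANAGED for Traffic Director

compute.backendServices().patch(
    project='my-project',
    backendService='payments-service',
    body={'timeoutSec': 30},  # example change, applied mesh-wide
).execute()
```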

Request protocols

Applications can use the following request protocols when they communicate using the Traffic Director-configured data plane.

Feature | Supported
HTTP | ✔
HTTP/2 | ✔
gRPC | ✔

Routing and traffic management

Traffic Director supports advanced traffic management policies that you can use to steer, split, and shape traffic as it passes through your data plane. Note that most advanced traffic management features are not available with proxyless gRPC services.

Feature | Supported with Envoy proxy | Supported with proxyless gRPC
HTTP/Layer 7 request routing based on suffix/prefix/full/regex match on:
  • Host name | ✔ | ✔ (1.30.0 or later)
  • Path | ✔ | ✔ (1.31.0 or later)
  • Headers | ✔ | ✔ (1.31.0 or later)
  • Method | ✔ | N/A
  • Cookies | ✔ | ✔ (1.31.0 or later)
  • Request parameters | ✔ | N/A
Fault injection | ✔ |
Configurable timeouts | ✔ |
Retries | ✔ |
Redirects | ✔ | N/A
URI rewrites | ✔ |
Request/response header transformations | ✔ |
Traffic splitting | ✔ |
Traffic mirroring | ✔ |
Outlier detection | ✔ |
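
As one example of how a policy from this table is expressed, the sketch below patches a URL map to split traffic 90/10 between two backend services. Field names follow the Compute Engine urlMaps API; the project, URL map, and backend service names are placeholders.

```python
# Split traffic between a stable and a canary backend service by patching
# the URL map's route rules. All resource names are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

prefix = 'projects/my-project/global/backendServices/'
compute.urlMaps().patch(
    project='my-project',
    urlMap='td-url-map',
    body={
        'pathMatchers': [{
            'name': 'matcher1',
            'defaultService': prefix + 'payments-v1',
            'routeRules': [{
                'priority': 1,
                'matchRules': [{'prefixMatch': '/'}],
                'routeAction': {
                    'weightedBackendServices': [
                        {'backendService': prefix + 'payments-v1', 'weight': 90},
                        {'backendService': prefix + 'payments-v2', 'weight': 10},
                    ],
                },
            }],
        }],
    }).execute()
```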

Load balancing

You can configure advanced load balancing methods and algorithms to load balance at the service, backend group (instance groups or network endpoint groups), and individual backend or endpoint levels. For more information, see Backend services overview.

Feature | Supported with Envoy proxy | Supported with proxyless gRPC
Service selection based on weight-based traffic splits | ✔ |
Backend (instance group or network endpoint group) selection based on region (prefer nearest region with healthy backend capacity) | ✔ | ✔ (1.30.0 or later)
Backend selection using rate-based (requests per second) balancing mode | ✔ | ✔ (1.30.0 or later)
Backend selection based on utilization-based balancing mode (VMs in Compute Engine instance groups only) | ✔ | ✔ (1.30.0 or later)
Configurable maximum capacity per backend (Compute Engine and GKE only) | ✔ | ✔ (1.30.0 or later)
Circuit breaking | ✔ |
Backend selection based on configurable load balancing policies* (Round robin, Least request, Ring hash, Random, Original destination, Maglev) | ✔ | Round robin only

*See localityLbPolicy for additional details.
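
To make the asterisked row concrete, the sketch below sets the localityLbPolicy field on a backend service, along with a simple circuit-breaker limit, through the Compute Engine API; resource names and values are placeholders.

```python
# Configure ring-hash load balancing and a request-based circuit breaker on
# a backend service. Field names follow the Compute Engine backendServices
# API; resource names and values are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

compute.backendServices().patch(
    project='my-project',
    backendService='payments-service',
    body={
        'localityLbPolicy': 'RING_HASH',           # one of the policies above
        'circuitBreakers': {'maxRequests': 1000},  # cap concurrent requests
    }).execute()
```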

Failover

Enterprise workloads generally rely on high-availability deployments to ensure service uptime. Traffic Director supports these types of deployments by enabling multi-zone/multi-region redundancy.

Feature | Supported
Automatic failover to another zone within the same region that has healthy backend capacity | ✔
Automatic failover to nearest region with healthy backend capacity | ✔

Health checks

Traffic Director uses centralized health checking to determine backend health. For reference information, see Health checks overview.

Feature | Supported
gRPC health checks | ✔
HTTP health checks | ✔
HTTPS health checks | ✔
HTTP/2 health checks | ✔
TCP health checks | ✔
Configurable port, check intervals, timeouts, and healthy and unhealthy thresholds | ✔
Configurable request path (HTTP, HTTPS, HTTP/2) | ✔
Configurable request string or path (TCP or SSL) | ✔
Configurable expected response string | ✔
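
For example, here is a hedged sketch that creates a health check with the configurable parameters listed above, using the Compute Engine healthChecks API; the project and check name are placeholders.

```python
# Create an HTTP health check with an explicit port, check interval,
# timeout, and thresholds. Names are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

compute.healthChecks().insert(
    project='my-project',
    body={
        'name': 'payments-hc',
        'type': 'HTTP',
        'checkIntervalSec': 5,      # how often to probe each endpoint
        'timeoutSec': 5,            # how long to wait for a response
        'healthyThreshold': 2,      # probes to mark an endpoint healthy
        'unhealthyThreshold': 3,    # probes to mark an endpoint unhealthy
        'httpHealthCheck': {
            'port': 8080,
            'requestPath': '/healthz',  # configurable request path
        },
    }).execute()
```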

Observability

Observability tools provide monitoring, debugging, and performance information to help you understand your service mesh. The following capabilities are either provided out-of-the-box or configured in your data plane. Your application code doesn't need to do anything special to generate this observability data.

The service health dashboard is available with proxyless gRPC services, but Traffic Director cannot configure data plane logging and tracing for a gRPC application. You can enable logging and tracing yourself by following the instructions in the troubleshooting sections or gRPC guides available on open source sites. For example, you can use OpenCensus to enable metrics collection and tracing in your proxyless gRPC services.
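
The following sketch shows one way to wire OpenCensus tracing into a proxyless gRPC client, assuming the opencensus and opencensus-ext-grpc packages and the hypothetical xds:/// service name used earlier; exporter configuration is omitted.

```python
# Enable client-side tracing for a proxyless gRPC service with OpenCensus.
# Requires the opencensus and opencensus-ext-grpc packages; the channel
# target is a placeholder service name.
import grpc
from opencensus.ext.grpc import client_interceptor
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(sampler=AlwaysOnSampler())
interceptor = client_interceptor.OpenCensusClientInterceptor(tracer)
channel = grpc.intercept_channel(
    grpc.insecure_channel('xds:///echo-service'), interceptor)
# Calls made on this channel now emit trace spans.
```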

Feature | Supported with proxies | Supported with proxyless gRPC services
Service health dashboard | ✔ | ✔
Data plane logging | ✔ | ✗
Data plane tracing | ✔ | ✗

Session affinity

Client-server communications often involve multiple successive requests. In such cases, it's helpful to route successive requests from the same client to the same backend or server. Traffic Director provides configurable options that send requests from a particular client, on a best-effort basis, to the same backend as long as that backend is healthy and has capacity. For more information, see Backend services overview.

Feature | Supported with proxies | Supported with proxyless gRPC services
Client IP address | ✔ |
HTTP cookie | ✔ |
HTTP header | ✔ |
Generated cookie (sets client cookie on first request) | ✔ |
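
As an example of the header-based option above, the sketch below enables header-field session affinity on a backend service through the Compute Engine API; header-based and cookie-based affinity require a consistent-hash locality policy such as ring hash, and all names are placeholders.

```python
# Route requests that carry the same header value to the same backend.
# Field names follow the Compute Engine backendServices API; resource
# names and the header name are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

compute.backendServices().patch(
    project='my-project',
    backendService='payments-service',
    body={
        'sessionAffinity': 'HEADER_FIELD',
        'localityLbPolicy': 'RING_HASH',  # required for header-based affinity
        'consistentHash': {'httpHeaderName': 'x-session-id'},
    }).execute()
```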

Network topologies

Traffic Director supports common Google Cloud network topologies.

Feature | Supported
Single network in a Google Cloud project | ✔
Shared VPC (single network shared across multiple Google Cloud projects) | ✔

See Limitations for a detailed explanation of how Shared VPC is supported with Traffic Director.

Compliance

Traffic Director is compliant with the following standards.

Compliance certification
HIPAA
ISO 27001, ISO 27017, ISO 27018
SOC1, SOC2, SOC3
PCI DSS