Traffic Director helps you run microservices in a global service mesh. The mesh handles networking for your microservices so that you can write application code that doesn't need to know about underlying networking complexities. This separation of application logic from networking logic allows you to improve your development velocity, increase service availability, and introduce modern DevOps practices to your organization.
Your service mesh consists of your applications, an xDS-compatible data plane (generally the open source Envoy proxy), and Traffic Director as your mesh control plane.
You can also deploy proxyless gRPC services with Traffic Director in your service mesh using a supported version of gRPC.
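For example, a proxyless gRPC client can join the mesh just by using the xDS name resolver. The following Go sketch is a minimal illustration, not a definitive setup: the service name `helloworld-service` is a hypothetical name configured in Traffic Director, the process is assumed to have a valid xDS bootstrap file, and the stock gRPC helloworld stubs are reused for brevity.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds" // registers the "xds" resolver and balancers with gRPC

	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
	// The process must point at an xDS bootstrap file naming the Traffic
	// Director endpoint, typically via GRPC_XDS_BOOTSTRAP=/path/to/bootstrap.json.
	// "helloworld-service" is a hypothetical service configured in Traffic Director.
	conn, err := grpc.Dial("xds:///helloworld-service",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Traffic Director, not the client, decides which backend serves this RPC.
	reply, err := pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "mesh"})
	if err != nil {
		log.Fatalf("RPC: %v", err)
	}
	log.Printf("reply: %s", reply.GetMessage())
}
```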
This document summarizes the features available in Traffic Director. The value N/A (Not Applicable) means that a feature cannot be supported because it is not compatible with the particular Traffic Director configuration.
Fully managed control plane for service mesh
Traffic Director is a managed, highly available control plane service that runs in Google Cloud. You don't need to install or update your control plane, so you have one less component to manage in your service mesh infrastructure.
Supported xDS versions
Traffic Director uses open source xDS control plane APIs to configure Envoy and proxyless gRPC clients. These clients act on behalf of your application code to deliver Traffic Director's application networking capabilities.
Version | Support |
---|---|
xDS v2 | General Availability |
xDS v3 | Preview |
Platforms to run mesh services
You can run applications on the following platforms and adopt them into a global service mesh that is configured by Traffic Director.
Feature | Supported |
---|---|
Compute Engine virtual machines (VMs) | ✔ |
Google Kubernetes Engine container instances | ✔ |
Kubernetes on Compute Engine container instances | ✔ |
Service management
Services in a mesh configured by Traffic Director benefit from service discovery, backend autoscaling, and endpoint auto-registration:

- When an application in your mesh wants to reach another application, it can call that service by name. This is referred to as service discovery.
- These services are backed by instances that run your application code, and those instances scale up or down dynamically based on your needs.
- As instances are created or removed, they need to be associated with your service. This is referred to as endpoint registration.
Feature | Supported |
---|---|
Automated deployment of sidecar proxies for Compute Engine VMs | ✔ |
Automated injection of sidecar proxies for Google Kubernetes Engine Pods | ✔ |
Service discovery based on hostname | ✔ |
Instance autoscaling based on CPU utilization | ✔ |
Instance autoscaling based on traffic load/serving capacity (Compute Engine VMs in MIGs only) | ✔ |
Instance autohealing based on configurable health checks | ✔ |
Automatic endpoint registration for Compute Engine VM instances | ✔ |
Automatic endpoint registration for GKE container instances/pods | ✔ |
API to programmatically add or remove endpoints | ✔ |
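The last row in the table above refers to the Compute Engine network endpoint group (NEG) API. As a minimal sketch (the project, zone, NEG name, and endpoint address below are all hypothetical), the following Go program attaches an endpoint to a zonal NEG; a corresponding `DetachNetworkEndpoints` call removes endpoints.

```go
package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical identifiers, for illustration only.
	project, zone, neg := "my-project", "us-central1-a", "my-service-neg"

	req := &compute.NetworkEndpointGroupsAttachEndpointsRequest{
		NetworkEndpoints: []*compute.NetworkEndpoint{
			{IpAddress: "10.0.0.5", Port: 8080},
		},
	}

	// Register the endpoint with the NEG; Traffic Director picks up the change.
	op, err := svc.NetworkEndpointGroups.AttachNetworkEndpoints(project, zone, neg, req).Context(ctx).Do()
	if err != nil {
		log.Fatalf("attach endpoints: %v", err)
	}
	log.Printf("started operation %s", op.Name)
}
```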
Endpoints for your data plane traffic
Microservices use the data plane to reach services in your mesh, as well as outside of your mesh. Traffic Director enables you to separate application logic from networking logic so all your application needs to do is send requests to the data plane (for example, the sidecar proxy running alongside the application). The data plane takes care of sending requests to the right endpoint.
In the table below, applications described as being in the mesh are those that communicate with other services using the Traffic Director-managed data plane. These applications can send traffic to in-mesh services as well as to services outside of the mesh.
Feature | Supported |
---|---|
VM-based applications in the mesh | ✔ |
Container-based applications in the mesh | ✔ |
VM-based applications outside of the mesh | ✔ |
Container-based applications outside of the mesh | ✔ |
Applications running in on-premises data centers | ✔ |
Applications in multi-cloud environments | ✔ |
Data plane topologies
In the service mesh model, your applications communicate using a data plane. This data plane often consists of sidecar proxies deployed alongside your applications. Traffic Director is highly flexible and supports data plane topologies that fit your service networking needs.
Feature | Supported |
---|---|
Sidecar proxies running alongside applications | ✔ |
Proxyless gRPC applications | ✔ |
Middle proxies between two applications in a mesh | ✔ |
Edge proxies at the boundary of your mesh | ✔ |
Mesh spanning multiple GKE clusters and/or Compute Engine VMs in multiple regions | ✔ |
Programmatic, API-driven configuration
All configuration is exposed out of the box through REST APIs and the Google Cloud Console, so you can manage changes programmatically and automate them across large teams.
Feature | Supported |
---|---|
REST APIs | ✔ |
Google Cloud Console | ✔ |
gcloud command-line interface | ✔ |
Deployment Manager | ✔ |
Terraform support | ✔ |
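As a minimal sketch of API-driven configuration (assuming Application Default Credentials and a hypothetical project ID), the following Go program lists the global backend services behind a Traffic Director mesh:

```go
package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Traffic Director services are global backend services that use the
	// INTERNAL_SELF_MANAGED load-balancing scheme.
	list, err := svc.BackendServices.List("my-project").Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, bs := range list.Items {
		if bs.LoadBalancingScheme == "INTERNAL_SELF_MANAGED" {
			fmt.Printf("%s (protocol %s)\n", bs.Name, bs.Protocol)
		}
	}
}
```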
Language support with proxyless gRPC applications
You can create proxyless gRPC applications that work with Traffic Director using the following programming languages.
Language | Supported |
---|---|
Java | ✔ |
Go | ✔ |
C++ | ✔ |
Python | ✔ |
Ruby | ✔ |
PHP | ✔ |
Node | ✔ |
Request protocols
Applications can use the following request protocols when they communicate using the Traffic Director-configured data plane.
Feature | Supported |
---|---|
HTTP | ✔ |
HTTP/2 | ✔ |
TCP | ✔ |
gRPC | ✔ |
Routing and traffic management
Traffic Director supports advanced traffic management policies that you can use to steer, split, and shape traffic as it passes through your data plane. Note that most advanced traffic management is not enabled for Traffic Director with proxyless gRPC services, and none of the advanced traffic management features are available with the target TCP proxy resource.
Feature | Supported with Envoy proxy configured to handle HTTP or gRPC traffic | Supported with Envoy proxy configured to handle TCP traffic | Supported with proxyless gRPC |
---|---|---|---|
HTTP/Layer 7 request routing based on suffix/prefix/full/regex match on: | |||
• Host name | ✔ | N/A | ✔ |
• Path | ✔ | N/A | ✔ |
• Headers | ✔ | N/A | ✔ |
• Method | ✔ | N/A | N/A |
• Cookies | ✔ | N/A | ✔ |
• Request parameters | ✔ | N/A | N/A |
Fault injection | ✔ | N/A | |
Configurable timeouts | ✔ | N/A | |
Retries | ✔ | N/A | |
Redirects | ✔ | N/A | |
URI rewrites | ✔ | N/A | |
Request/response header transformations | ✔ | N/A | |
Traffic splitting | ✔ | N/A | ✔ |
Traffic mirroring | ✔ | ||
Outlier detection | ✔ |
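Routing rules such as traffic splitting are expressed on the URL map resource. The following Go sketch is a hedged illustration (the map name, host, and backend service URLs are hypothetical) of a route rule that splits matching traffic 90/10 between two backend services:

```go
package main

import compute "google.golang.org/api/compute/v1"

// splitTraffic creates a URL map whose route rule sends 90% of matching
// requests to serviceA and 10% to serviceB. All names are hypothetical;
// serviceA and serviceB are full backend service URLs.
func splitTraffic(svc *compute.Service, project, serviceA, serviceB string) error {
	urlMap := &compute.UrlMap{
		Name:           "td-url-map",
		DefaultService: serviceA,
		HostRules: []*compute.HostRule{
			{Hosts: []string{"myservice.example.com"}, PathMatcher: "split"},
		},
		PathMatchers: []*compute.PathMatcher{{
			Name:           "split",
			DefaultService: serviceA,
			RouteRules: []*compute.HttpRouteRule{{
				Priority:   1,
				MatchRules: []*compute.HttpRouteRuleMatch{{PrefixMatch: "/"}},
				RouteAction: &compute.HttpRouteAction{
					// 90% of matching requests go to serviceA, 10% to serviceB.
					WeightedBackendServices: []*compute.WeightedBackendService{
						{BackendService: serviceA, Weight: 90},
						{BackendService: serviceB, Weight: 10},
					},
				},
			}},
		}},
	}
	_, err := svc.UrlMaps.Insert(project, urlMap).Do()
	return err
}
```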
Load balancing
You can configure advanced load balancing methods and algorithms to load balance at the service, backend group (instance groups or network endpoint groups), and individual backend or endpoint levels. For more information, see Backend services overview.
Feature | Supported with Envoy proxy configured to handle HTTP or gRPC traffic | Supported with Envoy proxy configured to handle TCP traffic | Supported with proxyless gRPC |
---|---|---|---|
Service selection based on weight-based traffic splits | ✔ | N/A | ✔ |
Backend (instance group or network endpoint group) selection based on region (prefer nearest region with healthy backend capacity) | ✔ | ✔ | ✔ |
Backend selection using rate-based (requests per second) balancing mode | ✔ | N/A | ✔ |
Backend selection based on utilization-based balancing mode (VMs in Compute Engine instance groups only) | ✔ | ✔ | ✔ |
Configurable maximum capacity per backend (Compute Engine and GKE only) | ✔ | ✔ | ✔ |
Circuit breaking | ✔ | ||
Backend selection based on configurable load balancing policies* | ✔ | ✔ | Round robin only |

*See localityLbPolicy for additional details.
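In the Compute Engine API, the load balancing policy is the `localityLbPolicy` field on the backend service. A minimal Go sketch, with hypothetical names:

```go
package main

import compute "google.golang.org/api/compute/v1"

// setLbPolicy switches an existing Traffic Director backend service to
// least-request balancing among endpoints within the selected backend group.
func setLbPolicy(svc *compute.Service, project, name string) error {
	bs, err := svc.BackendServices.Get(project, name).Do()
	if err != nil {
		return err
	}
	// Other accepted values include ROUND_ROBIN, RING_HASH, RANDOM, and MAGLEV.
	bs.LocalityLbPolicy = "LEAST_REQUEST"
	_, err = svc.BackendServices.Patch(project, name, bs).Do()
	return err
}
```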
Service and backend capacity management
Traffic Director takes service and backend capacity into account to ensure optimal distribution of traffic across your services' backends. Traffic Director is integrated with Google Cloud infrastructure so that it automatically collects capacity data. You can also set capacity manually.
Feature | Supported |
---|---|
Automatically tracks backend capacity and utilization, based on CPU, for VM instances in a Managed Instance Group | ✔ |
Manual capacity and overrides for VM and container instances in MIGs and NEGs based on request rate | ✔ |
Manual capacity draining | ✔ |
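Manual capacity draining maps to the `capacityScaler` setting on each backend. The Go sketch below (names hypothetical) drains one backend group by setting its scaler to zero:

```go
package main

import compute "google.golang.org/api/compute/v1"

// drainBackend sets capacityScaler to 0 on the first backend group so that
// Traffic Director stops sending it new traffic (manual capacity draining).
func drainBackend(svc *compute.Service, project, name string) error {
	bs, err := svc.BackendServices.Get(project, name).Do()
	if err != nil {
		return err
	}
	bs.Backends[0].CapacityScaler = 0
	// The Go client omits zero-valued fields by default, so force this one to be sent.
	bs.Backends[0].ForceSendFields = []string{"CapacityScaler"}
	_, err = svc.BackendServices.Patch(project, name, bs).Do()
	return err
}
```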
Failover
Enterprise workloads generally rely on high-availability deployments to ensure service uptime. Traffic Director supports these types of deployments by enabling multi-zone/multi-region redundancy.
Feature | Supported |
---|---|
Automatic failover to another zone within the same region that has healthy backend capacity | ✔ |
Automatic failover to nearest region with healthy backend capacity | ✔ |
Health checks
Traffic Director provides centralized health checking to determine backend health. For reference information, see Health checks overview.
Feature | Supported |
---|---|
gRPC health checks | ✔ |
HTTP health checks | ✔ |
HTTPS health checks | ✔ |
HTTP/2 health checks | ✔ |
TCP health checks | ✔ |
Configurable health checks: | ✔ |
• Request path (HTTP, HTTPS, HTTP/2) | ✔ |
• Request string or path (TCP or SSL) | ✔ |
• Expected response string | ✔ |
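In the Compute Engine API, these settings live on the health check resource. A minimal Go sketch with a hypothetical name, port, and request path:

```go
package main

import compute "google.golang.org/api/compute/v1"

// createHealthCheck defines an HTTP health check with explicit interval,
// timeout, and threshold settings. Name, port, and path are hypothetical.
func createHealthCheck(svc *compute.Service, project string) error {
	hc := &compute.HealthCheck{
		Name:               "td-http-health-check",
		Type:               "HTTP",
		CheckIntervalSec:   5,
		TimeoutSec:         5,
		HealthyThreshold:   2,
		UnhealthyThreshold: 2,
		HttpHealthCheck: &compute.HTTPHealthCheck{
			Port:        8080,
			RequestPath: "/healthz",
		},
	}
	_, err := svc.HealthChecks.Insert(project, hc).Do()
	return err
}
```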
Observability
Observability tools provide monitoring, debugging, and performance information to help you understand your service mesh. The following capabilities are either provided out-of-the-box or configured in your data plane. Your application code doesn't need to do anything special to generate this observability data.
The service health dashboard is available with proxyless gRPC services, but Traffic Director cannot configure a gRPC application's data plane logging and tracing. You can enable them yourself by following the instructions in the troubleshooting sections or in the gRPC guides available on open source sites. For example, you can use OpenCensus to enable metrics collection and tracing in your proxyless gRPC services.
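As a hedged illustration of the OpenCensus approach, the Go sketch below instruments a proxyless gRPC client with metric views and trace sampling. Exporting the data (for example, to Cloud Monitoring or Cloud Trace) still requires registering an exporter, which is omitted here; the service name is hypothetical.

```go
package main

import (
	"go.opencensus.io/plugin/ocgrpc"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds"
)

func dialWithObservability() (*grpc.ClientConn, error) {
	// Collect basic client metrics (latency, bytes, completed RPCs).
	if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
		return nil, err
	}
	// Sample every RPC for tracing; production code would sample less aggressively.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

	// The stats handler records metrics and trace spans for every RPC.
	return grpc.Dial("xds:///helloworld-service",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithStatsHandler(&ocgrpc.ClientHandler{}))
}
```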
Feature | Supported with proxies | Supported with proxyless gRPC services |
---|---|---|
Service health dashboard | ✔ | ✔ |
Data plane logging | ✔ | |
Data plane tracing | ✔ |
Session affinity
Client-server communications often involve multiple successive requests. In such cases, it's helpful to route successive requests from a client to the same backend or server. Traffic Director provides configurable options that send requests from a particular client to the same backend, on a best-effort basis, as long as the backend is healthy and has capacity. For more information, see Backend services overview.
Feature | Supported with HTTP(S) proxies | Supported with TCP proxies | Supported with proxyless gRPC services |
---|---|---|---|
Client IP address | ✔ | ✔ | |
HTTP cookie | ✔ | N/A | |
HTTP header | ✔ | N/A | |
Generated cookie (sets client cookie on first request) | ✔ | N/A |
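On the backend service resource, these options map to the `sessionAffinity` field, with cookie-based affinity additionally using the `consistentHash` settings. A minimal Go sketch with hypothetical values:

```go
package main

import compute "google.golang.org/api/compute/v1"

// enableCookieAffinity configures generated-cookie session affinity on a
// backend service. The cookie name and TTL are hypothetical choices.
func enableCookieAffinity(svc *compute.Service, project, name string) error {
	bs, err := svc.BackendServices.Get(project, name).Do()
	if err != nil {
		return err
	}
	bs.SessionAffinity = "HTTP_COOKIE"
	// Cookie-based affinity is hash-based, so a ring-hash locality policy is used here.
	bs.LocalityLbPolicy = "RING_HASH"
	bs.ConsistentHash = &compute.ConsistentHashLoadBalancerSettings{
		HttpCookie: &compute.ConsistentHashLoadBalancerSettingsHttpCookie{
			Name: "td-session",
			Ttl:  &compute.Duration{Seconds: 3600},
		},
	}
	_, err = svc.BackendServices.Patch(project, name, bs).Do()
	return err
}
```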
Network topologies
Traffic Director supports common Google Cloud network topologies.
Feature | Supported |
---|---|
Single network in a Google Cloud project | ✔ |
Shared VPC (single network shared across multiple Google Cloud projects) | ✔ |
See Limitations for a detailed explanation of how Shared VPC is supported with Traffic Director.
Compliance
Traffic Director is compliant with the following standards.
Compliance certification | Supported |
---|---|
HIPAA | ✔ |
ISO 27001, ISO 27017, ISO 27018 | ✔ |
SOC1, SOC2, SOC3 | ✔ |
PCI DSS | ✔ |