Traffic Director load balancing

Traffic Director uses sidecar proxies to deliver global load balancing for your internal microservices. You can deploy sidecar proxy-based internal microservices with instances in multiple regions. Traffic Director provides health, routing, and backend information to the sidecar proxies, enabling them to route traffic optimally to a service's application instances across multiple cloud regions.

In the following diagram, user traffic enters a Google Cloud deployment through an external global load balancer. The external load balancer distributes traffic to the Google Front End (GFE) microservice in either us-central1 or asia-southeast1, depending on the location of the end user.

The internal deployment features three global microservices: GFE, Shopping Cart, and Payments. Each service runs on managed instance groups (MIGs) in two regions, us-central1 and asia-southeast1. Traffic Director uses a global load-balancing algorithm that directs traffic from the user in California to the microservices deployed in us-central1. Requests from the user in Singapore are directed to the microservices in asia-southeast1.

An incoming user request is routed to the GFE microservice. The sidecar proxy installed on the host with the GFE then directs traffic to the Shopping Cart microservice. The sidecar proxy installed on the host with the Shopping Cart directs traffic to the Payments microservice.

Traffic Director in a global load-balancing deployment.

In the following example, if Traffic Director receives health check results that indicate that the virtual machine (VM) instances running the Shopping Cart microservice in us-central1 are unhealthy, Traffic Director instructs the sidecar proxies for the GFE microservice to fail over traffic to the Shopping Cart microservice running in asia-southeast1. Because autoscaling is integrated with traffic management in Google Cloud, Traffic Director notifies the MIG in asia-southeast1 of the additional traffic, and the MIG increases in size.

Traffic Director detects that all backends of the Payments microservice are healthy, so Traffic Director instructs the Envoy sidecar proxy for the Shopping Cart to send a portion of the traffic, up to the customer's configured capacity, to asia-southeast1 and to overflow the rest to us-central1.
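Capacity-based overflow like this is driven by the capacity settings on each backend of the backend service. The following is a minimal sketch, assuming a hypothetical backend service named `payments-service` and hypothetical MIGs `payments-mig-asia` and `payments-mig-us`; the zone names and rate limit are illustrative:

```shell
# Hypothetical names: payments-service, payments-mig-asia, payments-mig-us.
# RATE balancing mode caps each instance at 100 requests per second.
# Traffic beyond a region's aggregate capacity overflows to the next
# closest healthy region.
gcloud compute backend-services add-backend payments-service \
    --global \
    --instance-group=payments-mig-asia \
    --instance-group-zone=asia-southeast1-b \
    --balancing-mode=RATE \
    --max-rate-per-instance=100

gcloud compute backend-services add-backend payments-service \
    --global \
    --instance-group=payments-mig-us \
    --instance-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-instance=100
```

With both regions at equal capacity, the proxy prefers the region closest to the client and spills traffic over only when that region's configured rate is exceeded or its backends become unhealthy.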

Failover with Traffic Director in a global load-balancing deployment.

Load-balancing components in Traffic Director

During Traffic Director setup, you configure several load-balancing components:

  • A global forwarding rule, which includes the VIP address, the target proxy, and the URL map. These resources are part of Traffic Director's traffic routing mechanism. The target proxy must be a target HTTP proxy.
  • The backend service, which holds configuration values such as the protocol and the associated health check, and which references the backends (MIGs or network endpoint groups) that serve traffic.
  • A health check, which provides health checking for the VMs and Google Kubernetes Engine (GKE) Pods in your deployment.
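The components above could be created with `gcloud` roughly as follows. This is a hedged sketch, not a complete setup guide; all resource names (`td-health-check`, `td-backend-service`, `td-url-map`, `td-proxy`, `td-forwarding-rule`) are hypothetical, and backends still need to be added to the backend service separately:

```shell
# Health check for the backend VMs or GKE Pods.
gcloud compute health-checks create http td-health-check --port 80

# Global backend service; Traffic Director resources use the
# INTERNAL_SELF_MANAGED load-balancing scheme.
gcloud compute backend-services create td-backend-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --protocol=HTTP \
    --health-checks=td-health-check

# URL map and target HTTP proxy for routing.
gcloud compute url-maps create td-url-map \
    --default-service=td-backend-service
gcloud compute target-http-proxies create td-proxy --url-map=td-url-map

# Global forwarding rule that defines the VIP address the application
# sends traffic to.
gcloud compute forwarding-rules create td-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --target-http-proxy=td-proxy \
    --ports=80 \
    --network=default
```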

The following diagram shows an application running on Compute Engine VMs or GKE Pods, the components, and the traffic flow in a Traffic Director deployment. It shows Traffic Director and the Cloud Load Balancing resources that it uses to determine traffic routing. An xDS API-compatible sidecar proxy (such as Envoy, as shown) runs on a client VM instance or in a Kubernetes Pod. Traffic Director serves as the control plane and uses xDS APIs to communicate directly with each proxy. In the data plane, the application sends traffic to the VIP address configured in the Google Cloud forwarding rule. The sidecar proxy intercepts the traffic and redirects it to the appropriate backend.
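The interception step in the data plane is typically implemented with netfilter rules on the host. The following is a minimal sketch under two assumptions not stated above: that the Envoy listener is on port 15001 and that Envoy runs as a dedicated `envoy` user, so that the proxy's own outbound connections are not re-intercepted:

```shell
# Assumption: Envoy's interception listener is on port 15001 and Envoy
# runs as the dedicated user "envoy". Redirect all other outbound TCP
# traffic into the proxy so it can route by VIP and port.
sudo iptables -t nat -A OUTPUT -p tcp \
    -m owner ! --uid-owner envoy \
    -j REDIRECT --to-ports 15001
```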

Traffic Director resources to be configured.

What's next