Traffic Director service discovery

Traffic Director provides service and endpoint discovery. These capabilities let you group the VMs and containers that run your code as endpoints of your services. Traffic Director monitors these services so that it can share up-to-date information with its clients. As a result, when one of your applications sends a request through its Traffic Director client, such as an Envoy sidecar proxy, the client handles the request with up-to-date information about your services.

In the context of Traffic Director, a client is application code, running on a VM or in a container, that formulates requests to send to a server. A server is application code that receives such requests. A Traffic Director client is an Envoy proxy, gRPC library, or other xDS client that is connected to Traffic Director and is part of the data plane.

In the data plane, Envoy or gRPC does the following:

  1. It examines a request and matches the request to a backend service, a resource that you configure during deployment.
  2. After the request is matched, Envoy or gRPC chooses a backend or endpoint and directs the request to that backend or endpoint.
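
The following sketch illustrates these two steps in plain Python. The names (BackendService, route) and the random endpoint choice are illustrative only; they are not a Traffic Director or Envoy API.

```python
from dataclasses import dataclass, field
import random

@dataclass
class BackendService:
    # Illustrative stand-in for a configured backend service; not a real API.
    name: str
    host: str                                      # hostname matched by routing rules
    endpoints: list = field(default_factory=list)  # healthy "ip:port" strings

def route(request_host: str, services: list) -> str:
    # Step 1: match the request to a backend service (here, by host).
    service = next(s for s in services if s.host == request_host)
    # Step 2: choose a backend or endpoint of that service.
    return random.choice(service.endpoints)

payments = BackendService(
    name="payments",
    host="payments.example.com",
    endpoints=["192.168.0.1:8080", "192.168.0.2:8080"],
)
print(route("payments.example.com", [payments]))
```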

Service discovery

Traffic Director provides service discovery. When you configure Traffic Director, you create services (backend services). You also define routing rules that specify how an outbound request (a request sent by your application code and handled by a Traffic Director client) is matched to a particular service. So when a Traffic Director client handles a request that matches a rule, it can choose the service that should receive the request.

For example:

  • You have a VM running your application. This VM has an Envoy sidecar proxy that is connected to Traffic Director and handles outbound requests on behalf of the application.
  • You configured a backend service named payments. This backend service has two network endpoint group (NEG) backends that point to the container instances that run the code for your payments service.
  • You configured a routing rule map that has a forwarding rule (with the example IP address 0.0.0.0 and port 80), a target proxy, and a URL map (with the example hostname payments.example.com) that points to the payments service.

With this configuration, when your application (on the VM) sends an HTTP request to payments.example.com on port 80, the Traffic Director client knows that this is a request destined for the payments service.
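
Conceptually, the routing information that a Traffic Director client receives for this example resembles an Envoy RouteConfiguration. The Python dict below is a hedged, simplified rendering of that shape; it mirrors Envoy's field names but is not the literal xDS payload that the control plane sends.

```python
# Simplified sketch of the routing data for the example above.
route_config = {
    "name": "payments-routes",  # illustrative name
    "virtual_hosts": [
        {
            "name": "payments",
            "domains": ["payments.example.com"],
            "routes": [
                {
                    "match": {"prefix": "/"},
                    "route": {"cluster": "payments"},  # the backend service
                }
            ],
        }
    ],
}
```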

When you use Traffic Director with proxyless gRPC services, service discovery works similarly. However, a gRPC library acting as a Traffic Director client gets information only about the services for which you specify the xDS resolver in the target URI. Envoy, by default, gets information about all services configured on the Virtual Private Cloud network specified in the Envoy bootstrap file.
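
For example, a proxyless gRPC client selects the xDS resolver by using the xds scheme in the channel's target URI. A minimal Python sketch, assuming a gRPC bootstrap file is supplied through the GRPC_XDS_BOOTSTRAP environment variable and that payments.example.com is the service from the earlier example:

```python
import grpc

# The xds: scheme tells the gRPC library to resolve this target through
# the xDS server named in the GRPC_XDS_BOOTSTRAP file (Traffic Director)
# instead of through DNS.
channel = grpc.insecure_channel("xds:///payments.example.com:80")

# Stubs are then created as usual; PaymentsStub is a hypothetical
# generated stub, shown only to complete the picture:
# stub = payments_pb2_grpc.PaymentsStub(channel)
```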

Endpoint discovery

Service discovery enables clients to know about your services. Endpoint discovery enables clients to know about the instances that are running your code.

When you create a service, you also specify the backends for that service. These are either VMs in managed instance groups (MIGs) or, typically, containers in network endpoint groups (NEGs). Traffic Director monitors these MIGs and NEGs so that it knows when instances and endpoints are created and removed.

Traffic Director continuously shares up-to-date information about these services with its clients. This enables clients to avoid sending traffic to endpoints that no longer exist. It also enables clients to learn about new endpoints and start taking advantage of the additional capacity provided by these endpoints.
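
A minimal sketch of what a client does with such an update: each push replaces the client's snapshot of the service's endpoints, so removed endpoints stop receiving traffic and new endpoints immediately join the rotation. The class below is illustrative only, not Envoy's or gRPC's actual implementation.

```python
import itertools

class EndpointSet:
    # Illustrative round-robin picker over the latest endpoint snapshot.
    def __init__(self):
        self._cycle = iter(())

    def update(self, endpoints):
        # Called when the control plane pushes a new snapshot of healthy
        # endpoints; the old set is discarded wholesale.
        self._cycle = itertools.cycle(list(endpoints))

    def pick(self):
        return next(self._cycle)

shopping_cart = EndpointSet()
shopping_cart.update(["10.0.1.5:8080", "10.0.1.6:8080"])
print(shopping_cart.pick())
```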

For example, for a service named shopping-cart with backends in two MIGs, Traffic Director returns the two healthy endpoints in MIG-1 and the three healthy endpoints in MIG-2. Beyond adding endpoints to MIGs or NEGs and setting up Traffic Director, you don't need any additional configuration to enable service discovery.

How sidecar proxy traffic interception works in Traffic Director

Traffic Director supports the sidecar proxy model. Under this model, when an application sends traffic to its destination, the traffic is intercepted by iptables and redirected to the sidecar proxy on the host where the application is running. The sidecar proxy decides how to load balance the traffic, then sends the traffic to its destination.

In the following diagram, which assumes that Traffic Director is correctly configured, Envoy is the sidecar proxy. The sidecar proxy is running on the same host as the application.

A sample service, called Web, is configured on the VIP 10.0.0.1:80, where Traffic Director can discover and load-balance it. Traffic Director learns of this setup through the forwarding rule configuration, which provides the VIP and port. The backends for the service Web are configured and functioning, but they are located outside the Compute Engine VM host shown in the diagram.

Traffic Director decides that the optimal backend for traffic to the service Web from the host is 192.168.0.1:8080.

Traffic Director host networking (diagram)

The traffic flow in the diagram is as follows:

  1. The application sends traffic to the service Web, which resolves to the VIP 10.0.0.1, on port 80.
  2. Netfilter on the host is configured so that traffic destined for 10.0.0.1:80 is redirected to 127.0.0.1:15001.
  3. The traffic arrives at 127.0.0.1:15001, the interception port of the Envoy proxy.
  4. The Envoy proxy interception listener on 127.0.0.1:15001 receives the traffic and performs a lookup for the original destination of the request (10.0.0.1:80). The lookup results in 192.168.0.1:8080 being selected as an optimal backend, as programmed by Traffic Director.
  5. Envoy establishes a connection over the network with 192.168.0.1:8080 and proxies HTTP traffic between the application and this backend until the connection is terminated.
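
Steps 2 and 4 rely on standard Linux netfilter mechanics: an iptables REDIRECT rule rewrites the destination to the interception port, and the proxy recovers the original destination from the socket with the SO_ORIGINAL_DST option. The following is a minimal Python sketch of that lookup, using the constant and sockaddr layout from linux/netfilter_ipv4.h; it is not Envoy's code.

```python
import socket
import struct

# Step 2 is performed by a netfilter rule conceptually like this one
# (values taken from the example above):
#   iptables -t nat -A OUTPUT -p tcp -d 10.0.0.1 --dport 80 \
#       -j REDIRECT --to-ports 15001
SO_ORIGINAL_DST = 80  # from linux/netfilter_ipv4.h

def original_destination(conn: socket.socket):
    """Recover the (ip, port) the application dialed before redirection."""
    # getsockopt returns a packed sockaddr_in: 2-byte family, 2-byte port
    # (network byte order), 4-byte IPv4 address, then padding.
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    port, packed_ip = struct.unpack_from("!2xH4s", raw)
    return socket.inet_ntoa(packed_ip), port
```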