Traffic Director service discovery

Traffic Director provides service and endpoint discovery. These features let you group the virtual machine (VM) instances and container instances that run your code as endpoints of your services.

Traffic Director monitors these services so that it can share up-to-date information with its clients. Therefore, when one of your applications uses its Traffic Director client (such as an Envoy sidecar proxy) to send a request, it benefits from up-to-date information about your services.

In the context of Traffic Director, a client is application code that runs on a VM or container and formulates requests to send to a server. A server is application code that receives such requests. A Traffic Director client is an Envoy proxy, a gRPC library, or another xDS-compatible client that is connected to Traffic Director and is part of the data plane.

In the data plane, Envoy or gRPC does the following:

  1. It examines a request and matches the request to a backend service, a resource that you configure during deployment.
  2. After the request is matched, Envoy or gRPC chooses a backend or endpoint and directs the request to that backend or endpoint.
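The two data-plane steps can be sketched as follows. This is a minimal illustration in Python, not Envoy or gRPC internals; the service names, hostnames, and endpoint addresses are hypothetical.

```python
import itertools

# Hypothetical backend services, as configured during deployment:
# each matched service name maps to its list of endpoints.
BACKEND_SERVICES = {
    "payments": ["10.4.0.5:8080", "10.4.0.6:8080"],
    "shopping-cart": ["10.4.1.7:8080"],
}

# One round-robin iterator per backend service (simple load balancing).
_rr = {name: itertools.cycle(eps) for name, eps in BACKEND_SERVICES.items()}

def route(host):
    """Step 1: match the request to a backend service by hostname."""
    service = {"payments.example.com": "payments",
               "cart.example.com": "shopping-cart"}.get(host)
    if service is None:
        raise LookupError(f"no route for host {host!r}")
    # Step 2: choose a backend endpoint and direct the request to it.
    return service, next(_rr[service])

service, endpoint = route("payments.example.com")
# service is "payments"; repeated calls alternate between its endpoints
```

The essential shape is match first, then choose: the routing decision selects a service, and only then does load balancing pick a concrete endpoint within it.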

Service discovery

Traffic Director provides service discovery. When you configure Traffic Director, you create backend services. You also define routing rules that specify how an outbound request (a request sent by your application code and handled by a Traffic Director client) is matched to a particular service. When a Traffic Director client handles a request that matches a rule, it can choose the service that should receive the request.

For example:

  • You have a VM running your application. This VM has an Envoy sidecar proxy that is connected to Traffic Director and handles outbound requests on behalf of the application.
  • You configured a backend service named payments. This backend service has two network endpoint group (NEG) backends that point to various container instances that run the code for your payments service.
  • You configured a routing rule map that has a forwarding rule (with example IP address 0.0.0.0 and port 80), a target proxy, and a URL map (with example hostname payments.example.com that points to the payments service).

With this configuration, when your application (on the VM) sends an HTTP request to payments.example.com on port 80, the Traffic Director client knows that this request is destined for the payments service.
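The routing rule map from this example can be sketched as a pair of lookups: the forwarding rule matches on destination IP address and port, and the URL map then matches the hostname. This is an illustrative Python model of the matching logic, not the actual configuration API; the tables mirror the example values above.

```python
# Routing rule map from the example: a forwarding rule (IP 0.0.0.0,
# port 80) plus a URL map that sends payments.example.com to the
# backend service named "payments".
FORWARDING_RULE = {"ip": "0.0.0.0", "port": 80}  # 0.0.0.0 matches any address
URL_MAP = {"payments.example.com": "payments"}

def match_service(dest_ip, dest_port, host):
    """Return the backend service for an outbound request, or None."""
    # The forwarding rule matches on destination address and port;
    # 0.0.0.0 acts as a wildcard address.
    if FORWARDING_RULE["ip"] not in ("0.0.0.0", dest_ip):
        return None
    if dest_port != FORWARDING_RULE["port"]:
        return None
    # The URL map then matches the request's hostname.
    return URL_MAP.get(host)

# An HTTP request to payments.example.com on port 80 matches "payments".
```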

When you use Traffic Director with proxyless gRPC services, service discovery works similarly. However, a gRPC library acting as a Traffic Director client only gets information about the services for which you specify an xDS resolver. By default, Envoy gets information about all services configured on the Virtual Private Cloud (VPC) network specified in the Envoy bootstrap file.

Endpoint discovery

Service discovery enables clients to know about your services. Endpoint discovery enables clients to know about the instances that are running your code.

When you create a service, you also specify the backends for that service. These backends are either VMs in managed instance groups (MIGs) or containers in NEGs. Traffic Director monitors these MIGs and NEGs so that it knows when instances and endpoints are created and removed.

Traffic Director continuously shares up-to-date information about these services with its clients. This information enables clients to avoid sending traffic to endpoints that no longer exist. It also enables clients to learn about new endpoints and take advantage of the additional capacity that these endpoints provide.
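The effect of these continuous updates can be sketched as follows. This is a simplified model, not the xDS protocol: each update from the control plane replaces the client's previous endpoint set, so removed endpoints stop receiving traffic and new endpoints become eligible immediately. The addresses are hypothetical.

```python
class EndpointWatcher:
    """Minimal sketch of a client tracking a service's healthy endpoints."""

    def __init__(self):
        self.endpoints = set()

    def on_update(self, healthy_endpoints):
        """Apply the latest endpoint list pushed by the control plane."""
        self.endpoints = set(healthy_endpoints)

    def pick(self):
        """Choose any currently known healthy endpoint."""
        if not self.endpoints:
            raise RuntimeError("no healthy endpoints")
        return min(self.endpoints)  # deterministic choice for illustration

watcher = EndpointWatcher()
watcher.on_update(["10.4.0.5:8080", "10.4.0.6:8080"])
watcher.on_update(["10.4.0.6:8080", "10.4.0.9:8080"])  # .5 removed, .9 added
```

After the second update, the client can no longer pick the removed endpoint, and the new endpoint is already available for selection.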

For example, for a service named shopping-cart backed by two MIGs, Traffic Director returns the two healthy endpoints in MIG-1 and the three healthy endpoints in MIG-2. Beyond adding endpoints into MIGs or NEGs and setting up Traffic Director, you don't need any additional configuration to enable service discovery with Traffic Director.

Sidecar proxy traffic interception in Traffic Director

Traffic Director supports the sidecar proxy model. Under this model, when an application sends traffic to its destination, iptables rules intercept the traffic and redirect it to the sidecar proxy on the host where the application is running. The sidecar proxy decides how to load balance the traffic, and then sends the traffic to its destination.

In the following diagram, which assumes that Traffic Director is correctly configured, Envoy is the sidecar proxy. The sidecar proxy is running on the same host as the application.

A sample service called Web is configured on virtual IP address (VIP) 10.0.0.1:80, where Traffic Director can discover and load balance it. Traffic Director discovers the setup through forwarding rule configuration, which provides the VIP and port. The backends for the service Web are configured and functioning, but they are located outside the Compute Engine VM host in the diagram.

Traffic Director decides that the optimal backend for traffic to the service Web from the host is 192.168.0.1:8080.

Traffic Director host networking

The traffic flow in the diagram is as follows:

  1. The application sends traffic to the service Web, which resolves to the IP address 10.0.0.1 on port 80.
  2. The netfilter on the host is configured so that traffic destined to 10.0.0.1:80 is redirected to 127.0.0.1:15001.
  3. Traffic is redirected to 127.0.0.1:15001, the interception port of the Envoy proxy.
  4. The Envoy proxy interception listener on 127.0.0.1:15001 receives the traffic and performs a lookup for the original destination of the request (10.0.0.1:80). The lookup results in 192.168.0.1:8080 being selected as an optimal backend, as programmed by Traffic Director.
  5. Envoy establishes a connection over the network with 192.168.0.1:8080 and proxies HTTP traffic between the application and this backend until the connection is terminated.
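The redirect-and-lookup flow above can be simulated with two small tables. This is an illustrative Python sketch using the example's addresses: the redirect table stands in for the host's netfilter rules, and the lookup table stands in for the routing state that Traffic Director programs into Envoy.

```python
# Steps 2-3: netfilter redirects VIP traffic to Envoy's interception port.
NETFILTER_REDIRECT = {("10.0.0.1", 80): ("127.0.0.1", 15001)}
# Step 4: Envoy's lookup from original destination to optimal backend,
# as programmed by Traffic Director.
ORIGINAL_DST_BACKENDS = {("10.0.0.1", 80): ("192.168.0.1", 8080)}

def send(dest_ip, dest_port):
    """Return the address a connection from the host actually reaches."""
    # Steps 1-3: the application's traffic to the VIP is redirected to
    # the Envoy interception port instead of leaving the host directly.
    proxy = NETFILTER_REDIRECT.get((dest_ip, dest_port))
    if proxy is None:
        return (dest_ip, dest_port)  # no interception; direct connection
    # Step 4: Envoy looks up the original destination and selects the
    # optimal backend.
    backend = ORIGINAL_DST_BACKENDS[(dest_ip, dest_port)]
    # Step 5: Envoy connects to the backend and proxies the traffic.
    return backend

# Traffic sent to 10.0.0.1:80 ends up at the backend 192.168.0.1:8080.
```

Note that the application never learns about the backend address; it only ever connects to the VIP, and the interception plus proxying happen transparently on the host.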

What's next