Traffic Director provides advanced traffic management capabilities that give you fine-grained control over how traffic is handled. Traffic Director supports these use cases:
- Fine-grained routing of requests to (one or more) services
- Request- and response-based actions like redirects and header transformations
- Fine-tuned traffic distribution among a service's backends for improved load balancing
These advanced traffic management capabilities allow you to meet your availability and performance objectives. One of the benefits of using Traffic Director for these use cases is that you can update how traffic is managed without needing to modify your application code.
Use case examples
Advanced traffic management addresses many use cases. This section provides a few high-level examples.
You can find more examples, including sample code, in the Configuring advanced traffic management and Setting up VM-based proxyless gRPC services with advanced traffic management guides.
Fine-grained traffic routing for personalization
You can route traffic to a service based on the request's parameters. For example, you might use this capability to provide a more personalized experience for Android users. In the following diagram, Traffic Director configures your service mesh to send requests whose `user-agent` header is set to `Android` to your Android-specific service instead of your generic service.
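As a minimal sketch of this kind of header-based routing, the following URL map (with hypothetical names such as `android-service` and `generic-service`, and backend service paths abbreviated; your project would use full resource URIs) routes requests whose `user-agent` header equals `Android` to a separate service:

```shell
# Sketch only: URL map and service names are hypothetical.
cat > url-map.yaml <<'EOF'
name: td-url-map
defaultService: global/backendServices/generic-service
hostRules:
- hosts: ['*']
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultService: global/backendServices/generic-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
      headerMatches:
      - headerName: user-agent
        exactMatch: Android
    service: global/backendServices/android-service
EOF
gcloud compute url-maps import td-url-map --source=url-map.yaml --global
```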
Weight-based traffic splitting for safer deployments
Deploying a new version of an existing production service can be risky. Even after your tests pass in a test environment, you might not want to route all of your users to the new version right away.
Traffic Director allows you to define weight-based traffic splits to distribute traffic across multiple services. For example, you can send 1% of traffic to the new version of your service, monitor that everything works, then gradually increase the proportion of traffic going to the new service.
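A weight-based split like the 99/1 canary described above can be sketched as a `weightedBackendServices` route action in the URL map. The service names below are hypothetical and the backend service paths are abbreviated:

```shell
# Sketch: stable-service and canary-service are hypothetical backend services.
cat > url-map.yaml <<'EOF'
name: td-url-map
defaultService: global/backendServices/stable-service
hostRules:
- hosts: ['*']
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultService: global/backendServices/stable-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
    routeAction:
      weightedBackendServices:
      - backendService: global/backendServices/stable-service
        weight: 99
      - backendService: global/backendServices/canary-service
        weight: 1
EOF
gcloud compute url-maps import td-url-map --source=url-map.yaml --global
```

To shift more traffic to the new version, you would edit the weights and re-import the URL map; no application code changes are needed.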
Traffic mirroring for debugging
When you're debugging an issue, it might be helpful to send copies of production traffic to a debugging service. Traffic Director allows you to set up request mirroring policies so that requests are sent to one service and copies of those requests are sent to another service.
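A mirroring setup can be sketched with a `requestMirrorPolicy` in the route action. The `prod-service` and `debug-service` names below are hypothetical, and backend service paths are abbreviated:

```shell
# Sketch: every matched request goes to prod-service; a copy is also sent
# to debug-service, whose responses are ignored (fire and forget).
cat > url-map.yaml <<'EOF'
name: td-url-map
defaultService: global/backendServices/prod-service
hostRules:
- hosts: ['*']
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultService: global/backendServices/prod-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
    routeAction:
      weightedBackendServices:
      - backendService: global/backendServices/prod-service
        weight: 100
      requestMirrorPolicy:
        backendService: global/backendServices/debug-service
EOF
gcloud compute url-maps import td-url-map --source=url-map.yaml --global
```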
Fine-tuned load balancing for performance
Depending on your application characteristics, you might find that you can improve performance and availability by fine-tuning how traffic gets distributed across a service's backends. With Traffic Director, you can apply advanced load balancing algorithms so that traffic is distributed according to your needs.
The following diagram, in contrast to previous diagrams, shows both a destination backend service (Production Service) and the backends for that backend service (Virtual Machine 1, Virtual Machine 2, Virtual Machine 3). With advanced traffic management, you can configure both how a destination backend service is selected, as well as how traffic is distributed among the backends for that destination service.
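As a sketch, changing how traffic is distributed among a service's backends can be a single command. The backend service name below is hypothetical, and the `--locality-lb-policy` flag may require a recent gcloud release:

```shell
# Sketch: switch the algorithm used to pick a backend within the selected
# instance group or endpoint group from the default ROUND_ROBIN.
# "production-service" is a hypothetical backend service name.
gcloud compute backend-services update production-service \
    --global \
    --locality-lb-policy=LEAST_REQUEST
```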
How advanced traffic management works
You configure advanced traffic management using the routing rule map and backend services resources that you use when setting up Traffic Director. Traffic Director, in turn, configures your Envoy proxies and proxyless gRPC applications to enforce the advanced traffic management policies that you set up.
At a high level, you do the following:
Configure the routing rule map to do the following, based on the characteristics of the outbound request:
Select the backend service to which requests are routed.
Optionally, perform additional actions.
Configure the service (backend service) to control how traffic is distributed to backends and endpoints after a destination service is selected.
Traffic routing and actions
In Traffic Director, the routing rule map refers to the combination of the forwarding rule, target proxy, and URL map resources. All advanced traffic management capabilities related to routing and actions are configured using the URL map.
You can set up the following advanced traffic management features in your routing rule map.
Request handling
When a client sends a request, the request is handled as follows:
- The request is matched to a specific routing rule map, as follows:
  - If you're using Envoy:
    - The request's destination IP address and port are compared to the IP address and port of the forwarding rules in all routing rule maps. Only routing rule maps with forwarding rules that have the load balancing scheme `INTERNAL_SELF_MANAGED` are considered.
    - The forwarding rule that matches the request references a target HTTP or gRPC proxy, which references a URL map. This URL map contains the information used for routing and actions.
  - If you're using proxyless gRPC:
    - The request's IP address is ignored, because you can only use the `0.0.0.0` IP address when you create a forwarding rule that references a target gRPC proxy. Only routing rule maps with forwarding rules that have the load balancing scheme `INTERNAL_SELF_MANAGED` are considered.
    - The port in the target URI (for example, `xds:///foo.myservice:8080`) is compared to the port of forwarding rules with the load balancing scheme `INTERNAL_SELF_MANAGED`. The IP address of a forwarding rule is not used.
    - The forwarding rule that matches the request references a target gRPC proxy, which references a URL map. This URL map contains the information used for routing and actions.
- When the appropriate URL map is determined, the URL map is evaluated to determine the destination backend service and, optionally, apply actions.
- After the destination backend service is selected, traffic is distributed among the backends or endpoints for that destination backend service, based on configuration in the backend service resource.
The second step is described in the following section, Simple routing based on host and path. The third step is discussed in Advanced routing and actions.
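The routing rule map chain described above (forwarding rule, target proxy, URL map) can be sketched with gcloud. All resource names are hypothetical, and the backend service is assumed to already exist:

```shell
# 1. URL map: holds the routing and action configuration.
gcloud compute url-maps create td-url-map \
    --default-service=td-backend-service

# 2. Target HTTP proxy: links the forwarding rule to the URL map.
gcloud compute target-http-proxies create td-proxy \
    --url-map=td-url-map

# 3. Forwarding rule: only INTERNAL_SELF_MANAGED rules are considered by
#    Traffic Director clients; proxyless gRPC requires the 0.0.0.0 address.
gcloud compute forwarding-rules create td-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --ports=80 \
    --network=default \
    --target-http-proxy=td-proxy
```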
Simple routing based on host and path
Traffic Director supports a simplified routing scheme as well as a more advanced scheme. More advanced routing schemes, including actions, are described in the next section, Advanced routing and actions. In the simple scheme, you specify a host and, optionally, a path. The request's host and path are evaluated to determine the service to which a request should be routed.
- The request's host is the domain name portion of a URL. For example, the host portion of the URL `http://example.com/video/` is `example.com`.
- The request's path is the part of the URL that follows the hostname, for example, `/video` in `http://example.com/video`.
Setting up simple routing based on host and path
Simple routing based on host and path is set up in the routing rule map, which consists of:
- A global forwarding rule
- A target HTTP proxy or a target gRPC proxy
- A URL map
Most of the configuration is done in the URL map; after you've created the initial routing rule map, you generally only need to modify the URL map portion of the routing rule map.
The simplest rule is a default rule, in which you specify only a wildcard (`*`) host rule and a path matcher with a default service. After you create the default rule, you can add rules that specify different hosts and paths. Outbound requests are evaluated against these rules as follows.

If a request's host (for example, `example.com`) matches a host rule:
- The path matcher is evaluated next.
- Each path matcher contains one or more path rules that are evaluated against the request's path.
- If a match is found, the request gets routed to the service specified in the path rule.
- Each path matcher contains a default service to which requests are routed if the host rule matches but no path rules match.
If the request does not match any of the host rules that you've specified, it is routed to the service specified in the default rule.
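The host and path evaluation described above can be sketched as a URL map with host rules, path matchers, and path rules. All service names are hypothetical and backend service paths are abbreviated:

```shell
# Sketch: requests to example.com/video go to video-service; other
# example.com requests go to web-service; everything else falls back to
# the default rule's service.
cat > url-map.yaml <<'EOF'
name: td-url-map
defaultService: global/backendServices/default-service
hostRules:
- hosts: ['example.com']
  pathMatcher: example-matcher
pathMatchers:
- name: example-matcher
  defaultService: global/backendServices/web-service
  pathRules:
  - paths: ['/video', '/video/*']
    service: global/backendServices/video-service
EOF
gcloud compute url-maps import td-url-map --source=url-map.yaml --global
```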
For more information about the URL map resource's fields and how they work, see the urlMaps REST API page.
Advanced routing and actions
If you want to do more than route a request based on the request's host and path, you can set up advanced rules to route requests and perform actions.
At a high level, advanced routing and actions work as follows:
- As with simple routing, the request's host is compared to the host rules that you configure in the URL map. If a request's host matches a host rule, the host rule's path matcher is evaluated.
- The path matcher contains one or more route rules that are evaluated
against the request.
- These route rules are evaluated in priority order by matching the request attributes (host, path, header, and query parameters) according to specific match conditions, for example, prefix match.
- After a route rule is selected, actions may be applied. The default action is to route the request to a single destination service but you can configure other actions as well.
Advanced routing
Advanced routing is similar to the simple routing described above, except that you can specify rule priority and additional match conditions, as described below.
With advanced routing, you must specify a unique priority for each rule. This priority determines the order in which route rules are evaluated, with lower priority values taking precedence over higher priority values. After a request matches a rule, the rule is applied and other rules are ignored.
Advanced routing also supports additional match conditions. For example, you can specify that a rule matches a request only if a named header's value matches exactly or partially (for example, by prefix or suffix), matches a regular expression, or simply that the header is present.
By combining host, path, header and query parameters with priorities and match conditions, you can create highly expressive rules that fit your exact traffic management requirements.
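As a sketch, the following URL map combines priorities with a header match condition. The rule names and services are hypothetical, and backend service paths are abbreviated:

```shell
# Sketch: two route rules with explicit priorities; the lower value wins.
cat > url-map.yaml <<'EOF'
name: td-url-map
defaultService: global/backendServices/prod-service
hostRules:
- hosts: ['example.com']
  pathMatcher: api-matcher
pathMatchers:
- name: api-matcher
  defaultService: global/backendServices/prod-service
  routeRules:
  - priority: 0            # evaluated first
    matchRules:
    - prefixMatch: /api/
      headerMatches:
      - headerName: x-debug
        presentMatch: true # match if the header exists, regardless of value
    service: global/backendServices/debug-service
  - priority: 1
    matchRules:
    - prefixMatch: /api/
    service: global/backendServices/prod-service
EOF
gcloud compute url-maps import td-url-map --source=url-map.yaml --global
```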
HTTP hosts vs. gRPC hosts
If you're writing an HTTP-based application, the host is the domain name portion of the URL that the application calls. For example, the host portion of the URL `http://example.com/video/` is `example.com`.

If you're writing a gRPC-based application, the host is the name a client uses in the channel URI to connect to a specific service. For example, the host portion of the channel URI `xds:///example.com` is `example.com`.
HTTP paths vs. gRPC paths
If you're writing an HTTP-based application, the path is the part of the URL that follows the hostname, for example, `/video` in `http://example.com/video`.

If you're writing a gRPC-based application, the path is in the `:path` header of the HTTP/2 request and looks like `/SERVICE_NAME/METHOD_NAME`. For example, if you call the `Download` method on the `Example` gRPC service, the contents of the `:path` header would be `/Example/Download`.
Other gRPC headers (metadata)
gRPC supports sending metadata between the gRPC client and gRPC server to provide additional information about an RPC call. This metadata is in the form of key-value pairs that are carried as headers in the HTTP/2 request.
Actions
Traffic Director allows you to specify actions that your Envoy proxies or proxyless gRPC applications take when handling a request. The following actions can be configured using Traffic Director.
| Action (API field name) | Description |
|---|---|
| Redirects (`urlRedirect`) | Returns a configurable 3xx response code and sets the `Location` response header with the appropriate URI, replacing the host and path as specified in the redirect action. |
| URL rewrites (`urlRewrite`) | Rewrites the hostname portion of the URL, the path portion of the URL, or both, before sending a request to the selected backend service. |
| Header transformations (`headerAction`) | Adds or removes request headers before sending a request to the backend service. Can also add or remove response headers after receiving a response from the backend service. |
| Traffic mirroring (`requestMirrorPolicy`) | In addition to forwarding the request to the selected backend service, sends an identical request to the configured mirror backend service on a fire-and-forget basis. The load balancer doesn't wait for a response from the backend to which it sends the mirrored request. Mirroring is useful for testing a new version of a backend service. You can also use it to debug production errors on a debug version of your backend service, rather than on the production version. |
| Weight-based traffic splitting (`weightedBackendServices`) | Distributes traffic for a matched rule to multiple backend services, proportional to a user-defined weight assigned to each backend service. This capability is useful for configuring staged deployments or A/B testing. For example, the route action could be configured so that 99% of the traffic is sent to a service that's running a stable version of an application, while 1% of the traffic is sent to a separate service running a newer version of that application. |
| Retries (`retryPolicy`) | Configures the conditions under which the load balancer retries failed requests, how long the load balancer waits before retrying, and the maximum number of retries permitted. |
| Timeout (`timeout`) | Specifies the timeout for the selected route. The timeout is computed from the time the request is fully processed until the response is fully processed, and includes all retries. |
| Fault injection (`faultInjectionPolicy`) | Introduces errors when servicing requests to simulate failures, including high latency, service overload, service failures, and network partitioning. This feature is useful for testing the resiliency of a service to simulated faults. |
| Security policies (`corsPolicy`) | Cross-origin resource sharing (CORS) policies handle settings for enforcing CORS requests. |
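As a sketch, a single route rule can combine a primary action with several add-on actions. The names below are hypothetical and backend service paths are abbreviated:

```shell
# Sketch: one rule combining a destination (weightedBackendServices) with
# add-on actions: urlRewrite, retryPolicy, and timeout.
cat > url-map.yaml <<'EOF'
name: td-url-map
defaultService: global/backendServices/prod-service
hostRules:
- hosts: ['*']
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultService: global/backendServices/prod-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /api/
    routeAction:
      weightedBackendServices:
      - backendService: global/backendServices/prod-service
        weight: 100
      urlRewrite:
        pathPrefixRewrite: /v2/
      retryPolicy:
        retryConditions: ['5xx', 'connect-failure']
        numRetries: 3
        perTryTimeout:
          seconds: 2
      timeout:
        seconds: 15        # includes all retries
EOF
gcloud compute url-maps import td-url-map --source=url-map.yaml --global
```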
For additional information about actions and how they work, see the URL maps API reference.
In each route rule, you can specify one of the following route actions (referred to as "Primary actions" in the Google Cloud Console):
- Route traffic to a single service (`service`).
- Split traffic between multiple services (`weightedBackendServices`).
- Redirect URLs (`urlRedirect`).

In addition, you can combine any one of the previously mentioned route actions with one or more of the following route actions (referred to as "Add-on actions" in the Google Cloud Console):
- Manipulate request/response headers (`headerAction`).
- Mirror traffic (`requestMirrorPolicy`).
- Rewrite the URL host or path (`urlRewrite`).
- Retry failed requests (`retryPolicy`).
- Set a timeout (`timeout`).
- Introduce faults to a percentage of the traffic (`faultInjectionPolicy`).
- Add a CORS policy (`corsPolicy`).
Because actions are associated with specific route rules, the Envoy proxy or proxyless gRPC application can apply different actions based on the request that it is handling.
Distributing traffic among a service's backends
As discussed in Request handling, when a client handles an outbound request, it first selects a destination service. After it selects a destination service, it must determine which backend or endpoint should receive the request.
In the preceding diagram, the rule is simplified; it would typically be a host rule, a path matcher, and one or more path or route rules. The destination service is the (backend) service, and Backend 1 through Backend n actually receive and handle the request. These backends might be, for example, Compute Engine virtual machines that host your server-side application code.
By default, the client that handles the request sends it to the nearest healthy backend that has capacity. To avoid overloading a specific backend, the client load balances subsequent requests across the destination service's other backends using the round robin load balancing algorithm. In some cases, however, you might want more fine-grained control over this behavior.
Load balancing, session affinity and protecting backends
You can set the following traffic distribution policies on each service.
| Policy (API field name) | Description |
|---|---|
| Load balancing mode (`balancingMode`) | Controls how a network endpoint group or a managed instance group is selected after a destination service has been selected. You can configure the balancing mode to distribute load based on concurrent connections, request rate, and more. |
| Load balancing policy (`localityLbPolicy`) | Sets the load balancing algorithm used to distribute traffic among backends within a network endpoint group or a managed instance group. You can choose from a variety of algorithms (such as round robin, least request, and more) to optimize performance. |
| Session affinity (`sessionAffinity`) | Provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has capacity. Traffic Director supports four session affinity options: client IP address, HTTP cookie-based, HTTP header-based, and generated cookie affinity (a cookie generated by Traffic Director itself). |
| Consistent hash (`consistentHash`) | Provides soft session affinity based on HTTP headers, cookies, or other properties. |
| Circuit breakers (`circuitBreakers`) | Sets upper limits on the volume of connections and requests per connection to a backend service. |
| Outlier detection (`outlierDetection`) | Specifies the criteria to (1) remove unhealthy backends or endpoints from managed instance groups or network endpoint groups and (2) add a backend or endpoint back when it is considered healthy enough to receive traffic again. The health check associated with the service determines whether a backend or endpoint is considered healthy. |
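Circuit breaking and outlier detection are set on the backend service resource. The following is a minimal sketch; the service and health check names are hypothetical, the threshold values are illustrative, and resource paths are abbreviated:

```shell
# Sketch: cap load per backend service and eject backends that keep failing.
cat > backend-service.yaml <<'EOF'
name: production-service
loadBalancingScheme: INTERNAL_SELF_MANAGED
healthChecks:
- global/healthChecks/http-basic-check
circuitBreakers:
  maxConnections: 1000          # upper limit on concurrent connections
  maxRequestsPerConnection: 100
outlierDetection:
  consecutiveErrors: 5          # eject a backend after 5 consecutive errors
  interval:
    seconds: 10
  baseEjectionTime:
    seconds: 30
EOF
gcloud compute backend-services import production-service \
    --source=backend-service.yaml --global
```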
For additional information regarding different traffic distribution options and how they work, see the Backend services API reference.
Filtering configuration
One of Traffic Director's core responsibilities is to generate configuration and send it to various Traffic Director clients, such as Envoy proxies and proxyless gRPC applications. Traffic Director controls your service mesh by sending its clients configuration that tells them what to do; in other words, Traffic Director is the control plane.
When you create or update a configuration in Traffic Director, Traffic Director translates this configuration into a form that its clients can understand. By default, Traffic Director shares this configuration with all of its clients. In some cases, you might want to tailor which Traffic Director clients receive specific configuration; in other words, you might want to filter the configuration to specific clients.
While this is advanced functionality, the following examples illustrate when filtering configuration might be helpful:
- Your organization uses the Shared VPC networking model and multiple teams are using Traffic Director in different service projects. If you want to isolate your configuration from other service projects, you can filter the configuration so that specific Traffic Director clients receive only a subset of the configuration.
- You have a very large number of routing rules and services configured in Traffic Director and you want to avoid sending a massive amount of configuration to every Traffic Director client. Keep in mind that a client that needs to evaluate an outbound request using a large, complex configuration may be less performant than a client that only needs to evaluate a request using a streamlined configuration.
Configuration filtering is based on the concept of metadata filters:
- When a Traffic Director client connects, it presents information from its bootstrap file to Traffic Director.
- This information contains the contents of metadata fields, in the form of key/value pairs, that you specify in the bootstrap file when you deploy your Envoy proxies and/or gRPC applications.
- You can add metadata filters to the forwarding rule and/or route rule(s) that you configure in your routing rule map.
- When you add metadata filters to these resources, Traffic Director only shares the configuration with clients that present metadata that matches the metadata filter conditions.
You can apply metadata filters on the forwarding rule. In this case the routing rule map and associated services are only shared with Traffic Director clients that present matching metadata.
Alternatively, you can apply metadata filters on specific route rules. In this
case, Traffic Director only shares the specific routing rule and associated
services with Traffic Director clients that present matching metadata.
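As a sketch, a metadata filter on a forwarding rule might look like the following. All names and labels are hypothetical, resource paths are abbreviated, and the `forwarding-rules import` command may require a recent gcloud release:

```shell
# Sketch: only clients whose bootstrap metadata includes app=shopping-cart
# receive configuration from this routing rule map.
cat > forwarding-rule.yaml <<'EOF'
name: td-forwarding-rule
IPAddress: 0.0.0.0
portRange: '80'
loadBalancingScheme: INTERNAL_SELF_MANAGED
network: global/networks/default
target: global/targetHttpProxies/td-proxy
metadataFilters:
- filterMatchCriteria: MATCH_ALL
  filterLabels:
  - name: app
    value: shopping-cart
EOF
gcloud compute forwarding-rules import td-forwarding-rule \
    --source=forwarding-rule.yaml --global
```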
For information on how to configure metadata filters, see
Setting up config filtering based on MetadataFilter
match.
Session affinity
This example shows you how to enable session affinity.
After configuring managed instance groups or network endpoint groups, update your backend service to enable client IP session affinity.

```
gcloud beta compute backend-services update BACKEND_SERVICE_NAME \
    --health-checks=HEALTH_CHECK \
    --protocol=TCP \
    --session-affinity=CLIENT_IP \
    --global
```
Create a new forwarding rule, replacing PORT with the port.

```
gcloud beta compute forwarding-rules create td-vm-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=[VIP] \
    --address-region=us-central1 \
    --target-tcp-proxy=td-vm-proxy \
    --ports PORT \
    --network default
```
At this point, Traffic Director is configured to load balance traffic for the VIP specified in the forwarding rule across backends in the managed instance group.
Limitations
Some features described in this document cannot be configured for proxyless gRPC services with Traffic Director. For supported features, see the Routing and traffic management and Load balancing tables, among others, in Traffic Director features.
For additional limitations that apply to proxyless gRPC applications with Traffic Director, see the Limitations section in the overview guide.
What's next
- For more information about setting up these features with sidecar proxy deployments, see Configuring advanced traffic management.
- For more information about using these features with proxyless gRPC deployments, see Setting up VM-based proxyless gRPC services with advanced traffic management.