Traffic Director with proxyless gRPC services overview
This guide provides you with an overview of the architecture of Traffic Director with proxyless gRPC services, including use cases and architecture patterns. The guide covers the older Traffic Director APIs. For information about the service routing APIs, see the overview.
Traffic Director's managed control plane enables global routing, load balancing, and regional failover for service mesh and load-balancing use cases. This includes deployments that incorporate sidecar proxies and gateway proxies. Traffic Director support for proxyless gRPC applications offers an additional deployment model in which gRPC applications can participate in a service mesh without needing a sidecar proxy.
In a typical example, a gRPC client establishes a connection with a gRPC server that hosts your backend logic. Traffic Director gives your gRPC clients information about which servers to contact, how to load balance requests to multiple instances of a server, and what to do with requests if a server is not running.
For a complete overview of how Traffic Director works, see the Traffic Director overview.
How Traffic Director works with gRPC applications
Traffic Director configures gRPC clients that use a supported gRPC version, similar to how it configures sidecar proxies such as Envoy. However, proxyless gRPC applications connected directly to Traffic Director don't need sidecar proxies. Instead, Traffic Director uses open source xDS APIs to configure gRPC applications directly. These gRPC applications act as xDS clients, connecting to Traffic Director's global control plane. After they're connected, gRPC applications receive dynamic configuration from the control plane, enabling endpoint discovery, load balancing, regional failover, and health checks. This approach enables additional Traffic Director deployment patterns.
In the first illustration, a service mesh is enabled by using a sidecar proxy.
To configure a sidecar proxy, Traffic Director uses xDS APIs. Clients communicate with the server through the sidecar proxy.
In the second illustration, a service mesh is enabled without a sidecar proxy by using a proxyless gRPC client.
If you are deploying only gRPC services that Traffic Director configures, you can create a service mesh without deploying any proxies at all. This makes it easier to bring service mesh capabilities to your gRPC applications.
Name resolution scheme
Name resolution works for proxyless deployments in the following ways:
- You set `xds` as the name resolution scheme in the gRPC client channel when connecting to a service. The target URI is formatted as `xds:///hostname:port`. When the port is not specified, the default value is `80`.
- The gRPC client resolves the `hostname:port` in the target URI by sending a listener discovery service (LDS) request to Traffic Director.
- Traffic Director looks up the configured forwarding rules that have a matching port. It then looks up the corresponding URL map for a host rule that matches `hostname:port` and returns the associated routing configuration to the gRPC client.
The host rules that you configure in Traffic Director must be unique across all URL maps. For example, `example.hostname`, `example.hostname:80`, and `example.hostname:8080` are all different rules.
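The scheme and default-port behavior described above can be sketched in a few lines. This parser is illustrative only, and the function name is ours; it is not gRPC's actual resolver code:

```python
def parse_xds_target(uri):
    """Split an xds:/// target URI into (hostname, port).

    When no port is specified, the default is 80, as described above.
    Illustrative sketch only; not gRPC's actual implementation.
    """
    prefix = "xds:///"
    if not uri.startswith(prefix):
        raise ValueError("not an xds target URI: " + uri)
    target = uri[len(prefix):]
    host, sep, port = target.rpartition(":")
    if sep and port.isdigit():
        return host, int(port)
    return target, 80

print(parse_xds_target("xds:///myservice:8080"))  # ('myservice', 8080)
print(parse_xds_target("xds:///myservice"))       # ('myservice', 80)
```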
Name resolution with different deployment types
The name resolution scheme is different for proxyless deployments and deployments that use Envoy proxies.
In a proxyless deployment, the gRPC client uses the `xds` name resolution scheme to connect to a service, allowing the client to receive the service configuration from Traffic Director. The gRPC client then communicates directly with the gRPC server.
You can combine sidecar proxy and proxyless deployment patterns for increased flexibility. Combining patterns is especially helpful when your organization and network support multiple teams with different feature requirements, operational needs, and gRPC versions.
In the following illustration, both proxyless gRPC clients and gRPC clients with a sidecar proxy communicate with a gRPC server. The gRPC clients with sidecar proxies use the `dns` name resolution scheme.
Use cases
The following use cases help you understand when you might want to use Traffic Director with proxyless gRPC applications. Your deployment can include proxyless gRPC applications, gRPC applications with sidecar proxies, or a mix of both.
Resource efficiency in a large-scale service mesh
If your service mesh includes hundreds or thousands of clients and backends, you might find that the total resource consumption from running sidecar proxies starts to add up. When you use proxyless gRPC applications, you don't need to introduce sidecar proxies to your deployment. A proxyless approach can result in efficiency gains.
High-performance gRPC applications
For some use cases, every millisecond of request and response latency matters. In such a case, you might find reduced latency when you use a proxyless gRPC application, instead of passing every request through a client-side sidecar proxy and, potentially, a server-side sidecar proxy.
Service mesh for environments where you can't deploy sidecar proxies
In some environments, you might not be able to run a sidecar proxy as an additional process alongside your client or server application. Or, you might not be able to configure a machine's network stack for request interception and redirection. In this case, you can use Traffic Director with proxyless gRPC applications to introduce service mesh capabilities to your gRPC applications.
Heterogeneous service mesh
Because both proxyless gRPC applications and Envoy communicate with Traffic Director, your service mesh can include both deployment models. Including both models enables you to satisfy the particular operational, performance, and feature needs of different applications and different development teams.
Migrate from a service mesh with proxies to a mesh without proxies
If you already have a gRPC application with a sidecar proxy that Traffic Director configured, you can transition to a proxyless gRPC application.
When a gRPC client is deployed with a sidecar proxy, it uses DNS to resolve the hostname that it is connecting to. The sidecar proxy intercepts traffic to provide service mesh functionality.
You can define whether a gRPC client uses the proxyless route or the sidecar proxy route to communicate with a gRPC server by modifying the name resolution scheme that it uses. Proxyless clients use the `xds` name resolution scheme, while sidecar proxies use the `dns` name resolution scheme. Some gRPC clients might even use the proxyless route when talking to one gRPC server, but use the sidecar proxy route when talking to another gRPC server. This lets you gradually migrate to a proxyless deployment.
To migrate from a service mesh with proxies to a mesh without proxies, you create a new Traffic Director service that your proxyless gRPC client uses. You can use the same APIs to configure Traffic Director for the existing and new versions.
Architecture and resources
The configuration data model for proxyless gRPC services extends the Traffic Director configuration model, with some additions and limitations that are described in this guide. Some of these limitations are temporary because proxyless gRPC does not support all of Envoy's features, and others are inherent to using gRPC. For example, HTTP redirects are not supported with gRPC. To help you understand the configuration model in this guide, we recommend that you familiarize yourself with Traffic Director concepts and configuration.
The following diagram shows the resources that you must configure for proxyless gRPC applications.
Routing rule maps
A routing rule map defines how requests are routed in the mesh. It consists of a forwarding rule, a target gRPC proxy, and a URL map. Routing rule maps apply only to deployments that use the load balancing APIs. They do not apply with the service routing APIs or Gateway APIs.
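A minimal sketch of how these three resources reference one another, using simplified dicts with placeholder names rather than full Compute Engine API bodies:

```python
# Placeholder resource names; shapes loosely follow the Compute Engine
# REST API fields discussed in this guide.
url_map = {
    "name": "grpc-url-map",
    "hostRules": [{"hosts": ["myservice:8080"], "pathMatcher": "matcher1"}],
}

target_grpc_proxy = {
    "name": "grpc-proxy",
    "urlMap": "global/urlMaps/grpc-url-map",  # points at the URL map
    "validateForProxyless": True,
}

forwarding_rule = {
    "name": "grpc-forwarding-rule",
    "IPAddress": "0.0.0.0",    # required when validateForProxyless is TRUE
    "portRange": "8080",       # only the port is used for proxyless lookup
    "target": "global/targetGrpcProxies/grpc-proxy",
    "loadBalancingScheme": "INTERNAL_SELF_MANAGED",
}
```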
Typically, you create the forwarding rule by using the virtual IP address and port of the service that you are configuring. A gRPC client that uses the `xds` name resolution scheme does not perform a DNS lookup to resolve the hostname in the channel URI. Instead, such a client resolves the `hostname[:port]` in the target URI by sending an LDS request to Traffic Director. There is no DNS lookup involved, and a DNS entry for the hostname is not required.
As a result, Traffic Director uses only the port specified in the URI to look up the forwarding rule, ignoring the IP address. The default value of the port is 80. Traffic Director then looks for a matching host rule in the URL map associated with the target proxy referenced by the forwarding rule.
Therefore, a forwarding rule that references a target gRPC proxy with the `validateForProxyless` field set to `TRUE` must have its IP address set to `0.0.0.0`. When `validateForProxyless` is set to `TRUE`, configurations that specify an IP address other than `0.0.0.0` are rejected. This check does not detect duplicate forwarding rules with the same port in other routing rule maps.
Note the following:
- The load-balancing scheme for the forwarding rule must be `INTERNAL_SELF_MANAGED`.
- You can have multiple forwarding rules, but the `IP:port` of each forwarding rule must be unique, and the port must match the port specified in the host rule.
- If more than one forwarding rule has a port that matches the port in the target URI, Traffic Director looks for a matching `hostname[:port]` in the host rules of the URL maps referenced by all such forwarding rules. Traffic Director looks for an exact match between the `hostname[:port]` in the host rule and the `hostname[:port]` specified in the target URI. Host rules and default rules that contain wildcard characters are skipped.
If more than one match is found, the behavior is undefined and routing can be unpredictable. This generally happens when both of the following conditions are met:
- The same hostname is used across multiple URL maps.
- Multiple forwarding rules with the load-balancing scheme `INTERNAL_SELF_MANAGED` specify the same port.
For this reason, we recommend that you do not reuse the same hostname across multiple URL maps that are referenced by forwarding rules that specify the same port.
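The lookup and ambiguity rules described above can be sketched as follows. The data shapes and the function name are illustrative stand-ins, not an actual Traffic Director API:

```python
def resolve_route(target, forwarding_rules):
    """Sketch of the proxyless lookup.

    `target` is the hostname[:port] part of the xds:/// URI; each
    forwarding rule is a simplified dict such as
    {"port": 8080, "host_rules": {"myservice:8080": "backend-service-a"}}.
    """
    # The default port for the forwarding-rule lookup is 80.
    port = int(target.rsplit(":", 1)[1]) if ":" in target else 80
    matches = []
    for rule in forwarding_rules:
        if rule["port"] != port:
            continue
        for host_rule, service in rule["host_rules"].items():
            if "*" in host_rule:
                continue  # host rules with wildcards are skipped
            if host_rule == target:  # exact string match required
                matches.append(service)
    if len(matches) > 1:
        # Same hostname on the same port in multiple URL maps: undefined.
        raise LookupError("ambiguous host rule: " + target)
    return matches[0] if matches else None
```

Raising an error on multiple matches is our choice for the sketch; the documented behavior in that situation is simply undefined.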
Target gRPC proxy
Target proxies point to URL maps, which in turn contain rules used to route traffic to the correct backend. When you configure Traffic Director for gRPC applications, use a target gRPC proxy, not a target HTTP proxy, regardless of whether you are using proxyless gRPC applications or gRPC applications that use Envoy.
Target gRPC proxies have a field called `validateForProxyless` in the REST API. The default value is `FALSE`. Setting this field to `TRUE` enables configuration checks so that you do not accidentally enable a feature that is not compatible with proxyless gRPC.
In the Google Cloud CLI, the flag `--validate-for-proxyless` is the equivalent of the `validateForProxyless` field.
Because Traffic Director support for proxyless gRPC applications does not include the full range of capabilities that are available to gRPC applications with a sidecar proxy, you can use this field to ensure that an invalid configuration, which might be specified in the URL map or backend service, is rejected. Configuration validation is done based on the features that Traffic Director supports with the most recent version of gRPC.
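As an illustration of the kind of check this enables, the following sketch flags a URL map that uses `urlRedirect` (HTTP redirects are not supported with gRPC, as noted earlier). The real validation happens server-side in the API when `validateForProxyless` is `TRUE`; this helper is only a simplified stand-in:

```python
def check_proxyless_compatible(url_map):
    """Return a list of problems that validateForProxyless-style
    checking would flag. Simplified stand-in for the server-side
    validation; it covers only the redirect case as an example.
    """
    problems = []
    for matcher in url_map.get("pathMatchers", []):
        for rule in matcher.get("routeRules", []):
            if "urlRedirect" in rule:
                # HTTP redirects are not supported with proxyless gRPC.
                problems.append("routeRules.urlRedirect is not supported")
    return problems
```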
URL map
The URL map defines the routing rules that your proxyless gRPC clients use to send traffic. The URL map contains one or more host rules in the format `hostname:port`. Each of these host rules resolves to a service.
When you configure your gRPC client, you specify the target URI for the service that the client needs to contact. This URI also uses the `xds` name resolution scheme and corresponds to a host rule entry in the URL map.
For example, if you configure the target URI `xds:///myservice:8080` in your gRPC client, Traffic Director sends it the configuration that corresponds to the URL map host rule for `myservice:8080`. This configuration includes, among other information, a list of endpoints, each of which is an `IP address:port` pair corresponding to a specific gRPC server instance.
If you don't specify a port in the target URI, `80` is used as the default value. This means the following:
- The target URI `xds:///myservice` matches the URL map host rule `myservice`.
- The target URI `xds:///myservice:80` matches the URL map host rule `myservice:80`.
- The target URI `xds:///myservice` does not match the URL map host rule `myservice:80`.
- The target URI `xds:///myservice:80` does not match the URL map host rule `myservice`.
The string in the target URI and the string in the URL map host rule must be identical.
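This exact-string matching rule can be checked with a tiny sketch; the helper name is ours, not a gRPC API:

```python
def host_rule_matches(target_uri, host_rule):
    """Exact string match between the hostname[:port] part of an
    xds:/// target URI and a URL map host rule. No default-port
    normalization is applied; the strings must be identical."""
    assert target_uri.startswith("xds:///")
    return target_uri[len("xds:///"):] == host_rule

assert host_rule_matches("xds:///myservice", "myservice")
assert host_rule_matches("xds:///myservice:80", "myservice:80")
assert not host_rule_matches("xds:///myservice", "myservice:80")
assert not host_rule_matches("xds:///myservice:80", "myservice")
```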
For more information, see URL map limitations.
Health checks
A gRPC health check can check the status of a gRPC service that is running on a backend virtual machine (VM) instance or a network endpoint group (NEG).
If a gRPC health check cannot be used because your gRPC server does not implement the gRPC health checking protocol, use a TCP health check instead. Do not use an HTTP, HTTPS, or HTTP/2 health check.
Backend service
The backend service defines how a gRPC client communicates with a gRPC server. When you create a backend service for gRPC, set the backend service's protocol field to `GRPC`.
A backend service with the protocol set to `GRPC` can also be accessed by gRPC applications through a sidecar proxy. In that situation, the gRPC client must not use the `xds` name resolution scheme.
In all Traffic Director deployments, the load-balancing scheme for the backend service must be `INTERNAL_SELF_MANAGED`.
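A minimal backend service body along these lines might look like the following; the resource names are placeholders, and the dict is a simplified stand-in for the Compute Engine REST API body:

```python
backend_service = {
    "name": "grpc-backend-service",                 # placeholder name
    "protocol": "GRPC",                             # use GRPC, not HTTP/HTTP2
    "loadBalancingScheme": "INTERNAL_SELF_MANAGED", # required for Traffic Director
    # gRPC or TCP health check, as discussed above (placeholder path):
    "healthChecks": ["global/healthChecks/grpc-health-check"],
}
```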
What's next
- To learn about the service routing APIs and how they work, see the overview.
- To learn about routing rule maps and how they manage traffic in Traffic Director deployments, see Traffic Director routing rule maps overview.
- To prepare to configure Traffic Director with proxyless gRPC applications, see Preparing to set up Traffic Director with proxyless gRPC services.
- To learn about limitations that apply to proxyless gRPC deployments, see Limits and limitations with proxyless gRPC.