This guide provides you with an overview of the architecture of Traffic Director with proxyless gRPC services, including use cases and architecture patterns.
Overview
Traffic Director's managed control plane enables global routing, load balancing, and regional failover for service mesh and load balancing use cases. This includes deployments incorporating sidecar proxies and gateway proxies. Traffic Director support for proxyless gRPC applications offers an additional deployment model in which gRPC applications can participate in a service mesh without needing a sidecar proxy. For a complete overview of how Traffic Director works, see Traffic Director overview.
In a typical example, a gRPC client establishes a connection with a gRPC server that hosts your backend logic. Traffic Director gives your gRPC clients information about which servers to contact, how to load balance requests to multiple instances of a server, and what to do with requests if a server is not running.
How Traffic Director works with gRPC applications
Traffic Director configures gRPC clients that use a supported gRPC version, similar to the way it configures sidecar proxies such as Envoy. However, proxyless gRPC applications connected directly to Traffic Director don't need sidecar proxies. Instead, Traffic Director uses open source xDS v2 APIs to configure gRPC applications directly. These gRPC applications act as xDS clients, connecting to Traffic Director's global control plane. After they're connected, gRPC applications receive dynamic configuration from the control plane, enabling endpoint discovery, load balancing, regional failover, health checks, and more.
This approach enables additional Traffic Director deployment patterns. In the first illustration below, a service mesh is enabled using a sidecar proxy.
Traffic Director configures a sidecar proxy using xDS v2 APIs. The gRPC client communicates with the gRPC server through the sidecar proxy.
In the second illustration, a service mesh is enabled without a sidecar proxy, using a proxyless gRPC client:
If you are deploying only gRPC services configured by Traffic Director, you can create a service mesh without deploying any proxies at all. This makes it easier to bring service mesh capabilities to your gRPC applications.
Name resolution scheme
Here is how name resolution works for proxyless deployments:
- You set `xds` as the name resolution scheme in the gRPC client channel when connecting to a service. The target URI is formatted as `xds:///hostname:port`. When the port is not specified, the default value is 80, for example, in the target URI `xds:///foo.myservice`.
- The gRPC client resolves the `hostname:port` in the target URI by sending an LDS request to Traffic Director.
- Traffic Director looks up the configured forwarding rules that have a matching port. Then it looks up the corresponding URL map for a host rule that matches `hostname:port` and returns the associated routing configuration to the gRPC client.

The host rules you configure in Traffic Director must be unique across all URL maps. For example, `foo.myservice`, `foo.myservice:80`, and `foo.myservice:8080` are all different rules.
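The resolution steps above can be sketched in a few lines. The following illustrative Python function (a sketch, not the actual gRPC resolver implementation) splits an `xds:///` target URI into the exact `hostname:port` string that is matched against host rules, and the port, defaulting to 80, that Traffic Director uses to look up a forwarding rule:

```python
# Illustrative sketch (not the actual gRPC resolver): split an
# "xds:///hostname[:port]" target URI into the exact hostname:port string
# matched against host rules, and the port (default 80) that
# Traffic Director uses to look up a forwarding rule.
from urllib.parse import urlsplit

def parse_xds_target(target_uri: str) -> tuple[str, int]:
    parts = urlsplit(target_uri)
    if parts.scheme != "xds":
        raise ValueError("expected an xds:/// target URI")
    hostname_port = parts.path.lstrip("/")
    if ":" in hostname_port:
        port = int(hostname_port.rsplit(":", 1)[1])
    else:
        port = 80  # default when the target URI omits the port
    return hostname_port, port

print(parse_xds_target("xds:///foo.myservice"))       # ('foo.myservice', 80)
print(parse_xds_target("xds:///foo.myservice:8080"))  # ('foo.myservice:8080', 8080)
```

Note that the port default applies only to the forwarding-rule lookup; host-rule matching compares the `hostname:port` string verbatim, which is why the three example rules above are all distinct.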
Name resolution with different deployment types
The name resolution scheme is different for proxyless deployments and those that use Envoy proxies.
The gRPC client uses the `xds` name resolution scheme to connect to a service, allowing the client to receive the service configuration from Traffic Director. The gRPC client then communicates directly with the gRPC server.
You can combine sidecar and proxyless deployment patterns for increased flexibility. This is especially helpful when your organization and network support multiple teams with different feature requirements, operational needs, and gRPC versions.
In the following illustration, both proxyless gRPC clients and gRPC clients with a sidecar proxy communicate with a gRPC server.
The gRPC clients with sidecar proxies use the `dns` name resolution scheme.
Use cases
These examples illustrate how you can adopt a service mesh with your proxyless gRPC services.
The following use cases help you understand when you might want to use Traffic Director with proxyless gRPC applications. Keep in mind that your deployment can include proxyless gRPC applications, gRPC applications with sidecars, or a mix of both.
Resource efficiency in a large-scale service mesh
If your service mesh includes hundreds or thousands of clients and backends, you might find that the total resource consumption from running sidecar proxies starts to add up. When you use proxyless gRPC applications, you don't need to introduce sidecar proxies to your deployment. A proxyless approach can result in efficiency gains.
High-performance gRPC applications
For some use cases, every millisecond of request and response latency matters. In such a case, you might find reduced latency when you use a proxyless gRPC application, instead of passing every request through a client-side sidecar proxy and, potentially, a server-side sidecar proxy.
Service mesh for environments where you can't deploy sidecar proxies
In some environments, you might not be able to run a sidecar proxy as an additional process alongside your client or server application. Or, you might not be able to configure a machine's network stack for request interception and redirection, for example, using iptables. In this case, you can use Traffic Director with proxyless gRPC applications to introduce service mesh capabilities to your gRPC applications.
Heterogeneous service mesh
Because both proxyless gRPC applications and Envoy communicate with Traffic Director, your service mesh can include both deployment models. This enables you to satisfy the particular operational, performance, and feature needs of different applications and different development teams.
Migrating from a service mesh with proxies to a mesh without proxies
If you already have a gRPC application with a sidecar configured by Traffic Director, you can transition to a proxyless gRPC application.
When a gRPC client is deployed with a sidecar proxy, it uses DNS to resolve the hostname that it is connecting to. Traffic is intercepted by the sidecar proxy to provide service mesh functionality.
You can define whether a gRPC client uses the proxyless route or the sidecar route to communicate with a gRPC server by modifying the name resolution scheme that it uses. Proxyless clients use the `xds` name resolution scheme, while sidecar proxies use the `dns` name resolution scheme. Some gRPC clients may even use the proxyless route when talking to one gRPC server, but the sidecar route when talking to another gRPC server. This allows you to gradually migrate to a proxyless deployment.
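This per-service choice can be sketched with a hypothetical helper. The service names and the `MIGRATED_TO_PROXYLESS` set below are made-up examples, not part of any Traffic Director or gRPC API:

```python
# Hypothetical helper illustrating a gradual migration: services in the
# (made-up) MIGRATED_TO_PROXYLESS set get the proxyless xds route, while
# the rest keep the sidecar dns route.
MIGRATED_TO_PROXYLESS = {"payments"}

def channel_target(service: str, port: int = 80) -> str:
    """Build the gRPC channel target URI for a service."""
    scheme = "xds" if service in MIGRATED_TO_PROXYLESS else "dns"
    return f"{scheme}:///{service}:{port}"

print(channel_target("payments", 8080))  # xds:///payments:8080 (proxyless route)
print(channel_target("inventory"))       # dns:///inventory:80 (sidecar route)
```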
To do this, you create a new Traffic Director service that your proxyless gRPC client uses. You can configure Traffic Director for the existing and new versions using the same APIs.
Architecture and resources
The configuration data model for proxyless gRPC services extends the Traffic Director configuration model, with some additions and limitations that are described in this guide. Some of these limitations are temporary, because proxyless gRPC does not yet support all of Envoy's features, and others are inherent to using RPCs. For example, HTTP redirects are not supported using gRPC. We recommend that you familiarize yourself with Traffic Director concepts and configuration to help you understand the configuration model in this guide.
The following diagram shows the resources that you must configure for proxyless gRPC applications.
Routing rule maps
A routing rule map defines how requests are routed in the mesh. It consists of a forwarding rule, a target gRPC proxy, and a URL map.
Forwarding rule
Typically, you create the forwarding rule using the virtual IP address and port of the service you are configuring. A gRPC client using the `xds` name resolution scheme does not perform a DNS lookup to resolve the hostname in the channel URI. Instead, such a client resolves the `hostname[:port]` in the target URI by sending an LDS request to Traffic Director. There is no DNS lookup involved, and a DNS entry for the hostname is not required. As a result, Traffic Director uses only the port specified in the URI to look up the forwarding rule, ignoring the IP address. The default value of the port is 80. Then, Traffic Director looks for a matching host rule in the URL map associated with the target proxy referenced by the forwarding rule.

Therefore, a forwarding rule that references a target gRPC proxy with the `validateForProxyless` field set to `TRUE` must have its IP address set to `0.0.0.0`. When `validateForProxyless` is set to `TRUE`, configurations that specify an IP address other than `0.0.0.0` are rejected.
Note the following:
- The load balancing scheme for the forwarding rule must be `INTERNAL_SELF_MANAGED`.
- You can have multiple forwarding rules, but the `IP:port` of each forwarding rule must be unique, and the port must match the port specified in the host rule.
- If more than one forwarding rule has a port that matches the port in the target URI, Traffic Director looks for a matching `hostname[:port]` in the host rules of the URL maps referenced by all such forwarding rules. Traffic Director looks for an exact match between the `hostname[:port]` in the host rule and the `hostname[:port]` specified in the target URI. Host rules and default rules containing wildcard characters are skipped.
Note also that, if more than one match is found, the resulting behavior is undefined and can be unpredictable. This generally happens when both of the following conditions are met:

- The same hostname is used across multiple URL maps.
- Multiple forwarding rules with the load balancing scheme `INTERNAL_SELF_MANAGED` specify the same port.

For this reason, we recommend that you do not re-use the same hostname across multiple URL maps that are referenced by forwarding rules specifying the same port.
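Under these constraints, a forwarding rule for a proxyless service might be created as follows. This is a hedged sketch: the resource names (`grpc-forwarding-rule`, `grpc-proxy`, `my-network`) and the port are placeholders, and you should verify the flags against the current `gcloud` reference before use:

```shell
# Sketch: global forwarding rule for a proxyless gRPC service on port 8080.
# Resource names and the port are placeholders.
gcloud compute forwarding-rules create grpc-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --ports=8080 \
    --target-grpc-proxy=grpc-proxy \
    --network=my-network
```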
Target gRPC proxy
Target proxies point to URL maps, which in turn contain rules used to route traffic to the correct backend. When you configure Traffic Director for gRPC applications, use a target gRPC proxy, not a target HTTP proxy, regardless of whether you are using proxyless gRPC applications or gRPC applications that use Envoy.
Target gRPC proxies have a field called `validateForProxyless` in the REST API. The default value is `FALSE`. Setting this field to `TRUE` enables configuration checks so that you do not accidentally enable a feature that is not compatible with proxyless gRPC.

In the `gcloud` command-line tool, the `--validate-for-proxyless` flag is the equivalent of the `validateForProxyless` field.
Because Traffic Director support for proxyless gRPC applications does not include the full range of capabilities that are available to gRPC applications with a sidecar proxy, you can use this field to ensure that an invalid configuration, which might be specified in the URL map or backend service, is rejected. Note also that configuration validation is done based on the features that Traffic Director supports with the most recent version of gRPC.
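For example, a target gRPC proxy with validation enabled might be created like this. The `grpc-proxy` and `grpc-url-map` names are placeholders; confirm the flags against the current `gcloud` reference:

```shell
# Sketch: target gRPC proxy with proxyless configuration checks enabled.
# Resource names are placeholders.
gcloud compute target-grpc-proxies create grpc-proxy \
    --url-map=grpc-url-map \
    --validate-for-proxyless
```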
URL map
The URL map defines the routing rules that your proxyless gRPC clients use to send traffic. The URL map contains one or more host rules in the format `hostname:port`. Each of these host rules resolves to a service.

When you configure your gRPC client, you specify the target URI for the service that the client needs to contact. This URI also uses the `hostname:port` format and corresponds to a host rule entry in the URL map.
For example, if you configure the target URI `xds:///myservice:8080` in your gRPC client, Traffic Director sends it the configuration corresponding to the URL map host rule for `myservice:8080`. This configuration includes, among other information, a list of endpoints, each of which is an `IP address:port` pair corresponding to a specific gRPC server instance.

- The target URI `xds:///myservice` matches the URL map host rule `myservice`.
- The target URI `xds:///myservice` does not match the URL map host rule `myservice:80`.
- The target URI `xds:///myservice:80` does not match the URL map host rule `myservice`.
The string in the target URI and the string in the URL map host rule must be identical.
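The exact-match behavior in these examples can be expressed as a small illustrative check. This is a sketch of the documented matching rule, not the actual Traffic Director implementation:

```python
# Illustrative sketch of the exact-match rule: the target URI's
# hostname[:port] string must equal the host rule string character for
# character; "myservice" and "myservice:80" are different rules.
def host_rule_matches(target_uri: str, host_rule: str) -> bool:
    prefix = "xds:///"
    if not target_uri.startswith(prefix):
        raise ValueError("expected an xds:/// target URI")
    # Strip the "xds:///" prefix; the remainder is compared verbatim.
    return target_uri[len(prefix):] == host_rule

print(host_rule_matches("xds:///myservice", "myservice"))     # True
print(host_rule_matches("xds:///myservice", "myservice:80"))  # False
print(host_rule_matches("xds:///myservice:80", "myservice"))  # False
```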
For additional information, see URL map limitations.
Health checks
A gRPC health check can check the status of a gRPC service that is running on a backend VM or NEG.
If a gRPC health check cannot be used because your gRPC server does not implement the gRPC health checking protocol, use a TCP health check instead. Do not use an HTTP, HTTPS, or HTTP/2 health check.
For more information, see gRPC health check and Additional flag for gRPC health checks.
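A gRPC health check might be created as follows. This sketch assumes your backends implement the gRPC health checking protocol on their serving port; the health check name is a placeholder, and you should verify the flags against the current `gcloud` reference:

```shell
# Sketch: gRPC health check that probes each backend on its serving port.
# The name is a placeholder.
gcloud compute health-checks create grpc grpc-health-check \
    --use-serving-port
```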
Backend service
The backend service defines how a gRPC client communicates with a gRPC server.
When you create a backend service for gRPC, set the protocol field to `GRPC`.

A backend service configured with the protocol set to `GRPC` can also be accessed by gRPC applications through a sidecar proxy. In that situation, the gRPC client must not use the `xds` name resolution scheme. In all Traffic Director deployments, the load balancing scheme for the backend service must be `INTERNAL_SELF_MANAGED`.
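Putting these constraints together, a backend service for proxyless gRPC might be created like this. The resource names are placeholders; confirm the flags against the current `gcloud` reference:

```shell
# Sketch: global backend service with the GRPC protocol and the required
# load balancing scheme. Resource names are placeholders.
gcloud compute backend-services create grpc-backend-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --protocol=GRPC \
    --health-checks=grpc-health-check
```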
Backends
Backends host your gRPC server instances. You can use managed or unmanaged instance groups in Compute Engine and zonal network endpoint groups (NEGs) in Google Kubernetes Engine as backends to host your gRPC server instances.
What's next
- For more information on routing, see Traffic Director routing rule maps overview
- For proxyless setup information, start with Preparing to set up Traffic Director with proxyless gRPC services
- For information about limitations on proxyless gRPC deployments, see Traffic Director limitations with proxyless gRPC applications