Traffic Director service routing APIs overview
This document is intended for mesh or platform administrators and service developers who have an intermediate to advanced level of familiarity with Traffic Director and service mesh concepts. This document applies to deployments using Envoy and gRPC clients. For more information on Traffic Director concepts, see the general overview and the proxyless gRPC services overview.
Traffic Director provides service networking capabilities to your applications, including advanced traffic management, observability, and security. However, configuring and operating a service mesh is a complex task for mesh administrators and service developers.
This document describes the service routing APIs for configuring Traffic Director. These APIs are designed to simplify and improve your overall mesh configuration experience.
This API model replaces the older forwarding rule, target proxy, and URL map resources with API resources called Mesh, Gateway, and Route. These resources provide a more contextually relevant configuration experience when you define your service networking control plane.
This document introduces the following service routing API resources:
- Mesh: service-to-service (east-west) traffic management and security configuration for Envoy sidecar proxies and proxyless gRPC clients.
- Gateway: traffic management and security configuration for Envoy proxies acting as ingress gateways, which allow external clients to connect to the service mesh (north-south).
- Route: APIs with the following types:
  - HTTPRoute
  - GRPCRoute
  - TCPRoute
  - TLSRoute
The Google Cloud console does not provide support for the service routing APIs. You must configure these API resources by using the Google Cloud CLI or the REST APIs. Additionally, there is no automated migration path from the older APIs to the service routing APIs. To replace an existing deployment, you must create a new Traffic Director deployment that uses the service routing APIs, and then shut down the old deployment.
Use cases and benefits
The service routing APIs let you configure Traffic Director for both proxyless gRPC and Envoy proxy deployments. The service routing API model enables several key benefits.
In the following diagram, two services in the service mesh are connected by a Mesh resource. Two HTTPRoute resources configure the routing. The mesh or platform administrator manages the Mesh resource, and the two service owners create the routing configuration for their services.
Role-oriented API design enables clear separation of responsibilities
The service routing APIs let you separate mesh configuration responsibilities based on organizational roles:
- Mesh administrators can define the logical mesh as well as the ingress gateway infrastructure.
- Service owners (application developers) can independently define access patterns for their services. They can also define and apply traffic management policies for their services.
In the following diagram, Cloud Load Balancing and a Gateway resource provide an ingress gateway for traffic entering the mesh from a client that is not in the mesh. The mesh administrator configures and manages the Gateway resource, while the service owners configure and manage their own services and Route resources.
Enhanced reliability with self-serve model
In the older Traffic Director API, the URL map defines routing for service-to-service communication in the mesh, as well as for external traffic entering the mesh through a managed load balancer. Multiple teams might be editing a single URL map resource, which presents potential reliability issues and complicates the process of delegating per-service configuration to service owners.
The service routing APIs introduce per-protocol, per-route resources that can be configured and owned by independent service owners. This approach has several advantages.
- Service owners now have autonomy over how they want to configure policies and traffic management for the services they own.
- Updating one Route resource does not affect other Route resources in the mesh. The update process is also less error-prone because service owners manage much smaller configurations.
- Each Route resource is owned by the service owner who is responsible for the destination service or hostname.
- Service owners do not have to depend on mesh administrators to update routing using the centralized URL map resource.
Configure only what's relevant
The service routing APIs replace forwarding rules, target proxies, and URL maps. You no longer need to allocate virtual IP addresses from your Virtual Private Cloud (VPC) network for service-to-service communication with sidecar proxies and proxyless gRPC.
Enable a service mesh spanning multiple projects in Shared VPC environments
The service routing API model lets service owners participate in a shared mesh infrastructure using Shared VPC and other means of connectivity while maintaining independent control over their services. For example, service owners can define Route resources in their own projects. Platform administrators can define a Mesh in a centrally administered host project, then grant service owners IAM permissions to attach their Route resources to a shared Mesh or Gateway. The following diagram shows an example with Shared VPC.
The service routing APIs also let you have service mesh clients connected to different networks using VPC Network Peering.
Route traffic based on Server Name Indication (SNI)
The TLSRoute resource lets you route TLS-encrypted traffic based on the Server Name Indication (SNI) value in the TLS handshake. You can route TLS traffic to the appropriate backend services by configuring an SNI match in the TLSRoute resource. In these deployments, the proxies only route traffic, and the TLS session is terminated at the destination backend instance. The TLSRoute resource is supported only with Envoy proxies that are deployed as sidecar proxies or gateways.
TLSRoute resource attached to a Mesh resource
The deployment shown in the following diagram routes any service mesh traffic where the SNI extension has the value service1 to the backend service service1. Additionally, any service mesh traffic where the SNI extension has the value service2 is routed to the backend service service2. The SNI extension value and the backend service name are independent of each other.
TLSRoute resource attached to a Gateway resource
The deployment shown in the following diagram routes any inbound traffic to the Gateway resource where the SNI extension has the value serviceA to the backend service serviceA. Additionally, any inbound traffic to the Gateway where the SNI extension has the value serviceB is routed to the backend service serviceB. The SNI extension value and the backend service name are independent of each other. The SNI extension value and the headers in HTTP requests are also independent.
The Gateway resource does not terminate the TLS connection at the Envoy proxy. Instead, the TLS connection is terminated at the corresponding backend service. The Gateway cannot inspect any information encrypted in the TLS layer, other than the ClientHello, which contains a plain-text SNI value. The Gateway performs TLS passthrough in this mode. Note that encrypted ClientHello is unsupported.
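The SNI-based routing described above can be expressed in a TLSRoute definition. The following is a minimal sketch, assuming a mesh named sidecar-mesh, backend services named service1 and service2, and a PROJECT_ID placeholder; all of these names are illustrative, not prescribed.

```yaml
# Hypothetical TLSRoute: route by SNI value with TLS passthrough.
name: sni-route
meshes:
- projects/PROJECT_ID/locations/global/meshes/sidecar-mesh
rules:
- matches:
  - sniHost:
    - service1          # ClientHello SNI value to match
  action:
    destinations:
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/service1
- matches:
  - sniHost:
    - service2
  action:
    destinations:
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/service2
```

A definition like this could be imported with a command along the lines of gcloud network-services tls-routes import sni-route --source=sni-route.yaml --location=global.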
First-class gRPC support
You can configure proxyless gRPC clients by using first-class gRPC attributes such as matching by method, instead of translating to equivalent paths and using path matchers.
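As a sketch of method-based matching, a GRPCRoute rule can match on the fully qualified gRPC service and method directly. The service and method names (helloworld.Greeter, SayHello), the mesh and backend service names, and PROJECT_ID below are illustrative assumptions.

```yaml
# Hypothetical GRPCRoute: match a gRPC method without path translation.
name: helloworld-grpc-route
hostnames:
- helloworld
meshes:
- projects/PROJECT_ID/locations/global/meshes/sidecar-mesh
rules:
- matches:
  - method:
      grpcService: helloworld.Greeter   # fully qualified gRPC service
      grpcMethod: SayHello              # method within that service
  action:
    destinations:
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/grpc-helloworld-service
```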
Traffic splitting for TCP traffic
You can now implement weight-based traffic splitting for TCP traffic across multiple backend services. You can configure patterns such as canary or blue-green rollouts when you update your service. Traffic splitting also lets you migrate traffic in a controlled manner without downtime.
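A weight-based TCP split might be sketched as follows; the mesh name, the backend service names tcp-blue and tcp-green, the 90/10 weights, and the matched address and port are all illustrative assumptions.

```yaml
# Hypothetical TCPRoute: split TCP traffic 90/10 between two backend services.
name: tcp-traffic-split
meshes:
- projects/PROJECT_ID/locations/global/meshes/sidecar-mesh
rules:
- matches:
  - address: 10.0.0.1/32     # destination address to match
    port: "8000"             # destination port to match
  action:
    destinations:
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/tcp-blue
      weight: 90
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/tcp-green
      weight: 10
```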
When you use the service routing APIs, all traffic is automatically intercepted. For more information, see Options for Compute Engine VM setup with automatic Envoy deployment.
Architecture and resources
This section describes the service routing API model and its resources, and helps you to understand how the service routing API resources work together.
Mesh resource
The Mesh resource represents an instance of a service mesh. You use it to create a logical service mesh in your project. Each Mesh resource must have a unique name in the project. After a Mesh resource is created, its name cannot be changed. The Mesh resource is referenced in the Route resource to add routes for services that are part of the mesh.
Envoy proxies and proxyless gRPC clients receive configuration from Traffic Director by joining the service mesh identified by the Mesh resource's name. The Mesh name, as a bootstrap parameter, is supported by automated Envoy deployment on Compute Engine and by the automatic Envoy injector on GKE.
The Mesh resource supports the following data plane deployments:
- Envoy running alongside the application as a sidecar proxy
- Proxyless gRPC clients
- A mix of Envoy sidecar proxies and proxyless gRPC clients
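A Mesh definition can be quite small. The following is a minimal sketch; the mesh name sidecar-mesh is an illustrative assumption.

```yaml
# mesh.yaml - hypothetical minimal Mesh definition.
name: sidecar-mesh
```

A definition like this could be imported with a command along the lines of gcloud network-services meshes import sidecar-mesh --source=mesh.yaml --location=global, and data plane clients would then reference the mesh name in their bootstrap configuration.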
Route resource
The Route resource is used to set up routing to services. There are four types of the Route API resource; each type defines the protocol used to route traffic to a backend service:
- HTTPRoute
- GRPCRoute
- TCPRoute
- TLSRoute
The API doesn't contain a Route resource verbatim. The only configurable API resources are the four Route types listed previously.
A Route resource references one or more Mesh or Gateway resources to add the routes that are part of the corresponding Mesh or Gateway configuration. A Route resource can reference both Mesh and Gateway resources.
A Route resource also references one or more backend service resources. The services are configured using the backend service API with the existing configuration flow. The service routing APIs do not change how backend services and health checks are defined in the Traffic Director configuration. You create a backend service resource that points to one or more managed instance group (MIG) or network endpoint group (NEG) backends.
The following diagram shows the relationships among the Mesh, Gateway, and Route resources, the backend service resource, and its backends.
You define other traffic management capabilities, such as routing, header modifications, timeouts, and weight-based traffic splitting, in the Route resources. For example, in the following diagram, an HTTPRoute resource defines a 70% / 30% traffic split between two backend services.
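The 70% / 30% split could be expressed in an HTTPRoute along the following lines; the hostname, mesh name, backend service names, and PROJECT_ID are illustrative assumptions.

```yaml
# Hypothetical HTTPRoute: weight-based 70/30 split between two backend services.
name: helloworld-route
hostnames:
- helloworld
meshes:
- projects/PROJECT_ID/locations/global/meshes/sidecar-mesh
rules:
- action:
    destinations:
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/helloworld-v1
      weight: 70
    - serviceName: projects/PROJECT_ID/locations/global/backendServices/helloworld-v2
      weight: 30
```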
TLSRoute resource
Use the TLSRoute resource to route TLS traffic to backend services based on SNI hostnames and Application-Layer Protocol Negotiation (ALPN) names. The TLSRoute configuration implies TLS passthrough, in which the Envoy proxy does not terminate TLS traffic.
A TLSRoute resource references one or more Mesh or Gateway resources to add the routes that are part of the corresponding Mesh or Gateway configuration. A TLSRoute resource also references one or more backend service resources. The services are configured using the backend service API resource with the existing configuration flow and APIs.
Gateway resource
The Gateway resource represents Envoy proxies acting as ingress gateways, which allow external clients to connect to the service mesh (north-south traffic). This resource has listening ports along with a scope parameter. The Envoy proxy that acts as an ingress gateway binds to the specified ports and to 0.0.0.0, which represents all of the IP addresses on the local VM. The following diagram shows Envoy proxies deployed as an ingress service and configured by the Gateway resource. In this particular example, the Envoy proxies are configured to listen on port 80 for incoming connections from clients.
The Gateway API resource supports only the Envoy proxy data plane. It does not support proxyless gRPC. GRPCRoutes are supported in the Gateway resource, but the gRPC traffic is routed by the Envoy proxy, which acts as a middle proxy.
What are a Gateway scope and merged Gateways?
A Gateway resource instance represents the ports and the configuration specific to traffic received on those ports. The Gateway API resource has a parameter, scope, that is used to logically group and merge the configuration of two or more Gateway resources.
For example, if you want the Gateway proxies to listen on ports 80 and 443 to receive HTTP and HTTPS traffic respectively, you create two Gateway resources: one with port 80, for HTTP traffic, and the other with port 443, for HTTPS traffic. Give the scope field in each the same value.
Traffic Director dynamically merges the individual configurations of all Gateways that have the same scope. On the data plane side, the Envoy proxies that run in ingress gateway mode must present the same scope parameter to Traffic Director to receive the merged Gateway configuration. You specify the scope when you create the Gateway resource, and you specify the same scope as a bootstrap parameter for the proxies.
The following are key considerations for the Gateway scope:
- The Gateway scope parameter is mandatory. Specify the scope in the Gateway resource and in the bootstrap configuration of the Envoy proxies, even when only one Gateway is in use.
- Creating a Gateway resource does not deploy a service with an Envoy proxy. Deploying the Envoy proxy is a separate step.
- The Gateway resource has a type field that represents the type of ingress deployment. This field is reserved for future use. The only currently supported value is OPEN_MESH, which is the default value and cannot be modified.
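As a sketch of the port-80 and port-443 example above, the two Gateway definitions might look as follows; the resource names and the scope value my-gateway-scope are illustrative assumptions, and the shared scope field is what causes Traffic Director to merge the two configurations.

```yaml
# gateway-80.yaml - hypothetical HTTP listener.
name: gateway-80
scope: my-gateway-scope   # same scope in both resources
ports:
- 80
type: OPEN_MESH
```

```yaml
# gateway-443.yaml - hypothetical HTTPS (passthrough) listener, same scope.
name: gateway-443
scope: my-gateway-scope
ports:
- 443
type: OPEN_MESH
```

Envoy proxies bootstrapped with the scope my-gateway-scope would then receive the merged configuration of both Gateways.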
Mesh deployments with mixed protocols and data planes
You can have a mixed data plane deployment, with Envoy proxies and proxyless gRPC clients in the same mesh. When you create such deployments, consider the following:
- Envoy sidecar deployments support all Route types (HTTPRoute, GRPCRoute, TCPRoute, and TLSRoute).
- Proxyless gRPC deployments support only GRPCRoute.
- GRPCRoute is limited to the features that proxyless gRPC deployments support.
Supported topologies in multi-project Shared VPC environments
Traffic Director supports adding Route resources that are defined in other projects to a Mesh or Gateway resource defined in a centralized admin project. Authorized service owners can directly add their service routing configurations to the Mesh or Gateway.
In a typical cross-project scenario, you choose a project (a host project or a centrally controlled admin project) as the mesh admin project where you create the Mesh resource. The mesh admin project owner authorizes Route resources from other projects to reference the Mesh resource, allowing the routing configuration from other projects to be part of the mesh. A mesh data plane, whether Envoy or gRPC, requests configuration from the admin project and receives a union of all of the routes attached to the Mesh. For a Gateway, the routes are also merged across all Gateways that use the same scope.
The mesh admin project can be any project that you choose, and the configuration works as long as the underlying projects have VPC network connectivity, either through Shared VPC or VPC Network Peering.
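As a sketch of this cross-project pattern, a service owner's HTTPRoute defined in a service project can reference a Mesh in the admin project by its full resource name. The project IDs and resource names below are placeholders.

```yaml
# Hypothetical HTTPRoute defined in SERVICE_PROJECT;
# the Mesh lives in the centrally administered MESH_ADMIN_PROJECT.
name: payments-route
hostnames:
- payments
meshes:
- projects/MESH_ADMIN_PROJECT/locations/global/meshes/shared-mesh
rules:
- action:
    destinations:
    - serviceName: projects/SERVICE_PROJECT/locations/global/backendServices/payments-service
```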
IAM permissions and roles
The following IAM permissions are required to securely get, create, update, delete, list, and use the Mesh, Gateway, and Route resources:
- Mesh admins need permissions on Mesh resources.
- Gateway admins need permissions on Gateway resources.
- Service owners need permissions on Route resources.
Mesh admins need to grant the networkservices.mesh.use permission to service owners so that the service owners can attach their Route resources to the Mesh resource. The same model applies to Gateway resources.
To see all IAM permissions for Mesh, Gateway, and Route resources, go to the IAM permissions reference page and search for networkservices.
No additional predefined roles are required. The existing predefined role Compute Network Admin has networkservices.* permissions by default. You might need to add the previously described permissions to your custom roles.
Comparison of the service routing and older API models
This section makes a topic-by-topic comparison between the older and service routing Traffic Director API models.
| | Older APIs | Service routing APIs |
|---|---|---|
| API resources | Forwarding rule, target proxy, URL map, and backend service. | Gateway, Mesh, Route, and backend service. |
| IP addresses and port numbers of services | You must provision IP addresses and port numbers for your services and configure forwarding rules, which need to match the IP:Port pairs for all services. You must manually map the IP addresses to hostnames, or you must use a catch-all IP address. | You do not need to configure IP addresses for your services. |
| Service mesh scope | Traffic Director programs all proxies attached to the VPC network, so the mesh scope is the VPC network. | Traffic Director does not program proxies based on the VPC network. For east-west service-to-service communication, Envoy and proxyless gRPC clients use the name of the Mesh resource. For north-south ingress gateway use cases, the Gateway scope is used. |
| Cross-project referencing in Shared VPC environments | Cross-project referencing is not supported. All API resources must be configured in the same project. | It is possible to create Route resources in projects other than the centralized admin project that holds the Mesh or Gateway resource. |
| Traffic interception port | With automatic Envoy deployment on Compute Engine and with automatic sidecar injection on GKE, this value defaults to 15001. | The interception port is configured in the Mesh resource. The value continues to default to 15001. |
Bootstrapping Envoy and gRPC clients on Compute Engine and GKE
| | Older APIs | Service routing APIs |
|---|---|---|
| Using automatic Envoy deployment on Compute Engine | When you create the VM template, you specify the required bootstrap attributes as command-line parameters. | When you create the VM template, you specify additional parameters, such as the Mesh name. |
| Using automatic sidecar injection on GKE | You specify the required bootstrap attributes in the configMap. | Same workflow, with the new attributes specified in the configMap. |
| Using manual sidecar injection on GKE | As explained here, the application pod needs to have an Envoy sidecar container bootstrapped with the required attributes. | Same workflow, with the new attributes. |
| Using Compute Engine or GKE to deploy gRPC clients | The client application must be bootstrapped with the required attributes. | Same workflow, with the new attributes. |
Configuring mesh and gateway security use cases
| | Older APIs | Service routing APIs |
|---|---|---|
| Service-to-service mTLS in GKE | Follow the instructions here for Envoy sidecar-based deployments. Follow the instructions here for proxyless gRPC-based deployments. | The same instructions apply. The client TLS policy and server TLS policy must be applied to the backend service and endpoint policy resources, respectively. Because both of these APIs are orthogonal to the service routing APIs, the configuration flow remains the same as before. |
| Securing middle-proxy (ingress or egress gateway) deployments | Follow the instructions here. The server TLS policy and authorization policy resources are attached to the target HTTPS proxy resource. | You attach the server TLS policy and the authorization policy resources to the Gateway resource. |
Considerations and limitations
- The Google Cloud console does not support the service routing APIs in this release.
- Use xDS API version 3 or later.
  - Minimum Envoy version of 1.20.0, because the service routing APIs are supported only on xDS version 3.
  - Minimum gRPC bootstrap generator version of v0.14.0.
- The TLSRoute resource is supported only with Envoy proxies that are deployed as sidecar proxies or gateways.
- Only Compute Engine VMs with automatic Envoy deployment and GKE Pods with automatic Envoy injection are supported. You cannot use manual deployment with the service routing APIs.
- Terraform is not supported in this release.
- The service routing APIs are not backward compatible with the older APIs.
- When a TCPRoute resource is attached to a Mesh resource, the port used to match TCP traffic cannot be used to serve anything except the traffic described by that TCPRoute resource.
  - For example, your deployment might include a TCPRoute resource that matches port 8000 and an HTTPRoute resource. When both are attached to the same Mesh resource, traffic routed by the HTTPRoute resource cannot use port 8000, even when the underlying IP addresses are different. This limitation comes from the Envoy proxy implementation, which gives precedence to the port-matched route.
- The Gateway resource does not provision a managed load balancer, and it does not dynamically create an Envoy service.
- Automatically deployed Envoy proxies serving as ingress gateways must not have the serving_ports argument specified.
- Automatically deployed Envoy proxies do not support providing a project number different from the project of the VM.
What's next
- For information about the available configurations with the service routing APIs, read the Traffic Director service routing setup guides.
- For information about the service routing APIs, read the documentation for the network services APIs.