When you configure Traffic Director, one of the resources that you configure is the target proxy.
In the context of Traffic Director, target proxies serve two primary purposes:
- They have a specific type. This type tells Traffic Director clients which protocol to use when opening a connection to the backends or endpoints associated with a service.
- They combine with other resources (for example, forwarding rules and URL maps) to create a routing rule map. The routing rule map provides additional capabilities (such as routing rules), depending on the type of target proxy. Invalid combinations are generally either hidden from the user interface or rejected by the API.
Target proxy types correspond to different request protocols
Traffic Director generates different configuration for its clients based on the type of target proxy that you configure. The target proxy type determines the request protocol that clients use:
- Target HTTP proxies: Traffic Director clients initiate HTTP connections
- Target gRPC proxies: Traffic Director clients initiate gRPC connections
- Target TCP proxies: Traffic Director clients initiate TCP connections
Note that you aren't restricted to a single type. For example, your application might use HTTP when addressing some services but TCP when addressing others. In that case, you create both a target HTTP proxy and a target TCP proxy.
Valid resource combinations in a routing rule map
To avoid misconfigurations, Traffic Director only lets you create routing rule maps that look like the following:
- Forwarding rule -> global target HTTP proxy -> URL map -> (one or more backend services)
- Forwarding rule -> global target gRPC proxy -> URL map -> (one or more backend services)
- Forwarding rule -> global target TCP proxy -> (a single backend service)
Note that health checks and backend groups are not shown in the above list.
If you're using the Google Cloud console to set up a target HTTP proxy, the target proxy is created implicitly as part of your routing rule map configuration. Note that target TCP proxy setup is not yet supported in the console.
If you're using the gcloud command-line interface or the APIs, you need to configure the target proxy explicitly.
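As an illustration, the following sketch builds an HTTP routing rule map explicitly with gcloud. The resource names (`web-backend-service`, `td-url-map`, and so on) are hypothetical, and a backend service created with the `INTERNAL_SELF_MANAGED` load-balancing scheme is assumed to exist already.

```shell
# URL map that sends all traffic to the (assumed) backend service by default.
gcloud compute url-maps create td-url-map \
    --default-service=web-backend-service

# Target HTTP proxy that references the URL map.
gcloud compute target-http-proxies create td-http-proxy \
    --url-map=td-url-map

# Global forwarding rule that completes the routing rule map.
gcloud compute forwarding-rules create td-http-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --target-http-proxy=td-http-proxy \
    --ports=80 \
    --network=default
```

Together, these three commands form the Forwarding rule -> global target HTTP proxy -> URL map -> backend service chain described earlier.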
Traffic handling when using a target HTTP proxy
When you configure HTTP-based services, each service instance generally has an Envoy proxy deployed alongside it. This Envoy proxy is configured by Traffic Director, is part of your service mesh data plane, and handles traffic as follows.
The Envoy proxy receives the outbound request and compares the request's destination IP address and port to the IP address and port configured in each forwarding rule that references a target HTTP proxy. If a match is found, the Envoy proxy evaluates the request according to the target HTTP proxy's corresponding URL map.
Traffic handling when using a target TCP proxy
When you configure TCP-based services, each service instance generally has an Envoy proxy deployed alongside it. This Envoy proxy is configured by Traffic Director, is part of your service mesh data plane, and handles traffic as follows.
The Envoy proxy receives the outbound request and compares the request's destination IP address and port to the IP address and port configured in each forwarding rule that references a target TCP proxy. If a match is found, the traffic is routed to that target TCP proxy, which points to a single backend service; the backend service specifies a health check and determines which backend receives the traffic.
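The TCP chain can be sketched with gcloud as follows. The resource names are hypothetical, and `tcp-backend-service` is assumed to exist already with the `INTERNAL_SELF_MANAGED` load-balancing scheme and a TCP health check attached.

```shell
# Target TCP proxy that points directly at the single backend service
# (no URL map is involved for TCP).
gcloud compute target-tcp-proxies create td-tcp-proxy \
    --backend-service=tcp-backend-service \
    --proxy-header=NONE

# Global forwarding rule that routes matching TCP traffic to the proxy.
gcloud compute forwarding-rules create td-tcp-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --target-tcp-proxy=td-tcp-proxy \
    --ports=8080 \
    --network=default
```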
Traffic handling when using a target gRPC proxy
When you configure gRPC-based services, your service instances generally don't have Envoy proxies deployed alongside them. Instead, the gRPC library is configured by Traffic Director, is part of your service mesh data plane, and handles traffic as follows.
The gRPC library compares the hostname[:port] specified in the URI to the host rules in all URL maps that are referenced by a target gRPC proxy. If a match is found, the gRPC library evaluates the request according to the path rules associated with the matching host rule.
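A minimal gRPC routing rule map might look like the following sketch. The resource names are hypothetical, and `grpc-backend-service` is assumed to exist already.

```shell
# URL map whose host rules the gRPC library matches against the
# hostname[:port] in the channel URI.
gcloud compute url-maps create td-grpc-url-map \
    --default-service=grpc-backend-service

# Target gRPC proxy; --validate-for-proxyless rejects URL map features
# that proxyless gRPC clients don't support.
gcloud compute target-grpc-proxies create td-grpc-proxy \
    --url-map=td-grpc-url-map \
    --validate-for-proxyless
```

A proxyless gRPC client would then address the service with an `xds:`-scheme URI (for example, `xds:///helloworld:8000`), whose hostname and port are matched against the URL map's host rules.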
Configuring traffic interception for gateway and middle proxies
In a service mesh, you typically have the following:
- Service instances each have a dedicated Envoy sidecar proxy or a gRPC library.
- If you are using Envoy, service instances are configured to intercept outbound requests and redirect them to the Envoy proxy.
- The Envoy proxy or gRPC library handles outbound requests.
But you can also use Traffic Director to configure a gateway or middle proxy. In this setup, the Envoy proxy runs separately from your service instances. It listens for inbound requests on a port and handles them as they arrive.
For this type of setup, you don't need to configure interception or redirection. Instead, you enable the --proxy-bind flag on the target HTTP proxy. This configures inbound traffic interception and causes the Envoy proxy to listen for inbound requests on the IP address and port that are configured in the forwarding rule.
Recall that the forwarding rule references a target proxy. So if you enable --proxy-bind on a target HTTP proxy, the proxy listens on the IP address and port of the forwarding rule that references this target HTTP proxy.
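For example, a gateway proxy could be created as in the following sketch, where `gateway-url-map` is a hypothetical URL map assumed to exist already:

```shell
# --proxy-bind makes the Envoy gateway listen directly on the IP address
# and port of the forwarding rule that references this proxy.
gcloud compute target-http-proxies create td-gateway-proxy \
    --url-map=gateway-url-map \
    --proxy-bind
```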
Note the following:
- If you're using Traffic Director for a service mesh, you don't need to use --proxy-bind, because sidecar proxies generally receive and forward outbound traffic.
- The --proxy-bind flag is not available for target gRPC proxies.
Target proxy resources
For adding, deleting, listing, and getting information about target proxies, you can use the REST API or the gcloud command-line tool. For example, you can use the following gcloud commands to get information about a target proxy:
gcloud compute [target-http-proxies | target-tcp-proxies | target-grpc-proxies] list
gcloud compute [target-http-proxies | target-tcp-proxies | target-grpc-proxies] describe target-proxy-name
For descriptions of the properties and methods available to you when working with target proxies through the REST API or the gcloud command-line tool, see the reference documentation for each target proxy type.
For more information, see:
- Setting Up Traffic Director for Compute Engine with VMs
- Setting Up Traffic Director for Google Kubernetes Engine with pods
- Traffic Director routing rule maps overview