Set up TCP traffic routing
This guide demonstrates how you can use Traffic Director to route traffic to services that handle TCP traffic. These services include databases, VOIP services, and management platforms, which expect to receive TCP traffic on specific ports or port ranges.
This guide applies to deployments that use the older APIs. If you are using the new service routing APIs, which are in Preview, see the Traffic Director setup guide for TCP services.
Use this guide to do the following:
- Set up a service that represents a collection of identical backends or endpoints that accept TCP requests from clients.
- Set up a routing rule map so that Envoy proxies that Traffic Director configures can send TCP requests.
The following diagrams show how TCP routing works for virtual machine (VM) instances and network endpoint groups (NEGs), respectively.
- This guide builds on the Envoy for service mesh preparation guide, and assumes that you already have a basic understanding of how Traffic Director works.
- The guide focuses on aspects of Traffic Director setup that are different when you configure your services and routing for TCP traffic.
- The guide assumes that you have already set up a managed instance group (MIG) or NEG.
Configure load-balancing resources
Use the following steps to configure the load-balancing resources.
Create a TCP health check
Setup with VMs
If you are configuring MIGs, use the following command to create a global health check; replace PORT with the TCP port number that this health check monitors:

```
gcloud compute health-checks create tcp td-vm-health-check \
  --global \
  --port=PORT
```
Setup with NEGs
If you are configuring NEGs, use the following command to create a health check:
```
gcloud compute health-checks create tcp td-gke-health-check \
  --use-serving-port
```
For more information about health checks, see the following:
- Creating health checks in the Cloud Load Balancing documentation
- `gcloud compute health-checks create tcp` in the gcloud command reference
Create a firewall rule for your health check
This rule enables health checks from Google Cloud's health checkers to reach your backends.
```
gcloud compute firewall-rules create fw-allow-health-checks \
  --action=ALLOW \
  --direction=INGRESS \
  --source-ranges=35.191.0.0/16,130.211.0.0/22 \
  --target-tags=TAGS \
  --rules=tcp:80
```
For more information about firewall rules for health checks, see the following:
- Firewall rules for health checks in the Cloud Load Balancing documentation
- `gcloud compute firewall-rules create` in the gcloud command reference
Create a backend service for your TCP backends
The backend service setup for TCP with Traffic Director differs slightly from the setup for HTTP. The following steps capture those differences.
Setup with VMs
If you are configuring MIGs, use the following command to create a backend service and add the health check:
```
gcloud compute backend-services create td-vm-service \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --health-checks=td-vm-health-check \
  --protocol=TCP \
  --global-health-checks
```
The final step for setting up your backend service is to add your backends. Because this step is standard Traffic Director setup (in other words, nothing specific for TCP-based services), it is not shown here. For more information, see Create the backend service in the setup guide for Compute Engine VMs with automatic Envoy deployment.
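Although the full backend setup is covered in that guide, a minimal sketch of adding a MIG backend might look like the following; `MIG_NAME` and `ZONE` are placeholders for your instance group's name and zone:

```
gcloud compute backend-services add-backend td-vm-service \
  --global \
  --instance-group=MIG_NAME \
  --instance-group-zone=ZONE
```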
Setup with NEGs
If you are configuring NEGs, use the following commands to create a backend service and associate the health check with the backend service:
```
gcloud compute backend-services create td-gke-service \
  --global \
  --health-checks=td-gke-health-check \
  --protocol=TCP \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED
```
The final step for setting up your backend service is to add your backend group. Because this step is standard Traffic Director setup (in other words, nothing specific for TCP-based services), it is not shown here. For more information, see Create the backend service in the setup guide for GKE Pods with automatic Envoy injection.
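Although the full backend setup is covered in that guide, a minimal sketch of adding a zonal NEG backend might look like the following; `NEG_NAME` and `ZONE` are placeholders, and because the backend service uses the TCP protocol, the balancing mode is connection-based:

```
gcloud compute backend-services add-backend td-gke-service \
  --global \
  --network-endpoint-group=NEG_NAME \
  --network-endpoint-group-zone=ZONE \
  --balancing-mode=CONNECTION \
  --max-connections-per-endpoint=5
```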
Set up routing for your TCP-based backend service
Consider the following:
- Your forwarding rule must have the load-balancing scheme `INTERNAL_SELF_MANAGED`.
- The virtual IP address (VIP) and port that you configure in the forwarding rule are the VIP and port that your applications use when they send traffic to your TCP services. You can choose the VIP from one of your subnets. Traffic Director uses this VIP and port to match client requests to a particular backend service.
For information about how client requests are matched to backend services, see Service discovery.
Use the following command to create the target TCP proxy; replace BACKEND_SERVICE with the name of the backend service created in the previous step. In the following example, we use `td-tcp-proxy` as the name for the target TCP proxy, but you can choose a name that suits your needs.

```
gcloud compute target-tcp-proxies create td-tcp-proxy \
  --backend-service=BACKEND_SERVICE
```
Create the forwarding rule. The forwarding rule specifies the VIP and port that are used to match client requests to a particular backend service. For more information, see `gcloud compute forwarding-rules create` in the gcloud command reference.

```
gcloud compute forwarding-rules create td-tcp-forwarding-rule \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --address=VIP \
  --target-tcp-proxy=td-tcp-proxy \
  --ports=PORT \
  --network=default
```
At this point, Traffic Director is configured to load balance traffic for the VIP specified in the forwarding rule across your backends.
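To spot-check the data path, you can attempt a TCP connection to the VIP and port from a client VM inside the mesh. This is an illustrative check, assuming a client whose outbound traffic is intercepted by an Envoy sidecar; VIP and PORT are the values from your forwarding rule:

```
# Attempt a TCP handshake against the service VIP.
# nc exits with status 0 if the connection succeeds.
nc -zv VIP PORT
```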
If your applications are unable to send requests to your TCP-based services, do the following:
- Confirm that the TCP health check port matches the port on which the TCP application expects to receive health check traffic.
- Confirm that the port name of the backend service matches what is specified in the instance group.
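When you check these settings, it can also help to confirm that Traffic Director considers your backends healthy. As a hedged example for the VM-based backend service created in this guide:

```
# Show the health status that the health check reports for each backend.
gcloud compute backend-services get-health td-vm-service \
  --global
```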
- To view the other steps in the setup process, see Prepare to set up Traffic Director with Envoy.
- To find general Traffic Director troubleshooting information, see Troubleshooting deployments that use Envoy.