Routing TCP traffic with Traffic Director

This guide demonstrates how you can use Traffic Director to route traffic to services that handle TCP traffic. These services include databases, VoIP services, and management platforms, which expect to receive TCP traffic on specific ports or port ranges.

Use this guide to do the following:

  • Set up a service that represents a collection of identical backends or endpoints that accept TCP requests from clients.
  • Set up a routing rule map so that the Envoy proxies that Traffic Director configures can route TCP requests to those services.

The following diagrams show how TCP routing works for virtual machine (VM) instances and network endpoint groups (NEGs), respectively.

Setting up Traffic Director for TCP with Compute Engine VM backends.

Setting up Traffic Director for TCP with NEG backends.

Preparation

  • This guide builds on the Envoy for service mesh preparation guide, and assumes that you already have a basic understanding of how Traffic Director works.
  • The guide focuses on aspects of Traffic Director setup that are different when you configure your services and routing for TCP traffic.
  • The guide assumes that you have already set up a managed instance group (MIG) or NEG.

Configure load-balancing resources

Use the following steps to configure the load-balancing resources.

Create a TCP health check

Setup with VMs

If you are configuring MIGs, use the following command to create a global health check; replace PORT with the TCP port number that this health check monitors:

gcloud compute health-checks create tcp td-vm-health-check \
    --global \
    --port=PORT

Setup with NEGs

If you are configuring NEGs, use the following command to create a health check:

gcloud compute health-checks create tcp td-gke-health-check \
    --use-serving-port

For more information about health checks, see the health checks overview and the gcloud compute health-checks create tcp command reference.

Create a firewall rule for your health check

This rule enables probes from Google Cloud's health checking systems (source ranges 35.191.0.0/16 and 130.211.0.0/22) to reach your backends. Replace TAGS with the network tags applied to your backend VMs, and adjust the TCP port in the rule if your backends receive health check traffic on a port other than 80.


gcloud compute firewall-rules create fw-allow-health-checks \
  --action=ALLOW \
  --direction=INGRESS \
  --source-ranges=35.191.0.0/16,130.211.0.0/22 \
  --target-tags=TAGS \
  --rules=tcp:80

For more information about firewall rules for health checks, see the firewall rules overview.

Create a backend service for your TCP backends

The backend service setup for TCP with Traffic Director differs slightly from the setup for HTTP. The following steps capture those differences.

Setup with VMs

If you are configuring MIGs, use the following command to create a backend service and add the health check:

gcloud compute backend-services create td-vm-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --health-checks=td-vm-health-check \
    --protocol=TCP \
    --global-health-checks

The final step for setting up your backend service is to add your backend group. Because this step is standard Traffic Director setup (in other words, nothing specific for TCP-based services), it is not shown here. For more information, see Create the backend service in the setup guide for Compute Engine VMs with automatic Envoy deployment.
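As a sketch of that final step, attaching a MIG backend typically looks like the following; MIG_NAME and MIG_ZONE are placeholders for your own instance group's name and zone:

```shell
# Illustrative example: attach an existing managed instance group to the
# backend service created above. MIG_NAME and MIG_ZONE are placeholders;
# replace them with your instance group's name and zone.
gcloud compute backend-services add-backend td-vm-service \
    --global \
    --instance-group=MIG_NAME \
    --instance-group-zone=MIG_ZONE
```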

Setup with NEGs

If you are configuring NEGs, use the following commands to create a backend service and associate the health check with the backend service:

gcloud compute backend-services create td-gke-service \
    --global \
    --health-checks=td-gke-health-check \
    --protocol=TCP \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED

The final step for setting up your backend service is to add your backend group. Because this step is standard Traffic Director setup (in other words, nothing specific for TCP-based services), it is not shown here. For more information, see Create the backend service in the setup guide for GKE Pods with automatic Envoy injection.
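As a sketch of that final step, attaching a NEG backend typically looks like the following; NEG_NAME and NEG_ZONE are placeholders for your own network endpoint group, and the connection limit shown is an arbitrary example value:

```shell
# Illustrative example: attach an existing NEG to the backend service
# created above. NEG_NAME and NEG_ZONE are placeholders. For a backend
# service with --protocol=TCP, NEG backends use the CONNECTION balancing
# mode; the per-endpoint limit of 5 is only an example value.
gcloud compute backend-services add-backend td-gke-service \
    --global \
    --network-endpoint-group=NEG_NAME \
    --network-endpoint-group-zone=NEG_ZONE \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=5
```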

Set up routing for your TCP-based backend service

In this section, you create a global target TCP proxy and a global forwarding rule. These resources enable your applications to send traffic to the backends of your newly created backend service.

Consider the following:

  • Your forwarding rule must have the load-balancing scheme INTERNAL_SELF_MANAGED.
  • The virtual IP address (VIP) and port that you configure in the forwarding rule are the VIP and port that your applications use when sending traffic to your TCP services. You can choose the VIP from one of your subnets. Traffic Director uses this VIP and port to match client requests to a particular backend service.

    For information about how client requests are matched to backend services, see Service discovery.


  1. Use the following command to create the target TCP proxy; replace BACKEND_SERVICE with the name of the backend service created in the previous step. In the following example, we use td-tcp-proxy as the name for the target TCP proxy, but you can choose a name that suits your needs.

    gcloud compute target-tcp-proxies create td-tcp-proxy \
       --backend-service=BACKEND_SERVICE
    
  2. Create the forwarding rule. The forwarding rule specifies the VIP and port that are used when matching client requests to a particular backend service; replace VIP and PORT with the virtual IP address and port that your clients will use. For more information, see gcloud compute forwarding-rules create in the gcloud command reference.

    gcloud compute forwarding-rules create td-tcp-forwarding-rule \
        --global \
        --load-balancing-scheme=INTERNAL_SELF_MANAGED \
        --address=VIP \
        --target-tcp-proxy=td-tcp-proxy \
        --ports=PORT \
        --network=default
    

At this point, Traffic Director is configured to load balance traffic for the VIP specified in the forwarding rule across your backends.
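To sanity-check the setup, you can attempt a TCP connection to the VIP from a client VM inside the mesh (one running an Envoy sidecar). For example, assuming netcat (nc) is installed on the client VM:

```shell
# From a VM in the mesh, attempt a TCP connection to the forwarding
# rule's VIP and port. Replace VIP and PORT with the values you
# configured; -z scans without sending data, -w 5 sets a 5-second
# timeout.
nc -z -w 5 VIP PORT && echo "TCP connection succeeded"
```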

Troubleshooting

If your applications attempt to send requests to your TCP-based services but cannot connect, do the following:

  • Confirm that the TCP health check port matches the port on which the TCP application expects to receive health check traffic.
  • Confirm that the port name of the backend service matches what is specified in the instance group.
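You can also check whether your backends are passing their health checks. For example, for the VM backend service created earlier:

```shell
# Show the health status reported for each backend of the backend
# service. Unhealthy endpoints usually point to a health check port
# mismatch or a missing firewall rule.
gcloud compute backend-services get-health td-vm-service \
    --global
```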

What's next