Setting up Traffic Director for Compute Engine VMs

This guide shows you how to configure the Compute Engine VM hosts and the load balancing components that Traffic Director requires.

Before you follow the instructions in this guide, review Preparing for Traffic Director setup and make sure that you have completed the prerequisite tasks described in that document.

You can configure Traffic Director by using the gcloud command-line tool or the load balancing REST APIs. See the load balancing API and gcloud references.

Overview of Compute Engine VM host configuration

This document describes the configuration process for services that run on Compute Engine VMs. To configure the client VMs, you set up a sidecar proxy and traffic interception on each Compute Engine VM host, and then you configure load balancing by using the Google Cloud load balancing APIs.

This setup guide provides information on how to obtain and inject Envoy proxies from third-party sources that are not managed by Google.

When an application sends traffic to the service configured in Traffic Director, the traffic is intercepted by the xDS API-compatible sidecar proxy and then load balanced to the backends according to the configuration in the GCP load balancing components. For more information on host networking and traffic interception, read How traffic interception and load balancing work in Traffic Director.

For each VM host that requires access to Traffic Director services, perform the following steps:

  1. Set the API access scope of the VM to allow full access to the Google Cloud APIs.

    • When you create the VMs, under Identity and API access, click Allow full access to all Cloud APIs.

      Go to the VM instances page.

    • With the gcloud command line tool, specify the following:

      --scopes=https://www.googleapis.com/auth/cloud-platform

  2. Allow outgoing connections to trafficdirector.googleapis.com (TCP, port 443) from the VM, so that the sidecar proxy can connect to the APIs over gRPC. Outgoing connections to port 443 are enabled by default.

  3. Deploy an xDS API-compatible sidecar proxy (such as Envoy), with a bootstrap configuration pointing to trafficdirector.googleapis.com:443 as its xDS server. Refer to this sample bootstrap configuration file as an example.

  4. Redirect IP traffic that is destined to the services to the sidecar proxy interception listener port.

    1. The sidecar proxy interception listener port is defined as TRAFFICDIRECTOR_INTERCEPTION_PORT in the proxy's bootstrap metadata configuration and is set to 15001 in the sample bootstrap configuration file.
    2. The Istio iptables script can be used to set up traffic interception.

For an example implementation of steps 3 and 4, read the next section.
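
As a sketch of step 3, the following commands write a minimal xDS bootstrap for Envoy that uses trafficdirector.googleapis.com:443 as its xDS server over gRPC. This outline is illustrative only: the file path, node ID, and metadata values shown here are assumptions, and the sample bootstrap configuration file referenced above (and the bootstrap_template.yaml provided in the next section) remains the authoritative reference.

# Illustrative sketch only; this is not the sample bootstrap file from this guide.
cat << 'EOF' > bootstrap_sketch.yaml
node:
  id: vm-sidecar-example                 # placeholder node ID
  cluster: vm-sidecar-example
  metadata:
    # Must match the port that iptables redirects traffic to (see step 4).
    TRAFFICDIRECTOR_INTERCEPTION_PORT: "15001"
dynamic_resources:
  lds_config: {ads: {}}
  cds_config: {ads: {}}
  ads_config:
    api_type: GRPC
    grpc_services:
    - google_grpc:
        # Traffic Director xDS server, reached over TLS on port 443.
        target_uri: trafficdirector.googleapis.com:443
        stat_prefix: trafficdirector
        channel_credentials:
          ssl_credentials:
            root_certs:
              filename: /etc/ssl/certs/ca-certificates.crt
        call_credentials:
        - google_compute_engine: {}
admin:
  access_log_path: /dev/null
  address:
    socket_address: {address: 127.0.0.1, port_value: 15000}
EOF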

Configuring a single Compute Engine VM for Traffic Director

Use this procedure as an example of how sidecar proxy deployment and traffic interception can be implemented to provide a VM with access to Traffic Director services.

First, download and prepare the configuration files and sample scripts.

  1. Log in to the Linux host you are using during the setup process.

  2. Download the archive of required files to the Linux host and untar the archive:

    wget https://storage.googleapis.com/traffic-director/traffic-director.tar.gz
    tar -xzvf traffic-director.tar.gz; cd traffic-director
    

    The archive contains the following files:

    • sidecar.env – Config file with environment variables.
    • pull_envoy.sh – Sample script to pull the Envoy binary from a provided Docker image tag. If no tag is provided, the script pulls from https://hub.docker.com/r/istio/proxyv2/tags.
    • iptables.sh – Script for setting up netfilter rules.
    • bootstrap_template.yaml – Bootstrap template file for Envoy.
    • run.sh – Top-level script that uses the sidecar.env configuration file to set up iptables for interception and to run the Envoy sidecar proxy.
  3. On each host that runs a sidecar proxy, create a system user to run the Envoy proxy process. This is the Envoy proxy user. Login is disabled for the Envoy proxy user.

    sudo adduser --system --disabled-login envoy
    
  4. Edit the sidecar.env file to modify the configuration. Read the inline comments in the configuration file for a detailed description of each variable.

    1. In the sidecar.env file, set the ENVOY_USER variable to the user that you created in the previous step (envoy in this example). A filled-in sidecar.env example appears after this procedure.
  5. Copy your own Envoy binary into the traffic-director directory or follow these steps to obtain an Envoy binary:

    1. Install Docker tools on the Linux host that you are using. The files for each supported operating system are in the Supported platforms section.
    2. Run the pull_envoy.sh script to extract the Envoy binary.
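
For reference, a sidecar.env edited for interception of all outgoing traffic might look like the following. The values mirror the managed instance group example later in this guide; GCP_PROJECT_NUMBER and VPC_NETWORK_NAME are left empty here and must be set for your own project and VPC network.

ENVOY_USER=envoy
# Exclude the proxy user from redirection so that traffic doesn't loop back
# to the proxy
EXCLUDE_ENVOY_USER_FROM_INTERCEPT='true'
# Intercept all traffic by default
SERVICE_CIDR='*'
GCP_PROJECT_NUMBER=''
VPC_NETWORK_NAME=''
ENVOY_PORT='15001'
ENVOY_ADMIN_PORT='15000'
LOG_DIR='/var/log/envoy/'
LOG_LEVEL='info'
XDS_SERVER_CERT='/etc/ssl/certs/ca-certificates.crt'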

Next, for each VM host that runs applications using Traffic Director, perform the following steps:

  1. Copy the entire traffic-director directory with the modified sidecar.env file and Envoy binary to each VM host running applications that you expect to use Traffic Director.
  2. Add the run.sh script to your system's startup configuration so that it runs after every VM reboot.
  3. On each VM host, run the run.sh script:

    cd traffic-director
    sudo ./run.sh start
    
  4. Verify that the Envoy proxy started correctly.

    sudo ./run.sh status
    

    You should see the following output:

    OK: Envoy seems to be running.
    OK: Traffic interception seems to be enabled.
    

    Alternatively, you can confirm that the proxy process is running by using the ps command. Make sure that envoy appears in the output.

    ps aux | grep envoy
    

    You can verify that traffic redirection is configured by using the following command:

    sudo iptables -S -t nat | grep ISTIO_REDIRECT
    

    The expected output is:

    -N ISTIO_REDIRECT
    -A ISTIO_OUTPUT -j ISTIO_REDIRECT
    -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
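
    In addition to the checks above, you can query the Envoy admin interface on the VM. The port below assumes the ENVOY_ADMIN_PORT value of 15000 from the sample sidecar.env; adjust it if you changed that setting.

    # Confirm that Envoy is serving its admin interface.
    curl -s localhost:15000/server_info | head
    # List the clusters that Envoy received from Traffic Director.
    curl -s localhost:15000/clusters | head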
    

Advanced traffic interception configuration

If intercepting all outgoing VM traffic is not acceptable for your deployment, you can redirect only specific traffic to the sidecar proxy, while the rest of the traffic follows the regular route defined by your host networking configuration.

To achieve this, modify the earlier Compute Engine VM host setup procedure as follows:

  1. Decide on the range of IP addresses that Traffic Director-controlled services should resolve to. Traffic destined to these IP addresses is intercepted by the sidecar proxy. The range is specific to your deployment.

  2. In the sidecar.env file, set the value of SERVICE_CIDR to this range. Traffic to these IP addresses is redirected by netfilter to a sidecar proxy and load balanced according to the configuration provided by Traffic Director.

  3. The sidecar proxy is not strictly required to run as a dedicated user that is excluded from traffic interception. However, this is still recommended.

  4. After you execute the run.sh script as directed in the main procedure, iptables is configured to intercept this specific range only.

  5. Run the following command to verify that netfilter is configured correctly. For ${SERVICE_CIDR}, substitute the value that you configured as the intercepted IP address range.

    sudo iptables -L -t nat | grep -E "(ISTIO_REDIRECT).+(${SERVICE_CIDR})"
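
    For example, if your Traffic Director services resolve to addresses in 10.0.0.0/24 (a hypothetical range chosen only for illustration), the sidecar.env entry and the corresponding check would be:

    # In sidecar.env:
    #   SERVICE_CIDR='10.0.0.0/24'
    # After run.sh has run, verify the redirect rule for that range:
    sudo iptables -L -t nat | grep -E "(ISTIO_REDIRECT).+(10.0.0.0/24)"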
    

Configuring a managed instance group for Traffic Director

Managed instance groups use autoscaling to create new backend VMs as needed. If these backend VMs run applications that need access to Traffic Director services, you must install a sidecar proxy and configure traffic interception on the backend VMs.

The following example configures each Compute Engine VM instance in a managed instance group similarly to the configuration described for a single Compute Engine VM in the previous section. This example shows how to:

  • Create a VM template with a full Envoy configuration and a sample service that serves its hostname using the HTTP protocol.
  • Configure a managed instance group using this template.

Creating the instance template

First, create the Compute Engine VM instance template. This template auto-configures the Envoy sidecar proxy and sample apache2 web service through the startup-script parameter.

Console

  1. In the Cloud Console, go to the Instance Templates page.

    Go to the Instance templates page

  2. Click Create instance template.
  3. Fill in the fields as follows:

    • Name: td-vm-template
    • Boot disk: Debian GNU/Linux 9 (stretch)
    • Service account: Compute Engine default service account
    • Access scopes: Allow full access to all Cloud APIs
  4. Under Firewall, select the boxes next to Allow HTTP traffic and Allow HTTPS traffic.

  5. Click Management, Security, Disks, Networking, Sole Tenancy.

  6. In the Management tab, copy the following script into the Startup script field.

    
    #! /bin/bash
    # Add a system user to run Envoy binaries. Login is disabled for this user
    sudo adduser --system --disabled-login envoy
    # Download and extract the Traffic Director tar.gz file
    sudo wget -P /home/envoy https://storage.googleapis.com/traffic-director/traffic-director.tar.gz
    sudo tar -xzf /home/envoy/traffic-director.tar.gz -C /home/envoy
    sudo cat << END > /home/envoy/traffic-director/sidecar.env
    ENVOY_USER=envoy
    # Exclude the proxy user from redirection so that traffic doesn't loop back
    # to the proxy
    EXCLUDE_ENVOY_USER_FROM_INTERCEPT='true'
    # Intercept all traffic by default
    SERVICE_CIDR='*'
    GCP_PROJECT_NUMBER=''
    VPC_NETWORK_NAME=''
    ENVOY_PORT='15001'
    ENVOY_ADMIN_PORT='15000'
    LOG_DIR='/var/log/envoy/'
    LOG_LEVEL='info'
    XDS_SERVER_CERT='/etc/ssl/certs/ca-certificates.crt'
    END
    sudo apt-get update -y
    sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common -y
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    sudo add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/debian stretch stable' -y
    sudo apt-get update -y
    sudo apt-get install docker-ce apache2 -y
    sudo service apache2 restart
    echo '<!doctype html><html><body><h1>'`/bin/hostname`'</h1></body></html>' | sudo tee /var/www/html/index.html
    sudo /home/envoy/traffic-director/pull_envoy.sh
    sudo /home/envoy/traffic-director/run.sh start
    
    
  7. Click Create to create the template.

gcloud

  1. Create the instance template.

    
    gcloud compute instance-templates create td-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=http-td-tag,http-server,https-server \
      --image-family=debian-9 \
      --image-project=debian-cloud \
      --metadata=startup-script="#! /bin/bash
    
    # Add a system user to run Envoy binaries. Login is disabled for this user
    sudo adduser --system --disabled-login envoy
    # Download and extract the Traffic Director tar.gz file
    sudo wget -P /home/envoy https://storage.googleapis.com/traffic-director/traffic-director.tar.gz
    sudo tar -xzf /home/envoy/traffic-director.tar.gz -C /home/envoy
    sudo cat << END > /home/envoy/traffic-director/sidecar.env
    ENVOY_USER=envoy
    # Exclude the proxy user from redirection so that traffic doesn't loop back
    # to the proxy
    EXCLUDE_ENVOY_USER_FROM_INTERCEPT='true'
    # Intercept all traffic by default
    SERVICE_CIDR='*'
    GCP_PROJECT_NUMBER=''
    VPC_NETWORK_NAME=''
    ENVOY_PORT='15001'
    ENVOY_ADMIN_PORT='15000'
    LOG_DIR='/var/log/envoy/'
    LOG_LEVEL='info'
    XDS_SERVER_CERT='/etc/ssl/certs/ca-certificates.crt'
    END
    sudo apt-get update -y
    sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common -y
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    sudo add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/debian stretch stable' -y
    sudo apt-get update -y
    sudo apt-get install docker-ce apache2 -y
    sudo service apache2 restart
    echo '<!doctype html><html><body><h1>'\`/bin/hostname\`'</h1></body></html>' | sudo tee /var/www/html/index.html
    sudo /home/envoy/traffic-director/pull_envoy.sh
    sudo /home/envoy/traffic-director/run.sh start"
    
    

Creating the managed instance group

If you don't already have a managed instance group with services running, create one by using a VM template such as the one shown in the previous section. This example uses the instance template created in the previous section to demonstrate functionality, but you do not have to use that instance template.

Console

  1. Go to the Instance Groups page in the Cloud Console.

    Go to the Instance Groups page

  2. Click Create an instance group.
  3. Enter td-vm-mig-us-central1 as the name of the managed instance group, and select the us-central1-a zone.
  4. Under Group type, select Managed instance group.
  5. Under Instance template, select the instance template you created.
  6. Specify 2 as the number of instances that you want to create in the group.
  7. Click Create to create the new group.

gcloud

Use the gcloud command line tool to create a managed instance group with the VM template you previously created.

gcloud compute instance-groups managed create td-vm-mig-us-central1 \
    --zone us-central1-a --size=2 --template=td-vm-template
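
Optionally, confirm that the group created its two instances. This listing is an additional check, not part of the original procedure.

gcloud compute instance-groups managed list-instances td-vm-mig-us-central1 \
    --zone us-central1-a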

Configuring Traffic Director with Google Cloud load balancing components

The instructions in this section configure Traffic Director load balancing for your services. You configure the following components:

  • A health check and a firewall rule that allows health check traffic to reach your backends.
  • A backend service that uses the health check and has the managed instance group as its backend.
  • A routing rule map, which consists of a URL map, a target HTTP proxy, and a forwarding rule.

Creating the health check

Use the following instructions to create a health check. For more information, refer to Creating health checks.

Console

  1. Go to the Health checks page in the Google Cloud Console.
    Go to the Health checks page
  2. Click Create Health Check.
  3. For the name, enter td-vm-health-check.
  4. For the protocol, select HTTP.
  5. Click Create.

gcloud

  1. Create the health check.

    gcloud compute health-checks create http td-vm-health-check

  2. Create the firewall rule that allows the health checker IP address ranges to reach your backends.

    gcloud compute firewall-rules create fw-allow-health-checks \
      --action ALLOW \
      --direction INGRESS \
      --source-ranges 35.191.0.0/16,130.211.0.0/22 \
      --target-tags http-td-tag,http-server,https-server \
      --rules tcp
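
Optionally, confirm that both resources were created. These describe commands are additional checks rather than part of the original procedure.

gcloud compute health-checks describe td-vm-health-check
gcloud compute firewall-rules describe fw-allow-health-checks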
    

Creating the backend service

Create the backend service. If you use the gcloud command-line tool, you must designate it as a global backend service with a load balancing scheme of INTERNAL_SELF_MANAGED. Add the health check and a managed or unmanaged instance group to the backend service. Note that this example uses the managed instance group, created in Creating the managed instance group, whose Compute Engine VM template runs the sample HTTP service.

Console

  1. Go to the Traffic Director page in the Cloud Console.

    Go to the Traffic Director page

  2. On the Services tab, click Create Service.

  3. Click Continue.

  4. For the service name, enter td-vm-service.

  5. Select the correct VPC network.

  6. Ensure that the Backend type is Instance groups.

  7. Select the managed instance group you created.

  8. Enter the correct Port numbers.

  9. Choose Utilization or Rate as the Balancing mode. The default value is Rate.

  10. Click Done.

  11. Select the health check you created, or click Create another health check and make sure to select HTTP as the protocol.

  12. Click Continue.

  13. Click Create.

gcloud

gcloud compute backend-services create td-vm-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --health-checks td-vm-health-check

gcloud compute backend-services add-backend td-vm-service \
    --instance-group td-vm-mig-us-central1 \
    --instance-group-zone us-central1-a \
    --global
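
After you add the backends, you can optionally check whether the health checks pass. Backends typically report HEALTHY only after the firewall rule from the previous section is in place and the apache2 service on the VMs is responding.

gcloud compute backend-services get-health td-vm-service --global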

Creating the routing rule map

Console

In the Cloud Console, the target proxy is combined with the forwarding rule. When you create the forwarding rule, Google Cloud automatically creates a target HTTP proxy and attaches it to the URL map.

  1. Go to the Traffic Director page in the Cloud Console.

    Go to the Traffic Director page

  2. On the Routing rule maps tab, click Create Routing Rule Map.

  3. Enter a name.

  4. Click Add Forwarding Rule.

  5. For the forwarding rule name, enter td-vm-forwarding-rule.

  6. Select your network.

  7. Select your Internal IP. Traffic sent to this IP address is intercepted by the Envoy proxy and sent to the appropriate service according to the host and path rules.

    The forwarding rule is created as a global forwarding rule with the load-balancing-scheme set to INTERNAL_SELF_MANAGED.

    You can set the address of your forwarding rule to 0.0.0.0. If you do, traffic is routed based on the HTTP hostname and path information configured in the URL map, regardless of the actual destination IP address of the request. In this case, the hostnames of your services, as configured in the host rules, must be unique within your service mesh configuration. That is, you cannot have two different services, with different sets of backends, that both use the same hostname.

    Alternatively, you can enable routing based on the actual destination VIP of the service. If you configure the VIP of your service as an address parameter of the forwarding rule, only requests destined to this address are routed based on the HTTP parameters specified in the URL map.

  8. Click Save.

  9. Optionally, add custom host and path rules or leave the path rules as the defaults and set the host to service-test.

  10. Click Save.

gcloud

  1. Create a URL map that uses the backend service.

    gcloud compute url-maps create td-vm-url-map \
       --default-service td-vm-service
    
  2. Create a URL map path matcher and a host rule to route traffic for your service based on hostname and a path. This example uses service-test as the service name and a default path matcher that matches all path requests for this host (/*).

    gcloud compute url-maps add-path-matcher td-vm-url-map \
       --default-service td-vm-service --path-matcher-name td-vm-path-matcher
    
    gcloud compute url-maps add-host-rule td-vm-url-map --hosts service-test \
       --path-matcher-name td-vm-path-matcher
    
  3. Create the target HTTP proxy.

    gcloud compute target-http-proxies create td-vm-proxy \
       --url-map td-vm-url-map
    
  4. Create the forwarding rule.

    The forwarding rule must be global and must be created with the value of load-balancing-scheme set to INTERNAL_SELF_MANAGED.

    You can set the address of your forwarding rule to 0.0.0.0. If you do, traffic is routed based on the HTTP hostname and path information configured in the URL map, regardless of the actual destination IP address of the request. In this case, the hostnames of your services, as configured in the host rules, must be unique within your service mesh configuration. That is, you cannot have two different services, with different sets of backends, that both use the same hostname.

    Alternatively, you can enable routing based on the actual destination VIP of the service. If you configure the VIP of your service as an address parameter of the forwarding rule, only requests destined to this address are routed based on the HTTP parameters specified in the URL map.

    This example uses 0.0.0.0 as the address parameter, meaning that routing for your service is performed based on the HTTP hostname and path parameters only.

    gcloud compute forwarding-rules create td-vm-forwarding-rule \
       --global \
       --load-balancing-scheme=INTERNAL_SELF_MANAGED \
       --address=0.0.0.0 \
       --target-http-proxy=td-vm-proxy \
       --ports 80 \
       --network default
    

At this point, Traffic Director is configured to load balance traffic for the services specified in the URL map across backends in the managed instance group.

Depending on how your microservices are distributed on your network, you might need to add more forwarding rules or more host and path rules to the URL map. For more information, read the forwarding rules and URL maps documentation.
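
For example, to route a second hostname to a different backend service later, you could extend the same URL map with another path matcher and host rule. The names td-vm-service-2, td-vm-path-matcher-2, and service-test-2 below are hypothetical placeholders.

gcloud compute url-maps add-path-matcher td-vm-url-map \
   --default-service td-vm-service-2 --path-matcher-name td-vm-path-matcher-2

gcloud compute url-maps add-host-rule td-vm-url-map --hosts service-test-2 \
   --path-matcher-name td-vm-path-matcher-2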

Verifying the configuration

When the configuration is complete, each Compute Engine VM that has a sidecar proxy can access services configured in Traffic Director using the HTTP protocol.

If you followed the specific examples in this guide, using the Compute Engine VM template with the demonstration HTTP server and service hostname service-test, use these steps to verify the configuration.

  1. Log in to one of the VM hosts that has a sidecar proxy installed.
  2. Execute the command curl -H 'Host: service-test' 10.0.0.1. This request returns the hostname of the managed instance group backend that served the request.

Note that in step 2 you can use any IP address. For example, the command curl -I -H 'Host: service-test' 1.2.3.4 would also work.

This is because the forwarding rule has the address parameter set to 0.0.0.0, which instructs Traffic Director to match based on the host defined in the URL map. In the example configuration, the hostname is service-test.
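
For example, running the check from a VM that has the sidecar proxy might look like the following. The instance name in the response is hypothetical; it depends on which backend VM in the managed instance group served the request.

curl -H 'Host: service-test' 10.0.0.1

The response contains the serving backend's hostname, for example:

<!doctype html><html><body><h1>td-vm-mig-us-central1-abcd</h1></body></html>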