Configuring advanced traffic management

This document provides information on how to configure advanced traffic management for your Traffic Director deployment.

Before you configure advanced traffic management

Follow the instructions in Setting Up Traffic Director, including configuring Traffic Director and any VM hosts or GKE clusters you need. Create the required Google Cloud resources.

Setting up traffic splitting

These instructions assume the following:

  • Your Traffic Director deployment has a URL map called review-url-map.
  • The URL map sends all traffic to one backend service, called review1, which serves as the default backend service as well.
  • You plan to route 5% of traffic to a new version of a service. That service is running on a backend VM or endpoint in a NEG associated with the backend service review2.
  • No host rules or path matchers are used.

To set up traffic splitting, follow these steps:

Console

  1. Go to the Traffic Director page in the Cloud Console.

    Go to the Traffic Director page

  2. Click Create Routing rule maps.

  3. Enter the Routing rule map name.

  4. Select HTTP from the Protocol menu.

  5. Select an existing forwarding rule.

  6. Under Routing rules, select Advanced host, path and route rule.

  7. Under Path matcher, select Split traffic between services.

  8. Click the YAML View button.

  9. Add the following settings to the Path matcher box:

        - defaultService: [SERVICE_URL1]
          name: matcher1
          routeRules:
          - priority: 2
            matchRules:
            - prefixMatch: ''
            routeAction:
              weightedBackendServices:
              - backendService: [SERVICE_URL1]
                weight: 95
              - backendService: [SERVICE_URL2]
                weight: 5
    
  10. Click Done.

  11. Click Create.

gcloud

  1. Use the gcloud export command to get the URL map configuration:

    gcloud compute url-maps export review-url-map \
        --destination=review-url-map-config.yaml
    
  2. Add the following section to the review-url-map-config.yaml file, replacing [SERVICE_URL1] and [SERVICE_URL2] with the URLs of the services between which you are splitting traffic:

        hostRules:
        - description: ''
          hosts:
          - '*'
          pathMatcher: matcher1
        pathMatchers:
        - defaultService: [SERVICE_URL1]
          name: matcher1
          routeRules:
          - priority: 2
            matchRules:
            - prefixMatch: ''
            routeAction:
              weightedBackendServices:
              - backendService: [SERVICE_URL1]
                weight: 95
              - backendService: [SERVICE_URL2]
                weight: 5
  3. Update the URL map:

    gcloud compute url-maps import review-url-map \
        --source=review-url-map-config.yaml

After you are satisfied with the new version, you can gradually adjust the weights of the two services and eventually send all traffic to review2.
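Conceptually, the weightedBackendServices setting behaves like weighted random selection over the configured services. The following Python sketch is illustrative only (it is not part of Traffic Director or Envoy); it shows how a 95/5 weight split distributes requests:

```python
import random

def pick_backend(weighted_services, rng=random.random):
    """Pick a backend service with probability proportional to its weight."""
    total = sum(weight for _, weight in weighted_services)
    point = rng() * total
    cumulative = 0
    for service, weight in weighted_services:
        cumulative += weight
        if point < cumulative:
            return service
    return weighted_services[-1][0]  # guard against floating-point edge cases

# Mirror the 95/5 split between review1 and review2
split = [("review1", 95), ("review2", 5)]
random.seed(0)
counts = {"review1": 0, "review2": 0}
for _ in range(10_000):
    counts[pick_backend(split)] += 1
# counts["review2"] ends up close to 5% of the 10,000 requests
```

Raising the weight of review2 and lowering the weight of review1 shifts the expected share of requests proportionally, which is why gradually adjusting the two weights gives you a controlled migration.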

Setting up session affinity based on HTTP cookie

Advanced traffic management enables you to configure session affinity based on a provided cookie.

To set up HTTP_COOKIE-based session affinity for a backend service named service1, follow these steps:

Console

  1. Go to the Traffic Director page in the Cloud Console.

    Go to the Traffic Director page

  2. Click the service that you want to update.

  3. Click the EDIT pencil.

  4. Click Advanced configurations.

  5. Under Session affinity, select HTTP cookie.

  6. Under Locality Load balancing policy, select Ring hash.

  7. In the HTTP Cookie name field, enter http_cookie.

  8. In the HTTP Cookie path field, enter /cookie_path.

  9. In the HTTP Cookie TTL field, enter 100.

  10. In the Minimum ring size field, enter 10000.

  11. Click SAVE.

gcloud

  1. Use the gcloud export command to get the backend service configuration:

    gcloud compute backend-services export service1 \
        --destination=service1-config.yaml --global

  2. Update the service1-config.yaml file as follows:

      sessionAffinity: 'HTTP_COOKIE'
      localityLbPolicy: 'RING_HASH'
      consistentHash:
        httpCookie:
          name: 'http_cookie'
          path: '/cookie_path'
          ttl:
            seconds: 100
            nanos: 30
        minimumRingSize: 10000
    
  3. Use the gcloud import command to update the backend service:

    gcloud compute backend-services import service1 \
        --source=service1-config.yaml --global
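With RING_HASH and an HTTP cookie hash key, requests that carry the same cookie value are consistently mapped to the same backend. The following Python sketch shows the idea of a consistent-hash ring; it is illustrative only, and the hash function and replica placement here are assumptions, not Envoy's actual implementation:

```python
import bisect
import hashlib

def _hash(key):
    """Stable integer hash of a string key (md5 here; an assumption, not Envoy's hash)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(backends, ring_size=1024):
    """Place several virtual points per backend on a sorted hash ring."""
    replicas = max(1, ring_size // len(backends))
    return sorted(
        (_hash(f"{backend}#{i}"), backend)
        for backend in backends
        for i in range(replicas)
    )

def pick(ring, cookie_value):
    """Map a cookie value to the first ring point at or after its hash, wrapping around."""
    points = [point for point, _ in ring]
    idx = bisect.bisect(points, _hash(cookie_value)) % len(ring)
    return ring[idx][1]

ring = build_ring(["instance-a", "instance-b", "instance-c"])
# Requests carrying the same cookie value always reach the same backend
assert pick(ring, "session-123") == pick(ring, "session-123")
```

A larger minimumRingSize means more virtual points per backend, which evens out the share of cookie values each backend receives at the cost of a larger ring.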

Setting up config filtering based on MetadataFilter match

MetadataFilters are enabled with forwarding rules and with HttpRouteRuleMatch. Use this feature to control a particular forwarding rule or route rule so that the control plane sends it only to proxies whose node metadata matches the MetadataFilter settings. If you do not specify any MetadataFilters, the rule is sent to all Envoy proxies.
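The matching behavior can be sketched as follows in Python (illustrative only; the field names mirror the YAML configuration, and MATCH_ANY is shown alongside MATCH_ALL for completeness):

```python
def rule_applies(metadata_filters, node_metadata):
    """Decide whether a rule is sent to a proxy with the given node metadata."""
    if not metadata_filters:
        return True  # no MetadataFilters: the rule goes to every proxy
    for f in metadata_filters:
        hits = [node_metadata.get(label["name"]) == label["value"]
                for label in f["filterLabels"]]
        if f["filterMatchCriteria"] == "MATCH_ALL" and all(hits):
            return True
        if f["filterMatchCriteria"] == "MATCH_ANY" and any(hits):
            return True
    return False

canary_filter = [{
    "filterMatchCriteria": "MATCH_ALL",
    "filterLabels": [
        {"name": "app", "value": "review"},
        {"name": "version", "value": "canary"},
    ],
}]
assert rule_applies(canary_filter, {"app": "review", "version": "canary"})
assert not rule_applies(canary_filter, {"app": "review", "version": "prod"})
```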

This feature makes it easy to operate a staged deployment of a configuration. For example, suppose that you create a forwarding rule named forwarding-rule1 and you want it to be pushed only to Envoy proxies whose node metadata contains app: 'review' and version: 'canary'.

To add MetadataFilters to a forwarding rule, follow these steps:

  1. Use the gcloud export command to get the forwarding rule config.

    gcloud compute forwarding-rules export forwarding-rule1 \
        --destination=forwarding-rule1-config.yaml --global
    
  2. Update the forwarding-rule1-config.yaml file.

    metadataFilters:
    - filterMatchCriteria: 'MATCH_ALL'
      filterLabels:
      - name: 'app'
        value: 'review'
      - name: 'version'
        value: 'canary'
    
  3. Use the gcloud import command to update the forwarding-rule1-config.yaml file.

    gcloud compute forwarding-rules import forwarding-rule1 \
        --source=forwarding-rule1-config.yaml --global
    
  4. Add node metadata to Envoy before you start Envoy, as described in the following steps. Note that only string values are supported.

    a. For a VM-based deployment, in bootstrap_template.yaml, add the following under the metadata section:

        app: 'review'
        version: 'canary'
    

    b. For a Google Kubernetes Engine-based or Kubernetes-based deployment, in trafficdirector_istio_sidecar.yaml, add the following under the env section:

        - name: ISTIO_META_app
          value: 'review'
        - name: ISTIO_META_version
          value: 'canary'
    

Examples using metadata filtering

Use the following instructions for a scenario in which multiple projects are in the same Shared VPC network and you want each service project's Traffic Director resources to be visible to proxies in the same project. This example corresponds to the first use case in Forwarding rule resource. The second use case is configured similarly, but using different metadata key-value pairs.

The Shared VPC set-up is as follows:

  • Host project name: vpc-host-project
  • Service projects: project1, project2
  • Backend services with backend instances or endpoints running an xDS compliant proxy in project1 and project2

To configure Traffic Director to isolate project1:

  1. Create all forwarding rules in project1 with the following metadata filter:

        metadataFilters:
        - filterMatchCriteria: 'MATCH_ALL'
          filterLabels:
          - name: 'project_name'
            value: 'project1'
          - name: 'version'
            value: 'production'
    
  2. When you configure the proxies deployed to instances or endpoints in project1, include the following metadata in the node metadata section of the bootstrap file:

        project_name: 'project1'
        version: 'production'
    
  3. Verify that the proxies already deployed in project2 did not receive the forwarding rule created in the first step. To do this, try to access services in project1 from a system running a proxy in project2. For information on verifying that a Traffic Director configuration is functioning correctly, see Verifying the configuration.

The following example corresponds to the third use case in Forwarding rule resource.

To test a new configuration on a subset of proxies before you make it available to all proxies:

  1. Start the proxies that you are using for testing with the following node metadata. Do not include this node metadata for proxies that you are not using for testing.

     version: 'test'
    
  2. For each new forwarding rule that you want to test, include the following metadata filter:

      metadataFilters:
      - filterMatchCriteria: 'MATCH_ALL'
        filterLabels:
        - name: 'version'
          value: 'test'
    
  3. Test the new configuration by sending traffic to the test proxies, and make any necessary changes. If the new configuration is working correctly, only the proxies that you test receive the new configuration. The remaining proxies do not receive the new configuration and are not able to use it.

  4. When you confirm that the new configuration works correctly, remove the metadata filter associated with it. This allows all proxies to receive the new configuration.

Troubleshooting

Use this information for troubleshooting when traffic is not being routed according to the route rules and traffic policies that you configured.

Symptoms:

  • Increased traffic to services whose route rules have a higher priority (a lower priority number) than the rule in question.
  • An unexpected increase in 4xx and 5xx HTTP responses for a given route rule.

Solution: Review the priority assigned to each rule, because route rules are interpreted in priority order.

When you define route rules, check to be sure that rules with higher priority (that is, with lower priority numbers) do not inadvertently route traffic that would otherwise have been routed by a subsequent route rule. Consider the following example:

  • First route rule

    • Route rule match: pathPrefix = '/shopping/'
    • Route action: send traffic to backend service service-1
    • Rule priority: 4
  • Second route rule

    • Route rule match: regexMatch = '/shopping/cart/ordering/.*'
    • Route action: send traffic to backend service service-2
    • Rule priority: 8

In this case, a request with the path '/shopping/cart/ordering/cart.html' is routed to service-1. Even though the second rule's regular expression also matches the path, the first rule takes precedence because it has the lower priority number.
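A minimal Python sketch of this evaluation order (illustrative only; real URL map matching supports more match types than are shown here) makes the precedence explicit:

```python
import re

def match_route(route_rules, path):
    """Evaluate route rules in ascending priority order; the first match wins."""
    for rule in sorted(route_rules, key=lambda r: r["priority"]):
        match = rule["match"]
        if "prefixMatch" in match and path.startswith(match["prefixMatch"]):
            return rule["service"]
        if "regexMatch" in match and re.fullmatch(match["regexMatch"], path):
            return rule["service"]
    return None  # no rule matched; the default service would handle this traffic

rules = [
    {"priority": 4, "match": {"prefixMatch": "/shopping/"}, "service": "service-1"},
    {"priority": 8, "match": {"regexMatch": "/shopping/cart/ordering/.*"}, "service": "service-2"},
]
# The prefix rule has the lower priority number, so it wins even though both rules match
assert match_route(rules, "/shopping/cart/ordering/cart.html") == "service-1"
```

To make the regex rule effective for ordering traffic, it would need a lower priority number than the broader prefix rule.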

Limitations

  • If the value of BackendService.sessionAffinity is not NONE, and BackendService.localityLbPolicy is set to a load balancing policy other than MAGLEV or RING_HASH, the session affinity settings will not take effect.
  • UrlMap.headerAction, UrlMap.defaultRouteAction and UrlMap.defaultUrlRedirect do not currently work as intended. You must specify UrlMap.defaultService for handling traffic that does not match any of the hosts in UrlMap.hostRules[] in that UrlMap. Similarly, UrlMap.pathMatchers[].headerAction, UrlMap.pathMatchers[].defaultRouteAction and UrlMap.pathMatchers[].defaultUrlRedirect do not currently work. You must specify UrlMap.pathMatchers[].defaultService for handling traffic that does not match any of the routeRules for that pathMatcher.
  • The gcloud import command doesn't delete top-level fields of the resource, such as the backend service and the URL map. For example, if a backend service is created with settings for circuitBreakers, those settings can be updated via a subsequent gcloud import command. However, those settings cannot be deleted from the backend service. The resource itself can be deleted and recreated without the circuitBreakers settings.
  • Import for forwarding rules doesn't work properly. An exported YAML file can't be re-imported to update the same forwarding rule. The workaround is to export the configuration file, make your changes, delete the forwarding rule, and then import the modified configuration file.