TCP Proxy Load Balancing Concepts

Google Cloud Platform (GCP) TCP Proxy Load Balancing lets you use a single IP address for all users around the world, and automatically routes traffic to the instances that are closest to the user.

Note that global load balancing requires that you use the Premium Tier of Network Service Tiers, which is the default tier. Otherwise, load balancing is handled regionally.

Cloud TCP Proxy Load Balancing is intended for non-HTTP traffic. For HTTP traffic, use HTTP Load Balancing instead. For proxied SSL traffic, use SSL Proxy Load Balancing.

TCP Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, then proxied over IPv4 to your backends.
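Because IPv6 terminates at the load balancing layer, serving IPv6 clients only requires a second, IPv6 forwarding rule on the same target proxy; the backends stay IPv4. The following sketch shows the pattern, assuming an existing target TCP proxy; all resource names and the port are placeholders:

```shell
# Reserve a global IPv6 address for the load balancer (name is a placeholder).
gcloud compute addresses create tcp-lb-ipv6-address \
    --ip-version=IPV6 \
    --global

# Attach an IPv6 forwarding rule to the existing target TCP proxy.
# Backends are reached over IPv4; only the client-facing side is IPv6.
gcloud compute forwarding-rules create tcp-lb-ipv6-rule \
    --global \
    --target-tcp-proxy=my-tcp-proxy \
    --address=tcp-lb-ipv6-address \
    --ports=110
```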


Overview

When you use TCP Proxy Load Balancing for your TCP traffic, you can terminate your customers’ TCP sessions at the load balancing layer, then forward the traffic to your virtual machine instances using TCP or SSL.

TCP Proxy Load Balancing can be configured as a global load balancing service. With this configuration, you can deploy your instances in multiple regions, and global load balancing automatically directs traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.

TCP Proxy Load Balancing advantages:

  • Intelligent routing — the load balancer can route requests to backend locations that have capacity. In contrast, an L3/L4 load balancer must route to regional backends regardless of capacity. Smarter routing allows provisioning at N+1 or N+2 instead of x*N.
  • Security patching — if vulnerabilities arise in the TCP stack, Cloud Load Balancing applies patches at the load balancer automatically to keep your instances safe.
  • Support for well-known ports — TCP Proxy Load Balancing supports the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, and 5222.

Components

The following are components of TCP Proxy load balancers.

Forwarding rules and addresses

Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy and one or more backend services.

Each forwarding rule provides a single IP address that you can use in DNS records for your application. No DNS-based load balancing is required. You can either reserve a static IP address or let Cloud Load Balancing assign one for you. We recommend reserving a static IP address; otherwise, you must update your DNS record with the newly assigned ephemeral IP address whenever you delete a forwarding rule and create a new one.
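As a sketch of this setup, the commands below reserve a static global IP address and attach a forwarding rule to it. The target proxy name, rule name, and port are placeholders and assume the target proxy already exists:

```shell
# Reserve a static global IPv4 address so the DNS record never has to change.
gcloud compute addresses create tcp-lb-static-ip \
    --ip-version=IPV4 \
    --global

# Create a global forwarding rule that sends traffic on port 110
# to an existing target TCP proxy (my-tcp-proxy is a placeholder).
gcloud compute forwarding-rules create tcp-lb-forwarding-rule \
    --global \
    --target-tcp-proxy=my-tcp-proxy \
    --address=tcp-lb-static-ip \
    --ports=110
```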

Target proxies

TCP Proxy Load Balancing terminates TCP connections from the client and creates new connections to the instances. By default, the original client IP address and port information is not preserved. You can preserve this information by using the PROXY protocol. The target proxies route incoming requests directly to backend services.
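For example, the following command creates a target TCP proxy with the PROXY protocol header enabled; the proxy and backend service names are placeholders and assume the backend service already exists:

```shell
# Create a target TCP proxy that routes to an existing backend service.
# --proxy-header=PROXY_V1 prepends a PROXY protocol header to each backend
# connection so backends can recover the original client IP and port.
gcloud compute target-tcp-proxies create my-tcp-proxy \
    --backend-service=my-backend-service \
    --proxy-header=PROXY_V1
```

If you enable the PROXY protocol header here, your backend software must be configured to parse it; otherwise the header bytes appear as garbage at the start of each connection.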

Backend services

Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group or network endpoint group and serving capacity metadata. Backend serving capacity can be based on CPU or requests per second (RPS).

Each backend service specifies the health checks to perform for the available instances.

To ensure minimal interruptions to your users, you can enable connection draining on backend services. Such interruptions might happen when an instance is terminated, removed manually, or removed by an autoscaler. To learn more about using connection draining to minimize service interruptions, read the Enabling Connection Draining documentation.
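The commands below sketch this flow: create a TCP health check, create a global backend service that uses it, attach an instance group with a serving-capacity cap, and enable connection draining. All resource names, the zone, the port, and the timeout values are placeholder assumptions:

```shell
# Create a TCP health check on the serving port (port is a placeholder).
gcloud compute health-checks create tcp my-tcp-health-check \
    --port=110

# Create a global backend service that uses the health check.
gcloud compute backend-services create my-backend-service \
    --protocol=TCP \
    --health-checks=my-tcp-health-check \
    --timeout=5m \
    --global

# Attach an existing instance group, capping CPU-utilization-based capacity.
gcloud compute backend-services add-backend my-backend-service \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8 \
    --global

# Enable connection draining: give in-flight connections up to 300 s to finish.
gcloud compute backend-services update my-backend-service \
    --connection-draining-timeout=300 \
    --global
```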

Protocol to the backends

When you configure a backend service for the TCP Proxy load balancer, you set the protocol that the backend service uses to communicate with the backends. You can choose either SSL or TCP. The load balancer uses only the protocol that you specify, and will not attempt to negotiate a connection with the other protocol.
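For instance, to switch an existing backend service from TCP to SSL toward the backends, you can update the protocol in place (the service name is a placeholder):

```shell
# Switch the backend-side protocol to SSL; the load balancer then only
# negotiates SSL (not TCP) with the backends.
gcloud compute backend-services update my-backend-service \
    --protocol=SSL \
    --global
```

Because the load balancer never falls back to the other protocol, the backends must actually speak the protocol you set here.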

Backend buckets

Backend buckets direct incoming traffic to Google Cloud Storage buckets instead of instance groups.

Buckets are containers that hold data. You can use buckets as common storage between VM instances, Google App Engine, and other cloud services. Use a storage bucket when you have a large amount of data that doesn't need to be stored locally on a single VM instance.

Firewall rules

Firewall rules allow traffic to reach your instances. You must configure firewall rules to allow traffic from both the load balancer and the health checker.

You can use a single firewall rule if:

  • The rule allows traffic on the port that your global forwarding rule uses.
  • Your health checker uses the same port.

If your health checker uses a different port, you must create a separate firewall rule for that port.
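As a sketch of a single rule covering both cases, the command below allows TCP traffic from Google's documented load balancer and health-check source ranges (130.211.0.0/22 and 35.191.0.0/16). The network name, target tag, and port are placeholders for your own configuration:

```shell
# Allow traffic from the load balancer and health checker to reach
# backend instances tagged tcp-lb-backend on port 110.
gcloud compute firewall-rules create allow-lb-and-health-check \
    --network=default \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=tcp-lb-backend \
    --allow=tcp:110
```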

Note that firewall rules block and allow traffic at the instance level, not at the edges of the network. They cannot prevent traffic from reaching the load balancer itself.

Also note that Google Cloud Platform uses a large range of IP addresses, which change over time. If you need to determine external IP addresses at a particular time, use the instructions in the Google Compute Engine FAQ.

Return path

For health checks, Google Cloud uses special routes that aren't defined in your VPC network. For more information on this, read Load balancer return paths.

TCP Proxy Load Balancing Example

With TCP proxy, traffic coming over a TCP connection is terminated at the load balancing layer, then proxied to the closest available instance group.

In this example, the connections for traffic from users in Iowa and Boston are terminated at the load balancing layer. In the diagram, these connections are labeled 1a and 2a. Separate connections are established from the load balancer to the selected backend instances. These connections are labeled 1b and 2b.

[Figure: Google Cloud Load Balancing with TCP termination]

Session affinity

Session affinity sends all requests from the same client to the same virtual machine instance, if the instance is healthy and has capacity.

TCP Proxy Load Balancing offers client IP affinity, which forwards all requests from the same client IP address to the same instance.
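For example, client IP affinity can be enabled on an existing backend service like this (the service name is a placeholder):

```shell
# Route all connections from a given client IP to the same backend instance.
gcloud compute backend-services update my-backend-service \
    --session-affinity=CLIENT_IP \
    --global
```

Note that clients behind a shared NAT present the same IP address, so client IP affinity can concentrate their traffic on one instance.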

Interfaces

You configure and update the TCP Proxy Load Balancing service using the following interfaces:

  • The gcloud command-line tool: included in the Cloud SDK. The TCP Proxy Load Balancing documentation provides samples using this tool. For a complete overview of the tool, see the gcloud Tool Guide. You can find commands related to load balancing in the gcloud compute command group.

    You can also get detailed help for any gcloud command by using the --help flag:

    gcloud compute http-health-checks create --help
    
  • The Google Cloud Console: The Google Cloud Console can accomplish all load balancing tasks.

  • The REST API: The Cloud Load Balancing API can accomplish all load balancing tasks. The API reference documentation describes the resources and methods available to you.

Open ports

The TCP proxy load balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. The reverse proxy functionality is provided by the Google Front Ends (GFEs).

The firewall rules that you set block traffic from the GFEs to the backends, but do not block incoming traffic to the GFEs.

The TCP proxy load balancers have a number of open ports to support other Google services that run on the same architecture. If you run a security or port scan against the external IP address of your load balancer, additional ports appear to be open.

This does not affect TCP proxy load balancers. External forwarding rules, which are used in the definition of a TCP proxy load balancer, can only reference TCP ports 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, and 5222. Traffic with a different TCP destination port is not forwarded to the load balancer's backend.
