TCP Proxy Load Balancing is a reverse proxy load balancer that distributes TCP traffic coming from the internet to virtual machine (VM) instances in your Google Cloud VPC network. When using TCP Proxy Load Balancing, traffic coming over a TCP connection is terminated at the load balancing layer, and then forwarded to the closest available backend using TCP or SSL.
TCP Proxy Load Balancing lets you use a single IP address for all users worldwide. The TCP proxy load balancer automatically routes traffic to the backends that are closest to the user.
With the Premium Tier, TCP Proxy Load Balancing can be configured as a global load balancing service. With Standard Tier, the TCP proxy load balancer handles load balancing regionally. For details, see Load balancer behavior in Network Service Tiers.
In the example illustrated here, connections for traffic from users in Seoul and Boston are terminated at the load balancing layer. Separate connections are then established from the load balancer to the selected backend instances.
TCP Proxy Load Balancing is intended for TCP traffic on specific well-known ports, such as port 25 for Simple Mail Transfer Protocol (SMTP). For more information, see Port specifications. For client traffic that is encrypted on these same ports, use SSL Proxy Load Balancing.
For information about how the Google Cloud load balancers differ from each other, see the documentation comparing the Google Cloud load balancers.
Some benefits of the TCP proxy load balancer include:
- IPv6 termination. TCP Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, and then proxied over IPv4 to your backends.
- Intelligent routing. The load balancer can route requests to backend locations where there is capacity. In contrast, an L3/L4 load balancer must route to regional backends without considering capacity. The use of smarter routing allows provisioning at N+1 or N+2 instead of x*N.
- Security patching. If vulnerabilities arise in the TCP stack, Cloud Load Balancing applies patches at the load balancer automatically to keep your backends safe.
- Support for the following well-known TCP ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300.
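The provisioning benefit mentioned in the intelligent-routing bullet can be illustrated with some quick arithmetic. This is a hypothetical model with made-up numbers, not a sizing formula from the product:

```python
# Hypothetical illustration of the provisioning benefit of capacity-aware
# routing. With capacity-unaware regional balancing, each region must hold
# its own spare capacity (x * N overall). With capacity-aware global routing,
# spare capacity can be pooled, so N + 1 regions' worth of capacity can be
# enough across the whole deployment.

def regional_spare(regions: int, needed_per_region: int, redundancy: int = 2) -> int:
    """Total instances if every region provisions redundancy * N on its own."""
    return regions * needed_per_region * redundancy

def global_spare(regions: int, needed_per_region: int, extra_regions: int = 1) -> int:
    """Total instances if spare capacity is pooled: N per region plus a shared buffer."""
    return (regions + extra_regions) * needed_per_region

# Three regions, each needing 10 instances of steady-state capacity:
print(regional_spare(3, 10))  # 60 instances at 2x redundancy per region (x * N)
print(global_spare(3, 10))    # 40 instances with one pooled spare region (N + 1)
```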
The following are components of TCP proxy load balancers.
Forwarding rules and IP addresses
Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy and a backend service.
Each forwarding rule provides a single IP address that you can use in DNS records for your application. No DNS-based load balancing is required. You can either reserve a static IP address that you can use or let Cloud Load Balancing assign one for you. We recommend that you reserve a static IP address; otherwise, you must update your DNS record with the newly assigned ephemeral IP address whenever you delete a forwarding rule and create a new one.
External forwarding rules used in the definition of a TCP proxy load balancer can reference exactly one of the ports listed in Port specifications for forwarding rules.
TCP Proxy Load Balancing terminates TCP connections from the client and creates new connections to the backends. By default, the original client IP address and port information is not preserved. You can preserve this information by using the PROXY protocol. The target proxies route incoming requests directly to backend services.
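Because the load balancer terminates client connections, a backend normally sees the GFE's address rather than the client's. When the PROXY protocol is enabled, each new backend connection begins with a human-readable header that carries the original connection's addresses. A minimal sketch of parsing a version 1 header on the backend side (the field layout follows the PROXY protocol specification; the helper function name is ours):

```python
# Minimal parser for a PROXY protocol v1 header, which the load balancer
# prepends to each backend connection when the PROXY protocol is enabled.
# Header format: "PROXY <TCP4|TCP6> <src_ip> <dst_ip> <src_port> <dst_port>\r\n"

def parse_proxy_v1(data: bytes):
    """Split a PROXY v1 header off the byte stream; return (info, remaining_payload)."""
    header, sep, payload = data.partition(b"\r\n")
    if not sep or not header.startswith(b"PROXY "):
        raise ValueError("not a PROXY protocol v1 header")
    _, family, src_ip, dst_ip, src_port, dst_port = header.decode("ascii").split(" ")
    info = {
        "family": family,
        "client": (src_ip, int(src_port)),       # original client address
        "destination": (dst_ip, int(dst_port)),  # address the client connected to
    }
    return info, payload

info, payload = parse_proxy_v1(
    b"PROXY TCP4 203.0.113.7 198.51.100.1 56324 25\r\nEHLO example\r\n"
)
print(info["client"])  # ('203.0.113.7', 56324)
```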
Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group or network endpoint group, and information about the backend's serving capacity. Backend serving capacity can be based on CPU or requests per second (RPS).
TCP proxy load balancers each have a single backend service resource. Changes to the backend service are not instantaneous. It can take several minutes for changes to propagate to Google Front Ends (GFEs).
Each backend service specifies the health checks to perform for the available backends.
To ensure minimal interruptions to your users, you can enable connection draining on backend services. Such interruptions might happen when a backend is terminated, removed manually, or removed by an autoscaler. To learn more about using connection draining to minimize service interruptions, see Enabling connection draining.
Protocol for communicating with the backends
When you configure a backend service for the TCP proxy load balancer, you set the protocol that the backend service uses to communicate with the backends. You can choose either SSL or TCP. The load balancer uses only the protocol that you specify, and does not attempt to negotiate a connection with the other protocol.
The backend instances must allow connections from the following sources:
- The load balancer Google Front End (GFE) for all requests sent to your backends
- Health check probes
To allow this traffic, you must create ingress firewall rules. The ports for these firewall rules must allow traffic as follows:
- To the destination port for each backend service's health check.
- For instance group backends: determined by the mapping between the backend service's named port and the port numbers associated with that named port on each instance group. The numbers can vary among instance groups assigned to the same backend service.
- For GCE_VM_IP_PORT NEG backends: to the port numbers of the endpoints.
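The named-port rule above means the set of ports to open can differ per instance group. A small sketch of collecting that set (the mappings shown are made-up examples, not values from any real configuration):

```python
# Hypothetical sketch: collect the ports an ingress allow rule must cover,
# given the backend service's named port, each instance group's mapping from
# named ports to port numbers, and the health check's destination port.

def ports_to_allow(named_port: str, group_mappings: list[dict], health_check_port: int) -> set[int]:
    ports = {health_check_port}  # health check probes need their own port
    for mapping in group_mappings:
        # The same named port can map to different numbers per instance group.
        ports.add(mapping[named_port])
    return ports

groups = [
    {"smtp": 25},    # instance group A maps named port "smtp" -> 25
    {"smtp": 2525},  # instance group B maps it to 2525
]
print(sorted(ports_to_allow("smtp", groups, health_check_port=80)))  # [25, 80, 2525]
```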
Firewall rules are implemented at the VM instance level, not on GFE proxies. You cannot use Google Cloud firewall rules to prevent traffic from reaching the load balancer. You can use Google Cloud Armor to achieve this.
For more information about health check probes and why it's necessary to allow traffic from them, see Probe IP ranges and firewall rules.
For SSL proxy load balancers and TCP proxy load balancers, the required source ranges are as follows:
- 130.211.0.0/22
- 35.191.0.0/16

These ranges apply to health checks and requests from the GFE.
Source IP addresses
The source IP address for packets, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two TCP connections:
Connection 1, from original client to the load balancer (GFE):
- Source IP address: the original client (or external IP address if the client is behind NAT or a forward proxy).
- Destination IP address: your load balancer's IP address.
Connection 2, from the load balancer (GFE) to the backend VM or endpoint:
- Source IP address: an IP address in one of the ranges specified in Firewall rules.
- Destination IP address: the internal IP address of the backend VM or container in the Virtual Private Cloud (VPC) network.
The TCP proxy load balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. These load balancers are implemented using Google Front End (GFE) proxies worldwide.
GFEs have several open ports to support other Google services that run on the same architecture. To see a list of some of the ports likely to be open on GFEs, see Forwarding rule: Port specifications. There might be other open ports for other Google services running on GFEs.
Running a port scan on the IP address of a GFE-based load balancer is not useful from an auditing perspective for the following reasons:
A port scan (for example, with nmap) generally expects no response packet or a TCP RST packet when performing TCP SYN probing. GFEs will send SYN-ACK packets in response to SYN probes for a variety of ports if your load balancer uses a Premium Tier IP address. However, GFEs only send packets to your backends in response to packets sent to your load balancer's IP address and the destination port configured on its forwarding rule. Packets sent to different load balancer IP addresses, or to your load balancer's IP address on a port not configured in your forwarding rule, do not result in packets being sent to your load balancer's backends. Even without any special configuration, Google infrastructure and GFEs provide defense-in-depth for DDoS attacks and SYN floods.
Packets sent to the IP address of your load balancer could be answered by any GFE in Google's fleet; however, scanning a load balancer IP address and destination port combination only interrogates a single GFE per TCP connection. The IP address of your load balancer is not assigned to a single device or system. Thus, scanning the IP address of a GFE-based load balancer does not scan all the GFEs in Google's fleet.
With that in mind, the following are some more effective ways to audit the security of your backend instances:
A security auditor should inspect the forwarding rule configuration for the load balancer. The forwarding rules define the destination port for which your load balancer accepts packets and forwards them to the backends. For GFE-based load balancers, each external forwarding rule can only reference a single destination TCP port.
A security auditor should inspect the firewall rule configuration applicable to backend VMs. The firewall rules that you set block traffic from the GFEs to the backend VMs, but do not block incoming traffic to the GFEs. For best practices, see the firewall rules section.
The way a TCP proxy load balancer distributes traffic to its backends depends on the balancing mode and the hashing method selected to choose a backend (session affinity).
How connections are distributed
For Premium Tier:
- You can have only one backend service, and the backend service can have backends in multiple regions. For global load balancing, you deploy your backends in multiple regions, and the load balancer automatically directs traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.
- Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.
- If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user. Details for the process are documented on this page.
For Standard Tier:
- Google advertises your load balancer's IP address from points of presence associated with the forwarding rule's region. The load balancer uses a regional external IP address.
- You can configure backends in the same region as the forwarding rule. The process documented here still applies, but GFEs only direct requests to healthy backends in that one region.
Request distribution process: the balancing mode and choice of target define backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG, zonal instance group, or zone of a regional instance group. Distribution within a zone is then done with consistent hashing.
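Consistent hashing has the property that adding or removing a backend remaps only a small fraction of connections. A generic hash-ring sketch illustrating that property (this is not Google's actual implementation):

```python
# Generic consistent-hash ring, sketched only to illustrate "distribution
# within a zone is then done with consistent hashing". Each backend is placed
# on the ring at many virtual points; a connection key is routed to the first
# backend point at or after its own hash.
import hashlib
from bisect import bisect

def _h(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, backends, vnodes=100):
        self.ring = sorted((_h(f"{b}#{i}"), b) for b in backends for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def pick(self, conn_key: str) -> str:
        idx = bisect(self.points, _h(conn_key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["vm-a", "vm-b", "vm-c"])
# The same connection key always lands on the same backend:
assert ring.pick("203.0.113.7:56324") == ring.pick("203.0.113.7:56324")
```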
The load balancer uses the following process:
- The forwarding rule's external IP address is advertised by edge routers at the borders of Google's network. Each advertisement lists a next hop to a Layer 3/4 load balancing system (Maglev) as close to the user as possible.
- Maglev systems inspect the source IP address of the incoming packet. They direct the incoming request to the Maglev systems that Google's geo-IP systems determine are as close to the user as possible.
- The Maglev systems route traffic to a first-layer Google Front End (GFE). The first-layer GFE terminates TLS if required and then routes traffic to second-layer GFEs according to this process:
- If a backend service uses instance group or GCE_VM_IP_PORT NEG backends, the first-layer GFEs prefer second-layer GFEs that are located in or near the region that contains the instance group or NEG.
- For backend buckets and backend services with hybrid NEGs, serverless NEGs, and internet NEGs, the first-layer GFEs choose second-layer GFEs in a subset of regions such that the round trip time between the two GFEs is minimized.
Second-layer GFE preference is not a guarantee, and it can dynamically change based on Google's network conditions and maintenance.
Second-layer GFEs are aware of health check status and actual backend capacity usage.
- The second-layer GFE directs requests to backends in zones within its region.
- For Premium Tier, sometimes second-layer GFEs send requests to backends in zones of different regions. This behavior is called spillover.
Spillover is governed by two principles:
- Spillover is possible when all backends known to a second-layer GFE are at capacity or are unhealthy.
- The second-layer GFE has information for healthy, available backends in zones of a different region.
The second-layer GFEs are typically configured to serve a subset of backend locations.
Spillover behavior does not exhaust all possible Google Cloud zones. If you need to direct traffic away from backends in a particular zone or in an entire region, you must set the capacity scaler to zero. Configuring backends to fail health checks does not guarantee that the second-layer GFE spills over to backends in zones of a different region.
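The two spillover principles can be stated as a simple predicate. This is a hypothetical model of the decision, not Google's implementation:

```python
# Sketch of the two spillover principles described above: a second-layer GFE
# spills to another region only when every backend it knows about locally is
# at capacity or unhealthy AND it knows of healthy, available remote capacity.

def should_spill_over(local_backends, remote_backends) -> bool:
    local_exhausted = all(b["full"] or not b["healthy"] for b in local_backends)
    remote_available = any(b["healthy"] and not b["full"] for b in remote_backends)
    return local_exhausted and remote_available

local = [{"healthy": True, "full": True}, {"healthy": False, "full": False}]
remote = [{"healthy": True, "full": False}]
print(should_spill_over(local, remote))  # True: local is exhausted, remote has capacity
```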
When distributing requests to backends, GFEs operate at a zonal level.
With a low number of connections, second-layer GFEs sometimes prefer one zone in a region over the other zones. This preference is normal and expected. The distribution among zones in the region doesn't become even until the load balancer receives more connections.
When you add a backend to the backend service, you set a load balancing mode. For TCP Proxy Load Balancing, the balancing mode can be CONNECTION or UTILIZATION.

If the load balancing mode is CONNECTION, the load is spread based on how many concurrent connections the backend can handle. You must also specify exactly one of the following parameters: maxConnections (except for regional managed instance groups), maxConnectionsPerInstance, or maxConnectionsPerEndpoint.

If the load balancing mode is UTILIZATION, the load is spread based on the utilization of instances in an instance group.
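In CONNECTION mode, a backend's fullness can be thought of as the ratio of concurrent connections to its configured maximum. A sketch of that idea (the parameter name mirrors the backend service field, but the model itself is ours):

```python
# Hypothetical sketch of CONNECTION-mode fullness: a backend is "at capacity"
# once its concurrent connections reach its configured maximum.

def connection_fullness(current_connections: int, max_connections: int) -> float:
    """Fraction of the connection target in use; >= 1.0 means at capacity."""
    return current_connections / max_connections

backend = {"current": 80, "maxConnections": 100}
fullness = connection_fullness(backend["current"], backend["maxConnections"])
print(fullness)         # 0.8
print(fullness >= 1.0)  # False: still has capacity for new connections
```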
For information about comparing the load balancer types and the supported balancing modes, see Load balancing methods.
Session affinity sends all requests from the same client to the same backend, if the backend is healthy and has capacity.
TCP Proxy Load Balancing offers client IP affinity, which forwards all requests from the same client IP address to the same backend.
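The key property of client IP affinity is that the backend choice depends only on the client's IP address, not the source port, so repeated connections from the same address stay on the same backend while the backend set is unchanged. An illustrative sketch, not the load balancer's actual algorithm:

```python
# Sketch of client IP affinity: hash only the client's IP address (not the
# source port), so every connection from that address maps to the same backend
# as long as the backend set does not change.
import hashlib

def pick_backend(client_ip: str, backends: list) -> str:
    digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["vm-a", "vm-b", "vm-c"]
# Two connections from the same client IP, arriving from different source
# ports, still land on the same backend:
assert pick_backend("203.0.113.7", backends) == pick_backend("203.0.113.7", backends)
```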
If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region. If all backends within a region are unhealthy, traffic is distributed to healthy backends in other regions (Premium Tier only). If all backends are unhealthy, the load balancer drops traffic.
Load balancing for GKE applications
If you are building applications in Google Kubernetes Engine, you can use standalone NEGs to load balance traffic directly to containers. With standalone NEGs you are responsible for creating the Service object that creates the NEG, and then associating the NEG with the backend service so that the load balancer can connect to the Pods.
Related GKE documentation:
Limitations
- TCP proxy load balancers do not support VPC Network Peering.
What's next
- To configure a TCP proxy load balancer, see Setting up TCP Proxy Load Balancing.
- To set up monitoring for your TCP proxy load balancer, see Using monitoring.
- To view a list of Google points of presence (PoPs), see GFE locations.