Google Cloud Internal TCP/UDP Load Balancing is a regional load balancer that enables you to run and scale your services behind an internal load balancing IP address that is accessible only to your internal virtual machine (VM) instances.
Internal TCP/UDP Load Balancing distributes traffic among VM instances in the same region in a Virtual Private Cloud (VPC) network by using an internal IP address.
As shown in the following high-level diagram, an Internal TCP/UDP Load Balancing service has a frontend (the forwarding rule) and a backend (the backend service and instance groups).
For information about how the Google Cloud load balancers differ from each other, see the documentation that compares the Google Cloud load balancers.
Protocols, scheme, and scope
Each internal TCP/UDP load balancer supports:
- One backend service with load balancing scheme INTERNAL and the TCP or the UDP protocol (but not both)
- Backend managed and unmanaged instance groups that are located in one region and VPC network
- One or more forwarding rules, each using either the TCP or UDP protocol, matching the backend service's protocol
- Each forwarding rule with its own unique IP address or multiple forwarding rules that share a common IP address
- Each forwarding rule with up to five ports or all ports
- If global access is enabled, clients in any region
- If global access is disabled, clients in the same region as the load balancer
An internal TCP/UDP load balancer doesn't support:
- Backend VMs in multiple regions
- Balancing traffic that originates from the internet, unless you're using it with an external load balancer
You can enable global access to allow client VM instances from any region to access your internal TCP/UDP load balancer. The client VM must be in the same network or in a VPC network connected by using VPC Network Peering.
The following table summarizes client access.
| Global access disabled | Global access enabled |
|---|---|
| Clients must be in the same region as the load balancer. They also must be in the same VPC network as the load balancer or in a VPC network that is connected to the load balancer's VPC network by using VPC Network Peering. | Clients can be in any region. They still must be in the same VPC network as the load balancer or in a VPC network that's connected to the load balancer's VPC network by using VPC Network Peering. |
| On-premises clients can access the load balancer through Cloud VPN tunnels or interconnect attachments (VLANs). These tunnels or attachments must be in the same region as the load balancer. | On-premises clients can access the load balancer through Cloud VPN tunnels or interconnect attachments (VLANs). These tunnels or attachments can be in any region. |
The internal TCP/UDP load balancers address many use cases. This section provides a few high-level examples.
You can access an internal TCP/UDP load balancer in your VPC network from a connected network by using the following:
- VPC Network Peering
- Cloud VPN and Cloud Interconnect
For detailed examples, see Internal TCP/UDP Load Balancing and connected networks.
Three-tier web service example
You can use Internal TCP/UDP Load Balancing in conjunction with other load balancers. For example, if you incorporate external HTTP(S) load balancers, the external load balancer is the web tier and relies on services behind the internal load balancer.
The following diagram depicts an example of a three-tier configuration that uses external HTTP(S) load balancers and internal TCP/UDP load balancers:
Three-tier web service with global access example
If you enable global access, your web-tier VMs can be in another region, as shown in the following diagram.
This multi-tier application example shows the following:
- A globally-available internet-facing web tier that load balances traffic with HTTP(S) Load Balancing.
- An internal backend load-balanced database tier in the us-east1 region that is accessed by the global web tier.
- A client VM that is part of the web tier in the europe-west1 region that accesses the internal load-balanced database tier located in us-east1.
The following table summarizes the component requirements for Internal TCP/UDP Load Balancing used with a Shared VPC network. For an example, see creating an internal TCP/UDP load balancer on the Provisioning Shared VPC page.
- IP address: The internal IP address must be defined in the same project as the backend VMs. For the load balancer to be available in a Shared VPC network, the Google Cloud internal IP address must be defined in the same service project where the backend VMs are located, and it must reference a subnet in the desired Shared VPC network in the host project. The address itself comes from the primary IP range of the referenced subnet. If you create an internal IP address in a service project and the IP address subnet is in the service project's VPC network, your internal TCP/UDP load balancer is local to the service project. It's not local to any Shared VPC host project.
- Forwarding rule: The internal forwarding rule must be defined in the same project as the backend VMs. For the load balancer to be available in a Shared VPC network, the internal forwarding rule must be defined in the same service project where the backend VMs are located, and it must reference the same subnet (in the Shared VPC network) that the associated internal IP address references. If you create an internal forwarding rule in a service project and the forwarding rule's subnet is in the service project's VPC network, your internal TCP/UDP load balancer is local to the service project. It's not local to any Shared VPC host project.
- Backend components: In a Shared VPC scenario, backend VMs are located in a service project. A regional internal backend service and health check must be defined in that service project.
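As an illustration of these requirements, the following sketch creates the frontend of the load balancer in a service project while referencing a subnet in the Shared VPC host project. All project IDs, names, the region, and the port are placeholders, not values from this page.

```
# Hypothetical sketch: create the internal forwarding rule in the service
# project, referencing a subnet that lives in the Shared VPC host project.
gcloud compute forwarding-rules create fr-ilb \
    --project=SERVICE_PROJECT_ID \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-subnet \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
```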
Additional use cases
- Using an internal TCP/UDP load balancer as the next hop to a NAT gateway—You can route traffic to your firewall or gateway virtual appliance backends through an internal TCP/UDP load balancer.
- Hub and spoke: Exchanging next-hop routes by using VPC Network Peering—You can configure a hub-and-spoke topology with your next-hop firewall virtual appliances located in the hub VPC network. Routes using the load balancer as a next hop in the hub VPC network can be usable in each spoke VPC network.
- Load balancing to multiple NICs on the backend VMs.
For more information about these use cases, see Internal TCP/UDP load balancers as next hops.
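As a sketch of the next-hop pattern, a custom static route that sends traffic through an internal TCP/UDP load balancer might look like the following. The network, destination range, forwarding rule name, and region are placeholders.

```
# Hypothetical sketch: route egress traffic in the hub VPC network through
# the internal load balancer that fronts the firewall virtual appliances.
gcloud compute routes create route-through-fw-ilb \
    --network=hub \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=fr-ilb-firewall \
    --next-hop-ilb-region=us-west1
```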
How Internal TCP/UDP Load Balancing works
An internal TCP/UDP load balancer has the following characteristics:
- It's a managed service.
- It's not a proxy.
- It's implemented in virtual networking.
Unlike a proxy load balancer, an internal TCP/UDP load balancer doesn't terminate connections from clients and then open new connections to backends. Instead, an internal TCP/UDP load balancer routes original connections directly from clients to the healthy backends, without any interruption.
- There's no intermediate device or single point of failure.
- Client requests to the load balancer's IP address go directly to the healthy backend VMs.
- Responses from the healthy backend VMs go directly to the clients, not back through the load balancer. TCP responses use direct server return. For more information, see TCP and UDP request and return packets.
The load balancer monitors VM health by using health check probes. For more information, see the Health check section.
The Google Cloud Linux guest environment, Windows guest environment, or an equivalent process configures each backend VM with the IP address of the load balancer. Google Cloud virtual networking manages traffic delivery, scaling as appropriate.
An internal TCP/UDP load balancer with multiple backend instance groups distributes connections among backend VMs in all of those instance groups. For information about the distribution method and its configuration options, see traffic distribution.
You can use any type of instance group—unmanaged instance groups, managed zonal instance groups, or managed regional instance groups—but not network endpoint groups (NEGs), as backends for your load balancer.
High availability describes how to design an internal load balancer that is not dependent on a single zone.
Instances that participate as backend VMs for internal TCP/UDP load balancers must be running the appropriate Linux or Windows guest environment or other processes that provide equivalent functionality. This guest environment must be able to contact the metadata server (metadata.google.internal, 169.254.169.254) to read instance metadata so that it can generate local routes to accept traffic sent to the load balancer's internal IP address.
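On a Linux backend VM, one way to confirm that the guest environment has programmed the load balancer's address is to inspect the local routing table, as in the following sketch. The 10.1.2.99 address is a placeholder for your load balancer's internal IP address.

```
# Hypothetical check on a backend VM: the guest environment should have
# added a local route for the load balancer's internal IP address.
ip route show table local | grep 10.1.2.99
```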
This diagram illustrates traffic distribution among VMs located in two separate instance groups. Traffic sent from the client instance to the IP address of the load balancer is distributed among backend VMs in either instance group. Responses sent from any of the serving backend VMs are delivered directly to the client VM.
The internal TCP/UDP load balancer is highly available by design. There are no special steps to make the load balancer highly available because the mechanism doesn't rely on a single device or VM instance.
To ensure that your backend VM instances are deployed to multiple zones, follow these deployment recommendations:
- Use regional managed instance groups if you can deploy your software by using instance templates. Regional managed instance groups automatically distribute traffic among multiple zones, providing the best option to avoid potential issues in any given zone.
- If you use zonal managed instance groups or unmanaged instance groups, use multiple instance groups in different zones (in the same region) for the same backend service. Using multiple zones protects against potential issues in any given zone.
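For example, a regional managed instance group that spreads backend VMs across the zones of a region might be created as in the following sketch. The group name, instance template, region, and size are placeholders.

```
# Hypothetical sketch: a regional managed instance group places its VMs in
# multiple zones of us-west1.
gcloud compute instance-groups managed create ig-ilb-backends \
    --region=us-west1 \
    --template=ilb-backend-template \
    --size=3
```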
An internal TCP/UDP load balancer consists of the following Google Cloud components.
- Internal IP address: This is the address for the load balancer. The internal IP address must be in the same subnet as the internal forwarding rule. The subnet must be in the same region and VPC network as the backend service.
- Internal forwarding rule: An internal forwarding rule in combination with the internal IP address is the frontend of the load balancer. It defines the protocol and ports that the load balancer accepts, and it directs traffic to a regional internal backend service. Forwarding rules for internal TCP/UDP load balancers must do the following:
  - Have a load balancing scheme of INTERNAL.
  - Use an IP protocol of either TCP or UDP, matching the backend service's protocol.
  - Reference a subnet in the same VPC network and region as the backend service.
- Regional internal backend service: The regional internal backend service defines the protocol used to communicate with the backends (instance groups), and it specifies a health check. Backends can be unmanaged instance groups, managed zonal instance groups, or managed regional instance groups. The backend service must do the following:
  - Have a load balancing scheme of INTERNAL.
  - Use a protocol of either TCP or UDP, matching the forwarding rule's protocol.
  - Have an associated health check.
  - Have an associated region. The region of the forwarding rule and all backend instance groups must be the same as the backend service's region.
  - Be associated with a single VPC network. When not specified, the network is inferred based on the network used by each backend VM's default network interface (nic0).
  Although the backend service is not tied to a specific subnet, the forwarding rule's subnet must be in the same VPC network as the backend service.
- Health check: Every backend service must have an associated health check. The health check defines the parameters under which Google Cloud considers the backends that it manages to be eligible to receive traffic. Only healthy VMs in the backend instance groups receive traffic sent from client VMs to the IP address of the load balancer. Even though the forwarding rule and backend service can use either TCP or UDP, Google Cloud does not offer a health check for UDP traffic. For more information, see Health checks and UDP traffic.
Internal IP address
Internal TCP/UDP Load Balancing uses an internal IPv4 address from the primary IP range of the subnet that you select when you create the internal forwarding rule. The IP address can't be from a secondary IP range of the subnet.
You specify the IP address for an internal TCP/UDP load balancer when you create the forwarding rule. You can choose to receive an ephemeral IP address or use a reserved IP address.
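If you want a reserved (static) address rather than an ephemeral one, you can reserve it from the subnet's primary range before you create the forwarding rule, as in the following sketch. The name, region, subnet, and address are placeholders.

```
# Hypothetical sketch: reserve a static internal IP address from the
# primary range of the subnet that the forwarding rule will use.
gcloud compute addresses create ilb-ip \
    --region=us-west1 \
    --subnet=lb-subnet \
    --addresses=10.1.2.99
```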
Your internal TCP/UDP load balancer requires the following firewall rules:
- An ingress allow rule to permit traffic from the health check ranges
- An ingress allow rule that permits traffic from the internal IP addresses of clients
The example in Configuring firewall rules demonstrates how to create both.
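The following sketch shows one way to create both rules. The network name, client source range, target tags, and ports are placeholders; 130.211.0.0/22 and 35.191.0.0/16 are the Google Cloud health check probe ranges.

```
# Hypothetical sketch: allow health check probes to reach the backend VMs.
gcloud compute firewall-rules create fw-allow-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=ilb-backend \
    --rules=tcp:80

# Hypothetical sketch: allow traffic from internal clients to the backends.
gcloud compute firewall-rules create fw-allow-internal-clients \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --source-ranges=10.0.0.0/8 \
    --target-tags=ilb-backend \
    --rules=tcp:80,udp:80
```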
A forwarding rule specifies the protocol and ports on which the load balancer accepts traffic. Because internal TCP/UDP load balancers are not proxies, they pass traffic to backends on the same protocol and port.
An internal TCP/UDP load balancer requires at least one internal forwarding rule. You can define multiple forwarding rules for the same load balancer.
The forwarding rule must reference a specific subnet in the same VPC network and region as the load balancer's backend components. This requirement has the following implications:
- The subnet that you specify for the forwarding rule doesn't need to be the same as any of the subnets used by backend VMs; however, the subnet must be in the same region as the forwarding rule.
- When you create an internal forwarding rule, Google Cloud chooses an available regional internal IP address from the primary IP address range of the subnet that you select. Alternatively, you can specify an internal IP address in the subnet's primary IP range.
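For example, a forwarding rule that references a specific subnet and uses a reserved internal IP address might be created as in the following sketch. The names, region, address, and ports are placeholders.

```
# Hypothetical sketch: a TCP forwarding rule on ports 80 and 443 that uses
# a reserved internal IP address from lb-subnet.
gcloud compute forwarding-rules create fr-ilb \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=TCP \
    --ports=80,443 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
```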
Forwarding rules and global access
An internal TCP/UDP load balancer's forwarding rules are regional, even when global access is enabled. After you enable global access, the regional internal forwarding rule's allowGlobalAccess flag is set to true.
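For example, you can enable global access on an existing internal forwarding rule as in the following sketch; the rule name and region are placeholders.

```
# Hypothetical sketch: enable global access on an existing internal
# forwarding rule, which sets allowGlobalAccess to true.
gcloud compute forwarding-rules update fr-ilb \
    --region=us-west1 \
    --allow-global-access
```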
Forwarding rules and port specifications
When you create an internal forwarding rule, you must choose one of the following port specifications:
- Specify at least one and up to five ports, by number.
- Specify ALL to forward traffic on all ports.
An internal forwarding rule that supports either all TCP ports or all UDP ports allows backend VMs to run multiple applications, each on its own port. Traffic sent to a given port is delivered to the corresponding application, and all applications use the same IP address.
When you need to forward traffic on more than five specific ports, combine
firewall rules with forwarding rules. When you create the
forwarding rule, specify all ports, and then create ingress
rules that only permit traffic to the desired ports. Apply the firewall rules to
the backend VMs.
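The following sketch illustrates this pattern under assumed names: an all-ports forwarding rule combined with an ingress firewall rule that permits only the ports the backends actually serve.

```
# Hypothetical sketch: forward all TCP ports to the backend service.
gcloud compute forwarding-rules create fr-ilb-all-ports \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --ip-protocol=TCP \
    --ports=ALL \
    --backend-service=be-ilb \
    --backend-service-region=us-west1

# Hypothetical sketch: permit only ports 8080-8089 on the backend VMs.
gcloud compute firewall-rules create fw-allow-ilb-ports \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --source-ranges=10.0.0.0/8 \
    --target-tags=ilb-backend \
    --rules=tcp:8080-8089
```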
You cannot modify a forwarding rule after you create it. If you need to change the specified ports or the internal IP address for an internal forwarding rule, you must delete and recreate it.
Multiple forwarding rules for a single backend service
You can configure multiple internal forwarding rules that all reference the same internal backend service. An internal TCP/UDP load balancer requires at least one internal forwarding rule.
Configuring multiple forwarding rules for the same backend service lets you do the following, using either TCP or UDP (not both):
- Assign multiple IP addresses to the load balancer. You can create multiple forwarding rules, each using a unique IP address. Each forwarding rule can specify all ports or a set of up to five ports.
- Assign a specific set of ports, using the same IP address, to the load balancer. You can create multiple forwarding rules sharing the same IP address, where each forwarding rule uses a specific set of up to five ports. This is an alternative to configuring a single forwarding rule that specifies all ports.
For more information about scenarios involving two or more internal forwarding rules that share a common internal IP address, see Multiple forwarding rules with the same IP address.
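As a sketch, two forwarding rules that reference the same backend service, each with its own internal IP address and port set, might look like the following. All names, addresses, the region, and ports are placeholders.

```
# Hypothetical sketch: two forwarding rules for the same backend service.
gcloud compute forwarding-rules create fr-ilb-web \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=TCP \
    --ports=80,443 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1

gcloud compute forwarding-rules create fr-ilb-admin \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.100 \
    --ip-protocol=TCP \
    --ports=8443 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
```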
When using multiple internal forwarding rules, make sure that you configure the software running on your backend VMs to bind to all of the forwarding rule IP addresses or to any address (0.0.0.0/0). The destination IP address for a packet delivered through the load balancer is the internal IP address associated with the corresponding internal forwarding rule. For more information, see TCP and UDP request and return packets.
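As a quick sketch on a backend VM, you can bind a test server to all IPv4 addresses so that it answers regardless of which forwarding rule IP address a packet was sent to; the port is a placeholder.

```
# Hypothetical sketch: serve on all local IPv4 addresses.
python3 -m http.server 8080 --bind 0.0.0.0

# Verify which addresses the service is listening on.
sudo ss -tlnp | grep 8080
```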
Each internal TCP/UDP load balancer has one regional internal backend service that defines backend parameters and behavior. The name of the backend service is the name of the internal TCP/UDP load balancer shown in the Google Cloud Console.
Each backend service defines the following backend parameters:
- Protocol. A backend service accepts either TCP or UDP traffic, but not both, on the ports specified by one or more internal forwarding rules. The backend service allows traffic to be delivered to backend VMs on the same ports to which traffic was sent. The backend service protocol must match the forwarding rule's protocol.
- Health check. A backend service must have an associated health check.
Each backend service operates in a single region and distributes traffic for backend VMs in a single VPC network:
- Regionality. Backends are instance groups in the same region as the backend service (and forwarding rule). The backends can be unmanaged instance groups, zonal managed instance groups, or regional managed instance groups.
- VPC network. All backend VMs must have a network interface in the VPC network associated with the backend service. You can either explicitly specify a backend service's network or use an implied network. In either case, every internal forwarding rule's subnet must be in the backend service's VPC network.
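Putting these parameters together, a regional internal backend service and its instance group backend might be created as in the following sketch. The names, region, network, and health check are placeholders.

```
# Hypothetical sketch: a regional internal backend service for TCP traffic
# with an explicitly specified VPC network and a regional health check.
gcloud compute backend-services create be-ilb \
    --load-balancing-scheme=internal \
    --protocol=TCP \
    --region=us-west1 \
    --network=lb-network \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1

# Attach a regional managed instance group as a backend.
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-ilb-backends \
    --instance-group-region=us-west1
```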
Backend services and network interfaces
Each backend service operates in a single VPC network and Google Cloud region. The VPC network can be implied or explicitly specified with the --network flag in the backend-services create command:

When explicitly specified, the VPC network in the backend service's --network flag identifies the network interface on each backend VM to which traffic is load balanced. Each backend VM must have a network interface in the specified VPC network. In this case, network interface identifiers (nic0 through nic7) can be different among backend VMs. For example:
- Different backend VMs in the same unmanaged instance group might use different interface identifiers if each VM has an interface in the specified VPC network.
- The interface identifier doesn't need to be the same among all backend instance groups—it might be nic0 for backend VMs in one backend instance group and nic2 for backend VMs in another backend instance group.
If you don't include the --network flag when you create the backend service, the backend service chooses a network based on the network of the initial (or only) network interface used by all backend VMs. This means that nic0 must be in the same VPC network for all VMs in all backend instance groups.
The load balancer's backend service must be associated with a global or regional health check. Special routes outside of the VPC network facilitate communication between health check systems and the backends.
You can use any of the following health check protocols; the protocol of the health check does not have to match the protocol of the load balancer:
- HTTP, HTTPS, or HTTP/2. If your backend VMs serve traffic by using HTTP, HTTPS, or HTTP/2, it's best to use a health check that matches that protocol because HTTP-based health checks offer options appropriate to that protocol. Serving HTTP-type traffic through an internal TCP/UDP load balancer means that the load balancer's protocol is TCP.
- SSL or TCP. If your backend VMs do not serve HTTP-type traffic, you should use either an SSL or TCP health check.
Regardless of the type of health check that you create, Google Cloud sends health check probes to the IP address of the internal TCP/UDP load balancer, simulating how load-balanced traffic is delivered. Software running on your backend VMs must respond to both load-balanced traffic and health check probes sent to the IP address of the load balancer. For more information, see Destination for probe packets.
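For example, a regional HTTP health check that probes port 80 might be created as in the following sketch; the name, region, port, and request path are placeholders.

```
# Hypothetical sketch: a regional HTTP health check on port 80.
gcloud compute health-checks create http hc-http-80 \
    --region=us-west1 \
    --port=80 \
    --request-path=/healthz
```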
Health checks and UDP traffic
Google Cloud does not offer a health check that uses the UDP protocol. When you use Internal TCP/UDP Load Balancing with UDP traffic, you must run a TCP-based service on your backend VMs to provide health check information.
In this configuration, client requests are load balanced by using the UDP protocol, and a TCP service is used to provide information to Google Cloud health check probers. For example, you can run a simple HTTP server on each backend VM that returns an HTTP 200 response to Google Cloud. In this example, you should use your own logic running on the backend VM to ensure that the HTTP server returns 200 only if the UDP service is properly configured and running.
TCP and UDP request and return packets
When a client system sends a TCP or UDP packet to an internal TCP/UDP load balancer, the packet's source and destination are as follows:
- Source: the client's primary internal IP address or the IP address from one of the client's alias IP ranges.
- Destination: the IP address of the load balancer's forwarding rule.
When the load balancer sends a response packet, that packet's source and destination depend on the protocol:
- TCP is connection-oriented, and internal TCP/UDP load balancers use direct server return. This means that response packets are sent from the IP address of the load balancer's forwarding rule.
- In contrast, UDP is connectionless. By default, return packets are sent from the primary internal IP address of the backend instance's network interface. However, you can change this behavior. For example, configuring a UDP server to bind to the forwarding rule's IP address causes response packets to be sent from the forwarding rule's IP address.
The following table summarizes sources and destinations for response packets:
| Protocol | Response packet source | Response packet destination |
|---|---|---|
| TCP | The IP address of the load balancer's forwarding rule | The requesting packet's source |
| UDP | Depends on the UDP server software | The requesting packet's source |
The way that an internal TCP/UDP load balancer distributes new connections depends on whether you have configured failover:
- If you haven't configured failover, an internal TCP/UDP load balancer distributes new connections among all of its healthy backend VMs if at least one backend VM is healthy. When all backend VMs are unhealthy, the load balancer distributes new connections among all backends as a last resort.
- If you have configured failover, an internal TCP/UDP load balancer distributes new connections among VMs in its active pool, according to a failover policy that you configure. When all backend VMs are unhealthy, you can choose to drop traffic.
By default, the method for distributing new connections uses a hash calculated from five pieces of information: the client's IP address, the source port, the load balancer's internal forwarding rule IP address, the destination port, and the protocol. You can modify the traffic distribution method for TCP traffic by specifying a session affinity option.
The health check state controls the distribution of new connections. An established TCP session persists on an unhealthy backend VM if the unhealthy backend VM is still handling the connection.
Session affinity options
Session affinity controls the distribution of new connections from clients to the load balancer's backend VMs. You set session affinity when your backend VMs need to keep track of state information for their clients when sending TCP traffic. This is a common requirement for web applications.
Session affinity works on a best-effort basis for TCP traffic. Because the UDP protocol doesn't support sessions, session affinity doesn't affect UDP traffic.
The internal TCP/UDP load balancers support the following session affinity options, which you specify for the entire internal backend service, not per backend instance group:
- None. This is the default setting, which is effectively the same as Client IP, protocol, and port.
- Client IP. Directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the destination IP address.
- Client IP and protocol. Directs a particular client's requests to the same backend VM based on a hash created from three pieces of information: the client's IP address, the destination IP address, and the load balancer's protocol (TCP or UDP).
- Client IP, protocol, and port. Directs a particular client's requests to the same backend VM based on a hash created from these five pieces of information:
  - Source IP address of the client sending the request
  - Source port of the client sending the request
  - Destination IP address
  - Destination port
  - Protocol (TCP or UDP)
The destination IP address is the IP address of the load balancer's forwarding rule, unless packets are delivered to the load balancer because of a custom static route. If an internal TCP/UDP load balancer is a next hop for a route, see the next section in this article, Session affinity and next hop internal TCP/UDP load balancer.
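You set session affinity on the backend service, for example as in the following sketch; the backend service name and region are placeholders.

```
# Hypothetical sketch: switch from the default 5-tuple hash to
# client-IP-based (2-tuple) session affinity.
gcloud compute backend-services update be-ilb \
    --region=us-west1 \
    --session-affinity=CLIENT_IP
```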
Session affinity and next hop internal TCP/UDP load balancer
Regardless of the session affinity option that you choose, Google Cloud uses the packet's destination. When sending a packet directly to the load balancer, the packet's destination matches the IP address of the load balancer's forwarding rule.
However, when using an internal TCP/UDP load balancer as a next hop for a custom static route, the packet's destination is most likely not the IP address of the load balancer's forwarding rule. For packets whose destination is within the route's destination range, the route directs them to the load balancer.
To use an internal TCP/UDP load balancer as the next hop for a custom static route, see Internal TCP/UDP load balancers as next hops.
Session affinity and health check state
Changing health states of backend VMs can cause a loss of session affinity. For example, if a backend VM becomes unhealthy, and there is at least one other healthy backend VM, an internal TCP/UDP load balancer does not distribute new connections to the unhealthy VM. If a client had session affinity with that unhealthy VM, it is directed to the other healthy backend VM instead, losing its session affinity.
Testing connections from a single client
When testing connections to the IP address of an internal TCP/UDP load balancer from a single client system, you should keep the following in mind:
If the client system is not a VM being load balanced (that is, not a backend VM), new connections are delivered to the load balancer's healthy backend VMs. However, because all session affinity options rely on at least the client system's IP address, connections from the same client might be distributed to the same backend VM more frequently than you might expect.
Practically, this means that you cannot accurately monitor traffic distribution through an internal TCP/UDP load balancer by connecting to it from a single client. The number of clients needed to monitor traffic distribution varies depending on the load balancer type, the type of traffic, and the number of healthy backends.
If the client VM is a backend VM of the load balancer, connections sent to the IP address of the load balancer's forwarding rule are always answered by the client/backend VM. This happens regardless of whether the backend VM is healthy. It happens for all traffic sent to the load balancer's IP address, not just traffic on the protocol and ports specified in the load balancer's internal forwarding rule.
For more information, see sending requests from load-balanced VMs.
For information about quotas and limits, see Load balancing resource quotas.
- To configure and test an internal TCP/UDP load balancer, see Setting up Internal TCP/UDP Load Balancing.
- To configure Cloud Monitoring for Internal TCP/UDP Load Balancing, see Internal TCP/UDP Load Balancing monitoring.
- To troubleshoot issues with your internal TCP/UDP load balancer, see Troubleshooting Internal TCP/UDP Load Balancing.