Internal TCP/UDP load balancers as next hops

Internal TCP/UDP Load Balancing is a regional load balancer that enables you to run and scale your services behind an internal load balancing IP address. You can use an internal TCP/UDP load balancer as the next hop to which packets are forwarded along the path to their final destination. To do this, you set the load balancer as the next hop in a custom static route.

This configuration is useful in the following cases:

  • When you need to load balance traffic across multiple VMs that are functioning as virtual network address translation (NAT) appliances.

  • When you need a load balancer to serve as a next hop for a default route. When virtual machine (VM) instances in your Virtual Private Cloud (VPC) network send traffic to the internet, the traffic is routed through load-balanced gateway virtual appliances.

  • When you need to send traffic through multiple load balancers in two or more directions by using the same set of multi-NIC VMs as backends. To do this, you create a load balancer and use it as the next hop for a custom static route in each VPC network. Each internal TCP/UDP load balancer operates within a single VPC network, distributing traffic to the network interfaces of backend VMs in that network.

Diagram: Load balancing to multiple NICs

In the preceding diagram, a VM instance group serves as the backend for two different load balancers. This use case is called internal TCP/UDP load balancers as next hops with common backends because multiple NICs (nic0 and nic1, in this case) on the backend VMs are being load balanced. This deployment is allowed because Internal TCP/UDP Load Balancing supports load balancing to any interface on backend VM instances (not just the primary interface nic0).

In contrast, two sets of VM instances can serve as the backends for two different load balancers. You might use two sets of backends when you have different traffic profiles for incoming and outgoing traffic. This use case is called internal TCP/UDP load balancers as next hops with different backends because only the primary interfaces (nic0) on the backend VMs are load balanced. This setup is shown in the following diagram:

Diagram: Load balancing to single NICs

You can create a custom static route to pass TCP and UDP traffic to an internal TCP/UDP load balancer, where the load balancer is the next hop for the static route. The route's destination can be the default route (0.0.0.0/0), an external (publicly routable) CIDR prefix, or an internal CIDR prefix, as long as the prefix doesn't conflict with a subnet route. For example, you can replace your default route (0.0.0.0/0) with a route that directs traffic to third-party backend VMs for packet processing.
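
For example, a minimal gcloud sketch of the default-route replacement might look like the following. The route, network, forwarding-rule, and region names are placeholders, not values from this page.

```
# Create a custom static route for all IPv4 traffic whose next hop is the
# internal TCP/UDP load balancer's forwarding rule. Give it a higher
# priority (lower number) than the existing default route, or delete that
# route, so that this route takes effect.
gcloud compute routes create example-default-via-ilb \
    --network=my-vpc \
    --destination-range=0.0.0.0/0 \
    --priority=900 \
    --next-hop-ilb=fr-ilb \
    --next-hop-ilb-region=us-west1
```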

To use the custom static route, the VM instances in each VPC network must be in the same region as their associated load balancer.

If a Cloud Router is in the same region as a load balancer, you can advertise this route to other connected networks by using Cloud Router custom route advertisements. For more information, see Internal load balancing and connected networks.


Benefits of using your internal TCP/UDP load balancer as a next hop

When the load balancer is a next hop for a static route, you don't need to explicitly configure your clients to send traffic to the load balancer or to each backend VM. You can integrate your backend VMs in a bump-in-the-wire fashion.

Using an internal TCP/UDP load balancer as a next hop for a static route provides the same benefits as Internal TCP/UDP Load Balancing. The load balancer's health check ensures that new connections are routed to healthy backend VMs. By using a managed instance group as a backend, you can configure autoscaling to grow or shrink the set of VMs based on service demand.
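
As a sketch of the autoscaling point, a managed instance group backend might be configured roughly as follows; the group name, instance template, region, and autoscaling targets are placeholder assumptions.

```
# Create a regional managed instance group of appliance VMs from an
# existing instance template, then attach CPU-based autoscaling so the
# pool of VMs grows and shrinks with demand.
gcloud compute instance-groups managed create appliance-mig \
    --region=us-west1 \
    --template=appliance-template \
    --size=2

gcloud compute instance-groups managed set-autoscaling appliance-mig \
    --region=us-west1 \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```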

Specifications

Following are specifications for using internal TCP/UDP load balancers as next hops.

Client IP session affinity

Client IP session affinity is one available session affinity option. It is a two-tuple affinity that uses the source IP address and the destination IP address as inputs for a hash function.

When using an internal TCP/UDP load balancer by itself, the destination IP address is the IP address of the load balancer's forwarding rule. Client IP session affinity in this context means that connections from a client with a constant source IP address are delivered to the same backend VM if the backend VM is healthy.

In contrast, when you use an internal TCP/UDP load balancer as a next hop for a static route, the destination IP address varies because the load balancer's backend VMs process and route packets to many different destinations. In this context, Client IP session affinity doesn't ensure that packets are processed by the same backend VM, even if the client has a constant source IP address.
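
For reference, session affinity is a setting on the load balancer's backend service. The following is a minimal sketch of creating an internal backend service with this option; the service name, health check, and region are placeholders, not values from this page.

```
# Internal backend service that hashes on client IP and destination IP
# (the two-tuple CLIENT_IP affinity described above).
gcloud compute backend-services create appliance-backend-service \
    --load-balancing-scheme=internal \
    --protocol=TCP \
    --region=us-west1 \
    --health-checks=appliance-health-check \
    --session-affinity=CLIENT_IP
```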

Destination range

When using an internal TCP/UDP load balancer as the next hop for a custom static route, you must follow these rules:

  • The route's destination cannot be equal to or more specific than a subnet route.
  • The internal IP address that you assign to the load balancer's forwarding rule can't be within the route's destination prefix.

    For example, suppose that you have a subnet with a primary IP range of 10.0.1.0/24. The internal TCP/UDP load balancer's forwarding rule's IP address can be 10.0.1.5 because 10.0.1.5 fits within the primary IP range of the subnet.

    Now suppose that you create a custom static route with a destination prefix of 10.0.0.0/8. This is supported because 10.0.0.0/8 is less specific than the subnet route. However, the next hop for this route cannot be the load balancer's forwarding rule because 10.0.1.5 fits within 10.0.0.0/8.
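
To make the second rule concrete, the following sketch reuses the addresses from the example above; the route, network, and forwarding-rule names are placeholder assumptions.

```
# The forwarding rule's IP address is 10.0.1.5, inside the subnet's
# 10.0.1.0/24 primary range. A route to 172.16.0.0/12 can use that
# forwarding rule as its next hop because 10.0.1.5 is outside the
# route's destination prefix.
gcloud compute routes create example-route \
    --network=my-vpc \
    --destination-range=172.16.0.0/12 \
    --next-hop-ilb=fr-ilb \
    --next-hop-ilb-region=us-west1

# A destination of 10.0.0.0/8 would not be accepted with this next hop,
# because 10.0.1.5 falls inside 10.0.0.0/8.
```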

Same VPC network and region

Custom static routes that use internal TCP/UDP load balancers as next hops are limited to the following:

  • A single VPC network. The load balancer and the custom static route must be in the same VPC network.

  • A single region or all regions. Unless you configure global access, the custom static route is only available to resources in the same region as the load balancer. This regional restriction is enforced even though the route itself is part of the routing table for the entire VPC network. If you enable global access, the custom static route is available to resources in any region.
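
Global access is enabled on the load balancer's forwarding rule. The following is a minimal sketch for an existing forwarding rule; the rule name and region are placeholders.

```
# Allow clients in any region of the VPC network to reach this forwarding
# rule, and therefore to use routes that have it as their next hop.
gcloud compute forwarding-rules update fr-ilb \
    --region=us-west1 \
    --allow-global-access
```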

Order of operations

You must create an internal TCP/UDP load balancer before you can create a custom static route that uses it as a next hop. If you try to create a route that refers to a nonexistent load balancer, Google Cloud returns an error.

You specify an internal TCP/UDP load balancer next hop by using the forwarding rule's name and the load balancer's region, not by using the internal IP address associated with the forwarding rule.

After you've created a route with a next hop that refers to an internal TCP/UDP load balancer, you cannot delete the load balancer unless you first delete the route. Specifically, you can't delete an internal forwarding rule until no custom static route uses that forwarding rule as a next hop.
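
The following is a sketch of the required deletion order, reusing the placeholder names from the earlier route example.

```
# Delete the custom static route that references the forwarding rule first;
# only then can the forwarding rule itself be deleted.
gcloud compute routes delete example-default-via-ilb

gcloud compute forwarding-rules delete fr-ilb \
    --region=us-west1
```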

Backend requirements

  • You must configure all the internal TCP/UDP load balancer's backend VMs to allow IP forwarding (--can-ip-forward = True), as shown in the sketch after this list. For more information, see Considerations for instance-based or load balancer-based routing.

  • You cannot use an internal TCP/UDP load balancer whose backends are Google Kubernetes Engine (GKE) nodes as a next hop for a custom static route. Software on the nodes can only route traffic to Pods if the destination matches an IP address managed by the cluster, not an arbitrary destination.
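
The following is a sketch of the IP-forwarding requirement from the first item in this list; the template name, machine type, and image are placeholder assumptions (the template name matches the one used in the earlier autoscaling sketch).

```
# Appliance VMs must be created with IP forwarding enabled; the setting
# can't be changed on an existing instance, so set it in the template.
gcloud compute instance-templates create appliance-template \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --can-ip-forward
```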

All TCP and UDP traffic

When using an internal TCP/UDP load balancer as a next hop, Google Cloud forwards all TCP and UDP traffic on all ports to the backend VMs, regardless of the following:

  • The forwarding rule's protocol and port configuration
  • The backend service's protocol configuration
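
For example, even a forwarding rule created as in the following sketch, with TCP as its protocol, receives all TCP and UDP traffic on all ports when it's used as a route's next hop. The names and region are placeholders.

```
# The protocol and port settings on the forwarding rule don't restrict
# next-hop traffic; Google Cloud still forwards all TCP and UDP traffic.
gcloud compute forwarding-rules create fr-ilb \
    --load-balancing-scheme=internal \
    --network=my-vpc \
    --subnet=my-subnet \
    --region=us-west1 \
    --ip-protocol=TCP \
    --ports=ALL \
    --backend-service=appliance-backend-service \
    --backend-service-region=us-west1
```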

Additional specifications

  • A custom static route cannot use network tags if the route has an internal TCP/UDP load balancer as the route's next hop.

  • You cannot create multiple custom static routes with the same priority, the same destination, and the same internal TCP/UDP load balancer next hop. Consequently, custom static routes that share the same next-hop load balancer must differ in destination, in priority, or in both.

  • You cannot distribute traffic among multiple internal TCP/UDP load balancers by using equal-cost multipath (ECMP) routing. This means that you cannot create routes with the same destination and the same priority but with different internal TCP/UDP load balancers as next hops.
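
One pattern that stays within these constraints is a preferred route and a backup route to two different load balancers, distinguished by priority rather than by ECMP. The following is a sketch with placeholder names; lower priority values take precedence.

```
# Preferred path through one internal load balancer...
gcloud compute routes create example-route-primary \
    --network=my-vpc \
    --destination-range=0.0.0.0/0 \
    --priority=900 \
    --next-hop-ilb=fr-ilb-primary \
    --next-hop-ilb-region=us-west1

# ...and a lower-priority route through a second load balancer. The
# destinations match, but the priorities differ, so this isn't the
# unsupported ECMP configuration.
gcloud compute routes create example-route-backup \
    --network=my-vpc \
    --destination-range=0.0.0.0/0 \
    --priority=1000 \
    --next-hop-ilb=fr-ilb-backup \
    --next-hop-ilb-region=us-west1
```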

For more information, see the Routes overview.

Use cases

You can use an internal TCP/UDP load balancer as a next hop in multiple deployments and topologies.

For each example, note the following guidelines:

  • Each VM interface must be in a separate VPC network.

  • You cannot use backend VMs or load balancers to route traffic between subnets in the same VPC network because subnet routes cannot be overridden.

  • The internal TCP/UDP load balancer is a software-defined, pass-through load balancer. NAT and proxying are performed only by the backend VMs (the firewall virtual appliances), not by the load balancer itself.

Using an internal TCP/UDP load balancer as the next hop to a NAT gateway

This use case load balances traffic from internal VMs to multiple NAT gateway instances that route traffic to the internet.

Diagram: NAT use case

Hub and spoke: Exchanging next-hop routes by using VPC Network Peering

In addition to exchanging subnet routes, you can configure VPC Network Peering to export and import custom static and dynamic routes. Custom static routes that use network tags or that have a next hop of the default internet gateway are excluded.

You can configure a hub-and-spoke topology with your next-hop firewall virtual appliances located in the hub VPC network by doing the following:

  • In the hub VPC network, create an internal TCP/UDP load balancer with firewall virtual appliances as the backends.
  • In the hub VPC network, create a custom static route, and set the next hop to be the internal TCP/UDP load balancer.
  • Connect the hub VPC network to each of the spoke VPC networks by using VPC Network Peering.
  • For each peering, configure the hub network to export its custom routes, and configure the corresponding spoke network to import custom routes. The route with the load balancer next hop is one of the routes that the hub network exports.

The load balancer in front of the firewall appliances in the hub VPC network can then be used as a next hop in each spoke network, according to the routing order.
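
The following is a sketch of the peering and route-exchange steps for one hub-and-spoke pair; the network and peering names are placeholders, and both networks are assumed to be in the same project.

```
# In the hub network: peer to the spoke and export custom routes,
# including the route whose next hop is the internal load balancer.
gcloud compute networks peerings create hub-to-spoke1 \
    --network=hub-vpc \
    --peer-network=spoke1-vpc \
    --export-custom-routes

# In the spoke network: peer back to the hub and import its custom routes.
gcloud compute networks peerings create spoke1-to-hub \
    --network=spoke1-vpc \
    --peer-network=hub-vpc \
    --import-custom-routes
```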

Diagram: Hub and spoke

Load balancing to multiple NICs

In the following use case, the backend VMs are virtual appliance instances (for example, firewall instances or NAT gateways) with NICs in multiple VPC networks. Firewall instances and NAT gateways can be provided by a third party as virtual appliances. The virtual appliances are Compute Engine VMs with multiple NICs.

This example shows a single set of backend virtual appliances in a managed VM instance group.

In the VPC network called testing, the internal TCP/UDP load balancer has a forwarding rule called fr-ilb1. In the example, this load balancer distributes traffic to the nic0 interface on each virtual appliance, but it could be any NIC.

In the VPC network called production, a different internal TCP/UDP load balancer has a forwarding rule called fr-ilb2. This load balancer distributes traffic to a different interface, nic1 in this example.
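
The interface that each load balancer targets is determined by the network of its backend service. The following is a sketch of the two backend services; the service names, health check, and region are placeholders, while testing and production are the networks from this example.

```
# Backend service in the testing network: its load balancer sends traffic
# to each appliance's NIC in testing (nic0 in this example).
gcloud compute backend-services create testing-backend-service \
    --load-balancing-scheme=internal \
    --protocol=TCP \
    --region=us-west1 \
    --network=testing \
    --health-checks=appliance-health-check

# Backend service in the production network: its load balancer sends
# traffic to each appliance's NIC in production (nic1 in this example).
gcloud compute backend-services create production-backend-service \
    --load-balancing-scheme=internal \
    --protocol=TCP \
    --region=us-west1 \
    --network=production \
    --health-checks=appliance-health-check
```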

Diagram: Traffic with multi-NIC load balancing

For a detailed configuration setup, see Load balancing to multiple backend NICs.

Using an internal TCP/UDP load balancer as the next hop with a single NIC

As in the previous use case, the backend VMs are virtual appliance instances (for example, firewall instances or NAT gateways) with NICs in multiple VPC networks.

The following examples show two sets of backend virtual appliances in two managed VM instance groups.

The backends are summarized as follows:

  • Load-balanced instances for testing-to-production traffic: fw-instance-a, fw-instance-b, fw-instance-c
  • Load-balanced instances for production-to-testing traffic: fw-instance-d, fw-instance-e, fw-instance-f

In the testing-to-production direction, traffic originates in the VPC network called testing. In the testing network, the internal TCP/UDP load balancer has a forwarding rule called fr-ilb1. This example uses internal TCP/UDP load balancers with backend services that do not have a specified network parameter. This means that each load balancer only distributes traffic to the primary network interface of each backend VM.

Diagram: Traffic with single-NIC load balancing (testing-to-production)

In the production-to-testing direction, traffic originates in the VPC network called production. In the production network, a different internal TCP/UDP load balancer has a forwarding rule called fr-ilb2. As in the other direction, the backend service doesn't specify a network parameter, so the load balancer distributes traffic only to the primary network interface (nic0) of each backend VM. Because each virtual appliance has only one nic0, this deployment requires two sets of virtual appliances: one for testing-to-production load balancing and one for production-to-testing load balancing.

Diagram: Traffic with single-NIC load balancing (production-to-testing)

What's next