Internal passthrough Network Load Balancers as next hops

An internal passthrough Network Load Balancer is a regional load balancer that enables you to run and scale your services behind an internal IP address. You can use an internal passthrough Network Load Balancer as the next hop to which packets are forwarded along the path to their final destination. To do this, you set the load balancer as the next hop in a custom static route.

An internal passthrough Network Load Balancer next hop is useful in the following cases:

  • To load balance traffic across multiple VMs that are functioning as gateway or router VMs.

  • To use gateway virtual appliances as your next hop for a default route. With this configuration, virtual machine (VM) instances in your Virtual Private Cloud (VPC) network send traffic to the internet through a set of load-balanced virtual gateway VMs.

  • To send traffic through multiple load balancers in two or more directions by using the same set of multi-NIC gateway or router VMs as backends. To do this, you create a load balancer and use it as the next hop for a custom static route in each VPC network. Each internal passthrough Network Load Balancer operates within a single VPC network, distributing traffic to the network interfaces of backend VMs in that network.

Architecture

In the following diagram, a VM instance group of router VMs serves as the backend for two different load balancers. The first internal passthrough Network Load Balancer sends packets to nic0 of the backend VMs, and the second internal passthrough Network Load Balancer sends packets to nic1 on the same backends.

Figure: Load balancing to multiple NICs.

Benefits of using your internal passthrough Network Load Balancer as a next hop

When the load balancer is a next hop for a static route, no special configuration is needed within the guest operating systems of the client VMs in the VPC network where the route is defined. Client VMs send packets to the load balancer's backends through VPC network routing, in a bump-in-the-wire fashion.

Using an internal passthrough Network Load Balancer as a next hop for a static route provides the same benefits as a standalone internal passthrough Network Load Balancer. The load balancer's health check ensures that new connections are routed to healthy backend VMs. By using a managed instance group as a backend, you can configure autoscaling to grow or shrink the set of VMs based on service demand.
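For example, if the backend is a regional managed instance group, you might attach an autoscaler to it. A sketch, assuming an illustrative group named gateway-mig; the thresholds are placeholders, not recommendations:

    # Autoscale the gateway VMs on CPU utilization.
    gcloud compute instance-groups managed set-autoscaling gateway-mig \
        --region=us-central1 \
        --min-num-replicas=2 \
        --max-num-replicas=10 \
        --target-cpu-utilization=0.6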

Specifications

The following are specifications for using internal passthrough Network Load Balancers as next hops.

Routes

You can create a custom static route to pass TCP, UDP, and other protocol traffic to an internal passthrough Network Load Balancer, where the load balancer is the next hop for the static route. The route's destination can be an external (publicly routable) CIDR prefix or an internal CIDR prefix, provided that the prefix doesn't conflict with a subnet route. For example, you can replace your default route (0.0.0.0/0) with a route that directs traffic to third-party backend VMs for packet processing.
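
A sketch of such a replacement, assuming an existing forwarding rule named fr-ilb1 in region us-central1 and a VPC network named example-network (all names are illustrative):

    # Create a custom default route whose next hop is the load balancer.
    # Priority 900 takes precedence over the default route's priority of 1000.
    gcloud compute routes create ilb-default-route \
        --network=example-network \
        --destination-range=0.0.0.0/0 \
        --priority=900 \
        --next-hop-ilb=fr-ilb1 \
        --next-hop-ilb-region=us-central1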

Options for specifying the next hop

You can specify an internal passthrough Network Load Balancer next hop in one of the following ways:

  • By using the forwarding rule's name and region
  • By using the forwarding rule's resource link
  • By using the forwarding rule's IP address
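
For example, with gcloud, the name-and-region form and the IP address form both use the --next-hop-ilb flag. A sketch, with illustrative network, route, and address values:

    # By forwarding rule name and region:
    gcloud compute routes create route-by-name \
        --network=example-network \
        --destination-range=192.0.2.0/24 \
        --next-hop-ilb=fr-ilb1 \
        --next-hop-ilb-region=us-central1

    # By forwarding rule IP address (no region flag needed):
    gcloud compute routes create route-by-address \
        --network=example-network \
        --destination-range=198.51.100.0/24 \
        --next-hop-ilb=10.128.0.100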

For details about the project and VPC network in which an internal passthrough Network Load Balancer next hop can reside, see Next hops and features.

You can exchange static routes with internal passthrough Network Load Balancer next hops using VPC Network Peering. For details, see Options for exchanging static routes.

Client IP session affinity

Internal passthrough Network Load Balancers offer two similar "client IP address" session affinity options:

  • Client IP (CLIENT_IP): A two-tuple hash of a packet's source IP address and destination IP address. When an internal passthrough Network Load Balancer is not a route's next hop, packets sent to the load balancer's forwarding rule IP address share a common destination IP address—the forwarding rule IP address. In this situation, one of the addresses used by the two-tuple hash remains constant. Thus, if the number of configured and healthy backends doesn't change and packets have identical source IP addresses, this two-tuple session affinity option selects the same backend.
  • Client IP, no destination (CLIENT_IP_NO_DESTINATION): A one-tuple hash of a packet's source IP address. When you are using an internal passthrough Network Load Balancer as a next hop, the destination IP address often varies because destination IP addresses are those specified by the route's destination attribute. In this situation, the two-tuple hash Client IP (CLIENT_IP) session affinity cannot select the same backend even when the number of configured and healthy backends doesn't change and packets have identical source IP addresses. (An exception to this rule is when only one backend is configured.) If you require packets with identical source IP addresses to be routed to the same backend, you must use the Client IP, no destination (CLIENT_IP_NO_DESTINATION) session affinity option.
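
For example, a sketch of switching an existing internal backend service (here assumed to be named ilb-backend-service) to the one-tuple option:

    # Hash on the packet's source IP address only, so the same client
    # maps to the same backend regardless of destination.
    gcloud compute backend-services update ilb-backend-service \
        --region=us-central1 \
        --session-affinity=CLIENT_IP_NO_DESTINATION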

Destination range

The destination of a custom static route cannot be equal to or more specific than a subnet route. Note that more specific means that the subnet mask is longer. This rule applies to all custom static routes, including when the next hop is an internal passthrough Network Load Balancer. For example, suppose that your subnet route is 10.140.0.0/20. The destination of a custom static route can't be the same (10.140.0.0/20), and it can't be more specific, as in 10.140.0.0/22.

Same VPC network and region

Custom static routes that use internal passthrough Network Load Balancers as next hops are limited to the following:

  • A single VPC network. The load balancer and the custom static route must be in the same VPC network.

  • A single region or all regions. Unless you configure global access, the custom static route is only available to resources in the same region as the load balancer. This regional restriction is enforced even though the route itself is part of the routing table for the entire VPC network. If you enable global access, the custom static route is available to resources in any region.
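
Global access is a setting on the load balancer's internal forwarding rule. A sketch, assuming a forwarding rule named fr-ilb1:

    # Let resources in any region use routes with this next hop.
    gcloud compute forwarding-rules update fr-ilb1 \
        --region=us-central1 \
        --allow-global-access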

Advertising the custom static route

To advertise the prefix (destination) for the custom static route, you can use a Cloud Router custom route advertisement. The scope of the route advertisement depends on the load balancer's global access setting, as follows:

  • When global access is disabled, the internal passthrough Network Load Balancer is only available to VMs, Cloud VPN tunnels, and Cloud Interconnect attachments (VLANs) that are in the same region as the load balancer. Consequently, a custom route advertisement for a custom static route's prefix only makes sense if the Cloud Router and load balancer are in the same region.

  • When global access is enabled, the internal passthrough Network Load Balancer is available to VMs, Cloud VPN tunnels, and Cloud Interconnect attachments (VLANs) that are in any region. With global dynamic routing, on-premises systems can use the custom static route from any connected region.
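
For example, a sketch of a custom route advertisement for the route's destination prefix (router name and prefix are illustrative):

    # Advertise the custom static route's destination in addition
    # to the network's subnet ranges.
    gcloud compute routers update example-router \
        --region=us-central1 \
        --advertisement-mode=custom \
        --set-advertisement-groups=all_subnets \
        --set-advertisement-ranges=192.0.2.0/24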

The following table summarizes the accessibility of the load balancer.

Global access | VPC network dynamic routing mode | Load balancer access
Disabled | Regional | Accessible by routers in the same region
Disabled | Global | Accessible by routers in the same region
Enabled | Regional | Accessible by all routers in any region
Enabled | Global | Accessible by all routers in any region

For more information, see Internal passthrough Network Load Balancers and connected networks.

Order of operations

You must create an internal passthrough Network Load Balancer before you can create a custom static route that uses it as a next hop. If you try to create a route that refers to a nonexistent load balancer, Google Cloud returns an error.

You specify an internal passthrough Network Load Balancer next hop by using the forwarding rule's name and the load balancer's region, the forwarding rule's resource link, or the internal IP address associated with the forwarding rule.

After you've created a route with a next hop that refers to an internal passthrough Network Load Balancer, you cannot delete the load balancer unless you first delete the route. Specifically, you can't delete an internal forwarding rule while any custom static route uses it as a next hop.
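
A sketch of tearing down such a configuration in the required order (names are illustrative):

    # Delete the route first; only then can the forwarding rule go.
    gcloud compute routes delete ilb-default-route
    gcloud compute forwarding-rules delete fr-ilb1 --region=us-central1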

Backend requirements

  • You must configure all of the internal passthrough Network Load Balancer's backend VMs to allow IP forwarding (--can-ip-forward = True); see the sketch after this list. For more information, see Considerations for instance-based or load balancer-based routing.

  • You cannot use an internal passthrough Network Load Balancer whose backends are Google Kubernetes Engine (GKE) nodes as a next hop for a custom static route. Software on the nodes can only route traffic to Pods if the destination matches an IP address managed by the cluster, not an arbitrary destination.
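
A sketch of the IP forwarding requirement from the first item in this list (instance and subnet names are illustrative):

    # --can-ip-forward must be set at creation time; it cannot be
    # changed on an existing instance.
    gcloud compute instances create gateway-vm-1 \
        --zone=us-central1-a \
        --can-ip-forward \
        --subnet=example-subnet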

Processing of TCP, UDP, and other protocol traffic

When an internal passthrough Network Load Balancer is deployed as a next hop, Google Cloud forwards all traffic on all ports to the backend VMs, regardless of the following:

  • The forwarding rule's protocol and port configuration
  • The backend service's protocol configuration

The internal passthrough Network Load Balancer that is the route's next hop forwards all traffic for the protocols supported by Google Cloud VPC networks (such as TCP, UDP, and ICMP).
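
For example, even a forwarding rule defined for a single protocol and port passes all traffic when it serves as a next hop. A sketch with illustrative names:

    # This rule is scoped to TCP port 80 for direct load balancing,
    # but as a route's next hop it receives all protocols and ports.
    gcloud compute forwarding-rules create fr-ilb1 \
        --load-balancing-scheme=internal \
        --network=example-network \
        --subnet=example-subnet \
        --region=us-central1 \
        --ip-protocol=TCP \
        --ports=80 \
        --backend-service=ilb-backend-service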

Additional considerations

  • Supported forwarding rules. Google Cloud supports only next hop forwarding rules that belong to internal passthrough Network Load Balancers. Google Cloud doesn't support next hop forwarding rules that are used by other load balancers, by protocol forwarding, or by Private Service Connect endpoints.

  • Specification methods and forwarding rule network and project. You can specify a next hop forwarding rule by using one of the following three methods. The specification method that you use determines whether the forwarding rule's network must match the route's network and in which project the forwarding rule can be located (see the sketch after this list):

    • By forwarding rule name (--next-hop-ilb) and region (--next-hop-ilb-region): When you specify a next hop forwarding rule by name and region, the forwarding rule's network must match the route's VPC network. The forwarding rule must be located in the same project that contains the forwarding rule's network (a standalone project or a Shared VPC host project).

    • By forwarding rule resource link: A forwarding rule's resource link uses the format /projects/PROJECT_ID/regions/REGION/forwardingRules/FORWARDING_RULE_NAME, where PROJECT_ID is the project ID of the project that contains the forwarding rule, REGION is the forwarding rule's region, and FORWARDING_RULE_NAME is the forwarding rule's name. When you specify a next hop forwarding rule by its resource link, the forwarding rule's network must match the route's VPC network. The forwarding rule can be located in either the project that contains the forwarding rule's network (a standalone project or a Shared VPC host project) or a Shared VPC service project.

    • By a forwarding rule IPv4 address: When you specify a next hop forwarding rule by its IPv4 address, the forwarding rule's network can be either the route's VPC network or a peered VPC network. The forwarding rule can be located in either the project that contains the forwarding rule's network (a standalone project or a Shared VPC host project) or a Shared VPC service project.
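
    For example, the resource-link form can be used from a Shared VPC service project. A sketch; the project ID and names are illustrative, and this assumes that the gcloud --next-hop-ilb flag accepts a resource path in addition to a name or IP address:

      # Specify the next hop by its full resource link.
      gcloud compute routes create route-by-link \
          --network=example-network \
          --destination-range=203.0.113.0/24 \
          --next-hop-ilb=projects/example-project/regions/us-central1/forwardingRules/fr-ilb1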

  • Effect of global access. Custom static routes using internal passthrough Network Load Balancer next hops are programmed in all regions, but whether the next hop is usable depends on the load balancer's global access setting. With global access enabled, the load balancer next hop is accessible in all regions of the VPC network. With global access disabled, the next hop is only accessible in the same region as the load balancer, and packets sent from another region to such a route are dropped.

  • When all backends are unhealthy. When all backends of an internal passthrough Network Load Balancer fail health checks, the routes using that load balancer next hop are still in effect. Packets processed by the route are sent to one of the next hop load balancer's backends according to traffic distribution.

  • Forwarding rules that use a common internal IP address (--purpose=SHARED_LOADBALANCER_VIP) are not supported. Next hop internal passthrough Network Load Balancers and internal passthrough Network Load Balancer forwarding rules with a common IP address are mutually exclusive features. A next hop internal passthrough Network Load Balancer must use an IP address that is unique to the load balancer's forwarding rule so that only one backend service (one load balancer) is unambiguously referenced. It's possible for forwarding rules that use a common internal IP address to reference different backend services (different internal passthrough Network Load Balancers).

  • Same destination and multiple next hop internal passthrough Network Load Balancers. If you create two or more custom static routes with the same destination, using different internal passthrough Network Load Balancer next hops, Google Cloud never distributes traffic among the load balancer next hops using ECMP. If the routes have unique priorities, Google Cloud uses the next hop internal passthrough Network Load Balancer from the route with the highest priority. If the routes have equal priorities, Google Cloud still selects just one next hop internal passthrough Network Load Balancer. In this latter situation, as illustrated in the diagram below, Google Cloud uses a deterministic, internal algorithm to select a single next hop forwarding rule (forwarding-rule-a), ignoring other routes with the same priority.

    Figure: Google Cloud selects a single next hop when static routes with different internal passthrough Network Load Balancer next hops have the same priority and destination.
  • Multiple destinations and the same next hop internal passthrough Network Load Balancer.

    With instance tags: If you use instance tags (also called network tags), you can use the same next hop internal passthrough Network Load Balancer for multiple custom static routes with the same destination and priority.

    Without instance tags: You cannot create multiple custom static routes that have the same combination of destination, priority, and internal passthrough Network Load Balancer next hop.

    Figure: Static routes without instance tags cannot be created with the same destination, priority, and internal passthrough Network Load Balancer next hop.
  • Instance tags.

    You can specify instance tags (also called network tags) so that the next-hop route applies only to client instances that are configured with the tag. This lets you control which client instances receive which tagged next-hop route, and therefore which set of appliances handles their traffic.

    This means that you don't need to segregate client instances into separate VPC networks, each pointing to its preferred internal passthrough Network Load Balancer that fronts a set of appliances.

  • Multiple routes to the same destination prefix. With instance tags, you can specify multiple routes to the same destination with different internal load balancers as next hops. You can use different instance tags or different priorities for these same-destination routes.
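
    A sketch of two tagged routes to the same destination through different load balancers (tags, names, and priorities are illustrative):

      # Clients tagged use-appliance-a egress through one appliance set.
      gcloud compute routes create route-appliance-a \
          --network=example-network \
          --destination-range=0.0.0.0/0 \
          --tags=use-appliance-a \
          --next-hop-ilb=fr-ilb-a \
          --next-hop-ilb-region=us-central1

      # Clients tagged use-appliance-b egress through another.
      gcloud compute routes create route-appliance-b \
          --network=example-network \
          --destination-range=0.0.0.0/0 \
          --tags=use-appliance-b \
          --next-hop-ilb=fr-ilb-b \
          --next-hop-ilb-region=us-central1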

Use cases

You can use an internal passthrough Network Load Balancer as a next hop in multiple deployments and topologies.

For each example, note the following guidelines:

  • Each VM interface must be in a separate VPC network.

  • You cannot use backend VMs or load balancers to route traffic between subnets in the same VPC network because subnet routes cannot be overridden.

  • The internal passthrough Network Load Balancer is a software-defined, pass-through load balancer. Packets are delivered to backend VMs with their source and destination information (IP addresses and, for port-based protocols, ports) unchanged.

    Routing, packet filtering, proxying, and address translation are the responsibility of the virtual appliance VMs that serve as backends for the internal passthrough Network Load Balancer.

Using an internal passthrough Network Load Balancer as the next hop to a NAT gateway

This use case load balances traffic from internal VMs to multiple NAT gateway instances that route traffic to the internet.

Figure: NAT use case.

Hub and spoke: Exchanging next-hop routes by using VPC Network Peering

In addition to exchanging subnet routes, you can configure VPC Network Peering to export and import custom static and dynamic routes. Custom static routes that have a next hop of the default internet gateway are excluded. Custom static routes that use next-hop internal passthrough Network Load Balancers are included.

You can configure a hub-and-spoke topology with your next-hop firewall virtual appliances located in the hub VPC network by doing the following:

  • In the hub VPC network, create an internal passthrough Network Load Balancer with firewall virtual appliances as the backends.
  • In the hub VPC network, create a custom static route, and set the next hop to be the internal passthrough Network Load Balancer.
  • Connect the hub VPC network to each of the spoke VPC networks by using VPC Network Peering.
  • For each peering, configure the hub network to export its custom routes, and configure the corresponding spoke network to import custom routes. The route with the load balancer next hop is one of the routes that the hub network exports.
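
A sketch of the peering configuration for one hub-spoke pair (network and peering names are illustrative; both networks are assumed to be in the same project):

    # Hub side: export custom routes, including the route whose next
    # hop is the internal passthrough Network Load Balancer.
    gcloud compute networks peerings create hub-to-spoke1 \
        --network=hub-vpc \
        --peer-network=spoke1-vpc \
        --export-custom-routes

    # Spoke side: import the hub's custom routes.
    gcloud compute networks peerings create spoke1-to-hub \
        --network=spoke1-vpc \
        --peer-network=hub-vpc \
        --import-custom-routes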

Subject to the routing order, the next hop firewall appliance load balancer in the hub VPC network is available in the spoke networks:

  • to clients in the same region as the load balancer, if global access is disabled
  • to clients in all regions, if global access is enabled

Figure: Hub and spoke.

Load balancing to multiple NICs

In the following use case, the backend VMs are virtual appliance instances (for example, packet inspection, routing, or gateway VMs) with NICs in multiple VPC networks. These virtual appliance instances can be commercial solutions from third parties or solutions that you build yourself. The virtual appliances are Compute Engine VMs with multiple NICs.

This example shows a single set of backend virtual appliances in a managed VM instance group.

In the VPC network called testing, the internal passthrough Network Load Balancer has a forwarding rule called fr-ilb1. In the example, this load balancer distributes traffic to the nic0 interface.

In the VPC network called production, a different internal passthrough Network Load Balancer has a forwarding rule called fr-ilb2. This load balancer distributes traffic to a different interface, nic1 in this example.
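
A sketch of the two forwarding rules; fr-ilb1, fr-ilb2, testing, and production come from this example, while the subnet and backend service names are illustrative:

    # One forwarding rule per VPC network. The forwarding rule's network
    # determines which NIC of the backend VMs receives the traffic.
    gcloud compute forwarding-rules create fr-ilb1 \
        --load-balancing-scheme=internal \
        --network=testing \
        --subnet=testing-subnet \
        --region=us-central1 \
        --ip-protocol=TCP \
        --ports=ALL \
        --backend-service=testing-backend-service

    gcloud compute forwarding-rules create fr-ilb2 \
        --load-balancing-scheme=internal \
        --network=production \
        --subnet=production-subnet \
        --region=us-central1 \
        --ip-protocol=TCP \
        --ports=ALL \
        --backend-service=production-backend-service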

Figure: Traffic with multi-NIC load balancing.

For a detailed configuration setup, see Load balancing to multiple backend NICs.

Symmetric hashing

The preceding example doesn't use source network address translation (SNAT). SNAT isn't required because Google Cloud uses symmetric hashing. This means that when packets belong to the same flow, Google Cloud calculates the same hash. In other words, the hash doesn't change when the source IP address:port is swapped with the destination IP address:port.

Notes:

  • Symmetric hashing is automatically enabled for internal passthrough Network Load Balancer forwarding rules created on or after June 22, 2021.

  • To enable symmetric hashing on existing internal passthrough Network Load Balancers, you must re-create the forwarding rule and the next-hop route, as described in Enabling symmetric hashing.

  • Symmetric hashing is only supported with internal passthrough Network Load Balancers.

  • Symmetric hashing is supported with the following session affinity types for protocols TCP and UDP:

    • Client IP (2-tuple)
    • Client IP and protocol (3-tuple)
    • Client IP, protocol, and port (5-tuple)
  • You can still use SNAT if your use case requires it.

What's next