Passthrough Network Load Balancer overview

Passthrough Network Load Balancers are Layer 4 regional, passthrough load balancers. These load balancers distribute traffic among backends in the same region as the load balancer. As the name suggests, passthrough Network Load Balancers are not proxies. Load-balanced packets are received by backend VMs with the packet's source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports unchanged. Load-balanced connections are terminated at the backends. Responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return (DSR).

The following diagram shows a sample passthrough Network Load Balancer architecture.

Passthrough Network Load Balancer architecture.

Use a passthrough Network Load Balancer in the following circumstances:

  • You need to forward original client packets to the backends un-proxied—for example, if you need the client source IP address to be preserved.
  • You need to load balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic, or you need to load balance a TCP port that isn't supported by other load balancers.
  • It is acceptable to have SSL traffic decrypted by your backends instead of by the load balancer. The passthrough Network Load Balancer cannot perform this task. When the backends decrypt SSL traffic, there is a greater CPU burden on the VMs.
  • You are able to manage the backend VM's SSL certificates yourself. Google-managed SSL certificates are only available for proxy load balancers.
  • You have an existing setup that uses a passthrough load balancer, and you want to migrate it without changes.

Passthrough Network Load Balancers are available in the following deployment modes:

External passthrough Network Load Balancer

Load balances traffic that comes from clients on the internet.

  • Scope: Regional
  • Traffic type: TCP, UDP, ESP, GRE, ICMP, and ICMPv6
  • Network service tier: Premium or Standard
  • Load-balancing scheme: EXTERNAL
  • IP address: IPv4 and IPv6
  • Frontend ports: A single port, a range of ports, or all ports
  • Links: Architecture details

Internal passthrough Network Load Balancer

Load balances traffic within your VPC network or networks connected to your VPC network.

  • Scope: Regional
  • Traffic type: TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE
  • Network service tier: Premium
  • Load-balancing scheme: INTERNAL
  • IP address: IPv4 and IPv6
  • Frontend ports: A single port, a range of ports, or all ports
  • Links: Architecture details

The load-balancing scheme is an attribute on the forwarding rule and the backend service of a load balancer and indicates whether the load balancer can be used for internal or external traffic.
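As an illustrative sketch, you can read this attribute on an existing forwarding rule with the gcloud CLI; the forwarding-rule name and region here are hypothetical placeholders:

```shell
# Sketch: inspect the load-balancing scheme of an existing forwarding rule.
# "nlb-rule" and the region are placeholders for your own resources.
gcloud compute forwarding-rules describe nlb-rule \
    --region=us-central1 \
    --format="value(loadBalancingScheme)"
# An external passthrough Network Load Balancer reports EXTERNAL;
# an internal passthrough Network Load Balancer reports INTERNAL.
```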

External passthrough Network Load Balancers

External passthrough Network Load Balancers are built on Maglev and Andromeda. Clients can connect to these load balancers from anywhere on the internet regardless of whether the IP address of the load balancer is in the Premium Tier or the Standard Tier. The load balancer can also receive traffic from Google Cloud VMs with external IP addresses or from Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT.

These load balancers can also use Google Cloud Armor to enable advanced network DDoS protection. For more information, see Configure advanced network DDoS protection using Google Cloud Armor.

The following diagram shows an external passthrough Network Load Balancer whose forwarding rule has the IP address 120.1.1.1. The load balancer is configured in the us-central1 region with its backends located in the same region. Traffic is routed from a user in Singapore (near asia-southeast1) to the load balancer in us-central1 (forwarding rule IP address 120.1.1.1).

External passthrough Network Load Balancer traffic routing in Premium and Standard Network Tiers.

If the IP address of the load balancer is in the Premium Tier, the traffic traverses Google's high-quality global backbone with the intent that packets enter and exit a Google edge peering point as close as possible to the client. If the IP address of the load balancer is in the Standard Tier, the traffic enters and exits the Google network at a peering point closest to the Google Cloud region where the load balancer is configured.

The architecture of an external passthrough Network Load Balancer depends on whether you use a backend service or a target pool to set up the backend.

Backend service-based load balancers

External passthrough Network Load Balancers can be created with a regional backend service that defines the behavior of the load balancer and how it distributes traffic to its backend instance groups. Backend services enable features that are not supported with legacy target pools, such as support for non-legacy health checks (TCP, SSL, HTTP, HTTPS, or HTTP/2), auto-scaling with managed instance groups, connection draining, and a configurable failover policy.

Backend service-based load balancers support IPv4 and IPv6 traffic. They can load-balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic. You can also use source IP-based traffic steering to direct traffic to specific backends. Load balancing to Google Kubernetes Engine (GKE) is handled by using the built-in GKE Service controller. In addition, backend service-based external passthrough Network Load Balancers are supported with App Hub, which is in preview.
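The pieces of a backend service-based setup can be sketched with the gcloud CLI as follows. This is a hedged outline, not a complete procedure: all resource names are placeholders, and an instance group named web-ig is assumed to already exist in zone us-central1-a.

```shell
# Sketch of a backend service-based external passthrough Network Load
# Balancer. All names are illustrative placeholders.

# A non-legacy TCP health check, regional to match the backend service.
gcloud compute health-checks create tcp web-hc \
    --region=us-central1 --port=80

# The regional backend service with load-balancing scheme EXTERNAL.
gcloud compute backend-services create web-bs \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=TCP \
    --health-checks=web-hc \
    --health-checks-region=us-central1

# Attach the (assumed pre-existing) instance group as a backend.
gcloud compute backend-services add-backend web-bs \
    --region=us-central1 \
    --instance-group=web-ig \
    --instance-group-zone=us-central1-a

# The frontend: a forwarding rule for a single port, a range, or all ports.
gcloud compute forwarding-rules create web-fr \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=web-bs \
    --backend-service-region=us-central1
```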

For architecture details, see Backend service-based external passthrough Network Load Balancer overview.

You can also transition an existing target pool-based load balancer to use a backend service instead. For instructions, see Migrate load balancers from target pools to backend services.
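In outline, the switchover step of such a migration repoints the existing forwarding rule at the new backend service; the sketch below assumes a backend service equivalent to the target pool has already been created, and all names are placeholders:

```shell
# Sketch: repoint an existing forwarding rule ("legacy-fr") from its
# target pool to a pre-created backend service ("web-bs"). Names are
# hypothetical; see the migration guide for the full procedure.
gcloud compute forwarding-rules set-target legacy-fr \
    --region=us-central1 \
    --backend-service=web-bs \
    --backend-service-region=us-central1
```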

Target pool-based load balancers

A target pool is the legacy backend supported with external passthrough Network Load Balancers. A target pool defines a group of instances that should receive incoming traffic from the load balancer.

Target pool-based load balancers support either TCP or UDP traffic. Forwarding rules for target pool-based external passthrough Network Load Balancers only support external IPv4 addresses.
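For contrast with the backend service-based setup, a legacy target pool-based configuration can be sketched as follows; resource names are placeholders, and the VMs are assumed to exist already:

```shell
# Sketch of a legacy target pool-based external passthrough Network Load
# Balancer. Names ("legacy-pool", "vm-1", "vm-2") are placeholders.
gcloud compute target-pools create legacy-pool --region=us-central1

gcloud compute target-pools add-instances legacy-pool \
    --instances=vm-1,vm-2 \
    --instances-zone=us-central1-a

# The forwarding rule points at the target pool; external IPv4 only,
# and the protocol must be TCP or UDP.
gcloud compute forwarding-rules create legacy-fr \
    --region=us-central1 \
    --ip-protocol=TCP \
    --ports=80 \
    --target-pool=legacy-pool
```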

For architecture details, see Target pool-based external passthrough Network Load Balancer overview.

Internal passthrough Network Load Balancers

Internal passthrough Network Load Balancers distribute traffic among internal virtual machine (VM) instances in the same region in a Virtual Private Cloud (VPC) network. They enable you to run and scale your services behind an internal IP address that is accessible only to systems in the same VPC network or systems connected to your VPC network.

These load balancers are built on the Andromeda network virtualization stack. They support only regional backends so that you can autoscale across a region, protecting your service from zonal failures. Additionally, this load balancer can only be configured in Premium Tier.
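A minimal sketch of such a deployment, assuming a VPC network my-vpc with a subnet my-subnet and using placeholder resource names, looks like this:

```shell
# Sketch of an internal passthrough Network Load Balancer (Premium Tier
# only). Network, subnet, and resource names are placeholders.
gcloud compute health-checks create tcp ilb-hc \
    --region=us-central1 --port=80

gcloud compute backend-services create ilb-bs \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=ilb-hc \
    --health-checks-region=us-central1

# The frontend gets an internal IP address from the subnet, reachable
# only from the VPC network or networks connected to it.
gcloud compute forwarding-rules create ilb-fr \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --network=my-vpc \
    --subnet=my-subnet \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=ilb-bs
```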

Internal passthrough Network Load Balancers support the following features:

  • Global access. When global access is enabled, clients from any region can access the load balancer.
  • Access from connected networks. You can make your internal load balancer accessible to clients from networks beyond its own Google Cloud VPC network. The other networks must be connected to the load balancer's VPC network by using either VPC Network Peering, Cloud VPN, or Cloud Interconnect.
  • Load balancing to GKE by using the built-in GKE Service controller.
  • Support for App Hub. Resources used by internal passthrough Network Load Balancers can be designated as services in App Hub, which is in preview.
  • Load balancer as next hop. You can use an internal passthrough Network Load Balancer as the next gateway to which packets are forwarded along the path to their final destination. To do this, you set the load balancer as the next hop in a custom static route. The load balancer deployed as a next hop in a custom route processes all traffic regardless of the protocol (TCP, UDP, or ICMP). Additional use cases include the following:

    • Hub-and-spoke architectures: To exchange next-hop routes by using VPC Network Peering, you can configure a hub-and-spoke topology with your next-hop firewall virtual appliances located in the hub VPC network. Routes that use the load balancer as a next hop in the hub VPC network are usable in each spoke network.
    • Load balancing to multiple NICs on the backend VMs.

    For more information about these use cases, see Internal passthrough Network Load Balancers as next hops.
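Two of the features above, next-hop routing and global access, can be sketched with the gcloud CLI. In this hedged example, ilb-fr is assumed to be an existing internal forwarding rule in us-central1, and the network name and destination range are illustrative:

```shell
# Sketch: use an internal passthrough Network Load Balancer as the next
# hop of a custom static route. "my-vpc", "ilb-fr", and the destination
# range are placeholders.
gcloud compute routes create route-via-ilb \
    --network=my-vpc \
    --destination-range=10.100.0.0/16 \
    --next-hop-ilb=ilb-fr \
    --next-hop-ilb-region=us-central1

# Sketch: enable global access so that clients in any region can reach
# the load balancer's forwarding rule.
gcloud compute forwarding-rules update ilb-fr \
    --region=us-central1 \
    --allow-global-access
```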

The following diagram depicts an example of a three-tier configuration that uses both external Application Load Balancers and internal passthrough Network Load Balancers.

Three-tier web app with an external Application Load Balancer and an internal passthrough Network Load Balancer.

For more details, see Internal passthrough Network Load Balancer architecture overview.