External passthrough Network Load Balancers are regional, layer 4 load balancers that distribute external traffic among backends (instance groups or network endpoint groups (NEGs)) in the same region as the load balancer. These backends must be in the same region and project but can be in different VPC networks. These load balancers are built on Maglev and the Andromeda network virtualization stack.
External passthrough Network Load Balancers can receive traffic from:
- Any client on the internet
- Google Cloud VMs with external IPs
- Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT
External passthrough Network Load Balancers are not proxies. The load balancer itself doesn't terminate user connections. Load-balanced packets are sent to the backend VMs with their source and destination IP addresses, protocol, and, if applicable, ports, unchanged. The backend VMs then terminate user connections. Responses from the backend VMs go directly to the clients, not back through the load balancer. This process is known as direct server return (DSR).
Backend service-based external passthrough Network Load Balancers support the following features:
- Managed and unmanaged instance group backends. Backend service-based external passthrough Network Load Balancers support both managed and unmanaged instance groups as backends. Managed instance groups automate certain aspects of backend management and provide better scalability and reliability as compared to unmanaged instance groups.
- Zonal NEG backends. Backend service-based external passthrough Network Load Balancers support using zonal NEGs with `GCE_VM_IP` endpoints. Zonal NEG `GCE_VM_IP` endpoints let you do the following:
  - Forward packets to any network interface, not just `nic0`.
  - Place the same `GCE_VM_IP` endpoint in two or more zonal NEGs connected to different backend services.
- Support for multiple protocols. Backend service-based external passthrough Network Load Balancers can load-balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.
- Support for IPv6 connectivity. Backend service-based external passthrough Network Load Balancers can handle both IPv4 and IPv6 traffic.
- Fine-grained traffic distribution control. A backend service allows traffic to be distributed according to a configurable session affinity, connection tracking mode, and weighted load balancing. The backend service can also be configured to enable connection draining and designate failover backends for the load balancer. Most of these settings have default values that let you get started quickly.
- Support for non-legacy health checks. Backend service-based external passthrough Network Load Balancers let you use health checks that match the type of traffic (TCP, SSL, HTTP, HTTPS, or HTTP/2) that they are distributing.
- Google Cloud Armor integration. Google Cloud Armor supports advanced network DDoS protection for external passthrough Network Load Balancers. For more information, see Configure advanced network DDoS protection.
- GKE integration. If you are building applications in GKE, we recommend that you use the built-in GKE Service controller, which deploys Google Cloud load balancers on behalf of GKE users. This is the same as the standalone load balancing architecture described on this page, except that its lifecycle is fully automated and controlled by GKE.
Architecture
The following diagram illustrates the components of an external passthrough Network Load Balancer:
The load balancer is made up of several configuration components. A single load balancer can have the following:
- One or more regional external IP addresses
- One or more regional external forwarding rules
- One regional external backend service
- One or more backends: either all instance groups or all zonal NEG backends (`GCE_VM_IP` endpoints)
- A health check associated with the backend service
Additionally, you must create firewall rules that allow your load balancing traffic and health check probes to reach the backend VMs.
IP address
An external passthrough Network Load Balancer requires at least one forwarding rule. The forwarding rule references a regional external IP address that is accessible anywhere on the internet.
- For IPv4 traffic, the forwarding rule references a single regional external IPv4 address. Regional external IPv4 addresses come from a pool unique to each Google Cloud region. The IP address can be a reserved static address or an ephemeral address.
- For IPv6 traffic, the forwarding rule references a `/96` range of IPv6 addresses from a dual-stack or IPv6-only (Preview) subnet. The subnet must have an assigned external IPv6 subnet range in the VPC network. External IPv6 addresses are available only in Premium Tier. For more details about IPv6 support, see the VPC documentation on IPv6 subnet ranges and IPv6 addresses.
Use a reserved IP address for the forwarding rule if you need to keep the address associated with your project for reuse after you delete a forwarding rule or if you need multiple forwarding rules to reference the same IP address.
External passthrough Network Load Balancers support both Standard Tier and Premium Tier for regional external IPv4 addresses. Both the IP address and the forwarding rule must use the same network tier. Regional external IPv6 addresses are only available in the Premium Tier.
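For example, a static regional external IPv4 address can be reserved before a forwarding rule references it. The following Google Cloud CLI sketch uses placeholder values for the address name, region, and network tier:

```
# Reserve a static regional external IPv4 address; it stays with the project
# for reuse even if the forwarding rule that references it is deleted.
gcloud compute addresses create nlb-ipv4-address \
    --region=us-central1 \
    --network-tier=PREMIUM
```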
Forwarding rule
A regional external forwarding rule specifies the protocol and ports on which the load balancer accepts traffic. Because external passthrough Network Load Balancers are not proxies, they pass traffic to backends on the same protocol and ports, if the packet carries port information. The forwarding rule in combination with the IP address forms the frontend of the load balancer.
The load balancer preserves the source IP addresses of incoming packets. The destination IP address for incoming packets is an IP address associated with the load balancer's forwarding rule.
Incoming traffic is matched to a forwarding rule, which is a combination of a particular IP address (either an IPv4 address or an IPv6 address range), a protocol, and, if the protocol is port-based, a single port, a range of ports, or all ports. The forwarding rule then directs traffic to the load balancer's backend service.
If the forwarding rule references an IPv4 address, the forwarding rule is not associated with any subnet. That is, its IP address comes from outside of any Google Cloud subnet range.
If the forwarding rule references a `/96` IPv6 address range, the forwarding rule must be associated with a subnet, and that subnet must be (a) dual-stack and (b) have an external IPv6 subnet range (`--ipv6-access-type` set to `EXTERNAL`). The subnet that the forwarding rule references can be the same subnet used by the backend instances; however, backend instances can use a separate subnet if chosen. When backend instances use a separate subnet, additional requirements apply to the subnets.
An external passthrough Network Load Balancer requires at least one forwarding rule. Forwarding rules can be configured to direct traffic coming from a specific range of source IP addresses to a specific backend service (or target instance). For details, see traffic steering. You can define multiple forwarding rules for the same load balancer as described in Multiple forwarding rules.
If you want the load balancer to handle both IPv4 and IPv6 traffic, create two forwarding rules: one rule for IPv4 traffic that points to IPv4 (or dual-stack) backends, and one rule for IPv6 traffic that points only to dual-stack backends. It's possible to have an IPv4 and an IPv6 forwarding rule reference the same backend service, but the backend service must reference dual-stack backends.
Forwarding rule protocols
External passthrough Network Load Balancers support the following protocol options for each forwarding rule: `TCP`, `UDP`, and `L3_DEFAULT`.

Use the `TCP` and `UDP` options to configure TCP or UDP load balancing. The `L3_DEFAULT` protocol option enables an external passthrough Network Load Balancer to load balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.

In addition to supporting protocols other than TCP and UDP, `L3_DEFAULT` makes it possible for a single forwarding rule to serve multiple protocols. For example, IPsec services typically handle some combination of ESP and UDP-based IKE and NAT-T traffic. The `L3_DEFAULT` option allows a single forwarding rule to be configured to process all of those protocols.

Forwarding rules using the `TCP` or `UDP` protocols can reference a backend service using either the same protocol as the forwarding rule or a backend service whose protocol is `UNSPECIFIED`. `L3_DEFAULT` forwarding rules can only reference a backend service with protocol `UNSPECIFIED`.

If you're using the `L3_DEFAULT` protocol, you must configure the forwarding rule to accept traffic on all ports. To configure all ports, either set `--ports=ALL` by using the Google Cloud CLI, or set `allPorts` to `True` by using the API.
The following table summarizes how to use these settings for different protocols.
Traffic to be load balanced | Forwarding rule protocol | Backend service protocol |
---|---|---|
TCP | `TCP` | `TCP` or `UNSPECIFIED` |
TCP | `L3_DEFAULT` | `UNSPECIFIED` |
UDP | `UDP` | `UDP` or `UNSPECIFIED` |
UDP | `L3_DEFAULT` | `UNSPECIFIED` |
ESP, GRE, ICMP/ICMPv6 (echo request only) | `L3_DEFAULT` | `UNSPECIFIED` |
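As an illustration of these combinations, the following Google Cloud CLI sketch creates a backend service with the `UNSPECIFIED` protocol and an `L3_DEFAULT` forwarding rule that accepts all ports. The resource names, region, and health check are placeholders, not values defined elsewhere on this page:

```
# Regional backend service that accepts any protocol.
gcloud compute backend-services create nlb-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=UNSPECIFIED \
    --health-checks=nlb-tcp-health-check \
    --health-checks-region=us-central1

# L3_DEFAULT forwarding rule; this protocol requires all ports.
gcloud compute forwarding-rules create nlb-l3-forwarding-rule \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=nlb-backend-service
```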
Multiple forwarding rules
You can configure multiple regional external forwarding rules for the same external passthrough Network Load Balancer. Each forwarding rule can have a different regional external IP address, or multiple forwarding rules can have the same regional external IP address.
Configuring multiple regional external forwarding rules can be useful for these use cases:
- You need to configure more than one external IP address for the same backend service.
- You need to configure different protocols or non-overlapping ports or port ranges for the same external IP address.
- You need to steer traffic from certain source IP addresses to specific load balancer backends.
Google Cloud requires that incoming packets match no more than one forwarding rule. Except for steering forwarding rules, which are discussed in the next section, two or more forwarding rules that use the same regional external IP address must have unique protocol and port combinations according to these constraints:
- A forwarding rule configured for all ports of a protocol prevents the creation of other forwarding rules using the same protocol and IP address. Forwarding rules using the `TCP` or `UDP` protocols can be configured to use all ports, or they can be configured for specific ports. For example, if you create a forwarding rule using IP address `198.51.100.1`, the `TCP` protocol, and all ports, you cannot create any other forwarding rule using IP address `198.51.100.1` and the `TCP` protocol. You can create two forwarding rules, both using the IP address `198.51.100.1` and the `TCP` protocol, if each one has unique ports or non-overlapping port ranges. For example, you can create two forwarding rules using IP address `198.51.100.1` and the `TCP` protocol, where one forwarding rule's ports are `80,443` and the other uses the port range `81-442`.
- Only one `L3_DEFAULT` forwarding rule can be created per IP address. This is because the `L3_DEFAULT` protocol uses all ports by definition. In this context, the term all ports includes protocols without port information.
- A single `L3_DEFAULT` forwarding rule can coexist with other forwarding rules that use specific protocols (`TCP` or `UDP`). The `L3_DEFAULT` forwarding rule can be used as a last resort when forwarding rules using the same IP address but more specific protocols exist. An `L3_DEFAULT` forwarding rule processes packets sent to its destination IP address if and only if the packet's destination IP address, protocol, and destination port don't match a protocol-specific forwarding rule. To illustrate this, consider these two scenarios. Forwarding rules in both scenarios use the same IP address, `198.51.100.1`.
  - Scenario 1. The first forwarding rule uses the `L3_DEFAULT` protocol. The second forwarding rule uses the `TCP` protocol and all ports. TCP packets sent to any destination port of `198.51.100.1` are processed by the second forwarding rule. Packets using different protocols are processed by the first forwarding rule.
  - Scenario 2. The first forwarding rule uses the `L3_DEFAULT` protocol. The second forwarding rule uses the `TCP` protocol and port 8080. TCP packets sent to `198.51.100.1:8080` are processed by the second forwarding rule. All other packets, including TCP packets sent to different destination ports, are processed by the first forwarding rule.
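For example, the Scenario 2 arrangement might be built roughly as follows with the Google Cloud CLI. The address, rule, and backend service names are placeholders; both rules reference the same reserved regional IP address:

```
# Reserve the IP address shared by both forwarding rules.
gcloud compute addresses create nlb-shared-ip --region=us-central1

# Protocol-specific rule: handles TCP traffic sent to port 8080.
gcloud compute forwarding-rules create nlb-tcp-8080 \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=nlb-shared-ip \
    --ip-protocol=TCP \
    --ports=8080 \
    --backend-service=tcp-backend-service

# Last-resort rule: handles all other protocols (and other TCP ports)
# sent to the same IP address.
gcloud compute forwarding-rules create nlb-l3-default \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=nlb-shared-ip \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=l3-backend-service
```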
Forwarding rule selection
Google Cloud selects one or zero forwarding rules to process an incoming packet by using this elimination process, starting with the set of forwarding rule candidates which match the destination IP address of the packet:
1. Eliminate forwarding rules whose protocol doesn't match the packet's protocol, except for `L3_DEFAULT` forwarding rules. Forwarding rules using the `L3_DEFAULT` protocol are never eliminated by this step because `L3_DEFAULT` matches all protocols. For example, if the packet's protocol is TCP, only forwarding rules using the `UDP` protocol are eliminated.
2. Eliminate forwarding rules whose port doesn't match the packet's port. Forwarding rules configured for all ports are never eliminated by this step because an all-ports forwarding rule matches any port.
3. If the remaining forwarding rule candidates include both `L3_DEFAULT` and protocol-specific forwarding rules, eliminate the `L3_DEFAULT` forwarding rules. If the remaining forwarding rule candidates are all `L3_DEFAULT` forwarding rules, none are eliminated at this step.
4. At this point, the remaining forwarding rule candidates fall into one of the following categories:
- A single forwarding rule remains which matches the packet's destination IP address, protocol, and port, and is used to route the packet.
- Two or more forwarding rule candidates remain which match the packet's destination IP address, protocol, and port. This means the remaining forwarding rule candidates include steering forwarding rules (discussed in the next section). Select the steering forwarding rule whose source range includes the most specific (longest prefix match) CIDR containing the packet's source IP address. If no steering forwarding rules have a source range including the packet's source IP address, select the parent forwarding rule.
- Zero forwarding rule candidates remain and the packet is dropped.
When using multiple forwarding rules, make sure that you configure the software running on your backend VMs so that it binds to all the external IP address(es) of the load balancer's forwarding rule(s).
Traffic steering
Forwarding rules for external passthrough Network Load Balancers can be configured to direct traffic coming from a specific range of source IP addresses to a specific backend service (or target instance).
Traffic steering is useful for troubleshooting and for advanced configurations. With traffic steering, you can direct certain clients to a different set of backends, a different backend service configuration, or both. For example:
- Traffic steering lets you create two forwarding rules which direct traffic to the same backend (instance group or NEG) by way of two backend services. The two backend services can be configured with different health checks, different session affinities, or different traffic distribution control policies (connection tracking, connection draining, and failover).
- Traffic steering lets you create a forwarding rule to redirect traffic from a low-bandwidth backend service to a high-bandwidth backend service. Both backend services contain the same set of backend VMs or endpoints, but load-balanced with different weights using weighted load balancing.
- Traffic steering lets you create two forwarding rules which direct traffic to different backend services, with different backends (instance groups or NEGs). For example, one backend could be configured using different machine types in order to better process traffic from a certain set of source IP addresses.
Traffic steering is configured with a forwarding rule API parameter called `sourceIPRanges`. Forwarding rules that have at least one source IP range configured are called steering forwarding rules.
A steering forwarding rule can have a list of up to 64 source IP ranges. You can update the list of source IP ranges configured for a steering forwarding rule at any time.
Each steering forwarding rule requires that you first create a parent forwarding rule. The parent and steering forwarding rules share the same regional external IP address, IP protocol, and port information; however, the parent forwarding rule does not have any source IP address information. For example:
- Parent forwarding rule: IP address: `198.51.100.1`, IP protocol: `TCP`, ports: 80
- Steering forwarding rule: IP address: `198.51.100.1`, IP protocol: `TCP`, ports: 80, sourceIPRanges: `203.0.113.0/24`
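A rough Google Cloud CLI sketch of this parent and steering pair follows; it assumes the address `198.51.100.1` is already reserved in the region, and the backend service names are placeholders:

```
# Parent forwarding rule: no source IP ranges.
gcloud compute forwarding-rules create parent-rule \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=default-backend-service

# Steering forwarding rule: same IP address, protocol, and port,
# plus a list of source IP ranges.
gcloud compute forwarding-rules create steering-rule \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80 \
    --source-ip-ranges=203.0.113.0/24 \
    --backend-service=steering-backend-service
```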
A parent forwarding rule that points to a backend service can be associated with a steering forwarding rule that points to a backend service or a target instance.
For a given parent forwarding rule, two or more steering forwarding rules can have overlapping, but not identical, source IP ranges. For example, one steering forwarding rule can have the source IP range `203.0.113.0/24`, and another steering forwarding rule for the same parent can have the source IP range `203.0.113.0`.
You must delete all steering forwarding rules before you can delete the parent forwarding rule upon which they depend.
To learn how incoming packets are processed when steering forwarding rules are used, see Forwarding rule selection.
Session affinity behavior across steering changes
This section describes the conditions under which session affinity might break when the source IP ranges for steering forwarding rules are updated:
- If an existing connection continues to match the same forwarding rule after you change the source IP ranges for a steering forwarding rule, session affinity doesn't break.
- If your change results in an existing connection matching a different forwarding rule, then:
  - Session affinity always breaks under these circumstances:
    - The newly matched forwarding rule directs an established connection to a backend service (or target instance) which doesn't reference the previously selected backend VM.
    - The newly matched forwarding rule directs an established connection to a backend service which does reference the previously selected backend VM, but the backend service is not configured to persist connections when backends are unhealthy, and the backend VM fails the backend service's health check.
  - Session affinity might break when the newly matched forwarding rule directs an established connection to a backend service, and the backend service does reference the previously selected VM, but the backend service's combination of session affinity and connection tracking mode results in a different connection tracking hash.
Preserving session affinity across steering changes
This section describes how to avoid breaking session affinity when the source IP ranges for steering forwarding rules are updated:
- Steering forwarding rules pointing to backend services. If both the parent and the steering forwarding rule point to backend services, you'll need to manually make sure that the session affinity and connection tracking policy settings are identical. Google Cloud does not automatically reject configurations if they are not identical.
- Steering forwarding rules pointing to target instances. A parent forwarding rule that points to a backend service can be associated with a steering forwarding rule that points to a target instance. In this case, the steering forwarding rule inherits session affinity and connection tracking policy settings from the parent forwarding rule.
For instructions on how to configure traffic steering, see Configure traffic steering.
Regional backend service
Each external passthrough Network Load Balancer has one regional backend service that defines the behavior of the load balancer and how traffic is distributed to its backends. The name of the backend service is the name of the external passthrough Network Load Balancer shown in the Google Cloud console.
Each backend service defines the following backend parameters:
Protocol. A backend service accepts traffic on the IP address and ports (if configured) specified by one or more regional external forwarding rules. The backend service passes packets to backend VMs while preserving the packet's source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports.
Backend services used with external passthrough Network Load Balancers support the following protocol options: `TCP`, `UDP`, or `UNSPECIFIED`.

Backend services with the `UNSPECIFIED` protocol can be used with any forwarding rule regardless of the forwarding rule protocol. Backend services with a specific protocol (`TCP` or `UDP`) can only be referenced by forwarding rules with the same protocol (`TCP` or `UDP`). Forwarding rules with the `L3_DEFAULT` protocol can only refer to backend services with the `UNSPECIFIED` protocol.

See Forwarding rule protocol specification for a table with possible forwarding rule and backend service protocol combinations.
Traffic distribution. A backend service allows traffic to be distributed according to a configurable session affinity, connection tracking mode, and weighted load balancing. The backend service can also be configured to enable connection draining and designate failover backends for the load balancer.
Health check. A backend service must have an associated regional health check.
Backends: each backend service operates in a single region and distributes traffic to either instance groups or zonal NEGs in the same region. You can use either instance groups or zonal NEGs, but not a combination of both, as backends for an external passthrough Network Load Balancer:
- If you choose instance groups, you can use unmanaged instance groups, zonal managed instance groups, regional managed instance groups, or a combination of instance group types.
- If you choose zonal NEGs, you must use `GCE_VM_IP` zonal NEGs.
Each instance group or NEG backend has an associated VPC network, even if that instance group or NEG hasn't been connected to a backend service yet. For more information about how a network is associated with each type of backend, see Instance group backends and network interfaces and Zonal NEG backends and network interfaces.
Instance groups
An external passthrough Network Load Balancer distributes connections among backend VMs contained within managed or unmanaged instance groups. Instance groups can be regional or zonal in scope.
An external passthrough Network Load Balancer only distributes traffic to the first network interface (`nic0`) of backend VMs. The load balancer supports instance groups whose member instances use any VPC network in the same region, as long as the VPC network is in the same project as the backend service. (All VMs within a given instance group must use the same VPC network.)
Each instance group has an associated VPC network, even if that instance group hasn't been connected to a backend service yet. For more information about how a network is associated with instance groups, see Instance group backends and network interfaces.
The external passthrough Network Load Balancer is highly available by design. There are no special steps needed to make the load balancer highly available because the mechanism doesn't rely on a single device or VM instance. You only need to make sure that your backend VM instances are deployed to multiple zones so that the load balancer can work around potential issues in any given zone.
Regional managed instance groups. Use regional managed instance groups if you can deploy your software by using instance templates. Regional managed instance groups automatically distribute traffic among multiple zones, providing the best option to avoid potential issues in any given zone.
An example deployment using a regional managed instance group is shown here. The instance group has an instance template that defines how instances should be provisioned, and each group deploys instances within three zones of the `us-central1` region.

Zonal managed or unmanaged instance groups. Use zonal instance groups in different zones (in the same region) to protect against potential issues in any given zone.
An example deployment using zonal instance groups is shown here. This load balancer provides availability across two zones.
Zonal NEGs
An external passthrough Network Load Balancer distributes connections among `GCE_VM_IP` endpoints contained within zonal network endpoint groups. These endpoints must be located in the same region as the load balancer. For some recommended zonal NEG use cases, see Zonal network endpoint groups overview.
Endpoints in the NEG must be primary internal IPv4 addresses of VM network interfaces that are in the same subnet and zone as the zonal NEG. The primary internal IPv4 address from any network interface of a multi-NIC VM instance can be added to a NEG as long as it is in the NEG's subnet.
Zonal NEGs support both IPv4 and dual-stack (IPv4 and IPv6) VMs. For both IPv4 and dual-stack VMs, it is sufficient to specify only the VM instance when attaching an endpoint to a NEG. You don't need to specify the endpoint's IP address. The VM instance must always be in the same zone as the NEG.
Each zonal NEG has an associated VPC network and a subnet, even if that zonal NEG hasn't been connected to a backend service yet. For more information about how a network is associated with zonal NEGs, see Zonal NEG backends and network interfaces.
Instance group backends and network interfaces
The VPC network associated with an instance group is the VPC network used by every member VM's `nic0` network interface.
- For managed instance groups (MIGs), the VPC network for the instance group is defined in the instance template.
- For unmanaged instance groups, the VPC network for the instance group is defined as the VPC network used by the `nic0` network interface of the first VM instance that you add to the unmanaged instance group.
Within a given (managed or unmanaged) instance group, all VM instances must have their `nic0` network interfaces in the same VPC network.

Each member VM can optionally have additional network interfaces (`nic1` through `nic7`), but each network interface must attach to a different VPC network. These networks must also be different from the VPC network associated with the instance group.
Instance group backends only receive packets on their `nic0` interfaces. If you want to receive traffic on a non-default network interface (`nic1` through `nic7`), you must use zonal NEGs with `GCE_VM_IP` endpoints.
Zonal NEG backends and network interfaces
When you create a new zonal NEG with `GCE_VM_IP` endpoints, you must explicitly associate the NEG with a subnetwork of a VPC network before you can add any endpoints to the NEG. Neither the subnet nor the VPC network can be changed after the NEG is created.
Within a given NEG, each `GCE_VM_IP` endpoint actually represents a network interface. The network interface must be in the subnetwork associated with the NEG. From the perspective of a Compute Engine instance, the network interface can use any identifier (`nic0` through `nic7`). From the perspective of being an endpoint in a NEG, the network interface is identified by using its primary IPv4 address.

There are two ways to add a `GCE_VM_IP` endpoint to a NEG:
- If you specify only a VM name (without any IP address) when adding an endpoint, Google Cloud requires that the VM has a network interface in the subnetwork associated with the NEG. The IP address that Google Cloud chooses for the endpoint is the primary internal IPv4 address of the VM's network interface in the subnetwork associated with the NEG.
- If you specify both a VM name and an IP address when adding an endpoint, the IP address that you provide must be a primary internal IPv4 address for one of the VM's network interfaces. That network interface must be in the subnetwork associated with the NEG. Note that specifying an IP address is redundant because there can only be a single network interface that is in the subnetwork associated with the NEG.
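A minimal sketch of both steps with the Google Cloud CLI follows; the NEG name, zone, network, subnet, and VM name are placeholders:

```
# Create a zonal NEG with GCE_VM_IP endpoints; the subnet association
# cannot be changed after creation.
gcloud compute network-endpoint-groups create nlb-zonal-neg \
    --zone=us-central1-a \
    --network-endpoint-type=GCE_VM_IP \
    --network=lb-network \
    --subnet=lb-subnet

# Attach an endpoint by VM name only; Google Cloud uses the primary internal
# IPv4 address of the VM's interface in the NEG's subnet.
gcloud compute network-endpoint-groups update nlb-zonal-neg \
    --zone=us-central1-a \
    --add-endpoint='instance=backend-vm-1'
```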
Backend services and VPC networks
The backend service is not associated with any VPC network; however, each backend instance group or zonal NEG is associated with a VPC network, as noted previously. As long as all backends are located in the same region and project, and as long as all backends are of the same type (instance groups or zonal NEGs), you can add backends that use either the same or different VPC networks.
To distribute packets to `nic1` through `nic7`, you must use zonal NEG backends (with `GCE_VM_IP` endpoints) because an instance group's associated VPC network always matches the VPC network used by the `nic0` interface of all member instances.
Dual-stack backends (IPv4 and IPv6)
If you want the load balancer to use dual-stack backends that handle both IPv4 and IPv6 traffic, note the following requirements:
- Backends must be configured in dual-stack subnets that are in the same region as the load balancer's IPv6 forwarding rule. For the backends, you can use a subnet with the `ipv6-access-type` set to either `EXTERNAL` or `INTERNAL`. If the backend subnet's `ipv6-access-type` is set to `INTERNAL`, you must use a different IPv6-only subnet with `ipv6-access-type` set to `EXTERNAL` for the load balancer's external forwarding rule.
- Backends must be configured to be dual-stack with `stack-type` set to `IPv4_IPv6`. If the backend subnet's `ipv6-access-type` is set to `EXTERNAL`, you must also set the `--ipv6-network-tier` to `PREMIUM`. For instructions, see Create an instance template with IPv6 addresses.
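A dual-stack instance template for such backends might look like the following sketch; the template name, machine type, image, region, and subnet are placeholders:

```
# Instance template for dual-stack backends in a subnet whose
# ipv6-access-type is EXTERNAL.
gcloud compute instance-templates create dual-stack-backend-template \
    --machine-type=e2-small \
    --region=us-central1 \
    --subnet=lb-dual-stack-subnet \
    --stack-type=IPV4_IPV6 \
    --ipv6-network-tier=PREMIUM \
    --image-family=debian-12 \
    --image-project=debian-cloud
```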
IPv6-only backends
If you want the load balancer to use IPv6-only backends, note the following requirements:
- IPv6-only instances are only supported in unmanaged instance groups. No other backend type is supported.
- Backends must be configured in either dual-stack or IPv6-only subnets that are in the same region as the load balancer's IPv6 forwarding rule. For the backends, you can use a subnet with the `ipv6-access-type` set to either `INTERNAL` or `EXTERNAL`. If the backend subnet's `ipv6-access-type` is set to `INTERNAL`, you must use a different IPv6-only subnet with `ipv6-access-type` set to `EXTERNAL` for the load balancer's external forwarding rule.
- Backends must be configured to be IPv6-only with the VM `stack-type` set to `IPv6_ONLY`. If the backend subnet's `ipv6-access-type` is set to `EXTERNAL`, you must also set the `--ipv6-network-tier` to `PREMIUM`. For instructions, see Create an instance template with IPv6 addresses.
Note that IPv6-only VMs can be created under both dual-stack and IPv6-only subnets, but dual-stack VMs can't be created under IPv6-only subnets.
Health checks
External passthrough Network Load Balancers use regional health checks to determine which instances can receive new connections. Each external passthrough Network Load Balancer's backend service must be associated with a regional health check. Load balancers use health check status to determine how to route new connections to backend instances.
For more details about how Google Cloud health checks work, see How health checks work.
External passthrough Network Load Balancers support the following types of health checks:
- HTTP, HTTPS, or HTTP/2. If your backend VMs serve traffic using HTTP, HTTPS, or HTTP/2, it's best to use a health check that matches that protocol. For details, see Success criteria for HTTP, HTTPS, and HTTP/2 health checks.
- SSL or TCP. If your backend VMs don't serve HTTP-type traffic, you should use either an SSL or a TCP health check. For details, see Success criteria for SSL and TCP health checks.
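For example, a regional TCP health check that a backend service can reference might be created as follows; the health check name, region, and port are placeholders:

```
# Regional TCP health check; choose the check type (tcp, ssl, http, https,
# http2) that matches the traffic your backends actually serve.
gcloud compute health-checks create tcp nlb-tcp-health-check \
    --region=us-central1 \
    --port=80
```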
Health checks for other protocol traffic
Google Cloud doesn't offer any protocol-specific health checks beyond the ones listed in the Health checks section, earlier on this page. When you use an external passthrough Network Load Balancer to load balance a protocol other than TCP, you must still run a TCP-based service on your backend VMs to provide the required health check information.
For example, if you are load balancing UDP traffic, client requests are load balanced by using the UDP protocol, and you must run a TCP service to provide information to Google Cloud health check probers. To achieve this, you can run an HTTP server on each backend VM that returns an HTTP 200 response to health check probers. You should use your own logic running on the backend VM to ensure that the HTTP server returns 200 only if the UDP service is properly configured and running.
Firewall rules
Because external passthrough Network Load Balancers are passthrough load balancers, you control access to the load balancer's backends using Google Cloud firewall rules. You must create ingress allow firewall rules or an ingress allow hierarchical firewall policy to permit health checks and the traffic that you're load balancing.
Forwarding rules and ingress allow firewall rules or Hierarchical firewall policies work together in the following way: a forwarding rule specifies the protocol and, if defined, port requirements that a packet must meet to be forwarded to a backend VM. Ingress allow firewall rules control whether the forwarded packets are delivered to the VM or dropped. All VPC networks have an implied deny ingress firewall rule that blocks incoming packets from any source. The Google Cloud default VPC network includes a limited set of pre-populated ingress allow firewall rules.
To accept traffic from any IP address on the internet, you must create an ingress allow firewall rule with the `0.0.0.0/0` source range. To only allow traffic from certain IP address ranges, use more restrictive source ranges.

As a security best practice, your ingress allow firewall rules should only permit the IP protocols and ports that you need. Restricting the protocol (and, if possible, port) configuration is especially important when using forwarding rules whose protocol is set to `L3_DEFAULT`. `L3_DEFAULT` forwarding rules forward packets for all supported IP protocols (on all ports if the protocol and packet have port information).

External passthrough Network Load Balancers use Google Cloud health checks. Therefore, you must always allow traffic from the health check IP address ranges. These ingress allow firewall rules can be made specific to the protocol and ports of the load balancer's health check.
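The following sketch shows one possible pair of ingress allow rules: one for client traffic and one for health check probes. The network name, target tags, and port are placeholders, and you should confirm the current health check probe ranges in the health checks documentation:

```
# Allow load-balanced TCP traffic from any client on the internet.
gcloud compute firewall-rules create allow-nlb-traffic \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=nlb-backends

# Allow Google Cloud health check probes to reach the backends.
gcloud compute firewall-rules create allow-nlb-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,209.85.152.0/22,209.85.204.0/22 \
    --target-tags=nlb-backends
```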
IP addresses for request and return packets
When a backend VM receives a load-balanced packet from a client, the packet's source and destination are:
- Source: the external IP address associated with a Google Cloud VM or internet-routable IP address of a system connecting to the load balancer.
- Destination: the IP address of the load balancer's forwarding rule.
Because the load balancer is a pass-through load balancer (not a proxy), packets arrive bearing the destination IP address of the load balancer's forwarding rule. Software running on backend VMs should be configured to do the following:
- Listen on (bind to) the load balancer's forwarding rule IP address or any IP address (`0.0.0.0` or `::`)
- If the load balancer forwarding rule's protocol supports ports: listen on (bind to) a port that's included in the load balancer's forwarding rule
Return packets are sent directly from the load balancer's backend VMs to the client. The return packet's source and destination IP addresses depend on the protocol:
- Because TCP is connection-oriented, backend VMs must reply with packets whose source IP addresses match the forwarding rule's IP address so that the client can associate the response packets with the appropriate TCP connection.
- UDP, ESP, GRE, and ICMP are connectionless. Backend VMs can send response packets whose source IP addresses either match the forwarding rule's IP address or match any assigned external IP address for the VM. Practically speaking, most clients expect the response to come from the same IP address to which they sent packets.
The following table summarizes sources and destinations for response packets:
Traffic type | Source | Destination |
---|---|---|
TCP | The IP address of the load balancer's forwarding rule | The requesting packet's source |
UDP, ESP, GRE, ICMP | For most use cases, the IP address of the load balancer's forwarding rule † | The requesting packet's source. |
† When a VM has an external IP address or when you are using Cloud NAT, it is also possible to set the response packet's source IP address to the VM NIC's primary internal IPv4 address. Google Cloud or Cloud NAT changes the response packet's source IP address to either the NIC's external IPv4 address or a Cloud NAT external IPv4 address in order to send the response packet to the client's external IP address. Not using the forwarding rule's IP address as a source is an advanced scenario because the client receives a response packet from an external IP address that does not match the IP address to which it sent a request packet.
Return path
External passthrough Network Load Balancers use special routes outside of your VPC network to direct incoming requests and health check probes to each backend VM.
The load balancer preserves the source IP addresses of packets. Responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return.
Shared VPC architecture
Except for the IP address, all of the components of an external passthrough Network Load Balancer must exist in the same project. The following table summarizes Shared VPC components for external passthrough Network Load Balancers:
IP address | Forwarding rule | Backend components |
---|---|---|
A regional external IP address must be defined in either the same project as the load balancer or the Shared VPC host project. | A regional external forwarding rule must be defined in the same project as the instances in the backend service. | The regional backend service must be defined in the same project and same region where the backends (instance group or zonal NEG) exist. Health checks associated with the backend service must be defined in the same project and the same region as the backend service. |
Traffic distribution
The way that external passthrough Network Load Balancers distribute new connections depends on whether you have configured failover:
- If you haven't configured failover, an external passthrough Network Load Balancer distributes new connections to its healthy backend VMs if at least one backend VM is healthy. When all backend VMs are unhealthy, the load balancer distributes new connections among all backends as a last resort. In this situation, the load balancer routes each new connection to an unhealthy backend VM.
- If you have configured failover, an external passthrough Network Load Balancer distributes
new connections among healthy backend VMs in its active pool, according to a
failover policy that you configure. When all backend VMs are unhealthy, you
can choose from one of the following behaviors:
- (Default) The load balancer distributes traffic to only the primary VMs. This is done as a last resort. The backup VMs are excluded from this last-resort distribution of connections.
- The load balancer drops traffic.
For details about how connections are distributed, see the next section Backend selection and connection tracking.
For details about how failover works, see the Failover section.
Backend selection and connection tracking
External passthrough Network Load Balancers use configurable backend selection and connection tracking algorithms to determine how traffic is distributed to backend VMs.
External passthrough Network Load Balancers use the following algorithm to distribute packets among backend VMs (in its active pool, if you have configured failover):
- If the load balancer has an entry in its connection tracking table matching the characteristics of an incoming packet, the packet is sent to the backend specified by the connection tracking table entry. The packet is considered to be part of a previously established connection, so the packet is sent to the backend VM that the load balancer previously determined and recorded in its connection tracking table.
If the load balancer receives a packet for which it has no connection tracking entry, the load balancer does the following:
The load balancer selects a backend. The load balancer calculates a hash based on the configured session affinity. It uses this hash to select a backend from among the ones that are healthy (unless all backends are unhealthy, in which case all backends are considered as long as the failover policy hasn't been configured to drop traffic in this situation). The default session affinity, `NONE`, uses the following hash algorithms:
- For TCP and unfragmented UDP packets, a 5-tuple hash of the packet's source IP address, source port, destination IP address, destination port, and the protocol.
- For fragmented UDP packets and all other protocols, a 3-tuple hash of the packet's source IP address, destination IP address, and the protocol.
Backend selection can be customized by using a hash algorithm that uses fewer pieces of information. For all the supported options, see session affinity options.
In addition, note the following:
If you enable weighted load balancing, the hash based backend selection becomes weighted based on the weights reported by backend instances. For example:
- Consider a backend service configured with session affinity set to `NONE` and a forwarding rule with protocol `UDP`. If there are two healthy backend instances with weights 1 and 4, then the backends will get 20% and 80% of the UDP packets, respectively.
- Consider a backend service that is configured with 3-tuple session affinity and connection tracking. The session affinity is `CLIENT_IP_PROTO`, and the connection tracking mode is `PER_SESSION`. If there are three healthy backend instances with weights 0, 2, and 6, then the backends will get traffic for 0%, 25%, and 75% of the new source IP addresses (the source IP addresses for which there are no existing connection tracking table entries), respectively. Traffic for existing connections will go to the previously assigned backends.
The load balancer adds an entry to its connection tracking table. This entry records the selected backend for the packet's connection so that all future packets from this connection are sent to the same backend. Whether connection tracking is used depends on the protocol:
TCP packets. Connection tracking is always enabled, and cannot be turned off. By default, connection tracking is 5-tuple, but it can be configured to be less than 5-tuple. When it is 5-tuple, TCP SYN packets are treated differently. Unlike non-SYN packets, they discard any matching connection tracking entry and always select a new backend.

The default 5-tuple connection tracking is used when:
- tracking mode is `PER_CONNECTION` (all session affinities), or,
- tracking mode is `PER_SESSION` and the session affinity is `NONE`, or,
- tracking mode is `PER_SESSION` and the session affinity is `CLIENT_IP_PORT_PROTO`.
UDP, ESP, and GRE packets. Connection tracking is enabled only if session affinity is set to something other than `NONE`.

ICMP and ICMPv6 packets. Connection tracking cannot be used.
For additional details about when connection tracking is enabled, and what tracking method is used when connection tracking is enabled, see connection tracking mode.
In addition, note the following:
- An entry in the connection tracking table expires 60 seconds after the load balancer processes the last packet that matched the entry. For more information, see Idle timeout.
- Depending on the protocol, the load balancer might remove connection tracking table entries when backends become unhealthy. For details and how to customize this behavior, see Connection persistence on unhealthy backends.
Session affinity options
Session affinity controls the distribution of new connections from clients to the load balancer's backend VMs. Session affinity is specified for the entire regional external backend service, not on a per backend basis.
External passthrough Network Load Balancers support the following session affinity options:
- None (`NONE`). 5-tuple hash of source IP address, source port, protocol, destination IP address, and destination port
- Client IP, Destination IP (`CLIENT_IP`). 2-tuple hash of source IP address and destination IP address
- Client IP, Destination IP, Protocol (`CLIENT_IP_PROTO`). 3-tuple hash of source IP address, destination IP address, and protocol
- Client IP, Client Port, Destination IP, Destination Port, Protocol (`CLIENT_IP_PORT_PROTO`). 5-tuple hash of source IP address, source port, protocol, destination IP address, and destination port
To learn how these session affinity options affect the backend selection and connection tracking methods, see this table.
Connection tracking mode
Whether connection tracking is enabled depends only on the protocol of the load-balanced traffic and the session affinity settings. Tracking mode specifies the connection tracking algorithm to be used when connection tracking is enabled. There are two tracking modes: `PER_CONNECTION` (default) and `PER_SESSION`.
- `PER_CONNECTION` (default). This is the default tracking mode. With this connection tracking mode, TCP traffic is always tracked per 5-tuple, regardless of the session affinity setting. For UDP, ESP, and GRE traffic, connection tracking is enabled when the selected session affinity is not `NONE`. UDP, ESP, and GRE packets are tracked using the tracking methods described in the following table.
- `PER_SESSION`. If session affinity is `CLIENT_IP` or `CLIENT_IP_PROTO`, configuring this mode results in 2-tuple and 3-tuple connection tracking, respectively, for all protocols (except ICMP and ICMPv6, which are not connection-trackable). For other session affinity settings, `PER_SESSION` mode behaves identically to `PER_CONNECTION` mode.
To learn how these tracking modes work with different session affinity settings for each protocol, see the following table.
Session affinity setting | Hash method for backend selection | Connection tracking mode: `PER_CONNECTION` (default) | Connection tracking mode: `PER_SESSION` |
---|---|---|---|
Default: No session affinity (`NONE`) | TCP and unfragmented UDP: 5-tuple hash. Fragmented UDP and all other protocols: 3-tuple hash | TCP: 5-tuple tracking. All other protocols: connection tracking off | TCP: 5-tuple tracking. All other protocols: connection tracking off |
Client IP, Destination IP (`CLIENT_IP`) | All protocols: 2-tuple hash | TCP and unfragmented UDP: 5-tuple tracking. Fragmented UDP, ESP, and GRE: 3-tuple tracking. ICMP and ICMPv6: not tracked | All protocols: 2-tuple tracking (ICMP and ICMPv6 are not tracked) |
Client IP, Destination IP, Protocol (`CLIENT_IP_PROTO`) | All protocols: 3-tuple hash | TCP and unfragmented UDP: 5-tuple tracking. Fragmented UDP, ESP, and GRE: 3-tuple tracking. ICMP and ICMPv6: not tracked | All protocols: 3-tuple tracking (ICMP and ICMPv6 are not tracked) |
Client IP, Client Port, Destination IP, Destination Port, Protocol (`CLIENT_IP_PORT_PROTO`) | TCP and unfragmented UDP: 5-tuple hash. Fragmented UDP and all other protocols: 3-tuple hash | TCP and unfragmented UDP: 5-tuple tracking. Fragmented UDP, ESP, and GRE: 3-tuple tracking. ICMP and ICMPv6: not tracked | Same as `PER_CONNECTION`: TCP and unfragmented UDP: 5-tuple tracking. Fragmented UDP, ESP, and GRE: 3-tuple tracking. ICMP and ICMPv6: not tracked |
To learn how to change the connection tracking mode, see Configure a connection tracking policy.
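As an example of such a configuration, the following sketch sets 2-tuple session affinity with per-session tracking on an existing backend service; the backend service name and region are placeholders, and you should verify the flags against the current gcloud reference:

```
# Configure the connection tracking policy on the backend service.
gcloud compute backend-services update nlb-backend-service \
    --region=us-central1 \
    --session-affinity=CLIENT_IP \
    --tracking-mode=PER_SESSION \
    --connection-persistence-on-unhealthy-backends=DEFAULT_FOR_PROTOCOL
```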
Connection persistence on unhealthy backends
The connection persistence settings control whether an existing connection persists on a selected backend VM or endpoint after that backend becomes unhealthy, as long as the backend remains in the load balancer's configured backend group (in an instance group or a NEG).
The behavior described in this section does not apply to cases where you remove a backend instance from an instance group or remove a backend endpoint from its NEG, or remove the instance group or the NEG from the backend service. In such cases, established connections only persist as described in connection draining.
The following connection persistence options are available:
- `DEFAULT_FOR_PROTOCOL` (default)
- `NEVER_PERSIST`
- `ALWAYS_PERSIST`
The following table summarizes connection persistence options and how connections persist for different protocols, session affinity options, and tracking modes.
Connection persistence on unhealthy backends option | Connection tracking mode: `PER_CONNECTION` | Connection tracking mode: `PER_SESSION` |
---|---|---|
`DEFAULT_FOR_PROTOCOL` | TCP: connections persist on unhealthy backends (all session affinities). All other protocols: connections never persist on unhealthy backends | TCP: connections persist on unhealthy backends if session affinity is `NONE` or `CLIENT_IP_PORT_PROTO`. All other protocols: connections never persist on unhealthy backends |
`NEVER_PERSIST` | All protocols: connections never persist on unhealthy backends | All protocols: connections never persist on unhealthy backends |
`ALWAYS_PERSIST` | TCP: connections persist on unhealthy backends (all session affinities). ESP, GRE, UDP: connections persist on unhealthy backends if session affinity is not `NONE`. ICMP, ICMPv6: not applicable because they are not connection-trackable. This option should only be used for advanced use cases. | Configuration not possible |
TCP connection persistence behavior on unhealthy backends
Whenever a TCP connection with 5-tuple tracking persists on an unhealthy backend:
- If the unhealthy backend continues to respond to packets, the connection continues until it is reset or closed (by either the unhealthy backend or the client).
- If the unhealthy backend sends a TCP reset (RST) packet or does not respond to packets, then the client might retry with a new connection, letting the load balancer select a different, healthy backend. TCP SYN packets always select a new, healthy backend.
To learn how to change connection persistence behavior, see Configure a connection tracking policy.
Idle timeout
Entries in connection tracking tables expire 60 seconds after the load balancer processes the last packet that matched the entry. This idle timeout value can't be modified.
Weighted load balancing
Weighted load balancing lets you configure an external passthrough Network Load Balancer to distribute traffic across the load balancer's backend instances based on the weights reported by an HTTP health check.
Weighted load balancing requires that you configure both of the following:
- You must set the locality load balancer policy (`localityLbPolicy`) of the backend service to `WEIGHTED_MAGLEV`.
- You must configure the backend service with an HTTP health check. The HTTP health check responses must contain a custom HTTP response header field, `X-Load-Balancing-Endpoint-Weight`, to specify the weights with values from `0` to `1000` for each backend instance.
If you use the same backend (instance group or NEG) for multiple backend service-based external passthrough Network Load Balancers using weighted load balancing, we recommend using a unique `request-path` for each health check of the backend service. This allows the endpoint instance to assign different weights to different backend services. For more information, see Success criteria for HTTP, HTTPS, and HTTP/2 health checks.
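A rough sketch of the two required settings follows; the health check name, request path, backend service name, and region are placeholders:

```
# Regional HTTP health check whose responses carry per-backend weights in the
# X-Load-Balancing-Endpoint-Weight response header.
gcloud compute health-checks create http weighted-hc \
    --region=us-central1 \
    --port=8080 \
    --request-path=/weight-report

# Enable weighted load balancing on the backend service.
gcloud compute backend-services update nlb-backend-service \
    --region=us-central1 \
    --health-checks=weighted-hc \
    --health-checks-region=us-central1 \
    --locality-lb-policy=WEIGHTED_MAGLEV
```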
For selecting a backend for a new connection, backends are assigned a strict priority order based on their weight and health status as shown in the following table:
Weight | Healthy | Backend selection priority |
---|---|---|
Weight greater than zero | Yes | 4 |
Weight greater than zero | No | 3 |
Weight equals zero | Yes | 2 |
Weight equals zero | No | 1 |
Only the highest priority backends are eligible for backend selection. If all eligible backends have zero weight, the load balancer distributes new connections among all eligible backends, treating them with equal weight. For examples of weighted load balancing, see Backend selection and connection tracking.
Weighted load balancing can be used in the following scenarios:
If some connections process more data than others, or some connections live longer than others, the backend load distribution might get uneven. By signaling a lower per-instance weight, an instance with high load can reduce its share of new connections, while it keeps servicing existing connections.
If a backend is overloaded and assigning more connections might break existing connections, the backend can assign a weight of zero to itself. By signaling zero weight, a backend instance stops servicing new connections, but continues to service existing ones.
If a backend is draining existing connections before maintenance, it assigns zero weight to itself. By signaling zero weight, the backend instance stops servicing new connections, but continues to service existing ones.
For more information, see Configure weighted load balancing.
Connection draining
Connection draining is a process applied to established connections in the following cases:
- A VM or endpoint is removed from a backend (instance group or NEG).
- A backend removes a VM or endpoint (through replacement, abandonment, rolling upgrades, or scaling down).
- A backend is removed from a backend service.
By default, connection draining is disabled. When disabled, established connections are terminated as quickly as possible. When connection draining is enabled, established connections are allowed to persist for a configurable timeout, after which the backend VM instance is terminated.
For more details about how connection draining is triggered and how to enable connection draining, see Enabling connection draining.
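For illustration, the following sketch enables a 300-second draining timeout on an existing backend service; the backend service name and region are placeholders:

```
# Give established connections up to 300 seconds to complete when a
# backend is removed.
gcloud compute backend-services update nlb-backend-service \
    --region=us-central1 \
    --connection-draining-timeout=300
```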
UDP fragmentation
Backend service-based external passthrough Network Load Balancers can process both fragmented and unfragmented UDP packets. If your application uses fragmented UDP packets, keep the following in mind:
- UDP packets might become fragmented before reaching a Google Cloud VPC network.
- Google Cloud VPC networks forward UDP fragments as they arrive (without waiting for all fragments to arrive).
- Non-Google Cloud networks and on-premises network equipment might forward UDP fragments as they arrive, delay fragmented UDP packets until all fragments have arrived, or discard fragmented UDP packets. For details, see the documentation for the network provider or network equipment.
If you expect fragmented UDP packets and need to route them to the same backends, use the following forwarding rule and backend service configuration parameters:
- Forwarding rule configuration: Use only one `UDP` or `L3_DEFAULT` forwarding rule per load-balanced IP address, and configure the forwarding rule to accept traffic on all ports. This ensures that all fragments arrive at the same forwarding rule. Even though the fragmented packets (other than the first fragment) lack a destination port, configuring the forwarding rule to process traffic for all ports also configures it to receive UDP fragments that have no port information. To configure all ports, either use the Google Cloud CLI to set `--ports=ALL` or use the API to set `allPorts` to `True`.
- Backend service configuration: Set the backend service's session affinity to `CLIENT_IP` (2-tuple hash) or `CLIENT_IP_PROTO` (3-tuple hash) so that the same backend is selected for UDP packets that include port information and UDP fragments (other than the first fragment) that lack port information. Set the backend service's connection tracking mode to `PER_SESSION` so that the connection tracking table entries are built by using the same 2-tuple or 3-tuple hashes.
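Taken together, those parameters might be applied as in the following sketch; the backend service name, forwarding rule name, and region are placeholders:

```
# 2-tuple affinity with per-session tracking so that fragments without port
# information hash to the same backend as the first fragment.
gcloud compute backend-services update udp-backend-service \
    --region=us-central1 \
    --session-affinity=CLIENT_IP \
    --tracking-mode=PER_SESSION

# A single UDP forwarding rule that accepts all ports for this IP address.
gcloud compute forwarding-rules create udp-all-ports-rule \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-protocol=UDP \
    --ports=ALL \
    --backend-service=udp-backend-service
```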
Using target instances as backends
If you're using target instances as backends for the external passthrough Network Load Balancer and you expect fragmented UDP packets, use only one `UDP` or `L3_DEFAULT` forwarding rule per IP address, and configure the forwarding rule to accept traffic on all ports. This ensures that all fragments arrive at the same forwarding rule even if they don't have the same destination port. To configure all ports, either set `--ports=ALL` using `gcloud`, or set `allPorts` to `True` using the API.
Failover
You can configure an external passthrough Network Load Balancer to distribute connections among VM instances or endpoints in primary backends (instance groups or NEGs), and then switch, if needed, to using failover backends. Failover provides yet another method of increasing availability, while also giving you greater control over how to manage your workload when your primary backends aren't healthy.
By default, when you add a backend to an external passthrough Network Load Balancer's backend service, that backend is a primary backend. You can designate a backend to be a failover backend when you add it to the load balancer's backend service, or by editing the backend service later.
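For example, a failover backend might be added to an existing backend service as in the following sketch; the backend service, instance group, region, and zone are placeholders:

```
# Add an instance group as a failover (backup) backend.
gcloud compute backend-services add-backend nlb-backend-service \
    --region=us-central1 \
    --instance-group=failover-ig \
    --instance-group-zone=us-central1-b \
    --failover
```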
For more details about how failover works, see Failover overview for external passthrough Network Load Balancers.
Outbound internet connectivity from backends
VM instances configured as an external passthrough Network Load Balancer's backend endpoints can initiate connections to the internet using the load balancer's forwarding rule IP address as the source IP address of the outbound connection.
Generally, a VM instance always uses its own external IP address or Cloud NAT to initiate connections. You use the forwarding rule IP address to initiate connections from backend endpoints only in special scenarios such as when you need VM instances to originate and receive connections at the same external IP address, and you also need the backend redundancy provided by the external passthrough Network Load Balancer for inbound connections.
Outbound packets sent from backend VMs directly to the internet have no restrictions on traffic protocols and ports. Even if an outbound packet is using the forwarding rule's IP address as the source, the packet's protocol and source port don't have to match the forwarding rule's protocol and port specification. However, inbound response packets must match the forwarding rule IP address, protocol, and destination port of the forwarding rule. For more information, see Paths for external passthrough Network Load Balancers and external protocol forwarding.
Additionally, any responses to the VM's outbound connections are subject to load balancing, just like all the other incoming packets meant for the load balancer. This means that responses might not arrive on the same backend VM that initiated the connection to the internet. If the outbound connections and load balanced inbound connections share common protocols and ports, then you can try one of the following suggestions:
Synchronize outbound connection state across backend VMs, so that connections can be served even if responses arrive at a backend VM other than the one that has initiated the connection.
Use a failover configuration, with a single primary VM and a single backup VM. Then, the active backend VM that initiates the outbound connections always receives the response packets.
This path to internet connectivity from an external passthrough Network Load Balancer's backends is the default intended behavior according to Google Cloud's implied firewall rules. However, if you have security concerns about leaving this path open, you can use targeted egress firewall rules to block unsolicited outbound traffic to the internet.
Limitations
You cannot use the Google Cloud console to do the following tasks:
- Create or modify an external passthrough Network Load Balancer whose forwarding rule uses the `L3_DEFAULT` protocol.
- Create or modify an external passthrough Network Load Balancer whose backend service protocol is set to `UNSPECIFIED`.
- Create or modify an external passthrough Network Load Balancer that configures a connection tracking policy.
- Create or modify source IP-based traffic steering for a forwarding rule.

Use either the Google Cloud CLI or the REST API instead.
External passthrough Network Load Balancers don't support VPC Network Peering.
Pricing
For pricing information, see Pricing.
What's next
- To configure an external passthrough Network Load Balancer with a backend service for TCP or UDP traffic only (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer with a backend service.
- To configure an external passthrough Network Load Balancer for multiple IP protocols (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer for multiple IP protocols.
- To configure an external passthrough Network Load Balancer with a zonal NEG backend, see Set up an external passthrough Network Load Balancer with zonal NEGs.
- To configure an external passthrough Network Load Balancer with a target pool, see Set up an external passthrough Network Load Balancer with a target pool.
- To learn how to transition an external passthrough Network Load Balancer from a target pool backend to a regional backend service, see Transitioning an external passthrough Network Load Balancer from a target pool to a backend service.