Failover for external passthrough Network Load Balancer overview

You can configure a backend service-based external passthrough Network Load Balancer to distribute connections among virtual machine (VM) instances in primary backends, and then switch, if needed, to using failover backends. Failover provides one method of increasing availability, while also giving you greater control over how to manage your workload when your primary backend VMs aren't healthy.

This page describes concepts and requirements specific to failover for external passthrough Network Load Balancers. Make sure that you are familiar with the conceptual information in the following articles before you configure failover for your external passthrough Network Load Balancer:

These concepts are important to understand because configuring failover modifies the load balancer's standard traffic distribution algorithm.

By default, when you add a backend to an external passthrough Network Load Balancer's backend service, that backend is a primary backend. You can designate a backend to be a failover backend when you add it to the load balancer's backend service, or by editing the backend service later. Failover backends receive connections from the load balancer only after the ratio of healthy primary VMs falls below a configurable failover ratio.
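If you manage the load balancer programmatically, the designation is a property of the backend itself. The following is a minimal sketch using the google-cloud-compute Python client library; the project, region, and resource names (my-project, us-west1, my-backend-service, ig-failover) are hypothetical placeholders, not values defined on this page.

    # Minimal sketch: adding an instance group to a backend service as a
    # failover backend. All resource names below are hypothetical.
    from google.cloud import compute_v1

    client = compute_v1.RegionBackendServicesClient()
    service = client.get(
        project="my-project",
        region="us-west1",
        backend_service="my-backend-service",
    )

    # failover=True designates the backend as a failover backend; omitting it
    # (or setting it to False) leaves the backend as a primary backend.
    service.backends.append(
        compute_v1.Backend(
            group=(
                "https://www.googleapis.com/compute/v1/projects/my-project"
                "/zones/us-west1-c/instanceGroups/ig-failover"
            ),
            failover=True,
        )
    )

    client.patch(
        project="my-project",
        region="us-west1",
        backend_service="my-backend-service",
        backend_service_resource=service,
    )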

Supported backends

Instance groups (managed and unmanaged) and zonal NEGs (with GCE_VM_IP endpoints) are supported as backends. For simplicity, the examples on this page show unmanaged instance groups.

Using managed instance groups with autoscaling and failover might cause the active pool to repeatedly fail over and fail back between the primary and failover backends. Google Cloud doesn't prevent you from configuring failover with managed instance groups because your deployment might benefit from this setup.

Architecture

The following example depicts an external passthrough Network Load Balancer with one primary backend and one failover backend.

  • The primary backend is an unmanaged instance group in us-west1-a.
  • The failover backend is a different unmanaged instance group in us-west1-c.
Figure: Failover example for an external passthrough Network Load Balancer.

The next example depicts an external passthrough Network Load Balancer with two primary backends and two failover backends, both distributed between two zones in the us-west1 region. This configuration increases reliability because it doesn't depend on a single zone for all primary or all failover backends.

  • Primary backends are unmanaged instance groups ig-a and ig-d.
  • Failover backends are unmanaged instance groups ig-b and ig-c.
Figure: Multi-zone external passthrough Network Load Balancer failover.

During failover, both primary backends become inactive, while the healthy VMs in both failover backends become active. For a full explanation of how failover works in this example, see the Failover example.

Backend instance groups and VMs

The instance groups in external passthrough Network Load Balancers are either primary backends or failover backends. You can designate a backend as a failover backend when you add it to the backend service or by editing the backend afterward; otherwise, backends are primary by default.

You can configure multiple primary backends and multiple failover backends in a single external passthrough Network Load Balancer by adding them to the load balancer's backend service.

A primary VM is a member of an instance group that you've defined to be a primary backend. The VMs in a primary backend participate in the load balancer's active pool (described in the next section), unless the load balancer switches to using its failover backends.

A backup VM is a member of an instance group that you've defined to be a failover backend. The VMs in a failover backend participate in the load balancer's active pool when primary VMs become unhealthy. Failover is triggered when the ratio of healthy primary VMs falls below a configurable failover ratio.

Limits

  • Instance groups. You can have up to 50 primary backend instance groups and up to 50 failover backend instance groups.

Active pool

The active pool is the collection of backend VMs to which an external passthrough Network Load Balancer sends new connections. Membership in the active pool is computed automatically, based on which backend VMs are healthy and on conditions that you can specify, as described in Failover policy.

The active pool never combines primary VMs and backup VMs: during failover, the active pool contains only backup VMs, and during normal operation (after failback), it contains only primary VMs. The following figure illustrates both possibilities.

Figure: Active pool on failover and failback.

Failover and failback

Failover and failback are the automatic processes that switch backend VMs into or out of the load balancer's active pool. When Google Cloud removes primary VMs from the active pool and adds healthy failover VMs to the active pool, the process is called failover. When Google Cloud reverses this, the process is called failback.

Failover policy

A failover policy is a collection of parameters that Google Cloud uses for failover and failback. Each external passthrough Network Load Balancer has one failover policy that has multiple settings:

  • Failover ratio
  • Dropping traffic when all backend VMs are unhealthy
  • Connection draining on failover and failback

Failover ratio

A configurable failover ratio determines when Google Cloud performs a failover or failback, changing membership in the active pool. The ratio can be from 0.0 to 1.0, inclusive. If you don't specify a failover ratio, Google Cloud uses a default value of 0.0. It's a best practice to set your failover ratio to a number that works for your use case rather than relying on this default.
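Programmatically, the failover ratio is a field of the backend service's failover policy. The following is a hedged sketch continuing the client-library example from earlier, with the same hypothetical resource names:

    # Set the failover ratio to 0.5: fail over when fewer than half of the
    # primary VMs are healthy. This replaces any existing failover policy.
    service.failover_policy = compute_v1.BackendServiceFailoverPolicy(
        failover_ratio=0.5,
    )

    client.patch(
        project="my-project",
        region="us-west1",
        backend_service="my-backend-service",
        backend_service_resource=service,
    )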

The following rules determine which VMs are in the active pool:

  • All healthy primary VMs, when either of the following is true:
    1. The failover ratio (x) != 0.0, and the ratio of healthy primary VMs >= x.
    2. The failover ratio (x) = 0.0, and the number of healthy primary VMs > 0.
  • All healthy backup VMs, when at least one backup VM is healthy and either of the following is true:
    1. The failover ratio (x) != 0.0, and the ratio of healthy primary VMs < x.
    2. The failover ratio (x) = 0.0, and the number of healthy primary VMs = 0.
  • All primary VMs, as a last resort, when all primary VMs and all backup VMs are unhealthy and you haven't configured your load balancer to drop traffic in this situation.
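The preceding rules are compact enough to express in code. The following pure-Python sketch models the documented selection behavior; it is an illustration, not Google Cloud's implementation, and the function and parameter names (active_pool, primary, backup, drop_if_all_unhealthy) are hypothetical. The drop_if_all_unhealthy flag anticipates the drop-traffic setting described later on this page.

    def active_pool(primary, backup, failover_ratio=0.0, drop_if_all_unhealthy=False):
        """Model of active-pool selection.

        primary and backup are lists of booleans, one per VM
        (True = the VM passes health checks). Returns a label for the
        set of VMs that receives new connections.
        """
        healthy_primary = sum(primary)

        if failover_ratio != 0.0:
            primary_ok = healthy_primary / len(primary) >= failover_ratio
        else:
            primary_ok = healthy_primary > 0

        if primary_ok:
            return "healthy primary VMs"
        if sum(backup) > 0:
            return "healthy backup VMs"
        # No healthy backup VMs to fail over to: drop new connections if
        # configured to do so; otherwise use the primary VMs as a last resort.
        return "no VMs (dropped)" if drop_if_all_unhealthy else "all primary VMs (last resort)"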

The following examples clarify membership in the active pool; the check after this list runs the same cases through the earlier sketch. For an example with calculations, see the Failover example.

  • A failover ratio of 1.0 requires that all primary VMs be healthy. When at least one primary VM becomes unhealthy, Google Cloud performs a failover, moving the backup VMs into the active pool.
  • A failover ratio of 0.1 requires that at least 10% of the primary VMs be healthy; otherwise, Google Cloud performs a failover.
  • A failover ratio of 0.0 means that Google Cloud performs a failover only when all the primary VMs are unhealthy. Failover doesn't happen if at least one primary VM is healthy.
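Running these three cases through the sketch above, with four primary VMs of which one is unhealthy and two healthy backup VMs (hypothetical health values):

    primary = [True, True, True, False]  # 3 of 4 primary VMs healthy (75%)
    backup = [True, True]

    print(active_pool(primary, backup, failover_ratio=1.0))  # healthy backup VMs
    print(active_pool(primary, backup, failover_ratio=0.1))  # healthy primary VMs
    print(active_pool(primary, backup, failover_ratio=0.0))  # healthy primary VMs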

An external passthrough Network Load Balancer distributes connections among VMs in the active pool according to the traffic distribution algorithm.

Dropping traffic when all backend VMs are unhealthy

By default, when all primary and backup VMs are unhealthy, Google Cloud distributes new connections among all primary VMs as a last resort.

If you prefer, you can configure your external passthrough Network Load Balancer to drop new connections when all primary and backup VMs are unhealthy.
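Continuing the earlier client-library sketch, this behavior corresponds to the drop_traffic_if_unhealthy field of the failover policy (hypothetical resource names as before):

    # Drop new connections, instead of falling back to the primary VMs,
    # when every primary and backup VM is unhealthy.
    service.failover_policy.drop_traffic_if_unhealthy = True

    client.patch(
        project="my-project",
        region="us-west1",
        backend_service="my-backend-service",
        backend_service_resource=service,
    )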

Connection draining on failover and failback

When connection draining is enabled in the failover policy, established connections continue to be served by the VMs on which they were established, whether those VMs are in primary or failover instance groups, even after a failover or failback. This prevents those connections from breaking. When connection draining is disabled, any existing connections are terminated immediately during failover or failback.

If the protocol for your load balancer is TCP, the following is true:

  • By default, connection draining is enabled. Existing TCP sessions can persist on their current backend VMs even if the backend VM isn't in the load balancer's active pool.

  • You can disable connection draining during failover and failback events. Disabling connection draining ensures that all TCP sessions, including established ones, are quickly terminated; connections to backend VMs might be closed with a TCP reset (RST) packet. A configuration sketch follows this list.
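Continuing the same client-library sketch, disabling connection draining on failover and failback corresponds to the disable_connection_drain_on_failover field of the failover policy:

    # Terminate existing connections immediately on failover and failback
    # instead of letting them drain on their current backend VMs.
    service.failover_policy.disable_connection_drain_on_failover = True

    client.patch(
        project="my-project",
        region="us-west1",
        backend_service="my-backend-service",
        backend_service_resource=service,
    )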

Disabling connection draining on failover and failback is useful for scenarios such as the following:

  • Patching backend VMs. Prior to patching, configure your primary VMs to fail health checks so that the load balancer performs a failover. Disabling connection draining ensures that all connections are moved to the backup VMs quickly and in a planned fashion. This lets you install updates and restart the primary VMs without existing connections persisting. After patching, Google Cloud can perform a failback when a sufficient number of primary VMs (as defined by the failover ratio) pass their health checks.

  • Single backend VM for data consistency. If you need to ensure that only one VM is the destination for all connections, disable connection draining so that switching from a primary to a backup VM does not allow existing connections to persist on both. This reduces the possibility of data inconsistencies by keeping just one backend VM active at any given time.

Failover example

The following example describes failover behavior for the multi-zone external passthrough Network Load Balancer example presented in the architecture section.

Figure: Multi-zone external passthrough Network Load Balancer failover.

The primary backends for this load balancer are the unmanaged instance groups ig-a in us-west1-a and ig-d in us-west1-c. Each instance group contains two VMs. All four VMs from both instance groups are primary VMs:

  • vm-a1 in ig-a
  • vm-a2 in ig-a
  • vm-d1 in ig-d
  • vm-d2 in ig-d

The failover backends for this load balancer are the unmanaged instance groups ig-b in us-west1-a and ig-c in us-west1-c. Each instance group contains two VMs. All four VMs from both instance groups are backup VMs:

  • vm-b1 in ig-b
  • vm-b2 in ig-b
  • vm-c1 in ig-c
  • vm-c2 in ig-c

Suppose you want to configure a failover policy for this load balancer such that new connections are delivered to backup VMs when the number of healthy primary VMs is fewer than two. To accomplish this, set the failover ratio to 0.5 (50%). Google Cloud uses the failover ratio to calculate the minimum number of primary VMs that must be healthy by multiplying the failover ratio by the number of primary VMs: 4 × 0.5 = 2

When all four primary VMs are healthy, Google Cloud distributes new connections to all of them. When primary VMs fail health checks, the following happens (the sketch after this list reproduces these steps):

  • If vm-a1 and vm-d1 become unhealthy, Google Cloud distributes new connections between the remaining two healthy primary VMs, vm-a2 and vm-d2, because the number of healthy primary VMs is at least the minimum.

  • If vm-a2 also fails health checks, leaving only one healthy primary VM, vm-d2, Google Cloud recognizes that the number of healthy primary VMs is fewer than the minimum, so it performs a failover. The active pool is set to the four healthy backup VMs, and new connections are distributed among those four (in instance groups ig-b and ig-c). Even though vm-d2 remains healthy, it is removed from the active pool and does not receive new connections.

  • If vm-a2 recovers and passes its health check, Google Cloud recognizes that the number of healthy primary VMs is at least the minimum of two, so it performs a failback. The active pool is set to the two healthy primary VMs, vm-a2 and vm-d2, and new connections are distributed between them. All backup VMs are removed from the active pool.

  • As other primary VMs recover and pass their health checks, Google Cloud adds them to the active pool. For example, if vm-a1 becomes healthy, Google Cloud sets the active pool to the three healthy primary VMs, vm-a1, vm-a2, and vm-d2, and distributes new connections among them.
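The active-pool sketch from earlier reproduces this walkthrough. The primary health values below are ordered vm-a1, vm-a2, vm-d1, vm-d2; all four backup VMs stay healthy throughout:

    backup = [True] * 4  # vm-b1, vm-b2, vm-c1, vm-c2

    # vm-a1 and vm-d1 unhealthy: 2/4 >= 0.5, so the primary VMs stay active.
    print(active_pool([False, True, False, True], backup, failover_ratio=0.5))

    # vm-a2 also fails: 1/4 < 0.5, so Google Cloud fails over to the backup VMs.
    print(active_pool([False, False, False, True], backup, failover_ratio=0.5))

    # vm-a2 recovers: 2/4 >= 0.5 again, so Google Cloud fails back.
    print(active_pool([False, True, False, True], backup, failover_ratio=0.5))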

What's next