Choose a load balancer

This document helps you determine which Google Cloud load balancer best meets your needs. To see an overview of all the Cloud Load Balancing products available, see Cloud Load Balancing overview.

To determine which Cloud Load Balancing product to use, you must first determine which traffic type your load balancers must handle. As a general rule, you'd choose an Application Load Balancer when you need a flexible feature set for applications with HTTP(S) traffic. You'd choose a proxy Network Load Balancer to implement TCP proxy load balancing to backends in one or more regions. And you'd choose a passthrough Network Load Balancer to preserve client source IP addresses, avoid the overhead of proxies, and support additional protocols such as UDP, ESP, and ICMP.

You can further narrow down your choices depending on your application's requirements: whether your application is external (internet-facing) or internal and whether you need backends deployed globally or regionally.
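
As a rough illustration of this decision flow, the following Python sketch encodes the general rules described above. The function name, parameters, and return strings are illustrative only and are not part of any Google Cloud API.

```python
# Illustrative only: a rough encoding of the selection guidance in this
# document, not an official Google Cloud API or an exhaustive decision tree.

def suggest_load_balancer(traffic: str, internet_facing: bool,
                          multi_region: bool, preserve_client_ip: bool = False) -> str:
    """Suggest a Cloud Load Balancing product family for a workload."""
    facing = "external" if internet_facing else "internal"
    if traffic in ("HTTP", "HTTPS"):
        scope = "global or cross-region" if multi_region else "regional"
        return f"{scope} {facing} Application Load Balancer"
    if traffic == "TCP" and not preserve_client_ip:
        # Proxied TCP: client connections are terminated at the load balancer.
        return f"{facing} proxy Network Load Balancer"
    # UDP, ESP, ICMP, and other IP protocols, or TCP when the client source IP
    # must be preserved, call for a passthrough Network Load Balancer.
    return f"regional {facing} passthrough Network Load Balancer"


print(suggest_load_balancer("HTTPS", internet_facing=True, multi_region=True))
# global or cross-region external Application Load Balancer
```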

The following diagram summarizes all the available deployment modes for Cloud Load Balancing.

[Diagram: Choose a load balancer]

1 Global external Application Load Balancers support two modes of operation: global and classic.

2 Global external proxy Network Load Balancers support two modes of operation: global and classic.

3 Passthrough Network Load Balancers preserve client source IP addresses. Passthrough Network Load Balancers also support additional protocols like UDP, ESP, and ICMP.

Load balancing aspects

To decide which load balancer best suits your implementation of Google Cloud, consider the following aspects of Cloud Load Balancing:

Traffic type

The type of traffic that you need your load balancer to handle is a primary factor in determining which load balancer to use.

  • Application Load Balancers: HTTP or HTTPS

  • Passthrough Network Load Balancers: TCP or UDP. These load balancers also support other IP protocol traffic such as ESP, GRE, ICMP, and ICMPv6.

  • Proxy Network Load Balancers: TCP with optional SSL offload

External versus internal load balancing

Google Cloud load balancers can be deployed as external or internal load balancers:

  • External load balancers distribute traffic that comes from the internet to your Google Cloud Virtual Private Cloud (VPC) network.

  • Internal load balancers distribute traffic that comes from clients in the same VPC network as the load balancer or clients connected to your VPC network by using VPC Network Peering, Cloud VPN, or Cloud Interconnect.

To determine which load balancer works for your application, use the summary table.

Global versus regional load balancing

Depending on the type of traffic you need the load balancer to handle, and whether your clients are internal or external, you might have the option to choose between either a global load balancer or a regional load balancer.

  • Choose a global load balancer or cross-region load balancer when you want the load balancer to be distributed globally or span multiple regions. Such load balancers can also distribute traffic to backends across multiple regions, making them suitable when your application or content is distributed across multiple regions, or when you want the flexibility to add multi-region backends as your application grows to new geographies.

    Only external load balancers are available as global load balancers. For internal load balancers with backends in multiple regions, choose the cross-region load balancer. Cross-region load balancers provide access by using a regional internal IP address that comes from the regional subnet in the VPC network. This is different from global load balancers, which provide access by using a single anycast IP address and by providing IPv6 termination at the load balancer. Review the following table to learn more.

  • Choose a regional load balancer when you need backends in one region only, you require only IPv4 termination (not IPv6), or when you have jurisdictional compliance requirements for traffic to stay in a particular region.

    Some workloads have jurisdictional compliance requirements that mandate keeping certain resources in a specific region or terminating traffic in a given region. If you require geographic control over where TLS is terminated, use a regional load balancer. A regional load balancer is deployed in a specific region that you choose and can connect only to backends in that region, so it guarantees that TLS is terminated only in the region where you've deployed the load balancer and its backends. In contrast, global load balancers terminate Transport Layer Security (TLS) in locations that are distributed globally to minimize latency between clients and the load balancer.

Load balancers are a critical component of most highly available applications. It is important to understand that the resilience of your overall application depends not just on the scope of the load balancer you choose (global or regional), but also on the redundancy of your backend services.

The following list summarizes load balancer resilience based on the load balancer's distribution or scope:

  • Global: Each load balancer is distributed across all regions, so it is resilient to both zonal and regional outages.

  • Cross-region: Each load balancer is distributed across multiple regions, so it is resilient to both zonal and regional outages.

  • Regional: Each load balancer is distributed across multiple zones in the region, so it is resilient to zonal outages. However, an outage in a given region affects the regional load balancers in that region.

To determine which load balancer works for your application, use the summary table.

Proxy versus passthrough load balancing

Depending on the type of traffic you need the load balancer to handle, and whether your clients are internal or external, you might have the option to choose between either a proxy load balancer or a passthrough load balancer.

Proxy load balancers terminate incoming client connections at the load balancer and then open new connections from the load balancer to the backends. All the Application Load Balancers and the proxy Network Load Balancers work this way. They terminate client connections by using either Google Front Ends (GFEs) or Envoy proxies.

Passthrough load balancers don't terminate client connections. Instead, load-balanced packets are received by backend VMs with the packet's source, destination, and, if applicable, port information unchanged. Connections are then terminated by the backend VMs. Responses from the backend VMs go directly to the clients, not back through the load balancer. The term for this is direct server return. Use a passthrough load balancer when you need to preserve the client packet information. As the name suggests, the passthrough Network Load Balancers come under this category.
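
To make the difference concrete, here is a minimal backend sketch using only the Python standard library that shows how a backend sees the client address in each model; the header handling shown for the proxy case is a simplification.

```python
# Minimal sketch of how a backend sees the client address behind a passthrough
# load balancer versus a proxy load balancer.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoClientIP(BaseHTTPRequestHandler):
    def do_GET(self):
        # Passthrough load balancer: packets arrive unmodified, so the TCP
        # peer address is the real client IP.
        peer_ip = self.client_address[0]

        # Proxy load balancer (for example, an Application Load Balancer):
        # the peer address is the proxy, and the original client IP is
        # carried in the X-Forwarded-For header instead.
        forwarded_for = self.headers.get("X-Forwarded-For", "")

        body = f"peer={peer_ip} x-forwarded-for={forwarded_for}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EchoClientIP).serve_forever()
```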

To determine which load balancer works for your application, use the summary table.

Premium versus Standard Network Service Tiers

Network Service Tiers lets you optimize connectivity between systems on the internet and your Google Cloud instances. Premium Tier delivers traffic on Google's premium backbone, while Standard Tier uses regular ISP networks. As a general rule, you'd choose Premium Tier for high performance and low latency. You can choose Standard Tier as a low-cost alternative for applications that don't have strict requirements for latency or performance.

Premium Tier. If the IP address of the load balancer is in the Premium Tier, traffic traverses Google's high-quality global backbone, with packets entering and exiting a Google edge peering point as close as possible to the client. If you don't specify a network tier, your load balancer defaults to the Premium Tier. Note that all internal load balancers are always in the Premium Tier. Additionally, the global external Application Load Balancer can be configured only in the Premium Tier.

Standard Tier. If the IP address of the load balancer is in the Standard Tier, the traffic enters and exits the Google network at a peering point closest to the Google Cloud region where the load balancer is configured. As noted in the summary table, not all load balancers can be deployed in the Standard Tier, so factor tier availability into your cost planning.

Because you choose a tier at the resource level—such as the external IP address for a load balancer or VM—you can use Standard Tier for some resources and Premium Tier for others. You can use the decision tree in the Network Service Tiers documentation to help you make your decision.
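
For example, because the tier is chosen per IP address, you could reserve one regional external address in each tier. The following sketch assumes the google-cloud-compute Python client library; the project ID, region, and address names are placeholders.

```python
# Sketch: reserve regional external IP addresses in the Standard and Premium
# Tiers. Project, region, and address names are placeholders.
from google.cloud import compute_v1

def reserve_address(project: str, region: str, name: str, tier: str) -> None:
    client = compute_v1.AddressesClient()
    address = compute_v1.Address(name=name, network_tier=tier)
    operation = client.insert(project=project, region=region, address_resource=address)
    operation.result()  # Block until the reservation completes.

reserve_address("my-project", "us-central1", "lb-ip-standard", "STANDARD")
reserve_address("my-project", "us-central1", "lb-ip-premium", "PREMIUM")
```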

DDoS protections for external load balancers

Google Cloud Armor provides both always-on and user-configurable DDoS protections, depending on the type of load balancer.

These protections cover the following load balancer types and modes:

  • Global external Application Load Balancer
  • Classic Application Load Balancer
  • Regional external Application Load Balancer
  • Global external proxy Network Load Balancer
  • Classic proxy Network Load Balancer
  • External passthrough Network Load Balancer

You can also configure advanced network DDoS protection for external passthrough Network Load Balancers, protocol forwarding, or VMs with public IP addresses. For more information about advanced network DDoS protection, see Configure advanced network DDoS protection.
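
As a sketch of the user-configurable side, the following example creates a basic Google Cloud Armor security policy with the google-cloud-compute Python client library; the project ID, policy name, and IP range are placeholders, and attaching the policy to a backend service is a separate step.

```python
# Sketch: create a Google Cloud Armor security policy that denies traffic from
# one source range. Project, policy name, and IP range are placeholders.
from google.cloud import compute_v1

def create_deny_policy(project: str, policy_name: str, blocked_range: str) -> None:
    client = compute_v1.SecurityPoliciesClient()
    policy = compute_v1.SecurityPolicy(
        name=policy_name,
        rules=[
            # Deny the blocked source range.
            compute_v1.SecurityPolicyRule(
                priority=1000,
                action="deny(403)",
                match=compute_v1.SecurityPolicyRuleMatcher(
                    versioned_expr="SRC_IPS_V1",
                    config=compute_v1.SecurityPolicyRuleMatcherConfig(
                        src_ip_ranges=[blocked_range]
                    ),
                ),
            ),
            # Default rule: allow all other traffic.
            compute_v1.SecurityPolicyRule(
                priority=2147483647,
                action="allow",
                match=compute_v1.SecurityPolicyRuleMatcher(
                    versioned_expr="SRC_IPS_V1",
                    config=compute_v1.SecurityPolicyRuleMatcherConfig(
                        src_ip_ranges=["*"]
                    ),
                ),
            ),
        ],
    )
    client.insert(project=project, security_policy_resource=policy).result()

create_deny_policy("my-project", "block-known-bad-range", "203.0.113.0/24")
```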

Summary of Google Cloud load balancers

The following summary provides more specific information about each load balancer. Each entry lists the deployment mode, followed by the traffic type, network service tier, and load-balancing scheme.

Application Load Balancers

  • Global external: HTTP or HTTPS; Premium Tier; EXTERNAL_MANAGED
  • Regional external: HTTP or HTTPS; Premium or Standard Tier; EXTERNAL_MANAGED
  • Classic: HTTP or HTTPS; global in Premium Tier, regional in Standard Tier; EXTERNAL
  • Regional internal: HTTP or HTTPS; Premium Tier; INTERNAL_MANAGED
  • Cross-region internal: HTTP or HTTPS; Premium Tier; INTERNAL_MANAGED

Proxy Network Load Balancers

  • Global external: TCP with optional SSL offload; Premium Tier; EXTERNAL_MANAGED
  • Regional external: TCP; Premium or Standard Tier; EXTERNAL_MANAGED
  • Classic: TCP with optional SSL offload; global in Premium Tier, regional in Standard Tier; EXTERNAL
  • Regional internal: TCP without SSL offload; Premium Tier; INTERNAL_MANAGED
  • Cross-region internal: TCP without SSL offload; Premium Tier; INTERNAL_MANAGED

Passthrough Network Load Balancers

  • External (always regional): TCP, UDP, ESP, GRE, ICMP, and ICMPv6; Premium or Standard Tier; EXTERNAL
  • Internal (always regional): TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE; Premium Tier; INTERNAL

The load-balancing scheme is an attribute on the forwarding rule and the backend service of a load balancer and indicates whether the load balancer can be used for internal or external traffic.

The term *_MANAGED in the load-balancing scheme indicates that the load balancer is implemented as a managed service either on Google Front Ends (GFEs) or on the open source Envoy proxy. In a load-balancing scheme that is *_MANAGED, requests are routed either to the GFE or to the Envoy proxy.
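
For example, you can inspect this attribute on existing forwarding rules. The following sketch assumes the google-cloud-compute Python client library; the project ID and region are placeholders, and global forwarding rules would be listed with the GlobalForwardingRulesClient instead.

```python
# Sketch: print the load-balancing scheme of each regional forwarding rule.
# Project and region are placeholders.
from google.cloud import compute_v1

client = compute_v1.ForwardingRulesClient()
for rule in client.list(project="my-project", region="us-central1"):
    # EXTERNAL, EXTERNAL_MANAGED, INTERNAL, or INTERNAL_MANAGED, depending on
    # the load balancer that the rule belongs to.
    print(rule.name, rule.load_balancing_scheme)
```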

What's next