Cloud Load Balancing overview

A load balancer distributes user traffic across multiple instances of your applications. By spreading the load, load balancing reduces the risk that your applications become overburdened, slow, or nonfunctional.

Figure: Simple overview of load balancing

About Cloud Load Balancing

By using Cloud Load Balancing, you can serve content as close as possible to your users on a system that can respond to over one million queries per second.

Cloud Load Balancing is a fully distributed, software-defined managed service. It isn't hardware-based, so you don't need to manage a physical load balancing infrastructure.

Google Cloud offers the following load balancing features:

  • Single IP address to serve as the frontend
  • Intelligent autoscaling of your backends
  • External load balancing for when your users reach your applications from the internet
  • Internal load balancing for when your clients are inside of Google Cloud
  • Regional load balancing for when your applications are available in a single region
  • Global load balancing for when your applications are available across the world
  • Pass-through load balancing (also referred to as direct server return (DSR) or direct routing)
  • Proxy-based load balancing (as an alternative to pass-through)
  • Layer 4-based load balancing to direct traffic based on data from network and transport layer protocols, such as IP address and TCP or UDP port
  • Layer 7-based load balancing to add content-based routing decisions based on attributes such as HTTP headers and the uniform resource identifier (URI)
  • Integration with Cloud CDN for cached content delivery

For a more extensive list of features, see Load balancer features.

Types of Cloud Load Balancing

The following table summarizes the characteristics of each Google Cloud load balancer, including whether the load balancer uses an internal or an external IP address, whether the load balancer is regional or global, and the supported Network Service Tiers and traffic types.

| Load balancer type | Internal or external | Regional or global | Supported network tiers | Proxy or pass-through | Traffic type |
| --- | --- | --- | --- | --- | --- |
| Internal TCP/UDP | Internal | Regional | Premium only | Pass-through | TCP or UDP |
| Internal HTTP(S) | Internal | Regional | Premium only | Proxy | HTTP or HTTPS |
| TCP/UDP Network | External | Regional | Premium or Standard | Pass-through | TCP or UDP |
| TCP Proxy | External | Global in Premium Tier; effectively regional¹ in Standard Tier | Premium or Standard | Proxy | TCP |
| SSL Proxy | External | Global in Premium Tier; effectively regional¹ in Standard Tier | Premium or Standard | Proxy | SSL |
| External HTTP(S) | External | Global in Premium Tier; effectively regional¹ in Standard Tier | Premium or Standard | Proxy | HTTP or HTTPS |

¹ Effectively regional means that, while the backend service is global, if you choose Standard Tier, the external forwarding rule and external IP address must be regional, and the backend instance groups or network endpoint groups (NEGs) attached to the global backend service must be in the same region as the forwarding rule and IP address. For more information, see Configuring Standard Tier for HTTP(S) Load Balancing, TCP Proxy Load Balancing, and SSL Proxy Load Balancing.

Global versus regional load balancing

Use global load balancing when your backends are distributed across multiple regions, your users need access to the same applications and content, and you want to provide access by using a single anycast IP address. Global load balancing can also provide IPv6 termination.

Use regional load balancing when your backends are in one region, and you only require IPv4 termination.
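
The scope of the frontend IP address follows the same split. As a minimal sketch, assuming the placeholder names example-global-ip and example-regional-ip and the us-central1 region, you might reserve addresses for the two cases as follows:

    gcloud compute addresses create example-global-ip \
        --ip-version=IPV4 \
        --global

    gcloud compute addresses create example-regional-ip \
        --region=us-central1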

External versus internal load balancing

Google Cloud load balancers can be divided into external and internal load balancers:

  • External load balancers distribute traffic coming from the internet to your Google Cloud Virtual Private Cloud (VPC) network. Global load balancing requires that you use the Premium Tier of Network Service Tiers. For regional load balancing, you can use Standard Tier.

  • Internal load balancers distribute traffic to instances inside of Google Cloud.

Figure: External and internal load balancing types

The following diagram illustrates a common use case: how to use external and internal load balancing together. In the illustration, traffic from users in San Francisco, Iowa, and Singapore is directed to an external load balancer, which distributes that traffic to different regions in a Google Cloud network. An internal load balancer then distributes traffic between the us-central1-a and us-central1-b zones.

Figure: How external and internal load balancing work together

Traffic type

The type of traffic that you need your load balancer to handle is another factor in determining which load balancer to use:

  • For HTTP and HTTPS traffic, use:
    • External HTTP(S) Load Balancing
    • Internal HTTP(S) Load Balancing
  • For TCP traffic, use:
    • TCP Proxy Load Balancing
    • Network Load Balancing
    • Internal TCP/UDP Load Balancing
  • For UDP traffic, use:
    • Network Load Balancing
    • Internal TCP/UDP Load Balancing

Backend region and network

The following table summarizes support for backends residing in different VPC networks. The table also provides information about multi-NIC load balancing support.

| Load balancer type | Backend region and network | Multi-NIC notes |
| --- | --- | --- |
| Internal TCP/UDP Load Balancing | All backends must be in the same VPC network and the same region as the backend service. The backend service must also be in the same region and VPC network as the forwarding rule. | By using multiple load balancers, you can load balance to multiple NICs on the same backend VM. |
| Internal HTTP(S) Load Balancing | All backends must be in the same VPC network and the same region as the backend service. The backend service must also be in the same region and VPC network as the forwarding rule. | The backend VM's nic0 must be in the same network and region used by the forwarding rule. |
| HTTP(S) Load Balancing, SSL Proxy Load Balancing, TCP Proxy Load Balancing | In Premium Tier: backends can be in any region and any VPC network. In Standard Tier: backends must be in the same region as the forwarding rule, but can be in any VPC network. | The load balancer only sends traffic to the first network interface (nic0), regardless of which VPC network nic0 is in. |

Firewall rules

The following table summarizes the minimum required firewall rules for load balancer access.

| Load balancer type | Minimum required ingress allow firewall rules |
| --- | --- |
| External HTTP(S) Load Balancing | Health check ranges |
| Internal HTTP(S) Load Balancing | Health check ranges; proxy-only subnet |
| Internal TCP/UDP Load Balancing | Health check ranges; internal source IP addresses of clients |
| SSL Proxy Load Balancing | Health check ranges |
| TCP Proxy Load Balancing | Health check ranges |
| Network Load Balancing | Health check ranges; external source IP addresses of clients on the internet (for example, 0.0.0.0/0 or a specific set of ranges) |
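
For example, every row in the table includes the health check ranges, because backends must accept probes from Google's centralized health checking systems. The following command is a minimal sketch that assumes a hypothetical VPC network named example-vpc, backends tagged load-balanced-backend that listen on TCP port 80, and the health check source ranges documented by Google (verify the ranges that apply to your load balancer type in the health checks documentation):

    gcloud compute firewall-rules create example-allow-health-checks \
        --network=example-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=load-balanced-backend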

DDoS protections for external load balancers

Google Cloud provides different DDoS protections, depending on the load balancer type.

Proxy-based external load balancers

All of the Google Cloud proxy-based external load balancers automatically inherit DDoS protection from Google Front Ends (GFEs), which are part of Google's production infrastructure.

In addition to the automatic DDoS protection provided by the GFEs, you can configure Google Cloud Armor for external HTTP(S) load balancers.

Pass-through external load balancers

The only pass-through external load balancer is the network load balancer. These load balancers are built on the same Google routing infrastructure that implements external IP addresses for Compute Engine VMs. For inbound traffic to a network load balancer, Google Cloud limits incoming packets per VM.

For more information, see Inbound bandwidth to an external IP address.

The underlying technology of Google Cloud load balancers

This section provides more information about each type of Google Cloud load balancer, including links to overview documentation for a deeper understanding.

Figure: External and internal load balancing types and the underlying technology
  • Google Front Ends (GFEs) are software-defined, distributed systems that are located in Google points of presence (PoPs) and perform global load balancing in conjunction with other systems and control planes.
  • Andromeda is Google Cloud's software-defined network virtualization stack.
  • Maglev is a distributed system for Network Load Balancing.
  • Envoy proxy is an open source edge and service proxy, designed for cloud-native applications.

Internal HTTP(S) Load Balancing

Internal HTTP(S) Load Balancing is built on the Andromeda network virtualization stack and is a managed service based on the open source Envoy proxy. This load balancer provides proxy-based load balancing of Layer 7 application data. You specify how traffic is routed with URL maps. The load balancer uses an internal IP address that acts as the frontend to your backends.
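
As an illustration of URL map routing, the following sketch assumes two existing regional backend services with hypothetical names (example-web-bes and example-video-bes) in us-central1; it routes requests whose path matches /video/* to the video backend service and everything else to the web backend service:

    gcloud compute url-maps create example-int-map \
        --default-service=example-web-bes \
        --region=us-central1

    gcloud compute url-maps add-path-matcher example-int-map \
        --path-matcher-name=video-matcher \
        --new-hosts="*" \
        --default-service=example-web-bes \
        --path-rules="/video/*=example-video-bes" \
        --region=us-central1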

External HTTP(S) Load Balancing

HTTP(S) Load Balancing is implemented on GFEs. GFEs are distributed globally and operate together using Google's global network and control plane. In Premium Tier, GFEs offer cross-regional load balancing, directing traffic to the closest healthy backend that has capacity and terminating HTTP(S) traffic as close as possible to your users.
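
The resources you configure for this load balancer are global and are chained together: a health check, a backend service with your instance groups or NEGs, a URL map, a target proxy, and a global forwarding rule. The following is a minimal sketch with hypothetical names and an existing instance group example-ig in us-central1-a, not a complete production setup:

    gcloud compute health-checks create http example-hc --port=80

    gcloud compute backend-services create example-bes \
        --global \
        --protocol=HTTP \
        --port-name=http \
        --health-checks=example-hc

    gcloud compute backend-services add-backend example-bes \
        --global \
        --instance-group=example-ig \
        --instance-group-zone=us-central1-a

    gcloud compute url-maps create example-map --default-service=example-bes

    gcloud compute target-http-proxies create example-proxy --url-map=example-map

    gcloud compute forwarding-rules create example-fr \
        --global \
        --target-http-proxy=example-proxy \
        --ports=80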

Internal TCP/UDP Load Balancing

Internal TCP/UDP Load Balancing is built on the Andromeda network virtualization stack. Internal TCP/UDP Load Balancing enables you to load balance TCP/UDP traffic behind an internal load balancing IP address that is accessible only to your internal virtual machine (VM) instances. By using Internal TCP/UDP Load Balancing, an internal load balancing IP address is configured to act as the frontend to your internal backend instances. You use only internal IP addresses for your load balanced service. Overall, your configuration becomes simpler.

Internal TCP/UDP Load Balancing supports regional managed instance groups so that you can autoscale across a region, protecting your service from zonal failures.
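
A minimal sketch of such a setup, assuming hypothetical names and an existing instance group example-ig in us-central1-a, pairs a regional internal backend service with an internal forwarding rule in the same VPC network and region:

    gcloud compute health-checks create tcp example-tcp-hc --port=80

    gcloud compute backend-services create example-int-bes \
        --load-balancing-scheme=internal \
        --protocol=TCP \
        --region=us-central1 \
        --health-checks=example-tcp-hc

    gcloud compute backend-services add-backend example-int-bes \
        --region=us-central1 \
        --instance-group=example-ig \
        --instance-group-zone=us-central1-a

    gcloud compute forwarding-rules create example-int-fr \
        --load-balancing-scheme=internal \
        --region=us-central1 \
        --network=example-vpc \
        --subnet=example-subnet \
        --ip-protocol=TCP \
        --ports=80 \
        --backend-service=example-int-bes \
        --backend-service-region=us-central1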

External TCP/UDP Network Load Balancing

Network Load Balancing is built on Maglev. This load balancer enables you to load balance traffic on your systems based on incoming IP protocol data, including address, port, and protocol type. It is a regional, non-proxied load balancing system. Use Network Load Balancing for UDP traffic, and for TCP and SSL traffic on ports that are not supported by the SSL proxy load balancer and TCP proxy load balancer. A network load balancer is a pass-through load balancer that does not proxy connections from clients.
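
As a minimal sketch using a target pool (one of the supported backend types; hypothetical names throughout), a regional external forwarding rule passes TCP port 80 traffic straight through to the backend VMs:

    gcloud compute target-pools create example-pool --region=us-central1

    gcloud compute target-pools add-instances example-pool \
        --instances=example-vm-1,example-vm-2 \
        --instances-zone=us-central1-a

    gcloud compute forwarding-rules create example-nlb-fr \
        --region=us-central1 \
        --ip-protocol=TCP \
        --ports=80 \
        --target-pool=example-pool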

SSL Proxy Load Balancing

SSL Proxy Load Balancing is implemented on GFEs that are distributed globally. If you choose the Premium Tier of Network Service Tiers, an SSL proxy load balancer is global. In Premium Tier, you can deploy backends in multiple regions, and the load balancer automatically directs user traffic to the closest region that has capacity. If you choose the Standard Tier, an SSL proxy load balancer can only direct traffic among backends in a single region.

TCP Proxy Load Balancing

TCP Proxy Load Balancing is implemented on GFEs that are distributed globally. If you choose the Premium Tier of Network Service Tiers, a TCP proxy load balancer is global. In Premium Tier, you can deploy backends in multiple regions, and the load balancer automatically directs user traffic to the closest region that has capacity. If you choose the Standard Tier, a TCP proxy load balancer can only direct traffic among backends in a single region.

Interfaces

You can configure and update your load balancers through the following interfaces:

  • The gcloud command-line tool: a command-line tool included in the Cloud SDK. The load balancing documentation frequently uses this tool to accomplish tasks. For a complete overview of the tool, see the gcloud Tool Guide. You can find commands related to load balancing in the gcloud compute command group.

    You can also get detailed help for any gcloud command by using the --help flag:

    gcloud compute http-health-checks create --help
    
  • The Google Cloud Console: Load balancing tasks can be accomplished through the Google Cloud Console.

  • The REST API: All load balancing tasks can be accomplished using the Cloud Load Balancing API. The API reference docs describe the resources and methods available to you.
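
    For example, you can call the API directly from the command line. The following sketch assumes a project named example-project and uses the gcloud tool only to obtain an access token; it lists the project's forwarding rules across all regions and the global scope:

    curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://compute.googleapis.com/compute/v1/projects/example-project/aggregated/forwardingRules"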

What's next