A load balancer distributes user traffic across multiple instances of your applications. By spreading the load, load balancing reduces the risk that your applications experience performance issues.
About Cloud Load Balancing
By using Cloud Load Balancing, you can serve content as close as possible to your users on a system that can respond to over one million queries per second.
Cloud Load Balancing is a fully distributed, software-defined managed service. It isn't hardware-based, so you don't need to manage a physical load balancing infrastructure.
Google Cloud offers the following load balancing features:
- Single IP address to serve as the frontend
- Automatic intelligent autoscaling of your backends
- External load balancing for when your users reach your applications from the internet
- Internal load balancing for when your clients are inside of Google Cloud
- Regional load balancing for when your applications are available in a single region
- Global load balancing for when your applications are available across the world
- Pass-through load balancing (see also direct server return (DSR) or direct routing)
- Proxy-based load balancing (as an alternative to pass-through)
- Layer 4-based load balancing to direct traffic based on data from network and transport layer protocols such as TCP, UDP, ESP, or ICMP
- Layer 7-based load balancing to add content-based routing decisions based on attributes, such as the HTTP header and the uniform resource identifier
- Integration with Cloud CDN for cached content delivery
For a more extensive list of features, see Load balancer features.
Types of Cloud Load Balancing
The following table summarizes the characteristics of each Google Cloud load balancer, including whether the load balancer uses an internal or an external IP address, whether the load balancer is regional or global, and the supported Network Service Tiers and traffic types.
|Load balancer type|Internal or external|Regional or global|Supported network tiers|Proxy or pass-through|Traffic type|
|---|---|---|---|---|---|
|Internal TCP/UDP|Internal|Regional|Premium only|Pass-through|TCP or UDP|
|Internal HTTP(S)|Internal|Regional|Premium only|Proxy|HTTP or HTTPS|
|External TCP/UDP Network|External|Regional|Premium or Standard|Pass-through|TCP, UDP, ESP, or ICMP (Preview)|
|TCP Proxy|External|Global in Premium Tier; effectively regional¹ in Standard Tier|Premium or Standard|Proxy|TCP|
|SSL Proxy|External|Global in Premium Tier; effectively regional¹ in Standard Tier|Premium or Standard|Proxy|SSL|
|External HTTP(S)|External|Global in Premium Tier; effectively regional¹ in Standard Tier|Premium or Standard|Proxy|HTTP or HTTPS|
Global versus regional load balancing
Use global load balancing when your backends are distributed across multiple regions, your users need access to the same applications and content, and you want to provide access by using a single anycast IP address. Global load balancing can also provide IPv6 termination.
Use regional load balancing when your backends are in one region, and you only require IPv4 termination.
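One concrete difference is the frontend IP address: a global load balancer fronts traffic with a global anycast address, while a regional load balancer uses an address tied to one region. A sketch with `gcloud` (the address names and region are placeholders):

```shell
# Reserve a global anycast IPv4 address for a global load balancer's
# frontend. Global addresses require the Premium network tier.
gcloud compute addresses create lb-ip-global \
    --global \
    --ip-version=IPV4

# Reserve a regional external IPv4 address for a regional load
# balancer's frontend.
gcloud compute addresses create lb-ip-regional \
    --region=us-central1
```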
External versus internal load balancing
Google Cloud load balancers can be divided into external and internal load balancers:
External load balancers distribute traffic coming from the internet to your Google Cloud Virtual Private Cloud (VPC) network. Global load balancing requires that you use the Premium Tier of Network Service Tiers. For regional load balancing, you can use Standard Tier.
Internal load balancers distribute traffic to instances inside of Google Cloud.
The following diagram illustrates a common use case: how to use external and
internal load balancing together. In the illustration, traffic from users in San
Francisco, Iowa, and Singapore is directed to an external load balancer, which
distributes that traffic to different regions in a Google Cloud network.
Two internal load balancers then distribute traffic within the two regions; in us-central1, for example, the internal load balancer distributes traffic among the backend instances in that region.
The type of traffic that you need your load balancer to handle is another factor in determining which load balancer to use:
- For HTTP and HTTPS traffic, you can use:
- External HTTP(S) Load Balancing
- Internal HTTP(S) Load Balancing
- For TCP traffic, you can use:
- TCP Proxy Load Balancing
- Network Load Balancing
- Internal TCP/UDP Load Balancing
- For UDP traffic, you can use:
- Network Load Balancing
- Internal TCP/UDP Load Balancing
- For ESP or ICMP traffic, you can use:
- Network Load Balancing (Preview)
Backend region and network
The following table summarizes support for backends residing in different VPC networks. The table also provides information about multi-NIC load balancing support.
|Load balancer type|Backend region and network|Multi-NIC notes|
|---|---|---|
|Internal TCP/UDP Load Balancing|All backends must be in the same VPC network and the same region as the backend service. The backend service must also be in the same region and VPC network as the forwarding rule.|Can load balance to multiple NICs on the same backend VM by using a separate load balancer for each NIC.|
|Internal HTTP(S) Load Balancing|All backends must be in the same VPC network and the same region as the backend service. The backend service must also be in the same region and VPC network as the forwarding rule.|The backend VM's first network interface (nic0) is the only interface that receives load-balanced traffic.|
|HTTP(S) Load Balancing, SSL Proxy Load Balancing, TCP Proxy Load Balancing|In Premium Tier: backends can be in any region and any VPC network. In Standard Tier: backends must be in the same region as the forwarding rule, but can be in any VPC network.|The load balancer only sends traffic to the first network interface (nic0).|
Firewall rules
The following table summarizes the minimum required ingress allow firewall rules for load balancer access. The source ranges listed are Google's documented health check and Google Front End (GFE) ranges.
|Load balancer type|Minimum required ingress allow firewall rules|
|---|---|
|External HTTP(S) Load Balancing|Allow traffic from the GFE and health check ranges 130.211.0.0/22 and 35.191.0.0/16.|
|Internal HTTP(S) Load Balancing|Allow traffic from the health check ranges 130.211.0.0/22 and 35.191.0.0/16, and from the region's proxy-only subnet.|
|Internal TCP/UDP Load Balancing|Allow traffic from the health check ranges 130.211.0.0/22 and 35.191.0.0/16, and from the internal source IP ranges of your clients.|
|SSL Proxy Load Balancing|Allow traffic from the GFE and health check ranges 130.211.0.0/22 and 35.191.0.0/16.|
|TCP Proxy Load Balancing|Allow traffic from the GFE and health check ranges 130.211.0.0/22 and 35.191.0.0/16.|
|Network Load Balancing|Allow traffic from the health check ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22, and from the external source IP ranges of your clients.|
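For example, the health check ranges used by most load balancer types can be allowed with a single ingress rule. A sketch, assuming a network named `default` and backends serving on TCP port 80 (both are placeholders):

```shell
# Allow ingress from Google's health check and GFE source ranges
# to backends listening on TCP port 80.
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16
```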
DDoS protections for external load balancers
Google Cloud provides different levels of DDoS protection, depending on the load balancer type.
Proxy-based external load balancers
All of the Google Cloud proxy-based external load balancers automatically inherit DDoS protection from Google Front Ends (GFEs), which are part of Google's production infrastructure.
In addition to the automatic DDoS protection provided by the GFEs, you can configure Google Cloud Armor for external HTTP(S) load balancers.
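As a sketch of that configuration, attaching a Google Cloud Armor policy to an external HTTP(S) load balancer takes a policy, a rule, and an attachment to the backend service (`my-policy` and `web-backend-service` are placeholder names, and 203.0.113.0/24 is a documentation address range):

```shell
# Create a security policy with a rule that denies a source range.
gcloud compute security-policies create my-policy \
    --description="block a known-bad range"

gcloud compute security-policies rules create 1000 \
    --security-policy=my-policy \
    --src-ip-ranges=203.0.113.0/24 \
    --action=deny-403

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update web-backend-service \
    --security-policy=my-policy \
    --global
```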
Pass-through external load balancers
The only pass-through external load balancer is the network load balancer. These load balancers are implemented using the same Google routing infrastructure used to implement external IP addresses for Compute Engine VMs. For inbound traffic to a network load balancer, Google Cloud limits incoming packets per VM.
For more information, see Ingress to external IP address destinations.
The underlying technology of Google Cloud load balancers
This section provides more information about each type of Google Cloud load balancer, including links to overview documentation for a deeper understanding.
- Google Front Ends (GFEs) are software-defined, distributed systems that are located in Google points of presence (PoPs) and perform global load balancing in conjunction with other systems and control planes.
- Andromeda is Google Cloud's software-defined network virtualization stack.
- Maglev is a distributed system for Network Load Balancing.
- Envoy proxy is an open source edge and service proxy, designed for cloud-native applications.
Internal HTTP(S) Load Balancing
Internal HTTP(S) Load Balancing is built on the Andromeda network virtualization stack and is a managed service based on the open source Envoy proxy. This load balancer provides proxy-based load balancing of Layer 7 application data. You specify how traffic is routed with URL maps. The load balancer uses an internal IP address that acts as the frontend to your backends.
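For example, a URL map can route requests under `/video/` to a different backend service than the default. A sketch with placeholder names, assuming regional backend services already exist in us-central1:

```shell
# Create a regional URL map whose default route goes to
# web-backend-service.
gcloud compute url-maps create my-url-map \
    --default-service=web-backend-service \
    --region=us-central1

# Route requests matching /video/* to video-backend-service instead.
gcloud compute url-maps add-path-matcher my-url-map \
    --path-matcher-name=video-matcher \
    --default-service=web-backend-service \
    --path-rules="/video/*=video-backend-service" \
    --region=us-central1
```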
External HTTP(S) Load Balancing
HTTP(S) Load Balancing is implemented on GFEs. GFEs are distributed globally and operate together using Google's global network and control plane. In Premium Tier, GFEs offer cross-regional load balancing, directing traffic to the closest healthy backend that has capacity and terminating HTTP(S) traffic as close as possible to your users.
Internal TCP/UDP Load Balancing
Internal TCP/UDP Load Balancing is built on the Andromeda network virtualization stack. Internal TCP/UDP Load Balancing enables you to load balance TCP/UDP traffic behind an internal load balancing IP address that is accessible only to your internal virtual machine (VM) instances. By using Internal TCP/UDP Load Balancing, an internal load balancing IP address is configured to act as the frontend to your internal backend instances. You use only internal IP addresses for your load balanced service. Overall, your configuration becomes simpler.
Internal TCP/UDP Load Balancing supports regional managed instance groups so that you can autoscale across a region, protecting your service from zonal failures.
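A minimal sketch of this setup with `gcloud`, assuming a regional managed instance group and a health check already exist (all resource names are placeholders):

```shell
# Regional backend service for internal TCP load balancing.
gcloud compute backend-services create internal-be \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --region=us-central1 \
    --health-checks=my-health-check

# Add a regional managed instance group as the backend, so the
# service autoscales across zones in the region.
gcloud compute backend-services add-backend internal-be \
    --instance-group=my-regional-mig \
    --instance-group-region=us-central1 \
    --region=us-central1

# Internal forwarding rule: the internal IP address that acts as
# the frontend, reachable only from inside the VPC network.
gcloud compute forwarding-rules create internal-fr \
    --load-balancing-scheme=INTERNAL \
    --network=default \
    --region=us-central1 \
    --ports=80 \
    --backend-service=internal-be
```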
External TCP/UDP Network Load Balancing
Network Load Balancing is built on Maglev. This load balancer enables you to load balance traffic on your systems based on incoming IP protocol data, including address, protocol, and port (optional). It is a regional, non-proxied load balancing system. That is, a network load balancer is a pass-through load balancer that does not proxy connections from clients.
Backend service-based network load balancers support TCP, UDP, ESP, and ICMP traffic.
Target pool-based network load balancers support only TCP or UDP traffic.
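A sketch of the backend service-based variant, which is the one with the wider protocol support (resource names are placeholders, and a regional TCP health check is assumed to exist):

```shell
# Regional external backend service for a network load balancer.
gcloud compute backend-services create nlb-be \
    --load-balancing-scheme=EXTERNAL \
    --protocol=TCP \
    --region=us-central1 \
    --health-checks=my-health-check \
    --health-checks-region=us-central1

# Pass-through forwarding rule: clients connect to this external
# address, and backends see the original client source IP.
gcloud compute forwarding-rules create nlb-fr \
    --load-balancing-scheme=EXTERNAL \
    --region=us-central1 \
    --ports=80 \
    --backend-service=nlb-be
```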
SSL Proxy Load Balancing
SSL Proxy Load Balancing is implemented on GFEs that are distributed globally. If you choose the Premium Tier of Network Service Tiers, an SSL proxy load balancer is global. In Premium Tier, you can deploy backends in multiple regions, and the load balancer automatically directs user traffic to the closest region that has capacity. If you choose the Standard Tier, an SSL proxy load balancer can only direct traffic among backends in a single region.
TCP Proxy Load Balancing
TCP Proxy Load Balancing is implemented on GFEs that are distributed globally. If you choose the Premium Tier of Network Service Tiers, a TCP proxy load balancer is global. In Premium Tier, you can deploy backends in multiple regions, and the load balancer automatically directs user traffic to the closest region that has capacity. If you choose the Standard Tier, a TCP proxy load balancer can only direct traffic among backends in a single region.
Interfaces
You can configure and update your load balancers through the following interfaces:
`gcloud` command-line tool: a command-line tool included in the Cloud SDK. The HTTP(S) Load Balancing documentation frequently uses this tool to accomplish tasks. For a complete overview of the tool, see the gcloud Tool Guide. You can find commands related to load balancing in the `gcloud compute` command group. You can also get detailed help for any `gcloud` command by using the `--help` flag:

    gcloud compute http-health-checks create --help
The Google Cloud Console: Load balancing tasks can be accomplished through the Google Cloud Console.
The REST API: All load balancing tasks can be accomplished using the Cloud Load Balancing API. The API reference docs describe the resources and methods available to you.
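For example, listing the URL maps in a project through the API might look like the following, where PROJECT_ID is a placeholder and the request assumes you are authenticated with the Cloud SDK:

```shell
# List URL maps via the Compute Engine REST API, using an access
# token obtained from the gcloud tool.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps"
```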
- To help you determine which Google Cloud load balancer best meets your needs, see Choosing a load balancer.