External HTTP(S) Load Balancing use cases


The external HTTP(S) load balancers address many use cases. This page provides some high-level examples.

Three-tier web services

You can use external HTTP(S) Load Balancing to support traditional three-tier web services. The following example shows how you can use three types of Google Cloud load balancers to scale three tiers. At each tier, the load balancer type depends on your traffic type:

  * Web tier: Traffic enters from the internet and is load balanced by an external HTTP(S) load balancer.
  * Application tier: The application tier is scaled by using a regional internal HTTP(S) load balancer.
  * Database tier: The database tier is scaled by using an internal TCP/UDP load balancer.

The diagram shows how traffic moves through the tiers:

  1. An external HTTP(S) load balancer (the subject of this overview) distributes traffic from the internet to a set of web frontend instance groups in various regions.
  2. These web frontends send the HTTP(S) traffic to a set of regional, internal HTTP(S) load balancers. For the external HTTP(S) load balancer to forward traffic to an internal HTTP(S) load balancer, the external HTTP(S) load balancer must have backend instances that run web server software (such as NGINX or Apache) configured to forward the traffic to the frontend of the internal HTTP(S) load balancer.
  3. The internal HTTP(S) load balancers distribute the traffic to middleware instance groups.
  4. These middleware instance groups send the traffic to internal TCP/UDP load balancers, which load balance the traffic to data storage clusters.
Layer 7-based routing for internal tiers in a multi-tier app
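Step 2 above can be sketched as an NGINX configuration on a web frontend instance. This is a minimal, hypothetical example: the IP address 10.1.2.99 is a placeholder for the IP address of your internal HTTP(S) load balancer's forwarding rule.

```shell
# Hypothetical NGINX configuration on a web frontend backend instance that
# forwards all requests to the frontend of an internal HTTP(S) load balancer.
# 10.1.2.99 is a placeholder for your internal forwarding rule's IP address.
cat > /etc/nginx/conf.d/forward-to-internal-lb.conf <<'EOF'
server {
    listen 80;

    location / {
        # Forward traffic to the internal HTTP(S) load balancer frontend.
        proxy_pass http://10.1.2.99;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -s reload
```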

Multi-region load balancing

When you configure HTTP(S) Load Balancing in Premium Tier, it uses a global external IP address and can intelligently route requests from users to the closest backend instance group or NEG, based on proximity. For example, if you set up instance groups in North America, Europe, and Asia, and attach them to a load balancer's backend service, user requests around the world are automatically sent to the VMs closest to the users, assuming the VMs pass health checks and have enough capacity (defined by the balancing mode). If the closest VMs are all unhealthy, or if the closest instance group is at capacity and another instance group is not at capacity, the load balancer automatically sends requests to the next closest region with capacity.

In Premium Tier, the external HTTP(S) load balancer provides multi-region load balancing, using multiple backend services, each with backend instance groups or NEGs in multiple regions.

Representation of multi-region load balancing
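A multi-region setup like the one described above can be sketched with gcloud by attaching instance groups from different regions to a single global backend service. The names (web-backend-service, ig-us, ig-eu, http-basic-check) and zones are placeholders; the RATE balancing mode and the per-instance rate shown here are illustrative capacity settings, not recommendations.

```shell
# Sketch: one global backend service with backends in two regions.
# All resource names and zones are placeholders.
gcloud compute backend-services create web-backend-service \
    --global \
    --protocol=HTTP \
    --health-checks=http-basic-check

# Backend in North America.
gcloud compute backend-services add-backend web-backend-service \
    --global \
    --instance-group=ig-us \
    --instance-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-instance=100

# Backend in Europe. Requests are routed to the closest healthy
# backend that has capacity under its balancing mode.
gcloud compute backend-services add-backend web-backend-service \
    --global \
    --instance-group=ig-eu \
    --instance-group-zone=europe-west1-b \
    --balancing-mode=RATE \
    --max-rate-per-instance=100
```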

Workloads with jurisdictional compliance

Some workloads with regulatory or compliance requirements require that network configurations and traffic termination reside in a specific region. For these workloads, a regional external HTTP(S) load balancer is often the preferred option to provide the jurisdictional controls these workloads require.
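For these workloads, the key property is that the load balancer's resources are created in, and confined to, a single region. A minimal sketch, assuming a region of europe-west3 and placeholder resource names:

```shell
# Sketch: a regional backend service for a regional external HTTP(S) load
# balancer. The region and resource names are placeholders; the regional
# scope keeps configuration and traffic termination within europe-west3.
gcloud compute backend-services create regional-web-backend \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=regional-http-check \
    --health-checks-region=europe-west3 \
    --region=europe-west3
```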

Advanced traffic management

With global external HTTP(S) load balancers and regional external HTTP(S) load balancers, you can add advanced traffic management capabilities that give you fine-grained control over how traffic is handled. These capabilities help you to meet your availability and performance objectives. One of the benefits of using HTTP(S) Load Balancing for these use cases is that you can update how traffic is managed without needing to modify your application code.
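As one illustration of updating traffic handling without touching application code, a weighted traffic split can be expressed in a URL map and applied with gcloud. This is a hedged sketch: PROJECT_ID and the service names (stable-service, canary-service) are placeholders, and the 95/5 split is just an example.

```shell
# Sketch: send 5% of traffic to a canary backend service by importing a
# URL map with a weighted backend split. All names are placeholders.
cat > web-map.yaml <<'EOF'
name: web-map
defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/stable-service
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultRouteAction:
    weightedBackendServices:
    - backendService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/stable-service
      weight: 95
    - backendService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/canary-service
      weight: 5
EOF
gcloud compute url-maps import web-map --source=web-map.yaml --global
```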

For more details, see the following:

Load balancing with request routing

HTTP(S) Load Balancing supports request routing by using URL maps to select a backend service based on the requested host name, request path, or both. For example, you can use a set of instance groups or NEGs to handle your video content and another set to handle everything else.

You can also use external HTTP(S) load balancers with Cloud Storage buckets. After you have your load balancer set up, you can add Cloud Storage buckets to it.
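The host- and path-based routing described above, including a Cloud Storage backend, can be sketched with gcloud. The resource names (web-backend-service, video-backend-service, my-static-assets) are placeholders.

```shell
# Sketch: route /video/* to a dedicated backend service and /static/* to a
# Cloud Storage bucket; everything else goes to the default backend.
gcloud compute url-maps create web-map \
    --default-service=web-backend-service

# A backend bucket wraps an existing Cloud Storage bucket.
gcloud compute backend-buckets create static-assets-bucket \
    --gcs-bucket-name=my-static-assets

gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=content-matcher \
    --default-service=web-backend-service \
    --new-hosts='*' \
    --path-rules="/video/*=video-backend-service" \
    --backend-bucket-path-rules="/static/*=static-assets-bucket"
```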

For more information, see URL map concepts.

Load balancing for GKE applications

If you want to expose your applications in GKE to the internet, we recommend that you use the built-in GKE Ingress controller, which deploys Google Cloud load balancers on behalf of GKE users. This is the same as the standalone load balancing architecture, except that its lifecycle is fully automated and controlled by GKE.
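As a minimal sketch of the GKE Ingress approach, the manifest below asks the built-in controller to provision an external HTTP(S) load balancer for an existing Service. The Service name (web-service) and port are placeholders.

```shell
# Sketch: a minimal Ingress that the GKE Ingress controller turns into an
# external HTTP(S) load balancer. web-service is a placeholder Service name.
cat > ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: web-service
      port:
        number: 80
EOF
kubectl apply -f ingress.yaml
```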

Related GKE documentation:

Load balancing for Cloud Run, Cloud Functions, and App Engine applications

You can use a global external HTTP(S) load balancer as the frontend for your Cloud Run, Cloud Functions, and App Engine applications. To set this up, you use a serverless NEG for the load balancer's backend.

This diagram shows how a serverless NEG fits into the HTTP(S) Load Balancing model.

HTTP(S) Load Balancing for serverless apps
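The serverless NEG setup can be sketched as follows, assuming a Cloud Run service named my-cloud-run-service in us-central1; all resource names are placeholders. Note that backend services with serverless NEG backends don't use health checks.

```shell
# Sketch: a serverless NEG pointing at a Cloud Run service, attached to a
# global backend service. Names and the region are placeholders.
gcloud compute network-endpoint-groups create serverless-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-cloud-run-service

# No health check: serverless NEG backends are not health checked.
gcloud compute backend-services create serverless-backend \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED

gcloud compute backend-services add-backend serverless-backend \
    --global \
    --network-endpoint-group=serverless-neg \
    --network-endpoint-group-region=us-central1
```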

Related documentation:

Load balancing with hybrid connectivity

Cloud Load Balancing supports load balancing traffic to endpoints that extend beyond Google Cloud, such as on-premises data centers and other public clouds, which you can reach by using hybrid connectivity.

The following diagram demonstrates a hybrid deployment with a global external HTTP(S) load balancer.

Hybrid connectivity with External HTTP(S) Load Balancing
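A hybrid deployment relies on a hybrid connectivity NEG whose endpoints are IP:port pairs reachable over Cloud VPN or Cloud Interconnect. A minimal sketch, where the zone, network, and the on-premises endpoint address are placeholders:

```shell
# Sketch: a hybrid connectivity NEG with one on-premises endpoint.
# The zone, network, and endpoint address are placeholders.
gcloud compute network-endpoint-groups create on-prem-neg \
    --zone=us-central1-a \
    --network-endpoint-type=non-gcp-private-ip-port \
    --network=default

# Register an on-premises endpoint reachable over hybrid connectivity.
gcloud compute network-endpoint-groups update on-prem-neg \
    --zone=us-central1-a \
    --add-endpoint="ip=10.0.0.10,port=443"
```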

Related documentation:

Load balancing with Private Service Connect

You can use a global external HTTP(S) load balancer to access managed services that are published using Private Service Connect.
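This setup centers on a Private Service Connect NEG that targets the producer's published service attachment. A hedged sketch, where the region, NEG name, and service attachment URI are all placeholders:

```shell
# Sketch: a Private Service Connect NEG targeting a published service
# attachment. The region and service attachment URI are placeholders.
gcloud compute network-endpoint-groups create psc-neg \
    --region=us-central1 \
    --network-endpoint-type=private-service-connect \
    --psc-target-service=projects/PRODUCER_PROJECT/regions/us-central1/serviceAttachments/my-attachment
```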

For more information, see Using Private Service Connect to publish and consume managed services.