External HTTP(S) load balancers address many use cases. This page provides some high-level examples.
Three-tier web services
You can use external HTTP(S) Load Balancing to support traditional three-tier web services. The following example shows how you can use three types of Google Cloud load balancers to scale three tiers. At each tier, the load balancer type depends on your traffic type:
Web tier: Traffic enters from the internet and is load balanced by using an external HTTP(S) load balancer.
Application tier: The application tier is scaled by using a regional internal HTTP(S) load balancer.
Database tier: The database tier is scaled by using an internal TCP/UDP load balancer.
The diagram shows how traffic moves through the tiers:
- An external HTTP(S) load balancer (the subject of this overview) distributes traffic from the internet to a set of web frontend instance groups in various regions.
- These web frontends send the HTTP(S) traffic to a set of regional, internal HTTP(S) load balancers. For the external HTTP(S) load balancer to forward traffic to an internal HTTP(S) load balancer, the external HTTP(S) load balancer must have backend instances with web server software (such as NGINX or Apache) configured to forward the traffic to the frontend of the internal HTTP(S) load balancer.
- The internal HTTP(S) load balancers distribute the traffic to middleware instance groups.
- These middleware instance groups send the traffic to internal TCP/UDP load balancers, which load balance the traffic to data storage clusters.
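As a sketch of the web tier only, the pieces of the external HTTP(S) load balancer might be wired together as follows. All names, zones, and the `EXTERNAL_MANAGED` scheme here are illustrative assumptions, not values from this page:

```shell
# Sketch: web tier of the three-tier example (all names illustrative).
# Health check used by the web-tier backend service.
gcloud compute health-checks create http web-hc --port=80

# Global backend service for the web frontend instance group.
gcloud compute backend-services create web-backend \
    --protocol=HTTP --health-checks=web-hc \
    --load-balancing-scheme=EXTERNAL_MANAGED --global

# Attach a web frontend instance group (assumed to already exist).
gcloud compute backend-services add-backend web-backend \
    --instance-group=web-ig --instance-group-zone=us-central1-a --global

# URL map, target proxy, and global forwarding rule form the frontend.
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-fr \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --target-http-proxy=web-proxy --ports=80 --global
```

The application and database tiers would be built analogously with a regional internal HTTP(S) load balancer and an internal TCP/UDP load balancer, respectively.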
Multi-region load balancing
When you configure HTTP(S) Load Balancing in Premium Tier, it uses a global external IP address and routes each request to the backend instance group or NEG closest to the user. For example, if you set up instance groups in North America, Europe, and Asia, and attach them to a load balancer's backend service, user requests around the world are automatically sent to the VMs closest to the users, assuming the VMs pass health checks and have enough capacity (defined by the balancing mode). If the closest VMs are all unhealthy, or if the closest instance group is at capacity and another instance group is not at capacity, the load balancer automatically sends requests to the next closest region with capacity.
In Premium Tier, the external HTTP(S) load balancer provides multi-region load balancing, using multiple backend services, each with backend instance groups or NEGs in multiple regions.
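Assuming instance groups already exist in each region (all names here are hypothetical), attaching them to one global backend service might look like this:

```shell
# Sketch: attach instance groups from three regions to one global
# backend service; Premium Tier routing then sends each request to the
# closest healthy backend with capacity. All names are illustrative.
for entry in "ig-us:us-central1-a" "ig-eu:europe-west1-b" "ig-asia:asia-east1-a"; do
  ig="${entry%%:*}"; zone="${entry##*:}"
  gcloud compute backend-services add-backend web-backend \
      --instance-group="$ig" \
      --instance-group-zone="$zone" \
      --balancing-mode=UTILIZATION --max-utilization=0.8 \
      --global
done
```

The balancing mode (here `UTILIZATION` with an assumed 0.8 target) is what defines "capacity" for the failover behavior described above.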
Workloads with jurisdictional compliance
Some workloads with regulatory or compliance requirements require that network configurations and traffic termination reside in a specific region. For these workloads, a regional external HTTP(S) load balancer is often the preferred option to provide the jurisdictional controls these workloads require.
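As an illustrative sketch, a regional deployment keeps the health check, backend service, and (not shown) the URL map, proxy, and forwarding rule all inside one region; the region, network, and names below are assumptions. Regional Envoy-based load balancers also require a proxy-only subnet in that region:

```shell
# Sketch: regional pieces of a regional external HTTP(S) load balancer,
# keeping configuration and traffic termination in europe-west3
# (illustrative choice). Names and ranges are illustrative.
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY --role=ACTIVE \
    --network=lb-network --region=europe-west3 --range=10.129.0.0/23

gcloud compute health-checks create http web-hc \
    --region=europe-west3 --port=80

gcloud compute backend-services create regional-backend \
    --protocol=HTTP \
    --health-checks=web-hc --health-checks-region=europe-west3 \
    --load-balancing-scheme=EXTERNAL_MANAGED --region=europe-west3
```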
Advanced traffic management
With global external HTTP(S) load balancers and regional external HTTP(S) load balancers, you can add advanced traffic management capabilities that give you fine-grained control over how traffic is handled. These capabilities help you to meet your availability and performance objectives. One of the benefits of using HTTP(S) Load Balancing for these use cases is that you can update how traffic is managed without needing to modify your application code.
For more details, see the following:
- Traffic management overview for the global external HTTP(S) load balancer.
- Traffic management overview for the regional external HTTP(S) load balancer.
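For example, a weight-based split between a stable and a canary backend service can be expressed entirely in the URL map, with no application changes. The sketch below is illustrative; `PROJECT_ID`, the URL map name, and both backend service names are assumptions:

```shell
# Sketch: route 95% of traffic to a stable backend service and 5% to a
# canary, purely in the URL map. PROJECT_ID and all names are
# illustrative assumptions.
gcloud compute url-maps import web-map --global <<'EOF'
name: web-map
defaultService: projects/PROJECT_ID/global/backendServices/web-backend
hostRules:
- hosts: ['*']
  pathMatcher: split
pathMatchers:
- name: split
  defaultService: projects/PROJECT_ID/global/backendServices/web-backend
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
    routeAction:
      weightedBackendServices:
      - backendService: projects/PROJECT_ID/global/backendServices/web-backend
        weight: 95
      - backendService: projects/PROJECT_ID/global/backendServices/web-backend-canary
        weight: 5
EOF
```

Shifting more traffic to the canary is then a matter of re-importing the URL map with new weights.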
Load balancing with request routing
HTTP(S) Load Balancing supports request routing by using URL maps to select a backend service based on the requested host name, request path, or both. For example, you can use a set of instance groups or NEGs to handle your video content and another set to handle everything else.
You can also use external HTTP(S) load balancers with Cloud Storage buckets. After you have your load balancer set up, you can add Cloud Storage buckets to it.
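Assuming an existing URL map and backend services (the names `web-map`, `web-backend`, `video-backend`, and the bucket name are all illustrative), host- and path-based routing plus a Cloud Storage backend might be sketched as:

```shell
# Sketch: path-based routing on an existing URL map (names illustrative).
# /video/* goes to a dedicated backend service, /static/* to a
# Cloud Storage backend bucket; everything else uses the default service.
gcloud compute backend-buckets create static-bucket-backend \
    --gcs-bucket-name=example-static-assets

gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=content-matcher \
    --default-service=web-backend \
    --path-rules='/video/*=video-backend' \
    --backend-bucket-path-rules='/static/*=static-bucket-backend'
```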
For more information, see URL map concepts.
Load balancing for GKE applications
There are two ways to deploy external HTTP(S) load balancers for GKE clusters:
- GKE Gateway controller. Supported by the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic). For setup instructions, see Deploying gateways. This capability is in Preview.
- GKE Ingress controller. Supported by the global external HTTP(S) load balancer, regional external HTTP(S) load balancer, and the global external HTTP(S) load balancer (classic). For setup instructions, see Configuring Ingress for External HTTP(S) Load Balancing.
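As an illustrative sketch of the Ingress path, the manifest below asks the GKE Ingress controller to provision an external HTTP(S) load balancer for a Service; the Service name and port are assumptions:

```shell
# Sketch: a GKE Ingress that the GKE Ingress controller turns into an
# external HTTP(S) load balancer. The "gce" class selects an external
# (rather than internal) load balancer; names are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: web-service
      port:
        number: 80
EOF
```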
Load balancing for Cloud Run, Cloud Functions, and App Engine applications
You can use a global external HTTP(S) load balancer as the frontend for your Cloud Run, Cloud Functions, and App Engine applications. To set this up, you use a serverless NEG for the load balancer's backend.
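For a Cloud Run service, the setup might be sketched as follows (the service name, NEG name, and region are illustrative assumptions; serverless NEG backends do not use health checks):

```shell
# Sketch: put a Cloud Run service behind a global external HTTP(S)
# load balancer via a serverless NEG (all names illustrative).
gcloud compute network-endpoint-groups create run-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-run-service

gcloud compute backend-services create run-backend \
    --load-balancing-scheme=EXTERNAL_MANAGED --global

gcloud compute backend-services add-backend run-backend \
    --network-endpoint-group=run-neg \
    --network-endpoint-group-region=us-central1 \
    --global
```

The same pattern applies to Cloud Functions and App Engine, using the corresponding serverless NEG flags.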
This diagram shows how a serverless NEG fits into the HTTP(S) Load Balancing model.
Related documentation:
- Serverless NEGs overview
- Setting up an external HTTP(S) load balancer with Cloud Run, Cloud Functions, or App Engine
Load balancing with hybrid connectivity
Cloud Load Balancing supports load-balancing traffic to endpoints that extend beyond Google Cloud, such as on-premises data centers and other public clouds that you reach over hybrid connectivity.
The following diagram demonstrates a hybrid deployment with a global external HTTP(S) load balancer.
Related documentation:
- Hybrid connectivity NEGs overview
- Setting up an external HTTP(S) load balancer with on-premises or other cloud backends
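The hybrid setup above can be sketched with a hybrid connectivity NEG; the IP address, port, zone, and names below are illustrative assumptions, and the endpoint must be reachable over Cloud VPN or Cloud Interconnect:

```shell
# Sketch: register an on-premises endpoint in a hybrid connectivity NEG,
# which can then be added as a load balancer backend. All values are
# illustrative.
gcloud compute network-endpoint-groups create on-prem-neg \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=us-central1-a \
    --network=lb-network

gcloud compute network-endpoint-groups update on-prem-neg \
    --zone=us-central1-a \
    --add-endpoint="ip=10.1.2.3,port=443"
```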
Load balancing with Private Service Connect
You can use a global external HTTP(S) load balancer to access managed services that are published using Private Service Connect.
For more information, see Using Private Service Connect to publish and consume managed services.
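As a sketch, the consumer side points a Private Service Connect NEG at the producer's service attachment; the project, region, and attachment names below are illustrative assumptions:

```shell
# Sketch: a Private Service Connect NEG targeting a producer's service
# attachment, usable as a backend for a global external HTTP(S) load
# balancer. All names are illustrative.
gcloud compute network-endpoint-groups create psc-neg \
    --region=us-central1 \
    --network-endpoint-type=private-service-connect \
    --psc-target-service=projects/PRODUCER_PROJECT/regions/us-central1/serviceAttachments/published-service
```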