External HTTP(S) Load Balancing use cases

The external HTTP(S) load balancers address many use cases. This page provides some high-level examples.

Three-tier web services

You can use external HTTP(S) Load Balancing to support traditional three-tier web services. The following example shows how you can use three types of Google Cloud load balancers to scale three tiers. At each tier, the load balancer type depends on your traffic type: an external HTTP(S) load balancer for the web tier, internal HTTP(S) load balancers for the middleware tier, and internal TCP/UDP load balancers for the data storage tier.

The diagram shows how traffic moves through the tiers:

  1. An external HTTP(S) load balancer (the subject of this overview) distributes traffic from the internet to a set of web frontend instance groups in various regions.
  2. These web frontends send the HTTP(S) traffic to a set of regional, internal HTTP(S) load balancers. For the external HTTP(S) load balancer to forward traffic to an internal HTTP(S) load balancer, the external HTTP(S) load balancer must have backend instances running web server software (such as NGINX or Apache) that is configured to forward the traffic to the frontend of the internal HTTP(S) load balancer, as sketched after the diagram below.
  3. The internal HTTP(S) load balancers distribute the traffic to middleware instance groups.
  4. These middleware instance groups send the traffic to internal TCP/UDP load balancers, which load balance the traffic to data storage clusters.
Diagram: Layer 7-based routing for internal tiers in a multi-tier app
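
The forwarding step in item 2 can be sketched with a few shell commands. This is a minimal, illustrative sketch rather than a reference configuration: the forwarding rule name, region, and NGINX configuration path are assumptions, and the proxy configuration would run on each web frontend instance.

    # Look up the frontend IP address of the regional internal HTTP(S) load
    # balancer (the forwarding rule name and region are illustrative).
    ILB_IP=$(gcloud compute forwarding-rules describe middleware-ilb-fr \
        --region=us-central1 --format='value(IPAddress)')

    # On each web frontend instance, configure the web server (NGINX in this
    # sketch) to proxy incoming requests to that internal frontend.
    cat <<EOF | sudo tee /etc/nginx/conf.d/middleware-proxy.conf
    server {
        listen 80;
        location / {
            proxy_pass http://${ILB_IP};
        }
    }
    EOF
    sudo systemctl reload nginx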

Multi-region load balancing

When you configure an external HTTP(S) load balancer in Premium Tier, it uses a global external IP address and routes requests to the backend instance group or NEG closest to the user. For example, if you set up instance groups in North America, Europe, and Asia, and attach them to a load balancer's backend service, user requests around the world are automatically sent to the VMs closest to the users, assuming the VMs pass health checks and have enough capacity (defined by the balancing mode). If the closest VMs are all unhealthy, or if the closest instance group is at capacity and another instance group is not at capacity, the load balancer automatically sends requests to the next closest region with capacity.
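
As a hedged sketch of such a deployment, the following gcloud commands attach instance groups from three regions to a single global backend service. The instance group names, zones, health check name, and rate-based balancing mode are illustrative assumptions.

    # Create a global backend service (assumes a health check named
    # http-basic-check already exists).
    gcloud compute backend-services create web-backend-service \
        --global --protocol=HTTP --port-name=http \
        --health-checks=http-basic-check

    # Attach one instance group per region; the balancing mode defines each
    # backend's capacity (here, 100 requests per second per instance).
    gcloud compute backend-services add-backend web-backend-service --global \
        --instance-group=ig-us --instance-group-zone=us-central1-b \
        --balancing-mode=RATE --max-rate-per-instance=100

    gcloud compute backend-services add-backend web-backend-service --global \
        --instance-group=ig-eu --instance-group-zone=europe-west1-b \
        --balancing-mode=RATE --max-rate-per-instance=100

    gcloud compute backend-services add-backend web-backend-service --global \
        --instance-group=ig-asia --instance-group-zone=asia-east1-b \
        --balancing-mode=RATE --max-rate-per-instance=100

With this configuration, a request from a nearby user is served by the closest of the three groups while it is healthy and under its configured rate, and spills over to the next closest region otherwise.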

Load balancing with request routing

HTTP(S) Load Balancing supports request routing by using URL maps to select a backend service based on the requested host name, request path, or both. For example, you can use a set of instance groups or NEGs to handle your video content and another set to handle everything else.
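
For example, the following gcloud commands sketch a URL map that sends requests under /video/ to a separate backend service and everything else to a default backend service; the map and service names are placeholders.

    # Create a URL map whose default backend handles everything else.
    gcloud compute url-maps create web-map \
        --default-service=web-backend-service

    # Add a path matcher that routes /video/* requests to the video backends.
    gcloud compute url-maps add-path-matcher web-map \
        --path-matcher-name=video-matcher \
        --default-service=web-backend-service \
        --path-rules='/video/*=video-backend-service'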

You can also use HTTP(S) Load Balancing with Cloud Storage buckets. After you have your load balancer set up, you can add Cloud Storage buckets to it.
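
Continuing the sketch above, a bucket backend could be added along the following lines, assuming an existing Cloud Storage bucket named my-static-assets; the host name used for the new host rule is also an assumption.

    # Wrap the Cloud Storage bucket in a backend bucket resource.
    gcloud compute backend-buckets create static-assets-bucket \
        --gcs-bucket-name=my-static-assets

    # Route requests for the static host to the backend bucket.
    gcloud compute url-maps add-path-matcher web-map \
        --path-matcher-name=static-matcher \
        --new-hosts=static.example.com \
        --default-backend-bucket=static-assets-bucket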

For more information, see URL map concepts.

Load balancing for GKE applications

If you want to expose your applications in GKE to the internet, we recommend that you use the built-in GKE Ingress controller, which deploys Google Cloud load balancers on behalf of GKE users. This is the same as the standalone load balancing architecture, except that its lifecycle is fully automated and controlled by GKE.
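
As an illustrative sketch, the manifest below (applied with kubectl) asks the built-in GKE Ingress controller to provision an external HTTP(S) load balancer. It assumes the cluster already runs a NodePort Service named web on port 8080; the Ingress name is a placeholder.

    # With no ingress class specified, GKE's built-in controller handles the
    # Ingress and creates an external HTTP(S) load balancer for it.
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      defaultBackend:
        service:
          name: web
          port:
            number: 8080
    EOF

    # The load balancer's external IP address appears in the Ingress status.
    kubectl get ingress web-ingress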

Related GKE documentation: