External HTTP(S) Load Balancing overview

This document introduces the concepts that you need to understand to configure Google Cloud external HTTP(S) Load Balancing.

For information about how the Google Cloud load balancers differ from each other, see Choosing a load balancer.

Use cases

The external HTTP(S) load balancers address many use cases. This section provides some high-level examples.

Load balancing using multiple backend types

One common use case is load balancing traffic among services. In the following example, external IPv4 and IPv6 clients can request video, API, and image content by using the same base URL, with the paths /api, /video, and /images.

The external HTTP(S) load balancer's URL map specifies that:

  • Requests to path /api go to a backend service with a VM instance group or a zonal NEG backend.
  • Requests to path /images go to a backend bucket with a Cloud Storage backend.
  • Requests to path /video go to a backend service that points to an internet NEG containing an external endpoint located on-premises, outside of Google Cloud.

When a client sends a request to the load balancer's external IPv4 or IPv6 address, the load balancer evaluates the request according to the URL map and sends the request to the correct service.
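
For example, the URL map for this use case might be sketched with gcloud commands like the following. This is a minimal sketch; the names example-map, api-service, video-service, images-bucket, and example.com are hypothetical placeholders.

  # Create the URL map with a default backend service.
  gcloud compute url-maps create example-map \
      --default-service=api-service

  # Route /api to a backend service, /video to another backend service,
  # and /images to a backend bucket.
  gcloud compute url-maps add-path-matcher example-map \
      --path-matcher-name=content-matcher \
      --new-hosts=example.com \
      --default-service=api-service \
      --backend-service-path-rules='/api/*=api-service,/video/*=video-service' \
      --backend-bucket-path-rules='/images/*=images-bucket'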

The following diagram illustrates this use case.

Load balancing diagram with a custom origin

On each backend service, you can optionally enable Cloud CDN and Google Cloud Armor. If you are using Google Cloud Armor with Cloud CDN, security policies are enforced only for requests for dynamic content, cache misses, or other requests that are destined for the CDN origin server. Cache hits are served even if the downstream Google Cloud Armor security policy would prevent that request from reaching the CDN origin server.

On backend buckets, Cloud CDN is supported, but not Google Cloud Armor.

Three-tier web services

You can use external HTTP(S) Load Balancing to support traditional three-tier web services. The following example shows how you can use three types of Google Cloud load balancers to scale three tiers; the load balancer type at each tier depends on your traffic type.

The diagram shows how traffic moves through the tiers:

  1. An external HTTP(S) load balancer (the subject of this overview) distributes traffic from the internet to a set of web frontend instance groups in various regions.
  2. These frontends send the HTTP(S) traffic to a set of regional, internal HTTP(S) load balancers.
  3. The internal HTTP(S) load balancers distribute the traffic to middleware instance groups.
  4. These middleware instance groups send the traffic to internal TCP/UDP load balancers, which load balance the traffic to data storage clusters.
Layer 7-based routing for internal tiers in a multi-tier app

Cross-region load balancing

Representation of cross-region load balancing

When you configure an external HTTP(S) load balancer in Premium Tier, it uses a global external IP address and can intelligently route requests from users to the closest backend instance group or NEG, based on proximity. For example, if you set up instance groups in North America, Europe, and Asia, and attach them to a load balancer's backend service, user requests around the world are automatically sent to the VMs closest to the users, assuming the VMs pass health checks and have enough capacity (defined by the balancing mode). If the closest VMs are all unhealthy, or if the closest instance group is at capacity and another instance group is not at capacity, the load balancer automatically sends requests to the next closest region with capacity.
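
As a sketch of this setup, you could attach instance groups from several regions to a single global backend service. The resource names below (example-web-service, ig-us, ig-eu) are hypothetical.

  # Attach a North American instance group to the global backend service.
  gcloud compute backend-services add-backend example-web-service --global \
      --instance-group=ig-us --instance-group-zone=us-central1-a

  # Attach a European instance group to the same backend service.
  gcloud compute backend-services add-backend example-web-service --global \
      --instance-group=ig-eu --instance-group-zone=europe-west1-b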


Content-based load balancing

Representation of content-based load balancing

HTTP(S) Load Balancing supports content-based load balancing using URL maps to select a backend service based on the requested host name, request path, or both. For example, you can use a set of instance groups or NEGs to handle your video content and another set to handle everything else.

You can also use HTTP(S) Load Balancing with Cloud Storage buckets. After you have your load balancer set up, you can add Cloud Storage buckets to it.


Creating a combined load balancer

Representation of content-based and cross-regional load balancing

You can configure an external HTTP(S) load balancer in Premium Tier to provide both content-based and cross-region load balancing, using multiple backend services, each with backend instance groups or NEGs in multiple regions. You can combine and extend the use cases to configure an external HTTP(S) load balancer that meets your needs.


Example configuration

If you want to jump right in and build a working load balancer for testing, see Setting up a simple external HTTP load balancer or Setting up a simple external HTTPS load balancer.

For a more complex example that uses content-based and cross-region load balancing, see Creating an HTTPS load balancer.

Architecture and resources

The following diagram shows the Google Cloud resources required for an external HTTP(S) load balancer.

HTTP(S) Load Balancing components

The following resources define an external HTTP(S) load balancer:

  • An external forwarding rule specifies an external IP address, port, and global target HTTP(S) proxy. Clients use the IP address and port to connect to the load balancer.

  • A global target HTTP(S) proxy receives a request from the client. The HTTP(S) proxy evaluates the request by using the URL map to make traffic routing decisions. The proxy can also authenticate communications by using SSL certificates.

  • If you are using HTTPS load balancing, the target HTTPS proxy uses global SSL certificates to prove its identity to clients. A target HTTPS proxy supports up to the documented maximum number of SSL certificates.

  • The HTTP(S) proxy uses a global URL map to make a routing determination based on HTTP attributes (such as the request path, cookies, or headers). Based on the routing decision, the proxy forwards client requests to specific backend services or backend buckets. The URL map can specify additional actions, such as sending redirects to clients.

  • A backend service distributes requests to healthy backends: instance groups containing Compute Engine VMs or NEGs containing GKE containers. A backend bucket distributes requests to a Cloud Storage bucket.

  • One or more backends must be connected to the backend service or backend bucket. Backends can be instance groups, NEGs, or buckets in any of the following configurations:

    • Managed instance groups (zonal or regional)
    • Unmanaged instance groups (zonal)
    • Network endpoint groups (zonal)
    • Network endpoint groups (internet)
    • Cloud Storage buckets

    You cannot have instance groups and NEGs on the same backend service.

  • A global health check periodically monitors the readiness of your backends. This reduces the risk that requests might be sent to backends that can't service the request.

  • A firewall rule that allows your backends to accept health check probes.

Source IP addresses

The source IP address for packets, as seen by each backend virtual machine (VM) instance or container, is an IP address from one of these ranges:

  • 35.191.0.0/16
  • 130.211.0.0/22

The source IP ranges for actual load-balanced traffic are the same as the health check probe IP ranges.

The source IP address for traffic, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two HTTP, SSL, or TCP sessions:

  • Session 1, from original client to the load balancer (GFE):

    • Source IP address: the IP address of the original client (or the external IP address if the client is behind NAT).
    • Destination IP address: your load balancer's IP address.
  • Session 2, from the load balancer (GFE) to the backend VM or container:

    • Source IP address: an IP address in one of these ranges: 35.191.0.0/16 or 130.211.0.0/22.

      You cannot predict the actual source address.

    • Destination IP address: the internal IP address of the backend VM or container in the Virtual Private Cloud (VPC) network.

Client communications with the load balancer

  • Clients can communicate with the load balancer by using the HTTP 1.1 or HTTP/2 protocol.
  • When HTTPS is used, modern clients default to HTTP/2. This is controlled on the client, not on the HTTPS load balancer.
  • You cannot disable HTTP/2 by making a configuration change on the load balancer. However, you can configure some clients to use HTTP 1.1 instead of HTTP/2. For example, with curl, use the --http1.1 parameter.
  • HTTPS load balancers do not support client certificate-based authentication, also known as mutual TLS authentication.

Open ports

The external HTTP(S) load balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. The reverse proxy functionality is provided by the Google Front Ends (GFEs).

The firewall rules that you set block traffic from the GFEs to the backends, but do not block incoming traffic to the GFEs.

The external HTTP(S) load balancers have a number of open ports to support other Google services that run on the same architecture. If you run a security or port scan against the external IP address of a Google Cloud external HTTP(S) load balancer, additional ports appear to be open.

These additional open ports do not affect external HTTP(S) load balancers. External forwarding rules, which are used in the definition of an external HTTP(S) load balancer, can only reference TCP ports 80, 8080, and 443. Traffic with a different TCP destination port is not forwarded to the load balancer's backend.

Components

The following are components of external HTTP(S) load balancers.

Forwarding rules and addresses

Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy, URL map, and one or more backend services.

Each forwarding rule provides a single IP address that can be used in DNS records for your application. No DNS-based load balancing is required. You can either specify the IP address to be used or let Cloud Load Balancing assign one for you.

  • The forwarding rule for an HTTP load balancer can only reference TCP ports 80 and 8080.
  • The forwarding rule for an HTTPS load balancer can only reference TCP port 443.

The type of forwarding rule required by external HTTP(S) load balancers depends on which Network Service Tier the load balancer is in.

  • The external HTTP(S) load balancers in the Premium Tier use global external forwarding rules.
  • The external HTTP(S) load balancers in the Standard Tier use regional external forwarding rules.
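
For example, a global external forwarding rule for an HTTP load balancer in Premium Tier might be created as follows; example-ip and example-proxy are hypothetical names.

  # Reserve a global external IP address (Premium Tier).
  gcloud compute addresses create example-ip --global

  # Create a global forwarding rule that sends port 80 traffic to a target HTTP proxy.
  gcloud compute forwarding-rules create example-http-rule \
      --global \
      --address=example-ip \
      --target-http-proxy=example-proxy \
      --ports=80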

Target proxies

Target proxies terminate HTTP(S) connections from clients. One or more forwarding rules direct traffic to the target proxy, and the target proxy consults the URL map to determine how to route traffic to backends.

The proxies set HTTP request/response headers as follows:

  • Via: 1.1 google (requests and responses)
  • X-Forwarded-Proto: [http | https] (requests only)
  • X-Forwarded-For: <unverified IP(s)>, <immediate client IP>, <global forwarding rule external IP>, <proxies running in Google Cloud> (requests only)

    The X-Forwarded-For (XFF) header contains a comma-separated list of IP addresses representing proxies through which the request has passed. Each proxy can append the IP address of its client to the list. Because of this, the number of IP addresses in the XFF header can vary. A Google Cloud external HTTP(S) load balancer adds two IP addresses to the header: the IP address of the requesting client and the external IP address of the load balancer's forwarding rule, in that order.

    Therefore, the IP address that immediately precedes the Google Cloud load balancer's IP address is the IP address of the system that contacts the load balancer. The system might be a client, or it might be another proxy server, outside Google Cloud, that forwards requests on behalf of a client.

    When a proxy server outside Google Cloud contacts the Google Cloud external HTTP(S) load balancer on behalf of a client, the load balancer might not receive the client IP address of the system that contacts that outside proxy. The outside proxy might not append the IP address of its client to the XFF header. If all outside proxies append a client IP address to the XFF header, the first IP address in the list is the IP address of the original client.

    If the backend VMs of an external HTTP(S) load balancer serve as internal proxies, those might add more client IP addresses to the XFF header. In this situation, the IP address of the external HTTP(S) load balancer's forwarding rule might not be the last IP address in the list.

  • X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)

    Contains parameters for Cloud Trace.

You can create custom request headers if the default headers do not meet your needs. For more information about this feature, see Creating user-defined request headers.
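
As a sketch, a user-defined request header can be added to a backend service with a command like the following. The name example-web-service is hypothetical, and {client_region} and {client_city} are examples of the variables described in Creating user-defined request headers.

  # Add a custom request header that passes the client's location to the backends.
  gcloud compute backend-services update example-web-service \
      --global \
      --custom-request-header='X-Client-Geo: {client_region},{client_city}'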

Do not rely on the proxy to preserve the case of request or response header names. For example, a Server: Apache/1.0 response header may appear at the client as server: Apache/1.0.

URL maps

URL maps define matching patterns for URL-based routing of requests to the appropriate backend services. A default service is defined to handle any requests that do not match a specified host rule or path matching rule. In some situations, such as the cross-region load balancing example, you might not define any URL rules and rely only on the default service. For content-based routing of traffic, the URL map allows you to divide your traffic by examining the URL components to send requests to different sets of backends.

SSL certificates

If you are using HTTPS Load Balancing, you must install one or more SSL certificates on the target HTTPS proxy.

These certificates are used by target HTTPS proxies to secure communications between a Google Front End (GFE) and the client. These can be self-managed or Google-managed SSL certificates.
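
For example, you might create a Google-managed certificate and attach it to a target HTTPS proxy as follows; example-cert, www.example.com, example-map, and example-https-proxy are hypothetical.

  # Create a Google-managed SSL certificate for a domain.
  gcloud compute ssl-certificates create example-cert \
      --domains=www.example.com --global

  # Attach the certificate to a target HTTPS proxy.
  gcloud compute target-https-proxies create example-https-proxy \
      --url-map=example-map \
      --ssl-certificates=example-cert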

For information about SSL certificate limits and quotas, see SSL certificates on the load balancing quotas page.

For the best security, use end-to-end encryption for your external HTTPS load balancer deployment. For more information, see Encryption from the load balancer to the backends.

For general information about how Google encrypts user traffic, see the Encryption in Transit in Google Cloud white paper.

SSL policies

SSL policies give you the ability to control the features of SSL that your HTTPS load balancer negotiates with HTTPS clients.

By default, HTTPS Load Balancing uses a set of SSL features that provides good security and wide compatibility. Some applications require more control over which SSL versions and ciphers are used for their HTTPS or SSL connections. You can define SSL policies that control the features of SSL that your load balancer negotiates and associate an SSL policy with your target HTTPS proxy.
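
A minimal sketch of defining and attaching an SSL policy, using hypothetical names:

  # Create an SSL policy that requires TLS 1.2 or later with a modern cipher profile.
  gcloud compute ssl-policies create example-ssl-policy \
      --profile=MODERN --min-tls-version=1.2

  # Associate the policy with an existing target HTTPS proxy.
  gcloud compute target-https-proxies update example-https-proxy \
      --ssl-policy=example-ssl-policy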

Geographic control over where TLS is terminated

The HTTPS load balancer terminates TLS in locations that are distributed globally, so as to minimize latency between clients and the load balancer. If you require geographic control over where TLS is terminated, you should use Google Cloud Network Load Balancing instead, and terminate TLS on backends that are located in regions appropriate to your needs.

Backend services

Backend services provide configuration information to the load balancer. An external HTTP(S) load balancer must have at least one backend service and can have multiple backend services.

Load balancers use the information in a backend service to direct incoming traffic to one or more attached backends.

The backends of a backend service can be either instance groups or network endpoint groups (NEGs), but not a combination of both. When you add a backend instance group or NEG, you specify a balancing mode, which defines a method for distributing requests and a target capacity. For more information, see Load distribution algorithm.
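
For example, a global backend service might be created like this (hypothetical names; the balancing mode and target capacity are set when backends are added, as shown in Load distribution algorithm):

  # Create a global backend service that uses an HTTP health check.
  gcloud compute backend-services create example-web-service \
      --global \
      --protocol=HTTP \
      --port-name=http \
      --health-checks=example-health-check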

HTTP(S) Load Balancing supports Cloud Load Balancing Autoscaler, which allows users to perform autoscaling on the instance groups in a backend service. For more information, see Scaling based on HTTP(S) Load Balancing serving capacity.

You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more, see Enabling connection draining.

Changes to a backend service associated with an external HTTP(S) load balancer are not instantaneous. It can take several minutes for changes to propagate throughout the network.

Behavior of the load balancer in different Network Service Tiers

HTTP(S) Load Balancing is a global service when the Premium Network Service Tier is used. You may have more than one backend service in a region, and you may create backend services in more than one region, all serviced by the same global load balancer. Traffic is allocated to backend services as follows:

  1. When a user request comes in, the load balancing service determines the approximate origin of the request from the source IP address.
  2. The load balancing service knows the locations of the instances owned by the backend service, their overall capacity, and their overall current usage.
  3. If the closest instances to the user have available capacity, the request is forwarded to that closest set of instances.
  4. Incoming requests to the given region are distributed evenly across all available backend services and instances in that region. However, at very small loads, the distribution may appear to be uneven.
  5. If there are no healthy instances with available capacity in a given region, the load balancer instead sends the request to the next closest region with available capacity.

HTTP(S) Load Balancing is a regional service when the Standard Network Service Tier is used. Its backend instance groups or NEGs must all be located in the region used by the load balancer's external IP address and forwarding rule.

Health checks

Each backend service also specifies which health check is performed against each available instance. For the health check probes to function correctly, you must create a firewall rule that allows traffic from 130.211.0.0/22 and 35.191.0.0/16 to reach your instances.
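
As a sketch, the health check referenced by a backend service could be created like this; the name and request path are hypothetical.

  # Create an HTTP health check that probes each backend on port 80.
  gcloud compute health-checks create http example-health-check \
      --port=80 \
      --request-path=/healthz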

For more information about health checks, see Creating health checks.

Protocol to the backends

When you configure a backend service for the external HTTP(S) load balancer, you set the protocol that the backend service uses to communicate with the backends. You can choose HTTP, HTTPS, or HTTP/2. The load balancer uses only the protocol that you specify. The load balancer does not fall back to one of the other protocols if it is unable to negotiate a connection to the backend with the specified protocol.

If you use HTTP/2, you must use TLS. HTTP/2 without encryption is not supported.

Although it is not required, it is a best practice to use a health check whose protocol matches the protocol of the backend service. For example, an HTTP/2 health check most accurately tests HTTP/2 connectivity to backends.

Using gRPC with your Google Cloud applications

gRPC is an open-source framework for remote procedure calls. It is based on the HTTP/2 standard. Use cases for gRPC include the following:

  • Low latency, highly scalable, distributed systems
  • Developing mobile clients that communicate with a cloud server
  • Designing new protocols that must be accurate, efficient, and language independent
  • Layered design to enable extension, authentication, and logging

To use gRPC with your Google Cloud applications, you must proxy requests end-to-end over HTTP/2. To do this with an external HTTP(S) load balancer:

  1. Configure an HTTPS load balancer.
  2. Enable HTTP/2 as the protocol from the load balancer to the backends.
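
For example, assuming a backend service named example-grpc-service, step 2 might look like the following sketch:

  # Use HTTP/2 (with TLS) from the load balancer to the backends.
  gcloud compute backend-services update example-grpc-service \
      --global \
      --protocol=HTTP2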

The load balancer negotiates HTTP/2 with clients as part of the SSL handshake by using the ALPN TLS extension.

An external HTTP(S) load balancer that is configured to use HTTP/2 between the load balancer and the backend instances may still negotiate HTTPS with some clients or accept insecure HTTP requests. The load balancer transforms those HTTP or HTTPS requests to proxy them over HTTP/2 to the backend instances.

If you want to configure an external HTTP(S) load balancer by using HTTP/2 with Google Kubernetes Engine Ingress or by using gRPC and HTTP/2 with Ingress, see HTTP/2 for load balancing with Ingress.

For information about troubleshooting problems with HTTP/2, see Troubleshooting issues with HTTP/2 to the backends.

For information about HTTP/2 limitations, see HTTP/2 limitations.

Backend buckets

Backend buckets direct incoming traffic to Cloud Storage buckets.

As shown in the following diagram, you can have the load balancer send traffic with a path of /static to a storage bucket and all other requests to your other backends.

Distributing traffic to various backend types with HTTP(S) Load Balancing

For an example showing how to add a bucket to an existing load balancer, see Setting up a load balancer with backend buckets.

Firewall rules

The backend instances must allow connections from the load balancer GFE/health check ranges. This means that you must create a firewall rule that allows traffic from 130.211.0.0/22 and 35.191.0.0/16 to reach your backend instances or endpoints. These IP address ranges are used as sources for health check packets and for all load-balanced packets sent to your backends.

The ports you configure for this firewall rule must allow traffic to backend instances or endpoints:

  • You must allow the ports used by each forwarding rule
  • You must allow the ports used by each health check configured for each backend service

Firewall rules are implemented at the VM instance level, not on Google Front End (GFE) proxies. You cannot use Google Cloud firewall rules to prevent traffic from reaching the load balancer.

For more information about health check probes and why it's necessary to allow traffic from 130.211.0.0/22 and 35.191.0.0/16, see Probe IP ranges and firewall rules.
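
A minimal sketch of such a firewall rule, assuming backends tagged lb-backend that serve on ports 80 and 443:

  # Allow traffic from the GFE and health check source ranges to reach the backends.
  gcloud compute firewall-rules create example-allow-lb \
      --network=default \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=lb-backend \
      --allow=tcp:80,tcp:443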

Return path

Google Cloud uses special routes not defined in your VPC network for health checks. For more information, see Load balancer return paths.

Load distribution algorithm

External HTTP(S) Load Balancing supports two balancing modes for backends:

  • RATE for instance group backends or NEGs
  • UTILIZATION for instance group backends

As described in Backend services, the backends of a backend service can be either instance groups or NEGs, but not a combination of both. For instance group backends, you can use either the UTILIZATION or RATE balancing mode. For NEGs, you must use RATE.

When you use the RATE balancing mode, you must specify a target maximum number of requests (queries) per second (RPS, QPS). This target is used to define when an instance or endpoint is at capacity. The target maximum RPS/QPS can be exceeded if all backends are at or above capacity.
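
For example, an instance group backend could be added with the RATE balancing mode and a target of 100 requests per second per instance; the names here are hypothetical.

  # Add an instance group backend whose target capacity is 100 RPS per instance.
  gcloud compute backend-services add-backend example-web-service \
      --global \
      --instance-group=example-ig \
      --instance-group-zone=us-central1-a \
      --balancing-mode=RATE \
      --max-rate-per-instance=100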

When an external HTTP(S) load balancer is in Premium Tier, requests sent to the load balancer are delivered to backend instance groups or NEGs in the region closest to the user, if a backend in that region has available capacity. (Available capacity is configured by the load balancer's balancing mode.)

When an external HTTP(S) load balancer is in Standard Tier, its backend instance groups or NEGs must all be located in the region used by the load balancer's external IP address and forwarding rule.

After a region is selected:

  • An external HTTP(S) load balancer tries to balance requests as evenly as possible within the zones of a region, subject to session affinity. The load balancer behaves this way when you configure multiple NEGs or zonal instance groups in the same region, or when you use one or more regional managed instance groups.

  • Within a zone, an external HTTP(S) load balancer tries to balance requests by using a round-robin algorithm, subject to session affinity.

For specific examples of the load distribution algorithm, see How HTTP(S) Load Balancing works.

Session affinity

Session affinity provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has capacity, according to the configured balancing mode.

Google Cloud HTTP(S) Load Balancing offers three types of session affinity:

  • NONE. Session affinity is not set for the load balancer.
  • Client IP affinity sends requests from the same client IP address to the same backend.
  • Generated cookie affinity sets a client cookie when the first request is made, and then sends requests with that cookie to the same backend.

When you use session affinity, we recommend the RATE balancing mode rather than UTILIZATION. Session affinity works best if you set the balancing mode to requests per second (RPS).
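
As a sketch, generated cookie affinity can be enabled on an existing backend service like this (hypothetical name; the cookie TTL is in seconds):

  # Enable generated cookie affinity with a one-hour cookie lifetime.
  gcloud compute backend-services update example-web-service \
      --global \
      --session-affinity=GENERATED_COOKIE \
      --affinity-cookie-ttl=3600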

WebSocket proxy support

HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.

Backends that use the WebSocket protocol to communicate with clients can use the external HTTP(S) load balancer as a frontend for scale and availability. The load balancer does not need any additional configuration to proxy WebSocket connections.

The WebSocket protocol, which is defined in RFC 6455, provides a full-duplex communication channel between clients and servers. The channel is initiated from an HTTP(S) request.

When HTTP(S) Load Balancing recognizes a WebSocket Upgrade request from an HTTP(S) client and the request is followed by a successful Upgrade response from the backend instance, the load balancer proxies bidirectional traffic for the duration of the current connection. If the backend does not return a successful Upgrade response, the load balancer closes the connection.

The timeout for a WebSocket connection depends on the configurable response timeout of the load balancer, which is 30 seconds by default. This timeout is applied to WebSocket connections regardless of whether they are in use. For more information about the response timeout and how to configure it, see Timeouts and retries.

If you have configured either client IP or generated cookie session affinity for your external HTTP(S) load balancer, all WebSocket connections from a client are sent to the same backend instance, if the instance continues to pass health checks and has capacity.

The WebSocket protocol is supported with Ingress.

QUIC protocol support for HTTPS Load Balancing

HTTPS Load Balancing supports the QUIC protocol in connections between the load balancer and the clients. QUIC is a transport layer protocol that provides congestion control similar to TCP and the security equivalent of SSL/TLS for HTTP/2, with improved performance. QUIC allows faster client connection initiation, eliminates head-of-line blocking in multiplexed streams, and supports connection migration when a client's IP address changes.

QUIC affects connections between clients and the load balancer, not connections between the load balancer and its backends.

The target proxy's QUIC override setting lets you specify one of the following:

  • Negotiate QUIC for a load balancer when possible.
  • Always disable QUIC for a load balancer.

If you do not specify a value for the QUIC override setting, you allow Google to manage when QUIC is used. QUIC is enabled only when the --quic-override flag in the gcloud command-line tool or the quicOverride field in the REST API is set to ENABLE.

For information about enabling and disabling QUIC support, see Target proxies. You can enable or disable QUIC support in the frontend configuration section of the Google Cloud Console, by using the gcloud command-line tool, or by using the REST API.
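
For example, with the gcloud command-line tool, the QUIC override setting on a hypothetical target HTTPS proxy could be set as follows:

  # Negotiate QUIC with clients when possible (use DISABLE to always turn QUIC off).
  gcloud compute target-https-proxies update example-https-proxy \
      --quic-override=ENABLE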

How QUIC is negotiated

When you enable QUIC, the load balancer can advertise its QUIC capability to clients, allowing clients that support QUIC to attempt to establish QUIC connections with the HTTPS load balancer. Properly implemented clients always fall back to HTTPS or HTTP/2 when they cannot establish a QUIC connection. Because of this fallback, enabling or disabling QUIC in the load balancer does not disrupt the load balancer's ability to connect to clients.

When you have QUIC enabled in your HTTPS load balancer, some circumstances can cause your client to fall back to HTTPS or HTTP/2 instead of negotiating QUIC. These include the following:

  • When a client supports versions of QUIC that are not compatible with the QUIC versions supported by the HTTPS load balancer.
  • When the load balancer detects that UDP traffic is blocked or rate-limited in a way that would prevent QUIC from working.
  • If QUIC is temporarily disabled for HTTPS load balancers in response to bugs, vulnerabilities, or other concerns.

When a connection falls back to HTTPS or HTTP/2 because of these circumstances, we do not count this as a failure of the load balancer.

Ensure that the previously described behaviors are acceptable for your workloads before you enable QUIC.

TLS support

By default, an HTTPS target proxy accepts only TLS 1.0, 1.1, 1.2, and 1.3 when terminating client SSL requests. You can use SSL policies to change this default behavior and control how the load balancer negotiates SSL with clients.

When the load balancer uses HTTPS as a backend service protocol, it can negotiate TLS 1.0, 1.1, or 1.2 to the backend.

Timeouts and retries

HTTP(S) Load Balancing has two distinct types of timeouts:
  • A configurable HTTP response timeout, which represents the amount of time the load balancer waits for your backend to return a complete HTTP response. The default value for the response timeout is 30 seconds. Consider increasing this timeout under either of these circumstances:

    • You expect a backend to take longer to return HTTP responses.
    • The connection is upgraded to a WebSocket (HTTP(S) Load Balancing only).

    For WebSocket traffic sent through the load balancer, the backend service timeout is interpreted as the maximum amount of time that a WebSocket connection can remain open, whether idle or not. For more information, see Backend service settings.

  • A TCP session timeout, whose value is fixed at 10 minutes (600 seconds). This session timeout is sometimes called a keepalive or idle timeout, and its value is not configurable by modifying your backend service. You must configure the web server software used by your backends so that its keepalive timeout is longer than 600 seconds to prevent connections from being closed prematurely by the backend. This timeout does not apply to WebSockets.
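
As a sketch, the response timeout on a hypothetical backend service can be raised like this:

  # Increase the response timeout from the default 30 seconds to 10 minutes.
  gcloud compute backend-services update example-web-service \
      --global \
      --timeout=600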

This table illustrates changes necessary to modify keepalive timeouts for common web server software:

Web server software    Parameter            Default setting            Recommended setting
Apache                 KeepAliveTimeout     KeepAliveTimeout 5         KeepAliveTimeout 620
nginx                  keepalive_timeout    keepalive_timeout 75s;     keepalive_timeout 620s;

The load balancer retries failed GET requests in certain circumstances, such as when the response timeout is exhausted. It does not retry failed POST requests. Retries are limited to two attempts. Retried requests only generate one log entry for the final response.

For more information, see HTTP(S) Load Balancing logging and monitoring.

Illegal request and response handling

The external HTTP(S) load balancer blocks both client requests and backend responses from reaching the backend or the client, respectively, for a number of reasons. Some reasons are strictly for HTTP/1.1 compliance and others are to avoid unexpected data being passed to or from the backends. None of the checks can be disabled.

For HTTP/1.1 compliance, the load balancer blocks the request in any of the following cases:

  • It cannot parse the first line of the request.
  • A header is missing the : delimiter.
  • Headers or the first line contain invalid characters.
  • The content length is not a valid number, or there are multiple content length headers.
  • There are multiple transfer encoding keys, or there are unrecognized transfer encoding values.
  • There's a non-chunked body and no content length specified.
  • Body chunks are unparseable. This is the only case where some data reaches the backend. The load balancer closes the connections to the client and backend when it receives an unparseable chunk.

The load balancer blocks the request if any of the following are true:

  • The total size of request headers and the request URL exceeds the limit for the maximum request header size for external HTTP(S) Load Balancing.
  • The request method does not allow a body, but the request has one.
  • The request contains an Upgrade header, and the Upgrade header is not used to enable WebSocket connections.
  • The HTTP version is unknown.

The load balancer blocks the backend's response if any of the following are true:

  • The total size of response headers exceeds the limit for maximum response header size for external HTTP(S) Load Balancing.
  • The HTTP version is unknown.

Specifications and limitations

  • HTTP(S) Load Balancing supports the HTTP/1.1 100 Continue response.

HTTP/2 limitations

  • HTTP/2 between the load balancer and the instance can require significantly more TCP connections to the instance than HTTP(S). Connection pooling, an optimization that reduces the number of these connections with HTTP(S), is not currently available with HTTP/2.
  • HTTP/2 between the load balancer and the backend does not support:
    • Server push
    • WebSockets

Restriction on using Cloud CDN

  • You cannot enable Identity-Aware Proxy or Cloud CDN with the same backend service. If you try to do so, the configuration process fails.

What's next