This document introduces the concepts that you need to understand in order to configure an external HTTP(S) load balancer.
An external HTTP(S) load balancer is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. The external HTTP(S) load balancer distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and so on), as well as external backends connected over the internet or via hybrid connectivity. For details, see Use cases.
Modes of operation
You can configure an external HTTP(S) load balancer in the following modes:
- Global external HTTP(S) load balancer. This is a global load balancer that is implemented as a managed service on Google Front Ends (GFEs). It uses the open-source Envoy proxy to support advanced traffic management capabilities such as traffic mirroring, weight-based traffic splitting, request/response-based header transformations, and more.
- Global external HTTP(S) load balancer (classic). This is the classic external HTTP(S) load balancer that is global in Premium Tier but can be configured to be regional in Standard Tier. This load balancer is implemented on Google Front Ends (GFEs). GFEs are distributed globally and operate together using Google's global network and control plane.
- Regional external HTTP(S) load balancer. This is a regional load balancer that is implemented as a managed service on the open-source Envoy proxy. It includes advanced traffic management capabilities such as traffic mirroring, weight-based traffic splitting, request/response-based header transformations, and more.
Load balancer mode | Recommended use cases
---|---
Global external HTTP(S) load balancer | Use this load balancer for external HTTP(S) workloads with globally dispersed users or backend services in multiple regions.
Global external HTTP(S) load balancer (classic) | This load balancer is global in Premium Tier but can be configured to be effectively regional in Standard Tier. In the Premium Network Service Tier, this load balancer offers multi-region load balancing, directs traffic to the closest healthy backend that has capacity, and terminates HTTP(S) traffic as close as possible to your users. In the Standard Network Service Tier, load balancing is handled regionally.
Regional external HTTP(S) load balancer | This load balancer contains many of the features of the existing global external HTTP(S) load balancer (classic), along with advanced traffic management capabilities. Use this load balancer if you want to serve content from only one geolocation (for example, to meet compliance regulations) or if you want to use the Standard Network Service Tier.
Identifying the mode
Cloud console
- In the Google Cloud console, go to the Load balancing page.
- In the Load Balancers tab, the load balancer type, protocol, and region are displayed. If the region is blank, then the load balancer is global.
The following table summarizes how to identify the mode of the load balancer.
Load balancer mode | Load balancing type | Region | Network tier
---|---|---|---
Global external HTTP(S) load balancer | HTTP(S) | | PREMIUM
Global external HTTP(S) load balancer (classic) | HTTP(S) (Classic) | | STANDARD or PREMIUM
Regional external HTTP(S) load balancer | HTTP(S) | Specifies a region | STANDARD
gcloud
To determine the mode of a load balancer, run the following command:
gcloud compute forwarding-rules describe FORWARDING_RULE_NAME
In the command output, check the load balancing scheme, region, and network tier. The following table summarizes how to identify the mode of the load balancer.
Load balancer mode | Load balancing scheme | Forwarding rule | Network tier
---|---|---|---
Global external HTTP(S) load balancer | EXTERNAL_MANAGED | Global | PREMIUM
Global external HTTP(S) load balancer (classic) | EXTERNAL | Global | STANDARD or PREMIUM
Regional external HTTP(S) load balancer | EXTERNAL_MANAGED | Specifies a region | STANDARD
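For example, assuming a forwarding rule named my-forwarding-rule (a placeholder name), you can print only the fields that identify the mode:

gcloud compute forwarding-rules describe my-forwarding-rule \
    --global \
    --format="get(loadBalancingScheme,networkTier)"

For a regional forwarding rule, replace --global with --region=REGION.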
Architecture
The following resources are required for an external HTTP(S) load balancer deployment:
- For regional external HTTP(S) load balancers only, a proxy-only subnet is used to send connections from the load balancer to the backends.
- An external forwarding rule specifies an external IP address, port, and target HTTP(S) proxy. Clients use the IP address and port to connect to the load balancer.
- A target HTTP(S) proxy receives a request from the client. The HTTP(S) proxy evaluates the request by using the URL map to make traffic routing decisions. The proxy can also authenticate communications by using SSL certificates.
  - For HTTPS load balancing, the target HTTPS proxy uses SSL certificates to prove its identity to clients. A target HTTPS proxy supports up to the documented number of SSL certificates.
- The HTTP(S) proxy uses a URL map to make a routing determination based on HTTP attributes (such as the request path, cookies, or headers). Based on the routing decision, the proxy forwards client requests to specific backend services or backend buckets. The URL map can specify additional actions, such as sending redirects to clients.
- A backend service distributes requests to healthy backends. The global external HTTP(S) load balancers also support backend buckets.
  - One or more backends must be connected to the backend service or backend bucket.
- A health check periodically monitors the readiness of your backends. This reduces the risk that requests might be sent to backends that can't service the request.
- Firewall rules for your backends to accept health check probes. Regional external HTTP(S) load balancers require an additional firewall rule to allow traffic from the proxy-only subnet to reach the backends.
Global
This diagram shows the components of a global external HTTP(S) load balancer deployment. This architecture applies to both the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic) in Premium Tier.
Regional
This diagram shows the components of a regional external HTTP(S) load balancer deployment.
Proxy-only subnet
Proxy-only subnets are only required for regional external HTTP(S) load balancers.
The proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. You must create one proxy-only subnet in each region of a VPC network where you use regional external HTTP(S) load balancers. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional Envoy-based load balancers in the same region and VPC network share a pool of Envoy proxies from the same proxy-only subnet. Further:
- Proxy-only subnets are only used for Envoy proxies, not your backends.
- Backend VMs or endpoints of all regional external HTTP(S) load balancers in a region and VPC network receive connections from the proxy-only subnet.
- The IP address of the regional external HTTP(S) load balancer is not located in the proxy-only subnet. The load balancer's IP address is defined by its external managed forwarding rule, which is described below.
If you previously created a proxy-only subnet with --purpose=INTERNAL_HTTPS_LOAD_BALANCER, you need to migrate the subnet's purpose to REGIONAL_MANAGED_PROXY before you can create other Envoy-based load balancers in the same region of the VPC network.
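For illustration, the following sketch creates a proxy-only subnet with gcloud; the subnet name, network, region, and IP range are placeholders that you would replace with your own values:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-central1 \
    --network=my-vpc \
    --range=10.129.0.0/23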
Forwarding rules and addresses
Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy, URL map, and one or more backend services.
Each forwarding rule provides a single IP address that can be used in DNS records for your application. No DNS-based load balancing is required. You can either specify the IP address to be used or let Cloud Load Balancing assign one for you.
- The forwarding rule for an HTTP load balancer can only reference TCP ports 80 and 8080.
- The forwarding rule for an HTTPS load balancer can only reference TCP port 443.
The type of forwarding rule, IP address, and load balancing scheme used by external HTTP(S) load balancers depends on the mode of the load balancer and which Network Service Tier the load balancer is in.
Load balancer mode | Network Service Tier | Forwarding rule, IP address, and load balancing scheme | Routing from the internet to the load balancer frontend
---|---|---|---
Global external HTTP(S) load balancer | Premium Tier | Global external forwarding rule with a global external IP address; load balancing scheme: EXTERNAL_MANAGED | Requests routed to the GFE that is closest to the client on the internet.
Global external HTTP(S) load balancer (classic) | Premium Tier | Global external forwarding rule with a global external IP address; load balancing scheme: EXTERNAL | Requests routed to the GFE that is closest to the client on the internet.
Global external HTTP(S) load balancer (classic) | Standard Tier | Regional external forwarding rule with a regional external IP address; load balancing scheme: EXTERNAL | Requests routed to a GFE in the load balancer's region.
Regional external HTTP(S) load balancer | Standard Tier | Regional external forwarding rule with a regional external IP address; load balancing scheme: EXTERNAL_MANAGED | Requests routed to the Envoy proxies in the same region as the load balancer.
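For example, the following sketch creates the global external forwarding rule for a global external HTTP(S) load balancer; the rule, address, and proxy names are placeholders, and the EXTERNAL_MANAGED scheme matches the first row of the preceding table:

gcloud compute forwarding-rules create my-https-forwarding-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=PREMIUM \
    --address=my-static-ip \
    --target-https-proxy=my-https-proxy \
    --ports=443 \
    --global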
For the complete list of protocols supported by external HTTP(S) load balancer forwarding rules in each mode, see Load balancer features.
Target proxies
Target proxies terminate HTTP(S) connections from clients. One or more forwarding rules direct traffic to the target proxy, and the target proxy consults the URL map to determine how to route traffic to backends.
Do not rely on the proxy to preserve the case of request or response header names. For example, a Server: Apache/1.0 response header might appear at the client as server: Apache/1.0.
The following table specifies the type of target proxy required by external HTTP(S) load balancers in each mode.
Load balancer mode | Target proxy types | Custom headers supported
---|---|---
Global external HTTP(S) load balancer | Global HTTP, Global HTTPS | Configured on the backend service or backend bucket. Not supported with Cloud CDN.
Global external HTTP(S) load balancer (classic) | Global HTTP, Global HTTPS | Configured on the backend service or backend bucket.
Regional external HTTP(S) load balancer | Regional HTTP, Regional HTTPS |
In addition to headers added by the target proxy, the load balancer adjusts other HTTP headers in the following ways:
- For the global external HTTP(S) load balancer, both request and response headers are always converted to lowercase. For example, Host becomes host, and Keep-ALIVE becomes keep-alive. The only exception to this is when you use internet NEG backends with HTTP/1.1. For details about how HTTP/1.1 headers are processed with internet NEGs, see the Internet NEGs overview.
- For the global external HTTP(S) load balancer (classic), request and response headers are converted to lowercase except when you use HTTP/1.1. With HTTP/1.1, headers are proper-cased instead: the first letter of the header's key and every letter following a hyphen (-) is capitalized to preserve compatibility with HTTP/1.1 clients. For example, user-agent is changed to User-Agent, and content-encoding is changed to Content-Encoding.
- Some headers are coalesced. When there are multiple instances of the same header key (for example, Via), the load balancer combines their values into a single comma-separated list for a single header key. Only the headers whose values can be represented as a comma-separated list are coalesced. Other headers, such as Set-Cookie, are never coalesced.
Host header
When the load balancer makes the HTTP request, the load balancer preserves the Host header of the original request.
X-Forwarded-For header
The load balancer appends two IP addresses, separated by a single comma, to the X-Forwarded-For header in the following order:

- The IP address of the client that connects to the load balancer
- The IP address of the load balancer's forwarding rule

If there is no X-Forwarded-For header on the incoming request, these two IP addresses are the entire header value:

X-Forwarded-For: <client-ip>,<load-balancer-ip>
If the request includes an X-Forwarded-For header, the load balancer preserves the supplied value before the <client-ip>,<load-balancer-ip>:

X-Forwarded-For: <supplied-value>,<client-ip>,<load-balancer-ip>
When running HTTP reverse proxy software on the load balancer's backends, the software might append one or both of the following IP addresses to the end of the X-Forwarded-For header:

- The IP address of the Google Front End (GFE) that connected to the backend. These IP addresses are in the 130.211.0.0/22 and 35.191.0.0/16 ranges.
- The IP address of the backend system itself.

Thus, an upstream process after your load balancer's backend might receive an X-Forwarded-For header of the form:

X-Forwarded-For: <existing-values>,<client-ip>,<load-balancer-ip>,<GFE-IP>,<backend-IP>
URL maps
URL maps define matching patterns for URL-based routing of requests to the appropriate backend services. A default service is defined to handle any requests that do not match a specified host rule or path matching rule. In some situations, such as the multi-region load balancing example, you might not define any URL rules and rely only on the default service. For request routing, the URL map allows you to divide your traffic by examining the URL components to send requests to different sets of backends.
URL maps used with global external HTTP(S) load balancers and regional external HTTP(S) load balancers support several advanced traffic management features such as header-based traffic steering, weight-based traffic splitting, and request mirroring. For more information, see the following:
- Traffic management overview for global external HTTP(S) load balancer.
- Traffic management overview for regional external HTTP(S) load balancer.
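As a minimal sketch, a URL map that routes all unmatched requests to a default backend service can be created with gcloud (both resource names are placeholders):

gcloud compute url-maps create my-url-map \
    --default-service=my-backend-service \
    --global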
The following table specifies the type of URL map required by external HTTP(S) load balancers in each mode.
Load balancer mode | URL map type |
---|---|
Global external HTTP(S) load balancer | Global |
Global external HTTP(S) load balancer (classic) | Global (with only a subset of the features supported) |
Regional external HTTP(S) load balancer | Regional |
SSL certificates
External HTTP(S) load balancers using target HTTPS proxies require private keys and SSL certificates as part of the load balancer configuration. Google Cloud provides two configuration methods for assigning private keys and SSL certificates to target HTTPS proxies: Compute Engine SSL certificates and Certificate Manager. For a description of each configuration, see Certificate configuration methods in the SSL certificates overview.
Google Cloud provides two certificate types: Self-managed and Google-managed. For a description of each type, see Certificate types in the SSL certificates overview.
The external HTTP(S) load balancer mode determines which configuration methods and certificate types are supported. For details, see Certificates and Google Cloud load balancers in the SSL certificates overview.
Mutual TLS
Typically with HTTPS communication, the authentication works only one way: the client verifies the identity of the server. For applications that require the load balancer to authenticate the identity of clients that connect to it, both a global external HTTP(S) load balancer and a global external HTTP(S) load balancer (classic) support mutual TLS (mTLS).
With mTLS, the load balancer requests that the client send a certificate to authenticate itself during the TLS handshake with the load balancer. You can configure a trust store that the load balancer uses to validate the client certificate's chain of trust.
Google Cloud uses the TrustConfig resource in Certificate Manager to store certificates that the server uses to verify the certificate presented by the client. If you are using a global external HTTP(S) load balancer (classic) on the Premium Network Service Tier or a global external HTTP(S) load balancer, you can use Certificate Manager to provision and manage your SSL certificates or TrustConfig across multiple instances of the load balancer at scale. For more information, see the Certificate Manager overview.
For more information about mTLS, see Mutual TLS authentication.
SSL policies
SSL policies specify the set of SSL features that Google Cloud load balancers use when negotiating SSL with clients.
By default, HTTPS Load Balancing uses a set of SSL features that provides good security and wide compatibility. Some applications require more control over which SSL versions and ciphers are used for their HTTPS or SSL connections. You can define an SSL policy to specify the set of SSL features that your load balancer uses when negotiating SSL with clients. In addition, you can apply that SSL policy to your target HTTPS proxy.
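For example, the following sketch creates an SSL policy that requires TLS 1.2 or later with a restricted cipher profile and attaches it to an existing target HTTPS proxy (resource names are placeholders):

gcloud compute ssl-policies create my-ssl-policy \
    --profile=MODERN \
    --min-tls-version=1.2

gcloud compute target-https-proxies update my-https-proxy \
    --ssl-policy=my-ssl-policy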
The following table specifies the SSL policy support for load balancers in each mode.
Load balancer mode | SSL policies supported |
---|---|
Global external HTTP(S) load balancer | |
Global external HTTP(S) load balancer (classic) | |
Regional external HTTP(S) load balancer |
Backend services and buckets
Backend services provide configuration information to the load balancer. Load balancers use the information in a backend service to direct incoming traffic to one or more attached backends. For an example showing how to set up a load balancer with a backend service and a Compute Engine backend, see Setting up an external HTTP(S) load balancer with a Compute Engine backend.
Backend buckets direct incoming traffic to Cloud Storage buckets. For an example showing how to add a bucket to an external HTTP(S) load balancer, see Setting up a load balancer with backend buckets.
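For illustration, a global backend service for a global external HTTP(S) load balancer might be created as follows; the service and health check names are placeholders, and the health check is assumed to already exist:

gcloud compute backend-services create my-backend-service \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --port-name=http \
    --health-checks=my-health-check \
    --global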
The following table specifies the backend features supported by external HTTP(S) load balancers in each mode.
Load balancer mode | Instance groups | Zonal NEGs | Internet NEGs | Serverless NEGs | Hybrid NEGs | Private Service Connect NEGs | Backend buckets | Google Cloud Armor | Cloud CDN | IAP
---|---|---|---|---|---|---|---|---|---|---
Global external HTTP(S) load balancer | | | | | | | | | |
Global external HTTP(S) load balancer (classic) | | | | | | | | when using Premium Tier | |
Regional external HTTP(S) load balancer | | | | | | | | | |
Backends and VPC networks
The restrictions on where backends can be located depend on the type of load balancer.
- For the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic), all backends must be located in the same project but can be located in different VPC networks. The different VPC networks do not need to be connected using VPC Network Peering because GFE proxy systems communicate directly with backends in their respective VPC networks.
- For the regional external HTTP(S) load balancer, all backends must be located in the same VPC network and region.
Protocol to the backends
When you configure a backend service for the load balancer, you set the protocol that the backend service uses to communicate with the backends. You can choose HTTP, HTTPS, or HTTP/2. The load balancer uses only the protocol that you specify. The load balancer does not fall back to one of the other protocols if it is unable to negotiate a connection to the backend with the specified protocol.
If you use HTTP/2, you must use TLS. HTTP/2 without encryption is not supported.
For the complete list of protocols supported, see Load balancing features: Protocols from the load balancer to the backends.
WebSocket support
Google Cloud HTTP(S)-based load balancers have native support for the WebSocket protocol when you use HTTP or HTTPS as the protocol to the backend. The load balancer does not need any configuration to proxy WebSocket connections.
The WebSocket protocol provides a full-duplex communication channel between clients and servers. An HTTP(S) request initiates the channel. For detailed information about the protocol, see RFC 6455.
When the load balancer recognizes a WebSocket Upgrade
request from
an HTTP(S) client followed by a successful Upgrade
response from the backend
instance, the load balancer proxies bidirectional traffic for
the duration of the current connection. If the backend instance does not return
a successful Upgrade
response, the load balancer closes the connection.
The timeout for a WebSocket connection depends on the configurable backend service timeout of the load balancer, which is 30 seconds by default. This timeout applies to WebSocket connections regardless of whether they are in use.
Session affinity for WebSockets works the same as for any other request. For information, see Session affinity.
Using gRPC with your Google Cloud applications
gRPC is an open-source framework for remote procedure calls. It is based on the HTTP/2 standard. Use cases for gRPC include the following:
- Low-latency, highly scalable, distributed systems
- Developing mobile clients that communicate with a cloud server
- Designing new protocols that must be accurate, efficient, and language-independent
- Layered design to enable extension, authentication, and logging
To use gRPC with your Google Cloud applications, you must proxy requests end-to-end over HTTP/2. To do this:
- Configure an HTTPS load balancer.
- Enable HTTP/2 as the protocol from the load balancer to the backends.
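For example, step 2 can be sketched with gcloud as follows, switching an existing backend service to HTTP/2 (the service name is a placeholder, and the backends must serve TLS):

gcloud compute backend-services update my-backend-service \
    --global \
    --protocol=HTTP2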
The load balancer negotiates HTTP/2 with clients as part of the SSL handshake by using the ALPN TLS extension.
A load balancer that is configured to use HTTP/2 between itself and the backend instances can still negotiate HTTPS with some clients or accept insecure HTTP requests. The load balancer transforms those HTTP or HTTPS requests to proxy them over HTTP/2 to the backend instances.
You must enable TLS on your backends. For more information, see Encryption from the load balancer to the backends.
If you want to configure an external HTTP(S) load balancer by using HTTP/2 with Google Kubernetes Engine Ingress or by using gRPC and HTTP/2 with Ingress, see HTTP/2 for load balancing with Ingress.
For information about troubleshooting problems with HTTP/2, see Troubleshooting issues with HTTP/2 to the backends.
For information about HTTP/2 limitations, see HTTP/2 limitations.
Health checks
Each backend service specifies a health check that periodically monitors the backends' readiness to receive a connection from the load balancer. This reduces the risk that requests might be sent to backends that can't service the request. Health checks do not check if the application itself is working.
For the health check probes, you must create an ingress allow firewall rule that allows health check probes to reach your backend instances. Typically, health check probes originate from Google's centralized health checking mechanism.
Regional external HTTP(S) load balancers that use hybrid NEG backends are an exception to this rule because their health checks originate from the proxy-only subnet instead. For details, see the Hybrid NEGs overview.
Health check protocol
Although it is not required and not always possible, it is a best practice to use a health check whose protocol matches the protocol of the backend service. For example, an HTTP/2 health check most accurately tests HTTP/2 connectivity to backends. In contrast, regional external HTTP(S) load balancers that use hybrid NEG backends do not support gRPC health checks. For the list of supported health check protocols, see Load balancing features.
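As an example of matching protocols, a backend service that uses HTTP/2 might use an HTTP/2 health check sketched as follows (the name, port, and request path are placeholders):

gcloud compute health-checks create http2 my-health-check \
    --port=443 \
    --request-path=/healthz \
    --check-interval=10s \
    --timeout=5s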
The following table specifies the scope of health checks supported by external HTTP(S) load balancers in each mode.
Load balancer mode | Health check type |
---|---|
Global external HTTP(S) load balancer | Global |
Global external HTTP(S) load balancer (classic) | Global |
Regional external HTTP(S) load balancer | Regional |
Firewall rules
The load balancer requires the following firewall rules:
- For the global external HTTP(S) load balancers, an ingress allow rule to permit traffic from Google Front Ends (GFEs) to reach your backends. For the regional external HTTP(S) load balancer, an ingress allow rule to permit traffic from the proxy-only subnet to reach your backends.
- An ingress allow rule to permit traffic from the health check probe ranges. For more information about health check probes and why it's necessary to allow traffic from them, see Probe IP ranges and firewall rules.
Firewall rules are implemented at the VM instance level, not on GFE proxies. You cannot use Google Cloud firewall rules to prevent traffic from reaching the load balancer. For the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic), you can use Google Cloud Armor to achieve this.
The ports for these firewall rules must be configured as follows:
- Allow traffic to the destination port for each backend service's health check.
- For instance group backends: determine the ports to be configured by the mapping between the backend service's named port and the port numbers associated with that named port on each instance group. The port numbers can vary among instance groups assigned to the same backend service.
- For GCE_VM_IP_PORT NEG backends: allow traffic to the port numbers of the endpoints.
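For example, the following sketch allows the health check and GFE ranges to reach backends tagged load-balanced-backend on TCP port 80; the network and tag names are placeholders:

gcloud compute firewall-rules create fw-allow-health-check \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend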
The following table summarizes the required source IP address ranges for the firewall rules:
Load balancer mode | Health check source ranges | Request source ranges
---|---|---
Global external HTTP(S) load balancer | 130.211.0.0/22 and 35.191.0.0/16 | The source of GFE traffic depends on the backend type. For instance group and zonal NEG backends, GFE traffic uses the same ranges as the health check probers: 130.211.0.0/22 and 35.191.0.0/16.
Global external HTTP(S) load balancer (classic) | 130.211.0.0/22 and 35.191.0.0/16 | The source of GFE traffic depends on the backend type. For instance group and zonal NEG backends, GFE traffic uses the same ranges as the health check probers: 130.211.0.0/22 and 35.191.0.0/16.
Regional external HTTP(S) load balancer | 130.211.0.0/22 and 35.191.0.0/16. Currently, health check probes for hybrid NEGs originate from Google's centralized health checking mechanism. If you cannot allow traffic that originates from the Google health check ranges to reach your hybrid endpoints and would prefer to have the health check probes originate from private IP addresses instead, speak to your Google account representative to get your project allowlisted for distributed Envoy health checks. | The proxy-only subnet that you configure.
Shared VPC architecture
External HTTP(S) load balancers support networks that use Shared VPC. Shared VPC lets organizations connect resources from multiple projects to a common VPC network so that they can communicate with each other securely and efficiently using internal IPs from that network. If you're not already familiar with Shared VPC, read the Shared VPC overview documentation.
There are many ways to configure an external HTTP(S) load balancer within a Shared VPC network. Regardless of the type of deployment, all the components of the load balancer must be in the same organization.
Load balancer | Frontend components | Backend components
---|---|---
Global external HTTP(S) load balancer, Global external HTTP(S) load balancer (classic) | The global external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same host or service project as the backends. | A global backend service must be defined in the same host or service project as the backends (instance groups or NEGs). Health checks associated with backend services must be defined in the same project as the backend service.
Regional external HTTP(S) load balancer | Create the required network and proxy-only subnet in the Shared VPC host project. The regional external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same project. This project can be the host project or a service project. | You can either create backend services and backends in the same project as the frontend, or create them in other service projects and reference them from the URL map by using cross-project service referencing (described later in this section). Each backend service must be defined in the same project as the backends it references. Health checks associated with backend services must be defined in the same project as the backend service.
While you can create all the load balancing components and backends in the Shared VPC host project, this model does not separate network administration and service development responsibilities.
Serverless backends in a Shared VPC environment
For a load balancer that is using a serverless NEG backend, the backend Cloud Run, Cloud Functions, or App Engine service must be in the same project as the serverless NEG.
Additionally, for the regional external HTTP(S) load balancer that supports cross-project service referencing, the backend service, serverless NEG, and the Cloud Run service must always be in the same service project.
Cross-project service referencing
In this model, the load balancer's frontend and URL map are in a host or service project. The load balancer's backend services and backends can be distributed across projects in the Shared VPC environment. Cross-project backend services can be referenced in a single URL map. This is referred to as cross-project service referencing.
Cross-project service referencing allows organizations to configure one central load balancer and route traffic to hundreds of services distributed across multiple different projects. You can centrally manage all traffic routing rules and policies in one URL map. You can also associate the load balancer with a single set of hostnames and SSL certificates. You can therefore optimize the number of load balancers needed to deploy your application, reducing operational costs, quota requirements, and management overhead.
By having different projects for each of your functional teams, you can also achieve separation of roles within your organization. Service owners can focus on building services in service projects, while network teams can provision and maintain load balancers in another project, and both can be connected by using cross-project service referencing.
Service owners can maintain autonomy over the exposure of their services and control which users can access their services by using the load balancer. This is achieved by a special IAM role called the Compute Load Balancer Services User role (roles/compute.loadBalancerServiceUser).
Cross-project service referencing can be used with instance groups, serverless NEGs, or any other supported backend types. Cross-project service referencing is not supported for a global external HTTP(S) load balancer or a global external HTTP(S) load balancer (classic).
Example 1: Load balancer frontend and backend in different service projects
Here is an example of a deployment where the load balancer's frontend and URL map are created in service project A and the URL map references a backend service in service project B.
In this case, Network Admins or Load Balancer Admins in service project A require access to backend services in service project B. Service project B admins grant the compute.loadBalancerServiceUser IAM role to Load Balancer Admins in service project A who want to reference the backend service in service project B.
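For example, the role grant can be sketched at the project level with gcloud; the project ID and user are placeholders:

gcloud projects add-iam-policy-binding service-project-b \
    --member="user:lb-admin@example.com" \
    --role="roles/compute.loadBalancerServiceUser"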
Example 2: Load balancer frontend in the host project and backends in service projects
In this type of deployment, the load balancer's frontend and URL map are created in the host project and the backend services (and backends) are created in service projects.
In this case, Network Admins or Load Balancer Admins in the host project require access to backend services in the service project. Service project admins grant the compute.loadBalancerServiceUser IAM role to Load Balancer Admins in the host project who want to reference the backend service in the service project.
All load balancer components and backends in a service project
In this model, all load balancer components and backends are in a service project. This deployment model is supported by all HTTP(S) load balancers.
The load balancer components and backends must use the same VPC network.

How connections work
Global external HTTP(S) load balancer connections
The global external HTTP(S) load balancers are implemented by many proxies called Google Front Ends (GFEs). There isn't just a single proxy. In Premium Tier, the same global external IP address is advertised from various points of presence, and client requests are directed to the client's nearest GFE.
Depending on where your clients are, multiple GFEs can initiate HTTP(S) connections to your backends. Packets sent from GFEs have source IP addresses from the same range used by health check probers: 35.191.0.0/16 and 130.211.0.0/22.
Depending on the backend service configuration, the protocol used by each GFE to connect to your backends can be HTTP, HTTPS, or HTTP/2. For HTTP or HTTPS connections, the HTTP version used is HTTP 1.1.
HTTP keepalive is enabled by default, as specified in the HTTP 1.1 specification. HTTP keepalives attempt to efficiently use the same TCP session; however, there's no guarantee. The GFE uses a keepalive timeout of 600 seconds, and you cannot configure this. You can, however, configure the request/response timeout by setting the backend service timeout. Though closely related, an HTTP keepalive and a TCP idle timeout are not the same thing. For more information, see timeouts and retries.
To ensure that traffic is load balanced evenly, the load balancer might cleanly close a TCP connection either by sending a FIN ACK packet after completing a response that included a Connection: close header, or by issuing an HTTP/2 GOAWAY frame after completing a response. This behavior does not interfere with any active requests or responses.
The numbers of HTTP connections and TCP sessions vary depending on the number of GFEs connecting, the number of clients connecting to the GFEs, the protocol to the backends, and where backends are deployed.
For more information, see How external HTTP(S) load balancers work in the solutions guide: Application Capacity Optimizations with Global Load Balancing.
Regional external HTTP(S) load balancer connections
The regional external HTTP(S) load balancer is a managed service implemented on the Envoy proxy. The regional external HTTP(S) load balancer uses a shared subnet called a proxy-only subnet to provision a set of IP addresses that Google uses to run Envoy proxies on your behalf. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional Envoy-based load balancers in a particular network and region share this subnet.
Clients use the load balancer's IP address and port to connect to the load balancer. Client requests are directed to the proxy-only subnet in the same region as the client. The load balancer terminates client requests and then opens new connections from the proxy-only subnet to your backends. Therefore, packets sent from the load balancer have source IP addresses from the proxy-only subnet.
Depending on the backend service configuration, the protocol used by Envoy proxies to connect to your backends can be HTTP, HTTPS, or HTTP/2. If HTTP or HTTPS, the HTTP version is HTTP 1.1. HTTP keepalive is enabled by default, as specified in the HTTP 1.1 specification. The Envoy proxy uses a keepalive timeout of 600 seconds, and you cannot configure this. You can, however, configure the request/response timeout by setting the backend service timeout. For more information, see timeouts and retries.
Client communications with the load balancer
- Clients can communicate with the load balancer by using the HTTP 1.1 or HTTP/2 protocol.
- When HTTPS is used, modern clients default to HTTP/2. This is controlled on the client, not on the HTTPS load balancer.
- You cannot disable HTTP/2 by making a configuration change on the load balancer. However, you can configure some clients to use HTTP 1.1 instead of HTTP/2. For example, with curl, use the --http1.1 parameter.
- External HTTP(S) load balancers support the HTTP/1.1 100 Continue response.
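For example, to force curl to use HTTP 1.1 when testing a load balancer (the hostname is a placeholder):

curl --http1.1 -v https://www.example.com/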
For the complete list of protocols supported by external HTTP(S) load balancer forwarding rules in each mode, see Load balancer features.
Source IP addresses for client packets
The source IP address for packets, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two TCP connections.
For the global external HTTP(S) load balancers:

Connection 1, from original client to the load balancer (GFE):
- Source IP address: the original client (or external IP address if the client is behind NAT or a forward proxy).
- Destination IP address: your load balancer's IP address.
Connection 2, from the load balancer (GFE) to the backend VM or endpoint:

- Source IP address: an IP address in one of the ranges specified in Firewall rules.
- Destination IP address: the internal IP address of the backend VM or container in the VPC network.
For the regional external HTTP(S) load balancers:

Connection 1, from original client to the load balancer (proxy-only subnet):
- Source IP address: the original client (or external IP address if the client is behind NAT or a forward proxy).
- Destination IP address: your load balancer's IP address.
Connection 2, from the load balancer (proxy-only subnet) to the backend VM or endpoint:

- Source IP address: an IP address in the proxy-only subnet that is shared among all the Envoy-based load balancers deployed in the same region and network as the load balancer.
- Destination IP address: the internal IP address of the backend VM or container in the VPC network.
Return path
For the global external HTTP(S) load balancers, Google Cloud uses special routes not defined in your VPC network for health checks. For more information, see Load balancer return paths.
For regional external HTTP(S) load balancers, Google Cloud uses open-source Envoy proxies to terminate client requests to the load balancer. The load balancer terminates the TCP session and opens a new TCP session from the region's proxy-only subnet to your backend. Routes defined within your VPC network facilitate communication from Envoy proxies to your backends and from your backends to the Envoy proxies.
Open ports
This section applies only to the global external HTTP(S) load balancers which are implemented using GFEs.
GFEs have several open ports to support other Google services that run on the same architecture. To see a list of some of the ports likely to be open on GFEs, see Forwarding rule: Port specifications. There might be other open ports for other Google services running on GFEs.
Running a port scan on the IP address of a GFE-based load balancer is not useful from an auditing perspective for the following reasons:
- A port scan (for example, with nmap) generally expects no response packet or a TCP RST packet when performing TCP SYN probing. GFEs send SYN-ACK packets in response to SYN probes only for ports on which you have configured a forwarding rule, and on ports 80 and 443 if your load balancer uses a Premium Tier IP address. GFEs only send packets to your backends in response to packets sent to your load balancer's IP address and the destination port configured on its forwarding rule. Packets sent to a different load balancer IP address, or to your load balancer's IP address on a port not configured in your forwarding rule, do not result in packets being sent to your load balancer's backends. GFEs implement security features such as Google Cloud Armor. Even without a Google Cloud Armor configuration, Google infrastructure and GFEs provide defense-in-depth for DDoS attacks and SYN floods.
- Packets sent to the IP address of your load balancer could be answered by any GFE in Google's fleet; however, scanning a load balancer IP address and destination port combination only interrogates a single GFE per TCP connection. The IP address of your load balancer is not assigned to a single device or system. Thus, scanning the IP address of a GFE-based load balancer does not scan all the GFEs in Google's fleet.
With that in mind, the following are some more effective ways to audit the security of your backend instances:
- A security auditor should inspect the forwarding rule configuration for the load balancer. The forwarding rules define the destination port for which your load balancer accepts packets and forwards them to the backends. For GFE-based load balancers, each external forwarding rule can only reference a single destination TCP port. For a load balancer using TCP port 443, UDP port 443 is used when the connection is upgraded to QUIC (HTTP/3).
- A security auditor should inspect the firewall rule configuration applicable to backend VMs. The firewall rules that you set block traffic from the GFEs to the backend VMs, but do not block incoming traffic to the GFEs. For best practices, see the firewall rules section.
TLS termination
The following table summarizes how TLS termination is handled by external HTTP(S) load balancers in each mode.
Load balancer mode | TLS termination |
---|---|
Global external HTTP(S) load balancer | TLS is terminated on a GFE, which can be anywhere in the world. |
Global external HTTP(S) load balancer (classic) | TLS is terminated on a GFE, which could be anywhere in the world. |
Regional external HTTP(S) load balancer | TLS is terminated on Envoy proxies located in a proxy-only subnet in a region chosen by the user. Use this load balancer mode if you need geographic control over the region where TLS is terminated. |
Timeouts and retries
External HTTP(S) load balancers have the following timeouts:

- A configurable HTTP backend service timeout, which represents the amount of time the load balancer waits for your backend to return a complete HTTP response. The default value for the backend service timeout is 30 seconds. The full range of timeout values allowed is 1-2,147,483,647 seconds.
For example, if you want to download a 500-MB file, and the value of the backend service timeout is the default value of 30 seconds, the load balancer expects the backend to deliver the entire 500-MB file within 30 seconds. It is possible to configure the backend service timeout to not be long enough for the backend to send its complete HTTP response. In this situation, if the load balancer has at least received HTTP response headers, the load balancer returns the complete response headers and as much of the response body as it could obtain within the backend service timeout.
The backend service timeout should be set to the maximum possible time from the first byte of the request to the last byte of the response, for the interaction between the proxy (GFE or managed Envoy) and your backend. If you are using WebSockets, the backend service timeout should be set to the maximum duration of a WebSocket, idle or active.
Consider increasing this timeout under any of these circumstances:
- You expect a backend to take longer to return HTTP responses.
- You see an HTTP 408 response with jsonPayload.statusDetail client_timed_out.
- The connection is upgraded to a WebSocket.
The backend service timeout you set is a best-effort goal. It does not guarantee that underlying TCP connections will stay open for the duration of that timeout.
You can set the backend service timeout to whatever value you'd like; however, setting it to a value beyond one day (86,400 seconds) does not mean that the load balancer will keep a TCP connection running for that long. Google periodically restarts GFEs and Envoy software tasks for software updates and routine maintenance, and your backend service timeout does not override that. The longer you make your backend service timeout, the more likely it is that Google will terminate a TCP connection for maintenance. We recommend that you implement retry logic to reduce the impact of such events.
The backend service timeout is not an HTTP idle (keepalive) timeout. It is possible that input and output (IO) from the backend is blocked due to a slow client (a browser with a slow connection, for example). This wait time isn't counted against the backend service timeout.
To configure the backend service timeout, use one of the following methods:
- Google Cloud console: Modify the Timeout field of the load balancer's backend service.
- Google Cloud CLI: Use the gcloud compute backend-services update command to modify the --timeout parameter of the backend service resource.
- API: Modify the timeoutSec parameter for the global or regional backend service resource.
For regional external HTTP(S) load balancers, the URL map's routeActions.timeout parameter can override the backend service timeout. The backend service timeout is used as the default value for routeActions.timeout.
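For example, the following sketch raises the backend service timeout to one hour to accommodate long-lived WebSocket connections (the service name is a placeholder):

gcloud compute backend-services update my-backend-service \
    --global \
    --timeout=3600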
- There are two timeouts applicable when backend buckets are used with the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic):
- An idle timeout fixed at 6 minutes. This means that backend bucket HTTP streams are considered idle after 6 minutes of no activity.
- Another timeout that includes the complete response processing time taken by both the load balancer and Cloud Storage. This timeout is fixed at 24 hours.
- An HTTP keepalive timeout, whose value is fixed at 10 minutes (600 seconds). This value is not configurable by modifying your backend service. You must configure the web server software used by your backends so that its keepalive timeout is longer than 600 seconds to prevent connections from being closed prematurely by the backend. This timeout does not apply to WebSockets. This table illustrates the changes necessary to modify keepalive timeouts for common web server software:
Web server software | Parameter | Default setting | Recommended setting
---|---|---|---
Apache | KeepAliveTimeout | KeepAliveTimeout 5 | KeepAliveTimeout 620
nginx | keepalive_timeout | keepalive_timeout 75s; | keepalive_timeout 620s;
Retries
Support for retry logic depends on the mode of the external HTTP(S) load balancer.
Load balancer mode | Retry logic |
---|---|
Global external HTTP(S) load balancer | Configurable using a retry policy in the URL map. The default number of retries (numRetries) is 1. Without a retry policy, unsuccessful requests that have no HTTP body (for example, GET requests) and that result in HTTP 502, 503, or 504 responses are retried once. Retried requests only generate one log entry for the final response. |
Global external HTTP(S) load balancer (classic) | Retry policy cannot be changed for connection retries. HTTP POST requests are not retried. HTTP GET requests are always retried once as long as 80% or more of the backends are healthy. If there is a single backend instance in a group and the connection to that backend instance fails, the percentage of unhealthy backend instances is 100%, so the GFE doesn't retry the request. The load balancer retries a failed GET request if the first request failed before receiving response headers from the backend instance. Retried requests only generate one log entry for the final response. For more information, see External HTTP(S) load balancer logging and monitoring. Unsuccessful requests result in the load balancer synthesizing an HTTP 502 response. |
Regional external HTTP(S) load balancer | Configurable using a retry policy in the URL map. The default number of retries (numRetries) is 1. Without a retry policy, unsuccessful requests that have no HTTP body (for example, GET requests) and that result in HTTP 502, 503, or 504 responses are retried once. HTTP POST requests are not retried. Retried requests only generate one log entry for the final response. |
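As a hedged sketch of configuring a retry policy, you can export the URL map, add a retryPolicy under a route action, and re-import it; the resource names and retry values below are illustrative only:

gcloud compute url-maps export my-url-map \
    --global \
    --destination=url-map.yaml
# Edit url-map.yaml to add, for example:
#   defaultRouteAction:
#     retryPolicy:
#       retryConditions: ['5xx', 'connect-failure']
#       numRetries: 3
#       perTryTimeout:
#         seconds: 2
gcloud compute url-maps import my-url-map \
    --global \
    --source=url-map.yaml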
The WebSocket protocol is supported with GKE Ingress.
Illegal request and response handling
The load balancer blocks both client requests and backend responses from reaching the backend or the client, respectively, for a number of reasons. Some reasons are strictly for HTTP/1.1 compliance and others are to avoid unexpected data being passed to or from the backends. None of the checks can be disabled.
The load balancer blocks the following for HTTP/1.1 compliance:
- It cannot parse the first line of the request.
- A header is missing the : delimiter.
- The content length is not a valid number, or there are multiple content length headers.
- There are multiple transfer encoding keys, or there are unrecognized transfer encoding values.
- There's a non-chunked body and no content length specified.
- Body chunks are unparseable. This is the only case where some data reaches the backend. The load balancer closes the connections to the client and backend when it receives an unparseable chunk.
The load balancer blocks the request if any of the following are true:
- The total size of request headers and the request URL exceeds the limit for the maximum request header size for external HTTP(S) load balancers.
- The request method does not allow a body, but the request has one.
- The request contains an Upgrade header, and the Upgrade header is not used to enable WebSocket connections.
header is not used to enable WebSocket connections. - The HTTP version is unknown.
The load balancer blocks the backend's response if any of the following are true:
- The total size of response headers exceeds the limit for maximum response header size for external HTTP(S) load balancers.
- The HTTP version is unknown.
Traffic distribution
When you add a backend instance group or NEG to a backend service, you specify a balancing mode, which defines a method measuring backend load and a target capacity. External HTTP(S) load balancers support two balancing modes:
- RATE, for instance groups or NEGs, is the target maximum number of requests (queries) per second (RPS, QPS). The target maximum RPS/QPS can be exceeded if all backends are at or above capacity.
- UTILIZATION is the backend utilization of VMs in an instance group.
How traffic is distributed among backends depends on the mode of the load balancer.
Global external HTTP(S) load balancer
Before a Google Front End (GFE) sends requests to backend instances, the GFE estimates which backend instances have capacity to receive requests. This capacity estimation is made proactively, not at the same time as requests are arriving. The GFEs receive periodic information about the available capacity and distribute incoming requests accordingly.
What capacity means depends in part on the balancing mode. For the RATE mode, it is relatively simple: a GFE determines exactly how many requests it can assign per second. UTILIZATION-based load balancing is more complex: the load balancer checks the instances' current utilization and then estimates a query load that each instance can handle. This estimate changes over time as instance utilization and traffic patterns change.
Both factors—the capacity estimation and the proactive assignment—influence the distribution among instances. Thus, Cloud Load Balancing behaves differently from a simple round-robin load balancer that spreads requests exactly 50:50 between two instances. Instead, Google Cloud load balancing attempts to optimize the backend instance selection for each request.
For the global external HTTP(S) load balancer (classic), the balancing mode is used to select the most favorable backend (instance group or NEG). Traffic is then distributed in a round robin fashion among instances or endpoints within the backend.
For the global external HTTP(S) load balancer, load balancing is two-tiered. The balancing mode determines the weighting or fraction of traffic that should be sent to each backend (instance group or NEG). Then, the load balancing policy (LocalityLbPolicy) determines how traffic is distributed to instances or endpoints within the group. For more information, see the Load balancing locality policy (backend service API documentation).
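For example, assuming your gcloud version supports the --locality-lb-policy flag, the locality policy on a backend service can be changed as follows (the service name is a placeholder):

gcloud compute backend-services update my-backend-service \
    --global \
    --locality-lb-policy=LEAST_REQUEST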
Regional external HTTP(S) load balancer
For regional external HTTP(S) load balancers, traffic distribution is based on the load balancing mode and the load balancing locality policy.
The balancing mode determines the weight and fraction of traffic that should be sent to each group (instance group or NEG). The load balancing locality policy (LocalityLbPolicy) determines how backends within the group are load balanced.
When a backend service receives traffic, it first directs traffic to a backend (instance group or NEG) according to the backend's balancing mode. After a backend is selected, traffic is then distributed among instances or endpoints in that backend group according to the load balancing locality policy.
How requests are distributed
Whether traffic is distributed regionally or globally depends on which load balancer mode and network service tier is in use.
For Premium Tier:
- Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.
- If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user. Details for the process are documented on this page.
For Standard Tier:
Google advertises your load balancer's IP address from points of presence associated with the forwarding rule's region. The load balancer uses a regional external IP address.
You can configure backends in the same region as the forwarding rule. The process documented here still applies, but the load balancer only directs requests to healthy backends in that one region.
Request distribution process:
The balancing mode and choice of target define backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG, zonal instance group, or zone of a regional instance group. Distribution within a zone is done with consistent hashing for the global external HTTP(S) load balancer (classic) and is configurable using the load balancing locality policy for the global external HTTP(S) load balancer and the regional external HTTP(S) load balancer.
GFE-based global external HTTP(S) load balancers use the following process to distribute incoming requests:
- The forwarding rule's external IP address is advertised by edge routers at the borders of Google's network. Each advertisement lists a next hop to a Layer 3/4 load balancing system (Maglev).
- The Maglev systems route traffic to a first-layer Google Front End (GFE). The first-layer GFE terminates TLS if required and then routes traffic to second-layer GFEs according to this process:
  - The URL map selects a backend service.
  - If a backend service uses instance group or GCE_VM_IP_PORT NEG backends, the first-layer GFEs prefer second-layer GFEs that are located in or near the region that contains the instance group or NEG.
  - For backend buckets and backend services with hybrid NEGs, serverless NEGs, and internet NEGs, the first-layer GFEs choose second-layer GFEs in a subset of regions such that the round trip time between the two GFEs is minimized.
Second-layer GFE preference is not a guarantee, and it can dynamically change based on Google's network conditions and maintenance.
Second-layer GFEs are aware of health check status and actual backend capacity usage.
- The second-layer GFE directs requests to backends in zones within its region.
- For Premium Tier, sometimes second-layer GFEs send requests to backends in zones of different regions. This behavior is called spillover. Spillover is possible when:
  - All backends known to a second-layer GFE are at capacity or are unhealthy.
  - The second-layer GFE has information for healthy, available backends in zones of a different region.
When distributing requests to backends, GFEs operate at a zonal level.
With a low number of requests per second, second-layer GFEs sometimes prefer one zone in a region over the other zones. This preference is normal and expected. The distribution among zones in the region doesn't become even until the load balancer receives more requests per second.
Spillover is governed by two principles:
The second-layer GFEs are typically configured to serve a subset of backend locations.
Spillover behavior does not exhaust all possible Google Cloud zones. If you need to direct traffic away from backends in a particular zone or in an entire region, you must set the capacity scaler to zero. Configuring backends to fail health checks does not guarantee that the second-layer GFE spills over to backends in zones of a different region.
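For example, the following sketch drains a zonal instance group backend by setting its capacity scaler to zero; the service, instance group, and zone are placeholders:

gcloud compute backend-services update-backend my-backend-service \
    --global \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a \
    --capacity-scaler=0.0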
Session affinity
Session affinity provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has the capacity, according to the configured balancing mode.
When you use session affinity, we recommend the RATE balancing mode rather than UTILIZATION. Session affinity works best if you set the balancing mode to requests per second (RPS).
External HTTP(S) load balancers offer the following types of session affinity:
- NONE. Session affinity is not set for the load balancer.
- Client IP affinity
- Generated cookie affinity
- Header field affinity
- HTTP Cookie affinity
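For example, the following sketch enables generated cookie affinity with a one-hour cookie TTL on an existing backend service (the service name is a placeholder):

gcloud compute backend-services update my-backend-service \
    --global \
    --session-affinity=GENERATED_COOKIE \
    --affinity-cookie-ttl=3600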
The following table summarizes the session affinity options supported by external HTTP(S) load balancers in each mode:

Load balancer mode | None | Client IP | Generated cookie | Header field | HTTP cookie
---|---|---|---|---|---
Global external HTTP(S) load balancer | Yes | Yes | Yes | Yes | Yes
Global external HTTP(S) load balancer (classic) | Yes | Yes | Yes | No | No
Regional external HTTP(S) load balancer | Yes | Yes | Yes | Yes | Yes
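As a concrete sketch of pairing session affinity with the recommended RATE balancing mode, the following commands assume a global backend service named BACKEND_SERVICE_NAME with a zonal instance group backend; substitute your own resource names and rate limit:

# Enable generated cookie affinity with a one-hour cookie TTL
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --global \
    --session-affinity=GENERATED_COOKIE \
    --affinity-cookie-ttl=3600

# Use the RATE balancing mode so fullness is measured in requests per second
gcloud compute backend-services update-backend BACKEND_SERVICE_NAME \
    --global \
    --instance-group=INSTANCE_GROUP_NAME \
    --instance-group-zone=ZONE \
    --balancing-mode=RATE \
    --max-rate-per-instance=100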
HTTP/2 support
HTTP/2 max concurrent streams
The HTTP/2 SETTINGS_MAX_CONCURRENT_STREAMS setting describes the maximum number of streams that an endpoint accepts, initiated by the peer. The value advertised by an HTTP/2 client to a Google Cloud load balancer is effectively meaningless because the load balancer doesn't initiate streams to the client.

In cases where the load balancer uses HTTP/2 to communicate with a server that is running on a VM, the load balancer respects the SETTINGS_MAX_CONCURRENT_STREAMS value advertised by the server. If a value of zero is advertised, the load balancer can't forward requests to the server, and this might result in errors.
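For context, the load balancer speaks HTTP/2 to your backends only when the backend service protocol is set to HTTP2. A minimal sketch, assuming a global backend service named BACKEND_SERVICE_NAME whose backends are configured to accept HTTP/2:

# Tell the load balancer to use HTTP/2 for connections to the backends
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --global \
    --protocol=HTTP2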
HTTP/2 limitations
- HTTP/2 between the load balancer and the instance can require significantly more TCP connections to the instance than HTTP(S). Connection pooling, an optimization that reduces the number of these connections with HTTP(S), is not currently available with HTTP/2. As a result, you might see high backend latencies because backend connections are made more frequently.
- HTTP/2 between the load balancer and the backend does not support running the WebSocket Protocol over a single stream of an HTTP/2 connection (RFC 8441).
- HTTP/2 between the load balancer and the backend does not support server push.
- The gRPC error rate and request volume aren't visible in the Google Cloud API or the Google Cloud console. If the gRPC endpoint returns an error, the load balancer logs and the monitoring data report the OK 200 HTTP response code.
HTTP/3 support
HTTP/3 is a next-generation internet protocol. It is built on top of IETF QUIC, a protocol developed from the original Google QUIC protocol. HTTP/3 is supported between the external HTTP(S) load balancer, Cloud CDN, and clients.
Specifically:
- IETF QUIC is a transport layer protocol that provides congestion control and reliability similar to TCP, uses TLS 1.3 for security, and delivers improved performance.
- HTTP/3 is an application layer built on top of IETF QUIC, and it relies on QUIC to handle multiplexing, congestion control, loss detection, and retransmission.
- HTTP/3 allows faster client connection initiation, eliminates head-of-line blocking in multiplexed streams, and supports connection migration when a client's IP address changes.
- HTTP/3 is supported for connections between clients and the load balancer, not connections between the load balancer and its backends.
- HTTP/3 connections use the BBR congestion control algorithm.
HTTP/3 on your load balancer can improve web page load times, reduce video rebuffering, and improve throughput on higher latency connections.
The following table specifies the HTTP/3 support for external HTTP(S) load balancers in each mode.

Load balancer mode | HTTP/3 support
---|---
Global external HTTP(S) load balancer (always Premium Tier) | Yes
Global external HTTP(S) load balancer (classic) in Premium Tier | Yes
Global external HTTP(S) load balancer (classic) in Standard Tier | No
Regional external HTTP(S) load balancer (always Standard Tier) | No
How HTTP/3 is negotiated
When HTTP/3 is enabled, the load balancer advertises this support to clients, allowing clients that support HTTP/3 to attempt to establish HTTP/3 connections with the HTTPS load balancer.
- Properly implemented clients always fall back to HTTPS or HTTP/2 when they cannot establish an HTTP/3 connection.
- Clients that support HTTP/3 use their cached prior knowledge of HTTP/3 support to save unnecessary round-trips in the future.
- Because of this fallback, enabling or disabling HTTP/3 in the load balancer does not disrupt the load balancer's ability to connect to clients.
Support is advertised in the Alt-Svc HTTP response header. When HTTP/3 is enabled, responses from the load balancer include the following alt-svc header value:

alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000

If HTTP/3 has been explicitly set to DISABLE, responses do not include an alt-svc response header.
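One quick way to check what your load balancer is advertising is to inspect the response headers with curl; example.com here is a stand-in for a domain served by your load balancer:

# Fetch only the response headers and filter for the Alt-Svc advertisement
curl -sI https://example.com/ | grep -i alt-svc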
When you have HTTP/3 enabled on your HTTPS load balancer, some circumstances can cause your client to fall back to HTTPS or HTTP/2 instead of negotiating HTTP/3. These include the following:
- When a client supports versions of HTTP/3 that are not compatible with the HTTP/3 versions supported by the HTTPS load balancer.
- When the load balancer detects that UDP traffic is blocked or rate-limited in a way that would prevent HTTP/3 from working.
- When the client does not support HTTP/3 at all, and therefore does not attempt to negotiate an HTTP/3 connection.
When a connection falls back to HTTPS or HTTP/2, we do not count this as a failure of the load balancer.
Before you enable HTTP/3, ensure that the previously described behaviors are acceptable for your workloads.
Configure HTTP/3
Both NONE (the default) and ENABLE enable HTTP/3 support for your load balancer.
When HTTP/3 is enabled, the load balancer advertises it to clients, which allows clients that support it to negotiate an HTTP/3 version with the load balancer. Clients that do not support HTTP/3 do not negotiate an HTTP/3 connection. You do not need to explicitly disable HTTP/3 unless you have identified broken client implementations.
External HTTP(S) load balancers provide three ways to configure HTTP/3, as shown in the following table.

quicOverride value | Behavior
---|---
NONE | Support for HTTP/3 is advertised to clients.
ENABLE | Support for HTTP/3 is advertised to clients. Note: TLS 0-RTT (also known as TLS Early Data) is not yet supported for HTTP/3.
DISABLE | Explicitly disables advertising HTTP/3 and Google QUIC to clients.
To explicitly enable (or disable) HTTP/3, follow these steps.
Console: HTTPS
- In the Google Cloud console, go to the Load balancing page.
- Select the load balancer that you want to edit.
- Click Frontend configuration.
- Select the frontend IP address and port that you want to edit. To edit HTTP/3 configurations, the IP address and port must be HTTPS (port 443).
Enable HTTP/3
- Select the QUIC negotiation drop-down.
- To explicitly enable HTTP/3 for this frontend, select Enabled.
- If you have multiple frontend rules representing IPv4 and IPv6, make sure to enable HTTP/3 on each rule.
Disable HTTP/3
- Select the QUIC negotiation drop-down.
- To explicitly disable HTTP/3 for this frontend, select Disabled.
- If you have multiple frontend rules representing IPv4 and IPv6, make sure to disable HTTP/3 for each rule.
gcloud: HTTPS
Before you run this command, you must create an SSL certificate resource for each certificate.

gcloud compute target-https-proxies create HTTPS_PROXY_NAME \
    --global \
    --url-map=URL_MAP_NAME \
    --ssl-certificates=SSL_CERTIFICATE_NAME \
    --quic-override=QUIC_SETTING

Replace URL_MAP_NAME and SSL_CERTIFICATE_NAME with the names of your URL map and SSL certificate resources.
Replace QUIC_SETTING with one of the following:

- NONE (default): allows Google to control when HTTP/3 is advertised. Currently, when you select NONE, HTTP/3 is advertised to clients, but Google QUIC is not advertised. In the Google Cloud console, this option is called Automatic (Default).
- ENABLE: advertises HTTP/3 and Google QUIC to clients.
- DISABLE: does not advertise HTTP/3 or Google QUIC to clients.
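To confirm the resulting configuration, you can read the setting back from the target proxy:

# Print only the quicOverride field of the proxy
gcloud compute target-https-proxies describe HTTPS_PROXY_NAME \
    --global \
    --format="get(quicOverride)"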
API: HTTPS
POST https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpsProxies/TARGET_PROXY_NAME/setQuicOverride

{
  "quicOverride": "QUIC_SETTING"
}
Replace QUIC_SETTING with one of the following:

- NONE (default): Allows Google to control when HTTP/3 is advertised. Currently, when you select NONE, HTTP/3 is advertised to clients, but Google QUIC is not advertised. In the Google Cloud console, this option is called Automatic (Default).
- ENABLE: Advertises HTTP/3 and Google QUIC to clients.
- DISABLE: Does not advertise HTTP/3 or Google QUIC to clients.
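If you prefer to call this method from the command line, the following sketch uses curl with gcloud-provided credentials and substitutes ENABLE for QUIC_SETTING; adjust the project and proxy names for your deployment:

# Invoke the setQuicOverride method with an access token from gcloud
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"quicOverride": "ENABLE"}' \
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpsProxies/TARGET_PROXY_NAME/setQuicOverride"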
Limitations
- HTTPS load balancers do not send a close_notify closure alert when terminating SSL connections. That is, the load balancer closes the TCP connection instead of performing an SSL shutdown.
- HTTPS load balancers support only lowercase characters in domains in a common name (CN) attribute or a subject alternative name (SAN) attribute of the certificate. Certificates with uppercase characters in domains are returned only when set as the primary certificate in the target proxy.
- HTTPS load balancers do not use the Server Name Indication (SNI) extension when connecting to the backend, except for load balancers with Internet NEG backends. For more details, see Encryption from the load balancer to the backends.
- When you use regional external HTTP(S) load balancers with Cloud Run in a Shared VPC environment, standalone VPC networks in service projects can send traffic to any other Cloud Run service deployed in any other service project within the same Shared VPC environment. This is a known issue, and this form of access will be blocked in the future.
What's next
- To learn how to deploy a global external HTTP(S) load balancer, see Setting up an external HTTP(S) load balancer with a Compute Engine backend.
- To learn how to deploy a regional external HTTP(S) load balancer, see Setting up a regional external HTTP(S) load balancer with a Compute Engine backend.
- If you are an existing user of the global external HTTP(S) load balancer (classic), make sure that you review Plan your migration to the global external HTTP(S) load balancer when you plan a new deployment with the global external HTTP(S) load balancer.
- To learn how to automate your external HTTP(S) load balancer setup with Terraform, see Terraform module examples for external HTTP(S) load balancers.
- To learn how to configure advanced traffic management capabilities available with the global external HTTP(S) load balancer, see Traffic management overview for global external HTTP(S) load balancers.
- To learn how to configure advanced traffic management capabilities available with the regional external HTTP(S) load balancer, see Traffic management overview for regional external HTTP(S) load balancers.
- To find the locations for Google PoPs, see GFE locations.
- To learn about capacity management, see Capacity management with load balancing tutorial and Application capacity optimizations with global load balancing.
- To learn about serving websites, see Serving websites.
- To learn how to use Certificate Manager to provision and manage SSL certificates, see the Certificate Manager overview.