Internal HTTP(S) Load Balancing overview

Google Cloud Internal HTTP(S) Load Balancing is a proxy-based, regional Layer 7 load balancer that enables you to run and scale your services behind an internal IP address.

Internal HTTP(S) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and Google Kubernetes Engine (GKE). The load balancer is accessible only in the chosen region of your Virtual Private Cloud (VPC) network on an internal IP address.

Internal HTTP(S) Load Balancing is a managed service based on the open source Envoy proxy. This enables rich traffic control capabilities based on HTTP(S) parameters. After the load balancer has been configured, it automatically allocates Envoy proxies to meet your traffic needs.

At a high level, an internal HTTP(S) load balancer consists of:

  • An internal IP address to which clients send traffic. Only clients that are located in the same region as the load balancer can access this IP address. Internal client requests stay internal to your network and region.
  • One or more backend services to which the load balancer forwards traffic. Backends can be Compute Engine VMs, groups of Compute Engine VMs (through instance groups), or GKE nodes (through network endpoint groups [NEGs]). These backends must be located in the same region as the load balancer.
Internal services with Layer 7-based load balancing

Two additional components are used to deliver the load balancing service:

  • A URL map, which defines traffic control rules (based on Layer 7 parameters such as HTTP headers) that map to specific backend services. The load balancer evaluates incoming requests against the URL map to route traffic to backend services or perform additional actions (such as redirects).
  • Health checks, which periodically check the status of backends and reduce the risk that client traffic is sent to a non-responsive backend.

For limitations specific to Internal HTTP(S) Load Balancing, see the Limitations section.

For information about how the Google Cloud load balancers differ from each other, see Choosing a load balancer.

Use cases

Internal HTTP(S) Load Balancing addresses many use cases. This section provides a few high-level examples. For additional examples, see traffic management use cases.

Load balancing using path-based routing

One common use case is load balancing traffic among services. In this example, an internal client can request video and image content by using the same base URL, mygcpservice.internal, with the paths /video and /images.

The internal HTTP(S) load balancer's URL map specifies that requests to path /video should be sent to the video backend service, while requests to path /images should be sent to the images backend service. In the following example, the video and images backend services are served by using Compute Engine VMs, but they can also be served by using GKE pods.

When an internal client sends a request to the load balancer's internal IP address, the load balancer evaluates the request according to this logic and sends the request to the correct backend service.
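The following gcloud sketch shows one way to express this routing with a regional URL map. The backend service names, URL map name, hostname, and region are hypothetical placeholders, not values taken from the diagram:

  # Create a regional URL map whose default backend serves image content.
  gcloud compute url-maps create ilb-map \
      --default-service=images-backend-service \
      --region=us-west1

  # Add a path matcher that sends /video requests to the video backend
  # service and /images requests to the images backend service.
  gcloud compute url-maps add-path-matcher ilb-map \
      --path-matcher-name=media-paths \
      --default-service=images-backend-service \
      --path-rules='/video=video-backend-service,/video/*=video-backend-service,/images=images-backend-service,/images/*=images-backend-service' \
      --new-hosts=mygcpservice.internal \
      --region=us-west1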

The following diagram illustrates this use case.

Internal (micro) services with Layer 7-based load balancing

Modernizing legacy services

Internal HTTP(S) Load Balancing can be an effective tool for modernizing legacy applications.

One example of a legacy application is a large monolithic application that you cannot easily update. In this case, you can deploy an internal HTTP(S) load balancer in front of your legacy application. You can then use the load balancer's traffic control capabilities to direct a subset of traffic to new microservices that replace the functionality that your legacy application provides.

To begin, you would configure the load balancer's URL map to route all traffic to the legacy application by default. This maintains the existing behavior. As replacement services are developed, you would update the URL map to route portions of traffic to these replacement services.

Imagine that your legacy application contains some video processing functionality that is served when internal clients send requests to /video. You could break this video service out into a separate microservice as follows:

  1. Add Internal HTTP(S) Load Balancing in front of your legacy application.
  2. Create a replacement video processing microservice.
  3. Update the load balancer's URL map so that all requests to path /video are routed to the new microservice instead of to the legacy application.

As you develop additional replacement services, you would continue to update the URL map. Over time, fewer requests would be routed to the legacy application. Eventually, replacement services would exist for all the functionality that the legacy application provided. At this point, you could retire your legacy application.
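As a rough illustration of step 3, the URL map update can be made with a single gcloud command. The URL map, backend service, and microservice names and the region below are hypothetical placeholders; finer-grained splits (for example, sending only a percentage of /video traffic to the new service while you validate it) are also possible through the URL map's route rules:

  # Route all /video requests to the new microservice; everything else
  # continues to go to the legacy application (the URL map's default).
  gcloud compute url-maps add-path-matcher legacy-ilb-map \
      --path-matcher-name=video-paths \
      --default-service=legacy-backend-service \
      --path-rules='/video=video-microservice,/video/*=video-microservice' \
      --new-hosts='*' \
      --region=us-west1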

Three-tier web services

You can use Internal HTTP(S) Load Balancing to support traditional three-tier web services. The following example shows how you can use three types of Google Cloud load balancers to scale three tiers. At each tier, the load balancer type depends on your traffic type:

  • Web tier: Traffic enters from the internet and is load balanced by using an external HTTP(S) load balancer.
  • Application tier: The application tier is scaled by using a regional internal HTTP(S) load balancer.
  • Database tier: The database tier is scaled by using an internal TCP/UDP load balancer.

The diagram shows how traffic moves through the tiers:

  1. An external HTTP(S) load balancer distributes traffic from the internet to a set of web frontend instance groups in various regions.
  2. These frontends send the HTTP(S) traffic to a set of regional, internal HTTP(S) load balancers (the subject of this overview).
  3. The internal HTTP(S) load balancers distribute the traffic to middleware instance groups.
  4. These middleware instance groups send the traffic to internal TCP/UDP load balancers, which load balance the traffic to data storage clusters.
Layer 7-based routing for internal tiers in a multi-tier app

Access examples

You can access an internal HTTP(S) load balancer in your VPC network from a connected network by using the following:

  • VPC Network Peering
  • Cloud VPN and Cloud Interconnect

For detailed examples, see Internal HTTP(S) Load Balancing and connected networks.

Architecture and resources

The following diagram shows the Google Cloud resources required for an internal HTTP(S) load balancer.

Internal HTTP(S) Load Balancing components

In the diagram above, the proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. You must create one proxy-only subnet in each region of a VPC network where you use internal HTTP(S) load balancers. All internal HTTP(S) load balancers in a given region and VPC network share the same proxy-only subnet because they share a pool of Envoy proxies. Further:

  • Proxy-only subnets are only used for Envoy proxies, not your backends.
  • Backend VMs or endpoints of all internal HTTP(S) load balancers in a region and VPC network receive connections from the proxy-only subnet.
  • The IP address of an internal HTTP(S) load balancer is not located in the proxy-only subnet. The load balancer's IP address is defined by its internal managed forwarding rule, which is described below.
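For reference, you create the proxy-only subnet once per region and network. A minimal sketch, assuming a hypothetical VPC network named lb-network in us-west1 and an arbitrary /23 range:

  # Reserved for Envoy proxies; backends do not use addresses from this range.
  gcloud compute networks subnets create proxy-only-subnet \
      --purpose=INTERNAL_HTTPS_LOAD_BALANCER \
      --role=ACTIVE \
      --region=us-west1 \
      --network=lb-network \
      --range=10.129.0.0/23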

Each internal HTTP(S) load balancer uses these Google Cloud configuration resources (a combined gcloud sketch follows this list):

  • An internal managed forwarding rule specifies an internal IP address, port, and regional target HTTP(S) proxy. Clients use the IP address and port to connect to the load balancer's Envoy proxies. The forwarding rule's IP address is the IP address of the load balancer (sometimes called a virtual IP address or VIP).

    The internal IP address associated with the forwarding rule can come from any subnet (in the same network and region) with its --purpose flag set to PRIVATE. Note that:

    • The IP address can (but does not need to) come from the same subnet as the backend instance groups.
    • The IP address must not come from the reserved proxy-only subnet that has its --purpose flag set to INTERNAL_HTTPS_LOAD_BALANCER.
  • A regional target HTTP(S) proxy terminates HTTP(S) connections from clients. The HTTP(S) proxy consults the URL map to determine how to route traffic to backends. A target HTTPS proxy uses an SSL certificate to authenticate itself to clients.

    The load balancer preserves the Host header of the original client request. The load balancer also appends two IP addresses to the X-Forwarded-For header:

    • The IP address of the client that connects to the load balancer
    • The IP address of the load balancer's forwarding rule

    If there is no X-Forwarded-For header on the incoming request, these two IP addresses are the entire header value. If the request does have an X-Forwarded-For header, other information, such as the IP addresses recorded by proxies on the way to the load balancer, is preserved before the two IP addresses. The load balancer does not verify any IP addresses that precede the last two IP addresses in this header.

    If you are running a proxy as the backend server, this proxy typically appends more information to the X-Forwarded-For header, and your software might need to take that into account. The proxied requests from the load balancer come from an IP address in the proxy-only subnet, and your proxy on the backend instance might record this address as well as the backend instance's own IP address.

  • The HTTP(S) proxy uses a regional URL map to make a routing determination based on HTTP attributes (such as the request path, cookies, or headers). Based on the routing decision, the proxy forwards client requests to specific regional backend services. The URL map can specify additional actions to take such as rewriting headers, sending redirects to clients, and configuring timeout policies (among others).

  • A regional backend service distributes requests to healthy backends (either instance groups containing Compute Engine VMs or NEGs containing GKE containers).

  • One or more backends must be connected to the backend service. Backends can be instance groups or NEGs in any of the following configurations:

    • Managed instance groups (zonal or regional)
    • Unmanaged instance groups (zonal)
    • Network endpoint groups (zonal)

    You cannot use instance groups and NEGs on the same backend service.

  • A regional health check periodically monitors the readiness of your backends. This reduces the risk that requests might be sent to backends that can't service the request.
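The following gcloud sketch shows how these resources fit together. All resource names, the region, the subnet, and the IP address are hypothetical placeholders; treat this as an outline rather than a complete procedure:

  # Regional health check used by the backend service.
  gcloud compute health-checks create http l7-ilb-health-check \
      --region=us-west1 \
      --port=80

  # Regional backend service using the INTERNAL_MANAGED scheme.
  gcloud compute backend-services create l7-ilb-backend-service \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --protocol=HTTP \
      --health-checks=l7-ilb-health-check \
      --health-checks-region=us-west1 \
      --region=us-west1

  # Attach a backend instance group.
  gcloud compute backend-services add-backend l7-ilb-backend-service \
      --instance-group=l7-ilb-backend-ig \
      --instance-group-zone=us-west1-a \
      --region=us-west1

  # Regional URL map and target proxy.
  gcloud compute url-maps create l7-ilb-map \
      --default-service=l7-ilb-backend-service \
      --region=us-west1

  gcloud compute target-http-proxies create l7-ilb-proxy \
      --url-map=l7-ilb-map \
      --url-map-region=us-west1 \
      --region=us-west1

  # Internal managed forwarding rule (the load balancer's VIP).
  gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=lb-network \
      --subnet=backend-subnet \
      --address=10.1.2.99 \
      --ports=80 \
      --region=us-west1 \
      --target-http-proxy=l7-ilb-proxy \
      --target-http-proxy-region=us-west1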

SSL certificates

If you are using HTTPS-based load balancing, you must install one or more SSL certificates on the target HTTPS proxy.

These certificates are used by target HTTPS proxies to secure communications between the load balancer and the client.
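For example, a regional self-managed certificate can be created and referenced by a regional target HTTPS proxy roughly as follows. The file names, resource names, and region are hypothetical placeholders:

  # Upload a self-managed certificate as a regional resource.
  gcloud compute ssl-certificates create l7-ilb-cert \
      --certificate=cert.pem \
      --private-key=key.pem \
      --region=us-west1

  # Reference the certificate from a regional target HTTPS proxy.
  gcloud compute target-https-proxies create l7-ilb-https-proxy \
      --url-map=l7-ilb-map \
      --url-map-region=us-west1 \
      --ssl-certificates=l7-ilb-cert \
      --region=us-west1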

For information about SSL certificate limits and quotas, see SSL certificates on the load balancing quotas page.

For the best security, use end-to-end encryption for your HTTPS load balancer deployment. For more information, see Encryption from the load balancer to the backends.

For general information about how Google encrypts user traffic, see the Encryption in Transit in Google Cloud white paper.

Firewall rules

Your internal HTTP(S) load balancer requires the following firewall rules:

  • An ingress allow rule, applicable to the instances being load balanced, that permits traffic from Google's health check ranges (130.211.0.0/22 and 35.191.0.0/16).
  • An ingress allow rule, applicable to the instances being load balanced, that permits traffic from the proxy-only subnet.
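A sketch of these two rules, assuming the hypothetical lb-network, a load-balanced-backend target tag, and the 10.129.0.0/23 proxy-only subnet range used earlier:

  # Allow Google Cloud health check probes to reach the backends.
  gcloud compute firewall-rules create fw-allow-health-check \
      --network=lb-network \
      --action=allow \
      --direction=ingress \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=load-balanced-backend \
      --rules=tcp

  # Allow proxied connections from the proxy-only subnet to the backends.
  gcloud compute firewall-rules create fw-allow-proxies \
      --network=lb-network \
      --action=allow \
      --direction=ingress \
      --source-ranges=10.129.0.0/23 \
      --target-tags=load-balanced-backend \
      --rules=tcp:80,tcp:443,tcp:8080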

Timeouts and retries

The backend service timeout is a request/response timeout for HTTP(S) traffic. This is the amount of time that the load balancer waits for a backend to return a full response to a request.

For example, if the value of the backend service timeout is the default value of 30 seconds, the backends have 30 seconds to respond to requests. The load balancer retries the HTTP GET request once if the backend closes the connection or times out before sending response headers to the load balancer. If the backend sends response headers or if the request sent to the backend is not an HTTP GET request, the load balancer does not retry. If the backend does not reply at all, the load balancer returns an HTTP 5xx response to the client. For these load balancers, change the timeout value if you want to allow more or less time for the backends to respond to requests.
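For example, a sketch of raising the timeout on a hypothetical backend service named l7-ilb-backend-service:

  # Give backends up to 620 seconds to return a complete response.
  gcloud compute backend-services update l7-ilb-backend-service \
      --region=us-west1 \
      --timeout=620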

Internal HTTP(S) Load Balancing has two distinct types of timeouts:
  • A configurable HTTP backend service timeout, which represents the amount of time the load balancer waits for your backend to return a complete HTTP response. The default value for the backend service timeout is 30 seconds. Consider increasing this timeout under any of these circumstances:

    • You expect a backend to take longer to return HTTP responses.
    • You see HTTP 408 responses with jsonPayload.statusDetail set to client_timed_out.
    • The connection is upgraded to a WebSocket.

    For WebSocket traffic sent through the load balancer, the backend service timeout is interpreted as the maximum amount of time that a WebSocket connection can remain open, whether idle or not. For more information, see Backend service settings.

  • A TCP session timeout, whose value is fixed at 10 minutes (600 seconds). This session timeout is sometimes called a keepalive or idle timeout, and its value is not configurable by modifying your backend service. You must configure the web server software used by your backends so that its keepalive timeout is longer than 600 seconds to prevent connections from being closed prematurely by the backend. This timeout does not apply to WebSockets.

The following table shows the changes necessary to modify keepalive timeouts for common web server software:

Web server software | Parameter          | Default setting         | Recommended setting
Apache              | KeepAliveTimeout   | KeepAliveTimeout 5      | KeepAliveTimeout 620
nginx               | keepalive_timeout  | keepalive_timeout 75s;  | keepalive_timeout 620s;

The load balancer retries failed GET requests in certain circumstances, such as when the backend service timeout is exhausted. It does not retry failed POST requests. Retries are limited to two attempts. Retried requests only generate one log entry for the final response.

For more information, see Internal HTTP(S) Load Balancing logging and monitoring.

WebSocket support

Google Cloud HTTP(S)-based load balancers have native support for the WebSocket protocol when you use HTTP or HTTPS as the protocol to the backend. The load balancer does not need any configuration to proxy WebSocket connections.

The WebSocket protocol provides a full-duplex communication channel between clients and servers. An HTTP(S) request initiates the channel. For detailed information about the protocol, see RFC 6455.

When the load balancer recognizes a WebSocket Upgrade request from an HTTP(S) client followed by a successful Upgrade response from the backend instance, the load balancer proxies bidirectional traffic for the duration of the current connection. If the backend instance does not return a successful Upgrade response, the load balancer closes the connection.

The timeout for a WebSocket connection depends on the configurable backend service timeout of the load balancer, which is 30 seconds by default. This timeout applies to WebSocket connections regardless of whether they are in use. For more information about the backend service timeout and how to configure it, see Timeouts and retries.

Session affinity for WebSockets works the same as for any other request. For information, see Session affinity.

Traffic types, scheme, and scope

Backend services support the HTTP, HTTPS, or HTTP/2 protocols. Clients and backends do not need to use the same request protocol. For example, clients can send requests to the load balancer by using HTTP/2, and the load balancer can forward these requests to backends by using HTTP/1.1.
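The backend protocol is a property of the backend service. As an illustrative sketch (hypothetical names, and assuming backends that accept HTTP/2 over TLS):

  # Have the load balancer speak HTTP/2 to the backends.
  gcloud compute backend-services update l7-ilb-backend-service \
      --region=us-west1 \
      --protocol=HTTP2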

Because the scope of an internal HTTP(S) load balancer is regional, not global, clients and backend VMs or endpoints must all be in the same region.

Shared VPC architectures

Internal HTTP(S) Load Balancing supports networks that use Shared VPC. If you're not already familiar with Shared VPC, read the Shared VPC overview documentation. At a high level:

  • You designate a host project and attach one or more other service projects to it.
  • The host project administrator creates one or more Shared VPC networks and subnets, and shares these with service projects.
  • Eligible resources from service projects can use subnets in the Shared VPC network.

In the context of Internal HTTP(S) Load Balancing, there are two ways to configure load balancing within a Shared VPC network. You can create the load balancer and its backend instances either in the service project or in the host project.

Load balancer and backends in a service project

In this model, you deploy the load balancer and backend instances in a service project. You then configure the load balancer and the backend instances to use the Shared VPC network.

This deployment model aligns closely with the typical use case for Shared VPC: a division of responsibility between network administration and service development. It enables network administrators to allocate internal IP space securely and efficiently, and it maintains a clear separation of responsibilities between network administrators and service developers.

Internal HTTP(S) Load Balancing on Shared VPC network

Host project

The host project administrator:

  • Sets up the Shared VPC network in the host project.
  • Provisions subnets from the Shared VPC network.
  • Configures firewall rules in the Shared VPC network.

Service project

  • The service project administrator creates the load balancer (forwarding rule, target HTTP(S) proxy, URL map, backend service(s)) and backend instances in the service project.
  • These load balancing resources and backend instances reference the shared network and subnets from the Shared VPC host project.

This pattern enables service developers to create load balanced services in their own service projects. The service development team can also update the load balancer's configuration and make changes to backend instances without involving the administrators of the host project.
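As a rough sketch of how resources in the service project reference the shared network, the forwarding rule can point at the host project's network and subnet by their full resource paths. The project IDs, resource names, and region below are hypothetical placeholders:

  # Run in the service project; the network and subnet live in the host project.
  gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
      --project=SERVICE_PROJECT_ID \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
      --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/backend-subnet \
      --ports=80 \
      --region=us-west1 \
      --target-http-proxy=l7-ilb-proxy \
      --target-http-proxy-region=us-west1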

If clients are in the same Shared VPC network as the load balancer, they can be in either the host project or a service project. Such clients can use the load balancer's private IP address to access load balanced services.

To learn how to configure an internal HTTP(S) load balancer for a Shared VPC network, see Setting up Internal HTTP(S) Load Balancing with Shared VPC.

Load balancer and backends in a host project

With this network deployment model, the network, load balancer, and backends are all in the host project. While this setup works, it is not well-suited for typical Shared VPC deployments since it does not separate network administration and service development responsibilities.

If you still need to run a load balancer and its backends in the host project, you can follow the steps in Setting up Internal HTTP(S) Load Balancing.

Limitations

  • Internal HTTP(S) Load Balancing operates at a regional level.

  • There's no guarantee that a request from a client in one zone of the region is sent to a backend that's in the same zone as the client. Session affinity doesn't reduce communication between zones.

  • Internal HTTP(S) Load Balancing isn't compatible with certain features that are available to external HTTP(S) load balancers, such as Cloud CDN and Google Cloud Armor.

  • When creating an internal HTTP(S) load balancer in a Shared VPC host or service project:

    • All load balancing components and backends must exist in the same project, either all in a host project or all in a service project. For example, you cannot deploy the load balancer's forwarding rule in one project and create backend instances in another project.

    • Clients can be located in the host project, in any attached service projects, or in any connected networks. Clients must use the same Shared VPC network and be in the same region as the load balancer.

  • An internal HTTP(S) load balancer supports HTTP/2 only over TLS.

  • Google Cloud doesn't warn you if your proxy-only subnet runs out of IP addresses.

  • Within each VPC network, each internal managed forwarding rule must have its own IP address. For more information, see Multiple forwarding rules with a common IP address.

  • The internal forwarding rule that your internal HTTP(S) load balancer uses must have exactly one port.

  • Internal HTTP(S) Load Balancing isn't currently compatible with an internal TCP/UDP load balancer as a next hop. If you configure a custom static route with an internal TCP/UDP load balancer as the next hop, your internal HTTP(S) load balancer doesn't receive the route. As a workaround, you can do the following:

    1. Configure the router VM (the routing appliance) to perform source network address translation (SNAT) when sending packets to the internal HTTP(S) load balancer.
    2. Configure the router VM to perform destination network address translation (DNAT) when routing the reply back to the requestor VM.

    For an example of SNAT and DNAT configuration, see Setting up Internal TCP/UDP Load Balancing for third-party appliances.

What's next