External SSL Proxy Load Balancing overview

External SSL Proxy Load Balancing is a reverse proxy load balancer that distributes SSL traffic coming from the internet to virtual machine (VM) instances in your Google Cloud VPC network.

When using External SSL Proxy Load Balancing for your SSL traffic, user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP. For the types of backends that are supported, see Backends.

With the Premium Tier, External SSL Proxy Load Balancing can be configured as a global load balancing service. With Standard Tier, the external SSL proxy load balancer handles load balancing regionally. For details, see Load balancer behavior in Network Service Tiers.

In this example, traffic from users in Iowa and Boston is terminated at the load balancing layer, and a separate connection is established to the selected backend.

Cloud Load Balancing with SSL termination

External SSL Proxy Load Balancing is intended for non-HTTP(S) traffic. For HTTP(S) traffic, we recommend that you use HTTP(S) Load Balancing.

For information about how the Google Cloud load balancers differ from each other, see the load balancer comparison documentation.

Benefits

Following are some benefits of using External SSL Proxy Load Balancing:

  • IPv6 termination. External SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, and then proxied over IPv4 to your VMs.

  • Intelligent routing. The load balancer can route requests to backend locations where there is capacity. In contrast, an L3/L4 load balancer must route to regional backends without considering capacity. The use of smarter routing allows provisioning at N+1 or N+2 instead of x*N.

  • Better utilization of backends. SSL processing can be very CPU-intensive if the ciphers used are not CPU efficient. To maximize CPU performance, use ECDSA SSL certificates and TLS 1.2, and prefer the ECDHE-ECDSA-AES128-GCM-SHA256 cipher suite for SSL between the load balancer and your backend instances.

  • Certificate management. Your customer-facing SSL certificates can be either certificates that you obtain and manage (self-managed certificates), or certificates that Google obtains and manages for you (Google-managed certificates). Google-managed SSL certificates each support up to 100 domains. You only need to provision certificates on the load balancer. On your VMs, you can simplify management by using self-signed certificates.

  • Security patching. If vulnerabilities arise in the SSL or TCP stack, we apply patches at the load balancer automatically to keep your VMs safe.

  • Support for all ports. External SSL Proxy Load Balancing allows any valid port from 1-65535. When you use Google-managed SSL certificates with External SSL Proxy Load Balancing, the frontend port for traffic must be 443 to enable the Google-managed SSL certificates to be provisioned and renewed.

  • SSL policies. SSL policies give you the ability to control the features of SSL that your external SSL proxy load balancer negotiates with clients.

  • Geographic control over where TLS is terminated. The external SSL proxy load balancer terminates TLS in locations that are distributed globally, so as to minimize latency between clients and the load balancer. If you require geographic control over where TLS is terminated, you should use Network Load Balancing instead, and terminate TLS on backends that are located in regions appropriate to your needs.

  • Integration with Google Cloud Armor. You can use Google Cloud Armor security policies to protect your infrastructure from distributed denial-of-service (DDoS) attacks and other targeted attacks.
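To make the cipher recommendation above concrete, the following is a minimal sketch of how a backend server might restrict its TLS configuration to TLS 1.2+ and the recommended ECDHE-ECDSA suite using Python's standard `ssl` module. The helper name is hypothetical, and this assumes your OpenSSL build supports the suite (standard builds do):

```python
import ssl

# Sketch: a server-side TLS context for a backend that prefers the
# CPU-efficient cipher suite recommended for SSL between the load
# balancer and backends. Function name is illustrative only.
def make_backend_tls_context() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Restrict TLS 1.2 negotiation to the recommended ECDHE-ECDSA suite.
    # (TLS 1.3 suites are configured separately by OpenSSL.)
    ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256")
    return ctx

ctx = make_backend_tls_context()
print([c["name"] for c in ctx.get_ciphers()])
```

To complete the setup you would load an ECDSA certificate and key with `ctx.load_cert_chain(...)` before serving connections.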

Architecture

The following are components of external SSL proxy load balancers.

Forwarding rules and IP addresses

Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy and a backend service.

Each forwarding rule provides a single IP address that you can use in DNS records for your application. No DNS-based load balancing is required. You can either reserve a static IP address or let Cloud Load Balancing assign one for you. We recommend that you reserve a static IP address; otherwise, you must update your DNS record with the newly assigned ephemeral IP address whenever you delete a forwarding rule and create a new one.

Each external forwarding rule that you use in an external SSL proxy load balancer can reference exactly one of the ports listed in Port specifications for forwarding rules.

An external SSL proxy load balancer supports a single port per forwarding rule. To support multiple ports, you must configure multiple forwarding rules. Multiple forwarding rules can be configured with the same virtual IP address and different ports; therefore, you can proxy multiple applications with separate custom ports to the same SSL proxy virtual IP address.
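The one-port-per-rule model can be sketched as a lookup from a (VIP, port) pair to a target proxy. All names and addresses below are hypothetical, purely to illustrate how several custom ports share one virtual IP:

```python
from typing import Optional

# Sketch: each forwarding rule maps exactly one (VIP, port) pair to a
# target proxy, so proxying several applications on the same VIP means
# one forwarding rule per port. Names here are illustrative only.
VIP = "203.0.113.10"

forwarding_rules = {
    (VIP, 443): "ssl-proxy-web",
    (VIP, 993): "ssl-proxy-imaps",
    (VIP, 995): "ssl-proxy-pop3s",
}

def route(ip: str, port: int) -> Optional[str]:
    """Return the target proxy for a (VIP, port) pair, if a rule exists."""
    return forwarding_rules.get((ip, port))

print(route(VIP, 993))   # ssl-proxy-imaps
print(route(VIP, 8443))  # None: no forwarding rule for this port
```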

Target proxies

External SSL Proxy Load Balancing terminates SSL connections from the client and creates new connections to the backends. The target proxy routes these new connections to the backend service.

By default, the original client IP address and port information is not preserved. You can preserve this information by using the PROXY protocol.

SSL certificates

You must install one or more SSL certificates on the target SSL proxy. These certificates are used by target SSL proxies to secure communications between a Google Front End (GFE) and the client. These can be self-managed or Google-managed SSL certificates. For information about SSL certificate limits and quotas, see SSL certificates on the load balancing quotas page.

You can create SSL policies to control the features of SSL that your load balancer negotiates. For details, see SSL policies overview.

Sending traffic over unencrypted TCP between the load balancing layer and backend instances enables you to offload SSL processing from your backends; however, it also reduces security. Therefore, we do not recommend it. For the best security, use end-to-end encryption for your external SSL proxy load balancer deployment. For more information, see Encryption from the load balancer to the backends.

For general information about how Google encrypts user traffic, see the Encryption in Transit in Google Cloud white paper.

Backend services

Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group or network endpoint group, and information about the backend's serving capacity. Backend serving capacity can be based on CPU or requests per second (RPS).

Each backend service specifies the health checks to perform for the available backends.

The external SSL proxy load balancer supports the following balancing modes:

  • UTILIZATION (default): instances can accept traffic if the backend utilization of the instance group is below a specified value. To set this value, use the --max-utilization parameter and pass a value between 0.0 (0%) and 1.0 (100%). Default is 0.8 (80%).
  • CONNECTION: instances can accept traffic if the number of connections is below a specified value. This value can be one of the following:
    • --max-connections: the maximum number of connections across all of the backend instances in the instance group.
    • --max-connections-per-instance: the maximum number of connections a single instance can handle. Requests are forwarded if the average number of connections for the instance group does not exceed this number.

You can specify a --max-connections or --max-connections-per-instance even if you set balancing mode to UTILIZATION. If both --max-utilization and a connection parameter are specified, the group is considered at full utilization when either limit is reached.
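A minimal sketch of the "full when either limit is reached" rule, with parameter names mirroring the gcloud flags described above (the helper itself is hypothetical):

```python
from typing import Optional

# Sketch: evaluate whether a backend group is at full capacity when both
# a UTILIZATION target and a connection limit are configured. The group
# is considered full when EITHER limit is reached.
def at_full_capacity(utilization: float,
                     connections: int,
                     max_utilization: float = 0.8,   # gcloud default
                     max_connections: Optional[int] = None) -> bool:
    if utilization >= max_utilization:
        return True
    if max_connections is not None and connections >= max_connections:
        return True
    return False

print(at_full_capacity(0.5, 900, max_connections=1000))   # False
print(at_full_capacity(0.85, 100, max_connections=1000))  # True: utilization limit hit
print(at_full_capacity(0.5, 1000, max_connections=1000))  # True: connection limit hit
```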

For more information on the backend service resource, see Backend services overview.

To ensure minimal interruptions to your users, you can enable connection draining on backend services. Such interruptions might happen when a backend is terminated, removed manually, or removed by an autoscaler. To learn more about using connection draining to minimize service interruptions, see Enabling connection draining.
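The idea behind connection draining can be sketched as follows: a backend being removed stops accepting new connections but is given a timeout to finish its existing ones. The `Backend` class below is a hypothetical stand-in, not a Google Cloud API:

```python
import time

# Sketch of connection draining: stop new connections, let existing ones
# finish within a timeout. This models the concept only.
class Backend:
    def __init__(self):
        self.draining = False
        self.active_connections = 3

    def accepts_new_connections(self) -> bool:
        return not self.draining

    def drain(self, timeout_s: float, poll_s: float = 0.01) -> bool:
        """Start draining; return True if all connections finished in time."""
        self.draining = True
        deadline = time.monotonic() + timeout_s
        while self.active_connections > 0:
            if time.monotonic() >= deadline:
                return False  # timeout reached; remaining connections are cut
            self.active_connections -= 1  # stand-in for a connection closing
            time.sleep(poll_s)
        return True

b = Backend()
print(b.drain(timeout_s=1.0))        # True: all connections finished in time
print(b.accepts_new_connections())   # False: backend stays out of rotation
```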

Backends and VPC networks

All backends must be located in the same project but can be located in different VPC networks. The different VPC networks do not need to be connected using VPC Network Peering because GFE proxy systems communicate directly with backends in their respective VPC networks.

Firewall rules

External TCP Proxy Load Balancing and External SSL Proxy Load Balancing require the following firewall rules:

  • An ingress allow firewall rule to permit traffic from Google Front Ends (GFEs) to reach your backends.

  • An ingress allow firewall rule to permit traffic from the health check probe ranges to reach your backends. For more information about health check probes and why it's necessary to allow traffic from them, see Probe IP ranges and firewall rules.

Firewall rules are implemented at the VM instance level, not on GFE proxies. You cannot use Google Cloud firewall rules to prevent traffic from reaching the load balancer.

The ports for these firewall rules must be configured as follows:

  • Allow traffic to the destination port for each backend service's health check.

  • For instance group backends: Determine the ports to be configured by the mapping between the backend service's named port and the port numbers associated with that named port on each instance group. The port numbers can vary among instance groups assigned to the same backend service.

  • For GCE_VM_IP_PORT NEG backends: Allow traffic to the port numbers of the endpoints.

You must allow traffic from the following source IP address ranges:

  • 130.211.0.0/22
  • 35.191.0.0/16

These ranges apply to health checks and requests from the GFE.
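As a quick sketch, checking whether a packet's source address falls in these two ranges can be done with Python's standard `ipaddress` module (the helper name is illustrative):

```python
import ipaddress

# Sketch: test whether a source address belongs to the GFE and
# health-check ranges that ingress firewall rules must allow.
GFE_RANGES = [
    ipaddress.ip_network("130.211.0.0/22"),
    ipaddress.ip_network("35.191.0.0/16"),
]

def is_gfe_source(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in GFE_RANGES)

print(is_gfe_source("130.211.1.5"))   # True
print(is_gfe_source("35.191.200.9"))  # True
print(is_gfe_source("192.0.2.1"))     # False
```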

Source IP addresses

The source IP address for packets, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two TCP connections.

  • Connection 1, from original client to the load balancer (GFE):

    • Source IP address: the original client (or external IP address if the client is behind NAT or a forward proxy).
    • Destination IP address: your load balancer's IP address.
  • Connection 2, from the load balancer (GFE) to the backend VM or endpoint:

    • Source IP address: an IP address in one of the ranges specified in Firewall rules.

    • Destination IP address: the internal IP address of the backend VM or container in the VPC network.

Preserving client source IP addresses

To preserve the original source IP addresses of incoming connections to the load balancer, you can configure the load balancer to prepend a PROXY protocol version 1 header to retain the original connection information. For more information, see Update proxy protocol header for the proxy.
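For illustration, a backend can recover the original connection information by parsing the PROXY protocol version 1 header, which is a single ASCII line prepended to the stream. This is a minimal sketch, not a full implementation of the protocol (it omits the `UNKNOWN` family, for example); the addresses are examples only:

```python
# Sketch: parse a PROXY protocol v1 header as received by a backend to
# recover the original client's IP address and port.
def parse_proxy_v1(data: bytes):
    """Return (client_ip, client_port, dest_ip, dest_port) from a v1 header."""
    header, _, _payload = data.partition(b"\r\n")
    parts = header.decode("ascii").split(" ")
    if parts[0] != "PROXY" or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("not a PROXY protocol v1 header")
    src_ip, dst_ip = parts[2], parts[3]
    src_port, dst_port = int(parts[4]), int(parts[5])
    return src_ip, src_port, dst_ip, dst_port

line = b"PROXY TCP4 198.51.100.7 203.0.113.10 54321 443\r\npayload..."
print(parse_proxy_v1(line))  # ('198.51.100.7', 54321, '203.0.113.10', 443)
```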

Open ports

The external SSL proxy load balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. These load balancers are implemented using Google Front End (GFE) proxies worldwide.

GFEs have several open ports to support other Google services that run on the same architecture. To see a list of some of the ports likely to be open on GFEs, see Forwarding rule: Port specifications. There might be other open ports for other Google services running on GFEs.

Running a port scan on the IP address of a GFE-based load balancer is not useful from an auditing perspective for the following reasons:

  • A port scan (for example, with nmap) generally expects no response packet or a TCP RST packet when performing TCP SYN probing. GFEs will send SYN-ACK packets in response to SYN probes only for ports on which you have configured a forwarding rule and on ports 80 and 443 if your load balancer uses a Premium Tier IP address. GFEs only send packets to your backends in response to packets sent to your load balancer's IP address and the destination port configured on its forwarding rule. Packets sent to different load balancer IP addresses or your load balancer's IP address on a port not configured in your forwarding rule do not result in packets being sent to your load balancer's backends.

  • Packets sent to the IP address of your load balancer could be answered by any GFE in Google's fleet; however, scanning a load balancer IP address and destination port combination only interrogates a single GFE per TCP connection. The IP address of your load balancer is not assigned to a single device or system. Thus, scanning the IP address of a GFE-based load balancer does not scan all the GFEs in Google's fleet.

With that in mind, the following are some more effective ways to audit the security of your backend instances:

  • A security auditor should inspect the forwarding rule configuration for the load balancer. The forwarding rules define the destination port for which your load balancer accepts packets and forwards them to the backends. For GFE-based load balancers, each external forwarding rule can only reference a single destination TCP port.

  • A security auditor should inspect the firewall rule configuration applicable to backend VMs. The firewall rules that you set block traffic from the GFEs to the backend VMs, but do not block incoming traffic to the GFEs. For best practices, see the firewall rules section.

Shared VPC architecture

External SSL Proxy Load Balancing supports networks that use Shared VPC. Shared VPC lets you maintain a clear separation of responsibilities between network administrators and service developers. Your development teams can focus on building services in service projects, and the network infrastructure teams can provision and administer load balancing. If you're not already familiar with Shared VPC, read the Shared VPC overview documentation.

The following summarizes where each component must be defined in a Shared VPC deployment:

  • IP address: The external IP address must be defined in the same project as the load balancer.
  • Forwarding rule: The external forwarding rule must be defined in the same project as the backend instances (the service project).
  • Target proxy: The target SSL proxy must be defined in the same project as the backend instances.
  • Backend components: A global backend service must be defined in the same project as the backend instances. These instances must be in instance groups attached to the backend service as backends. Health checks associated with backend services must be defined in the same project as the backend service.

Traffic distribution

The way an external SSL proxy load balancer distributes traffic to its backends depends on the balancing mode and the hashing method selected to choose a backend (session affinity).

How connections are distributed

External SSL Proxy Load Balancing can be configured as a global load balancing service with Premium Tier, and as a regional service in the Standard Tier.

For Premium Tier:

  • Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.
  • If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user. Details for the process are documented on this page.

For Standard Tier:

  • Google advertises your load balancer's IP address from points of presence associated with the forwarding rule's region. The load balancer uses a regional external IP address.

  • You can configure backends in the same region as the forwarding rule. The process documented here still applies, but the load balancer only directs requests to healthy backends in that one region.

Request distribution process:

  1. The forwarding rule's external IP address is advertised by edge routers at the borders of Google's network. Each advertisement lists a next hop to a Layer 3/4 load balancing system (Maglev).
  2. The Maglev systems route traffic to a first-layer Google Front End (GFE). The first-layer GFE terminates TLS if required and then routes traffic to second-layer GFEs according to this process:
    1. If a backend service uses instance group or GCE_VM_IP_PORT NEG backends, the first-layer GFEs prefer second-layer GFEs that are located in or near the region that contains the instance group or NEG.
    2. For backend buckets and backend services with hybrid NEGs, serverless NEGs, and internet NEGs, the first-layer GFEs choose second-layer GFEs in a subset of regions such that the round trip time between the two GFEs is minimized.

      Second-layer GFE preference is not a guarantee, and it can dynamically change based on Google's network conditions and maintenance.

      Second-layer GFEs are aware of health check status and actual backend capacity usage.

  3. The second-layer GFE directs requests to backends in zones within its region.
  4. For Premium Tier, sometimes second-layer GFEs send requests to backends in zones of different regions. This behavior is called spillover.
  5. Spillover is governed by two conditions:

    • All backends known to a second-layer GFE are at capacity or are unhealthy.
    • The second-layer GFE has information about healthy, available backends in zones of a different region.

    The second-layer GFEs are typically configured to serve a subset of backend locations.

    Spillover behavior does not exhaust all possible Google Cloud zones. If you need to direct traffic away from backends in a particular zone or in an entire region, you must set the capacity scaler to zero. Configuring backends to fail health checks does not guarantee that the second-layer GFE spills over to backends in zones of a different region.

  6. When distributing requests to backends, GFEs operate at a zonal level.
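The selection and spillover behavior described in the steps above can be sketched as follows. This is a deliberately simplified model, not the actual GFE algorithm; the backend records and region names are hypothetical:

```python
# Simplified sketch: a second-layer GFE prefers healthy backends with
# spare capacity in its own region and, in Premium Tier, can spill over
# to another region when none are available. Data here is illustrative.
def pick_backend(backends, home_region, premium_tier=True):
    def usable(b):
        return b["healthy"] and b["connections"] < b["max_connections"]

    local = [b for b in backends if b["region"] == home_region and usable(b)]
    if local:
        return local[0]
    if premium_tier:  # spillover to zones of a different region
        remote = [b for b in backends if usable(b)]
        if remote:
            return remote[0]
    return None  # all backends at capacity or unhealthy: traffic is dropped

backends = [
    {"name": "us-a", "region": "us-central1", "healthy": False,
     "connections": 0, "max_connections": 100},
    {"name": "eu-a", "region": "europe-west1", "healthy": True,
     "connections": 10, "max_connections": 100},
]
print(pick_backend(backends, "us-central1"))  # spills over to eu-a
```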

Balancing mode

When you add a backend to the backend service, you set a load balancing mode.

For External SSL Proxy Load Balancing, the balancing mode can be CONNECTION or UTILIZATION.

If the load balancing mode is CONNECTION, the load is spread based on how many concurrent connections the backend can handle. You must also specify exactly one of the following parameters: maxConnections (except for regional managed instance groups), maxConnectionsPerInstance, or maxConnectionsPerEndpoint.

If the load balancing mode is UTILIZATION, the load is spread based on the utilization of instances in an instance group.

For information about comparing the load balancer types and the supported balancing modes, see Load balancing methods.

Session affinity

Session affinity sends all requests from the same client to the same backend, if the backend is healthy and has capacity.

External SSL Proxy Load Balancing offers client IP affinity, which forwards all requests from the same client IP address to the same backend.
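Conceptually, client IP affinity behaves like a stable hash from the client address to a backend, so repeated connections from the same address reach the same backend. The sketch below models the idea only (the real implementation is Google-internal and also accounts for health and capacity):

```python
import hashlib

# Sketch: model client IP affinity as a deterministic hash of the client
# address over the list of backends. Names are illustrative only.
def pick_backend_for_client(client_ip: str, backends: list) -> str:
    digest = hashlib.sha256(client_ip.encode("ascii")).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["backend-a", "backend-b", "backend-c"]
first = pick_backend_for_client("198.51.100.7", backends)
# Repeated requests from the same client IP land on the same backend.
assert first == pick_backend_for_client("198.51.100.7", backends)
print(first)
```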

Failover

If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region. If all backends within a region are unhealthy, traffic is distributed to healthy backends in other regions (Premium Tier only). If all backends are unhealthy, the load balancer drops traffic.

Load balancing for GKE applications

If you are building applications in Google Kubernetes Engine, you can use standalone NEGs to load balance traffic directly to containers. With standalone NEGs you are responsible for creating the Service object that creates the NEG, and then associating the NEG with the backend service so that the load balancer can connect to the Pods.

Limitations

  • Each external SSL proxy load balancer has a single backend service resource. Changes to the backend service are not instantaneous. It can take several minutes for changes to propagate to Google Front Ends (GFEs).

  • External SSL proxy load balancers do not support client certificate-based authentication, also known as mutual TLS authentication.

  • Although External SSL Proxy Load Balancing can handle HTTPS traffic, we don't recommend this. You should instead use HTTP(S) Load Balancing for HTTPS traffic. HTTP(S) Load Balancing also does the following, which makes it a better choice in most cases:

    • Negotiates HTTP/2 and HTTP/3.
    • Rejects invalid HTTP requests or responses.
    • Forwards requests to different VMs based on URL host and path.
    • Integrates with Cloud CDN.
    • Spreads the request load more evenly among backend instances, providing better backend utilization. HTTP(S) Load Balancing balances each request separately, whereas External SSL Proxy Load Balancing sends all bytes from the same SSL or TCP connection to the same backend instance.
  • For external SSL proxy load balancers with Google-managed SSL certificates, the frontend ports must include 443 for the certificates to be provisioned and renewed successfully.

    External SSL Proxy Load Balancing can be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.

  • External SSL proxy load balancers support only lowercase characters in domains in a common name (CN) attribute or a subject alternative name (SAN) attribute of the certificate. Certificates with uppercase characters in domains are returned only when set as the primary certificate in the target proxy.

What's next