By using Google Cloud SSL Proxy Load Balancing for your SSL traffic, you can terminate user SSL (TLS) connections at the load balancing layer, and then balance the connections across your backend instances by using the SSL (recommended) or TCP protocols. For the types of backends that are supported, see Backends.
SSL Proxy Load Balancing is intended for non-HTTP(S) traffic. For HTTP(S) traffic, we recommend that you use HTTP(S) Load Balancing.
For information about how the Google Cloud load balancers differ from each other, see Choosing a load balancer.
SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, and then proxied over IPv4 to your VMs.
SSL Proxy Load Balancing is a load balancing service that can be deployed globally. You can deploy your backends in multiple regions, and the load balancer automatically directs traffic to the closest region that has capacity. If the closest region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.
Global load balancing requires that you use the Premium Tier of Network Service Tiers, which is the default tier. Otherwise, load balancing is handled regionally.
Following are some benefits of using SSL Proxy Load Balancing:
Intelligent routing. The load balancer can route requests to backend locations where there is capacity. In contrast, an L3/L4 load balancer must route to regional backends without considering capacity. The use of smarter routing allows provisioning at N+1 or N+2 instead of x*N.
Better utilization of backends. SSL processing can be very CPU-intensive if the ciphers used are not CPU efficient. To maximize CPU performance, use ECDSA SSL certificates and TLS 1.2, and prefer the ECDHE-ECDSA-AES128-GCM-SHA256 cipher suite for SSL between the load balancer and your backend instances.
Certificate management. Your customer-facing SSL certificates can be either certificates that you obtain and manage (self-managed certificates) or certificates that Google obtains and manages for you (Google-managed certificates). Google-managed SSL certificates each support up to 100 domains. Multiple-domain support for Google-managed certificates is a beta feature. You need to provision certificates only on the load balancer. On your VMs, you can simplify management by using self-signed certificates.
Security patching. If vulnerabilities arise in the SSL or TCP stack, we apply patches at the load balancer automatically to keep your VMs safe.
SSL Proxy Load Balancing supports the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, and 5222. When you use Google-managed SSL certificates with SSL Proxy Load Balancing, the frontend port for traffic must be 443 to enable the Google-managed SSL certificates to be provisioned and renewed.
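The port restriction above can be expressed as a simple check. The following is an illustrative sketch only; `validate_frontend_port` is a hypothetical helper, not part of any Google Cloud API:

```python
# Frontend ports that an SSL proxy load balancer's forwarding rule may use.
SSL_PROXY_PORTS = {25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 5222}

def validate_frontend_port(port: int, google_managed_cert: bool = False) -> None:
    """Raise ValueError if the port cannot be used for an SSL proxy frontend.

    Hypothetical helper for illustration; the real validation happens when
    you create the forwarding rule.
    """
    if port not in SSL_PROXY_PORTS:
        raise ValueError(f"port {port} is not supported by SSL Proxy Load Balancing")
    if google_managed_cert and port != 443:
        raise ValueError("Google-managed certificates require frontend port 443")

validate_frontend_port(443, google_managed_cert=True)  # OK
validate_frontend_port(993)                            # OK for self-managed certs
```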
SSL policies. SSL policies give you the ability to control the features of SSL that your SSL proxy load balancer negotiates with clients.
Sending traffic over unencrypted TCP between the load balancing layer and backend instances enables you to offload SSL processing from your backends; however, it also reduces security. Therefore, we do not recommend it.
SSL Proxy Load Balancing can handle HTTPS, but we don't recommend this. You should instead use HTTP(S) Load Balancing for HTTPS traffic. For more information, see the FAQ.
You can create SSL policies by using the Google Cloud Console, the gcloud command-line tool, or the REST API.
SSL proxy load balancers do not support client certificate-based authentication, also known as mutual TLS authentication.
The following sections describe how SSL Proxy Load Balancing works.
With SSL Proxy Load Balancing, SSL connections are terminated at the load balancing layer and then proxied to the closest available backend.
In this example, traffic from users in Iowa and Boston is terminated at the load balancing layer, and a separate connection is established to the selected backend.
Load balancer behavior in Network Service Tiers
SSL Proxy Load Balancing is a global service with Premium Tier. You can have only one backend service, and it can have backends in multiple regions. Traffic is allocated to backends as follows:
- When a client sends a request, the load balancing service determines the approximate origin of the request from the source IP address.
- The load balancing service determines the locations of the backends owned by the backend service, their overall capacity, and their overall current usage.
- If the closest backend instances to the user have available capacity, the request is forwarded to that closest set of backends.
- Incoming requests to the given region are distributed evenly across all available backend instances in that region. However, at very small loads, the distribution might appear to be uneven.
- If there are no healthy backend instances with available capacity in a given region, the load balancer instead sends the request to the next closest region with available capacity.
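The selection steps above can be sketched as a small function. This is an illustrative model only, assuming regions are pre-sorted by proximity to the client; the real GFE routing algorithm is not public and is considerably more involved:

```python
# Sketch of "closest region with healthy capacity" selection.
# Each entry: (region_name, healthy_backends, used_capacity, total_capacity).

def pick_region(regions_by_proximity):
    """Return the closest region that has healthy, available capacity, or None."""
    for name, healthy, used, total in regions_by_proximity:
        if healthy > 0 and used < total:
            return name
    return None  # no region can accept the connection

regions = [
    ("us-central1", 0, 0, 100),    # no healthy backends -> skipped
    ("us-east1", 4, 100, 100),     # at capacity -> skipped
    ("europe-west1", 8, 40, 100),  # next closest region with spare capacity
]
print(pick_region(regions))  # europe-west1
```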
With Standard Tier, SSL Proxy Load Balancing is a regional service. Its backends must all be located in the region used by the load balancer's external IP address and forwarding rule.
Source IP addresses
The source IP addresses for packets, as seen by each backend virtual machine (VM) instance or container, are addresses from these ranges:
- 35.191.0.0/16
- 130.211.0.0/22
The source IP addresses for actual load-balanced traffic come from the same ranges that health check probes use.
The source IP addresses for traffic, as seen by the backends, are not the Google Cloud external IP address of the load balancer. In other words, there are two HTTP, SSL, or TCP sessions:
Session 1, from original client to the load balancer (GFE):
- Source IP address: the original client (or external IP address if the client is behind NAT).
- Destination IP address: your load balancer's IP address.
Session 2, from the load balancer (GFE) to the backend VM or container:
- Source IP address: an IP address in one of these ranges: 35.191.0.0/16 or 130.211.0.0/22. You cannot predict the actual source address.
- Destination IP address: the internal IP address of the backend VM or container in the Virtual Private Cloud (VPC) network.
Geographic control over where TLS is terminated
The SSL proxy load balancer terminates TLS in locations that are distributed globally, so as to minimize latency between clients and the load balancer. If you require geographic control over where TLS is terminated, you should use Network Load Balancing instead, and terminate TLS on backends that are located in regions appropriate to your needs.
The SSL proxy load balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. The reverse proxy functionality is provided by the Google Front Ends (GFEs).
The firewall rules that you configure apply to traffic from the GFEs to the backend instances; they cannot block incoming traffic from clients to the GFEs.
The SSL proxy load balancers have a number of open ports to support other Google services that run on the same architecture. If you run a security or port scan against the external IP address of your load balancer, additional ports appear to be open.
This does not affect SSL proxy load balancers. External forwarding rules, which are used in the definition of an SSL load balancer, can only reference TCP ports 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, and 5222. Traffic with a different TCP destination port is not forwarded to the load balancer's backend.
When you add a backend to the backend service, you set a load balancing mode.
For SSL Proxy Load Balancing, the balancing mode can be CONNECTION or UTILIZATION.

If the load balancing mode is CONNECTION, the load is spread based on how many concurrent connections the backend can handle. You must also specify exactly one of the following parameters: maxConnections (except for regional managed instance groups), maxConnectionsPerInstance, or maxConnectionsPerEndpoint.

If the load balancing mode is UTILIZATION, the load is spread based on the utilization of instances in an instance group.
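A CONNECTION-mode capacity check can be sketched as follows. This is illustrative only: the names mirror the API's maxConnections and capacityScaler fields, but the real balancing algorithm weighs many more signals:

```python
# Sketch of a CONNECTION-mode capacity decision for a single backend.

def has_capacity(current_connections: int,
                 max_connections: int,
                 capacity_scaler: float = 1.0) -> bool:
    """True if the backend can accept another connection.

    The effective target is max_connections scaled by capacity_scaler
    (a fraction from 0.0 to 1.0 that you can set per backend).
    """
    target = max_connections * capacity_scaler
    return current_connections < target

print(has_capacity(80, 100))                       # True: 80 < 100
print(has_capacity(80, 100, capacity_scaler=0.5))  # False: target is 50
```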
For information about comparing the load balancer types and the supported balancing modes, see Load balancing methods.
You must install one or more SSL certificates on the target SSL proxy.
These certificates are used by target SSL proxies to secure communications between a Google Front End (GFE) and the client. These can be self-managed or Google-managed SSL certificates.
For information about SSL certificate limits and quotas, see SSL certificates on the load balancing quotas page.
For the best security, use end-to-end encryption for your SSL proxy load balancer deployment. For more information, see Encryption from the load balancer to the backends.
For general information about how Google encrypts user traffic, see the Encryption in Transit in Google Cloud white paper.
When should I use HTTP(S) Load Balancing instead of SSL Proxy Load Balancing?
Although SSL Proxy Load Balancing can handle HTTPS traffic, HTTP(S) Load Balancing has additional features that make it a better choice in most cases.
HTTP(S) Load Balancing has the following additional functionality:
- Negotiates HTTP/2 and SPDY/3.1.
- Rejects invalid HTTP requests or responses.
- Forwards requests to different VMs based on URL host and path.
- Integrates with Cloud CDN.
- Spreads the request load more evenly among backend instances, providing better backend utilization. HTTP(S) Load Balancing balances each request separately, whereas SSL Proxy Load Balancing sends all bytes from the same SSL or TCP connection to the same backend instance.
SSL Proxy Load Balancing can be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.
Can I view the original IP address of the connection to the load balancing layer?
Yes. You can configure the load balancer to prepend a PROXY protocol version 1 header to retain the original connection information. For more information, see Update proxy protocol header for the proxy.
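A PROXY protocol version 1 header is a single human-readable line prepended to the byte stream before the client's own data, in the format defined by HAProxy's proxy-protocol specification. A minimal parser sketch, assuming a well-formed TCP4/TCP6 header:

```python
# Parse a PROXY protocol version 1 header line.
# Format: "PROXY <TCP4|TCP6> <src_ip> <dst_ip> <src_port> <dst_port>\r\n"

def parse_proxy_v1(line: bytes):
    """Return (src_ip, src_port, dst_ip, dst_port) from a v1 header line."""
    text = line.decode("ascii").rstrip("\r\n")
    parts = text.split(" ")
    if len(parts) != 6 or parts[0] != "PROXY" or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("not a PROXY protocol v1 TCP header")
    _, _, src_ip, dst_ip, src_port, dst_port = parts
    return src_ip, int(src_port), dst_ip, int(dst_port)

# The source address here is the original client, which is otherwise
# invisible to your backend (it sees only the GFE's address).
header = b"PROXY TCP4 203.0.113.7 198.51.100.1 56324 443\r\n"
print(parse_proxy_v1(header))  # ('203.0.113.7', 56324, '198.51.100.1', 443)
```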