This document introduces the concepts that you need to understand to configure a Google Cloud external proxy Network Load Balancer.
The external proxy Network Load Balancer is a reverse proxy load balancer that distributes TCP traffic coming from the internet to virtual machine (VM) instances in your Google Cloud Virtual Private Cloud (VPC) network. When using an external proxy Network Load Balancer, incoming TCP or SSL traffic is terminated at the load balancer. A new connection then forwards traffic to the closest available backend by using either TCP or SSL (recommended). For more use cases, see Proxy Network Load Balancer overview.
External proxy Network Load Balancers let you use a single IP address for all users worldwide. The load balancer automatically routes traffic to the backends that are closest to the user.
In this example, SSL traffic from users in City A and City B is terminated at the load balancing layer, and a separate connection is established to the selected backend.
Modes of operation
You can configure an external proxy Network Load Balancer in the following modes:
- A classic proxy Network Load Balancer is implemented on globally distributed Google Front Ends (GFEs). This load balancer can be configured to handle either TCP or SSL traffic by using either a target TCP proxy or a target SSL proxy respectively. With the Premium Tier, this load balancer can be configured as a global load balancing service. With Standard Tier, this load balancer is configured as a regional load balancing service. Classic proxy Network Load Balancers can also be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.
- A global external proxy Network Load Balancer is implemented on globally distributed GFEs and supports advanced traffic management capabilities. This load balancer can be configured to handle either TCP or SSL traffic by using either a target TCP proxy or a target SSL proxy respectively. This load balancer is configured as a global load balancing service with the Premium Tier. Global external proxy Network Load Balancers can also be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.
- A regional external proxy Network Load Balancer is implemented on the open source Envoy proxy software stack. It can handle only TCP traffic. This load balancer is configured as a regional load balancing service that can use either Premium or Standard Tier.
Identify the mode
To determine the mode of a load balancer, complete the following steps.
Console
In the Google Cloud console, go to the Load balancing page.
On the Load Balancers tab, the load balancer type, protocol, and region are displayed. If the region is blank, then the load balancer is global.
The following table summarizes how to identify the mode of the load balancer.
Load balancer mode | Load balancer type | Access type | Region |
---|---|---|---|
Classic proxy Network Load Balancer | Network (Proxy classic) | External | |
Global external proxy Network Load Balancer | Network (Proxy) | External | |
Regional external proxy Network Load Balancer | Network (Proxy) | External | Specifies a region |
gcloud
Use the gcloud compute forwarding-rules describe command:

gcloud compute forwarding-rules describe FORWARDING_RULE_NAME
In the command output, check the load balancing scheme, region, and network tier. The following table summarizes how to identify the mode of the load balancer.
Load balancer mode | Load balancing scheme | Forwarding rule | Network tier |
---|---|---|---|
Classic proxy Network Load Balancer | EXTERNAL | Global | Standard or Premium |
Global external proxy Network Load Balancer | EXTERNAL_MANAGED | Global | Premium |
Regional external proxy Network Load Balancer | EXTERNAL_MANAGED | Regional | Standard or Premium |
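If you only want the fields needed to identify the mode, you can filter the describe output with a format expression. The following is a minimal sketch; FORWARDING_RULE_NAME is a placeholder, and a regional forwarding rule would use --region instead of --global:

```
# Print only the load balancing scheme and network tier of the forwarding rule.
gcloud compute forwarding-rules describe FORWARDING_RULE_NAME \
    --global \
    --format="get(loadBalancingScheme,networkTier)"
```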
Architecture
The following diagrams show the components of external proxy Network Load Balancers.
Global
This diagram shows the components of a global external proxy Network Load Balancer deployment. This architecture applies to both the global external proxy Network Load Balancer and classic proxy Network Load Balancer in Premium Tier.
Regional
This diagram shows the components of a regional external proxy Network Load Balancer deployment.
The following are components of external proxy Network Load Balancers.
Proxy-only subnet
Note: Proxy-only subnets are only required for regional external proxy Network Load Balancers.

The proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. You must create one proxy-only subnet in each region of a VPC network where you use load balancers. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional Envoy-based load balancers in the same region and VPC network share a pool of Envoy proxies from the same proxy-only subnet.

Backend VMs or endpoints of all load balancers in a region and a VPC network receive connections from the proxy-only subnet.
Points to remember:
- Proxy-only subnets are only used for Envoy proxies, not your backends.
- The IP address of the load balancer is not located in the proxy-only subnet. The load balancer's IP address is defined by its external managed forwarding rule.
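For reference, creating a proxy-only subnet for a regional Envoy-based load balancer might look like the following sketch; the subnet name, network, region, and IP range are placeholder values:

```
# Reserve a proxy-only subnet that Google uses to run Envoy proxies in this region.
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23
```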
Forwarding rules and IP addresses
Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration that consists of a target proxy and a backend service.
IP address specification. Each forwarding rule references a single IP address that you can use in DNS records for your application. You can either reserve a static IP address or let Cloud Load Balancing assign one for you. We recommend that you reserve a static IP address. Otherwise, you must update your DNS record with the newly assigned ephemeral IP address whenever you delete a forwarding rule and create a new one.
Port specification. External forwarding rules used in the definition of this load balancer can reference exactly one port from 1-65535. If you want to support multiple consecutive ports, you need to configure multiple forwarding rules. Multiple forwarding rules can be configured with the same virtual IP address and different ports; therefore, you can proxy multiple applications with separate custom ports to the same TCP proxy virtual IP address. For more details, see Port specifications for forwarding rules.
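As a rough sketch of both points, the following commands reserve a static IP address and attach two forwarding rules with different ports to it. The resource names and ports are placeholders, and the EXTERNAL_MANAGED scheme shown here assumes a global external proxy Network Load Balancer (a classic proxy Network Load Balancer uses EXTERNAL):

```
# Reserve a static external IPv4 address so DNS records don't need updating later.
gcloud compute addresses create tcp-lb-static-ip \
    --ip-version=IPV4 \
    --global

# Two forwarding rules share the same IP address but listen on different ports.
gcloud compute forwarding-rules create tcp-lb-rule-443 \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --address=tcp-lb-static-ip \
    --target-tcp-proxy=tcp-lb-proxy \
    --ports=443

gcloud compute forwarding-rules create tcp-lb-rule-995 \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --address=tcp-lb-static-ip \
    --target-tcp-proxy=tcp-lb-proxy \
    --ports=995
```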
The following table shows the forwarding rule requirements for external proxy Network Load Balancers.
Load balancer mode | Network Service Tier | Forwarding rule, IP address, and load balancing scheme | Routing from the internet to the load balancer frontend |
---|---|---|---|
Classic proxy Network Load Balancer | Premium Tier | Global external forwarding rule; global external IP address; load balancing scheme: EXTERNAL | Requests are routed to the GFEs that are closest to the client on the internet. |
Classic proxy Network Load Balancer | Standard Tier | Regional external forwarding rule; regional external IP address; load balancing scheme: EXTERNAL | Requests are routed to a GFE in the load balancer's region. |
Global external proxy Network Load Balancer | Premium Tier | Global external forwarding rule; global external IP address; load balancing scheme: EXTERNAL_MANAGED | Requests are routed to the GFEs that are closest to the client on the internet. |
Regional external proxy Network Load Balancer | Premium and Standard Tier | Regional external forwarding rule; regional external IP address; load balancing scheme: EXTERNAL_MANAGED | Requests are routed to the Envoy proxies in the same region as the load balancer. |
Forwarding rules and VPC networks
This section describes how forwarding rules used by external proxy Network Load Balancers are associated with VPC networks.
Load balancer mode | VPC network association |
---|---|
Global external proxy Network Load Balancer, Classic proxy Network Load Balancer | No associated VPC network. The forwarding rule always uses an IP address that is outside the VPC network; therefore, the forwarding rule has no associated VPC network. |
Regional external proxy Network Load Balancer | The forwarding rule's VPC network is the network where the proxy-only subnet has been created. You specify the network when you create the forwarding rule. |
Target proxies
External proxy Network Load Balancers terminate connections from the client and create new connections to the backends. The target proxy routes these new connections to the backend service.
Depending on the type of traffic your application needs to handle, you can configure an external proxy Network Load Balancer with either a target TCP proxy or a target SSL proxy.
- Target TCP proxy: Configure the load balancer with a target TCP proxy if you're expecting TCP traffic.
- Target SSL proxy: Configure the load balancer with a target SSL proxy if you're expecting encrypted client traffic. This type of load balancer is intended for non-HTTP(S) traffic only. For HTTP(S) traffic, we recommend that you use an external Application Load Balancer.
By default, the target proxy does not preserve the original client IP address and port information. You can preserve this information by enabling the PROXY protocol on the target proxy.
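For example, a target TCP proxy with the PROXY protocol enabled could be created as follows; the proxy and backend service names are placeholders:

```
# The PROXY_V1 header passes the original client IP address and port to the backends.
gcloud compute target-tcp-proxies create tcp-lb-proxy \
    --backend-service=tcp-lb-backend-service \
    --proxy-header=PROXY_V1
```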
The following table shows the target proxy requirements for external proxy Network Load Balancers.
Load balancer mode | Network Service Tier | Target proxy |
---|---|---|
Classic proxy Network Load Balancer | Premium Tier | targetTcpProxies or targetSslProxies |
Classic proxy Network Load Balancer | Standard Tier | targetTcpProxies or targetSslProxies |
Global external proxy Network Load Balancer | Premium Tier | targetTcpProxies or targetSslProxies |
Regional external proxy Network Load Balancer | Premium and Standard Tier | regionTargetTcpProxies |
SSL certificates
SSL certificates are only required if you're deploying a global external proxy Network Load Balancer or a classic proxy Network Load Balancer with a target SSL proxy.
External proxy Network Load Balancers using target SSL proxies require private keys and SSL certificates as part of the load balancer configuration.
Google Cloud provides two configuration methods for assigning private keys and SSL certificates to target SSL proxies: Compute Engine SSL certificates and Certificate Manager. For a description of each configuration, see Certificate configuration methods in the SSL certificates overview.
Google Cloud provides two certificate types: Self-managed and Google-managed. For a description of each type, see Certificate types in the SSL certificates overview.
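As a minimal sketch using the Compute Engine SSL certificate method with a self-managed certificate (file and resource names are placeholders):

```
# Upload a self-managed certificate and private key as a Compute Engine SSL certificate.
gcloud compute ssl-certificates create ssl-lb-cert \
    --certificate=cert.pem \
    --private-key=key.pem \
    --global

# Attach the certificate to an existing target SSL proxy.
gcloud compute target-ssl-proxies update ssl-lb-proxy \
    --ssl-certificates=ssl-lb-cert
```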
Backend services
Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group or network endpoint group (NEG) and information about the backend's serving capacity. Backend serving capacity can be based on connections or utilization.
Each load balancer has a single backend service resource that specifies the health check to be performed for the available backends.
Changes made to the backend service are not instantaneous. It can take several minutes for changes to propagate to GFEs. To ensure minimal interruptions to your users, you can enable connection draining on backend services. Such interruptions might happen when a backend is terminated, removed manually, or removed by an autoscaler. To learn more about using connection draining to minimize service interruptions, see Enabling connection draining.
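For example, connection draining could be enabled on a global backend service with a command like the following; the backend service name and the 300-second timeout are placeholder values:

```
# Allow in-flight connections up to 300 seconds to complete before a backend is removed.
gcloud compute backend-services update tcp-lb-backend-service \
    --global \
    --connection-draining-timeout=300
```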
For more information about the backend service resource, see Backend services overview.
The following table specifies the different backends supported on the backend service of external proxy Network Load Balancers.
Load balancer mode | Instance groups | Zonal NEGs | Internet NEGs | Serverless NEGs | Hybrid NEGs | Private Service Connect NEGs | GKE |
---|---|---|---|---|---|---|---|
Classic proxy Network Load Balancer | | | | | | | Use standalone zonal NEGs |
Global external proxy Network Load Balancer | * | GCE_VM_IP_PORT type endpoints * | | | | | |
Regional external proxy Network Load Balancer | | GCE_VM_IP_PORT type endpoints | Regional NEGs only | | | | Add a Private Service Connect NEG |

* Global external proxy Network Load Balancers support IPv4 and IPv6 (dual stack) instance groups and zonal NEG backends with GCE_VM_IP_PORT endpoints.
Backends and VPC networks
For the global external proxy Network Load Balancer and the classic proxy Network Load Balancer, all backend instances from instance group backends and all backend endpoints from NEG backends must be located in the same project. However, an instance group backend or a NEG can use a different VPC network in that project. The different VPC networks don't need to be connected using VPC Network Peering because GFEs communicate directly with backends in their respective VPC networks.
For the regional external proxy Network Load Balancer, this depends on the type of backend.
For instance groups, zonal NEGs, and hybrid connectivity NEGs, all backends must be located in the same project and region as the backend service. However, a load balancer can reference a backend that uses a different VPC network in the same project as the backend service (this capability is in Preview). Connectivity between the load balancer's VPC network and the backend VPC network can be configured using VPC Network Peering, Cloud VPN tunnels, Cloud Interconnect VLAN attachments, or a Network Connectivity Center framework.
Backend network definition
- For zonal NEGs and hybrid NEGs, you explicitly specify the VPC network when you create the NEG.
- For managed instance groups, the VPC network is defined in the instance template.
- For unmanaged instance groups, the instance group's VPC network is set to match the VPC network of the nic0 interface for the first VM added to the instance group.
Backend network requirements
Your backend's network must satisfy one of the following network requirements:
- The backend's VPC network must exactly match the forwarding rule's VPC network.
- The backend's VPC network must be connected to the forwarding rule's VPC network using VPC Network Peering. You must configure subnet route exchanges to allow communication between the proxy-only subnet in the forwarding rule's VPC network and the subnets used by the backend instances or endpoints.
- Both the backend's VPC network and the forwarding rule's VPC network must be VPC spokes on the same Network Connectivity Center hub. Import and export filters must allow communication between the proxy-only subnet in the forwarding rule's VPC network and the subnets used by backend instances or endpoints.
- For all other backend types, all backends must be located in the same VPC network and region.
Backends and network interfaces
If you use instance group backends, packets are always delivered to nic0. If you want to send packets to different NICs, use NEG backends instead.
If you use zonal NEG backends, packets are sent to whatever network interface is represented by the endpoint in the NEG. The NEG endpoints must be in the same VPC network as the NEG's explicitly defined VPC network.
Protocol for communicating with the backends
When you configure a backend service for an external proxy Network Load Balancer, you set the protocol that the backend service uses to communicate with the backends.
- For classic proxy Network Load Balancers, you can choose either TCP or SSL.
- For global external proxy Network Load Balancers, you can choose either TCP or SSL.
- For regional external proxy Network Load Balancers, you can use TCP.
The load balancer uses only the protocol that you specify, and does not attempt to negotiate a connection with the other protocol.
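For reference, a global backend service that uses TCP to reach its backends might be created like this; the backend service and health check names are placeholders, and EXTERNAL_MANAGED assumes a global external proxy Network Load Balancer:

```
# Create a global backend service that communicates with backends over TCP.
gcloud compute backend-services create tcp-lb-backend-service \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=TCP \
    --health-checks=tcp-basic-health-check
```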
Firewall rules
The following firewall rules are required:
- For classic proxy Network Load Balancers, an ingress allow firewall rule to permit traffic from GFEs to reach your backends.
- For global external proxy Network Load Balancers, an ingress allow firewall rule to permit traffic from GFEs to reach your backends.
- For regional external proxy Network Load Balancers, an ingress allow firewall rule to permit traffic from the proxy-only subnet to reach your backends.
- An ingress allow firewall rule to permit traffic from the health check probe ranges to reach your backends. For more information about health check probes and why it's necessary to allow traffic from them, see Probe IP ranges and firewall rules.
Firewall rules are implemented at the VM instance level, not at the GFE proxy level. You cannot use firewall rules to prevent traffic from reaching the load balancer.
The ports for these firewall rules must be configured as follows:
- Allow traffic to the destination port for each backend service's health check.
- For instance group backends: determine the ports to be configured by the mapping between the backend service's named port and the port numbers associated with that named port on each instance group. Port numbers can vary between instance groups assigned to the same backend service.
- For GCE_VM_IP_PORT zonal NEG backends: allow traffic to the port numbers of the endpoints.
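A sketch of such a rule for a GFE-based load balancer follows. The rule name, network, target tag, and port are placeholders; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented GFE and health check probe ranges, and a regional load balancer would also need a rule that allows the proxy-only subnet's range:

```
# Allow GFE and health check probe traffic to reach tagged backend VMs on port 443.
gcloud compute firewall-rules create fw-allow-gfe-and-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=tcp-lb-backend
```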
The following table summarizes the required source IP address ranges for the firewall rules.
Load balancer mode | Health check source ranges | Request source ranges |
---|---|---|
Global external proxy Network Load Balancer | Google's health check probe ranges, 35.191.0.0/16 and 130.211.0.0/22, plus Google's IPv6 probe ranges for IPv6 traffic to the backends (see Probe IP ranges and firewall rules). | The source of GFE traffic depends on the backend type. |
Classic proxy Network Load Balancer | 35.191.0.0/16 and 130.211.0.0/22. | The same ranges; they apply to both health check probes and requests from the GFE. |
Regional external proxy Network Load Balancer *, † | Google's health check probe ranges, 35.191.0.0/16 and 130.211.0.0/22, plus Google's IPv6 probe ranges for IPv6 traffic to the backends. These ranges apply to health check probes. | The proxy-only subnet. |
* Allowlisting Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allowlist the Google health check probe ranges for the zonal NEGs.
† For regional internet NEGs, health checks are optional. Traffic from load balancers using regional internet NEGs originates from the proxy-only subnet and is then NAT-translated (by using Cloud NAT) to either manual or auto-allocated NAT IP addresses. This traffic includes both health check probes and user requests from the load balancer to the backends. For details, see Regional NEGs: Use Cloud NAT to egress.
Source IP addresses
The source IP address for packets, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two TCP connections.
For classic proxy Network Load Balancers and global external proxy Network Load Balancers:

Connection 1, from the original client to the load balancer (GFE):
- Source IP address: the IP address of the original client (or the external IP address if the client is behind NAT or a forward proxy).
- Destination IP address: your load balancer's IP address.
Connection 2, from the load balancer (GFE) to the backend VM or endpoint:
- Source IP address: an IP address in one of the ranges specified in Firewall rules.
- Destination IP address: the internal IP address of the backend VM or container in the VPC network.
For regional external proxy Network Load Balancers:

Connection 1, from the original client to the load balancer (proxy-only subnet):
- Source IP address: the IP address of the original client (or the external IP address if the client is behind NAT or a forward proxy).
- Destination IP address: your load balancer's IP address.
Connection 2, from the load balancer (proxy-only subnet) to the backend VM or endpoint:
- Source IP address: an IP address in the proxy-only subnet that is shared among all the Envoy-based load balancers deployed in the same region and network as the load balancer.
- Destination IP address: the internal IP address of the backend VM or container in the VPC network.
Open ports
External proxy Network Load Balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. These load balancers are implemented by using Google Front End (GFE) proxies worldwide.
GFEs have several open ports to support other Google services that run on the same architecture. When you run a port scan, you might see other open ports for other Google services running on GFEs.
Running a port scan on the IP address of a GFE-based load balancer is not useful from an auditing perspective for the following reasons:
- A port scan (for example, with nmap) generally expects no response packet or a TCP RST packet when performing TCP SYN probing. GFEs send SYN-ACK packets in response to SYN probes only for ports on which you have configured a forwarding rule.
- GFEs only send packets to your backends in response to packets sent to your load balancer's IP address and the destination port configured on its forwarding rule. Packets that are sent to a different IP address or port are not sent to your backends.
- GFEs implement security features such as Google Cloud Armor. With Google Cloud Armor Standard, GFEs provide always-on protection from volumetric and protocol-based DDoS attacks and SYN floods. This protection is available even if you haven't explicitly configured Google Cloud Armor. You'll only be charged if you configure security policies, or if you enroll in Managed Protection Plus.
- Packets sent to the IP address of your load balancer could be answered by any GFE in Google's fleet; however, scanning a load balancer IP address and destination port combination only interrogates a single GFE per TCP connection. The IP address of your load balancer is not assigned to a single device or system. Thus, scanning the IP address of a GFE-based load balancer does not scan all the GFEs in Google's fleet.
With that in mind, the following are some more effective ways to audit the security of your backend instances:
- A security auditor should inspect the forwarding rule configuration for the load balancer. The forwarding rules define the destination port for which your load balancer accepts packets and forwards them to the backends. For GFE-based load balancers, each external forwarding rule can only reference a single destination TCP port.
- A security auditor should inspect the firewall rule configuration applicable to backend VMs. The firewall rules that you set block traffic from the GFEs to the backend VMs, but don't block incoming traffic to the GFEs. For best practices, see the firewall rules section.
Shared VPC architecture
Regional external proxy Network Load Balancers and classic proxy Network Load Balancers support deployments that use Shared VPC networks. Shared VPC lets you maintain a clear separation of responsibilities between network administrators and service developers. Your development teams can focus on building services in service projects, and the network infrastructure teams can provision and administer load balancing. If you're not already familiar with Shared VPC, read the Shared VPC overview documentation.
IP address | Forwarding rule | Target proxy | Backend components |
---|---|---|---|
An external IP address must be defined in the same project as the load balancer. | The external forwarding rule must be defined in the same project as the backend instances (the service project). | The target TCP or SSL proxy must be defined in the same project as the backend instances. | For classic proxy Network Load Balancers, a global backend service must be defined in the same project as the backend instances. These instances must be in instance groups attached to the backend service as backends. Health checks associated with backend services must be defined in the same project as the backend service. For regional external proxy Network Load Balancers, the backend VMs are typically located in a service project. A regional backend service and health check must be defined in that service project. |
Traffic distribution
When you add a backend instance group or NEG to a backend service, you specify a load balancing mode, which defines a method that measures the backend load and target capacity.
For external proxy Network Load Balancers, the balancing mode can be CONNECTION or UTILIZATION:

- If the load balancing mode is CONNECTION, the load is spread based on the total number of connections that the backend can handle.
- If the load balancing mode is UTILIZATION, the load is spread based on the utilization of instances in an instance group. This balancing mode applies to VM instance group backends only.
The distribution of traffic across backends is determined by the balancing mode of the load balancer.
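As a sketch of how the balancing mode is set per backend, the following commands add instance group backends using each mode; the backend service, instance groups, zones, and capacity targets are placeholders:

```
# CONNECTION mode: spread load by connection count, capped per instance.
gcloud compute backend-services add-backend tcp-lb-backend-service \
    --global \
    --instance-group=tcp-lb-ig-a \
    --instance-group-zone=us-central1-a \
    --balancing-mode=CONNECTION \
    --max-connections-per-instance=1000

# UTILIZATION mode: spread load by instance utilization (instance groups only).
gcloud compute backend-services add-backend tcp-lb-backend-service \
    --global \
    --instance-group=tcp-lb-ig-b \
    --instance-group-zone=us-central1-b \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8
```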
Classic proxy Network Load Balancer
For the classic proxy Network Load Balancer, the balancing mode is used to select the most favorable backend (instance group or NEG). Traffic is then distributed in a round robin fashion among instances or endpoints within the backend.
How connections are distributed
A classic proxy Network Load Balancer can be configured as a global load balancing service with Premium Tier, and as a regional service in the Standard Tier.
The balancing mode and choice of target determine backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG, zonal instance group, or zone of a regional instance group. Traffic is then distributed within a zone by using consistent hashing.
For Premium Tier:
You can have only one backend service, and the backend service can have backends in multiple regions. For global load balancing, you deploy your backends in multiple regions, and the load balancer automatically directs traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.
Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.
If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user.
For Standard Tier:
Google advertises your load balancer's IP address from points of presence associated with the forwarding rule's region. The load balancer uses a regional external IP address.
You can only configure backends in the same region as the forwarding rule. The load balancer only directs requests to healthy backends in that one region.
Global external proxy Network Load Balancer
For the global external proxy Network Load Balancer, traffic distribution is based on the load balancing mode and the load balancing locality policy.
The balancing mode determines the weight and fraction of traffic to be sent to each group (instance group or NEG). The load balancing locality policy (LocalityLbPolicy) determines how backends within the group are load balanced.
When a backend service receives traffic, it first directs traffic to a backend (instance group or NEG) according to the backend's balancing mode. After a backend is selected, traffic is then distributed among instances or endpoints in that backend group according to the load balancing locality policy.
How connections are distributed
A global external proxy Network Load Balancer can be configured as a global load balancing service with Premium Tier.

The balancing mode and choice of target determine backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG or zonal instance group. Traffic is then distributed within a zone by using consistent hashing.
You can have only one backend service, and the backend service can have backends in multiple regions. For global load balancing, you deploy your backends in multiple regions, and the load balancer automatically directs traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.
Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.
If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user.
Regional external proxy Network Load Balancer
For regional external proxy Network Load Balancers, traffic distribution is based on the load balancing mode and the load balancing locality policy.
The balancing mode determines the weight and fraction of traffic that should be sent to each backend (instance group or NEG). The load balancing locality policy (LocalityLbPolicy) determines how backends within the group are load balanced.
When a backend service receives traffic, it first directs traffic to a backend (instance group or NEG) according to its balancing mode. After a backend is selected, traffic is then distributed among instances or endpoints in that backend group according to the load balancing locality policy.
Session affinity
Session affinity sends all requests from the same client to the same backend if the backend is healthy and has capacity.
External proxy Network Load Balancers offer the following types of session affinity:
- NONE. Session affinity is not set for the load balancer.
- Client IP affinity, which forwards all requests from the same client IP address to the same backend.
Failover
Failover for external proxy Network Load Balancers works as follows:
- If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region.
- If all backends within a region are unhealthy, traffic is distributed to healthy backends in other regions (global and classic modes only).
- If all backends are unhealthy, the load balancer drops traffic.
Load balancing for GKE applications
If you are building applications in Google Kubernetes Engine, you can use standalone NEGs to load balance traffic directly to containers. With standalone NEGs, you are responsible for creating the Service object that creates the NEG, and then associating the NEG with the backend service so that the load balancer can connect to the Pods.
For related documentation, see Container-native load balancing through standalone zonal NEGs.
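As a sketch of the association step (all names, the zone, and the connection cap are placeholders), attaching a standalone zonal NEG created by a GKE Service to the load balancer's backend service might look like this:

```
# Attach the standalone zonal NEG that the GKE Service created to the backend service.
gcloud compute backend-services add-backend tcp-lb-backend-service \
    --global \
    --network-endpoint-group=k8s-tcp-neg \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=100
```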
Limitations
- You can't create a regional external proxy Network Load Balancer in Premium Tier using the Google Cloud console. Additionally, only regions supporting Standard Tier are available for these load balancers in the Google Cloud console. Use either the gcloud CLI or the API instead.
The following limitations apply only to classic proxy Network Load Balancers and global external proxy Network Load Balancers that are deployed with a target SSL proxy:
- Classic proxy Network Load Balancers and global external proxy Network Load Balancers do not support client certificate-based authentication, also known as mutual TLS authentication.
- Classic proxy Network Load Balancers and global external proxy Network Load Balancers support only lowercase characters in domains in a common name (CN) attribute or a subject alternative name (SAN) attribute of the certificate. Certificates with uppercase characters in domains are returned only when set as the primary certificate in the target proxy.
What's next
- Set up a classic proxy Network Load Balancer (TCP proxy).
- Set up a classic proxy Network Load Balancer (SSL proxy).
- Set up a global external proxy Network Load Balancer (TCP proxy).
- Set up a global external proxy Network Load Balancer (SSL proxy).
- Set up a regional external proxy Network Load Balancer (TCP proxy).
- Proxy Network Load Balancer logging and monitoring.