The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
This guide contains instructions for setting up a regional internal proxy Network Load Balancer with a zonal network endpoint group (NEG) backend. Before you start:
- Read the Regional internal proxy Network Load Balancer overview
- Read the zonal NEGs overview
Overview
In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal NEGs in the REGION_A region. For the purposes of this example, the service is a set of Apache servers configured to respond on port 80.
In this example, you configure the following deployment. The regional internal proxy Network Load Balancer is a regional load balancer: all load balancer components (backends, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Network Admin |
| Add and remove firewall rules | Security Admin |
| Create instances | Compute Instance Admin |
For more information about these roles, see the Compute Engine IAM roles documentation.
Configure the network and subnets
You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Regional internal proxy Network Load Balancers are regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
- Network: A custom-mode VPC network named `lb-network`.
- Subnet for backends: A subnet named `backend-subnet` in the REGION_A region uses `10.1.2.0/24` for its primary IP range.
- Subnet for proxies: A subnet named `proxy-only-subnet` in the REGION_A region uses `10.129.0.0/23` for its primary IP range.
Create the network and subnet for backends
Console
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For the Name, enter `lb-network`.

In the Subnets section:

- Set the Subnet creation mode to Custom.
- In the New subnet section, enter the following information:
  - Name: `backend-subnet`
  - Region: REGION_A
  - IP address range: `10.1.2.0/24`
- Click Done.
Click Create.
gcloud
Create the custom VPC network with the `gcloud compute networks create` command:

```
gcloud compute networks create lb-network --subnet-mode=custom
```

Create a subnet in the `lb-network` network in the REGION_A region with the `gcloud compute networks subnets create` command:

```
gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A
```
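Optionally, verify that the network and subnet exist before continuing. This check is an addition to the original steps, not a required part of the setup:

```
# List the subnets in the lb-network VPC network; backend-subnet
# should appear with the range 10.1.2.0/24 in REGION_A.
gcloud compute networks subnets list \
    --network=lb-network \
    --regions=REGION_A
```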
Create the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in the REGION_A region of the `lb-network` VPC network.
Console
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click the name of the `lb-network` network.

Click Add subnet.

For the Name, enter `proxy-only-subnet`.

For the Region, select REGION_A.

Set Purpose to Regional Managed Proxy.

For the IP address range, enter `10.129.0.0/23`.

Click Add.
gcloud
Create the proxy-only subnet with the `gcloud compute networks subnets create` command:

```
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23
```
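Optionally, confirm that the subnet was created with the `REGIONAL_MANAGED_PROXY` purpose; Envoy-based load balancers can't use the subnet without it. This verification is our addition:

```
# Print the subnet's purpose, role, and IP range.
gcloud compute networks subnets describe proxy-only-subnet \
    --region=REGION_A \
    --format="get(purpose,role,ipCidrRange)"
```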
Create firewall rules
In this example, you create the following firewall rules:
- `fw-allow-health-check`: An ingress rule, applicable to the Google Cloud instances being load balanced, that allows traffic from the load balancer and Google Cloud health checking systems (`130.211.0.0/22` and `35.191.0.0/16`). This example uses the target tag `allow-health-check` to identify the backend VMs to which it should apply.
- `fw-allow-ssh`: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you plan to initiate SSH sessions. This example uses the target tag `allow-ssh` to identify the VMs to which it should apply.
- `fw-allow-proxy-only-subnet`: An ingress rule that allows the load balancer's proxy-only subnet to communicate with backend instances on TCP port `80`. This example uses the target tag `allow-proxy-only-subnet` to identify the backend VMs to which it should apply.
Console
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule:

- Enter a Name of `fw-allow-health-check`.
- Under Network, select `lb-network`.
- Under Targets, select Specified target tags.
- Populate the Target tags field with `allow-health-check`.
- Set Source filter to IPv4 ranges.
- Set Source IPv4 ranges to `130.211.0.0/22` and `35.191.0.0/16`.
- Under Protocols and ports, select Specified protocols and ports.
- Select the TCP checkbox and enter `80` for the port numbers.
- Click Create.
Click Create firewall rule again to create the rule to allow incoming SSH connections:

- Name: `fw-allow-ssh`
- Network: `lb-network`
- Priority: `1000`
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: `allow-ssh`
- Source filter: IPv4 ranges
- Source IPv4 ranges: `0.0.0.0/0`
- Protocols and ports: Choose Specified protocols and ports, then type `tcp:22`.

Click Create.
Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet to the Google Cloud backends:

- Name: `fw-allow-proxy-only-subnet`
- Network: `lb-network`
- Priority: `1000`
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: `allow-proxy-only-subnet`
- Source filter: IPv4 ranges
- Source IPv4 ranges: `10.129.0.0/23`
- Protocols and ports: Choose Specified protocols and ports, then type `tcp:80`.

Click Create.
gcloud
Create the `fw-allow-health-check` rule to allow the Google Cloud health checks to reach the backend instances on TCP port `80`:

```
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80
```

Create the `fw-allow-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `source-ranges`, Google Cloud interprets the rule to mean any source:

```
gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
```

Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port `80`:

```
gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-proxy-only-subnet \
    --source-ranges=10.129.0.0/23 \
    --rules=tcp:80
```
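Optionally, list the three rules to confirm that the tags, sources, and ports match what this example expects. This check is our addition:

```
# List the example's firewall rules with their target tags and sources.
gcloud compute firewall-rules list \
    --filter="name~^fw-allow-" \
    --format="table(name,targetTags.list(),sourceRanges.list())"
```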
Reserve the load balancer's IP address
To reserve a static internal IP address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.
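For example, a minimal gcloud sketch to reserve the address that this guide's forwarding rule later references as `int-tcp-ip-address` looks like the following. Omitting `--addresses` lets Google Cloud choose a free address from the subnet's range:

```
# Reserve a static internal IPv4 address in backend-subnet.
gcloud compute addresses create int-tcp-ip-address \
    --region=REGION_A \
    --subnet=backend-subnet
```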
Set up the zonal NEG
Set up zonal NEGs (with `GCE_VM_IP_PORT` type endpoints) in the REGION_A region. First create the VMs. Then create the zonal NEGs and add the VMs' network endpoints to them.
Create VMs
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set the Name to `vm-a1`.

For the Region, select REGION_A.

For the Zone, select ZONE_A1.

In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter `allow-ssh`, `allow-health-check`, and `allow-proxy-only-subnet`.
- For Network interfaces, select the following:
  - Network: `lb-network`
  - Subnet: `backend-subnet`
Click Management. Enter the following script into the Startup script field.
```
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
```
Click Create.
Repeat the preceding steps to create three more VMs, using the following name and zone combinations:

- Name: `vm-a2`, zone: ZONE_A1
- Name: `vm-c1`, zone: ZONE_A2
- Name: `vm-c2`, zone: ZONE_A2
gcloud
Create the VMs by running the following command four times, using these combinations for VM_NAME and ZONE. The script contents are identical for all four VMs.

- VM_NAME: `vm-a1` and ZONE: ZONE_A1
- VM_NAME: `vm-a2` and ZONE: ZONE_A1
- VM_NAME: `vm-c1` and ZONE: ZONE_A2
- VM_NAME: `vm-c2` and ZONE: ZONE_A2

```
gcloud compute instances create VM_NAME \
    --zone=ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --subnet=backend-subnet \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
```
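Optionally, confirm that all four backend VMs are up before creating the NEGs. This check is our addition:

```
# List the backend VMs and their status; all four should be RUNNING.
gcloud compute instances list \
    --filter="name~^vm-" \
    --format="table(name,zone.basename(),status)"
```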
Create the zonal NEGs
Console
To create a zonal network endpoint group:
In the Google Cloud console, go to the Network endpoint groups page.
Click Create network endpoint group.
For Name, enter `zonal-neg-a`.

For Network endpoint group type, select Network endpoint group (Zonal).

For Network, select `lb-network`.

For Subnet, select `backend-subnet`.

For Zone, select ZONE_A1.

Enter the Default port: `80`.

Click Create.
Repeat all the steps in this section to create a second zonal NEG with the following changes in settings:

- Name: `zonal-neg-c`
- Zone: ZONE_A2
Add endpoints to the zonal NEGs:
In the Google Cloud console, go to the Network endpoint groups page.
Click the Name of the network endpoint group created in the previous step (for example, `zonal-neg-a`). You see the Network endpoint group details page.

In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.

Select a VM instance (for example, `vm-a1`). In the Network interface section, the VM name, zone, and subnet are displayed.

- Enter the IP address of the new network endpoint. You can click Check primary IP addresses and alias IP range in nic0 for the IP address.
- For Port type, select Default; the endpoint uses the default port `80` for all endpoints in the network endpoint group. This is sufficient for this example because the Apache server serves requests on port `80`.
- Click Create.

Click Add network endpoint again. Select the second VM instance, `vm-a2`, and repeat these steps to add its endpoint to `zonal-neg-a`.

Repeat all the steps in this section to add endpoints from `vm-c1` and `vm-c2` to `zonal-neg-c`.
gcloud
Create a zonal NEG in the ZONE_A1 zone with `GCE_VM_IP_PORT` endpoints:

```
gcloud compute network-endpoint-groups create zonal-neg-a \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=ZONE_A1 \
    --network=lb-network \
    --subnet=backend-subnet
```

You can either specify a `--default-port` while creating the NEG, or specify a port number for each endpoint as shown in the next step.

Add endpoints to the zonal NEG:

```
gcloud compute network-endpoint-groups update zonal-neg-a \
    --zone=ZONE_A1 \
    --add-endpoint='instance=vm-a1,port=80' \
    --add-endpoint='instance=vm-a2,port=80'
```

Create a zonal NEG in the ZONE_A2 zone with `GCE_VM_IP_PORT` endpoints:

```
gcloud compute network-endpoint-groups create zonal-neg-c \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=ZONE_A2 \
    --network=lb-network \
    --subnet=backend-subnet
```

Add endpoints to the zonal NEG:

```
gcloud compute network-endpoint-groups update zonal-neg-c \
    --zone=ZONE_A2 \
    --add-endpoint='instance=vm-c1,port=80' \
    --add-endpoint='instance=vm-c2,port=80'
```
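Optionally, list each NEG's endpoints to confirm that all four were attached; this verification is our addition. Run the same command with `zonal-neg-a` and ZONE_A1 for the first NEG:

```
# Show the instance, IP address, and port of each endpoint in the NEG.
gcloud compute network-endpoint-groups list-network-endpoints zonal-neg-c \
    --zone=ZONE_A2
```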
Configure the load balancer
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Proxy load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for regional workloads and click Next.
- Click Configure.
Basic configuration
- For Name, enter `my-int-tcp-lb`.
- For Region, select REGION_A.
- For Network, select `lb-network`.
Reserve a proxy-only subnet
To reserve a proxy-only subnet:
- Click Reserve subnet.
- For Name, enter `proxy-only-subnet`.
- For IP address range, enter `10.129.0.0/23`.
- Click Add.
Backend configuration
- Click Backend configuration.
- For Backend type, select Zonal network endpoint group.
- For Protocol, select TCP.
- Configure the first backend:
  - Under New backend, select the zonal NEG `zonal-neg-a`.
  - Retain the remaining default values and click Done.
- Configure the second backend:
  - Click Add backend.
  - Under New backend, select the zonal NEG `zonal-neg-c`.
  - Retain the remaining default values and click Done.
- Configure the health check:
  - Under Health check, select Create a health check.
  - Set the health check Name to `tcp-health-check`.
  - For Protocol, select TCP.
  - For Port, enter `80`.
  - Retain the remaining default values and click Save.
- In the Google Cloud console, verify that there is a check mark next to Backend configuration. If not, double-check that you have completed all of the steps.
Frontend configuration
- Click Frontend configuration.
- For Name, enter `int-tcp-forwarding-rule`.
- For Subnetwork, select `backend-subnet`.
- For IP address, select `int-tcp-ip-address`.
- For Port number, enter `9090`. The forwarding rule only forwards packets with a matching destination port.
- In this example, don't enable the Proxy Protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.
- Click Done.
- In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.
Review and finalize
- Click Review and finalize.
- Double-check your settings.
- Click Create.
gcloud
Create a regional health check for the backends.
```
gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port
```
Create a backend service.
```
gcloud compute backend-services create internal-tcp-proxy-bs \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A
```
Add the zonal NEG in the ZONE_A1 zone to the backend service:

```
gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --network-endpoint-group=zonal-neg-a \
    --network-endpoint-group-zone=ZONE_A1 \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=50 \
    --region=REGION_A
```
Add the zonal NEG in the ZONE_A2 zone to the backend service:

```
gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --network-endpoint-group=zonal-neg-c \
    --network-endpoint-group-zone=ZONE_A2 \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=50 \
    --region=REGION_A
```
Create the target TCP proxy.
```
gcloud compute target-tcp-proxies create int-tcp-target-proxy \
    --backend-service=internal-tcp-proxy-bs \
    --region=REGION_A
```
Create the forwarding rule. For `--ports`, specify a single port number from 1-65535. This example uses port `9090`. The forwarding rule only forwards packets with a matching destination port.

```
gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --address=int-tcp-ip-address \
    --ports=9090 \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A
```
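Before testing, you can check that the load balancer considers its backends healthy; endpoints should report `HEALTHY` once the health checks pass. This check is our addition:

```
# Show per-endpoint health for the backend service.
gcloud compute backend-services get-health internal-tcp-proxy-bs \
    --region=REGION_A
```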
Test the load balancer
To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.
Create a client VM
Create a client VM (`client-vm`) in the same region as the load balancer.
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to `client-vm`.

Set Zone to ZONE_A1.
.Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter `allow-ssh`.
- For Network interfaces, select the following:
  - Network: `lb-network`
  - Subnet: `backend-subnet`
Click Create.
gcloud
The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. In this example, the client uses the same subnet as the backend VMs.

```
gcloud compute instances create client-vm \
    --zone=ZONE_A1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=backend-subnet
```
Send traffic to the load balancer
Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Use SSH to connect to the client instance.
```
gcloud compute ssh client-vm \
    --zone=ZONE_A1
```
Verify that the load balancer is serving backend hostnames as expected.
Use the `gcloud compute addresses describe` command to view the load balancer's IP address:

```
gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A
```
Make a note of the IP address.
Send traffic to the load balancer. Replace IP_ADDRESS with the IP address of the load balancer.
```
curl IP_ADDRESS:9090
```
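Because each backend serves a page containing its own hostname, repeating the request shows the load balancer distributing connections across the four VMs. This loop is our addition to the original test:

```
# Send ten requests; the responses should cycle across
# vm-a1, vm-a2, vm-c1, and vm-c2.
for i in $(seq 1 10); do
  curl IP_ADDRESS:9090
done
```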
What's next
- Convert proxy Network Load Balancer to IPv6
- Regional internal proxy Network Load Balancer overview
- To set up monitoring for your regional internal proxy Network Load Balancer, see Using monitoring.
- Clean up the load balancer setup.