The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
This guide contains instructions for setting up a regional internal proxy Network Load Balancer with a managed instance group (MIG) backend.
Before you start, read the Regional internal proxy Network Load Balancer overview.
Overview
In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in the REGION_A region. For the purposes of the example, the service is a set of Apache servers configured to respond on port 110. Many browsers don't allow port 110, so the testing section uses curl.
In this example, you configure the following deployment:
The regional internal proxy Network Load Balancer is a regional load balancer. All load balancer components (backend instance groups, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:
Task | Required Role |
---|---|
Create networks, subnets, and load balancer components | Network Admin |
Add and remove firewall rules | Security Admin |
Create instances | Compute Instance Admin |
Configure the network and subnets
You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Regional internal proxy Network Load Balancers are regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
- Network. The network is a custom-mode VPC network named lb-network.
- Subnet for backends. A subnet named backend-subnet in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
- Subnet for proxies. A subnet named proxy-only-subnet in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
To demonstrate global access, this example also creates a second test client VM in a different region (REGION_B) and a subnet with primary IP address range 10.3.4.0/24.
Create the network and subnets
Console
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
In the Subnets section, set the Subnet creation mode to Custom.
Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:
- Name: backend-subnet
- Region: REGION_A
- IP address range: 10.1.2.0/24
Click Done.
Click Add subnet.
Create a subnet to demonstrate global access. In the New subnet section, enter the following information:
- Name: test-global-access-subnet
- Region: REGION_B
- IP address range: 10.3.4.0/24
Click Done.
Click Create.
gcloud
Create the custom VPC network with the gcloud compute networks create command:

gcloud compute networks create lb-network --subnet-mode=custom
Create a subnet in the lb-network network in the REGION_A region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A
Replace REGION_A with the name of the target Google Cloud region.
Create a subnet in the lb-network network in the REGION_B region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create test-global-access-subnet \
    --network=lb-network \
    --range=10.3.4.0/24 \
    --region=REGION_B
Replace REGION_B with the name of the Google Cloud region where you want to create the second subnet to test global access.
Create the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in the REGION_A region of the lb-network VPC network.
Console
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
- In the Google Cloud console, go to the VPC networks page.
- Click the name of the lb-network network.
- Click Add subnet.
- For Name, enter proxy-only-subnet.
- For Region, select REGION_A.
- Set Purpose to Regional Managed Proxy.
- For IP address range, enter 10.129.0.0/23.
- Click Add.
gcloud
Create the proxy-only subnet with the gcloud compute networks subnets create command:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23
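Optionally, you can confirm the subnet's purpose and role before continuing. A minimal check, assuming the subnet was created as shown above:

gcloud compute networks subnets describe proxy-only-subnet \
    --region=REGION_A \
    --format="get(purpose,role)"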
Create firewall rules
This example requires the following firewall rules:
- fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh.
- fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check.
- fw-allow-proxy-only-subnet. An ingress rule that allows connections from the proxy-only subnet to reach the backends.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.
Console
- In the Google Cloud console, go to the Firewall policies page.
- Click Create firewall rule to create the rule to allow incoming SSH connections:
  - Name: fw-allow-ssh
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-ssh
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 0.0.0.0/0
  - Protocols and ports: Choose Specified protocols and ports, select the TCP checkbox, and then enter 22 for the port number.
- Click Create.
- Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
  - Name: fw-allow-health-check
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-health-check
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
  - Protocols and ports: Choose Specified protocols and ports, select the TCP checkbox, and then enter 80 for the port number.
  As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
- Click Create.
- Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
  - Name: fw-allow-proxy-only-subnet
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-proxy-only-subnet
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 10.129.0.0/23
  - Protocols and ports: Choose Specified protocols and ports, select the TCP checkbox, and then enter 80 for the port number.
- Click Create.
gcloud
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit --source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp:80
Create the fw-allow-proxy-only-subnet rule to allow the region's Envoy proxies to connect to your backends. Set --source-ranges to the allocated range of your proxy-only subnet, in this example, 10.129.0.0/23.

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.129.0.0/23 \
    --target-tags=allow-proxy-only-subnet \
    --rules=tcp:80
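Optionally, verify that all three rules exist in the network. A minimal check using the gcloud compute firewall-rules list command:

gcloud compute firewall-rules list \
    --filter="network:lb-network"

The output should include fw-allow-ssh, fw-allow-health-check, and fw-allow-proxy-only-subnet.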
Reserve the load balancer's IP address
To reserve a static internal IP address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.
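The gcloud forwarding-rule commands later in this guide reference a reserved address named int-tcp-ip-address. As a minimal sketch, assuming you want an address automatically allocated from backend-subnet, you can reserve it with the gcloud compute addresses create command:

gcloud compute addresses create int-tcp-ip-address \
    --region=REGION_A \
    --subnet=backend-subnet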
Create a managed instance group
This section shows you how to create two managed instance group (MIG) backends in the REGION_A region for the load balancer. The MIGs provide the VM instances running the backend Apache servers for this example regional internal proxy Network Load Balancer. Typically, a regional internal proxy Network Load Balancer isn't used for HTTP traffic, but Apache software is commonly used for testing.
Console
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For Name, enter int-tcp-proxy-backend-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Click Advanced options.
- Click Networking and configure the following fields:
  - For Network tags, enter allow-ssh, allow-health-check, and allow-proxy-only-subnet.
  - For Network interfaces, select the following:
    - Network: lb-network
    - Subnet: backend-subnet
Click Management. Enter the following script into the Startup script field.

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter mig-a.
- Under Location, select Single zone.
- For Region, select REGION_A.
- For Zone, select ZONE_A1.
- Under Instance template, select int-tcp-proxy-backend-template.
Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:
- For Autoscaling mode, select Off: do not autoscale.
- For Maximum number of instances, enter 2.
For Port mapping, click Add port.
- For Port name, enter tcp80.
- For Port number, enter 80.
Click Create.
Repeat Step 2 to create a second managed instance group with the following settings:
- Name: mig-c
- Zone: ZONE_A2
Keep all other settings the same.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server using the gcloud compute instance-templates create command:

gcloud compute instance-templates create int-tcp-proxy-backend-template \
    --region=REGION_A \
    --network=lb-network \
    --subnet=backend-subnet \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Create a managed instance group in the ZONE_A1 zone:

gcloud compute instance-groups managed create mig-a \
    --zone=ZONE_A1 \
    --size=2 \
    --template=int-tcp-proxy-backend-template
Replace ZONE_A1 with the name of the zone in the target Google Cloud region.
Create a managed instance group in the ZONE_A2 zone:

gcloud compute instance-groups managed create mig-c \
    --zone=ZONE_A2 \
    --size=2 \
    --template=int-tcp-proxy-backend-template
Replace ZONE_A2 with the name of another zone in the target Google Cloud region.
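Optionally, confirm that each MIG has created its instances before you continue. A minimal check, assuming the groups above were created successfully:

gcloud compute instance-groups managed list-instances mig-a \
    --zone=ZONE_A1

gcloud compute instance-groups managed list-instances mig-c \
    --zone=ZONE_A2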
Configure the load balancer
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Proxy load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for regional workloads and click Next.
- Click Configure.
Basic configuration
- For Name, enter my-int-tcp-lb.
- For Region, select REGION_A.
- For Network, select lb-network.
Reserve a proxy-only subnet
To reserve a proxy-only subnet:
- Click Reserve subnet.
- For Name, enter proxy-only-subnet.
- For IP address range, enter 10.129.0.0/23.
- Click Add.
Backend configuration
- Click Backend configuration.
- For Backend type, select Instance group.
- For Protocol, select TCP.
- For Named port, enter tcp80.
- Configure the first backend:
  - Under New backend, select instance group mig-a.
  - For Port numbers, enter 80.
  - Retain the remaining default values and click Done.
- Configure the second backend:
  - Click Add backend.
  - Under New backend, select instance group mig-c.
  - For Port numbers, enter 80.
  - Retain the remaining default values and click Done.
- Configure the health check:
  - Under Health check, select Create a health check.
  - Set the health check Name to tcp-health-check.
  - For Protocol, select TCP.
  - Set Port to 80.
- Retain the remaining default values and click Save.
- In the Google Cloud console, verify that there is a check mark next to Backend configuration. If not, double-check that you have completed all of the steps.
Frontend configuration
- Click Frontend configuration.
- For Name, enter int-tcp-forwarding-rule.
- For Subnetwork, select backend-subnet.
- For IP address, select the IP address reserved previously: LB_IP_ADDRESS.
- For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.
- In this example, don't enable the Proxy Protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.
- Click Done.
- In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.
Review and finalize
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
- Click Create.
gcloud
Create a regional health check:

gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port
Create a backend service:

gcloud compute backend-services create internal-tcp-proxy-bs \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A
Add instance groups to your backend service:

gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-a \
    --instance-group-zone=ZONE_A1 \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8

gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-c \
    --instance-group-zone=ZONE_A2 \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8
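Optionally, check that the backends pass their health checks before you create the frontend. A minimal sketch using the gcloud compute backend-services get-health command; allow a minute or two after adding the backends for the first probes to complete:

gcloud compute backend-services get-health internal-tcp-proxy-bs \
    --region=REGION_A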
Create an internal target TCP proxy:

gcloud compute target-tcp-proxies create int-tcp-target-proxy \
    --backend-service=internal-tcp-proxy-bs \
    --proxy-header=NONE \
    --region=REGION_A
If you want to turn on the proxy header, set it to PROXY_V1 instead of NONE. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.

Create the forwarding rule. For --ports, specify a single port number from 1-65535. This example uses port 110. The forwarding rule only forwards packets with a matching destination port.

gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=int-tcp-ip-address \
    --ports=110
Test your load balancer
To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.
Create a client VM
Create a client VM (client-vm) in the same region as the load balancer.
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to client-vm.
Set Zone to ZONE_A1.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: backend-subnet
Click Create.
gcloud
The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. In this example, the client uses the same subnet as the backend VMs.
gcloud compute instances create client-vm \
    --zone=ZONE_A1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=backend-subnet
Send traffic to the load balancer
Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Use SSH to connect to the client instance:

gcloud compute ssh client-vm \
    --zone=ZONE_A1
Verify that the load balancer is serving backend hostnames as expected.
Use the gcloud compute addresses describe command to view the load balancer's IP address:

gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A
Make a note of the IP address.
Send traffic to the load balancer. Replace IP_ADDRESS with the IP address of the load balancer.
curl IP_ADDRESS:110
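Because the startup script serves each backend VM's hostname, repeating the request shows responses from the different backends. A minimal sketch, assuming IP_ADDRESS is the address you noted above:

for i in 1 2 3 4; do
  curl IP_ADDRESS:110
done

Note that the proxy can reuse connections, so you might need several requests before every backend appears in the output.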
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Enable global access
You can enable global access for your load balancer to make it accessible to clients in all regions. The backends of your example load balancer must still be located in one region (REGION_A).
You can't modify an existing regional forwarding rule to enable global access; you must create a new forwarding rule for this purpose. Additionally, after a forwarding rule has been created with global access enabled, it can't be modified. To disable global access, create a new forwarding rule without global access and delete the previous global-access forwarding rule.
To configure global access, make the following configuration changes.
Console
Create a new forwarding rule for the load balancer:
In the Google Cloud console, go to the Load balancing page.
In the Name column, click your load balancer.
Click Frontend configuration.
Click Add frontend IP and port.
Enter the name and subnet details for the new forwarding rule.
For Subnetwork, select backend-subnet.
For IP address, you can select the same IP address as an existing forwarding rule, reserve a new IP address, or use an ephemeral IP address. Sharing the same IP address across multiple forwarding rules is only possible if you set the IP address's --purpose flag to SHARED_LOADBALANCER_VIP when creating the address.
For Port number, enter 110.
For Global access, select Enable.
Click Done.
Click Update.
gcloud
Create a new forwarding rule for the load balancer with the --allow-global-access flag:

gcloud compute forwarding-rules create int-tcp-forwarding-rule-global-access \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=int-tcp-ip-address \
    --ports=110 \
    --allow-global-access
You can use the gcloud compute forwarding-rules describe command to determine whether a forwarding rule has global access enabled. For example:

gcloud compute forwarding-rules describe int-tcp-forwarding-rule-global-access \
    --region=REGION_A \
    --format="get(name,region,allowGlobalAccess)"
When global access is enabled, the word True appears in the output after the name and region of the forwarding rule.
Create a client VM to test global access
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to test-global-access-vm.
Set Zone to ZONE_B1.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: test-global-access-subnet
Click Create.
gcloud
Create a client VM in the ZONE_B1 zone:

gcloud compute instances create test-global-access-vm \
    --zone=ZONE_B1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=test-global-access-subnet
Replace ZONE_B1 with the name of the zone in the REGION_B region.
Connect to the client VM and test connectivity
Use ssh to connect to the client instance:

gcloud compute ssh test-global-access-vm \
    --zone=ZONE_B1
Use the gcloud compute addresses describe command to get the load balancer's IP address:

gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A
Make a note of the IP address.
Send traffic to the load balancer; replace IP_ADDRESS with the IP address of the load balancer:
curl IP_ADDRESS:110
PROXY protocol for retaining client connection information
The proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the instances. By default, the original client IP address and port information is not preserved.
To preserve and send the original connection information to your instances, enable PROXY protocol version 1. This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as a part of the request.
Make sure that the proxy Network Load Balancer's backend instances are running servers that support PROXY protocol headers. If the servers are not configured to support PROXY protocol headers, the backend instances return empty responses.
If you set the PROXY protocol for user traffic, you can also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header to match your load balancer setting.
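As a minimal sketch, assuming you have enabled PROXY_V1 on the target proxy and are health checking the serving port, you could update the example health check to match:

gcloud compute health-checks update tcp tcp-health-check \
    --region=REGION_A \
    --proxy-header=PROXY_V1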
The PROXY protocol header is typically a single line of user-readable text in the following format:
PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n
The following example shows a PROXY protocol header:
PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n
In the preceding example, the client IP address is 192.0.2.1, the load balancing IP address is 198.51.100.1, the client port is 15221, and the destination port is 110.
When the client IP is not known, the load balancer generates a PROXY protocol header in the following format:
PROXY UNKNOWN\r\n
Update PROXY protocol header for target proxy
You cannot update the PROXY protocol header setting on an existing target proxy. You must create a new target proxy with the required PROXY protocol header setting. Use these steps to create a new frontend with the required settings:
Console
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer you want to edit.
- Click Edit for your load balancer.
- Click Frontend configuration.
- Delete the old frontend IP and port.
- Click Add frontend IP and port.
- For Name, enter int-tcp-forwarding-rule.
- For Subnetwork, select backend-subnet.
- For IP address, select the IP address reserved previously: LB_IP_ADDRESS.
- For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.
- Change the value of the Proxy protocol field to On.
- Click Done.
- Click Update to save your changes.
gcloud
In the following command, set the --proxy-header flag to either NONE or PROXY_V1, depending on your requirement:

gcloud compute target-tcp-proxies create TARGET_PROXY_NAME \
    --backend-service=BACKEND_SERVICE \
    --proxy-header=[NONE | PROXY_V1] \
    --region=REGION
Delete the existing forwarding rule:

gcloud compute forwarding-rules delete int-tcp-forwarding-rule \
    --region=REGION
Create a new forwarding rule and associate it with the target proxy:

gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION \
    --target-tcp-proxy=TARGET_PROXY_NAME \
    --target-tcp-proxy-region=REGION \
    --address=LB_IP_ADDRESS \
    --ports=110
Enable session affinity
The example configuration creates a backend service without session affinity.
These procedures show you how to update the backend service for the example regional internal proxy Network Load Balancer so that the backend service uses client IP affinity.
When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the internal IP address of an internal forwarding rule).
Console
To enable client IP session affinity:
- In the Google Cloud console, go to the Load balancing page.
- Click Backends.
- Click internal-tcp-proxy-bs (the name of the backend service you created for this example) and click Edit.
- On the Backend service details page, click Advanced configuration.
- Under Session affinity, select Client IP from the menu.
- Click Update.
gcloud
Use the following Google Cloud CLI command to update the internal-tcp-proxy-bs
backend
service, specifying client IP session affinity:
gcloud compute backend-services update internal-tcp-proxy-bs \ --region=REGION_A \ --session-affinity=CLIENT_IP
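Optionally, verify the setting with a describe command; sessionAffinity is the field name in the backend service resource:

gcloud compute backend-services describe internal-tcp-proxy-bs \
    --region=REGION_A \
    --format="get(sessionAffinity)"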
Enable connection draining
You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.
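As a minimal sketch, assuming a 300-second drain timeout suits your workload (the value here is illustrative), you can enable connection draining on the example backend service like this:

gcloud compute backend-services update internal-tcp-proxy-bs \
    --region=REGION_A \
    --connection-draining-timeout=300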
What's next
- Convert proxy Network Load Balancer to IPv6
- Regional internal proxy Network Load Balancer overview
- Using monitoring
- Clean up the load balancer setup