This guide uses an example to teach you how to configure a Google Cloud internal TCP/UDP load balancer to be the next hop of a custom static route. Before following this guide, familiarize yourself with internal TCP/UDP load balancing concepts and with next-hop concepts for Internal TCP/UDP Load Balancing.
Permissions
To follow this guide, you need to create instances and modify a network in a project. You should be either a project owner or editor, or you should have all of the following Compute Engine IAM roles:
| Task | Required Role |
|---|---|
| Create networks, subnets, and load balancer components | Network Admin |
| Add and remove firewall rules | Security Admin |
| Create instances | Instance Admin |
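If you prefer to grant these roles from the command line, the following commands are a minimal sketch; the project ID (`example-project`) and the user email are placeholders, not values from this guide.

gcloud projects add-iam-policy-binding example-project \
    --member=user:appliance-admin@example.com \
    --role=roles/compute.networkAdmin

gcloud projects add-iam-policy-binding example-project \
    --member=user:appliance-admin@example.com \
    --role=roles/compute.securityAdmin

gcloud projects add-iam-policy-binding example-project \
    --member=user:appliance-admin@example.com \
    --role=roles/compute.instanceAdmin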
Load balancing to a single backend NIC
This guide shows you how to use an internal TCP/UDP load balancer as the next hop for a custom static route in order to integrate scaled-out virtual appliances.
The solution discussed in this guide integrates virtual appliances so that you don't need to explicitly reconfigure your clients to send traffic to each virtual appliance. The example in this setup guide sends all traffic through a load-balanced set of firewall virtual appliances.
The steps in this section describe how to configure the following resources:
- Sample VPC networks and custom subnets
- Google Cloud firewall rules that allow incoming connections to backend VMs
- A custom static route
- One client VM to test connections
- The following internal TCP/UDP load balancer components:
  - Backend VMs in a managed instance group
  - A health check for the backend VM appliances
  - An internal backend service in the `us-west1` region to manage connection distribution among the backend VMs
  - An internal forwarding rule and internal IP address for the frontend of the load balancer
The topology looks like this:
The diagram shows some of the resources that the example creates:
- Application instances (in this case, VMs running firewall appliance software) behind an internal TCP/UDP load balancer (`fr-ilb1`, in this example). The application instances only have internal (RFC 1918) IP addresses.
- Each application instance has its `can-ip-forward` flag enabled. Without this flag, a Compute Engine VM can only transmit a packet if the source IP address of the packet matches one of the VM's IP addresses. The `can-ip-forward` flag changes this behavior so that the VM can transmit packets with any source IP address.
- A custom static route with destination `10.50.1.0/24` and next hop set to the load balancer's forwarding rule, `fr-ilb1`.
The diagram also shows the traffic flow:
- The `testing` VPC network has a custom static route for traffic that is destined to the `10.50.1.0/24` subnet. This route directs the traffic to the load balancer.
- The load balancer forwards traffic to one of the application instances based on the configured session affinity. (Session affinity only affects TCP traffic.)
- The application instance performs source network address translation (SNAT) to deliver packets to the instance group in the `production` VPC network. For return traffic, it performs destination network address translation (DNAT) to deliver packets to the client instance in the `testing` VPC network.
For additional use cases, see Next-hop concepts for Internal TCP/UDP Load Balancing.
Configuring the networks, region, and subnets
This example uses the following VPC networks, region, and subnets:
- Networks: This example requires two networks, each with at least one subnet. Each backend third-party appliance VM must have at least two network interfaces, one in each VPC network. The networks in this example are custom mode VPC networks named `testing` and `production`. The `testing` network in this example contains the client and the load balancer. The `production` network contains the destination target VM.
- Region: The subnets are located in the `us-west1` region. The subnets must be in the same region because VM instances are zonal resources.
- Subnets: The subnets, `testing-subnet` and `production-subnet`, use the `10.30.1.0/24` and `10.50.1.0/24` primary IP address ranges, respectively.
To create the example networks and subnets, follow these steps.
Console
Create the `testing` network and the `testing-subnet`:

- Go to the VPC networks page in the Google Cloud Console.
  Go to the VPC network page
- Click Create VPC network.
- Enter a Name of `testing`.
- In the Subnets section:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: `testing-subnet`
    - Region: `us-west1`
    - IP address range: `10.30.1.0/24`
  - Click Done.
- Click Create.

Create the `production` network and the `production-subnet`:

- Go to the VPC networks page in the Google Cloud Console.
  Go to the VPC network page
- Click Create VPC network.
- Enter a Name of `production`.
- In the Subnets section:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: `production-subnet`
    - Region: `us-west1`
    - IP address range: `10.50.1.0/24`
  - Click Done.
- Click Create.
gcloud
Create the custom VPC networks:
gcloud compute networks create testing --subnet-mode=custom
gcloud compute networks create production --subnet-mode=custom
Create subnets in the `testing` and `production` networks in the `us-west1` region:

gcloud compute networks subnets create testing-subnet \
    --network=testing \
    --range=10.30.1.0/24 \
    --region=us-west1

gcloud compute networks subnets create production-subnet \
    --network=production \
    --range=10.50.1.0/24 \
    --region=us-west1
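Optionally, you can confirm that both subnets exist with the expected ranges. This check is not part of the original procedure.

gcloud compute networks subnets list \
    --filter="region:us-west1"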
Configuring firewall rules
This example uses the following firewall rules:
- `fw-allow-testing-subnet`: An ingress rule, applicable to all targets in the `testing` network, allowing traffic from sources in the `10.30.1.0/24` range. This rule allows the VM instances and third-party VM appliances in the `testing-subnet` to communicate.
- `fw-allow-production-subnet`: An ingress rule, applicable to all targets in the `production` network, allowing traffic from sources in the `10.50.1.0/24` range. This rule allows the VM instances and third-party VM appliances in the `production-subnet` to communicate.
- `fw-allow-testing-ssh`: An ingress rule applied to the VM instances in the `testing` VPC network, allowing incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify the IP ranges of the systems from which you plan to initiate SSH sessions. This example uses the target tag `allow-ssh` to identify the VMs to which the firewall rule applies.
- `fw-allow-production-ssh`: An ingress rule applied to the VM instances in the `production` VPC network, allowing incoming SSH connectivity on TCP port 22 from any address. Like the `fw-allow-testing-ssh` rule, you can choose a more restrictive source IP range for this rule.
- `fw-allow-health-check`: An ingress rule, applicable to the third-party VM appliances being load balanced, that allows traffic from the Google Cloud health checking systems (`130.211.0.0/22` and `35.191.0.0/16`). This example uses the target tag `allow-health-check` to identify the instances to which it should apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances. You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. Refer to probe IP ranges for more information.
Console
- Go to the Firewall rules page in the Google Cloud Console.
  Go to the Firewall rules page
- Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:
  - Name: `fw-allow-testing-subnet`
  - Network: `testing`
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: All instances in the network
  - Source filter: IP ranges
  - Source IP ranges: `10.30.1.0/24`
  - Protocols and ports: Allow all
- Click Create.
- Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:
  - Name: `fw-allow-production-subnet`
  - Network: `production`
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: All instances in the network
  - Source filter: IP ranges
  - Source IP ranges: `10.50.1.0/24`
  - Protocols and ports: Allow all
- Click Create.
- Click Create firewall rule again to create the rule to allow incoming SSH connections:
  - Name: `fw-allow-testing-ssh`
  - Network: `testing`
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `allow-ssh`
  - Source filter: IP ranges
  - Source IP ranges: `0.0.0.0/0`
  - Protocols and ports: Choose Specified protocols and ports and type `tcp:22`
- Click Create.
- Click Create firewall rule again to create the rule to allow incoming SSH connections:
  - Name: `fw-allow-production-ssh`
  - Network: `production`
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `allow-ssh`
  - Source filter: IP ranges
  - Source IP ranges: `0.0.0.0/0`
  - Protocols and ports: Choose Specified protocols and ports and type `tcp:22`
- Click Create.
- Click Create firewall rule a third time to create the rule to allow Google Cloud health checks:
  - Name: `fw-allow-health-check`
  - Network: `testing`
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `allow-health-check`
  - Source filter: IP ranges
  - Source IP ranges: `130.211.0.0/22` and `35.191.0.0/16`
  - Protocols and ports: Allow all
- Click Create.
gcloud
Create the `fw-allow-testing-subnet` firewall rule to allow communication from within the subnet:

gcloud compute firewall-rules create fw-allow-testing-subnet \
    --network=testing \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.30.1.0/24 \
    --rules=tcp,udp,icmp

Create the `fw-allow-production-subnet` firewall rule to allow communication from within the subnet:

gcloud compute firewall-rules create fw-allow-production-subnet \
    --network=production \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.50.1.0/24 \
    --rules=tcp,udp,icmp

Create the `fw-allow-testing-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `source-ranges`, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-testing-ssh \
    --network=testing \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22

Create the `fw-allow-production-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`.

gcloud compute firewall-rules create fw-allow-production-ssh \
    --network=production \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22

Create the `fw-allow-testing-health-check` rule to allow Google Cloud health checks to reach the third-party appliance VMs in the `testing` network.

gcloud compute firewall-rules create fw-allow-testing-health-check \
    --network=testing \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp,udp,icmp
Creating the third-party virtual appliances
The following steps demonstrate how to create an instance template and managed
regional instance group using the iptables
software as a third-party virtual
appliance.
Console
You must use gcloud
for this step because you need to create an instance
template with more than one network interface. The Cloud Console
does not currently support creating instance templates with more than one
network interface.
gcloud
Create an instance template for your third-party virtual appliances. The instance template must include the `--can-ip-forward` flag so that the VM instances created from the template can forward packets from other instances in the `testing` and `production` networks.

gcloud compute instance-templates create third-party-template \
    --region=us-west1 \
    --network-interface subnet=testing-subnet,address="" \
    --network-interface subnet=production-subnet \
    --tags=allow-ssh,allow-health-check \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --can-ip-forward \
    --metadata=startup-script='#! /bin/bash
    # Enable IP forwarding:
    echo 1 > /proc/sys/net/ipv4/ip_forward
    echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-iptables.conf
    # Read VM network configuration:
    md_vm="http://169.254.169.254/computeMetadata/v1/instance/"
    md_net="$md_vm/network-interfaces"
    nic0_gw="$(curl -H "Metadata-Flavor:Google" $md_net/0/gateway)"
    nic0_mask="$(curl -H "Metadata-Flavor:Google" $md_net/0/subnetmask)"
    nic1_gw="$(curl -H "Metadata-Flavor:Google" $md_net/1/gateway)"
    nic1_mask="$(curl -H "Metadata-Flavor:Google" $md_net/1/subnetmask)"
    nic1_addr="$(curl -H "Metadata-Flavor:Google" $md_net/1/ip)"
    # Start iptables:
    /sbin/iptables -t nat -F
    /sbin/iptables -t nat -A POSTROUTING \
      -s "$nic0_gw/$nic0_mask" \
      -d "$nic1_gw/$nic1_mask" \
      -o eth1 \
      -j SNAT \
      --to-source "$nic1_addr"
    /sbin/iptables-save
    # Use a web server to pass the health check for this example.
    # You should use a more complete test in production.
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    echo "Example web page to pass health check" | \
      tee /var/www/html/index.html
    systemctl restart apache2'
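Optionally, verify that the template has IP forwarding enabled before you create instances from it. This check is an extra step, not part of the original guide; the command should print True.

gcloud compute instance-templates describe third-party-template \
    --format="value(properties.canIpForward)"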
Create a managed instance group for your third-party virtual appliances. This command creates a regional managed instance group, which can then be autoscaled, in `us-west1`.

gcloud compute instance-groups managed create third-party-instance-group \
    --region=us-west1 \
    --template=third-party-template \
    --size=3
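Optionally, confirm that the three appliance VMs were created and are running; the same list-instances command is used again later in this guide.

gcloud compute instance-groups managed list-instances third-party-instance-group \
    --region=us-west1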
Creating the load balancing resources
These steps configure all of the internal TCP/UDP load balancer components starting with the health check and backend service, and then the frontend components:
- Health check: In this example, the HTTP health check checks for an HTTP `200` (OK) response. For more information, see the health checks section of the Internal TCP/UDP Load Balancing overview.
- Backend service: Even though this example's backend service specifies the TCP protocol, when the load balancer is the next hop for a route, both TCP and UDP traffic are sent to the load balancer's backends.
- Forwarding rule: Even though this example's forwarding rule specifies TCP port 80, when the load balancer is the next hop for a route, traffic on any TCP or UDP port is sent to the load balancer's backends.
- Internal IP address: The example specifies an internal IP address, `10.30.1.99`, for the forwarding rule.
Console
Create the load balancer and configure a backend service
- Go to the Load balancing page in the Google Cloud Console.
  Go to the Load balancing page
- Click Create load balancer.
- Under TCP load balancing, click Start configuration.
- Under Internet facing or internal only, select Only between my VMs.
- Click Continue.
- Set the Name to `ilb1`.
- Click Backend configuration and make the following changes:
  - Region: `us-west1`
  - Network: `testing`
  - Under Backends, in the New item section, select the `third-party-instance-group` instance group and click Done.
  - Under Health check, choose Create another health check, enter the following information, and click Save and continue:
    - Name: `hc-http-80`
    - Protocol: `HTTP`
    - Port: `80`
    - Proxy protocol: `NONE`
    - Request path: `/`
  - Verify that there is a blue check mark next to Backend configuration before continuing. Review this step if not.
- Click Frontend configuration. In the New Frontend IP and port section, make the following changes:
  - Name: `fr-ilb1`
  - Subnetwork: `testing-subnet`
  - From Internal IP, choose Reserve a static internal IP address, enter the following information, and click Reserve:
    - Name: `ip-ilb`
    - Static IP address: Let me choose
    - Custom IP address: `10.30.1.99`
  - Ports: Choose Single, and enter `80` for the Port number. Remember that the choice of a protocol and port for the load balancer does not limit the protocols and ports that are used when the load balancer is the next hop of a route.
  - Verify that there is a blue check mark next to Frontend configuration before continuing. Review this step if not.
- Click Review and finalize. Double-check your settings.
- Click Create.
gcloud
Create a new HTTP health check to test TCP connectivity to the VMs on port 80.

gcloud compute health-checks create http hc-http-80 \
    --port=80
Create an internal backend service in the `us-west1` region.

gcloud compute backend-services create ilb1 \
    --load-balancing-scheme=internal \
    --region=us-west1 \
    --health-checks=hc-http-80
Add the instance group containing the third-party virtual appliances as a backend on the backend service.
gcloud compute backend-services add-backend ilb1 \
    --instance-group=third-party-instance-group \
    --instance-group-region=us-west1 \
    --region=us-west1
Create the internal forwarding rule and connect it to the backend service to complete the load balancer configuration. Remember that the protocol (TCP) and port (80) of the internal load balancer do not limit the ports and protocols that are forwarded to the backend instances (the third-party virtual appliances) when the load balancer is used as the next hop of a route.
gcloud compute forwarding-rules create fr-ilb1 \
    --load-balancing-scheme=internal \
    --ports=80 \
    --network=testing \
    --subnet=testing-subnet \
    --region=us-west1 \
    --backend-service=ilb1 \
    --address=10.30.1.99
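Optionally, confirm that the frontend was created with the expected address. This describe command is an extra verification step and should print 10.30.1.99.

gcloud compute forwarding-rules describe fr-ilb1 \
    --region=us-west1 \
    --format="value(IPAddress)"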
Creating the static route that defines the load balancer as the next hop
When you create a static route, you cannot use `next-hop-address` to point to the IP address of a load balancer's forwarding rule. This is because when you use `next-hop-address`, Google Cloud passes traffic to the VM instance assigned to that IP address, and a load balancer is not a VM instance. Instead, if you want to designate a load balancer as next hop, you must use the `next-hop-ilb` flag, as demonstrated in this example.
Console
- Go to the Routes page in the Google Cloud Console.
  Go to the Routes page
- Click Create route.
- For the route Name, enter `ilb-nhop-dest-10-50-1`.
- Select the `testing` network.
- For the Destination IP range, enter `10.50.1.0/24`.
- Make sure tags aren't specified, as they aren't currently supported with this feature.
- For the route's Next hop, select Specify a forwarding rule of internal TCP/UDP load balancer.
- For the next-hop region, select `us-west1`.
- For the forwarding rule name, select `fr-ilb1`.
- Click Create.
gcloud
Create an advanced route with the next hop set to the load balancer's forwarding rule and the destination range set to 10.50.1.0/24.

gcloud compute routes create ilb-nhop-dest-10-50-1 \
    --network=testing \
    --destination-range=10.50.1.0/24 \
    --next-hop-ilb=fr-ilb1 \
    --next-hop-ilb-region=us-west1
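Optionally, confirm that the route points at the forwarding rule. This describe command is an extra check, not part of the original steps.

gcloud compute routes describe ilb-nhop-dest-10-50-1 \
    --format="value(destRange, nextHopIlb)"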
Creating the `testing` VM instance

This example creates a VM instance with the IP address `10.30.1.100` in the `testing-subnet` (`10.30.1.0/24`) in the `testing` VPC network.
gcloud
Create the `testing-vm` by running the following command.

gcloud compute instances create testing-vm \
    --zone=us-west1-a \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=testing-subnet \
    --private-network-ip 10.30.1.100 \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://169.254.169.254/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'
Creating the `production` VM instance

This example creates a VM instance with the IP address `10.50.1.100` in the `production-subnet` (`10.50.1.0/24`) in the `production` VPC network.
gcloud
The `production-vm` can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the `production-vm` is in the `us-west1-a` zone.

Create the `production-vm` by running the following command.

gcloud compute instances create production-vm \
    --zone=us-west1-a \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=production-subnet \
    --private-network-ip 10.50.1.100 \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://169.254.169.254/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'
Testing load balancing to a single-NIC deployment
This test contacts the example destination VM in the `production` VPC network from the client VM in the `testing` VPC network. The load balancer is used as a next hop: the custom static route directs the packet with destination `10.50.1.100` through the load balancer, rather than the client sending it to the IP address of the load balancer itself. In this example, `iptables` software on the load balancer's healthy backend appliance VMs processes NAT for the packet.
Connect to the client VM instance.
gcloud compute ssh testing-vm --zone=us-west1-a
Make a web request to the destination instance's web server software using `curl`. The expected output is the content of the index page on the destination instance (`Page served from: destination-instance`).

curl http://10.50.1.100
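If the curl request times out, a useful first check is whether the appliance VMs are passing their health checks. The following command, which this guide also uses later in the multi-NIC test, reports the health of each backend.

gcloud compute backend-services get-health ilb1 \
    --region=us-west1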
Setting up internal TCP/UDP load balancers as next hops with common backends
You can expand the example by load balancing to multiple backend NICs, as described in Load balancing to multiple NICs.
Create the `fw-allow-production-health-check` firewall rule to allow Google Cloud health checks to the third-party appliance VMs in the `production` network.

gcloud compute firewall-rules create fw-allow-production-health-check \
    --network=production \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp,udp,icmp
Create the new instance template.
For this setup to work seamlessly with health checks, you must configure policy-based routing to ensure that egress health-check response packets leave through the correct interface. Health checks use public IP addresses as sources for health check probes. Guest operating systems choose the outgoing NIC based on the destination IP address. If the destination IP address isn't within the subnet range, the outgoing NIC defaults to `nic0`. In such cases, you must configure a separate routing table for each network interface by using policy-based routing.

Note that source-based policy routing does not work on Windows or Mac operating systems.
In the backend VM template, add the following policy routing, where `10.50.1.0/24` is the subnet that contains the load balancer and `eth1` of the multi-NIC VM. The default gateway of this subnet is `10.50.1.1`:

gcloud compute instance-templates create third-party-template-multinic \
    --region=us-west1 \
    --network-interface subnet=testing-subnet,address="" \
    --network-interface subnet=production-subnet \
    --tags=allow-ssh,allow-health-check \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --can-ip-forward \
    --metadata=startup-script='#! /bin/bash
    # Enable IP forwarding:
    echo 1 > /proc/sys/net/ipv4/ip_forward
    echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-iptables.conf
    # Read VM network configuration:
    md_vm="http://169.254.169.254/computeMetadata/v1/instance/"
    md_net="$md_vm/network-interfaces"
    nic0_gw="$(curl $md_net/0/gateway -H "Metadata-Flavor:Google" )"
    nic0_mask="$(curl $md_net/0/subnetmask -H "Metadata-Flavor:Google")"
    nic0_addr="$(curl $md_net/0/ip -H "Metadata-Flavor:Google")"
    nic1_gw="$(curl $md_net/1/gateway -H "Metadata-Flavor:Google")"
    nic1_mask="$(curl $md_net/1/subnetmask -H "Metadata-Flavor:Google")"
    nic1_addr="$(curl $md_net/1/ip -H "Metadata-Flavor:Google")"
    # Source based policy routing for nic1
    echo "100 rt-nic1" >> /etc/iproute2/rt_tables
    ip rule add pri 32000 from $nic1_gw/$nic1_mask table rt-nic1
    sleep 1
    ip route add 35.191.0.0/16 via $nic1_gw dev eth1 table rt-nic1
    ip route add 130.211.0.0/22 via $nic1_gw dev eth1 table rt-nic1
    # Start iptables:
    iptables -t nat -F
    iptables -t nat -A POSTROUTING \
      -s $nic0_gw/$nic0_mask \
      -d $nic1_gw/$nic1_mask \
      -o eth1 \
      -j SNAT \
      --to-source $nic1_addr
    iptables -t nat -A POSTROUTING \
      -s $nic1_gw/$nic1_mask \
      -d $nic0_gw/$nic0_mask \
      -o eth0 \
      -j SNAT \
      --to-source $nic0_addr
    iptables-save
    # Use a web server to pass the health check for this example.
    # You should use a more complete test in production.
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    echo "Example web page to pass health check" | \
      tee /var/www/html/index.html
    systemctl restart apache2'
Update the instance group.

gcloud compute instance-groups managed set-instance-template \
    third-party-instance-group \
    --region us-west1 \
    --template=third-party-template-multinic
Re-create the managed instance group, using the new `third-party-template-multinic` template.

gcloud compute instance-groups managed rolling-action replace \
    third-party-instance-group \
    --region us-west1
Wait for a few minutes for the instances to be ready. You can verify progress by using the `list-instances` command.

gcloud compute instance-groups managed list-instances \
    third-party-instance-group \
    --region us-west1
The output should look like this:

NAME                             ZONE        STATUS   ACTION  INSTANCE_TEMPLATE              VERSION_NAME                        LAST_ERROR
third-party-instance-group-5768  us-west1-a  RUNNING  NONE    third-party-template-multinic  0/2019-10-24 18:48:48.018273+00:00
third-party-instance-group-4zf4  us-west1-b  RUNNING  NONE    third-party-template-multinic  0/2019-10-24 18:48:48.018273+00:00
third-party-instance-group-f6lm  us-west1-c  RUNNING  NONE    third-party-template-multinic  0/2019-10-24 18:48:48.018273+00:00
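Optionally, SSH to one of the new appliance VMs and confirm that the startup script installed the source-based policy routing. This is an extra check; the instance name below is taken from the sample output and will differ in your project, and `rt-nic1` is the table name used in the template.

# From your workstation (replace the instance name with one from your output):
gcloud compute ssh third-party-instance-group-5768 --zone=us-west1-a

# On the appliance VM, the rule and the routes for the health check ranges should be present:
ip rule show
ip route show table rt-nic1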
Configure the load balancer resources in the `production` VPC network.

gcloud compute backend-services create ilb2 \
    --load-balancing-scheme=internal \
    --global-health-checks \
    --health-checks=hc-http-80 \
    --region=us-west1 \
    --network=production

gcloud compute backend-services add-backend ilb2 \
    --instance-group=third-party-instance-group \
    --instance-group-region=us-west1 \
    --region=us-west1

gcloud compute forwarding-rules create fr-ilb2 \
    --load-balancing-scheme=internal \
    --ports=80 \
    --network=production \
    --subnet=production-subnet \
    --region=us-west1 \
    --backend-service=ilb2 \
    --address=10.50.1.99

gcloud compute routes create ilb-nhop-dest-10-30-1 \
    --network=production \
    --destination-range=10.30.1.0/24 \
    --next-hop-ilb=fr-ilb2 \
    --next-hop-ilb-region=us-west1
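Optionally, list both next-hop routes to confirm that each VPC network now routes the other network's subnet through its load balancer. This listing is an extra verification step.

gcloud compute routes list \
    --filter="name~ilb-nhop-dest"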
Testing load balancing to a multi-NIC deployment
Verify the health of the load balancer backends.
gcloud compute backend-services get-health ilb1 --region us-west1
gcloud compute backend-services get-health ilb2 --region us-west1
Test connectivity from the `testing` VM.

gcloud compute ssh testing-vm --zone=us-west1-a
curl http://10.50.1.100
exit
Test connectivity from the `production` VM.

gcloud compute ssh production-vm --zone=us-west1-a
curl http://10.30.1.100
Cleanup
In the load balancer configuration, remove the backend from the backend services.
gcloud compute backend-services remove-backend ilb1 \
    --instance-group=third-party-instance-group \
    --instance-group-region=us-west1 \
    --region=us-west1

gcloud compute backend-services remove-backend ilb2 \
    --instance-group=third-party-instance-group \
    --instance-group-region=us-west1 \
    --region=us-west1
Delete the routes.
gcloud compute routes delete ilb-nhop-dest-10-50-1
gcloud compute routes delete ilb-nhop-dest-10-30-1
In the load balancer configurations, delete the forwarding rules.
gcloud compute forwarding-rules delete fr-ilb1 \
    --region=us-west1

gcloud compute forwarding-rules delete fr-ilb2 \
    --region=us-west1
In the load balancer configurations, delete the backend services.
gcloud compute backend-services delete ilb1 \
    --region=us-west1

gcloud compute backend-services delete ilb2 \
    --region=us-west1
In the load balancer configurations, delete the health check.
gcloud compute health-checks delete hc-http-80
Delete the managed instance group.
gcloud compute instance-groups managed delete third-party-instance-group \
    --region=us-west1
Delete the instance templates.
gcloud compute instance-templates delete third-party-template
gcloud compute instance-templates delete third-party-template-multinic
Delete the testing and production instances.
gcloud compute instances delete testing-vm \
    --zone=us-west1-a

gcloud compute instances delete production-vm \
    --zone=us-west1-a
What's next
- See Internal TCP/UDP Load Balancing Concepts for important fundamentals.
- See Failover concepts for Internal TCP/UDP Load Balancing for important information about failover.
- See Setting Up Internal TCP/UDP Load Balancing for an example internal TCP/UDP load balancer configuration.
- See Internal TCP/UDP Load Balancing Logging and Monitoring for information on configuring Stackdriver logging and monitoring for Internal TCP/UDP Load Balancing.
- See Internal TCP/UDP Load Balancing and Connected Networks for information about accessing internal TCP/UDP load balancers from peer networks connected to your VPC network.
- See Troubleshooting Internal TCP/UDP Load Balancing for information on how to troubleshoot issues with your internal TCP/UDP load balancer.