Setting up Internal TCP/UDP Load Balancing for third-party appliances

This guide uses an example to teach you how to configure a Google Cloud internal TCP/UDP load balancer to be the next hop of a custom static route. Before following this guide, familiarize yourself with Next-hop concepts for Internal TCP/UDP Load Balancing and the Internal TCP/UDP Load Balancing overview.

Permissions

To follow this guide, you need to create instances and modify a network in a project. You should be either a project owner or editor, or you should have all of the following Compute Engine IAM roles:

Task                                                      Required role
Create networks, subnets, and load balancer components    Network Admin
Add and remove firewall rules                             Security Admin
Create instances                                          Instance Admin
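
For example, you could grant these roles with the gcloud CLI. This is a minimal sketch; PROJECT_ID and EXAMPLE_USER@example.com are placeholders, not values used elsewhere in this guide:

    # Placeholders: substitute your own project ID and user account.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=user:EXAMPLE_USER@example.com \
        --role=roles/compute.networkAdmin
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=user:EXAMPLE_USER@example.com \
        --role=roles/compute.securityAdmin
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=user:EXAMPLE_USER@example.com \
        --role=roles/compute.instanceAdmin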

Load balancing to a single backend NIC

This guide shows you how to use an internal TCP/UDP load balancer as the next hop for a custom static route in order to integrate scaled-out virtual appliances.

The solution discussed in this guide integrates virtual appliances so that you don't need to explicitly reconfigure your clients to send traffic to each virtual appliance. The example in this setup guide sends all traffic through a load-balanced set of firewall virtual appliances.

The steps in this section describe how to configure the following resources:

  • Sample VPC networks and custom subnets
  • Google Cloud firewall rules that allow incoming connections to backend VMs
  • A custom static route
  • One client VM to test connections
  • The following internal TCP/UDP load balancer components:
    • Backend VMs in a managed instance group
    • A health check for the backend VM appliances
    • An internal backend service in the us-west1 region to manage connection distribution among the backend VMs
    • An internal forwarding rule and internal IP address for the frontend of the load balancer

The topology looks like this:

Next-Hop Single-NIC Example for Internal TCP/UDP Load Balancing (diagram)

The diagram shows some of the resources that the example creates:

  • Application instances (in this case, VMs running firewall appliance software) behind an internal TCP/UDP load balancer (fr-ilb1, in this example). The application instances only have internal (RFC 1918) IP addresses.
  • Each application instance has its can-ip-forward flag enabled. Without this flag, a Compute Engine VM can transmit a packet only if the packet's source IP address matches one of the VM's own IP addresses. The can-ip-forward flag changes this behavior so that the VM can transmit packets with any source IP address. (A quick verification sketch follows this list.)
  • A custom static route with destination 10.50.1.0/24 and next hop set to the load balancer's forwarding rule, fr-ilb1.
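
After the appliance VMs exist (they are created later in this guide), you can optionally confirm that the flag is set. This is a sketch; VM_NAME is a placeholder for one of the appliance instance names, and the zone must match that instance's zone:

    gcloud compute instances describe VM_NAME \
        --zone=us-west1-a \
        --format="value(canIpForward)"

The command prints True when forwarding is enabled. The flag can be set only when the instance (or its template) is created.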

The diagram also shows the traffic flow:

  • The testing VPC network has a custom static route for traffic that is destined for the 10.50.1.0/24 subnet. This route directs the traffic to the load balancer.
  • The load balancer forwards traffic to one of the application instances based on the configured session affinity. (Session affinity only affects TCP traffic.)
  • The application instance performs source network address translation (SNAT) to deliver packets to the destination VM in the production VPC network. For return traffic, it performs destination network address translation (DNAT) to deliver packets to the client instance in the testing VPC network.

For additional use cases, see Next-hop concepts for Internal TCP/UDP Load Balancing.

Configuring the networks, region, and subnets

This example uses the following VPC networks, region, and subnets:

  • Networks: This example requires two networks, each with at least one subnet. Each backend third-party appliance VM must have at least two network interfaces, one in each VPC network. The networks in this example are custom mode VPC networks named testing and production. The testing network in this example contains the client and the load balancer. The production network contains the destination target VM.

  • Region: The subnets are located in the us-west1 region. The subnets must be in the same region because VM instances are zonal resources.

  • Subnets: The subnets, testing-subnet and production-subnet, use the 10.30.1.0/24 and 10.50.1.0/24 primary IP address ranges, respectively.

To create the example networks and subnets, follow these steps.

Console

Create the testing network and the testing-subnet:

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC network page
  2. Click Create VPC network.
  3. Enter a Name of testing.
  4. In the Subnets section:
    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Name: testing-subnet
      • Region: us-west1
      • IP address range: 10.30.1.0/24
      • Click Done.
  5. Click Create.

Create the production network and the production-subnet:

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC network page
  2. Click Create VPC network.
  3. Enter a Name of production.
  4. In the Subnets section:
    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Name: production-subnet
      • Region: us-west1
      • IP address range: 10.50.1.0/24
      • Click Done.
  5. Click Create.

gcloud

  1. Create the custom VPC networks:

    gcloud compute networks create testing --subnet-mode=custom
    
    gcloud compute networks create production --subnet-mode=custom
    
  2. Create subnets in the testing and production networks in the us-west1 region:

    gcloud compute networks subnets create testing-subnet \
        --network=testing \
        --range=10.30.1.0/24 \
        --region=us-west1
    
    gcloud compute networks subnets create production-subnet \
        --network=production \
        --range=10.50.1.0/24 \
        --region=us-west1
    

Configuring firewall rules

This example uses the following firewall rules:

  • fw-allow-testing-subnet: An ingress rule, applicable to all targets in the testing network, allowing traffic from sources in the 10.30.1.0/24 range. This rule allows the VM instances and third-party VM appliances in the testing-subnet to communicate.

  • fw-allow-production-subnet: An ingress rule, applicable to all targets in the production network, allowing traffic from sources in the 10.50.1.0/24 range. This rule allows the VM instances and third-party VM appliances in the production-subnet to communicate.

  • fw-allow-testing-ssh: An ingress rule applied to the VM instances in the testing VPC network, allowing incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify the IP ranges of the systems from which you plan to initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which the firewall rule applies.

  • fw-allow-production-ssh: An ingress rule applied to the VM instances in the production VPC network, allowing incoming SSH connectivity on TCP port 22 from any address. Like the fw-allow-testing-ssh rule, you can choose a more restrictive source IP range for this rule.

  • fw-allow-health-check: An ingress rule, applicable to the load-balanced third-party VM appliances, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply. (The gcloud steps name this rule fw-allow-testing-health-check to distinguish it from the production counterpart created later.)

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances. You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. Refer to probe IP ranges for more information.

Console

  1. Go to the Firewall rules page in the Google Cloud Console.
    Go to the Firewall rules page
  2. Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:
    • Name: fw-allow-testing-subnet
    • Network: testing
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: All instances in the network
    • Source filter: IP ranges
    • Source IP ranges: 10.30.1.0/24
    • Protocols and ports: Allow all
  3. Click Create.
  4. Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:
    • Name: fw-allow-production-subnet
    • Network: production
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: All instances in the network
    • Source filter: IP ranges
    • Source IP ranges: 10.50.1.0/24
    • Protocols and ports: Allow all
  5. Click Create.
  6. Click Create firewall rule again to create the rule to allow incoming SSH connections:
    • Name: fw-allow-testing-ssh
    • Network: testing
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IP ranges
    • Source IP ranges: 0.0.0.0/0
    • Protocols and ports: Choose Specified protocols and ports and type: tcp:22
  7. Click Create.
  8. Click Create firewall rule again to create the rule to allow incoming SSH connections:
    • Name: fw-allow-production-ssh
    • Network: production
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IP ranges
    • Source IP ranges: 0.0.0.0/0
    • Protocols and ports: Choose Specified protocols and ports and type: tcp:22
  9. Click Create.
  10. Click Create firewall rule again to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: testing
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-health-check
    • Source filter: IP ranges
    • Source IP ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports: Allow all
  11. Click Create.

gcloud

  1. Create the fw-allow-testing-subnet firewall rule to allow communication from within the subnet:

    gcloud compute firewall-rules create fw-allow-testing-subnet \
        --network=testing \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.30.1.0/24 \
        --rules=tcp,udp,icmp
    
  2. Create the fw-allow-production-subnet firewall rule to allow communication from within the subnet:

    gcloud compute firewall-rules create fw-allow-production-subnet \
        --network=production \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.50.1.0/24 \
        --rules=tcp,udp,icmp
    
  3. Create the fw-allow-testing-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source (0.0.0.0/0); an optional verification command follows this list.

    gcloud compute firewall-rules create fw-allow-testing-ssh \
        --network=testing \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    
  4. Create the fw-allow-production-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh.

    gcloud compute firewall-rules create fw-allow-production-ssh \
        --network=production \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    
  5. Create the fw-allow-testing-health-check rule to allow Google Cloud health checks to the third-party appliance VMs in the testing network.

    gcloud compute firewall-rules create fw-allow-testing-health-check \
        --network=testing \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp,udp,icmp
    
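Optionally, verify that the omitted source range in fw-allow-testing-ssh defaulted to any source. This is an optional check; the command should print 0.0.0.0/0:

    gcloud compute firewall-rules describe fw-allow-testing-ssh \
        --format="value(sourceRanges)"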

Creating the third-party virtual appliances

The following steps demonstrate how to create an instance template and managed regional instance group using the iptables software as a third-party virtual appliance.

Console

You must use gcloud for this step because the Cloud Console does not currently support creating instance templates with more than one network interface.

gcloud

  1. Create an instance template for your third-party virtual appliances. The instance template must include the --can-ip-forward flag so that the VM instances created from the template can forward packets from other instances in the testing and production networks.

    gcloud compute instance-templates create third-party-template \
        --region=us-west1 \
        --network-interface subnet=testing-subnet,address="" \
        --network-interface subnet=production-subnet \
        --tags=allow-ssh,allow-health-check \
        --image-family=debian-9 \
        --image-project=debian-cloud \
        --can-ip-forward \
        --metadata=startup-script='#! /bin/bash
        # Enable IP forwarding:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-iptables.conf
        # Read VM network configuration:
        md_vm="http://169.254.169.254/computeMetadata/v1/instance/"
        md_net="$md_vm/network-interfaces"
        nic0_gw="$(curl -H "Metadata-Flavor:Google" $md_net/0/gateway)"
        nic0_mask="$(curl -H "Metadata-Flavor:Google" $md_net/0/subnetmask)"
        nic1_gw="$(curl -H "Metadata-Flavor:Google" $md_net/1/gateway)"
        nic1_mask="$(curl -H "Metadata-Flavor:Google" $md_net/1/subnetmask)"
        nic1_addr="$(curl -H "Metadata-Flavor:Google" $md_net/1/ip)"
        # Start iptables:
        /sbin/iptables -t nat -F
        /sbin/iptables -t nat -A POSTROUTING \
        -s "$nic0_gw/$nic0_mask" \
        -d "$nic1_gw/$nic1_mask" \
        -o eth1 \
        -j SNAT \
        --to-source "$nic1_addr"
        /sbin/iptables-save
        # Use a web server to pass the health check for this example.
        # You should use a more complete test in production.
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        echo "Example web page to pass health check" | \
        tee /var/www/html/index.html
        systemctl restart apache2'
    
  2. Create a managed instance group for your third-party virtual appliances. This command creates a regional managed instance group in us-west1, which can then be autoscaled. (An optional verification command follows these steps.)

    gcloud compute instance-groups managed create third-party-instance-group \
        --region=us-west1 \
        --template=third-party-template \
        --size=3
    
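Optionally, confirm that the three appliance VMs were created and are running. The instance names in the output are generated by the group and will differ in your project:

    gcloud compute instance-groups managed list-instances \
        third-party-instance-group \
        --region=us-west1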

Creating the load balancing resources

These steps configure all of the internal TCP/UDP load balancer components starting with the health check and backend service, and then the frontend components:

  • Health check: In this example, the HTTP health check checks for an HTTP 200 (OK) response. For more information, see the health checks section of the Internal TCP/UDP Load Balancing overview.

  • Backend service: Even though this example's backend service specifies the TCP protocol, when the load balancer is the next hop for a route, both TCP and UDP traffic are sent to the load balancer's backends.

  • Forwarding rule: Even though this example forwarding rule specifies TCP port 80, when the load balancer is the next hop for a route, traffic on any TCP or UDP port is sent to the load balancer's backends.

  • Internal IP address: The example specifies an internal IP address, 10.30.1.99, for the forwarding rule.

Console

Create the load balancer and configure a backend service

  1. Go to the Load balancing page in the Google Cloud Console.
    Go to the Load balancing page
  2. Click Create load balancer.
  3. Under TCP load balancing, click Start configuration.
  4. Under Internet facing or internal only, select Only between my VMs.
  5. Click Continue.
  6. Set the Name to ilb1.
  7. Click Backend configuration and make the following changes:
    1. Region: us-west1
    2. Network: testing
    3. Under Backends, in the New item section, select the third-party-instance-group instance group and click Done.
    4. Under Health check, choose Create another health check, enter the following information, and click Save and continue:
      • Name: hc-http-80
      • Protocol: HTTP
      • Port: 80
      • Proxy protocol: NONE
      • Request path: /
    5. Verify that there is a blue check mark next to Backend configuration before continuing. Review this step if not.
  8. Click Frontend configuration. In the New Frontend IP and port section, make the following changes:
    1. Name: fr-ilb1
    2. Subnetwork: testing-subnet
    3. From Internal IP, choose Reserve a static internal IP address, enter the following information, and click Reserve:
      • Name: ip-ilb
      • Static IP address: Let me choose
      • Custom IP address: 10.30.1.99
    4. Ports: Choose Single, and enter 80 for the Port number. Remember that the choice of a protocol and port for the load balancer does not limit the protocols and ports that are used when the load balancer is the next hop of a route.
    5. Verify that there is a blue check mark next to Frontend configuration before continuing. Review this step if not.
  9. Click Review and finalize. Double-check your settings.
  10. Click Create.

gcloud

  1. Create a new HTTP health check to test TCP connectivity to the VMs on port 80.

    gcloud compute health-checks create http hc-http-80 \
        --port=80
    
  2. Create an internal backend service in the us-west1 region.

    gcloud compute backend-services create ilb1 \
        --load-balancing-scheme=internal \
        --region=us-west1 \
        --health-checks=hc-http-80
    
  3. Add the instance group containing the third-party virtual appliances as a backend on the backend service.

    gcloud compute backend-services add-backend ilb1 \
        --instance-group=third-party-instance-group \
        --instance-group-region=us-west1 \
        --region=us-west1
    
  4. Create the internal forwarding rule and connect it to the backend service to complete the load balancer configuration. Remember that the protocol (TCP) and port (80) of the internal load balancer do not limit the ports and protocols that are forwarded to the backend instances (the third-party virtual appliances) when the load balancer is used as the next hop of a route.

    gcloud compute forwarding-rules create fr-ilb1 \
        --load-balancing-scheme=internal \
        --ports=80 \
        --network=testing \
        --subnet=testing-subnet \
        --region=us-west1 \
        --backend-service=ilb1 \
        --address=10.30.1.99
    

Creating the static route that defines the load balancer as the next hop

When you create a static route, you cannot use next-hop-address to point to the IP address of a load balancer's forwarding rule. This is because when you use next-hop-address, Google Cloud passes traffic to the VM instance assigned to that IP address, and a load balancer is not a VM instance. Instead, if you want to designate a load balancer as next hop, you must use the next-hop-ilb flag, as demonstrated in this example.

Console

  1. Go to the Routes page in the Google Cloud Console.
    Go to the Routes page
  2. Click Create route.
  3. For the route Name, enter ilb-nhop-dest-10-50-1.
  4. Select the testing network.
  5. For the Destination IP range, enter 10.50.1.0/24.
  6. Make sure tags aren't specified, as they aren't currently supported with this feature.
  7. For the route's Next hop, select Specify a forwarding rule of internal TCP/UDP load balancer.
  8. For the next-hop region, select us-west1.
  9. For the forwarding rule name, select fr-ilb1.
  10. Click Create.

gcloud

Create an advanced route with the next hop set to the load balancer's forwarding rule and the destination range set to 10.50.1.0/24.

gcloud compute routes create ilb-nhop-dest-10-50-1 \
    --network=testing \
    --destination-range=10.50.1.0/24 \
    --next-hop-ilb=fr-ilb1 \
    --next-hop-ilb-region=us-west1
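
To confirm that the route records the forwarding rule as its next hop, you can optionally describe it; the nextHopIlb value in the output should reference fr-ilb1:

    gcloud compute routes describe ilb-nhop-dest-10-50-1 \
        --format="value(nextHopIlb)"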

Creating the testing VM instance

This example creates a VM instance with the IP address 10.30.1.100 in the testing-subnet (10.30.1.0/24) in the testing VPC network.

gcloud

  1. Create the testing-vm by running the following command.

    gcloud compute instances create testing-vm \
        --zone=us-west1-a \
        --image-family=debian-9 \
        --image-project=debian-cloud \
        --tags=allow-ssh \
        --subnet=testing-subnet \
        --private-network-ip 10.30.1.100 \
        --metadata=startup-script='#! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" \
        http://169.254.169.254/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | \
        tee /var/www/html/index.html
        systemctl restart apache2'
    

Creating the production VM instance

This example creates a VM instance with the IP address 10.50.1.100 in the production-subnet (10.50.1.0/24) in the production VPC network.

gcloud

The production-vm can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the production-vm is in the us-west1-a zone.

  1. Create the production-vm by running the following command.

    gcloud compute instances create production-vm \
        --zone=us-west1-a \
        --image-family=debian-9 \
        --image-project=debian-cloud \
        --tags=allow-ssh \
        --subnet=production-subnet \
        --private-network-ip 10.50.1.100 \
        --metadata=startup-script='#! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" \
        http://169.254.169.254/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | \
        tee /var/www/html/index.html
        systemctl restart apache2'
    

Testing load balancing to a single-NIC deployment

This test contacts the destination VM in the production VPC network from the client VM in the testing VPC network. The load balancer serves as a next hop: the custom static route directs packets destined for 10.50.1.100 through the load balancer, rather than to the load balancer's own IP address.

In this example, iptables software on the load balancer's healthy backend appliance VMs processes NAT for the packet.

  1. Connect to the client VM instance.

    gcloud compute ssh testing-vm --zone=us-west1-a
    
  2. Make a web request to the destination instance's web server software by using curl. The expected output is the content of the index page on the destination instance (Page served from: production-vm). If the request fails, see the health check command after this list.

    curl http://10.50.1.100
    
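If the request times out, a likely cause is that no backend appliance is passing its health check. You can check backend health from outside the SSH session; the same command is used again in the multi-NIC test later in this guide:

    gcloud compute backend-services get-health ilb1 \
        --region=us-west1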

Setting up internal TCP/UDP load balancers as next hops with common backends

You can expand the example by load balancing to multiple backend NICs, as described in Load balancing to multiple NICs.

Next-Hop Multi-NIC Detailed Example for Internal TCP/UDP Load Balancing (diagram)

  1. Create the fw-allow-production-health-check firewall rule to allow Google Cloud health checks to the third-party appliance VMs in the production network.

    gcloud compute firewall-rules create fw-allow-production-health-check \
        --network=production \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp,udp,icmp
    
  2. Create the new instance template.

    For this setup to work seamlessly with health checks, you must configure policy-based routing so that egress health-check response packets leave through the correct interface. Health check probes use public IP addresses as their sources, and guest operating systems choose the outgoing NIC based on the destination IP address. If the destination IP address isn't within a directly connected subnet range, the outgoing NIC defaults to nic0. In such cases, you must configure a separate routing table for each network interface by using policy-based routing.

    Note that source-based policy routing does not work on Windows or Mac operating systems.

    In the backend VM template, add the following policy routing, where 10.50.1.0/24 is the subnet that contains the load balancer and the eth1 interface of the multi-NIC VM, and 10.50.1.1 is that subnet's default gateway:

    gcloud compute instance-templates create third-party-template-multinic \
        --region=us-west1 \
        --network-interface subnet=testing-subnet,address="" \
        --network-interface subnet=production-subnet \
        --tags=allow-ssh,allow-health-check \
        --image-family=debian-9 \
        --image-project=debian-cloud \
        --can-ip-forward \
        --metadata=startup-script='#! /bin/bash
        # Enable IP forwarding:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-iptables.conf
        # Read VM network configuration:
        md_vm="http://169.254.169.254/computeMetadata/v1/instance/"
        md_net="$md_vm/network-interfaces"
        nic0_gw="$(curl $md_net/0/gateway -H "Metadata-Flavor:Google" )"
        nic0_mask="$(curl $md_net/0/subnetmask -H "Metadata-Flavor:Google")"
        nic0_addr="$(curl $md_net/0/ip -H "Metadata-Flavor:Google")"
        nic1_gw="$(curl $md_net/1/gateway -H "Metadata-Flavor:Google")"
        nic1_mask="$(curl $md_net/1/subnetmask -H "Metadata-Flavor:Google")"
        nic1_addr="$(curl $md_net/1/ip -H "Metadata-Flavor:Google")"
        # Source based policy routing for nic1
        echo "100 rt-nic1" >> /etc/iproute2/rt_tables
        ip rule add pri 32000 from $nic1_gw/$nic1_mask table rt-nic1
        sleep 1
        ip route add 35.191.0.0/16 via $nic1_gw dev eth1 table rt-nic1
        ip route add 130.211.0.0/22 via $nic1_gw dev eth1 table rt-nic1
        # Start iptables:
        iptables -t nat -F
        iptables -t nat -A POSTROUTING \
        -s $nic0_gw/$nic0_mask \
        -d $nic1_gw/$nic1_mask \
        -o eth1 \
        -j SNAT \
        --to-source $nic1_addr
        iptables -t nat -A POSTROUTING \
        -s $nic1_gw/$nic1_mask \
        -d $nic0_gw/$nic0_mask \
        -o eth0 \
        -j SNAT \
        --to-source $nic0_addr
        iptables-save
        # Use a web server to pass the health check for this example.
        # You should use a more complete test in production.
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        echo "Example web page to pass health check" | \
        tee /var/www/html/index.html
        systemctl restart apache2'
    
  3. Update the instance group.

    gcloud compute instance-groups managed set-instance-template \
        third-party-instance-group \
        --region us-west1 \
        --template=third-party-template-multinic
    
  4. Re-create the instances in the managed instance group by using the new third-party-template-multinic template.

    gcloud compute instance-groups managed rolling-action replace \
        third-party-instance-group \
        --region us-west1
    

    Wait for a few minutes for the instances to be ready. You can verify progress by using the list-instances command.

    gcloud compute instance-groups managed list-instances \
        third-party-instance-group \
        --region us-west1
    

    The output should look like this:

    NAME                             ZONE        STATUS   ACTION  INSTANCE_TEMPLATE              VERSION_NAME                        LAST_ERROR
    third-party-instance-group-5768  us-west1-a  RUNNING  NONE    third-party-template-multinic  0/2019-10-24 18:48:48.018273+00:00
    third-party-instance-group-4zf4  us-west1-b  RUNNING  NONE    third-party-template-multinic  0/2019-10-24 18:48:48.018273+00:00
    third-party-instance-group-f6lm  us-west1-c  RUNNING  NONE    third-party-template-multinic  0/2019-10-24 18:48:48.018273+00:00
    
  5. Configure the load balancer resources in the production VPC network. (An optional route verification follows these steps.)

    gcloud compute backend-services create ilb2 \
        --load-balancing-scheme=internal \
        --global-health-checks \
        --health-checks=hc-http-80 \
        --region=us-west1 \
        --network=production
    
    gcloud compute backend-services add-backend ilb2 \
        --instance-group=third-party-instance-group \
        --instance-group-region=us-west1 \
        --region=us-west1
    
    gcloud compute forwarding-rules create fr-ilb2 \
        --load-balancing-scheme=internal \
        --ports=80 \
        --network=production \
        --subnet=production-subnet \
        --region=us-west1 \
        --backend-service=ilb2 \
        --address=10.50.1.99
    
    gcloud compute routes create ilb-nhop-dest-10-30-1 \
        --network=production \
        --destination-range=10.30.1.0/24 \
        --next-hop-ilb=fr-ilb2 \
        --next-hop-ilb-region=us-west1
    
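As with the route in the testing network, you can optionally confirm that the production route's next hop references fr-ilb2:

    gcloud compute routes describe ilb-nhop-dest-10-30-1 \
        --format="value(nextHopIlb)"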

Testing load balancing to a multi-NIC deployment

  1. Verify the health of the load balancer backends.

    gcloud compute backend-services get-health ilb1 --region us-west1
    
    gcloud compute backend-services get-health ilb2 --region us-west1
    
  2. Test connectivity from the testing VM.

    gcloud compute ssh testing-vm --zone=us-west1-a
    
    curl http://10.50.1.100
    
    exit
    
  3. Test connectivity from the production VM.

    gcloud compute ssh production-vm --zone=us-west1-a
    
    curl http://10.30.1.100
    

Cleanup

  1. In the load balancer configuration, remove the backend from the backend services.

    gcloud compute backend-services remove-backend ilb1 \
        --instance-group=third-party-instance-group \
        --instance-group-region=us-west1 \
        --region=us-west1
    
    gcloud compute backend-services remove-backend ilb2 \
        --instance-group=third-party-instance-group \
        --instance-group-region=us-west1 \
        --region=us-west1
    
  2. Delete the routes.

    gcloud compute routes delete ilb-nhop-dest-10-50-1
    
    gcloud compute routes delete ilb-nhop-dest-10-30-1
    
  3. In the load balancer configurations, delete the forwarding rules.

    gcloud compute forwarding-rules delete fr-ilb1 \
        --region=us-west1
    
    gcloud compute forwarding-rules delete fr-ilb2 \
        --region=us-west1
    
  4. In the load balancer configurations, delete the backend services.

    gcloud compute backend-services delete ilb1 \
        --region=us-west1
    
    gcloud compute backend-services delete ilb2 \
        --region=us-west1
    
  5. In the load balancer configurations, delete the health check.

    gcloud compute health-checks delete hc-http-80
    
  6. Delete the managed instance group.

    gcloud compute instance-groups managed delete third-party-instance-group \
        --region=us-west1
    
  7. Delete the instance templates.

    gcloud compute instance-templates delete third-party-template
    
    gcloud compute instance-templates delete third-party-template-multinic
    
  8. Delete the testing and production instances.

    gcloud compute instances delete testing-vm \
        --zone=us-west1-a
    
    gcloud compute instances delete production-vm \
        --zone=us-west1-a
    


What's next

  • For additional use cases, see Next-hop concepts for Internal TCP/UDP Load Balancing.
