Set up an external passthrough Network Load Balancer for multiple IP protocols

This guide provides instructions for creating backend service-based external passthrough Network Load Balancers that load-balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic. You can use such a configuration if you want to load-balance traffic that uses IP protocols other than TCP or UDP. Target pool-based external passthrough Network Load Balancers don't support this capability.

To configure an external passthrough Network Load Balancer for IP protocols other than TCP or UDP, you create a forwarding rule with protocol set to L3_DEFAULT. This forwarding rule points to a backend service with protocol set to UNSPECIFIED.
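
The following commands preview that pairing; they are shown here only for orientation, and the complete, step-by-step versions appear in the gcloud instructions later on this page.

# Backend service that accepts any IP protocol
gcloud compute backend-services create backend-service-l3-default \
    --protocol UNSPECIFIED \
    --health-checks tcp-health-check-8080 \
    --health-checks-region us-central1 \
    --region us-central1

# Forwarding rule that matches all remaining IP protocols and all ports
gcloud compute forwarding-rules create forwarding-rule-l3-default \
    --load-balancing-scheme external \
    --region us-central1 \
    --ports all \
    --ip-protocol L3_DEFAULT \
    --address network-lb-ipv4 \
    --backend-service backend-service-l3-default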

This example uses two external passthrough Network Load Balancers to distribute traffic across backend VMs in two zonal managed instance groups in the us-central1 region. Both load balancers receive traffic at the same external IP address.

One load balancer has a forwarding rule with protocol TCP and port 80, and the other load balancer has a forwarding rule with protocol L3_DEFAULT. TCP traffic arriving at the IP address on port 80 is handled by the TCP forwarding rule. All other traffic that does not match the TCP-specific forwarding rule is handled by the L3_DEFAULT forwarding rule.

External passthrough Network Load Balancer with zonal managed instance groups

This scenario distributes traffic only across healthy instances. To support this, you create TCP health checks.

The external passthrough Network Load Balancer is a regional load balancer. All load balancer components must be in the same region.

Before you begin

Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud CLI overview. You can find commands related to load balancing in the API and gcloud reference.

If you haven't run the gcloud CLI previously, first run the gcloud init command to authenticate.

This guide assumes that you are familiar with bash.
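
For example, a minimal first-time setup might look like the following; PROJECT_ID is a placeholder for your own project ID.

# Authenticate and create a default gcloud configuration
gcloud init

# Optionally set a default project and region for later commands
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1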

Set up the network and subnets

The example on this page uses a custom mode VPC network named lb-network. You can use an auto mode VPC network if you only want to handle IPv4 traffic. However, IPv6 traffic requires a custom mode subnet.

IPv6 traffic also requires a dual-stack subnet (stack-type set to IPV4_IPV6). When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, the subnet's ipv6-access-type parameter is set to EXTERNAL, which means that new VMs on this subnet can be assigned both external IPv4 addresses and external IPv6 addresses. The forwarding rules can also be assigned both external IPv4 addresses and external IPv6 addresses.

The backends and the load balancer components used for this example are located in this region and subnet:

  • Region: us-central1
  • Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed size (/64) IPv6 CIDR block.

To create the example network and subnet, follow these steps.

Console

To support both IPv4 and IPv6 traffic, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Enter a Name of lb-network.

  4. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, configure the following fields and click Done:
      • Name: lb-subnet
      • Region: us-central1
      • IP stack type: IPv4 and IPv6 (dual-stack)
      • IPv4 range: 10.1.2.0/24
        Although you can configure an IPv4 range of addresses for the subnet, you cannot choose the range of the IPv6 addresses for the subnet. Google provides a fixed size (/64) IPv6 CIDR block.
      • IPv6 access type: External
  5. Click Create.

To support IPv4 traffic only, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Enter a Name of lb-network.

  4. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, configure the following fields and click Done:
      • Name: lb-subnet
      • Region: us-central1
      • IP stack type: IPv4 (single-stack)
      • IPv4 range: 10.1.2.0/24
  5. Click Create.

gcloud

  1. Create the custom mode VPC network:

    gcloud compute networks create lb-network \
        --subnet-mode=custom
    
  2. Within the lb-network network, create a subnet for backends in the us-central1 region.

    For both IPv4 and IPv6 traffic, use the following command to create a dual-stack subnet:

    gcloud compute networks subnets create lb-subnet \
      --stack-type=IPV4_IPV6 \
      --ipv6-access-type=EXTERNAL \
      --network=lb-network \
      --range=10.1.2.0/24 \
      --region=us-central1
    

    For IPv4 traffic only, use the following command:

    gcloud compute networks subnets create lb-subnet \
      --network=lb-network \
      --range=10.1.2.0/24 \
      --region=us-central1
    

Create the zonal managed instance groups

For this load balancing scenario, you create two Compute Engine zonal managed instance groups and install an Apache web server on each instance.

To handle both IPv4 and IPv6 traffic, configure the backend VMs to be dual-stack. Set the VM's stack-type to IPV4_IPV6. The VMs also inherit the ipv6-access-type setting (in this example, EXTERNAL) from the subnet. For more details about IPv6 requirements, see the External passthrough Network Load Balancer overview: Forwarding rules.

To use existing VMs as backends, update the VMs to be dual-stack by using the gcloud compute instances network-interfaces update command.
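
For example, the following sketch converts an existing VM's primary network interface to dual stack; VM_NAME, the zone, and the nic0 interface name are placeholders for your own VM.

# Convert an existing VM's nic0 interface to dual stack (IPv4 and IPv6)
gcloud compute instances network-interfaces update VM_NAME \
    --zone us-central1-a \
    --network-interface nic0 \
    --stack-type IPV4_IPV6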

Instances that participate as backend VMs for external passthrough Network Load Balancers must run the appropriate Linux guest environment, Windows guest environment, or other processes that provide equivalent capability.

Create the instance group for TCP traffic on port 80

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For Name, enter ig-us-template-tcp-80.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Expand the Advanced options section.
    5. Expand the Management section, and then copy the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
      
    6. Expand the Networking section, and then specify the following:

      1. For Network tags, add network-lb-tcp-80.
      2. For Network interfaces, click the default interface and configure the following fields:
        1. Network: lb-network
        2. Subnetwork: lb-subnet
    7. Click Create.

  2. Create a managed instance group. Go to the Instance groups page in the Google Cloud console.

    Go to Instance groups

    1. Click Create instance group.
    2. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For the Name, enter ig-us-tcp-80.
    4. Under Location, select Single zone.
    5. For the Region, select us-central1.
    6. For the Zone, select us-central1-a.
    7. Under Instance template, select ig-us-template-tcp-80.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    9. Click Create.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    To handle both IPv4 and IPv6 traffic, use the following command.

    gcloud compute instance-templates create ig-us-template-tcp-80 \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=network-lb-tcp-80 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'
    

    Or, if you want to handle IPv4 traffic only, use the following command.

    gcloud compute instance-templates create ig-us-template-tcp-80 \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=network-lb-tcp-80 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'
    
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create ig-us-tcp-80 \
        --zone us-central1-a \
        --size 2 \
        --template ig-us-template-tcp-80
    
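Optionally, confirm that the group has created its two instances before you continue:

# List the instances created by the managed instance group
gcloud compute instance-groups managed list-instances ig-us-tcp-80 \
    --zone us-central1-a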

Create the instance group for TCP on port 8080, UDP, ESP, and ICMP traffic

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For the Name, enter ig-us-template-l3-default.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Expand the Advanced options section.
    5. Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
      systemctl restart apache2
      
    6. Expand the Networking section, and then specify the following:

      1. For Network tags, add network-lb-l3-default.
      2. For Network interfaces, click the default interface and configure the following fields:
        1. Network: lb-network
        2. Subnetwork: lb-subnet
    7. Click Create.

  2. Create a managed instance group. Go to the Instance groups page in the Google Cloud console.

    Go to Instance groups

    1. Click Create instance group.
    2. Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For the Name, enter ig-us-l3-default.
    4. Under Location, select Single zone.
    5. For the Region, select us-central1.
    6. For the Zone, select us-central1-c.
    7. Under Instance template, select ig-us-template-l3-default.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    9. Click Create.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    The startup script also configures the Apache server to listen on port 8080 instead of port 80.

    To handle both IPv4 and IPv6 traffic, use the following command.

    gcloud compute instance-templates create ig-us-template-l3-default \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=network-lb-l3-default \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
    systemctl restart apache2'
    

    Or, if you want to handle IPv4 traffic only, use the following command.

    gcloud compute instance-templates create ig-us-template-l3-default \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=network-lb-l3-default \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
    systemctl restart apache2'
    
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create ig-us-l3-default \
        --zone us-central1-c \
        --size 2 \
        --template ig-us-template-l3-default
    

Configure firewall rules

Create the following firewall rules:

  • Firewall rules that allow external TCP traffic to reach backend instances in the ig-us-tcp-80 instance group on port 80 (using target tag network-lb-tcp-80). Create separate firewall rules to allow IPv4 and IPv6 traffic.
  • Firewall rules that allow other external traffic (TCP on port 8080, UDP, ESP, and ICMP) to reach backend instances in the ig-us-l3-default instance group (using target tag network-lb-l3-default). Create separate firewall rules to allow IPv4 and IPv6 traffic.

This example creates firewall rules that allow traffic from all source ranges to reach your backend instances on the configured ports. If you want to create separate firewall rules specifically for the health check probes, use the source IP address ranges documented in the Health checks overview: Probe IP ranges and firewall rules.
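
For example, a narrower IPv4 rule that admits only health check probes to the port 80 backends might look like the following sketch. The ranges 35.191.0.0/16 and 130.211.0.0/22 are the IPv4 probe ranges documented in the health checks overview; confirm them there before relying on this rule, and note that you still need separate rules for client traffic. The rule name here is only an example.

# Allow only Google Cloud health check probes to reach the port 80 backends
gcloud compute firewall-rules create allow-health-check-probes-tcp-80 \
    --network=lb-network \
    --target-tags network-lb-tcp-80 \
    --allow tcp:80 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22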

Console

  1. In the Google Cloud console, go to the Firewall policies page.
    Go to Firewall policies
  2. To allow IPv4 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule.
    1. Click Create firewall rule.
    2. Enter a Name of allow-network-lb-tcp-80-ipv4.
    3. Select the Network that the firewall rule applies to (lb-network).
    4. Under Targets, select Specified target tags.
    5. In the Target tags field, enter network-lb-tcp-80.
    6. Set Source filter to IPv4 ranges.
    7. Set the Source IPv4 ranges to 0.0.0.0/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
    8. Under Protocols and ports, select Specified protocols and ports. Then select the TCP checkbox and enter 80.
    9. Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
  3. To allow IPv4 UDP, ESP, and ICMP traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule.
    1. Click Create firewall rule.
    2. Enter a Name of allow-network-lb-l3-default-ipv4.
    3. Select the Network that the firewall rule applies to (lb-network).
    4. Under Targets, select Specified target tags.
    5. In the Target tags field, enter network-lb-l3-default.
    6. Set Source filter to IPv4 ranges.
    7. Set the Source IPv4 ranges to 0.0.0.0/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
    8. Under Protocols and ports, select Specified protocols and ports.
      1. Select the TCP checkbox and enter 8080.
      2. Select the UDP checkbox.
      3. Select the Other checkbox and enter esp, icmp.
    9. Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
  4. To allow IPv6 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule.
    1. Click Create firewall rule.
    2. Enter a Name of allow-network-lb-tcp-80-ipv6.
    3. Select the Network that the firewall rule applies to (lb-network).
    4. Under Targets, select Specified target tags.
    5. In the Target tags field, enter network-lb-tcp-80.
    6. Set Source filter to IPv6 ranges.
    7. Set the Source IPv6 ranges to ::/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
    8. Under Protocols and ports, select Specified protocols and ports. Click the checkbox next to TCP and enter 80.
    9. Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
  5. To allow IPv6 UDP, ESP, and ICMPv6 traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule. This firewall rule also allows TCP health check probes to reach the instances on port 8080.
    1. Click Create firewall rule.
    2. Enter a Name of allow-network-lb-l3-default-ipv6.
    3. Select the Network that the firewall rule applies to (lb-network).
    4. Under Targets, select Specified target tags.
    5. In the Target tags field, enter network-lb-l3-default.
    6. Set Source filter to IPv6 ranges.
    7. Set the Source IPv6 ranges to ::/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
    8. Under Protocols and ports, select Specified protocols and ports.
      1. Click the checkbox next to TCP and enter 8080.
      2. Click the checkbox next to UDP.
      3. Click the checkbox next to Other and enter esp, 58.
    9. Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.

gcloud

  1. To allow IPv4 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule.

    gcloud compute firewall-rules create allow-network-lb-tcp-80-ipv4 \
        --network=lb-network \
        --target-tags network-lb-tcp-80 \
        --allow tcp:80 \
        --source-ranges=0.0.0.0/0
    
  2. To allow IPv4 UDP, ESP, and ICMP traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule. This firewall rule also allows TCP health check probes to reach the instances on port 8080.

    gcloud compute firewall-rules create allow-network-lb-l3-default-ipv4 \
        --network=lb-network \
        --target-tags network-lb-l3-default \
        --allow tcp:8080,udp,esp,icmp \
        --source-ranges=0.0.0.0/0
    
  3. To allow IPv6 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule.

    gcloud compute firewall-rules create allow-network-lb-tcp-80-ipv6 \
        --network=lb-network \
        --target-tags network-lb-tcp-80 \
        --allow tcp:80 \
        --source-ranges=::/0
    
  4. To allow IPv6 UDP, ESP, and ICMPv6 traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule. This firewall rule also allows TCP health check probes to reach the instances on port 8080.

    gcloud compute firewall-rules create allow-network-lb-l3-default-ipv6 \
        --network=lb-network \
        --target-tags network-lb-l3-default \
        --allow tcp:8080,udp,esp,58 \
        --source-ranges=::/0
    
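Optionally, verify that all four rules were created. The following filter matches the rule names used in this example:

# List the firewall rules created for this example
gcloud compute firewall-rules list \
    --filter="name~^allow-network-lb"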

Configure the load balancers

Next, set up two load balancers that use the same external IP address for their forwarding rules. One load balancer handles TCP traffic on port 80, and the other handles TCP traffic on port 8080 as well as UDP, ESP, and ICMP traffic.

When you configure a load balancer, your backend VM instances receive packets that are destined for the static external IP address you configure. If you are using an image provided by Compute Engine, your instances are automatically configured to handle this IP address. If you are using any other image, you must configure this address as an alias on eth0 or as a loopback on each instance.
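
If your backends use an image without the guest environment, one manual approach on Linux looks like the following sketch. EXTERNAL_IP is a placeholder for the load balancer's address, and these commands don't persist across reboots; the Compute Engine guest environment normally handles this for you.

# Option 1: add the load balancer address as an alias on the primary interface
sudo ip addr add EXTERNAL_IP/32 dev eth0

# Option 2: add the load balancer address to the loopback interface
sudo ip addr add EXTERNAL_IP/32 dev lo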

To set up the two load balancers, use the following instructions.

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Passthrough load balancer and click Next.
  5. For Public facing or internal, select Public facing (external) and click Next.
  6. Click Configure.

Basic configuration

  1. In the Name field, enter the name backend-service-tcp-80 for the new load balancer.
  2. In the Region list, select us-central1.

Backend configuration

  1. Click Backend configuration.
  2. On the Backend configuration page, make the following changes:
    1. In the New Backend section, select the IP stack type. If you created dual-stack backends to handle both IPv4 and IPv6 traffic, select IPv4 and IPv6 (dual-stack). To handle IPv4 traffic only, select IPv4 (single-stack).
    2. In the Instance group list, select ig-us-tcp-80, and then click Done.
    3. In the Health check list, click Create a health check, and then enter the following information:
      • Name: tcp-health-check-80
      • Protocol: TCP
      • Port: 80
    4. Click Save.
  3. Verify that there is a blue checkmark next to Backend configuration before continuing.

Frontend configuration

  1. Click Frontend configuration.
  2. In the Name field, enter forwarding-rule-tcp-80.
  3. To handle IPv4 traffic, use the following steps:
    1. For IP version, select IPv4.
    2. In the IP address list, select Create IP address.
      1. In the Name field, enter network-lb-ipv4.
      2. Click Reserve.
    3. For Ports, choose Single. In the Port number field, enter 80.
    4. Click Done.
  4. To handle IPv6 traffic, use the following steps:

    1. For IP version, select IPv6.
    2. For Subnetwork, select lb-subnet.
    3. In the IPv6 range list, select Create IP address.
      1. In the Name field, enter network-lb-ipv6.
      2. Click Reserve.
    4. For Ports, choose Single. In the Port number field, enter 80.
    5. Click Done.

    A blue circle with a checkmark to the left of Frontend configuration indicates a successful setup.

Review the configuration

  1. Click Review and finalize.
  2. Review your load balancer configuration settings.
  3. Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
  4. Click Create.

    On the load balancing page, under the Backend column for your new load balancer, you should see a green checkmark showing that the new load balancer is healthy.

Create the second load balancer

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Passthrough load balancer and click Next.
  5. For Public facing or internal, select Public facing (external) and click Next.
  6. Click Configure.

Basic configuration

  1. In the Name field, enter the name backend-service-l3-default for the new load balancer.
  2. In the Region list, select us-central1.

Backend configuration

  1. Click Backend configuration.
  2. On the Backend configuration page, make the following changes:
    1. In the New Backend section, select the IP stack type. If you created dual-stack backends to handle both IPv4 and IPv6 traffic, select IPv4 and IPv6 (dual-stack). To handle IPv4 traffic only, select IPv4 (single-stack).
    2. In the Instance group list, select ig-us-l3-default, and then click Done.
    3. In the Protocols list, select L3 (Multiple protocols).
    4. In the Health check list, click Create a health check, and then enter the following information:
      • Name: tcp-health-check-8080
      • Protocol: TCP
      • Port: 8080
    5. Click Save.
  3. Verify that there is a blue checkmark next to Backend configuration before continuing.

Frontend configuration

  1. Click Frontend configuration.
  2. In the Name field, enter forwarding-rule-l3-default.
  3. To handle IPv4 traffic, use the following steps:
    1. For IP version, select IPv4.
    2. In the IP address list, select Create IP address.
      1. In the Name field, enter network-lb-ipv4.
      2. Click Reserve.
    3. In the Protocol list, select L3 (Multiple protocols).
    4. For Ports, choose All.
    5. Click Done.
  4. To handle IPv6 traffic, use the following steps:

    1. For IP version, select IPv6.
    2. For Subnetwork, select lb-subnet.
    3. In the IPv6 range list, select Create IP address.
      1. In the Name field, enter network-lb-ipv6.
      2. Click Reserve.
    4. In the Protocol field, select L3 (Multiple protocols).
    5. For Ports, select All.
    6. Click Done.

    A blue circle with a checkmark to the left of Frontend configuration indicates a successful setup.

Review the configuration

  1. Click Review and finalize.
  2. Review your load balancer configuration settings.
  3. Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
  4. Click Create.

    On the load balancing page, under the Backend column for your new load balancer, you should see a green checkmark showing that the new load balancer is healthy.

gcloud

  1. Reserve a static external IP address.

    For IPv4 traffic: Create a static external IP address for your load balancers.

    gcloud compute addresses create network-lb-ipv4 \
        --region us-central1
    

    For IPv6 traffic: Create a static external IPv6 address range for your load balancers. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.

    gcloud compute addresses create network-lb-ipv6 \
        --region us-central1 \
        --subnet lb-subnet \
        --ip-version IPV6 \
        --endpoint-type NETLB
    
  2. Create a TCP health check for port 80. This health check is used to verify the health of backends in the ig-us-tcp-80 instance group.

    gcloud compute health-checks create tcp tcp-health-check-80 \
        --region us-central1 \
        --port 80
    
  3. Create a TCP health check for port 8080. This health check is used to verify the health of backends in the ig-us-l3-default instance group.

    gcloud compute health-checks create tcp tcp-health-check-8080 \
        --region us-central1 \
        --port 8080
    
  4. Create the first load balancer for TCP traffic on port 80.

    1. Create a backend service with the protocol set to TCP.

      gcloud compute backend-services create backend-service-tcp-80 \
          --protocol TCP \
          --health-checks tcp-health-check-80 \
          --health-checks-region us-central1 \
          --region us-central1
      
    2. Add the backend instance group to the backend service.

      gcloud compute backend-services add-backend backend-service-tcp-80 \
          --instance-group ig-us-tcp-80 \
          --instance-group-zone us-central1-a \
          --region us-central1
      
    3. For IPv4 traffic: Create a forwarding rule to route incoming TCP traffic on port 80 to the backend service. TCP is the default forwarding rule protocol and does not need to be set explicitly.

      Use the IP address reserved in step 1 as the static external IP address of the load balancer.

      gcloud compute forwarding-rules create forwarding-rule-tcp-80 \
          --load-balancing-scheme external \
          --region us-central1 \
          --ports 80 \
          --address network-lb-ipv4 \
          --backend-service backend-service-tcp-80
      
    4. For IPv6 traffic: Create a forwarding rule to route incoming TCP traffic on port 80 to the backend service. TCP is the default forwarding rule protocol and does not need to be set explicitly.

      Use the IPv6 address range reserved in step 1 as the static external IP address of the load balancer. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.

      gcloud compute forwarding-rules create forwarding-rule-tcp-80 \
          --load-balancing-scheme external \
          --region us-central1 \
          --network-tier PREMIUM \
          --ip-version IPV6 \
          --subnet lb-subnet \
          --address network-lb-ipv6 \
          --ports 80 \
          --backend-service backend-service-tcp-80
      
  5. Create the second load balancer for TCP on port 8080, UDP, ESP, and ICMP traffic.

    1. Create a backend service with the protocol set to UNSPECIFIED.

      gcloud compute backend-services create backend-service-l3-default \
          --protocol UNSPECIFIED \
          --health-checks tcp-health-check-8080 \
          --health-checks-region us-central1 \
          --region us-central1
      
    2. Add the backend instance group to the backend service.

      gcloud compute backend-services add-backend backend-service-l3-default \
          --instance-group ig-us-l3-default \
          --instance-group-zone us-central1-c \
          --region us-central1
      
    3. For IPv4 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all remaining supported IP protocol traffic (TCP on port 8080, UDP, ESP, and ICMP). L3_DEFAULT forwarding rules must be configured to use all ports.

      Use the same external IPv4 address that you used for the previous load balancer.

      gcloud compute forwarding-rules create forwarding-rule-l3-default \
          --load-balancing-scheme external \
          --region us-central1 \
          --ports all \
          --ip-protocol L3_DEFAULT \
          --address network-lb-ipv4 \
          --backend-service backend-service-l3-default
      
    4. For IPv6 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all remaining supported IP protocol traffic (TCP on port 8080, UDP, ESP, and ICMP). L3_DEFAULT forwarding rules must be configured to use all ports.

      Use the IPv6 address range reserved in step 1 as the static external IP address of the load balancer. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.

      gcloud compute forwarding-rules create forwarding-rule-l3-default \
          --load-balancing-scheme external \
          --region us-central1 \
          --network-tier PREMIUM \
          --ip-version IPV6 \
          --subnet lb-subnet \
          --address network-lb-ipv6 \
          --ports all \
          --ip-protocol L3_DEFAULT \
          --backend-service backend-service-l3-default
      
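Before you test the load balancers, you can optionally confirm that the backends have passed their health checks:

# Check the health of the backends behind each backend service
gcloud compute backend-services get-health backend-service-tcp-80 \
    --region us-central1
gcloud compute backend-services get-health backend-service-l3-default \
    --region us-central1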

Test the load balancer

Now that the load balancing service is configured, you can start sending traffic to the load balancer's external IP address and watch traffic get distributed to the backend instances.

Look up the load balancer's external IP address

Console

  1. On the Advanced load balancing page, go to the Forwarding Rules tab.
    Go to the Forwarding Rules tab
  2. Locate the forwarding rules used by the load balancer.
  3. In the IP Address column, note the external IP address listed for each IPv4 and IPv6 forwarding rule.

gcloud: IPv4

Enter the following command to view the external IP address of the forwarding rule used by the load balancer.

gcloud compute forwarding-rules describe forwarding-rule-tcp-80 \
    --region us-central1

This example uses the same IP address for both IPv4 forwarding rules, so using forwarding-rule-l3-default also works.
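
To print only the address, you can optionally add a format flag, for example:

# Print only the forwarding rule's IP address
gcloud compute forwarding-rules describe forwarding-rule-tcp-80 \
    --region us-central1 \
    --format="get(IPAddress)"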

gcloud: IPv6

Enter the following command to view the external IPv6 address of the forwarding-rule-tcp-80 forwarding rule used by the load balancer.

gcloud compute forwarding-rules describe forwarding-rule-tcp-80 \
    --region us-central1

This example uses the same IP address for both IPv6 forwarding rules, so using forwarding-rule-l3-default also works.

Send traffic to the load balancer

This procedure sends external traffic to the load balancer. Run the following tests to ensure that TCP traffic on port 80 is being load-balanced by the ig-us-tcp-80 instance group while all other traffic (TCP on port 8080, UDP, ESP, and ICMP) is being handled by the ig-us-l3-default instance group.

Verifying behavior with TCP requests on port 80

  1. Make web requests (over TCP on port 80) to the load balancer using curl to contact its IP address.

    • From clients with IPv4 connectivity, run the following command:

      $ while true; do curl -m1 IPV4_ADDRESS; done
      
    • From clients with IPv6 connectivity, run the following command:

      $ while true; do curl -m1 http://[IPV6_ADDRESS]; done
      

      For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1/96, the command should look like:

      $ while true; do curl -m1 http://[2001:db8:1:1:1:1:1:1]; done
      
  2. Note the text returned by the curl command. The name of the backend VM generating the response is displayed in that text; for example: Page served from: VM_NAME. Responses should come from instances in the ig-us-tcp-80 instance group only.

    If your response is initially unsuccessful, you might need to wait approximately 30 seconds for the configuration to be fully loaded and for your instances to be marked healthy before trying again.
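
To see the distribution of responses at a glance, you can optionally send a fixed number of requests and count how many each backend served; IPV4_ADDRESS is the address that you looked up earlier.

# Send 100 requests and count the responses per backend VM
for i in $(seq 1 100); do curl -s -m1 IPV4_ADDRESS; done | sort | uniq -c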

Verifying behavior with TCP requests on port 8080

Make web requests (over TCP on port 8080) to the load balancer using curl to contact its IP address.

  • From clients with IPv4 connectivity, run the following command:

    $ while true; do curl -m1 IPV4_ADDRESS:8080; done
    
  • From clients with IPv6 connectivity, run the following command:

    $ while true; do curl -m1 http://[IPV6_ADDRESS]:8080; done
    

    For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1/96, the command should look like:

    $ while true; do curl -m1 http://[2001:db8:1:1:1:1:1:1]:8080; done
    

Note the text returned by the curl command. Responses should come from instances in the ig-us-l3-default instance group only.

This shows that any traffic sent to the load balancer's IP address at port 8080 is being handled by backends in the ig-us-l3-default instance group only.

Verifying behavior with ICMP requests

To verify behavior with ICMP traffic, you capture output from the tcpdump command to confirm that only backend VMs in the ig-us-l3-default instance group handle ICMP requests sent to the load balancer.

  1. SSH to the backend VMs.

    1. In the Google Cloud console, go to the VM instances page.
      Go to the VM instances page

    2. In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.

  2. Run the following command to use tcpdump to start listening for ICMP traffic.

    sudo tcpdump icmp -w ~/icmpcapture.pcap -s0 -c 10000
    tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
    

    Leave the SSH window open.

  3. Repeat steps 1 and 2 for all four backend VMs.

  4. Make ICMP requests to the load balancer.

    To test the IPv4 responses, use ping to contact the load balancer's IPv4 address.

    ping IPV4_ADDRESS
    

    To test the IPv6 responses, use ping6 to contact the load balancer's IPv6 address.

    ping6 IPV6_ADDRESS
    

    For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1/96, the command should look like:

    ping6 2001:db8:1:1:1:1:1:1
    
  5. Go back to each VM's open SSH window and stop the tcpdump capture command. You can use Ctrl+C to do this.

  6. For each VM, check the output of the tcpdump command in the icmpcapture.pcap file.

    sudo tcpdump -r ~/icmpcapture.pcap -n
    

    For backend VMs in the ig-us-l3-default instance group, you should see file entries like:

    reading from file /home/[user-directory]/icmpcapture.pcap, link-type EN10MB (Ethernet)
    22:13:07.814486 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 1, length 64
    22:13:07.814513 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 1, length 64
    22:13:08.816150 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 2, length 64
    22:13:08.816175 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 2, length 64
    22:13:09.817536 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 3, length 64
    22:13:09.817560 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 3, length 64
    ...
    

    For backend VMs in the ig-us-tcp-80 instance group, you should see that no packets have been received and the file should be blank:

    reading from file /home/[user-directory]/icmpcapture.pcap, link-type EN10MB (Ethernet)
    
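You can verify UDP handling with a similar capture. On each backend VM, capture UDP packets; from an external client, send a few datagrams to the load balancer's IPv4 address. Only VMs in the ig-us-l3-default instance group should record them. The destination port 9999 is arbitrary, and this sketch assumes that the client has netcat installed.

# On each backend VM: capture the next 10 UDP packets
sudo tcpdump udp -n -c 10

# From an external client: send a UDP datagram to the load balancer
echo "test" | nc -u -w1 IPV4_ADDRESS 9999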

Additional configuration options

Create an IPv6 forwarding rule with BYOIP

The load balancers created in the previous steps are configured with forwarding rules whose IP version is IPv4 or IPv6. This section provides instructions for creating an IPv6 forwarding rule with bring your own IP (BYOIP) addresses.

Bring your own IP addresses lets you provision and use your own public IPv6 addresses for Google Cloud resources. For more information, see Bring your own IP addresses.

Before you start configuring an IPv6 forwarding rule with BYOIP addresses, you must complete the following steps:

  1. Create a public advertised IPv6 prefix
  2. Create public delegated prefixes
  3. Create IPv6 sub-prefixes
  4. Announce the prefix

To create a new forwarding rule, follow these steps:

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing.

  2. Click the name of the load balancer that you want to modify.
  3. Click Edit.
  4. Click Frontend configuration.
  5. Click Add frontend IP and port.
  6. In the New Frontend IP and port section, specify the following:
    1. For Protocol, select TCP.
    2. In the IP version field, select IPv6.
    3. In the Source of IPv6 range field, select BYOIP.
    4. In the IP collection list, select a sub-prefix created in the previous steps with the forwarding rule option enabled.
    5. In the IPv6 range field, enter the IPv6 address range. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
    6. In the Ports field, enter a port number.
    7. Click Done.
  7. Click Add frontend IP and port.
  8. In the New Frontend IP and port section, specify the following:
    1. For Protocol, select L3 (Multiple protocols).
    2. In the IP version field, select IPv6.
    3. In the Source of IPv6 range field, select BYOIP.
    4. In the IP collection list, select a sub-prefix created in the previous steps with the forwarding rule option enabled.
    5. In the IPv6 range field, enter the IPv6 address range. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
    6. In the Ports field, select All.
    7. Click Done.
  9. Click Update.

gcloud

Create the L3_DEFAULT forwarding rule by using the gcloud compute forwarding-rules create command:

gcloud compute forwarding-rules create FWD_RULE_NAME \
    --load-balancing-scheme EXTERNAL \
    --ip-protocol L3_DEFAULT \
    --ports ALL \
    --ip-version IPV6 \
    --region REGION_A \
    --address IPV6_CIDR_RANGE  \
    --backend-service BACKEND_SERVICE \
    --ip-collection PDP_NAME

Create the TCP forwarding rule by using the same command, replacing PROTOCOL with TCP:

gcloud compute forwarding-rules create FWD_RULE_NAME \
    --load-balancing-scheme EXTERNAL \
    --ip-protocol PROTOCOL \
    --ports ALL \
    --ip-version IPV6 \
    --region REGION_A \
    --address IPV6_CIDR_RANGE  \
    --backend-service BACKEND_SERVICE \
    --ip-collection PDP_NAME

Replace the following:

  • FWD_RULE_NAME: the name of the forwarding rule
  • REGION_A: region for the forwarding rule
  • IPV6_CIDR_RANGE: the IPv6 address range that the forwarding rule serves. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
  • BACKEND_SERVICE: the name of the backend service
  • PDP_NAME: the name of the public delegated prefix. The PDP must be a sub-prefix in the EXTERNAL_IPV6_FORWARDING_RULE_CREATION mode.

What's next