Set up an external passthrough Network Load Balancer with a backend service

This guide provides instructions for creating an external passthrough Network Load Balancer deployment by using a regional backend service. This example creates an external passthrough Network Load Balancer that supports either TCP or UDP traffic. If you want to create an external passthrough Network Load Balancer that load-balances TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic (not just TCP or UDP), see Set up an external passthrough Network Load Balancer for multiple IP protocols.

In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in the us-central1 region. An equally valid approach would be to use a single regional managed instance group for the us-central1 region.

External passthrough Network Load Balancer with zonal managed instance groups

This scenario distributes TCP traffic across healthy instances. To support this example, TCP health checks are configured to ensure that traffic is sent only to healthy instances. Note that TCP health checks are only supported with a backend service-based load balancer. Target pool-based load balancers can only use legacy HTTP health checks.

This example load balances TCP traffic, but you can use backend service-based external passthrough Network Load Balancers to load balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.

The external passthrough Network Load Balancer is a regional load balancer. All load balancer components (backend VMs, backend service, and forwarding rule) must be in the same region.

Before you begin

Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud CLI overview. You can find commands related to load balancing in the API and gcloud references.

If you haven't run the Google Cloud CLI previously, first run gcloud init to authenticate.

This guide assumes that you are familiar with bash.

Set up the network and subnets

The example on this page uses a custom mode VPC network named lb-network. You can use an auto mode VPC network if you only want to handle IPv4 traffic. However, IPv6 traffic requires a custom mode subnet.

IPv6 traffic also requires a dual-stack subnet (stack-type set to IPV4_IPV6). When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, we set the subnet's ipv6-access-type parameter to EXTERNAL. This means new VMs on this subnet can be assigned both external IPv4 addresses and external IPv6 addresses. The forwarding rules can also be assigned both external IPv4 addresses and external IPv6 addresses.

The backends and the load balancer components used for this example are located in this region and subnet:

  • Region: us-central1
  • Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed size (/64) IPv6 CIDR block.

To create the example network and subnet, follow these steps.

Console

To support both IPv4 and IPv6 traffic, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Enter a Name of lb-network.

  4. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, configure the following fields and click Done:
      • Name: lb-subnet
      • Region: us-central1
      • IP stack type: IPv4 and IPv6 (dual-stack)
      • IPv4 range: 10.1.2.0/24
        Although you can configure an IPv4 range of addresses for the subnet, you cannot choose the range of the IPv6 addresses for the subnet. Google provides a fixed size (/64) IPv6 CIDR block.
      • IPv6 access type: External
  5. Click Create.

To support IPv4 traffic only, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Enter a Name of lb-network.

  4. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, configure the following fields and click Done:
      • Name: lb-subnet
      • Region: us-central1
      • IP stack type: IPv4 (single-stack)
      • IPv4 range: 10.1.2.0/24
  5. Click Create.

gcloud

  1. Create the custom mode VPC network:

    gcloud compute networks create lb-network \
        --subnet-mode=custom
    
  2. Within the lb-network network, create a subnet for backends in the us-central1 region.

    For both IPv4 and IPv6 traffic, use the following command to create a dual-stack subnet:

    gcloud compute networks subnets create lb-subnet \
      --stack-type=IPV4_IPV6 \
      --ipv6-access-type=EXTERNAL \
      --network=lb-network \
      --range=10.1.2.0/24 \
      --region=us-central1
    

    For IPv4 traffic only, use the following command:

    gcloud compute networks subnets create lb-subnet \
      --network=lb-network \
      --range=10.1.2.0/24 \
      --region=us-central1
    

Create the zonal managed instance groups

For this load balancing scenario, you create two Compute Engine zonal managed instance groups and install an Apache web server on each instance.

To handle both IPv4 and IPv6 traffic, configure the backend VMs to be dual-stack. Set the VM's stack-type to IPV4_IPV6. The VMs also inherit the ipv6-access-type setting (in this example, EXTERNAL) from the subnet. For more details about IPv6 requirements, see the External passthrough Network Load Balancer overview: Forwarding rules.

To use existing VMs as backends, update the VMs to be dual-stack by using the gcloud compute instances network-interfaces update command.
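As a hedged sketch of that update, the following converts an existing VM to dual-stack. The VM name my-vm is a hypothetical example, and the VM's subnet must already be dual-stack with an external IPv6 access type:

```shell
# Example only: update an existing VM's default network interface to
# dual-stack. "my-vm" is a placeholder name; the subnet attached to the
# interface must already have stack-type IPV4_IPV6.
gcloud compute instances network-interfaces update my-vm \
    --zone=us-central1-a \
    --stack-type=IPV4_IPV6
```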

Instances that participate as backend VMs for external passthrough Network Load Balancers must be running the appropriate Linux Guest Environment, Windows Guest Environment, or other processes that provide equivalent capability.

Setting up the instances

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For Name, enter ig-us-template.
    3. In the Boot disk section, ensure that the Image is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Click Advanced options.
    5. Click Networking.
      1. For Network tags, enter lb-tag.
      2. For Network interfaces, click the default interface and configure the following fields:
        • Network: lb-network
        • Subnetwork: lb-subnet
      3. Click Done.
    6. Click Management and copy the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
      
    7. Click Create.

  2. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For Name, enter ig-us-1.
    4. For Instance template, select ig-us-template.
    5. For Location, select Single zone.
    6. For Region, select us-central1.
    7. For Zone, select us-central1-a.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options in the Autoscaling section:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    9. Click Create.

  3. Repeat the previous steps to create a second managed instance group in the us-central1-c zone with the following specifications:

    • Name: ig-us-2
    • Zone: us-central1-c
    • Instance template: Use the same ig-us-template template created in the previous section.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    To handle both IPv4 and IPv6 traffic, use the following command.

    gcloud compute instance-templates create ig-us-template \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=lb-tag \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
    

    To handle IPv4 traffic only, use the following command.

    gcloud compute instance-templates create ig-us-template \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=lb-tag \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
    
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create ig-us-1 \
        --zone us-central1-a \
        --size 2 \
        --template ig-us-template
    
  3. Create a second managed instance group in the us-central1-c zone:

    gcloud compute instance-groups managed create ig-us-2 \
        --zone us-central1-c \
        --size 2 \
        --template ig-us-template
    

Configuring firewall rules

Create firewall rules that allow external traffic (which includes health check probes) to reach the backend instances.

This example creates a firewall rule that allows TCP traffic from all source ranges to reach your backend instances on port 80. If you want to create separate firewall rules specifically for the health check probes, use the source IP address ranges documented in the Health checks overview: Probe IP ranges and firewall rules.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. To allow IPv4 traffic, perform the following steps:

    1. Click Create firewall rule.
    2. For Name, enter allow-network-lb-ipv4.
    3. In the Network list, select lb-network.
    4. For Targets, select Specified target tags.
    5. In the Target tags field, enter lb-tag.
    6. For Source filter, select IPv4 ranges.
    7. Set the Source IPv4 ranges to 0.0.0.0/0. This allows IPv4 traffic from any source. This also allows Google's health check probes to reach the backend instances.
    8. For Specified protocols and ports, select the TCP checkbox and enter 80.
    9. Click Create. It might take a moment for the Google Cloud console to display the new firewall rule, or you might have to click Refresh to see the rule.
  3. To allow IPv6 traffic, perform the following steps:

    1. Click Create firewall rule again.
    2. For Name, enter allow-network-lb-ipv6.
    3. In the Network list, select lb-network.
    4. For Targets, select Specified target tags.
    5. In the Target tags field, enter lb-tag.
    6. For Source filter, select IPv6 ranges.
    7. Set the Source IPv6 ranges to ::/0. This allows IPv6 traffic from any source. This also allows Google's health check probes to reach the backend instances.
    8. For Specified protocols and ports, select the TCP checkbox and enter 80.
    9. Click Create. It might take a moment for the Google Cloud console to display the new firewall rule, or you might have to click Refresh to see the rule.

gcloud

  1. To allow IPv4 traffic, run the following command:

    gcloud compute firewall-rules create allow-network-lb-ipv4 \
        --network=lb-network \
        --target-tags=lb-tag \
        --allow=tcp:80 \
        --source-ranges=0.0.0.0/0
    
  2. To allow IPv6 traffic, run the following command:

    gcloud compute firewall-rules create allow-network-lb-ipv6 \
      --network=lb-network \
      --target-tags=lb-tag \
      --allow=tcp:80 \
      --source-ranges=::/0
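If you prefer to allow only Google's health check probes rather than all sources, a narrower rule might look like the following sketch. The source ranges shown are the probe ranges documented for load balancer health checks; confirm the current values in the Health checks overview before relying on them. The rule name is an example:

```shell
# Hedged example: permit only health check probe traffic to the backends.
# The source ranges are assumed from the Health checks overview; verify
# them there. "allow-network-lb-health-checks" is an example name.
gcloud compute firewall-rules create allow-network-lb-health-checks \
    --network=lb-network \
    --target-tags=lb-tag \
    --allow=tcp:80 \
    --source-ranges=35.191.0.0/16,209.85.152.0/22,209.85.204.0/22
```

If you use a rule like this, you still need a separate rule that allows client traffic to reach port 80.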
    

Configure the load balancer

Next, set up the load balancer.

When you configure the load balancer, your virtual machine (VM) instances will receive packets that are destined for the static external IP address you configure. If you are using an image provided by Compute Engine, your instances are automatically configured to handle this IP address. If you are using any other image, you must configure this address as an alias on eth0 or as a loopback on each instance.
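For images that don't run a guest environment, one common approach is to add the load balancer address to the loopback interface on each backend. This is a sketch, not the only option; LB_IP_ADDRESS is a placeholder for the forwarding rule's address:

```shell
# Hedged sketch for non-Compute Engine images: bind the load balancer's
# IPv4 address to the loopback interface so the kernel accepts packets
# destined for it. LB_IP_ADDRESS is a placeholder. This setting does not
# persist across reboots unless added to the distribution's network
# configuration.
sudo ip addr add LB_IP_ADDRESS/32 dev lo
```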

To set up the load balancer, use the following instructions.

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Passthrough load balancer and click Next.
  5. For Public facing or internal, select Public facing (external) and click Next.
  6. Click Configure.

Backend configuration

  1. On the Create external passthrough Network Load Balancer page, enter the name tcp-network-lb for the new load balancer.
  2. For Region, select us-central1.
  3. Click Backend configuration.
  4. On the Backend configuration page, make the following changes:
    1. For New Backend, select the IP stack type. If you created dual-stack backends to handle both IPv4 and IPv6 traffic, select IPv4 and IPv6 (dual-stack). To handle IPv4 traffic only, select IPv4 (single-stack).
    2. In the Instance group list, select ig-us-1, and then click Done.
    3. Click Add a backend and repeat this step to add ig-us-2.
    4. For Health check, click Create a health check or Create another health check, and then enter the following information:
      • Name: tcp-health-check
      • Protocol: TCP
      • Port: 80
    5. Click Save.
  5. Verify that there is a blue checkmark next to Backend configuration before continuing.

Frontend configuration

  1. Click Frontend configuration.
  2. For Name, enter network-lb-forwarding-rule.
  3. To handle IPv4 traffic, use the following steps:
    1. For IP version, select IPv4.
    2. In the IP address list, select Create IP address.
      1. On the Reserve a new static IP address page, for Name, enter network-lb-ipv4.
      2. Click Reserve.
    3. For Ports, choose Single. For Port number, enter 80.
    4. Click Done.
  4. To handle IPv6 traffic, use the following steps:

    1. For IP version, select IPv6.
    2. For Subnetwork, select lb-subnet.
    3. In the IPv6 range list, select Create IP address.
      1. On the Reserve a new static IP address page, for Name, enter network-lb-ipv6.
      2. Click Reserve.
    4. For Ports, select Single. For Port number, enter 80.
    5. Click Done.

    A blue circle with a checkmark to the left of Frontend configuration indicates a successful setup.

Review the configuration

  1. Click Review and finalize.
  2. Review your load balancer configuration settings.
  3. Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
  4. Click Create.

    On the load balancing page, under the Backend column for your new load balancer, you should see a green checkmark showing that the new load balancer is healthy.

gcloud

  1. Reserve a static external IP address.

    For IPv4 traffic: Create a static external IPv4 address for your load balancer.

    gcloud compute addresses create network-lb-ipv4 \
        --region us-central1
    

    For IPv6 traffic: Create a static external IPv6 address range for your load balancer. The subnet used must be a dual-stack subnet with an external IPv6 range.

    gcloud compute addresses create network-lb-ipv6 \
        --region us-central1 \
        --subnet lb-subnet \
        --ip-version IPV6 \
        --endpoint-type NETLB
    
  2. Create a TCP health check.

    gcloud compute health-checks create tcp tcp-health-check \
        --region us-central1 \
        --port 80
    
  3. Create a backend service.

    gcloud compute backend-services create network-lb-backend-service \
        --protocol TCP \
        --health-checks tcp-health-check \
        --health-checks-region us-central1 \
        --region us-central1
    
  4. Add the instance groups to the backend service.

    gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-1 \
    --instance-group-zone us-central1-a \
    --region us-central1
    
    gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-2 \
    --instance-group-zone us-central1-c \
    --region us-central1
    
  5. Create the forwarding rules depending on whether you want to handle IPv4 traffic or IPv6 traffic. Create both forwarding rules to handle both types of traffic.

    1. For IPv4 traffic: Create a forwarding rule to route incoming TCP traffic to the backend service. Use the IPv4 address reserved in step 1 as the static external IP address of the load balancer.

      gcloud compute forwarding-rules create network-lb-forwarding-rule-ipv4 \
        --load-balancing-scheme EXTERNAL \
        --region us-central1 \
        --ports 80 \
        --address network-lb-ipv4 \
        --backend-service network-lb-backend-service
      
    2. For IPv6 traffic: Create a forwarding rule to handle IPv6 traffic. Use the IPv6 address range reserved in step 1 as the static external IP address of the load balancer. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.

      gcloud compute forwarding-rules create network-lb-forwarding-rule-ipv6 \
          --load-balancing-scheme EXTERNAL \
          --region us-central1 \
          --network-tier PREMIUM \
          --ip-version IPV6 \
          --subnet lb-subnet \
          --address network-lb-ipv6 \
          --ports 80 \
          --backend-service network-lb-backend-service
      

Test the load balancer

Now that the load balancing service is configured, you can start sending traffic to the load balancer's external IP address and watch traffic get distributed to the backend instances.

Look up the load balancer's external IP address

Console

  1. On the Load balancing components page, go to the Forwarding rules tab.

    Go to Forwarding rules

  2. Locate the forwarding rule used by the load balancer.

  3. In the External IP address column, note the external IP address listed.

gcloud: IPv4

Enter the following command to view the external IPv4 address of the network-lb-forwarding-rule-ipv4 forwarding rule used by the load balancer.

gcloud compute forwarding-rules describe network-lb-forwarding-rule-ipv4 \
    --region us-central1

gcloud: IPv6

Enter the following command to view the external IPv6 address of the network-lb-forwarding-rule-ipv6 forwarding rule used by the load balancer.

gcloud compute forwarding-rules describe network-lb-forwarding-rule-ipv6 \
    --region us-central1
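To capture just the address in a shell variable, which is convenient for the curl tests that follow, you can add a --format flag. This is a convenience sketch:

```shell
# Extract only the IPAddress field from the forwarding rule description
# and store it for later use.
IPV4_ADDRESS=$(gcloud compute forwarding-rules describe \
    network-lb-forwarding-rule-ipv4 \
    --region us-central1 \
    --format="get(IPAddress)")
echo "$IPV4_ADDRESS"
```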

Send traffic to the load balancer

Make web requests to the load balancer using curl to contact its IP address.

  • From clients with IPv4 connectivity, run the following command:

    $ while true; do curl -m1 IPV4_ADDRESS; done
    
  • From clients with IPv6 connectivity, run the following command:

    $ while true; do curl -m1 http://IPV6_ADDRESS; done
    

    For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1, the command looks like the following:

    $ while true; do curl -m1 http://[2001:db8:1:1:1:1:1:1]:80; done
    

Note the text returned by the curl command. The name of the backend VM generating the response is displayed in that text; for example: Page served from: VM_NAME

The response from the curl command alternates randomly among the backend instances. If your response is initially unsuccessful, you might need to wait approximately 30 seconds for the configuration to be fully loaded and for your instances to be marked healthy before trying again.
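To see the distribution at a glance instead of watching an endless loop, you can count the responses over a fixed number of requests. This is a small sketch; IP_ADDRESS is a placeholder for the load balancer's external address:

```shell
# Send 50 requests and tally how many each backend VM served.
# IP_ADDRESS is a placeholder for the load balancer's external address.
for i in $(seq 1 50); do
  curl -s -m1 IP_ADDRESS
done | sort | uniq -c
```

Each line of output shows a count followed by a "Page served from:" line, so you can see roughly how evenly traffic was spread across the backends.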

Additional configuration options

This section expands on the configuration example to provide instructions about how to further customize your external passthrough Network Load Balancer. These tasks are optional. You can perform them in any order.

Configure session affinity

The example configuration creates a backend service with session affinity disabled (value set to NONE). This section shows you how to update the backend service to change the load balancer's session affinity setting.

For supported session affinity types, see Session affinity options.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of the backend service, and then click Edit.

  3. On the Edit external passthrough Network Load Balancer page, click Backend configuration.

  4. Select an option from the Session affinity list.

  5. Click Update.

gcloud

Use the following gcloud command to update session affinity for the backend service:

gcloud compute backend-services update BACKEND_SERVICE \
    --region=REGION \
    --session-affinity=SESSION_AFFINITY_OPTION

Replace the placeholders with valid values:

  • BACKEND_SERVICE: the backend service that you're updating
  • SESSION_AFFINITY_OPTION: the session affinity option that you want to set

    For the list of supported values for an external passthrough Network Load Balancer, see Session affinity options.
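For example, switching the backend service from this guide to CLIENT_IP affinity, so that connections from the same source IP address go to the same backend, might look like the following:

```shell
# Hedged example using the names from this guide: pin clients to backends
# by source IP address. CLIENT_IP is one of the supported session
# affinity values.
gcloud compute backend-services update network-lb-backend-service \
    --region=us-central1 \
    --session-affinity=CLIENT_IP
```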

Configure a connection tracking policy

The example configuration creates a backend service with the default settings for its connection tracking policy. This section shows you how to update the backend service to change the load balancer's default connection tracking policy.

A connection tracking policy includes settings such as the connection tracking mode and connection persistence on unhealthy backends, which you set with the flags shown in the following command.

gcloud

Use the following gcloud compute backend-services command to update the connection tracking policy for the backend service:

gcloud compute backend-services update BACKEND_SERVICE \
    --region=REGION \
    --tracking-mode=TRACKING_MODE \
    --connection-persistence-on-unhealthy-backends=CONNECTION_PERSISTENCE_BEHAVIOR

Replace the placeholders with valid values:

  • BACKEND_SERVICE: the backend service that you're updating
  • TRACKING_MODE: the connection tracking mode to be used for incoming packets. For the list of supported values, see Tracking mode.
  • CONNECTION_PERSISTENCE_BEHAVIOR: the connection persistence behavior when backends are unhealthy. For the list of supported values, see Connection persistence on unhealthy backends.
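As a concrete sketch using this guide's backend service, the following switches tracking to per-session mode and stops persisting connections to unhealthy backends. Both values are among the documented options, but verify they suit your protocol before applying them:

```shell
# Hedged example: 5-tuple per-session tracking, and don't persist
# established connections when a backend becomes unhealthy.
gcloud compute backend-services update network-lb-backend-service \
    --region=us-central1 \
    --tracking-mode=PER_SESSION \
    --connection-persistence-on-unhealthy-backends=NEVER_PERSIST
```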

Configure traffic steering

This section shows you how to update a load balancer's frontend configuration to set up source IP-based traffic steering. For details about how traffic steering works, see Traffic steering.

These instructions assume that you already have the parent base forwarding rule created. This example creates a second forwarding rule, which is the steering forwarding rule, with the same IP address, IP protocol, and ports as the parent. This steering forwarding rule is configured with source IP ranges so that you can customize how packets from those source IP ranges are forwarded.

gcloud

Use the following command to create a steering forwarding rule that points to a backend service:

gcloud compute forwarding-rules create STEERING_FORWARDING_RULE_BS \
    --load-balancing-scheme=EXTERNAL \
    --backend-service=BACKEND_SERVICE \
    --address=LOAD_BALANCER_VIP \
    --ip-protocol=IP_PROTOCOL \
    --ports=PORTS \
    --region=REGION \
    --source-ip-ranges=SOURCE_IP_ADDRESS_RANGES

Use the following command to create a steering forwarding rule that points to a target instance:

gcloud compute forwarding-rules create STEERING_FORWARDING_RULE_TI \
    --load-balancing-scheme=EXTERNAL \
    --target-instance=TARGET_INSTANCE \
    --address=LOAD_BALANCER_VIP \
    --ip-protocol=IP_PROTOCOL \
    --ports=PORTS \
    --region=REGION \
    --source-ip-ranges=SOURCE_IP_ADDRESS_RANGES

Replace the placeholders with valid values:

  • STEERING_FORWARDING_RULE_BS or STEERING_FORWARDING_RULE_TI: the name of the steering forwarding rule that you're creating.
  • BACKEND_SERVICE or TARGET_INSTANCE: the name of the backend service or target instance to which this steering forwarding rule will send traffic. Even if the parent forwarding rule points to a backend service, you can create steering forwarding rules that point to target instances.
  • LOAD_BALANCER_VIP, IP_PROTOCOL, PORTS: the IP address, IP protocol, and ports, respectively, for the steering forwarding rule that you're creating. These settings should match a pre-existing base forwarding rule.
  • REGION: the region of the forwarding rule that you're creating.
  • SOURCE_IP_ADDRESS_RANGES: comma-separated list of IP addresses or IP address ranges. This forwarding rule will only forward traffic when the source IP address of the incoming packet falls into one of the IP ranges set here.
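Filled in with this guide's names, a steering rule that sends traffic from one source range to the existing backend service might look like the following sketch. The rule name and the source range 203.0.113.0/24 are example values:

```shell
# Hedged example: packets from 203.0.113.0/24 arriving at the load
# balancer's IPv4 address on TCP port 80 match this rule instead of the
# parent forwarding rule. "steering-rule-example" is a placeholder name.
gcloud compute forwarding-rules create steering-rule-example \
    --load-balancing-scheme=EXTERNAL \
    --backend-service=network-lb-backend-service \
    --address=network-lb-ipv4 \
    --ip-protocol=TCP \
    --ports=80 \
    --region=us-central1 \
    --source-ip-ranges=203.0.113.0/24
```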

Use the following command to delete a steering forwarding rule. You must delete any steering forwarding rules that are being used by a load balancer before you can delete the load balancer itself.

gcloud compute forwarding-rules delete STEERING_FORWARDING_RULE \
    --region=REGION

Configure failover policy

To configure the failover policy, see Configure failover for external passthrough Network Load Balancers.

Configure weighted load balancing

To configure weighted load balancing, see Configure weighted load balancing.

Create an IPv6 forwarding rule with BYOIP

The load balancer created in the previous steps is configured with forwarding rules whose IP version is IPv4 or IPv6. This section provides instructions for creating an IPv6 forwarding rule with bring your own IP (BYOIP) addresses.

Bring your own IP addresses lets you provision and use your own public IPv6 addresses for Google Cloud resources. For more information, see Bring your own IP addresses.

Before you start configuring an IPv6 forwarding rule with BYOIP addresses, you must complete the following steps:

  1. Create a public advertised IPv6 prefix
  2. Create public delegated prefixes
  3. Create IPv6 sub-prefixes
  4. Announce the prefix

To create a new forwarding rule, follow these steps:

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer that you want to modify.
  3. Click Edit.
  4. Click Frontend configuration.
  5. Click Add frontend IP and port.
  6. In the New Frontend IP and port section, specify the following:
    1. The Protocol is TCP.
    2. In the IP version field, select IPv6.
    3. In the Source of IPv6 range field, select BYOIP.
    4. In the IP collection list, select a sub-prefix created in the previous steps with the forwarding rule option enabled.
    5. In the IPv6 range field, enter the IPv6 address range. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
    6. In the Ports field, enter a port number.
    7. Click Done.
  7. Click Update.

Google Cloud CLI

Create the forwarding rule by using the gcloud compute forwarding-rules create command:

gcloud compute forwarding-rules create FWD_RULE_NAME \
    --load-balancing-scheme EXTERNAL \
    --ip-protocol PROTOCOL \
    --ports ALL \
    --ip-version IPV6 \
    --region REGION_A \
    --address IPV6_CIDR_RANGE  \
    --backend-service BACKEND_SERVICE \
    --ip-collection PDP_NAME

Replace the following:

  • FWD_RULE_NAME: the name of the forwarding rule
  • PROTOCOL: the IP protocol for the forwarding rule. The default is TCP. The IP protocol can be one of TCP, UDP, or L3_DEFAULT.
  • REGION_A: region for the forwarding rule
  • IPV6_CIDR_RANGE: the IPv6 address range that the forwarding rule serves. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
  • BACKEND_SERVICE: the name of the backend service
  • PDP_NAME: the name of the public delegated prefix. The PDP must be a sub-prefix in the EXTERNAL_IPV6_FORWARDING_RULE_CREATION mode.

What's next