Migrate global external Application Load Balancer to dual-stack backends

This document shows you how to migrate global external Application Load Balancer resources and backends from single-stack to dual-stack.

You must first migrate your backends from IPv4 only to IPv4 and IPv6 (dual-stack). You can later switch the IP address selection policy of the backend service to specify the traffic type that is sent from the Google Front End (GFE) to your backends. For more information, see Configure the IP address selection policy.

About the migration process

The migration process requires you to update the following load balancer resources:

  • Subnet. The IP stack type of the subnet can be updated to support IPv4 only (single-stack) or IPv4 and IPv6 (dual-stack). You cannot downgrade your subnet from dual-stack to single-stack addresses. To update, see Update the subnet.
  • Backends. The IP stack type of the backends (VM instances, instance templates, and zonal NEGs) can be updated to support IPv4 only (single-stack) or IPv4 and IPv6 (dual-stack). To update, see Update the VM instance or templates and Update the zonal NEG.
  • Firewall rules. Create a firewall rule to allow traffic from IPv6 health check probes to reach backends. To create, see Create IPv6 health check firewall rule.
  • Backend service. The IP address selection policy of the backend service can be updated to specify the traffic type that is sent from the GFE to your backends. To update, see Update the backend service.
  • Forwarding rule. Create a forwarding rule for IPv6 traffic. To create, see Create a new backend service and forwarding rule for IPv6.

There is no validation to check whether you have updated all the required resources. After you update all the resources, traffic flows to the backends, and you can check the logs to verify that the migration is complete.

Identify the resources to migrate

  1. To list all the subnets, run the following command in Cloud Shell:

    gcloud compute networks subnets list
    

    Note the name of the subnet with IPv4 only addresses to migrate to dual-stack. This name is referred to later as SUBNET. The VPC network is referred to later as NETWORK.

  2. To list all the backend services, run the following command in Cloud Shell:

    gcloud beta compute backend-services list
    

    Note the name of the backend service to migrate to dual-stack. This name is referred to later as BACKEND_SERVICE.

  3. To list all the URL maps, run the following command in Cloud Shell:

    gcloud beta compute url-maps list
    

    Note the name of the URL map associated with your load balancer. This name is referred to later as URL_MAP.

  4. To list all the VM instances and instance templates, run the following commands in Cloud Shell:

    gcloud compute instances list
    
    gcloud compute instance-templates list
    

    Note the names of the instances and instance templates to migrate to dual-stack. These names are referred to later as VM_INSTANCE and INSTANCE_TEMPLATE.

  5. To list all the zonal network endpoint groups (NEGs), run the following command in Cloud Shell:

    gcloud compute network-endpoint-groups list
    

    Note the name of the zonal NEG backends to migrate to dual-stack. This name is referred to later as ZONAL_NEG.

  6. To list all the target proxies, run the following command in Cloud Shell:

    gcloud compute target-http-proxies list
    

    Note the name of the target proxy associated with your load balancer. This name is referred to later as TARGET_PROXY.

Migrate from IPv4 to IPv4 and IPv6 (dual-stack) backends

This section describes the procedure to migrate your load balancer and backends from IPv4 only (single-stack) to IPv4 and IPv6 (dual-stack).

Prerequisites

Before you start, you must have already set up a global external Application Load Balancer with the IP stack type set to IPV4_ONLY for instance group or zonal NEG backends.

To set up a global external Application Load Balancer, see the setup documentation for your backend type.

You can run the following command to list the stack type information of all VM instances in a project:

gcloud

gcloud compute instances list \
    --format="table(name, zone.basename(),
      networkInterfaces[].stackType.notnull().list(),
      networkInterfaces[].ipv6AccessConfigs[0].externalIpv6.notnull().list():label=EXTERNAL_IPV6,
      networkInterfaces[].ipv6Address.notnull().list():label=INTERNAL_IPV6)"
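To pick out the instances that still need migrating from that listing, you can filter the STACK_TYPE column. The sketch below runs against captured sample output; the instance names and zones are illustrative, not from your project:

```shell
# Sample output captured from the listing command (illustrative values).
sample_output='NAME   ZONE            STACK_TYPE  EXTERNAL_IPV6  INTERNAL_IPV6
vm-a1  us-east1-b      IPV4_ONLY
vm-a2  us-east1-b      IPV4_IPV6   2600:1900::1
vm-b1  europe-west1-b  IPV4_ONLY'

# Skip the header row and print only single-stack instances,
# which are the ones that still need to be migrated.
echo "$sample_output" | awk 'NR > 1 && $3 == "IPV4_ONLY" { print $1 }'
```

In practice, you would pipe the output of the gcloud command itself into the awk filter instead of the captured sample.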

Update the subnet

Dual-stack subnets are supported on custom mode VPC networks only. Dual-stack subnets are not supported on auto mode VPC networks or legacy networks.

To update the network to the dual-stack setting, follow these steps:

  1. If you are using an auto mode VPC network, you must first convert the auto mode VPC network to custom mode.

  2. To enable IPv6, see Change a subnet's stack type to dual stack.

Update the VM instance or templates

You cannot edit VM instances that are part of a managed or an unmanaged instance group. To update the VM instances to dual stack, follow these steps:

  1. Delete the specific instances from the instance group.
  2. Create a dual-stack VM.
  3. Create instances with specific names in the MIG.

You can't update an existing instance template. If you need to make changes, you can create another template with similar properties. To update the VM instance templates to dual stack, follow these steps:

Console

  1. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click the instance template that you want to copy and update.
    2. Click Create similar.
    3. Expand the Advanced options section.
    4. For Network tags, enter allow-health-check-ipv6.
    5. In the Network interfaces section, click Add a network interface.
    6. In the Network list, select the custom mode VPC network.
    7. In the Subnetwork list, select SUBNET.
    8. For IP stack type, select IPv4 and IPv6 (dual-stack).
    9. Click Create.
  2. Start a basic rolling update on the managed instance group (MIG) associated with the load balancer.

Update the zonal NEG

Zonal NEG endpoints cannot be edited. You must delete the IPv4 endpoints and create a new dual-stack endpoint with both IPv4 and IPv6 addresses.

To set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION_A region, first create the VMs in the GCP_NEG_ZONE zone. Then add the VM network endpoints to the zonal NEG.

Create VMs

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set the Name to vm-a1.

  4. For the Region, choose REGION_A, and choose any value for the Zone field. This zone is referred to as GCP_NEG_ZONE in this procedure.

  5. In the Boot disk section, ensure that the Debian operating system and the 10 (buster) version are selected for the boot disk options. Click Choose to change the image if necessary.

  6. Expand the Advanced options section and make the following changes:

    • Expand the Networking section.
    • In the Network tags field, enter allow-health-check.
    • In the Network interfaces section, make the following changes:
      • Network: NETWORK
      • Subnet: SUBNET
      • IP stack type: IPv4 and IPv6 (dual-stack)
    • Click Done.
    • Click Management. In the Startup script field, copy and paste the following script contents.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
      
  7. Click Create.

  8. Repeat the previous steps to create a second VM, using the following name and zone combination:

    • Name: vm-a2, zone: GCP_NEG_ZONE

gcloud

Create the VMs by running the following command two times, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

  • VM_NAME of vm-a1 and any GCP_NEG_ZONE zone of your choice.
  • VM_NAME of vm-a2 and the same GCP_NEG_ZONE zone.

    gcloud compute instances create VM_NAME \
        --zone=GCP_NEG_ZONE \
        --stack-type=IPV4_IPV6 \
        --image-family=debian-10 \
        --image-project=debian-cloud \
        --tags=allow-health-check \
        --subnet=SUBNET \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
    

Add endpoints to the zonal NEG

Console

To add endpoints to the zonal NEG:

  1. In the Google Cloud console, go to the Network endpoint groups page.

    Go to Network endpoint groups

  2. In the Name list, click the name of the network endpoint group (ZONAL_NEG). You see the Network endpoint group details page.

  3. In the Network endpoints in this group section, select the previously created IPv4-only endpoints, and then click Remove endpoint.

  4. In the Network endpoints in this group section, click Add network endpoint.

  5. Select the VM instance.

  6. In the Network interface section, the name, zone, and subnet of the VM are displayed.

  7. In the IPv4 address field, enter the IPv4 address of the new network endpoint.

  8. In the IPv6 address field, enter the IPv6 address of the new network endpoint.

  9. Select the Port type.

    1. If you select Default, the endpoint uses the default port 80 for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port 80.
    2. If you select Custom, enter the Port number for the endpoint to use.
  10. To add more endpoints, click Add network endpoint and repeat the previous steps.

  11. After you add all the endpoints, click Create.

gcloud

  1. Add endpoints (GCE_VM_IP_PORT endpoints) to ZONAL_NEG.

    gcloud compute network-endpoint-groups update ZONAL_NEG \
        --zone=GCP_NEG_ZONE \
        --add-endpoint='instance=vm-a1,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80' \
        --add-endpoint='instance=vm-a2,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80'
    

Replace the following:

  • IPv4_ADDRESS: the IPv4 address of the network endpoint. The IPv4 address must belong to a VM in Compute Engine (either the primary IP address or part of an alias IP range). If the IP address is not specified, the primary IPv4 address of the VM instance in the network that the network endpoint group belongs to is used.

  • IPv6_ADDRESS: the IPv6 address of the network endpoint. The IPv6 address must be the external IPv6 address of a VM instance in the network that the network endpoint group belongs to.
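The console steps above remove the old IPv4-only endpoints before adding dual-stack ones; the corresponding gcloud flag is --remove-endpoint. The sketch below is a dry run: the run helper only prints each command instead of executing it, because ZONAL_NEG, GCP_NEG_ZONE, and the instance names are placeholders.

```shell
# Dry-run helper: print the command instead of executing it.
# Remove this definition (and call gcloud directly) to apply for real.
run() { echo "+ $*"; }

# Remove the old IPv4-only endpoints before adding the dual-stack ones.
# ZONAL_NEG, GCP_NEG_ZONE, vm-a1, and vm-a2 are placeholders.
run gcloud compute network-endpoint-groups update ZONAL_NEG \
    --zone=GCP_NEG_ZONE \
    --remove-endpoint='instance=vm-a1' \
    --remove-endpoint='instance=vm-a2'
```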

Create an IPv6 health check firewall rule

Ensure that you have an ingress rule that is applicable to the instances being load balanced and that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64,2600:2d00:1:1::/64). This example uses the target tag allow-health-check-ipv6 to identify the VM instances to which it applies.

Without this firewall rule, the default deny ingress rule blocks incoming IPv6 traffic to the backend instances.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. To allow IPv6 health check traffic, click Create firewall rule and enter the following information:

    • Name: fw-allow-lb-access-ipv6
    • Network: NETWORK
    • Priority: 1000
    • Direction of traffic: ingress
    • Targets: Specified target tags
    • Target tags: allow-health-check-ipv6
    • Source filter: IPv6 ranges
    • Source IPv6 ranges: 2600:2d00:1:b029::/64,2600:2d00:1:1::/64
    • Protocols and ports: Allow all
  3. Click Create.

gcloud

  1. Create the fw-allow-lb-access-ipv6 firewall rule to allow communication with the subnet:

    gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64,2600:2d00:1:1::/64 \
        --rules=all
    

Create a new backend service and forwarding rule for IPv6

Even though you can update the existing BACKEND_SERVICE to support dual-stack, this section provides instructions to create a new backend service with the IP address selection policy set to Prefer IPv6. By creating a new backend service, you can route a fraction of traffic to it and migrate to IPv6 backends gradually.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. Click Edit.

Configure the backend service:

  1. Click Backend configuration.
  2. In the Backend service field, select Create a backend service.
  3. Set the Name as BACKEND_SERVICE_IPV6.
  4. For Backend type, select Zonal network endpoint group.
  5. In the IP address selection policy list, select Prefer IPv6.
  6. In the Protocol field, select HTTP.
  7. In the New Backend panel, do the following:
    1. In the network endpoint group list, select ZONAL_NEG.
    2. For Maximum RPS, enter 10.
  8. In the Health check list, select an HTTP health check.
  9. Click Done.

Configure the IPv6 frontend:

  1. Click Frontend configuration.
  2. Click Add frontend IP and port.
  3. In the Name field, enter a name for the forwarding rule.
  4. In the Protocol field, select HTTP.
  5. Set IP version to IPv6.
  6. Click Done.
  7. Click Update.

Configure routing rules

  1. Click Routing rules.
  2. Select Advanced host and path rule.
  3. Click Update.

gcloud

  1. Create a health check:

    gcloud compute health-checks create http HEALTH_CHECK \
       --port 80
    
  2. Create the backend service for HTTP traffic:

    gcloud beta compute backend-services create BACKEND_SERVICE_IPV6 \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --protocol=HTTP \
       --ip-address-selection-policy=PREFER_IPV6 \
       --health-checks=HEALTH_CHECK \
       --global
    
  3. Add zonal NEGs as the backend to the backend service.

    gcloud beta compute backend-services add-backend BACKEND_SERVICE_IPV6 \
      --network-endpoint-group=ZONAL_NEG \
      --max-rate-per-endpoint=10 \
      --global
    
  4. Reserve an external IPv6 address that your customers use to reach your load balancer.

    gcloud compute addresses create lb-ipv6-1 \
       --ip-version=IPV6 \
       --network-tier=PREMIUM \
       --global
    
  5. Create a forwarding rule for the backend service. When you create the forwarding rule, specify the reserved external IPv6 address.

    gcloud beta compute forwarding-rules create FORWARDING_RULE_IPV6 \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=PREMIUM \
       --address=lb-ipv6-1 \
       --global \
       --target-http-proxy=TARGET_PROXY \
       --ports=80
    

Route traffic to the new IPv6 backend service

Both BACKEND_SERVICE and BACKEND_SERVICE_IPV6 are capable of serving traffic. Update the URL map to direct some fraction of client traffic to the new IPv6 backend service.

  1. Use the following command to edit the URL map:

    gcloud compute url-maps edit URL_MAP \
        --global
    
  2. In the text editor that appears, add a routeRule with a weightedBackendServices action that directs a percentage of IPv6 traffic to BACKEND_SERVICE_IPV6.

    defaultService: global/backendServices/BACKEND_SERVICE
    hostRules:
    - hosts:
      - '*'
      pathMatcher: matcher1
    name: URL_MAP
    pathMatchers:
    - defaultService: global/backendServices/BACKEND_SERVICE
      name: matcher1
      routeRules:
      - matchRules:
        - prefixMatch: '/'
        priority: 1
        routeAction:
          weightedBackendServices:
          - backendService: global/backendServices/BACKEND_SERVICE
            weight: 95
          - backendService: global/backendServices/BACKEND_SERVICE_IPV6
            weight: 5
    

To migrate gradually, increase the weight of the BACKEND_SERVICE_IPV6 backend service in increments until it reaches 100%, editing the URL map at each step.
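The weight schedule for those repeated edits can be scripted. This sketch only prints the weight pair for each stage; in a real migration you would apply each stage by editing the URL map and then watch the logs and health checks before moving on. The stage percentages are an example schedule, not a requirement.

```shell
# Example weight schedule for gradually shifting traffic to IPv6.
# At each stage, set these weights in the URL map, then verify the
# logs and health checks before moving to the next stage.
for ipv6_weight in 5 25 50 75 100; do
  ipv4_weight=$((100 - ipv6_weight))
  echo "stage: BACKEND_SERVICE=${ipv4_weight} BACKEND_SERVICE_IPV6=${ipv6_weight}"
done
```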

Configure the IP address selection policy

This step is optional and provides instructions to configure the IP address selection policy of the backend service. After you have migrated your backends to dual-stack, you can use the IP address selection policy to specify the traffic type that is sent from the GFE to your backends.

Replace IP_ADDRESS_SELECTION_POLICY with any of the following values:

  • Only IPv4: only send IPv4 traffic to the backends of the backend service, regardless of the traffic type from the client to the GFE. Only IPv4 health checks are used to check the health of the backends.

  • Prefer IPv6: prioritize the backend's IPv6 connection over the IPv4 connection (provided there is a healthy backend with IPv6 addresses). The health checks periodically monitor the backends' IPv6 and IPv4 connections. The GFE first attempts the IPv6 connection; if the IPv6 connection is broken or slow, the GFE uses happy eyeballs to fall back to IPv4. Even if one of the IPv6 or IPv4 connections is unhealthy, the backend is still treated as healthy, and both connections can be tried by the GFE, with happy eyeballs ultimately selecting which one to use.

  • Only IPv6: only send IPv6 traffic to the backends of the backend service, regardless of the traffic type from the client to the GFE. Only IPv6 health checks are used to check the health of the backends.

There is no validation to check whether the backend traffic type matches the IP address selection policy. For example, if you have IPv4-only backends and select Only IPv6 as the IP address selection policy, you won't see configuration errors, but traffic won't flow to your backends.
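Because there is no server-side validation, a migration script can carry its own guard. The function below is a hypothetical client-side check; in practice, the stack type and policy values would come from gcloud describe output.

```shell
# Hypothetical client-side guard: the API accepts any combination, but
# an IPv6-only policy over IPv4-only backends silently drops traffic.
check_policy() {
  # $1 = backend stack type, $2 = IP address selection policy
  case "$1:$2" in
    IPV4_ONLY:IPV6_ONLY)
      echo "WARNING: policy requires IPv6 but backends are IPv4 only" ;;
    *)
      echo "OK: policy is compatible with the backend stack type" ;;
  esac
}

check_policy "IPV4_ONLY" "IPV6_ONLY"
```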

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. Click Edit.

  4. Click Backend configuration.

  5. In the Backend service field, select BACKEND_SERVICE_IPV6.

  6. Verify that the Backend type is Zonal network endpoint group or Instance group.

  7. In the IP address selection policy list, select IP_ADDRESS_SELECTION_POLICY.

  8. Click Done.

gcloud

  1. Update the backend service:

    gcloud beta compute backend-services update BACKEND_SERVICE_IPV6 \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --protocol=HTTP \
       --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
       --global
    

Test your load balancer

Test the load balancer to confirm that the migration is successful and the incoming traffic is reaching the backends as expected.

Look up the load balancer's external IP address

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. In the Frontend section, two load balancer IP addresses are displayed. In this procedure, the IPv4 address is referred to as IP_ADDRESS_IPV4 and the IPv6 address is referred to as IP_ADDRESS_IPV6.

  4. In the Backends section, when the IP address selection policy is Prefer IPv6, two health check statuses are displayed for the backends.

Test traffic sent to your instances

In this example, requests from the curl command are distributed randomly to the backends.

  1. Repeat the following commands a few times until you see all the backend VMs responding:

    curl http://IP_ADDRESS_IPV4
    
    curl http://IP_ADDRESS_IPV6
    

    For example, if the load balancer's IPv6 address is fd20:1db0:b882:802:0:46:0:0, the command looks similar to this:

    curl http://[fd20:1db0:b882:802:0:46:0:0]
    
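As the example shows, a literal IPv6 address must be enclosed in square brackets in a URL (per RFC 3986), while an IPv4 address is used as-is. A small helper function makes the difference explicit:

```shell
# Build a curl-ready URL from a literal IP address: IPv6 literals must
# be wrapped in square brackets, IPv4 literals are used unchanged.
ip_url() {
  case "$1" in
    *:*) echo "http://[$1]" ;;   # contains a colon: IPv6 literal
    *)   echo "http://$1" ;;     # dotted quad: IPv4 literal
  esac
}

ip_url "fd20:1db0:b882:802:0:46:0:0"   # prints http://[fd20:1db0:b882:802:0:46:0:0]
ip_url "203.0.113.7"                   # prints http://203.0.113.7
```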

Check the logs

Every log entry captures the destination IPv4 or IPv6 address for the backend. Because the backends are dual-stack, it is important to observe which IP address the GFE used to reach the backend.

You can check whether traffic is going to IPv6 or falling back to IPv4 by viewing the logs.

The HttpRequest log field contains the backend_ip address associated with the backend. By examining backend_ip in the logs, you can confirm whether the GFE connected to the backend over IPv4 or IPv6.
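For example, you can pull backend_ip out of an exported log entry and classify it. The JSON below is a simplified illustration of a log entry, not the exact logging schema; in a real pipeline the field is nested inside the load balancer's log payload.

```shell
# Simplified, illustrative log entry; real entries nest these fields in
# the load balancer's log payload and may name them differently.
log_entry='{"httpRequest":{"requestUrl":"http://example.com/"},"backend_ip":"2600:1900:4000::10"}'

# Extract the backend IP with sed (avoids a jq dependency).
backend_ip="$(echo "$log_entry" | sed -n 's/.*"backend_ip":"\([^"]*\)".*/\1/p')"
echo "$backend_ip"

# A colon means the GFE reached the backend over IPv6; otherwise IPv4.
case "$backend_ip" in
  *:*) echo "backend reached over IPv6" ;;
  *)   echo "backend reached over IPv4" ;;
esac
```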

Limitations

When the IP address selection policy is configured as IPV6_ONLY, you can still configure IPv4-only backends. Such a configuration results in no healthy backends: clients receive response code 503, and no traffic reaches the backends. In the logs, the statusDetails field shows the failed_to_pick_backend HTTP failure message.