This document shows you how to convert Application Load Balancer resources and backends from IPv4 only (single-stack) to IPv4 and IPv6 (dual-stack). In this document, IPv4 only (single-stack) refers to the resources that use only IPv4 addresses, and IPv4 and IPv6 (dual-stack) refers to the resources that use both IPv4 and IPv6 addresses.
Benefits
There are several benefits to converting your load balancers to dual stack:
The main advantage of IPv6 is that a much larger pool of IP addresses can be allocated.
Many customers that are already using IPv4 only load balancers can convert from IPv4 only resources to IPv4 and IPv6 (dual-stack) resources by taking advantage of cloud-specific methods.
You can configure the load balancer to terminate ingress IPv6 traffic and proxy this traffic over IPv4 or IPv6 connection to your backends, based on your preference. For more information, see IPv6.
Limitations
- You cannot update the IP stack type of a subnet from IPv4 and IPv6 (dual-stack) to IPv4 only.
- When you configure the IP address selection policy on the backend service as IPv6 only, you can still configure IPv4 only backends. However, such a configuration results in unhealthy backends, clients get the 503 response code, and traffic doesn't flow upstream. The logs show failed_to_pick_backend in the statusDetails HTTP failure messages.
- You can configure the IP address selection policy of the backend service to IPv6 only, but the IP stack type of the backends is always IPv4 and IPv6 (dual-stack).
- Only VM instance group backends and zonal network endpoint groups (NEGs) with GCE_VM_IP_PORT endpoints support IPv4 and IPv6 (dual-stack).
- Cross-region internal Application Load Balancers, regional external Application Load Balancers, and regional internal Application Load Balancers don't support forwarding rules with IPv6. Ingress IPv4 traffic is proxied over an IPv4 or IPv6 connection to dual-stack backends.
- Classic Application Load Balancers don't support dual-stack backends. The load balancer terminates ingress IPv6 traffic received from IPv6 clients and proxies this traffic over an IPv4 connection to your IPv4 backends.
Before you begin
You must have already set up an Application Load Balancer with IPv4 only stack with an instance group or zonal NEG backends.
For more information on how to set up global external Application Load Balancers, refer to the following documentation:
- Set up a global external Application Load Balancer with VM instance group backends
- Set up a global external Application Load Balancer with zonal NEGs
For more information on how to set up regional external Application Load Balancers, refer to the following documentation:
- Set up a regional external Application Load Balancer with VM instance group backends
- Set up a regional external Application Load Balancer with zonal NEGs
For more information on how to set up cross-region internal Application Load Balancers, refer to the following documentation:
- Set up a cross-region internal Application Load Balancer with VM instance group backends
- Set up a cross-region internal Application Load Balancer with zonal NEGs
For more information on how to set up regional internal Application Load Balancers, refer to the following documentation:
- Set up a regional internal Application Load Balancer with VM instance group backends
- Set up a regional internal Application Load Balancer with zonal NEGs
Identify the resources to convert
Note the names of the resources that your load balancer is associated with. You'll need to provide these names later.
To list all the subnets, use the gcloud compute networks subnets list command:

gcloud compute networks subnets list

Note the name of the subnet with IPv4 only addresses to convert to dual-stack. This name is referred to later as SUBNET. The VPC network is referred to later as NETWORK.

To list all the backend services, use the gcloud beta compute backend-services list command:

gcloud beta compute backend-services list

Note the name of the backend service to convert to dual-stack. This name is referred to later as BACKEND_SERVICE.

To list all the URL maps, use the gcloud beta compute url-maps list command:

gcloud beta compute url-maps list

Note the name of the URL map associated with your load balancer. This name is referred to later as URL_MAP.

If you already have a load balancer, to view the IP stack type of your backends, use the gcloud compute instances list command:

gcloud compute instances list \
    --format="table(
      name,
      zone.basename(),
      networkInterfaces[].stackType.notnull().list(),
      networkInterfaces[].ipv6AccessConfigs[0].externalIpv6.notnull().list():label=EXTERNAL_IPV6,
      networkInterfaces[].ipv6Address.notnull().list():label=INTERNAL_IPV6)"

To list all the VM instances and instance templates, use the gcloud compute instances list command and the gcloud compute instance-templates list command:

gcloud compute instances list

gcloud compute instance-templates list

Note the names of the instances and instance templates to convert to dual-stack. These names are referred to later as VM_INSTANCE and INSTANCE_TEMPLATES.

To list all the zonal network endpoint groups (NEGs), use the gcloud compute network-endpoint-groups list command:

gcloud compute network-endpoint-groups list

Note the names of the zonal NEG backends to convert to dual-stack. This name is referred to later as ZONAL_NEG.

To list all the target proxies, use the gcloud compute target-http-proxies list command:

gcloud compute target-http-proxies list

Note the name of the target proxy associated with your load balancer. This name is referred to later as TARGET_PROXY.
Convert from single-stack to dual-stack backends
This section shows you how to convert your load balancer resources and backends using IPv4 only (single-stack) addresses to IPv4 and IPv6 (dual-stack) addresses.
Update the subnet
Dual-stack subnets are supported on custom mode VPC networks only; they are not supported on auto mode VPC networks or legacy networks. Although auto mode networks can be useful for early exploration, custom mode VPC networks are better suited for most production environments, and we recommend that you use them.
To update the VPC to the dual-stack setting, follow these steps:
If you are using an auto mode VPC network, you must first convert the auto mode VPC network to custom mode.
To enable IPv6, see Change a subnet's stack type to dual stack.
Optional: If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:
- For VPC network ULA internal IPv6 range, select Enabled.
- For Allocate internal IPv6 range, select Automatically or Manually.
- If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is in use, you are prompted to provide a different range.
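The first two steps can also be performed with gcloud. This is a minimal sketch: NETWORK, SUBNET, and REGION are the placeholders identified earlier, and the EXTERNAL IPv6 access type assumes an external load balancer (internal load balancers use INTERNAL):

```shell
# Convert an auto mode VPC network to custom mode. This is a one-way change.
gcloud compute networks update NETWORK \
    --switch-to-custom-subnet-mode

# Change the subnet's stack type to dual stack. EXTERNAL assumes an external
# Application Load Balancer; use INTERNAL for internal load balancers.
gcloud compute networks subnets update SUBNET \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=EXTERNAL \
    --region=REGION
```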
Update the proxy-only subnet
If you are using an Envoy based load balancer, we recommend that you change the proxy-only subnet stack type to dual stack. For information about load balancers that support proxy-only subnets, see Supported load balancers.
To change the proxy-only subnet's stack type to dual stack, do the following:
Console
In the Google Cloud console, go to the VPC networks page.
Click the name of a network to view the VPC network details page.
Click the Subnets tab.
In the Reserved proxy-only subnets for load balancing section, click the name of the proxy-only subnet that you want to modify.
In the Subnet details page, click Edit.
For IP stack type, select IPv4 and IPv6 (dual-stack). The IPv6 access type is Internal.
Click Save.
gcloud
Use the subnets update command:

gcloud compute networks subnets update PROXY_ONLY_SUBNET \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL \
    --region=REGION

Replace the following:
- PROXY_ONLY_SUBNET: the name of the proxy-only subnet.
- REGION: the region of the subnet.

The --ipv6-access-type of a proxy-only subnet is always INTERNAL.
Update the VM instance or templates
You can configure IPv6 addresses on a VM instance if the subnet that the VM is connected to has an IPv6 range configured. Only the following backends can support IPv6 addresses:
- Instance group backends: One or more managed, unmanaged, or a combination of managed and unmanaged instance group backends.
- Zonal NEGs: One or more GCE_VM_IP_PORT type zonal NEGs.
Update VM instances
You cannot edit VM instances that are part of a managed or an unmanaged instance group. To update the VM instances to dual stack, follow these steps:
- Delete specific instances from a group
- Create a dual-stack VM
- Create instances with specific names in MIGs
Update VM instance templates
You can't update an existing instance template. If you need to make changes, you can create another template with similar properties. To update the VM instance templates to dual stack, follow these steps:
Console
In the Google Cloud console, go to the Instance templates page.
- Click the instance template that you want to copy and update.
- Click Create similar.
- Expand the Advanced options section.
- For Network tags, enter allow-health-check-ipv6.
- In the Network interfaces section, click Add a network interface.
- In the Network list, select the custom mode VPC network.
- In the Subnetwork list, select SUBNET.
- For IP stack type, select IPv4 and IPv6 (dual-stack).
- Click Create.
Start a basic rolling update on the managed instance group (MIG) associated with the load balancer.
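The same flow can be sketched with gcloud. This is a minimal sketch under assumptions: TEMPLATE_IPV6, MIG, and ZONE are hypothetical names, and the new template must replicate the remaining properties (machine type, image, and so on) of the template it replaces:

```shell
# Create a dual-stack instance template. TEMPLATE_IPV6 is a hypothetical name;
# SUBNET is the dual-stack subnet identified earlier. Copy the remaining
# properties (machine type, image, and so on) from your existing template.
gcloud compute instance-templates create TEMPLATE_IPV6 \
    --subnet=SUBNET \
    --stack-type=IPV4_IPV6 \
    --tags=allow-health-check-ipv6 \
    --image-family=debian-12 \
    --image-project=debian-cloud

# Start a basic rolling update so that the MIG recreates its instances from
# the new dual-stack template.
gcloud compute instance-groups managed rolling-action start-update MIG \
    --version=template=TEMPLATE_IPV6 \
    --zone=ZONE
```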
Update the zonal NEG
Zonal NEG endpoints cannot be edited. You must delete the IPv4 endpoints and create a new dual-stack endpoint with both IPv4 and IPv6 addresses.
To set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION_A region, first create the VMs in the GCP_NEG_ZONE zone. Then add the VM network endpoints to the zonal NEG.
Create VMs
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set the Name to vm-a1.

For the Region, choose REGION_A, and choose any value for the Zone field. This zone is referred to as GCP_NEG_ZONE in this procedure.

In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
Expand the Advanced options section and make the following changes:
- Expand the Networking section.
- In the Network tags field, enter allow-health-check.
- In the Network interfaces section, make the following changes:
  - Network: NETWORK
  - Subnet: SUBNET
  - IP stack type: IPv4 and IPv6 (dual-stack)
- Click Done.
Click Management. In the Startup script field, copy and paste the following script contents.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
Repeat the preceding steps to create a second VM, using the following name and zone combination:
- Name: vm-a2, zone: GCP_NEG_ZONE
gcloud
Create the VMs by running the following command two times, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

- VM_NAME of vm-a1 and any GCP_NEG_ZONE zone of your choice.
- VM_NAME of vm-a2 and the same GCP_NEG_ZONE zone.

gcloud compute instances create VM_NAME \
    --zone=GCP_NEG_ZONE \
    --stack-type=IPV4_IPV6 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-health-check \
    --subnet=SUBNET \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
Add endpoints to the zonal NEG
Console
To add endpoints to the zonal NEG:
In the Google Cloud console, go to the Network endpoint groups page.
In the Name list, click the name of the network endpoint group (ZONAL_NEG). The Network endpoint group details page is displayed.

In the Network endpoints in this group section, select the previously created NEG endpoint, and then click Remove endpoint.
In the Network endpoints in this group section, click Add network endpoint.
Select the VM instance.
In the Network interface section, the name, zone, and subnet of the VM is displayed.
In the IPv4 address field, enter the IPv4 address of the new network endpoint.
In the IPv6 address field, enter the IPv6 address of the new network endpoint.
Select the Port type.
- If you select Default, the endpoint uses the default port 80 for all endpoints in the network endpoint group. This is sufficient for this example because the Apache server is serving requests at port 80.
- If you select Custom, enter the Port number for the endpoint to use.
To add more endpoints, click Add network endpoint and repeat the previous steps.
After you add all the endpoints, click Create.
gcloud
Add endpoints (GCE_VM_IP_PORT endpoints) to ZONAL_NEG:

gcloud compute network-endpoint-groups update ZONAL_NEG \
    --zone=GCP_NEG_ZONE \
    --add-endpoint='instance=vm-a1,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80' \
    --add-endpoint='instance=vm-a2,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80'
Replace the following:
- IPv4_ADDRESS: the IPv4 address of the network endpoint. The IPv4 address must belong to a VM in Compute Engine (either the primary IP or as part of an aliased IP range). If the IP address is not specified, the primary IP address for the VM instance in the network that the network endpoint group belongs to is used.
- IPv6_ADDRESS: the IPv6 address of the network endpoint. The IPv6 address must belong to a VM instance in the network that the network endpoint group belongs to (external IPv6 address).
Create an IPv6 health check firewall rule
Ensure that you have an ingress rule that is applicable to the instances being load balanced and that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64). This example uses the target tag allow-health-check-ipv6 to identify the VM instances to which it applies.
Without this firewall rule, the default deny ingress rule blocks incoming IPv6 traffic to the backend instances.
Console
In the Google Cloud console, go to the Firewall policies page.
To allow IPv6 subnet traffic, click Create firewall rule and enter the following information:
- Name: fw-allow-lb-access-ipv6
- Network: NETWORK
- Priority: 1000
- Direction of traffic: Ingress
- Targets: Specified target tags
- Target tags: allow-health-check-ipv6
- Source filter: IPv6 ranges
- Source IPv6 ranges:
  - For global external Application Load Balancer and global external proxy Network Load Balancer, enter 2600:2d00:1:b029::/64,2600:2d00:1:1::/64
  - For cross-region internal Application Load Balancer, regional external Application Load Balancer, regional internal Application Load Balancer, cross-region internal proxy Network Load Balancer, regional external proxy Network Load Balancer, and regional internal proxy Network Load Balancer, enter 2600:2d00:1:b029::/64
- Protocols and ports: Allow all
Click Create.
gcloud
Create the fw-allow-lb-access-ipv6 firewall rule to allow communication with the subnet.

For global external Application Load Balancer and global external proxy Network Load Balancer, use the following command:

gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64,2600:2d00:1:1::/64 \
    --rules=all
For cross-region internal Application Load Balancer, regional external Application Load Balancer, regional internal Application Load Balancer, cross-region internal proxy Network Load Balancer, regional external proxy Network Load Balancer, and regional internal proxy Network Load Balancer, use the following command:
gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64 \
    --rules=all
Add a proxy-only subnet firewall rule
If you are using an Envoy-based load balancer, you must update the ingress firewall rule fw-allow-lb-access-ipv6 to allow traffic from the proxy-only subnet to the backends.
To get the internalIpv6Prefix from the proxy-only subnet, run the following command:

gcloud compute networks subnets describe PROXY_ONLY_SUBNET \
    --region=REGION \
    --format="value(internalIpv6Prefix)"
Note the internal IPv6 prefix; this is referred to later as IPv6_PROXY.

To update the firewall rule fw-allow-lb-access-ipv6 for the proxy-only subnet, do the following:
Console
In the Google Cloud console, go to the Firewall policies page.
In the VPC firewall rules panel, click fw-allow-lb-access-ipv6.

For Source IPv6 ranges, enter 2600:2d00:1:b029::/64, IPv6_PROXY.
Click Save.
gcloud
Update the fw-allow-lb-access-ipv6 firewall rule to allow communication with the proxy-only subnet:

gcloud compute firewall-rules update fw-allow-lb-access-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64,2600:2d00:1:1::/64,IPv6_PROXY
Create a new backend service and forwarding rule for IPv6
This section describes the procedure to create a new backend service and a forwarding rule for IPv6.
Both BACKEND_SERVICE and BACKEND_SERVICE_IPV6 are capable of serving traffic. To avoid traffic disruption, create a new backend service with the IP address selection policy set to Prefer IPv6. After you create the new backend service, you can route traffic to the new IPv6 backend service.

The forwarding rule with IPv6 can be created only for global external Application Load Balancers. Forwarding rules with IPv6 are not supported for cross-region internal Application Load Balancers, regional external Application Load Balancers, and regional internal Application Load Balancers.
Console
In the Google Cloud console, go to the Load balancing page.
Click the name of the load balancer.
Click Edit.
Configure the backend service:
- Click Backend configuration.
- In the Backend service field, select Create a backend service.
- Set the Name as BACKEND_SERVICE_IPV6.
- For Backend type, select Zonal network endpoint group.
- In the IP address selection policy list, select Prefer IPv6.
- In the Protocol field, select HTTP.
- In the New Backend panel, do the following:
  - In the network endpoint group list, select ZONAL_NEG.
  - For Maximum RPS, enter 10.
- In the Health check list, select an HTTP health check.
- Click Done.
Configure the IPv6 frontend:
Forwarding rules with IPv6 are not supported for cross-region internal Application Load Balancers, regional external Application Load Balancers, and regional internal Application Load Balancers.
- Click Frontend configuration.
- Click Add frontend IP and port.
- In the Name field, enter a name for the forwarding rule.
- In the Protocol field, select HTTP.
- Set IP version to IPv6.
- Click Done.
- Click Update.
Configure routing rules
- Click Routing rules.
- Click Advanced host and path rule.
- Click Update.
gcloud
Create a health check:
gcloud compute health-checks create http HEALTH_CHECK \
    --port 80
Create the backend service for HTTP traffic:
global

For global external Application Load Balancer, use the following command:

gcloud beta compute backend-services create BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=PREFER_IPV6 \
    --health-checks=HEALTH_CHECK \
    --global

For cross-region internal Application Load Balancer, use the following command:

gcloud beta compute backend-services create BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=PREFER_IPV6 \
    --health-checks=HEALTH_CHECK \
    --global

regional

For regional external Application Load Balancer, use the following command:

gcloud beta compute backend-services create BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=PREFER_IPV6 \
    --health-checks=HEALTH_CHECK \
    --region=REGION

For regional internal Application Load Balancer, use the following command:

gcloud beta compute backend-services create BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=PREFER_IPV6 \
    --health-checks=HEALTH_CHECK \
    --region=REGION
Add zonal NEGs as the backend to the backend service.
global

For global external Application Load Balancer and cross-region internal Application Load Balancer, use the following command:

gcloud beta compute backend-services add-backend BACKEND_SERVICE_IPV6 \
    --network-endpoint-group=ZONAL_NEG \
    --max-rate-per-endpoint=10 \
    --global

regional

For regional external Application Load Balancer and regional internal Application Load Balancer, use the following command:

gcloud beta compute backend-services add-backend BACKEND_SERVICE_IPV6 \
    --network-endpoint-group=ZONAL_NEG \
    --max-rate-per-endpoint=10 \
    --region=REGION
Reserve an external IPv6 address that your customers use to reach your load balancer.
global

For global external Application Load Balancer, use the following command:

gcloud compute addresses create lb-ipv6-1 \
    --ip-version=IPV6 \
    --network-tier=PREMIUM \
    --global
Create a forwarding rule for the backend service. When you create the forwarding rule, specify the external IP address in the subnet.
global

For global external Application Load Balancer, use the following command:

gcloud beta compute forwarding-rules create FORWARDING_RULE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=PREMIUM \
    --address=lb-ipv6-1 \
    --global \
    --target-https-proxy=TARGET_PROXY \
    --ports=443
Route traffic to the new IPv6 backend service
Update the URL map to direct some fraction of client traffic to the new IPv6 backend service BACKEND_SERVICE_IPV6.
Use the following command to edit the URL maps:
global
For global external Application Load Balancer and cross-region internal Application Load Balancer, use the command:
gcloud compute url-maps edit URL_MAP \
    --global
regional
For regional external Application Load Balancer and regional internal Application Load Balancer, use the command:
gcloud compute url-maps edit URL_MAP \
    --region=REGION
In the text editor that appears, add a routeRule with a weightedBackendServices action that directs a percentage of traffic to BACKEND_SERVICE_IPV6.

defaultService: global/backendServices/BACKEND_SERVICE
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
name: URL_MAP
pathMatchers:
- defaultService: global/backendServices/BACKEND_SERVICE
  name: matcher1
  routeRules:
  - matchRules:
    - prefixMatch: ''
    priority: 1
    routeAction:
      weightedBackendServices:
      - backendService: global/backendServices/BACKEND_SERVICE
        weight: 95
      - backendService: global/backendServices/BACKEND_SERVICE_IPV6
        weight: 5
To implement a gradual migration to IPv6, incrementally increase the weight for the new backend service BACKEND_SERVICE_IPV6 to 100% by repeatedly editing the URL map.
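For example, the final URL map edit of such a migration might set the weights as follows. This is a sketch; only the routeAction section is shown, and the rest of the URL map stays unchanged:

```yaml
# Final routeAction after full migration to the IPv6 backend service.
routeAction:
  weightedBackendServices:
  - backendService: global/backendServices/BACKEND_SERVICE
    weight: 0
  - backendService: global/backendServices/BACKEND_SERVICE_IPV6
    weight: 100
```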
Configure the IP address selection policy
This step is optional. After you have converted your resources and backends to dual-stack, you can use the IP address selection policy to specify the traffic type that is sent from the backend service to your backends.
Replace IP_ADDRESS_SELECTION_POLICY with any of the following values:
IP address selection policy | Description
---|---
Only IPv4 | Only send IPv4 traffic to the backends of the backend service, regardless of traffic from the client to the GFE. Only IPv4 health checks are used to check the health of the backends.
Prefer IPv6 | Prioritize the backend's IPv6 connection over the IPv4 connection (provided there is a healthy backend with IPv6 addresses). The health checks periodically monitor the backends' IPv6 and IPv4 connections. The GFE first attempts the IPv6 connection; if the IPv6 connection is broken or slow, the GFE uses happy eyeballs to fall back and connect to IPv4. Even if one of the IPv6 or IPv4 connections is unhealthy, the backend is still treated as healthy, and both connections can be tried by the GFE, with happy eyeballs ultimately selecting which one to use.
Only IPv6 | Only send IPv6 traffic to the backends of the backend service, regardless of traffic from the client to the proxy. Only IPv6 health checks are used to check the health of the backends. There is no validation to check whether the backend traffic type matches the IP address selection policy. For example, if you have IPv4 only backends and select Only IPv6, the configuration results in unhealthy backends and traffic doesn't flow.
Console
In the Google Cloud console, go to the Load balancing page.
Click the name of the load balancer.
Click Edit.
Click Backend configuration.
In the Backend service field, select BACKEND_SERVICE_IPV6.
The Backend type must be Zonal network endpoint group or Instance group.
In the IP address selection policy list, select IP_ADDRESS_SELECTION_POLICY.
Click Done.
gcloud
Update the backend service:
global

For global external Application Load Balancer, use the following command:

gcloud beta compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --global

For cross-region internal Application Load Balancer, use the following command:

gcloud beta compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --global

regional

For regional external Application Load Balancer, use the following command:

gcloud beta compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --region=REGION

For regional internal Application Load Balancer, use the following command:

gcloud beta compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --region=REGION
Test your load balancer
You must validate that all required resources are updated to dual stack. After you update all the resources, the traffic must automatically flow to the backends. You can check the logs and verify that the conversion is complete.
Test the load balancer to confirm that the migration is successful and the incoming traffic is reaching the backends as expected.
Look up the load balancer's external IP address
Console
In the Google Cloud console, go to the Load balancing page.
Click the name of the load balancer.
In the Frontend section, two load balancer IP addresses are displayed. In this procedure, the IPv4 address is referred to as IP_ADDRESS_IPV4 and the IPv6 address is referred as IP_ADDRESS_IPV6.
In the Backends section, when the IP address selection policy is Prefer IPv6, two health check statuses are displayed for the backends.
Send traffic to your instances
In this example, requests from the curl command are distributed randomly to the backends.
Repeat the following commands a few times until you see all the backend VMs responding:
curl http://IP_ADDRESS_IPV4
curl http://IP_ADDRESS_IPV6
For example, if the IPv6 address is [fd20:1db0:b882:802:0:46:0:0]:80, the command looks similar to this:

curl http://[fd20:1db0:b882:802:0:46:0:0]
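The repeated requests can also be scripted. This is a minimal sketch, where IP_ADDRESS_IPV4 and IP_ADDRESS_IPV6 are placeholders for the frontend addresses noted earlier:

```shell
# Send ten requests to each frontend address and count how many responses
# came from each backend VM. Each backend serves a page containing its own
# hostname, so the counts show the traffic distribution.
for i in $(seq 1 10); do
  curl -s "http://IP_ADDRESS_IPV4"
done | sort | uniq -c

for i in $(seq 1 10); do
  curl -s "http://[IP_ADDRESS_IPV6]"
done | sort | uniq -c
```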
Check the logs
Every log entry captures the destination IPv4 or IPv6 address of the backend. Because the backends are dual-stack, it is important to observe which IP address the load balancer used. By viewing the logs, you can check whether traffic is going to IPv6 or falling back to IPv4.
The HttpRequest log field contains the backend_ip address associated with the backend. By examining the logs and comparing the destination IPv4 and IPv6 addresses of the backend_ip, you can confirm which IP address is used.
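As a sketch, you can list recent entries with gcloud. This assumes that logging is enabled on the backend service and that the entries are written under the http_load_balancer monitored-resource type (which applies to global external Application Load Balancers); the serverIp field of HttpRequest is assumed to hold the backend address here:

```shell
# List the ten most recent load balancer log entries, showing the request URL,
# the backend IP address the proxy connected to, and the status details.
# Assumptions: logging is enabled on the backend service, and the entries are
# written under the http_load_balancer monitored-resource type.
gcloud logging read 'resource.type="http_load_balancer"' \
    --limit=10 \
    --format="table(timestamp, httpRequest.requestUrl, httpRequest.serverIp, jsonPayload.statusDetails)"
```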