Troubleshooting Internal TCP/UDP Load Balancing

This guide describes how to troubleshoot configuration issues for a Google Cloud internal TCP/UDP load balancer.

Overview

The types of issues discussed in this guide include the following:

  • Load balancer setup issues
  • General connectivity issues
  • Backend failover issues
  • Load balancer as next-hop issues

Before you begin

Before investigating issues, familiarize yourself with the Internal TCP/UDP Load Balancing documentation that covers your scenario: the pages on general connectivity, on failover, and on using the load balancer as a next hop.

Backends have incompatible balancing modes

When creating a load balancer, you might see the error:

Validation failed for instance group INSTANCE_GROUP:

backend services 1 and 2 point to the same instance group
but the backends have incompatible balancing_mode. Values should be the same.

This error occurs when you use the same instance group as a backend for two different load balancers, and the two backend services don't specify the same balancing mode for that instance group.

To resolve this error, configure both backend services that share the instance group to use the same balancing mode.
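
To check which balancing mode each backend service has configured for the instance group, you can describe the backend services. This is a sketch; the backend service name and region are placeholders:

gcloud compute backend-services describe my-backend-service \
    --region=us-central1 \
    --format="yaml(backends)"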

Troubleshooting general connectivity issues

If you can't connect to your internal TCP/UDP load balancer, check for the following common issues:

  • Verify firewall rules.

    • Ensure that ingress allow firewall rules are defined to permit health checks to backend VMs.
    • Ensure that ingress allow firewall rules allow traffic to the backend VMs from clients.
    • Ensure that relevant firewall rules exist to allow traffic to reach the backend VMs on the ports being used by the load balancer.
    • If you're using target tags for the firewall rules, make sure that the load balancer's backend VMs are tagged appropriately.

    To learn how to configure firewall rules required by your internal TCP/UDP load balancer, see Configuring firewall rules.
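
    For example, a minimal sketch of ingress allow rules for backends tagged lb-backend (the rule names, network, tag, port, and client range are placeholders; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health check probe ranges):

    # Allow health check probes to reach the backends.
    gcloud compute firewall-rules create fw-allow-health-checks \
        --network=my-network \
        --direction=INGRESS \
        --allow=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=lb-backend

    # Allow client traffic to reach the backends on the load balancer's port.
    gcloud compute firewall-rules create fw-allow-client-traffic \
        --network=my-network \
        --direction=INGRESS \
        --allow=tcp:80 \
        --source-ranges=10.0.0.0/8 \
        --target-tags=lb-backend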

  • Verify that the Guest environment is running on the backend VM. If you can connect to a healthy backend VM, but cannot connect to the load balancer, it might be that the Guest environment (formerly, the Windows Guest Environment or Linux Guest Environment) on the VM is either not running or is unable to communicate with the metadata server (metadata.google.internal, 169.254.169.254).

    Check for the following:

    • Ensure that the Guest environment is installed and running on the backend VM.
    • Ensure that the firewall rules within the guest operating system of the backend VM (iptables or Windows Firewall) don't block access to the metadata server.
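
    For example, on a Linux backend VM you can check both conditions as follows. This is a sketch; the google-guest-agent service name applies to current Google-provided images, and older images use different service names:

    # Check that the Guest environment's agent is running.
    sudo systemctl status google-guest-agent

    # Confirm that the VM can reach the metadata server.
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/name"
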
  • Verify that backend VMs are accepting packets sent to the load balancer. Each backend VM must be configured to accept packets sent to the load balancer. That is, the destination of packets delivered to the backend VMs is the IP address of the load balancer. Under most circumstances, this is implemented with a local route.

    For VMs created from Google Cloud images, the Guest agent installs the local route for the load balancer's IP address. Google Kubernetes Engine instances based on Container-Optimized OS implement this by using iptables instead.

    On a Linux backend VM, you can verify the presence of the local route by running the following command. Replace LOAD_BALANCER_IP with the load balancer's IP address:

    sudo ip route list table local | grep LOAD_BALANCER_IP
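
    If the route is missing, restarting the Guest environment should restore it. As a temporary diagnostic step, you can also add the route manually. This is a sketch, assuming eth0 is the VM's primary network interface:

    sudo ip route add local LOAD_BALANCER_IP/32 dev eth0 table local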
    
  • Verify service IP address and port binding on the backend VMs. Packets sent to an internal TCP/UDP load balancer arrive at backend VMs with the destination IP address of the load balancer itself. This type of load balancer is not a proxy, and this is expected behavior.

    The software running on the backend VM must be doing the following:

    • Listening on (bound to) the load balancer's IP address or any IP address (0.0.0.0 or ::)
    • Listening on (bound to) a port that's included in the load balancer's forwarding rule

    To test this, connect to a backend VM using SSH or RDP. Then perform the following tests using curl, telnet, or a similar tool:

    • Attempt to reach the service by contacting it using the internal IP address of the backend VM itself, 127.0.0.1, or localhost.
    • Attempt to reach the service by contacting it using the IP address of the load balancer's forwarding rule.
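
    For example, a quick check on the backend VM itself, assuming a TCP service on port 80 (adjust the port to match your forwarding rule):

    # Confirm the service is listening on 0.0.0.0 or on the load balancer's IP address.
    sudo ss -tlnp | grep ':80'

    # Test locally first, then against the load balancer's IP address.
    curl http://localhost:80/
    curl http://LOAD_BALANCER_IP:80/
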
  • Check if the client VM is in the same region as the load balancer. If the client connecting to the load balancer is in another region, make sure that global access is enabled.
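
    A minimal sketch of enabling global access on an existing forwarding rule (the rule name and region are placeholders):

    gcloud compute forwarding-rules update my-forwarding-rule \
        --region=us-central1 \
        --allow-global-access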

  • Verify that health check traffic can reach backend VMs. To verify that health check traffic reaches your backend VMs, enable health check logging and search for successful log entries.
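
    For example, assuming a TCP health check (substitute your health check's name and type):

    # Enable health check logging.
    gcloud compute health-checks update tcp my-tcp-health-check --enable-logging

    # Search recent health check log entries; this filter is an assumption, adjust it to your project.
    gcloud logging read 'logName:"healthchecks"' --limit=10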

Troubleshooting Shared VPC issues

If you are using Shared VPC and you cannot create a new internal TCP/UDP load balancer in a particular subnet, an organization policy might be the cause. In the organization policy, add the subnet to the list of allowed subnets or contact your organization administrator. For more information, refer to the constraints/compute.restrictSharedVpcSubnetworks constraint.
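
To see how the constraint is currently configured, an organization administrator can describe it. This is a sketch; the organization ID is a placeholder:

gcloud resource-manager org-policies describe compute.restrictSharedVpcSubnetworks \
    --organization=123456789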

Troubleshooting failover issues

If you've configured failover for an internal TCP/UDP load balancer, the following sections describe the issues that can occur.

Connectivity

  • Make sure that you've designated at least one failover backend.
  • Verify your failover policy settings:
    • Failover ratio
    • Dropping traffic when all backend VMs are unhealthy
    • Disabling connection draining on failover
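
To inspect these settings on an existing backend service, you can describe it and view its failover policy. This is a sketch; the backend service name and region are placeholders:

gcloud compute backend-services describe my-failover-bs \
    --region=us-central1 \
    --format="yaml(failoverPolicy)"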

Issues with managed instance groups and failover

  • Symptom: The active pool is changing back and forth (flapping) between the primary and failover backends.
  • Possible reason: Using managed instance groups with autoscaling and failover might cause the active pool to repeatedly fail over and fail back between the primary and failover backends. Google Cloud doesn't prevent you from configuring failover with managed instance groups, because your deployment might benefit from this setup.

Restriction on disabling connection draining for failover groups

Disabling connection draining only works if the backend service is set up with protocol TCP.

The following error message appears if you create a backend service that uses the UDP protocol while connection draining on failover is disabled:

gcloud compute backend-services create my-failover-bs \
  --global-health-checks \
  --load-balancing-scheme internal \
  --health-checks my-tcp-health-check \
  --region us-central1 \
  --no-connection-drain-on-failover \
  --drop-traffic-if-unhealthy \
  --failover-ratio 0.5 \
  --protocol UDP
ERROR: (gcloud.compute.backend-services.create) Invalid value for
[--protocol]: can only specify --connection-drain-on-failover if the protocol is
TCP.

Traffic is sent to unexpected backend VMs

First, check whether the client VM is itself a backend VM of the load balancer. If it is, it's expected behavior that connections sent to the IP address of the load balancer's forwarding rule are always answered by the backend VM itself. For more information, refer to testing connections from a single client and sending requests from load balanced VMs.

If the client VM is not a backend VM of the load balancer:

  • For requests from a single client, refer to testing connections from a single client so that you understand the limitations of this method.

  • Ensure that you have configured ingress allow firewall rules to allow health checks.

  • For a failover configuration, make sure that you understand how membership in the active pool works, and when Google Cloud performs failover and failback. Inspect your load balancer's configuration:

    • Use the Cloud Console to check for the number of healthy backend VMs in each backend instance group. The Cloud Console also shows you which VMs are in the active pool.

    • Make sure that your load balancer's failover ratio is set appropriately. For example, if you have ten primary VMs and a failover ratio of 0.2, Google Cloud performs a failover when fewer than two (10 × 0.2 = 2) primary VMs are healthy. A failover ratio of 0.0 has a special meaning: Google Cloud performs a failover when no primary VMs are healthy.

Existing connections are terminated during failover or failback

Edit your backend service's failover policy. Ensure that connection draining on failover is enabled.
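
A minimal sketch with gcloud, assuming a regional backend service named my-failover-bs:

gcloud compute backend-services update my-failover-bs \
    --region=us-central1 \
    --connection-drain-on-failover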

Troubleshooting load balancer as next hop

When you set an internal TCP/UDP load balancer to be a next hop of a custom static route, the following issues might occur:

Connectivity

  • If you cannot ping an IP address in the destination range of a route whose next hop is a forwarding rule for an internal TCP/UDP load balancer, note that such a route might not process ICMP traffic, depending on when the route was created. Routes created before May 15, 2021 process only TCP and UDP traffic until August 16, 2021. Starting August 16, 2021, all routes automatically forward all protocol traffic (TCP, UDP, and ICMP), regardless of when the route was created. If you don't want to wait until then, you can enable ping functionality now by creating new routes and deleting the old ones, as shown below.
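
    For example, a sketch of replacing an old route; the route names, network, destination range, forwarding rule, and region are placeholders:

    # Create a replacement route with the same parameters as the old one.
    gcloud compute routes create new-ilb-route \
        --network=my-network \
        --destination-range=192.168.0.0/24 \
        --next-hop-ilb=my-forwarding-rule \
        --next-hop-ilb-region=us-central1

    # Delete the old route after the replacement is in place.
    gcloud compute routes delete old-ilb-route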

  • When using an internal TCP/UDP load balancer as a next hop for a custom static route, all traffic is delivered to the load balancer's healthy backend VMs, regardless of the protocol configured for the load balancer's internal backend service, and regardless of the port or ports configured on the load balancer's internal forwarding rule.

  • Ensure that you have created ingress allow firewall rules that correctly identify sources of traffic that should be delivered to backend VMs via the custom static route's next hop. Packets that arrive on backend VMs preserve their source IP addresses, even when delivered by way of a custom static route.

Invalid value for destination range

The destination range of a custom static route can't be more specific than any subnet route in your VPC network. If the destination range conflicts with a subnet route, you receive the following error message when creating the custom static route:

Invalid value for field 'resource.destRange': [ROUTE_DESTINATION].
[ROUTE_DESTINATION] hides the address space of the network .... Cannot change
the routing of packets destined for the network.

  • You cannot create a custom static route with a destination that exactly matches or is more specific (with a longer mask) than a subnet route. Refer to applicability and order for further information.

  • If packets go to an unexpected destination, remove other routes in your VPC network with more specific destinations. Review the routing order to understand Google Cloud route selection.
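
    For example, you can list the routes in a network and compare their destinations; the network name is a placeholder:

    gcloud compute routes list --filter="network:my-network"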

What's next