Troubleshooting Internal HTTP(S) Load Balancing

This guide describes how to troubleshoot configuration issues for a Google Cloud internal HTTP(S) load balancer. Before following this guide, familiarize yourself with how Internal HTTP(S) Load Balancing works and how its components (forwarding rule, target proxy, URL map, backend services, and proxy-only subnet) fit together.

Load balanced traffic does not have the source address of the original client

This is expected behavior. Internal HTTP(S) Load Balancing operates as an HTTP(S) reverse proxy (gateway). When a client program opens a connection to the IP address of an INTERNAL_MANAGED forwarding rule, the connection terminates at a proxy. The proxy processes the requests that arrive over that connection. For each request, the proxy selects a backend to receive the request based on the URL map and other factors. The proxy then sends the request to the selected backend. As a result, from the point of view of the backend, the source of an incoming packet is an IP address from the region's proxy-only subnet.
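To see which source range to expect on your backends, you can inspect the region's proxy-only subnet. This is a sketch with placeholder names (`proxy-only-subnet`, `us-west1`); substitute your own subnet and region:

```shell
# Show the proxy-only subnet's CIDR range. Connections from the load
# balancer's proxies to your backends originate from addresses in this
# range, not from the original client's address.
gcloud compute networks subnets describe proxy-only-subnet \
    --region=us-west1 \
    --format="value(ipCidrRange,purpose)"
```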

Requests are rejected by the load balancer

For each request, the proxy selects a backend to receive the request based on a path matcher in the load balancer's URL map. If the URL map doesn't have a path matcher defined for a request, it cannot select a backend service, so it returns an HTTP 404 (Not Found) response code.
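To check whether a path matcher covers the failing request path, you can print the relevant parts of the URL map. The name and region below are placeholders:

```shell
# Inspect the URL map's host rules, path matchers, and default service
# to confirm that the failing request path is actually matched.
gcloud compute url-maps describe my-url-map \
    --region=us-west1 \
    --format="yaml(hostRules,pathMatchers,defaultService)"
```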

Load balancer doesn't connect to backends

The firewalls protecting your backend servers need to be configured to allow ingress traffic from the proxies in the proxy-only subnet range that you allocated to your internal HTTP(S) load balancer's region.
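An ingress allow rule along these lines covers the proxy-only subnet range. The network name, target tag, and the `10.129.0.0/23` range are placeholders; substitute the range you actually allocated:

```shell
# Allow ingress from the proxy-only subnet range to the backend VMs on
# the ports the backends serve (placeholder values throughout).
gcloud compute firewall-rules create allow-proxy-only-subnet \
    --network=my-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=10.129.0.0/23 \
    --target-tags=load-balanced-backend
```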

The proxies connect to backends using the connection settings specified by the configuration of your backend service. If these values don't match the configuration of the server software running on your backends, the proxy cannot forward requests to them.
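To compare the two sides, you can print the backend service's connection settings and the instance group's named ports. These commands are a sketch with placeholder names:

```shell
# Show the backend service's protocol, port name, and timeout, which
# the proxies use when connecting to backends.
gcloud compute backend-services describe my-backend-service \
    --region=us-west1 \
    --format="yaml(protocol,portName,timeoutSec)"

# The named port on the instance group must map to the port that the
# server software on the backends actually listens on.
gcloud compute instance-groups get-named-ports my-instance-group \
    --zone=us-west1-a
```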

Health check probes can't reach the backends

To verify that health check traffic reaches your backend VMs, enable health check logging and search for successful log entries.
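One way to do this, assuming a regional health check named `my-health-check` (a placeholder), is to enable logging on the health check and then query Cloud Logging for probe entries:

```shell
# Turn on health check logging so probe results are recorded.
gcloud compute health-checks update http my-health-check \
    --region=us-west1 \
    --enable-logging

# Search recent health check log entries for probe results.
gcloud logging read \
    'logName:"compute.googleapis.com%2Fhealthchecks"' \
    --limit=10
```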

Clients cannot connect to the load balancer

The proxies listen for connections on the load balancer's IP address and port configured in the forwarding rule, using the protocol specified in the forwarding rule (HTTP or HTTPS). If your clients can't connect, ensure that they are using the correct address, port, and protocol.
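You can confirm the forwarding rule's actual address and port, then test them directly from a client VM in the same region and VPC network. The names, address, and port below are placeholders:

```shell
# Confirm the address, port, and target proxy the forwarding rule uses.
gcloud compute forwarding-rules describe my-forwarding-rule \
    --region=us-west1 \
    --format="value(IPAddress,portRange,target)"

# From a client VM in the same region and network, test the load
# balancer directly with the address and port reported above.
curl -v http://10.1.2.99:80/
```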

Ensure that a firewall isn't blocking traffic between your client instances and the load balanced IP address.

Ensure that the clients are in the same region as the load balancer. Internal HTTP(S) Load Balancing is a regional product, so all clients (and backends) must be in the same region as the load balancer resource.

Organizational policy restriction for Shared VPC

If you are using Shared VPC and you cannot create a new internal HTTP(S) load balancer in a particular subnet, an organization policy might be the cause. In the organization policy, add the subnet to the list of allowed subnets or contact your organization administrator. For more information, see constraints/compute.restrictSharedVpcSubnetworks.
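To check whether this constraint is in effect, you can print the effective policy for your organization. This is a sketch: the organization ID is a placeholder, and the exact command depends on your gcloud version:

```shell
# Show the effective policy for the Shared VPC subnet constraint
# (placeholder organization ID).
gcloud org-policies describe compute.restrictSharedVpcSubnetworks \
    --organization=123456789 \
    --effective
```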

Load balancer doesn't distribute traffic evenly across zones

You might observe an imbalance in your internal HTTP(S) load balancer traffic across zones. This is especially likely when utilization of your backend capacity is low (under 10%).

Such behavior can affect overall latency due to traffic being sent to only a few servers in one zone.

To even out the traffic distribution across zones, you can make the following configuration changes:

  • Use the RATE balancing mode with a low max-rate-per-instance target capacity.
  • Use the LocalityLbPolicy backend traffic policy with a load balancing algorithm of LEAST_REQUEST.
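The two changes above can be sketched with gcloud as follows. The backend service, instance group, region, zone, and the rate value are placeholders:

```shell
# Switch a backend to RATE balancing mode with a low per-instance
# target capacity so traffic spreads across more instances.
gcloud compute backend-services update-backend my-backend-service \
    --region=us-west1 \
    --instance-group=my-instance-group \
    --instance-group-zone=us-west1-a \
    --balancing-mode=RATE \
    --max-rate-per-instance=10

# Or set the locality load balancing policy to LEAST_REQUEST.
gcloud compute backend-services update my-backend-service \
    --region=us-west1 \
    --locality-lb-policy=LEAST_REQUEST
```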


If you are having trouble using Internal HTTP(S) Load Balancing with other Google Cloud networking features, note the current compatibility limitations.