[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Troubleshoot issues with internal Application Load Balancers\n\nThis guide describes how to troubleshoot configuration issues\nfor a Google Cloud internal Application Load Balancer. Before following this\nguide, familiarize yourself with the following:\n\n- [Internal Application Load Balancer overview](/load-balancing/docs/l7-internal)\n- [Proxy-only subnets](/load-balancing/docs/proxy-only-subnets)\n- [Internal Application Load Balancer logging and\n monitoring](/load-balancing/docs/l7-internal/monitoring)\n\nTroubleshoot common issues with Network Analyzer\n------------------------------------------------\n\n\n[Network Analyzer](/network-intelligence-center/docs/network-analyzer/overview)\nautomatically monitors your VPC network configuration and detects\nboth suboptimal configurations and misconfigurations. It identifies network\nfailures, provides root cause information, and suggests possible resolutions. To\nlearn about the different misconfiguration scenarios that are automatically\ndetected by Network Analyzer, see [Load balancer insights](/network-intelligence-center/docs/network-analyzer/insights/network-services/load-balancer)\nin the Network Analyzer documentation.\n\nNetwork Analyzer is available in the Google Cloud console as a part of\nNetwork Intelligence Center.\n[Go to Network Analyzer](https://console.cloud.google.com/net-intelligence/network-analyzer)\n\n\u003cbr /\u003e\n\nBackends have incompatible balancing modes\n------------------------------------------\n\nWhen creating a load balancer, you might see the error: \n\n```\nValidation failed for instance group INSTANCE_GROUP:\n\nbackend services 1 and 2 point to the same instance group\nbut the backends have incompatible balancing_mode. Values should be the same.\n```\n\nThis happens when you try to use the same backend in two different load\nbalancers, and the backends don't have compatible balancing modes.\n\nFor more information, see the following:\n\n- [Restrictions and guidance for instance\n groups](/load-balancing/docs/backend-service#restrictions_and_guidance)\n- [Change the balancing mode of a load\n balancer](/load-balancing/docs/backend-service#change_balancing_mode)\n\nLoad balanced traffic does not have the source address of the original client\n-----------------------------------------------------------------------------\n\nThis is expected behavior. An internal Application Load Balancer operates as an HTTP(S)\nreverse proxy (gateway). When a client program opens a connection to the IP\naddress of an INTERNAL_MANAGED forwarding rule, the connection terminates at a\nproxy. The proxy processes the requests that arrive over that connection. For\neach request, the proxy selects a backend to receive the request based on the\nURL map and other factors. The proxy then sends the request to the selected\nbackend. 
Load balanced traffic does not have the source address of the original client
------------------------------------------------------------------------------

This is expected behavior. An internal Application Load Balancer operates as an
HTTP(S) reverse proxy (gateway). When a client program opens a connection to
the IP address of an INTERNAL_MANAGED forwarding rule, the connection
terminates at a proxy. The proxy processes the requests that arrive over that
connection. For each request, the proxy selects a backend to receive the
request based on the URL map and other factors, and then sends the request to
the selected backend. As a result, from the point of view of the backend, the
source of an incoming packet is an IP address from the region's
[proxy-only subnet](/load-balancing/docs/proxy-only-subnets).

Requests are rejected by the load balancer
------------------------------------------

For each request, the proxy selects a backend to receive the request based on
a path matcher in the load balancer's URL map. If the URL map doesn't have a
path matcher defined for a request, the proxy cannot select a backend service,
so it returns an HTTP `404` (Not Found) response code.

Load balancer doesn't connect to backends
-----------------------------------------

The firewalls protecting your backend servers must be configured to allow
ingress traffic from the proxies in the
[proxy-only subnet range](/load-balancing/docs/l7-internal/proxy-only-subnets)
that you allocated to your internal Application Load Balancer's region.

The proxies connect to backends using the connection settings specified in
your backend service configuration. If these values don't match the
configuration of the servers running on your backends, the proxy cannot
forward requests to the backends.
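For example, you can create an ingress allow rule for the proxy-only subnet
range. This is a minimal sketch under stated assumptions: `10.129.0.0/23`
stands in for your region's actual proxy-only subnet range, and the network
name, rule name, target tag, and ports are placeholders.

```
# Allow ingress from the proxy-only subnet range to tagged backend VMs.
# Replace the placeholder network, tag, ports, and source range.
gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=my-network \
    --direction=ingress \
    --action=allow \
    --source-ranges=10.129.0.0/23 \
    --target-tags=my-backend-tag \
    --rules=tcp:80,tcp:443
```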
Health check probes can't reach the backends
--------------------------------------------

To verify that health check traffic reaches your backend VMs,
[enable health check logging](/load-balancing/docs/health-check-logging) and
search for successful log entries.

Clients cannot connect to the load balancer
-------------------------------------------

The proxies listen for connections to the load balancer's IP address and port
configured in the forwarding rule (for example, `10.1.2.3:80`), and with the
protocol specified in the forwarding rule (HTTP or HTTPS). If your clients
can't connect, ensure that they are using the correct address, port, and
protocol.

Ensure that a firewall isn't blocking traffic between your client instances
and the load balancer's IP address.

Ensure that the clients are in the same region as the load balancer. The
internal Application Load Balancer is a regional product, so all clients (and
backends) must be in the same region as the load balancer resource.

Organizational policy restriction for Shared VPC
------------------------------------------------

If you are using Shared VPC and you cannot create a new internal
Application Load Balancer in a particular subnet, an organization policy might
be the cause. In the organization policy, add the subnet to the list of
allowed subnets, or contact your organization administrator. For more
information, see
[`constraints/compute.restrictSharedVpcSubnetworks`](/resource-manager/docs/organization-policy/org-policy-constraints).

Load balancer doesn't distribute traffic evenly across zones
------------------------------------------------------------

You might observe an imbalance in your internal Application Load Balancer
traffic across zones. This can happen especially when utilization of your
backend capacity is low (less than 10%). Such behavior can increase overall
latency because traffic is sent to only a few servers in one zone.

To even out the traffic distribution across zones, you can make the following
configuration changes (see the sketch after this list):

- Use the `RATE` [balancing mode](/load-balancing/docs/backend-service#balancing-mode)
  with a low `max-rate-per-instance` target capacity.
- Use the `LocalityLbPolicy` backend [traffic policy](/load-balancing/docs/l7-internal/traffic-management#traffic_policies)
  with a load balancing algorithm of `LEAST_REQUEST`.
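The following minimal sketch applies both changes; it is illustrative, not
definitive. `my-backend-service`, `my-ig`, the zone, the region, and the rate
target of `100` are placeholder assumptions, and your gcloud version must
support the `--locality-lb-policy` flag.

```
# Use RATE balancing mode with a deliberately low per-instance target so
# that traffic spills over across instances (and zones) sooner.
gcloud compute backend-services update-backend my-backend-service \
    --instance-group=my-ig \
    --instance-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-instance=100 \
    --region=us-central1

# Use the LEAST_REQUEST algorithm, which picks backends based on the
# number of outstanding requests.
gcloud compute backend-services update my-backend-service \
    --locality-lb-policy=LEAST_REQUEST \
    --region=us-central1
```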
Unexplained `5xx` errors
------------------------

For error conditions caused by a communications issue between the load
balancer proxy and its backends, the load balancer generates an HTTP `5xx`
status code and returns that status code to the client. Not all HTTP `5xx`
errors are generated by the load balancer. For example, if a backend sends an
HTTP `5xx` response to the load balancer, the load balancer relays that
response to its client. To determine whether an HTTP `5xx` response was
relayed from a backend or generated by the load balancer proxy, see the
`proxyStatus` field of the
[load balancer logs](/load-balancing/docs/l7-internal/monitoring#what_is_logged).

Configuration changes to the internal Application Load Balancer, such as the
addition or removal of a backend service, can result in a brief period during
which users see the HTTP status code `503`. While these configuration changes
propagate to [Envoys](https://www.envoyproxy.io/) globally, you see log
entries where the `proxyStatus` field matches the `connection_refused` log
string.

If HTTP `5xx` status codes persist longer than a few minutes after you
complete the load balancer configuration, take the following steps to
troubleshoot HTTP `5xx` responses:

1. Verify that there is a [firewall rule configured to allow health checks](/load-balancing/docs/l7-internal/setting-up-l7-internal#configure_firewall_rules).
   In the absence of one, load balancer logs typically have a `proxyStatus`
   matching `destination_unavailable`, which indicates that the load balancer
   considers the backend to be unavailable.

2. Verify that health check traffic reaches your backend VMs. To do this,
   [enable health check logging](/load-balancing/docs/health-check-logging)
   and search for successful log entries.

   For new load balancers, the lack of successful health check log entries
   doesn't mean that health check traffic is not reaching your backends. It
   might mean that the backend's initial health state has not yet changed from
   `UNHEALTHY` to a different state. You see successful health check log
   entries only after the health check prober receives an HTTP `200 OK`
   response from the backend.

3. Verify that the keepalive timeout of the HTTP server software running on
   each backend instance is not less than the keepalive timeout of the load
   balancer, whose value is fixed at 10 minutes (600 seconds) and is not
   configurable.

   The load balancer generates an HTTP `5xx` status code when the connection
   to the backend closes unexpectedly while the load balancer is sending the
   HTTP request or before it has received the complete HTTP response. This can
   happen when the keepalive timeout of the web server software running on the
   backend instance is less than the load balancer's fixed keepalive timeout.
   Ensure that the keepalive timeout of the HTTP server software on each
   backend is set to slightly greater than 10 minutes; the recommended value
   is 620 seconds (see the sketch at the end of this page).

Limitations
-----------

If you are having trouble using an internal Application Load Balancer with
other Google Cloud networking features, note the current compatibility
[limitations](/load-balancing/docs/l7-internal#limitations).
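As a companion to step 3 of the `5xx` troubleshooting steps above, the
following is a minimal sketch of raising a backend's keepalive timeout. It
assumes an nginx backend with its configuration at `/etc/nginx/nginx.conf` and
an existing `keepalive_timeout` directive; the path, directive, and reload
procedure vary by server software and distribution.

```
# Set the keepalive timeout to the recommended 620 seconds, slightly
# longer than the load balancer's fixed 600-second keepalive timeout.
sudo sed -i 's/keepalive_timeout[^;]*;/keepalive_timeout 620s;/' /etc/nginx/nginx.conf

# Confirm the change, validate the configuration, and reload nginx.
grep keepalive_timeout /etc/nginx/nginx.conf
sudo nginx -t && sudo nginx -s reload
```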