# Container-native load balancing

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

*Last updated 2025-07-17 UTC.*

***

This page explains container-native load balancing in Google Kubernetes Engine (GKE).

This page assumes that you know about [Cloud Load Balancing](/load-balancing/docs/load-balancing-overview#load-balancer-types) and [zonal network endpoint groups (NEGs)](/load-balancing/docs/negs/zonal-neg-concepts). With container-native load balancing, these load balancers can target Pods directly and distribute traffic to them evenly.

Container-native load balancing architecture
--------------------------------------------

Container-native load balancing uses [`GCE_VM_IP_PORT` network endpoint groups (NEGs)](/load-balancing/docs/negs/zonal-neg-concepts#gce-vm-ip-port). The endpoints of the NEG are Pod IP addresses.

Container-native load balancing is always used for internal GKE Ingress and is optional for external Ingress. The Ingress controller creates the load balancer, including the virtual IP address, forwarding rules, health checks, and firewall rules.

To learn how to use container-native load balancing with Ingress, see [Container-native load balancing through Ingress](/kubernetes-engine/docs/how-to/container-native-load-balancing).

For more flexibility, you can also [create standalone NEGs](/kubernetes-engine/docs/how-to/standalone-neg).
In this case, you are responsible for creating and managing all aspects of the load balancer.

Benefits of container-native load balancing
-------------------------------------------

Container-native load balancing offers the following benefits:

Pods are core objects for load balancing
: [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) configures nodes' `iptables` rules to distribute traffic to Pods. Without container-native load balancing, load balancer traffic travels to the node instance groups and gets routed using `iptables` rules to Pods, which might or might not be on the same node. With container-native load balancing, load balancer traffic is distributed directly to the Pods that should receive the traffic, eliminating the extra network hop. Container-native load balancing also helps with improved health checking because it targets Pods directly.

  *Comparison of default behavior (left) with container-native load balancer behavior.*

Improved network performance
: Because the container-native load balancer talks directly with the Pods, and connections have fewer network hops, both latency and throughput are improved.

Increased visibility
: With container-native load balancing, you have visibility into the latency from the Application Load Balancer to Pods. The latency from the Application Load Balancer to each Pod is visible; with node IP-based load balancing, that latency was only visible in aggregate.
This makes troubleshooting your Services at the NEG level easier.

Support for advanced load balancing features
: Container-native load balancing in GKE supports several features of external Application Load Balancers, such as integration with Google Cloud services like [Google Cloud Armor](/kubernetes-engine/docs/how-to/cloud-armor-backendconfig), [Cloud CDN](/kubernetes-engine/docs/how-to/cdn-backendconfig), and [Identity-Aware Proxy](/iap/docs/enabling-kubernetes-howto). It also features load balancing algorithms for accurate traffic distribution.

Support for Cloud Service Mesh
: The NEG data model is required to use [Cloud Service Mesh](/traffic-director/docs), Google Cloud's fully managed traffic control plane for service mesh.

Pod readiness
-------------

For relevant Pods, the corresponding Ingress controller manages a [readiness gate](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate) of type `cloud.google.com/load-balancer-neg-ready`. The Ingress controller polls the load balancer's [health check status](/load-balancing/docs/health-check-concepts), which includes the health of all endpoints in the NEG. When the load balancer's health check status indicates that the endpoint corresponding to a particular Pod is healthy, the Ingress controller sets the Pod's readiness gate value to `True`. The kubelet running on each node then computes the Pod's effective readiness, considering both the value of this readiness gate and, if defined, the Pod's [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes).

Pod readiness gates are automatically enabled when using container-native load balancing through Ingress.

Readiness gates control the rate of a rolling update. When you initiate a rolling update, as GKE creates new Pods, an endpoint for each new Pod is added to a NEG.
When the endpoint is healthy from the perspective of the load balancer, the Ingress controller sets the readiness gate to `True`. A newly created Pod must at least pass its readiness gate *before* GKE removes an old Pod. This ensures that the corresponding endpoint for the Pod has already passed the load balancer's health check and that the backend capacity is maintained.

If a Pod's readiness gate never indicates that the Pod is ready, for example due to a bad container image or a misconfigured load balancer health check, the load balancer won't direct traffic to the new Pod. If such a failure occurs while rolling out an updated Deployment, the rollout stalls after attempting to create one new Pod, because that Pod's readiness gate is never `True`. See the [troubleshooting section](/kubernetes-engine/docs/how-to/container-native-load-balancing#stalled_rollout) for information on how to detect and fix this situation.

Without container-native load balancing and readiness gates, GKE can't detect whether a load balancer's endpoints are healthy before marking Pods as ready. In previous Kubernetes versions, you controlled the rate at which Pods were removed and replaced by specifying a delay period (`minReadySeconds` in the Deployment specification).

GKE sets the value of `cloud.google.com/load-balancer-neg-ready` for a Pod to `True` if any of the following conditions are met:

- None of the Pod's IP addresses are endpoints in a [`GCE_VM_IP_PORT` NEG](/load-balancing/docs/negs) managed by the GKE control plane.
- One or more of the Pod's IP addresses are endpoints in a `GCE_VM_IP_PORT` NEG managed by the GKE control plane. The NEG is attached to a [backend service](/load-balancing/docs/backend-service). The backend service has a successful load balancer health check.
- One or more of the Pod's IP addresses are endpoints in a `GCE_VM_IP_PORT` NEG managed by the GKE control plane. The NEG is attached to a backend service.
The load balancer health check for the backend service [times out](/load-balancing/docs/health-check-concepts#method).
- One or more of the Pod's IP addresses are endpoints in one or more `GCE_VM_IP_PORT` NEGs. None of the NEGs are attached to a backend service. No load balancer health check data is available.

Session affinity
----------------

Container-native load balancing supports Pod-based [session affinity](/load-balancing/docs/backend-service#session_affinity).

Requirements for using container-native load balancing
------------------------------------------------------

Container-native load balancers through Ingress on GKE have the following requirements:

- The cluster must be VPC-native.
- The cluster must have the `HttpLoadBalancing` add-on enabled. GKE clusters have the `HttpLoadBalancing` add-on enabled by default; you must not disable it.

Limitations for container-native load balancers
-----------------------------------------------

Container-native load balancers through Ingress on GKE have the following limitations:

- They don't support external passthrough Network Load Balancers.
- You must not manually change or update the configuration of the Application Load Balancer that GKE creates. Any changes that you make are overwritten by GKE.

Pricing for container-native load balancers
-------------------------------------------

You are charged for the Application Load Balancer provisioned by the Ingress that you create. For load balancer pricing information, refer to [Load balancing and forwarding rules](/vpc/network-pricing#lb) on the VPC pricing page.

What's next
-----------

- Learn more about [NEGs](/load-balancing/docs/negs).
- Learn more about [VPC-native clusters](/kubernetes-engine/docs/how-to/alias-ips).
- Learn more about [external Application Load Balancers](/load-balancing/docs).
- Watch a [KubeCon talk about Pod readiness gates](https://www.youtube.com/watch?v=Vw9GmSeomFg).
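The Ingress-based setup described on this page can be sketched as a pair of manifests. This is a minimal illustration under assumed placeholder names (`hello-service`, `hello-app`, `hello-ingress`, and the port numbers), not a production configuration; the `cloud.google.com/neg` annotation is what requests `GCE_VM_IP_PORT` NEGs whose endpoints are Pod IP addresses:

```yaml
# Hypothetical Service: the NEG annotation asks GKE to create
# GCE_VM_IP_PORT NEGs with Pod IP addresses as endpoints.
apiVersion: v1
kind: Service
metadata:
  name: hello-service          # placeholder name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: hello-app             # placeholder label
  ports:
  - port: 80
    targetPort: 8080           # placeholder container port
---
# Ingress: the controller provisions the Application Load Balancer
# and attaches the NEGs above as backends instead of instance groups.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress          # placeholder name
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 80
```

Because the NEG endpoints are Pod IPs, the load balancer's health checks and traffic go straight to the Pods, which is what enables the readiness-gate behavior described in the Pod readiness section.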
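Similarly, the Pod-based session affinity mentioned under "Session affinity" is configured through a `BackendConfig` attached to the Service. A minimal sketch, again assuming placeholder names (`hello-backendconfig`, `hello-service`, `hello-app`):

```yaml
# Hypothetical BackendConfig enabling client-IP session affinity
# on the load balancer backend service.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: hello-backendconfig    # placeholder name
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
---
# The Service references the BackendConfig through an annotation.
apiVersion: v1
kind: Service
metadata:
  name: hello-service          # placeholder name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "hello-backendconfig"}'
spec:
  selector:
    app: hello-app             # placeholder label
  ports:
  - port: 80
    targetPort: 8080           # placeholder container port
```

With container-native load balancing, affinity is tracked per Pod endpoint rather than per node, so a returning client is routed back to the same Pod.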