Last updated: 2025-07-30 (UTC).

# About load balancing in GKE

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

---

This page provides a general overview of how Google Kubernetes Engine (GKE) creates and manages Cloud Load Balancing. This page assumes that you know about the following:

- Types of [Google Cloud load balancers](/load-balancing/docs/load-balancing-overview#load-balancer-types)
- The difference between Layer 4 (Network Load Balancers) and Layer 7 (Application Load Balancers) load balancers

This page is for Cloud architects and Networking specialists who design and architect the network for their organization. To learn more about common roles and example tasks that we reference in Google Cloud content, see [Common GKE Enterprise user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).

## How GKE creates load balancers

To make your applications accessible either from outside the cluster (external users) or within your private network (internal users), you can expose your applications by provisioning load balancers using the Gateway, Ingress, and Service APIs. Alternatively, you can create the load balancer components yourself, while GKE manages the network endpoint groups (NEGs) that connect your load balancer to the Pods in your cluster.

### Gateway

The GKE Gateway controller is Google's implementation of the Kubernetes Gateway API for Cloud Load Balancing.
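As a rough sketch of how such a load balancer is requested, a Gateway and an HTTPRoute might look like the following. The GatewayClass name `gke-l7-global-external-managed`, the resource names, and the backend Service are illustrative assumptions, not values taken from this page.

```yaml
# Hypothetical Gateway asking GKE to provision an external
# Application Load Balancer (all names here are illustrative).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute that attaches to the Gateway and sends traffic
# to a Service named "web" on port 8080.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - backendRefs:
    - name: web
      port: 8080
```

Splitting the listener definition (Gateway) from the routing rules (HTTPRoute) is a core design choice of the Gateway API, which the rest of this section describes.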
Gateway API is an open-source project aimed at standardizing how service meshes and ingress controllers expose applications in Kubernetes. It's designed to be a more expressive, flexible, and extensible successor to the Ingress resource.

The GKE Gateway controller is used to configure Layer 7 Application Load Balancers to expose HTTP(S) traffic to applications that run in the cluster.

**Best practice**: Use Gateway API to implement your load balancer.

### Ingress

The GKE Ingress controller is Google's implementation of the Ingress API. The Ingress API lets you manage external access to Services that run in a cluster. When you create an Ingress resource in GKE, the controller automatically configures a Layer 7 Application Load Balancer that allows HTTP or HTTP(S) traffic to reach your applications that run in the cluster.

GKE Gateway is the recommended choice for new deployments and applications that require advanced traffic management, multi-protocol support, or better multi-tenancy. However, GKE Ingress is a viable option for simpler HTTP/HTTPS routing scenarios, especially for existing configurations where the benefits of migrating to the Gateway API might not yet outweigh the effort.

### LoadBalancer Services

The [Service API](https://kubernetes.io/docs/concepts/services-networking/service/) lets you expose applications that run as Pods in your cluster to external or internal traffic. When you create a Service of type `LoadBalancer`, GKE automatically creates a Layer 4 (TCP/UDP) Passthrough Network Load Balancer based on the parameters of your Service manifest.

In Passthrough Network Load Balancers, when traffic reaches your backend VMs, the original source and destination IP addresses, the communication protocol (such as TCP or UDP), and the port numbers (if the protocol uses them) remain the same.
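As a sketch, a manifest that requests such a load balancer is an ordinary Service of type `LoadBalancer`; the name, selector, and ports below are illustrative assumptions:

```yaml
# Hypothetical LoadBalancer Service; GKE would provision a Layer 4
# passthrough Network Load Balancer for it (names and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```

Once provisioned, the load balancer's address appears in the Service's `status.loadBalancer` field, as with any Kubernetes `LoadBalancer` Service.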
This means that traffic is passed directly through to the backend VMs or Pods, and the load balancer does not terminate the connections. The backend services handle connection termination and ensure that traffic flows seamlessly from the client to the Service.

#### Weighted load balancing

If you configured an external LoadBalancer Service that clients outside your VPC network and Google Cloud VMs can access, then you can enable [weighted load balancing](/kubernetes-engine/docs/concepts/service-load-balancer#weighted-lb). Weighted load balancing distributes traffic based on the number of serving Pods on each GKE node, so that nodes that have more serving Pods receive a larger proportion of traffic compared to nodes that have fewer Pods.

### Standalone NEGs

Another method to manage your load balancers in GKE is to create the load balancer components yourself and let GKE manage the NEGs. This type of load balancer is called a Proxy Network Load Balancer. NEGs are a way to represent groups of backend endpoints (for example, Pods) for load balancing.

This type of load balancer is intended for TCP traffic only. Proxy Network Load Balancers distribute TCP traffic to backends in your VPC network or in other cloud environments. Traffic is terminated at the load balancing layer. The load balancer then forwards the traffic by establishing new TCP connections to the closest available backend.

## What is container-native load balancing?

Container-native load balancing is the practice of evenly distributing traffic directly to the IP addresses of individual Pods (rather than nodes) using `GCE_VM_IP_PORT` NEGs.
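As an illustrative sketch, assuming the standard GKE NEG annotation, a Service can opt its backends into NEG-based, Pod-level load balancing like this (the Service name, selector, and ports are made up):

```yaml
# Hypothetical Service annotated so that GKE creates NEGs and load
# balancers send traffic to Pod IPs directly (names are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

On clusters where container-native load balancing is the default, GKE may apply this annotation for you; the sketch only shows where the NEG configuration lives.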
`GCE_VM_IP_PORT` NEGs allow you to specify backend endpoints using either the primary internal IP address of a Compute Engine virtual machine (VM), or an IP address from one of the VM's configured alias IP ranges.

Container-native load balancing is used for all GKE-managed Layer 7 load balancers, including Gateway and Ingress, and standalone NEGs. LoadBalancer Services don't use container-native load balancing. However, you can achieve a similar capability by enabling weighted load balancing.

Container-native load balancing has several advantages, including improved network performance and improved health checks, because it targets Pods directly. For more information, see [Container-native load balancing](/kubernetes-engine/docs/concepts/container-native-load-balancing).

> **Note:** To determine which load balancer best meets your needs, see [Choose a load balancer](/load-balancing/docs/choosing-load-balancer).

## Summary tables

Use the following tables to help you plan your load balancing configuration.

### Choose a type of load balancer

The following table shows you what type of load balancer is created for a given resource (Gateway, Ingress, or LoadBalancer Service):

### Choose a method for creating a load balancer

The following table shows you the options in GKE to create your chosen load balancer:

## What's next

- Learn about the [Gateway API in GKE](/kubernetes-engine/docs/concepts/gateway-api).
- Learn about [GKE Ingress](/kubernetes-engine/docs/concepts/ingress).
- Learn about [LoadBalancer Services](/kubernetes-engine/docs/concepts/service-load-balancer).