[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Manage load balancers\n\nGoogle Distributed Cloud (GDC) air-gapped appliance provides load balancers that enable\napplications to expose services to one another. Load balancers allocate a stable\nvirtual IP (VIP) address that balances traffic over a set of backend workloads.\nLoad balancers in GDC perform layer four (L4) load balancing, which means they\nmap a set of configured frontend TCP or UDP ports to corresponding backend\nports. Load balancers are configured at the project level.\n\nLoad balancers are configured for the following workload types:\n\n- Workloads running on VMs.\n- Containerized workloads inside the Kubernetes cluster.\n\nThere are three ways to configure load balancers in\nGDC:\n\n- Use the [Networking Kubernetes Resource Model (KRM)\n API](/distributed-cloud/hosted/docs/latest/appliance/apis/service/networking/networking-api-overview). You can use this API to create load balancers.\n- Use the [gdcloud CLI](/distributed-cloud/hosted/docs/latest/appliance/resources/gdcloud-overview). You can use this API to create load balancers.\n- Use the Kubernetes Service directly from the Kubernetes cluster. This method only creates zonal load balancers.\n\nLoad balancer components\n------------------------\n\nWhen you use the KRM API or gdcloud CLI to configure the load\nbalancer, you use an L4 passthrough load balancer:\n\n- L4 means the protocol is either TCP or UDP.\n- Passthrough means there is no proxy between workload and client.\n\nThe load balancer consists of the following configurable components:\n\n- **Forwarding rules**: specify what traffic is forwarded, and to which\n backend service. Forwarding rules have the following specifications:\n\n - Consists of three tuples, CIDR, port and protocol, for the client to access.\n - Supports TCP and UDP protocols.\n - Offers internal and external forwarding rules. Clients can access internal forwarding rules from within the Virtual Private Cloud (VPC). Clients can access external forwarding rules from outside the GDC platform or from within by workloads that have `EgressNAT` value defined.\n - Forwarding rules connect to a backend service. You can point multiple forwarding rules to point to the same backend service.\n- **Backend services**: are the load balancing hub that link forwarding rules,\n health checks and backends together. A backend service references a backend\n object, that identifies the workloads the load balancer forwards the traffic\n to. There are limitations on what backends a single backend service can\n reference:\n\n - One zonal backend resource per zone.\n - One cluster backend resource per cluster. This cannot be mixed with the project backends.\n- **Backends**: a zonal object that specifies the endpoints that serve as the backends for the created backend services. Backend resources must be scoped to a zone. Select endpoints using labels. Scope the selector to a project or cluster:\n\n - A project backend is is a backend that does not have the `ClusterName` field specified. In this case the specified labels apply to all of the workloads in the specific project, in the specific VPC of a zone. 
External and internal load balancing
------------------------------------

GDC applications have access to the following networking service types:

- **Internal Load Balancer (ILB)**: lets you expose a service to other clusters within the organization.
- **External Load Balancer (ELB)**: allocates a VIP address from a range that is routable from external workloads and exposes services outside of the GDC organization, such as to other organizations inside or outside of the GDC instance.

Service virtual IP addresses
----------------------------

ILBs allocate VIP addresses that are internal only to the organization. These
VIP addresses are not reachable from outside the organization; therefore, you
can only use them to expose services to other applications within an
organization. These IP addresses can overlap between organizations in the same
instance.

In contrast, ELBs allocate VIP addresses that are reachable from outside the
organization. For this reason, ELB VIP addresses must be unique among all
organizations. Typically, fewer ELB VIP addresses are available for use by the
organization.

What's next
-----------

- [Configure internal load balancers](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/ilb-service)
- [Configure external load balancers](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/elb-service)