Last updated: 2025-07-31 (UTC).

# Ingress for internal Application Load Balancers

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

*** ** * ** ***

This page explains how Ingress for internal Application Load Balancers works in
Google Kubernetes Engine (GKE). You can also learn how to [set up and use Ingress for
internal Application Load Balancers](/kubernetes-engine/docs/how-to/internal-load-balance-ingress).

In GKE, the internal Application Load Balancer is a proxy-based,
regional, Layer 7 load balancer that enables you to run and scale your services
behind an internal load balancing IP address.
GKE supports the internal Application Load Balancer natively: you create
[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
objects on your GKE clusters.

For general information about using Ingress for load balancing in
GKE, see
[HTTP(S) load balancing with Ingress](/kubernetes-engine/docs/concepts/ingress).

Benefits of using Ingress for internal Application Load Balancers
-----------------------------------------------------------------

Using GKE Ingress for internal Application Load Balancers provides the
following benefits:

- A highly available, GKE-managed Ingress controller.
- Load balancing for internal, service-to-service communication.
- Container-native load balancing with [Network Endpoint Groups (NEGs)](/load-balancing/docs/negs).
- Application routing with HTTP and HTTPS support.
- High-fidelity Compute Engine health checks for resilient services.
- Envoy-based proxies that are deployed on demand to meet traffic capacity needs.

Support for Google Cloud features
---------------------------------

Ingress for internal Application Load Balancers supports a variety of additional features:

- [Self-managed SSL certificates](/load-balancing/docs/ssl-certificates) using Google Cloud. Only regional certificates are supported for this feature.
- Self-managed SSL certificates using Kubernetes [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
- The [Session Affinity](/load-balancing/docs/backend-service#session_affinity) and [Connection Timeout](/load-balancing/docs/backend-service#timeout-setting) [BackendService](/load-balancing/docs/backend-service) features.
You can configure these features using [BackendConfig](/kubernetes-engine/docs/concepts/backendconfig).

Required networking environment for internal Application Load Balancers
-----------------------------------------------------------------------

| **Important:** Ingress for internal Application Load Balancers requires you to use [NEGs](/load-balancing/docs/negs) as backends. It does not support instance groups as backends.

The internal Application Load Balancer provides a pool of proxies for your network.
The proxies evaluate where each HTTP(S) request should go based on factors such as
the URL map, the BackendService's session affinity, and the balancing mode of
each backend NEG.

A region's internal Application Load Balancer uses the [proxy-only subnet](/load-balancing/docs/proxy-only-subnets)
for that region in your VPC network to assign internal IP
addresses to each proxy that Google Cloud creates.

By default, the IP address assigned to a load balancer's forwarding rule comes
from the node's subnet range assigned by GKE, not from the
proxy-only subnet. You can also [manually specify an IP address](/load-balancing/docs/using-forwarding-rules#adding-fr)
for the forwarding rule from any subnet when you create the rule.

The following diagram provides an overview of the traffic flow for an
internal Application Load Balancer, as described in the preceding paragraph.

Here's how the internal Application Load Balancer works:

1. A client makes a connection to the IP address and port of the load balancer's forwarding rule.
2. A proxy receives and terminates the client's network connection.
3. The proxy establishes a connection to the appropriate endpoint (Pod) in a NEG, as determined by the load balancer's URL map and backend services.

Each proxy listens on the IP address and port specified by the corresponding
load balancer's forwarding rule.
The source IP address of each packet sent from a proxy to an endpoint
is the internal IP address assigned to that proxy from the proxy-only subnet.

HTTPS (TLS) between load balancer and your application
------------------------------------------------------

An internal Application Load Balancer acts as a proxy between your clients and your
application. Clients can use HTTP or HTTPS to communicate with the load
balancer proxy. The connection from the load balancer proxy to
your application uses HTTP by default. However, if your application runs in a
GKE Pod and can receive HTTPS requests, you can
configure the load balancer to use HTTPS when it forwards requests to your
application.

To configure the protocol used between the load balancer and your application,
use the `cloud.google.com/app-protocols` annotation in your Service manifest.

The following Service manifest specifies two ports. The annotation specifies that
an internal Application Load Balancer should use HTTP when it targets port 80 of the Service,
and HTTPS when it targets port 443 of the Service.

You must use the port's `name` field in the annotation. Do not use a different field such as `targetPort`.

**Caution:** To limit potential downtime, do not edit the Service's port name when you enable this feature. If your Service's port doesn't have a name, use the empty port name as the key in the annotation, similar to `cloud.google.com/app-protocols: '{"": "HTTPS"}'`.
Editing the port name or annotation after the initial setup might cause downtime for your applications.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
    spec:
      type: NodePort
      selector:
        app: metrics
        department: sales
      ports:
      - name: my-https-port
        port: 443
        targetPort: 8443
      - name: my-http-port
        port: 80
        targetPort: 50001

What's next
-----------

- [Learn how to deploy a proxy-only subnet](/load-balancing/docs/l7-internal/setting-up-l7-internal#configure-a-network).
- [Learn about Ingress for external Application Load Balancers](/kubernetes-engine/docs/concepts/ingress-xlb).
- [Learn how to configure Ingress for internal Application Load Balancers](/kubernetes-engine/docs/how-to/internal-load-balance-ingress).
- [Read an overview of networking in GKE](/kubernetes-engine/docs/concepts/network-overview).
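As a companion to the Service manifest above, the Session Affinity and Connection Timeout features mentioned earlier are configured by attaching a BackendConfig resource to the Service. The following is a minimal sketch: the fields follow the BackendConfig API, while the Service and BackendConfig names (`my-service`, `my-backendconfig`) and the specific values are illustrative.

    # BackendConfig enabling client-IP session affinity and a
    # 60-second backend connection timeout. Names are illustrative.
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: my-backendconfig
    spec:
      timeoutSec: 60
      sessionAffinity:
        affinityType: "CLIENT_IP"
    ---
    # Attach the BackendConfig to the Service port that the
    # Ingress routes to (port 80 in this sketch).
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        cloud.google.com/backend-config: '{"ports": {"80": "my-backendconfig"}}'
    spec:
      type: NodePort
      selector:
        app: metrics
      ports:
      - name: my-http-port
        port: 80
        targetPort: 50001

When the Ingress controller creates the backend service for this Service port, it applies the session affinity and timeout settings from the referenced BackendConfig.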