This page explains how Ingress for Internal HTTP(S) Load Balancing works in Google Kubernetes Engine (GKE). You can also learn how to set up and use Ingress for Internal HTTP(S) Load Balancing.
For general information about using Ingress for load balancing in GKE, see HTTP(S) load balancing with Ingress.
In GKE, the internal HTTP(S) load balancer is a proxy-based, regional, Layer 7 load balancer that enables you to run and scale your services behind an internal load balancing IP address. GKE supports the internal HTTP(S) load balancer natively through the creation of Ingress objects on GKE clusters.
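You select the internal HTTP(S) load balancer by setting the `kubernetes.io/ingress.class` annotation to `"gce-internal"` on the Ingress object (the default class, `"gce"`, creates an external load balancer). The following manifest is a minimal sketch; the Service name and port are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    # "gce-internal" provisions an internal HTTP(S) load balancer
    # instead of the default external one.
    kubernetes.io/ingress.class: "gce-internal"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hostname-service   # placeholder Service name
            port:
              number: 80
```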
Benefits of using Ingress for the internal HTTP(S) load balancer
Using GKE Ingress for Internal HTTP(S) Load Balancing provides the following benefits:
- A highly available, GKE-managed Ingress controller.
- Load balancing for internal, service-to-service communication.
- Container-native load balancing with Network Endpoint Groups (NEG).
- Application routing with HTTP and HTTPS support.
- High-fidelity Compute Engine health checks for resilient services.
- Envoy-based proxies that are deployed on-demand to meet traffic capacity needs.
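Container-native load balancing places Pod endpoints directly into NEGs, so the load balancer sends traffic to Pods rather than to node VMs. NEGs are the default for Ingress on recent GKE versions, but they can also be requested explicitly on the Service with the `cloud.google.com/neg` annotation. A sketch, with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hostname-service   # placeholder name
  annotations:
    # Ask GKE to create NEGs for this Service so the
    # Ingress load balancer targets Pods directly.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: hostname          # placeholder Pod label
  ports:
  - port: 80
    targetPort: 8080
```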
Support for Google Cloud features
Ingress for Internal HTTP(S) Load Balancing supports a variety of additional features.
- Self-managed SSL Certificates using Google Cloud. Only regional certificates are supported for this feature.
- Self-managed SSL Certificates using Kubernetes Secrets.
- The Session Affinity and Connection Timeout BackendService features. You can configure these features using BackendConfig.
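As a sketch, these backend service features are configured through a BackendConfig resource that the Service references by annotation; the resource, Service, and label names below are placeholders:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig   # placeholder name
spec:
  timeoutSec: 40                      # backend service timeout, in seconds
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"  # cookie-based session affinity
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-service         # placeholder name
  annotations:
    # Attach the BackendConfig above to this Service's ports.
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  selector:
    app: my-app            # placeholder Pod label
  ports:
  - port: 80
    targetPort: 8080
```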
Required networking environment
Using Ingress for Internal HTTP(S) Load Balancing requires you to use a proxy-only subnet.
The internal HTTP(S) load balancer provides a pool of proxies for your network. The proxies evaluate where each HTTP(S) request should go based on factors such as the URL map, the BackendService's session affinity, and the balancing mode of each backend NEG.
The following diagram provides an overview of the traffic flow for the internal HTTP(S) load balancer.
The proxy makes the connection in the following way:
- A client makes a connection to the IP address and port of the load balancer's forwarding rule.
- One of the proxies receives and terminates the client's network connection.
- The proxy establishes a connection to the appropriate backend VM or endpoint in a NEG, as determined by the load balancer's URL map and backend services.
Each of the load balancer's proxies is assigned an internal IP address. The proxies for all internal HTTP(S) load balancers in a region share a single proxy-only subnet in your VPC network. This subnet is reserved exclusively for Internal HTTP(S) Load Balancing proxies and cannot be used for other purposes. A proxy-only subnet must provide 64 or more IP addresses, which corresponds to a prefix length of /26 or shorter. Only one proxy-only subnet per region, per VPC network can be active at a time.
Only the proxies created by Google Cloud for a region's internal HTTP(S) load balancers use the proxy-only subnet. The IP address for the load balancer's forwarding rule doesn't come from the proxy-only subnet. Also, the IP addresses of the backend VMs and endpoints don't come from the proxy-only subnet.
Each proxy listens on the IP address and port specified by the corresponding load balancer's forwarding rule. Each packet sent from a proxy to a backend VM or endpoint has a source IP address from the proxy-only subnet.
The steps to deploy this proxy-only subnet are explained in configuring the network and subnets. The GKE Ingress Controller manages deployment of firewall rules, so you do not need to manually deploy them.
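As a sketch, the proxy-only subnet could be created with a command like the one below. The subnet name, region, network, and IP range are placeholders; newer releases use `--purpose=REGIONAL_MANAGED_PROXY`, while older documentation uses `--purpose=INTERNAL_HTTPS_LOAD_BALANCER`:

```shell
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-central1 \
    --network=default \
    --range=10.129.0.0/23  # a /23 provides 512 addresses, above the /26 minimum
```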