This page explains how Ingress for external Application Load Balancers works in Google Kubernetes Engine (GKE). You can also learn how to set up and use Ingress for External Load Balancing.
For general information about load balancing in GKE, see the GKE network overview.
Google Kubernetes Engine (GKE) networking is built upon Cloud Load Balancing. With Cloud Load Balancing, a single anycast IP address lets Google's network route each client request along the lowest-cost path to the closest Google Cloud load balancer.
Support for Google Cloud features
You can use a BackendConfig to configure an external Application Load Balancer to use features such as Cloud CDN, Google Cloud Armor, and Identity-Aware Proxy (IAP).
BackendConfig is a custom resource that holds configuration information for Google Cloud features. To learn more about supported features, see Ingress configuration.
Support for WebSocket
With external Application Load Balancers, the WebSocket protocol works without any configuration.
If you intend to use the WebSocket protocol, you might want to use a timeout value larger than the default 30 seconds. For WebSocket traffic sent through a Google Cloud external Application Load Balancer, the backend service timeout is interpreted as the maximum amount of time that a WebSocket connection can remain open, whether idle or not.
To set the timeout value for a backend service configured through Ingress, create a BackendConfig object, and use the cloud.google.com/backend-config annotation in your Service manifest.
For configuration information, see Backend response timeout.
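For example, a minimal sketch of a BackendConfig that raises the backend service timeout to 30 minutes for long-lived WebSocket connections, attached to a Service through the annotation (the resource names, selector, and ports here are hypothetical):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig        # hypothetical name
spec:
  timeoutSec: 1800              # backend service timeout: 30 minutes
---
apiVersion: v1
kind: Service
metadata:
  name: my-websocket-service    # hypothetical name
  annotations:
    # Attach the BackendConfig above to all ports of this Service
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: websocket-app          # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
```

For WebSocket traffic, timeoutSec bounds how long a connection can stay open, whether idle or not, so size it to your longest expected session.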
Static IP addresses for HTTPS load balancers
When you create an Ingress object, you get a stable external IP address that clients can use to access your Services and in turn, your running containers. The IP address is stable in the sense that it lasts for the lifetime of the Ingress object. If you delete your Ingress and create a new Ingress from the same manifest file, you are not guaranteed to get the same external IP address.
If you want a permanent IP address that stays the same across deleting your Ingress and creating a new one, you must reserve a global static external IP address. Then in your Ingress manifest, include an annotation that gives the name of your reserved static IP address. If you modify an existing Ingress to use a static IP address instead of an ephemeral IP address, GKE might change the IP address of the load balancer when GKE re-creates the forwarding rule of the load balancer.
For example, suppose you have reserved a global static external IP address named my-static-address. In your Ingress manifest, include a kubernetes.io/ingress.global-static-ip-name annotation as shown here:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-address
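As a sketch, the reserved address referenced by that annotation can be created and inspected with the gcloud CLI before you apply the Ingress manifest:

```shell
# Reserve a global static external IP address named my-static-address
gcloud compute addresses create my-static-address --global

# Verify the reserved address and see the IP it was assigned
gcloud compute addresses describe my-static-address --global
```

The address must be global (not regional), because external Application Load Balancer Ingress uses a global forwarding rule.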
Setting up HTTPS (TLS) between client and load balancer
An HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer must have a certificate so it can prove its identity to your clients. The load balancer must also have a private key to complete the HTTPS handshake.
When the load balancer accepts an HTTPS request from a client, the traffic between the client and the load balancer is encrypted using TLS. However, the load balancer terminates the TLS encryption, and forwards the request without encryption to the application. For information about how to encrypt traffic between the load balancer and your application, see HTTPS between load balancer and your application.
You can use Google-managed SSL certificates or certificates that you manage yourself. For more information about creating an Ingress that uses Google-managed certificates, see Using Google-managed SSL certificates.
To provide an HTTP(S) load balancer with a certificate and key that you created yourself, create a Kubernetes Secret object. The Secret holds the certificate and key. Add the Secret to the tls field of your Ingress manifest, as in the following example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-2
spec:
  tls:
  - secretName: SECRET_NAME
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: SERVICE_NAME
            port:
              number: 60000
This manifest includes the following values:
SECRET_NAME: the name of the Secret you created.
SERVICE_NAME: the name of your backend Service.
Changes to Secrets are picked up periodically, so if you modify the data inside the Secret, it can take up to 10 minutes for those changes to be applied to the load balancer.
For more information, see Using multiple SSL certificates in HTTPS load balancing with Ingress.
To secure HTTPS encrypted Ingress for your GKE clusters, see example Secure Ingress.
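As a sketch, a TLS Secret like the one referenced in the manifest above can be created from a certificate and private key you already have on disk (the file paths here are hypothetical):

```shell
# Create a Kubernetes Secret of type kubernetes.io/tls from existing files
kubectl create secret tls SECRET_NAME \
    --cert=path/to/cert.pem \
    --key=path/to/key.pem
```

The certificate in cert.pem must match the hostnames your clients use to reach the load balancer, or they will see certificate warnings.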
Disabling HTTP
If you want all traffic between the client and the HTTP(S) load balancer to use HTTPS, you can disable HTTP by including the kubernetes.io/ingress.allow-http annotation in your Ingress manifest. Set the value of the annotation to "false".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-2
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: SECRET_NAME
  ...
In this manifest, SECRET_NAME is the name of the Secret you created.
Pre-shared certificates for load balancers
As an alternative to using Kubernetes Secrets to provide certificates to the load balancer for HTTP(S) termination, you can use certificates previously uploaded to your Google Cloud project. For more information, see Using pre-shared certificates and Using multiple SSL certificates in HTTPS load balancing with Ingress.
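As a sketch, a pre-shared certificate is referenced by name through the ingress.gcp.kubernetes.io/pre-shared-cert annotation instead of a Secret in the tls field (the Ingress name and backend here are hypothetical; CERT_NAME stands for an SSL certificate resource that already exists in your project):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-psc-ingress            # hypothetical name
  annotations:
    # References an SSL certificate resource previously uploaded to your
    # Google Cloud project, for example with
    # "gcloud compute ssl-certificates create"
    ingress.gcp.kubernetes.io/pre-shared-cert: "CERT_NAME"
spec:
  defaultBackend:
    service:
      name: SERVICE_NAME          # hypothetical backend Service
      port:
        number: 80
```

With this approach, certificate rotation happens at the project level; the Ingress only holds a reference, not the key material.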
HTTPS (TLS) between load balancer and your application
An HTTP(S) load balancer acts as a proxy between your clients and your application. Clients can use HTTP or HTTPS to communicate with the load balancer proxy. The connection from the load balancer proxy to your application uses HTTP by default. However, if your application, running in a GKE Pod, is capable of receiving HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application.
To configure the protocol used between the load balancer and your application, use the cloud.google.com/app-protocols annotation in your Service manifest. This Service manifest must include type: NodePort unless you're using container-native load balancing. If you're using container-native load balancing, use type: ClusterIP.
The following Service manifest specifies two ports. The annotation says that when an HTTP(S) load balancer targets port 80 of the Service, it should use HTTP. And when the load balancer targets port 443 of the Service, it should use HTTPS.
The Service manifest must include a name value in the port annotation. You can only edit the Service port by referring to its assigned name, not by its targetPort value.
apiVersion: v1
kind: Service
metadata:
  name: my-service-3
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001
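If you're using container-native load balancing instead, a sketch of the equivalent Service uses type: ClusterIP with the NEG annotation (the Service name and selector here are hypothetical; the annotation values shown are the standard ones):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service      # hypothetical name
  annotations:
    # Enables container-native load balancing through network endpoint groups
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS"}'
spec:
  type: ClusterIP
  selector:
    app: metrics                  # hypothetical selector
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
```

With container-native load balancing, the load balancer targets Pod IPs directly, so no NodePort is needed.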
What's next
Configure an external Application Load Balancer by creating a Deployment, Service, and Ingress.
Learn how to expose applications using Services.
Read an overview of networking in GKE.
Learn more about configuring Ingress features.
Learn how to enable HTTP to HTTPS redirects.