Ingress for External HTTP(S) Load Balancing

This page explains how Ingress for External HTTP(S) Load Balancing works in Google Kubernetes Engine (GKE). You can also learn how to set up and use Ingress for External Load Balancing.

For general information about using load balancing in GKE, see Ingress for HTTP(S) load balancing.

Overview

Google Cloud's external HTTP(S) load balancer is a globally distributed load balancer for exposing applications publicly on the internet. It is deployed across Google points of presence (PoPs) globally, providing low-latency HTTP(S) connections to users. The load balancer IPs use Anycast routing, so internet routing delivers each client's traffic along the lowest-cost path to the closest Google PoP.

GKE Ingress deploys the external HTTP(S) load balancer to provide global load balancing natively for Pods as backends.

Support for Google Cloud features

You can use a BackendConfig to configure an external HTTP(S) load balancer to use Google Cloud features such as Cloud CDN, Google Cloud Armor, and Identity-Aware Proxy (IAP).

BackendConfig is a custom resource that holds configuration information for Google Cloud features. To learn more about supported features, see Ingress features.
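As a sketch of the resource's shape, the following minimal BackendConfig enables Cloud CDN for the backends of any Service that references it (the name my-backend-config is illustrative):

```yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  # Enable Cloud CDN caching for the backend service
  # created for Services that reference this BackendConfig.
  cdn:
    enabled: true
```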

Support for WebSocket

With External HTTP(S) Load Balancing, the WebSocket protocol works without any configuration.

If you intend to use the WebSocket protocol, you might want to use a timeout value larger than the default 30 seconds. For WebSocket traffic sent through a Google Cloud external HTTP(S) load balancer, the backend service timeout is interpreted as the maximum amount of time that a WebSocket connection can remain open, whether idle or not.

To set the timeout value for a backend service configured through Ingress, create a BackendConfig object, and use the beta.cloud.google.com/backend-config annotation in your Service manifest.

For configuration information, see Backend response timeout.
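As a sketch under these assumptions, the following BackendConfig raises the backend service timeout to one hour, and the Service references it by port through the beta annotation (the names my-backend-config, my-service, and the selector labels are illustrative):

```yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  # Maximum time a connection (including an open WebSocket)
  # may remain open, in seconds.
  timeoutSec: 3600
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Map Service port 80 to the BackendConfig above.
    beta.cloud.google.com/backend-config: '{"ports": {"80": "my-backend-config"}}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```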

Static IP addresses for HTTP(S) load balancers

When you create an Ingress object, you get a stable external IP address that clients can use to access your Services and, in turn, your running containers. The IP address is stable in the sense that it lasts for the lifetime of the Ingress object. If you delete your Ingress and create a new Ingress from the same manifest file, you are not guaranteed to get the same external IP address.

If you want a permanent IP address that stays the same across deleting your Ingress and creating a new one, you must reserve a global static external IP address. Then in your Ingress manifest, include an annotation that gives the name of your reserved static IP address.
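For example, you can reserve a global static external IP address with the gcloud CLI; the address name here matches the annotation example that follows:

```shell
# Reserve a global static external IP address for use with Ingress.
gcloud compute addresses create my-static-address --global
```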

For example, suppose you have reserved a global static external IP address named my-static-address. In your Ingress manifest, include a kubernetes.io/ingress.global-static-ip-name annotation as shown here:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-address

Setting up HTTPS (TLS) between client and load balancer

An HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer must have a certificate so it can prove its identity to your clients. The load balancer must also have a private key to complete the HTTPS handshake.

When the load balancer accepts an HTTPS request from a client, the traffic between the client and the load balancer is encrypted using TLS. However, the load balancer terminates the TLS encryption, and forwards the request without encryption to the application. For information about how to encrypt traffic between the load balancer and your application, see HTTPS between load balancer and your application.

You can use Google-managed SSL certificates (Beta) or certificates that you manage yourself. For more information about creating an Ingress that uses Google-managed certificates, see Using Google-managed SSL certificates (Beta).

To provide an HTTP(S) load balancer with a certificate and key that you created yourself, create a Kubernetes Secret object. The Secret holds the certificate and key. Add the Secret to the tls field of your Ingress manifest, as in the following example:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress-2
spec:
  tls:
  - secretName: secret-name
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: service-name
          servicePort: 60000

Changes to Secrets are picked up periodically, so if you modify the data inside the Secret, the changes are applied to the load balancer within a maximum of 10 minutes.
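The Secret referenced in the tls field can be created from an existing certificate and private key with kubectl; the file names here are illustrative:

```shell
# Create a TLS Secret from a certificate and its private key.
kubectl create secret tls secret-name \
  --cert=cert.pem \
  --key=key.pem
```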

For more information, see Using multiple SSL certificates in HTTP(S) load balancing with Ingress.

Disabling HTTP

If you want all traffic between the client and the HTTP(S) load balancer to use HTTPS, you can disable HTTP by including the kubernetes.io/ingress.allow-http annotation in your Ingress manifest. Set the value of the annotation to "false".

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress-2
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: secret-name
  ...

Pre-shared certificates for load balancers

As an alternative to using Kubernetes Secrets to provide certificates to the load balancer for HTTP(S) termination, you can use certificates previously uploaded to your Google Cloud project. For more information, see Using pre-shared certificates and Using multiple SSL certificates in HTTPS load balancing with Ingress.
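As a sketch, you reference a pre-shared certificate by name with the ingress.gcp.kubernetes.io/pre-shared-cert annotation instead of the tls field (the Ingress name and certificate name are illustrative):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-psc-ingress
  annotations:
    # Name of an SSL certificate already uploaded to the
    # Google Cloud project.
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-uploaded-cert"
```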

HTTPS (TLS) between load balancer and your application

An HTTP(S) load balancer acts as a proxy between your clients and your application. Clients can use HTTP or HTTPS to communicate with the load balancer proxy. The connection from the load balancer proxy to your application uses HTTP by default. However, if your application, running in a GKE Pod, is capable of receiving HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application.

To configure the protocol used between the load balancer and your application, use the cloud.google.com/app-protocols annotation in your Service manifest.

The following Service manifest specifies two ports. The annotation specifies that the load balancer should use HTTP when targeting port 80 of the Service, and HTTPS when targeting port 443.

apiVersion: v1
kind: Service
metadata:
  name: my-service-3
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001

What's next