In Google Kubernetes Engine (GKE), an Ingress object defines rules for routing external HTTP(S) traffic to applications running in a cluster. An Ingress object is associated with one or more Service objects, each of which is associated with a set of Pods.
When you create an Ingress object, the GKE ingress controller creates a Google Cloud HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.
To learn how to set up HTTP load balancing, see Setting up HTTP Load Balancing with Ingress.
Features of HTTP(S) load balancing
HTTP(S) load balancing, configured by Ingress, includes the following features:
- Flexible configuration for Services
- An Ingress defines how traffic reaches your Services and how the traffic is routed to your application. In addition, an Ingress can provide a single IP address for multiple Services in your cluster.
- Integration with Google Cloud network services
- An Ingress can configure Google Cloud features such as Google-managed SSL certificates (Beta), Google Cloud Armor, Cloud CDN, and Identity-Aware Proxy.
- Support for multiple TLS certificates
- An Ingress can specify the use of multiple TLS certificates for request termination.
To learn more about these features, refer to HTTP(S) Load Balancing Concepts.
Options for providing SSL certificates
There are three ways to provide SSL certificates to an HTTPS load balancer:
- Google-managed certificates
- Google-managed SSL certificates are provisioned, deployed, renewed, and managed for your domains. Managed certificates do not support wildcard domains or multiple subject alternative names (SANs).
- Self-managed certificates shared with Google Cloud
- You can provision your own SSL certificate and create a certificate resource in your Google Cloud project. You can then list the certificate resource in an annotation on an Ingress to create an HTTP(S) load balancer that uses the certificate. Refer to instructions for pre-shared certificates for more information.
- Self-managed certificates as Secret resources
- You can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate. Refer to the instructions for using certificates in Secrets for more information.
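For the Google-managed option, a sketch of the setup follows. The ManagedCertificate resource is the beta custom resource mentioned above; the resource name, Ingress name, domain, and backend Service here are illustrative placeholders:

```yaml
# Sketch: requesting a Google-managed certificate (Beta) and
# attaching it to an Ingress. All names and the domain are placeholders.
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: store-example-cert
spec:
  domains:
  - your-store.example
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-managed-cert-ingress
  annotations:
    networking.gke.io/managed-certificates: store-example-cert
spec:
  backend:
    serviceName: my-products
    servicePort: 60000
```

Provisioning completes only after the domain's DNS records point at the load balancer's IP address.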
Limitations
The total length of the namespace and name of an Ingress must not exceed 40 characters. Failure to follow this guideline may cause the GKE ingress controller to act abnormally. For more information, see this issue.
The maximum number of rules for a URL map is 50. This means that you can specify a maximum of 50 rules in an Ingress.
If you use the GKE ingress controller, your cluster cannot have more than 1000 nodes.
For the GKE ingress controller to use your readinessProbes as health checks, the Pods for an Ingress must exist at the time of Ingress creation. If your replicas are scaled to 0, the default health check applies. For more information, see this issue comment.
Changes to a Pod's readinessProbe do not affect the Ingress after it is created.
The HTTPS load balancer terminates TLS in locations that are distributed globally, to minimize latency between clients and the load balancer. If you require geographic control over where TLS is terminated, use a custom ingress controller and GCP Network Load Balancing instead, and terminate TLS on backends that are located in regions appropriate to your needs.
Multiple backend services
An HTTP(S) load balancer provides one stable IP address that you can use to route requests to a variety of backend services.
For example, you can configure the load balancer to route requests to different backend services depending on the URL path. Requests sent to your-store.example could be routed to a backend service that displays full-price items, and requests sent to your-store.example/discounted could be routed to a backend service that displays discounted items.
You can also configure the load balancer to route requests according to the hostname. Requests sent to your-store.example could go to one backend service, and requests sent to your-experimental-store.example could go to another backend service.
In a GKE cluster, you create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress object. An Ingress object must be associated with one or more Service objects, each of which is associated with a set of Pods.
Here is a manifest for an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-products
          servicePort: 60000
      - path: /discounted
        backend:
          serviceName: my-discounted-products
          servicePort: 80
When you create the Ingress, the GKE ingress controller creates and configures an HTTP(S) load balancer according to the information in the Ingress and the associated Services. Also, the load balancer is given a stable IP address that you can associate with a domain name.
In the preceding example, assume you have associated the load balancer's IP address with the domain name your-store.example. When a client sends a request to your-store.example, the request is routed to a Kubernetes Service named my-products on port 60000. When a client sends a request to your-store.example/discounted, the request is routed to a Kubernetes Service named my-discounted-products on port 80.
The only supported wildcard character for the path field of an Ingress is the * character. The * character must follow a forward slash (/) and must be the last character in the pattern. For example, /*, /foo/*, and /foo/bar/* are valid patterns, but *, /foo/bar*, and /foo/*/bar are not.
A more specific pattern takes precedence over a less specific pattern. If you have both /foo/* and /foo/bar/*, then /foo/bar/bat is taken to match /foo/bar/*.
For more information about path limitations and pattern matching, see the URL Maps documentation.
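The hostname-based routing described earlier is expressed with a host field in each rule. The following is a sketch only: my-experimental-products is a hypothetical Service name used for illustration, and the Ingress name is a placeholder.

```yaml
# Sketch: routing by hostname instead of (or in addition to) URL path.
# "my-experimental-products" and "my-host-ingress" are placeholder names.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-host-ingress
spec:
  rules:
  - host: your-store.example
    http:
      paths:
      - backend:
          serviceName: my-products
          servicePort: 60000
  - host: your-experimental-store.example
    http:
      paths:
      - backend:
          serviceName: my-experimental-products
          servicePort: 60000
```

With this configuration, requests whose Host header is your-store.example go to one backend service, and requests for your-experimental-store.example go to another.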
The manifest for the my-products Service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: my-products
spec:
  type: NodePort
  selector:
    app: products
    department: sales
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
In the Service manifest, notice that the type is NodePort. This is the required type for an Ingress that is used to configure an HTTP(S) load balancer.
In the Service manifest, the selector field says any Pod that has both the app: products label and the department: sales label is a member of this Service.
When a request comes to the Service on port 60000, it is routed to one of the member Pods on TCP port 50000.
Each member Pod must have a container listening on TCP port 50000.
The manifest for the my-discounted-products Service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: my-discounted-products
spec:
  type: NodePort
  selector:
    app: discounted-products
    department: sales
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
In the Service manifest, the selector field says any Pod that has both the app: discounted-products label and the department: sales label is a member of this Service.
When a request comes to the Service on port 80, it is routed to one of the member Pods on TCP port 8080.
Each member Pod must have a container listening on TCP port 8080.
Default backend
You can specify a default backend by providing a backend field in your Ingress manifest. Any requests that don't match the paths in the rules field are sent to the Service and port specified in the backend field. For example, in the following Ingress, any requests that don't match / or /discounted are sent to a Service named my-products on port 60001.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-products
    servicePort: 60001
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-products
          servicePort: 60000
      - path: /discounted
        backend:
          serviceName: my-discounted-products
          servicePort: 80
If you don't specify a default backend, GKE provides a default backend that returns 404.
Health checks
A Service exposed through an Ingress must respond to health checks from the load balancer. Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:
- Serve a response with an HTTP 200 status to GET requests on the / path.
- Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.
For example, suppose a container specifies this readiness probe:
...
readinessProbe:
  httpGet:
    path: /healthy
Then if the handler for the container's /healthy path returns an HTTP 200 status, the load balancer considers the container to be alive and healthy.
Kubernetes Service compared to Google Cloud backend service
A Kubernetes Service and a Google Cloud backend service are different things. There is a strong relationship between the two, but the relationship is not necessarily one to one. The GKE ingress controller creates a Google Cloud backend service for each (serviceName, servicePort) pair in an Ingress manifest. So it is possible for one Kubernetes Service object to be related to several Google Cloud backend services.
Support for Google Cloud features
You can use a BackendConfig to configure an HTTP(S) load balancer to use features like Google Cloud Armor, Cloud CDN, and IAP.
BackendConfig is a custom resource that holds configuration information for Google Cloud features.
An Ingress manifest refers to a Service, and the Service manifest refers to a BackendConfig by using a beta.cloud.google.com/backend-config annotation.
...
kind: Ingress
...
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
...
kind: Service
metadata:
  name: my-service
  ...
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-backend-config"}}'
spec:
  type: NodePort
...
Support for WebSocket
With HTTP(S) Load Balancing, the WebSocket protocol just works. No configuration is required.
If you intend to use the WebSocket protocol, you might want to use a timeout value larger than the default 30 seconds. To set the timeout value for a backend service configured through Ingress, create a BackendConfig object, and use the beta.cloud.google.com/backend-config annotation in your Service manifest.
For more information, see Configuring a backend service through Ingress.
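As a sketch, a BackendConfig that raises the backend service timeout to accommodate long-lived WebSocket connections might look like the following. The resource name and the 3600-second value are illustrative, and the apiVersion shown reflects the beta BackendConfig custom resource:

```yaml
# Sketch: raising the backend service timeout via BackendConfig.
# "my-backend-config" and the 3600-second timeout are placeholders.
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  timeoutSec: 3600
```

The Service then references this BackendConfig with the beta.cloud.google.com/backend-config annotation, using the same JSON format shown earlier in this page.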
Static IP addresses for HTTP(S) load balancers
When you create an Ingress object, you get a stable external IP address that clients can use to access your Services and in turn, your running containers. The IP address is stable in the sense that it lasts for the lifetime of the Ingress object. If you delete your Ingress and create a new Ingress from the same manifest file, you are not guaranteed to get the same external IP address.
If you want a permanent IP address that will stay the same across deleting your Ingress and creating a new one, you must reserve a global static external IP address. Then in your Ingress manifest, include an annotation that gives the name of your reserved static IP address.
For example, suppose you have reserved a global static external IP address named my-static-address. In your Ingress manifest, include a kubernetes.io/ingress.global-static-ip-name annotation as shown here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-address
For more information about creating a static external IP address for an Ingress, see Configuring a static IP address.
Setting up HTTPS (TLS) between client and load balancer
An HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer must have a certificate so it can prove its identity to your clients. The load balancer must also have a private key to complete the HTTPS handshake.
When the load balancer accepts an HTTPS request from a client, the traffic between the client and the load balancer is encrypted using TLS. However, the load balancer terminates the TLS encryption, and forwards the request without encryption to the application. For information about how to encrypt traffic between the load balancer and your application, see HTTPS between load balancer and your application.
You have the option to use Google-managed SSL certificates (Beta) or to use certificates that you manage yourself. For more information about creating an Ingress that uses Google-managed certificates, see Using Google-managed SSL certificates (Beta).
To provide an HTTP(S) load balancer with a certificate and key that you created yourself, create a Kubernetes Secret. The Secret holds the certificate and key. To use a Secret, add its name in the tls field of your Ingress manifest as shown in the following example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-2
spec:
  tls:
  - secretName: my-secret
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-metrics
          servicePort: 60000
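The referenced Secret must be of type kubernetes.io/tls, holding the certificate and key under the tls.crt and tls.key keys. A sketch with placeholder base64 data follows; equivalently, `kubectl create secret tls my-secret --cert CERT_FILE --key KEY_FILE` creates such a Secret from local files.

```yaml
# Sketch: a TLS Secret. The data values are placeholders for
# base64-encoded PEM content, not literal strings to copy.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: kubernetes.io/tls
data:
  tls.crt: BASE64_ENCODED_CERTIFICATE   # placeholder
  tls.key: BASE64_ENCODED_PRIVATE_KEY   # placeholder
```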
Changes to Secrets are picked up periodically, so if you modify the data inside a Secret, it can take up to 10 minutes for those changes to be applied to the load balancer.
For more information about using Secrets to provide certificates to the load balancer, see Using multiple SSL certificates in HTTP(S) load balancing with Ingress.
Disabling HTTP
If you want all traffic between the client and the HTTP(S) load balancer to use HTTPS, you can disable HTTP by including the kubernetes.io/ingress.allow-http annotation in your Ingress manifest. Set the value of the annotation to "false".
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-2
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: my-secret
  ...
Pre-shared certificates for load balancers
As an alternative to using Kubernetes Secrets to provide certificates to the load balancer for HTTP(S) termination, you can use certificates previously uploaded to your GCP project. For more information, see Using pre-shared certificates and Using multiple SSL certificates in HTTP(S) load balancing with Ingress.
HTTPS (TLS) between load balancer and your application
An HTTP(S) load balancer acts as a proxy between your clients and your application. Clients can use HTTP or HTTPS to communicate with the load balancer proxy. However, the connection from the load balancer proxy to your application uses HTTP by default. If your application, running in a GKE Pod, is capable of receiving HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application.
To configure the protocol used between the load balancer and your application, use the cloud.google.com/app-protocols annotation in your Service manifest.
The following Service manifest specifies two ports. The annotation says that when an HTTP(S) load balancer targets port 80 of the Service, it should use HTTP. And when the load balancer targets port 443 of the Service, it should use HTTPS.
apiVersion: v1
kind: Service
metadata:
  name: my-service-3
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001
The cloud.google.com/app-protocols annotation serves the same purpose as the older service.alpha.kubernetes.io/app-protocols annotation. The old and new annotation names can coexist, as shown in the following Service manifest. When both annotations appear in the same Service manifest, service.alpha.kubernetes.io/app-protocols takes precedence.
apiVersion: v1
kind: Service
metadata:
  name: my-service-3
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
    service.alpha.kubernetes.io/app-protocols: '{"my-first-port":"HTTPS","my-second-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-first-port
    port: 443
    targetPort: 8443
  - name: my-second-port
    port: 80
    targetPort: 50001
Using multiple TLS certificates
Suppose you want an HTTP(S) load balancer to serve content from two hostnames: your-store.example, and your-experimental-store.example. Also, you want the load balancer to use one certificate for your-store.example and a different certificate for your-experimental-store.example.
You can specify multiple certificates in an Ingress manifest, and the load balancer chooses a certificate if the Common Name (CN) in the certificate matches the hostname used in the request. For detailed information on how to configure multiple certificates, see Using multiple SSL certificates in HTTP(S) load balancing with Ingress.
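A sketch of such a manifest follows, listing one Secret per certificate in the tls field. The Secret names and the my-experimental-products Service are hypothetical placeholders:

```yaml
# Sketch: serving two hostnames with two certificates.
# Secret names and "my-experimental-products" are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-multi-cert-ingress
spec:
  tls:
  - secretName: store-cert          # certificate whose CN matches your-store.example
  - secretName: experimental-cert   # certificate whose CN matches your-experimental-store.example
  rules:
  - host: your-store.example
    http:
      paths:
      - backend:
          serviceName: my-products
          servicePort: 60000
  - host: your-experimental-store.example
    http:
      paths:
      - backend:
          serviceName: my-experimental-products
          servicePort: 60000
```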
What's next
Configure an HTTP(S) load balancer by creating a Deployment, Service, and Ingress.
Learn how to expose applications using Services.
Do the tutorial on setting up HTTP Load Balancing with Ingress.
Read an overview of networking in GKE.
Learn how to configure Google Cloud Armor for your cluster.
Learn how to set up Cloud CDN in GKE.
Learn how to configure IAP for your workloads.