This page shows you how to configure an external HTTP(S) load balancer by creating a Kubernetes Ingress object. An Ingress object must be associated with one or more Service objects, each of which is associated with a set of Pods.
A Service object has one or more `servicePort` structures. Each `servicePort` that is targeted by an Ingress is associated with a Google Cloud backend service resource.
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
Set up default gcloud settings using one of the following methods:
- Using `gcloud init`, if you want to be walked through setting defaults.
- Using `gcloud config`, to individually set your project ID, zone, and region.
Using gcloud init
If you receive the error `One of [--zone, --region] must be supplied: Please specify location`, complete this section.
- Run `gcloud init` and follow the directions:
gcloud init
If you are using SSH on a remote server, use the `--console-only` flag to prevent the command from launching a browser:
gcloud init --console-only
- Follow the instructions to authorize `gcloud` to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
Using gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone COMPUTE_ZONE
- If you are working with regional clusters, set your default compute region:
gcloud config set compute/region COMPUTE_REGION
- Update `gcloud` to the latest version:
gcloud components update
You need to have a GKE cluster in your project with the HTTP(S) Load Balancing add-on enabled:
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Select the cluster that you want to update.
Click the edit icon, and then click Add-ons.
Under HTTP Load Balancing, select Enabled.
Click Save to update your cluster.
gcloud
gcloud container clusters update cluster-name --update-addons=HttpLoadBalancing=ENABLED
where cluster-name is the name of your cluster.
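To confirm that the add-on is enabled, you can describe the cluster and inspect its add-on configuration. This check is an addition to the steps above, and the `addonsConfig.httpLoadBalancing` field path is an assumption about the describe output rather than something stated on this page:
gcloud container clusters describe cluster-name --format="value(addonsConfig.httpLoadBalancing)"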
Multiple backend services
An external HTTP(S) load balancer provides one stable IP address that you can use to route requests to a variety of backend services.
In this exercise, you configure an external HTTP(S) load balancer to route requests to different backend services depending on the URL path. Requests that have the path `/` are routed to one backend service, and requests that have the path `/kube` are routed to a different backend service.
Here's the big picture of the steps in this exercise:
- Create a Deployment and expose it with a Service named `hello-world`.
- Create a second Deployment and expose it with a Service named `hello-kubernetes`.
- Create an Ingress that specifies rules for routing requests to one Service or the other, depending on the URL path in the request. When you create the Ingress, the GKE Ingress controller creates and configures an external HTTP(S) load balancer.
- Test the external HTTP(S) load balancer.
Creating a Deployment
Here's a manifest for the first Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: world
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        department: world
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50000"
Copy the manifest to a file named `hello-world-deployment.yaml`, and create the Deployment:
kubectl apply -f hello-world-deployment.yaml
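Optionally, you can verify that the three replicas are running and carry the labels that the Service created in the next section will select. This check is an addition to the original exercise:
kubectl get pods -l greeting=hello,department=world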
Creating a Service
Here's a manifest for a Service that exposes your first Deployment:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
For the purpose of this exercise, these are the important points to understand about this Service:
- Any Pod that has both the `greeting: hello` label and the `department: world` label is a member of the Service.
- When a request is sent to the Service on TCP port 60000, it is forwarded to one of the member Pods on TCP port 50000.
Copy the manifest to a file named `hello-world-service.yaml`, and create the Service:
kubectl apply -f hello-world-service.yaml
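If you want to confirm that the Service has selected the Pods from your Deployment, you can list its endpoints; this verification step is an addition to the original exercise:
kubectl get endpoints hello-world
The output should list one `IP:50000` entry for each member Pod.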
Creating a second Deployment
Here's a manifest for a second Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: kubernetes
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        department: kubernetes
    spec:
      containers:
      - name: hello-again
        image: "gcr.io/google-samples/node-hello:1.0"
        env:
        - name: "PORT"
          value: "8080"
Copy the manifest to a file named `hello-kubernetes-deployment.yaml`, and create the Deployment:
kubectl apply -f hello-kubernetes-deployment.yaml
Creating a second Service
Here's a manifest for a Service that exposes your second Deployment:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  selector:
    greeting: hello
    department: kubernetes
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
For the purpose of this exercise, these are the important points to understand about this Service:
- Any Pod that has both the `greeting: hello` label and the `department: kubernetes` label is a member of the Service.
- When a request is sent to the Service on TCP port 80, it is forwarded to one of the member Pods on TCP port 8080.
Copy the manifest to a file named `hello-kubernetes-service.yaml`, and create the Service:
kubectl apply -f hello-kubernetes-service.yaml
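At this point, you can list both Services to confirm that each one was created with type NodePort; this check is an addition to the original exercise:
kubectl get services hello-world hello-kubernetes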
Creating an Ingress
Here's a manifest for an Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
For the purpose of this exercise, these are the important points to understand about this Ingress manifest:
- There are two Ingress classes available for GKE Ingress. The `gce` class deploys an external load balancer, and the `gce-internal` class deploys an internal load balancer. Ingress resources without a class specified default to `gce`.
- The only supported wildcard character for the `path` field of an Ingress is the `*` character. The `*` character must follow a forward slash (`/`) and must be the last character in the pattern. For example, `/*`, `/foo/*`, and `/foo/bar/*` are valid patterns, but `*`, `/foo/bar*`, and `/foo/*/bar` are not.
- A more specific pattern takes precedence over a less specific pattern. If you have both `/foo/*` and `/foo/bar/*`, then `/foo/bar/bat` is taken to match `/foo/bar/*`. For more information about path limitations and pattern matching, see the URL Maps documentation.
- The Ingress manifest has two (`serviceName`, `servicePort`) pairs. Each (`serviceName`, `servicePort`) pair is associated with a Google Cloud backend service.
Copy the manifest to a file named `my-ingress.yaml`, and create the Ingress:
kubectl apply -f my-ingress.yaml
When you create the Ingress, the GKE Ingress controller creates an external HTTP(S) load balancer, and configures the load balancer as follows:
- When a client sends a request to the load balancer with URL path `/`, the request is forwarded to the `hello-world` Service on port 60000.
- When a client sends a request to the load balancer using URL path `/kube`, the request is forwarded to the `hello-kubernetes` Service on port 80.
Wait about five minutes for the load balancer to be configured.
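One way to see when the load balancer has been assigned an external IP address is to watch the Ingress until an address appears in the ADDRESS column; this check is an addition to the original steps:
kubectl get ingress my-ingress --watch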
Testing the external HTTP(S) load balancer
To test the external HTTP(S) load balancer:
View the Ingress:
kubectl get ingress my-ingress --output yaml
The output shows the IP address of the external HTTP(S) load balancer:
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.1
Test the `/` path:
curl load-balancer-ip/
Replace load-balancer-ip with the external IP address of your load balancer.
The output shows a `Hello, world!` message:
Hello, world!
Version: 2.0.0
Hostname: ...
Test the `/kube` path:
curl load-balancer-ip/kube
The output shows a `Hello Kubernetes!` message:
Hello Kubernetes!
HTTPS between client and load balancer
An external HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer must have a certificate so it can prove its identity to your clients. The load balancer must also have a private key to complete the HTTPS handshake. For more information, see:
- Setting up HTTPS (TLS) between client and load balancer.
- Using multiple SSL certificates in HTTPS Load Balancing with Ingress.
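As a minimal sketch of terminating HTTPS at the load balancer with a Kubernetes Secret, you can add a `tls` block to the Ingress; the Secret name `my-tls-secret` and the host `example.com` are hypothetical placeholders, and the pages linked above describe the full setup, including the pre-shared certificate alternative:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - hosts:
    - example.com              # hypothetical hostname
    secretName: my-tls-secret  # hypothetical Secret holding the certificate and key
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
You would typically create the Secret first, for example with `kubectl create secret tls my-tls-secret --cert=cert.pem --key=key.pem`.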
Disabling HTTP
If you want all traffic between the client and the load balancer to use HTTPS, you can disable HTTP. For more information, see Disabling HTTP.
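For reference, HTTP is disabled with the `kubernetes.io/ingress.allow-http` annotation summarized in the table at the end of this page; a minimal sketch of the relevant Ingress metadata:
metadata:
  name: my-ingress
  annotations:
    # Accept only HTTPS between clients and the load balancer.
    kubernetes.io/ingress.allow-http: "false"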
HTTPS between load balancer and application
If your application, running in a GKE Pod, is capable of receiving HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application. For more information, see HTTPS (TLS) between load balancer and your application.
HTTP/2 between client and load balancer
Clients can use HTTP/2 to send requests to the load balancer. No configuration is required.
HTTP/2 between load balancer and application
If your application, running in a GKE Pod, is capable of receiving HTTP/2 requests, you can configure the load balancer to use HTTP/2 when it forwards requests to your application. For more information, see HTTP/2 for load balancing with Ingress.
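Both backend protocol options above are configured with the `service.alpha.kubernetes.io/app-protocols` Service annotation listed in the annotations table below. The value format shown here, a JSON map from a Service port name to a protocol, and the names `hello-secure` and `my-port` are illustrative assumptions rather than details taken from this page:
apiVersion: v1
kind: Service
metadata:
  name: hello-secure           # hypothetical Service name
  annotations:
    # Use HTTPS between the load balancer and the Pods behind the port
    # named "my-port"; use "HTTP2" instead of "HTTPS" to select HTTP/2.
    service.alpha.kubernetes.io/app-protocols: '{"my-port":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: hello-secure          # hypothetical selector
  ports:
  - name: my-port
    protocol: TCP
    port: 443
    targetPort: 8443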
Network endpoint groups
If your cluster supports container-native load balancing, we recommend that you use network endpoint groups (NEGs). For GKE clusters running version 1.17 and later, and under certain conditions, container-native load balancing is the default and does not require an explicit `cloud.google.com/neg: '{"ingress": true}'` Service annotation.
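On clusters where the annotation is still required, you set it on the Service. For example, the `hello-world` Service from this page with NEGs requested looks like this, using the annotation value quoted above:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # Request network endpoint groups (NEGs) for this Service's backends.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000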
Summary of external Ingress annotations
Ingress annotations
Annotation | Description |
---|---|
kubernetes.io/ingress.allow-http | Specifies whether to allow HTTP traffic between the client and the HTTP(S) load balancer. Possible values are "true" and "false". Default is "true". See Disabling HTTP. |
ingress.gcp.kubernetes.io/pre-shared-cert | You can upload certificates and keys to your Google Cloud project. Use this annotation to reference the certificates and keys. See Using multiple SSL certificates in HTTP(S) Load Balancing. |
kubernetes.io/ingress.global-static-ip-name | Use this annotation to specify that the load balancer should use a static external IP address that you previously created. See Static IP addresses for HTTP(S) load balancers. |
networking.gke.io/v1beta1.FrontendConfig | Use this annotation to customize the client-facing configuration of the load balancer. For more information, see Ingress features. |
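As a concrete example of the static IP annotation, you can reserve a global address (the name `my-static-ip` is a hypothetical placeholder):
gcloud compute addresses create my-static-ip --global
and then reference it by name from the Ingress metadata:
metadata:
  name: my-ingress
  annotations:
    # my-static-ip is the name of the reserved global address created above.
    kubernetes.io/ingress.global-static-ip-name: my-static-ip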
Service annotations related to Ingress
Annotation | Description |
---|---|
service.alpha.kubernetes.io/app-protocols | Use this annotation to set the protocol for communication between the load balancer and the application. Possible protocols are HTTP, HTTPS, and HTTP/2. See HTTPS between load balancer and your application and HTTP/2 for load balancing with Ingress. |
beta.cloud.google.com/backend-config | Use this annotation to configure the backend service associated with a servicePort. For more information, see Ingress features. |
cloud.google.com/neg | Use this annotation to specify that the load balancer should use network endpoint groups. See Using Container-native Load Balancing. |
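As a sketch of the backend-config annotation, the value is commonly a JSON object that maps a Service port to the name of a BackendConfig resource; the port key and the name `my-backendconfig` here are hypothetical, and the exact format is documented on the Ingress features page rather than here:
metadata:
  name: hello-world
  annotations:
    # Hypothetical: apply the BackendConfig named "my-backendconfig"
    # to the backend service created for port 60000.
    beta.cloud.google.com/backend-config: '{"ports": {"60000":"my-backendconfig"}}'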
What's next
- Read a conceptual overview of Ingress for HTTP(S) load balancing in GKE.
- Perform the tutorial on Setting up HTTP Load Balancing with Ingress.
- Read a conceptual overview of Services in GKE.