The HTTP(S) load balancer terminates client SSL/TLS connections, and then balances requests across your Pods. When you configure an HTTP(S) load balancer through Ingress, you can configure the load balancer to present up to ten TLS certificates to the client.
The load balancer uses Server Name Indication (SNI) to determine which certificate to present to the client, based on the domain name in the TLS handshake. If the client does not use SNI, or if the client uses a domain name that does not match the Common Name (CN) in one of the certificates, the load balancer uses the first certificate listed in the Ingress. The following diagram depicts the load balancer sending traffic to different backends, depending on the domain name used in the request:
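You can observe the SNI mechanics locally before deploying anything. The following sketch (all names made up: the domain sni-demo.example and port 8443 are placeholders) starts a throwaway TLS server with a self-signed certificate, connects with `openssl s_client`, which sends the `-servername` value as the SNI hostname, and prints the subject of the certificate the server presented. Against a real load balancer you would point `-connect` at the load balancer's IP on port 443 instead:

```shell
# Generate a throwaway key and self-signed certificate in one step.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout sni-demo.key -out sni-demo.crt \
    -subj "/CN=sni-demo.example" 2>/dev/null

# Start a local TLS server presenting that certificate.
openssl s_server -accept 8443 -cert sni-demo.crt -key sni-demo.key -quiet &
server_pid=$!
sleep 1

# Connect with an explicit SNI hostname and print the subject of the
# certificate the server handed back.
openssl s_client -connect localhost:8443 -servername sni-demo.example \
    </dev/null 2>/dev/null | openssl x509 -noout -subject

kill "$server_pid"
```

The same `s_client -servername` probe is a convenient way to check which certificate a deployed load balancer presents for each of your domains.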
You can specify certificates for an Ingress using one of three methods:

- Secrets that hold your own certificates and keys.
- Pre-shared certificates that you previously uploaded to your Google Cloud project.
- Google-managed SSL certificates. Managed certificates support a single, non-wildcard domain. Refer to the managed certificates page for information on how to use them.
You can use more than one method in the same Ingress. This allows for no-downtime migrations between methods.
Minimum GKE version
You must have GKE version 1.10.2 or later to use pre-shared certificates or to specify multiple certificates in an Ingress.
The big picture
Here's an overview of the steps in this topic:
1. Create a Deployment.
2. Create a Service.
3. Create two certificate files and two key files.
4. Create an Ingress that uses either Secrets or pre-shared certificates. As a result of creating the Ingress, GKE creates and configures an HTTP(S) load balancer.
5. Test the HTTP(S) load balancer.
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
Set up default gcloud settings using one of the following methods:

- gcloud init, if you want to be walked through setting defaults.
- gcloud config, to individually set your project ID, zone, and region.
Using gcloud init
Run gcloud init and follow the directions:
If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
gcloud init --console-only
Follow the instructions to authorize gcloud to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
Using gcloud config
- Set your default project ID:
gcloud config set project [PROJECT_ID]
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone [COMPUTE_ZONE]
- If you are working with regional clusters, set your default compute region:
gcloud config set compute/region [COMPUTE_REGION]
Update gcloud to the latest version:
gcloud components update
Creating a Deployment
Here is a manifest for a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mc-deployment
spec:
  selector:
    matchLabels:
      app: products
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: products
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
      - name: hello-again
        image: "gcr.io/google-samples/node-hello:1.0"
        env:
        - name: "PORT"
          value: "50002"
The Deployment has three Pods, and each Pod has two containers. One container runs hello-app and listens on TCP port 50001. The other container runs node-hello and listens on TCP port 50002.
Copy the manifest to a file named my-mc-deployment.yaml, and create the Deployment:

kubectl apply -f my-mc-deployment.yaml
Creating a Service
Here is a manifest for a Service:

apiVersion: v1
kind: Service
metadata:
  name: my-mc-service
spec:
  type: NodePort
  selector:
    app: products
    department: sales
  ports:
  - name: my-first-port
    protocol: TCP
    port: 60001
    targetPort: 50001
  - name: my-second-port
    protocol: TCP
    port: 60002
    targetPort: 50002
The selector field in the Service manifest says that any Pod that has both the app: products label and the department: sales label is a member of this Service. So the Pods of the Deployment you created in the preceding step are members of the Service.

The ports field of the Service manifest is an array of ServicePort objects. When a client sends a request to the Service on my-first-port, the request is forwarded to one of the member Pods on port 50001. When a client sends a request to the Service on my-second-port, the request is forwarded to one of the member Pods on port 50002.
Copy the manifest to a file named my-mc-service.yaml, and create the Service:

kubectl apply -f my-mc-service.yaml
Creating certificates and keys
To do the exercises on this page, you need two certificates, each with a corresponding key. Each certificate must have a Common Name (CN) that is equal to a domain name that you own. If you already have two certificate files with the appropriate values for Common Name, you can skip ahead to the next section.
Create your first key:
openssl genrsa -out test-ingress-1.key 2048
Create your first certificate signing request:
openssl req -new -key test-ingress-1.key -out test-ingress-1.csr \
    -subj "/CN=[FIRST_DOMAIN_NAME]"
where [FIRST_DOMAIN_NAME] is a domain name that you own or a fake domain name.
For example, suppose you want the load balancer to serve requests from your-store.example. Then your certificate signing request would look like this:
openssl req -new -key test-ingress-1.key -out test-ingress-1.csr \
    -subj "/CN=your-store.example"
Create your first certificate:
openssl x509 -req -days 365 -in test-ingress-1.csr -signkey test-ingress-1.key \
    -out test-ingress-1.crt
Create your second key:
openssl genrsa -out test-ingress-2.key 2048
Create your second certificate signing request:
openssl req -new -key test-ingress-2.key -out test-ingress-2.csr \
    -subj "/CN=[SECOND_DOMAIN_NAME]"
where [SECOND_DOMAIN_NAME] is another domain name that you own or a fake domain name.
For example, suppose you want the load balancer to serve requests from your-experimental-store.example. Then your certificate signing request would look like this:
openssl req -new -key test-ingress-2.key -out test-ingress-2.csr \
    -subj "/CN=your-experimental-store.example"
Create your second certificate:
openssl x509 -req -days 365 -in test-ingress-2.csr -signkey test-ingress-2.key \
    -out test-ingress-2.crt
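Before signing, you can confirm that a certificate signing request carries the intended Common Name with openssl. The following sketch uses a made-up domain (check-demo.example) and scratch file names rather than the files created above, so you can run it anywhere:

```shell
# Generate a scratch key and CSR; check-demo.example is a made-up
# domain used only for this sanity check.
openssl genrsa -out check-demo.key 2048
openssl req -new -key check-demo.key -out check-demo.csr \
    -subj "/CN=check-demo.example"

# Print the subject encoded in the CSR; the CN shown here is what
# ends up in the signed certificate.
openssl req -noout -subject -in check-demo.csr
```

Running the same `openssl req -noout -subject` check against test-ingress-1.csr and test-ingress-2.csr verifies that each CN matches the domain the load balancer will serve.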
For more information about creating your own certificates and keys, see Obtaining a private key and signed certificate.
You now have two certificate files and two key files. The remaining steps in this task use these placeholders to refer to your domains, certificate files, and key files:
- [FIRST_CERT_FILE] is the path to your first certificate file.
- [FIRST_KEY_FILE] is the path to the key file that goes with your first certificate.
- [SECOND_CERT_FILE] is the path to your second certificate file.
- [SECOND_KEY_FILE] is the path to the key file that goes with your second certificate.
- [FIRST_DOMAIN] is a domain name that you own or a fake domain name.
- [SECOND_DOMAIN] is a second domain name that you own or a second fake domain name.
Specifying certificates for your Ingress
The next step is to create an Ingress object. In your Ingress manifest, you can use one of two methods to provide certificates for the load balancer:
- Secrets
- Pre-shared certificates

The following sections describe both methods, starting with Secrets.

Using Secrets
Create a Secret that holds your first certificate and key:
kubectl create secret tls my-first-secret \
    --cert [FIRST_CERT_FILE] --key [FIRST_KEY_FILE]
Create a Secret that holds your second certificate and key:
kubectl create secret tls my-second-secret \
    --cert [SECOND_CERT_FILE] --key [SECOND_KEY_FILE]
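The kubectl create secret tls command produces a Secret of type kubernetes.io/tls with tls.crt and tls.key data keys. If you prefer to manage the Secret declaratively, an equivalent manifest looks like this sketch (the base64 payloads are elided placeholders, not real values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-first-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded contents of [FIRST_CERT_FILE]>
  tls.key: <base64-encoded contents of [FIRST_KEY_FILE]>
```

The tls.crt and tls.key key names are required; a Secret missing them causes the sync error shown later in this topic.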
Creating an Ingress
Here is a manifest for an Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-mc-ingress
spec:
  tls:
  - secretName: my-first-secret
  - secretName: my-second-secret
  rules:
  - host: [FIRST_DOMAIN]
    http:
      paths:
      - backend:
          serviceName: my-mc-service
          servicePort: my-first-port
  - host: [SECOND_DOMAIN]
    http:
      paths:
      - backend:
          serviceName: my-mc-service
          servicePort: my-second-port
Copy the manifest to a file named my-mc-ingress.yaml. Replace [FIRST_DOMAIN] and [SECOND_DOMAIN] with domain names that you own or with fake domain names.
Create the Ingress:
kubectl apply -f my-mc-ingress.yaml
When you create an Ingress, the GKE ingress controller creates an HTTP(S) load balancer. Wait a minute for GKE to assign an external IP address to the load balancer.
Describe your Ingress:
kubectl describe ingress my-mc-ingress
The output shows that two Secrets are associated with the Ingress. The output also shows the external IP address of the load balancer.
Name:             my-mc-ingress
Address:          203.0.113.1
...
TLS:
  my-first-secret terminates
  my-second-secret terminates
Rules:
  Host                             Path  Backends
  ----                             ----  --------
  your-store.example                     my-mc-service:my-first-port (<none>)
  your-experimental-store.example        my-mc-service:my-second-port (<none>)
Annotations:
  ...
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     3m   loadbalancer-controller  default/my-mc-ingress
  Normal  CREATE  2m   loadbalancer-controller  ip: 203.0.113.1
Using pre-shared certificates
Create a certificate resource in your Google Cloud project:
gcloud compute ssl-certificates create test-ingress-1 \
    --certificate [FIRST_CERT_FILE] --private-key [FIRST_KEY_FILE]

where:
- [FIRST_CERT_FILE] is your first certificate file.
- [FIRST_KEY_FILE] is your first key file.
Create a second certificate resource in your Google Cloud project:
gcloud compute ssl-certificates create test-ingress-2 \
    --certificate [SECOND_CERT_FILE] --private-key [SECOND_KEY_FILE]

where:
- [SECOND_CERT_FILE] is your second certificate file.
- [SECOND_KEY_FILE] is your second key file.
View your certificate resources:
gcloud compute ssl-certificates list
The output shows that you have certificate resources named test-ingress-1 and test-ingress-2:

NAME            CREATION_TIMESTAMP
test-ingress-1  2018-11-03T12:08:47.751-07:00
test-ingress-2  2018-11-03T12:09:25.359-07:00
Here's a manifest for an Ingress that lists pre-shared certificate resources in an annotation:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-psc-ingress
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "test-ingress-1,test-ingress-2"
spec:
  rules:
  - host: [FIRST_DOMAIN]
    http:
      paths:
      - backend:
          serviceName: my-mc-service
          servicePort: my-first-port
  - host: [SECOND_DOMAIN]
    http:
      paths:
      - backend:
          serviceName: my-mc-service
          servicePort: my-second-port
Copy the manifest to a file named my-psc-ingress.yaml. Replace [FIRST_DOMAIN] and [SECOND_DOMAIN] with your domain names or with fake domain names.
Create the Ingress:
kubectl apply -f my-psc-ingress.yaml
Wait a minute for GKE to assign an external IP address to the load balancer.
Describe your Ingress:
kubectl describe ingress my-psc-ingress
The output shows that the Ingress is associated with the pre-shared certificates test-ingress-1 and test-ingress-2. The output also shows the external IP address of the load balancer:
Name:             my-psc-ingress
Address:          203.0.113.2
...
Rules:
  Host                             Path  Backends
  ----                             ----  --------
  your-store.example                     my-mc-service:my-first-port (<none>)
  your-experimental-store.example        my-mc-service:my-second-port (<none>)
Annotations:
  ...
  ingress.gcp.kubernetes.io/pre-shared-cert: test-ingress-1,test-ingress-2
  ...
  ingress.kubernetes.io/ssl-cert: test-ingress-1,test-ingress-2
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     2m   loadbalancer-controller  default/my-psc-ingress
  Normal  CREATE  1m   loadbalancer-controller  ip: 203.0.113.2
Testing the load balancer
Wait about five minutes for GKE to finish configuring the load balancer.
To do this step, you need to own two domain names, and both of your domain names must resolve to the external IP address of the HTTP(S) load balancer.
Send a request to the load balancer by using your first domain name:
curl -v https://[FIRST_DOMAIN]
The output shows that your first certificate was used in the TLS handshake. If your first domain is your-store.example, the output is similar to this:
...
* Trying 203.0.113.1...
...
* Connected to your-store.example (203.0.113.1) port 443 (#0)
...
* TLSv1.2 (IN), TLS handshake, Certificate (11):
...
* Server certificate:
*  subject: CN=your-store.example
...
> Host: your-store.example
...
Hello, world!
Version: 2.0.0
...
Send a request to the load balancer by using your second domain name:
curl -v https://[SECOND_DOMAIN]
The output shows that your second certificate was used in the TLS handshake. If your second domain is your-experimental-store.example, the output is similar to this:
...
* Trying 203.0.113.1...
...
* Connected to your-experimental-store.example (203.0.113.1) port 443 (#0)
...
* Server certificate:
*  subject: CN=your-experimental-store.example
...
> Host: your-experimental-store.example
...
Hello Kubernetes!
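If your DNS records do not yet point at the load balancer, you can still exercise SNI by pinning each hostname to the load balancer's IP with curl's --resolve flag. This is an illustrative sketch: 203.0.113.1 stands in for your load balancer's external IP, and -k is needed only because the certificates in this exercise are self-signed:

```shell
# Map your-store.example:443 directly to the load balancer IP,
# bypassing DNS; curl still sends your-store.example as the SNI
# hostname, so the load balancer picks the matching certificate.
curl -v -k --resolve your-store.example:443:203.0.113.1 \
    https://your-store.example
```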
The hosts field of an Ingress object

An Ingress object has a tls field that is an array of IngressTLS objects. Each IngressTLS object has a hosts field and a secretName field. In GKE, the hosts field is not used. GKE reads the Common Name (CN) from the certificate in the Secret. If the Common Name matches the domain name in a client request, then the load balancer presents the matching certificate to the client.
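For reference, this fragment shows where the hosts and secretName fields sit in an Ingress manifest (your-store.example stands in for one of your domains):

```yaml
spec:
  tls:
  - hosts:                       # ignored by GKE; the CN in the
    - your-store.example         # certificate is used instead
    secretName: my-first-secret
```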
Which certificate is presented?
The load balancer chooses a certificate according to these rules:
If both Secrets and pre-shared certificates are listed in the Ingress, the load balancer ignores the Secrets and uses the list of pre-shared certificates.
If no certificate has a Common Name (CN) that matches the domain name in the client request, the load balancer presents the primary certificate.
For Secrets listed in the tls block, the primary certificate is in the first Secret in the list.
For pre-shared certificates listed in the annotation, the primary certificate is the first certificate in the list.
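In other words, list order determines the fallback certificate. A sketch of both cases, using the names from this topic (use one method or the other in a real Ingress, since pre-shared certificates take precedence when both are present):

```yaml
# Secrets method: the first Secret listed holds the primary certificate.
spec:
  tls:
  - secretName: my-first-secret    # primary: presented when no CN matches
  - secretName: my-second-secret

# Pre-shared method: the first name in the annotation is the primary.
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "test-ingress-1,test-ingress-2"
```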
Specifying invalid or non-existent Secrets results in a Kubernetes event error. You can check Kubernetes events for an Ingress as follows:
kubectl describe ingress
The output is similar to this:
Name:             my-ingress
Namespace:        default
Address:          203.0.113.3
Default backend:  hello-server:8080 (10.8.0.3:8080)
TLS:
  my-faulty-Secret terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     my-service:443 (10.8.0.3:443)
Events:
  Error during sync: cannot get certs for Ingress default/my-ingress: Secret "my-faulty-ingress" has no 'tls.crt'
What's next

- Read about using a Kubernetes Ingress object to configure an HTTP(S) load balancer.
- Read the GKE network overview.
- Do the tutorial on how to configure HTTP(S) load balancing with Ingress.
- Learn how to configure a static IP address and domain name.
- If you have an application running on multiple GKE clusters in different regions, configure a multi-cluster Ingress to route traffic to a cluster in the region closest to the user.