This page shows how to use Kubernetes Ingress and Service objects to configure an external HTTP(S) load balancer to use HTTP/2 for communication with backend services. This feature is available starting with Google Kubernetes Engine version 1.11.2.
Overview
An HTTP(S) load balancer acts as a proxy between your clients and your application. Clients can use HTTP/1.1 or HTTP/2 to communicate with the load balancer proxy. However, the connection from the load balancer proxy to your application uses HTTP/1.1 by default. If your application, running in a Google Kubernetes Engine Pod, is capable of receiving HTTP/2 requests, you can configure the external load balancer to use HTTP/2 when it forwards requests to your application.
In this exercise, you create a Deployment, a Service, and an Ingress. You put a cloud.google.com/app-protocols annotation in your Service manifest to specify that the load balancer should use HTTP/2 to communicate with your application. Then you call your service and verify that your application received an HTTP/2 request.
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
Set up default gcloud settings using one of the following methods:
- Using gcloud init, if you want to be walked through setting defaults.
- Using gcloud config, to individually set your project ID, zone, and region.
Using gcloud init
If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.
- Run gcloud init and follow the directions:
gcloud init
If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
gcloud init --console-only
- Follow the instructions to authorize gcloud to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
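If you want to confirm the defaults that gcloud init recorded, you can list the active configuration (an optional check, not part of the original steps):
gcloud config list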
Using gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone COMPUTE_ZONE
- If you are working with regional clusters, set your default compute region:
gcloud config set compute/region COMPUTE_REGION
- Update gcloud to the latest version:
gcloud components update
- Read about the Kubernetes Ingress and Service resources.
- Read about the HTTP/2 limitations for an external HTTP(S) load balancer.
Creating the Deployment
This Deployment manifest declares that you want to run two replicas of the
echoheaders
web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoheaders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoheaders
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8443
Copy the manifest to a file named my-deployment.yaml, and create the Deployment:
kubectl apply -f my-deployment.yaml
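Optionally, you can confirm that both replicas are running before you continue; for example:
kubectl get deployment echoheaders
kubectl get pods -l app=echoheaders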
Creating the Service
Here's a manifest for the Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
  name: echoheaders
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    protocol: TCP
    name: my-port
  selector:
    app: echoheaders
Save the manifest to a file named my-service.yaml, and create the Service:
kubectl apply -f my-service.yaml
View the Service:
kubectl get service echoheaders --output yaml
The output is similar to this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
  ...
  labels:
    app: echoheaders
  name: echoheaders
  ...
spec:
  clusterIP: 10.39.251.148
  ...
  ports:
  - name: my-port
    nodePort: 30647
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoheaders
  ...
  type: NodePort
  ...
For this exercise, here are the important things to note about your Service:
- The Service has type NodePort. This type is required for Services that are going to be associated with an Ingress.
- Any Pod that has the label app: echoheaders is a member of the Service. The selector field specifies this.
- The Service has one port, and the port is named my-port. The cloud.google.com/app-protocols annotation specifies that my-port should use the HTTP/2 protocol; the annotation format is illustrated in the example after this list.
- Traffic directed to the Service on TCP port 443 is routed to TCP port 8443 in one of the member Pods. The port and targetPort fields specify this.
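The value of the cloud.google.com/app-protocols annotation is a JSON map from port name to protocol. As a sketch, a Service with a second, hypothetical named port (metrics-port here) could request HTTP/2 on one port and HTTP/1.1 on the other:
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"my-port":"HTTP2","metrics-port":"HTTP"}'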
Creating the Ingress
Here's a manifest for the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  backend:
    serviceName: echoheaders
    servicePort: 443
Copy the manifest to a file named my-ingress.yaml, and create the Ingress:
kubectl apply -f my-ingress.yaml
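If you prefer, you can watch the Ingress until it is assigned an external IP address (an optional convenience):
kubectl get ingress echomap --watch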
Wait a few minutes for the Kubernetes Ingress controller to configure an HTTP(S) load balancer, and then view the Ingress:
kubectl get ingress echomap --output yaml
The output is similar to this:
kind: Ingress
metadata:
  ...
  name: echomap
  ...
spec:
  backend:
    serviceName: echoheaders
    servicePort: 443
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.2
For this exercise, here are the important things to note about your Ingress:
- The IP address for incoming traffic is listed under loadBalancer:ingress. One way to extract just that address is shown after this list.
- Incoming requests are routed to a Pod that is a member of the echoheaders Service. In this exercise, the member Pods have the label app: echoheaders.
- Requests are routed to the Pod on the target port specified in the echoheaders Service manifest. In this exercise, the Pod target port is 8443.
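If you want to capture just the external IP address for later use, a jsonpath query is one option (assuming the address has already been assigned):
kubectl get ingress echomap --output jsonpath='{.status.loadBalancer.ingress[0].ip}'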
Verifying that your load balancer supports HTTP/2
List your backend services:
gcloud compute backend-services list
Describe your backend service:
gcloud beta compute backend-services describe BACKEND_SERVICE_NAME --global
where BACKEND_SERVICE_NAME is the name of your backend service.
In the output, verify that the protocol is HTTP/2:
backends:
...
description: '{...,"kubernetes.io/service-port":"443","x-features":["HTTP2"]}'
...
kind: compute#backendService
loadBalancingScheme: EXTERNAL
protocol: HTTP2
...
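If you only want to check the protocol, you can also pass a format flag (a convenience, using the same BACKEND_SERVICE_NAME as above):
gcloud compute backend-services describe BACKEND_SERVICE_NAME --global --format="value(protocol)"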
Calling your service
Wait a few minutes for the load balancer and backend service to be configured. Enter the external IP address of your load balancer in your browser's address bar.
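You can also call the service from the command line; for example, where LOAD_BALANCER_IP is a placeholder for the external IP address from the Ingress status (the connection from your client to the load balancer is plain HTTP in this exercise):
curl http://LOAD_BALANCER_IP/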
The output shows information about the request from the load balancer to the Pod:
Hostname: echoheaders-7886d5bc68-xnrwj
...
Request Information:
...
method=GET
real path=/
query=
request_version=2
request_scheme=https
...
Request Headers:
...
x-forwarded-for=[YOUR_IP_ADDRESS], 203.0.113.2
x-forwarded-proto=http
...
For this exercise, here are the important things to note about the preceding output:
- The line request_version=2 indicates that the request between the load balancer and the Pod used HTTP/2.
- The line x-forwarded-proto=http indicates that the request between you and the load balancer used HTTP/1.1, not HTTP/2. One way to check both lines from the command line is shown after this list.
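For example, assuming the same LOAD_BALANCER_IP placeholder as before, you could filter the echoed output for just these two lines:
curl -s http://LOAD_BALANCER_IP/ | grep -E 'request_version|x-forwarded-proto'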
What's next?
- Set up HTTP load balancing with Ingress.
- Configure a static IP and domain name for your Ingress application using Ingress.
- Configure SSL certificates for your Ingress load balancer.
- If you have an application running on multiple Google Kubernetes Engine clusters in different regions, configure a multi-cluster Ingress to route traffic to a cluster in the region closest to the user.