Setting up HTTP Load Balancing with Ingress

This tutorial shows how to run an nginx web server behind an HTTP load balancer by configuring the Ingress resource.

Background

Container Engine offers integrated support for two types of cloud load balancing for a publicly accessible application:

  1. You can create TCP load balancers by specifying type: LoadBalancer on a Service resource manifest. Although TCP load balancers work for HTTP web servers, they are not designed to terminate HTTP(S) traffic because they are not aware of individual HTTP(S) requests. Container Engine does not configure any health checks for TCP load balancers.

  2. You can create HTTP(S) load balancers by using an Ingress resource. HTTP(S) load balancers are designed to terminate HTTP(S) requests and can make better context-aware load balancing decisions. They offer features like customizable URL maps and TLS termination. Container Engine automatically configures health checks for HTTP(S) load balancers.

If you are exposing an HTTP(S) service hosted on Container Engine, HTTP(S) load balancing is the recommended method for load balancing.
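The first option described above can be sketched as a minimal Service manifest. This is an illustrative example only; the name nginx-tcp-lb and the run: nginx selector are assumptions, not part of this tutorial:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-tcp-lb   # illustrative name
spec:
  # type: LoadBalancer provisions a TCP load balancer with a public IP
  type: LoadBalancer
  selector:
    run: nginx         # assumed label on the backend Pods
  ports:
  - port: 80
    targetPort: 80
```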

Before you begin

Take the following steps to enable the Google Container Engine API:
  1. Visit the Container Engine page in the Google Cloud Platform Console.
  2. Create or select a project.
  3. Wait for the API and related services to be enabled. This can take several minutes.
  4. Enable billing for your project.

Install the following command-line tools used in this tutorial:

  • gcloud is used to create and delete Container Engine clusters. gcloud is included in the Google Cloud SDK.
  • kubectl is used to manage Kubernetes, the cluster orchestration system used by Container Engine. You can install kubectl using gcloud:
    gcloud components install kubectl

Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set default configuration values by running the following commands:
$ gcloud config set project PROJECT_ID
$ gcloud config set compute/zone us-central1-b

Create a container cluster

Create a container cluster named loadbalancedcluster by running:

gcloud container clusters create loadbalancedcluster

Step 1: Deploy an nginx server

Create a single-replica nginx Deployment by running the following command:

kubectl run nginx --image=nginx --port=80

This command runs an instance of the nginx Docker image, serving on port 80.
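The imperative command above can also be expressed as a declarative manifest. The following is an approximate equivalent, not the exact object kubectl run creates: the run: nginx label mirrors what kubectl run sets, and the apiVersion may differ across Kubernetes releases:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx        # label kubectl run applies by default
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```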

Step 2: Expose your nginx deployment as a service internally

Create a Service resource to make the nginx deployment reachable within your container cluster:

kubectl expose deployment nginx --target-port=80 --type=NodePort

When you create a Service of type NodePort with this command, Container Engine makes your Service available on a randomly selected high port number (e.g. 32640) on all the nodes in your cluster.
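For reference, the kubectl expose command corresponds roughly to the following Service manifest. This is a sketch under the assumption that the Deployment's Pods carry the run: nginx label that kubectl run sets:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  # NodePort exposes the Service on a high port on every cluster node
  type: NodePort
  selector:
    run: nginx        # assumed label on the nginx Pods
  ports:
  - port: 80
    targetPort: 80
```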

Verify the Service was created and a node port was allocated:

$ kubectl get service nginx
NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx     10.3.255.226   <nodes>       80:32640/TCP   15m

In the sample output above, the node port for the nginx Service is 32640. Also, note that there is no external IP allocated for this Service. Since the Container Engine nodes are not externally accessible by default, creating this Service does not make your application accessible from the Internet.

To make your HTTP(S) web server application publicly accessible, you need to create an Ingress resource.

Step 3: Create an Ingress resource

Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.

On Container Engine, Ingress is implemented using Cloud Load Balancing. When you create an Ingress in your cluster, Container Engine creates an HTTP(S) load balancer and configures it to route traffic to your application.

While the Kubernetes Ingress is a beta resource, meaning how you describe the Ingress object is subject to change, the Cloud Load Balancers that Container Engine provisions to implement the Ingress are production-ready.

The following config file defines an Ingress resource that directs traffic to your nginx server:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80

To deploy this Ingress resource, download basic-ingress.yaml and run:

kubectl apply -f basic-ingress.yaml

Once you deploy this manifest, Kubernetes creates an Ingress resource on your cluster. The Ingress controller running in your cluster is responsible for creating an HTTP(S) Load Balancer to route all external HTTP traffic to the nginx Service you exposed.

Step 4: Visit your application

Find out the external IP address of the load balancer serving your application by running:

$ kubectl get ingress basic-ingress
NAME            HOSTS     ADDRESS         PORTS     AGE
basic-ingress   *         203.0.113.12    80        2m

Point your browser to the external IP address of your application and see the web page titled “Welcome to nginx!”. You have successfully exposed your application using an Ingress!

You can visit Load Balancing on Cloud Platform Console and inspect the networking resources created by the Ingress controller.

Step 5: (Optional) Configuring a static IP address

When you expose a web server on a domain name, the external IP address of your application should be a static IP address that does not change.

By default, Container Engine allocates ephemeral external IP addresses for HTTP applications exposed through an Ingress. Ephemeral addresses are subject to change. For a web application that you plan to run for a long time, use a static external IP address.

Note that once you configure a static IP address for the Ingress resource, deleting the Ingress will not delete the static IP address associated with it. Make sure to release the static IP addresses you configured once you no longer plan to use them.

Option 1: Convert existing ephemeral IP address to static IP address

If you already have an Ingress deployed, you can convert the existing ephemeral IP address of your application to a reserved static IP address without changing the external IP address by visiting the External IP addresses section on Cloud Platform Console.

Option 2: Reserve a new static IP address

Reserve a static external IP address named nginx-static-ip by running:

gcloud compute addresses create nginx-static-ip --global

Now you need to configure the existing Ingress resource to use the reserved IP address. Replace the basic-ingress.yaml manifest contents with the following manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: nginx-static-ip
spec:
  backend:
    serviceName: nginx
    servicePort: 80

This change adds an annotation on the Ingress to use the static IP resource named nginx-static-ip. To apply this modification to the existing Ingress, run the following command:

kubectl apply -f basic-ingress.yaml

Run kubectl get ingress basic-ingress and wait until the IP address of your application changes to use the reserved IP address of the nginx-static-ip resource.

It may take a couple of minutes to update the existing Ingress resource, reconfigure the load balancer, and propagate the load balancing rules across the globe. Once this operation completes, Container Engine releases the ephemeral IP address previously allocated to your application.

Step 6: (Optional) Serving multiple applications on a Load Balancer

You can run multiple services on a single load balancer and public IP by configuring routing rules on the Ingress. By hosting multiple services on the same Ingress, you can avoid creating additional load balancers (which are billable resources) for every Service you expose to the Internet.

Create another web server Deployment with the echoserver sample image, which responds to requests with the details of the request it received:

kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080

Then, expose the echoserver Deployment internally to the cluster on a NodePort:

kubectl expose deployment echoserver --target-port=8080 --type=NodePort

The following manifest describes an Ingress resource that routes requests on the / path to the nginx Service and requests on the /echo path to the echoserver Service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /echo
        backend:
          serviceName: echoserver
          servicePort: 8080

To deploy this manifest, save it to a file named fanout-ingress.yaml, and run:

kubectl create -f fanout-ingress.yaml

Once the Ingress is deployed, run kubectl get ingress fanout-ingress to find out the public IP address of the load balancer. Then, visit http://<IP_ADDRESS>/ to see the welcome page of the nginx Service, and visit http://<IP_ADDRESS>/echo to see the response from the echoserver Service.

Step 7: Cleanup

  1. Delete the Ingress: This deallocates the ephemeral external IP address and the load balancing resources associated with your application:

    kubectl delete ingress basic-ingress
    

    If you have followed "Step 6", delete the ingress by running:

    kubectl delete ingress fanout-ingress
    
  2. Delete the static IP address: Execute this only if you followed Step 5.

    • If you have followed “Option 1” in Step 5 to convert an existing ephemeral IP address to static IP, visit Cloud Platform Console to delete the static IP.

    • If you have followed “Option 2” in Step 5, run the following command to delete the static IP address:

      gcloud compute addresses delete nginx-static-ip --global
      
  3. Delete the cluster: This deletes the compute nodes of your container cluster and other resources such as the Deployments in the cluster:

    gcloud container clusters delete loadbalancedcluster
    

Remarks

Services exposed through an Ingress must serve a response with HTTP 200 status to GET requests on the / path. This is used for health checking. If your application does not serve HTTP 200 on /, the backend will be marked unhealthy and will not receive traffic.
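One way to make this health-check expectation explicit is an HTTP readinessProbe on the container. This is a sketch only; in some Container Engine versions the Ingress controller derives the health check path from such a probe, but that behavior may vary by version:

```yaml
# fragment to place under the container entry in the Deployment's Pod spec
readinessProbe:
  httpGet:
    path: /       # the path that must return HTTP 200
    port: 80
```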

Ingress supports more advanced use cases, such as:

  • Name-based virtual hosting: You can use Ingress to reuse the load balancer for multiple domain names and subdomains, and to expose multiple Services on a single IP address and load balancer. Check out the simple fanout and name-based virtual hosting examples to learn how to configure Ingress for these tasks.

  • HTTPS termination: You can configure the Ingress to terminate the HTTPS traffic using the Cloud Load Balancer.
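As a sketch, HTTPS termination is configured through the tls section of the Ingress spec. The Secret name my-tls-secret below is hypothetical; it stands for a Kubernetes Secret holding your TLS certificate and key:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  tls:
  - secretName: my-tls-secret   # hypothetical Secret with tls.crt and tls.key
  backend:
    serviceName: nginx
    servicePort: 80
```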

When an Ingress is deleted, the Ingress controller cleans up the associated resources (except reserved static IP addresses) automatically.

What's next?

For a complete set of features and configuration details, check out the Ingress user guide on Kubernetes documentation.

To learn more about the Cloud Load Balancing used for Ingress, see the HTTP(S) Load Balancing documentation.
