Configuring load balancing through Ingress

This page shows how to configure an HTTP(S) load balancer by creating a Kubernetes Ingress object. An Ingress object must be associated with one or more Service objects, each of which is associated with a set of Pods.

A Service object has one or more servicePort structures. Each servicePort that is targeted by an Ingress is associated with a Google Cloud Platform backend service resource.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

Multiple backend services

An HTTP(S) load balancer provides one stable IP address that you can use to route requests to a variety of backend services.

In this exercise, you configure the load balancer to route requests to different backend services depending on the URL path. Requests that have the path / are routed to one backend service, and requests that have the path /kube are routed to a different backend service.

Here's the big picture of the steps in this exercise:

  1. Create a Deployment and expose it with a Service named hello-world.
  2. Create a second Deployment and expose it with a Service named hello-kubernetes.
  3. Create an Ingress that specifies rules for routing requests to one Service or the other, depending on the URL path in the request. When you create the Ingress, the GKE ingress controller creates and configures an HTTP(S) load balancer.
  4. Test the HTTP(S) load balancer.

Here's a manifest for the first Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: world
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        department: world
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50000"

Copy the manifest to a file named hello-world-deployment.yaml, and create the Deployment:

kubectl apply -f hello-world-deployment.yaml

Here's a manifest for a Service that exposes your first Deployment:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000

For the purpose of this exercise, these are the important points to understand about the Service:

  • Any Pod that has both the greeting: hello label and the department: world label is a member of the Service.

  • When a request is sent to the Service on TCP port 60000, it is forwarded to one of the member Pods on TCP port 50000.

Copy the manifest to a file named hello-world-service.yaml, and create the Service:

kubectl apply -f hello-world-service.yaml

Here's a manifest for a second Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: kubernetes
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        department: kubernetes
    spec:
      containers:
      - name: hello-again
        image: "gcr.io/google-samples/node-hello:1.0"
        env:
        - name: "PORT"
          value: "8080"

Copy the manifest to a file named hello-kubernetes-deployment.yaml, and create the Deployment:

kubectl apply -f hello-kubernetes-deployment.yaml

Here's a manifest for a Service that exposes your second Deployment:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  selector:
    greeting: hello
    department: kubernetes
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

For the purpose of this exercise, these are the important points to understand about the Service:

  • Any Pod that has both the greeting: hello label and the department: kubernetes label is a member of the Service.

  • When a request is sent to the Service on TCP port 80, it is forwarded to one of the member Pods on TCP port 8080.

Copy the manifest to a file named hello-kubernetes-service.yaml, and create the Service:

kubectl apply -f hello-kubernetes-service.yaml

Here's a manifest for an Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80

Notice that the Ingress manifest has two (serviceName, servicePort) pairs. Each pair is associated with a GCP backend service resource.

Copy the manifest to a file named my-ingress.yaml, and create the Ingress:

kubectl apply -f my-ingress.yaml

When you create the Ingress, the GKE ingress controller creates an HTTP(S) load balancer, and configures the load balancer as follows:

  • When a client sends a request to the load balancer with URL path /, the request is forwarded to the hello-world Service on port 60000.

  • When a client sends a request to the load balancer using URL path /kube, the request is forwarded to the hello-kubernetes Service on port 80.

Wait about five minutes for the load balancer to be configured.

View the Ingress:

kubectl get ingress my-ingress --output yaml

The output shows the external IP address of the HTTP(S) load balancer:

status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.1
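
If you only need the IP address, you can extract it directly with a jsonpath query; the field path matches the status block shown above:

```shell
# Print only the external IP address of the load balancer
kubectl get ingress my-ingress \
    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```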

Test the / path:

curl [LOAD_BALANCER_IP]/

where [LOAD_BALANCER_IP] is the external IP address of your load balancer.

The output shows a Hello, world! message:

Hello, world!
Version: 2.0.0
Hostname: ...

Test the /kube path:

curl [LOAD_BALANCER_IP]/kube

The output shows a Hello Kubernetes message:

Hello Kubernetes!

HTTPS between client and load balancer

An HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer must have a certificate so it can prove its identity to your clients. The load balancer must also have a private key to complete the HTTPS handshake.

Disabling HTTP

If you want all traffic between the client and the load balancer to use HTTPS, you can disable HTTP. For more information, see Disabling HTTP.
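
Applied to this exercise, the annotation might look like the following sketch, which reuses the my-ingress manifest from earlier; only the metadata changes:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
```

With HTTP disabled, the load balancer must also be configured with a certificate so that it can serve HTTPS.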

HTTPS between load balancer and application

If your application, running in a GKE Pod, is capable of receiving HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application. For more information, see HTTPS (TLS) between load balancer and your application.
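
As a sketch, the protocol is set with the service.alpha.kubernetes.io/app-protocols annotation on the Service, which maps a named port to a protocol. The port name and port numbers below are hypothetical; your application must serve HTTPS on the targetPort:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
spec:
  type: NodePort
  selector:
    greeting: hello
    department: world
  ports:
  - name: my-https-port
    protocol: TCP
    port: 443
    targetPort: 8443
```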

HTTP/2 between client and load balancer

Clients can use HTTP/2 to send requests to the load balancer. No configuration is required.

HTTP/2 between load balancer and application

If your application, running in a GKE Pod, is capable of receiving HTTP/2 requests, you can configure the load balancer to use HTTP/2 when it forwards requests to your application. For more information, see HTTP/2 for load balancing with Ingress.
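
The same annotation is used with the HTTP2 value. In this sketch, the port name and port numbers are hypothetical; your application must speak HTTP/2 on the targetPort:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"my-h2-port":"HTTP2"}'
spec:
  type: NodePort
  selector:
    greeting: hello
    department: kubernetes
  ports:
  - name: my-h2-port
    protocol: TCP
    port: 443
    targetPort: 8443
```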

Network endpoint groups

If your cluster supports Container-native Load Balancing, you can configure the load balancer to use network endpoint groups. For more information, see Using Container-native Load Balancing.
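
As a sketch, container-native load balancing is enabled with the cloud.google.com/neg annotation on the Service; the rest of the manifest reuses the hello-world Service from this page:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
```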

Summary of Ingress annotations

kubernetes.io/ingress.allow-http
Specifies whether to allow HTTP traffic between the client and the HTTP(S) load balancer. Possible values are "true" and "false". Default is "true". See Disabling HTTP.
ingress.gcp.kubernetes.io/pre-shared-cert
You can upload certificates and keys to your GCP project. Use this annotation to reference the certificates and keys. See Using multiple SSL certificates in HTTP(S) load balancing.
kubernetes.io/ingress.global-static-ip-name
Use this annotation to specify that the load balancer should use a static external IP address that you previously created. See Static IP addresses for HTTP(S) load balancers.
service.alpha.kubernetes.io/app-protocols
Use this annotation to set the protocol for communication between the load balancer and the application. Possible protocols are HTTP, HTTPS, and HTTP/2. See HTTPS between load balancer and your application and HTTP/2 for load balancing with Ingress.
beta.cloud.google.com/backend-config
Use this annotation to configure the backend service associated with a servicePort. See BackendConfig custom resource.
cloud.google.com/neg
Use this annotation to specify that the load balancer should use network endpoint groups. See Using Container-native Load Balancing.
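
Several of these annotations can be combined on a single Ingress. The following sketch is illustrative only; my-static-ip and my-cert are hypothetical names of a previously reserved static IP address and a previously uploaded certificate:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-cert"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
```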
