Configuring Ingress for internal load balancing

This page shows you how to set up and use Ingress for Internal HTTP(S) Load Balancing in Google Kubernetes Engine (GKE). Ingress for Internal HTTP(S) Load Balancing provides native support for internal load balancing through the GKE Ingress controller.

For more information about how this feature works, see Ingress for Internal HTTP(S) Load Balancing.

Preparing your environment

Prepare your environment to use the internal HTTP(S) load balancer:

  1. Prepare the networking environment so that the load balancer proxies can be deployed in a given region. The steps to deploy the proxy-only subnet are explained in configuring the network and subnets (a minimal example is sketched after this list). You don't have to deploy firewall rules manually because the GKE Ingress controller manages them. You must complete this step before you deploy Ingress.

  2. Deploy load balancer resources through the Kubernetes Ingress API. The steps to do this are given in the following sections.
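
The following is a minimal sketch of creating the proxy-only subnet, assuming illustrative values for the subnet name, region, network, and IP range; the authoritative steps are in configuring the network and subnets.

# Sketch: reserve a proxy-only subnet for the internal HTTP(S) load balancer
# in the region where the cluster will run (names and range are examples).
gcloud compute networks subnets create proxy-only-subnet \
--purpose=INTERNAL_HTTPS_LOAD_BALANCER \
--role=ACTIVE \
--region=us-central1 \
--network=cluster-network \
--range=10.129.0.0/23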

Deploying Ingress for Internal HTTP(S) Load Balancing

Use the following sections to deploy Ingress.

Step 1: Create a cluster

In this step, you create a cluster in the Rapid release channel with alias IP enabled.

To create the cluster:

gcloud

gcloud beta container clusters create cluster-name \
--release-channel=rapid \
--enable-ip-alias \
--zone=zone-name \
--network=cluster-network

Where:

  • cluster-name is the name you choose for your cluster.
  • --enable-ip-alias turns on alias IP. Clusters using Ingress for Internal HTTP(S) Load Balancing must run in VPC-Native (Alias IP) mode. For more information, see Creating a VPC-native cluster.
  • --release-channel=rapid creates the cluster in the Rapid release channel. Ingress for Internal HTTP(S) Load Balancing is not currently available in any other release channels.
  • zone-name is the zone in which your cluster is created. You must choose a zone in the same region as the proxy-only subnet you created for your internal HTTP(S) load balancer in configuring the network and subnets.
  • cluster-network is the name of the network in which the cluster is created. This network must be the same VPC network that contains the proxy-only subnet.

This cluster must also have HTTP load balancing enabled. Clusters have HTTP load balancing enabled by default. Do not disable it.
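
To confirm that HTTP load balancing is still enabled on the cluster, you can inspect the add-on configuration; this is an optional check using the same placeholder names as above, and the exact rendering of the addonsConfig field may vary.

# If the add-on has been disabled, the output includes "disabled: true".
gcloud container clusters describe cluster-name \
--zone=zone-name \
--format="yaml(addonsConfig.httpLoadBalancing)"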

Step 2: Deploy a web application

In this step, you create a Deployment using a container image that runs an HTTP server listening on port 9376. This Deployment manages the Pods for your application. Each Pod runs one application container with an HTTP server that returns the hostname of the application server as the response. The default hostname of a Pod is the name of the Pod. The container also handles graceful termination.

An example Deployment file, web-deployment.yaml, is provided below.

# web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hostname
  name: hostname-server
spec:
  selector:
    matchLabels:
      app: hostname
  minReadySeconds: 60
  replicas: 3
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - image: k8s.gcr.io/serve_hostname:v1.4
        name: hostname-server
        ports:
        - containerPort: 9376
          protocol: TCP
      terminationGracePeriodSeconds: 90

After you have saved the Deployment manifest, apply the resource to the cluster.

To apply the resource:

kubectl apply -f web-deployment.yaml
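
Optionally, verify that the Deployment has finished rolling out and that its Pods are ready before continuing; this check uses the names from the manifest above.

# Wait for the Deployment to finish rolling out
kubectl rollout status deployment hostname-server

# List the Pods managed by the Deployment
kubectl get pods -l app=hostname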

Step 3: Deploy a Service as a Network Endpoint Group (NEG)

In this step, you create a Service resource. The Service selects the backend Pods by their labels so that the Ingress controller can program their endpoints as backends. Ingress for Internal HTTP(S) Load Balancing requires you to use NEGs as backends; it does not support instance groups as backends. Because NEG backends are required, use the following NEG annotation when deploying Services that will be exposed through Ingress:

annotations:
    cloud.google.com/neg: '{"ingress": true}'

Using NEGs allows the Ingress controller to perform container-native load balancing. Traffic is load balanced from the Ingress proxy directly to the Pod IP rather than traversing the node IP or kube-proxy networking. In addition, Pod readiness gates are implemented so that the health of Pods is determined from the perspective of the load balancer, not just the Kubernetes readiness and liveness checks. This ensures that traffic is not dropped during lifecycle events such as Pod startup, Pod loss, or node loss.

If you omit the NEG annotation, the internal HTTP(S) load balancer cannot be configured and a warning is emitted on the Ingress object. A Kubernetes event is generated on the Ingress when this happens. The following is an example of the event message:

Message
-------
error while evaluating the ingress spec: could not find port "8080" in service "default/no-neg-svc"

A NEG is not created until an Ingress references the Service, and the NEG does not appear in Compute Engine until both the Ingress and its referenced Service exist. NEGs are zonal resources; for multi-zonal clusters, one NEG is created per Service per zone.
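
Once both the Service and the Ingress exist, you can confirm that the NEGs were created by listing them; the generated names and zones will differ in your project.

gcloud compute network-endpoint-groups list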

To create the Service, use a YAML file. An example file, web-service.yaml, that uses the correct annotation is given below.

# web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hostname
  namespace: default
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
  - name: host1
    port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
  type: NodePort

After you have created the Service, apply the resource to the cluster.

kubectl apply -f web-service.yaml
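
Optionally, confirm that the Service carries the NEG annotation and has been assigned a NodePort; this check uses the name from the manifest above.

# The output should show the cloud.google.com/neg annotation and a NodePort for port 80.
kubectl describe service hostname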

Step 4: Deploy Ingress

In this step, you create an Ingress resource, which triggers the deployment of Compute Engine load balancing through the Ingress controller. Ingress for Internal HTTP(S) Load Balancing requires the following annotation:

annotations:
    kubernetes.io/ingress.class: "gce-internal"

To prevent users from exposing applications publicly by accidentally omitting the correct annotation, implement a Cloud IAM policy that enforces limits on public load balancer creation. This blocks public load balancer deployment cluster-wide while still allowing internal load balancer creation through Ingress and Service resources.
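
One concrete way to enforce such a restriction at the project level is the restrictLoadBalancerCreationForTypes organization policy constraint; the following is a sketch only, and the policy file name, denied value, and project ID are illustrative.

# Sketch: deny creation of external HTTP(S) load balancers in the project
# while still allowing internal load balancer types.
cat > deny-external-lb-policy.yaml <<EOF
constraint: constraints/compute.restrictLoadBalancerCreationForTypes
listPolicy:
  deniedValues:
  - EXTERNAL_HTTP_HTTPS
EOF

gcloud resource-manager org-policies set-policy deny-external-lb-policy.yaml \
--project=project-id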

To deploy the Ingress resource, copy and save the following internal-ingress.yaml file.

# internal-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  backend:
    serviceName: hostname
    servicePort: 80

After you have copied the file, apply the resource to the cluster:

kubectl apply -f internal-ingress.yaml

Step 5: Validate successful Ingress deployment

It can take several minutes for the Ingress resource to become fully provisioned. During this time, the Ingress controller is creating items such as forwarding rules, backend services, URL maps, and NEGs.

To retrieve the status of your Ingress resource:

kubectl get ingress ilb-demo-ingress

You should see output similar to the following:

NAME               HOSTS    ADDRESS            PORTS     AGE
ilb-demo-ingress   *        10.128.0.58        80        59s

When the ADDRESS field is populated, the Ingress is ready. The use of an RFC 1918 address in this field indicates an internal IP within the VPC. Since the internal HTTP(S) load balancer is a regional load balancer, the VIP is only accessible from a client within the same region and VPC. After retrieving the load balancer VIP, you can use tools (for example, curl) to issue HTTP GET calls against the VIP from inside the VPC.

To reach the Ingress VIP from inside the VPC, deploy a VM within the same region and network as the cluster.

To deploy this VM:

gcloud

gcloud compute instances create l7-ilb-client-us-central1-a \
--image-family=debian-9 \
--image-project=debian-cloud \
--network=default \
--subnet=default \
--zone=us-central1-a \
--tags=allow-ssh

To access the internal VIP from inside the test VM, use curl.

  # SSH in to the VM
  gcloud compute ssh l7-ilb-client-us-central1-a \
      --zone=us-central1-a

  # Use curl to access the internal application VIP
  curl 10.128.0.58
  hostname-server-6696cf5fc8-z4788

The successful HTTP response and hostname of one of the backend containers indicates that the full load balancing path is functioning correctly.
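
Because the Deployment runs three replicas, repeated requests should return different Pod hostnames as the load balancer distributes traffic. A quick loop from the same VM, using the example VIP above, illustrates this.

# Issue several requests; the returned hostnames should vary across replicas.
for i in $(seq 1 5); do
  curl -s 10.128.0.58
  echo
done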

HTTPS between client and load balancer

Ingress for internal load balancing supports the serving of TLS certificates to clients. This can be done through Kubernetes Secrets or through pre-shared regional SSL certificates in Google Cloud. You can also specify multiple certificates per Ingress resource.

The following steps detail how to create a certificate in Google Cloud and then serve it through Ingress to internal clients.

  1. Create the regional certificate:

    gcloud alpha compute ssl-certificates create ingress-cert \
    --certificate cert-file --private-key key-file --region us-central1
    
  2. Create an Ingress. The following ilb-demo-ing.yaml file is a sample of the Ingress resource that you need to create.

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: ilb-demo-ing
      namespace: default
      annotations:
        ingress.gcp.kubernetes.io/pre-shared-cert: "ingress-cert"
        kubernetes.io/ingress.class: "gce-internal"
        kubernetes.io/ingress.allow-http: "false"
    spec:
      rules:
      - host: your-domain
        http:
          paths:
          - backend:
              serviceName: hostname
              servicePort: 80
    
  3. Run the following command to create the Ingress:

    kubectl apply -f ilb-demo-ing.yaml
    
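To serve the certificate from a Kubernetes Secret instead of a pre-shared Google Cloud certificate, create a TLS Secret and reference it from the Ingress spec; the Secret name below is illustrative.

# Create a TLS Secret from the same certificate and key files
kubectl create secret tls ilb-demo-tls --cert=cert-file --key=key-file

# In the Ingress manifest, replace the pre-shared-cert annotation with:
#   spec:
#     tls:
#     - secretName: ilb-demo-tls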

Deleting Ingress

Removing Ingress and Service resources also removes the Compute Engine load balancer resources associated with them. To prevent resource leaks, ensure that Ingress resources are torn down when you no longer need them. Delete Ingress and Service resources before you delete a cluster; otherwise, the Compute Engine load balancing resources are orphaned.

Step 1: Delete the Ingress

Deleting the Ingress removes the forwarding rules, backend services, and URL maps associated with this Ingress resource.

To delete the Ingress:

kubectl delete ingress ilb-demo-ingress

Step 2: Delete the Service

Deleting the Service removes the NEG associated with the Service.

To delete the Service:

kubectl delete service hostname

Summary of internal Ingress annotations

Ingress annotations

  • kubernetes.io/ingress.class: Specified as "gce-internal" for internal Ingress. If the class is not specified, an Ingress resource is interpreted by default as an external Ingress.
  • kubernetes.io/ingress.allow-http: Specifies whether to allow HTTP traffic between the client and the HTTP(S) load balancer. Possible values are "true" and "false". The default is "true", but this must be set to "false" if you are using HTTPS for internal load balancing. See Disabling HTTP.
  • ingress.gcp.kubernetes.io/pre-shared-cert: You can upload certificates and keys to your Google Cloud project. Use this annotation to reference the certificates and keys. See Using multiple SSL certificates in HTTP(S) load balancing.

Service annotations

  • beta.cloud.google.com/backend-config: Use this annotation to configure the backend service associated with a servicePort. See BackendConfig custom resource.
  • cloud.google.com/neg: Use this annotation to specify that the load balancer should use network endpoint groups. See Using Container-native Load Balancing.

Troubleshooting

Understanding and observing the state of Ingress typically involves inspecting the associated resources. The types of issues encountered often include load balancing resources not being created properly, traffic not reaching backends, or backends not appearing healthy. Some common steps in troubleshooting include:

  • Verifying that client traffic is originating from within the same region and VPC as the load balancer.
  • Verifying that the Pods and backends are healthy.
  • Validating that the traffic path to the VIP and the Compute Engine health check traffic are not blocked by firewall rules.
  • Checking the Ingress resource events for errors.
  • Describing the Ingress resource to see the mapping to Compute Engine resources (see the example after this list).
  • Validating that the Compute Engine LB resources exist, have the correct configurations, and do not have errors reported.
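
For example, to review the Ingress events and backend health in one place, describe the Ingress (using the sample name from earlier):

kubectl describe ingress ilb-demo-ingress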

Filtering for Ingress events

The following query lists events for all Ingress resources in your cluster, which you can scan for warnings and errors.

kubectl get events --all-namespaces --field-selector involvedObject.kind=Ingress

Alternatively, you can filter by object kind and object name.

kubectl get events --field-selector involvedObject.kind=Ingress,involvedObject.name=hostname-internal-ingress

In the following example error, you can see that the Service referenced by the Ingress does not exist.

LAST SEEN   TYPE      REASON      OBJECT                              MESSAGE
0s          Warning   Translate   ingress/hostname-internal-ingress   error while evaluating the ingress spec: could not find service "default/hostname-invalid"

Inspecting Compute Engine load balancer resources

The following command displays the full output for the Ingress resource so you can see the mappings to the Compute Engine resources that are created by the Ingress controller.

kubectl get ing ilb-demo-ingress -o yaml

apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.kubernetes.io/backends: '{"k8s1-241a2b5c-default-hostname-80-29269aa5":"HEALTHY"}'
      ingress.kubernetes.io/forwarding-rule: k8s-fw-default-ilb-demo-ingress--241a2b5c94b353ec
      ingress.kubernetes.io/target-proxy: k8s-tp-default-ilb-demo-ingress--241a2b5c94b353ec
      ingress.kubernetes.io/url-map: k8s-um-default-ilb-demo-ingress--241a2b5c94b353ec
      kubectl.kubernetes.io/last-applied-configuration: |
       {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce-internal"},"name":"ilb-demo-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"hostname","servicePort":80}}}
      kubernetes.io/ingress.class: gce-internal
    creationTimestamp: "2019-10-15T02:16:18Z"
    finalizers:
    - networking.gke.io/ingress-finalizer
    generation: 1
    name: ilb-demo-ingress
    namespace: default
    resourceVersion: "1538072"
    selfLink: /apis/networking.k8s.io/v1beta1/namespaces/default/ingresses/ilb-demo-ingress
    uid: 0ef024fe-6aea-4ee0-85f6-c2578f554975
  spec:
    backend:
      serviceName: hostname
      servicePort: 80
  status:
    loadBalancer:
      ingress:
      - ip: 10.128.0.127
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The ingress.kubernetes.io/backends annotation lists the backends and their status. Make sure that your backends are listed as HEALTHY.

The Compute Engine resources created by the Ingress can be queried directly to understand their status and configuration. This can also be helpful when troubleshooting:

To list all Compute Engine forwarding rules:

gcloud compute forwarding-rules list

Expected output:

NAME                                                        REGION       IP_ADDRESS      IP_PROTOCOL  TARGET
k8s-fw-default-hostname-internal-ingress--42084f6a534c335b  us-central1  10.128.15.225   TCP          us-central1/targetHttpProxies/k8s-tp-default-hostname-internal-ingress--42084f6a534c335b

To list the backend services and query the health of their backends:

gcloud compute backend-services list

NAME                                         BACKENDS                                                                                                                                                                                                      PROTOCOL
k8s1-42084f6a-default-hostname-80-98cbc1c1   us-central1-a/networkEndpointGroups/k8s1-42084f6a-default-hostname-80-98cbc1c1                                                                                                                                HTTP
# Query the health of your BackendService
gcloud compute backend-services get-health k8s1-42084f6a-default-hostname-80-98cbc1c1 --region us-central1
---
backend: https://www.googleapis.com/compute/v1/projects/user1-243723/zones/us-central1-a/networkEndpointGroups/k8s1-42084f6a-default-hostname-80-98cbc1c1
status:
  healthStatus:
  - healthState: HEALTHY
