This page shows you how to set up and use Ingress for Internal HTTP(S) Load Balancing in Google Kubernetes Engine (GKE). Ingress for Internal HTTP(S) Load Balancing provides built-in support for internal load balancing through the GKE Ingress controller.
To learn more about which features are supported for Ingress for Internal HTTP(S) Load Balancing, see Ingress features. You can also learn more about how Ingress for Internal HTTP(S) Load Balancing works in Ingress for Internal HTTP(S) Load Balancing.
Before you begin
Before you can deploy load balancer resources through the Kubernetes Ingress API, you must prepare your networking environment so that the load balancer proxies can be deployed in a given region. To learn how to deploy the proxy-only subnet, see configuring the network and subnets.
You must complete this step before you deploy Ingress for Internal HTTP(S) Load Balancing.
To learn more about why Ingress for Internal HTTP(S) Load Balancing requires you to use a proxy-only subnet, see Required networking environment.
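For reference, reserving a proxy-only subnet as described in that guide generally takes the following shape. This is a sketch with example values; the subnet name, region, network, and IP range must match your environment:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=INTERNAL_HTTPS_LOAD_BALANCER \
    --role=ACTIVE \
    --region=us-central1 \
    --network=default \
    --range=10.129.0.0/23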
Requirements
Ingress for Internal HTTP(S) Load Balancing has the following requirements:
- Your cluster must use a GKE version later than 1.16.5-gke.10.
- Your cluster must run in VPC-Native (Alias IP) mode. For more information, see VPC-native clusters.
- Your cluster must have the HTTP load balancing add-on enabled. Clusters have HTTP load balancing enabled by default; do not disable the add-on. You can verify the version and add-on requirements with the command after this list.
- You need to use Network Endpoint Groups (NEGs) as backends for your Service.
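To check these requirements on an existing cluster, you can describe it with gcloud. This is a quick sketch; CLUSTER_NAME and ZONE are placeholders for your own values:

gcloud container clusters describe CLUSTER_NAME \
    --zone=ZONE \
    --format="value(currentMasterVersion, ipAllocationPolicy.useIpAliases, addonsConfig.httpLoadBalancing)"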
Deploying Ingress for Internal HTTP(S) Load Balancing
The following exercises show you how to deploy Ingress for Internal HTTP(S) Load Balancing:
- Create a cluster.
- Deploy an application.
- Deploy a Service.
- Deploy Ingress.
- Validate the deployment.
- Delete Ingress resources.
Creating a cluster
In this section, you create a cluster that you can use with Ingress for
Internal HTTP(S) Load Balancing. You can create this cluster using the gcloud
command-line tool or
the Google Cloud Console.
gcloud
Create a cluster with the following fields:
gcloud container clusters create CLUSTER_NAME \
    --cluster-version=VERSION_NUMBER \
    --enable-ip-alias \
    --zone=ZONE \
    --network=NETWORK
Replace the following:
- CLUSTER_NAME: add a name for your cluster.
- VERSION_NUMBER: add a version that is later than 1.16.5-gke.10.
  You can also use the --release-channel flag to select a release channel with a default version later than 1.16.5-gke.10.
  The --enable-ip-alias flag turns on alias IP. Clusters using Ingress for Internal HTTP(S) Load Balancing must run in VPC-Native (Alias IP) mode. For more information, see Creating a VPC-native cluster.
- ZONE: add a zone to create your cluster in. You must choose a zone in the same region as the proxy-subnet that you created for your internal HTTP(S) load balancer in configuring the network and subnets.
- NETWORK: add the name of the network that you want the cluster to be created in. This network must be in the same VPC network as the proxy-subnet.
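As a concrete illustration, the command with example values substituted might look like the following. The cluster name, version, zone, and network here are placeholders only; any version later than 1.16.5-gke.10 works:

gcloud container clusters create ilb-cluster \
    --cluster-version=1.16.8-gke.15 \
    --enable-ip-alias \
    --zone=us-central1-a \
    --network=default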
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Click Create.
In the Cluster basics section, complete the following:
- Enter the Name for your cluster.
- For the Location type, select a zone for your cluster. You must choose a zone in the same region as the proxy-subnet that you created for your internal HTTP(S) load balancer in configuring the network and subnets.
- Under Master Version, select a Static version later than 1.16.5-gke.10 or select a Release channel with a default version later than 1.16.5-gke.10.
From the navigation pane, under Cluster, click Networking.
- From the Network drop-down list, select the network that you want the cluster to be created in. This network must be in the same VPC network as the proxy-subnet.
- Under Advanced networking options, ensure Enable VPC-native traffic routing (uses alias IP) and Enable HTTP load balancing are selected.
Click Create.
Deploying a web application
In this section, you create a Deployment.
To create a Deployment:
Copy and save the following Deployment resource into a file named web-deployment.yaml:

# web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hostname
  name: hostname-server
spec:
  selector:
    matchLabels:
      app: hostname
  minReadySeconds: 60
  replicas: 3
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - image: k8s.gcr.io/serve_hostname:v1.4
        name: hostname-server
        ports:
        - containerPort: 9376
          protocol: TCP
      terminationGracePeriodSeconds: 90
This Deployment uses a container image that runs an HTTP server listening on port 9376. The Deployment also manages the Pods for your application. Each Pod runs one application container with an HTTP server that returns the hostname of the application server as the response. The default hostname of a Pod is the name of the Pod. The container also handles graceful termination.
After you have created your Deployment file, apply the resource to the cluster:
kubectl apply -f web-deployment.yaml
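To confirm that the Deployment rolled out and that all three replicas are serving, you can check its status. This uses the names from the manifest above:

kubectl rollout status deployment/hostname-server
kubectl get pods -l app=hostname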
Deploying a Service as a Network Endpoint Group (NEG)
In this section, you create a Service resource. The Service selects the backend containers by their labels so that the Ingress controller can program them as backend endpoints. Ingress for Internal HTTP(S) Load Balancing requires you to use NEGs as backends. The feature does not support Instance Groups as backends. Because NEG backends are required, the following NEG annotation is required when you deploy Services that are exposed through Ingress:
annotations:
cloud.google.com/neg: '{"ingress": true}'
Your Service is automatically annotated with
cloud.google.com/neg: '{"ingress": true}'
when all of the following
conditions are true:
- You created the Service in GKE clusters 1.17.6-gke.7 and later.
- You are using VPC-native clusters.
- You are not using a Shared VPC.
- You are not using GKE Network Policy.
Using NEGs allows the Ingress controller to perform container-native load balancing. Traffic is load balanced from the Ingress proxy directly to the Pod IP as opposed to traversing the node IP or kube-proxy networking. In addition, Pod readiness gates are implemented to determine the health of Pods from the perspective of the load balancer and not only the Kubernetes readiness and liveness checks. Pod readiness gates ensure that traffic is not dropped during lifecycle events such as Pod startup, Pod loss, or node loss.
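After the Service and Ingress in the following sections are deployed, you can observe these readiness gates directly. kubectl get pods -o wide prints a READINESS GATES column, which for NEG-backed Pods shows the cloud.google.com/load-balancer-neg-ready gate (the label selector here matches the Deployment above):

kubectl get pods -l app=hostname -o wide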
If you do not include a NEG annotation, you receive a warning on the Ingress object that prevents you from configuring the internal HTTP(S) load balancer. A Kubernetes event is also generated on the Ingress if the NEG annotation is not included. The following message is an example of the event message:
Message
-------
error while evaluating the ingress spec: could not find port "8080" in service "default/no-neg-svc"
A NEG is not created until an Ingress references the Service. The NEG does not appear in Compute Engine until both the Ingress and its referenced Service exist. NEGs are a zonal resource; for multi-zonal clusters, one NEG is created per Service per zone.
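Because the NEGs exist only after the Ingress is created, a quick way to confirm them later is to list the NEGs in Compute Engine; the names in the output are generated by GKE:

gcloud compute network-endpoint-groups list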
To create a Service:
Copy and save the following Service resource into a file named web-service.yaml:

# web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hostname
  namespace: default
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
  - name: host1
    port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
  type: NodePort
After you have created your Service file, apply the resource to the cluster:
kubectl apply -f web-service.yaml
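To confirm that the NEG annotation is present on the deployed Service, you can print its annotations. Once an Ingress references the Service, GKE also adds a cloud.google.com/neg-status annotation naming the NEGs it created. This command is a quick check:

kubectl get service hostname \
    -o jsonpath='{.metadata.annotations}'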
Deploying Ingress
In this section, you create an Ingress resource that triggers the deployment of Compute Engine load balancing through the Ingress controller. An Ingress for Internal HTTP(S) Load Balancing requires the following annotation:
annotations:
kubernetes.io/ingress.class: "gce-internal"
To create an Ingress:
Copy and save the following Ingress resource into a file named internal-ingress.yaml:

# internal-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  backend:
    serviceName: hostname
    servicePort: 80
After you have created your Ingress file, apply the resource to the cluster:
kubectl apply -f internal-ingress.yaml
Validating a successful Ingress deployment
In this section, you verify that your deployment was successful.
It can take several minutes for the Ingress resource to become fully provisioned. During this time, the Ingress controller creates items such as forwarding rules, backend services, URL maps, and NEGs.
To retrieve the status of your Ingress resource that you created in the previous section, run the following command:
kubectl get ingress ilb-demo-ingress
You should see output similar to the following example:
NAME HOSTS ADDRESS PORTS AGE
ilb-demo-ingress * 10.128.0.58 80 59s
When the ADDRESS field is populated, the Ingress is ready. The use of an RFC 1918 address in this field indicates an internal IP within the VPC.
Since the internal HTTP(S) load balancer is a regional load balancer, the virtual IP (VIP) is only accessible from a client within the same region and VPC. After retrieving the load balancer VIP, you can use tools (for example, curl) to issue HTTP GET calls against the VIP from inside the VPC.

To issue an HTTP GET call, complete the following steps:
To reach your Ingress VIP from inside the VPC, deploy a VM within the same region and network as the cluster.
For example, if you followed the previous steps for creating your Deployment, Service, and Ingress and created your cluster in the default network and in the us-central1-a zone, you could use the following command:

gcloud compute instances create l7-ilb-client-us-central1-a \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --network=default \
    --subnet=default \
    --zone=us-central1-a \
    --tags=allow-ssh
To learn more about creating instances, see Creating and starting a VM instance.
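The allow-ssh tag on the VM only has an effect if a matching firewall rule exists. If your network does not already allow SSH, a minimal rule might look like the following sketch; the rule name and network are example values:

gcloud compute firewall-rules create allow-ssh \
    --network=default \
    --allow=tcp:22 \
    --target-tags=allow-ssh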
To access the internal VIP from inside the VM, use curl:

SSH in to the VM that you created in the previous step:

gcloud compute ssh l7-ilb-client-us-central1-a \
    --zone=us-central1-a

Use curl to access the internal application VIP:

curl 10.128.0.58
hostname-server-6696cf5fc8-z4788
The successful HTTP response and hostname of one of the backend containers indicates that the full load balancing path is functioning correctly.
Deleting Ingress resources
Removing Ingress and Service resources also removes the Compute Engine load balancer resources associated with them. To prevent resource leaking, ensure that Ingress resources are torn down when you no longer need them. You must also delete Ingress and Service resources before you delete clusters or else the Compute Engine load balancing resources are orphaned.
To remove an Ingress, complete the following steps:
Delete the Ingress. For example, to delete the Ingress you created in this page, run the following command:
kubectl delete ingress ilb-demo-ingress
Deleting the Ingress removes the forwarding rules, backend services, and URL maps associated with this Ingress resource.
Delete the Service. For example, to delete the Service you created in this page, run the following command:
kubectl delete service hostname
Deleting the Service removes the NEG associated with the Service.
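To confirm that the Compute Engine resources were released, you can list the forwarding rules and backend services again; entries for the deleted Ingress should no longer appear:

gcloud compute forwarding-rules list
gcloud compute backend-services list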
Static IP addressing
Internal Ingress resources support both static and ephemeral IP addressing. If an IP address is not specified, an available IP address is allocated automatically from the GKE node subnet. However, the Ingress resource does not provision IP addresses from the proxy-only subnet, because that subnet is used only for internal proxy consumption. These ephemeral IP addresses are allocated to the Ingress only for the lifecycle of the internal Ingress resource. If you delete your Ingress and create a new Ingress from the same manifest file, you are not guaranteed to get the same IP address.
If you want a permanent IP address that's independent from the lifecycle of the
internal Ingress resource, you must reserve a regional static internal IP
address. You can then specify a static IP address by using the
kubernetes.io/ingress.regional-static-ip-name
annotation on your Ingress
resource.
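Reserving such an address might look like the following sketch; the address name, region, and subnet here are example values, and any additional flags your environment requires are not shown:

gcloud compute addresses create my-static-ip \
    --region=us-central1 \
    --subnet=default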
The following example shows you how to add this annotation:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.regional-static-ip-name: STATIC_IP_NAME
    kubernetes.io/ingress.class: "gce-internal"
Replace STATIC_IP_NAME with the name of a static IP address resource that meets the following criteria:
- Create the static IP before deploying the Ingress. A load balancer does not deploy until the static IP exists, and referencing a non-existent IP address resource does not create a static IP.
- Reference the Google Cloud IP address resource by its name, rather than its IP address.
- The IP address must be from a subnet in the same region as the GKE cluster. You can use any available private subnet within the region (with the exception of the proxy-only subnet). Different Ingress resources can also have addresses from different subnets.
HTTPS between client and load balancer
Ingress for internal load balancing supports the serving of TLS certificates to clients. You can serve TLS certificates through Kubernetes Secrets or through pre-shared regional SSL certificates in Google Cloud. You can also specify multiple certificates per Ingress resource.
The following steps detail how to create a certificate in Google Cloud and then serve it through Ingress to internal clients:
Create the regional certificate:
gcloud compute ssl-certificates create CERT_NAME \
    --certificate CERT_FILE_PATH \
    --private-key KEY_FILE_PATH \
    --region REGION
Replace the following:
- CERT_NAME: add a name for your certificate.
- CERT_FILE_PATH: add the path to your local certificate file to create a self-managed certificate. The certificate must be in PEM format.
- KEY_FILE_PATH: add the path to a local private key file. The private key must be in PEM format and must use RSA or ECDSA encryption.
- REGION: add a region for your certificate.
Create an Ingress. The following YAML file, named ingress-pre-shared-cert.yaml, is an example of the Ingress resource that you need to create:

# ingress-pre-shared-cert.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ilb-demo-ing
  namespace: default
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "CERT_NAME"
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - host: DOMAIN
    http:
      paths:
      - backend:
          serviceName: SERVICE_NAME
          servicePort: 80
Replace the following:
- DOMAIN: add your domain.
- CERT_NAME: add the name of the certificate you created in the previous section.
- SERVICE_NAME: add the name of your Service.
After you have created the Ingress, apply the resource to the cluster:
kubectl apply -f ingress-pre-shared-cert.yaml
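To test the HTTPS path from a VM in the same region and VPC, you can pin your domain to the Ingress VIP with curl's --resolve flag. This is a sketch; DOMAIN and VIP are placeholders for the host in your Ingress rule and the ADDRESS shown by kubectl get ingress:

# -k skips certificate verification, which is useful with
# self-signed test certificates.
curl -k https://DOMAIN --resolve DOMAIN:443:VIP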
Summary of internal Ingress annotations
The following tables show you the annotations that you can add when you are creating Ingress and Service resources for Ingress for Internal HTTP(S) Load Balancing.
Ingress annotations
Annotation | Description
---|---
kubernetes.io/ingress.class | Specified as "gce-internal" for internal Ingress. If the class is not specified, an Ingress resource is interpreted by default as an external Ingress.
kubernetes.io/ingress.allow-http | Specifies whether to allow HTTP traffic between the client and the HTTP(S) load balancer. Possible values are true and false. The default value is true, but you must set this annotation to false if you are using HTTPS for internal load balancing. For more information, see Disabling HTTP.
ingress.gcp.kubernetes.io/pre-shared-cert | You can upload certificates and keys to your Google Cloud project. Use this annotation to reference the certificates and keys. For more information, see Using multiple SSL certificates in HTTP(S) load balancing.
Service annotations related to Ingress
Annotation | Description
---|---
cloud.google.com/backend-config | Use this annotation to configure the backend service associated with a servicePort. For more information, see Ingress features.
cloud.google.com/neg | Use this annotation to specify that the load balancer should use network endpoint groups. For more information, see Using Container-native Load Balancing.
Troubleshooting
Understanding and observing the state of Ingress typically involves inspecting the associated resources. The types of issues encountered often include load balancing resources not being created properly, traffic not reaching backends, or backends not appearing healthy.
Some common troubleshooting steps include:
- Verifying that client traffic is originating from within the same region and VPC as the load balancer.
- Verifying that the Pods and backends are healthy.
- Validating that the traffic path to the VIP, and the paths used by Compute Engine health checks, are not blocked by firewall rules (see the example after this list).
- Checking the Ingress resource events for errors.
- Describing the Ingress resource to see the mapping to Compute Engine resources.
- Validating that the Compute Engine load balancing resources exist, have the correct configurations, and do not have errors reported.
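For the firewall check, Google Cloud health check probes originate from 130.211.0.0/22 and 35.191.0.0/16, so you can list the rules that admit those ranges. This is a sketch; adjust the network filter to match your environment:

gcloud compute firewall-rules list \
    --filter="network:default AND sourceRanges:(130.211.0.0/22 35.191.0.0/16)"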
Filtering for Ingress events
The following query filters for errors across all Ingress events in your cluster:
kubectl get events --all-namespaces --field-selector involvedObject.kind=Ingress
You can also filter by objects or object names:
kubectl get events --field-selector involvedObject.kind=Ingress,involvedObject.name=hostname-internal-ingress
In the following error, you can see that the Service referenced by the Ingress does not exist:
LAST SEEN TYPE REASON OBJECT MESSAGE
0s Warning Translate ingress/hostname-internal-ingress error while evaluating the ingress spec: could not find service "default/hostname-invalid"
Inspecting Compute Engine load balancer resources
The following command displays the full output for the Ingress resource so that you can see the mappings to the Compute Engine resources that are created by the Ingress controller:
kubectl get ing INGRESS_NAME -o yaml

Replace INGRESS_NAME with the name of your Ingress resource.
You should see output similar to the following example:
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.kubernetes.io/backends: '{"k8s1-241a2b5c-default-hostname-80-29269aa5":"HEALTHY"}'
      ingress.kubernetes.io/forwarding-rule: k8s-fw-default-ilb-demo-ingress--241a2b5c94b353ec
      ingress.kubernetes.io/target-proxy: k8s-tp-default-ilb-demo-ingress--241a2b5c94b353ec
      ingress.kubernetes.io/url-map: k8s-um-default-ilb-demo-ingress--241a2b5c94b353ec
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce-internal"},"name":"ilb-demo-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"hostname","servicePort":80}}}
      kubernetes.io/ingress.class: gce-internal
    creationTimestamp: "2019-10-15T02:16:18Z"
    finalizers:
    - networking.gke.io/ingress-finalizer
    generation: 1
    name: ilb-demo-ingress
    namespace: default
    resourceVersion: "1538072"
    selfLink: /apis/networking.k8s.io/v1beta1/namespaces/default/ingresses/ilb-demo-ingress
    uid: 0ef024fe-6aea-4ee0-85f6-c2578f554975
  spec:
    backend:
      serviceName: hostname
      servicePort: 80
  status:
    loadBalancer:
      ingress:
      - ip: 10.128.0.127
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
The ingress.kubernetes.io/backends annotation lists the backends and their status. Make sure that your backends are listed as HEALTHY.
The Compute Engine resources created by the Ingress can be queried directly to understand their status and configuration. Running these queries can also be helpful when troubleshooting.
To list all Compute Engine forwarding rules:
gcloud compute forwarding-rules list
Example output:
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
k8s-fw-default-hostname-internal-ingress--42084f6a534c335b us-central1 10.128.15.225 TCP us-central1/targetHttpProxies/k8s-tp-default-hostname-internal-ingress--42084f6a534c335b
To check the health of a backend service, first list the backend services and copy the name of the one you want to inspect:
gcloud compute backend-services list
Example output:
NAME BACKENDS PROTOCOL
k8s1-42084f6a-default-hostname-80-98cbc1c1 us-central1-a/networkEndpointGroups/k8s1-42084f6a-default-hostname-80-98cbc1c1 HTTP
You can now use the backend service name to query its health:
gcloud compute backend-services get-health k8s1-42084f6a-default-hostname-80-98cbc1c1 \
--region us-central1
Example output:
backend: https://www.googleapis.com/compute/v1/projects/user1-243723/zones/us-central1-a/networkEndpointGroups/k8s1-42084f6a-default-hostname-80-98cbc1c1
status:
healthStatus:
- healthState: HEALTHY
What's next
Read a conceptual overview of Ingress for HTTP(S) load balancing in GKE.
Read a conceptual overview of Services in GKE.
Learn how to create a Compute Engine internal TCP/UDP load balancer on GKE.