Deploying Gateways


This page describes how to deploy Kubernetes Gateway resources on Google Kubernetes Engine (GKE). It explains how to deploy a private Gateway and an internet-facing Gateway to expose applications, and demonstrates some of the concepts of the Gateway API resource model. To see how Gateways are deployed for multi-cluster load balancing, see Deploying multi-cluster Gateways.

The Gateway API resources are designed to be managed by users in different roles:

  • Cluster operators manage Gateways that can be shared by multiple application teams.
  • Application developers manage HTTPRoutes that bind their application's Services to a Gateway. The examples on this page feature two applications: a store and a site.

This document includes the following steps:

  1. A cluster operator deploys a Gateway resource to expose GKE-hosted applications on the internet. The cluster operator configures TLS and other policies on the Gateway.
  2. Application developers on the store team deploy their application and an HTTPRoute to expose their application through the Gateway.
  3. The site team deploys their application with an HTTPRoute that uses the same Gateway.

GKE Gateway controller requirements

  • GKE version 1.20 or later.
  • The Gateway API is supported by VPC-native (Alias IP) clusters only.
  • If you are using the internal GatewayClasses, a proxy-only subnet must be configured in the cluster's region (an example command for creating one appears after this list).
  • GKE clusters are supported only in the following regions:
    • us-west1
    • us-east1
    • us-central1
    • europe-west4
    • europe-west3
    • europe-west2
    • europe-west1
    • asia-southeast1
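
If you are using the internal GatewayClasses and your VPC network does not yet have a proxy-only subnet in the cluster's region, you can create one with gcloud. This is a minimal sketch; the subnet name, network, region, and IP range are placeholders, and the range must not overlap with other subnets in the VPC network:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=INTERNAL_HTTPS_LOAD_BALANCER \
    --role=ACTIVE \
    --region=REGION \
    --network=NETWORK \
    --range=10.129.0.0/23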

Preview limitations & known issues

While GKE's support of the Gateway API is in Preview, the following limitations apply:

  • The GKE GatewayClasses support different capabilities depending on their underlying load balancer. See the GatewayClass capabilities to learn more about the different features supported across the available GatewayClasses.
  • GKE clusters that have Istio or ASM Gateway resources will conflict with Kubernetes Gateway resources when using kubectl. kubectl get gateway may not return your Istio or ASM Gateway resources as expected. To avoid command-line resource conflicts with Istio Gateways, see Kubernetes Gateways and Istio Gateways.
  • Google Cloud load balancer resources created by Gateways are not currently visible in the Google Cloud Console UI.
  • Viewing Gateway, HTTPRoute, and ServiceExport resources in the GKE UI is not supported.
  • Using the GKE Gateway controller with Kubernetes on Compute Engine (self-managed Kubernetes) is not supported.
  • Terminating TLS traffic using credentials stored in Kubernetes Secrets is not supported; however, referencing Google Cloud SSL certificate resources to terminate TLS is supported.
  • Configuring SSL policies or HTTPS redirects using the FrontendConfig resource is not supported.
  • The BackendConfig resource is not supported for the multi-cluster (-mc) GatewayClasses; however, it is supported for the single-cluster GatewayClasses.
  • The automatic generation of Google-managed SSL certificates is not supported; however, a Google-managed SSL certificate can be created manually and referenced using the networking.gke.io/pre-shared-certs TLS option.
  • Service traffic capacity settings max-rate-per-endpoint, max-rate, and capacity-scaler are not supported.

Install Gateway API CRDs

Before using Gateway resources in GKE you must install the Gateway API Custom Resource Definitions (CRDs) in your cluster.

  1. Run the following command in the GKE cluster where you want to deploy Gateway resources:

    kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.3.0" \
    | kubectl apply -f -
    

    The following CRDs are installed:

    customresourcedefinition.apiextensions.k8s.io/backendpolicies.networking.x-k8s.io created
    customresourcedefinition.apiextensions.k8s.io/gatewayclasses.networking.x-k8s.io created
    customresourcedefinition.apiextensions.k8s.io/gateways.networking.x-k8s.io created
    customresourcedefinition.apiextensions.k8s.io/httproutes.networking.x-k8s.io created
    customresourcedefinition.apiextensions.k8s.io/tcproutes.networking.x-k8s.io created
    customresourcedefinition.apiextensions.k8s.io/tlsroutes.networking.x-k8s.io created
    customresourcedefinition.apiextensions.k8s.io/udproutes.networking.x-k8s.io created
    
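
    To confirm that the CRDs are present in the cluster, you can list them and filter for the Gateway API group:

    kubectl get crd | grep networking.x-k8s.io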

GKE GatewayClasses

After a GKE cluster detects the Gateway API CRDs, the GKE Gateway controller automatically installs the following GKE GatewayClasses.

  1. Check for the available GatewayClasses with the following command:

    kubectl get gatewayclass
    

    This output confirms that the GKE GatewayClasses are ready to use in your cluster:

    NAME          CONTROLLER
    gke-l7-rilb   networking.gke.io/gateway
    gke-l7-gxlb   networking.gke.io/gateway
    

    To understand the capabilities of each GatewayClass, see GatewayClass capabilities.
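
    You can also inspect an individual GatewayClass directly from the cluster, which shows its controller name and any reported conditions:

    kubectl describe gatewayclass gke-l7-rilb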

Deploying an internal Gateway

A Gateway resource represents a data plane that routes traffic in Kubernetes. A Gateway can represent many different kinds of load balancing and routing depending on the GatewayClass it is derived from. To learn more about the Gateway resource, see the Gateway resource description or the API specification.

In this case, the administrator of the GKE cluster wants to create a Gateway that can be used by different teams to expose their applications internally. The administrator deploys the Gateway, and application teams deploy their Routes independently and bind them to this Gateway.

  1. Save the following Gateway manifest to a file named gateway.yaml:

    kind: Gateway
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: internal-http
    spec:
      gatewayClassName: gke-l7-rilb
      listeners:
      - protocol: HTTP
        port: 80
        routes:
          kind: HTTPRoute
          selector:
            matchLabels:
              gateway: internal-http
    

    Explanation of fields:

    • gatewayClassName: gke-l7-rilb specifies the GatewayClass that this Gateway is derived from. gke-l7-rilb corresponds to the regional internal HTTP(S) load balancer.
    • port: 80 specifies that the Gateway exposes only port 80 for listening for HTTP traffic.
    • The selector specifies that the Gateway binds with any HTTPRoutes in the default Namespace that also have the gateway: internal-http label. The concepts of how Routes bind to Gateways using label selectors are covered in Route binding.
  2. Deploy the Gateway in your cluster:

    kubectl apply -f gateway.yaml
    
  3. Validate that the Gateway has deployed correctly. It might take a few minutes for it to deploy all of its resources.

    kubectl describe gateway internal-http
    

    The output resembles the following:

    Name:         internal-http
    Namespace:    default
    ...
    Status:
      Addresses:
        Type:   IPAddress
        Value:  192.168.1.14
      Conditions:
        Last Transition Time:  1970-01-01T00:00:00Z
        Message:               Waiting for controller
        Reason:                NotReconciled
        Status:                False
        Type:                  Scheduled
    Events:
      Type    Reason  Age                From                       Message
      ----    ------  ----               ----                       -------
      Normal  ADD     92s                networking.gke.io/gateway  mark-test/internal-http
      Normal  UPDATE  45s (x3 over 91s)  networking.gke.io/gateway  mark-test/internal-http
      Normal  SYNC    45s                networking.gke.io/gateway  SYNC on mark-test/internal-http was a success
    

    At this point, there is a Gateway deployed in your cluster that has provisioned a load balancer and an IP address. The Gateway has no Routes, however, and so it does not yet know how it should send traffic to backends. Without Routes, all traffic goes to a default backend, which returns an HTTP 404. Next, you deploy an application and Routes, which tell the Gateway how to get to application backends.
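
    You can observe this default behavior before deploying any Routes. From a client with connectivity to your VPC network (for example, a VM in the same region), send a request to the Gateway's IP address; GATEWAY_IP is a placeholder for the address shown in the Gateway status:

    curl -sI -H "host: store.example.com" GATEWAY_IP

    The response status should be HTTP 404 until an HTTPRoute is bound to the Gateway.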

Deploying the demo applications

Application teams can deploy their applications and Routes independently from the deployment of Gateways. In some cases the application team might want to own the Gateway as well and deploy it themselves as a resource dedicated to their applications. See Route binding for different ownership models of Gateways and Routes. In this example however, the store team deploys their application and an accompanying HTTPRoute to expose their app through the internal-http Gateway created in the previous section.

The HTTPRoute resource has many configurable fields for traffic matching. For an explanation of HTTPRoute's fields, see the API specification.

  1. Deploy the store application (store-v1, store-v2, and store-german deployments) to your cluster:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/master/gateway/gke-gateway-controller/app/store.yaml
    

    This creates three Deployments and three Services named store-v1, store-v2, and store-german.

  2. Validate that the application has deployed successfully:

    kubectl get pod
    

    The output resembles the following after the application is running:

    NAME                        READY   STATUS    RESTARTS   AGE
    store-german-66dcb75977-5gr2n   1/1     Running   0          38s
    store-v1-65b47557df-jkjbm       1/1     Running   0          14m
    store-v2-6856f59f7f-sq889       1/1     Running   0          14m
    
  3. Validate that the Services have also been deployed:

    kubectl get service
    

    The output resembles the following, showing a Service for each store Deployment:

    NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
    store-german   ClusterIP   10.48.3.183   <none>        8080/TCP   4s
    store-v1       ClusterIP   10.48.2.224   <none>        8080/TCP   5s
    store-v2       ClusterIP   10.48.4.48    <none>        8080/TCP   5s
    

Container-native load balancing

The GKE Gateway controller uses container-native load balancing by default. This means that load balanced traffic is sent directly to Pod IP addresses. The load balancer has direct visibility of the Pod IP addresses so that traffic does not traverse Kubernetes Service load balancing. This leads to more efficient, stable, and understandable traffic. The gke-l7-* GatewayClasses do not support Instance Group-based load balancing.

GKE Gateways do not require any special user-specified annotations on Services. Any Service that is referenced by an HTTPRoute is annotated automatically by the GKE Gateway Controller with references to the Service's NEGs, like the following example:

Name:              store-v1
Namespace:         default
Annotations:       cloud.google.com/neg: {"exposed_ports":{"8080":{}}}
                   cloud.google.com/neg-status: {"network_endpoint_groups":{"8080":"k8s1-cb368ccb-default-foo-v1-8080-f376ae25"},"zones":["us-central1-a"]}
...
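
After an HTTPRoute references a Service, you can confirm that the annotation was added and that the corresponding NEGs exist. For example, view the Service and then list the NEG resources in your project:

kubectl get service store-v1 -o yaml

gcloud compute network-endpoint-groups list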

Deploying the HTTPRoute

Route resources define protocol-specific rules for mapping traffic from a Gateway to Kubernetes backends. The HTTPRoute resource does HTTP and HTTPS traffic matching and filtering and is supported by all of the gke-l7 GatewayClasses.

In this section, you deploy an HTTPRoute, which programs the Gateway with the routing rules needed to reach your store application.

To deploy an HTTPRoute, follow these steps:

  1. Save the following HTTPRoute manifest to a file named store-route.yaml:

    kind: HTTPRoute
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: store
      labels:
        gateway: internal-http
    spec:
      hostnames:
      - "store.example.com"
      rules:
      - forwardTo:
        - serviceName: store-v1
          port: 8080
      - matches:
        - headers:
            type: Exact
            values:
              env: canary
        forwardTo:
        - serviceName: store-v2
          port: 8080
      - matches:
        - path:
            type: Prefix
            value: /de
        forwardTo:
        - serviceName: store-german
          port: 8080
    
  2. Deploy the HTTPRoute in your cluster:

    kubectl apply -f store-route.yaml
    

The store HTTPRoute is bound to the internal-http Gateway, so these routing rules are configured on the underlying load balancer as shown in the following diagram:

The routing rules configured by the store HTTPRoute

These routing rules will process HTTP traffic in the following manner:

  • Traffic to store.example.com/de goes to Service store-german.
  • Traffic to store.example.com with the HTTP header "env: canary" goes to Service store-v2.
  • The remaining traffic to store.example.com goes to Service store-v1.

Route binding

Gateways use route binding to match Routes with Gateways. When a Route binds to a Gateway, the underlying load balancer or proxy is programmed with the routing rules specified in the Route. Route and Gateway resources have built-in controls to permit or constrain how they select each other, which determines binding.

The store HTTPRoute is bound to the internal-http Gateway because of the matching labels and selectors in each of the resources:

kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: internal-http
spec:
  listeners:
  - routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          gateway: internal-http
---
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: store
  labels:
    gateway: internal-http

The Gateway specifies that it selects any HTTPRoutes that have the label gateway: internal-http and the HTTPRoute has the label gateway: internal-http. This enables the Route to bind to the Gateway, configuring the underlying load balancer with the traffic matching rules in the Route. Many Routes can bind to a single Gateway.

When many Routes bind to the same Gateway, the GKE Gateway controller merges these Routes into a single routing configuration for the underlying load balancer. This is what enables different teams to share the same underlying infrastructure, while controlling their portion of the routing configuration independently.

The GKE Gateway controller implements strict Route merging, precedence, and validation logic, which provides predictable merging between Routes and prevents invalid Routes from disrupting the underlying load balancer configuration. For details, see Route merging, precedence, and validation.

For instructions on how to deploy a second HTTPRoute that also uses the internal-http Gateway, see the Shared Gateways section.

Sending traffic to your application

Now that your Gateway, Route, and application are deployed in your cluster, you can pass traffic to your application.

  1. Validate that the store HTTPRoute has been applied successfully. Events are emitted on the Route when it binds to a Gateway successfully.

    kubectl describe httproute store
    

    The events section of the output includes events like the following if the HTTPRoute bound to the Gateway:

    Events:
    Type    Reason  Age   From                              Message
    ----    ------  ----  ----                              -------
    Normal  ADD     11m   networking.gke.io/gateway         default/store
    Normal  ADD     11m   multi-cluster-ingress-controller  default/store
    
  2. Retrieve the IP address from the Gateway so that you can send traffic to your application:

    kubectl get gateway internal-http -o=jsonpath="{.status.addresses[0].value}"
    

    The output is an IP address.

  3. Send traffic to this IP address from a shell on a virtual machine (VM) instance that has connectivity to the cluster. You can create a VM for this purpose (a sample gcloud command for creating one appears after these steps). This is necessary because the Gateway has an internal IP address and is only accessible from within your VPC network. Because internal-http is a regional load balancer, the client shell must be within the same region as the GKE cluster.

    Because you do not own the example.com hostname, set the host header manually so that the traffic routing can be observed. First try requesting store.example.com:

    curl -H "host: store.example.com" VIP
    

    Replace VIP with the IP address from the previous step.

    The output from the demo app shows information about the location where the app is running:

    {
      "cluster_name": "gke1",
      "host_header": "store.example.com",
      "metadata": "store-v1",
      "node_name": "gke-gke1-pool-2-bd121936-5pfc.c.church-243723.internal",
      "pod_name": "store-v1-84b47c7f58-pmgmk",
      "pod_name_emoji": "💇🏼‍♀️",
      "project_id": "church-243723",
      "timestamp": "2021-03-25T13:31:17",
      "zone": "us-central1-a"
    }
    
  4. Test the path match by going to the German version of the store service at store.example.com/de:

    curl -H "host: store.example.com" VIP/de
    

    The output confirms that the request was served by a store-german Pod:

    {
      "cluster_name": "gke1", 
      "host_header": "store.example.com", 
      "metadata": "Gutentag!", 
      "node_name": "gke-gke1-pool-2-bd121936-n3xn.c.church-243723.internal", 
      "pod_name": "store-german-5cb6474c55-lq5pl", 
      "pod_name_emoji": "🧞‍♀", 
      "project_id": "church-243723", 
      "timestamp": "2021-03-25T13:35:37", 
      "zone": "us-central1-a"
    }
    
  5. Finally, use the env: canary HTTP header to send traffic to the canary version of the store Service:

    curl -H "host: store.example.com" -H "env: canary " VIP
    

    The output confirms that the request was served by a store-v2 Pod:

    {
      "cluster_name": "gke1", 
      "host_header": "store.example.com", 
      "metadata": "store-v2", 
      "node_name": "gke-gke1-pool-2-bd121936-5pfc.c.church-243723.internal", 
      "pod_name": "store-v2-5788476cbd-s9thb", 
      "pod_name_emoji": "👩🏿🦰", 
      "project_id": "church-243723", 
      "timestamp": "2021-03-25T13:38:26", 
      "zone": "us-central1-a"
    }
    
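
If you need a client VM for these tests, one way to create it and connect to it is with gcloud. The VM name is a placeholder, and the zone, network, and subnet must give the VM connectivity to the Gateway's region and VPC network:

gcloud compute instances create gateway-test-client \
    --zone=ZONE \
    --network=NETWORK \
    --subnet=SUBNET

gcloud compute ssh gateway-test-client --zone=ZONE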

Shared Gateways

The Gateway API uses separate resources, Gateways and Routes, to deploy load balancers and routing rules. This differs from Ingress, which combines everything in one resource. By splitting responsibility among resources, the Gateway API enables the load balancer and its routing rules to be deployed separately and by different users or teams. This enables Gateways to become shared Gateways that bind with many different Routes, which can be fully owned and managed by independent teams, even across different Namespaces.

In the previous steps, the "store" team deployed their application, store.example.com. Next, the independent "site" team deploys their application, site.example.com, behind the same internal-http Gateway using the same IP address.

Deploying routes against a shared Gateway

In this example the site team deploys their application, Services, and a corresponding HTTPRoute to match traffic from the Gateway to those Services.

  1. Deploy the example application:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/master/gateway/gke-gateway-controller/app/site.yaml
    
  2. Save the following HTTPRoute manifest to a file named site-route.yaml:

    This site HTTPRoute matches all traffic for site.example.com and routes it to the site-v1 Service. Like the store HTTPRoute, the site HTTPRoute also has the gateway: internal-http label and is in the same Namespace as the internal-http Gateway. The Gateway's spec.listeners[].routes.selector field allows the Gateway to select and bind this Route. The GKE Gateway controller merges these HTTPRoutes into a single underlying URLmap with routes for site.example.com and store.example.com.

    kind: HTTPRoute
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: site
      labels:
        gateway: internal-http
    spec:
      hostnames:
      - "site.example.com"
      rules:
      - forwardTo:
        - serviceName: site-v1
          port: 8080
    
  3. Deploy the HTTPRoute in your cluster:

    kubectl apply -f site-route.yaml
    
  4. With both HTTPRoutes deployed, the routing logic resembles the following diagram. Traffic for site.example.com goes to site-v1 Pods, and traffic for store.example.com goes to store-v1, store-v2, or store-german Pods.

    A picture of the store and site HTTPRoutes bound to the same Gateway

  5. Verify that the HTTPRoute has successfully bound to the internal-http gateway:

    kubectl describe httproute site
    

    The output resembles the following if the HTTPRoute successfully bound to the Gateway.

    Status:
      Gateways:
        Conditions:
          Last Transition Time:  2021-04-19T16:13:46Z
          Message:               Route admitted
          Observed Generation:   1
          Reason:                RouteAdmitted
          Status:                True
          Type:                  Admitted
        Gateway Ref:
          Name:       internal-http
          Namespace:  default
    Events:
      Type    Reason  Age                 From                              Message
      ----    ------  ----                ----                              -------
      Normal  ADD     60m                 networking.gke.io/gateway         foo/foo-route
    

    If the Admitted condition for the internal-http Gateway is True, then this indicates that HTTPRoute/site has successfully bound to internal-http.

    Invalid configuration, references to nonexistent Kubernetes Services, incorrect Gateway labeling, or conflicts with other Routes can all cause an HTTPRoute to be rejected from binding with a Gateway.

    For details on interpreting the status of routes, see the route status section.

  6. After you've validated that the site route has bound successfully, send traffic to the Gateway to confirm that it's being routed correctly. Send traffic from a VM in the same VPC network as the Gateway. Send an HTTP request to both site.example.com and store.example.com to validate the responses:

    curl -H "host: site.example.com" VIP
    curl -H "host: store.example.com" VIP
    

Route status

HTTPRoute resources emit conditions and events to help users understand the status of their applied configuration. Together they indicate whether a Route has successfully bound with one or more Gateways or if it was rejected.

HTTPRoute conditions

HTTPRoute conditions indicate the status of the binding between the Route and each Gateway it is bound to. Because a Route can be bound to multiple Gateways, the status contains a list of Gateways with the conditions for each binding.

  • Admitted=True indicates that the HTTPRoute is successfully bound to a Gateway.
  • Admitted=False indicates that the HTTPRoute has been rejected from binding with this Gateway.

If no Gateways are listed in the Route's status, your Route labels and Gateway label selectors likely do not match. This means that your Route is not selected by any Gateways, so its configuration is not applied anywhere.
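
To inspect these conditions directly instead of reading through kubectl describe output, you can query the Route's status with a jsonpath expression. This assumes the v1alpha1 status layout shown earlier, where bindings appear under status.gateways:

kubectl get httproute site -o=jsonpath="{.status.gateways[*].conditions}"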

HTTPRoute events

HTTPRoute events provide more detail about the status of the Route. There are a few reasons under which events are grouped:

  • ADD events are triggered by a resource being added.
  • UPDATE events are triggered by a resource being updated.
  • SYNC events are triggered by periodic reconciliation.

Deploying an external Gateway

The following example shows how to deploy a Gateway to be used for external internet load balancing. It demonstrates how to configure TLS for the client connection to secure the Gateway. This example uses the gke-l7-gxlb GatewayClass, but these TLS concepts apply equally for all of the gke-l7-* GatewayClasses.

The application deployed previously is a prerequisite for this example. If you have not already done so, go back and deploy this application in your cluster before proceeding.

The gke-l7-gxlb and gke-l7-gxlb-mc GatewayClasses both deploy a global external HTTP(S) load balancer for internet-facing traffic. They have equivalent functionality, except that the gke-l7-gxlb-mc also supports multi-cluster use cases because it is able to load balance to applications across multiple GKE clusters. For a comparison with other GatewayClasses, see GatewayClass capabilities.

TLS between client and Gateway

An external HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer must have a certificate so it can prove its identity to your clients. The load balancer must also have a private key to complete the HTTPS handshake. The certificate and private key are referred to as TLS credentials.

The TLS certificate and key are stored as Google Cloud SSL certificate resources.

The following Gateway manifest shows how TLS is configured:

kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: external-http
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - protocol: HTTPS
    port: 443
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          gateway: external-http
    tls:
      mode: Terminate
      options:
        networking.gke.io/pre-shared-certs: store-example-com

These elements are required in a Gateway that uses TLS:

  • The Gateway listener port and protocol must be set to 443 and HTTPS.
  • The listener.tls.mode field must be set to Terminate.
  • The TLS credentials must be referenced in the listeners.tls block.

The example manifest references an SSL certificate resource named store-example-com. All traffic to port 443 of this Gateway is terminated by this certificate.

Creating and storing a TLS certificate

There are many ways to generate TLS certificates. They can be generated manually on the command line, generated using Google-managed certificates, or generated internally by your company's public key infrastructure (PKI) system. In this example, we manually generate a self-signed certificate. Self-signed certificates are not typically used for public services, but they make these concepts easier to demonstrate.

  1. Follow Step 1: Create a private key and certificate of the self-managed SSL certificates guide. This generates a self-signed certificate and key pair using the command line. Use the following OpenSSL configuration to create a self-signed certificate for store.example.com (an example sequence of OpenSSL commands is sketched after these steps).

    [req]
    default_bits              = 2048
    req_extensions            = extension_requirements
    distinguished_name        = dn_requirements
    prompt                    = no
    
    [extension_requirements]
    basicConstraints          = CA:FALSE
    keyUsage                  = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName            = @sans_list
    
    [dn_requirements]
    0.organizationName        = example
    commonName                = store.example.com
    
    [sans_list]
    DNS.1                     = store.example.com
    

    Save your certificate and key files as cert.pem and key.pem respectively. Do not move on to step 2 of the Using self-managed SSL certificates instructions.

  2. Save your certificate credentials as a global SSL certificate resource:

    gcloud compute ssl-certificates create store-example-com \
        --certificate=cert.pem \
        --private-key=key.pem \
        --global
    

    The GatewayClass name indicates the type of SSL certificate resource that can be used with Gateways of that class. The scope and location of the SSL certificate must match the scope and location of the Gateway that is using it. For example, a global SSL certificate cannot be used by a regional Gateway.

    The following table shows the scope and location requirements for SSL certificate resources used by Gateways:

    GatewayClasses                 SSL certificate scope       SSL certificate location
    gke-l7-rilb, gke-l7-rilb-mc    Regional SSL certificate    Must be the same region as the Gateway
    gke-l7-gxlb, gke-l7-gxlb-mc    Global SSL certificate      Global
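
The following commands sketch one way to complete step 1 with OpenSSL, assuming the configuration above is saved to a file named ssl-config.cnf (a hypothetical name); refer to the linked self-managed SSL certificates guide for the authoritative steps:

openssl genrsa -out key.pem 2048

openssl req -new -key key.pem -out cert.csr -config ssl-config.cnf

openssl x509 -req -signkey key.pem -in cert.csr -out cert.pem \
    -extfile ssl-config.cnf -extensions extension_requirements -days 365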

Deploying the Gateway

  1. Save the following Gateway manifest, which uses the gke-l7-gxlb GatewayClass, as external-gateway.yaml:

    kind: Gateway
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: external-http
    spec:
      gatewayClassName: gke-l7-gxlb
      listeners:
      - protocol: HTTPS
        port: 443
        routes:
          kind: HTTPRoute
          selector:
            matchLabels:
              gateway: external-http
        tls:
          mode: Terminate
          options:
            networking.gke.io/pre-shared-certs: store-example-com
    

    This external Gateway manifest differs from the earlier internal Gateway example in several ways:

    • It uses the gke-l7-gxlb GatewayClass, which deploys an external HTTP(S) load balancer.
    • Its port and protocol are set to 443 and HTTPS.
    • The Route label selector is gateway: external-http, which binds this Gateway to a different set of Routes than the previous Gateway, which was using gateway: internal-http as its Route label selector.
    • The tls section of the manifest is configured to terminate TLS using an SSL certificate resource.
  2. Deploy this Gateway in your GKE cluster:

    kubectl apply -f external-gateway.yaml
    
  3. Save the following HTTPRoute manifest as store-external-route.yaml:

    kind: HTTPRoute
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: store-external
      labels:
        gateway: external-http
    spec:
      hostnames:
      - "store.example.com"
      rules:
      - forwardTo:
        - serviceName: store-v1
          port: 8080
    
  4. Deploy the HTTPRoute in your cluster:

    kubectl apply -f store-external-route.yaml
    

    It might take several minutes for the gke-l7-gxlb Gateway to fully deploy.

  5. Verify that the Gateway works by sending a request over the internet.

    1. Save the cert.pem file that you generated earlier to the machine that you are using to connect to the Gateway. Because the Gateway uses a self-signed certificate, this certificate is needed to authenticate the Gateway.

    2. Get the IP address of the load balancer:

      kubectl get gateway external-http -o=jsonpath="{.status.addresses[0].value}"
      

      The command outputs the IP address of the load balancer. This is a public IP address, so any client with internet access can connect to it.

    3. Use curl to access the domain of the Gateway. Because DNS is not configured for this domain, use the --resolve option to tell curl to resolve the domain name to the IP address of the Gateway:

      curl https://store.example.com --resolve store.example.com:443:VIP --cacert cert.pem -v
      

      Replace VIP with the load balancer IP address.

    4. curl's verbose output includes a successful TLS handshake followed by a response from the application like the following output. This proves that TLS is being terminated at the Gateway correctly and that the application is responding to the client securely.

      ...
      * TLSv1.2 (OUT), TLS handshake, Client hello (1):
      * TLSv1.2 (IN), TLS handshake, Server hello (2):
      * TLSv1.2 (IN), TLS handshake, Certificate (11):
      * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
      * TLSv1.2 (IN), TLS handshake, Server finished (14):
      * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
      * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
      * TLSv1.2 (OUT), TLS handshake, Finished (20):
      * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
      * TLSv1.2 (IN), TLS handshake, Finished (20):
      * SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
      * ALPN, server accepted to use h2
      * Server certificate:
      *  subject: O=example; CN=store.example.com
      *  start date: Apr 19 15:54:50 2021 GMT
      *  expire date: Apr 19 15:54:50 2022 GMT
      *  common name: store.example.com (matched)
      *  issuer: O=example; CN=store.example.com
      *  SSL certificate verify ok.
      ...
      {
        "cluster_name": "gw",
        "host_header": "store.example.com",
        "metadata": "store-v1",
        "node_name": "gke-gw-default-pool-51ccbf30-yya8.c.agmsb-k8s.internal",
        "pod_name": "store-v1-84b47c7f58-tj5mn",
        "pod_name_emoji": "😍",
        "project_id": "agmsb-k8s",
        "timestamp": "2021-04-19T16:30:08",
        "zone": "us-west1-a"
      }
      

BackendConfig with Gateway

The gke-l7 GatewayClasses support the BackendConfig resource to customize backend settings on a per-Service level. This requires the cloud.google.com/backend-config Service annotation to reference the BackendConfig resource. In the following example, the load balancer health check and connection draining settings are customized for the store-v1 Service.

apiVersion: v1
kind: Service
metadata:
  name: store-v1
  annotations:
    cloud.google.com/backend-config: '{"default": "store-backendconfig"}'
spec:
  selector:
    app: store
    version: v1
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: store-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 15020
    type: HTTPS
    requestPath: /healthz
  connectionDraining:
    drainingTimeoutSec: 60

See Configuring Ingress features through BackendConfig parameters for more detail on how to use the BackendConfig resource.
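
To try this out, you could save the Service and BackendConfig manifest above to a file (for example, store-backendconfig.yaml, a hypothetical name), apply it, and then confirm that the load balancer's health check reflects the new settings:

kubectl apply -f store-backendconfig.yaml

gcloud compute health-checks list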

Gateway IP addressing

Every Gateway has an IP address on which it listens for traffic. If no address is specified on the Gateway, an IP address is provisioned automatically. Static addresses can also be pre-provisioned so that the lifecycle of the IP address is independent of the Gateway.

After a Gateway is deployed, its IP address shows in the status field:

kind: Gateway
...
status:
  addresses:
    - value: 10.15.32.3

Depending on the GatewayClass, the IP address is allocated from the following subnets:

GatewayClasses                 Default IP Address Pool
gke-l7-rilb, gke-l7-rilb-mc    Regional private IP addresses from the GKE cluster node subnet range
gke-l7-gxlb, gke-l7-gxlb-mc    Global public IP addresses from Google's public IP ranges

You can specify an IP address in two ways:

  • addresses.IPAddress: lets you specify an IP address at the time of Gateway deployment. The IP address is configurable, rather than automatically provisioned, but the IP address has the same lifecycle as the Gateway and is released if the Gateway is deleted.

  • addresses.NamedAddress: lets you specify an IP address independently of the Gateway. You can create a static IP address resource prior to Gateway deployment and the resource is referenced by the NamedAddress. You can reuse the static IP address even if the Gateway is deleted.

You can configure an IPAddress by specifying the IP address in the addresses field of the Gateway you deploy.

  1. Deploy the following internal Gateway and specify 10.0.0.3 as the static IP address:

    kind: Gateway
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: internal-http
    spec:
      gatewayClassName: gke-l7-rilb
      listeners:
      - protocol: HTTP
        port: 80
        routes:
          kind: HTTPRoute
          selector:
            matchLabels:
              gateway: internal-gateway
      addresses:
      - type: IPAddress
        value: 10.0.0.3
    

    A NamedAddress requires that you provision a static IP outside of the Gateway deployment. This requires two separate steps:

  2. Create a static IP address resource. In this case an internal, regional Gateway is deployed so a corresponding internal, regional IP address is needed.

    gcloud compute addresses create ADDRESS_NAME \
        --region REGION \
        --subnet SUBNET \
        --project PROJECT_ID
    

    When using regional Gateways, replace ADDRESS_NAME, SUBNET, PROJECT_ID, and REGION with a name for the address and the subnet, project, and region where your GKE cluster is running. Global, external Gateways do not require a region or subnet specification (see the example after these steps).

  3. Deploy your Gateway by referencing the same address resource name as the NamedAddress.

    kind: Gateway
    apiVersion: networking.x-k8s.io/v1alpha1
    metadata:
      name: internal-http
    spec:
      gatewayClassName: gke-l7-rilb
      listeners:
      - protocol: HTTP
        port: 80
        routes:
          kind: HTTPRoute
          selector:
            matchLabels:
              gateway: internal-gateway
      addresses:
      - type: NamedAddress
        value: ADDRESS_NAME
    
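
For an external Gateway, such as one using the gke-l7-gxlb GatewayClass, the NamedAddress would reference a global static IP address rather than a regional one. A minimal sketch of reserving such an address, with ADDRESS_NAME and PROJECT_ID as placeholders:

gcloud compute addresses create ADDRESS_NAME \
    --global \
    --project PROJECT_ID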

Route merging, precedence, and validation

Route precedence

The Gateway API defines strict precedence rules for how traffic is matched by Routes that have overlapping routing rules. The precedence between two overlapping HTTPRoutes is as follows:

  1. Hostname merge: The longest/most specific hostname match.
  2. Path merge: The longest/most specific path match.
  3. Header merge: The largest number of HTTP headers that match.
  4. Conflict: If the previous three rules don't establish precedence, then precedence goes to the HTTPRoute resource with the oldest timestamp.

Route merging

For gke-l7 GatewayClasses, all HTTPRoutes for a given Gateway are merged into the same URL map resource. How the HTTPRoutes are merged together depends on the type of overlap between HTTPRoutes. The HTTPRoute from the earlier example can be split into three separate HTTPRoutes to illustrate route merging and precedence:

  1. Route merge: All three HTTPRoutes bind with the same internal-http Gateway, so they will be merged together.
  2. Hostname merge: All three Routes match for store.example.com, so their hostname rules are merged.
  3. Path merge: store-german-route has a more specific path /de, so this is not merged further. store-v1-route and store-v2-route both match on the same /* path as well, so they are merged on the path.
  4. Header merge: store-v2-route has a more specific set of HTTP header matches than store-v1-route, so they are not merged further.
  5. Conflict: Because the Routes are able to be merged on hostname, path, and headers, there are no conflicts, and all of the routing rules will apply to traffic.

The single HTTPRoute used in the earlier example is equivalent to these three separate routes:

kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: store-v1-route
  labels:
    gateway: internal-http
spec:
  hostnames:
  - "store.example.com"
  rules:
  - forwardTo:
    - serviceName: store-v1
      port: 8080
---
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: store-v2-route
  labels:
    gateway: internal-http
spec:
  hostnames:
  - "store.example.com"
  rules:
  - matches:
    - headers:
        type: Exact
        values:
          env: canary
    forwardTo:
    - serviceName: store-v2
      port: 8080
---
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: store-german-route
  labels:
    gateway: internal-http
spec:
  hostnames:
  - "store.example.com"
  rules:
  - matches:
    - path:
        type: Prefix
        value: /de
    forwardTo:
    - serviceName: store-german
      port: 8080

Gateway default backend

All of the gke-l7 GatewayClasses have an implicit default backend that returns an HTTP 404 to unmatched traffic. You can customize the default backend by deploying an explicit default Route that sends unmatched traffic to a user-provided Service. The following HTTPRoute is an example of how to customize the default backend. If applied, it takes precedence over the implicit default backend:

kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: custom-default-backend
  labels:
    gateway: my-gateway
spec:
  rules:
  - forwardTo:
    - serviceName: my-custom-default-backend-service
      port: 8080

This HTTPRoute matches all traffic from a particular Gateway. There can be only one such HTTPRoute per Gateway; otherwise the Routes conflict and the precedence ordering applies.

An explicit default backend is also an effective way to prevent someone from accidentally creating a default Route that black-holes traffic for a Gateway. Because an explicit default HTTPRoute is the older resource, it always takes precedence over any newer HTTPRoutes that have conflicting routing rules.

Kubernetes Gateways and Istio Gateways

Note that the Kubernetes Gateway API and the Istio API both have a resource named Gateway. While they perform similar functions, they are not the same resource. If you are using Istio and the Gateway API in the same Kubernetes cluster, these names will overlap when using kubectl on the command line. kubectl get gateway might return the Kubernetes Gateway resources and not the Istio Gateway resources or vice versa.

$ kubectl api-resources
NAME       SHORTNAMES   APIGROUP                       NAMESPACED   KIND
gateways   gw           networking.istio.io/v1beta1            true         Gateway
gateways   gtw          networking.x-k8s.io/v1alpha1           true         Gateway

For GKE 1.20 and later clusters, the GKE Gateway controller automatically installs the Gateway.networking.x-k8s.io/v1alpha1 resource. If you are using Istio and upgrade to GKE 1.20 or later, it is recommended that you start using the Gateway resource short name or specify the API group. The short name for a Kubernetes Gateway is gtw and the short name for an Istio Gateway is gw. The following commands return the Kubernetes Gateway and Istio Gateway resources respectively.

# Kubernetes Gateway
$ kubectl get gtw
NAME                        CLASS
multi-cluster-gateway       gke-l7-gxlb-mc

$ kubectl get gateway.networking.x-k8s.io
NAME                        CLASS
multi-cluster-gateway       gke-l7-gxlb-mc

# Istio Gateway
$ kubectl get gw
NAME               AGE
bookinfo-gateway   64m

$ kubectl get gateway.networking.istio.io
NAME               AGE
bookinfo-gateway   64m