Deploying Gateways

This page describes how to deploy Kubernetes Gateway resources on Google Kubernetes Engine (GKE). It explains how to deploy a private Gateway and an internet-facing Gateway to expose applications, and demonstrates some of the concepts of the Gateway API resource model.

To see how Gateways are deployed for multi-cluster load balancing, see Deploying multi-cluster Gateways. To understand the differences between GatewayClasses, see GatewayClass capabilities.

GKE Gateway controller requirements

  • GKE version 1.20 or later.
  • The Gateway API is supported by VPC-native (Alias IP) clusters only.
  • If you are using the internal GatewayClasses, you must configure a proxy-only subnet.
  • Multi-cluster and single-cluster Gateways are supported in all Google Cloud regions.
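
For reference, a VPC-native (Alias IP) cluster that meets these requirements can be created with a command similar to the following sketch, where CLUSTER_NAME, REGION, and VERSION are placeholders for your own values:

gcloud container clusters create CLUSTER_NAME \
    --region=REGION \
    --cluster-version=VERSION \
    --enable-ip-alias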

Preview limitations & known issues

While GKE's support of the Gateway API is in Preview, the following limitations apply:

  • Autopilot clusters are not supported.
  • The GKE GatewayClasses support different capabilities depending on their underlying load balancer. See the GatewayClass capabilities to learn more about the different features supported across the available GatewayClasses.
  • GKE clusters that have Istio or ASM Gateway resources will conflict with Kubernetes Gateway resources when using kubectl. kubectl get gateway may not return your Istio or ASM Gateway resources as expected. To avoid command-line resource conflicts with Istio Gateways, see Kubernetes Gateways and Istio Gateways.
  • Google Cloud load balancer resources created by Gateways are not currently visible in the Google Cloud console UI.
  • Viewing Gateway, HTTPRoute, and ServiceExport resources in the GKE UI is not supported.
  • Using the GKE Gateway controller with Kubernetes on Compute Engine (self-managed Kubernetes) is not supported.
  • Configuring SSL policies or HTTPS redirects using the FrontendConfig resource is not supported.
  • The BackendConfig resource is not supported for the multi-cluster (-mc) GatewayClasses; however, it is supported for the single-cluster GatewayClasses.
  • The automatic generation of Google-managed SSL certificates is not supported; however, a Google-managed SSL certificate can be created manually and referenced using the networking.gke.io/pre-shared-certs TLS option, as shown in the example after this list.
  • Service traffic capacity settings max-rate-per-endpoint, max-rate, and capacity-scaler are not supported.
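
For example, a Gateway listener that references a manually created Google-managed certificate through the networking.gke.io/pre-shared-certs TLS option might look like the following sketch, where store-example-com-cert is a placeholder for the name of a certificate resource that you created beforehand:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-https
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      options:
        networking.gke.io/pre-shared-certs: store-example-com-cert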

Install Gateway API CRDs

Before using Gateway resources in GKE, you must install the Gateway API Custom Resource Definitions (CRDs) in your cluster.

  1. Run the following command in the GKE cluster where you want to deploy Gateway resources:

    kubectl apply -k "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.5.0"
    

    This command installs the v1beta1 CRDs.

    The output is similar to the following:

    customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/referencepolicies.gateway.networking.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/tcproutes.gateway.networking.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/tlsroutes.gateway.networking.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/udproutes.gateway.networking.k8s.io configured
    

GKE GatewayClasses

After a GKE cluster detects the existence of Gateway API CRDs, the following GKE GatewayClasses are automatically installed by the GKE Gateway controller. It might take a few minutes for the controller to recognize the CRDs and install the GatewayClasses.

  1. Check for the available GatewayClasses with the following command:

    kubectl get gatewayclass
    

    This output confirms that the GKE GatewayClasses are ready to use in your cluster:

    NAME          CONTROLLER
    gke-l7-rilb   networking.gke.io/gateway
    gke-l7-gxlb   networking.gke.io/gateway
    

    To understand the capabilities of each GatewayClass, see GatewayClass capabilities.

Configuring a proxy-only subnet

If you have not already done so, configure a proxy-only subnet for each region in which you are deploying internal Gateways. This subnet is used to provide internal IP addresses to the load balancer proxies.

You must create a proxy-only subnet before you create Gateways that manage internal HTTP(S) load balancers. Each region of a VPC in which you use internal HTTP(S) load balancers must have a proxy-only subnet.

The gcloud compute networks subnets create command creates a proxy-only subnet.

gcloud compute networks subnets create SUBNET_NAME \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=VPC_NETWORK_NAME \
    --range=CIDR_RANGE

Replace the following:

  • SUBNET_NAME: the name of the proxy-only subnet.
  • REGION: the region of the proxy-only subnet.
  • VPC_NETWORK_NAME: the name of the VPC network that contains the subnet.
  • CIDR_RANGE: the primary IP address range of the subnet. You must use a subnet mask no longer than /26 so that at least 64 IP addresses are available for proxies in the region. The recommended subnet mask is /23.
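
For example, the following command creates a proxy-only subnet in us-west1 on the default network; the subnet name, region, network, and range are illustrative values only:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=default \
    --range=10.129.0.0/23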

If the following event appears on your internal Gateway, a proxy-only subnet does not exist for that region. To resolve this issue, deploy a proxy-only subnet.

generic::invalid_argument: error ensuring load balancer: Insert: Invalid value for field 'resource.target': 'regions/us-west1/targetHttpProxies/gkegw-x5vt-default-internal-http-2jzr7e3xclhj'. A reserved and active subnetwork is required in the same region and VPC as the forwarding rule.

Deploying an internal Gateway

A Gateway resource represents a data plane that routes traffic in Kubernetes. A Gateway can represent many different kinds of load balancing and routing depending on the GatewayClass it is derived from. To learn more about the Gateway resource, see the Gateway resource description or the API specification.

In this case, the administrator of the GKE cluster wants to create a Gateway that can be used by different teams to expose their applications internally. The administrator deploys the Gateway, and application teams deploy their Routes independently and attach them to this Gateway.

  1. Save the following Gateway manifest to a file named gateway.yaml:

    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: internal-http
    spec:
      gatewayClassName: gke-l7-rilb
      listeners:
      - name: http
        protocol: HTTP
        port: 80
    

    Explanation of fields:

    • gatewayClassName: gke-l7-rilb specifies the GatewayClass that this Gateway is derived from. gke-l7-rilb corresponds to the regional internal HTTP(S) load balancer.
    • port: 80 specifies that the Gateway exposes only port 80 for listening for HTTP traffic.
  2. Deploy the Gateway in your cluster:

    kubectl apply -f gateway.yaml
    
  3. Validate that the Gateway has deployed correctly. It might take a few minutes for it to deploy all of its resources.

    kubectl describe gateways.gateway.networking.k8s.io internal-http
    

    The output resembles the following:

    Name:         internal-http
    Namespace:    default
    ...
    Status:
      Addresses:
        Type:   IPAddress
        Value:  192.168.1.14
      Conditions:
        Last Transition Time:  1970-01-01T00:00:00Z
        Message:               Waiting for controller
        Reason:                NotReconciled
        Status:                False
        Type:                  Scheduled
    Events:
      Type    Reason  Age                From                       Message
      ----    ------  ----               ----                       -------
      Normal  ADD     92s                networking.gke.io/gateway  default/internal-http
      Normal  UPDATE  45s (x3 over 91s)  networking.gke.io/gateway  default/internal-http
      Normal  SYNC    45s                networking.gke.io/gateway  SYNC on default/internal-http was a success
    

    At this point, there is a Gateway deployed in your cluster that has provisioned a load balancer and an IP address. The Gateway has no Routes, however, and so it does not yet know how it should send traffic to backends. Without Routes, all traffic goes to a default backend, which returns an HTTP 404. Next, you deploy an application and Routes, which tell the Gateway how to get to application backends.
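
    To observe this default behavior, you can send a request to the Gateway's IP address from a client VM in the same VPC network and region (see Sending traffic to your application later on this page) before any Routes are attached. The following is a sketch; GATEWAY_IP is a placeholder for the address shown in the Gateway's status:

    curl -i -H "host: store.example.com" GATEWAY_IP

    The response status is HTTP 404 because no Route matches the request yet.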

Deploying the demo applications

Application teams can deploy their applications and Routes independently from the deployment of Gateways. In some cases the application team might want to own the Gateway as well and deploy it themselves as a resource dedicated to their applications. See Route binding for different ownership models of Gateways and Routes. In this example, however, the store team deploys their application and an accompanying HTTPRoute to expose their app through the internal-http Gateway created in the previous section.

The HTTPRoute resource has many configurable fields for traffic matching. For an explanation of HTTPRoute's fields, see the API specification.

  1. Deploy the store application (store-v1, store-v2, and store-german deployments) to your cluster:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/app/store.yaml
    

    This creates three Deployments and three Services which are named store-v1, store-v2, and store-german.

  2. Validate that the application has deployed successfully:

    kubectl get pod
    

    The output resembles the following after the application is running:

    NAME                        READY   STATUS    RESTARTS   AGE
    store-german-66dcb75977-5gr2n   1/1     Running   0          38s
    store-v1-65b47557df-jkjbm       1/1     Running   0          14m
    store-v2-6856f59f7f-sq889       1/1     Running   0          14m
    
  3. Validate that the Services have also been deployed:

    kubectl get service
    

    The output resembles the following, showing a Service for each store Deployment:

    NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
    store-german   ClusterIP   10.48.3.183   <none>        8080/TCP   4s
    store-v1       ClusterIP   10.48.2.224   <none>        8080/TCP   5s
    store-v2       ClusterIP   10.48.4.48    <none>        8080/TCP   5s
    

Container-native load balancing

The GKE Gateway controller uses container-native load balancing by default. This means that load balanced traffic is sent directly to Pod IP addresses. The load balancer has direct visibility of the Pod IP addresses so that traffic does not traverse Kubernetes Service load balancing. This leads to more efficient, stable, and understandable traffic. The gke-l7-* GatewayClasses do not support Instance Group-based load balancing.

GKE Gateways do not require any special user-specified annotations on Services. Any Service that is referenced by an HTTPRoute is annotated automatically by the GKE Gateway Controller with references to the Service's NEGs, like the following example:

Name:              store-v1
Namespace:         default
Annotations:       cloud.google.com/neg: {"exposed_ports":{"8080":{}}}
                   cloud.google.com/neg-status: {"network_endpoint_groups":{"8080":"k8s1-cb368ccb-default-store-v1-8080-f376ae25"},"zones":["us-central1-a"]}
...

Deploying the HTTPRoute

Route resources define protocol-specific rules for mapping traffic from a Gateway to Kubernetes backends. The HTTPRoute resource does HTTP and HTTPS traffic matching and filtering and is supported by all of the gke-l7 GatewayClasses.

In this section, you deploy an HTTPRoute, which programs the Gateway with the routing rules needed to reach your store application.

  1. Save the following HTTPRoute manifest to a file named store-route.yaml:

    kind: HTTPRoute
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: store
    spec:
      parentRefs:
      - kind: Gateway
        name: internal-http
      hostnames:
      - "store.example.com"
      rules:
      - backendRefs:
        - name: store-v1
          port: 8080
      - matches:
        - headers:
          - name: env
            value: canary
        backendRefs:
        - name: store-v2
          port: 8080
      - matches:
        - path:
            value: /de
        backendRefs:
        - name: store-german
          port: 8080
    
  2. Deploy the HTTPRoute in your cluster:

    kubectl apply -f store-route.yaml
    

The store HTTPRoute is bound to the internal-http Gateway and so these routing rules are configured on the underlying load balancer as in this diagram:

The routing rules configured by the store HTTPRoute

These routing rules will process HTTP traffic in the following manner:

  • Traffic to store.example.com/de goes to Service store-german.
  • Traffic to store.example.com with the HTTP header "env: canary" goes to Service store-v2.
  • The remaining traffic to store.example.com goes to Service store-v1.

Route binding

Gateways use route binding to match Routes with Gateways. When a Route binds to a Gateway, the underlying load balancer or proxy is programmed with the routing rules specified in the Route. Route and Gateway resources have built-in controls to permit or constrain how they select each other, which determines binding.

The store HTTPRoute is bound to the internal-http Gateway using the parentRefs property:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-http
spec:
  gatewayClassName: gke-l7-rilb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: store
spec:
  parentRefs:
  - name: internal-http

An HTTPRoute attaches to a particular Gateway through the parentRefs property. If the Gateway is configured to allow attachment from that Route's Namespace and Route type, the attachment succeeds. By default, a Gateway allows Routes from its own Namespace. Routes attached to Gateways configure the underlying load balancer's routing rules. A Route can attach to multiple Gateways, and a Gateway can have multiple Routes attached.
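
To allow attachment from Routes in other Namespaces, you can configure the listener's allowedRoutes field. The following is a minimal sketch that permits HTTPRoutes from any Namespace that carries a hypothetical shared-gateway-access: "true" label:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-http
spec:
  gatewayClassName: gke-l7-rilb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
      namespaces:
        from: Selector
        selector:
          matchLabels:
            shared-gateway-access: "true"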

When many Routes attach to the same Gateway, the GKE Gateway controller merges these Routes into a single routing configuration for the underlying load balancer. This is what enables different teams to share the same underlying infrastructure, while controlling their portion of the routing configuration independently.

The GKE Gateway controller implements strict Route merging, precedence, and validation logic, which provides predictable merging between Routes and prevents invalid Routes from disrupting the underlying load balancer configuration. For details, see Route merging, precedence, and validation.

For instructions on how to deploy a second HTTPRoute that also uses the internal-http Gateway, see the Shared Gateways section.

Sending traffic to your application

Now that your Gateway, Route, and application are deployed in your cluster, you can pass traffic to your application.

  1. Validate that the store HTTPRoute has been applied successfully. Events are emitted on the Route when it binds to a Gateway successfully.

    kubectl describe httproute.gateway.networking.k8s.io store
    

    The events section of the output includes events like the following if the HTTPRoute bound to the Gateway:

    Events:
    Type    Reason  Age   From                              Message
    ----    ------  ----  ----                              -------
    Normal  ADD     11m   networking.gke.io/gateway         default/store
    Normal  ADD     11m   multi-cluster-ingress-controller  default/store
    
  2. Retrieve the IP address from the Gateway so that you can send traffic to your application:

    kubectl get gateways.gateway.networking.k8s.io internal-http -o=jsonpath="{.status.addresses[0].value}"
    

    The output is an IP address.

  3. Send traffic to this IP address from a shell on a virtual machine (VM) instance that has connectivity to the cluster. You can create a VM for this purpose. This is necessary because the Gateway has an internal IP address and is accessible only from within your VPC network. Because internal-http is a regional load balancer, the client shell must be in the same region as the GKE cluster.

    Because you do not own the example.com hostname, set the host header manually so that the traffic routing can be observed. First try requesting store.example.com:

    curl -H "host: store.example.com" VIP
    

    Replace VIP with the IP address from the previous step.

    The output from the demo app shows information about the location where the app is running:

    {
      "cluster_name": "gke1",
      "host_header": "store.example.com",
      "metadata": "store-v1",
      "node_name": "gke-gke1-pool-2-bd121936-5pfc.c.church-243723.internal",
      "pod_name": "store-v1-84b47c7f58-pmgmk",
      "pod_name_emoji": "💇🏼‍♀️",
      "project_id": "church-243723",
      "timestamp": "2021-03-25T13:31:17",
      "zone": "us-central1-a"
    }
    
  4. Test the path match by going to the German version of the store service at store.example.com/de:

    curl -H "host: store.example.com" VIP/de
    

    The output confirms that the request was served by a store-german Pod:

    {
      "cluster_name": "gke1", 
      "host_header": "store.example.com", 
      "metadata": "Gutentag!", 
      "node_name": "gke-gke1-pool-2-bd121936-n3xn.c.church-243723.internal", 
      "pod_name": "store-german-5cb6474c55-lq5pl", 
      "pod_name_emoji": "🧞‍♀", 
      "project_id": "church-243723", 
      "timestamp": "2021-03-25T13:35:37", 
      "zone": "us-central1-a"
    }
    
  5. Finally, use the env: canary HTTP header to send traffic to the canary version of the store Service:

    curl -H "host: store.example.com" -H "env: canary " VIP
    

    The output confirms that the request was served by a store-v2 Pod:

    {
      "cluster_name": "gke1", 
      "host_header": "store.example.com", 
      "metadata": "store-v2", 
      "node_name": "gke-gke1-pool-2-bd121936-5pfc.c.church-243723.internal", 
      "pod_name": "store-v2-5788476cbd-s9thb", 
      "pod_name_emoji": "👩🏿🦰", 
      "project_id": "church-243723", 
      "timestamp": "2021-03-25T13:38:26", 
      "zone": "us-central1-a"
    }
    

Shared Gateways

The Gateway API uses separate resources, Gateways and Routes, to deploy load balancers and routing rules. This differs from Ingress, which combines everything in one resource. By splitting responsibility among resources, Gateway enables the load balancer and its routing rules to be deployed separately and to be deployed by different users or teams. This enables Gateways to become shared Gateways that attach many different Routes that can be fully owned and managed by independent teams, even across different Namespaces.

In the previous steps, the "store" team deployed their application, store.example.com. Next, the independent "site" team deploys their application, site.example.com, behind the same internal-http Gateway using the same IP address.

Deploying routes against a shared Gateway

In this example the site team deploys their application, Services, and a corresponding HTTPRoute to match traffic from the Gateway to those Services.

  1. Deploy the example application:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/app/site.yaml
    
  2. Save the following HTTPRoute manifest to a file named site-route.yaml:

    This site HTTPRoute matches all traffic for site.example.com and routes it to the site-v1 Service. Like the store HTTPRoute, the site HTTPRoute specifies the internal-http Gateway. The GKE Gateway controller merges these HTTPRoutes into a single underlying URL map with routes for site.example.com and store.example.com.

    kind: HTTPRoute
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: site
    spec:
      parentRefs:
      - kind: Gateway
        name: internal-http
      hostnames:
      - "site.example.com"
      rules:
      - backendRefs:
        - name: site-v1
          port: 8080
    
  3. Deploy the HTTPRoute in your cluster:

    kubectl apply -f site-route.yaml
    
  4. With both HTTPRoutes deployed, the routing logic should resemble the following diagram. Traffic for site.example.com should go to the site-v1 Pods, and traffic for store.example.com should go to the store-v1, store-v2, or store-german Pods, according to the store routing rules.

    A picture of the store and site HTTPRoutes bound to the same Gateway

  5. Verify that the HTTPRoute has successfully bound to the internal-http gateway:

    kubectl describe httproute.gateway.networking.k8s.io site
    

    The output resembles the following if the HTTPRoute successfully bound to the Gateway.

    Status:
      Gateways:
        Conditions:
          Last Transition Time:  2021-04-19T16:13:46Z
          Message:               Route admitted
          Observed Generation:   1
          Reason:                RouteAdmitted
          Status:                True
          Type:                  Admitted
        Gateway Ref:
          Name:       internal-http
          Namespace:  default
    Events:
      Type    Reason  Age                 From                              Message
      ----    ------  ----                ----                              -------
      Normal  ADD     60m                 networking.gke.io/gateway         default/site
    

    If the Admitted condition for the internal-http Gateway is True, then this indicates that HTTPRoute/site has successfully bound to internal-http.

    Invalid configuration, references to nonexistent Kubernetes Services, an incorrect parentRefs reference, or conflicts with other Routes can all prevent an HTTPRoute from binding with a Gateway.

    For details on interpreting the status of routes, see the route status section.

  6. After you've validated that the site route has bound successfully, send traffic to the Gateway to confirm that it's being routed correctly. Send traffic from a VM in the same VPC network as the Gateway. Send an HTTP request to both site.example.com and store.example.com to validate the responses:

    curl -H "host: site.example.com" VIP
    curl -H "host: store.example.com" VIP
    

Route status

HTTPRoute resources emit conditions and events to help users understand the status of their applied configuration. Together they indicate whether a Route has successfully bound with one or more Gateways or if it was rejected.

HTTPRoute conditions

HTTPRoute conditions indicate the status of the binding between the Route and each Gateway it references. Because a Route can be bound to multiple Gateways, the status is a list of Gateways and the individual conditions between the Route and each Gateway.

  • Admitted=True indicates that the HTTPRoute is successfully bound to a Gateway.
  • Admitted=False indicates that the HTTPRoute has been rejected from binding with this Gateway.

If there are no Gateways listed in the Route's status, then the Route's parentRefs and the Gateways' allowedRoutes settings likely do not match. This means that your Route is not attached to any Gateway, and so its configuration is not applied anywhere.
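
For example, to inspect the conditions of the store HTTPRoute from the earlier sections, view its status directly:

kubectl get httproutes.gateway.networking.k8s.io store -o yaml

The Gateways and their Conditions appear under the status section of the output.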

HTTPRoute events

HTTPRoute events provide more detail about the status of the Route. Events are grouped under a few reasons:

  • ADD events are triggered by a resource being added.
  • UPDATE events are triggered by a resource being updated.
  • SYNC events are triggered by periodic reconciliation.
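
To list these events without the full describe output, you can filter the cluster's events by the involved resource kind, for example:

kubectl get events --field-selector involvedObject.kind=HTTPRoute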

Deploying an external Gateway

To learn how to secure a Gateway using a Kubernetes Secret, SSL certificate, or Certificate Manager, see Secure a Gateway.
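
An external Gateway follows the same pattern as the internal Gateway shown earlier but uses an external GatewayClass such as gke-l7-gxlb. The following is a minimal sketch of an HTTP-only external Gateway; in practice you would typically configure an HTTPS listener as described in Secure a Gateway:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: http
    protocol: HTTP
    port: 80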

BackendConfig with Gateway

The gke-l7 GatewayClasses support the BackendConfig resource to customize backend settings on a per-Service level. This requires the cloud.google.com/backend-config Service annotation to reference the BackendConfig resource. In the following example, the load balancer health check and connection draining settings are customized for the store-v1 Service.

apiVersion: v1
kind: Service
metadata:
  name: store-v1
  annotations:
    cloud.google.com/backend-config: '{"default": "store-backendconfig"}'
spec:
  selector:
    app: store
    version: v1
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: store-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 15020
    type: HTTPS
    requestPath: /healthz
  connectionDraining:
    drainingTimeoutSec: 60

See Configuring Ingress features through BackendConfig parameters for more detail on how to use the BackendConfig resource.

Gateway IP addressing

Every Gateway has an IP address on which it listens for traffic. If no address is specified on the Gateway then an IP address is automatically provisioned by the Gateway. Static addresses can also be pre-provisioned so that the lifecycle of the IP address is independent of the Gateway.

After a Gateway is deployed, its IP address shows in the status field:

kind: Gateway
...
status:
  addresses:
    - value: 10.15.32.3

Depending on the GatewayClass, the IP address is allocated from the following default IP address pools:

  • gke-l7-rilb and gke-l7-rilb-mc: regional private IP addresses from the GKE cluster node subnet range.
  • gke-l7-gxlb and gke-l7-gxlb-mc: global public IP addresses from Google's public IP ranges.

You can specify an IP address in two ways:

  • addresses.IPAddress: lets you specify an IP address at the time of Gateway deployment. The IP address is configurable, rather than automatically provisioned, but the IP address has the same lifecycle as the Gateway and is released if the Gateway is deleted.

  • addresses.NamedAddress: lets you specify an IP address independently of the Gateway. You can create a static IP address resource prior to Gateway deployment and the resource is referenced by the NamedAddress. You can reuse the static IP address even if the Gateway is deleted.

You can configure an IPAddress by specifying the IP address in the addresses field of the Gateway you deploy.

  1. Deploy the following internal Gateway and specify 10.0.0.3 as the static IP address:

    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: internal-http
    spec:
      gatewayClassName: gke-l7-rilb
      listeners:
      - name: http
        protocol: HTTP
        port: 80
        allowedRoutes:
          kinds:
          - kind: HTTPRoute
      addresses:
      - type: IPAddress
        value: 10.0.0.3
    

    A NamedAddress requires that you provision a static IP outside of the Gateway deployment. This requires two separate steps:

  2. Create a static IP address resource. In this case, an internal, regional Gateway is deployed, so a corresponding internal, regional IP address is needed.

    gcloud compute addresses create ADDRESS_NAME \
        --region REGION \
        --subnet SUBNET \
        --project PROJECT_ID
    

    Replace ADDRESS_NAME with a name for the static IP address. When using regional Gateways, replace REGION, SUBNET, and PROJECT_ID with the region, subnet, and project where your GKE cluster is running. Global, external Gateways do not require a region or subnet specification.

  3. Deploy your Gateway by referencing the same address resource name as the NamedAddress.

    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: internal-http
    spec:
      gatewayClassName: gke-l7-rilb
      listeners:
      - name: http
        protocol: HTTP
        port: 80
      addresses:
      - type: NamedAddress
        value: ADDRESS_NAME
    

Route merging, precedence, and validation

Route precedence

The Gateway API defines strict precedence rules for how traffic is matched by Routes that have overlapping routing rules. The precedence between two overlapping HTTPRoutes is as follows:

  1. Hostname merge: The longest/most specific hostname match.
  2. Path merge: The longest/most specific path match.
  3. Header merge: The largest number of HTTP headers that match.
  4. Conflict: If the previous three rules don't establish precedence, then precedence goes to the HTTPRoute resource with the oldest timestamp.

Route merging

For gke-l7 GatewayClasses, all HTTPRoutes for a given Gateway are merged into the same URL map resource. How the HTTPRoutes are merged together depends on the type of overlap between HTTPRoutes. The HTTPRoute from the earlier example can be split into three separate HTTPRoutes to illustrate route merging and precedence:

  1. Route merge: All three HTTPRoutes attach to the same internal-http Gateway, so they will be merged together.
  2. Hostname merge: All three Routes match for store.example.com, so their hostname rules are merged.
  3. Path merge: store-german-route has a more specific path /de, so this is not merged further. store-v1-route and store-v2-route both match on the same /* path as well, so they are merged on the path.
  4. Header merge: store-v2-route has a more specific set of HTTP header matches than store-v1-route, so they are not merged further.
  5. Conflict: Because the Routes are able to be merged on hostname, path, and headers, there are no conflicts, and all of the routing rules will apply to traffic.

The single HTTPRoute used in the earlier example is equivalent to these three separate routes:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: store-v1-route
spec:
  parentRefs:
  - kind: Gateway
    name: internal-http
  hostnames:
  - "store.example.com"
  rules:
  - backendRefs:
    - kind: Service
      name: store-v1
      port: 8080
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: store-v2-route
spec:
  parentRefs:
  - kind: Gateway
    name: internal-http
  hostnames:
  - "store.example.com"
  rules:
  - matches:
    - headers:
      - type: Exact
        name: env
        value: canary
    backendRefs:
    - kind: Service
      name: store-v2
      port: 8080
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: store-german-route
spec:
  parentRefs:
  - kind: Gateway
    name: internal-http
  hostnames:
  - "store.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /de
    backendRefs:
    - kind: Service
      name: store-german
      port: 8080

Gateway default backend

All of the gke-l7 GatewayClasses have an implicit default backend which returns an HTTP 404 to unmatched traffic. It is possible to customize the default backend with an explicit default Route that sends unmatched traffic to a user-provided Service. The following HTTPRoute is an example of how to customize the default backend. If applied, it will take precedence over the implicit default backend:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: custom-default-backend
spec:
  parentRefs:
  - kind: Gateway
    name: my-gateway
  rules:
  - backendRefs:
    - name: my-custom-default-backend-service
      port: 8080

This HTTPRoute matches all traffic from a particular Gateway. There can only be one such HTTPRoute per Gateway or else they will conflict and the precedence ordering will apply.

An explicit default backend is also an effective strategy for preventing someone from accidentally creating a default Route that black-holes traffic for a Gateway. Because an explicit default HTTPRoute is the older resource, it will always take precedence over any newer HTTPRoutes that have conflicting routing rules.

Kubernetes Gateways and Istio Gateways

Note that the Kubernetes Gateway API and the Istio API both have a resource named Gateway. While they perform similar functions, they are not the same resource. If you are using Istio and the Gateway API in the same Kubernetes cluster, these names will overlap when using kubectl on the command line. kubectl get gateway might return the Kubernetes Gateway resources and not the Istio Gateway resources or vice versa.

$ kubectl api-resources
NAME       SHORTNAMES   APIVERSION                          NAMESPACED   KIND
gateways   gw           networking.istio.io/v1beta1         true         Gateway
gateways   gtw          gateway.networking.k8s.io/v1beta1   true         Gateway

If you are using Istio and you upgrade to GKE 1.20 or later, we recommend using the Gateway resource shortnames or specifying the API group. The shortname for a Kubernetes Gateway is gtw and the shortname for an Istio Gateway is gw. The following commands return the Kubernetes Gateway and Istio Gateway resources respectively.

# Kubernetes Gateway
$ kubectl get gtw
NAME                        CLASS
multi-cluster-gateway       gke-l7-gxlb-mc

$ kubectl get gateways.gateway.networking.k8s.io
NAME                        CLASS
multi-cluster-gateway       gke-l7-gxlb-mc

# Istio Gateway
$ kubectl get gw
NAME               AGE
bookinfo-gateway   64m

$ kubectl get gateway.networking.istio.io
NAME               AGE
bookinfo-gateway   64m

What's next