Enable multi-cluster Gateways


This page shows you how to enable the multi-cluster GKE Gateway controller, a Google-hosted controller that provisions external and internal load balancers. To learn how to use Gateway and HTTPRoute resources for container load balancing, see Deploying Gateways or Deploying multi-cluster Gateways.

The multi-cluster GKE Gateway controller installs two multi-cluster GatewayClasses: gke-l7-gxlb-mc for external multi-cluster Gateways and gke-l7-rilb-mc for internal multi-cluster Gateways.

Learn more about the capabilities of the various GatewayClasses in GKE.

Requirements

Using multi-cluster Gateways requires the following:

Internal multi-cluster Gateways additionally require:

  • A proxy-only subnet must be configured in the same region and VPC as the internal Gateways.
  • GKE clusters that share the same Gateway must all be located in the same Google Cloud region and in the same VPC.

Pricing

All Compute Engine resources deployed through the Gateway controller are charged against the project in which your GKE clusters reside. The single-cluster Gateway controller is offered at no additional charge as a part of GKE Standard and Autopilot pricing. You can use the multi-cluster Gateway controller without additional charge during Preview. Upon GA, multi-cluster Gateways will be charged according to the Multi Cluster Ingress and Gateway pricing.

Before you begin

Before you begin deploying multi-cluster Gateways, ensure you complete the following tasks:

Run this command to enable the required APIs if they are not already enabled:

  gcloud services enable \
    container.googleapis.com \
    gkehub.googleapis.com \
    multiclusterservicediscovery.googleapis.com \
    multiclusteringress.googleapis.com \
    trafficdirector.googleapis.com \
    --project=PROJECT_ID

Replace PROJECT_ID with the project ID where your GKE clusters are running.

Preparing the environment

Completing the examples in Deploying multi-cluster Gateways requires multiple GKE clusters. All of the clusters are registered to the same fleet so that multi-cluster Gateways and Services can operate across them.

The following steps will deploy three GKE clusters across two different regions in your project:

  • us-west1-a/gke-west-1
  • us-west1-a/gke-west-2
  • us-east1-b/gke-east-1

This will create the following cluster topology:

The cluster topology which shows the relationship between the regions, fleet, and project.

These GKE clusters are used to demonstrate multi-region load balancing and blue-green, multi-cluster traffic splitting using external and internal Gateways.

Deploy clusters

In these steps you will deploy three GKE clusters into regions us-east1 and us-west1.

  1. Create a GKE cluster in us-west1 named gke-west-1:

    gcloud container clusters create gke-west-1 \
        --gateway-api=standard \
        --zone=us-west1-a \
        --workload-pool=PROJECT_ID.svc.id.goog \
        --cluster-version=VERSION \
        --project=PROJECT_ID
    

    Replace the following:

    • PROJECT_ID: the project ID where your GKE clusters are running.
    • VERSION: the GKE version, which must be 1.24 or later. You can also use the --release-channel flag to select a release channel. The release channel must have a default version 1.24 or later.
  2. Create another GKE cluster in us-west1 (or the same region as the previous cluster) named gke-west-2:

    gcloud container clusters create gke-west-2 \
        --gateway-api=standard \
        --zone=us-west1-a \
        --workload-pool=PROJECT_ID.svc.id.goog \
        --cluster-version=VERSION \
        --project=PROJECT_ID
    
  3. Create a GKE cluster in us-east1 (or a region that is different from the previous one) named gke-east-1:

    gcloud container clusters create gke-east-1 \
        --gateway-api=standard \
        --zone=us-east1-b \
        --workload-pool=PROJECT_ID.svc.id.goog \
        --cluster-version=VERSION \
        --project=PROJECT_ID
    
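
The three create commands differ only in the cluster name and zone, so they can also be scripted. The following sketch (assuming bash) is a dry run: it prints each command rather than executing it, since running gcloud requires an authenticated project. PROJECT_ID and VERSION remain placeholders, as in the steps above.

```shell
# Dry-run sketch: print the gcloud create command for each of the three
# clusters in the topology above instead of executing it.
PROJECT_ID="PROJECT_ID"
VERSION="VERSION"

create_cmds() {
  for entry in us-west1-a/gke-west-1 us-west1-a/gke-west-2 us-east1-b/gke-east-1; do
    zone="${entry%/*}"    # part before the slash, e.g. us-west1-a
    name="${entry#*/}"    # part after the slash, e.g. gke-west-1
    echo "gcloud container clusters create ${name}" \
         "--gateway-api=standard --zone=${zone}" \
         "--workload-pool=${PROJECT_ID}.svc.id.goog" \
         "--cluster-version=${VERSION} --project=${PROJECT_ID}"
  done
}

create_cmds
```

To actually create the clusters, you would replace the echo with the gcloud invocation itself.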

Configure cluster credentials

This step configures cluster credentials with memorable names. This makes it easier to switch between clusters when deploying resources across several clusters.

  1. Fetch the credentials for the clusters gke-west-1, gke-west-2, and gke-east-1:

    gcloud container clusters get-credentials gke-west-1 --zone=us-west1-a --project=PROJECT_ID
    gcloud container clusters get-credentials gke-west-2 --zone=us-west1-a --project=PROJECT_ID
    gcloud container clusters get-credentials gke-east-1 --zone=us-east1-b --project=PROJECT_ID
    

    This stores the credentials locally so that you can use your kubectl client to access the cluster API servers. By default, an auto-generated name is assigned to each context.

  2. Rename the cluster contexts so they are easier to reference later:

    kubectl config rename-context gke_PROJECT_ID_us-west1-a_gke-west-1 gke-west-1
    kubectl config rename-context gke_PROJECT_ID_us-west1-a_gke-west-2 gke-west-2
    kubectl config rename-context gke_PROJECT_ID_us-east1-b_gke-east-1 gke-east-1
    

    Replace PROJECT_ID with the project ID where your clusters are deployed.
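
The auto-generated context names follow the gke_PROJECT_ZONE_CLUSTER pattern visible in the rename commands above. If you prefer to script the renames, a small sketch (assuming bash) can derive and print them:

```shell
# Sketch: derive each auto-generated context name from the project, zone, and
# cluster name, and print the matching rename command. PROJECT_ID is a
# placeholder, as in the steps above.
PROJECT_ID="PROJECT_ID"

rename_cmds() {
  for entry in us-west1-a/gke-west-1 us-west1-a/gke-west-2 us-east1-b/gke-east-1; do
    zone="${entry%/*}"
    name="${entry#*/}"
    echo "kubectl config rename-context gke_${PROJECT_ID}_${zone}_${name} ${name}"
  done
}

rename_cmds
```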

Register to the fleet

  1. After all three clusters have successfully been created, you will need to register these clusters to your project's fleet. Grouping your GKE clusters together in a fleet allows them to be targeted by a multi-cluster Gateway.

    gcloud container fleet memberships register gke-west-1 \
         --gke-cluster us-west1-a/gke-west-1 \
         --enable-workload-identity \
         --project=PROJECT_ID
    
    gcloud container fleet memberships register gke-west-2 \
         --gke-cluster us-west1-a/gke-west-2 \
         --enable-workload-identity \
         --project=PROJECT_ID
    
    gcloud container fleet memberships register gke-east-1 \
         --gke-cluster us-east1-b/gke-east-1 \
         --enable-workload-identity \
         --project=PROJECT_ID
    
  2. Confirm that the clusters have been successfully registered to the fleet:

    gcloud container fleet memberships list --project=PROJECT_ID
    

    The output will be similar to the following:

    NAME          EXTERNAL_ID
    gke-east-1  657e835d-3b6b-4bc5-9283-99d2da8c2e1b
    gke-west-2  f3727836-9cb0-4ffa-b0c8-d51001742f19
    gke-west-1  93de69c0-859e-4ddd-bf3a-e3d62ef5090b
    
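
Like the create commands, the three register commands share the same shape and can be generated in a loop. This is a dry-run sketch (assuming bash) that prints rather than runs them; note that --gke-cluster takes the ZONE/NAME form used in the steps above.

```shell
# Sketch: print the fleet-registration command for each zone/cluster pair.
# PROJECT_ID is a placeholder, as in the steps above.
PROJECT_ID="PROJECT_ID"

register_cmds() {
  for entry in us-west1-a/gke-west-1 us-west1-a/gke-west-2 us-east1-b/gke-east-1; do
    name="${entry#*/}"
    echo "gcloud container fleet memberships register ${name}" \
         "--gke-cluster ${entry} --enable-workload-identity --project=${PROJECT_ID}"
  done
}

register_cmds
```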

Enable Multi-cluster Services

  1. Enable multi-cluster Services in your fleet for the registered clusters. This enables the MCS controller for the three clusters that are registered to your fleet so that it can start listening to and exporting Services.

    gcloud container fleet multi-cluster-services enable \
        --project PROJECT_ID
    
  2. Grant the Identity and Access Management (IAM) permissions required for MCS:

     gcloud projects add-iam-policy-binding PROJECT_ID \
         --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
         --role "roles/compute.networkViewer" \
         --project=PROJECT_ID
    

    Replace PROJECT_ID with the project ID where your clusters are deployed.

  3. Confirm that MCS is enabled for the registered clusters. You will see the memberships for the three registered clusters. It may take several minutes for all of the clusters to show.

    gcloud container fleet multi-cluster-services describe --project=PROJECT_ID
    

    The output is similar to the following:

    createTime: '2021-04-02T19:34:57.832055223Z'
    membershipStates:
      projects/381015962062/locations/global/memberships/gke-east-1:
        state:
          code: OK
          description: Firewall successfully updated
          updateTime: '2021-05-27T11:03:07.770208064Z'
      projects/381015962062/locations/global/memberships/gke-west-1:
        state:
          code: OK
          description: Firewall successfully updated
          updateTime: '2021-05-27T09:32:14.401508987Z'
      projects/381015962062/locations/global/memberships/gke-west-2:
        state:
          code: OK
          description: Firewall successfully updated
          updateTime: '2021-05-27T13:53:27.628109510Z'
    name: projects/PROJECT_ID/locations/global/features/multiclusterservicediscovery
    resourceState:
      state: ACTIVE
    spec: {}
    updateTime: '2021-04-02T19:34:58.983512446Z'
    

Enable the multi-cluster Gateway controller

The multi-cluster GKE Gateway controller governs the deployment of multi-cluster Gateways (and also MulticlusterIngress resources). It is enabled with the gcloud container fleet ingress enable command.

When enabling the multi-cluster Gateway controller, you must select your config cluster. The config cluster is the GKE cluster in which your Gateway and Route resources are deployed. It is a central place that controls routing across your clusters. See Config cluster design to help you decide which cluster to choose as your config cluster.

  1. Enable the multi-cluster GKE Gateway controller and specify your config cluster. Note that you can always update the config cluster at a later time. This example specifies gke-west-1 as the config cluster that will host the resources for multi-cluster Gateways.

    gcloud container fleet ingress enable \
        --config-membership=gke-west-1 \
        --project=PROJECT_ID
    
  2. Confirm that the global GKE Gateway controller is enabled for your fleet:

    gcloud container fleet ingress describe --project=PROJECT_ID
    

    The output is similar to the following:

    createTime: '2021-05-26T13:27:37.460383111Z'
    membershipStates:
      projects/381015962062/locations/global/memberships/gke-east-1:
        state:
          code: OK
          updateTime: '2021-05-27T15:08:19.397896080Z'
      projects/381015962062/locations/global/memberships/gke-west-1:
        state:
          code: OK
          updateTime: '2021-05-27T15:08:19.397895711Z'
      projects/381015962062/locations/global/memberships/gke-west-2:
        state:
          code: OK
          updateTime: '2021-05-27T15:08:19.397896293Z'
    resourceState:
      state: ACTIVE
    spec:
      multiclusteringress:
        configMembership: projects/PROJECT_ID/locations/global/memberships/gke-west-1
    state:
      state:
        code: OK
        description: Ready to use
        updateTime: '2021-05-26T13:27:37.899549111Z'
    updateTime: '2021-05-27T15:08:19.397895711Z'
    
  3. Grant Identity and Access Management (IAM) permissions required by the Gateway controller:

     gcloud projects add-iam-policy-binding PROJECT_ID \
         --member "serviceAccount:service-PROJECT_NUMBER@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
         --role "roles/container.admin" \
         --project=PROJECT_ID
    

    Replace PROJECT_ID and PROJECT_NUMBER with the project ID and project number where your clusters are deployed.

  4. Confirm that the GatewayClasses exist in the cluster:

    kubectl get gatewayclasses --context=gke-west-1
    

    The output is similar to the following:

    NAME             CONTROLLER
    gke-l7-gxlb      networking.gke.io/gateway
    gke-l7-gxlb-mc   networking.gke.io/gateway
    gke-l7-rilb      networking.gke.io/gateway
    gke-l7-rilb-mc   networking.gke.io/gateway
    

    This output includes the GatewayClass gke-l7-gxlb-mc for external multi-cluster Gateways and the GatewayClass gke-l7-rilb-mc for internal multi-cluster Gateways.

  5. Switch your kubectl context to the config cluster:

    kubectl config use-context gke-west-1
    

You are now ready to begin deploying multi-cluster Gateways and Routes in the config cluster.
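
To preview what a deployment in the config cluster looks like, the following writes a minimal, illustrative external multi-cluster Gateway manifest that uses the gke-l7-gxlb-mc class listed above. The Gateway name, namespace, and listener here are hypothetical examples, not taken from this page; see Deploying multi-cluster Gateways for the authoritative manifests.

```shell
# Write an illustrative Gateway manifest to a local file. The class name
# gke-l7-gxlb-mc is the external multi-cluster GatewayClass shown earlier;
# the name, namespace, and listener are example values.
cat > external-http-gateway.yaml <<'EOF'
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: default
spec:
  gatewayClassName: gke-l7-gxlb-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
EOF

# You would then apply it against the config cluster, for example:
#   kubectl apply -f external-http-gateway.yaml --context=gke-west-1
```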

Troubleshooting

This section shows you how to resolve issues related to the GKE Gateway controller.

Server doesn't have a resource type gatewayclasses

The following error might occur when you run the command kubectl get gatewayclasses:

error: the server doesn't have a resource type "gatewayclasses"

To resolve this issue, install the Gateway API on your cluster:

gcloud container clusters update CLUSTER_NAME \
    --gateway-api=standard \
    --region=COMPUTE_REGION

Replace the following:

  • CLUSTER_NAME: the name of your cluster.
  • COMPUTE_REGION: the Compute Engine region of your cluster. For zonal clusters, use --zone=COMPUTE_ZONE.

What's next