This page shows you how to route traffic across multiple Google Kubernetes Engine (GKE) clusters in different regions using Multi Cluster Ingress.
To learn more about deploying Multi Cluster Ingress, see Deploying Ingress across clusters.
These steps require elevated permissions and should be performed by a GKE administrator.
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Google Cloud CLI.
- Set up default Google Cloud CLI settings for your project by using one of the following methods:
  - Use gcloud init, if you want to be walked through setting project defaults.
  - Use gcloud config, to individually set your project ID, zone, and region.

gcloud init

- Run gcloud init and follow the directions:
  gcloud init
  If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
  gcloud init --console-only
- Follow the instructions to authorize the gcloud CLI to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
- Choose a default Compute Engine region.

gcloud config

- Set your default project ID:
  gcloud config set project PROJECT_ID
- Set your default Compute Engine region (for example, us-central1):
  gcloud config set compute/region COMPUTE_REGION
- Set your default Compute Engine zone (for example, us-central1-c):
  gcloud config set compute/zone COMPUTE_ZONE
- Update gcloud to the latest version:
  gcloud components update

By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
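To confirm the defaults you just set, you can print the active gcloud configuration. This is only a quick check and not part of the required setup:

# Shows the active project, region, and zone defaults
gcloud config list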
Requirements and limitations
Multi Cluster Ingress has the following requirements:
- Google Cloud CLI version 290.0.0 or later.
- Clusters must have the HttpLoadBalancing add-on enabled. This add-on is enabled by default; do not disable it.
- Clusters must be VPC-native.
- Clusters must have Workload Identity enabled.
Multi Cluster Ingress has the following limitations:
- Only supported with an external HTTP(S) load balancer.
- You can only deploy one instance of Multi Cluster Ingress per project. You can deploy MultiClusterIngress resources to specific subsets of clusters within a project by controlling the scoping of clusters using cluster selection.
- Do not create Compute Engine load balancers with the mci- prefix in the same project that are not managed by Multi Cluster Ingress, or they will be deleted. Google Cloud uses the prefix mci-[6 char hash] to manage the Compute Engine resources that Multi Cluster Ingress deploys. A sample check follows this list.
- Configuration of HTTPS requires a pre-allocated static IP address. HTTPS is not supported with ephemeral IP addresses.
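For example, to see whether a project already contains forwarding rules that use the reserved mci- prefix and could conflict, you can filter the Compute Engine resources. This is an illustrative check, not a required step:

# Lists forwarding rules whose names start with mci-
gcloud compute forwarding-rules list \
    --filter="name~^mci-" \
    --project=PROJECT_ID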
Overview
In this exercise, you perform the following steps:
- Select the pricing you want to use.
- Deploy clusters.
- Configure cluster credentials.
- Register the clusters into a fleet.
- Specify a config cluster. This cluster can be a dedicated control plane, or it can run other workloads.
The following diagram shows what your environment will look like after you complete the exercise:
In the diagram, there are two GKE clusters named gke-us and gke-eu, located in the us-central1 and europe-west1 regions respectively. The clusters are registered into a fleet so that the Multi Cluster Ingress controller can recognize them.
Select pricing
If Multi Cluster Ingress is the only Anthos capability that you are using, then we recommend that you use standalone pricing. If your project is using other Anthos on Google Cloud components or capabilities, you should use Anthos pricing.
The APIs that you must enable depend on the Multi Cluster Ingress pricing that you use.
- If the Anthos API (anthos.googleapis.com) is enabled, then your project is billed according to the number of cluster vCPUs and Anthos pricing.
- If the Anthos API is disabled, then your project is billed according to the number of backend Multi Cluster Ingress pods in your project.
You can change the Multi Cluster Ingress billing model from standalone to Anthos, or from Anthos to standalone at any time without impacting Multi Cluster Ingress resources or traffic.
Anthos pricing
To enable Anthos pricing, perform the following steps:
Enable the Anthos, Multi Cluster Ingress, Connect, and GKE APIs in your project:
gcloud services enable \
    anthos.googleapis.com \
    multiclusteringress.googleapis.com \
    gkehub.googleapis.com \
    container.googleapis.com \
    --project=PROJECT_ID
After anthos.googleapis.com is enabled in your project, any clusters registered to Connect are billed according to Anthos pricing.
Standalone pricing
To enable standalone pricing, perform the following steps:
Confirm that the Anthos API is disabled in your project:
gcloud services list --project=PROJECT_ID | grep anthos.googleapis.com
Replace PROJECT_ID with the project ID where your GKE clusters are running.
If the output is an empty response, the Anthos API is disabled in your project and any Multi Cluster Ingress resources are billed using standalone pricing.
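If the output instead shows that anthos.googleapis.com is enabled and you want standalone pricing, you can disable the API. This is an optional step and assumes that nothing else in your project depends on the Anthos API:

# Disable the Anthos API so that Multi Cluster Ingress is billed at standalone pricing
gcloud services disable anthos.googleapis.com --project=PROJECT_ID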
Enable the Multi Cluster Ingress, Connect, and GKE APIs in your project:
gcloud services enable \
    multiclusteringress.googleapis.com \
    gkehub.googleapis.com \
    container.googleapis.com \
    --project=PROJECT_ID
Deploy clusters
Create two GKE clusters named gke-us and gke-eu in the us-central1-b and europe-west1-b zones, respectively.
We recommend that you create your clusters with Workload Identity enabled, because it lets the workloads in your clusters authenticate without requiring you to download, manually rotate, or manage Google Cloud service account keys. If you create your clusters without Workload Identity enabled, you must register your clusters into the fleet manually using service account keys.
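If you already have clusters and are not sure whether Workload Identity is enabled on them, you can check each cluster's workload pool. This is an optional check; CLUSTER_NAME and COMPUTE_ZONE are placeholders for your own cluster, and an empty result means Workload Identity is not enabled:

# Prints the Workload Identity pool, for example PROJECT_ID.svc.id.goog
gcloud container clusters describe CLUSTER_NAME \
    --zone=COMPUTE_ZONE \
    --format="value(workloadIdentityConfig.workloadPool)"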
Workload Identity
Create the gke-us cluster in the us-central1-b zone:
gcloud container clusters create gke-us \
    --zone=us-central1-b \
    --enable-ip-alias \
    --workload-pool=PROJECT_ID.svc.id.goog \
    --release-channel=stable \
    --project=PROJECT_ID
Create the gke-eu cluster in the europe-west1-b zone:
gcloud container clusters create gke-eu \
    --zone=europe-west1-b \
    --enable-ip-alias \
    --workload-pool=PROJECT_ID.svc.id.goog \
    --release-channel=stable \
    --project=PROJECT_ID
Manual
Create the gke-us cluster in the us-central1-b zone:
gcloud container clusters create gke-us \
    --zone=us-central1-b \
    --enable-ip-alias \
    --release-channel=stable \
    --project=PROJECT_ID
Create the gke-eu cluster in the europe-west1-b zone:
gcloud container clusters create gke-eu \
    --zone=europe-west1-b \
    --enable-ip-alias \
    --release-channel=stable \
    --project=PROJECT_ID
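After both create commands finish, you can optionally confirm that the two clusters are up before continuing. This check simply lists the clusters in the project used in this exercise:

# Both gke-us and gke-eu should report STATUS: RUNNING
gcloud container clusters list --project=PROJECT_ID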
Configure cluster credentials
Configure credentials for your clusters and rename the cluster contexts to make it easier to switch between clusters when deploying resources.
Retrieve the credentials for your clusters:
gcloud container clusters get-credentials gke-us \
    --zone=us-central1-b \
    --project=PROJECT_ID

gcloud container clusters get-credentials gke-eu \
    --zone=europe-west1-b \
    --project=PROJECT_ID
The credentials are stored locally so that you can use your kubectl client to access the cluster API servers. By default, an auto-generated context name is created for each cluster's credentials.
Rename the cluster contexts:
kubectl config rename-context gke_PROJECT_ID_us-central1-b_gke-us gke-us
kubectl config rename-context gke_PROJECT_ID_europe-west1-b_gke-eu gke-eu
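To confirm that the renamed contexts work, you can run a command against each cluster. This is an optional check that uses the context names chosen above:

# List the contexts and query each cluster through its renamed context
kubectl config get-contexts
kubectl --context=gke-us get nodes
kubectl --context=gke-eu get nodes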
Register clusters to a fleet
Register your clusters to a fleet by using Connect.
You can register clusters using Workload Identity or manually using a Google Cloud service account.
Workload Identity
Register your clusters:
gcloud container hub memberships register gke-us \
    --gke-cluster us-central1-b/gke-us \
    --enable-workload-identity \
    --project=PROJECT_ID

gcloud container hub memberships register gke-eu \
    --gke-cluster europe-west1-b/gke-eu \
    --enable-workload-identity \
    --project=PROJECT_ID
Confirm that your clusters have successfully registered to Connect:
gcloud container hub memberships list --project=PROJECT_ID
The output is similar to the following:
NAME     EXTERNAL_ID
gke-us   0375c958-38af-11ea-abe9-42010a800191
gke-eu   d3278b78-38ad-11ea-a846-42010a840114
Manual
If you do not have Workload Identity enabled for your clusters, perform the following steps to register your clusters:
Ensure you have completed the prerequisites for registering a cluster.
Create a service account and download the service account's private key JSON file.
Register your clusters to the fleet using the private key JSON file that you downloaded:
gcloud container hub memberships register gke-us \
    --gke-cluster us-central1-b/gke-us \
    --service-account-key-file=SERVICE_ACCOUNT_KEY_PATH \
    --project=PROJECT_ID

gcloud container hub memberships register gke-eu \
    --gke-cluster europe-west1-b/gke-eu \
    --service-account-key-file=SERVICE_ACCOUNT_KEY_PATH \
    --project=PROJECT_ID
Replace the following:
- SERVICE_ACCOUNT_KEY_PATH: the local file path to the service account's private key JSON file that you downloaded in the registration prerequisites. This service account key is stored as a secret named creds-gcp in the gke-connect namespace.
Confirm that your clusters have successfully registered to Connect:
gcloud container hub memberships list --project=PROJECT_ID
The output is similar to the following:
NAME     EXTERNAL_ID
gke-us   0375c958-38af-11ea-abe9-42010a800191
gke-eu   d3278b78-38ad-11ea-a846-42010a840114
After you register your clusters, GKE deploys the following components to your cluster: gke-mcs-importer, mcs-core-dns, and mcs-core-dns-autoscaler.
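You can check that these components are running in each registered cluster. The namespace they are deployed to can vary by GKE version, so the following sketch simply searches across all namespaces:

# Look for the multi-cluster components deployed after registration
kubectl --context=gke-us get pods --all-namespaces | grep -E 'gke-mcs-importer|mcs-core-dns'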
Specify a config cluster
The config cluster is a GKE cluster you choose to be the central point of control for Ingress across the member clusters. This cluster must already be registered to the fleet. For more information, see Config cluster design.
Enable Multi Cluster Ingress and select gke-us as the config cluster:
gcloud beta container hub ingress enable \
--config-membership=gke-us
The config cluster takes up to 15 minutes to register. The successful output is similar to the following:
Waiting for Feature to be created...done.
Waiting for controller to start...done.
The unsuccessful output is similar to the following:
Waiting for controller to start...failed.
ERROR: (gcloud.alpha.container.hub.ingress.enable) Controller did not start in 2 minutes. Please use the `describe` command to check Feature state for debugging information.
If a failure occurred in the previous step, then check the feature state:
gcloud beta container hub ingress describe
The successful output is similar to the following:
createTime: '2021-02-04T14:10:25.102919191Z'
membershipStates:
projects/PROJECT_ID/locations/global/memberships/CLUSTER_NAME:
state:
code: ERROR
description: '...is not a VPC-native GKE Cluster.'
updateTime: '2021-08-10T13:58:50.298191306Z'
projects/PROJECT_ID/locations/global/memberships/CLUSTER_NAME:
state:
code: OK
updateTime: '2021-08-10T13:58:08.499505813Z'
To learn more about troubleshooting errors with Multi Cluster Ingress, see Troubleshooting and operations.
Shared VPC
You can deploy a MultiClusterIngress
resource for clusters in a
Shared VPC network, but all the participating backend GKE
clusters must be in the same project. Having GKE clusters in
different projects using the same Cloud Load Balancing VIP is not supported.
In non-Shared VPC networks, the Multi Cluster Ingress controller manages firewall rules to allow health checks to pass from the load balancer to container workloads.
In a Shared VPC network, a host project administrator must manually create the firewall rules for load balancer traffic on behalf of the Multi Cluster Ingress controller.
The following command shows the firewall rule that you must create if your clusters are on a Shared VPC network. The source ranges are the ranges that the load balancer uses to send traffic to backends. This rule must exist for the operational lifetime of a MultiClusterIngress resource.
If your clusters are on a Shared VPC network, create the firewall rule:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
--project=HOST_PROJECT \
--network=SHARED_VPC \
--direction=INGRESS \
--allow=tcp:0-65535 \
--source-ranges=130.211.0.0/22,35.191.0.0/16
Replace the following:
- FIREWALL_RULE_NAME: the name of the new firewall rule that you choose.
- HOST_PROJECT: the ID of the Shared VPC host project.
- SHARED_VPC: the name of the Shared VPC network.
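After you create the rule, you can verify it in the host project. This is an optional check that uses the same placeholder names as the command above:

# Confirm the health check source ranges are allowed on the Shared VPC network
gcloud compute firewall-rules describe FIREWALL_RULE_NAME --project=HOST_PROJECT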
Known issues
InvalidValueError for field config_membership
A known issue prevents the Google Cloud CLI from interacting with Multi Cluster Ingress. This issue was introduced in version 346.0.0 and was fixed in version 348.0.0. We do not recommend using the gcloud CLI versions 346.0.0 and 347.0.0 with Multi Cluster Ingress.
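To check which gcloud CLI version you have installed, and update it if it falls in the affected range:

# Show the installed Google Cloud CLI version
gcloud version
# Update to the latest version if you are on 346.0.0 or 347.0.0
gcloud components update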
Invalid value for field 'resource'
Google Cloud Armor cannot communicate with Multi Cluster Ingress config clusters running on the following GKE versions:
- 1.18.19-gke.1400 and later
- 1.19.10-gke.700 and later
- 1.20.6-gke.700 and later
When you configure a Google Cloud Armor security policy, the following message appears:
Invalid value for field 'resource': '{"securityPolicy": "global/securityPolicies/"}': The given policy does not exist
To avoid this issue, upgrade your config cluster to version 1.21 or later, or use the following command to update the BackendConfig CustomResourceDefinition:
kubectl patch crd backendconfigs.cloud.google.com --type='json' -p='[{"op": "replace", "path": "/spec/versions/1/schema/openAPIV3Schema/properties/spec/properties/securityPolicy", "value":{"properties": {"name": {"type": "string"}}, "required": ["name" ],"type": "object"}}]'
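To confirm that the patch was applied, you can inspect the securityPolicy field of the patched CRD schema. The jsonpath below assumes the same schema location targeted by the patch command; the version index can differ depending on your cluster:

# Should print an object schema that requires a "name" field
kubectl get crd backendconfigs.cloud.google.com \
    -o jsonpath='{.spec.versions[1].schema.openAPIV3Schema.properties.spec.properties.securityPolicy}'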