Setting up Multi-cluster Ingress

This page shows you how to configure Multi-cluster Ingress to route traffic across multiple clusters in different regions. Multi-cluster Ingress (MCI) is a cloud-hosted multi-cluster Ingress controller for Anthos clusters. It's a Google-hosted service that supports deploying shared load balancing resources across clusters and across regions.

To learn more, see How Multi-cluster Ingress works and How the External HTTP(S) Load Balancer works.

Requirements for Multi-cluster Ingress

Multi-cluster Ingress is supported on:

  • GKE clusters on Google Cloud. GKE on-prem clusters are not currently supported.
  • GKE clusters in all GKE Release Channels.
  • Clusters in VPC-native (Alias IP) mode. For more information, see Creating a VPC-native cluster.
  • Clusters that have HTTP load balancing enabled, which is the default. Note that Multi-cluster Ingress supports only the external HTTP(S) load balancer.
  • Clusters covered by Anthos licensing. Multi-cluster Ingress is part of Anthos on Google Cloud.

Additionally, registering your clusters for Multi-cluster Ingress requires Cloud SDK version 290 or higher.
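
To check which version you have installed, run:

    gcloud --version

If the reported Google Cloud SDK version is lower than 290, update it with gcloud components update.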

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone.
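
To confirm the defaults you have set, you can print your active configuration:

    gcloud config list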

Using gcloud config

  • Set your default project ID:
    gcloud config set project PROJECT_ID
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone COMPUTE_ZONE
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region COMPUTE_REGION
  • Update gcloud to the latest version:
    gcloud components update
  • Enable the environ API in your project:

    gcloud services enable gkehub.googleapis.com
    
  • Enable the Anthos API in your project:

    gcloud services enable anthos.googleapis.com
    
  • Enable the Multi-cluster Ingress API in your project:

    gcloud services enable multiclusteringress.googleapis.com
    
  • To use Multi-cluster Ingress, your clusters must be in VPC-native mode. Multi-cluster Ingress uses network endpoint groups (NEGs) to create backends for the HTTP(S) load balancer. If your existing clusters are not in VPC-native mode, delete and recreate the clusters, ensuring VPC-native is enabled by using the --enable-ip-alias flag.
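
To check whether an existing cluster is already VPC-native before deciding to recreate it, you can inspect its IP allocation policy. This sketch assumes a zonal cluster, with cluster-name and compute-zone as placeholders:

    gcloud container clusters describe cluster-name \
        --zone compute-zone \
        --format="value(ipAllocationPolicy.useIpAliases)"

If the command prints True, the cluster is VPC-native.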

Preparing the environment

Creating the clusters

This example uses the following clusters:

NAME      LOCATION
gke-eu    europe-west1-c
gke-us    us-central1-a

  1. Create the gke-eu cluster:

    gcloud container clusters create gke-eu --zone europe-west1-c \
      --release-channel stable --enable-ip-alias
    
  2. Create the gke-us cluster:

    gcloud container clusters create gke-us --zone us-central1-a \
      --release-channel stable --enable-ip-alias
    
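
With both clusters created, you can fetch credentials for each so that later kubectl commands can target them:

    gcloud container clusters get-credentials gke-eu --zone europe-west1-c
    gcloud container clusters get-credentials gke-us --zone us-central1-a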

Registering your clusters

Connect enables you to operate your Kubernetes clusters in hybrid environments. Each cluster must be registered as a member of an environ. For more information, see Multi-cluster Ingress architecture.

  1. Ensure you have completed the Prerequisites for registering a cluster.

  2. Ensure you have created a service account and downloaded its private key.

  3. Find the URIs for your clusters:

    gcloud container clusters list --uri
    
  4. Register the gke-eu cluster:

    gcloud container hub memberships register gke-eu \
        --project=project-id \
        --gke-uri=uri \
        --service-account-key-file=service-account-key-path
    

    where:

    • project-id is your project ID.
    • uri is the URI of the GKE cluster.
    • service-account-key-path is the local file path to the service account's private key JSON file downloaded as part of registration prerequisites. This service account key is stored as a secret named creds-gcp in the gke-connect namespace.
  5. Register the gke-us cluster:

    gcloud container hub memberships register gke-us \
        --project=project-id \
        --gke-uri=uri \
        --service-account-key-file=service-account-key-path
    
  6. Verify your clusters are registered:

    gcloud container hub memberships list
    

    The output should look similar to this:

    NAME                                  EXTERNAL_ID
    gke-us                                0375c958-38af-11ea-abe9-42010a800191
    gke-eu                                d3278b78-38ad-11ea-a846-42010a840114
    
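
You can also confirm that the Connect Agent is running in each registered cluster. For example, with your kubectl context pointing at one of the clusters, list the pods in the gke-connect namespace (the namespace mentioned in the registration steps above):

    kubectl get pods -n gke-connect

A pod in the Running state indicates a healthy registration.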

Specifying a config cluster

The config cluster is a GKE cluster you choose to be the central point of control for Ingress across the member clusters. Unlike GKE Ingress, the Anthos Ingress controller does not live in a single cluster but is a Google-managed service that watches resources in the config cluster. This GKE cluster is used as a multi-cluster API server to store resources such as MultiClusterIngress and MultiClusterService. Any member cluster can become a config cluster, but there can only be one config cluster at a time.

For more information about config clusters, see Config cluster design.
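
As an illustration of the kinds of resources stored in the config cluster, the following sketch shows a minimal MultiClusterService and MultiClusterIngress pair. The names, namespace, app label, and port here are hypothetical, and the apiVersion may differ in your release:

    apiVersion: networking.gke.io/v1beta1
    kind: MultiClusterService
    metadata:
      name: foo-mcs            # hypothetical name
      namespace: default
    spec:
      template:
        spec:
          selector:
            app: foo           # hypothetical app label
          ports:
          - name: web
            protocol: TCP
            port: 8080
            targetPort: 8080
    ---
    apiVersion: networking.gke.io/v1beta1
    kind: MultiClusterIngress
    metadata:
      name: foo-ingress        # hypothetical name
      namespace: default
    spec:
      template:
        spec:
          backend:
            serviceName: foo-mcs
            servicePort: 8080

You would save this manifest to a file (for example, mci-example.yaml) and apply it to the config cluster with kubectl apply -f mci-example.yaml. These resources take effect only after Multi-cluster Ingress is enabled, as described below.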

If the config cluster is down or inaccessible, MultiClusterIngress and MultiClusterService objects cannot be updated across the member clusters. However, load balancers and traffic continue to function independently of the config cluster during an outage.

Enabling Multi-cluster Ingress and selecting the config cluster occurs in the same step. The GKE cluster you choose as the config cluster must already be registered to an environ.

  1. Identify the URI of the cluster you want to specify as the config cluster:

    gcloud container hub memberships list
    

    The output is similar to this:

    NAME                                  EXTERNAL_ID
    gke-us                                0375c958-38af-11ea-abe9-42010a800191
    gke-eu                                d3278b78-38ad-11ea-a846-42010a840114
    
  2. Enable Multi-cluster Ingress and select gke-us as the config cluster:

    gcloud alpha container hub ingress enable \
      --config-membership=projects/project_id/locations/global/memberships/gke-us
    

    The output is similar to this:

    Waiting for Feature to be created...done.
    

    Note that this process can take a few minutes while the controller is bootstrapping. If successful, the output is similar to this:

    Waiting for Feature to be created...done.
    Waiting for controller to start...done.
    

    If unsuccessful, the command times out as shown below:

    Waiting for controller to start...failed.
    ERROR: (gcloud.alpha.container.hub.ingress.enable) Controller did not start in 2 minutes. Please use the `describe` command to check Feature state for debugging information.
    

    If no failure occurred in the previous step, you can proceed to the next steps. If a failure occurred, check the feature state; it should indicate exactly what went wrong:

    gcloud alpha container hub ingress describe
    

    An example failure state is below:

    featureState:
      detailsByMembership:
        projects/393818921412/locations/global/memberships/0375c958-38af-11ea-abe9-42010a800191:
          code: FAILED,
          description: "... is not a VPC-native cluster..."
    lifecycleState: ENABLED
    multiclusteringressFeatureSpec:
      configMembership: projects/project_id/locations/global/memberships/0375c958-38af-11ea-abe9-42010a800191
    name: projects/project_id/locations/global/features/multiclusteringress
    updateTime: '2020-01-22T19:16:51.172840703Z'
    

    For more information about such error messages, see Troubleshooting and operations.
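
If you want to script this check, a format flag can extract just the lifecycle state field shown in the example output above (field name taken from that output; adjust it if your output differs):

    gcloud alpha container hub ingress describe --format="value(lifecycleState)"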

Shared VPC deployment

Multi-cluster Ingress can be deployed for clusters in a Shared VPC network, but all of the participating backend GKE clusters must be in the same project. Having GKE clusters in different projects using the same Cloud Load Balancing VIP is not supported.

In non-Shared VPC networks, the Multi-cluster Ingress controller manages the firewall rules that allow health checks to pass from Cloud Load Balancing to container workloads.

In a Shared VPC network, Multi-cluster Ingress is not capable of managing these firewall rules because firewalling is managed by the host project, which service project administrators do not have access to. The centralized security model of Shared VPC networks intentionally centralizes network control. In Shared VPC networks, a host project administrator must manually create the necessary firewall rules for Cloud Load Balancing traffic on behalf of Multi-cluster Ingress.

The following command shows the firewall rule that you must create if your clusters are on a Shared VPC network. The source ranges are the ranges that Cloud Load Balancing uses to send traffic to backends in Google Cloud. This rule must exist for the operational lifetime of Multi-cluster Ingress and can be removed only if Multi-cluster Ingress, or load balancing from Cloud Load Balancing to GKE, is no longer used.

  • If your clusters are on a Shared VPC network, create the firewall rule:

    gcloud compute firewall-rules create firewall-rule-name \
        --project host-project \
        --network shared-vpc \
        --direction INGRESS \
        --allow tcp:0-65535 \
        --source-ranges 130.211.0.0/22,35.191.0.0/16
    
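
After creating the rule, you can verify that it exists in the host project:

    gcloud compute firewall-rules describe firewall-rule-name \
        --project host-project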

What's next