Deploying Ingress across clusters

Overview

Ingress for Anthos (Ingress) is a cloud-hosted multi-cluster Ingress controller for Anthos GKE clusters. It's a Google-hosted service that supports deploying shared load balancing resources across clusters and across regions. This page shows you how to configure Ingress for Anthos to route traffic across multiple clusters in different regions.

You can also learn more about how Ingress for Anthos works and how the external HTTP(S) load balancer works.

Environmental prerequisites

Ingress for Anthos is supported with the following limitations:

  • GKE clusters on GCP. On-prem clusters are not currently supported.
  • GKE clusters in the Rapid or Regular release channels.
  • Clusters in VPC-Native (Alias IP) mode. For more information, see Creating a VPC-native cluster.
  • HTTP load balancing enabled. Clusters have HTTP load balancing enabled by default; do not disable it.
  • gcloud version 281 or higher. The GKE cluster registration steps require this version or higher.
  • The external HTTP(S) load balancer only. Ingress for Anthos does not support internal load balancing.
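The gcloud version requirement above can be checked with a small shell sketch. The version string below is a stand-in; in practice you would substitute the real output, for example the first line of gcloud --version:

```shell
#!/bin/sh
# Sketch: the major-version check described above, run against a sample
# version string. In practice, substitute the real output, for example
# from: gcloud --version | awk 'NR==1 {print $4}'
sample="281.0.0"
major=${sample%%.*}   # strip everything after the first dot
if [ "$major" -ge 281 ]; then
  echo "gcloud $sample is new enough"
else
  echo "gcloud $sample is too old; run: gcloud components update"
fi
```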

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone.

Using gcloud config

  • Set your default project ID:
    gcloud config set project project-id
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone compute-zone
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region compute-region
  • Update gcloud to the latest version:
    gcloud components update
  • Enable the Hub API in your project:

    gcloud services enable gkehub.googleapis.com
    
  • Enable the Anthos API in your project:

    gcloud services enable anthos.googleapis.com
    
  • Enable the Ingress for Anthos API in your project:

    gcloud services enable multiclusteringress.googleapis.com
    
  • To use Ingress for Anthos, your clusters must be in VPC-Native mode. Ingress for Anthos uses network endpoint groups (NEGs) to create backends for the HTTP(S) load balancer. If your existing clusters are not in VPC-Native mode, delete them and recreate them in this mode by including the --enable-ip-alias flag.

Preparing the environment

Creating the clusters

This example uses the following three clusters:

NAME      LOCATION
gke-asia  asia-northeast1-a
gke-eu    europe-west1-c
gke-us    us-central1-a
  1. Create the gke-asia cluster:

    gcloud beta container clusters create gke-asia --zone asia-northeast1-a \
      --release-channel stable --enable-ip-alias
    
  2. Create the gke-eu cluster:

    gcloud beta container clusters create gke-eu --zone europe-west1-c \
      --release-channel stable --enable-ip-alias
    
  3. Create the gke-us cluster:

    gcloud beta container clusters create gke-us --zone us-central1-a \
      --release-channel stable --enable-ip-alias
    

Registering your clusters

GKE Hub enables you to operate your Kubernetes clusters in hybrid environments. Each cluster must be registered as a member of a Hub.

Before you can register your clusters, you must complete the prerequisite steps.

After you have created a service account and downloaded its private key, you can register your clusters.

  1. Find the URIs for your clusters:

    gcloud container clusters list --uri
    
  2. Register the gke-asia cluster:

    gcloud container hub memberships register gke-asia \
        --project=project-id \
        --gke-uri=uri \
        --service-account-key-file=service-account-key-path
    

    where:

    • project-id is your Project ID.
    • uri is the URI of the GKE cluster.
    • service-account-key-path is the local file path to the service account's private key JSON file downloaded as part of Prerequisites. This service account key is stored as a secret named creds-gcp in the gke-connect namespace.
  3. Register the gke-eu cluster:

    gcloud container hub memberships register gke-eu \
        --project=project-id \
        --gke-uri=uri \
        --service-account-key-file=service-account-key-path
    
  4. Register the gke-us cluster:

    gcloud container hub memberships register gke-us \
        --project=project-id \
        --gke-uri=uri \
        --service-account-key-file=service-account-key-path
    
  5. Verify your clusters are registered:

    gcloud container hub memberships list
    

    The output should look similar to this:

    NAME                                  EXTERNAL_ID
    gke-us                                0375c958-38af-11ea-abe9-42010a800191
    gke-asia                              6cf0b237-38ae-11ea-aefd-42010a920fd1
    gke-eu                                d3278b78-38ad-11ea-a846-42010a840114
    

Specifying a config cluster

The config cluster is a GKE cluster you choose to be the central point of control for Ingress across the member clusters. Unlike GKE Ingress, the Anthos Ingress controller does not live in a single cluster but is a Google-managed service that watches resources in the config cluster. This GKE cluster is used as a multi-cluster API server to store resources such as MultiClusterIngress and MultiClusterService. Any member cluster can become a config cluster, but there can only be one config cluster at a time.

Figure: Ingress for Anthos architecture.

If the config cluster is down or inaccessible, MultiClusterIngress and MultiClusterService objects cannot be updated across the member clusters. In the case of an outage, load balancers and traffic continue to function independently of the config cluster.

Enabling Ingress for Anthos and selecting the config cluster occur in the same step. The GKE cluster you choose as the config cluster must already be registered as a member of the Hub.

  1. Identify the URI of the cluster you want to specify as the config cluster:

    gcloud container hub memberships list
    

    The output is similar to this:

    NAME                                  EXTERNAL_ID
    gke-us                                0375c958-38af-11ea-abe9-42010a800191
    gke-asia                              6cf0b237-38ae-11ea-aefd-42010a920fd1
    gke-eu                                d3278b78-38ad-11ea-a846-42010a840114
    
  2. Enable Ingress for Anthos and select gke-us as the config cluster:

    gcloud alpha container hub features multiclusteringress enable \
      --config-membership=projects/project_id/locations/global/memberships/gke-us
    

    The output is similar to this:

    Waiting for Feature to be created...done.
    
  3. To check the status, run:

    gcloud alpha container hub features multiclusteringress describe
    

    The output is similar to this:

    featureState:
      detailsByMembership:
        projects/393818921412/locations/global/memberships/0375c958-38af-11ea-abe9-42010a800191:
          code: OK
    lifecycleState: ENABLED
    multiclusteringressFeatureSpec:
    configMembership: projects/project_id/locations/global/memberships/0375c958-38af-11ea-abe9-42010a800191
    name: projects/project_id/locations/global/features/multiclusteringress
    updateTime: '2020-01-22T19:16:51.172840703Z'
    

    While lifecycleState is marked as enabled, this feature is not ready to use until your config cluster has an OK status. If you do not see code: OK after a few minutes, go to the troubleshooting section.

Shared VPC deployment

Ingress for Anthos can be deployed for clusters that reside in a Shared VPC network, but all of the participating backend GKE clusters must be in the same project. Having GKE clusters in different projects using the same Cloud Load Balancing VIP is not supported.

In non-shared VPC networks, the Ingress for Anthos controller manages firewall rules to allow health checks to pass from the Cloud Load Balancing to container workloads.

In a Shared VPC network Ingress for Anthos is not capable of managing these firewall rules because firewalling is managed by the host project, which service project administrators do not have access to. The centralized security model of Shared VPC networks intentionally centralizes network control. In Shared VPC networks a host project administrator must manually create the necessary firewall rules for Cloud Load Balancing traffic on behalf of Ingress for Anthos.

The following command shows the firewall rule that you must create if your clusters are on a Shared VPC network. The source ranges are the ranges that Cloud Load Balancing uses to send traffic to backends in Google Cloud. This rule must exist for the operational lifetime of Ingress for Anthos and can only be removed if load balancing from Cloud Load Balancing to GKE is no longer used.

  • If your clusters are on a Shared VPC network, create the firewall rule:

    gcloud compute firewall-rules create firewall-rule-name \
        --project host-project \
        --network shared-vpc \
        --direction INGRESS \
        --allow tcp:0-65535 \
        --source-ranges 130.211.0.0/22,35.191.0.0/16
    

Ingress deployment

The following sections show how to deploy an Ingress that serves an application across multiple GKE clusters. You deploy a fictional app called zoneprinter in the three clusters you previously created. The Ingress provides a shared virtual IP (VIP) address for the app deployments.

MultiClusterIngress

In this section you create a Kubernetes Deployment in each of the three clusters. The Deployments do not have to be identical in every way, but to be exposed by the same Ingress they must share the following characteristics across clusters:

  • They are deployed into a namespace with the same name in each cluster.
  • They share the same set of labels so that they can be selected as a unit across the clusters.

Ingress for Anthos assumes that namespaces are cross-cluster entities; that is, namespaces with the same name in different clusters represent the same authorization, isolation, and grouping boundary from the user's perspective.

The following deploy.yaml defines a set of Pods for zoneprinter, labeled app: zoneprinter, that serve the application on port 80:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zoneprinter
  namespace: prod
  labels:
    app: zoneprinter
spec:
  selector:
    matchLabels:
      app: zoneprinter
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: zoneprinter
    spec:
      containers:
      - name: frontend
        image: gcr.io/google-samples/zone-printer:0.1
        ports:
        - containerPort: 80
  1. Create the Deployment:

    kubectl apply -f deploy.yaml
    

    The output is similar to:

    deployment.apps/zoneprinter created
    
  2. Verify the application is deployed across all three clusters:

    kubectl get deploy -o wide
    

    The output for each cluster is similar to this:

    NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                   SELECTOR
    zoneprinter   1/1     1            1           11s   frontend     gcr.io/google-samples/zone-printer:0.1   app=zoneprinter
    
  3. View your clusters:

    kubectl config get-clusters
    

    The output is similar to this:

    Name
    gke_project-id_us-central1-a_gke-us
    gke_project-id_asia-northeast1-a_gke-asia
    gke_project-id_europe-west1-c_gke-eu
    
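The per-cluster steps above can be sketched as a single loop. Note that the manifest targets the prod namespace, which must exist in each cluster before the Deployment can be created in it. The context names below are hypothetical; substitute the output of kubectl config get-clusters for your project. The sketch only echoes the commands unless RUN=1 is set:

```shell
#!/bin/sh
# Sketch only: echoes each command unless RUN=1 is set in the environment.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Hypothetical context names; list yours with `kubectl config get-clusters`.
for ctx in gke_project-id_us-central1-a_gke-us \
           gke_project-id_europe-west1-c_gke-eu \
           gke_project-id_asia-northeast1-a_gke-asia; do
  # The prod namespace must exist before deploy.yaml can be applied into it.
  run kubectl --context "$ctx" create namespace prod
  run kubectl --context "$ctx" apply -f deploy.yaml
done
```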

Creating MultiClusterIngress and MultiClusterServices

To create the load balancer, you need two resources: a MultiClusterIngress and one or more MultiClusterServices. MultiClusterIngress and MultiClusterService objects are multi-cluster analogs to the existing Kubernetes Ingress and Service resources used in the single cluster context.

MultiClusterIngress

The MultiClusterIngress resource describes the protocol termination, virtual host, and path mappings to Services. A MultiClusterService object describes a Kubernetes Service spanning one or more Kubernetes clusters. Both resources must be created in the cluster you designated as the config cluster when enabling the feature.

The following mci.yaml describes the load balancer frontend:

apiVersion: networking.gke.io/v1beta1
kind: MultiClusterIngress
metadata:
  name: zone-mci
  namespace: prod
spec:
  template:
    spec:
      backend:
        serviceName: zone-svc
        servicePort: 80

The configuration routes all traffic to the MultiClusterService named "zone-svc" that exists in the "prod" namespace.

A MultiClusterIngress has the same schema as the Kubernetes Ingress, and its semantics are the same with one exception: the backend.serviceName field references a MultiClusterService in the Hub API rather than a Service in a Kubernetes cluster. All other Ingress settings, such as TLS termination, can be configured in the same way as before.

  1. Point kubectl to your config cluster:

    kubectl config use-context gke_project-id_us-central1-a_gke-us
    
  2. Create the MultiClusterIngress resource from the mci.yaml manifest:

    kubectl apply -f mci.yaml
    

    The output is similar to:

    multiclusteringress.networking.gke.io/zone-mci created
    

MultiClusterService

The zone-svc referenced from the MultiClusterIngress resource is a MultiClusterService also defined in the cluster denoted as your "config membership". The resource represents a Kubernetes Service spanning multiple Kubernetes clusters.

The MultiClusterService definition consists of two pieces:

  1. A "template" section that defines the Service to be created in the Kubernetes clusters.

  2. An optional "clusters" section that defines which clusters receive traffic and the load balancing properties for each cluster. If the "clusters" section is not specified or if no clusters are listed, it defaults to "all" clusters.

To continue our example, the following mcs.yaml defines the zone-svc referenced from the Ingress for Anthos definition, sending traffic to the set of clusters mapped to Memberships in the Hub. Specifically, the HTTP(S) load balancer created sends traffic to the Asia, US, and EU clusters based on the location of the client.

Further, the HTTP(S) load balancers created by Ingress for Anthos use NEGs as the backend representation.

apiVersion: networking.gke.io/v1beta1
kind: MultiClusterService
metadata:
  name: zone-svc
  namespace: prod
spec:
  template:
    spec:
      selector:
        app: zoneprinter
      ports:
      - name: web
        protocol: TCP
        port: 80
        targetPort: 80
  1. Create the MultiClusterService resource from the mcs.yaml manifest:

    kubectl apply -f mcs.yaml
    

    The output is similar to:

    multiclusterservice.networking.gke.io/zone-svc created
    

Verifying status

Once you have deployed the MultiClusterIngress and MultiClusterService resources, the load balancer takes a few minutes to be provisioned. The load balancer is considered created once it has an IP address (VIP).

  1. To check if the load balancer is created, fetch the status of the MCI resource:

    kubectl get mci zone-mci -o yaml
    

    The output is similar to:

    ...
    spec:
      template:
        spec:
          backend:
            serviceName: zone-svc
            servicePort: 80
    status:
      VIP: 35.244.232.60
    

    If you see a virtual IP (VIP), the load balancer was created successfully. This does not mean that it is ready to serve a 200 response; it can take a couple more minutes before the load balancer is provisioned so that it is reachable across the world.

    If you do not see a VIP in a couple minutes, or if the load balancer is not serving a 200 response within 10 minutes, go to the troubleshooting section.

  2. Once you see a VIP, curl the address to verify that traffic is being routed to the Google point of presence closest to you:

    curl vip
    

    where vip is the VIP from the previous output.

    The output is similar to:

    Welcome from Google Cloud datacenters at:
    Council Bluffs, Iowa, USA
    ...
    
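The VIP can also be pulled out of the MCI status programmatically. The sketch below runs an awk filter over a sample of the status shown above; against a live cluster you would pipe kubectl get mci zone-mci -o yaml into the same filter instead of the printf:

```shell
#!/bin/sh
# Sample of the status block from the step above; in a real cluster,
# replace the printf with: kubectl get mci zone-mci -o yaml
vip=$(printf 'status:\n  VIP: 35.244.232.60\n' | awk '/VIP:/ {print $2}')
echo "load balancer VIP: $vip"
```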

Cluster selection

By default, Services derived from Ingress for Anthos are scheduled on every member cluster. However, you might want to apply ingress rules only to specific clusters. Some use cases include:

  • Applying Ingress for Anthos to all clusters but the config cluster for isolation of the config cluster.
  • Migrating workloads between clusters in a blue-green fashion.
  • Routing to application backends that only exist in a subset of clusters.
  • Using a single L7 VIP for host/path routing to backends that live on different clusters.

Cluster selection allows you to select clusters by region/name in the MultiClusterService object. This controls which clusters your Ingress for Anthos is pointing to and where the derived Services are scheduled. Clusters within the same Hub and region should not have the same name so that clusters can be referenced uniquely.

  1. Open mcs.yaml

    apiVersion: networking.gke.io/v1beta1
    kind: MultiClusterService
    metadata:
      name: zone-svc-v1
      namespace: prod
    spec:
      template:
        spec:
          selector:
            app: zoneprinter
          ports:
          - name: web
            protocol: TCP
            port: 80
            targetPort: 80
    

    This specification currently creates Derived Services in all clusters, the default behavior.

  2. Append the following lines in the clusters section:

    apiVersion: networking.gke.io/v1beta1
    kind: MultiClusterService
    metadata:
      name: zone-svc-v1
      namespace: prod
    spec:
      template:
        spec:
          selector:
            app: zoneprinter
          ports:
          - name: web
            protocol: TCP
            port: 80
            targetPort: 80
      clusters:
      - link: "us-central1-a/gke-us"
      - link: "europe-west1-c/gke-eu"
    

    This example creates Derived Service resources only in the gke-us and gke-eu clusters. Listing clusters explicitly is how you apply ingress rules selectively; if the "clusters" section of the MultiClusterService is not specified, or if no clusters are listed, it is interpreted as the default "all" clusters.

HTTPS support

Ingress for Anthos supports HTTPS through Kubernetes Secrets. Before enabling HTTPS support, you must create a static IP address; a static IP allows HTTP and HTTPS to share the same IP address. For more information, see Creating a static IP.

Once you have created a static IP address, you can create a Secret.
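If you do not already have a key and certificate to put in the Secret, a self-signed pair is enough for testing. This is a sketch for testing only; the Common Name and file names are placeholders:

```shell
#!/bin/sh
# Generate a throwaway self-signed certificate for testing only.
# The CN and file names here are placeholders, not recommendations.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example.invalid" \
  -keyout tls.key -out tls.crt
ls -l tls.key tls.crt
```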

  1. Create a Secret:

    kubectl -n prod create secret tls secret-name --key /path/to/keyfile --cert /path/to/certfile
    

    where secret-name is the name of your Secret.

  2. Update the mci.yaml file with the static IP and Secret:

    apiVersion: networking.gke.io/v1beta1
    kind: MultiClusterIngress
    metadata:
      name: shopping-service
      namespace: prod
      annotations:
        networking.gke.io/static-ip: static-ip-address
    spec:
      template:
        spec:
          backend:
            serviceName: shopping-service
            servicePort: 80
          tls:
          - secretName: secret-name
    

BackendConfig support

The BackendConfig CRD allows you to customize settings on the Compute Engine BackendService resource. Currently, only the health check configuration is supported by BackendConfig. The supported specification is below:

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: zone-health-check-cfg
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: [int]
    timeoutSec: [int]
    healthyThreshold: [int]
    unhealthyThreshold: [int]
    type: [HTTP | HTTPS | HTTP2]
    port: [int]
    requestPath: [string]

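For example, a filled-in BackendConfig that checks /healthz over HTTP every 10 seconds might look like the following; the values are illustrative, not recommendations:

```yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: zone-health-check-cfg
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 5
    healthyThreshold: 2
    unhealthyThreshold: 3
    type: HTTP
    port: 8080
    requestPath: /healthz
```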
To use BackendConfig, attach it to your MultiClusterService resource using an annotation:

apiVersion: networking.gke.io/v1beta1
kind: MultiClusterService
metadata:
  name: zone1
  namespace: prod
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"8080":"zone-health-check-cfg"}}'
spec:
  template:
    spec:
      selector:
        app: zone1
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080

For more information about BackendConfig semantics, see Associating a service port with a BackendConfig.

Resource lifecycle

Configuration changes

MultiClusterIngress and MultiClusterService resources behave as standard Kubernetes objects, so changes to them are reflected in the system asynchronously. Any change that results in an invalid configuration leaves the associated Google Cloud objects unchanged, and the error is reported as an event on the object.

Managing Kubernetes resources

Deleting the Ingress object tears down the HTTP(S) load balancer so traffic is no longer forwarded to any defined MultiClusterService.

Deleting the MultiClusterService removes the associated derived services in each of the clusters.
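Using the resource names from the earlier example, the teardown described above might look like the following. The sketch echoes the commands instead of running them; set RUN=1 to execute, and remember to run them against the config cluster:

```shell
#!/bin/sh
# Sketch only: echoes each command unless RUN=1 is set in the environment.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Deleting the MultiClusterIngress tears down the HTTP(S) load balancer.
run kubectl -n prod delete mci zone-mci
# Deleting the MultiClusterService removes the derived Services in each cluster.
run kubectl -n prod delete mcs zone-svc
```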

Managing clusters

The set of clusters targeted by the load balancer can be changed by adding or removing a Membership.

For example, to remove a cluster (such as gke-asia) as a backend for an ingress, run:

gcloud container hub memberships unregister cluster-name \
  --gke-uri=uri

where:

  • cluster-name is the name of your cluster.
  • uri is the URI of the GKE cluster.

To add a cluster in Europe, run:

gcloud container hub memberships register europe-cluster \
  --context=europe-cluster --service-account-key-file=/path/to/service-account-key-file

Note that registering or unregistering a cluster changes its status as a backend for all Ingresses. In the above case, unregistering the gke-asia cluster removes it as an available backend for all Ingresses you create. The reverse is true for registering a new cluster.

Disabling Ingress for Anthos

In Beta, disabling Ingress for Anthos results in orphaned networking resources. To avoid this, delete your MultiClusterIngress and MultiClusterService resources, and verify that any associated networking resources are deleted, before you disable the feature.

Disable Ingress for Anthos:

gcloud beta container hub features multiclusteringress disable

Annotations

Creating a static IP

  1. Allocate a static IP:

    gcloud compute addresses create address-name --global
    

    where address-name is the name of the address to create.

    The output is similar to:

    Created [https://www.googleapis.com/compute/v1/projects/project-id/global/addresses/address-name].
    
  2. View the IP address you just created:

    gcloud compute addresses list
    

    The output is similar to:

    NAME          ADDRESS/RANGE  TYPE      STATUS
    address-name  34.102.201.47  EXTERNAL  RESERVED
    
  3. Apply the static IP by adding the networking.gke.io/static-ip annotation to mci.yaml, then reapplying the manifest:

    apiVersion: networking.gke.io/v1beta1
    kind: MultiClusterIngress
    metadata:
      name: shopping-service
      namespace: prod
      annotations:
        networking.gke.io/static-ip: static-ip-address
    spec:
      template:
        spec:
          backend:
            serviceName: shopping-service
            servicePort: 80

    kubectl apply -f mci.yaml
    

Pre-shared certificates

Pre-shared certificates are certificates uploaded to Google Cloud that can be used by the load balancer for TLS termination instead of certificates stored in Kubernetes Secrets. These certificates are uploaded out of band from GKE to Google Cloud and referenced by an Ingress for Anthos object. Multiple certificates, either through pre-shared certs or Kubernetes secrets, are also supported.

Using the certificates in Ingress for Anthos requires the networking.gke.io/pre-shared-certs annotation and the names of the certs. When multiple certificates are specified for a given Ingress for Anthos object, a predetermined order governs which cert is presented to the client.

You can list the available SSL certificates by running:

gcloud compute ssl-certificates list

In the example below, client traffic to one of the specified hosts matches the Common Name of one of the pre-shared certs, so the certificate matching the domain name is presented.

apiVersion: networking.gke.io/v1beta1
kind: MultiClusterIngress
metadata:
  name: shopping-service
  namespace: prod
  annotations:
    networking.gke.io/pre-shared-certs: "domain1-cert, domain2-cert"
spec:
  template:
    spec:
      rules:
      - host: my-domain1.gcp.com
        http:
          paths:
          - backend:
              serviceName: domain1-svc
              servicePort: 443
      - host: my-domain2.gcp.com
        http:
          paths:
          - backend:
              serviceName: domain2-svc
              servicePort: 443

Application protocols

The connection from the load balancer proxy to your application uses HTTP by default. Using the networking.gke.io/app-protocols annotation, you can configure the load balancer to use HTTPS or HTTP/2 when it forwards requests to your application.

apiVersion: networking.gke.io/v1beta1
kind: MultiClusterService
metadata:
  name: shopping-service
  namespace: prod
  annotations:
    networking.gke.io/app-protocols: '{"http2":"HTTP2"}'
spec:
  template:
    spec:
      ports:
      - port: 443
        name: http2

Known issues and limitations

These are issues reported during the Ingress for Anthos Beta that may limit usage to certain patterns or require workarounds.

  • Compute Engine load balancer resources are created with a name containing the prefix "mci-[6 char hash]". This prefix is used to garbage collect resources that are no longer required. Because the prefix contains a hash, it is unlikely that your project contains unrelated Compute Engine resources with the same prefix; if it does, keep in mind that those resources will be deleted. If you are experiencing this, reach out to gke-mci-feedback@google.com.

  • Hub Memberships must be between clusters in the same project.

  • Configuration of HTTPS now requires a pre-allocated static IP address. Future releases will alleviate this requirement. Specifically, in the future, Ingress for Anthos will allocate a static IP address for you if you do not provide one.

  • Ingress for Anthos and Hub support registering a maximum of 15 clusters. The maximum cluster count will be increased in future releases. Reach out to gke-mci-feedback@google.com if you have requirements for registering more clusters.

  • In a single project, there is a hard limit of 50 MCI resources and 100 MultiClusterService resources.

Pricing

Standard Compute Engine load balancer pricing applies to traffic and load balancers created through Ingress for Anthos. During Beta, Ingress for Anthos does not incur any additional charges. After graduating to GA, Ingress for Anthos will be part of the Anthos on Google Cloud licensing tier and will require Anthos licensing.

Troubleshooting

Config membership status not "OK"

If you do not see an 'OK' status for the config cluster, an internal error may have occurred. Contact gke-mci-feedback@google.com for assistance.

VIP not created

If you do not see a VIP, then an error may have occurred during its creation. To see if such an error did occur, run the following command:

kubectl describe mci shopping-service

The output may look similar to:

Name:         shopping-service
Namespace:    prod
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         MultiClusterIngress
Metadata:
  Creation Timestamp:  2019-07-16T17:23:14Z
  Finalizers:
    mci.finalizer.networking.gke.io
Spec:
  Template:
    Spec:
      Backend:
        Service Name:  shopping-service
        Service Port:  80
Status:
  VIP:  34.102.212.68
Events:
  Type     Reason  Age   From                              Message
  ----     ------  ----  ----                              -------
  Warning  SYNC    29s   multi-cluster-ingress-controller  error translating MCI prod/shopping-service: exceeded 4 retries with final error: error translating MCI prod/shopping-service: multiclusterservice prod/shopping-service does not exist

In this example, the error was that the user did not create a MultiClusterService resource that was referenced by a MultiClusterIngress.

In most cases, the error message indicates the underlying issue. In case it does not, please contact gke-mci-feedback@google.com for assistance.

502 response

If your load balancer acquired a VIP but is consistently serving a 502 response, the load balancer health checks may be failing. Health checks could fail for two reasons:

  1. Application Pods are not healthy (see Cloud Console debugging for example).
  2. Misconfigured firewall is blocking Google health checkers from performing health checks.

In the case of #1, make sure that your application is in fact serving a 200 response on the "/" path.

In the case of #2, make sure that a firewall rule named "mci-default-l7" exists in your VPC. The ingress controller creates this firewall rule in your VPC to ensure that Google health checkers can reach your backends. If the rule does not exist, make sure no external automation is deleting it after the controller creates it.

Traffic not added to or removed from cluster

When adding a new Membership, traffic should reach the backends in the underlying cluster when applicable. Similarly, if a Membership is removed, no traffic should reach the backends in the underlying cluster. If you are not observing this behavior, check for errors on the MultiClusterIngress and MultiClusterService resource.

Common cases in which this error would occur include adding a new Membership on a GKE cluster that is not in VPC-native mode or adding a new Membership but not deploying an application in the GKE cluster.

  1. Describe the MultiClusterService:

    kubectl describe mcs zone-svc
    
  2. Describe the MultiClusterIngress:

    kubectl describe mci zone-mci
    

If the above commands do not reveal the error, contact gke-mci-feedback@google.com for assistance.

Console debugging

In most cases, checking the exact state of the load balancer is helpful when debugging an issue. You can find the load balancer by going to Load balancing in the Google Cloud Console.

What's next