Ingress for Anthos

Overview

Ingress for Anthos (Ingress) is a cloud-hosted multi-cluster ingress controller for Anthos GKE clusters. It's a Google-hosted service that supports deploying shared load balancing resources across clusters and across regions.

Multi-cluster networking

Many factors drive multi-cluster topologies, including close user proximity for apps, cluster and regional high availability, security and organizational separation, cluster migration, and data locality. These use cases are rarely isolated. As the reasons for multiple clusters grow, the need for a formal and productized multi-cluster platform becomes more urgent.

Ingress for Anthos is designed to meet the load balancing needs of multi-cluster, multi-regional environments. It's a controller for the external HTTP(S) load balancer that provides ingress for internet traffic across one or more clusters.

Ingress for Anthos's multi-cluster support satisfies many use cases, including:

  • A single, consistent virtual IP (VIP) for an app, independent of where the app is deployed globally.
  • Multi-regional, multi-cluster availability through health checking and traffic failover.
  • Proximity-based routing through public Anycast VIPs for low client latency.
  • Transparent cluster migration for upgrades or cluster rebuilds.

How Ingress for Anthos works

Ingress for Anthos builds on the architecture of external HTTP(S) Load Balancing. HTTP(S) Load Balancing is a globally distributed load balancer with proxies deployed at more than 100 Google points of presence (PoPs) around the world. These proxies sit at the edge of Google's network, positioned close to clients. Load balancer VIPs are advertised as Anycast IPs. Client requests are routed cold potato to Google PoPs, meaning that internet traffic enters at the closest PoP and reaches the Google backbone as quickly as possible.

Terminating HTTP and HTTPS connections at the edge allows the Google load balancer to decide where to route traffic by determining backend availability before traffic enters a data center or region. This gives traffic the most efficient path from the client to the backend while considering the backends' health and capacity.

Ingress for Anthos is an ingress controller that programs the external HTTP(S) load balancer using network endpoint groups (NEGs). When you create a MultiClusterIngress resource, GKE deploys Compute Engine load balancer resources and configures the appropriate Pods across clusters as backends. The NEGs are used to track Pod endpoints dynamically so the Google load balancer has the right set of healthy backends.

Ingress for Anthos traffic flow

As you deploy applications across clusters in GKE, Ingress for Anthos ensures that the load balancer is in sync with events that occur in the cluster:

  • A Deployment is created with the right matching labels.
  • A Pod's process dies and fails its health check.
  • A cluster is removed from the pool of backends.

Ingress for Anthos updates the load balancer, keeping it consistent.

Ingress for Anthos architecture

Ingress for Anthos runs as a service outside of the cluster and is managed by Google Cloud. The ingress controller watches for MultiClusterIngress and MultiClusterService resources in GKE and configures load balancers and NEGs as a result.

The following three components make up Ingress for Anthos:

  • Anthos ingress controller - This is a globally distributed control plane that runs as a service outside of your clusters. This allows the lifecycle and operations of the controller to be independent of GKE clusters.

  • Config cluster - This is a chosen GKE cluster running on Google Cloud where the MultiClusterIngress and MultiClusterService resources are deployed. This is a centralized point of control for these multi-cluster resources. These multi-cluster resources exist in and are accessible from a single logical API to retain consistency across all clusters. The ingress controller watches the config cluster and reconciles the load balancing infrastructure.

  • Member cluster - To participate as a backend through Ingress for Anthos, you must register each GKE cluster as a member. Registration makes the ingress controller aware of the cluster so that it can act as a backend for a given MultiClusterIngress.
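
To illustrate how member clusters participate as backends, the following MultiClusterService sketch selects Pods by label and restricts backends to two registered member clusters through the clusters field. The resource names, namespace, and cluster links are hypothetical, and the networking.gke.io/v1beta1 API group is assumed:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: MultiClusterService
metadata:
  name: whereami          # hypothetical Service name
  namespace: frontend     # hypothetical namespace
spec:
  template:
    spec:
      selector:
        app: whereami     # Pods matching this label become backends
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
  # Omitting "clusters" selects all registered member clusters.
  clusters:
  - link: "us-central1-a/gke-us"   # zone/cluster-name of a member cluster
  - link: "europe-west1-c/gke-eu"
```

Deployed in the config cluster, this resource causes the ingress controller to create derived Services and NEGs in the listed member clusters.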

[Diagram: Ingress for Anthos architecture]

Deployment workflow

The following steps illustrate a high-level workflow for using Ingress for Anthos across multiple clusters.

  1. Register GKE clusters as member clusters.

  2. Configure a GKE cluster as the central config cluster. This cluster can be a dedicated control plane, or it can run other workloads.

  3. Deploy applications to the GKE clusters where they need to run.

  4. Deploy one or more MultiClusterService resources in the config cluster with label and cluster matches to select clusters, namespace, and Pods that are considered backends for a given Service. This creates NEGs in Compute Engine, which begins to register and manage service endpoints.

  5. Deploy a MultiClusterIngress resource in the config cluster that references one or more MultiClusterService resources as backends for the load balancer. This deploys the Compute Engine external load balancer resources and exposes the endpoints across clusters through a single load balancer VIP.
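
Putting steps 4 and 5 together, a minimal MultiClusterIngress might look like the following sketch. The networking.gke.io/v1beta1 API group, the resource names, and the whereami MultiClusterService it references are assumptions for illustration:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: MultiClusterIngress
metadata:
  name: whereami-ingress   # hypothetical name
  namespace: frontend      # must match the MultiClusterService namespace
spec:
  template:
    spec:
      backend:
        # References a MultiClusterService named "whereami" in this namespace.
        serviceName: whereami
        servicePort: 8080
```

Applying this resource in the config cluster triggers the controller to create the Compute Engine forwarding rule, target proxy, URL map, and backend services, exposing the cross-cluster endpoints behind a single VIP.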

Compute Engine to Ingress for Anthos resource mappings

The table below shows how the multi-cluster resources in the config cluster map to the resources created in the Kubernetes clusters and in Google Cloud:

Kubernetes resource   Google Cloud resource     Description
MultiClusterIngress   Forwarding rule           HTTP(S) load balancer VIP.
                      Target proxy              HTTP/S termination settings taken from annotations and the TLS block.
                      URL map                   Virtual host path mapping from the rules section.
MultiClusterService   Kubernetes Service        Derived resource from the template.
                      Backend service           A backend service is created for each (Service, ServicePort) pair.
                      Network endpoint groups   Set of backend Pods participating in the Service.
