Kubernetes cluster overview

Google Distributed Cloud (GDC) air-gapped provides a managed Kubernetes service with Google Kubernetes Engine (GKE) Enterprise edition, letting you deploy and run container workloads using industry-standard Kubernetes methodologies. GKE on GDC brings core features and functionality of GKE Enterprise to a disconnected environment. Additional GKE Enterprise features will become available for GKE on GDC over time.

GKE on GDC provides enterprise features such as:

  • Multi-cluster lifecycle management
  • Fully supported Kubernetes distribution
  • Cost visibility
  • Multi-team management
  • GitOps-based configuration management
  • Managed service mesh
  • Policy control

All of these features come standard with GKE on GDC, and are available for use with clusters created by the managed Kubernetes service.

Throughout this documentation, GKE on GDC clusters are referred to as Kubernetes clusters, or simply clusters.

GDC cluster architecture

Kubernetes clusters are logically separated from each other to provide different failure domains and isolation guarantees. In some cases, they are even physically separated. Each organization in GDC has a dedicated set of Kubernetes clusters. The following cluster types are available in each organization:

  • Org admin cluster: Runs the control plane components of managed and Marketplace services for the organization. It also hosts some core infrastructure services.
  • System cluster: Runs virtual machine (VM) workloads and some managed service data plane workloads for the organization. The number of worker nodes depends on the utilization of the cluster.
  • User cluster: Runs container-based workloads for the organization. The number of worker nodes depends on the utilization of the cluster. You can scale them as your needs evolve.

When your Infrastructure Operator (IO) creates an organization, GDC automatically generates the org admin and system clusters. The initial configuration for these two cluster types is set during organization creation, and neither cluster runs customer workloads.

As an Administrator, you create and manage user clusters, and this section of topics covers that management. All of your containerized Kubernetes workloads run in a user cluster. For more information about creating and managing containers in a user cluster, see the Deploy container workloads section.
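
For example, here is a minimal sketch of pointing kubectl at a user cluster before deploying workloads to it. The kubeconfig file name is an assumption; GDC provides the actual kubeconfig for your user cluster.

    # Illustrative only: substitute the kubeconfig file that GDC provides for
    # your user cluster.
    export KUBECONFIG=./user-cluster-kubeconfig.yaml

    # Confirm that kubectl is talking to the intended cluster before deploying
    # container workloads to it.
    kubectl cluster-info
    kubectl get namespaces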

A user cluster consists of a control plane and worker machines called nodes. The control plane and nodes make up the Kubernetes cluster orchestration system. GKE on GDC manages the entire underlying infrastructure of clusters, including the control plane and all system components. You are responsible for managing the worker nodes that run your containerized workloads.

The following diagram shows the architecture of a Kubernetes cluster:

Figure: A Kubernetes cluster consists of a control plane, nodes, and services.

About the control plane

The control plane runs processes such as the Kubernetes API server, scheduler, and core resource controllers. GKE on GDC manages the control plane lifecycle from cluster creation to deletion. This includes upgrades to the Kubernetes version running on the control plane, which GDC performs automatically, or manually at your request if you prefer to upgrade earlier than the automatic schedule.
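
For example, you can check which Kubernetes version the control plane and nodes are running with standard kubectl commands. This is a minimal sketch and assumes your kubeconfig already points at the target cluster.

    # Report the kubectl client version and the control plane (API server) version.
    kubectl version

    # The VERSION column reflects the kubelet version running on each worker node.
    kubectl get nodes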

Control plane and the Kubernetes API

The control plane is the unified endpoint for your cluster. You interact with the control plane through Kubernetes API calls. The control plane runs the Kubernetes API server process, or kube-apiserver, to handle API requests. You can make Kubernetes API calls in the following ways:

  • Direct calls: the Kubernetes Resource Model (KRM) API.
  • Indirect calls: Kubernetes command-line clients, such as kubectl, or the GDC console (both approaches are shown in the sketch after this list).
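
For example, the same read can be issued indirectly through kubectl or sent directly to the API server as a raw request. This is a minimal sketch; the namespace is illustrative.

    # Indirect call: kubectl builds, authenticates, and sends the API request.
    kubectl get pods --namespace default

    # Direct call: send the raw API path to kube-apiserver over kubectl's
    # authenticated transport.
    kubectl get --raw /api/v1/namespaces/default/pods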

The API server process is the hub for all communication for the cluster. All internal cluster components such as nodes, system processes, and application controllers act as clients of the API server.

Your API requests declare the desired state of the objects in your cluster, and Kubernetes works continuously to maintain that state. Kubernetes lets you configure objects in the API either imperatively or declaratively.
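
For example, the same Deployment can be configured imperatively or declaratively. This is a minimal sketch; the workload name, image, and manifest file are illustrative.

    # Imperative: issue a command that tells the API server what to do right now.
    kubectl create deployment hello-web --image=nginx:1.25

    # Declarative: record the desired state in a manifest and let Kubernetes
    # reconcile the cluster toward it.
    kubectl apply -f hello-web.yaml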

Worker node management

The control plane manages what runs on all of the cluster's nodes. The control plane schedules workloads and manages the workloads' lifecycle, scaling, and upgrades. The control plane also manages network and storage resources for those workloads. The control plane and nodes communicate with each other using Kubernetes APIs.
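
For example, when you request a scaling change, the control plane schedules the additional Pods onto suitable worker nodes and tracks the rollout for you. This is a minimal sketch; the Deployment name is illustrative.

    # Ask for three replicas; the scheduler places the new Pods on worker nodes.
    kubectl scale deployment hello-web --replicas=3

    # Watch the control plane drive the rollout to completion.
    kubectl rollout status deployment hello-web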

About nodes

Nodes are the worker machines that run your containerized applications and other workloads. The individual machines are virtual machines (VMs) that GKE on GDC creates. The control plane manages and receives updates on each node's self-reported status.
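
For example, you can inspect the status that each node reports to the control plane. This is a minimal sketch; replace NODE_NAME with a node from your cluster.

    # List worker nodes with their status, Kubernetes version, and addresses.
    kubectl get nodes -o wide

    # Show a node's self-reported conditions, such as Ready, MemoryPressure,
    # and DiskPressure.
    kubectl describe node NODE_NAME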

A node runs the services necessary to support the containers that make up your cluster's workloads. These include the container runtime and the Kubernetes node agent, kubelet, which communicates with the control plane and is responsible for starting and running the containers scheduled on the node.

GKE on GDC also runs a number of system containers as per-node agents, deployed as DaemonSets, which provide features such as log collection and intra-cluster network connectivity.
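
For example, you can list the per-node agents that run as DaemonSets. This is a minimal sketch; the exact agents and namespaces on a GDC cluster may differ.

    # Each DaemonSet schedules one agent Pod on every eligible node.
    kubectl get daemonsets --all-namespaces

    # See which agent Pods landed on a specific node; replace NODE_NAME.
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=NODE_NAME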

Limitations for GKE on GDC

The following GKE capabilities are not available for GKE on GDC:

  • Connect gateway
  • Attaching multicloud clusters
  • Binary Authorization
  • Multi-cluster Data transfer in