Autopilot cluster architecture

A cluster is the foundation of Google Kubernetes Engine (GKE): the Kubernetes objects that represent your containerized applications all run on top of a cluster.

In Autopilot mode, a cluster consists of a highly available control plane and multiple worker machines called nodes. These control plane and node machines run the Kubernetes cluster orchestration system. GKE manages the entire underlying infrastructure of Autopilot clusters, including the control plane, the nodes, and all system components.
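
For example, a single gcloud command is enough to create an Autopilot cluster, because GKE takes over the infrastructure from there (the cluster name and region below are placeholders):

    # Create an Autopilot cluster; GKE manages the control plane and nodes.
    gcloud container clusters create-auto example-cluster \
        --region=us-central1

    # Fetch credentials so that kubectl can talk to the new cluster.
    gcloud container clusters get-credentials example-cluster \
        --region=us-central1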

The following diagram provides an overview of the architecture for an Autopilot cluster in GKE:

GKE provisions, maintains, and operates both the control plane and the nodes.

Control plane

The control plane runs the control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The lifecycle of the control plane is managed by GKE when you create or delete a cluster. This includes upgrades to the Kubernetes version running on the control plane, which GKE performs automatically for Autopilot clusters.
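
You don't trigger these control plane upgrades yourself, but you can check which Kubernetes version the control plane is currently running. One way to do so (the cluster name and region are placeholders):

    # Print the Kubernetes version currently running on the control plane.
    gcloud container clusters describe example-cluster \
        --region=us-central1 \
        --format="value(currentMasterVersion)"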

Control plane and the Kubernetes API

The control plane is the unified endpoint for your cluster. All interactions with the cluster are done through Kubernetes API calls, and the control plane runs the Kubernetes API server process to handle those requests. You can make Kubernetes API calls directly via HTTP/gRPC, or indirectly, by running commands from the Kubernetes command-line client (kubectl) or by interacting with the UI in the Google Cloud console.
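
For example, the same information can be requested indirectly through kubectl or directly against the API server's REST endpoint:

    # Indirectly: kubectl translates this command into a Kubernetes API call.
    kubectl get namespaces

    # Directly: send the underlying HTTP request to the API server,
    # using the credentials from your current kubeconfig context.
    kubectl get --raw /api/v1/namespaces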

The API server process is the hub for all communication for the cluster. All internal cluster processes (such as the cluster nodes, system components, and application controllers) act as clients of the API server; the API server is the single "source of truth" for the entire cluster.

Control plane and node interaction

The control plane is responsible for deciding what runs on all of the cluster's nodes. This can include scheduling workloads, like containerized applications, and managing the workloads' lifecycle, scaling, and upgrades. The control plane also manages network and storage resources for those workloads.
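
As a sketch of that interaction, applying a manifest hands the desired state to the control plane, which then schedules the Pods onto nodes; in Autopilot, the resource requests also drive node provisioning (the app name and sample image below are illustrative):

    # Hand a Deployment to the control plane; the scheduler places its Pods.
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
            resources:
              requests:
                cpu: 250m
                memory: 512Mi
    EOF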

The control plane and nodes also communicate using Kubernetes APIs.

Control plane interactions with the gcr.io Container Registry

When you create or update a cluster, container images for the Kubernetes software running on the control plane (and nodes) are pulled from the gcr.io Container Registry. An outage affecting the gcr.io Container Registry may cause the following types of failures:

  • Creating new clusters will fail during the outage.
  • Upgrading clusters will fail during the outage.
  • Disruptions to workloads may occur even without user intervention, depending on the specific nature and duration of the outage.

In the event of a regional outage of the gcr.io Container Registry, Google may redirect requests to a zone or region not affected by the outage.

To check the current status of Google Cloud services, go to the Google Cloud status dashboard.

Nodes

A cluster has nodes, which are the worker machines that run your containerized applications and other workloads. Autopilot manages the nodes, and the control plane receives updates on each node's self-reported status. Autopilot automatically determines the node size, node type, and number of nodes based on the resources that your workloads specify and the actual load on the cluster. For Autopilot clusters, you can view the nodes with the kubectl describe nodes command (as shown below), but you don't have to manage or interact with them. Unlike in Standard mode, the nodes are not visible through Compute Engine or the gcloud compute instances list command.
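
For example, you can list the automatically provisioned nodes and inspect their self-reported status without managing them:

    # List the nodes that Autopilot has provisioned for the cluster.
    kubectl get nodes

    # Inspect a node's self-reported status, capacity, and conditions.
    kubectl describe nodes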

OS images

Autopilot clusters use the Container-Optimized OS with containerd (cos_containerd) node image for running your containers.
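
One way to confirm the node image and container runtime in use is the wide output of kubectl, whose OS-IMAGE and CONTAINER-RUNTIME columns reflect cos_containerd:

    # The OS-IMAGE and CONTAINER-RUNTIME columns show the node image details.
    kubectl get nodes -o wide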