In Container Engine, a container cluster consists of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.
A container cluster is the foundation of Container Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.
The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The master's lifecycle is managed by Container Engine when you create or delete a cluster. This includes upgrades to the Kubernetes version running on the cluster master, which Container Engine performs automatically, or manually at your request if you prefer to upgrade earlier than the automatic schedule.
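As a sketch of a manual master upgrade, assuming a cluster named `my-cluster` (a hypothetical name) and a target version that is actually available in your zone:

```shell
# Request an early upgrade of the cluster master to a specific
# Kubernetes version, rather than waiting for the automatic schedule.
# "my-cluster" and the version string are placeholders.
gcloud container clusters upgrade my-cluster \
    --master \
    --cluster-version 1.7.8
```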
Cluster master and the Kubernetes API
The master is the unified endpoint for your cluster. All interactions with the cluster are done via Kubernetes API calls, and the master runs the Kubernetes API Server process to handle those requests. You can make Kubernetes API calls directly via HTTP/gRPC, or indirectly, by running commands from the Kubernetes command-line client (kubectl) or interacting with the UI in the Google Cloud Platform Console.
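As a minimal sketch, assuming kubectl is already configured with credentials for your cluster, the same node listing can be obtained indirectly through the CLI or directly from the API server over HTTP:

```shell
# Indirect: the kubectl client translates this into a Kubernetes API call.
kubectl get nodes

# Direct: kubectl proxy opens an authenticated local tunnel to the
# master's API endpoint, which you can then query with plain HTTP.
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/nodes
```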
The cluster master's API server process is the hub for all communication in the cluster. All internal cluster processes (such as the cluster nodes, system components, and application controllers) act as clients of the API server; the API server is the single "source of truth" for the entire cluster.
Master and node interaction
The cluster master is responsible for deciding what runs on all of the cluster's nodes. This can include scheduling workloads, like containerized applications, and managing the workloads' lifecycle, scaling, and upgrades. The master also manages network and storage resources for those workloads.
The master and nodes also communicate using Kubernetes APIs.
A container cluster typically has one or more nodes, which are the worker machines that run your containerized applications and other workloads. The individual machines are Compute Engine VM instances that Container Engine creates on your behalf when you create a cluster.
Each node is managed from the master, which receives updates on each node's self-reported status. You can exercise some manual control over node lifecycle, or you can have Container Engine perform automatic repairs and automatic upgrades on your cluster's nodes.
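The automatic repair and upgrade behaviors are enabled per node pool. A sketch, assuming hypothetical cluster and pool names:

```shell
# Create a node pool with automatic node repair and automatic node
# upgrades enabled. "my-pool" and "my-cluster" are placeholder names.
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --enable-autorepair \
    --enable-autoupgrade
```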
A node runs the services necessary to support the Docker containers that make up your cluster's workloads. These include the Docker runtime and the Kubernetes node agent (kubelet), which communicates with the master and is responsible for starting and running Docker containers scheduled on that node.
In Container Engine, there are also a number of special containers that run as per-node agents to provide functionality such as log collection and intra-cluster network connectivity.
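You can inspect these per-node agents yourself: system add-ons run as pods in the kube-system namespace, and the wide output format shows which node hosts each one.

```shell
# List the system pods (log collectors, network proxies, DNS, etc.)
# along with the node each pod is scheduled on.
kubectl get pods --namespace kube-system -o wide
```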
Node machine type
Node allocatable resources
Note that some of a node's resources are required to run the Container Engine and Kubernetes components necessary to make that node function as part of your cluster. As such, you may notice a disparity between your node's total resources (as specified in the machine type documentation) and the node's allocatable resources in Container Engine.
The following table shows the amount of allocatable resources that are available for scheduling your cluster's workloads for each node machine type:
| Machine type | Memory capacity (GB) | Allocatable memory (GB) | CPU capacity (cores) | Allocatable CPU (cores) |
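You can also read the capacity and allocatable figures directly from a running node. A sketch, where NODE_NAME is a placeholder for one of your node names (as reported by `kubectl get nodes`):

```shell
# Show a node's total capacity alongside what remains allocatable
# for workloads after system overhead is reserved.
kubectl describe node NODE_NAME | grep -A 5 -E 'Capacity|Allocatable'
```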
Node OS images
Each node runs a specialized OS image for running your containers and other assorted node processes. You can configure your clusters and node pools to choose which node image you want your cluster's nodes to run. For more information, refer to Node Images.
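The node image is selected at cluster or node pool creation time. A sketch, assuming a hypothetical cluster name and an image type value (such as COS) that is valid for your Container Engine version:

```shell
# Create a cluster whose nodes run the Container-Optimized OS image.
# "my-cluster" is a placeholder; see Node Images for valid --image-type values.
gcloud container clusters create my-cluster \
    --image-type COS
```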
Minimum CPU platform
When you create a container cluster or node pool, you can specify a baseline minimum CPU platform for its nodes. Choosing a specific CPU platform can be advantageous for advanced or compute-intensive workloads. For more information, refer to Minimum CPU Platform.
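A sketch of specifying a minimum CPU platform at cluster creation, assuming a hypothetical cluster name and a platform name available in your zone:

```shell
# Request that all nodes run on at least the named CPU platform;
# Compute Engine may schedule them on that platform or a newer one.
gcloud container clusters create my-cluster \
    --min-cpu-platform "Intel Broadwell"
```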