Container Cluster Architecture

In Container Engine, a container cluster consists of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.

A container cluster is the foundation of Container Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.

Cluster master

The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The master's lifecycle is managed by Container Engine when you create or delete a cluster. This includes upgrades to the Kubernetes version running on the cluster master, which Container Engine performs automatically, or manually at your request if you prefer to upgrade earlier than the automatic schedule.
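
For example, if you prefer not to wait for the automatic schedule, you can trigger a master upgrade yourself with the gcloud command-line tool. A minimal sketch; the cluster name and zone are placeholders:

    # Upgrade the cluster master to the newest supported Kubernetes version.
    # "my-cluster" and "us-central1-a" are example values.
    gcloud container clusters upgrade my-cluster \
        --master \
        --zone us-central1-a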

Cluster master and the Kubernetes API

The master is the unified endpoint for your cluster. All interactions with the cluster are done via Kubernetes API calls, and the master runs the Kubernetes API Server process to handle those requests. You can make Kubernetes API calls directly via HTTP/gRPC, or indirectly, by running commands from the Kubernetes command-line client (kubectl) or interacting with the UI in the Google Cloud Platform Console.
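
As a sketch of both access paths, the commands below fetch cluster credentials, call the API indirectly through kubectl, and then call it directly over HTTP through a local proxy. The cluster name and zone are placeholders:

    # Configure kubectl to authenticate against the cluster master.
    gcloud container clusters get-credentials my-cluster --zone us-central1-a

    # Indirect access: kubectl translates this into Kubernetes API calls.
    kubectl get nodes

    # Direct access: proxy the API server to localhost, then call it over HTTP.
    kubectl proxy --port=8001 &
    curl http://localhost:8001/api/v1/nodes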

The cluster master's API server process is the hub for all communication for the cluster. All internal cluster processes (such as the cluster nodes, system components, and application controllers) act as clients of the API server; the API server is the single "source of truth" for the entire cluster.

Master and node interaction

The cluster master is responsible for deciding what runs on all of the cluster's nodes. This can include scheduling workloads, like containerized applications, and managing the workloads' lifecycle, scaling, and upgrades. The master also manages network and storage resources for those workloads.

The master and nodes also communicate using Kubernetes APIs.
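
A minimal sketch of this interaction, using kubectl as available at the time of writing; nginx and the replica counts are example values:

    # Ask the master to run a workload; its scheduler places the pods onto nodes.
    kubectl run nginx --image=nginx --replicas=3

    # The master also manages lifecycle and scaling: update the desired replica
    # count and the control plane reconciles what runs on the nodes.
    kubectl scale deployment nginx --replicas=5

    # See which node the scheduler chose for each pod.
    kubectl get pods -o wide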

Nodes

A container cluster typically has one or more nodes, which are the worker machines that run your containerized applications and other workloads. The individual machines are Compute Engine VM instances that Container Engine creates on your behalf when you create a cluster.
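
Because nodes are ordinary Compute Engine VM instances, they appear alongside your other instances once the cluster exists. A sketch with placeholder cluster name, zone, and node count:

    # Create a cluster; Container Engine provisions the node VMs on your behalf.
    gcloud container clusters create my-cluster \
        --zone us-central1-a \
        --num-nodes 3

    # The nodes show up as regular Compute Engine instances.
    gcloud compute instances list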

Each node is managed from the master, which receives updates on each node's self-reported status. You can exercise some manual control over node lifecycle, or you can have Container Engine perform automatic repairs and automatic upgrades on your cluster's nodes.
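
The self-reported status is visible through the Kubernetes API, and automatic repairs and upgrades can be enabled per node pool. A sketch; names are placeholders, and the auto-repair and auto-upgrade flags may require a recent or beta gcloud release:

    # Inspect each node's self-reported status (Ready, MemoryPressure, and so on).
    kubectl get nodes
    kubectl describe node <node-name>

    # Example: create a node pool with automatic repairs and upgrades enabled.
    gcloud container node-pools create my-pool \
        --cluster my-cluster \
        --zone us-central1-a \
        --enable-autorepair \
        --enable-autoupgrade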

A node runs the services necessary to support the Docker containers that make up your cluster's workloads. These include the Docker runtime and the Kubernetes node agent (kubelet), which communicates with the master and is responsible for starting and running Docker containers scheduled on that node.

In Container Engine, there are also a number of special containers that run as per-node agents to provide functionality such as log collection and intra-cluster network connectivity.
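
These per-node agents run as pods in the kube-system namespace, so you can list them yourself; the exact set varies by cluster configuration, with log collectors (such as fluentd) and the kube-proxy networking agent being typical examples:

    # List the system pods; per-node agents appear once per node (see the NODE column).
    kubectl get pods --namespace kube-system -o wide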

Node machine type

Each node is of a standard Compute Engine machine type. The default type is n1-standard-1, with 1 virtual CPU and 3.75 GB of memory. You can select a different machine type when creating a cluster.
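
A sketch of selecting a non-default machine type at cluster creation time; the cluster name and zone are placeholders:

    # Create a cluster whose nodes are 4-vCPU high-memory machines
    # instead of the default n1-standard-1.
    gcloud container clusters create my-cluster \
        --zone us-central1-a \
        --machine-type n1-highmem-4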

Node allocatable resources

Note that some of a node's resources are required to run the Container Engine and Kubernetes resources necessary to make that node function as part of your cluster. As such, you may notice a disparity between your node's total resources (as specified in the machine type documentation) and the node's allocatable resources in Container Engine.

The following table shows, for each node machine type, the allocatable resources available for scheduling your cluster's workloads:

Machine type              Memory capacity (GB)   Allocatable memory (GB)   CPU capacity (cores)   Allocatable CPU (cores)
f1-micro *                0.6                    0.6                       0.5                    0.47
g1-small                  1.7                    1.0                       0.5                    0.47
n1-standard-1 (default)   3.75                   2.6                       1                      0.94
n1-standard-2             7.5                    5.6                       2                      1.93
n1-standard-4             15                     12.3                      4                      3.92
n1-standard-8             30                     26.3                      8                      7.91
n1-standard-16            60                     54.5                      16                     15.89
n1-standard-32            120                    110.9                     32                     31.85
n1-standard-64            240                    228.2                     64                     63.77
n1-highmem-2              13                     10.5                      2                      1.93
n1-highmem-4              26                     22.6                      4                      3.92
n1-highmem-8              52                     47.0                      8                      7.91
n1-highmem-16             104                    95.9                      16                     15.89
n1-highmem-32             208                    196.8                     32                     31.85
n1-highmem-64             416                    400.7                     64                     63.77
n1-highcpu-2              1.8                    1.1                       2                      1.93
n1-highcpu-4              3.6                    2.5                       4                      3.92
n1-highcpu-8              7.2                    5.3                       8                      7.91
n1-highcpu-16             14.4                   11.7                      16                     15.89
n1-highcpu-32             28.8                   25.2                      32                     31.85
n1-highcpu-64             57.6                   52.3                      64                     63.77

* f1-micro nodes are exempt from the memory reservation, so allocatable memory equals memory capacity.
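
You can compare both figures for a running node with kubectl: the Capacity section corresponds to the machine type's total resources and the Allocatable section to what remains for your workloads. The node name is a placeholder:

    # Compare total capacity with what is schedulable for workloads.
    kubectl describe node <node-name>
    # Look for the "Capacity:" and "Allocatable:" sections in the output,
    # which list cpu, memory, and pods.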

Node OS images

Each node runs a specialized OS image for running your containers and other node processes. You can configure your clusters and node pools to choose which node image your cluster's nodes run. For more information, refer to Node Images.
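
As a sketch, the node image can be selected when you create a cluster; COS (the Container-Optimized OS image) is one example value, and the full list is in Node Images:

    # Create a cluster whose nodes run a specific node image.
    gcloud container clusters create my-cluster \
        --zone us-central1-a \
        --image-type COS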

Minimum CPU platform

When you create a container cluster or node pool, you can specify a baseline minimum CPU platform for its nodes. Choosing a specific CPU platform can be advantageous for advanced or compute-intensive workloads. For more information, refer to Minimum CPU Platform.
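
A sketch of specifying a baseline CPU platform at cluster creation; the platform name is an example, and the flag may be available only in the beta gcloud command group depending on your release:

    # Request that nodes run on the named CPU platform or a newer one.
    gcloud beta container clusters create my-cluster \
        --zone us-central1-a \
        --min-cpu-platform "Intel Broadwell"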
