Cluster Architecture

In Google Kubernetes Engine, a cluster consists of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.

A cluster is the foundation of GKE: the Kubernetes objects that represent your containerized applications all run on top of a cluster.

Cluster master

The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The master's lifecycle is managed by GKE when you create or delete a cluster. This includes upgrades to the Kubernetes version running on the cluster master, which GKE performs automatically, or manually at your request if you prefer to upgrade earlier than the automatic schedule.
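
For example, if you want to upgrade the master ahead of the automatic schedule, you can request the upgrade with a command along these lines (the cluster name is a placeholder):

gcloud container clusters upgrade [CLUSTER_NAME] --master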

Cluster master and the Kubernetes API

The master is the unified endpoint for your cluster. All interactions with the cluster are done via Kubernetes API calls, and the master runs the Kubernetes API Server process to handle those requests. You can make Kubernetes API calls directly via HTTP/gRPC, or indirectly, by running commands from the Kubernetes command-line client (kubectl) or interacting with the UI in the GCP Console.
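
For example, the following kubectl commands each issue a Kubernetes API call to the master, listing the cluster's nodes and printing basic information about the cluster's endpoints:

kubectl get nodes
kubectl cluster-info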

The cluster master's API server process is the hub for all communication for the cluster. All internal cluster processes (such as the cluster nodes, system components, and application controllers) act as clients of the API server; the API server is the single "source of truth" for the entire cluster.

Master and node interaction

The cluster master is responsible for deciding what runs on all of the cluster's nodes. This can include scheduling workloads, like containerized applications, and managing the workloads' lifecycle, scaling, and upgrades. The master also manages network and storage resources for those workloads.

The master and nodes also communicate using Kubernetes APIs.

Nodes

A cluster typically has one or more nodes, which are the worker machines that run your containerized applications and other workloads. The individual machines are Compute Engine VM instances that GKE creates on your behalf when you create a cluster.

Each node is managed from the master, which receives updates on each node's self-reported status. You can exercise some manual control over node lifecycle, or you can have GKE perform automatic repairs and automatic upgrades on your cluster's nodes.
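
As an illustration, the following command sketches how you might create a node pool with automatic repairs and automatic upgrades enabled (the pool and cluster names are placeholders):

gcloud container node-pools create [POOL_NAME] --cluster [CLUSTER_NAME] --enable-autorepair --enable-autoupgrade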

A node runs the services necessary to support the Docker containers that make up your cluster's workloads. These include the Docker runtime and the Kubernetes node agent (kubelet), which communicates with the master and is responsible for starting and running Docker containers scheduled on that node.

In GKE, there are also a number of special containers that run as per-node agents to provide functionality such as log collection and intra-cluster network connectivity.
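
These per-node agents typically run as DaemonSets in the kube-system namespace; you can list them with:

kubectl get daemonsets --namespace kube-system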

Node machine type

Each node is of a standard Compute Engine machine type. The default type is n1-standard-1, with 1 virtual CPU and 3.75 GB of memory. You can select a different machine type when you create a cluster.
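
For example, to create a cluster whose nodes use a larger machine type, you can pass the machine type at creation time (the cluster name is a placeholder):

gcloud container clusters create [CLUSTER_NAME] --machine-type n1-standard-4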

Node OS images

Each node runs a specialized OS image for running your containers. You can specify which OS image your clusters and node pools use.
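
For example, to create a cluster whose nodes run Container-Optimized OS, you can specify the image type at creation time (the cluster name is a placeholder):

gcloud container clusters create [CLUSTER_NAME] --image-type COS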

Minimum CPU platform

When you create a cluster or node pool, you can specify a baseline minimum CPU platform for its nodes. Choosing a specific CPU platform can be advantageous for advanced or compute-intensive workloads. For more information, refer to Minimum CPU Platform.
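
As a sketch, specifying a minimum CPU platform at cluster creation looks like the following ("Intel Skylake" is one example platform value, and the flag may require a recent gcloud release; the cluster name is a placeholder):

gcloud container clusters create [CLUSTER_NAME] --min-cpu-platform "Intel Skylake"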

Node allocatable resources

Some of a node's resources are required to run the GKE and Kubernetes node components necessary to make that node function as part of your cluster. As such, you may notice a disparity between your node's total resources (as specified in the machine type documentation) and the node's allocatable resources in GKE.

You can make a request for resources for your Pods or limit their resource usage. To learn how to request or limit resource usage for Pods, refer to Managing Compute Resources for Containers.
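
As a minimal sketch, the following Pod manifest (applied here through a shell heredoc; the Pod name, image, and values are illustrative) requests memory and CPU for its container and caps its usage with limits:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
EOF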

To inspect the node allocatable resources available in a cluster, run the following command:

kubectl describe node [NODE_NAME] | grep Allocatable -B 4 -A 3

The returned output contains Capacity and Allocatable fields with measurements for ephemeral storage, memory, and CPU.

Allocatable memory and CPU resources

Allocatable resources are calculated in the following way:

Allocatable = Capacity - Reserved - Eviction Threshold

For memory resources, GKE reserves the following:

  • 25% of the first 4GB of memory
  • 20% of the next 4GB of memory (up to 8GB)
  • 10% of the next 8GB of memory (up to 16GB)
  • 6% of the next 112GB of memory (up to 128GB)
  • 2% of any memory above 128GB

GKE reserves an additional 100 MiB memory on each node for kubelet eviction.

For CPU resources, GKE reserves the following:

  • 6% of the first core
  • 1% of the next core (up to 2 cores)
  • 0.5% of the next 2 cores (up to 4 cores)
  • 0.25% of any cores above 4 cores
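
To make these rules concrete, consider a nominal n1-standard-4 node with 15 GB of memory and 4 cores. The memory reservation is 25% of the first 4 GB (1.0 GB), plus 20% of the next 4 GB (0.8 GB), plus 10% of the remaining 7 GB (0.7 GB), for a total of 2.5 GB, plus the 100 MiB eviction threshold. The CPU reservation is 0.06 + 0.01 + 0.01 = 0.08 cores, which matches the 3.92 allocatable cores in the table below. Allocatable memory in the table is slightly lower than this arithmetic suggests because the memory capacity Kubernetes reports for a node is a little below the nominal machine size.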

The following table shows, for each standard node machine type, the amount of allocatable memory and CPU resources available for scheduling your cluster's workloads:

Machine type              Memory capacity (GB)   Allocatable memory (GB)   CPU capacity (cores)   Allocatable CPU (cores)
f1-micro *                0.6                    0.6                       1                      0.94
g1-small                  1.7                    1.2                       1                      0.94
n1-standard-1 (default)   3.75                   2.7                       1                      0.94
n1-standard-2             7.5                    5.7                       2                      1.93
n1-standard-4             15                     12.3                      4                      3.92
n1-standard-8             30                     26.6                      8                      7.91
n1-standard-16            60                     54.7                      16                     15.89
n1-standard-32            120                    111.2                     32                     31.85
n1-standard-64            240                    228.4                     64                     63.77
n1-standard-96            360                    346.4                     96                     95.69
n1-highmem-2              13                     10.7                      2                      1.93
n1-highmem-4              26                     22.8                      4                      3.92
n1-highmem-8              52                     47.2                      8                      7.91
n1-highmem-16             104                    96.0                      16                     15.89
n1-highmem-32             208                    197.4                     32                     31.85
n1-highmem-64             416                    400.8                     64                     63.77
n1-highmem-96             624                    605.1                     96                     95.69
n1-highcpu-2              1.8                    1.3                       2                      1.93
n1-highcpu-4              3.6                    2.6                       4                      3.92
n1-highcpu-8              7.2                    5.5                       8                      7.91
n1-highcpu-16             14.4                   11.9                      16                     15.89
n1-highcpu-32             28.8                   25.3                      32                     31.85
n1-highcpu-64             57.6                   52.5                      64                     63.77
n1-highcpu-96             86.4                   79.6                      96                     95.69

* f1-micro machines are exempt from memory reservations.

Allocatable local ephemeral storage resources

Beginning in GKE version 1.10, you can manage your local ephemeral storage resources as you do your CPU and memory resources. System reservations for local storage are made primarily for disk space used by container images.

If your node does not consume all of the reserved storage, Pods can still use that space; the reservation does not prevent the disk space from being used in any scenario.

Allocatable local ephemeral storage resources are calculated using the following formula, with an eviction threshold of 10% of storage capacity:

Allocatable = Capacity - Reserved - Eviction Threshold

Disk capacity (GB)   Reserved (GB)   Allocatable (GB)
8                    4               3.2
16                   8               6.4
32                   16              12.8
64                   28.4            29.2
128                  50.8            64.4
256                  95.6            134.8
512                  100             360.8
1024                 100             821.6
2048                 100             1743.2
4096                 100             3586.4
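
For example, on a node with a 64 GB boot disk, the eviction threshold is 10% of 64 GB = 6.4 GB; with the 28.4 GB reservation shown above, allocatable local ephemeral storage is 64 - 28.4 - 6.4 = 29.2 GB.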