This page explains how node pools work in Google Kubernetes Engine (GKE).
To learn how to manage node pools, see Adding and managing node pools.
A node pool is a group of nodes within a cluster that all have
the same configuration. Node pools use a NodeConfig specification. Each node
in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, which
has the node pool's name as its value.
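For example, listing a node's labels might show an entry like the following fragment of node metadata (the pool name default-pool is illustrative):

```yaml
# Fragment of a node object's metadata; the gke-nodepool label
# identifies which node pool the node belongs to.
metadata:
  labels:
    cloud.google.com/gke-nodepool: default-pool  # illustrative pool name
```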
When you create a cluster, the number and type of nodes that you specify become the default node pool. Then, you can add custom node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.
For example, you might create a node pool in your cluster with local SSDs, a minimum CPU platform, preemptible VMs, a specific node image, or different machine types. Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control over where Pods are scheduled, you can use node taints.
You can create, upgrade, and delete node pools individually without affecting the whole cluster. You cannot configure a single node in a node pool; any configuration changes affect all nodes in the node pool.
You can resize node pools in a cluster by adding or removing nodes.
By default, all new node pools run the latest stable version of Kubernetes. Existing node pools can be manually upgraded or automatically upgraded. You can also run multiple Kubernetes node versions on each node pool in your cluster, update each node pool independently, and target different node pools for specific deployments.
Deploying Services to specific node pools
You can explicitly deploy a Pod to a specific node pool by setting a
nodeSelector in the Pod manifest. This forces a Pod to run only on nodes in that node pool. For an example, see Deploying a Pod to a specific node pool.
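A minimal Pod manifest with such a nodeSelector might look like the following (the pool name pool-1 and the image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
  # Schedule this Pod only onto nodes in the node pool
  # named "pool-1" (illustrative name).
  nodeSelector:
    cloud.google.com/gke-nodepool: pool-1
```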
You can also specify resource requests for a Pod's containers. The Pod runs only on nodes that can satisfy those requests. For instance, if the Pod definition includes a container that requests four CPUs, the scheduler does not place the Pod on nodes with only two CPUs.
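As a sketch, a Pod with a CPU request like this lands only on nodes with enough allocatable CPU (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-heavy
spec:
  containers:
  - name: app
    image: example/app:latest   # illustrative image
    resources:
      requests:
        cpu: "4"      # scheduled only on nodes with 4 allocatable CPUs
        memory: 2Gi
```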
Nodes in multi-zonal or regional clusters
If you created a multi-zonal or regional cluster, all of the node pools are replicated to those zones automatically. Any new node pool is automatically created in those zones. Similarly, any deletions delete those node pools from the additional zones as well.
Because of this multiplicative effect, creating node pools may consume more of your project's quota for that region.
Deleting node pools
When you delete a node pool,
GKE drains all the nodes in the node pool, evicting the Pods
on each node. Each node in the node pool is drained with an allotted
graceful termination period of
MAX_POD + one hour (to honor PodDisruptionBudgets), where
MAX_POD is the maximum terminationGracePeriodSeconds
set on the Pods scheduled to the node, capped at one hour.
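The eviction step respects any PodDisruptionBudget you have configured for your workloads; a minimal sketch of one (the name and app label are illustrative) looks like:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb            # illustrative name
spec:
  minAvailable: 2          # keep at least 2 matching Pods running during a drain
  selector:
    matchLabels:
      app: my-app          # illustrative label
```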
What's next
- Learn about the cluster architecture in GKE.
- Learn how to add and manage node pools.
- Learn how to auto-provision nodes.
- Learn about node taints, which restrict the Pods that can run in a given node pool.