About node pools


This page explains how node pools work in Google Kubernetes Engine (GKE).

To learn how to manage node pools, see Adding and managing node pools.

Overview

A node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification. Each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, which has the node pool's name as its value.
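
For example, with your cluster configured in kubectl, you can list the nodes that belong to a particular node pool by filtering on this label. A minimal sketch, assuming a node pool named default-pool:

    # List only the nodes whose gke-nodepool label matches the given pool name.
    kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool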

When you create a cluster, the number and type of nodes that you specify are used to create the first node pool of the cluster. You can then add node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.

For example, you might create a node pool in your cluster with local SSDs, a minimum CPU platform, Spot VMs, a specific node image, a different machine type, or a more efficient virtual network interface.
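
As a rough sketch, a node pool like this could be created with gcloud. The cluster and pool names below are placeholders, and the flags shown are only a subset of what gcloud container node-pools create accepts:

    # Create a Spot VM node pool with a specific machine type.
    gcloud container node-pools create example-spot-pool \
        --cluster=example-cluster \
        --machine-type=e2-standard-4 \
        --num-nodes=3 \
        --spot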

Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control over where Pods are scheduled, you can use node taints.
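
For instance, you could create a high-memory pool with a taint so that only Pods carrying a matching toleration are scheduled onto it. In this sketch, the pool name and the taint key and value are hypothetical:

    # Nodes in this pool repel Pods that do not tolerate workload=high-mem.
    gcloud container node-pools create high-mem-pool \
        --cluster=example-cluster \
        --machine-type=e2-highmem-8 \
        --node-taints=workload=high-mem:NoSchedule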

You can create, upgrade, and delete node pools individually without affecting the whole cluster. You cannot configure a single node in a node pool; any configuration changes affect all nodes in the node pool.

You can resize node pools in a cluster by adding or removing nodes.
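
A minimal resize sketch with gcloud, using placeholder names; in a multi-zonal or regional cluster, the node count applies per zone:

    # Scale the node pool to five nodes (per zone, for multi-zonal clusters).
    gcloud container clusters resize example-cluster \
        --node-pool=example-pool \
        --num-nodes=5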

By default, all new node pools run the same version of Kubernetes as the control plane. You can upgrade existing node pools manually or automatically. Because each node pool upgrades independently, a cluster can run multiple Kubernetes node versions at once, one per node pool, and you can target different node pools for specific deployments.
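
As a hedged example, a single node pool can be upgraded manually with gcloud; when --cluster-version is omitted, the pool is upgraded to the control plane's version:

    # Manually upgrade one node pool; other node pools are unaffected.
    gcloud container clusters upgrade example-cluster \
        --node-pool=example-pool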

Deploying Services to specific node pools

When you define a Service, you can indirectly control which node pool its Pods are deployed into. Which node pool a Pod runs in depends not on the configuration of the Service, but on the configuration of the Pod.

  • You can explicitly deploy a Pod to a specific node pool by setting a nodeSelector in the Pod manifest, which forces the Pod to run only on nodes in that node pool (see the first sketch after this list). For an example, see Deploying a Pod to a specific node pool.

  • You can specify resource requests for the containers. The Pod only runs on nodes that satisfy the resource requests. For example, if the Pod definition includes a container that requests four CPUs, the scheduler does not place the Pod on nodes with fewer than four allocatable CPUs, so the Pod can land only in node pools whose machines are large enough (see the second sketch after this list).
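
The first mechanism might look like the following Pod manifest, a minimal sketch assuming a node pool named example-pool:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      # Schedule this Pod only onto nodes in the example-pool node pool.
      nodeSelector:
        cloud.google.com/gke-nodepool: example-pool
      containers:
      - name: nginx
        image: nginx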
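
The second mechanism relies on resource requests. In this sketch, the four-CPU request keeps the hypothetical Pod off any node pool whose machines have fewer than four allocatable CPUs:

    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-heavy-pod
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            # The scheduler places this Pod only on nodes with at least
            # four allocatable CPUs.
            cpu: "4"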

Nodes in multi-zonal or regional clusters

If you created a multi-zonal or regional cluster, all of its node pools are replicated across the cluster's zones automatically. Any new node pool is automatically created in each of those zones, and deleting a node pool removes it from all of the zones as well.

Because of this multiplicative effect, creating node pools in a multi-zonal or regional cluster can consume more of your project's quota for the region than you might expect, as the example below shows.
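
For example, the following hypothetical command, run against a regional cluster spanning three zones, creates two nodes in each zone:

    # --num-nodes is per zone: 3 zones x 2 nodes = 6 nodes total,
    # consuming quota for six VMs in the region.
    gcloud container node-pools create example-pool \
        --cluster=example-regional-cluster \
        --region=us-central1 \
        --num-nodes=2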

Deleting node pools

When you delete a node pool, GKE drains all of the nodes in the pool, deleting and rescheduling their Pods. On each node, Pods are deleted with an allotted graceful termination period of MAX_POD, where MAX_POD is the largest terminationGracePeriodSeconds set on the Pods scheduled to that node, capped at one hour. PodDisruptionBudget settings are not honored during node pool deletion.
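
The MAX_POD value comes from the Pods themselves. As an illustration, a Pod with the following hypothetical spec would contribute 120 seconds to that calculation:

    apiVersion: v1
    kind: Pod
    metadata:
      name: slow-shutdown-pod
    spec:
      # During a drain, this Pod gets up to 120 seconds to exit gracefully;
      # GKE uses the largest such value on the node, capped at one hour.
      terminationGracePeriodSeconds: 120
      containers:
      - name: app
        image: nginx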

If the Pods have specific node selectors, they might remain unschedulable if no other node in the cluster satisfies the criteria.

When a cluster is deleted, GKE does not follow this process of gracefully terminating the nodes by draining them. If the workloads running on a cluster must be gracefully terminated, use kubectl drain to clean up the workloads before you delete the cluster.
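
A sketch of that cleanup, assuming you drain every node before deleting the cluster (NODE_NAME stands in for each name from the kubectl get nodes output):

    # List the cluster's nodes, then drain each one to evict its workloads.
    kubectl get nodes
    kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data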

To delete a node pool, see Delete a node pool.
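
The deletion itself is a single gcloud command; this sketch uses placeholder names:

    # Delete the node pool; GKE drains its nodes first, as described above.
    gcloud container node-pools delete example-pool \
        --cluster=example-cluster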

What's next