This page explains how node pools work in Google Kubernetes Engine. You can also learn how to add and manage node pools.
A node pool is a group of [nodes] within a cluster that all have
the same configuration. Node pools use a NodeConfig specification. Each node
in the pool has the Kubernetes node label `cloud.google.com/gke-nodepool`,
whose value is the node pool's name. A node pool can contain a single
node or many nodes.
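As a quick sketch, you can inspect this label with `kubectl` (the pool name `default-pool` and the node name are illustrative placeholders):

```shell
# List the nodes that belong to a given node pool by filtering on the
# cloud.google.com/gke-nodepool label that GKE sets on each node.
kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool

# Show the label value on a single node (NODE_NAME is a placeholder).
kubectl get node NODE_NAME \
  -o jsonpath='{.metadata.labels.cloud\.google\.com/gke-nodepool}'
```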
When you create a [cluster], the nodes that you specify, in the number and of the type you choose, become the default node pool. You can then add custom node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.
For example, you might create a node pool in your cluster with local SSDs, a minimum CPU platform, preemptible VMs, a specific node image, larger instance sizes, or different machine types. Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control of where Pods are scheduled, you can use node taints.
Using the
`gcloud container node-pools`
command, you can create, upgrade, and delete node pools individually
without affecting the rest of the cluster. You cannot configure a single
node in a node pool; any configuration change affects all nodes in the
node pool.
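For example, a session that adds and later removes a custom node pool might look like the following (the cluster name `my-cluster`, pool name `high-mem-pool`, and flag values are illustrative; see `gcloud container node-pools --help` for the full flag list):

```shell
# Create a node pool named "high-mem-pool" with a larger machine type.
gcloud container node-pools create high-mem-pool \
  --cluster=my-cluster \
  --machine-type=n1-highmem-8 \
  --num-nodes=3

# List the node pools in the cluster.
gcloud container node-pools list --cluster=my-cluster

# Delete the node pool without affecting the rest of the cluster.
gcloud container node-pools delete high-mem-pool --cluster=my-cluster
```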
By default, all new node pools run the latest stable version of Kubernetes. Existing node pools can be upgraded manually or automatically. You can run multiple Kubernetes node versions in your cluster, upgrade each node pool independently, and target different node pools for specific deployments.
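As a sketch, upgrading a single node pool without touching the others (cluster and pool names are illustrative placeholders) can be done with:

```shell
# Upgrade only the "high-mem-pool" node pool in the cluster "my-cluster".
gcloud container clusters upgrade my-cluster \
  --node-pool=high-mem-pool

# Check which Kubernetes version each node is running afterward.
kubectl get nodes
```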
Deploying Services to specific node pools
You can explicitly deploy a Pod to a specific node pool by setting a nodeSelector in the Pod manifest. This forces the Pod to run only on nodes in that node pool.
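A minimal Pod manifest using this mechanism might look like the following, selecting on the `cloud.google.com/gke-nodepool` label that GKE sets on each node (the pool name `high-mem-pool` and the image are assumptions for illustration):

```shell
# Apply a Pod that is pinned to one node pool via a nodeSelector on the
# cloud.google.com/gke-nodepool label.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: high-mem-pool
  containers:
  - name: app
    image: nginx
EOF
```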
You can also specify resource requests for a Pod's containers. The Pod runs only on nodes that can satisfy those requests. For instance, if a Pod's definition includes a container that requires four CPUs, the scheduler will not place the Pod on a node with only two allocatable CPUs.
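Resource requests are set per container in the Pod spec; a sketch (the Pod name, image, and request value are illustrative):

```shell
# Apply a Pod whose container requests 4 CPUs; the scheduler will only
# place it on a node with at least 4 allocatable CPUs.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cpu-heavy-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "4"
EOF
```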
Nodes in multi-zone clusters
If you created a multi-zone cluster, all of the node pools are replicated to those zones automatically, and any new node pool is automatically created in those zones. Similarly, deleting a node pool deletes it from the additional zones as well.
Note that because of this multiplicative effect, creating a node pool in a multi-zone cluster may consume more of your project's quota for that region.