# About node pools

[Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

***

This page explains how node pools work in Google Kubernetes Engine (GKE). A *node pool* is a group of [nodes](/kubernetes-engine/docs/concepts/cluster-architecture#nodes) within a cluster that all have the same configuration. In GKE Standard mode, you can choose from a number of options for your node pools to meet your workload needs. If you choose to use Autopilot, you don't need to configure node pools: GKE manages the nodes for you.

To learn more about creating Standard mode clusters, see [Create a regional cluster](/kubernetes-engine/docs/how-to/creating-a-regional-cluster) and [Create a zonal cluster](/kubernetes-engine/docs/how-to/creating-a-zonal-cluster). To learn how to manage node pools in existing Standard clusters, see [Adding and managing node pools](/kubernetes-engine/docs/how-to/node-pools).

Overview
--------

A node pool is a group of [nodes](/kubernetes-engine/docs/concepts/cluster-architecture#nodes) within a cluster that all have the same configuration. Node pools use a [NodeConfig](/kubernetes-engine/docs/reference/rest/v1/NodeConfig) specification. Each node in the pool has a Kubernetes node label, `cloud.google.com/gke-nodepool`, which has the node pool's name as its value.

When you create a Standard mode [cluster](/kubernetes-engine/docs/concepts/cluster-architecture), the number of nodes and type of nodes that you specify are used to create the first node pool of the cluster. By default, this first node pool (known as the *default node pool*) consists of three nodes in each of the cluster's compute zones, with the default [node image](/kubernetes-engine/docs/concepts/node-images) `cos_containerd` and a general-purpose [machine type](/compute/docs/machine-types). You can specify a variety of properties for a node pool, depending on your workload requirements. For example, you might create a node pool in your cluster with [local SSDs](/kubernetes-engine/docs/concepts/local-ssd), a [minimum CPU platform](/kubernetes-engine/docs/how-to/min-cpu-platform), [Spot VMs](/kubernetes-engine/docs/concepts/spot-vms), a different node image, different [machine types](/compute/docs/machine-types), or a more efficient [virtual network interface](/kubernetes-engine/docs/how-to/using-gvnic).
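As an illustration, the following sketch adds a high-memory Spot VM node pool to an existing Standard cluster; the cluster name `my-cluster`, the pool name `high-mem-pool`, and the zone are hypothetical placeholders:

```sh
# Add a node pool of e2-highmem-4 Spot VMs to an existing Standard
# cluster (all names and the zone are illustrative placeholders).
gcloud container node-pools create high-mem-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=e2-highmem-4 \
    --num-nodes=2 \
    --spot

# Every node in the pool carries the cloud.google.com/gke-nodepool
# label with the pool's name as its value, so you can list its nodes:
kubectl get nodes -l cloud.google.com/gke-nodepool=high-mem-pool
```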
You can then add additional node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.

Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control over where Pods are scheduled, you can use [node taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
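For example, you might reserve a pool for particular workloads by tainting its nodes at creation time. This is a minimal sketch; the pool name and the `dedicated=special:NoSchedule` taint are arbitrary placeholders, and only Pods with a matching toleration can be scheduled onto the tainted nodes:

```sh
# Create a pool whose nodes are tainted at creation; Pods without a
# matching toleration won't be scheduled onto them (placeholder names).
gcloud container node-pools create special-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --node-taints=dedicated=special:NoSchedule
```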
You can [create, upgrade, and delete node pools](/kubernetes-engine/docs/how-to/node-pools) individually without affecting the whole cluster. You cannot configure a single node in a node pool; any configuration changes affect all nodes in the node pool.

You can resize node pools in a cluster by [adding or removing nodes](/kubernetes-engine/docs/how-to/node-pools#resizing_a_node_pool).

By default, all new node pools run the same version of Kubernetes as the control plane. Existing node pools can be [manually upgraded](/kubernetes-engine/docs/how-to/upgrading-a-container-cluster#manually_upgrading_your_nodes) or [automatically upgraded](/kubernetes-engine/docs/concepts/node-auto-upgrade). You can also run multiple Kubernetes node versions on each node pool in your cluster, update each node pool independently, and target different node pools for specific deployments.
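For instance, with the same placeholder names as above, resizing and manually upgrading a single pool are independent, per-pool operations (a sketch, not a full runbook):

```sh
# Resize one node pool to five nodes per zone without touching the
# rest of the cluster.
gcloud container clusters resize my-cluster \
    --node-pool=high-mem-pool \
    --zone=us-central1-a \
    --num-nodes=5

# Manually upgrade only that pool's nodes; without a version flag,
# gcloud upgrades them to the control plane's Kubernetes version.
gcloud container clusters upgrade my-cluster \
    --node-pool=high-mem-pool \
    --zone=us-central1-a
```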
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-03。"],[],[],null,["# About node pools\n\n[Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\nThis page explains how node pools work in Google Kubernetes Engine (GKE). A *node\npool* is a group of [nodes](/kubernetes-engine/docs/concepts/cluster-architecture#nodes) within a cluster that all have the same\nconfiguration. In GKE Standard mode, you can choose from\na number of options for your node pools to meet your workload needs. If you\nchoose to use Autopilot, you don't need to configure node pools:\nGKE manages the nodes for you.\n\nTo learn more about creating Standard mode clusters, see [Create a\nregional cluster](/kubernetes-engine/docs/how-to/creating-a-regional-cluster)\nand [Create a zonal\ncluster](/kubernetes-engine/docs/how-to/creating-a-zonal-cluster). To learn how\nto manage node pools in existing Standard clusters, see [Adding and\nmanaging node pools](/kubernetes-engine/docs/how-to/node-pools).\n\nOverview\n--------\n\nA node pool is a group of [nodes](/kubernetes-engine/docs/concepts/cluster-architecture#nodes) within a cluster that all have the same\nconfiguration. Node pools use a [NodeConfig](/kubernetes-engine/docs/reference/rest/v1/NodeConfig) specification. Each node in the\npool has a Kubernetes node label, `cloud.google.com/gke-nodepool`, which has the\nnode pool's name as its value.\n\nWhen you create a Standard mode [cluster](/kubernetes-engine/docs/concepts/cluster-architecture), the number of nodes and type\nof nodes that you specify are used to create the first node pool of the cluster.\nBy default, this first node pool (known as the *default node pool* ) consists of\nthree nodes in each of the cluster's compute zones, with the default [node\nimage](/kubernetes-engine/docs/concepts/node-images) `cos_containerd`, and a general-purpose [machine\ntype](/compute/docs/machine-types). You can specify a variety of properties for\nthe node pool, depending on your workload requirements. For example, you might\ncreate a node pool in your cluster with [local SSDs](/kubernetes-engine/docs/concepts/local-ssd), a [minimum CPU platform](/kubernetes-engine/docs/how-to/min-cpu-platform),\n[Spot VMs](/kubernetes-engine/docs/concepts/spot-vms), a different node\nimage, different [machine types](/compute/docs/machine-types), or a more efficient [virtual network\ninterface](/kubernetes-engine/docs/how-to/using-gvnic).\n\nYou can then add additional node pools of different sizes and types to your\ncluster. All nodes in any given node pool are identical to one another.\n\nCustom node pools are useful when you need to schedule Pods that require more\nresources than others, such as more memory or more local disk space. If you need\nmore control of where Pods are scheduled, you can use [node taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).\n\nYou can [create, upgrade, and delete node\npools](/kubernetes-engine/docs/how-to/node-pools) individually without affecting\nthe whole cluster. 
Nodes in multi-zonal clusters
-----------------------------

If you created a [multi-zonal](/kubernetes-engine/docs/concepts/types-of-clusters#multi-zonal_clusters) cluster, all of the node pools are replicated to those zones automatically. Any new node pool is automatically created in those zones. Similarly, any deletion removes those node pools from the additional zones as well.

Because of this multiplicative effect, creating node pools in a multi-zonal cluster can consume more of your project's [quota](/kubernetes-engine/quotas) for a specific region.

Deleting node pools
-------------------

When you [delete a node pool](/kubernetes-engine/docs/how-to/node-pools#deleting_a_node_pool), GKE drains all the nodes in the node pool, deleting and rescheduling all Pods. The draining process involves GKE deleting Pods on each node in the node pool. Each node in a node pool is drained by deleting Pods with an allotted graceful termination period of `MAX_POD`. `MAX_POD` is the maximum [`terminationGracePeriodSeconds`](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) set on the Pods scheduled on the node, with a cap of one hour. [`PodDisruptionBudget`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) settings are not honored during node pool deletion.

If the Pods have specific node selectors, the Pods might remain in an unschedulable condition if no other node in the cluster satisfies the criteria.

When a cluster is deleted, GKE does not follow this process of gracefully terminating the nodes by draining them. If the workloads running on a cluster must be gracefully terminated, use [`kubectl drain`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain) to clean up the workloads before you delete the cluster.
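For example, with the same placeholder names, this sketch drains each node in a pool yourself before deleting the pool; unlike GKE's automated pool deletion, `kubectl drain` does honor `PodDisruptionBudget` settings:

```sh
# Cordon and drain every node in the (placeholder) pool, respecting
# graceful termination and PodDisruptionBudgets; DaemonSet Pods stay.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=high-mem-pool -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Then delete the now-drained node pool.
gcloud container node-pools delete high-mem-pool \
    --cluster=my-cluster \
    --zone=us-central1-a
```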
To delete a node pool, see [Delete a node pool](/kubernetes-engine/docs/how-to/node-pools#deleting_a_node_pool).

What's next
-----------

- [Learn about the cluster architecture in GKE](/kubernetes-engine/docs/concepts/cluster-architecture).
- [Learn how to add and manage node pools](/kubernetes-engine/docs/how-to/node-pools).
- [Learn how to auto-provision nodes](/kubernetes-engine/docs/how-to/node-auto-provisioning).

Last updated (UTC): 2025-09-03.