[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eA node pool is a group of identically configured nodes within a Kubernetes cluster, each identified by a shared Kubernetes node label.\u003c/p\u003e\n"],["\u003cp\u003eYou can create multiple node pools with varying sizes and configurations to accommodate different workload requirements, like increased memory or disk space.\u003c/p\u003e\n"],["\u003cp\u003eNode pools can be resized by upscaling or downscaling, with downscaling automatically removing arbitrary nodes from the pool without allowing for specific node selection.\u003c/p\u003e\n"],["\u003cp\u003eNode pools can be added to and deleted from user clusters individually, and it is done either through the GDC console or by using the \u003ccode\u003ekubectl\u003c/code\u003e CLI.\u003c/p\u003e\n"],["\u003cp\u003eYou can select various machine types for worker nodes in node pools, ranging from general-purpose to memory-optimized and GPU-equipped, to best suit your workloads.\u003c/p\u003e\n"]]],[],null,["# Manage node pools\n\nA *node pool* is a group of nodes within a Kubernetes cluster that all have the\nsame\nconfiguration. Node pools use a `NodePool` specification. Each node in the pool\nhas a Kubernetes node label, which has the name of the node pool as its value.\nBy default, all new node pools run the same version of Kubernetes as the control\nplane.\n\nWhen you create a user cluster, the number of nodes and type of nodes that you\nspecify create the first node pool of the cluster. You can add additional node\npools of different sizes and types to your cluster. All nodes in any given node\npool are identical to one another.\n\nCustom node pools are useful when scheduling pods that require more resources\nthan others, such as more memory or local disk space. You can use node taints if\nyou need more control over scheduling the pods.\n\nYou can create and delete node pools individually without affecting the whole\ncluster. You cannot configure a single node in a node pool. Any configuration\nchanges affect all nodes in the node pool.\n\nYou can [resize node pools](/distributed-cloud/hosted/docs/latest/appliance/platform-application/pa-ao-operations/cluster#resize-node-pools)\nin a cluster by upscaling or downscaling the pool. Downscaling a node pool is an\nautomated process where you decrease the pool size and the\nGDC system automatically drains and evicts an arbitrary\nnode. You cannot select a specific node to remove when downscaling a node pool.\n\nBefore you begin\n----------------\n\nTo manage node pools in a user cluster, you must have the User Cluster Admin\nrole (`user-cluster-admin` role).\n\nAdd a node pool\n---------------\n\nWhen creating a user cluster from the GDC console, you\ncan customize the default node pool and create additional node pools before the\ncluster creation initializes. If you must add a node pool to an existing user\ncluster, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Clusters**.\n2. Click the cluster from the cluster list. The **Cluster details** page is displayed.\n3. Select **Node pools** \\\u003e **Add node pool**.\n4. Assign a name for the node pool. You cannot modify the name after you create the node pool.\n5. Specify the number of worker nodes to create in the node pool.\n6. Select your machine class that best suits your workload requirements. The machine classes show in the following settings:\n - Machine type\n - vCPU\n - Memory\n7. Optional: Add Kubernetes key-value pair labels to organize the resources of your node pool.\n8. Click **Save**.\n\n### API\n\n1. Open the `Cluster` custom resource spec with the `kubectl` CLI using the\n interactive editor:\n\n kubectl edit clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e: The name of the user cluster.\n - \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e: The admin cluster's kubeconfig file path.\n2. Add a new entry in the `nodePools` section:\n\n nodePools:\n ...\n - machineTypeName: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eMACHINE_TYPE\u003c/span\u003e\u003c/var\u003e\n name: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNODE_POOL_NAME\u003c/span\u003e\u003c/var\u003e\n nodeCount: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNUMBER_OF_WORKER_NODES\u003c/span\u003e\u003c/var\u003e\n taints: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eTAINTS\u003c/span\u003e\u003c/var\u003e\n labels: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eLABELS\u003c/span\u003e\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eMACHINE_TYPE\u003c/var\u003e: The machine type for the worker nodes of the node pool. View the [available machine types](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/manage-node-pools#available-machine-types) for what is available to configure.\n - \u003cvar translate=\"no\"\u003eNODE_POOL_NAME\u003c/var\u003e: The name of the node pool.\n - \u003cvar translate=\"no\"\u003eNUMBER_OF_WORKER_NODES\u003c/var\u003e: The number of worker nodes to provision in the node pool.\n - \u003cvar translate=\"no\"\u003eTAINTS\u003c/var\u003e: The taints to apply to the nodes of this node pool. This is an optional field.\n - \u003cvar translate=\"no\"\u003eLABELS\u003c/var\u003e: The labels to apply to the nodes of this node pool. It contains a list of key-value pairs. This is an optional field.\n3. Save the file and exit the editor.\n\nView node pools\n---------------\n\nTo view existing node pools in a user cluster, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Clusters**.\n2. Click the cluster from the cluster list. The **Cluster details** page is displayed.\n3. Select **Node pools**.\n\nThe list of node pools running in the cluster is displayed. You can manage the\nnode pools of the cluster from this page.\n\n### API\n\n- View the node pools of a specific user cluster:\n\n kubectl get clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n -o json --kubeconfig \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e | \\\n jq .status.workerNodePoolStatuses\n\n The output is similar to the following: \n\n [\n {\n \"conditions\": [\n {\n \"lastTransitionTime\": \"2023-08-31T22:16:17Z\",\n \"message\": \"\",\n \"observedGeneration\": 2,\n \"reason\": \"NodepoolReady\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastTransitionTime\": \"2023-08-31T22:16:17Z\",\n \"message\": \"\",\n \"observedGeneration\": 2,\n \"reason\": \"ReconciliationCompleted\",\n \"status\": \"False\",\n \"type\": \"Reconciling\"\n }\n ],\n \"name\": \"worker-node-pool\",\n \"readyNodes\": 3,\n \"readyTimestamp\": \"2023-08-31T18:59:46Z\",\n \"reconcilingNodes\": 0,\n \"stalledNodes\": 0,\n \"unknownNodes\": 0\n }\n ]\n\nDelete a node pool\n------------------\n\nDeleting a node pool deletes the nodes and routes to them. These nodes evict and\nreschedule any pods running on them. If the pods have specific node selectors,\nthe pods might remain in a non-schedulable condition if no other node in the\ncluster satisfies the criteria.\n| **Important:** Control plane node pools and load balancer node pools are critical to a cluster's function and consequently can't be removed from a cluster. You can only delete worker node pools.\n\nEnsure you have at least three worker nodes before deleting a node pool to\nensure your cluster has enough compute space to run effectively.\n\nTo delete a node pool, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Clusters**.\n\n2. Click the cluster that is hosting the node pool you want to delete.\n\n3. Select **Node pools**.\n\n4. Click *delete* **Delete** next to the node\n pool to delete.\n\n### API\n\n1. Open the `Cluster` custom resource spec with the `kubectl` CLI using the\n interactive editor:\n\n kubectl edit clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e: The name of the user cluster.\n - \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e: The admin cluster's kubeconfig file path.\n2. Remove the node pool entry from the `nodePools` section. For example, in\n the following snippet, you must remove the `machineTypeName`, `name`, and\n `nodeCount` fields:\n\n nodePools:\n ...\n - machineTypeName: n2-standard-2-gdc\n name: nodepool-1\n nodeCount: 3\n\n Be sure to remove all fields for the node pool you are deleting.\n3. Save the file and exit the editor.\n\nWorker node machine types\n-------------------------\n\nWhen you create a user cluster in Google Distributed Cloud (GDC) air-gapped appliance, you create\nnode pools that are responsible for running your container workloads in the\ncluster. You provision nodes based on your container workload requirements, and\ncan update them as your requirements evolve.\n\nGDC provides predefined machine types for your worker\nnodes that are selectable when you [add a node pool](#add-a-node-pool).\n\n### Available machine types\n\nGDC defines machine types with some parameters\nfor a user cluster node, which include CPU, memory, and GPU.\nGDC has various machine types for different purposes.\nFor example, user clusters use `n2-standard-2-gdc` for general purpose container\nworkloads. You can also find machine types for memory-optimized purposes, such\nas `n2-highcpu-8-gdc`. If you plan to run deep learning containers, you must\nprovision GPU machines, such as `a2-highgpu-1g-gdc`.\n\nThe following is a list of all GDC predefined machine\ntypes available for user cluster worker nodes:"]]