A node pool is a group of nodes within a Kubernetes cluster that all have the same configuration. Node pools use a `NodePool` specification. Each node in the pool has a Kubernetes node label, which has the name of the node pool as its value. By default, all new node pools run the same version of Kubernetes as the control plane.
When you create a user cluster, the number and type of nodes that you specify create the first node pool of the cluster. You can add node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.
Custom node pools are useful when scheduling pods that require more resources than others, such as more memory or local disk space. You can use node taints if you need more control over scheduling the pods.
You can create and delete node pools individually without affecting the whole cluster. You cannot configure a single node in a node pool. Any configuration changes affect all nodes in the node pool.
You can resize node pools in a cluster by upscaling or downscaling the pool. Downscaling a node pool is an automated process where you decrease the pool size and the GDC system automatically drains and evicts an arbitrary node. You cannot select a specific node to remove when downscaling a node pool.
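For example, a downscale through the API amounts to lowering a pool's `nodeCount` in the `Cluster` custom resource. The following is a minimal sketch with a hypothetical pool name and machine type:

```yaml
nodePools:
- machineTypeName: n2-standard-2-gdc
  name: worker-node-pool
  nodeCount: 2  # lowered from 3; the system drains and evicts one arbitrary node
```

Lowering `nodeCount` is the only way to shrink the pool; you cannot name the specific node to remove.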
Before you begin
To manage node pools in a user cluster, you must have the User Cluster Admin role (`user-cluster-admin`).
Add a node pool
When creating a user cluster from the GDC console, you can customize the default node pool and create additional node pools before initiating cluster creation. If you must add a node pool to an existing user cluster, complete the following steps:
Console
- In the navigation menu, select Clusters.
- Click the cluster from the cluster list. The Cluster details page is displayed.
- Select Node pools > Add node pool.
- Assign a name for the node pool. You cannot modify the name after you create the node pool.
- Specify the number of worker nodes to create in the node pool.
- Select the machine class that best suits your workload requirements. Each machine class displays the following settings:
- Machine type
- vCPU
- Memory
- Optional: Add Kubernetes key-value pair labels to organize the resources of your node pool.
- Click Save.
API
Open the `Cluster` custom resource spec with the kubectl CLI using the interactive editor:

```shell
kubectl edit clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG
```
Replace the following:

- `USER_CLUSTER_NAME`: The name of the user cluster.
- `ADMIN_CLUSTER_KUBECONFIG`: The admin cluster's kubeconfig file path.
Add a new entry in the `nodePools` section:

```yaml
nodePools:
...
- machineTypeName: MACHINE_TYPE
  name: NODE_POOL_NAME
  nodeCount: NUMBER_OF_WORKER_NODES
  taints: TAINTS
  labels: LABELS
```
Replace the following:

- `MACHINE_TYPE`: The machine type for the worker nodes of the node pool. See the available machine types for the options you can configure.
- `NODE_POOL_NAME`: The name of the node pool.
- `NUMBER_OF_WORKER_NODES`: The number of worker nodes to provision in the node pool.
- `TAINTS`: The taints to apply to the nodes of this node pool. This is an optional field.
- `LABELS`: The labels to apply to the nodes of this node pool. It contains a list of key-value pairs. This is an optional field.
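For example, a completed entry might look like the following. The pool name, taint, and label values are hypothetical, and the taint and label shapes assume standard Kubernetes conventions; the machine type comes from the list of predefined types later on this page:

```yaml
nodePools:
...
- machineTypeName: n2-standard-4-gdc
  name: batch-pool
  nodeCount: 3
  # Optional: keep general workloads off these nodes unless they tolerate the taint.
  taints:
  - key: workload-type
    value: batch
    effect: NoSchedule
  # Optional: key-value pairs applied as node labels.
  labels:
    env: production
```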
Save the file and exit the editor.
View node pools
To view existing node pools in a user cluster, complete the following steps:
Console
- In the navigation menu, select Clusters.
- Click the cluster from the cluster list. The Cluster details page is displayed.
- Select Node pools.
The list of node pools running in the cluster is displayed. You can manage the node pools of the cluster from this page.
API
View the node pools of a specific user cluster:
```shell
kubectl get clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
    -o json --kubeconfig ADMIN_CLUSTER_KUBECONFIG | \
    jq .status.workerNodePoolStatuses
```
The output is similar to the following:
```json
[
  {
    "conditions": [
      {
        "lastTransitionTime": "2023-08-31T22:16:17Z",
        "message": "",
        "observedGeneration": 2,
        "reason": "NodepoolReady",
        "status": "True",
        "type": "Ready"
      },
      {
        "lastTransitionTime": "2023-08-31T22:16:17Z",
        "message": "",
        "observedGeneration": 2,
        "reason": "ReconciliationCompleted",
        "status": "False",
        "type": "Reconciling"
      }
    ],
    "name": "worker-node-pool",
    "readyNodes": 3,
    "readyTimestamp": "2023-08-31T18:59:46Z",
    "reconcilingNodes": 0,
    "stalledNodes": 0,
    "unknownNodes": 0
  }
]
```
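Because the status is plain JSON, you can script readiness checks on top of the same `jq` pipeline. The following sketch uses a trimmed, captured copy of the status instead of a live `kubectl` call, and prints the name of every node pool whose `Ready` condition is `True`:

```shell
# Trimmed sample of .status.workerNodePoolStatuses; in practice, pipe the
# live kubectl output into jq instead of this captured string.
status='[{"name":"worker-node-pool","readyNodes":3,"conditions":[{"type":"Ready","status":"True"},{"type":"Reconciling","status":"False"}]}]'

# Select pools that have a Ready=True condition and print their names.
echo "$status" | jq -r '.[] | select(any(.conditions[]; .type=="Ready" and .status=="True")) | .name'
```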
Delete a node pool
Deleting a node pool deletes the nodes and the routes to them. Any pods running on these nodes are evicted and rescheduled. If the pods have specific node selectors, they might remain unschedulable if no other node in the cluster satisfies the criteria.
Before deleting a node pool, verify that the cluster still has at least three worker nodes so that it retains enough compute capacity to run effectively.
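As an illustration of the node-selector caveat, a pod pinned to a pool-specific node label, as sketched below, stays `Pending` after that pool is deleted because no remaining node carries the label. The label key, pool name, and image are placeholders; use the actual node label that carries the node pool name in your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  nodeSelector:
    # Placeholder key: substitute the node label whose value is the pool name.
    node-pool-name: nodepool-1
  containers:
  - name: app
    image: registry.example/app:latest
```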
To delete a node pool, complete the following steps:
Console
- In the navigation menu, select Clusters.
- Click the cluster that is hosting the node pool you want to delete.
- Select Node pools.
- Click Delete next to the node pool that you want to delete.
API
Open the `Cluster` custom resource spec with the kubectl CLI using the interactive editor:

```shell
kubectl edit clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG
```
Replace the following:

- `USER_CLUSTER_NAME`: The name of the user cluster.
- `ADMIN_CLUSTER_KUBECONFIG`: The admin cluster's kubeconfig file path.
Remove the node pool entry from the `nodePools` section. For example, in the following snippet, you must remove the `machineTypeName`, `name`, and `nodeCount` fields:

```yaml
nodePools:
...
- machineTypeName: n2-standard-2-gdc
  name: nodepool-1
  nodeCount: 3
```
Be sure to remove all fields for the node pool you are deleting.
Save the file and exit the editor.
Worker node machine types
When you create a user cluster in Google Distributed Cloud (GDC) air-gapped appliance, you create node pools that are responsible for running your container workloads in the cluster. You provision nodes based on your container workload requirements, and can update them as your requirements evolve.
GDC provides predefined machine types for your worker nodes that are selectable when you add a node pool.
Available machine types
GDC defines machine types with a set of parameters for a user cluster node, including CPU, memory, and GPU. GDC provides various machine types for different purposes. For example, user clusters use `n2-standard-2-gdc` for general-purpose container workloads. You can also find machine types for memory-optimized purposes, such as `n2-highmem-8-gdc`, or CPU-optimized purposes, such as `n2-highcpu-8-gdc`. If you plan to run deep learning containers, you must provision GPU machines, such as `a2-highgpu-1g-gdc`.
The following is a list of all GDC predefined machine types available for user cluster worker nodes:
| Name | vCPUs | Memory | GPU count |
|---|---|---|---|
| n2-standard-2-gdc | 2 | 8G | N/A |
| n2-standard-4-gdc | 4 | 16G | N/A |
| n2-highmem-4-gdc | 4 | 32G | N/A |
| n2-highcpu-8-gdc | 8 | 8G | N/A |
| n2-standard-8-gdc | 8 | 32G | N/A |
| n2-highmem-8-gdc | 8 | 64G | N/A |
| a2-highgpu-1g-gdc | 12 | 85G | 1 |
| a2-ultragpu-1g-gdc | 12 | 170G | 1 |
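For example, to run GPU workloads, you could add a pool of `a2-highgpu-1g-gdc` machines to the `nodePools` section of the cluster spec. This is a sketch reusing the fields shown earlier on this page, with a hypothetical pool name and node count:

```yaml
nodePools:
- machineTypeName: a2-highgpu-1g-gdc  # 12 vCPUs, 85G memory, 1 GPU per node
  name: gpu-pool
  nodeCount: 2
```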