Manage node pools

A node pool is a group of nodes within a Kubernetes cluster that all have the same configuration. Node pools use a NodePool specification. Each node in the pool carries a Kubernetes node label whose value is the name of the node pool. By default, all new node pools run the same version of Kubernetes as the control plane.

When you create a user cluster, the number and type of nodes that you specify form the cluster's first node pool. You can add node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.

Custom node pools are useful when you schedule pods that require more resources than others, such as more memory or local disk space. You can use node taints if you need more control over where pods are scheduled.
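
For example, if you apply a taint to a custom node pool, only pods that tolerate that taint can be scheduled onto its nodes. The following is a minimal sketch of a matching pod toleration using standard Kubernetes syntax; the taint key dedicated, its value high-mem, the pod name, and the container image are all hypothetical:

    # Pod spec fragment that tolerates a hypothetical dedicated=high-mem:NoSchedule
    # taint applied to a custom node pool.
    apiVersion: v1
    kind: Pod
    metadata:
      name: high-mem-workload              # hypothetical pod name
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image
      tolerations:
      - key: "dedicated"                   # must match the taint key on the node pool
        operator: "Equal"
        value: "high-mem"                  # must match the taint value
        effect: "NoSchedule"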

You can create and delete node pools individually without affecting the whole cluster. You cannot configure a single node in a node pool. Any configuration changes affect all nodes in the node pool.

You can resize node pools in a cluster by upscaling or downscaling the pool. Downscaling a node pool is an automated process where you decrease the pool size and the GDC system automatically drains and evicts an arbitrary node. You cannot select a specific node to remove when downscaling a node pool.
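
As a sketch of what a downscale looks like in practice, assuming you manage the node pool through the Cluster custom resource shown in the API steps later on this page, lowering the nodeCount field is what triggers the automated drain and eviction; the pool name and counts here are illustrative:

    nodePools:
    ...
    - machineTypeName: n2-standard-2-gdc
      name: nodepool-1
      nodeCount: 2   # lowered from 3; GDC drains and evicts one arbitrary node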

Before you begin

To view and manage node pools in a user cluster, you must have the following roles:

  • User Cluster Admin (user-cluster-admin)
  • User Cluster Node Viewer (user-cluster-node-viewer)

Add a node pool

When creating a user cluster from the GDC console, you can customize the default node pool and create additional node pools before cluster creation starts. If you need to add a node pool to an existing user cluster, complete the following steps:

Console

  1. In the navigation menu, select Clusters.
  2. Click the cluster from the cluster list. The Cluster details page is displayed.
  3. Select Node pools > Add node pool.
  4. Assign a name for the node pool. You cannot modify the name after you create the node pool.
  5. Specify the number of worker nodes to create in the node pool.
  6. Select the machine class that best suits your workload requirements. The machine classes are listed with the following settings:
    • Machine type
    • vCPU
    • Memory
  7. Optional: Add Kubernetes key-value pair labels to organize the resources of your node pool.
  8. Click Save.

API

  1. Open the Cluster custom resource spec with the kubectl CLI using the interactive editor:

    kubectl edit clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
        --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG
    

    Replace the following:

    • USER_CLUSTER_NAME: The name of the user cluster.
    • ORG_ADMIN_CLUSTER_KUBECONFIG: The org admin cluster's kubeconfig file path.
  2. Add a new entry in the nodePools section:

    nodePools:
    ...
    - machineTypeName: MACHINE_TYPE
      name: NODE_POOL_NAME
      nodeCount: NUMBER_OF_WORKER_NODES
      taints: TAINTS
      labels: LABELS
    

    Replace the following:

    • MACHINE_TYPE: The machine type for the worker nodes of the node pool. See the Available machine types section on this page for the machine types you can configure.
    • NODE_POOL_NAME: The name of the node pool.
    • NUMBER_OF_WORKER_NODES: The number of worker nodes to provision in the node pool.
    • TAINTS: The taints to apply to the nodes of this node pool. This is an optional field.
    • LABELS: The labels to apply to the nodes of this node pool, as a list of key-value pairs. This is an optional field.

    A filled-in sketch of this entry follows these steps.
  3. Save the file and exit the editor.
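
The following is a filled-in sketch of a nodePools entry. The machine type, pool name, and node count match the examples elsewhere on this page. The taint and label values are illustrative, and the taints and labels fields are shown with assumed schemas (standard Kubernetes taint fields and a key-value map); verify the exact schema against your installed Cluster API:

    nodePools:
    ...
    - machineTypeName: n2-standard-2-gdc
      name: nodepool-1
      nodeCount: 3
      taints:                    # optional; illustrative taint
      - key: dedicated
        value: high-mem
        effect: NoSchedule
      labels:                    # optional; illustrative key-value pairs
        environment: test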

View node pools

To view existing node pools in a user cluster, complete the following steps:

Console

  1. In the navigation menu, select Clusters.
  2. Click the cluster from the cluster list. The Cluster details page is displayed.
  3. Select Node pools.

The list of node pools running in the cluster is displayed. You can manage the node pools of the cluster from this page.

API

  • View the node pools of a specific user cluster:

    kubectl get clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
        -o json --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG | \
        jq .status.workerNodePoolStatuses
    

    The output is similar to the following:

    [
      {
        "conditions": [
          {
            "lastTransitionTime": "2023-08-31T22:16:17Z",
            "message": "",
            "observedGeneration": 2,
            "reason": "NodepoolReady",
            "status": "True",
            "type": "Ready"
          },
          {
            "lastTransitionTime": "2023-08-31T22:16:17Z",
            "message": "",
            "observedGeneration": 2,
            "reason": "ReconciliationCompleted",
            "status": "False",
            "type": "Reconciling"
          }
        ],
        "name": "worker-node-pool",
        "readyNodes": 3,
        "readyTimestamp": "2023-08-31T18:59:46Z",
        "reconcilingNodes": 0,
        "stalledNodes": 0,
        "unknownNodes": 0
      }
    ]
    
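To summarize just the pool names and ready node counts, you can optionally extend the same command with a further jq filter; this is a convenience on top of the documented command, not a separate GDC API:

    kubectl get clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
        -o json --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG | \
        jq '.status.workerNodePoolStatuses[] | {name, readyNodes}'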

Delete a node pool

Deleting a node pool deletes the nodes and routes to them. Any pods running on those nodes are evicted and rescheduled. If the pods have specific node selectors, they might remain unschedulable if no other node in the cluster satisfies the criteria.

Make sure you have at least three worker nodes before deleting a node pool, so that your cluster has enough compute capacity to run effectively.

To delete a node pool, complete the following steps:

Console

  1. In the navigation menu, select Clusters.

  2. Click the cluster that is hosting the node pool you want to delete.

  3. Select Node pools.

  4. Click Delete next to the node pool that you want to delete.

API

  1. Open the Cluster custom resource spec with the kubectl CLI using the interactive editor:

    kubectl edit clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
          --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG
    

    Replace the following:

    • USER_CLUSTER_NAME: The name of the user cluster.
    • ORG_ADMIN_CLUSTER_KUBECONFIG: The org admin cluster's kubeconfig file path.
  2. Remove the node pool entry from the nodePools section. For example, in the following snippet, you must remove the machineTypeName, name, and nodeCount fields:

    nodePools:
    ...
    - machineTypeName: n2-standard-2-gdc
      name: nodepool-1
      nodeCount: 3
    

    Be sure to remove all fields for the node pool you are deleting.

  3. Save the file and exit the editor.
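
After you save, you can optionally confirm the removal by re-running the status check from the View node pools section; the deleted pool should no longer appear in the output once reconciliation completes:

    kubectl get clusters.cluster.gdc.goog/USER_CLUSTER_NAME -n platform \
        -o json --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG | \
        jq .status.workerNodePoolStatuses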

Worker node machine types

When you create a user cluster in Google Distributed Cloud (GDC) air-gapped, you create node pools that are responsible for running your container workloads in the cluster. You provision nodes based on your container workload requirements, and can update them as your requirements evolve.

GDC provides predefined machine types for your worker nodes that are selectable when you add a node pool.

Available machine types

GDC defines machine types with a set of parameters for a user cluster node, including CPU, memory, and GPU. GDC has various machine types for different purposes. For example, user clusters use n2-standard-2-gdc for general-purpose container workloads. You can also find machine types optimized for memory, such as n2-highmem-8-gdc, or for CPU, such as n2-highcpu-8-gdc. If you plan to run artificial intelligence (AI) and machine learning (ML) notebooks, you must provision GPU machines, such as a2-highgpu-1g-gdc. A sketch of a GPU node pool entry follows the table.

The following is a list of all GDC predefined machine types available for user cluster worker nodes:

Name                 vCPUs   Memory   GPU count
n2-standard-2-gdc    2       8G       N/A
n2-standard-4-gdc    4       16G      N/A
n2-highmem-4-gdc     4       32G      N/A
n2-highcpu-8-gdc     8       8G       N/A
n2-standard-8-gdc    8       32G      N/A
n2-highmem-8-gdc     8       64G      N/A
a2-highgpu-1g-gdc    12      85G      1
a2-ultragpu-1g-gdc   12      170G     1
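
For reference, a nodePools entry for a GPU pool might look like the following sketch; only machineTypeName comes from the table above, and the pool name and node count are illustrative:

    nodePools:
    ...
    - machineTypeName: a2-highgpu-1g-gdc   # 12 vCPUs, 85G memory, 1 GPU
      name: gpu-node-pool                  # illustrative name
      nodeCount: 1                         # illustrative count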