In Google Distributed Cloud, you add or remove node pools in a cluster by creating or deleting node pool custom resources. You use kubectl to make node pool changes.
You can only add or delete worker node pools for an existing cluster. The control plane and load balancer node pools added during cluster creation are critical to the cluster's function and cannot be deleted.
To add or remove nodes from an existing worker node pool, see Add or remove nodes in a cluster.
Check node status
Before adding or removing node pools, use kubectl get to check the status of nodes and their respective node pools. For more information, including a sample command and response, see Viewing node status.
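For example, a quick status check might look like the following. The kubeconfig path ADMIN_KUBECONFIG is a placeholder, and the namespace follows the cluster-my-cluster naming used later in this page:

```shell
# List nodes and their readiness status.
# ADMIN_KUBECONFIG is a placeholder path to your admin cluster kubeconfig.
kubectl get nodes --kubeconfig ADMIN_KUBECONFIG

# List the node pool resources in the cluster's namespace.
kubectl get nodepools.baremetal.cluster.gke.io \
    -n cluster-my-cluster --kubeconfig ADMIN_KUBECONFIG
```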
Add a new node pool
You can add new node pools by creating a new nodepools.baremetal.cluster.gke.io resource in the admin cluster. You then use the IP address of nodes to add them to the node pool. For example, you specify the following configuration to add a new node pool named node-pool-new with node IP addresses 10.200.0.7 and 10.200.0.8:
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-new
  namespace: cluster-my-cluster
spec:
  clusterName: my-cluster
  nodes:
  - address: 10.200.0.7
  - address: 10.200.0.8
  taints:
  - key: <key1>
    value: <value1>
    effect: NoSchedule
  labels:
    key1: <value1>
    key2: <value2>
NodePool.spec.taints and NodePool.spec.labels configurations are reconciled to nodes. During the sync process, the control plane removes all taints and labels that were added directly to the nodes, keeping only those declared in the node pool spec. To bypass this reconciliation step, annotate the node with baremetal.cluster.gke.io/label-taint-no-sync.
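As a sketch, the bypass annotation could be applied like this. NODE_NAME is a placeholder for the target node's name, and the annotation value true is an assumption, not confirmed by this page:

```shell
# Mark a node so the control plane skips reconciling its
# directly added taints and labels during sync.
# NODE_NAME is a placeholder; the value "true" is assumed.
kubectl annotate node NODE_NAME \
    baremetal.cluster.gke.io/label-taint-no-sync=true
```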
The node pool resource must be created in the same namespace as the associated cluster and reference the cluster name in the spec.clusterName field.
Store the configuration in a file named node-pool-new.yaml. Apply the configuration to the admin cluster with the following command. Use the --kubeconfig flag to explicitly specify the admin cluster config, if needed:
kubectl apply -f node-pool-new.yaml
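After applying, you can confirm that the node pool resource exists and watch the new nodes join. Note that the NodePool resource lives in the admin cluster, while the nodes themselves register with the target cluster; ADMIN_KUBECONFIG and CLUSTER_KUBECONFIG are placeholder paths:

```shell
# Confirm the new NodePool resource was created in the admin cluster.
kubectl get nodepool node-pool-new -n cluster-my-cluster \
    --kubeconfig ADMIN_KUBECONFIG

# Watch the new nodes register and become Ready in the target cluster.
kubectl get nodes --watch --kubeconfig CLUSTER_KUBECONFIG
```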
Remove a node pool
You remove node pools with kubectl delete. For example, to remove the node pool added in the preceding section, node-pool-new, use the following command:
kubectl -n cluster-my-cluster delete nodepool node-pool-new
Removing a worker node pool in a cluster can cause Pod disruptions. If a PodDisruptionBudget (PDB) is in place, you might be blocked from removing a node pool. For more information about Pod disruption policies, see Removing nodes blocked by the Pod Disruption Budget.
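Before deleting a pool, it can help to check for PodDisruptionBudgets that could block the node drains. A minimal sketch, assuming CLUSTER_KUBECONFIG is a placeholder path to the affected cluster's kubeconfig:

```shell
# List PodDisruptionBudgets in all namespaces. A PDB whose
# ALLOWED DISRUPTIONS column shows 0 can block node drains
# during node pool removal.
kubectl get poddisruptionbudgets --all-namespaces \
    --kubeconfig CLUSTER_KUBECONFIG
```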
What's next
If your workload requirements change after you create node pools, you can update a worker node pool to add or remove nodes. To add or remove nodes from a worker node pool, see Add or remove nodes in a cluster.