Version 1.12. This is the most recent version. It's supported as outlined in the Anthos version support policy, offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on bare metal. For release details, see the release notes 1.12. For a complete list of each minor and patch release in chronological order, see the combined release notes.

For version 1.12, the documentation structure has changed. See New documentation structure.


Add or remove node pools in a cluster

In Anthos clusters on bare metal, you add or remove node pools in a cluster by creating or deleting node pool custom resources. You use kubectl to make node pool changes.

You can only add or delete worker node pools for an existing cluster. The control plane and load balancer node pools added during cluster creation are critical to the cluster's function and cannot be deleted.

Check node status

Before adding or removing node pools, use kubectl get to check the status of nodes and their respective node pools. For more information, including a sample command and response, see Viewing node status.
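For example, a command like the following lists the nodes in the cluster along with their status. The kubeconfig path is a placeholder; substitute the path to your cluster's kubeconfig file:

```shell
# List nodes and their status; CLUSTER_KUBECONFIG is a placeholder
# for the path to your cluster kubeconfig file.
kubectl get nodes --kubeconfig CLUSTER_KUBECONFIG -o wide
```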

Add a new node pool

You can add new node pools by creating a new nodepools.baremetal.cluster.gke.io resource in the admin cluster. For example, the following configuration adds a new node pool named node-pool-new with node IP addresses 10.200.0.7 and 10.200.0.8:

  apiVersion: baremetal.cluster.gke.io/v1
  kind: NodePool
  metadata:
    name: node-pool-new
    namespace: cluster-my-cluster
  spec:
    clusterName: my-cluster
    nodes:
    - address: 10.200.0.7
    - address: 10.200.0.8
    taints:
    - key: <key1>
      value: <value1>
      effect: NoSchedule
    labels:
      <key1>: <value1>
      <key2>: <value2>

NodePool.spec.taints and NodePool.spec.labels configurations are reconciled to the nodes in the node pool. During the sync process, the control plane removes any taints and labels that were added directly to the nodes rather than through the node pool spec.

To bypass this reconciliation step, you can annotate the node with baremetal.cluster.gke.io/label-taint-no-sync.
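For example, you might apply the annotation with kubectl annotate. The node name and the annotation value of true shown here are illustrative assumptions:

```shell
# Exclude a node from taint/label reconciliation.
# NODE_NAME is a placeholder, and the "true" value is an assumption.
kubectl annotate node NODE_NAME baremetal.cluster.gke.io/label-taint-no-sync=true
```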

The node pool resource must be created in the same namespace as the associated cluster and reference the cluster name in the spec.clusterName field.

Store the configuration in a file named node-pool-new.yaml, then apply it to the admin cluster with the following command. If needed, use the --kubeconfig flag to explicitly specify the admin cluster kubeconfig file:

  kubectl apply -f node-pool-new.yaml
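As an optional verification step (not part of the required procedure), you can confirm that the node pool resource was created in the cluster namespace:

```shell
# List node pool resources in the cluster namespace to confirm creation.
kubectl -n cluster-my-cluster get nodepools.baremetal.cluster.gke.io
```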

Remove a node pool

You remove node pools with kubectl delete. For example, to remove the node pool added in the preceding section, node-pool-new, use the following command:

  kubectl -n cluster-my-cluster delete nodepool node-pool-new

Removing a worker node pool from a cluster can cause pod disruptions. If a PodDisruptionBudget (PDB) is in place, you may be blocked from removing a node pool. For more information about pod disruption policies, see Removing nodes blocked by the Pod Disruption Budget.
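As a suggested precaution, you can check in advance whether any PodDisruptionBudget might block the removal by listing the PDBs in the cluster:

```shell
# List PodDisruptionBudgets across all namespaces; an ALLOWED DISRUPTIONS
# value of 0 indicates a PDB that can block node removal.
kubectl get poddisruptionbudgets --all-namespaces
```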