Version 1.7. This version is supported as outlined in the Anthos version support policy, offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on bare metal. For more details, see the 1.7 release notes. This is the most recent version. For a complete list of each minor and patch release in chronological order, see the combined release notes.


Adding or removing nodes in a cluster

In Anthos clusters on bare metal, you add or remove nodes in a cluster by editing the cluster's node pool definitions. You can use the kubectl command to change these definitions.

There are three different kinds of node pools in Anthos clusters on bare metal: control plane, load balancer, and worker node pools. You edit control plane and load balancer nodes through the definitions in their associated cluster resources, while you edit worker node pool definitions directly.

Viewing node status

You can view the status of nodes and their respective node pools with the kubectl get command.

For example, the following command shows the status of the node pools in the cluster namespace my-cluster:

 kubectl -n my-cluster get nodepools.baremetal.cluster.gke.io

The system returns results similar to the following:

  NAME                    READY   RECONCILING   STALLED   UNDERMAINTENANCE   UNKNOWN
  my-cluster              3       0             0         0                  0
  my-cluster-lb           2       0             0         0                  0
  np1                     3       0             0         0                  0

If you need more information on diagnosing your clusters, see Diagnosing and resetting clusters.

Changing control plane nodes

You add or remove control plane nodes by editing a cluster's spec.controlPlane.nodePoolSpec.nodes array of addresses in the cluster resource definition.
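For example, you might open the cluster resource for editing and then adjust the nodes array. In the following sketch, the cluster name my-cluster and the IP addresses are placeholders; substitute your own values:

  kubectl -n my-cluster edit cluster my-cluster

  # In the cluster resource, add or remove address entries under
  # spec.controlPlane.nodePoolSpec.nodes:
  spec:
    controlPlane:
      nodePoolSpec:
        nodes:
        - address: 10.200.0.3
        - address: 10.200.0.4
        - address: 10.200.0.5

When you save and close the editor, the changes are applied to the cluster resource.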

Note that you can't edit the control plane node pool directly, because the cluster specification is the authoritative definition of the control plane node pool.

In a high availability (HA) configuration, an odd number of control plane nodes (three or more) is required to establish a quorum, ensuring that if one control plane node fails, the others can take over. If you temporarily have an even number of nodes while adding or removing nodes for maintenance or replacement, your deployment maintains HA as long as a quorum is preserved.

Changing load balancer nodes

You can add or remove a cluster's load balancer node pools by editing the cluster's spec.loadBalancer.nodePoolSpec.nodes array of addresses in the cluster config file.
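Structurally, this edit mirrors the control plane change. The following fragment is a sketch with placeholder addresses; your cluster config file determines the actual values:

  # In the cluster resource, add or remove address entries under
  # spec.loadBalancer.nodePoolSpec.nodes:
  spec:
    loadBalancer:
      nodePoolSpec:
        nodes:
        - address: 10.200.0.6
        - address: 10.200.0.7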

Changing worker nodes

You can add or remove worker nodes directly with the kubectl command. Worker node pools must have at least one desired node.

In the following example, the command deletes a node pool named np1 from the cluster namespace my-cluster:

  kubectl -n my-cluster delete nodepool np1

Similarly, you can resize a node pool by editing its spec.nodes array of addresses.

Note that when you remove nodes from a cluster, they are first drained of any pods. A node is not removed from the cluster if its pods can't be rescheduled onto other nodes. Removing a node only removes it from the cluster's control plane; the contents of the node are not reset.

The following kubectl edit command lets you edit and then commit changes for the cluster namespace my-cluster and the nodepool np1:

  kubectl -n my-cluster edit nodepool np1
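The NodePool resource you edit with that command looks similar to the following sketch. The node pool name np1, the namespace my-cluster, and the addresses are placeholders, and the exact apiVersion may differ depending on your release:

  apiVersion: baremetal.cluster.gke.io/v1
  kind: NodePool
  metadata:
    name: np1
    namespace: my-cluster
  spec:
    clusterName: my-cluster
    nodes:
    # Add or remove address entries here to resize the node pool.
    - address: 10.200.0.10
    - address: 10.200.0.11
    - address: 10.200.0.12

Adding an address entry adds a worker node to the pool; removing one drains and removes the corresponding node, subject to the rescheduling constraints described above.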