In Google Distributed Cloud, you add or remove nodes in a cluster by editing the cluster's node pool definitions. You can use the kubectl command to change these definitions.
There are three different kinds of node pools in Google Distributed Cloud: control plane, load balancer, and worker node pools. You edit control plane and load balancer nodes through the definitions in their associated cluster resources, while you edit worker node pool definitions directly.
Viewing node status
You can view the status of nodes and their respective node pools with the kubectl get command.
For example, the following command shows the status of the node pools in the cluster namespace my-cluster:
kubectl -n my-cluster get nodepools.baremetal.cluster.gke.io
The system returns results similar to the following:
NAME            READY   RECONCILING   STALLED   UNDERMAINTENANCE   UNKNOWN
my-cluster      3       0             0         0                  0
my-cluster-lb   2       0             0         0                  0
np1             3       0             0         0                  0
If you need more information on diagnosing your clusters, see Diagnosing and resetting clusters.
Changing nodes
Most node changes are specified in the cluster config file, which is then applied to the cluster. We recommend that you use the cluster config file as the primary source for updating your cluster. It is a best practice to store your config file in a version control system to track changes for troubleshooting purposes. Note that the bmctl update command is supported for standalone clusters only. For admin, user, and hybrid clusters, use kubectl apply to update your cluster with your node pool changes.
The Google Distributed Cloud cluster config file includes a header section with credential information. The credential entries and the rest of the config file are valid YAML, but the credential entries are not valid for the cluster resource. Remove the credential key path entries, such as gcrKeyPath and sshPrivateKeyPath, before using kubectl apply. Use bmctl update credentials for credential updates.
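For illustration, the header of a generated cluster config file might look similar to the following sketch. The paths shown are placeholders; the exact set of credential keys depends on how the file was generated. Delete the credential entries above the --- separator and keep the resource definitions that follow it:

```yaml
# Credential key path entries: remove these lines before running kubectl apply.
gcrKeyPath: bmctl-workspace/.sa-keys/gcr.json   # placeholder path
sshPrivateKeyPath: /home/user/.ssh/id_rsa       # placeholder path
---
# The resource definitions below remain valid input for kubectl apply.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
# ... rest of the cluster resource ...
```

After removing the credential entries, you can apply the edited file with a command such as kubectl apply -f CLUSTER_CONFIG.yaml, where CLUSTER_CONFIG.yaml is the path to your config file.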
Alternatively, you can use kubectl edit
to modify the cluster resource
directly. For example:
kubectl edit cluster -n CLUSTER_NAMESPACE CLUSTER_NAME
The following sections describe some important differences for updating specific node types.
Control plane and load balancer nodes
The control plane and load balancer node pool specifications for Google Distributed Cloud are special. These specifications declare and control critical cluster resources. The canonical source for these resources is their respective sections in the cluster config file:
spec.controlPlane.nodePoolSpec
spec.loadBalancer.nodePoolSpec
You add or remove control plane or load balancer nodes by editing the array of addresses under nodes in the corresponding section of the cluster config file.
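For example, the control plane section of a cluster config file with three nodes might look similar to the following sketch (the IP addresses are placeholders). To add or remove a node, add or delete an entry in the addresses array:

```yaml
# Excerpt from the cluster config file; IP addresses are placeholders.
spec:
  controlPlane:
    nodePoolSpec:
      nodes:
      # Add or remove address entries here to resize the control plane.
      - address: 10.200.0.4
      - address: 10.200.0.5
      - address: 10.200.0.6
```

The load balancer node pool under spec.loadBalancer.nodePoolSpec follows the same nodes array structure.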
In a high availability (HA) configuration, an odd number of control plane nodes (three or more) is required to establish a quorum, so that if one control plane node fails, the others can take over. If you temporarily have an even number of nodes while adding or removing nodes for maintenance or replacement, your deployment maintains HA as long as a quorum is preserved.
Worker nodes
You can add or remove worker nodes directly with the kubectl command.
Worker node pools must have at least one desired node.
In the following example, the command deletes the node pool named np1 in the cluster namespace my-cluster:
kubectl -n my-cluster delete nodepool np1
Similarly, you can resize a node pool by editing its spec.nodes array of addresses.
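For illustration, a worker NodePool resource might look similar to the following sketch (the pool name, namespace, and addresses are placeholders). Resizing the pool means adding or removing entries in the spec.nodes array:

```yaml
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: np1            # placeholder node pool name
  namespace: my-cluster
spec:
  clusterName: my-cluster
  nodes:
  # Add or remove address entries here to resize the worker pool.
  - address: 10.200.0.10
  - address: 10.200.0.11
```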
Note that when you remove nodes from a cluster, the nodes are first drained of any pods. Nodes are not removed from the cluster if their pods can't be rescheduled on other nodes. Removing a node only removes it from the cluster's control plane; the contents of the node are not reset.
The following kubectl edit command lets you edit and then commit changes for the node pool np1 in the cluster namespace my-cluster:
kubectl -n my-cluster edit nodepool np1