In a user cluster, you can create a group of nodes that all have the same configuration by filling in the nodePools section of the cluster configuration file. You can then manage the node pool without affecting the other nodes in the cluster. Learn more about node pools.

You can also update a node pool to use a different osImageType.
Before you begin
Deleting a node pool causes immediate removal of the pool's nodes regardless of whether those nodes are running workloads.
You can update the replicas field of a nodePool section without interrupting workloads. But if you update any other field, the nodes in the pool are deleted and re-created.
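For example, to grow a pool from 3 to 5 nodes without re-creating them, change only the replicas value. The pool name and sizes here are illustrative:

```yaml
nodePools:
- name: pool-1      # unchanged
  cpus: 4           # unchanged
  memoryMB: 8192    # unchanged
  replicas: 5       # was 3; only this field changes, so existing nodes are kept
```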
If you want to attach tags to all VMs in a node pool, your vCenter user account must have these vSphere tagging privileges:
- vSphere Tagging.Assign or Unassign vSphere Tag
- vSphere Tagging.Assign or Unassign vSphere Tag on Object (vSphere 7)
When you update a nodePool section, Google Distributed Cloud creates a new node and then deletes an old node. It repeats this process until all the old nodes have been replaced with new nodes. This means that the cluster must have an extra IP address available to use during the update.
Suppose a node pool will have N nodes at the end of an update. Then you must have at least N + 1 IP addresses available for nodes in that pool. This means that if you are resizing a cluster by adding nodes to one or more pools, you must have at least one more IP address than the total number of nodes that will be in all of the cluster's node pools at the end of the resizing. For more information, see Verify that enough IP addresses are available.
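As a concrete sketch of the arithmetic (the pool sizes here are made up): if a cluster's pools will end up with 3, 5, and 2 nodes after a resize, you need at least one IP address more than the total:

```shell
# Hypothetical pool sizes after the resize: 3 + 5 + 2 = 10 nodes total.
# During the rolling update the cluster needs at least total + 1 node IPs.
total=$((3 + 5 + 2))
echo $((total + 1))   # prints 11
```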
Filling in the nodePools section of the cluster configuration file
In your user cluster configuration file, fill in the nodePools section.
For each node pool, you must specify the following fields:
nodePools[i].name
nodePools[i].cpus
nodePools[i].memoryMB
nodePools[i].replicas
The following fields are optional:
nodePools[i].labels
nodePools[i].taints
nodePools[i].bootDiskSizeGB
nodePools[i].osImageType
nodePools[i].vsphere.datastore
nodePools[i].vsphere.tags
Creating node pools in a new cluster
In your user cluster configuration file, fill in the nodePools section, and then create the cluster:
gkectl create cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG
Replace the following:
ADMIN_CLUSTER_KUBECONFIG: the kubeconfig file for the admin cluster
USER_CLUSTER_CONFIG: the user cluster configuration file
Updating the node pools in an existing cluster
In your user cluster configuration file, edit the nodePools section, and then update the cluster:
gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG
Verifying your changes
To verify that your node pools have been created or updated as intended, inspect the cluster nodes:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get nodes --output wide
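To check the node count against your configuration, you can count the nodes that report Ready and compare the result with the sum of the replicas values across all of your pools. The filter below assumes the default kubectl output format, where STATUS is the second column:

```shell
# Count nodes whose STATUS column reads Ready (requires cluster access).
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get nodes --no-headers \
  | awk '$2 == "Ready"' | wc -l
```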
If you need to revert your changes, edit the cluster configuration file and run gkectl update cluster.
Deleting a node pool
To delete a node pool from a user cluster:

1. Ensure that no workloads are running on the nodes in the pool.

2. Remove the pool's definition from the nodePools section of the user cluster configuration file.

3. Update the cluster:

gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG
Examples
In the following example configuration, there are four node pools, each with different attributes:

- pool-1: only the minimum required attributes are specified
- pool-2: includes vsphere.datastore and vsphere.tags
- pool-3: includes taints and labels
- pool-4: includes osImageType and bootDiskSizeGB
nodePools:
- name: pool-1
  cpus: 4
  memoryMB: 8192
  replicas: 5
- name: pool-2
  cpus: 8
  memoryMB: 16384
  replicas: 3
  vsphere:
    datastore: my_datastore
    tags:
    - category: "purpose"
      name: "testing"
- name: pool-3
  cpus: 4
  memoryMB: 8192
  replicas: 5
  taints:
  - key: "example-key"
    effect: NoSchedule
  labels:
    environment: production
    app: nginx
- name: pool-4
  cpus: 4
  memoryMB: 8192
  replicas: 5
  osImageType: cos
  bootDiskSizeGB: 40
Update the osImageType used by a node pool
You can update a node pool to use a different osImageType. Update the configuration file for the node pool, as shown in the following example, and run gkectl update cluster.

nodePools:
- name: np-1
  cpus: 4
  memoryMB: 8192
  replicas: 3
  osImageType: ubuntu_containerd
Troubleshooting
In general, the gkectl update cluster command provides specifics when it fails. If the command succeeded and you don't see the nodes, you can troubleshoot with the Diagnosing cluster issues guide.

It is possible that there are insufficient cluster resources, such as a lack of available IP addresses, during node pool creation or update. See the Resizing a user cluster topic for details about verifying that IP addresses are available.

You can also review the general Troubleshooting guide.

Won't proceed past Creating node MachineDeployment(s) in user cluster…
It can take a while to create or update the node pools in your user cluster. However, if the wait time is extremely long and you suspect that something might have failed, you can run the following commands:
- Run kubectl get nodes to obtain the state of your nodes.
- For any nodes that are not ready, run kubectl describe node NODE_NAME to obtain details.
- Run