Upgrade node pools

This document shows how to upgrade the control plane and selected node pools in a user cluster.

Upgrading selected node pools is supported for Ubuntu and COS node pools, but not for Windows node pools.

In certain situations, you might want to upgrade some, but not all of the node pools in a user cluster. For example, you could first upgrade a node pool that has light traffic or runs your least critical workloads. After you are convinced that your workloads run correctly on the new version, you could upgrade additional node pools, until eventually all the node pools are upgraded.

Upgrade your admin workstation

Upgrade your admin workstation to the target version of your upgrade. For instructions, see Upgrading Google Distributed Cloud.
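
The page linked above has the authoritative procedure. As a rough sketch only, the upgrade is typically driven with gkeadm upgrade admin-workstation; the file names below (admin-ws-config.yaml and gke-admin-ws-info) are illustrative placeholders for your admin workstation configuration file and information file:

gkeadm upgrade admin-workstation \
    --config admin-ws-config.yaml \
    --info-file gke-admin-ws-info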

Import new OS images to vSphere

Run gkectl prepare to import OS images to vSphere:

gkectl prepare \
    --bundle-path /var/lib/gke/bundles/gke-onprem-vsphere-TARGET_VERSION.tgz \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG

Replace the following:

  • TARGET_VERSION: the target version of your upgrade

  • ADMIN_CLUSTER_KUBECONFIG: the path of your admin cluster kubeconfig file
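
For example, with a target version of 1.14.0-gke.x and the admin cluster kubeconfig file named kubeconfig in your working directory (both values are illustrative), the command would look like this:

gkectl prepare \
    --bundle-path /var/lib/gke/bundles/gke-onprem-vsphere-1.14.0-gke.x.tgz \
    --kubeconfig kubeconfig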

Check available versions

Run the following command to see which versions are available for upgrade:

gkectl version --kubeconfig ADMIN_CLUSTER_KUBECONFIG

The output shows the current version and the versions available for upgrade. For example:

gkectl version: 1.14.0-gke.x
current admin cluster version: 1.13.1-gke.35
current user cluster versions:
- 1.13.1-gke.35
available admin cluster versions:
- 1.13.1-gke.35
available user cluster versions:
- 1.13.1-gke.35
- 1.14.0-gke.x

Upgrade your user cluster control plane and selected node pools

Follow the instructions in Upgrade a user cluster. In your user cluster configuration file, set gkeOnPremVersion to the target version of your upgrade. For each node pool that you want to upgrade, remove the nodePools.nodePool[i].gkeOnPremVersion field, or set it to the empty string. For each node pool that you do not want to upgrade, set nodePools.nodePool[i].gkeOnPremVersion to the current version.

For example, suppose your user cluster is at version 1.13.1-gke.35 and has two node pools: pool-1 and pool-2. Also suppose that you want to upgrade the control plane and pool-1 to 1.14.0-gke.x, but you want pool-2 to remain at version 1.13.1-gke.35.

Here is a portion of a user cluster configuration file. It specifies that the control plane and pool-1 will be upgraded to version 1.14.0-gke.x, but pool-2 will remain at the current version of 1.13.1-gke.35.

gkeOnPremVersion: 1.14.0-gke.x

nodePools:
- name: pool-1
  gkeOnPremVersion: ""
  cpus: 4
  memoryMB: 8192
  replicas: 3
  osImageType: ubuntu_containerd
- name: pool-2
  gkeOnPremVersion: 1.13.1-gke.35
  cpus: 4
  memoryMB: 8192
  replicas: 5
  osImageType: ubuntu_containerd

Continue with your upgrade as described in Upgrade a user cluster.
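
The linked page has the full steps. The core command it uses is gkectl upgrade cluster, which in this scenario would look something like the following, where ADMIN_CLUSTER_KUBECONFIG and USER_CLUSTER_CONFIG are the same paths used elsewhere in this document:

gkectl upgrade cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG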

Upgrade additional node pools

Suppose everything is working well with pool-1, and now you want to upgrade pool-2.

In your user cluster configuration file, under pool-2, remove gkeOnPremVersion, or set it to the empty string:

gkeOnPremVersion: 1.14.0-gke.x

nodePools:
- name: pool-1
  gkeOnPremVersion: ""
  cpus: 4
  memoryMB: 8192
  replicas: 3
  osImageType: ubuntu_containerd
- name: pool-2
  gkeOnPremVersion: ""
  cpus: 4
  memoryMB: 8192
  replicas: 5
  osImageType: ubuntu_containerd

Run gkectl update cluster to apply the change:

gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG

Replace the following:

  • ADMIN_CLUSTER_KUBECONFIG: the path of your admin cluster kubeconfig file

  • USER_CLUSTER_CONFIG: the path of your user cluster configuration file
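
After the update completes, you can confirm that the nodes in pool-2 have moved to the new version by listing the nodes and their Kubernetes versions. This assumes you have a kubeconfig file for the user cluster, shown here as USER_CLUSTER_KUBECONFIG:

kubectl get nodes --kubeconfig USER_CLUSTER_KUBECONFIG

The VERSION column shows the Kubernetes version that each node's kubelet is running, which changes when the node pool is upgraded.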

Troubleshooting

If you encounter an issue after upgrading a node pool, you can roll back to the previous version. For more information, see Rolling back a node pool after an upgrade.
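
The linked page has the authoritative rollback procedure. As a rough sketch only, if your version supports rolling back through the configuration file, reverting pool-2 would amount to pinning it back to the previous version in the user cluster configuration file and applying the change with gkectl update cluster:

nodePools:
- name: pool-2
  gkeOnPremVersion: 1.13.1-gke.35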