Upgrade clusters

When you install a new version of bmctl, you can upgrade your existing clusters that were created with an earlier version. Upgrading a cluster to the latest Google Distributed Cloud version brings added features and fixes to your cluster. It also ensures that your cluster remains supported. You can upgrade admin, hybrid, standalone, or user clusters with the bmctl upgrade cluster command.

Upgrade considerations

The following sections outline rules and best practices to consider before you upgrade a cluster.

Preview features

Preview features are subject to change and are provided for testing and evaluation purposes only. Do not use Preview features on your production clusters. We do not guarantee that clusters that use Preview features can be upgraded. In some cases, we explicitly block upgrades for clusters that use Preview features.

For information about breaking changes related to upgrading, see the release notes.

SELinux

If you want to enable SELinux to secure your containers, you must make sure that SELinux is enabled in enforcing mode on all your host machines. Starting with Google Distributed Cloud release 1.9.0, you can enable or disable SELinux before or after cluster creation or cluster upgrades. SELinux is enabled by default on Red Hat Enterprise Linux (RHEL) and CentOS. If SELinux is disabled on your host machines, or if you aren't sure, see Securing your containers using SELinux for instructions on how to enable it.
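
Before you upgrade, you can confirm the SELinux mode on each host machine with standard Linux tooling. This check is independent of bmctl; for example:

    # Prints the current SELinux mode: Enforcing, Permissive, or Disabled
    getenforce
    # Prints additional detail, including the mode configured at boot
    sestatus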

Google Distributed Cloud supports SELinux only on RHEL and CentOS systems.

Upgrade preflight checks

Preflight checks are run as part of the cluster upgrade to validate cluster status and node health. The cluster upgrade doesn't proceed if the preflight checks fail. For more information on preflight checks, see Understand preflight checks.

You can check whether your clusters are ready for an upgrade by running the preflight check before you run the upgrade. For more information, see Preflight checks for upgrades.
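
For example, you can run the upgrade preflight check with bmctl before you start the upgrade. The following sketch assumes that you have already set anthosBareMetalVersion in the cluster configuration file to the target version; the cluster name and kubeconfig path are illustrative:

    bmctl check preflight -c cluster1 \
        --kubeconfig bmctl-workspace/cluster1/cluster1-kubeconfig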

Number of nodes

If your cluster has more than 51 nodes, the standard upgrade operation, which uses a bootstrap cluster, is susceptible to failures. These failures occur because of the limited quantity of Pod IP addresses allocated to the bootstrap cluster. The default IP address range available for Pods in the bootstrap cluster uses a /24 mask in CIDR notation, which provides 256 addresses; at up to five upgrade Pods per node, that's enough for at most 51 nodes.

There are three ways to work around this limitation:

  1. (Recommended) Use the --use-bootstrap=false flag with the bmctl upgrade cluster command to perform an in-place upgrade. This flag causes the upgrade to bypass the bootstrap cluster, and with it the Pod address limitation, entirely. Note that there is a known issue with in-place upgrades for version 1.13.0 clusters. If your cluster is at version 1.13.0, see the known issue for a workaround and additional information.

  2. Use the --bootstrap-cluster-pod-cidr flag with the bmctl upgrade cluster command to increase the quantity of Pod IP addresses allocated to the bootstrap cluster (see the example command after this list). For example, when you specify --bootstrap-cluster-pod-cidr=192.168.122.0/23, Pods running for the upgrade operation can use IP addresses from 192.168.122.0/23 (512 addresses) instead of the default CIDR block 192.168.122.0/24 (256 addresses). These added addresses should unblock upgrades for clusters with as many as 102 nodes (five Pods per node across 512 addresses).

    At a maximum, the number of Pods running concurrently during an upgrade can be five times the number of nodes. To ensure that your upgrade succeeds, specify a CIDR block that contains at least five times as many IP addresses as your cluster has nodes. The IP addresses that you specify with this flag must be internal IP addresses.

  3. If you don't want to use either of the preceding options, you can use the --skip-bootstrap-cidr-check flag to bypass the validation. However, if you pass this flag, the upgrade can fail because the Pod CIDR for the bootstrap cluster doesn't have enough available IP addresses.
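
The following command shows how the flag from option 2 fits into an upgrade invocation. This is a sketch; the cluster name and kubeconfig path are illustrative placeholders:

    bmctl upgrade cluster -c cluster1 \
        --bootstrap-cluster-pod-cidr=192.168.122.0/23 \
        --kubeconfig bmctl-workspace/cluster1/cluster1-kubeconfig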

In-place upgrades for self-managed clusters

Starting with Google Distributed Cloud release 1.13.1, you can perform in-place upgrades on admin, hybrid, and standalone clusters. An in-place upgrade eliminates the need for a bootstrap cluster, which simplifies the process and reduces resource requirements for an upgrade. Before you can perform an in-place upgrade on your self-managed cluster, it must be at version 1.13.0 or higher.

To perform an in-place upgrade, you can use either bmctl or kubectl:

bmctl

The upgrade process is identical to the standard upgrade process except for the bmctl upgrade cluster command.

  • To start the in-place upgrade, use the --use-bootstrap=false flag with the upgrade command:

    bmctl upgrade cluster -c CLUSTER_NAME --use-bootstrap=false \
        --kubeconfig ADMIN_KUBECONFIG
    

    Replace the following:

    • CLUSTER_NAME: the name of the cluster to upgrade.
    • ADMIN_KUBECONFIG: the path to the admin cluster kubeconfig file.

As with the standard upgrade process, preflight checks are run as part of the cluster upgrade to validate cluster status and node health. If the preflight checks fail, the cluster upgrade is halted. To troubleshoot any failures, examine the cluster and related logs, since no bootstrap cluster is created.

kubectl

To upgrade a self-managed cluster with kubectl, perform the following steps:

  1. Edit the cluster configuration file to set anthosBareMetalVersion to the upgrade target version, as shown in the sketch after these steps.

  2. To initiate the upgrade, run the following command:

    kubectl apply -f CLUSTER_CONFIG_PATH
    

    Replace CLUSTER_CONFIG_PATH with the path to the cluster configuration file.
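
For reference, the following sketch shows the relevant portion of the edited cluster configuration file from step 1. The cluster name, namespace, and target version are illustrative:

    apiVersion: baremetal.cluster.gke.io/v1
    kind: Cluster
    metadata:
      name: cluster1
      namespace: cluster-cluster1
    spec:
      # Set this field to the upgrade target version.
      anthosBareMetalVersion: 1.14.11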

As with the standard upgrade process, preflight checks are run as part of the cluster upgrade to validate cluster status and node health. If the preflight checks fail, the cluster upgrade is halted. To troubleshoot any failures, examine the cluster and related logs, since no bootstrap cluster is created.
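
Because a self-managed cluster upgrades itself, you can follow the progress of the upgrade by inspecting the Cluster resource. The following command is a sketch that assumes the cluster name and namespace used in the examples in this document:

    kubectl describe cluster cluster1 \
        --namespace cluster-cluster1 \
        --kubeconfig ADMIN_KUBECONFIG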

Pod density

Google Distributed Cloud supports the configuration of up to 250 maximum Pods per node with nodeConfig.podDensity.maxPodsPerNode. You can configure Pod density during cluster creation only. You can't update Pod density settings for existing clusters.
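
For reference, the following sketch shows where this setting appears in a cluster configuration file. The value shown is the supported maximum:

    apiVersion: baremetal.cluster.gke.io/v1
    kind: Cluster
    metadata:
      name: cluster1
      namespace: cluster-cluster1
    spec:
      nodeConfig:
        podDensity:
          # Maximum number of Pods that can run on a single node.
          maxPodsPerNode: 250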

Known issues

For information about potential problems related to cluster upgrades, see Upgrading Anthos clusters on bare metal on the Known Issues page.

Upgrade admin, standalone, hybrid, or user clusters

When you download and install a new version of bmctl, you can upgrade your admin, hybrid, standalone, and user clusters that were created with an earlier version. A given version of bmctl can upgrade a cluster only to its own version.

First, download the latest bmctl, then modify the appropriate cluster config files, and then issue the bmctl upgrade cluster command to complete the upgrade.

  1. Download the latest bmctl from the Cloud Storage bucket and use chmod to give bmctl execute permissions to all users:

    gcloud storage cp gs://anthos-baremetal-release/bmctl/1.14.11/linux-amd64/bmctl bmctl
    chmod a+x bmctl
    
  2. Modify the cluster config file to change the Google Distributed Cloud cluster version from 1.13.2 to 1.14.11. The following shows an example from an admin cluster config:

    ---
    apiVersion: baremetal.cluster.gke.io/v1
    kind: Cluster
    metadata:
      name: cluster1
      namespace: cluster-cluster1
    spec:
      # Cluster type. This can be:
      #   1) admin:  to create an admin cluster. This can later be used to create user clusters.
      #   2) user:   to create a user cluster. Requires an existing admin cluster.
      #   3) hybrid: to create a hybrid cluster that runs admin cluster components and user workloads.
      #   4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
      type: admin
      # Anthos cluster version.
      # Change this value from 1.13.2 to 1.14.11:
      anthosBareMetalVersion: 1.14.11
    
  3. When you upgrade clusters to 1.14.11, you must register them with your project fleet through Connect, if they haven't been registered already.

    1. Manually create service accounts and retrieve the JSON key files as described in Configuring service accounts for use with Connect on the Enabling Google services and service accounts page.
    2. Reference the downloaded JSON keys in the associated gkeConnectAgentServiceAccountKeyPath and gkeConnectRegisterServiceAccountKeyPath fields of the cluster config file.
  4. Use the bmctl upgrade cluster command to complete the upgrade:

    bmctl upgrade cluster -c CLUSTER_NAME --kubeconfig ADMIN_KUBECONFIG
    

    Replace the following:

    • CLUSTER_NAME: the name of the cluster to upgrade.
    • ADMIN_KUBECONFIG: the path to the admin cluster kubeconfig file.

    Preflight checks are run as part of the cluster upgrade to validate cluster status and node health. The cluster upgrade doesn't proceed if the preflight checks fail.

Parallel upgrades of nodes

In a typical cluster upgrade, each cluster node is upgraded sequentially, one after the other. This section shows how to configure your cluster so that multiple nodes upgrade in parallel when you upgrade your cluster.

Upgrading nodes in parallel speeds up cluster upgrades, especially for clusters that have hundreds of nodes. Parallel upgrades of nodes are configured on a node pool basis, and only nodes in a worker node pool can be upgraded in parallel. Nodes in control plane or load balancer node pools can only be upgraded one at a time.

Parallel upgrading of worker nodes is a Preview feature, so don't use this feature on your production clusters.

How to perform a parallel upgrade

To perform a parallel upgrade of nodes in a worker node pool, do the following:

  1. Add the annotation preview.baremetal.cluster.gke.io/parallel-upgrade: "enable" to the cluster configuration file:

    ---    
    gcrKeyPath: /path/to/gcr-sa
    gkeConnectAgentServiceAccountKeyPath: /path/to/gke-connect
    gkeConnectRegisterServiceAccountKeyPath: /path/to/gke-register
    sshPrivateKeyPath: /path/to/private-ssh-key
    cloudOperationsServiceAccountKeyPath: /path/to/logging-sa
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: cluster-cluster1
    ---
    apiVersion: baremetal.cluster.gke.io/v1
    kind: Cluster
    metadata:
      name: cluster1
      namespace: cluster-cluster1
      annotations:
        baremetal.cluster.gke.io/maintenance-mode-deadline-seconds: "180"
        preview.baremetal.cluster.gke.io/parallel-upgrade: "enable"
        ...
    
  2. Add an upgradeStrategy section to the worker node pool manifest. This manifest must be in the cluster configuration file; if it appears in a separate manifest file, the bmctl upgrade cluster command won't act on it. Here's an example:

    ---
    apiVersion: baremetal.cluster.gke.io/v1
    kind: NodePool
    metadata:
      name: np1
      namespace: cluster-ci-bf8b9aa43c16c47
    spec:
      clusterName: ci-bf8b9aa43c16c47
      nodes:
      - address:  10.200.0.7
      - address:  10.200.0.8
      - address:  10.200.0.9
      upgradeStrategy:
        parallelUpgrade:
          concurrentNodes: 5

    In this example, the value of the concurrentNodes field is 5, which means that five nodes upgrade in parallel. The minimum (and default) value of this field is 1, and the maximum allowed value is the number of nodes in the worker node pool. However, we recommend that you set this value no higher than 3% of the total number of nodes in your cluster. For example, in a 100-node cluster, this guideline gives a concurrentNodes value of 3. If the value of concurrentNodes is too high, workloads can be disrupted during a parallel upgrade because too many nodes are drained at the same time.

  3. Upgrade the cluster as described in the preceding Upgrade admin, standalone, hybrid, or user clusters section.

How to disable parallel upgrades of nodes

To disable parallel upgrades of nodes, set the preview.baremetal.cluster.gke.io/parallel-upgrade annotation to "disable" in the cluster configuration file.
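
For example, the annotation in the Cluster manifest would look like the following sketch:

    apiVersion: baremetal.cluster.gke.io/v1
    kind: Cluster
    metadata:
      name: cluster1
      namespace: cluster-cluster1
      annotations:
        preview.baremetal.cluster.gke.io/parallel-upgrade: "disable"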