Control scheduling with taints and tolerations

This page provides an overview of taints and tolerations on Google Distributed Cloud. When you deploy workloads on your cluster, node taints help you control which nodes those workloads are allowed to run on.

Overview

When you submit a workload to run in a cluster, the scheduler determines where to place the Pods associated with the workload. The scheduler is free to place a Pod on any node that satisfies the Pod's CPU, memory, and custom resource requirements.
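
For example, a Pod that declares resource requests like the following is only placed on a node with enough unreserved CPU and memory to satisfy them. This is a minimal sketch; the Pod name and container image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo       # illustrative name
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
    resources:
      requests:
        cpu: "500m"         # the scheduler only considers nodes with 0.5 vCPU available
        memory: "256Mi"     # and at least 256 MiB of unreserved memory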

If your cluster runs a variety of workloads, you might want to exercise some control over which workloads can run on a particular pool of nodes.

A node taint lets you mark a node so that the scheduler avoids or prevents using it for certain Pods. A complementary feature, tolerations, lets you designate Pods that can be scheduled on "tainted" nodes.

Taints and tolerations work together to ensure that Pods are not scheduled onto inappropriate nodes.

Taints are key-value pairs associated with an effect. The available effects are:

  • NoSchedule: Pods that do not tolerate this taint are not scheduled onto the node; existing Pods are not evicted from the node.
  • PreferNoSchedule: Kubernetes avoids scheduling Pods that do not tolerate this taint onto the node.
  • NoExecute: Pods that do not tolerate this taint are evicted from the node if they are already running, and are not scheduled onto the node if they are not yet running.
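
For example, a node that carries a staging=true taint with the NoSchedule effect has an entry like the following in its spec. This is a minimal sketch; the node name is illustrative, and the key and value match the examples later on this page:

apiVersion: v1
kind: Node
metadata:
  name: my-node             # illustrative node name
spec:
  taints:
  - key: "staging"          # taint key
    value: "true"           # taint value
    effect: "NoSchedule"    # one of the effects listed above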

Advantages of setting node taints in Google Distributed Cloud

Although you can set node taints with the kubectl taint command (shown for comparison after the following list), using gkectl or the Google Cloud console to set a node taint has the following advantages over kubectl:

  • Taints are preserved when a node is restarted or replaced.
  • Taints are created automatically when a node is added to a node pool.
  • When you use gkectl to add taints, the taints are created automatically during cluster autoscaling. (Autoscaling isn't currently available for node pools created in the Google Cloud console.)
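
For comparison, a direct kubectl command to set the same kind of taint looks like the following sketch. NODE_NAME is a placeholder for one of your worker nodes, and a taint set this way is not preserved if the node is replaced:

kubectl taint nodes NODE_NAME staging=true:NoSchedule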

Set node taints

You can set node taints in a node pool either when you create a user cluster or after the cluster is created. This section shows how to add taints to node pools in an existing cluster, but the process is similar when you create a new cluster.

You can either add a new node pool and set a taint, or you can update an existing node pool and set a taint. Before you add another node pool, verify that enough IP addresses are available on the cluster.

If you created the cluster in the Google Cloud console, you can use the Google Cloud console to add or update a node pool.

Set taints in a new node pool

Console

  1. In the Google Cloud console, go to the GKE Enterprise clusters page.

    Go to the GKE Enterprise clusters page

  2. Select the Google Cloud project that the user cluster is in.

  3. In the cluster list, click the name of the cluster, and then click View details in the Details panel.

  4. Click Add node pool.

  5. Configure the node pool:

    1. Enter the Node pool name.
    2. Enter the number of vCPUs for each node in the pool (minimum 4 per user cluster worker node).
    3. Enter the memory size in mebibytes (MiB) for each node in the pool (minimum 8192 MiB per user cluster worker node and must be a multiple of 4).
    4. In the Replicas field, enter the number of nodes in the pool (minimum of 3).
    5. Select the OS image type: Ubuntu Containerd, Ubuntu, or COS.

    6. Enter the Boot disk size in gibibytes (GiB) (default is 40 GiB).

  6. In the Node pool metadata (optional) section, click + Add Taint. Enter the Key, Value, and Effect for the taint. Repeat as needed.

  7. Optionally, click + Add Kubernetes Labels. Enter the Key and Value for the label. Repeat as needed.

  8. Click Create.

  9. The Google Cloud console displays Cluster status: changes in progress. Click Show Details to view the Resource status condition and Status messages.

Command line

  1. In your user cluster configuration file, fill in the nodePools section.

    You must specify the following fields:

    • nodePools[i].name
    • nodePools[i].cpus
    • nodePools[i].memoryMB
    • nodePools[i].replicas

    The following fields are optional. If you don't include nodePools[i].bootDiskSizeGB or nodePools[i].osImageType, the default values are used. A complete example that combines the required fields with a taint appears after these steps.

  2. Fill in the nodePools[i].taints section. For example:

    nodePools:
    - name: "my-node-pool"
      taints:
      - key: "staging"
        value: "true"
        effect: "NoSchedule"
    
  3. Optionally, fill in the following sections:

    • nodePools[i].labels
    • nodePools[i].bootDiskSizeGB
    • nodePools[i].osImageType
    • nodePools[i].vsphere.datastore
    • nodePools[i].vsphere.tags
  4. Run the following command:

    gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG
    

    Replace the following:

    • ADMIN_CLUSTER_KUBECONFIG with the path of the kubeconfig file for your admin cluster.

    • USER_CLUSTER_CONFIG with the path of your user cluster configuration file.
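
Putting these steps together, a node pool entry with the required fields and a taint might look like the following sketch. The vCPU, memory, and replica values are illustrative and reflect the minimums given in the console steps earlier on this page; the label is optional:

nodePools:
- name: "my-node-pool"
  cpus: 4                  # vCPUs per node (minimum 4)
  memoryMB: 8192           # memory per node in MiB (minimum 8192)
  replicas: 3              # number of nodes in the pool (minimum 3)
  taints:
  - key: "staging"
    value: "true"
    effect: "NoSchedule"
  labels:                  # optional Kubernetes labels (illustrative key and value)
    env: "staging"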

Set taints in an existing node pool

Console

  1. In the Google Cloud console, go to the GKE Enterprise clusters page.

    Go to the GKE Enterprise clusters page

  2. Select the Google Cloud project that the user cluster is in.

  3. In the cluster list, click the name of the cluster, and then click View details in the Details panel.

  4. Click the Nodes tab.

  5. Click the name of the node pool that you want to modify.

  6. Click Edit next to the Node pool metadata (optional) section, and click + Add Taint. Enter the Key, Value, and Effect for the taint. Repeat as needed.

  7. Click Done.

  8. Click the back arrow to go back to the previous page.

  9. The Google Cloud console displays Cluster status: changes in progress. Click Show Details to view the Resource status condition and Status messages.

Command line

  1. In your user cluster configuration file, go to the nodePools section of the node pool that you want to update.

  2. Fill in the nodePools[i].taints section. For example:

    nodePools:
    - name: "my-node-pool"
      taints:
      - key: "staging"
        value: "true"
        effect: "NoSchedule"
    
  3. Run the following command:

    gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG
    

    Replace the following:

    • ADMIN_CLUSTER_KUBECONFIG with the path of the kubeconfig file for your admin cluster.

    • USER_CLUSTER_CONFIG with the path of your user cluster configuration file.
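
After the update finishes, you can optionally confirm that the taint is in place by describing one of the pool's nodes and checking the Taints field in the output. This is a sketch: NODE_NAME is a placeholder for one of the pool's nodes, and USER_CLUSTER_KUBECONFIG stands for the path of your user cluster's kubeconfig file (not shown elsewhere on this page):

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG describe node NODE_NAME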

Configure Pods to tolerate a taint

You can configure Pods to tolerate a taint by including the tolerations field in the Pod specification. In the following example, the Pod can be scheduled on a node that has the dedicated=experimental:NoSchedule taint:

tolerations:
- key: dedicated
  operator: Equal
  value: experimental
  effect: NoSchedule
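
As a fuller sketch, a complete Pod manifest that carries this toleration might look like the following; the Pod name and container image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: experimental-workload    # illustrative name
spec:
  containers:
  - name: app
    image: nginx                 # illustrative image
  tolerations:
  - key: dedicated
    operator: Equal
    value: experimental
    effect: NoSchedule

A toleration only allows the Pod to be scheduled on nodes with a matching taint; it doesn't require the Pod to land there. To steer the Pod onto those nodes, combine the toleration with a node selector or node affinity.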

For additional examples, see Taints and Tolerations.