This page provides an overview of
taints and tolerations
on Google Distributed Cloud. When you schedule workloads to be
deployed on your cluster, node taints help you control which nodes they are
allowed to run on.
Overview
When you submit a workload to run in a cluster, the
scheduler
determines where to place the Pods associated with the workload. The scheduler
is free to place a Pod on any node that satisfies the Pod's CPU, memory, and
custom resource requirements.
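For instance, a Pod that declares resource requests like the following is only placed on a node with that much unallocated capacity. This is a minimal illustrative spec; the Pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: "500m"
            memory: "256Mi"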
If your cluster runs a variety of workloads, you might want to exercise some
control over which workloads can run on a particular pool of nodes.
A node taint lets you mark a node so that the scheduler avoids or prevents
using it for certain Pods. A complementary feature, tolerations, lets you
designate Pods that can be scheduled on "tainted" nodes.
Taints and tolerations work together to ensure that Pods are not scheduled onto
inappropriate nodes.
Taints are key-value pairs associated with an effect. The following list
describes the available effects:

NoSchedule: Pods that do not tolerate this taint are not scheduled on the
node; existing Pods are not evicted from the node.

PreferNoSchedule: Kubernetes avoids scheduling Pods that do not tolerate this
taint onto the node.

NoExecute: The Pod is evicted from the node if it is already running on the
node, and is not scheduled onto the node if it is not yet running on the node.
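For example, applying the taint dedicated=experimental:NoSchedule to a node
adds an entry like the following to the node's spec. This is an illustrative
snippet; the key dedicated and the value experimental are placeholders:

    spec:
      taints:
      - key: dedicated
        value: experimental
        effect: NoSchedule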
Advantages of setting node taints in Google Distributed Cloud
Although you can set node taints using the kubectl taint command (an example
follows the list below), setting a taint with gkectl or the Google Cloud
console has the following advantages over kubectl:

- Taints are preserved when a node is restarted or replaced.
- Taints are created automatically when a node is added to a node pool.
- When you use gkectl to add taints, the taints are created automatically
  during cluster autoscaling. (Autoscaling isn't currently available for node
  pools created in the Google Cloud console.)
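For reference, a one-off taint set directly with kubectl looks like the
following; NODE_NAME is a placeholder for the node to taint, and the key,
value, and effect are illustrative. A taint set this way is subject to the
limitations described above:

    kubectl taint nodes NODE_NAME dedicated=experimental:NoSchedule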
Set node taints
You can set node taints in a node pool either when you create a user cluster
or after the cluster is created. This section shows how to add taints to a
cluster that has already been created, but the process is similar when you
create a new cluster. You can either add a new node pool and set a taint, or
update an existing node pool and set a taint. Before you add another node
pool, verify that enough IP addresses are available on the cluster.
Console

1. In the Google Cloud console, go to the GKE Enterprise clusters page.
2. Select the Google Cloud project that the user cluster is in.
3. In the cluster list, click the name of the cluster, and then click
   View details in the Details panel.
4. Click the Nodes tab.
5. Click the name of the node pool that you want to modify.
6. Click Edit next to the Node pool metadata (optional) section, and click
   + Add Taint.
7. Enter the Key, Value, and Effect for the taint. Repeat as needed.
8. Click Done.
9. Click the back arrow to return to the previous page.
10. The Google Cloud console displays Cluster status: changes in progress.
    Click Show Details to view the Resource status condition and Status
    messages.
Command line

1. In your user cluster configuration file, go to the nodePools section for
   the node pool that you want to update.

2. Fill in the nodePools[i].taints field. For example:

       nodePools:
       - name: "my-node-pool"
         taints:
         - key: "staging"
           value: "true"
           effect: "NoSchedule"

3. Run the following command:

       gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG

   Replace the following:

   - ADMIN_CLUSTER_KUBECONFIG with the path of the kubeconfig file for your
     admin cluster.
   - USER_CLUSTER_CONFIG with the path of your user cluster configuration
     file.
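As a quick, optional check, you can verify that the taint landed on a node in
the pool. In this sketch, NODE_NAME is a placeholder node name and
USER_CLUSTER_KUBECONFIG is the path of your user cluster kubeconfig file:

    kubectl --kubeconfig USER_CLUSTER_KUBECONFIG describe node NODE_NAME | grep Taints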
Configure Pods to tolerate a taint
You can configure Pods to tolerate a taint by including the tolerations field
in the Pods' specification. In the following example, the Pod can be scheduled
on a node that has the dedicated=experimental:NoSchedule taint:
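    tolerations:
    - key: dedicated
      operator: Equal
      value: experimental
      effect: NoSchedule

For additional examples, see Taints and Tolerations in the Kubernetes
documentation.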
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eNode taints in Google Distributed Cloud allow you to control which nodes workloads can run on, preventing inappropriate scheduling by marking nodes with key-value pairs and effects.\u003c/p\u003e\n"],["\u003cp\u003eTolerations are a complementary feature that enables specific Pods to be scheduled on nodes that have been marked with taints, ensuring that necessary workloads can still utilize those resources.\u003c/p\u003e\n"],["\u003cp\u003eSetting node taints through \u003ccode\u003egkectl\u003c/code\u003e or the Google Cloud console offers advantages over using \u003ccode\u003ekubectl\u003c/code\u003e, such as taint preservation during node restarts or replacements, and automatic creation during node addition and autoscaling.\u003c/p\u003e\n"],["\u003cp\u003eTaints can be applied to node pools either during the creation of a new user cluster or by updating existing node pools, which can be managed through both the Google Cloud console and command-line interface.\u003c/p\u003e\n"],["\u003cp\u003ePods can be configured to tolerate taints by defining the \u003ccode\u003etolerations\u003c/code\u003e field within their specification, allowing them to be scheduled on nodes that have specific taints.\u003c/p\u003e\n"]]],[],null,["# Control scheduling with taints and tolerations\n\n\u003cbr /\u003e\n\nThis page provides an overview of\n[taints and tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)\non Google Distributed Cloud. When you schedule workloads to be\ndeployed on your cluster, node taints help you control which nodes they are\nallowed to run on.\n\nOverview\n--------\n\nWhen you submit a workload to run in a cluster, the\n[scheduler](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/)\ndetermines where to place the Pods associated with the workload. The scheduler\nis free to place a Pod on any node that satisfies the Pod's CPU, memory, and\ncustom resource requirements.\n\nIf your cluster runs a variety of workloads, you might want to exercise some\ncontrol over which workloads can run on a particular pool of nodes.\n\nA **node taint** lets you mark a node so that the scheduler avoids or prevents\nusing it for certain Pods. A complementary feature, *tolerations*, lets you\ndesignate Pods that can be used on \"tainted\" nodes.\n\nTaints and tolerations work together to ensure that Pods are not scheduled onto\ninappropriate nodes.\n\nTaints are *key-value pairs* associated with an *effect*. 
The following table\nlists the available effects:\n\nAdvantages of setting node taints in Google Distributed Cloud\n-------------------------------------------------------------\n\nAlthough you can set node taints using the\n[`kubectl taint`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint)\ncommand, using `gkectl` or the Google Cloud console to set a node taint has the\nfollowing advantages over `kubectl`:\n\n- Taints are preserved when a node is restarted or replaced.\n- Taints are created automatically when a node is added to a node pool.\n- When using `gkectl` to add taints, the taints are created automatically during cluster autoscaling. (Autoscaling for nodepools created in the Google Cloud console isn't available currently.)\n\nSet node taints\n---------------\n\nYou can set node taints in a node pool either when you create a user cluster or\nafter the cluster is created. This section shows adding taints to clusters that\nhave already been created, but the process is similar when creating new\nclusters.\n\nYou can either\n[add a new node pool](/anthos/clusters/docs/on-prem/1.12/how-to/managing-node-pools#add_a_node_pool)\nand set a taint, or you can\n[update an existing node pool](/anthos/clusters/docs/on-prem/1.12/how-to/managing-node-pools#update_a_node_pool)\nand set a taint. Before you add another node pool,\n[verify that enough IP addresses are available](/anthos/clusters/docs/on-prem/1.12/how-to/resizing-a-user-cluster#verify_ips)\non the cluster.\n\nIf you created the cluster in the Google Cloud console, you can use the\nGoogle Cloud console to add or update a node pool.\n\n### Set taints in a new node pool\n\n### Console\n\n1. In the Google Cloud console, go to the GKE Enterprise clusters page.\n\n [Go to the GKE Enterprise clusters page](https://console.cloud.google.com/anthos/clusters)\n2. Select the Google Cloud project that the user cluster is in.\n\n3. In the cluster list, click the name of the cluster, and then click\n **View details** in the **Details** panel.\n\n4. Click add_box **Add node pool**.\n\n5. Configure the node pool:\n\n 1. Enter the **Node pool name**.\n 2. Enter the number of **vCPUs** for each node in the pool (minimum 4 per user cluster worker).\n 3. Enter the **memory** size in mebibytes (MiB) for each node in the pool (minimum 8192 MiB per user cluster worker node and must be a multiple of 4).\n 4. In the **Replicas** field, enter the number of nodes in the pool (minimum of 3).\n 5. Select the **OS image type** : **Ubuntu Containerd** , **Ubuntu** ,\n or **COS**.\n\n | **Note:** The **Ubuntu** OS Image type for node pools is deprecated in Google Distributed Cloud version 1.12 and will be unsupported in version 1.13. Consider changing your node pools now to use either **Ubuntu Containerd** or **COS** for the OS Image type.\n 6. Enter the **Boot disk size** in gibibytes (GiB) (default is 40 GiB).\n\n6. In the **Node pool metadata (optional)** section, click **+ Add Taint** .\n Enter the **Key** , **Value** , and **Effect** for the taint. Repeat as\n needed.\n\n7. Optionally, click **+ Add Kubernetes Labels** . Enter the **Key** and\n **Value** for the label. Repeat as needed.\n\n8. Click **Create**.\n\n9. The Google Cloud console displays **Cluster status: changes in\n progress** . Click **Show Details** to view the **Resource status\n condition** and **Status messages**.\n\n### Command line\n\n1. 
In your\n [user cluster configuration file](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file),\n fill in the\n [`nodePools`](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file#nodepools-section)\n section.\n\n You must specify the following fields:\n - `nodePools.[i].name`\n - `nodePools[i].cpus`\n - `nodePools.[i].memoryMB`\n - `nodePools.[i].replicas`\n\n The following fields are optional. If you don't include\n [`nodePools[i].bootDiskSizeGB`](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file#nodepool-bootdisksizegb-field)\n or\n [`nodePools[i].osImageType`](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file#nodepool-osimagetype-field),\n the default values are used.\n2. Fill in the [`nodePools[i].taints`](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file#nodepool-taints-field)\n section. For example:\n\n nodePools:\n - name: \"my-node-pool\"\n taints:\n - key: \"staging\"\n value: \"true\"\n effect: \"NoSchedule\"\n\n3. Optionally, fill in the following sections:\n\n - `nodePools[i].labels`\n - `nodePools[i].bootDiskSizeGB`\n - `nodePools[i].osImageType`\n - `nodePools[i].vsphere.datastore`\n - `nodePools[i].vsphere.tags`\n\n | **Note:** The **Ubuntu** OS Image type for node pools will be deprecated in Google Distributed Cloud version 1.12 and unsupported in version 1.13. Consider changing your node pools now to use either **Ubuntu\n | Containerd** or **COS** for the OS Image type.\n4. Run the following command:\n\n ```\n gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG\n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003e[ADMIN_CLUSTER_KUBECONFIG]\u003c/var\u003e with the path of the\n kubeconfig file for your admin cluster.\n\n - \u003cvar translate=\"no\"\u003e[USER_CLUSTER_CONFIG]\u003c/var\u003e with the path of your user cluster\n configuration file.\n\n### Set taints in an existing node pool\n\n### Console\n\n1. In the Google Cloud console, go to the GKE Enterprise clusters page.\n\n [Go to the GKE Enterprise clusters page](https://console.cloud.google.com/anthos/clusters)\n2. Select the Google Cloud project that the user cluster is in.\n\n3. In the cluster list, click the name of the cluster, and then click\n **View details** in the **Details** panel.\n\n4. Click the **Nodes** tab.\n\n5. Click the name of the node pool that you want to modify.\n\n6. Click edit **Edit** next to the\n **Node pool metadata (optional)** section, and click **+ Add Taint** .\n Enter the **Key** , **Value** , and **Effect** for the taint. Repeat as\n needed.\n\n7. Click **Done**.\n\n8. Click arrow_back to go back to the\n previous page.\n\n9. The Google Cloud console displays **Cluster status: changes in\n progress** . Click **Show Details** to view the **Resource status\n condition** and **Status messages**.\n\n### Command line\n\n1. In your\n [user cluster configuration file](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file),\n go to the\n [`nodePools`](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file#nodepools-section)\n section of the node pool that you want to update.\n\n2. Fill in the\n [`nodePools[i].taints`](/anthos/clusters/docs/on-prem/1.12/how-to/user-cluster-configuration-file#nodepool-taints-field)\n For example:\n\n nodePools:\n - name: \"my-node-pool\"\n taints:\n - key: \"staging\"\n value: \"true\"\n effect: \"NoSchedule\"\n\n3. 
Run the following command:\n\n ```\n gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG\n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003e[ADMIN_CLUSTER_KUBECONFIG]\u003c/var\u003e with the path of the\n kubeconfig file for your admin cluster.\n\n - \u003cvar translate=\"no\"\u003e[USER_CLUSTER_CONFIG]\u003c/var\u003e with the path of your user cluster\n configuration file.\n\nConfigure Pods to tolerate a taint\n----------------------------------\n\nYou can configure Pods to tolerate a taint by including the `tolerations` field\nin the Pods' specification. In the following example, the Pod can be scheduled\non a node that has the `dedicated=experimental:NoSchedule` taint: \n\n tolerations:\n - key: dedicated\n operator: Equal\n value: experimental\n effect: NoSchedule\n\nFor additional examples, see\n[Taints and Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)."]]