[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-05 (世界標準時間)。"],[[["\u003cp\u003eThis page details how to configure Kubernetes scheduling for AlloyDB Omni Pods, including specifying tolerations and node affinity in the Kubernetes manifest.\u003c/p\u003e\n"],["\u003cp\u003eTolerations allow scheduling AlloyDB Omni Pods on nodes with specific taints, ensuring compatibility with node characteristics or isolating them from other application Pods.\u003c/p\u003e\n"],["\u003cp\u003eNode affinity provides a way to define rules for Pod placement, enabling the preference for scheduling Pods on nodes with specific labels, either as a requirement or as a preference.\u003c/p\u003e\n"],["\u003cp\u003eThe manifest's \u003ccode\u003etolerations\u003c/code\u003e and \u003ccode\u003enodeaffinity\u003c/code\u003e sections can be applied to both primary and read pool instances in the \u003ccode\u003eschedulingConfig\u003c/code\u003e, allowing for distinct scheduling rules for each.\u003c/p\u003e\n"],["\u003cp\u003eThe provided example demonstrates using both tolerations and node affinity to schedule Pods on control plane nodes, while preferring nodes with specific labels for balancing load and optimizing resource usage.\u003c/p\u003e\n"]]],[],null,["# Assign nodes to a database cluster using scheduling\n\nSelect a documentation version: 15.7.0keyboard_arrow_down\n\n- [Current (16.8.0)](/alloydb/omni/current/docs/assign-nodes-cluster-scheduling)\n- [16.8.0](/alloydb/omni/16.8.0/docs/assign-nodes-cluster-scheduling)\n- [16.3.0](/alloydb/omni/16.3.0/docs/assign-nodes-cluster-scheduling)\n- [15.12.0](/alloydb/omni/15.12.0/docs/assign-nodes-cluster-scheduling)\n- [15.7.1](/alloydb/omni/15.7.1/docs/assign-nodes-cluster-scheduling)\n- [15.7.0](/alloydb/omni/15.7.0/docs/assign-nodes-cluster-scheduling)\n- [15.5.5](/alloydb/omni/15.5.5/docs/assign-nodes-cluster-scheduling)\n- [15.5.4](/alloydb/omni/15.5.4/docs/assign-nodes-cluster-scheduling)\n\n\u003cbr /\u003e\n\nIn [AlloyDB Omni Kubernetes operator](/alloydb/omni/15.7.0/docs/deploy-kubernetes), *scheduling* is a process for matching new database Pods to nodes to balance node distribution across the cluster and help optimize performance. Pods and nodes are matched based on several criteria and available resources, such as CPU and memory.\n\n\u003cbr /\u003e\n\nFor more information about scheduling, see [Scheduling, Preemption and Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/) in the Kubernetes documentation.\n\nThis page shows how to specify tolerations and node affinity scheduling configurations for primary and read pool instances in your Kubernetes manifest.\n\nFor information about how to define *taints* on nodes, see\n[Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts)\nin the Kubernetes documentation.\n\nSpecify tolerations\n-------------------\n\nTo schedule your AlloyDB Omni Pods to nodes free of other application Pods or match a specific *taint* defined on those nodes, apply one or more tolerations to the nodes as follows:\n\n1. Modify the AlloyDB Omni Kubernetes operator cluster's manifest to include a `tolerations` section in the `schedulingConfig` section of either of the following sections:\n - `primarySpec` for primary instances\n - `spec` for read pool instances\n\n ```\n tolerations:\n - key: \"TAINT_KEY\"\n operator: \"OPERATOR_VALUE\"\n value: \"VALUE\"\n effect: \"TAINT_EFFECT\"\n \n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eTAINT_KEY\u003c/var\u003e: The existing unique name of the taint key such as a node's hostname or another locally-inferred value that the toleration applies to. The taint key is already defined on a node. An empty field and the \u003cvar translate=\"no\"\u003eOPERATOR_VALUE\u003c/var\u003e set to `exists` signify that the toleration must match all values and all keys.\n - \u003cvar translate=\"no\"\u003eOPERATOR_VALUE\u003c/var\u003e: Represents a key's relationship to a set of values. Set the parameter to one of the following:\n - `exists`: Kubernetes matches any value if the taint is defined regardless of the taint's value.\n - `equal`: Kubernetes does not schedule a Pod to a node if the values are different. The operator requires the taint `true` value.\n - \u003cvar translate=\"no\"\u003eVALUE\u003c/var\u003e: The taint value the toleration matches to. If the operator is Exists, the value is empty, otherwise it is a regular string. For example, `true`.\n - \u003cvar translate=\"no\"\u003eTAINT_EFFECT\u003c/var\u003e: Indicates the taint effect to match. An empty field signifies that all taint effects must be matched. Set the parameter to one of the following:\n - `NoSchedule`: Kubernetes does not schedule new Pods on the tainted node.\n - `PreferNoSchedule`: Kubernetes avoids placing new Pods on the tainted node unless necessary.\n - `NoExecute`: Kubernetes evicts existing Pods that don't tolerate the taint.\n2. Re-apply the manifest.\n\nDefine node affinity\n--------------------\n\nThe Kubernetes scheduler uses node affinity as a set of rules to determine where to place a Pod. Node affinity is a more flexible and expressive version of node selectors.\n\nTo specify which nodes must be scheduled for running your database, follow these steps:\n\n1. Modify the database cluster manifest to include the `nodeaffinity`section after the `tolerations` section in the `schedulingConfig` section of either `primarySpec` for primary instances or `spec` for read pool instances: \n\n ```\n nodeaffinity:\n NODE_AFFINITY_TYPE:\n - weight: WAIT_VALUE\n preference:\n matchExpressions:\n - key: LABEL_KEY\n operator: OPERATOR_VALUE\n values:\n - LABEL_KEY_VALUE\n \n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eNODE_AFFINITY_TYPE\u003c/var\u003e: Set the parameter to one of the following:\n - `requiredDuringSchedulingIgnoredDuringExecution`: Kubernetes schedules the Pod based exactly on the defined rules.\n - `preferredDuringSchedulingIgnoredDuringExecution`: The Kubernetes scheduler tries to find a node that meets the defined rule for scheduling. However, if there is no such node, Kubernetes schedules to a different node in the cluster.\n - \u003cvar translate=\"no\"\u003eWAIT_VALUE\u003c/var\u003e: Indicates the preference weight for the specified nodes. Higher values indicate a stronger preference. Valid values are from `1` to `100`.\n - \u003cvar translate=\"no\"\u003eLABEL_KEY\u003c/var\u003e: The node's label for the key that serves as a location indicator and facilitates even Pod distribution across the cluster. For example, `disktype=ssd`.\n - \u003cvar translate=\"no\"\u003eOPERATOR_VALUE\u003c/var\u003e: Represents a key's relationship to a set of values. Set the parameter to one of the following:\n - `In`: The values array must be non-empty.\n - `NotIn`: The values array must be non-empty.\n - `Exists`: The values array must be empty.\n - `DoesNotExist`: The values array must be empty.\n - `Gt`: The values array must have a single element, which is interpreted as an integer.\n - `Lt`: The values array must have a single element, which is interpreted as an integer.\n - \u003cvar translate=\"no\"\u003eLABEL_KEY_VALUE\u003c/var\u003e: The value for your label key. Set the parameter to an array of string values as follows:\n - If the operator is `In` or `NotIn`, the values array must be non-empty.\n - If the operator is `Exists` or `DoesNotExist`, the values array must be empty.\n - If the operator is `Gt` or `Lt`, the values array must have a single element, which is interpreted as an integer.\n2. Reapply the manifest.\n\nExample\n-------\n\nThe following example illustrates scheduling Pods in AlloyDB Omni Kubernetes operator primary and read pool instances. Such scheduling setup helps ensure that the primary instance of the database cluster is scheduled on appropriate nodes while allowing some flexibility in node selection. This flexibility can be useful for balancing load, optimizing resource usage, or adhering to specific node roles and characteristics. \n\n schedulingconfig:\n tolerations:\n - key: \"node-role.kubernetes.io/control-plane\"\n operator: \"Exists\"\n effect: \"NoSchedule\"\n nodeaffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n\nThe example toleration allows the Pod to be scheduled on nodes that are marked as control plane nodes because of the following details:\n\n- The `node-role.kubernetes.io/control-plane` taint key indicates that the node has a control plane node.\n- The `Exists` operator means that the toleration matches any taint with the specified taint key regardless of the value.\n- The `NoSchedule` effect means that Pods aren't going to be scheduled on the control plane node unless they have a matching toleration.\n\nThe `preferredDuringSchedulingIgnoredDuringExecution` node affinity type specifies that the rules defined for the node affinity are preferred but are not required during scheduling. If the preferred nodes are not available, the Pod might still be scheduled on other nodes. The `1` weight value indicates a weak preference. Node selection criteria are defined in the `preference` section. The `matchExpressions` section contains an array of expressions used to match nodes. The `another-node-label-key` key represents the key of the node label to match. The `In` operator means the node must have the key with one of the specified values. The `another-node-label-key` key must have the `another-node-label-value` value.\n\nThe example node affinity rule indicates a preference for scheduling the Pod on nodes that have the `another-node-label-key` label with the `another-node-label-value` value. The preference is weak so it's not a strong requirement.\n\nThe example combines the following:\n\n- Tolerations that allow the Pod to be scheduled on control plane nodes by tolerating the `NoSchedule` taint.\n- A node affinity that prefers nodes with a specific label but does not strictly require it; hence, it offers flexibility in scheduling.\n\nWhat's next\n-----------\n\n- [Set up AlloyDB Omni for production](/alloydb/omni/15.7.0/docs/configure-omni)"]]