[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-05。"],[[["\u003cp\u003eScheduling in AlloyDB Omni Kubernetes Operator involves matching database Pods to nodes based on criteria like CPU and memory, aiming to balance distribution and optimize performance.\u003c/p\u003e\n"],["\u003cp\u003eTolerations can be specified in the Kubernetes manifest's \u003ccode\u003eschedulingConfig\u003c/code\u003e section to allow AlloyDB Omni Pods to be scheduled on nodes with specific taints, using \u003ccode\u003ekey\u003c/code\u003e, \u003ccode\u003eoperator\u003c/code\u003e, \u003ccode\u003evalue\u003c/code\u003e, and \u003ccode\u003eeffect\u003c/code\u003e to define the matching criteria.\u003c/p\u003e\n"],["\u003cp\u003eNode affinity, defined within the \u003ccode\u003eschedulingConfig\u003c/code\u003e section of the manifest, allows setting rules for where Pods are placed, with options like \u003ccode\u003erequiredDuringSchedulingIgnoredDuringExecution\u003c/code\u003e for strict placement and \u003ccode\u003epreferredDuringSchedulingIgnoredDuringExecution\u003c/code\u003e for preferred but not required placement.\u003c/p\u003e\n"],["\u003cp\u003eNode affinity utilizes \u003ccode\u003ematchExpressions\u003c/code\u003e to match nodes based on labels and operators like \u003ccode\u003eIn\u003c/code\u003e, \u003ccode\u003eNotIn\u003c/code\u003e, \u003ccode\u003eExists\u003c/code\u003e, \u003ccode\u003eDoesNotExist\u003c/code\u003e, \u003ccode\u003eGt\u003c/code\u003e, or \u003ccode\u003eLt\u003c/code\u003e, and they also have a weight system (\u003ccode\u003eWAIT_VALUE\u003c/code\u003e) to indicate the preference level for matching nodes.\u003c/p\u003e\n"],["\u003cp\u003eAn example demonstrates combining tolerations (to allow control plane node scheduling) and node affinity (for preferred node selection), showcasing how to define flexible scheduling rules for primary and read pool instances.\u003c/p\u003e\n"]]],[],null,["# Assign nodes to a database cluster using scheduling\n\nSelect a documentation version: 15.5.4keyboard_arrow_down\n\n- [Current (16.8.0)](/alloydb/omni/current/docs/assign-nodes-cluster-scheduling)\n- [16.8.0](/alloydb/omni/16.8.0/docs/assign-nodes-cluster-scheduling)\n- [16.3.0](/alloydb/omni/16.3.0/docs/assign-nodes-cluster-scheduling)\n- [15.12.0](/alloydb/omni/15.12.0/docs/assign-nodes-cluster-scheduling)\n- [15.7.1](/alloydb/omni/15.7.1/docs/assign-nodes-cluster-scheduling)\n- [15.7.0](/alloydb/omni/15.7.0/docs/assign-nodes-cluster-scheduling)\n- [15.5.5](/alloydb/omni/15.5.5/docs/assign-nodes-cluster-scheduling)\n- [15.5.4](/alloydb/omni/15.5.4/docs/assign-nodes-cluster-scheduling)\n\n\u003cbr /\u003e\n\nIn [AlloyDB Omni Kubernetes Operator](/alloydb/omni/15.5.4/docs/deploy-kubernetes), *scheduling* is a process for matching new database Pods to nodes to balance node distribution across the cluster and help optimize performance. 
Specify tolerations
-------------------

To schedule your AlloyDB Omni Pods on nodes that are free of other application Pods, or on nodes that carry a specific *taint*, add one or more tolerations to the Pods as follows (a complete example follows these steps):

1. Modify the AlloyDB Omni Kubernetes Operator cluster's manifest to include a `tolerations` section in the `schedulingConfig` section of either of the following:

   - `primarySpec` for primary instances
   - `spec` for read pool instances

   ```
   tolerations:
   - key: "TAINT_KEY"
     operator: "OPERATOR_VALUE"
     value: "VALUE"
     effect: "TAINT_EFFECT"
   ```

   Replace the following:

   - `TAINT_KEY`: the existing unique name of the taint key, such as a node's hostname or another locally-inferred value, that the toleration applies to. The taint key must already be defined on a node. An empty key, combined with `OPERATOR_VALUE` set to `Exists`, means that the toleration matches all values and all keys.
   - `OPERATOR_VALUE`: represents a key's relationship to a set of values. Set the parameter to one of the following:
     - `Exists`: Kubernetes matches any taint with the specified key, regardless of the taint's value.
     - `Equal`: Kubernetes tolerates the taint only if the toleration's value equals the taint's value. This operator requires the `value` field to be set.
   - `VALUE`: the taint value that the toleration matches. If the operator is `Exists`, leave the value empty; otherwise, it is a regular string, for example, `true`.
   - `TAINT_EFFECT`: indicates the taint effect to match. An empty field means that all taint effects are matched. Set the parameter to one of the following:
     - `NoSchedule`: Kubernetes does not schedule new Pods on the tainted node.
     - `PreferNoSchedule`: Kubernetes avoids placing new Pods on the tainted node unless necessary.
     - `NoExecute`: Kubernetes evicts existing Pods that don't tolerate the taint.

2. Re-apply the manifest.
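For example, to tolerate the illustrative `dedicated=alloydb:NoSchedule` taint shown earlier on this page, you might add the following under the `schedulingConfig` section of `primarySpec` or `spec`:

```
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "alloydb"
  effect: "NoSchedule"
```

With this toleration in place, the instance's Pods can be scheduled on nodes reserved for database workloads, while Pods from other applications continue to be kept off those nodes.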
Define node affinity
--------------------

The Kubernetes scheduler uses node affinity as a set of rules to determine where to place a Pod. Node affinity is a more flexible and expressive version of node selectors.

To specify which nodes must be scheduled for running your database, follow these steps (a concrete sketch follows these steps):

1. Modify the database cluster manifest to include the `nodeaffinity` section after the `tolerations` section in the `schedulingConfig` section of either `primarySpec` for primary instances or `spec` for read pool instances:

   ```
   nodeaffinity:
     NODE_AFFINITY_TYPE:
     - weight: WEIGHT_VALUE
       preference:
         matchExpressions:
         - key: LABEL_KEY
           operator: OPERATOR_VALUE
           values:
           - LABEL_KEY_VALUE
   ```

   Replace the following:

   - `NODE_AFFINITY_TYPE`: Set the parameter to one of the following:
     - `requiredDuringSchedulingIgnoredDuringExecution`: Kubernetes schedules the Pod only on nodes that match the defined rules. Note that in standard Kubernetes affinity syntax, this type takes `nodeSelectorTerms` with `matchExpressions` instead of the `weight` and `preference` fields shown in the preceding template.
     - `preferredDuringSchedulingIgnoredDuringExecution`: the Kubernetes scheduler tries to find a node that meets the defined rule. If there is no such node, Kubernetes schedules the Pod to a different node in the cluster.
   - `WEIGHT_VALUE`: the preference weight for the specified nodes. Higher values indicate a stronger preference. Valid values are from `1` to `100`.
   - `LABEL_KEY`: the node label key that serves as a location indicator and facilitates even Pod distribution across the cluster, for example, `disktype=ssd`.
   - `OPERATOR_VALUE`: represents a key's relationship to a set of values. Set the parameter to one of the following:
     - `In`: the values array must be non-empty.
     - `NotIn`: the values array must be non-empty.
     - `Exists`: the values array must be empty.
     - `DoesNotExist`: the values array must be empty.
     - `Gt`: the values array must have a single element, which is interpreted as an integer.
     - `Lt`: the values array must have a single element, which is interpreted as an integer.
   - `LABEL_KEY_VALUE`: the value for your label key. Set the parameter to an array of string values as follows:
     - If the operator is `In` or `NotIn`, the array must be non-empty.
     - If the operator is `Exists` or `DoesNotExist`, the array must be empty.
     - If the operator is `Gt` or `Lt`, the array must have a single element, which is interpreted as an integer.

2. Reapply the manifest.
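As a sketch, the following `nodeaffinity` block expresses a preference, with a weight of `80`, for nodes carrying the illustrative `disktype=ssd` label mentioned above; Pods are still scheduled elsewhere if no such node is available:

```
nodeaffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 80
    preference:
      matchExpressions:
      - key: disktype
        operator: In
        values:
        - ssd
```

To make SSD-backed nodes mandatory instead of preferred, use `requiredDuringSchedulingIgnoredDuringExecution` with `nodeSelectorTerms`, as noted in the parameter descriptions above.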
Example
-------

The following example illustrates scheduling Pods in AlloyDB Omni Kubernetes Operator primary and read pool instances. This scheduling setup helps ensure that the primary instance of the database cluster is scheduled on appropriate nodes while allowing some flexibility in node selection. That flexibility can be useful for balancing load, optimizing resource usage, or adhering to specific node roles and characteristics.

```
schedulingconfig:
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  nodeaffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: another-node-label-key
          operator: In
          values:
          - another-node-label-value
```

The example toleration allows the Pod to be scheduled on nodes that are marked as control plane nodes because of the following details:

- The `node-role.kubernetes.io/control-plane` taint key indicates that the node is a control plane node.
- The `Exists` operator means that the toleration matches any taint with the specified taint key, regardless of the taint's value.
- The `NoSchedule` effect means that Pods aren't scheduled on the control plane node unless they have a matching toleration.

The `preferredDuringSchedulingIgnoredDuringExecution` node affinity type specifies that the defined rules are preferred but not required during scheduling. If the preferred nodes are not available, the Pod might still be scheduled on other nodes. The weight value of `1` indicates a weak preference. Node selection criteria are defined in the `preference` section: the `matchExpressions` array matches nodes whose `another-node-label-key` label has one of the listed values, in this case `another-node-label-value`.

The example node affinity rule therefore indicates a preference, rather than a strong requirement, for scheduling the Pod on nodes that have the `another-node-label-key` label set to `another-node-label-value`.

The example combines the following:

- Tolerations that allow the Pod to be scheduled on control plane nodes by tolerating the `NoSchedule` taint.
- A node affinity that prefers nodes with a specific label but does not strictly require it, which offers flexibility in scheduling.

What's next
-----------

- [Set up AlloyDB Omni for production](/alloydb/omni/15.5.4/docs/configure-omni)

Last updated 2025-09-05 UTC.