Resource: NodePool
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e., VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.

The node pool's upgrade settings control the level of parallelism and the level of disruption caused by an upgrade.
maxUnavailable controls the number of nodes that can be simultaneously unavailable.
maxSurge controls the number of additional nodes that can be temporarily added to the node pool during the upgrade to increase the number of available nodes.
(maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time).
Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. This holds true even if maxUnavailable=0. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.)
Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available.
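The arithmetic behind this example can be sketched in a few lines. The helper below is purely illustrative (it is not part of any GKE API) and encodes exactly the relationships stated above:

```python
def upgrade_parallelism(node_count: int, max_surge: int, max_unavailable: int):
    """Model how maxSurge/maxUnavailable bound a node pool upgrade.

    Returns (parallelism, min_available): how many nodes can be upgraded
    at the same time, and how many nodes stay available throughout.
    """
    parallelism = max_surge + max_unavailable
    min_available = node_count - max_unavailable
    return parallelism, min_available

# The worked example above: 5 nodes, maxSurge=2, maxUnavailable=1.
assert upgrade_parallelism(5, 2, 1) == (3, 4)
```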
JSON representation

{
  "name": string,
  "config": { object (NodeConfig) },
  "initialNodeCount": integer,
  "locations": [ string ],
  "networkConfig": { object (NodeNetworkConfig) },
  "selfLink": string,
  "version": string,
  "instanceGroupUrls": [ string ],
  "status": enum (Status),
  "statusMessage": string,
  "autoscaling": { object (NodePoolAutoscaling) },
  "management": { object (NodeManagement) },
  "maxPodsConstraint": { object (MaxPodsConstraint) },
  "conditions": [ { object (StatusCondition) } ],
  "podIpv4CidrSize": integer,
  "upgradeSettings": { object (UpgradeSettings) },
  "placementPolicy": { object (PlacementPolicy) },
  "updateInfo": { object (UpdateInfo) },
  "etag": string
}
Fields
name
The name of the node pool.
config
The node configuration of the pool.
initialNodeCount
The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota.
locations[]
The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used instead. Warning: changing node pool locations will result in nodes being added and/or removed.
networkConfig
Networking configuration for this NodePool. If specified, it overrides the cluster-level defaults.
selfLink
[Output only] Server-defined URL for the resource.
version
The version of Kubernetes running on this NodePool's nodes. If unspecified, it defaults as described here.
instanceGroupUrls[]
[Output only] The resource URLs of the managed instance groups associated with this node pool. During the node pool blue-green upgrade operation, the URLs contain both blue and green resources.
status
[Output only] The status of the nodes in this pool instance.
statusMessage
[Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
autoscaling
Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.
management
NodeManagement configuration for this NodePool.
maxPodsConstraint
The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool.
conditions[]
Which conditions caused the current node pool state.
podIpv4CidrSize
[Output only] The pod CIDR block size per node in this node pool.
upgradeSettings
Upgrade settings control disruption and speed of the upgrade.
placementPolicy
Specifies the node placement policy.
updateInfo
[Output only] Update info contains relevant information during a node pool update.
etag
This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding.
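To show how these fields fit together, here is a hypothetical request body for creating a small node pool, written as a Python dict mirroring the REST JSON. The pool name, machine type, and counts are illustrative assumptions, and NodeConfig is documented elsewhere:

```python
# Hypothetical NodePool creation payload; keys follow the resource
# schema above, values are illustrative only.
node_pool = {
    "name": "example-pool",                      # assumed pool name
    "initialNodeCount": 3,
    "config": {"machineType": "e2-standard-4"},  # NodeConfig fragment
    "autoscaling": {
        "enabled": True,
        "minNodeCount": 1,
        "maxNodeCount": 5,
    },
    "upgradeSettings": {"maxSurge": 1, "maxUnavailable": 0},
}
```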
NodeNetworkConfig
Parameters for node pool-level network config.
JSON representation

{
  "createPodRange": boolean,
  "podRange": string,
  "podIpv4CidrBlock": string,
  "podCidrOverprovisionConfig": { object (PodCIDROverprovisionConfig) },
  "enablePrivateNodes": boolean,
  "networkPerformanceConfig": { object (NetworkPerformanceConfig) }
}
Fields
createPodRange
Input only. Whether to create a new range for pod IPs in this node pool. Defaults are provided for podRange and podIpv4CidrBlock if they are not specified. If neither createPodRange nor podRange are specified, the cluster-level default is used. Only applicable if ipAllocationPolicy.useIpAliases is true. This field cannot be changed after the node pool has been created.
podRange
The ID of the secondary range for pod IPs. If createPodRange is true, this ID is used for the new range. If createPodRange is false, uses an existing secondary range with this ID. Only applicable if ipAllocationPolicy.useIpAliases is true. This field cannot be changed after the node pool has been created.
podIpv4CidrBlock
The IP address range for pod IPs in this node pool. Only applicable if createPodRange is true. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) to pick a specific range to use. Only applicable if ipAllocationPolicy.useIpAliases is true. This field cannot be changed after the node pool has been created.
podCidrOverprovisionConfig
[PRIVATE FIELD] Pod CIDR size overprovisioning config for the node pool. Pod CIDR size per node depends on maxPodsPerNode. By default, the value of maxPodsPerNode is rounded up to the next power of 2, and that is then doubled to get the size of the pod CIDR block per node. Example: a maxPodsPerNode of 30 results in 64 IPs (/26). This config can disable the doubling of IPs (the value is still rounded up to the next power of 2). Example: a maxPodsPerNode of 30 results in 32 IPs (/27) when overprovisioning is disabled.
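The sizing rule lends itself to a short worked sketch. The helper below is hypothetical (not part of any client library) and assumes the rounding and doubling behavior exactly as described:

```python
import math

def pod_cidr_size(max_pods_per_node: int, overprovision: bool = True) -> int:
    """Return the prefix length of the per-node pod CIDR block.

    Round maxPodsPerNode up to the next power of 2, double it unless
    overprovisioning is disabled, then convert the IP count to a
    prefix length (32 - log2(ips)).
    """
    ips = 2 ** math.ceil(math.log2(max_pods_per_node))
    if overprovision:
        ips *= 2
    return 32 - int(math.log2(ips))

assert pod_cidr_size(30) == 26         # 64 IPs -> /26, as in the example
assert pod_cidr_size(30, False) == 27  # 32 IPs -> /27 when disabled
```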
enablePrivateNodes
Whether nodes have internal IP addresses only. If enablePrivateNodes is not specified, the value is derived from cluster.privateClusterConfig.enablePrivateNodes.
networkPerformanceConfig
Network bandwidth tier configuration.
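A hypothetical networkConfig fragment tying these fields together, again as a Python dict mirroring the REST JSON; the netmask and tier choices are illustrative assumptions:

```python
# Hypothetical node pool networkConfig; values are illustrative.
network_config = {
    "createPodRange": True,
    "podIpv4CidrBlock": "/24",   # let a range be chosen with this netmask
    "enablePrivateNodes": True,
    "networkPerformanceConfig": {"totalEgressBandwidthTier": "TIER_1"},
}
```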
NetworkPerformanceConfig
Configuration of all network bandwidth tiers
JSON representation

{
  "totalEgressBandwidthTier": enum (Tier),
  "externalIpEgressBandwidthTier": enum (Tier)
}
Fields
totalEgressBandwidthTier
Specifies the total network bandwidth tier for the NodePool.
externalIpEgressBandwidthTier
Specifies the network bandwidth tier for the NodePool for traffic to external/public IP addresses.
Tier
Node network tier
Enums
TIER_UNSPECIFIED
Default value.
TIER_1
Higher bandwidth, actual values based on VM size.
Status
The current status of the node pool instance.
Enums
STATUS_UNSPECIFIED
Not set.
PROVISIONING
The PROVISIONING state indicates the node pool is being created.
RUNNING
The RUNNING state indicates the node pool has been created and is fully usable.
RUNNING_WITH_ERROR
The RUNNING_WITH_ERROR state indicates the node pool has been created and is partially usable. Some error state has occurred and some functionality may be impaired. Customer may need to reissue a request or trigger a new update.
RECONCILING
The RECONCILING state indicates that some work is actively being done on the node pool, such as upgrading node software. Details can be found in the statusMessage field.
STOPPING
The STOPPING state indicates the node pool is being deleted.
ERROR
The ERROR state indicates the node pool may be unusable. Details can be found in the statusMessage field.
NodePoolAutoscaling
NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.
JSON representation

{
  "enabled": boolean,
  "minNodeCount": integer,
  "maxNodeCount": integer,
  "autoprovisioned": boolean,
  "locationPolicy": enum (LocationPolicy),
  "totalMinNodeCount": integer,
  "totalMaxNodeCount": integer
}
Fields
enabled
Whether autoscaling is enabled for this node pool.
minNodeCount
Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= maxNodeCount.
maxNodeCount
Maximum number of nodes for one location in the NodePool. Must be >= minNodeCount. There has to be enough quota to scale up the cluster.
autoprovisioned
Whether this node pool can be deleted automatically.
locationPolicy
Location policy used when scaling up a node pool.
totalMinNodeCount
Minimum number of nodes in the node pool. Must be less than totalMaxNodeCount. The total_*_node_count fields are mutually exclusive with the *_node_count fields. An example of each style follows this table.
totalMaxNodeCount
Maximum number of nodes in the node pool. Must be greater than totalMinNodeCount. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
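To make the mutual exclusivity concrete, here are two hypothetical autoscaling payloads (all values illustrative). A single payload should use one style or the other, never both:

```python
# Per-location limits: each zone in the pool scales between 1 and 5 nodes.
per_location_autoscaling = {
    "enabled": True,
    "minNodeCount": 1,
    "maxNodeCount": 5,
    "locationPolicy": "BALANCED",
}

# Aggregate limits: the whole pool scales between 3 and 30 nodes,
# regardless of how nodes are spread across zones.
total_autoscaling = {
    "enabled": True,
    "totalMinNodeCount": 3,
    "totalMaxNodeCount": 30,
    "locationPolicy": "ANY",
}
```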
LocationPolicy
Location policy specifies how zones are picked when scaling up the nodepool.
Enums
LOCATION_POLICY_UNSPECIFIED
Not set.
BALANCED
BALANCED is a best-effort policy that aims to balance the sizes of different zones.
ANY
ANY policy picks zones that have the highest capacity available.
PlacementPolicy
PlacementPolicy defines the placement policy used by the node pool.
JSON representation

{
  "type": enum (Type)
}
Fields
type
The type of placement.
Type
Type defines the type of placement policy.
Enums
TYPE_UNSPECIFIED
TYPE_UNSPECIFIED specifies no requirements on node placement.
COMPACT
COMPACT specifies node placement in the same availability domain to ensure low communication latency.
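A placementPolicy fragment requesting compact placement would look like this minimal illustrative sketch:

```python
# Hypothetical placementPolicy for a latency-sensitive node pool.
placement_policy = {"type": "COMPACT"}
```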
UpdateInfo
UpdateInfo contains resources (instance groups, etc.), status, and other intermediate information relevant to a node pool upgrade.
JSON representation

{
  "blueGreenInfo": { object (BlueGreenInfo) }
}
Fields
blueGreenInfo
Information of a blue-green upgrade.
BlueGreenInfo
Information relevant to blue-green upgrade.
JSON representation

{
  "phase": enum (Phase),
  "blueInstanceGroupUrls": [ string ],
  "greenInstanceGroupUrls": [ string ],
  "bluePoolDeletionStartTime": string,
  "greenPoolVersion": string
}
Fields
phase
Current blue-green upgrade phase.
blueInstanceGroupUrls[]
The resource URLs of the managed instance groups associated with the blue pool.
greenInstanceGroupUrls[]
The resource URLs of the managed instance groups associated with the green pool.
bluePoolDeletionStartTime
Time to start deleting the blue pool to complete the blue-green upgrade, in RFC3339 text format.
greenPoolVersion
Version of the green pool.
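As a sketch of how a caller might inspect an in-flight upgrade, the snippet below assumes the google-cloud-container Python client (google.cloud.container_v1beta1); the project, location, cluster, and pool names are placeholders:

```python
from google.cloud import container_v1beta1

client = container_v1beta1.ClusterManagerClient()

# Placeholder resource name; substitute real project/location/cluster/pool.
name = "projects/my-project/locations/us-central1/clusters/my-cluster/nodePools/my-pool"

pool = client.get_node_pool(request={"name": name})
info = pool.update_info.blue_green_info
print("phase:", info.phase.name)
print("blue IGs:", list(info.blue_instance_group_urls))
print("green IGs:", list(info.green_instance_group_urls))
```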
Phase
Phase represents the different stages a blue-green upgrade runs through.
Enums
PHASE_UNSPECIFIED
Unspecified phase.
UPDATE_STARTED
Blue-green upgrade has been initiated.
CREATING_GREEN_POOL
Start creating green pool nodes.
CORDONING_BLUE_POOL
Start cordoning blue pool nodes.
DRAINING_BLUE_POOL
Start draining blue pool nodes.
NODE_POOL_SOAKING
Start soaking time after draining entire blue pool.
DELETING_BLUE_POOL
Start deleting blue nodes.
ROLLBACK_STARTED
Rollback has been initiated.
Methods
completeUpgrade
CompleteNodePoolUpgrade will signal an on-going node pool upgrade to complete.
create
Creates a node pool for a cluster.
delete
Deletes a node pool from a cluster.
get
Retrieves the requested node pool.
list
Lists the node pools for a cluster.
rollback
Rolls back a previously Aborted or Failed NodePool upgrade.
setAutoscaling
Sets the autoscaling settings of a specific node pool.
setManagement
Sets the NodeManagement options for a node pool.
setSize
SetNodePoolSizeRequest sets the size of a node pool.
update
Updates the version and/or image type of a specific node pool.
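For reference, this is roughly how a couple of these methods map onto the google-cloud-container Python client. The resource names are placeholders and this is a sketch under that assumption, not a complete program:

```python
from google.cloud import container_v1beta1

client = container_v1beta1.ClusterManagerClient()
parent = "projects/my-project/locations/us-central1/clusters/my-cluster"

# list: enumerate the cluster's node pools and their statuses.
for pool in client.list_node_pools(request={"parent": parent}).node_pools:
    print(pool.name, pool.status.name)

# setAutoscaling: apply the aggregate-limit config from earlier (illustrative).
client.set_node_pool_autoscaling(
    request={
        "name": f"{parent}/nodePools/my-pool",
        "autoscaling": {
            "enabled": True,
            "total_min_node_count": 3,
            "total_max_node_count": 30,
        },
    }
)
```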