NodePool(mapping=None, *, ignore_unknown_fields=False, **kwargs)
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e., VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.
Attributes

| Name | Type | Description |
|---|---|---|
| `name` | `str` | The name of the node pool. |
| `config` | `google.cloud.container_v1.types.NodeConfig` | The node configuration of the pool. |
| `initial_node_count` | `int` | The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. |
| `locations` | `Sequence[str]` | The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used instead. Warning: changing node pool locations will result in nodes being added and/or removed. |
| `self_link` | `str` | [Output only] Server-defined URL for the resource. |
| `version` | `str` | The version of Kubernetes running on the nodes in this pool. |
| `instance_group_urls` | `Sequence[str]` | [Output only] The resource URLs of the managed instance groups associated with this node pool. |
| `status` | `google.cloud.container_v1.types.NodePool.Status` | [Output only] The status of the nodes in this pool instance. |
| `status_message` | `str` | [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available. |
| `autoscaling` | `google.cloud.container_v1.types.NodePoolAutoscaling` | Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present. |
| `management` | `google.cloud.container_v1.types.NodeManagement` | NodeManagement configuration for this NodePool. |
| `max_pods_constraint` | `google.cloud.container_v1.types.MaxPodsConstraint` | The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool. |
| `conditions` | `Sequence[google.cloud.container_v1.types.StatusCondition]` | Which conditions caused the current node pool state. |
| `pod_ipv4_cidr_size` | `int` | [Output only] The pod CIDR block size per node in this node pool. |
| `upgrade_settings` | `google.cloud.container_v1.types.NodePool.UpgradeSettings` | Upgrade settings control disruption and speed of the upgrade. |
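The following is an illustrative sketch, not part of the generated reference: it shows how a few of the attributes above map onto constructor keyword arguments, and how the `mapping` parameter in the signature accepts an equivalent plain dict. The pool name, zones, machine type, and autoscaling bounds are placeholder values.

```python
from google.cloud import container_v1

# Field-by-field construction; all values here are illustrative.
pool = container_v1.NodePool(
    name="default-pool",
    initial_node_count=3,
    locations=["us-central1-a", "us-central1-b"],
    config=container_v1.NodeConfig(
        machine_type="e2-standard-4",
        disk_size_gb=100,
    ),
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True,
        min_node_count=1,
        max_node_count=5,
    ),
)

# Equivalent construction from a dict via the `mapping` parameter.
same_pool = container_v1.NodePool(
    mapping={"name": "default-pool", "initial_node_count": 3}
)
```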
Classes
Status
Status(value)
The current status of the node pool instance.
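As a small sketch, a pool's `status` field can be compared against this enum; the helper below is hypothetical, and `pool` is assumed to be a NodePool returned by the API (for example, from `ClusterManagerClient.get_node_pool`).

```python
from google.cloud import container_v1

def is_ready(pool: container_v1.NodePool) -> bool:
    # RUNNING indicates the nodes in the pool are serving.
    return pool.status == container_v1.NodePool.Status.RUNNING
```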
UpgradeSettings
UpgradeSettings(mapping=None, *, ignore_unknown_fields=False, **kwargs)
These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade.
maxUnavailable controls the number of nodes that can be simultaneously unavailable.
maxSurge controls the number of additional nodes that can be temporarily added to the node pool during the upgrade, increasing the number of available nodes.
(maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time).
Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.)
Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available.
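As a sketch of that hypothetical scenario (the pool name is a placeholder), the settings translate into the following message:

```python
from google.cloud import container_v1

settings = container_v1.NodePool.UpgradeSettings(
    max_surge=2,        # up to 2 temporary extra (upgraded) nodes
    max_unavailable=1,  # at most 1 existing node down at a time
)

# Parallelism = max_surge + max_unavailable = 3 nodes upgraded at once;
# with 5 nodes, availability never drops below 5 + 2 - 3 = 4 nodes.
pool = container_v1.NodePool(
    name="default-pool",
    upgrade_settings=settings,
)
```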