Class NodePool (2.36.0rc0)

NodePool(mapping=None, *, ignore_unknown_fields=False, **kwargs)

NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e., VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.
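The constructor signature above follows the proto-plus convention: fields may be supplied either as a `mapping` (a dict) or as keyword arguments. The stand-in class below is purely illustrative (it is not the real `NodePool` type, and the exact handling of combined arguments or `ignore_unknown_fields` may differ in proto-plus); it only sketches the calling pattern.

```python
class NodePoolSketch:
    """Illustrative stand-in (NOT the real class) for the proto-plus
    constructor convention: fields come either from a `mapping` dict or
    from keyword arguments."""

    _fields = ("name", "initial_node_count", "version")

    def __init__(self, mapping=None, *, ignore_unknown_fields=False, **kwargs):
        # Use the mapping when given; otherwise fall back to keyword arguments.
        data = dict(mapping) if mapping is not None else kwargs
        for key, value in data.items():
            if key not in self._fields:
                if ignore_unknown_fields:
                    continue  # drop unknown keys when asked to
                raise ValueError(f"Unknown field: {key}")
            setattr(self, key, value)

# Both spellings populate the same fields:
a = NodePoolSketch({"name": "pool-1", "initial_node_count": 3})
b = NodePoolSketch(name="pool-1", initial_node_count=3)
```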

Attributes

Name | Description
name str
The name of the node pool.
config google.cloud.container_v1beta1.types.NodeConfig
The node configuration of the pool.
initial_node_count int
The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota.
locations MutableSequence[str]
The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used instead. Warning: changing node pool locations will result in nodes being added and/or removed.
network_config google.cloud.container_v1beta1.types.NodeNetworkConfig
Networking configuration for this NodePool. If specified, it overrides the cluster-level defaults.
self_link str
[Output only] Server-defined URL for the resource.
version str
The version of Kubernetes running on this NodePool's nodes. If unspecified, the server assigns a default version as described in the GKE versioning documentation.
instance_group_urls MutableSequence[str]
[Output only] The resource URLs of the managed instance groups associated with this node pool.
status google.cloud.container_v1beta1.types.NodePool.Status
[Output only] The status of the nodes in this pool instance.
status_message str
[Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
autoscaling google.cloud.container_v1beta1.types.NodePoolAutoscaling
Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.
management google.cloud.container_v1beta1.types.NodeManagement
NodeManagement configuration for this NodePool.
max_pods_constraint google.cloud.container_v1beta1.types.MaxPodsConstraint
The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool.
conditions MutableSequence[google.cloud.container_v1beta1.types.StatusCondition]
Which conditions caused the current node pool state.
pod_ipv4_cidr_size int
[Output only] The pod CIDR block size per node in this node pool.
upgrade_settings google.cloud.container_v1beta1.types.NodePool.UpgradeSettings
Upgrade settings control disruption and speed of the upgrade.
placement_policy google.cloud.container_v1beta1.types.NodePool.PlacementPolicy
Specifies the node placement policy.
update_info google.cloud.container_v1beta1.types.NodePool.UpdateInfo
Output only. Update info contains relevant information during a node pool update.
etag str
This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding.
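The `etag` field enables optimistic concurrency control: a client reads the node pool, holds on to the checksum, and sends it back with an update so the server can reject stale writes. The sketch below illustrates that pattern with hypothetical helper names; the server's actual checksum algorithm is unspecified, so SHA-256 over serialized fields merely stands in for it here.

```python
import hashlib
import json

def compute_etag(fields: dict) -> str:
    """Stand-in checksum over the node pool's fields (not the real algorithm)."""
    payload = json.dumps(fields, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def guarded_update(stored: dict, client_etag: str, changes: dict) -> dict:
    """Apply `changes` only if the client read the latest version."""
    if compute_etag(stored) != client_etag:
        raise RuntimeError("etag mismatch: re-read the node pool and retry")
    return {**stored, **changes}

pool = {"name": "pool-1", "initial_node_count": 3}
etag = compute_etag(pool)                                      # client reads current state
pool2 = guarded_update(pool, etag, {"initial_node_count": 5})  # up-to-date etag succeeds
```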
queued_provisioning google.cloud.container_v1beta1.types.NodePool.QueuedProvisioning
Specifies the configuration of queued provisioning.
best_effort_provisioning google.cloud.container_v1beta1.types.BestEffortProvisioning
Enable best effort provisioning for nodes.

Classes

PlacementPolicy

PlacementPolicy(mapping=None, *, ignore_unknown_fields=False, **kwargs)

PlacementPolicy defines the placement policy used by the node pool.

QueuedProvisioning

QueuedProvisioning(mapping=None, *, ignore_unknown_fields=False, **kwargs)

QueuedProvisioning defines the queued provisioning used by the node pool.

Status

Status(value)

The current status of the node pool instance.

Values:

  STATUS_UNSPECIFIED (0): Not set.
  PROVISIONING (1): The PROVISIONING state indicates the node pool is being created.
  RUNNING (2): The RUNNING state indicates the node pool has been created and is fully usable.
  RUNNING_WITH_ERROR (3): The RUNNING_WITH_ERROR state indicates the node pool has been created and is partially usable. Some error state has occurred and some functionality may be impaired. Customer may need to reissue a request or trigger a new update.
  RECONCILING (4): The RECONCILING state indicates that some work is actively being done on the node pool, such as upgrading node software. Details can be found in the statusMessage field.
  STOPPING (5): The STOPPING state indicates the node pool is being deleted.
  ERROR (6): The ERROR state indicates the node pool may be unusable. Details can be found in the statusMessage field.
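For illustration, the status values can be modeled as a Python `IntEnum` (the real type is a proto enum, `google.cloud.container_v1beta1.types.NodePool.Status`, so this is a sketch rather than the library's definition):

```python
import enum

class Status(enum.IntEnum):
    """Stand-in mirroring the NodePool status values listed above."""
    STATUS_UNSPECIFIED = 0
    PROVISIONING = 1
    RUNNING = 2
    RUNNING_WITH_ERROR = 3
    RECONCILING = 4
    STOPPING = 5
    ERROR = 6

# States in which the pool is at least partially usable, per the descriptions above.
USABLE = {Status.RUNNING, Status.RUNNING_WITH_ERROR}
```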

UpdateInfo

UpdateInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)

UpdateInfo contains resource (instance groups, etc), status and other intermediate information relevant to a node pool upgrade.

UpgradeSettings

UpgradeSettings(mapping=None, *, ignore_unknown_fields=False, **kwargs)

These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade.

maxUnavailable controls the number of nodes that can be simultaneously unavailable.

maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes.

(maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time).

Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.)

Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available.
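The arithmetic in this example can be sketched as two small helpers (assumed names, not part of the library):

```python
def surge_parallelism(max_surge: int, max_unavailable: int) -> int:
    """How many nodes can be upgraded at the same time."""
    return max_surge + max_unavailable

def min_available(pool_size: int, max_unavailable: int) -> int:
    """Guaranteed lower bound on available nodes during the upgrade."""
    return pool_size - max_unavailable

# The hypothetical 5-node pool above with maxSurge=2, maxUnavailable=1:
parallel = surge_parallelism(2, 1)   # 3 nodes upgraded simultaneously
floor = min_available(5, 1)          # at least 4 nodes stay available
```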

These upgrade settings configure the upgrade strategy for the node pool. Use strategy to switch between the strategies applied to the node pool.

If the strategy is SURGE, use max_surge and max_unavailable to control the level of parallelism and the level of disruption caused by upgrade.

  1. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes.
  2. maxUnavailable controls the number of nodes that can be simultaneously unavailable.
  3. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time).

If the strategy is BLUE_GREEN, use blue_green_settings to configure the blue-green upgrade related settings.

  1. standard_rollout_policy is the default policy. The policy is used to control the way blue pool gets drained. The draining is executed in the batch mode. The batch size could be specified as either percentage of the node pool size or the number of nodes. batch_soak_duration is the soak time after each batch gets drained.
  2. node_pool_soak_duration is the soak time after all blue nodes are drained. After this period, the blue pool nodes will be deleted.
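The batch-mode drain described above can be sketched as follows. The helper names and the rounding behavior are assumptions for illustration; the source only states that the batch size is given either as a node count or as a percentage of the pool size.

```python
import math

def resolve_batch_size(pool_size: int, *, batch_node_count=None, batch_percentage=None) -> int:
    """Resolve a standard_rollout_policy-style batch size (illustrative only)."""
    if batch_node_count is not None:
        return min(batch_node_count, pool_size)
    # Percentage given as a fraction in (0, 1]; round up so each batch makes progress.
    return max(1, math.ceil(pool_size * batch_percentage))

def drain_batches(pool_size: int, batch_size: int):
    """Yield how many blue-pool nodes are drained in each batch."""
    remaining = pool_size
    while remaining > 0:
        step = min(batch_size, remaining)
        yield step
        remaining -= step
```

After each yielded batch, the policy waits `batch_soak_duration`; once all blue nodes are drained, `node_pool_soak_duration` applies before the blue pool is deleted.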
