- Resource: NodePool
- Methods
Resource: NodePool
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e., VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.
JSON representation

```
{
  "name": string,
  "config": { object (NodeConfig) },
  "initialNodeCount": integer,
  "locations": [ string ],
  "networkConfig": { object (NodeNetworkConfig) },
  "selfLink": string,
  "version": string,
  "instanceGroupUrls": [ string ],
  "status": enum (Status),
  "statusMessage": string,
  "autoscaling": { object (NodePoolAutoscaling) },
  "management": { object (NodeManagement) },
  "maxPodsConstraint": { object (MaxPodsConstraint) },
  "conditions": [ { object (StatusCondition) } ],
  "podIpv4CidrSize": integer,
  "upgradeSettings": { object (UpgradeSettings) },
  "placementPolicy": { object (PlacementPolicy) },
  "updateInfo": { object (UpdateInfo) },
  "etag": string,
  "queuedProvisioning": { object (QueuedProvisioning) },
  "bestEffortProvisioning": { object (BestEffortProvisioning) }
}
```
| Field | Description |
| --- | --- |
| `name` | The name of the node pool. |
| `config` | The node configuration of the pool. |
| `initialNodeCount` | The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. |
| `locations[]` | The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used instead. Warning: changing node pool locations will result in nodes being added and/or removed. |
| `networkConfig` | Networking configuration for this NodePool. If specified, it overrides the cluster-level defaults. |
| `selfLink` | Output only. Server-defined URL for the resource. |
| `version` | The version of Kubernetes running on this NodePool's nodes. If unspecified, it defaults as described here. |
| `instanceGroupUrls[]` | Output only. The resource URLs of the managed instance groups associated with this node pool. During a node pool blue-green upgrade operation, the URLs contain both blue and green resources. |
| `status` | Output only. The status of the nodes in this pool instance. |
| `statusMessage` | Output only. Deprecated. Use `conditions` instead. Additional information about the current status of this node pool instance, if available. |
| `autoscaling` | Autoscaler configuration for this NodePool. The autoscaler is enabled only if a valid configuration is present. |
| `management` | NodeManagement configuration for this NodePool. |
| `maxPodsConstraint` | The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool. |
| `conditions[]` | Which conditions caused the current node pool state. |
| `podIpv4CidrSize` | Output only. The pod CIDR block size per node in this node pool. |
| `upgradeSettings` | Upgrade settings control disruption and speed of the upgrade. |
| `placementPolicy` | Specifies the node placement policy. |
| `updateInfo` | Output only. Update info contains relevant information during a node pool update. |
| `etag` | This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding. |
| `queuedProvisioning` | Specifies the configuration of queued provisioning. |
| `bestEffortProvisioning` | Enable best effort provisioning for nodes. |
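As a concrete illustration, a minimal NodePool request body using a handful of the fields above might look like this. The field names follow the JSON representation; the values are hypothetical, not defaults:

```python
import json

# Hypothetical minimal NodePool body; values are made up for illustration.
node_pool = {
    "name": "example-pool",
    "initialNodeCount": 3,
    "locations": ["us-central1-a", "us-central1-b"],
    "autoscaling": {"enabled": True, "minNodeCount": 1, "maxNodeCount": 5},
}
print(json.dumps(node_pool, indent=2))
```

Output-only fields such as `selfLink`, `status`, and `instanceGroupUrls` are set by the server and would not appear in a request body.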
NodeNetworkConfig
Parameters for node pool-level network config.
JSON representation

```
{
  "createPodRange": boolean,
  "podRange": string,
  "podIpv4CidrBlock": string,
  "podCidrOverprovisionConfig": { object (PodCIDROverprovisionConfig) },
  "additionalNodeNetworkConfigs": [ { object (AdditionalNodeNetworkConfig) } ],
  "additionalPodNetworkConfigs": [ { object (AdditionalPodNetworkConfig) } ],
  "podIpv4RangeUtilization": number,
  "enablePrivateNodes": boolean,
  "networkPerformanceConfig": { object (NetworkPerformanceConfig) }
}
```
| Field | Description |
| --- | --- |
| `createPodRange` | Input only. Whether to create a new range for pod IPs in this node pool. Defaults are provided for `podRange` and `podIpv4CidrBlock` if they are not specified. If neither `createPodRange` nor `podRange` are specified, the cluster-level default is used. Only applicable if `ipAllocationPolicy.useIpAliases` is true. This field cannot be changed after the node pool has been created. |
| `podRange` | The ID of the secondary range for pod IPs. If `createPodRange` is true, this ID is used for the new range. Only applicable if `ipAllocationPolicy.useIpAliases` is true. This field cannot be changed after the node pool has been created. |
| `podIpv4CidrBlock` | The IP address range for pod IPs in this node pool. Only applicable if `createPodRange` is true. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. `/14`) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. `10.96.0.0/14`) to pick a specific range to use. Only applicable if `ipAllocationPolicy.useIpAliases` is true. This field cannot be changed after the node pool has been created. |
| `podCidrOverprovisionConfig` | [PRIVATE FIELD] Pod CIDR size overprovisioning config for the node pool. Pod CIDR size per node depends on `maxPodsPerNode`. By default, the value of `maxPodsPerNode` is rounded up to the next power of 2, which is then doubled to get the size of the pod CIDR block per node. Example: a `maxPodsPerNode` of 30 results in 64 IPs (/26). This config can disable the doubling of IPs (the value is still rounded up to the next power of 2). Example: a `maxPodsPerNode` of 30 results in 32 IPs (/27) when overprovisioning is disabled. |
| `additionalNodeNetworkConfigs[]` | We specify the additional node networks for this node pool using this list. Each node network corresponds to an additional interface. |
| `additionalPodNetworkConfigs[]` | We specify the additional pod networks for this node pool using this list. Each pod network corresponds to an additional alias IP range for the node. |
| `podIpv4RangeUtilization` | Output only. The utilization of the IPv4 range for the pod. The ratio is Usage / [Total number of IPs in the secondary range], where Usage = numNodes * numZones * podIPsPerNode. |
| `enablePrivateNodes` | Whether nodes have internal IP addresses only. If `enablePrivateNodes` is not specified, the value is derived from [Cluster.NetworkConfig.default_enable_private_nodes][]. |
| `networkPerformanceConfig` | Network bandwidth tier configuration. |
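The overprovisioning rule for `podCidrOverprovisionConfig` and the `podIpv4RangeUtilization` ratio described above can be sketched in a few lines of Python (the function names are ours, not part of the API):

```python
import math

def pod_cidr_netmask_per_node(max_pods_per_node: int, overprovision: bool = True) -> int:
    """Netmask length of the per-node pod CIDR block, following the
    rounding rule described for podCidrOverprovisionConfig."""
    # Round max_pods_per_node up to the next power of 2 ...
    ips = 1 << math.ceil(math.log2(max_pods_per_node))
    # ... and double it unless overprovisioning is disabled.
    if overprovision:
        ips *= 2
    return 32 - int(math.log2(ips))  # IPv4 netmask length for that many IPs

def pod_range_utilization(num_nodes: int, num_zones: int,
                          pod_ips_per_node: int, secondary_range_ips: int) -> float:
    """podIpv4RangeUtilization: Usage / total IPs in the secondary range,
    with Usage = numNodes * numZones * podIPsPerNode."""
    return (num_nodes * num_zones * pod_ips_per_node) / secondary_range_ips

# maxPodsPerNode of 30 -> 64 IPs (/26) by default, 32 IPs (/27) when disabled
print(pod_cidr_netmask_per_node(30))         # 26
print(pod_cidr_netmask_per_node(30, False))  # 27
```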
NetworkPerformanceConfig
Configuration of all network bandwidth tiers
JSON representation

```
{
  "totalEgressBandwidthTier": enum (Tier)
}
```
| Field | Description |
| --- | --- |
| `totalEgressBandwidthTier` | Specifies the total network bandwidth tier for the NodePool. |
Tier
Node network tier
| Enum | Description |
| --- | --- |
| `TIER_UNSPECIFIED` | Default value. |
| `TIER_1` | Higher bandwidth, actual values based on VM size. |
AdditionalNodeNetworkConfig
AdditionalNodeNetworkConfig is the configuration for additional node networks within the NodeNetworkConfig message.
JSON representation

```
{
  "network": string,
  "subnetwork": string
}
```
| Field | Description |
| --- | --- |
| `network` | Name of the VPC where the additional interface belongs. |
| `subnetwork` | Name of the subnetwork where the additional interface belongs. |
AdditionalPodNetworkConfig
AdditionalPodNetworkConfig is the configuration for additional pod networks within the NodeNetworkConfig message.
JSON representation

```
{
  "subnetwork": string,
  "secondaryPodRange": string,
  "networkAttachment": string,
  "maxPodsPerNode": { object (MaxPodsConstraint) }
}
```
| Field | Description |
| --- | --- |
| `subnetwork` | Name of the subnetwork where the additional pod network belongs. |
| `secondaryPodRange` | The name of the secondary range on the subnet which provides IP addresses for this pod range. |
| `networkAttachment` | The name of the network attachment for pods to communicate to; cannot be specified along with `subnetwork` or `secondaryPodRange`. |
| `maxPodsPerNode` | The maximum number of pods per node which use this pod network. |
Status
The current status of the node pool instance.
| Enum | Description |
| --- | --- |
| `STATUS_UNSPECIFIED` | Not set. |
| `PROVISIONING` | The PROVISIONING state indicates the node pool is being created. |
| `RUNNING` | The RUNNING state indicates the node pool has been created and is fully usable. |
| `RUNNING_WITH_ERROR` | The RUNNING_WITH_ERROR state indicates the node pool has been created and is partially usable. Some error state has occurred and some functionality may be impaired. The customer may need to reissue a request or trigger a new update. |
| `RECONCILING` | The RECONCILING state indicates that some work is actively being done on the node pool, such as upgrading node software. Details can be found in the `statusMessage` field. |
| `STOPPING` | The STOPPING state indicates the node pool is being deleted. |
| `ERROR` | The ERROR state indicates the node pool may be unusable. Details can be found in the `statusMessage` field. |
NodePoolAutoscaling
NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.
JSON representation

```
{
  "enabled": boolean,
  "minNodeCount": integer,
  "maxNodeCount": integer,
  "autoprovisioned": boolean,
  "locationPolicy": enum (LocationPolicy),
  "totalMinNodeCount": integer,
  "totalMaxNodeCount": integer
}
```
| Field | Description |
| --- | --- |
| `enabled` | Is autoscaling enabled for this node pool. |
| `minNodeCount` | Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to `maxNodeCount`. |
| `maxNodeCount` | Maximum number of nodes for one location in the node pool. Must be >= `minNodeCount`. There has to be enough quota to scale up the cluster. |
| `autoprovisioned` | Can this node pool be deleted automatically. |
| `locationPolicy` | Location policy used when scaling up a node pool. |
| `totalMinNodeCount` | Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to `totalMaxNodeCount`. The `total_*_node_count` fields are mutually exclusive with the `*_node_count` fields. |
| `totalMaxNodeCount` | Maximum number of nodes in the node pool. Must be greater than or equal to `totalMinNodeCount`. There has to be enough quota to scale up the cluster. The `total_*_node_count` fields are mutually exclusive with the `*_node_count` fields. |
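The constraints above (ordering of min/max and the mutual exclusivity of the per-location and total counts) can be sketched as a small client-side check. This is illustrative only, not the service's actual validation logic; field names mirror the JSON representation:

```python
def validate_autoscaling(spec: dict) -> None:
    """Client-side sketch of the NodePoolAutoscaling constraints."""
    per_zone = {"minNodeCount", "maxNodeCount"} & spec.keys()
    total = {"totalMinNodeCount", "totalMaxNodeCount"} & spec.keys()
    # total_*_node_count fields are mutually exclusive with *_node_count fields.
    if per_zone and total:
        raise ValueError("total_*_node_count and *_node_count are mutually exclusive")
    lo = spec.get("minNodeCount", spec.get("totalMinNodeCount", 0))
    hi = spec.get("maxNodeCount", spec.get("totalMaxNodeCount", 0))
    # Minimum must satisfy 0 <= min <= max.
    if lo < 0 or lo > hi:
        raise ValueError("min node count must satisfy 0 <= min <= max")

validate_autoscaling({"enabled": True, "minNodeCount": 1, "maxNodeCount": 5})  # OK
```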
LocationPolicy
Location policy specifies how zones are picked when scaling up the nodepool.
| Enum | Description |
| --- | --- |
| `LOCATION_POLICY_UNSPECIFIED` | Not set. |
| `BALANCED` | BALANCED is a best effort policy that aims to balance the sizes of different zones. |
| `ANY` | ANY policy picks zones that have the highest capacity available. |
PlacementPolicy
PlacementPolicy defines the placement policy used by the node pool.
JSON representation

```
{
  "type": enum (Type),
  "tpuTopology": string,
  "policyName": string
}
```
| Field | Description |
| --- | --- |
| `type` | The type of placement. |
| `tpuTopology` | Optional. TPU placement topology for pod slice node pool. See https://cloud.google.com/tpu/docs/types-topologies#tpu_topologies. |
| `policyName` | If set, refers to the name of a custom resource policy supplied by the user. The resource policy must be in the same project and region as the node pool. If not found, an InvalidArgument error is returned. |
Type
Type defines the type of placement policy.
| Enum | Description |
| --- | --- |
| `TYPE_UNSPECIFIED` | TYPE_UNSPECIFIED specifies no requirements on nodes placement. |
| `COMPACT` | COMPACT specifies node placement in the same availability domain to ensure low communication latency. |
UpdateInfo
UpdateInfo contains resources (instance groups, etc.), status, and other intermediate information relevant to a node pool upgrade.
JSON representation

```
{
  "blueGreenInfo": { object (BlueGreenInfo) }
}
```
| Field | Description |
| --- | --- |
| `blueGreenInfo` | Information of a blue-green upgrade. |
BlueGreenInfo
Information relevant to blue-green upgrade.
JSON representation

```
{
  "phase": enum (Phase),
  "blueInstanceGroupUrls": [ string ],
  "greenInstanceGroupUrls": [ string ],
  "bluePoolDeletionStartTime": string,
  "greenPoolVersion": string
}
```
| Field | Description |
| --- | --- |
| `phase` | Current blue-green upgrade phase. |
| `blueInstanceGroupUrls[]` | The resource URLs of the managed instance groups associated with the blue pool. |
| `greenInstanceGroupUrls[]` | The resource URLs of the managed instance groups associated with the green pool. |
| `bluePoolDeletionStartTime` | Time to start deleting the blue pool to complete the blue-green upgrade, in RFC 3339 text format. |
| `greenPoolVersion` | Version of the green pool. |
Phase
Phase represents the different stages a blue-green upgrade runs through.
| Enum | Description |
| --- | --- |
| `PHASE_UNSPECIFIED` | Unspecified phase. |
| `UPDATE_STARTED` | Blue-green upgrade has been initiated. |
| `CREATING_GREEN_POOL` | Start creating green pool nodes. |
| `CORDONING_BLUE_POOL` | Start cordoning blue pool nodes. |
| `DRAINING_BLUE_POOL` | Start draining blue pool nodes. |
| `NODE_POOL_SOAKING` | Start soaking time after draining entire blue pool. |
| `DELETING_BLUE_POOL` | Start deleting blue nodes. |
| `ROLLBACK_STARTED` | Rollback has been initiated. |
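The forward phases above occur in the order listed, which a client can use to reason about upgrade progress. A minimal sketch (an illustrative mirror of the enum, not the API's own types; `PHASE_UNSPECIFIED` and `ROLLBACK_STARTED` sit outside the normal forward sequence):

```python
from enum import Enum

class Phase(Enum):
    """Forward blue-green upgrade phases, in the order listed above."""
    UPDATE_STARTED = 1
    CREATING_GREEN_POOL = 2
    CORDONING_BLUE_POOL = 3
    DRAINING_BLUE_POOL = 4
    NODE_POOL_SOAKING = 5
    DELETING_BLUE_POOL = 6

def phases_remaining(current: Phase) -> list[Phase]:
    """Phases that still lie ahead of an upgrade currently in `current`."""
    return [p for p in Phase if p.value > current.value]

print([p.name for p in phases_remaining(Phase.DRAINING_BLUE_POOL)])
```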
QueuedProvisioning
QueuedProvisioning defines the queued provisioning used by the node pool.
JSON representation

```
{
  "enabled": boolean
}
```
| Field | Description |
| --- | --- |
| `enabled` | Denotes that this node pool is QRM specific, meaning nodes can only be obtained through queuing via the Cluster Autoscaler ProvisioningRequest API. |
BestEffortProvisioning
Best effort provisioning.
JSON representation

```
{
  "enabled": boolean,
  "minProvisionNodes": integer
}
```
| Field | Description |
| --- | --- |
| `enabled` | When this is enabled, cluster/node pool creations will ignore non-fatal errors like stockouts to best-effort provision as many nodes as possible right away, and will eventually bring up the full target number of nodes. |
| `minProvisionNodes` | Minimum number of nodes to be provisioned for the operation to be considered successful; the rest of the nodes will be provisioned gradually once the stockout issue has been resolved. |
Methods

| Method | Description |
| --- | --- |
| `completeUpgrade` | CompleteNodePoolUpgrade will signal an on-going node pool upgrade to complete. |
| `create` | Creates a node pool for a cluster. |
| `delete` | Deletes a node pool from a cluster. |
| `get` | Retrieves the requested node pool. |
| `list` | Lists the node pools for a cluster. |
| `rollback` | Rolls back a previously Aborted or Failed NodePool upgrade. |
| `setAutoscaling` | Sets the autoscaling settings for the specified node pool. |
| `setManagement` | Sets the NodeManagement options for a node pool. |
| `setSize` | Sets the size for a specific node pool. |
| `update` | Updates the version and/or image type for the specified node pool. |
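These methods address a node pool by its full resource name. A minimal helper to build that name (the path format is assumed from the v1 location-based API; the function name is ours):

```python
def node_pool_name(project: str, location: str, cluster: str, node_pool: str) -> str:
    """Build the resource name used by nodePools methods such as get,
    delete, and completeUpgrade (format assumed from the v1 API)."""
    return (f"projects/{project}/locations/{location}"
            f"/clusters/{cluster}/nodePools/{node_pool}")

print(node_pool_name("my-project", "us-central1", "my-cluster", "default-pool"))
```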