- Resource: NodePool
- Methods
Resource: NodePool
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.
JSON representation
{
  "name": string,
  "config": { object (NodeConfig) },
  "initialNodeCount": integer,
  "locations": [ string ],
  "networkConfig": { object (NodeNetworkConfig) },
  "selfLink": string,
  "version": string,
  "instanceGroupUrls": [ string ],
  "status": enum (Status),
  "statusMessage": string,
  "autoscaling": { object (NodePoolAutoscaling) },
  "management": { object (NodeManagement) },
  "maxPodsConstraint": { object (MaxPodsConstraint) },
  "conditions": [ { object (StatusCondition) } ],
  "podIpv4CidrSize": integer,
  "upgradeSettings": { object (UpgradeSettings) },
  "placementPolicy": { object (PlacementPolicy) },
  "updateInfo": { object (UpdateInfo) },
  "etag": string,
  "queuedProvisioning": { object (QueuedProvisioning) },
  "bestEffortProvisioning": { object (BestEffortProvisioning) }
}
| Field | Description |
| --- | --- |
| name | The name of the node pool. |
| config | The node configuration of the pool. |
| initialNodeCount | The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. |
| locations[] | The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value is used instead. Warning: changing node pool locations results in nodes being added and/or removed. |
| networkConfig | Networking configuration for this NodePool. If specified, it overrides the cluster-level defaults. |
| selfLink | Output only. Server-defined URL for the resource. |
| version | The version of Kubernetes running on this NodePool's nodes. If unspecified, the server assigns a default version. |
| instanceGroupUrls[] | Output only. The resource URLs of the managed instance groups associated with this node pool. During a node pool blue-green upgrade operation, the URLs contain both blue and green resources. |
| status | Output only. The status of the nodes in this pool instance. |
| statusMessage | Output only. Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available. |
| autoscaling | Autoscaler configuration for this NodePool. The autoscaler is enabled only if a valid configuration is present. |
| management | NodeManagement configuration for this NodePool. |
| maxPodsConstraint | The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool. |
| conditions[] | Which conditions caused the current node pool state. |
| podIpv4CidrSize | Output only. The pod CIDR block size per node in this node pool. |
| upgradeSettings | Upgrade settings control disruption and speed of the upgrade. |
| placementPolicy | Specifies the node placement policy. |
| updateInfo | Output only. Update info contains relevant information during a node pool update. |
| etag | This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding. |
| queuedProvisioning | Specifies the configuration of queued provisioning. |
| bestEffortProvisioning | Enable best effort provisioning for nodes. |
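To illustrate how these fields fit together, here is a minimal sketch of a NodePool message as it might appear in a create request. The pool name, machine type, disk size, zones, and node counts are hypothetical placeholders, and the NodeConfig fields inside config are assumptions not documented in this section; output-only fields such as selfLink, status, and instanceGroupUrls are omitted because the server populates them.
{
  "name": "example-pool",
  "config": {
    "machineType": "e2-standard-4",
    "diskSizeGb": 100
  },
  "initialNodeCount": 3,
  "locations": ["us-central1-a", "us-central1-b"],
  "autoscaling": {
    "enabled": true,
    "minNodeCount": 1,
    "maxNodeCount": 5
  }
}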
NodeNetworkConfig
Parameters for node pool-level network config.
JSON representation
{
  "createPodRange": boolean,
  "podRange": string,
  "podIpv4CidrBlock": string,
  "podCidrOverprovisionConfig": { object (PodCIDROverprovisionConfig) },
  "additionalNodeNetworkConfigs": [ { object (AdditionalNodeNetworkConfig) } ],
  "additionalPodNetworkConfigs": [ { object (AdditionalPodNetworkConfig) } ],
  "podIpv4RangeUtilization": number,
  "enablePrivateNodes": boolean,
  "networkPerformanceConfig": { object (NetworkPerformanceConfig) }
}
| Field | Description |
| --- | --- |
| createPodRange | Input only. Whether to create a new range for pod IPs in this node pool. Defaults are provided for podRange and podIpv4CidrBlock if they are not specified. If neither createPodRange nor podRange are specified, the cluster-level default is used. Only applicable if ipAllocationPolicy.useIpAliases is true. This field cannot be changed after the node pool has been created. |
| podRange | The ID of the secondary range for pod IPs. If createPodRange is true, this ID is used for the new range. If createPodRange is false, an existing secondary range with this ID is used. Only applicable if ipAllocationPolicy.useIpAliases is true. This field cannot be changed after the node pool has been created. |
| podIpv4CidrBlock | The IP address range for pod IPs in this node pool. Only applicable if createPodRange is true. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) to pick a specific range to use. Only applicable if ipAllocationPolicy.useIpAliases is true. This field cannot be changed after the node pool has been created. |
| podCidrOverprovisionConfig | [PRIVATE FIELD] Pod CIDR size overprovisioning config for the node pool. Pod CIDR size per node depends on maxPodsPerNode. By default, the value of maxPodsPerNode is rounded up to the next power of 2, and that is then doubled to get the size of the pod CIDR block per node. Example: a maxPodsPerNode of 30 results in 64 IPs (/26). This config can disable the doubling of IPs (the value is still rounded up to the next power of 2). Example: a maxPodsPerNode of 30 results in 32 IPs (/27) when overprovisioning is disabled. |
| additionalNodeNetworkConfigs[] | The additional node networks for this node pool. Each node network corresponds to an additional network interface on the node. |
| additionalPodNetworkConfigs[] | The additional pod networks for this node pool. Each pod network corresponds to an additional alias IP range for the node. |
| podIpv4RangeUtilization | Output only. The utilization of the IPv4 range for the pod. The ratio is Usage/[Total number of IPs in the secondary range], where Usage = numNodes * numZones * podIPsPerNode. |
| enablePrivateNodes | Whether nodes have internal IP addresses only. If enablePrivateNodes is not specified, the value is derived from cluster.privateClusterConfig.enablePrivateNodes. |
| networkPerformanceConfig | Network bandwidth tier configuration. |
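As a sketch of how the pod-range fields interact, the following NodeNetworkConfig asks the server to create a new secondary range for pods; the range name and netmask are hypothetical. To reuse an existing secondary range instead, leave createPodRange unset (or false) and set podRange to the existing range's ID.
{
  "createPodRange": true,
  "podRange": "example-pod-range",
  "podIpv4CidrBlock": "/24",
  "enablePrivateNodes": true
}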
NetworkPerformanceConfig
Configuration of all network bandwidth tiers
JSON representation
{
  "totalEgressBandwidthTier": enum (Tier),
  "externalIpEgressBandwidthTier": enum (Tier)
}
| Field | Description |
| --- | --- |
| totalEgressBandwidthTier | Specifies the total network bandwidth tier for the NodePool. |
| externalIpEgressBandwidthTier | Specifies the network bandwidth tier for the NodePool for traffic to external/public IP addresses. |
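For example, a node pool's networkConfig could opt into the higher total egress tier as below; whether a given machine family supports TIER_1 is outside the scope of this reference, so treat this as an illustrative sketch.
{
  "networkPerformanceConfig": {
    "totalEgressBandwidthTier": "TIER_1"
  }
}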
Tier
Node network tier
| Value | Description |
| --- | --- |
| TIER_UNSPECIFIED | Default value. |
| TIER_1 | Higher bandwidth, actual values based on VM size. |
AdditionalNodeNetworkConfig
AdditionalNodeNetworkConfig is the configuration for additional node networks within the NodeNetworkConfig message
JSON representation
{ "network": string, "subnetwork": string }
| Field | Description |
| --- | --- |
| network | Name of the VPC where the additional interface belongs. |
| subnetwork | Name of the subnetwork where the additional interface belongs. |
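For instance, an entry in additionalNodeNetworkConfigs[] that attaches one extra interface per node might look like the following; both the VPC and subnetwork names are hypothetical.
{ "network": "example-vpc-2", "subnetwork": "example-subnet-2" }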
AdditionalPodNetworkConfig
AdditionalPodNetworkConfig is the configuration for additional pod networks within the NodeNetworkConfig message
JSON representation |
---|
{
"subnetwork": string,
"secondaryPodRange": string,
"networkAttachment": string,
"maxPodsPerNode": {
object ( |
| Field | Description |
| --- | --- |
| subnetwork | Name of the subnetwork where the additional pod network belongs. |
| secondaryPodRange | The name of the secondary range on the subnet which provides IP addresses for this pod range. |
| networkAttachment | The name of the network attachment for pods to communicate with; cannot be specified along with subnetwork or secondaryPodRange. |
| maxPodsPerNode | The maximum number of pods per node which use this pod network. |
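An illustrative entry for additionalPodNetworkConfigs[] is shown below; the subnetwork and secondary range names are hypothetical, and networkAttachment is omitted because it cannot be combined with subnetwork or secondaryPodRange.
{
  "subnetwork": "example-subnet-2",
  "secondaryPodRange": "example-extra-pod-range"
}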
Status
The current status of the node pool instance.
| Value | Description |
| --- | --- |
| STATUS_UNSPECIFIED | Not set. |
| PROVISIONING | The PROVISIONING state indicates the node pool is being created. |
| RUNNING | The RUNNING state indicates the node pool has been created and is fully usable. |
| RUNNING_WITH_ERROR | The RUNNING_WITH_ERROR state indicates the node pool has been created and is partially usable. Some error state has occurred and some functionality may be impaired. The customer may need to reissue a request or trigger a new update. |
| RECONCILING | The RECONCILING state indicates that some work is actively being done on the node pool, such as upgrading node software. Details can be found in the statusMessage field. |
| STOPPING | The STOPPING state indicates the node pool is being deleted. |
| ERROR | The ERROR state indicates the node pool may be unusable. Details can be found in the statusMessage field. |
NodePoolAutoscaling
NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.
JSON representation |
---|
{
"enabled": boolean,
"minNodeCount": integer,
"maxNodeCount": integer,
"autoprovisioned": boolean,
"locationPolicy": enum ( |
| Field | Description |
| --- | --- |
| enabled | Is autoscaling enabled for this node pool. |
| minNodeCount | Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= maxNodeCount. |
| maxNodeCount | Maximum number of nodes for one location in the NodePool. Must be >= minNodeCount. There has to be enough quota to scale up the cluster. |
| autoprovisioned | Can this node pool be deleted automatically. |
| locationPolicy | Location policy used when scaling up a nodepool. |
| totalMinNodeCount | Minimum number of nodes in the node pool. Must be greater than or equal to 1 and less than or equal to totalMaxNodeCount. The total_*_node_count fields are mutually exclusive with the *_node_count fields. |
| totalMaxNodeCount | Maximum number of nodes in the node pool. Must be greater than totalMinNodeCount. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields. |
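Because the total_*_node_count fields are mutually exclusive with the per-location *_node_count fields, an autoscaling block uses one family or the other. The sketch below scales on cluster-wide totals with the ANY location policy; the counts are arbitrary examples.
{
  "enabled": true,
  "totalMinNodeCount": 3,
  "totalMaxNodeCount": 30,
  "locationPolicy": "ANY"
}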
LocationPolicy
Location policy specifies how zones are picked when scaling up the nodepool.
| Value | Description |
| --- | --- |
| LOCATION_POLICY_UNSPECIFIED | Not set. |
| BALANCED | BALANCED is a best effort policy that aims to balance the sizes of different zones. |
| ANY | ANY policy picks zones that have the highest capacity available. |
PlacementPolicy
PlacementPolicy defines the placement policy used by the node pool.
JSON representation |
---|
{
"type": enum ( |
| Field | Description |
| --- | --- |
| type | The type of placement. |
| tpuTopology | TPU placement topology for pod slice node pool. See https://cloud.google.com/tpu/docs/types-topologies#tpu_topologies. |
| policyName | If set, refers to the name of a custom resource policy supplied by the user. The resource policy must be in the same project and region as the node pool. If the policy is not found, an InvalidArgument error is returned. |
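As an illustration, a TPU pod slice node pool might request compact placement with an explicit topology; the topology string below is only a placeholder, since valid values depend on the TPU type (see the topology page linked above).
{
  "type": "COMPACT",
  "tpuTopology": "2x2x2"
}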
Type
Type defines the type of placement policy.
| Value | Description |
| --- | --- |
| TYPE_UNSPECIFIED | TYPE_UNSPECIFIED specifies no requirements on nodes placement. |
| COMPACT | COMPACT specifies node placement in the same availability domain to ensure low communication latency. |
UpdateInfo
UpdateInfo contains resource (instance groups, etc.), status, and other intermediate information relevant to a node pool upgrade.
JSON representation
{
  "blueGreenInfo": { object (BlueGreenInfo) }
}
| Field | Description |
| --- | --- |
| blueGreenInfo | Information of a blue-green upgrade. |
BlueGreenInfo
Information relevant to blue-green upgrade.
JSON representation |
---|
{
"phase": enum ( |
| Field | Description |
| --- | --- |
| phase | Current blue-green upgrade phase. |
| blueInstanceGroupUrls[] | The resource URLs of the managed instance groups associated with the blue pool. |
| greenInstanceGroupUrls[] | The resource URLs of the managed instance groups associated with the green pool. |
| bluePoolDeletionStartTime | Time to start deleting the blue pool to complete the blue-green upgrade, in RFC3339 text format. |
| greenPoolVersion | Version of the green pool. |
Phase
Phase represents the different stages blue-green upgrade is running in.
| Value | Description |
| --- | --- |
| PHASE_UNSPECIFIED | Unspecified phase. |
| UPDATE_STARTED | Blue-green upgrade has been initiated. |
| CREATING_GREEN_POOL | Start creating green pool nodes. |
| CORDONING_BLUE_POOL | Start cordoning blue pool nodes. |
| WAITING_TO_DRAIN_BLUE_POOL | Start waiting after cordoning the blue pool and before draining it. |
| DRAINING_BLUE_POOL | Start draining blue pool nodes. |
| NODE_POOL_SOAKING | Start soaking time after draining the entire blue pool. |
| DELETING_BLUE_POOL | Start deleting blue nodes. |
| ROLLBACK_STARTED | Rollback has been initiated. |
QueuedProvisioning
QueuedProvisioning defines the queued provisioning used by the node pool.
JSON representation
{ "enabled": boolean }
| Field | Description |
| --- | --- |
| enabled | Denotes that this nodepool is QRM specific, meaning nodes can be only obtained through queuing via the Cluster Autoscaler ProvisioningRequest API. |
BestEffortProvisioning
Best effort provisioning.
JSON representation
{ "enabled": boolean, "minProvisionNodes": integer }
| Field | Description |
| --- | --- |
| enabled | When enabled, cluster and node pool creation ignores non-fatal errors (such as stockouts), provisions as many nodes as possible immediately, and eventually brings up the full target number of nodes. |
| minProvisionNodes | Minimum number of nodes that must be provisioned for the operation to be considered successful. The remaining nodes are provisioned gradually once the stockout has been resolved. |
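A sketch of a best-effort provisioning block: creation is treated as successful once at least the minimum number of nodes below comes up. The counts are hypothetical.
{
  "enabled": true,
  "minProvisionNodes": 7
}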
Methods

| Method | Description |
| --- | --- |
| completeUpgrade | CompleteNodePoolUpgrade will signal an on-going node pool upgrade to complete. |
| create | Creates a node pool for a cluster. |
| delete | Deletes a node pool from a cluster. |
| get | Retrieves the requested node pool. |
| list | Lists the node pools for a cluster. |
| rollback | Rolls back a previously Aborted or Failed NodePool upgrade. |
| setAutoscaling | Sets the autoscaling settings of a specific node pool. |
| setManagement | Sets the NodeManagement options for a node pool. |
| setSize | SetNodePoolSizeRequest sets the size of a node pool. |
| update | Updates the version and/or image type of a specific node pool. |
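As a hedged sketch of how the resource ties into the create method, the node pool definition is passed in the nodePool field of the request body, while the parent cluster is identified in the request URL; the wrapper field name and the example pool below are assumptions drawn from the broader API rather than from this section.
{
  "nodePool": {
    "name": "example-pool",
    "initialNodeCount": 3
  }
}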