gcloud container node-pools update

NAME
gcloud container node-pools update - updates a node pool in a running cluster
SYNOPSIS
gcloud container node-pools update NAME (--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]     | --containerd-config-from-file=PATH_TO_FILE     | --enable-confidential-nodes     | --enable-gvnic     | --enable-image-streaming     | --enable-insecure-kubelet-readonly-port     | --enable-private-nodes     | --enable-queued-provisioning     | --labels=[KEY=VALUE,…]     | --logging-variant=LOGGING_VARIANT     | --network-performance-configs=[PROPERTY=VALUE,…]     | --node-labels=[NODE_LABEL,…]     | --node-locations=ZONE,[ZONE,…]     | --node-taints=[NODE_TAINT,…]     | --resource-manager-tags=[KEY=VALUE,…]     | --storage-pools=STORAGE_POOL,[…]     | --system-config-from-file=PATH_TO_FILE     | --tags=[TAG,…]     | --windows-os-version=WINDOWS_OS_VERSION     | --workload-metadata=WORKLOAD_METADATA     | --disk-size=DISK_SIZE --disk-type=DISK_TYPE --machine-type=MACHINE_TYPE     | --enable-autoprovisioning --enable-autoscaling --location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES     | --enable-autorepair --enable-autoupgrade     | --enable-blue-green-upgrade --enable-surge-upgrade --max-surge-upgrade=MAX_SURGE_UPGRADE --max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE --node-pool-soak-duration=NODE_POOL_SOAK_DURATION --standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]) [--async] [--cluster=CLUSTER] [--location=LOCATION     | --region=REGION     | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG]
DESCRIPTION
gcloud container node-pools update updates a node pool in a Google Kubernetes Engine cluster.
EXAMPLES
To turn on node autoupgrade in "node-pool-1" in the cluster "sample-cluster", run:
gcloud container node-pools update node-pool-1 --cluster=sample-cluster --enable-autoupgrade
POSITIONAL ARGUMENTS
NAME
The name of the node pool.
REQUIRED FLAGS
Exactly one of these must be specified:
--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]
Attaches accelerators (e.g. GPUs) to all nodes.
type
(Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator to attach to the instances. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Optional) The number of accelerators to attach to the instances. The default value is 1.
gpu-driver-version
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one of:
`default`: Install the default driver version for this GKE version.
`latest`: Install the latest driver version available for this GKE version. Can only be used for nodes that use Container-Optimized OS.
`disabled`: Skip automatic driver installation. You must manually install a driver after you create the cluster. If you omit the flag `gpu-driver-version`, this is the default option. To learn how to manually install the GPU driver, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers
gpu-partition-size
(Optional) The GPU partition size used when running multi-instance GPUs. For information about multi-instance GPUs, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi
gpu-sharing-strategy
(Optional) The GPU sharing strategy (e.g. time-sharing) to use. For information about GPU sharing, refer to: https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus
max-shared-clients-per-gpu
(Optional) The max number of containers allowed to share each GPU on the node. This field is used together with gpu-sharing-strategy.
--containerd-config-from-file=PATH_TO_FILE
Path of the YAML file that contains containerd configuration entries like configuring access to private image registries.

For detailed information on the configuration usage, please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/customize-containerd-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool requires recreation of the existing nodes, which might cause disruptions in running workloads.

Use a full or relative path to a local file containing the value of containerd_config.
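
For instance, assuming the configuration is saved in a local file named containerd-config.yaml (an illustrative name), a command like the following applies it to the node pool:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --containerd-config-from-file=containerd-config.yaml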

--enable-confidential-nodes
Recreates all the nodes in the node pool as Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs/about-cvm).
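
For example (the node pool and cluster names are illustrative):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-confidential-nodes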
--enable-gvnic
Enable the use of GVNIC for this cluster. Requires re-creation of nodes using either a node-pool upgrade or node-pool creation.
--enable-image-streaming
Specifies whether to enable image streaming on the node pool.
--enable-insecure-kubelet-readonly-port
Enables the Kubelet's insecure read-only port.

To disable the read-only port on a cluster or node pool, set the flag to --no-enable-insecure-kubelet-readonly-port.
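
For example, a command like the following (illustrative names) disables the port on the node pool:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --no-enable-insecure-kubelet-readonly-port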

--enable-private-nodes
Enables provisioning nodes with private IP addresses only.

The control plane still communicates with all nodes through private IP addresses only, regardless of whether private nodes are enabled or disabled.
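
For example (illustrative names):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-private-nodes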

--enable-queued-provisioning
Marks the node pool as Queued only. This means that all new nodes can be obtained only through queuing via the ProvisioningRequest API. For example, run the following command together with the other required parameters:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-queued-provisioning

For more details, see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--labels=[KEY=VALUE,…]
Labels to apply to the Google Cloud resources of node pools in the Kubernetes Engine cluster. These are unrelated to Kubernetes labels.

Examples:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --labels=label1=value1,label2=value2
--logging-variant=LOGGING_VARIANT
Specifies the logging variant that will be deployed on all the nodes in the node pool. If the node pool doesn't specify a logging variant, then the logging variant specified for the cluster will be deployed on all the nodes in the node pool. Valid logging variants are MAX_THROUGHPUT, DEFAULT. LOGGING_VARIANT must be one of:
DEFAULT
'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT
'MAX_THROUGHPUT' variant requests more node resources and is able to achieve logging throughput up to 10MB per sec.
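
For example, to switch the node pool to the high-throughput logging variant (names are illustrative):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --logging-variant=MAX_THROUGHPUT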
--network-performance-configs=[PROPERTY=VALUE,…]
Configures network performance settings for the node pool. If this flag is not specified, the pool will be created with its default network performance configuration.
total-egress-bandwidth-tier
Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED,TIER_1]
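
For example, to request Tier 1 total egress bandwidth for the node pool (names are illustrative):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --network-performance-configs=total-egress-bandwidth-tier=TIER_1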
--node-labels=[NODE_LABEL,…]
Replaces all the user specified Kubernetes labels on all nodes in an existing node pool with the given labels.

Examples:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --node-labels=label1=value1,label2=value2

New nodes, including ones created by resize or recreate, will have these labels on the Kubernetes API node object and can be used in nodeSelectors. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for examples.

Note that Kubernetes labels, intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Google Kubernetes Engine labels that are used for the purpose of tracking billing and usage information.

--node-locations=ZONE,[ZONE,…]
Set of zones in which the node pool's nodes should be located. Changing the locations for a node pool will result in nodes being either created or removed from the node pool, depending on whether locations are being added or removed.

Multiple locations can be specified, separated by commas. For example:

gcloud container node-pools update node-pool-1 --cluster=sample-cluster --node-locations=us-central1-a,us-central1-b
--node-taints=[NODE_TAINT,…]
Replaces all the user specified Kubernetes taints on all nodes in an existing node pool, which can be used with tolerations for pod scheduling.

Examples:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule

To read more about node-taints, see https://cloud.google.com/kubernetes-engine/docs/node-taints.

--resource-manager-tags=[KEY=VALUE,…]
Replaces all the user specified resource manager tags on all nodes in an existing node pool in a Standard cluster with the given comma-separated resource manager tags that have the GCE_FIREWALL purpose.

Examples:

gcloud container node-pools update example-node-pool --resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud container node-pools update example-node-pool --resource-manager-tags=my-project/key1=value1
gcloud container node-pools update example-node-pool --resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud container node-pools update example-node-pool --resource-manager-tags=

All nodes, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--storage-pools=STORAGE_POOL,[…]
A list of storage pools where the node pool's boot disks will be provisioned. Replaces all the current storage pools of an existing node pool with the specified storage pools.

STORAGE_POOL must be in the format projects/project/zones/zone/storagePools/storagePool
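
For example (the project, zone, and storage pool names are placeholders):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --storage-pools=projects/my-project/zones/us-central1-a/storagePools/my-storage-pool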

--system-config-from-file=PATH_TO_FILE
Path of the YAML/JSON file that contains the node configuration, including Linux kernel parameters (sysctls) and kubelet configs.

Examples:

kubeletConfig:
  cpuManagerPolicy: static
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'

List of supported kubelet configs in 'kubeletConfig'.

KEY                  VALUE
cpuManagerPolicy     either 'static' or 'none'
cpuCFSQuota          true or false (enabled by default)
cpuCFSQuotaPeriod    interval (e.g., '100ms')
podPidsLimit         integer (The value must be greater than or equal to 1024 and less than 4194304.)
List of supported sysctls in 'linuxConfig'.
KEY                            VALUE
net.core.netdev_max_backlog    Any positive integer, less than 2147483647
net.core.rmem_max              Any positive integer, less than 2147483647
net.core.wmem_default          Any positive integer, less than 2147483647
net.core.wmem_max              Any positive integer, less than 2147483647
net.core.optmem_max            Any positive integer, less than 2147483647
net.core.somaxconn             Must be [128, 2147483647]
net.ipv4.tcp_rmem              Any positive integer tuple
net.ipv4.tcp_wmem              Any positive integer tuple
net.ipv4.tcp_tw_reuse          Must be {0, 1}
kernel.shmmni                  Must be [4096, 32768]
kernel.shmmax                  Must be [0, 18446744073692774399]
kernel.shmall                  Must be [0, 18446744073692774399]
List of supported hugepage sizes in 'hugepageConfig'.
KEY                VALUE
hugepage_size2m    Number of 2M huge pages, any positive integer
hugepage_size1g    Number of 1G huge pages, any positive integer
Allocated hugepage size should not exceed 60% of available memory on the node. For example, c2d-highcpu-4 has 8GB of memory, so the total memory allocated to 2M and 1G huge pages should not exceed 8GB * 0.6 = 4.8GB.

1G hugepages are only available in the following machine families: c3, m2, c2d, c3d, h3, m3, a2, a3, g2.

Note: updating the system configuration of an existing node pool requires recreation of the nodes, which might cause a disruption.

Use a full or relative path to a local file containing the value of system_config.
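
For example, assuming the configuration above is saved in a local file named node-config.yaml (an illustrative name), it can be applied with:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --system-config-from-file=node-config.yaml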

--tags=[TAG,…]
Replaces all the user specified Compute Engine tags on all nodes in an existing node pool with the given tags (comma separated).

Examples:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --tags=tag1,tag2

New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and these tags can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--windows-os-version=WINDOWS_OS_VERSION
Specifies the Windows Server image to use when creating a Windows node pool. Valid values are "ltsc2019" (use the LTSC2019 server image) and "ltsc2022" (use the LTSC2022 server image). If the node pool doesn't specify a Windows Server image OS version, ltsc2019 is used by default. WINDOWS_OS_VERSION must be one of: ltsc2019, ltsc2022.
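
For example, to move a Windows node pool to the LTSC2022 image (the pool and cluster names are illustrative):

gcloud container node-pools update windows-pool-1 --cluster=example-cluster --windows-os-version=ltsc2022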
--workload-metadata=WORKLOAD_METADATA
Type of metadata server available to pods running in the node pool. WORKLOAD_METADATA must be one of:
GCE_METADATA
Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GKE_METADATA
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
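
For example, to serve the Kubernetes Engine Metadata Server to workloads on the node pool (names are illustrative; Workload Identity must already be enabled on the cluster):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --workload-metadata=GKE_METADATA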
Node config
--disk-size=DISK_SIZE
Size for node VM boot disks in GB. Defaults to 100GB.
--disk-type=DISK_TYPE
Type of the node VM boot disk. For version 1.24 and later, defaults to pd-balanced. For versions earlier than 1.24, defaults to pd-standard. DISK_TYPE must be one of: pd-standard, pd-ssd, pd-balanced, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput.
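
For example, a command like the following (an illustrative combination of values) moves the node pool's boot disks to 200 GB pd-balanced disks:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --disk-type=pd-balanced --disk-size=200GB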
--machine-type=MACHINE_TYPE
The type of machine to use for nodes. Defaults to e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list

You can also specify custom machine types by providing a string with the format "custom-CPUS-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GB of RAM:

gcloud container node-pools update high-mem-pool --machine-type=custom-2-12288
Cluster autoscaling
--enable-autoprovisioning
Enables Cluster Autoscaler to treat the node pool as if it were autoprovisioned.

Cluster Autoscaler will be able to delete the node pool if it's unneeded.

--enable-autoscaling
Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided. If not already set, --max-nodes or --total-max-nodes must also be specified.

--location-policy=LOCATION_POLICY
Location policy specifies the algorithm used when scaling-up the node pool.
  • BALANCED - Is a best effort policy that aims to balance the sizes of available zones.
  • ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.

LOCATION_POLICY must be one of: BALANCED, ANY.

--max-nodes=MAX_NODES
Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--min-nodes=MIN_NODES
Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-max-nodes=TOTAL_MAX_NODES
Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-min-nodes=TOTAL_MIN_NODES
Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
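
For example, a command like the following (illustrative names and limits) enables autoscaling with total node limits and the ANY location policy:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-autoscaling --total-min-nodes=1 --total-max-nodes=10 --location-policy=ANY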

Node management
--enable-autorepair
Enable node autorepair feature for a node pool.
gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-autorepair

See https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.

--enable-autoupgrade
Sets autoupgrade feature for a node pool.
gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-autoupgrade

See https://cloud.google.com/kubernetes-engine/docs/node-auto-upgrades for more info.

Upgrade settings
--enable-blue-green-upgrade
Changes node pool upgrade strategy to blue-green upgrade.
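
For example (names are illustrative):

gcloud container node-pools update node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade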
--enable-surge-upgrade
Changes node pool upgrade strategy to surge upgrade.
--max-surge-upgrade=MAX_SURGE_UPGRADE
Number of extra (surge) nodes to be created on each upgrade of the node pool.

Specifies the number of extra (surge) nodes to be created during this node pool's upgrades. For example, running the following command will result in creating an extra node each time the node pool is upgraded:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --max-surge-upgrade=1   --max-unavailable-upgrade=0

Must be used in conjunction with '--max-unavailable-upgrade'.

--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of the node pool.

Specifies the number of nodes that can be unavailable at the same time during this node pool's upgrades. For example, assume the node pool has 5 nodes. Running the following command will result in 3 nodes being upgraded in parallel (1 surge node + 2 unavailable nodes), while always keeping at least 3 nodes (5 - 2) available each time the node pool is upgraded:

gcloud container node-pools update node-pool-1 --cluster=example-cluster --max-surge-upgrade=1   --max-unavailable-upgrade=2

Must be used in conjunction with '--max-surge-upgrade'.

--node-pool-soak-duration=NODE_POOL_SOAK_DURATION
Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the upgrade.
gcloud container node-pools update node-pool-1 --cluster=example-cluster  --node-pool-soak-duration=600s
--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]
Standard rollout policy options for blue-green upgrade.

Batch sizes are specified by one of batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.

gcloud container node-pools update node-pool-1 --cluster=example-cluster  --standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s
gcloud container node-pools update node-pool-1 --cluster=example-cluster  --standard-rollout-policy=batch-percent=0.3,batch-soak-duration=60s
OPTIONAL FLAGS
--async
Return immediately, without waiting for the operation in progress to complete.
--cluster=CLUSTER
The name of the cluster. Overrides the default container/cluster property value for this command invocation.
At most one of these can be specified:
--location=LOCATION
Compute zone or region (e.g. us-central1-a or us-central1) for the cluster. Overrides the default compute/region or compute/zone value for this command invocation. Prefer using this flag over the --region or --zone flags.
--region=REGION
Compute region (e.g. us-central1) for a regional cluster. Overrides the default compute/region property value for this command invocation.
--zone=ZONE, -z ZONE
Compute zone (e.g. us-central1-a) for a zonal cluster. Overrides the default compute/zone property value for this command invocation.
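
For example, to target a node pool in a regional cluster (the names and location are illustrative):

gcloud container node-pools update node-pool-1 --cluster=sample-cluster --location=us-central1 --enable-autoupgrade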
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
These variants are also available:
gcloud alpha container node-pools update
gcloud beta container node-pools update