gcloud alpha container node-pools create

NAME
gcloud alpha container node-pools create - create a node pool in a running cluster
SYNOPSIS
gcloud alpha container node-pools create NAME [--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]] [--additional-node-network=[network=NETWORK_NAME,subnetwork=SUBNETWORK_NAME,…]] [--additional-pod-network=[subnetwork=SUBNETWORK_NAME,pod-ipv4-range=SECONDARY_RANGE_NAME,[max-pods-per-node=NUM_PODS],…]] [--async] [--boot-disk-kms-key=BOOT_DISK_KMS_KEY] [--cluster=CLUSTER] [--containerd-config-from-file=PATH_TO_FILE] [--disable-pod-cidr-overprovision] [--disk-size=DISK_SIZE] [--disk-type=DISK_TYPE] [--enable-autoprovisioning] [--enable-autorepair] [--no-enable-autoupgrade] [--enable-blue-green-upgrade] [--enable-confidential-nodes] [--enable-confidential-storage] [--enable-gvnic] [--enable-image-streaming] [--enable-insecure-kubelet-readonly-port] [--enable-nested-virtualization] [--enable-private-nodes] [--enable-queued-provisioning] [--enable-surge-upgrade] [--image-type=IMAGE_TYPE] [--labels=[KEY=VALUE,…]] [--linux-sysctls=KEY=VALUE,[KEY=VALUE,…]] [--local-ssd-encryption-mode=LOCAL_SSD_ENCRYPTION_MODE] [--logging-variant=LOGGING_VARIANT] [--machine-type=MACHINE_TYPE, -m MACHINE_TYPE] [--max-pods-per-node=MAX_PODS_PER_NODE] [--max-surge-upgrade=MAX_SURGE_UPGRADE; default=1] [--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE] [--metadata=KEY=VALUE,[KEY=VALUE,…]] [--metadata-from-file=KEY=LOCAL_FILE_PATH,[…]] [--min-cpu-platform=PLATFORM] [--network-performance-configs=[PROPERTY=VALUE,…]] [--node-group=NODE_GROUP] [--node-labels=[NODE_LABEL,…]] [--node-locations=ZONE,[ZONE,…]] [--node-pool-soak-duration=NODE_POOL_SOAK_DURATION] [--node-taints=[NODE_TAINT,…]] [--node-version=NODE_VERSION] [--num-nodes=NUM_NODES; default=3] [--placement-policy=PLACEMENT_POLICY] [--placement-type=PLACEMENT_TYPE] [--preemptible] [--resource-manager-tags=[KEY=VALUE,…]] [--sandbox=[type=TYPE]] [--secondary-boot-disk=[disk-image=DISK_IMAGE,[mode=MODE],…]] [--shielded-integrity-monitoring] [--shielded-secure-boot] [--sole-tenant-node-affinity-file=SOLE_TENANT_NODE_AFFINITY_FILE] [--spot] [--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]] [--storage-pools=STORAGE_POOL,[…]] [--system-config-from-file=PATH_TO_FILE] [--tags=TAG,[TAG,…]] [--threads-per-core=THREADS_PER_CORE] [--tpu-topology=TPU_TOPOLOGY] [--windows-os-version=WINDOWS_OS_VERSION] [--workload-metadata=WORKLOAD_METADATA] [--create-pod-ipv4-range=[KEY=VALUE,…]     | --pod-ipv4-range=NAME] [--enable-autoscaling --location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES] [--enable-best-effort-provision --min-provision-nodes=MIN_PROVISION_NODES] [--ephemeral-storage[=[local-ssd-count=LOCAL-SSD-COUNT]]     | --ephemeral-storage-local-ssd[=[count=COUNT]]     | --local-nvme-ssd-block[=[count=COUNT]]     | --local-ssd-count=LOCAL_SSD_COUNT     | --local-ssd-volumes=[[count=COUNT],[type=TYPE],[format=FORMAT],…]] [--location=LOCATION     | --region=REGION     | --zone=ZONE, -z ZONE] [--reservation=RESERVATION --reservation-affinity=RESERVATION_AFFINITY] [--scopes=[SCOPE,…]; default="gke-default" --service-account=SERVICE_ACCOUNT] [GCLOUD_WIDE_FLAG]
DESCRIPTION
(ALPHA) gcloud alpha container node-pools create facilitates the creation of a node pool in a Google Kubernetes Engine cluster. A variety of options exist to customize the node configuration and the number of nodes created.
EXAMPLES
To create a new node pool "node-pool-1" with the default options in the cluster "sample-cluster", run:
gcloud alpha container node-pools create node-pool-1 --cluster=sample-cluster

The new node pool will show up in the cluster after all the nodes have been provisioned.

To create a node pool with 5 nodes, run:

gcloud alpha container node-pools create node-pool-1 --cluster=sample-cluster --num-nodes=5
POSITIONAL ARGUMENTS
NAME
The name of the node pool to create.
FLAGS
--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]
Attaches accelerators (e.g. GPUs) to all nodes.
type
(Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator to attach to the instances. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Optional) The number of accelerators to attach to the instances. The default value is 1.
gpu-driver-version
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one of:
`default`: Install the default driver version for this GKE version.
`latest`: Install the latest driver version available for this GKE version. Can only be used for nodes that use Container-Optimized OS.
`disabled`: Skip automatic driver installation. You must manually install a driver after you create the cluster. If you omit the flag `gpu-driver-version`, this is the default option. To learn how to manually install the GPU driver, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers
gpu-partition-size
(Optional) The GPU partition size used when running multi-instance GPUs. For information about multi-instance GPUs, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi
gpu-sharing-strategy
(Optional) The GPU sharing strategy (e.g. time-sharing) to use. For information about GPU sharing, refer to: https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus
max-shared-clients-per-gpu
(Optional) The max number of containers allowed to share each GPU on the node. This field is used together with gpu-sharing-strategy.
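For example, to attach one NVIDIA T4 GPU to each node and install the default driver (a sketch; adjust the type and count for your workload):

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --accelerator=type=nvidia-tesla-t4,count=1,gpu-driver-version=default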
--additional-node-network=[network=NETWORK_NAME,subnetwork=SUBNETWORK_NAME,…]
Attach an additional network interface to each node in the pool. This parameter can be specified up to 7 times.

e.g. --additional-node-network network=dataplane,subnetwork=subnet-dp

network
(Required) The network to attach the new interface to.
subnetwork
(Required) The subnetwork to attach the new interface to.
--additional-pod-network=[subnetwork=SUBNETWORK_NAME,pod-ipv4-range=SECONDARY_RANGE_NAME,[max-pods-per-node=NUM_PODS],…]
Specify the details of a secondary range to be used for an additional pod network. Not needed if you use a "host"-typed NIC from this network. This parameter can be specified up to 35 times.

e.g. --additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8.

subnetwork
(Optional) The name of the subnetwork to link the pod network to. If not specified, the pod network defaults to the subnet connected to the default network interface.
pod-ipv4-range
(Required) The name of the secondary range in the subnetwork. The range must hold at least (2 * MAX_PODS_PER_NODE * MAX_NODES_IN_RANGE) IPs.
max-pods-per-node
(Optional) Maximum number of pods per node that can use this IPv4 range. Defaults to the node pool's value (if specified) or the cluster's value.
--async
Return immediately, without waiting for the operation in progress to complete.
--boot-disk-kms-key=BOOT_DISK_KMS_KEY
The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
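For example, assuming a key ring my-ring and key my-key (placeholder names) in us-central1:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --boot-disk-kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key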
--cluster=CLUSTER
The cluster to add the node pool to. Overrides the default container/cluster property value for this command invocation.
--containerd-config-from-file=PATH_TO_FILE
Path of the YAML file that contains containerd configuration entries like configuring access to private image registries.

For detailed information on the configuration usage, please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/customize-containerd-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool requires recreation of the existing nodes, which might cause disruptions in running workloads.

Use a full or relative path to a local file containing the value of containerd_config.
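As an illustrative sketch, a file granting access to a private registry whose CA certificate is stored in Secret Manager might look like the following; the schema and all names here are assumptions to be verified against the guide linked above:

privateRegistryAccessConfig:
  enabled: true
  certificateAuthorityDomainConfig:
  - gcpSecretManagerCertificateConfig:
      secretURI: projects/my-project/secrets/my-ca-cert/versions/latest
    fqdns:
    - myregistry.example.com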

--disable-pod-cidr-overprovision
Disables pod CIDR overprovisioning on nodes. Pod CIDR overprovisioning is enabled by default.
--disk-size=DISK_SIZE
Size for node VM boot disks in GB. Defaults to 100GB.
--disk-type=DISK_TYPE
Type of the node VM boot disk. For version 1.24 and later, defaults to pd-balanced. For versions earlier than 1.24, defaults to pd-standard. DISK_TYPE must be one of: pd-standard, pd-ssd, pd-balanced, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput.
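For example, to boot each node from a 200GB SSD persistent disk:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --disk-type=pd-ssd --disk-size=200GB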
--enable-autoprovisioning
Enables Cluster Autoscaler to treat the node pool as if it were autoprovisioned.

Cluster Autoscaler will be able to delete the node pool if it's unneeded.

--enable-autorepair
Enables the node autorepair feature for a node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-autorepair

Node autorepair is enabled by default for node pools using COS, COS_CONTAINERD, UBUNTU or UBUNTU_CONTAINERD as a base image; use --no-enable-autorepair to disable.

See https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.

--enable-autoupgrade
Sets the autoupgrade feature for a node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-autoupgrade

See https://cloud.google.com/kubernetes-engine/docs/node-auto-upgrades for more info.

Enabled by default, use --no-enable-autoupgrade to disable.

--enable-blue-green-upgrade
Changes node pool upgrade strategy to blue-green upgrade.
--enable-confidential-nodes
Enable confidential nodes for the node pool. Enabling Confidential Nodes will create nodes using Confidential VM https://cloud.google.com/compute/confidential-vm/docs/about-cvm.
--enable-confidential-storage
Enable confidential storage for the node pool. Enabling Confidential Storage will create boot disks in confidential mode.
--enable-gvnic
Enable the use of GVNIC for this cluster. Requires re-creation of nodes using either a node-pool upgrade or node-pool creation.
--enable-image-streaming
Specifies whether to enable image streaming on the node pool.
--enable-insecure-kubelet-readonly-port
Enables the kubelet's insecure read-only port.

To disable the read-only port on a cluster or node pool, use --no-enable-insecure-kubelet-readonly-port.

--enable-nested-virtualization
Enables the use of nested virtualization on the node pool. Defaults to false. Can only be enabled on UBUNTU_CONTAINERD base image or COS_CONTAINERD base image with version 1.28.4-gke.1083000 and above.
--enable-private-nodes
Enables provisioning nodes with private IP addresses only.

The control plane still communicates with all nodes through private IP addresses only, regardless of whether private nodes are enabled or disabled.

--enable-queued-provisioning
Marks the node pool as queued only. This means that all new nodes can be obtained only through queuing via the ProvisioningRequest API.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-queued-provisioning
… and other required parameters; for more details, see:
https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--enable-surge-upgrade
Changes node pool upgrade strategy to surge upgrade.
--image-type=IMAGE_TYPE
The image type to use for the node pool. Defaults to server-specified.

Image Type specifies the base OS that the nodes in the node pool will run on. If an image type is specified, that will be assigned to the node pool and all future upgrades will use the specified image type. If it is not specified, the server will pick the default image type.

The default image type and the list of valid image types are available using the following command.

gcloud container get-server-config
--labels=[KEY=VALUE,…]
Labels to apply to the Google Cloud resources of node pools in the Kubernetes Engine cluster. These are unrelated to Kubernetes labels.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --labels=label1=value1,label2=value2
--linux-sysctls=KEY=VALUE,[KEY=VALUE,…]
(DEPRECATED) Linux kernel parameters to be applied to all nodes in the new node pool as well as the pods running on the nodes.

Examples:

gcloud alpha container node-pools create node-pool-1 --linux-sysctls="net.core.somaxconn=1024,net.ipv4.tcp_rmem=4096 87380 6291456"

The --linux-sysctls flag is deprecated. Please use --system-config-from-file instead.

--local-ssd-encryption-mode=LOCAL_SSD_ENCRYPTION_MODE
Encryption mode for Local SSDs on the node pool. LOCAL_SSD_ENCRYPTION_MODE must be one of: STANDARD_ENCRYPTION, EPHEMERAL_KEY_ENCRYPTION.
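For example, to use ephemeral key encryption on a pool that also provisions Local SSDs (here via --ephemeral-storage-local-ssd; the combination is a sketch):

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --ephemeral-storage-local-ssd=count=2 --local-ssd-encryption-mode=EPHEMERAL_KEY_ENCRYPTION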
--logging-variant=LOGGING_VARIANT
Specifies the logging variant that will be deployed on all the nodes in the node pool. If the node pool doesn't specify a logging variant, then the logging variant specified for the cluster will be deployed on all the nodes in the node pool. Valid logging variants are MAX_THROUGHPUT, DEFAULT. LOGGING_VARIANT must be one of:
DEFAULT
'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT
'MAX_THROUGHPUT' variant requests more node resources and is able to achieve logging throughput of up to 10 MB per second.
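For example, to deploy the high-throughput logging agent on all nodes in the pool:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --logging-variant=MAX_THROUGHPUT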
--machine-type=MACHINE_TYPE, -m MACHINE_TYPE
The type of machine to use for nodes. Defaults to e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list

You can also specify custom machine types by providing a string with the format "custom-CPUS-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GB of RAM:

gcloud alpha container node-pools create high-mem-pool --machine-type=custom-2-12288
--max-pods-per-node=MAX_PODS_PER_NODE
The max number of pods per node for this node pool.

This flag sets the maximum number of pods that can be run at the same time on a node. This will override the value given with the --default-max-pods-per-node flag set at the cluster level.

Must be used in conjunction with '--enable-ip-alias'.

--max-surge-upgrade=MAX_SURGE_UPGRADE; default=1
Number of extra (surge) nodes to be created on each upgrade of the node pool.

Specifies the number of extra (surge) nodes to be created during this node pool's upgrades. For example, running the following command will result in creating an extra node each time the node pool is upgraded:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=0

Must be used in conjunction with '--max-unavailable-upgrade'.

--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of the node pool.

Specifies the number of nodes that can be unavailable at the same time during this node pool's upgrades. For example, running the following command will result in 3 nodes being upgraded in parallel (1 + 2), while always keeping at least 3 (5 - 2) nodes available each time the node pool is upgraded:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --num-nodes=5 --max-surge-upgrade=1 --max-unavailable-upgrade=2

Must be used in conjunction with '--max-surge-upgrade'.

--metadata=KEY=VALUE,[KEY=VALUE,…]
Compute Engine metadata to be made available to the guest operating system running on nodes within the node pool.

Each metadata entry is a key/value pair separated by an equals sign. Metadata keys must be unique and less than 128 bytes in length. Values must be less than or equal to 32,768 bytes in length. The total size of all keys and values must be less than 512 KB. Multiple arguments can be passed to this flag. For example:

--metadata key-1=value-1,key-2=value-2,key-3=value-3

Additionally, the following keys are reserved for use by Kubernetes Engine:

  • cluster-location
  • cluster-name
  • cluster-uid
  • configure-sh
  • enable-os-login
  • gci-update-strategy
  • gci-ensure-gke-docker
  • instance-template
  • kube-env
  • startup-script
  • user-data

Google Kubernetes Engine sets the following keys by default:

  • serial-port-logging-enable

See also Compute Engine's documentation on storing and retrieving instance metadata.

--metadata-from-file=KEY=LOCAL_FILE_PATH,[…]
Same as --metadata except that the value for the entry will be read from a local file.
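For example, to populate a metadata entry under a hypothetical key my-config from a local file:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --metadata-from-file=my-config=/path/to/config.txt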
--min-cpu-platform=PLATFORM
When specified, the nodes for the new node pool will be scheduled on hosts with the specified CPU platform or a newer one.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --min-cpu-platform=PLATFORM

To list available CPU platforms in given zone, run:

gcloud beta compute zones describe ZONE --format="value(availableCpuPlatforms)"

CPU platform selection is available only in selected zones.

--network-performance-configs=[PROPERTY=VALUE,…]
Configures network performance settings for the node pool. If this flag is not specified, the pool will be created with its default network performance configuration.
total-egress-bandwidth-tier
Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED,TIER_1]
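For example, to request Tier 1 total egress bandwidth for the pool's nodes:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --network-performance-configs=total-egress-bandwidth-tier=TIER_1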
--node-group=NODE_GROUP
Assign instances of this pool to run on the specified Google Compute Engine node group. This is useful for running workloads on sole tenant nodes.

To see available sole tenant node-groups, run:

gcloud compute sole-tenancy node-groups list

To create a sole tenant node group, run:

gcloud compute sole-tenancy node-groups create [GROUP_NAME] --location [ZONE] --node-template [TEMPLATE_NAME] --target-size [TARGET_SIZE]

See https://cloud.google.com/compute/docs/nodes for more information on sole tenancy and node groups.

--node-labels=[NODE_LABEL,…]
Applies the given Kubernetes labels on all nodes in the new node pool.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --node-labels=label1=value1,label2=value2

New nodes, including ones created by resize or recreate, will have these labels on the Kubernetes API node object and can be used in nodeSelectors. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for examples.

Note that Kubernetes labels, intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Google Kubernetes Engine labels that are used for the purpose of tracking billing and usage information.

--node-locations=ZONE,[ZONE,…]
The set of zones in which the node pool's nodes should be located.

Multiple locations can be specified, separated by commas. For example:

gcloud alpha container node-pools create node-pool-1 --cluster=sample-cluster --node-locations=us-central1-a,us-central1-b
--node-pool-soak-duration=NODE_POOL_SOAK_DURATION
Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the upgrade.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --node-pool-soak-duration=600s
--node-taints=[NODE_TAINT,…]
Applies the given Kubernetes taints on all nodes in the new node pool, which can be used with tolerations for pod scheduling.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule

To read more about node-taints, see https://cloud.google.com/kubernetes-engine/docs/node-taints.

--node-version=NODE_VERSION
The Kubernetes version to use for nodes. Defaults to server-specified.

The default Kubernetes version is available using the following command.

gcloud container get-server-config
--num-nodes=NUM_NODES; default=3
The number of nodes in the node pool in each of the cluster's zones.
--placement-policy=PLACEMENT_POLICY
Indicates the desired resource policy to use.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --placement-policy my-placement
--placement-type=PLACEMENT_TYPE
Placement type defines how nodes are placed within this node pool.

UNSPECIFIED - No requirements on the placement of nodes. This is the default option.

COMPACT - GKE will attempt to place the nodes in close proximity to each other. This helps reduce communication latency between the nodes, but imposes additional limitations on the node pool size.

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --placement-type=COMPACT

PLACEMENT_TYPE must be one of: UNSPECIFIED, COMPACT.

--preemptible
Create nodes using preemptible VM instances in the new node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --preemptible

New nodes, including ones created by resize or recreate, will use preemptible VM instances. See https://cloud.google.com/kubernetes-engine/docs/preemptible-vm for more information on how to use Preemptible VMs with Kubernetes Engine.

--resource-manager-tags=[KEY=VALUE,…]
Applies the specified comma-separated resource manager tags that have the GCE_FIREWALL purpose to all nodes in the new node pool.

Examples:

gcloud alpha container node-pools create example-node-pool --resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud alpha container node-pools create example-node-pool --resource-manager-tags=my-project/key1=value1
gcloud alpha container node-pools create example-node-pool --resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud alpha container node-pools create example-node-pool --resource-manager-tags=

All nodes, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--sandbox=[type=TYPE]
Enables the requested sandbox on all nodes in the node pool.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --sandbox="type=gvisor"

The only supported type is 'gvisor'.

--secondary-boot-disk=[disk-image=DISK_IMAGE,[mode=MODE],…]
Attaches secondary boot disks to all nodes.
disk-image
(Required) The full resource path to the source disk image to create the secondary boot disks from.
mode
(Optional) The configuration mode for the secondary boot disks. The default value is "CONTAINER_IMAGE_CACHE".
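For example, to preload a container image cache from a disk image (project and image names are placeholders):

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --secondary-boot-disk=disk-image=projects/my-project/global/images/my-image,mode=CONTAINER_IMAGE_CACHE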
--shielded-integrity-monitoring
Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created.
--shielded-secure-boot
The instance will boot with secure boot enabled.
--sole-tenant-node-affinity-file=SOLE_TENANT_NODE_AFFINITY_FILE
JSON/YAML file containing the configuration of the desired sole-tenant nodes that this node pool can be backed by. These rules filter the nodes according to their node affinity labels. A node's affinity labels come from the node template of the group the node is in.

The file should contain a list of JSON/YAML objects. For an example, see https://cloud.google.com/compute/docs/nodes/provisioning-sole-tenant-vms#configure_node_affinity_labels. The following list describes the fields:

key
Corresponds to the node affinity label keys of the Node resource.
operator
Specifies the node selection type. Must be one of: IN (requires Compute Engine to seek out matching nodes) or NOT_IN (requires Compute Engine to avoid certain nodes).
values
Optional. A list of values which correspond to the node affinity label values of the Node resource.
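As a sketch, a file pinning the pool to a hypothetical node group named my-node-group might look like this (the affinity label key shown is illustrative; use the labels your node template actually exposes):

[
  {
    "key": "compute.googleapis.com/node-group-name",
    "operator": "IN",
    "values": ["my-node-group"]
  }
]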
--spot
Create nodes using spot VM instances in the new node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --spot

New nodes, including ones created by resize or recreate, will use spot VM instances.

--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]
Standard rollout policy options for blue-green upgrade.

Batch sizes are specified by either batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-percent=0.3,batch-soak-duration=60s
--storage-pools=STORAGE_POOL,[…]
A list of storage pools where the node pool's boot disks will be provisioned.

STORAGE_POOL must be in the format projects/project/zones/zone/storagePools/storagePool
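For example, with a hypothetical storage pool my-storage-pool in us-central1-a:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --storage-pools=projects/my-project/zones/us-central1-a/storagePools/my-storage-pool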

--system-config-from-file=PATH_TO_FILE
Path of the YAML/JSON file that contains the node configuration, including Linux kernel parameters (sysctls) and kubelet configs.

Examples:

kubeletConfig:
  cpuManagerPolicy: static
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'

List of supported kubelet configs in 'kubeletConfig'.

KEY                 VALUE
cpuManagerPolicy    either 'static' or 'none'
cpuCFSQuota         true or false (enabled by default)
cpuCFSQuotaPeriod   interval (e.g., '100ms')
podPidsLimit        integer (the value must be greater than or equal to 1024 and less than 4194304)
List of supported sysctls in 'linuxConfig'.

KEY                          VALUE
net.core.netdev_max_backlog  Any positive integer, less than 2147483647
net.core.rmem_max            Any positive integer, less than 2147483647
net.core.wmem_default        Any positive integer, less than 2147483647
net.core.wmem_max            Any positive integer, less than 2147483647
net.core.optmem_max          Any positive integer, less than 2147483647
net.core.somaxconn           Must be [128, 2147483647]
net.ipv4.tcp_rmem            Any positive integer tuple
net.ipv4.tcp_wmem            Any positive integer tuple
net.ipv4.tcp_tw_reuse        Must be {0, 1}
kernel.shmmni                Must be [4096, 32768]
kernel.shmmax                Must be [0, 18446744073692774399]
kernel.shmall                Must be [0, 18446744073692774399]

List of supported hugepage sizes in 'hugepageConfig'.

KEY              VALUE
hugepage_size2m  Number of 2M huge pages, any positive integer
hugepage_size1g  Number of 1G huge pages, any positive integer
Allocated hugepage size should not exceed 60% of available memory on the node. For example, c2d-highcpu-4 has 8GB of memory, so the total memory allocated to 2M and 1G hugepages should not exceed 8GB * 0.6 = 4.8GB.

1G hugepages are only available in the following machine families: c3, m2, c2d, c3d, h3, m3, a2, a3, g2.

Note: Updating the system configuration of an existing node pool requires recreation of the nodes, which might cause disruption.

Use a full or relative path to a local file containing the value of system_config.
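For example, assuming the configuration above is saved as ./system-config.yaml:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --system-config-from-file=./system-config.yaml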

--tags=TAG,[TAG,…]
Applies the given Compute Engine tags (comma separated) on all nodes in the new node pool. Example:
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --tags=tag1,tag2

New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--threads-per-core=THREADS_PER_CORE
The number of visible threads per physical core for each node. To disable simultaneous multithreading (SMT) set this to 1.
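For example, to disable SMT on every node in the pool:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --threads-per-core=1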
--tpu-topology=TPU_TOPOLOGY
The desired physical topology for the PodSlice.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --tpu-topology=TPU_TOPOLOGY
--windows-os-version=WINDOWS_OS_VERSION
Specifies the Windows Server image to use when creating a Windows node pool. Valid values are ltsc2019 and ltsc2022, which select the LTSC2019 or LTSC2022 server image respectively. If the node pool doesn't specify a Windows Server image version, ltsc2019 is used by default. WINDOWS_OS_VERSION must be one of: ltsc2019, ltsc2022.
--workload-metadata=WORKLOAD_METADATA
Type of metadata server available to pods running in the node pool. WORKLOAD_METADATA must be one of:
EXPOSED
[DEPRECATED] Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GCE_METADATA
Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GKE_METADATA
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
GKE_METADATA_SERVER
[DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
SECURE
[DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. This is a temporary security solution available while the bootstrapping process for cluster nodes is being redesigned with significant security improvements. This feature is scheduled to be deprecated in the future and later removed.
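For example, on a cluster with Workload Identity enabled:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --workload-metadata=GKE_METADATA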
At most one of these can be specified:
--create-pod-ipv4-range=[KEY=VALUE,…]
Create a new pod range for the node pool. The name and range of the pod range can be customized via optional name and range keys.

name specifies the name of the secondary range to be created.

range specifies the IP range for the new secondary range. This can either be a netmask size (e.g. "/20") or a CIDR range (e.g. "10.0.0.0/20"). If a netmask size is specified, the IP is automatically taken from the free space in the cluster's network.

Must be used in VPC-native clusters. Cannot be used in conjunction with the --pod-ipv4-range option.

Examples:

Create a new pod range with a default name and size.

gcloud alpha container node-pools create node-pool-1 --create-pod-ipv4-range ""

Create a new pod range named my-range with netmask of size 21.

gcloud alpha container node-pools create node-pool-1 --create-pod-ipv4-range name=my-range,range=/21

Create a new pod range with a default name and the range 10.100.0.0/16.

gcloud alpha container node-pools create node-pool-1 --create-pod-ipv4-range range=10.100.0.0/16

Create a new pod range with the name my-range with a default range.

gcloud alpha container node-pools create node-pool-1 --create-pod-ipv4-range name=my-range

--pod-ipv4-range=NAME
Set the pod range to be used as the source for pod IPs for the pods in this node pool. NAME must be the name of an existing subnetwork secondary range in the subnetwork for this cluster.

Must be used in VPC-native clusters. Cannot be used with --create-pod-ipv4-range.

Examples:

Specify a pod range called other-range

gcloud alpha container node-pools create node-pool-1 --pod-ipv4-range other-range
Cluster autoscaling
--enable-autoscaling
Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided. If not already set, --max-nodes or --total-max-nodes must also be set.

--location-policy=LOCATION_POLICY
Location policy specifies the algorithm used when scaling up the node pool.
  • BALANCED - Is a best effort policy that aims to balance the sizes of available zones.
  • ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.

LOCATION_POLICY must be one of: BALANCED, ANY.

--max-nodes=MAX_NODES
Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--min-nodes=MIN_NODES
Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-max-nodes=TOTAL_MAX_NODES
Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-min-nodes=TOTAL_MIN_NODES
Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
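For example, to create a node pool that autoscales between 1 and 5 nodes per zone:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-autoscaling --min-nodes=1 --max-nodes=5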

Specifies the minimum number of nodes to be created when best-effort provisioning is enabled.
--enable-best-effort-provision
Enables best-effort provisioning for nodes.
--min-provision-nodes=MIN_PROVISION_NODES
Specifies the minimum number of nodes to be provisioned during creation.
At most one of these can be specified:
--ephemeral-storage[=[local-ssd-count=LOCAL-SSD-COUNT]]
Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --ephemeral-storage local-ssd-count=2

'local-ssd-count' specifies the number of local SSDs to use to back ephemeral storage. Local SSDs use NVMe interfaces. For first- and second-generation machine types, a nonzero count is required for Local SSDs to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--ephemeral-storage-local-ssd[=[count=COUNT]]
Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --ephemeral-storage-local-ssd count=2

'count' specifies the number of local SSDs to use to back ephemeral storage. Local SSDs use NVMe interfaces. For first- and second-generation machine types, a nonzero count is required for Local SSDs to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-nvme-ssd-block[=[count=COUNT]]
Adds the requested local SSDs to all nodes in the new node pool.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --local-nvme-ssd-block count=2

'count' must be between 1 and 8. New nodes, including ones created by resize or recreate, will have these local SSDs.

For first- and second-generation machine types, a nonzero count field is required for Local SSDs to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-ssd-count=LOCAL_SSD_COUNT
--local-ssd-count is the equivalent of using --local-ssd-volumes with type=scsi,format=fs.

The number of local SSD disks to provision on each node, formatted and mounted in the filesystem.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information.
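For example, to provision two formatted-and-mounted Local SSDs on each node:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --local-ssd-count=2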

--local-ssd-volumes=[[count=COUNT],[type=TYPE],[format=FORMAT],…]
Adds the requested local SSDs to all nodes in the new node pool.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --local-ssd-volumes count=2,type=nvme,format=fs

'count' must be between 1 and 8

'type' must be either scsi or nvme

'format' must be either fs or block

New nodes, including ones created by resize or recreate, will have these local SSDs.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

At most one of these can be specified:
--location=LOCATION
Compute zone or region (e.g. us-central1-a or us-central1) for the cluster. Overrides the default compute/region or compute/zone value for this command invocation. Prefer using this flag over the --region or --zone flags.
--region=REGION
Compute region (e.g. us-central1) for a regional cluster. Overrides the default compute/region property value for this command invocation.
--zone=ZONE, -z ZONE
Compute zone (e.g. us-central1-a) for a zonal cluster. Overrides the default compute/zone property value for this command invocation.
Specifies the reservation for the node pool.
--reservation=RESERVATION
The name of the reservation, required when --reservation-affinity=specific.
--reservation-affinity=RESERVATION_AFFINITY
The type of the reservation for the node pool. RESERVATION_AFFINITY must be one of: any, none, specific.
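For example, to consume a specific reservation named my-reservation (a placeholder):

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --reservation-affinity=specific --reservation=my-reservation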
Options to specify the node identity.
Scopes options.
--scopes=[SCOPE,…]; default="gke-default"
Specifies scopes for the node instances.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --scopes=https://www.googleapis.com/auth/devstorage.read_only
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --scopes=bigquery,storage-rw,compute-ro

Multiple scopes can be specified, separated by commas. Various scopes are automatically added based on feature usage. Such scopes are not added if an equivalent scope already exists.

  • monitoring-write: always added to ensure metrics can be written
  • logging-write: added if Cloud Logging is enabled (--enable-cloud-logging/--logging)
  • monitoring: added if Cloud Monitoring is enabled (--enable-cloud-monitoring/--monitoring)
  • gke-default: added for Autopilot clusters that use the default service account
  • cloud-platform: added for Autopilot clusters that use any other service account

SCOPE can be either the full URI of the scope or an alias. Default scopes are assigned to all instances. Available aliases are:

Alias                  URI
bigquery               https://www.googleapis.com/auth/bigquery
cloud-platform         https://www.googleapis.com/auth/cloud-platform
cloud-source-repos     https://www.googleapis.com/auth/source.full_control
cloud-source-repos-ro  https://www.googleapis.com/auth/source.read_only
compute-ro             https://www.googleapis.com/auth/compute.readonly
compute-rw             https://www.googleapis.com/auth/compute
datastore              https://www.googleapis.com/auth/datastore
default                https://www.googleapis.com/auth/devstorage.read_only
                       https://www.googleapis.com/auth/logging.write
                       https://www.googleapis.com/auth/monitoring.write
                       https://www.googleapis.com/auth/pubsub
                       https://www.googleapis.com/auth/service.management.readonly
                       https://www.googleapis.com/auth/servicecontrol
                       https://www.googleapis.com/auth/trace.append
gke-default            https://www.googleapis.com/auth/devstorage.read_only
                       https://www.googleapis.com/auth/logging.write
                       https://www.googleapis.com/auth/monitoring
                       https://www.googleapis.com/auth/service.management.readonly
                       https://www.googleapis.com/auth/servicecontrol
                       https://www.googleapis.com/auth/trace.append
logging-write          https://www.googleapis.com/auth/logging.write
monitoring             https://www.googleapis.com/auth/monitoring
monitoring-read        https://www.googleapis.com/auth/monitoring.read
monitoring-write       https://www.googleapis.com/auth/monitoring.write
pubsub                 https://www.googleapis.com/auth/pubsub
service-control        https://www.googleapis.com/auth/servicecontrol
service-management     https://www.googleapis.com/auth/service.management.readonly
sql (deprecated)       https://www.googleapis.com/auth/sqlservice
sql-admin              https://www.googleapis.com/auth/sqlservice.admin
storage-full           https://www.googleapis.com/auth/devstorage.full_control
storage-ro             https://www.googleapis.com/auth/devstorage.read_only
storage-rw             https://www.googleapis.com/auth/devstorage.read_write
taskqueue              https://www.googleapis.com/auth/taskqueue
trace                  https://www.googleapis.com/auth/trace.append
userinfo-email         https://www.googleapis.com/auth/userinfo.email
DEPRECATION WARNING: The https://www.googleapis.com/auth/sqlservice account scope and sql alias do not provide SQL instance management capabilities and have been deprecated. Please use https://www.googleapis.com/auth/sqlservice.admin or sql-admin to manage your Google SQL Service instances.
--service-account=SERVICE_ACCOUNT
The Google Cloud Platform Service Account to be used by the node VMs. If a service account is specified, the cloud-platform and userinfo.email scopes are used. If no Service Account is specified, the project default service account is used.
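For example, with a hypothetical node service account my-node-sa in my-project:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --service-account=my-node-sa@my-project.iam.gserviceaccount.com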
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist. These variants are also available:
gcloud container node-pools create
gcloud beta container node-pools create