gcloud alpha container clusters create

NAME
gcloud alpha container clusters create - create a cluster for running containers
SYNOPSIS
gcloud alpha container clusters create NAME [--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]] [--addons=[ADDON,…]] [--allow-route-overlap] [--async] [--autoprovisioning-network-tags=TAGS,[TAGS,…]] [--autoprovisioning-resource-manager-tags=[KEY=VALUE,…]] [--autoscaling-profile=AUTOSCALING_PROFILE] [--boot-disk-kms-key=BOOT_DISK_KMS_KEY] [--cloud-run-config=[load-balancer-type=EXTERNAL,…]] [--cluster-ipv4-cidr=CLUSTER_IPV4_CIDR] [--cluster-secondary-range-name=NAME] [--cluster-version=CLUSTER_VERSION] [--create-subnetwork=[KEY=VALUE,…]] [--database-encryption-key=DATABASE_ENCRYPTION_KEY] [--default-max-pods-per-node=DEFAULT_MAX_PODS_PER_NODE] [--disable-default-snat] [--disable-pod-cidr-overprovision] [--disk-size=DISK_SIZE] [--disk-type=DISK_TYPE] [--enable-autorepair] [--no-enable-autoupgrade] [--enable-cilium-clusterwide-network-policy] [--enable-cloud-logging] [--enable-cloud-monitoring] [--enable-cloud-run-alpha] [--enable-confidential-nodes] [--enable-confidential-storage] [--enable-cost-allocation] [--enable-dataplane-v2] [--enable-fleet] [--enable-fqdn-network-policy] [--enable-gke-oidc] [--enable-google-cloud-access] [--enable-gvnic] [--enable-identity-service] [--enable-image-streaming] [--enable-intra-node-visibility] [--enable-ip-alias] [--enable-kubernetes-alpha] [--enable-kubernetes-unstable-apis=API,[API,…]] [--enable-l4-ilb-subsetting] [--enable-legacy-authorization] [--enable-logging-monitoring-system-only] [--enable-managed-prometheus] [--enable-master-global-access] [--enable-multi-networking] [--enable-network-policy] [--enable-pod-security-policy] [--enable-secret-manager] [--enable-service-externalips] [--enable-shielded-nodes] [--enable-stackdriver-kubernetes] [--enable-vertical-pod-autoscaling] [--fleet-project=PROJECT_ID_OR_NUMBER] [--gateway-api=GATEWAY_API] 
[--identity-provider=IDENTITY_PROVIDER] [--image-type=IMAGE_TYPE] [--in-transit-encryption=IN_TRANSIT_ENCRYPTION] [--ipv6-access-type=IPV6_ACCESS_TYPE] [--issue-client-certificate] [--istio-config=[auth=MTLS_PERMISSIVE,…]] [--labels=[KEY=VALUE,…]] [--linux-sysctls=KEY=VALUE,[KEY=VALUE,…]] [--logging=[COMPONENT,…]] [--logging-variant=LOGGING_VARIANT] [--machine-type=MACHINE_TYPE, -m MACHINE_TYPE] [--max-nodes-per-pool=MAX_NODES_PER_POOL] [--max-pods-per-node=MAX_PODS_PER_NODE] [--max-surge-upgrade=MAX_SURGE_UPGRADE; default=1] [--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE] [--metadata=KEY=VALUE,[KEY=VALUE,…]] [--metadata-from-file=KEY=LOCAL_FILE_PATH,[…]] [--min-cpu-platform=PLATFORM] [--monitoring=[COMPONENT,…]] [--network=NETWORK] [--network-performance-configs=[PROPERTY1=VALUE1,…]] [--node-labels=[NODE_LABEL,…]] [--node-pool-name=NODE_POOL_NAME] [--node-taints=[NODE_TAINT,…]] [--node-version=NODE_VERSION] [--notification-config=[pubsub=ENABLED|DISABLED,pubsub-topic=TOPIC,…]] [--num-nodes=NUM_NODES; default=3] [--placement-policy=PLACEMENT_POLICY] [--placement-type=PLACEMENT_TYPE] [--preemptible] [--private-endpoint-subnetwork=NAME] [--private-ipv6-google-access-type=PRIVATE_IPV6_GOOGLE_ACCESS_TYPE] [--release-channel=CHANNEL] [--resource-manager-tags=[KEY=VALUE,…]] [--security-group=SECURITY_GROUP] [--security-posture=SECURITY_POSTURE] [--services-ipv4-cidr=CIDR] [--services-secondary-range-name=NAME] [--shielded-integrity-monitoring] [--shielded-secure-boot] [--spot] [--stack-type=STACK_TYPE] [--storage-pools=STORAGE_POOL,[…]] [--subnetwork=SUBNETWORK] [--system-config-from-file=SYSTEM_CONFIG_FROM_FILE] [--tags=TAG,[TAG,…]] [--threads-per-core=THREADS_PER_CORE] [--workload-metadata=WORKLOAD_METADATA] [--workload-pool=WORKLOAD_POOL] [--workload-vulnerability-scanning=WORKLOAD_VULNERABILITY_SCANNING] [--additional-zones=ZONE,[ZONE,…]     | --node-locations=ZONE,[ZONE,…]] [--binauthz-policy-bindings=[name=BINAUTHZ_POLICY] 
--binauthz-evaluation-mode=BINAUTHZ_EVALUATION_MODE     | --enable-binauthz] [--cluster-dns=CLUSTER_DNS --cluster-dns-domain=CLUSTER_DNS_DOMAIN --cluster-dns-scope=CLUSTER_DNS_SCOPE] [--dataplane-v2-observability-mode=DATAPLANE_V2_OBSERVABILITY_MODE     | --disable-dataplane-v2-flow-observability     | --enable-dataplane-v2-flow-observability] [--disable-dataplane-v2-metrics     | --enable-dataplane-v2-metrics] [[--enable-autoprovisioning : --autoprovisioning-config-file=AUTOPROVISIONING_CONFIG_FILE | [--max-cpu=MAX_CPU --max-memory=MAX_MEMORY : --autoprovisioning-image-type=AUTOPROVISIONING_IMAGE_TYPE --autoprovisioning-locations=ZONE,[ZONE,…] --autoprovisioning-min-cpu-platform=PLATFORM --min-cpu=MIN_CPU --min-memory=MIN_MEMORY --autoprovisioning-max-surge-upgrade=AUTOPROVISIONING_MAX_SURGE_UPGRADE --autoprovisioning-max-unavailable-upgrade=AUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE --autoprovisioning-node-pool-soak-duration=AUTOPROVISIONING_NODE_POOL_SOAK_DURATION --autoprovisioning-standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…] --enable-autoprovisioning-blue-green-upgrade | --enable-autoprovisioning-surge-upgrade --autoprovisioning-scopes=[SCOPE,…] --autoprovisioning-service-account=AUTOPROVISIONING_SERVICE_ACCOUNT --enable-autoprovisioning-autorepair --enable-autoprovisioning-autoupgrade [--max-accelerator=[type=TYPE,count=COUNT,…] : --min-accelerator=[type=TYPE,count=COUNT,…]]]]] [--enable-autoscaling --location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES] [--enable-master-authorized-networks --master-authorized-networks=NETWORK,[NETWORK,…]] [--enable-network-egress-metering --enable-resource-consumption-metering --resource-usage-bigquery-dataset=RESOURCE_USAGE_BIGQUERY_DATASET] [--enable-private-endpoint --enable-private-nodes --master-ipv4-cidr=MASTER_IPV4_CIDR --private-cluster] 
[--enable-tpu --enable-tpu-service-networking     | --tpu-ipv4-cidr=CIDR] [--ephemeral-storage[=[local-ssd-count=LOCAL-SSD-COUNT]]     | --ephemeral-storage-local-ssd[=[count=COUNT]]     | --local-nvme-ssd-block[=[count=COUNT]]     | --local-ssd-count=LOCAL_SSD_COUNT     | --local-ssd-volumes=[[count=COUNT],[type=TYPE],[format=FORMAT],…]] [--location=LOCATION     | --region=REGION     | --zone=ZONE, -z ZONE] [--maintenance-window=START_TIME     | --maintenance-window-end=TIME_STAMP --maintenance-window-recurrence=RRULE --maintenance-window-start=TIME_STAMP] [--password=PASSWORD --enable-basic-auth     | --username=USERNAME, -u USERNAME] [--reservation=RESERVATION --reservation-affinity=RESERVATION_AFFINITY] [--scopes=[SCOPE,…]; default="gke-default" --service-account=SERVICE_ACCOUNT] [--security-profile=SECURITY_PROFILE --no-security-profile-runtime-rules] [GCLOUD_WIDE_FLAG]
DESCRIPTION
(ALPHA) Create a cluster for running containers.
EXAMPLES
To create a cluster with the default configuration, run:
gcloud alpha container clusters create sample-cluster
POSITIONAL ARGUMENTS
NAME
The name of the cluster to create.

The name may contain only lowercase alphanumerics and '-', must start with a letter and end with an alphanumeric, and must be no longer than 40 characters.

FLAGS
--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]
Attaches accelerators (e.g. GPUs) to all nodes.
type
(Required) The specific type (e.g. nvidia-tesla-k80 for NVIDIA Tesla K80) of accelerator to attach to the instances. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Optional) The number of accelerators to attach to the instances. The default value is 1.
gpu-driver-version
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one of:
`default`: Install the default driver version.
`latest`: Install the latest available driver version. Available only for nodes that use Container-Optimized OS.
`disabled`: Skip automatic driver installation. You must manually install a driver after you create the cluster. If you omit the `gpu-driver-version` flag, this is the default option. To learn how to manually install the GPU driver, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers
gpu-partition-size
(Optional) The GPU partition size used when running multi-instance GPUs. For information about multi-instance GPUs, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi
gpu-sharing-strategy
(Optional) The GPU sharing strategy (e.g. time-sharing) to use. For information about GPU sharing, refer to: https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus
max-shared-clients-per-gpu
(Optional) The max number of containers allowed to share each GPU on the node. This field is used together with gpu-sharing-strategy.
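Combining the keys above, a cluster with one NVIDIA Tesla K80 per node and automatic installation of the latest driver might be created as follows (the cluster name and zone are illustrative):

```shell
# Attach one nvidia-tesla-k80 GPU to each node and install the latest
# driver automatically (cluster name and zone are hypothetical).
gcloud alpha container clusters create gpu-cluster \
    --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-k80,count=1,gpu-driver-version=latest
```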
--addons=[ADDON,…]
Addons (https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters#Cluster.AddonsConfig) are additional Kubernetes cluster components. Addons specified by this flag will be enabled. The others will be disabled. Default addons: HttpLoadBalancing, HorizontalPodAutoscaling. The Istio addon is deprecated and removed. For more information and migration, see https://cloud.google.com/istio/docs/istio-on-gke/migrate-to-anthos-service-mesh. ADDON must be one of: HttpLoadBalancing, HorizontalPodAutoscaling, KubernetesDashboard, NetworkPolicy, NodeLocalDNS, ConfigConnector, GcePersistentDiskCsiDriver, GcpFilestoreCsiDriver, BackupRestore, GcsFuseCsiDriver, Istio, CloudBuild, CloudRun.
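Because addons not listed are disabled, include the defaults explicitly when adding an extra addon. For example, to keep the defaults and also enable NodeLocalDNS (cluster name is illustrative):

```shell
# Enable the default addons plus NodeLocalDNS; any addon not listed
# here is disabled (cluster name is hypothetical).
gcloud alpha container clusters create example-cluster \
    --addons=HttpLoadBalancing,HorizontalPodAutoscaling,NodeLocalDNS
```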
--allow-route-overlap
Allows the provided cluster CIDRs to overlap with existing routes that are less specific and do not terminate at a VM.

When enabled, --cluster-ipv4-cidr must be fully specified (e.g. 10.96.0.0/14, not just /14). If --enable-ip-alias is also specified, both --cluster-ipv4-cidr and --services-ipv4-cidr must be fully specified.

Must be used in conjunction with '--enable-ip-alias' or '--no-enable-ip-alias'.

--async
Return immediately, without waiting for the operation in progress to complete.
--autoprovisioning-network-tags=TAGS,[TAGS,…]
Applies the given Compute Engine tags (comma separated) on all nodes in the auto-provisioned node pools of the new Standard cluster or the new Autopilot cluster.

Examples:

gcloud alpha container clusters create example-cluster --autoprovisioning-network-tags=tag1,tag2

New nodes in auto-provisioned node pools, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--autoprovisioning-resource-manager-tags=[KEY=VALUE,…]
Applies the specified comma-separated resource manager tags, which must have the GCE_FIREWALL purpose, to all nodes in the new Autopilot cluster or to all auto-provisioned nodes in the new Standard cluster.

Examples:

gcloud alpha container clusters create example-cluster --autoprovisioning-resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud alpha container clusters create example-cluster --autoprovisioning-resource-manager-tags=my-project/key1=value1
gcloud alpha container clusters create example-cluster --autoprovisioning-resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud alpha container clusters create example-cluster --autoprovisioning-resource-manager-tags=

All nodes in an Autopilot cluster or all auto-provisioned nodes in a Standard cluster, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--autoscaling-profile=AUTOSCALING_PROFILE
Set the autoscaling behaviour; choices are 'optimize-utilization' and 'balanced'. Default is 'balanced'.
--boot-disk-kms-key=BOOT_DISK_KMS_KEY
The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
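A sketch using the resource ID format described above (the project, location, key ring, and key names are placeholders):

```shell
# Encrypt node boot disks with a customer-managed Cloud KMS key
# (all resource names below are placeholders).
gcloud alpha container clusters create example-cluster \
    --boot-disk-kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```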
--cloud-run-config=[load-balancer-type=EXTERNAL,…]
Configurations for the Cloud Run addon. Requires --addons=CloudRun for create and --update-addons=CloudRun=ENABLED for update.
load-balancer-type
(Optional) Type of load balancer: EXTERNAL or INTERNAL.

Examples:

gcloud alpha container clusters create example-cluster --cloud-run-config=load-balancer-type=INTERNAL
--cluster-ipv4-cidr=CLUSTER_IPV4_CIDR
The IP address range for the pods in this cluster in CIDR notation (e.g. 10.0.0.0/14). Prior to Kubernetes version 1.7.0 this must be a subset of 10.0.0.0/8; however, starting with version 1.7.0 it can be any RFC 1918 IP range.

If you omit this option, a range is chosen automatically. The automatically chosen range is randomly selected from 10.0.0.0/8 and will not include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. You should specify --cluster-ipv4-cidr to prevent conflicts.
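To pin the pod range explicitly rather than relying on automatic selection (the range shown is illustrative):

```shell
# Explicitly reserve 10.0.0.0/14 for pod IPs to avoid conflicts with
# peered VPCs or reserved addresses (range is illustrative).
gcloud alpha container clusters create example-cluster \
    --cluster-ipv4-cidr=10.0.0.0/14
```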

--cluster-secondary-range-name=NAME
Set the secondary range to be used as the source for pod IPs. Alias ranges will be allocated from this secondary range. NAME must be the name of an existing secondary range in the cluster subnetwork. Cannot be specified unless '--enable-ip-alias' option is also specified. Cannot be used with '--create-subnetwork' option.
--cluster-version=CLUSTER_VERSION
The Kubernetes version to use for the master and nodes. Defaults to server-specified.

The default Kubernetes version is available using the following command.

gcloud container get-server-config
--create-subnetwork=[KEY=VALUE,…]
Create a new subnetwork for the cluster. The name and range of the subnetwork can be customized via optional 'name' and 'range' key-value pairs.

'name' specifies the name of the subnetwork to be created.

'range' specifies the IP range for the new subnetwork. This can either be a netmask size (e.g. '/20') or a CIDR range (e.g. '10.0.0.0/20'). If a netmask size is specified, the IP is automatically taken from the free space in the cluster's network.

Examples:

Create a new subnetwork with a default name and size.

gcloud alpha container clusters create --create-subnetwork ""

Create a new subnetwork named "my-subnet" with netmask of size 21.

gcloud alpha container clusters create --create-subnetwork name=my-subnet,range=/21

Create a new subnetwork with a default name with the primary range of 10.100.0.0/16.

gcloud alpha container clusters create --create-subnetwork range=10.100.0.0/16

Create a new subnetwork with the name "my-subnet" with a default range.

gcloud alpha container clusters create --create-subnetwork name=my-subnet
Cannot be specified unless '--enable-ip-alias' option is also specified. Cannot be used in conjunction with '--subnetwork' option.
--database-encryption-key=DATABASE_ENCRYPTION_KEY
Enable Database Encryption.

Enable database encryption that will be used to encrypt Kubernetes Secrets at the application layer. The key provided should be the resource ID in the format of projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets.
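A sketch using the resource ID format described above (all resource names are placeholders):

```shell
# Encrypt Kubernetes Secrets at the application layer with a Cloud KMS
# key (project, ring, and key names are placeholders).
gcloud alpha container clusters create example-cluster \
    --database-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```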

--default-max-pods-per-node=DEFAULT_MAX_PODS_PER_NODE
The default max number of pods per node for node pools in the cluster.

This flag sets the default max-pods-per-node for node pools in the cluster. If --max-pods-per-node is not specified explicitly for a node pool, this flag value will be used.

Must be used in conjunction with '--enable-ip-alias'.
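Since the flag requires --enable-ip-alias, a combined invocation might look like this (the value 110 is illustrative):

```shell
# Cap every node pool at 110 pods per node unless a pool overrides it
# with --max-pods-per-node (the value is illustrative).
gcloud alpha container clusters create example-cluster \
    --enable-ip-alias \
    --default-max-pods-per-node=110
```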

--disable-default-snat
Disable default source NAT rules applied in cluster nodes.

By default, cluster nodes perform source network address translation (SNAT) for packets sent from Pod IP address sources to destination IP addresses that are not in the non-masquerade CIDRs list. SNAT changes the packet's source IP address to the node's internal IP address. For more details about SNAT and IP masquerading, see: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#how_ipmasq_works

When this flag is set, GKE does not perform SNAT for packets sent to any destination. You must set this flag if the cluster uses privately reused public IPs.

The --disable-default-snat flag is only applicable to private GKE clusters, which are inherently VPC-native. Thus, --disable-default-snat requires that you also set --enable-ip-alias and --enable-private-nodes.

--disable-pod-cidr-overprovision
Disables Pod CIDR overprovisioning on nodes. Pod CIDR overprovisioning is enabled by default.
--disk-size=DISK_SIZE
Size for node VM boot disks in GB. Defaults to 100 GB.
--disk-type=DISK_TYPE
Type of the node VM boot disk. For version 1.24 and later, defaults to pd-balanced. For versions earlier than 1.24, defaults to pd-standard. DISK_TYPE must be one of: pd-standard, pd-ssd, pd-balanced, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput.
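For example, to provision smaller SSD boot disks instead of the defaults (the size and type choices are illustrative):

```shell
# Use 50 GB pd-ssd boot disks for the default node pool
# (size and type are illustrative choices).
gcloud alpha container clusters create example-cluster \
    --disk-type=pd-ssd \
    --disk-size=50
```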
--enable-autorepair
Enable node autorepair feature for a cluster's default node pool(s).
gcloud alpha container clusters create example-cluster --enable-autorepair

Node autorepair is enabled by default for clusters that use COS, COS_CONTAINERD, UBUNTU, or UBUNTU_CONTAINERD as a base image; use --no-enable-autorepair to disable.

See https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.

--enable-autoupgrade
Sets autoupgrade feature for a cluster's default node pool(s).
gcloud alpha container clusters create example-cluster --enable-autoupgrade

See https://cloud.google.com/kubernetes-engine/docs/node-auto-upgrades for more info.

Enabled by default, use --no-enable-autoupgrade to disable.

--enable-cilium-clusterwide-network-policy
Enable Cilium Clusterwide Network Policies on the cluster. Disabled by default.
--enable-cloud-logging
(DEPRECATED) Automatically send logs from the cluster to the Google Cloud Logging API.

Legacy Logging and Monitoring is deprecated. Thus, flag --enable-cloud-logging is also deprecated and will be removed in an upcoming release. Please use --logging (optionally with --monitoring). For more details, please read: https://cloud.google.com/stackdriver/docs/solutions/gke/installing.

--enable-cloud-monitoring
(DEPRECATED) Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting.

Legacy Logging and Monitoring is deprecated. Thus, flag --enable-cloud-monitoring is also deprecated. Please use --monitoring (optionally with --logging). For more details, please read: https://cloud.google.com/stackdriver/docs/solutions/gke/installing.

--enable-cloud-run-alpha
Enable Cloud Run alpha features on this cluster. Selecting this option will result in the cluster having all Cloud Run alpha API groups and features turned on.

Cloud Run alpha clusters are not covered by the Cloud Run SLA and should not be used for production workloads.

--enable-confidential-nodes
Enable confidential nodes for the cluster. Enabling Confidential Nodes will create nodes using Confidential VM https://cloud.google.com/compute/confidential-vm/docs/about-cvm.
--enable-confidential-storage
Enable confidential storage for the cluster. Enabling Confidential Storage will create boot disks in confidential mode.
--enable-cost-allocation
Enable the cost management feature.

When enabled, you can get informational GKE cost breakdowns by cluster, namespace and label in your billing data exported to BigQuery (https://cloud.google.com/billing/docs/how-to/export-data-bigquery).

--enable-dataplane-v2
Enables the new eBPF dataplane for GKE clusters, which is required for network security, scalability, and visibility features.
--enable-fleet
Set the cluster's project as the fleet host project. This registers the cluster to a fleet in the same project. To register the cluster to a fleet in a different project, use --fleet-project=FLEET_HOST_PROJECT. Example: $ gcloud alpha container clusters create --enable-fleet
--enable-fqdn-network-policy
Enable FQDN Network Policies on the cluster. FQDN Network Policies are disabled by default.
--enable-gke-oidc
(DEPRECATED) Enable GKE OIDC authentication on the cluster.

When enabled, users can authenticate to the Kubernetes cluster after properly setting the OIDC config.

GKE OIDC is by default disabled when creating a new cluster. To disable GKE OIDC in an existing cluster, explicitly set flag --no-enable-gke-oidc.

GKE OIDC is being replaced by Identity Service across Anthos and GKE. Thus, the --enable-gke-oidc flag is also deprecated. Please use --enable-identity-service to enable the Identity Service component.

--enable-google-cloud-access
When you enable Google Cloud Access, any public IP addresses owned by Google Cloud can reach the public control plane endpoint of your cluster.
--enable-gvnic
Enable the use of gVNIC for this cluster. Requires re-creation of nodes using either a node-pool upgrade or node-pool creation.
--enable-identity-service
Enable Identity Service component on the cluster.

When enabled, users can authenticate to the Kubernetes cluster with external identity providers.

Identity Service is by default disabled when creating a new cluster. To disable Identity Service in an existing cluster, explicitly set flag --no-enable-identity-service.

--enable-image-streaming
Specifies whether to enable image streaming on the cluster.
--enable-intra-node-visibility
Enable Intra-node visibility for this cluster.

Enabling intra-node visibility makes your intra-node pod-to-pod traffic visible to the networking fabric. With this feature, you can use VPC flow logging or other VPC features for intra-node traffic.

Enabling it on an existing cluster causes the cluster master and the cluster nodes to restart, which might cause a disruption.

--enable-ip-alias
Enable use of alias IPs (https://cloud.google.com/compute/docs/alias-ip/) for Pod IPs. This will require at least two secondary ranges in the subnetwork, one for the pod IPs and another to reserve space for the services range.
--enable-kubernetes-alpha
Enable Kubernetes alpha features on this cluster. Selecting this option will result in the cluster having all Kubernetes alpha API groups and features turned on. Cluster upgrades (both manual and automatic) will be disabled and the cluster will be automatically deleted after 30 days.

Alpha clusters are not covered by the Kubernetes Engine SLA and should not be used for production workloads.

--enable-kubernetes-unstable-apis=API,[API,…]
Enable Kubernetes beta API features on this cluster. Beta APIs are not expected to be production ready and should be avoided in production-grade environments.
--enable-l4-ilb-subsetting
Enable Subsetting for L4 ILB services created on this cluster.
--enable-legacy-authorization
Enables legacy ABAC authorization for the cluster. User rights are granted through the use of policies which combine attributes together. For a detailed look at these properties and related formats, see https://kubernetes.io/docs/admin/authorization/abac/. To use RBAC permissions instead, create or update your cluster with the option --no-enable-legacy-authorization.
--enable-logging-monitoring-system-only
(DEPRECATED) Enable Cloud Operations system-only monitoring and logging.

The --enable-logging-monitoring-system-only flag is deprecated and will be removed in an upcoming release. Please use --logging and --monitoring instead. For more information, please read: https://cloud.google.com/stackdriver/docs/solutions/gke/installing.

--enable-managed-prometheus
Enables managed collection for Managed Service for Prometheus in the cluster.

See https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed#enable-mgdcoll-gke for more info.

Enabled by default for cluster versions 1.27 or greater, use --no-enable-managed-prometheus to disable.

--enable-master-global-access
Use with private clusters to allow access to the master's private endpoint from any Google Cloud region or on-premises environment regardless of the private cluster's region.
--enable-multi-networking
Enables multi-networking on the cluster. Multi-networking is disabled by default.
--enable-network-policy
Enable network policy enforcement for this cluster. If you are enabling network policy on an existing cluster the network policy addon must first be enabled on the master by using --update-addons=NetworkPolicy=ENABLED flag.
--enable-pod-security-policy
Enables the pod security policy admission controller for the cluster. The pod security policy admission controller adds fine-grained pod create and update authorization controls through the PodSecurityPolicy API objects. For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies.
--enable-secret-manager
Enables the Secret Manager CSI driver provider component. See https://secrets-store-csi-driver.sigs.k8s.io/introduction and https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp

To disable in an existing cluster, explicitly set the --no-enable-secret-manager flag.

--enable-service-externalips
Enables use of services with externalIPs field.
--enable-shielded-nodes
Enable Shielded Nodes for this cluster. Enabling Shielded Nodes will enable a more secure Node credential bootstrapping implementation. Starting with version 1.18, clusters will have Shielded GKE nodes by default.
--enable-stackdriver-kubernetes
(DEPRECATED) Enable Cloud Operations for GKE.

The --enable-stackdriver-kubernetes flag is deprecated and will be removed in an upcoming release. Please use --logging and --monitoring instead. For more information, please read: https://cloud.google.com/stackdriver/docs/solutions/gke/installing.

Flags for vertical pod autoscaling:

At most one of these can be specified:

--enable-vertical-pod-autoscaling
Enable vertical pod autoscaling for a cluster.
--fleet-project=PROJECT_ID_OR_NUMBER
Sets fleet host project for the cluster. If specified, the current cluster will be registered as a fleet membership under the fleet host project.

Example: $ gcloud alpha container clusters create --fleet-project=my-project

--gateway-api=GATEWAY_API
Enables GKE Gateway controller in this cluster. The value of the flag specifies which Open Source Gateway API release channel will be used to define Gateway resources. GATEWAY_API must be one of:
disabled
Gateway controller will be disabled in the cluster.
standard
Gateway controller will be enabled in the cluster. Resource definitions from the standard OSS Gateway API release channel will be installed.
--identity-provider=IDENTITY_PROVIDER
Enable a third-party (3P) identity provider on the cluster.
--image-type=IMAGE_TYPE
The image type to use for the cluster. Defaults to server-specified.

Image Type specifies the base OS that the nodes in the cluster will run on. If an image type is specified, that will be assigned to the cluster and all future upgrades will use the specified image type. If it is not specified the server will pick the default image type.

The default image type and the list of valid image types are available using the following command.

gcloud container get-server-config
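After checking the valid image types with gcloud container get-server-config, the node OS can be pinned explicitly. COS_CONTAINERD is assumed here to be a valid image type; verify against the server config:

```shell
# Pin the node base OS; future upgrades keep the same image type.
# COS_CONTAINERD is an assumption -- confirm it is listed by
# `gcloud container get-server-config` before relying on it.
gcloud alpha container clusters create example-cluster \
    --image-type=COS_CONTAINERD
```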
--in-transit-encryption=IN_TRANSIT_ENCRYPTION
Enable Dataplane V2 in-transit encryption. Dataplane v2 in-transit encryption is disabled by default. IN_TRANSIT_ENCRYPTION must be one of: inter-node-transparent, none.
--ipv6-access-type=IPV6_ACCESS_TYPE
IPv6 access type of the subnetwork. Defaults to 'external'. IPV6_ACCESS_TYPE must be one of: external, internal.
--issue-client-certificate
Issue a TLS client certificate with admin permissions.

When enabled, the certificate and private key pair will be present in MasterAuth field of the Cluster object. For cluster versions before 1.12, a client certificate will be issued by default. As of 1.12, client certificates are disabled by default.

--istio-config=[auth=MTLS_PERMISSIVE,…]
(REMOVED) Configurations for Istio addon, requires --addons contains Istio for create, or --update-addons Istio=ENABLED for update.
auth
(Optional) Type of auth MTLS_PERMISSIVE or MTLS_STRICT.

Examples:

gcloud alpha container clusters create example-cluster --istio-config=auth=MTLS_PERMISSIVE

The --istio-config flag is no longer supported. For more information and migration, see https://cloud.google.com/istio/docs/istio-on-gke/migrate-to-anthos-service-mesh.

--labels=[KEY=VALUE,…]
Labels to apply to the Google Cloud resources in use by the Kubernetes Engine cluster. These are unrelated to Kubernetes labels.

Examples:

gcloud alpha container clusters create example-cluster --labels=label_a=value1,label_b=,label_c=value3
--linux-sysctls=KEY=VALUE,[KEY=VALUE,…]
(DEPRECATED) Linux kernel parameters to be applied to all nodes in the new cluster's default node pool as well as the pods running on the nodes.

Examples:

gcloud alpha container clusters create example-cluster --linux-sysctls="net.core.somaxconn=1024,net.ipv4.tcp_rmem=4096 87380 6291456"

The --linux-sysctls flag is deprecated. Please use --system-config-from-file instead.

--logging=[COMPONENT,…]
Set the components that have logging enabled. Valid component values are: SYSTEM, WORKLOAD, API_SERVER, CONTROLLER_MANAGER, SCHEDULER, NONE.

For more information, see https://cloud.google.com/stackdriver/docs/solutions/gke/installing#available-logs

Examples:

gcloud alpha container clusters create --logging=SYSTEM
gcloud alpha container clusters create --logging=SYSTEM,API_SERVER,WORKLOAD
gcloud alpha container clusters create --logging=NONE
--logging-variant=LOGGING_VARIANT
Specifies the logging variant that will be deployed on all the nodes in the cluster. Valid logging variants are MAX_THROUGHPUT, DEFAULT. If no value is specified, DEFAULT is used. LOGGING_VARIANT must be one of:
DEFAULT
'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT
'MAX_THROUGHPUT' variant requests more node resources and can achieve logging throughput of up to 10 MB per second.
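For a cluster expected to emit heavy log volume, the higher-throughput variant can be selected at creation (the cluster name is illustrative):

```shell
# Deploy the MAX_THROUGHPUT logging agent on every node, trading extra
# node resources for higher logging throughput.
gcloud alpha container clusters create example-cluster \
    --logging-variant=MAX_THROUGHPUT
```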
--machine-type=MACHINE_TYPE, -m MACHINE_TYPE
The type of machine to use for nodes. Defaults to e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list

You can also specify custom machine types by providing a string with the format "custom-CPUS-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GB of RAM:

gcloud alpha container clusters create high-mem-pool --machine-type=custom-2-12288
--max-nodes-per-pool=MAX_NODES_PER_POOL
The maximum number of nodes to allocate per default initial node pool. Kubernetes Engine will automatically create enough node pools such that each node pool contains fewer than --max-nodes-per-pool nodes. Defaults to 1000 nodes, but can be set as low as 100 nodes per pool on initial create.
--max-pods-per-node=MAX_PODS_PER_NODE
The max number of pods per node for this node pool.

This flag sets the maximum number of pods that can be run at the same time on a node. This will override the value given with --default-max-pods-per-node flag set at the cluster level.

Must be used in conjunction with '--enable-ip-alias'.
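Because the flag requires --enable-ip-alias, a minimal sketch looks like this (the value 64 is illustrative):

```shell
# Limit the default node pool to 64 pods per node, overriding any
# cluster-level --default-max-pods-per-node (value is illustrative).
gcloud alpha container clusters create example-cluster \
    --enable-ip-alias \
    --max-pods-per-node=64
```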

--max-surge-upgrade=MAX_SURGE_UPGRADE; default=1
Number of extra (surge) nodes to be created on each upgrade of a node pool.

Specifies the number of extra (surge) nodes to be created during this node pool's upgrades. For example, running the following command will result in creating an extra node each time the node pool is upgraded:

gcloud alpha container clusters create example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=0

Must be used in conjunction with '--max-unavailable-upgrade'.

--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of a node pool.

Specifies the number of nodes that can be unavailable at the same time while this node pool is being upgraded. For example, running the following command will result in having 3 nodes being upgraded in parallel (1 + 2), but keeping always at least 3 (5 - 2) available each time the node pool is upgraded:

gcloud alpha container clusters create example-cluster --num-nodes=5 --max-surge-upgrade=1 --max-unavailable-upgrade=2

Must be used in conjunction with '--max-surge-upgrade'.

--metadata=KEY=VALUE,[KEY=VALUE,…]
Compute Engine metadata to be made available to the guest operating system running on nodes within the node pool.

Each metadata entry is a key/value pair separated by an equals sign. Metadata keys must be unique and less than 128 bytes in length. Values must be less than or equal to 32,768 bytes in length. The total size of all keys and values must be less than 512 KB. Multiple arguments can be passed to this flag. For example:

--metadata key-1=value-1,key-2=value-2,key-3=value-3

Additionally, the following keys are reserved for use by Kubernetes Engine:

  • cluster-location
  • cluster-name
  • cluster-uid
  • configure-sh
  • enable-os-login
  • gci-update-strategy
  • gci-ensure-gke-docker
  • instance-template
  • kube-env
  • startup-script
  • user-data

Google Kubernetes Engine sets the following keys by default:

  • serial-port-logging-enable

See also Compute Engine's documentation on storing and retrieving instance metadata.

--metadata-from-file=KEY=LOCAL_FILE_PATH,[…]
Same as --metadata except that the value for the entry will be read from a local file.
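For example, assuming a local file config.txt (a hypothetical file name) whose contents should become the value of metadata key key-1:

gcloud alpha container clusters create example-cluster --metadata-from-file=key-1=config.txt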
--min-cpu-platform=PLATFORM
When specified, the nodes for the new cluster's default node pool will be scheduled on hosts with the specified CPU platform or a newer one.

Examples:

gcloud alpha container clusters create example-cluster --min-cpu-platform=PLATFORM

To list available CPU platforms in given zone, run:

gcloud beta compute zones describe ZONE --format="value(availableCpuPlatforms)"

CPU platform selection is available only in selected zones.

--monitoring=[COMPONENT,…]
Set the components that have monitoring enabled. Valid component values are: SYSTEM, WORKLOAD (Deprecated), NONE, API_SERVER, CONTROLLER_MANAGER, SCHEDULER, DAEMONSET, DEPLOYMENT, HPA, POD, STATEFULSET, STORAGE

For more information, see https://cloud.google.com/stackdriver/docs/solutions/gke/installing#available-metrics

Examples:

gcloud alpha container clusters create --monitoring=SYSTEM,API_SERVER,POD
gcloud alpha container clusters create --monitoring=NONE
--network=NETWORK
The Compute Engine Network that the cluster will connect to. Google Kubernetes Engine will use this network when creating routes and firewalls for the clusters. Defaults to the 'default' network.
--network-performance-configs=[PROPERTY1=VALUE1,…]
Configures network performance settings for the cluster. Node pools can override with their own settings.
total-egress-bandwidth-tier
Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED,TIER_1].

See https://cloud.google.com/compute/docs/networking/configure-vm-with-high-bandwidth-configuration for more information.
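For example, to request Tier 1 egress bandwidth for the cluster (TIER_1 is one of the allowed tier values listed above):

gcloud alpha container clusters create example-cluster --network-performance-configs=total-egress-bandwidth-tier=TIER_1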

--node-labels=[NODE_LABEL,…]
Applies the given Kubernetes labels on all nodes in the new node pool.

Examples:

gcloud alpha container clusters create example-cluster --node-labels=label-a=value1,label-2=value2

New nodes, including ones created by resize or recreate, will have these labels on the Kubernetes API node object and can be used in nodeSelectors. See http://kubernetes.io/docs/user-guide/node-selection/ for examples.

Note that Kubernetes labels, which are intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Google Kubernetes Engine labels, which are used for tracking billing and usage information.

--node-pool-name=NODE_POOL_NAME
Name of the initial node pool that will be created for the cluster.

Specifies the name to use for the initial node pool that will be created with the cluster. If the settings specified require multiple node pools to be created, the name for each pool will be prefixed by this name. For example running the following will result in three node pools being created, example-node-pool-0, example-node-pool-1 and example-node-pool-2:

gcloud alpha container clusters create example-cluster --num-nodes 9 --max-nodes-per-pool 3 --node-pool-name example-node-pool
--node-taints=[NODE_TAINT,…]
Applies the given Kubernetes taints to all nodes in the default node pool(s) of the new cluster. Taints can be used with tolerations to control pod scheduling.

Examples:

gcloud alpha container clusters create example-cluster --node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule

To read more about node-taints, see https://cloud.google.com/kubernetes-engine/docs/node-taints.

--node-version=NODE_VERSION
The Kubernetes version to use for nodes. Defaults to server-specified.

The default Kubernetes version is available using the following command.

gcloud container get-server-config
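
For example, where NODE_VERSION is a version reported by the command above:

gcloud alpha container clusters create example-cluster --node-version=NODE_VERSION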
--notification-config=[pubsub=ENABLED|DISABLED,pubsub-topic=TOPIC,…]
The notification configuration of the cluster. GKE supports publishing cluster upgrade notifications to any Pub/Sub topic you created in the same project. Create a subscription for the topic specified to receive notification messages. See https://cloud.google.com/pubsub/docs/admin on how to manage Pub/Sub topics and subscriptions. You can also use the filter option to specify which event types you'd like to receive from the following options: SecurityBulletinEvent, UpgradeEvent, UpgradeAvailableEvent.

Examples:

gcloud alpha container clusters create example-cluster --notification-config=pubsub=ENABLED,pubsub-topic=projects/{project}/topics/{topic-name}
gcloud alpha container clusters create example-cluster --notification-config=pubsub=ENABLED,pubsub-topic=projects/{project}/topics/{topic-name},filter="SecurityBulletinEvent|UpgradeEvent"

The project of the Pub/Sub topic must be the same one as the cluster. It can be either the project ID or the project number.

--num-nodes=NUM_NODES; default=3
The number of nodes to be created in each of the cluster's zones.
--placement-policy=PLACEMENT_POLICY
Indicates the desired resource policy to use.
gcloud alpha container clusters create example-cluster --placement-policy=my-placement
--placement-type=PLACEMENT_TYPE
Placement type allows you to define the type of node placement within the default node pool of this cluster.

UNSPECIFIED - No requirements on the placement of nodes. This is the default option.

COMPACT - GKE will attempt to place the nodes in close proximity to each other. This helps to reduce communication latency between the nodes, but imposes additional limitations on node pool size.

gcloud alpha container clusters create example-cluster --placement-type=COMPACT

PLACEMENT_TYPE must be one of: UNSPECIFIED, COMPACT.

--preemptible
Create nodes using preemptible VM instances in the new cluster.
gcloud alpha container clusters create example-cluster --preemptible

New nodes, including ones created by resize or recreate, will use preemptible VM instances. See https://cloud.google.com/kubernetes-engine/docs/preemptible-vm for more information on how to use Preemptible VMs with Kubernetes Engine.

--private-endpoint-subnetwork=NAME
Sets the subnetwork GKE uses to provision the control plane's private endpoint.
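For example (my-subnetwork is a hypothetical existing subnetwork name):

gcloud alpha container clusters create example-cluster --private-endpoint-subnetwork=my-subnetwork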
--private-ipv6-google-access-type=PRIVATE_IPV6_GOOGLE_ACCESS_TYPE
Sets the type of private access to Google services over IPv6.

PRIVATE_IPV6_GOOGLE_ACCESS_TYPE must be one of:

bidirectional
  Allows Google services to initiate connections to GKE pods in this
  cluster. This is not intended for common use, and requires previous
  integration with Google services.
disabled
  Default value. Disables private access to Google services over IPv6.
outbound-only
  Allows GKE pods to make fast, secure requests to Google services
  over IPv6. This is the most common use of private IPv6 access.
gcloud alpha container clusters create --private-ipv6-google-access-type=disabled
gcloud alpha container clusters create --private-ipv6-google-access-type=outbound-only
gcloud alpha container clusters create --private-ipv6-google-access-type=bidirectional


--release-channel=CHANNEL
Release channel a cluster is subscribed to.

If left unspecified and a version is specified, the cluster is enrolled in the most mature release channel where the version is available (first checking STABLE, then REGULAR, and finally RAPID). Otherwise, if no release channel and no version is specified, the cluster is enrolled in the REGULAR channel with its default version. When a cluster is subscribed to a release channel, Google maintains both the master version and the node version. Node auto-upgrade is enabled by default for release channel clusters and can be controlled via upgrade-scope exclusions.

CHANNEL must be one of:

None
Use 'None' to opt-out of any release channel.
rapid
'rapid' channel is offered on an early access basis for customers who want to test new releases.

WARNING: Versions available in the 'rapid' channel may be subject to unresolved issues with no known workaround and are not subject to any SLAs.

regular
Clusters subscribed to 'regular' receive versions that are considered GA quality. 'regular' is intended for production users who want to take advantage of new features.
stable
Clusters subscribed to 'stable' receive versions that are known to be stable and reliable in production.
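For example, to subscribe the new cluster to the regular channel:

gcloud alpha container clusters create example-cluster --release-channel=regular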
--resource-manager-tags=[KEY=VALUE,…]
Applies the specified comma-separated resource manager tags that have the GCE_FIREWALL purpose to all nodes in the new default node pool(s) of a new cluster.

Examples:

gcloud alpha container clusters create example-cluster --resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud alpha container clusters create example-cluster --resource-manager-tags=my-project/key1=value1
gcloud alpha container clusters create example-cluster --resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud alpha container clusters create example-cluster --resource-manager-tags=

All nodes, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--security-group=SECURITY_GROUP
The name of the RBAC security group for use with Google security groups in Kubernetes RBAC (https://kubernetes.io/docs/reference/access-authn-authz/rbac/).

To include group membership as part of the claims issued by Google during authentication, a group must be designated as a security group by including it as a direct member of this group.

If unspecified, no groups will be returned for use with RBAC.
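
For example (the group address is hypothetical; it must be a group designated as described above):

gcloud alpha container clusters create example-cluster --security-group=gke-security-groups@example.com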

--security-posture=SECURITY_POSTURE
Sets the mode of the Kubernetes security posture API's off-cluster features.

To enable in standard mode, explicitly set the flag to --security-posture=standard.

To disable in an existing cluster, explicitly set the flag to --security-posture=disabled.

SECURITY_POSTURE must be one of: disabled, standard.

--services-ipv4-cidr=CIDR
Set the IP range for the services IPs.

Can be specified as a netmask size (e.g. '/20') or in CIDR notation (e.g. '10.100.0.0/20'). If given as a netmask size, the IP range will be chosen automatically from the available space in the network.

If unspecified, the services CIDR range will be chosen with a default mask size.

Cannot be specified unless '--enable-ip-alias' option is also specified.
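
For example, to have a /20 services range chosen automatically from the network (note the required --enable-ip-alias):

gcloud alpha container clusters create example-cluster --enable-ip-alias --services-ipv4-cidr=/20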

--services-secondary-range-name=NAME
Set the secondary range to be used for services (e.g. ClusterIPs). NAME must be the name of an existing secondary range in the cluster subnetwork. Cannot be specified unless '--enable-ip-alias' option is also specified. Cannot be used with '--create-subnetwork' option.
--shielded-integrity-monitoring
Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created.
--shielded-secure-boot
The instance will boot with secure boot enabled.
--spot
Create nodes using spot VM instances in the new cluster.
gcloud alpha container clusters create example-cluster --spot

New nodes, including ones created by resize or recreate, will use spot VM instances.

--stack-type=STACK_TYPE
IP stack type of the node VMs. STACK_TYPE must be one of: ipv4, ipv4-ipv6.
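For example, to create a dual-stack cluster (dual-stack clusters are generally VPC-native, so --enable-ip-alias is included here as an assumption):

gcloud alpha container clusters create example-cluster --stack-type=ipv4-ipv6 --enable-ip-alias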
--storage-pools=STORAGE_POOL,[…]
A list of storage pools where the cluster's boot disks will be provisioned.

STORAGE_POOL must be in the format projects/project/zones/zone/storagePools/storagePool

--subnetwork=SUBNETWORK
The Google Compute Engine subnetwork (https://cloud.google.com/compute/docs/subnetworks) to which the cluster is connected. The subnetwork must belong to the network specified by --network.

Cannot be used with the "--create-subnetwork" option.

--system-config-from-file=SYSTEM_CONFIG_FROM_FILE
Path of the YAML/JSON file that contains the node configuration, including Linux kernel parameters (sysctls) and kubelet configs.

Examples:

kubeletConfig:
  cpuManagerPolicy: static
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
hugepageConfig:
  hugepage_size2m: '1024'
  hugepage_size1g: '2'
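
A configuration like the one above, saved locally as node-config.yaml (a hypothetical file name), is then passed to the flag by path:

gcloud alpha container clusters create example-cluster --system-config-from-file=node-config.yaml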

List of supported kubelet configs in 'kubeletConfig':

KEY                          VALUE
cpuManagerPolicy             either 'static' or 'none'
cpuCFSQuota                  true or false (enabled by default)
cpuCFSQuotaPeriod            interval (e.g., '100ms')
podPidsLimit                 integer (must be at least 1024 and less than 4194304)

List of supported sysctls in 'linuxConfig':

KEY                          VALUE
net.core.netdev_max_backlog  any positive integer, less than 2147483647
net.core.rmem_max            any positive integer, less than 2147483647
net.core.wmem_default        any positive integer, less than 2147483647
net.core.wmem_max            any positive integer, less than 2147483647
net.core.optmem_max          any positive integer, less than 2147483647
net.core.somaxconn           must be in [128, 2147483647]
net.ipv4.tcp_rmem            any positive integer tuple
net.ipv4.tcp_wmem            any positive integer tuple
net.ipv4.tcp_tw_reuse        must be 0 or 1

List of supported hugepage sizes in 'hugepageConfig':

KEY                          VALUE
hugepage_size2m              number of 2MB huge pages, any positive integer
hugepage_size1g              number of 1GB huge pages, any positive integer

The total allocated hugepage memory must not exceed 60% of available memory on the node. For example, c2d-highcpu-4 has 8GB of memory, so the total memory allocated to 2M and 1G hugepages must not exceed 8GB * 0.6 = 4.8GB.

1G hugepages are only available in the following machine families: c3, m2, c2d, c3d, h3, m3, a2, a3, g2.

Note: updating the system configuration of an existing node pool requires recreation of the nodes, which might cause a disruption.

--tags=TAG,[TAG,…]
Applies the given Compute Engine tags (comma separated) to all nodes in the new node pool.

Examples:

gcloud alpha container clusters create example-cluster --tags=tag1,tag2

New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--threads-per-core=THREADS_PER_CORE
The number of visible threads per physical core for each node. To disable simultaneous multithreading (SMT) set this to 1.
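For example, to disable simultaneous multithreading on all nodes:

gcloud alpha container clusters create example-cluster --threads-per-core=1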
--workload-metadata=WORKLOAD_METADATA
Type of metadata server available to pods running in the node pool. WORKLOAD_METADATA must be one of:
EXPOSED
[DEPRECATED] Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GCE_METADATA
Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GKE_METADATA
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
GKE_METADATA_SERVER
[DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
SECURE
[DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. This is a temporary security solution available while the bootstrapping process for cluster nodes is being redesigned with significant security improvements. This feature is scheduled to be deprecated in the future and later removed.
--workload-pool=WORKLOAD_POOL
Enable Workload Identity on the cluster.

When enabled, Kubernetes service accounts will be able to act as Cloud IAM Service Accounts, through the provided workload pool.

Currently, the only accepted workload pool is the workload pool of the Cloud project containing the cluster, PROJECT_ID.svc.id.goog.

For more information on Workload Identity, see

https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
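For example (replace my-project with the project ID of the project containing the cluster, per the constraint above):

gcloud alpha container clusters create example-cluster --workload-pool=my-project.svc.id.goog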
--workload-vulnerability-scanning=WORKLOAD_VULNERABILITY_SCANNING
Sets the mode of the Kubernetes security posture API's workload vulnerability scanning.

To enable Advanced vulnerability insights mode explicitly set the flag to --workload-vulnerability-scanning=enterprise.

To enable in standard mode explicitly set the flag to --workload-vulnerability-scanning=standard.

To disable in an existing cluster, explicitly set the flag to --workload-vulnerability-scanning=disabled.

WORKLOAD_VULNERABILITY_SCANNING must be one of: disabled, standard, enterprise.

At most one of these can be specified:
--additional-zones=ZONE,[ZONE,…]
(DEPRECATED) The set of additional zones in which the specified node footprint should be replicated. All zones must be in the same region as the cluster's primary zone. If --additional-zones is not specified, all nodes will be in the cluster's primary zone.

Note that NUM_NODES nodes will be created in each zone, such that if you specify --num-nodes=4 and choose one additional zone, 8 nodes will be created.

Multiple locations can be specified, separated by commas. For example:

gcloud alpha container clusters create example-cluster --zone us-central1-a --additional-zones us-central1-b,us-central1-c

This flag is deprecated. Use --node-locations=PRIMARY_ZONE,[ZONE,…] instead.

--node-locations=ZONE,[ZONE,…]
The set of zones in which the specified node footprint should be replicated. All zones must be in the same region as the cluster's master(s), specified by the --location, --zone, or --region flag. Additionally, for zonal clusters, --node-locations must contain the cluster's primary zone. If not specified, all nodes will be in the cluster's primary zone (for zonal clusters) or spread across three randomly chosen zones within the cluster's region (for regional clusters).

Note that NUM_NODES nodes will be created in each zone, such that if you specify --num-nodes=4 and choose two locations, 8 nodes will be created.

Multiple locations can be specified, separated by commas. For example:

gcloud alpha container clusters create example-cluster --location us-central1-a --node-locations us-central1-a,us-central1-b
Flags for Binary Authorization:
--binauthz-policy-bindings=[name=BINAUTHZ_POLICY]
The relative resource name of the Binary Authorization policy to audit and/or enforce. GKE policies have the following format: projects/{project_number}/platforms/gke/policies/{policy_id}.
At most one of these can be specified:
--binauthz-evaluation-mode=BINAUTHZ_EVALUATION_MODE
Enable Binary Authorization for this cluster. BINAUTHZ_EVALUATION_MODE must be one of: disabled, policy-bindings, policy-bindings-and-project-singleton-policy-enforce, project-singleton-policy-enforce.
--enable-binauthz
(DEPRECATED) Enable Binary Authorization for this cluster.

The --enable-binauthz flag is deprecated. Please use --binauthz-evaluation-mode instead.

ClusterDNS
--cluster-dns=CLUSTER_DNS
DNS provider to use for this cluster. CLUSTER_DNS must be one of:
clouddns
Selects Cloud DNS as the DNS provider for the cluster.
default
Selects the default DNS provider (kube-dns) for the cluster.
kubedns
Selects Kube DNS as the DNS provider for the cluster.
--cluster-dns-domain=CLUSTER_DNS_DOMAIN
DNS domain for this cluster. The default value is cluster.local. This is configurable when --cluster-dns=clouddns and --cluster-dns-scope=vpc are set. The value must be a valid DNS subdomain as defined in RFC 1123.
--cluster-dns-scope=CLUSTER_DNS_SCOPE
DNS scope for the Cloud DNS zone created - valid only with --cluster-dns=clouddns. Defaults to cluster.

CLUSTER_DNS_SCOPE must be one of:

cluster
Configures the Cloud DNS zone to be private to the cluster.
vpc
Configures the Cloud DNS zone to be private to the VPC Network.
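For example, to use Cloud DNS with a VPC-scoped zone and a custom domain (the domain value is illustrative; --cluster-dns-domain is configurable only with this combination, as noted above):

gcloud alpha container clusters create example-cluster --cluster-dns=clouddns --cluster-dns-scope=vpc --cluster-dns-domain=example.cluster.local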
At most one of these can be specified:
--dataplane-v2-observability-mode=DATAPLANE_V2_OBSERVABILITY_MODE
(REMOVED) Select Advanced Datapath Observability mode for the cluster. Defaults to DISABLED.

Advanced Datapath Observability allows for a real-time view into pod-to-pod traffic within your cluster.

Examples:

gcloud alpha container clusters create --dataplane-v2-observability-mode=DISABLED
gcloud alpha container clusters create --dataplane-v2-observability-mode=INTERNAL_VPC_LB
gcloud alpha container clusters create --dataplane-v2-observability-mode=EXTERNAL_LB

Flag --dataplane-v2-observability-mode has been removed.

DATAPLANE_V2_OBSERVABILITY_MODE must be one of:

DISABLED
Disables Advanced Datapath Observability.
EXTERNAL_LB
Makes Advanced Datapath Observability available to the external network.
INTERNAL_VPC_LB
Makes Advanced Datapath Observability available from the VPC network.
--disable-dataplane-v2-flow-observability
Disables Advanced Datapath Observability.
--enable-dataplane-v2-flow-observability
Enables Advanced Datapath Observability which allows for a real-time view into pod-to-pod traffic within your cluster.
At most one of these can be specified:
--disable-dataplane-v2-metrics
Stops exposing advanced datapath flow metrics on node port.
--enable-dataplane-v2-metrics
Exposes advanced datapath flow metrics on node port.
Node autoprovisioning
--enable-autoprovisioning
Enables node autoprovisioning for a cluster.

Cluster Autoscaler will be able to create new node pools. Requires maximum CPU and memory limits to be specified.

This flag argument must be specified if any of the other arguments in this group are specified.

At most one of these can be specified:
--autoprovisioning-config-file=AUTOPROVISIONING_CONFIG_FILE
Path of the JSON/YAML file which contains information about the cluster's node autoprovisioning configuration. Currently it contains a list of resource limits, identity defaults for autoprovisioning, node upgrade settings, node management settings, minimum cpu platform, image type, node locations for autoprovisioning, disk type and size configuration, Shielded instance settings, and customer-managed encryption keys settings.

Resource limits are specified in the field 'resourceLimits'. Each resource limit definition contains three fields: resourceType, maximum and minimum. Resource type can be "cpu", "memory" or an accelerator (e.g. "nvidia-tesla-k80" for NVIDIA Tesla K80). Use gcloud compute accelerator-types list to learn about available accelerator types. Maximum is the maximum allowed amount with the unit of the resource. Minimum is the minimum allowed amount with the unit of the resource.

Identity default contains at most one of the below fields: serviceAccount: The Google Cloud Platform Service Account to be used by node VMs in autoprovisioned node pools. If not specified, the project's default service account is used. scopes: A list of scopes to be used by node instances in autoprovisioned node pools. Multiple scopes can be specified, separated by commas. For information on defaults, look at: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes

Node Upgrade settings are specified under the field 'upgradeSettings', which has the following fields: maxSurgeUpgrade: Number of extra (surge) nodes to be created on each upgrade of an autoprovisioned node pool. maxUnavailableUpgrade: Number of nodes that can be unavailable at the same time on each upgrade of an autoprovisioned node pool.

Node Management settings are specified under the field 'management', which has the following fields: autoUpgrade: A boolean field that indicates if node autoupgrade is enabled for autoprovisioned node pools. autoRepair: A boolean field that indicates if node autorepair is enabled for autoprovisioned node pools.

minCpuPlatform (deprecated): If specified, new autoprovisioned nodes will be scheduled on host with specified CPU architecture or a newer one. Note: Min CPU platform can only be specified in Beta and Alpha.

Autoprovisioned node image is specified under the 'imageType' field. If not specified the default value will be applied.

Autoprovisioning locations is a set of zones where new node pools can be created by Autoprovisioning. Autoprovisioning locations are specified in the field 'autoprovisioningLocations'. All zones must be in the same region as the cluster's master(s).

Disk type and size are specified under the 'diskType' and 'diskSizeGb' fields, respectively. If specified, new autoprovisioned nodes will be created with custom boot disks configured by these settings.

Shielded instance settings are specified under the 'shieldedInstanceConfig' field, which has the following fields: enableSecureBoot: A boolean field that indicates if secure boot is enabled for autoprovisioned nodes. enableIntegrityMonitoring: A boolean field that indicates if integrity monitoring is enabled for autoprovisioned nodes.

Customer Managed Encryption Keys (CMEK) used by new auto-provisioned node pools can be specified in the 'bootDiskKmsKey' field.

Flags to configure autoprovisioned nodes:
--max-cpu=MAX_CPU
Maximum number of cores in the cluster.

Maximum number of cores to which the cluster can scale.

This flag argument must be specified if any of the other arguments in this group are specified.

--max-memory=MAX_MEMORY
Maximum memory in the cluster.

Maximum number of gigabytes of memory to which the cluster can scale.

This flag argument must be specified if any of the other arguments in this group are specified.

--autoprovisioning-image-type=AUTOPROVISIONING_IMAGE_TYPE
Node Autoprovisioning will create new nodes with the specified image type
--autoprovisioning-locations=ZONE,[ZONE,…]
Set of zones where new node pools can be created by autoprovisioning. All zones must be in the same region as the cluster's master(s). Multiple locations can be specified, separated by commas.
--autoprovisioning-min-cpu-platform=PLATFORM
(DEPRECATED) If specified, new autoprovisioned nodes will be scheduled on host with specified CPU architecture or a newer one.

The --autoprovisioning-min-cpu-platform flag is deprecated and will be removed in an upcoming release. More info: https://cloud.google.com/kubernetes-engine/docs/release-notes#March_08_2022

--min-cpu=MIN_CPU
Minimum number of cores in the cluster.

Minimum number of cores to which the cluster can scale.

--min-memory=MIN_MEMORY
Minimum memory in the cluster.

Minimum number of gigabytes of memory to which the cluster can scale.
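
For example, to enable node autoprovisioning with CPU and memory limits (the numeric limits are illustrative; --max-cpu and --max-memory are required with --enable-autoprovisioning):

gcloud alpha container clusters create example-cluster --enable-autoprovisioning --min-cpu=1 --max-cpu=20 --min-memory=4 --max-memory=100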

Flags to specify upgrade settings for autoprovisioned nodes:
--autoprovisioning-max-surge-upgrade=AUTOPROVISIONING_MAX_SURGE_UPGRADE
Number of extra (surge) nodes to be created on each upgrade of an autoprovisioned node pool.
--autoprovisioning-max-unavailable-upgrade=AUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of an autoprovisioned node pool.
--autoprovisioning-node-pool-soak-duration=AUTOPROVISIONING_NODE_POOL_SOAK_DURATION
Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the update. This argument should be used in conjunction with --enable-autoprovisioning-blue-green-upgrade to take effect.
--autoprovisioning-standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]
Standard rollout policy options for blue-green upgrade. This argument should be used in conjunction with --enable-autoprovisioning-blue-green-upgrade to take effect.

Batch sizes are specified by one of batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.

Example: --autoprovisioning-standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s --autoprovisioning-standard-rollout-policy=batch-percent=0.05,batch-soak-duration=180s

Flag group to choose the top level upgrade option:

At most one of these can be specified:

--enable-autoprovisioning-blue-green-upgrade
Whether to use blue-green upgrade for the autoprovisioned node pool.
--enable-autoprovisioning-surge-upgrade
Whether to use surge upgrade for the autoprovisioned node pool.
Flags to specify identity for autoprovisioned nodes:
--autoprovisioning-scopes=[SCOPE,…]
The scopes to be used by node instances in autoprovisioned node pools. Multiple scopes can be specified, separated by commas. For information on defaults, look at: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes
--autoprovisioning-service-account=AUTOPROVISIONING_SERVICE_ACCOUNT
The Google Cloud Platform Service Account to be used by node VMs in autoprovisioned node pools. If not specified, the project default service account is used.
Flags to specify node management settings for autoprovisioned nodes:
--enable-autoprovisioning-autorepair
Enable node autorepair for autoprovisioned node pools. Use --no-enable-autoprovisioning-autorepair to disable.

This flag argument must be specified if any of the other arguments in this group are specified.

--enable-autoprovisioning-autoupgrade
Enable node autoupgrade for autoprovisioned node pools. Use --no-enable-autoprovisioning-autoupgrade to disable.

This flag argument must be specified if any of the other arguments in this group are specified.

Arguments to set limits on accelerators:
--max-accelerator=[type=TYPE,count=COUNT,…]
Sets the maximum limit for a single type of accelerator (e.g. GPUs) in the cluster.
type
(Required) The specific type (e.g. nvidia-tesla-k80 for NVIDIA Tesla K80) of accelerator for which the limit is set. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Required) The maximum number of accelerators to which the cluster can be scaled.

This flag argument must be specified if any of the other arguments in this group are specified.

--min-accelerator=[type=TYPE,count=COUNT,…]
Sets the minimum limit for a single type of accelerator (e.g. GPUs) in the cluster. Defaults to 0 for all accelerator types if it isn't set.
type
(Required) The specific type (e.g. nvidia-tesla-k80 for NVIDIA Tesla K80) of accelerator for which the limit is set. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Required) The minimum number of accelerators to which the cluster can be scaled.
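For example, to bound autoprovisioning of a single GPU type (an illustrative invocation; --enable-autoprovisioning and CPU/memory limit flags such as --max-cpu and --max-memory are typically also required):

gcloud alpha container clusters create example-cluster --enable-autoprovisioning --min-accelerator=type=nvidia-tesla-k80,count=0 --max-accelerator=type=nvidia-tesla-k80,count=4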
Cluster autoscaling
--enable-autoscaling
Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by --node-pool, or the default node pool if --node-pool is not provided. When autoscaling is enabled, --max-nodes or --total-max-nodes must also be set.

--location-policy=LOCATION_POLICY
Location policy specifies the algorithm used when scaling-up the node pool.
  • BALANCED - Is a best effort policy that aims to balance the sizes of available zones.
  • ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.

LOCATION_POLICY must be one of: BALANCED, ANY.

--max-nodes=MAX_NODES
Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--min-nodes=MIN_NODES
Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-max-nodes=TOTAL_MAX_NODES
Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-min-nodes=TOTAL_MIN_NODES
Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
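For example, to autoscale the default node pool between 1 and 5 nodes per zone, or between 3 and 12 nodes in total (illustrative values):

gcloud alpha container clusters create example-cluster --enable-autoscaling --min-nodes=1 --max-nodes=5

gcloud alpha container clusters create example-cluster --enable-autoscaling --total-min-nodes=3 --total-max-nodes=12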

Master Authorized Networks
--enable-master-authorized-networks
Allow only specified set of CIDR blocks (specified by the --master-authorized-networks flag) to connect to Kubernetes master through HTTPS. Besides these blocks, the following have access as well:
1) The private network the cluster connects to, if --enable-private-nodes is specified.
2) Google Compute Engine public IPs, if --enable-private-nodes is not specified.

Use --no-enable-master-authorized-networks to disable. When disabled, public internet (0.0.0.0/0) is allowed to connect to Kubernetes master through HTTPS.

--master-authorized-networks=NETWORK,[NETWORK,…]
The list of CIDR blocks (up to 100 for private cluster, 50 for public cluster) that are allowed to connect to Kubernetes master through HTTPS. Specified in CIDR notation (e.g. 1.2.3.4/30). Cannot be specified unless --enable-master-authorized-networks is also specified.
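For example, to restrict master access to two authorized blocks (illustrative CIDR ranges):

gcloud alpha container clusters create example-cluster --enable-master-authorized-networks --master-authorized-networks=10.0.0.0/8,203.0.113.0/28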
Exports cluster's usage of cloud resources
--enable-network-egress-metering
Enable network egress metering on this cluster.

When enabled, a DaemonSet is deployed into the cluster. Each DaemonSet pod meters network egress traffic by collecting data from the conntrack table, and exports the metered metrics to the specified destination.

Network egress metering is disabled if this flag is omitted, or when --no-enable-network-egress-metering is set.

--enable-resource-consumption-metering
Enable resource consumption metering on this cluster.

When enabled, a table will be created in the specified BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export.

Resource consumption metering is enabled unless --no-enable-resource-consumption-metering is set.

--resource-usage-bigquery-dataset=RESOURCE_USAGE_BIGQUERY_DATASET
The name of the BigQuery dataset to which the cluster's usage of cloud resources is exported. A table will be created in the specified dataset to store cluster resource usage. The resulting table can be joined with BigQuery Billing Export to produce a fine-grained cost breakdown.

Examples:

gcloud alpha container clusters create example-cluster --resource-usage-bigquery-dataset=example_bigquery_dataset_name
Private Clusters
--enable-private-endpoint
Cluster is managed using the private IP address of the master API endpoint.
--enable-private-nodes
Cluster is created with no public IP addresses on the cluster nodes.
--master-ipv4-cidr=MASTER_IPV4_CIDR
IPv4 CIDR range to use for the master network. This should have a netmask of size /28 and should be used in conjunction with the --enable-private-nodes flag.
--private-cluster
(DEPRECATED) Cluster is created with no public IP addresses on the cluster nodes.

The --private-cluster flag is deprecated and will be removed in a future release. Use --enable-private-nodes instead.
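For example, a private cluster with a private master endpoint might be created as follows (an illustrative invocation; private clusters also require VPC-native networking via --enable-ip-alias, and the master CIDR value is illustrative):

gcloud alpha container clusters create example-cluster --enable-ip-alias --enable-private-nodes --enable-private-endpoint --master-ipv4-cidr=172.16.0.32/28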

Flags relating to Cloud TPUs:
--enable-tpu
Enable Cloud TPUs for this cluster.

Cannot be specified unless --enable-ip-alias is also specified.

At most one of these can be specified:
--enable-tpu-service-networking
Enable Cloud TPU's Service Networking mode. In this mode, the CIDR blocks used by the Cloud TPUs will be allocated and managed by Service Networking, instead of Kubernetes Engine.

This cannot be specified if tpu-ipv4-cidr is specified.

--tpu-ipv4-cidr=CIDR
Set the IP range for the Cloud TPUs.

Can be specified as a netmask size (e.g. '/20') or in CIDR notation (e.g. '10.100.0.0/20'). If given as a netmask size, the IP range will be chosen automatically from the available space in the network.

If unspecified, the TPU CIDR range defaults to an automatically chosen '/20'.

Cannot be specified unless '--enable-tpu' and '--enable-ip-alias' are also specified.
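For example, to enable Cloud TPUs with an explicitly chosen TPU range (illustrative CIDR value):

gcloud alpha container clusters create example-cluster --enable-ip-alias --enable-tpu --tpu-ipv4-cidr=10.100.0.0/20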

At most one of these can be specified:
--ephemeral-storage[=[local-ssd-count=LOCAL-SSD-COUNT]]
Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.

Examples:

gcloud alpha container clusters create example_cluster --ephemeral-storage local-ssd-count=2

'local-ssd-count' specifies the number of local SSDs used to back ephemeral storage. Local SSDs use NVMe interfaces. For first- and second-generation machine types, a nonzero count is required for local SSDs to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--ephemeral-storage-local-ssd[=[count=COUNT]]
Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.

Examples:

gcloud alpha container clusters create example_cluster --ephemeral-storage-local-ssd count=2

'count' specifies the number of local SSDs used to back ephemeral storage. Local SSDs use NVMe interfaces. For first- and second-generation machine types, a nonzero count is required for local SSDs to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-nvme-ssd-block[=[count=COUNT]]
Adds the requested local SSDs on all nodes in default node pool(s) in the new cluster.

Examples:

gcloud alpha container clusters create example_cluster --local-nvme-ssd-block count=2

'count' must be between 1 and 8. New nodes, including ones created by resize or recreate, will have these local SSDs.

For first- and second-generation machine types, a nonzero count is required for local SSDs to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-ssd-count=LOCAL_SSD_COUNT
--local-ssd-count is the equivalent of using --local-ssd-volumes with type=scsi,format=fs.

The number of local SSD disks to provision on each node, formatted and mounted in the filesystem.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information.
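For example, to provision two formatted-and-mounted local SSDs per node (an illustrative invocation; the node machine type must support attaching local SSDs):

gcloud alpha container clusters create example-cluster --local-ssd-count=2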

--local-ssd-volumes=[[count=COUNT],[type=TYPE],[format=FORMAT],…]
Adds the requested local SSDs on all nodes in default node pool(s) in the new cluster.

Examples:

gcloud alpha container clusters create example_cluster --local-ssd-volumes count=2,type=nvme,format=fs

'count' must be between 1 and 8

'type' must be either scsi or nvme

'format' must be either fs or block

New nodes, including ones created by resize or recreate, will have these local SSDs.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

At most one of these can be specified:
--location=LOCATION
Compute zone or region (e.g. us-central1-a or us-central1) for the cluster.
--region=REGION
Compute region (e.g. us-central1) for the cluster.
--zone=ZONE, -z ZONE
Compute zone (e.g. us-central1-a) for the cluster. Overrides the default compute/zone property value for this command invocation.
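For example, a zonal cluster versus a regional cluster (illustrative locations):

gcloud alpha container clusters create example-cluster --zone=us-central1-a

gcloud alpha container clusters create example-cluster --region=us-central1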
Either the --maintenance-window flag or the group of recurring maintenance-window flags can be set. At most one of these can be specified:
--maintenance-window=START_TIME
Set a time of day when you prefer maintenance to start on this cluster. For example:
gcloud alpha container clusters create example-cluster --maintenance-window=12:43

The time corresponds to the UTC time zone, and must be in HH:MM format.

Non-emergency maintenance will occur in the 4 hour block starting at the specified time.

This is mutually exclusive with the recurring maintenance windows and will overwrite any existing window. Compatible with maintenance exclusions.

Set a flexible maintenance window by specifying a window that recurs per an RFC 5545 RRULE. Non-emergency maintenance will occur in the recurring windows.

Examples:

For a 9-5 Mon-Wed UTC-4 maintenance window:

gcloud alpha container clusters create example-cluster --maintenance-window-start=2000-01-01T09:00:00-04:00 --maintenance-window-end=2000-01-01T17:00:00-04:00 --maintenance-window-recurrence='FREQ=WEEKLY;BYDAY=MO,TU,WE'

For a daily window from 22:00 - 04:00 UTC:

gcloud alpha container clusters create example-cluster --maintenance-window-start=2000-01-01T22:00:00Z --maintenance-window-end=2000-01-02T04:00:00Z --maintenance-window-recurrence=FREQ=DAILY
--maintenance-window-end=TIME_STAMP
End time of the first window (can occur in the past). Must take place after the start time. The difference in start and end time specifies the length of each recurrence. See $ gcloud topic datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group are specified.

--maintenance-window-recurrence=RRULE
An RFC 5545 RRULE, specifying how the window will recur. Note that minimum requirements for maintenance periods will be enforced. Note that FREQ=SECONDLY, MINUTELY, and HOURLY are not supported.

This flag argument must be specified if any of the other arguments in this group are specified.

--maintenance-window-start=TIME_STAMP
Start time of the first window (can occur in the past). The start time influences when the window will start for recurrences. See $ gcloud topic datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group are specified.

Basic auth
--password=PASSWORD
The password to use for cluster auth. Defaults to a server-specified randomly-generated string.
Options to specify the username.

At most one of these can be specified:

--enable-basic-auth
Enable basic (username/password) auth for the cluster. --enable-basic-auth is an alias for --username=admin; --no-enable-basic-auth is an alias for --username="". Use --password to specify a password; if not, the server will randomly generate one. For cluster versions before 1.12, if neither --enable-basic-auth nor --username is specified, --enable-basic-auth will default to true. After 1.12, --enable-basic-auth will default to false.
--username=USERNAME, -u USERNAME
The user name to use for basic auth for the cluster. Use --password to specify a password; if not, the server will randomly generate one.
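For example, to enable basic auth with an explicit username and password (illustrative credentials; omit --password to have the server generate one):

gcloud alpha container clusters create example-cluster --username=admin --password=example-password-123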
Specifies the reservation for the default initial node pool.
--reservation=RESERVATION
The name of the reservation, required when --reservation-affinity=specific.
--reservation-affinity=RESERVATION_AFFINITY
The type of the reservation for the default initial node pool. RESERVATION_AFFINITY must be one of: any, none, specific.
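For example, to place the default node pool on a specific Compute Engine reservation (the reservation name my-reservation is illustrative):

gcloud alpha container clusters create example-cluster --reservation-affinity=specific --reservation=my-reservation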
Options to specify the node identity.
Scopes options.
--scopes=[SCOPE,…]; default="gke-default"
Specifies scopes for the node instances.

Examples:

gcloud alpha container clusters create example-cluster --scopes=https://www.googleapis.com/auth/devstorage.read_only
gcloud alpha container clusters create example-cluster --scopes=bigquery,storage-rw,compute-ro

Multiple scopes can be specified, separated by commas. Various scopes are automatically added based on feature usage. Such scopes are not added if an equivalent scope already exists.

  • monitoring-write: always added to ensure metrics can be written
  • logging-write: added if Cloud Logging is enabled (--enable-cloud-logging/--logging)
  • monitoring: added if Cloud Monitoring is enabled (--enable-cloud-monitoring/--monitoring)
  • gke-default: added for Autopilot clusters that use the default service account
  • cloud-platform: added for Autopilot clusters that use any other service account

SCOPE can be either the full URI of the scope or an alias. Default scopes are assigned to all instances. Available aliases are:

Alias URI
bigquery https://www.googleapis.com/auth/bigquery
cloud-platform https://www.googleapis.com/auth/cloud-platform
cloud-source-repos https://www.googleapis.com/auth/source.full_control
cloud-source-repos-ro https://www.googleapis.com/auth/source.read_only
compute-ro https://www.googleapis.com/auth/compute.readonly
compute-rw https://www.googleapis.com/auth/compute
datastore https://www.googleapis.com/auth/datastore
default https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/pubsub
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/trace.append
gke-default https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/trace.append
logging-write https://www.googleapis.com/auth/logging.write
monitoring https://www.googleapis.com/auth/monitoring
monitoring-read https://www.googleapis.com/auth/monitoring.read
monitoring-write https://www.googleapis.com/auth/monitoring.write
pubsub https://www.googleapis.com/auth/pubsub
service-control https://www.googleapis.com/auth/servicecontrol
service-management https://www.googleapis.com/auth/service.management.readonly
sql (deprecated) https://www.googleapis.com/auth/sqlservice
sql-admin https://www.googleapis.com/auth/sqlservice.admin
storage-full https://www.googleapis.com/auth/devstorage.full_control
storage-ro https://www.googleapis.com/auth/devstorage.read_only
storage-rw https://www.googleapis.com/auth/devstorage.read_write
taskqueue https://www.googleapis.com/auth/taskqueue
trace https://www.googleapis.com/auth/trace.append
userinfo-email https://www.googleapis.com/auth/userinfo.email
DEPRECATION WARNING: The https://www.googleapis.com/auth/sqlservice account scope and the sql alias do not provide SQL instance management capabilities and have been deprecated. Please use https://www.googleapis.com/auth/sqlservice.admin or sql-admin to manage your Google SQL Service instances.
--service-account=SERVICE_ACCOUNT
The Google Cloud Platform Service Account to be used by the node VMs. If a service account is specified, the cloud-platform and userinfo.email scopes are used. If no Service Account is specified, the project default service account is used.
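For example, to run node VMs as a custom service account (the account name is illustrative):

gcloud alpha container clusters create example-cluster --service-account=example-node-sa@example-project.iam.gserviceaccount.com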
Flags for Security Profile:
--security-profile=SECURITY_PROFILE
Name and version of the security profile to be applied to the cluster.

Examples:

gcloud alpha container clusters create example-cluster --security-profile=default-1.0-gke.0
--security-profile-runtime-rules
Apply runtime rules in the specified security profile to the cluster. When enabled (the default), a security profile controller and webhook are deployed on the cluster to enforce the runtime rules. If --no-security-profile-runtime-rules is specified to disable this feature, only bootstrapping rules are applied, and no security profile controller or webhook is installed.
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist. These variants are also available:
gcloud container clusters create
gcloud beta container clusters create