gcloud beta container clusters update

NAME
gcloud beta container clusters update - update cluster settings for an existing container cluster
SYNOPSIS
gcloud beta container clusters update NAME (--autoprovisioning-cgroup-mode=AUTOPROVISIONING_CGROUP_MODE     | --autoprovisioning-enable-insecure-kubelet-readonly-port     | --autoprovisioning-network-tags=[TAGS,…]     | --autoprovisioning-resource-manager-tags=[KEY=VALUE,…]     | --autoscaling-profile=AUTOSCALING_PROFILE     | --clear-fleet-project     | --complete-credential-rotation     | --complete-ip-rotation     | --containerd-config-from-file=PATH_TO_FILE     | --database-encryption-key=DATABASE_ENCRYPTION_KEY     | --disable-database-encryption     | --disable-default-snat     | --disable-workload-identity     | --enable-autoscaling     | --[no-]enable-cilium-clusterwide-network-policy     | --enable-cost-allocation     | --enable-fleet     | --enable-fqdn-network-policy     | --enable-gke-oidc     | --enable-identity-service     | --enable-image-streaming     | --enable-insecure-kubelet-readonly-port     | --enable-intra-node-visibility     | --enable-kubernetes-unstable-apis=API,[API,…]     | --enable-l4-ilb-subsetting     | --enable-legacy-authorization     | --enable-logging-monitoring-system-only     | --enable-multi-networking     | --enable-network-policy     | --enable-pod-security-policy     | --enable-private-nodes     | --[no-]enable-ray-cluster-logging     | --[no-]enable-ray-cluster-monitoring     | --enable-secret-manager     | --enable-service-externalips     | --enable-shielded-nodes     | --enable-stackdriver-kubernetes     | --enable-vertical-pod-autoscaling     | --fleet-project=PROJECT_ID_OR_NUMBER     | --gateway-api=GATEWAY_API     | --generate-password     | --hpa-profile=HPA_PROFILE     | --identity-provider=IDENTITY_PROVIDER     | --in-transit-encryption=IN_TRANSIT_ENCRYPTION     | --logging-variant=LOGGING_VARIANT     | --maintenance-window=START_TIME     | --network-performance-configs=[PROPERTY1=VALUE1,…]     | --notification-config=[pubsub=ENABLED|DISABLED,pubsub-topic=TOPIC,…]     | --private-ipv6-google-access-type=PRIVATE_IPV6_GOOGLE_ACCESS_TYPE     | --release-channel=[CHANNEL]     | --remove-labels=[KEY,…]     | --remove-workload-policies=REMOVE_WORKLOAD_POLICIES     | --security-group=SECURITY_GROUP     | --security-posture=SECURITY_POSTURE     | --set-password     | --stack-type=STACK_TYPE     | --start-credential-rotation     | --start-ip-rotation     | --tier=TIER     | --update-addons=[ADDON=ENABLED|DISABLED,…]     | --update-labels=[KEY=VALUE,…]     | --workload-policies=WORKLOAD_POLICIES     | --workload-pool=WORKLOAD_POOL     | --workload-vulnerability-scanning=WORKLOAD_VULNERABILITY_SCANNING     | --additional-pod-ipv4-ranges=NAME,[NAME,…] --remove-additional-pod-ipv4-ranges=NAME,[NAME,…]     | --additional-zones=[ZONE,…]     | --node-locations=ZONE,[ZONE,…]     | --auto-monitoring-scope=AUTO_MONITORING_SCOPE --logging=[COMPONENT,…] --monitoring=[COMPONENT,…] --disable-managed-prometheus     | --enable-managed-prometheus     | --binauthz-policy-bindings=[name=BINAUTHZ_POLICY] --binauthz-evaluation-mode=BINAUTHZ_EVALUATION_MODE     | --enable-binauthz     | --clear-maintenance-window     | --remove-maintenance-exclusion=NAME     | [--add-maintenance-exclusion-end=TIME_STAMP : --add-maintenance-exclusion-name=NAME --add-maintenance-exclusion-scope=SCOPE --add-maintenance-exclusion-start=TIME_STAMP]     | --maintenance-window-end=TIME_STAMP --maintenance-window-recurrence=RRULE --maintenance-window-start=TIME_STAMP     | --clear-resource-usage-bigquery-dataset     | --enable-network-egress-metering --enable-resource-consumption-metering 
--resource-usage-bigquery-dataset=RESOURCE_USAGE_BIGQUERY_DATASET     | --cluster-dns=CLUSTER_DNS --cluster-dns-domain=CLUSTER_DNS_DOMAIN --cluster-dns-scope=CLUSTER_DNS_SCOPE --additive-vpc-scope-dns-domain=ADDITIVE_VPC_SCOPE_DNS_DOMAIN     | --disable-additive-vpc-scope     | --dataplane-v2-observability-mode=DATAPLANE_V2_OBSERVABILITY_MODE     | --disable-dataplane-v2-flow-observability     | --enable-dataplane-v2-flow-observability --disable-dataplane-v2-metrics     | --enable-dataplane-v2-metrics     | --enable-authorized-networks-on-private-endpoint --enable-dns-access --enable-google-cloud-access --enable-ip-access --enable-master-global-access --enable-private-endpoint --enable-master-authorized-networks --master-authorized-networks=NETWORK,[NETWORK,…]     | [--enable-autoprovisioning : --autoprovisioning-config-file=PATH_TO_FILE | --autoprovisioning-image-type=AUTOPROVISIONING_IMAGE_TYPE --autoprovisioning-locations=ZONE,[ZONE,…] --autoprovisioning-min-cpu-platform=PLATFORM --max-cpu=MAX_CPU --max-memory=MAX_MEMORY --min-cpu=MIN_CPU --min-memory=MIN_MEMORY --autoprovisioning-max-surge-upgrade=AUTOPROVISIONING_MAX_SURGE_UPGRADE --autoprovisioning-max-unavailable-upgrade=AUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE --autoprovisioning-node-pool-soak-duration=AUTOPROVISIONING_NODE_POOL_SOAK_DURATION --autoprovisioning-standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…] --enable-autoprovisioning-blue-green-upgrade | --enable-autoprovisioning-surge-upgrade --autoprovisioning-scopes=[SCOPE,…] --autoprovisioning-service-account=AUTOPROVISIONING_SERVICE_ACCOUNT --enable-autoprovisioning-autorepair --enable-autoprovisioning-autoupgrade [--max-accelerator=[type=TYPE,count=COUNT,…] : --min-accelerator=[type=TYPE,count=COUNT,…]]]     | --enable-insecure-binding-system-authenticated --enable-insecure-binding-system-unauthenticated     | --enable-tpu --enable-tpu-service-networking     | --tpu-ipv4-cidr=CIDR     | --logging-service=LOGGING_SERVICE --monitoring-service=MONITORING_SERVICE     | --password=PASSWORD --enable-basic-auth     | --username=USERNAME, -u USERNAME) [--async] [--cloud-run-config=[load-balancer-type=EXTERNAL,…]] [--istio-config=[auth=MTLS_PERMISSIVE,…]] [--node-pool=NODE_POOL] [--location=LOCATION     | --region=REGION     | --zone=ZONE, -z ZONE] [--location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES] [GCLOUD_WIDE_FLAG]
DESCRIPTION
(BETA) Update cluster settings for an existing container cluster.
EXAMPLES
To enable autoscaling for an existing cluster, run:
gcloud beta container clusters update sample-cluster --enable-autoscaling
POSITIONAL ARGUMENTS
NAME
The name of the cluster to update.
REQUIRED FLAGS
Exactly one of these must be specified:
--autoprovisioning-cgroup-mode=AUTOPROVISIONING_CGROUP_MODE
Sets the cgroup mode for auto-provisioned nodes.

Updating this flag triggers an update using surge upgrades of all existing auto-provisioned nodes to apply the new value of cgroup mode.

For an Autopilot cluster, the specified cgroup mode will be set on all existing and new nodes in the cluster. For a Standard cluster, the specified cgroup mode will be set on all existing and new auto-provisioned node pools in the cluster.

If not set, GKE uses cgroupv2 for new nodes when the cluster was created running 1.26 or later, and cgroupv1 for clusters created running 1.25 or earlier. To check your initial cluster version, run gcloud container clusters describe [NAME] --format="value(initialClusterVersion)"

For clusters created running version 1.26 or later, you can't set the cgroup mode to v1.

To learn more, see: https://cloud.google.com/kubernetes-engine/docs/how-to/migrate-cgroupv2.

AUTOPROVISIONING_CGROUP_MODE must be one of: default, v1, v2.
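
For example, a sketch of migrating auto-provisioned nodes to cgroupv2 (the cluster name example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --autoprovisioning-cgroup-mode=v2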

--autoprovisioning-enable-insecure-kubelet-readonly-port
Enables the Kubelet's insecure read only port for Autoprovisioned Node Pools.

If not set, the value from nodePoolDefaults.nodeConfigDefaults will be used.

To disable the readonly port, use --no-autoprovisioning-enable-insecure-kubelet-readonly-port.

--autoprovisioning-network-tags=[TAGS,…]
Replaces the user-specified Compute Engine tags on all nodes in all existing auto-provisioned node pools in a Standard cluster, or on all nodes in an Autopilot cluster, with the given tags (comma separated).

Examples:

gcloud beta container clusters update example-cluster --autoprovisioning-network-tags=tag1,tag2

New nodes in auto-provisioned node pools, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and these tags can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--autoprovisioning-resource-manager-tags=[KEY=VALUE,…]
For an Autopilot cluster, the specified comma-separated resource manager tags that have the GCE_FIREWALL purpose replace the existing tags on all nodes in the cluster.

For a Standard cluster, the specified comma-separated resource manager tags that have the GCE_FIREWALL purpose are applied to all nodes in newly created auto-provisioned node pools. Existing auto-provisioned node pools retain the tags that they had before the update. To update tags on an existing auto-provisioned node pool, use the node pool level flag '--resource-manager-tags'.

Examples:

gcloud beta container clusters update example-cluster --autoprovisioning-resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud beta container clusters update example-cluster --autoprovisioning-resource-manager-tags=my-project/key1=value1
gcloud beta container clusters update example-cluster --autoprovisioning-resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud beta container clusters update example-cluster --autoprovisioning-resource-manager-tags=

All nodes in an Autopilot cluster or all newly created auto-provisioned nodes in a Standard cluster, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--autoscaling-profile=AUTOSCALING_PROFILE
Set the autoscaling behaviour; choices are 'optimize-utilization' and 'balanced'. The default is 'balanced'.
--clear-fleet-project
Remove the cluster from the current fleet host project. Example: $ gcloud beta container clusters update --clear-fleet-project
--complete-credential-rotation
Complete the IP and credential rotation for this cluster. For example:
gcloud beta container clusters update example-cluster --complete-credential-rotation

This causes the cluster to stop serving its old IP, return to a single IP, and invalidate old credentials. See documentation for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/credential-rotation.

--complete-ip-rotation
Complete the IP rotation for this cluster. For example:
gcloud beta container clusters update example-cluster --complete-ip-rotation

This causes the cluster to stop serving its old IP, and return to a single IP state. See documentation for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-rotation.

--containerd-config-from-file=PATH_TO_FILE
Path of the YAML file that contains containerd configuration entries like configuring access to private image registries.

For detailed information on the configuration usage, please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/customize-containerd-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool requires recreation of the existing nodes, which might cause disruptions in running workloads.

Use a full or relative path to a local file containing the value of containerd_config.
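
For example, a sketch that applies a local configuration file (the file name containerd-config.yaml is a placeholder):

gcloud beta container clusters update example-cluster --containerd-config-from-file=containerd-config.yaml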

--database-encryption-key=DATABASE_ENCRYPTION_KEY
Enable Database Encryption.

Enable database encryption that will be used to encrypt Kubernetes Secrets at the application layer. The key provided should be the resource ID in the format of projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets.
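
For example, a sketch using placeholder project, location, key ring, and key names:

gcloud beta container clusters update example-cluster --database-encryption-key=projects/my-project/locations/us-central1/keyRings/example-ring/cryptoKeys/example-key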

--disable-database-encryption
Disable database encryption.

Disable database encryption, which encrypts Kubernetes Secrets at the application layer. For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets.

--disable-default-snat
Disable default source NAT rules applied in cluster nodes.

By default, cluster nodes perform source network address translation (SNAT) for packets sent from Pod IP address sources to destination IP addresses that are not in the non-masquerade CIDRs list. SNAT changes the packet's source IP address to the node's internal IP address. For more details about SNAT and IP masquerading, see: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#how_ipmasq_works

When this flag is set, GKE does not perform SNAT for packets sent to any destination. You must set this flag if the cluster uses privately reused public IPs.

The --disable-default-snat flag is only applicable to private GKE clusters, which are inherently VPC-native. Thus, --disable-default-snat requires that the cluster was created with both --enable-ip-alias and --enable-private-nodes.
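
For example, a minimal sketch (example-cluster is a placeholder; the cluster must have been created with --enable-ip-alias and --enable-private-nodes):

gcloud beta container clusters update example-cluster --disable-default-snat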

--disable-workload-identity
Disable Workload Identity on the cluster.

For more information on Workload Identity, see

https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
--enable-autoscaling
Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by --node-pool, or the default node pool if --node-pool is not provided. If not already set, --max-nodes or --total-max-nodes must also be specified.
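
For example, a sketch that enables autoscaling on a node pool named default-pool (placeholder name) with per-zone limits:

gcloud beta container clusters update example-cluster --enable-autoscaling --node-pool=default-pool --min-nodes=1 --max-nodes=5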

--[no-]enable-cilium-clusterwide-network-policy
Enable Cilium Clusterwide Network Policies on the cluster. Use --enable-cilium-clusterwide-network-policy to enable and --no-enable-cilium-clusterwide-network-policy to disable.
--enable-cost-allocation
Enable the cost management feature.

When enabled, you can get informational GKE cost breakdowns by cluster, namespace and label in your billing data exported to BigQuery (https://cloud.google.com/billing/docs/how-to/export-data-bigquery).

Use --no-enable-cost-allocation to disable this feature.

--enable-fleet
Set the cluster's project as the fleet host project. This registers the cluster to a fleet in the same project. To register the cluster to a fleet in a different project, please use --fleet-project=FLEET_HOST_PROJECT. Example: $ gcloud beta container clusters update --enable-fleet
--enable-fqdn-network-policy
Enable FQDN Network Policies on the cluster. FQDN Network Policies are disabled by default.
--enable-gke-oidc
(DEPRECATED) Enable GKE OIDC authentication on the cluster.

When enabled, users can authenticate to the Kubernetes cluster after properly setting the OIDC config.

GKE OIDC is by default disabled when creating a new cluster. To disable GKE OIDC in an existing cluster, explicitly set flag --no-enable-gke-oidc.

GKE OIDC is being replaced by Identity Service across Anthos and GKE. Thus, the --enable-gke-oidc flag is also deprecated. Please use --enable-identity-service to enable the Identity Service component instead.

--enable-identity-service
Enable Identity Service component on the cluster.

When enabled, users can authenticate to the Kubernetes cluster with external identity providers.

Identity Service is by default disabled when creating a new cluster. To disable Identity Service in an existing cluster, explicitly set flag --no-enable-identity-service.

--enable-image-streaming
Specifies whether to enable image streaming on the cluster.
--enable-insecure-kubelet-readonly-port
Enables the Kubelet's insecure read only port.

To disable the readonly port on a cluster or node-pool set the flag to --no-enable-insecure-kubelet-readonly-port.

--enable-intra-node-visibility
Enable Intra-node visibility for this cluster.

Enabling intra-node visibility makes your intra-node pod-to-pod traffic visible to the networking fabric. With this feature, you can use VPC flow logging or other VPC features for intra-node traffic.

Enabling it on an existing cluster causes the cluster master and the cluster nodes to restart, which might cause a disruption.

--enable-kubernetes-unstable-apis=API,[API,…]
Enable Kubernetes beta API features on this cluster. Beta APIs are not expected to be production ready and should be avoided in production-grade environments.
--enable-l4-ilb-subsetting
Enable Subsetting for L4 ILB services created on this cluster.
--enable-legacy-authorization
Enables legacy ABAC authorization for the cluster. User rights are granted through the use of policies which combine attributes together. For a detailed look at these properties and related formats, see https://kubernetes.io/docs/admin/authorization/abac/. To use RBAC permissions instead, create or update your cluster with the option --no-enable-legacy-authorization.
--enable-logging-monitoring-system-only
(DEPRECATED) Enable Cloud Operations system-only monitoring and logging.

The --enable-logging-monitoring-system-only flag is deprecated and will be removed in an upcoming release. Please use --logging and --monitoring instead. For more information, please read: https://cloud.google.com/kubernetes-engine/docs/concepts/about-logs and https://cloud.google.com/kubernetes-engine/docs/how-to/configure-metrics.

--enable-multi-networking
Enables multi-networking on the cluster. Multi-networking is disabled by default.
--enable-network-policy
Enable network policy enforcement for this cluster. If you are enabling network policy on an existing cluster the network policy addon must first be enabled on the master by using --update-addons=NetworkPolicy=ENABLED flag.
--enable-pod-security-policy
Enables the pod security policy admission controller for the cluster. The pod security policy admission controller adds fine-grained pod create and update authorization controls through the PodSecurityPolicy API objects. For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies.
--enable-private-nodes
Standard cluster: Enable private nodes as a default behavior for all newly created node pools, if --enable-private-nodes is not provided at node pool creation time. Modifications to this flag do not affect the `--enable-private-nodes` state of existing node pools.

Autopilot cluster: Force new and existing workloads, without an explicit cloud.google.com/private-node=true node selector, to run on nodes with no public IP address. Modifications to this flag trigger a re-schedule operation on all existing workloads to run on different node VMs.
--[no-]enable-ray-cluster-logging
Enable automatic log processing sidecar for Ray clusters. Use --enable-ray-cluster-logging to enable and --no-enable-ray-cluster-logging to disable.
--[no-]enable-ray-cluster-monitoring
Enable automatic metrics collection for Ray clusters. Use --enable-ray-cluster-monitoring to enable and --no-enable-ray-cluster-monitoring to disable.
--enable-secret-manager
Enables the Secret Manager CSI driver provider component. See https://secrets-store-csi-driver.sigs.k8s.io/introduction https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp

To disable in an existing cluster, explicitly set flag to --no-enable-secret-manager

--enable-service-externalips
Enables use of services with externalIPs field.
--enable-shielded-nodes
Enable Shielded Nodes for this cluster. Enabling Shielded Nodes will enable a more secure Node credential bootstrapping implementation. Starting with version 1.18, clusters will have Shielded GKE nodes by default.
--enable-stackdriver-kubernetes
(DEPRECATED) Enable Cloud Operations for GKE.

The --enable-stackdriver-kubernetes flag is deprecated and will be removed in an upcoming release. Please use --logging and --monitoring instead. For more information, please read: https://cloud.google.com/kubernetes-engine/docs/concepts/about-logs and https://cloud.google.com/kubernetes-engine/docs/how-to/configure-metrics.

Flags for vertical pod autoscaling:

At most one of these can be specified:

--enable-vertical-pod-autoscaling
Enable vertical pod autoscaling for a cluster.
--fleet-project=PROJECT_ID_OR_NUMBER
Sets fleet host project for the cluster. If specified, the current cluster will be registered as a fleet membership under the fleet host project.

Example: $ gcloud beta container clusters update --fleet-project=my-project

--gateway-api=GATEWAY_API
Enables GKE Gateway controller in this cluster. The value of the flag specifies which Open Source Gateway API release channel will be used to define Gateway resources. GATEWAY_API must be one of:
disabled
Gateway controller will be disabled in the cluster.
standard
Gateway controller will be enabled in the cluster. Resource definitions from the standard OSS Gateway API release channel will be installed.
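
For example, a sketch that installs the standard Gateway API resources (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --gateway-api=standard
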
--generate-password
Ask the server to generate a secure password and use that as the basic auth password, keeping the existing username.
--hpa-profile=HPA_PROFILE
Sets the Horizontal Pod Autoscaler behavior, which is none by default. HPA_PROFILE must be one of: none, performance.
--identity-provider=IDENTITY_PROVIDER
Enable a third-party (3P) identity provider on the cluster.
--in-transit-encryption=IN_TRANSIT_ENCRYPTION
Enable Dataplane V2 in-transit encryption. Dataplane v2 in-transit encryption is disabled by default. IN_TRANSIT_ENCRYPTION must be one of: inter-node-transparent, none.
--logging-variant=LOGGING_VARIANT
Specifies the logging variant that will be deployed on all the nodes in the cluster. Valid logging variants are MAX_THROUGHPUT, DEFAULT. If no value is specified, DEFAULT is used. LOGGING_VARIANT must be one of:
DEFAULT
'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT
'MAX_THROUGHPUT' variant requests more node resources and is able to achieve logging throughput up to 10MB per sec.
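
For example, a sketch that switches all nodes to the high-throughput logging agent (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --logging-variant=MAX_THROUGHPUT
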
--maintenance-window=START_TIME
Set a time of day when you prefer maintenance to start on this cluster. For example:
gcloud beta container clusters update example-cluster --maintenance-window=12:43

The time corresponds to the UTC time zone, and must be in HH:MM format.

Non-emergency maintenance will occur in the 4 hour block starting at the specified time.

This is mutually exclusive with the recurring maintenance windows and will overwrite any existing window. Compatible with maintenance exclusions.

To remove an existing maintenance window from the cluster, use '--clear-maintenance-window'.

--network-performance-configs=[PROPERTY1=VALUE1,…]
Configures network performance settings for the cluster. Node pools can override with their own settings.
total-egress-bandwidth-tier
Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED,TIER_1].

See https://cloud.google.com/compute/docs/networking/configure-vm-with-high-bandwidth-configuration for more information.
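
For example, a sketch that requests Tier 1 egress bandwidth for the cluster (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --network-performance-configs=total-egress-bandwidth-tier=TIER_1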

--notification-config=[pubsub=ENABLED|DISABLED,pubsub-topic=TOPIC,…]
The notification configuration of the cluster. GKE supports publishing cluster upgrade notifications to any Pub/Sub topic you created in the same project. Create a subscription for the topic specified to receive notification messages. See https://cloud.google.com/pubsub/docs/admin on how to manage Pub/Sub topics and subscriptions. You can also use the filter option to specify which event types you'd like to receive from the following options: SecurityBulletinEvent, UpgradeEvent, UpgradeAvailableEvent.

Examples:

gcloud beta container clusters update example-cluster --notification-config=pubsub=ENABLED,pubsub-topic=projects/{project}/topics/{topic-name}
gcloud beta container clusters update example-cluster --notification-config=pubsub=ENABLED,pubsub-topic=projects/{project}/topics/{topic-name},filter="SecurityBulletinEvent|UpgradeEvent"

The project of the Pub/Sub topic must be the same one as the cluster. It can be either the project ID or the project number.

--private-ipv6-google-access-type=PRIVATE_IPV6_GOOGLE_ACCESS_TYPE
Sets the type of private access to Google services over IPv6.

PRIVATE_IPV6_GOOGLE_ACCESS_TYPE must be one of:

bidirectional
  Allows Google services to initiate connections to GKE pods in this
  cluster. This is not intended for common use, and requires previous
  integration with Google services.
disabled
  Default value. Disables private access to Google services over IPv6.
outbound-only
  Allows GKE pods to make fast, secure requests to Google services
  over IPv6. This is the most common use of private IPv6 access.
gcloud beta container clusters update example-cluster --private-ipv6-google-access-type=disabled
gcloud beta container clusters update example-cluster --private-ipv6-google-access-type=outbound-only
gcloud beta container clusters update example-cluster --private-ipv6-google-access-type=bidirectional

PRIVATE_IPV6_GOOGLE_ACCESS_TYPE must be one of: bidirectional, disabled, outbound-only.

--release-channel=[CHANNEL]
Subscribe or unsubscribe this cluster to a release channel.

When a cluster is subscribed to a release channel, Google maintains both the master version and the node version. Node auto-upgrade is enabled by default for release channel clusters and can be controlled via upgrade-scope exclusions.

CHANNEL must be one of:

rapid

'rapid' channel is offered on an early access basis for customers
who want to test new releases.
WARNING: Versions available in the 'rapid' channel may be subject
to unresolved issues with no known workaround and are not subject
to any SLAs.

regular

Clusters subscribed to 'regular' receive versions that are
considered GA quality. 'regular' is intended for production users
who want to take advantage of new features.

extended

Clusters subscribed to 'extended' can remain on a minor version for 24 months
from when the minor version is made available in the Regular channel.

stable

Clusters subscribed to 'stable' receive versions that are known to
be stable and reliable in production.

None

Use 'None' to opt-out of any release channel.

CHANNEL must be one of: rapid, regular, extended, stable, None.
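
For example, sketches of subscribing to the regular channel and of opting out of release channels (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --release-channel=regular
gcloud beta container clusters update example-cluster --release-channel=None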

--remove-labels=[KEY,…]
Labels to remove from the Google Cloud resources in use by the Kubernetes Engine cluster. These are unrelated to Kubernetes labels.

Examples:

gcloud beta container clusters update example-cluster --remove-labels=label_a,label_b
--remove-workload-policies=REMOVE_WORKLOAD_POLICIES
Remove Autopilot workload policies from the cluster.

Examples:

gcloud beta container clusters update example-cluster --remove-workload-policies=allow-net-admin

The only supported workload policy is 'allow-net-admin'.

--security-group=SECURITY_GROUP
The name of the RBAC security group for use with Google security groups in Kubernetes RBAC (https://kubernetes.io/docs/reference/access-authn-authz/rbac/).

To include group membership as part of the claims issued by Google during authentication, a group must be designated as a security group by including it as a direct member of this group.

If unspecified, no groups will be returned for use with RBAC.

--security-posture=SECURITY_POSTURE
Sets the mode of the Kubernetes security posture API's off-cluster features.

To enable advanced mode, explicitly set the flag to --security-posture=enterprise.

To enable in standard mode, explicitly set the flag to --security-posture=standard.

To disable in an existing cluster, explicitly set the flag to --security-posture=disabled.

For more information on enablement, see https://cloud.google.com/kubernetes-engine/docs/concepts/about-security-posture-dashboard#feature-enablement.

SECURITY_POSTURE must be one of: disabled, standard, enterprise.
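
For example, a sketch that enables the security posture features in standard mode (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --security-posture=standard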

--set-password
Set the basic auth password to the specified value, keeping the existing username.
--stack-type=STACK_TYPE
IP stack type of the node VMs. STACK_TYPE must be one of: ipv4, ipv4-ipv6.
--start-credential-rotation
Start the rotation of IP and credentials for this cluster. For example:
gcloud beta container clusters update example-cluster --start-credential-rotation

This causes the cluster to serve on two IPs, and will initiate a node upgrade to point to the new IP. See documentation for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/credential-rotation.

--start-ip-rotation
Start the rotation of this cluster to a new IP. For example:
gcloud beta container clusters update example-cluster --start-ip-rotation

This causes the cluster to serve on two IPs, and will initiate a node upgrade to point to the new IP. See documentation for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-rotation.

--tier=TIER
Set the desired tier for the cluster. TIER must be one of: standard, enterprise.
--update-addons=[ADDON=ENABLED|DISABLED,…]
Cluster addons to enable or disable. Options are HorizontalPodAutoscaling=ENABLED|DISABLED HttpLoadBalancing=ENABLED|DISABLED KubernetesDashboard=ENABLED|DISABLED Istio=ENABLED|DISABLED BackupRestore=ENABLED|DISABLED NetworkPolicy=ENABLED|DISABLED CloudRun=ENABLED|DISABLED ConfigConnector=ENABLED|DISABLED NodeLocalDNS=ENABLED|DISABLED GcePersistentDiskCsiDriver=ENABLED|DISABLED GcpFilestoreCsiDriver=ENABLED|DISABLED GcsFuseCsiDriver=ENABLED|DISABLED
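
For example, a sketch that toggles two addons in one call (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --update-addons=HttpLoadBalancing=ENABLED,NetworkPolicy=DISABLED
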
--update-labels=[KEY=VALUE,…]
Labels to apply to the Google Cloud resources in use by the Kubernetes Engine cluster. These are unrelated to Kubernetes labels.

Examples:

gcloud beta container clusters update example-cluster --update-labels=label_a=value1,label_b=value2
--workload-policies=WORKLOAD_POLICIES
Add Autopilot workload policies to the cluster.

Examples:

gcloud beta container clusters update example-cluster --workload-policies=allow-net-admin

The only supported workload policy is 'allow-net-admin'.

--workload-pool=WORKLOAD_POOL
Enable Workload Identity on the cluster.

When enabled, Kubernetes service accounts will be able to act as Cloud IAM Service Accounts, through the provided workload pool.

Currently, the only accepted workload pool is the workload pool of the Cloud project containing the cluster, PROJECT_ID.svc.id.goog.

For more information on Workload Identity, see

https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
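
For example, a sketch using a placeholder project ID my-project:

gcloud beta container clusters update example-cluster --workload-pool=my-project.svc.id.goog
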
--workload-vulnerability-scanning=WORKLOAD_VULNERABILITY_SCANNING
Sets the mode of the Kubernetes security posture API's workload vulnerability scanning.

To enable Advanced vulnerability insights mode explicitly set the flag to --workload-vulnerability-scanning=enterprise.

To enable in standard mode explicitly set the flag to --workload-vulnerability-scanning=standard.

To disable in an existing cluster, explicitly set the flag to --workload-vulnerability-scanning=disabled.

For more information on enablement, see https://cloud.google.com/kubernetes-engine/docs/concepts/about-security-posture-dashboard#feature-enablement.

WORKLOAD_VULNERABILITY_SCANNING must be one of: disabled, standard, enterprise.

--additional-pod-ipv4-ranges=NAME,[NAME,…]
Additional pod IPv4 address ranges (specified by name) to be added to the cluster.

Examples:

gcloud beta container clusters update example-cluster --additional-pod-ipv4-ranges=range1,range2
--remove-additional-pod-ipv4-ranges=NAME,[NAME,…]
Previously added additional pod IPv4 ranges (specified by name) to be removed from the cluster.

Examples:

gcloud beta container clusters update example-cluster --remove-additional-pod-ipv4-ranges=range1,range2
At most one of these can be specified:
--additional-zones=[ZONE,…]
(DEPRECATED) The set of additional zones in which the cluster's node footprint should be replicated. All zones must be in the same region as the cluster's primary zone.

Note that the exact same footprint will be replicated in all zones, such that if you created a cluster with 4 nodes in a single zone and then use this option to spread across 2 more zones, 8 additional nodes will be created.

Multiple locations can be specified, separated by commas. For example:

gcloud beta container clusters update example-cluster --zone us-central1-a --additional-zones us-central1-b,us-central1-c

To remove all zones other than the cluster's primary zone, pass the empty string to the flag. For example:

gcloud beta container clusters update example-cluster --zone us-central1-a --additional-zones ""

This flag is deprecated. Use --node-locations=PRIMARY_ZONE,[ZONE,…] instead.

--node-locations=ZONE,[ZONE,…]
The set of zones in which the specified node footprint should be replicated. All zones must be in the same region as the cluster's master(s), specified by the --location, --zone, or --region flag. Additionally, for zonal clusters, --node-locations must contain the cluster's primary zone. If not specified, all nodes will be in the cluster's primary zone (for zonal clusters) or spread across three randomly chosen zones within the cluster's region (for regional clusters).

Note that NUM_NODES nodes will be created in each zone, such that if you specify --num-nodes=4 and choose two locations, 8 nodes will be created.

Multiple locations can be specified, separated by commas. For example:

gcloud beta container clusters update example-cluster --location us-central1-a --node-locations us-central1-a,us-central1-b
--auto-monitoring-scope=AUTO_MONITORING_SCOPE
Enables Auto-Monitoring for a specific scope within the cluster. ALL: Enables Auto-Monitoring for all supported workloads within the cluster. NONE: Disables Auto-Monitoring. AUTO_MONITORING_SCOPE must be one of: ALL, NONE.
--logging=[COMPONENT,…]
Set the components that have logging enabled. Valid component values are: SYSTEM, WORKLOAD, API_SERVER, CONTROLLER_MANAGER, SCHEDULER, NONE

For more information, see https://cloud.google.com/kubernetes-engine/docs/concepts/about-logs#available-logs

Examples:

gcloud beta container clusters update --logging=SYSTEM
gcloud beta container clusters update --logging=SYSTEM,API_SERVER,WORKLOAD
gcloud beta container clusters update --logging=NONE
--monitoring=[COMPONENT,…]
Set the components that have monitoring enabled. Valid component values are: SYSTEM, WORKLOAD (Deprecated), NONE, API_SERVER, CONTROLLER_MANAGER, SCHEDULER, DAEMONSET, DEPLOYMENT, HPA, POD, STATEFULSET, STORAGE, CADVISOR, KUBELET, DCGM

For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/configure-metrics#available-metrics

Examples:

gcloud beta container clusters update --monitoring=SYSTEM,API_SERVER,POD
gcloud beta container clusters update --monitoring=NONE
At most one of these can be specified:
--disable-managed-prometheus
Disable managed collection for Managed Service for Prometheus.
--enable-managed-prometheus
Enables managed collection for Managed Service for Prometheus in the cluster.

See https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed#enable-mgdcoll-gke for more info.

Enabled by default for cluster versions 1.27 or greater; use --no-enable-managed-prometheus to disable.

Flags for Binary Authorization:
--binauthz-policy-bindings=[name=BINAUTHZ_POLICY]
The relative resource name of the Binary Authorization policy to audit and/or enforce. GKE policies have the following format: projects/{project_number}/platforms/gke/policies/{policy_id}.
At most one of these can be specified:
--binauthz-evaluation-mode=BINAUTHZ_EVALUATION_MODE
Enable Binary Authorization for this cluster. BINAUTHZ_EVALUATION_MODE must be one of: disabled, policy-bindings, policy-bindings-and-project-singleton-policy-enforce, project-singleton-policy-enforce.
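
For example, a sketch that enforces the project-singleton policy (example-cluster is a placeholder):

gcloud beta container clusters update example-cluster --binauthz-evaluation-mode=project-singleton-policy-enforce
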
--enable-binauthz
(DEPRECATED) Enable Binary Authorization for this cluster.

The --enable-binauthz flag is deprecated. Please use --binauthz-evaluation-mode instead.

At most one of these can be specified:
--clear-maintenance-window
If set, remove the maintenance window that was set with the --maintenance-window family of flags.
--remove-maintenance-exclusion=NAME
Name of a maintenance exclusion to remove. If you hadn't specified a name, one was auto-generated. Get it with $ gcloud container clusters describe.
Sets a period of time in which maintenance should not occur. This is compatible with both daily and recurring maintenance windows. If --add-maintenance-exclusion-scope is not specified, the exclusion will exclude all upgrades.

Examples:

gcloud beta container clusters update example-cluster   --add-maintenance-exclusion-name=holidays-2000   --add-maintenance-exclusion-start=2000-11-20T00:00:00   --add-maintenance-exclusion-end=2000-12-31T23:59:59   --add-maintenance-exclusion-scope=no_upgrades
--add-maintenance-exclusion-end=TIME_STAMP
End time of the exclusion window. Must take place after the start time. See $ gcloud topic datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group are specified.

--add-maintenance-exclusion-name=NAME
A descriptor for the exclusion that can be used to remove it. If not specified, it will be autogenerated.
--add-maintenance-exclusion-scope=SCOPE
Scope of the exclusion window to specify the type of upgrades that the exclusion will apply to. Must be one of no_upgrades, no_minor_upgrades, or no_minor_or_node_upgrades. If not specified in an exclusion, defaults to no_upgrades.
--add-maintenance-exclusion-start=TIME_STAMP
Start time of the exclusion window (can occur in the past). If not specified, the current time will be used. See $ gcloud topic datetimes for information on time formats.
Set a flexible maintenance window by specifying a window that recurs per an RFC 5545 RRULE. Non-emergency maintenance will occur in the recurring windows.

Examples:

For a 9-5 Mon-Wed UTC-4 maintenance window:

gcloud beta container clusters update example-cluster --maintenance-window-start=2000-01-01T09:00:00-04:00 --maintenance-window-end=2000-01-01T17:00:00-04:00 --maintenance-window-recurrence='FREQ=WEEKLY;BYDAY=MO,TU,WE'

For a daily window from 22:00 - 04:00 UTC:

gcloud beta container clusters update example-cluster --maintenance-window-start=2000-01-01T22:00:00Z --maintenance-window-end=2000-01-02T04:00:00Z --maintenance-window-recurrence=FREQ=DAILY
--maintenance-window-end=TIME_STAMP
End time of the first window (can occur in the past). Must take place after the start time. The difference in start and end time specifies the length of each recurrence. See $ gcloud topic datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group are specified.

--maintenance-window-recurrence=RRULE
An RFC 5545 RRULE, specifying how the window will recur. Note that minimum requirements for maintenance periods will be enforced. Note that FREQ=SECONDLY, MINUTELY, and HOURLY are not supported.

This flag argument must be specified if any of the other arguments in this group are specified.

--maintenance-window-start=TIME_STAMP
Start time of the first window (can occur in the past). The start time influences when the window will start for recurrences. See $ gcloud topic datetimes for information on time formats.

This flag argument must be specified if any of the other arguments in this group are specified.

Exports cluster's usage of cloud resources

At most one of these can be specified:

--clear-resource-usage-bigquery-dataset
Disables exporting cluster resource usage to BigQuery.
--enable-network-egress-metering
Enable network egress metering on this cluster.

When enabled, a DaemonSet is deployed into the cluster. Each DaemonSet pod meters network egress traffic by collecting data from the conntrack table, and exports the metered metrics to the specified destination.

Network egress metering is disabled if this flag is omitted, or when --no-enable-network-egress-metering is set.

--enable-resource-consumption-metering
Enable resource consumption metering on this cluster.

When enabled, a table will be created in the specified BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export.

To disable resource consumption metering, set --no-enable-resource-consumption-metering. If this flag is omitted, then resource consumption metering will remain enabled or disabled depending on what is already configured for this cluster.

--resource-usage-bigquery-dataset=RESOURCE_USAGE_BIGQUERY_DATASET
The name of the BigQuery dataset to which the cluster's usage of cloud resources is exported. A table will be created in the specified dataset to store cluster resource usage. The resulting table can be joined with BigQuery Billing Export to produce a fine-grained cost breakdown.

Examples:

gcloud beta container clusters update example-cluster --resource-usage-bigquery-dataset=example_bigquery_dataset_name
ClusterDNS
--cluster-dns=CLUSTER_DNS
DNS provider to use for this cluster. CLUSTER_DNS must be one of:
clouddns
Selects Cloud DNS as the DNS provider for the cluster.
default
Selects the default DNS provider (kube-dns) for the cluster.
kubedns
Selects Kube DNS as the DNS provider for the cluster.
--cluster-dns-domain=CLUSTER_DNS_DOMAIN
DNS domain for this cluster. The default value is cluster.local. This is configurable when --cluster-dns=clouddns and --cluster-dns-scope=vpc are set. The value must be a valid DNS subdomain as defined in RFC 1123.
--cluster-dns-scope=CLUSTER_DNS_SCOPE
DNS scope for the Cloud DNS zone created - valid only with --cluster-dns=clouddns. Defaults to cluster.

CLUSTER_DNS_SCOPE must be one of:

cluster
Configures the Cloud DNS zone to be private to the cluster.
vpc
Configures the Cloud DNS zone to be private to the VPC Network.
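
For example, a sketch that switches the cluster to Cloud DNS with VPC scope and a placeholder domain:

gcloud beta container clusters update example-cluster --cluster-dns=clouddns --cluster-dns-scope=vpc --cluster-dns-domain=example.local
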
At most one of these can be specified:
--additive-vpc-scope-dns-domain=ADDITIVE_VPC_SCOPE_DNS_DOMAIN
The domain used in Additive VPC scope. Only works with Cluster Scope.
--disable-additive-vpc-scope
Disables Additive VPC Scope.
At most one of these can be specified:
--dataplane-v2-observability-mode=DATAPLANE_V2_OBSERVABILITY_MODE
(REMOVED) Select Advanced Datapath Observability mode for the cluster. Defaults to DISABLED.

Advanced Datapath Observability allows for a real-time view into pod-to-pod traffic within your cluster.

Examples:

gcloud beta container clusters update --dataplane-v2-observability-mode=DISABLED
gcloud beta container clusters update --dataplane-v2-observability-mode=INTERNAL_VPC_LB
gcloud beta container clusters update --dataplane-v2-observability-mode=EXTERNAL_LB

Flag --dataplane-v2-observability-mode has been removed.

DATAPLANE_V2_OBSERVABILITY_MODE must be one of:

DISABLED
Disables Advanced Datapath Observability.
EXTERNAL_LB
Makes Advanced Datapath Observability available to the external network.
INTERNAL_VPC_LB
Makes Advanced Datapath Observability available from the VPC network.
--disable-dataplane-v2-flow-observability
Disables Advanced Datapath Observability.
--enable-dataplane-v2-flow-observability
Enables Advanced Datapath Observability which allows for a real-time view into pod-to-pod traffic within your cluster.
At most one of these can be specified:
--disable-dataplane-v2-metrics
Stops exposing advanced datapath flow metrics on node port.
--enable-dataplane-v2-metrics
Exposes advanced datapath flow metrics on node port.
--enable-authorized-networks-on-private-endpoint
Enable enforcement of --master-authorized-networks CIDR ranges for traffic reaching the cluster's control plane via private IP.
--enable-dns-access
Enable access to the cluster's control plane over a DNS-based endpoint.

DNS-based control plane access is recommended.

--enable-google-cloud-access
When you enable Google Cloud Access, any public IP addresses owned by Google Cloud can reach the public control plane endpoint of your cluster.
--enable-ip-access
Enable access to the cluster's control plane over private IP and public IP if --enable-private-endpoint is not enabled.
--enable-master-global-access
Use with private clusters to allow access to the master's private endpoint from any Google Cloud region or on-premises environment regardless of the private cluster's region.
--enable-private-endpoint
Enables the cluster's control plane to be accessible using a private IP address only.
Master Authorized Networks
--enable-master-authorized-networks
Allow only a specified set of CIDR blocks (specified by the --master-authorized-networks flag) to connect to the Kubernetes master through HTTPS. Besides these blocks, the following have access as well:
1) The private network the cluster connects to if `--enable-private-nodes` is specified.
2) Google Compute Engine Public IPs if `--enable-private-nodes` is not specified.

Use --no-enable-master-authorized-networks to disable. When disabled, public internet (0.0.0.0/0) is allowed to connect to Kubernetes master through HTTPS.

--master-authorized-networks=NETWORK,[NETWORK,…]
The list of CIDR blocks (up to 100 for private cluster, 50 for public cluster) that are allowed to connect to Kubernetes master through HTTPS. Specified in CIDR notation (e.g. 1.2.3.4/30). Cannot be specified unless --enable-master-authorized-networks is also specified.
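
For example, a sketch that restricts control plane access to a single placeholder CIDR block:

gcloud beta container clusters update example-cluster --enable-master-authorized-networks --master-authorized-networks=192.0.2.0/24
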
Node autoprovisioning
--enable-autoprovisioning
Enables node autoprovisioning for a cluster.

Cluster Autoscaler will be able to create new node pools. Requires maximum CPU and memory limits to be specified.

This flag argument must be specified if any of the other arguments in this group are specified.

At most one of these can be specified:
--autoprovisioning-config-file=PATH_TO_FILE
Path of the JSON/YAML file which contains information about the cluster's node autoprovisioning configuration. Currently it contains a list of resource limits, identity defaults for autoprovisioning, node upgrade settings, node management settings, minimum cpu platform, image type, node locations for autoprovisioning, disk type and size configuration, Shielded instance settings, and customer-managed encryption keys settings.

Resource limits are specified in the field 'resourceLimits'. Each resource limits definition contains three fields: resourceType, maximum and minimum. Resource type can be "cpu", "memory" or an accelerator (e.g. "nvidia-tesla-t4" for NVIDIA T4). Use gcloud compute accelerator-types list to learn about available accelerator types. Maximum is the maximum allowed amount with the unit of the resource. Minimum is the minimum allowed amount with the unit of the resource.

Identity default contains at most one of the below fields: serviceAccount: The Google Cloud Platform Service Account to be used by node VMs in autoprovisioned node pools. If not specified, the project's default service account is used. scopes: A list of scopes to be used by node instances in autoprovisioned node pools. Multiple scopes can be specified, separated by commas. For information on defaults, look at: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes

Node Upgrade settings are specified under the field 'upgradeSettings', which has the following fields: maxSurgeUpgrade: Number of extra (surge) nodes to be created on each upgrade of an autoprovisioned node pool. maxUnavailableUpgrade: Number of nodes that can be unavailable at the same time on each upgrade of an autoprovisioned node pool.

Node Management settings are specified under the field 'management', which has the following fields: autoUpgrade: A boolean field that indicates if node autoupgrade is enabled for autoprovisioned node pools. autoRepair: A boolean field that indicates if node autorepair is enabled for autoprovisioned node pools.

minCpuPlatform (deprecated): If specified, new autoprovisioned nodes will be scheduled on a host with the specified CPU architecture or a newer one. Note: Min CPU platform can only be specified in Beta and Alpha.

Autoprovisioned node image is specified under the 'imageType' field. If not specified the default value will be applied.

Autoprovisioning locations is a set of zones where new node pools can be created by Autoprovisioning. Autoprovisioning locations are specified in the field 'autoprovisioningLocations'. All zones must be in the same region as the cluster's master(s).

Disk type and size are specified under the 'diskType' and 'diskSizeGb' fields, respectively. If specified, new autoprovisioned nodes will be created with custom boot disks configured by these settings.

Shielded instance settings are specified under the 'shieldedInstanceConfig' field, which has the following fields: enableSecureBoot: A boolean field that indicates if secure boot is enabled for autoprovisioned nodes. enableIntegrityMonitoring: A boolean field that indicates if integrity monitoring is enabled for autoprovisioned nodes.

Customer Managed Encryption Keys (CMEK) used by new auto-provisioned node pools can be specified in the 'bootDiskKmsKey' field.

Use a full or relative path to a local file containing the value of autoprovisioning_config_file.
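
For example, a sketch that enables node autoprovisioning from a local configuration file (the file name nap-config.yaml is a placeholder):

gcloud beta container clusters update example-cluster --enable-autoprovisioning --autoprovisioning-config-file=nap-config.yaml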

Flags to configure autoprovisioned nodes
--autoprovisioning-image-type=AUTOPROVISIONING_IMAGE_TYPE
Node autoprovisioning will create new nodes with the specified image type.
--autoprovisioning-locations=ZONE,[ZONE,…]
Set of zones where new node pools can be created by autoprovisioning. All zones must be in the same region as the cluster's master(s). Multiple locations can be specified, separated by commas.
--autoprovisioning-min-cpu-platform=PLATFORM
(DEPRECATED) If specified, new autoprovisioned nodes will be scheduled on a host with the specified CPU architecture or a newer one.

The --autoprovisioning-min-cpu-platform flag is deprecated and will be removed in an upcoming release. More info: https://cloud.google.com/kubernetes-engine/docs/release-notes#March_08_2022

--max-cpu=MAX_CPU
Maximum number of cores in the cluster.

Maximum number of cores to which the cluster can scale.

--max-memory=MAX_MEMORY
Maximum memory in the cluster.

Maximum number of gigabytes of memory to which the cluster can scale.

--min-cpu=MIN_CPU
Minimum number of cores in the cluster.

Minimum number of cores to which the cluster can scale.

--min-memory=MIN_MEMORY
Minimum memory in the cluster.

Minimum number of gigabytes of memory to which the cluster can scale.

Flags to specify upgrade settings for autoprovisioned nodes:
--autoprovisioning-max-surge-upgrade=AUTOPROVISIONING_MAX_SURGE_UPGRADE
Number of extra (surge) nodes to be created on each upgrade of an autoprovisioned node pool.
--autoprovisioning-max-unavailable-upgrade=AUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of an autoprovisioned node pool.
--autoprovisioning-node-pool-soak-duration=AUTOPROVISIONING_NODE_POOL_SOAK_DURATION
Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the update. This argument should be used in conjunction with --enable-autoprovisioning-blue-green-upgrade to take effect.
--autoprovisioning-standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]
Standard rollout policy options for blue-green upgrade. This argument should be used in conjunction with --enable-autoprovisioning-blue-green-upgrade to take effect.

Batch sizes are specified by one of batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.

Example: --autoprovisioning-standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s or --autoprovisioning-standard-rollout-policy=batch-percent=0.05,batch-soak-duration=180s

Flag group to choose the top level upgrade option:

At most one of these can be specified:

--enable-autoprovisioning-blue-green-upgrade
Whether to use blue-green upgrade for the autoprovisioned node pool.
--enable-autoprovisioning-surge-upgrade
Whether to use surge upgrade for the autoprovisioned node pool.
Flags to specify identity for autoprovisioned nodes:
--autoprovisioning-scopes=[SCOPE,…]
The scopes to be used by node instances in autoprovisioned node pools. Multiple scopes can be specified, separated by commas. For information on defaults, look at: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes
--autoprovisioning-service-account=AUTOPROVISIONING_SERVICE_ACCOUNT
The Google Cloud Platform Service Account to be used by node VMs in autoprovisioned node pools. If not specified, the project default service account is used.
Flags to specify node management settings for autoprovisioned nodes:
--enable-autoprovisioning-autorepair
Enable node autorepair for autoprovisioned node pools. Use --no-enable-autoprovisioning-autorepair to disable.

This flag argument must be specified if any of the other arguments in this group are specified.

--enable-autoprovisioning-autoupgrade
Enable node autoupgrade for autoprovisioned node pools. Use --no-enable-autoprovisioning-autoupgrade to disable.

This flag argument must be specified if any of the other arguments in this group are specified.

Arguments to set limits on accelerators:
--max-accelerator=[type=TYPE,count=COUNT,…]
Sets the maximum limit for a single type of accelerator (e.g. GPUs) in the cluster.
type
(Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator for which the limit is set. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Required) The maximum number of accelerators to which the cluster can be scaled.

This flag argument must be specified if any of the other arguments in this group are specified.

--min-accelerator=[type=TYPE,count=COUNT,…]
Sets the minimum limit for a single type of accelerator (e.g. GPUs) in the cluster. Defaults to 0 for all accelerator types if it isn't set.
type
(Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator for which the limit is set. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Required) The minimum number of accelerators to which the cluster can be scaled.
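
For example, a sketch that enables node autoprovisioning with inline resource limits, including a placeholder GPU limit:

gcloud beta container clusters update example-cluster --enable-autoprovisioning --min-cpu=1 --max-cpu=32 --min-memory=4 --max-memory=128 --max-accelerator=type=nvidia-tesla-t4,count=4
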
--enable-insecure-binding-system-authenticated
Allow using system:authenticated as a subject in ClusterRoleBindings and RoleBindings. Allowing bindings that reference system:authenticated is a security risk and is not recommended.

To disallow binding system:authenticated in a cluster, explicitly set the --no-enable-insecure-binding-system-authenticated flag instead.

--enable-insecure-binding-system-unauthenticated
Allow using system:unauthenticated and system:anonymous as subjects in ClusterRoleBindings and RoleBindings. Allowing bindings that reference system:unauthenticated and system:anonymous is a security risk and is not recommended.

To disallow binding system:unauthenticated and system:anonymous in a cluster, explicitly set the --no-enable-insecure-binding-system-unauthenticated flag instead.

Flags relating to Cloud TPUs:
--enable-tpu
Enable Cloud TPUs for this cluster.

Cannot be specified unless --enable-ip-alias is also specified.

At most one of these can be specified:
--enable-tpu-service-networking
Enable Cloud TPU's Service Networking mode. In this mode, the CIDR blocks used by the Cloud TPUs will be allocated and managed by Service Networking, instead of Kubernetes Engine.

This cannot be specified if tpu-ipv4-cidr is specified.

--tpu-ipv4-cidr=CIDR
Set the IP range for the Cloud TPUs.

Can be specified as a netmask size (e.g. '/20') or in CIDR notation (e.g. '10.100.0.0/20'). If given as a netmask size, the IP range will be chosen automatically from the available space in the network.

If unspecified, the TPU CIDR range defaults to '/20'.

Cannot be specified unless '--enable-tpu' and '--enable-ip-alias' are also specified.
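
For example, a sketch assuming the cluster was created with --enable-ip-alias (the CIDR value is a placeholder):

gcloud beta container clusters update example-cluster --enable-tpu --tpu-ipv4-cidr=10.100.0.0/20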

--logging-service=LOGGING_SERVICE
(DEPRECATED) Logging service to use for the cluster. Options are: "logging.googleapis.com/kubernetes" (the Google Cloud Logging service with Kubernetes-native resource model enabled), "logging.googleapis.com" (the Google Cloud Logging service), "none" (logs will not be exported from the cluster)

The --logging-service flag is deprecated and will be removed in an upcoming release. Please use --logging instead. For more information, please read: https://cloud.google.com/kubernetes-engine/docs/concepts/about-logs.

--monitoring-service=MONITORING_SERVICE
(DEPRECATED) Monitoring service to use for the cluster. Options are: "monitoring.googleapis.com/kubernetes" (the Google Cloud Monitoring service with Kubernetes-native resource model enabled), "monitoring.googleapis.com" (the Google Cloud Monitoring service), "none" (no metrics will be exported from the cluster)

The --monitoring-service flag is deprecated and will be removed in an upcoming release. Please use --monitoring instead. For more information, please read: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-metrics.

Basic auth
--password=PASSWORD
The password to use for cluster auth. Defaults to a server-specified randomly-generated string.
Options to specify the username.

At most one of these can be specified:

--enable-basic-auth
Enable basic (username/password) auth for the cluster. --enable-basic-auth is an alias for --username=admin; --no-enable-basic-auth is an alias for --username="". Use --password to specify a password; if not, the server will randomly generate one. For cluster versions before 1.12, if neither --enable-basic-auth nor --username is specified, --enable-basic-auth will default to true. After 1.12, --enable-basic-auth will default to false.
--username=USERNAME, -u USERNAME
The user name to use for basic auth for the cluster. Use --password to specify a password; if not, the server will randomly generate one.
OPTIONAL FLAGS
--async
Return immediately, without waiting for the operation in progress to complete.
--cloud-run-config=[load-balancer-type=EXTERNAL,…]
Configurations for Cloud Run addon, requires --addons=CloudRun for create and --update-addons=CloudRun=ENABLED for update.
load-balancer-type
(Optional) Type of load balancer: EXTERNAL or INTERNAL.

Examples:

gcloud beta container clusters update example-cluster --cloud-run-config=load-balancer-type=INTERNAL
--istio-config=[auth=MTLS_PERMISSIVE,…]
(REMOVED) Configurations for Istio addon, requires --addons contains Istio for create, or --update-addons Istio=ENABLED for update.
auth
(Optional) Type of auth: MTLS_PERMISSIVE or MTLS_STRICT.

Examples:

gcloud beta container clusters update example-cluster --istio-config=auth=MTLS_PERMISSIVE

The --istio-config flag is no longer supported. For more information and migration, see https://cloud.google.com/istio/docs/istio-on-gke/migrate-to-anthos-service-mesh.

--node-pool=NODE_POOL
Node pool to be updated.
At most one of these can be specified:
--location=LOCATION
Compute zone or region (e.g. us-central1-a or us-central1) for the cluster. Overrides the default compute/region or compute/zone value for this command invocation. Prefer using this flag over the --region or --zone flags.
--region=REGION
Compute region (e.g. us-central1) for a regional cluster. Overrides the default compute/region property value for this command invocation.
--zone=ZONE, -z ZONE
Compute zone (e.g. us-central1-a) for a zonal cluster. Overrides the default compute/zone property value for this command invocation.
Cluster autoscaling
--location-policy=LOCATION_POLICY
Location policy specifies the algorithm used when scaling up the node pool.
  • BALANCED - Is a best effort policy that aims to balance the sizes of available zones.
  • ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.

LOCATION_POLICY must be one of: BALANCED, ANY.

--max-nodes=MAX_NODES
Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--min-nodes=MIN_NODES
Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-max-nodes=TOTAL_MAX_NODES
Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-min-nodes=TOTAL_MIN_NODES
Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in beta and might change without notice. These variants are also available:
gcloud container clusters update
gcloud alpha container clusters update