Using node auto-provisioning


This page explains how to use node auto-provisioning in Standard Google Kubernetes Engine (GKE) clusters.

With Autopilot clusters, you don't need to worry about provisioning nodes or managing node pools because node pools are automatically provisioned through node auto-provisioning, and are automatically scaled to meet the requirements of your workloads.

Overview

Node auto-provisioning automatically manages a set of node pools on your behalf. Without node auto-provisioning, GKE considers starting new nodes only from the set of user-created node pools. With node auto-provisioning, new node pools can be created and deleted automatically.

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.

Using gcloud config

  • Set your default project ID:
    gcloud config set project PROJECT_ID
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone COMPUTE_ZONE
  • If you are working with Autopilot or regional clusters, set your default compute region:
    gcloud config set compute/region COMPUTE_REGION
  • Update gcloud to the latest version:
    gcloud components update

Requirements

Node auto-provisioning is available in the following GKE releases:

  • Version 1.11.2-gke.25 and later for zonal clusters.
  • Version 1.12.x and later for regional clusters.

Unsupported features

The following features are not supported by node auto-provisioning. Node auto-provisioning will not provision node pools that use these features, but it will still autoscale existing node pools that use them:

Operation

Node auto-provisioning is a mechanism of the cluster autoscaler, which scales on a per-node pool basis. With node auto-provisioning enabled, the cluster autoscaler can create new node pools automatically based on the specifications of unschedulable Pods.

Node auto-provisioning creates node pools based on the following information:

Resource limits

Node auto-provisioning and the cluster autoscaler have limits at two levels:

  • Node pool level
  • Cluster level

Limits for node pools

Node pools created by node auto-provisioning are limited to 1000 nodes.

Limits for clusters

The limits you define are enforced based on the total CPU and memory resources used across your cluster, not just auto-provisioned pools.

The cluster autoscaler does not create new nodes if doing so would exceed one of the defined limits. If limits are already exceeded, nodes are not automatically deleted.
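
Cluster-level limits use the same resourceLimits format as the configuration file examples later on this page. A minimal sketch, with illustrative values:

resourceLimits:
  - resourceType: 'cpu'
    minimum: 1
    maximum: 10
  - resourceType: 'memory'
    maximum: 64

With these limits, the cluster autoscaler stops adding nodes once the cluster as a whole would exceed 10 CPUs or 64 gigabytes of memory, regardless of which node pool the new nodes would belong to.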

Workload separation

If pending Pods have node affinities and tolerations, node auto-provisioning can provision nodes with matching labels and taints.

Node auto-provisioning might create node pools with labels and taints if all of the following conditions are met:

  • A pending Pod requires a node with a specific label key and value.
  • The Pod has a toleration for a taint with the same key.
  • The toleration is for the NoSchedule effect, NoExecute effect, or all effects.

The Pod's specification can express that it requires nodes with specific labels in two ways:

  • Using a nodeSelector field.
  • Using a nodeAffinity field with an In operator and exactly one value.

The following example is an excerpt of a Pod specification that is interpreted as a workload separation request. In this example, the cluster administrator has chosen dedicated as the key that will be used for workload isolation, and the UI team has determined that they need dedicated nodes for their workloads.

The Pod has a toleration for nodes labeled with dedicated=ui-team and uses nodeAffinity for node selection:

spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: ui-team
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - ui-team

If this Pod exists, nodes with taint dedicated=ui-team:NoSchedule and label dedicated=ui-team are considered for creation by node auto-provisioning.

The following example uses nodeSelector and has the same effect:

spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: ui-team
    effect: NoSchedule
  nodeSelector:
    dedicated: ui-team

Deletion of auto-provisioned node pools

When there are no nodes in an auto-provisioned node pool, GKE deletes the node pool. Node pools that are not marked as auto-provisioned are not deleted.

Supported machine types

As of GKE 1.19.7-gke.800, node auto-provisioning considers all machine types when creating node pools, with the exception that for N1 machines, node auto-provisioning only considers machines with up to 64 vCPUs. By default, the E2 machine family is used, unless:

  • The workload requests a feature that is not available in the E2 machine family. For example, if a GPU is requested by the workload, the N1 machine family will be used for the new node pool.
  • The workload uses the machine-family label. This may be overridden if the workload request cannot be fulfilled (per the previous point). For more information, see Using a custom machine family.

In versions prior to GKE 1.19.7-gke.800, node auto-provisioning only considers N1 machine types with up to 64 vCPUs.

Supported node images

Node auto-provisioning creates node pools using one of the following node images:

  • Container-Optimized OS (cos or cos_containerd).
  • Ubuntu (ubuntu or ubuntu_containerd).

Support for preemptible VMs

Node auto-provisioning supports creating node pools based on preemptible virtual machine (VM) instances.

Creating preemptible VM-based node pools is considered only if unschedulable Pods with a toleration for the cloud.google.com/gke-preemptible="true":NoSchedule taint exist.

Nodes based on preemptible VMs that are created in auto-provisioned node pools have this taint applied.
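
For illustration, here is a minimal Pod spec excerpt with that toleration; the nodeSelector line is optional and is shown only to keep the Pod off non-preemptible nodes:

spec:
  tolerations:
  - key: cloud.google.com/gke-preemptible
    operator: Equal
    value: "true"
    effect: NoSchedule
  nodeSelector:
    cloud.google.com/gke-preemptible: "true"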

Support for Pods requesting ephemeral storage

Node auto-provisioning supports creating node pools when Pods request ephemeral storage. The size of the boot disk provisioned in the node pools is constant for all new auto-provisioned node pools. This boot disk size can be customized; the default is 100 GiB. Ephemeral storage backed by local SSDs is not supported.

Node auto-provisioning will provision a node pool only if the allocatable ephemeral storage of a node with a specified boot disk is greater than or equal to the ephemeral storage request of a pending Pod. If the ephemeral storage request is higher than what is allocatable, node auto-provisioning will not provision a node pool. Disk sizes for nodes are not dynamically configured based on ephemeral storage requests of pending Pods.

This ephemeral storage support for Pods is supported in GKE versions 1.19.7-gke.800 or later, and 1.18.12-gke.1210 or later.
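
A Pod requests ephemeral storage through standard Kubernetes resource requests. A minimal sketch (the container name and image are placeholders); node auto-provisioning provisions a node pool for this Pod only if the requested 20 GiB fits within the allocatable ephemeral storage of the configured boot disk:

spec:
  containers:
  - name: app
    image: example-image
    resources:
      requests:
        ephemeral-storage: "20Gi"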

Scalability limitations

Node auto-provisioning has the same limitations as the cluster autoscaler, as well as the following additional limitations:

Limit on number of separated workloads
Node auto-provisioning supports a maximum of 100 distinct separated workloads.
Limit on number of node pools
Node auto-provisioning de-prioritizes creating new node pools when the number of pools approaches 100. Creating more than 100 node pools is possible, but only happens when creating a node pool is the only option to schedule a pending Pod.

Enabling node auto-provisioning

You can enable node auto-provisioning on a cluster by using the gcloud tool or the Google Cloud Console.

gcloud

To enable node auto-provisioning, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --min-cpu MINIMUM_CPU \
    --min-memory MINIMUM_MEMORY \
    --max-cpu MAXIMUM_CPU \
    --max-memory MAXIMUM_MEMORY \
    --autoprovisioning-scopes=https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/devstorage.read_only

Replace the following:

  • CLUSTER_NAME: the name of the cluster to enable node auto-provisioning.
  • MINIMUM_CPU: the minimum number of cores in the cluster.
  • MINIMUM_MEMORY: the minimum number of gigabytes of memory in the cluster.
  • MAXIMUM_CPU: the maximum number of cores in the cluster.
  • MAXIMUM_MEMORY: the maximum number of gigabytes of memory in the cluster.

The following example enables node auto-provisioning on the dev-cluster cluster and allows the total cluster size to scale from 1 CPU and 1 gigabyte of memory up to a maximum of 10 CPUs and 64 gigabytes of memory:

gcloud container clusters update dev-cluster \
    --enable-autoprovisioning \
    --min-cpu 1 \
    --min-memory 1 \
    --max-cpu 10 \
    --max-memory 64

Console

To enable node auto-provisioning, perform the following steps:

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. Select the desired cluster.

  3. Click the Edit icon.

  4. Scroll down to Node auto-provisioning and select Enabled.

  5. Set your desired minimum and maximum CPU and memory usage for the cluster.

  6. Click Save. GKE updates your cluster.

Using an auto-provisioning config file

Node auto-provisioning can be configured by using a YAML configuration file. A configuration file can contain as little as a single line if it changes a single setting, or it can specify multiple settings; in that case, all of those settings are changed when the configuration file is applied.

Some advanced configurations can only be specified by using a configuration file.

Example 1: Applying the following configuration file enables node auto-repair and auto-upgrade for any new node pools created by node auto-provisioning:

management:
  autoRepair: true
  autoUpgrade: true

Example 2: Applying the following configuration file would change the following settings:

  • Sets resource limits for CPU, memory and GPU. Node auto-provisioning will not create a node if the total size of the cluster exceeds the specified resource limits.
  • Enables node auto-repair and auto-upgrade for any new node pools created by node auto-provisioning.
  • Enables Secure Boot and integrity monitoring for any new node pools created by node auto-provisioning.
  • Sets boot disk size to 100 GB for any new node pools created by node auto-provisioning.

resourceLimits:
  - resourceType: 'cpu'
    minimum: 4
    maximum: 10
  - resourceType: 'memory'
    maximum: 64
  - resourceType: 'nvidia-tesla-k80'
    maximum: 4
management:
  autoRepair: true
  autoUpgrade: true
shieldedInstanceConfig:
  enableSecureBoot: true
  enableIntegrityMonitoring: true
diskSizeGb: 100

To use an auto-provisioning configuration file:

  1. Create a file with the desired configuration in a location where gcloud can access it.

  2. Apply the configuration to your cluster by running the following command:

    gcloud container clusters update CLUSTER_NAME \
       --enable-autoprovisioning \
       --autoprovisioning-config-file FILE_NAME

    Replace the following:

    • CLUSTER_NAME: the name of the cluster.
    • FILE_NAME: the name of the configuration file.

    For more information, see the gcloud container clusters update documentation.

Auto-provisioning defaults

Node auto-provisioning looks at Pod requirements in your cluster to determine what type of nodes would best fit those Pods. However, some node pool settings are not directly specified by Pods (for example, settings related to node upgrades). You can set default values for these settings, which are applied to all newly created node pools.

Setting the default node image type

You can specify the node image type to use for all new auto-provisioned node pools using the gcloud tool or a configuration file. This setting is only available for GKE cluster version 1.20.6-gke.1800 and later.

gcloud

To set the default node image type, run the following command:

gcloud container clusters update CLUSTER_NAME \
  --enable-autoprovisioning \
  --autoprovisioning-image-type IMAGE_TYPE

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • IMAGE_TYPE: the node image type, which can be one of the following:

    • cos_containerd: Container-Optimized OS with containerd.
    • cos: Container-Optimized OS with Docker.
    • ubuntu_containerd: Ubuntu with containerd.
    • ubuntu: Ubuntu with Docker.

File

For all new auto-provisioned node pools, you can specify the node image type to use by using a configuration file. The following YAML configuration specifies that for new auto-provisioned node pools, the image type is cos_containerd, and has associated resource limits for CPU and memory. You must specify maximum values for CPU and memory to enable auto-provisioning.

  1. Save the YAML configuration:

    resourceLimits:
      - resourceType: 'cpu'
        minimum: 4
        maximum: 10
      - resourceType: 'memory'
        maximum: 64
    autoprovisioningNodePoolDefaults:
      imageType: 'cos_containerd'
    
  2. Apply the configuration:

    gcloud container clusters update CLUSTER_NAME \
      --enable-autoprovisioning \
      --autoprovisioning-config-file FILE_NAME
    

    Replace the following:

    • CLUSTER_NAME: the name of the cluster.
    • FILE_NAME: the name of the configuration file.

Setting identity defaults for auto-provisioned node pools

Permissions for Google Cloud resources are provided by identities.

You can specify the default identity (either a service account or one or more scopes) for new auto-provisioned node pools using the gcloud tool or through a configuration file.

gcloud

To specify the default IAM service account used by node auto-provisioning, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning --autoprovisioning-service-account=SERVICE_ACCOUNT

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • SERVICE_ACCOUNT: the name of the default service account.

The following example sets test-service-account@google.com as the default service account on the dev-cluster cluster:

gcloud container clusters update dev-cluster \
    --enable-autoprovisioning --autoprovisioning-service-account=test-service-account@google.com

To specify the default scopes used by node auto-provisioning, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning --autoprovisioning-scopes=SCOPE

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • SCOPE: the Google Cloud scopes used by auto-provisioned node pools. To specify multiple scopes, separate the scopes by a comma (for example, SCOPE1, SCOPE2,...).

The following example sets the default scope on the dev-cluster cluster to devstorage.read_only:

gcloud container clusters update dev-cluster \
    --enable-autoprovisioning \
    --autoprovisioning-scopes=https://www.googleapis.com/auth/pubsub,https://www.googleapis.com/auth/devstorage.read_only

File

You can specify identity defaults used by node auto-provisioning by using a configuration file. The following YAML configuration sets the IAM service account:

  serviceAccount: SERVICE_ACCOUNT

Replace SERVICE_ACCOUNT with the name of the default service account.

Alternatively, you can use the following YAML configuration to specify default scopes used by node auto-provisioning:

  scopes: SCOPE

Replace SCOPE with the Google Cloud scope used by auto-provisioned node pools. To specify multiple scopes, separate the scopes by a comma (for example, SCOPE1, SCOPE2,...).

To use an auto-provisioning configuration file:

  1. Create a configuration file specifying identity defaults in a location where gcloud can access it.

  2. Apply the configuration to your cluster by running the following command:

    gcloud container clusters update CLUSTER_NAME \
       --enable-autoprovisioning \
       --autoprovisioning-config-file FILE_NAME

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • FILE_NAME: the name of the configuration file.

Customer-managed encryption keys (CMEK)

You can specify the customer-managed encryption keys (CMEK) used by new auto-provisioned node pools.

You can enable customer-managed encryption for boot drives by using a configuration file. The following YAML configuration sets the CMEK key:

  bootDiskKmsKey: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY_NAME

Replace the following:

  • KEY_PROJECT_ID: your key project ID.
  • LOCATION: the location of your key ring.
  • KEY_RING: the name of your key ring.
  • KEY_NAME: the name of your key.

To use an auto-provisioning configuration file:

  1. Create a configuration file specifying a CMEK key in a location where gcloud can access it.

  2. Apply the configuration to your cluster by running the following command:

    gcloud container clusters update CLUSTER_NAME \
       --enable-autoprovisioning \
       --autoprovisioning-config-file FILE_NAME

    Replace the following:

    • CLUSTER_NAME: the name of the cluster.
    • FILE_NAME: the name of the configuration file.

Node integrity

Node auto-provisioning supports creating node pools with Secure Boot and Integrity Monitoring enabled.

You can enable Secure Boot and Integrity Monitoring by using a configuration file. The following YAML configuration enables Secure Boot and disables Integrity Monitoring:

  shieldedInstanceConfig:
    enableSecureBoot: true
    enableIntegrityMonitoring: false

To use an auto-provisioning configuration file:

  1. Copy the configuration above to a file in a location where gcloud can access it. Edit the values for enableSecureBoot and enableIntegrityMonitoring. Save the file.

  2. Apply the configuration to your cluster by running the following command:

    gcloud container clusters update CLUSTER_NAME \
       --enable-autoprovisioning \
       --autoprovisioning-config-file FILE_NAME

    Replace the following:

    • CLUSTER_NAME: the name of the cluster.
    • FILE_NAME: the name of the configuration file.

Node auto-repair and auto-upgrade

Node auto-provisioning supports creating node pools with node auto-repair and node auto-upgrade enabled.

gcloud

To enable auto-repair and auto-upgrade for all new auto-provisioned node pools, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning --enable-autoprovisioning-autorepair \
    --enable-autoprovisioning-autoupgrade

Replace CLUSTER_NAME with the name of the cluster.

To disable auto-repair and auto-upgrade for all new auto-provisioned node pools, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning --no-enable-autoprovisioning-autorepair \
    --no-enable-autoprovisioning-autoupgrade

Replace CLUSTER_NAME with the name of the cluster.

File

You can enable or disable node auto-repair and auto-upgrade by using a configuration file. The following YAML configuration enables auto-repair and disables auto-upgrade:

  management:
    autoRepair: true
    autoUpgrade: false

To use an auto-provisioning configuration file:

  1. Copy the configuration above to a file in a location where gcloud can access it. Edit the values for autoUpgrade and autoRepair. Save the file.

  2. Apply the configuration to your cluster by running the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --autoprovisioning-config-file FILE_NAME

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • FILE_NAME: the name of the configuration file.

Node surge upgrades settings

You can specify surge upgrade settings for all new auto-provisioned node pools using the gcloud tool or a configuration file.

gcloud

To specify surge upgrade settings for all new auto-provisioned node pools, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --autoprovisioning-max-surge-upgrade MAX_SURGE \
    --autoprovisioning-max-unavailable-upgrade MAX_UNAVAILABLE

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • MAX_SURGE: the maximum number of nodes that can be added to the node pool during upgrades.
  • MAX_UNAVAILABLE: the maximum number of nodes in the node pool that can be simultaneously unavailable during upgrades.

File

You can specify surge upgrade settings for all new auto-provisioned node pools by using a configuration file like the following:

  upgradeSettings:
    maxSurgeUpgrade: 1
    maxUnavailableUpgrade: 2

To use an auto-provisioning configuration file:

  1. Copy the configuration above to a file in a location where gcloud can access it. Edit the values for maxSurgeUpgrade and maxUnavailableUpgrade. Save the file.

  2. Apply the configuration to your cluster by running the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --autoprovisioning-config-file FILE_NAME

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • FILE_NAME: the name of the configuration file.

For more information, see the gcloud container clusters update documentation.

Custom boot disks

Node auto-provisioning supports creating node pools with Custom boot disks.

You can customize boot disk settings by using a configuration file. The following YAML configuration causes node auto-provisioning to create node pools with 100 GB SSD disks:

  diskSizeGb: 100
  diskType: pd-ssd

Specify the following:

  • diskSizeGb: the size of the disk, specified in GB.
  • diskType: the type of disk, which can be one of the following values:
    • pd-standard: a standard persistent disk (default).
    • pd-ssd: an SSD persistent disk.

To use an auto-provisioning configuration file:

  1. Create a file with desired boot disk configuration in a location where gcloud can access it.

  2. Apply the configuration to your cluster by running the following command:

    gcloud container clusters update CLUSTER_NAME \
       --enable-autoprovisioning \
       --autoprovisioning-config-file FILE_NAME

    Replace the following:

    • CLUSTER_NAME: the name of the cluster.
    • FILE_NAME: the name of the configuration file.

Minimum CPU platform

Node auto-provisioning supports creating node pools with a specified minimum CPU platform.

You can specify the default minimum CPU platform for new auto-provisioned node pools using the gcloud tool or a configuration file.

gcloud

To set the default minimum CPU platform, run the following command:

gcloud container clusters update CLUSTER_NAME \
  --enable-autoprovisioning \
  --autoprovisioning-min-cpu-platform MIN_CPU_PLATFORM

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • MIN_CPU_PLATFORM: the minimum CPU platform to use for new auto-provisioned node pools.

File

To set the default minimum CPU platform, you can use a configuration file. The following YAML configuration sets the default minimum CPU platform:

minCpuPlatform: MIN_CPU_PLATFORM

Replace MIN_CPU_PLATFORM with the minimum CPU platform you want.

To use an auto-provisioning configuration file:

  1. Create a configuration file specifying minimum CPU platform in a location where gcloud can access it.

  2. Apply the configuration to your cluster by running the following command:

    gcloud container clusters update CLUSTER_NAME \
       --enable-autoprovisioning \
       --autoprovisioning-config-file FILE_NAME

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • FILE_NAME: the name of the configuration file.

Configuring GPU limits

When using node auto-provisioning with GPUs, you can set the maximum limit for each GPU type in the cluster by using the gcloud tool or the Google Cloud Console. To configure multiple types of GPU, you must use a configuration file.

To list the available resourceTypes, run gcloud compute accelerator-types list.

gcloud

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --max-cpu MAXIMUM_CPU \
    --max-memory MAXIMUM_MEMORY \
    --min-accelerator type=GPU_TYPE,count=MINIMUM_ACCELERATOR \
    --max-accelerator type=GPU_TYPE,count=MAXIMUM_ACCELERATOR

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • MAXIMUM_CPU: the maximum number of cores in the cluster.
  • MAXIMUM_MEMORY: the maximum number of gigabytes of memory in the cluster.
  • GPU_TYPE: the GPU type.
  • MINIMUM_ACCELERATOR: the minimum number of GPU accelerators in the cluster.
  • MAXIMUM_ACCELERATOR: the maximum number of GPU accelerators in the cluster.

The following example sets the GPU limits for the nvidia-tesla-k80 GPU accelerator type in the dev-cluster cluster:

gcloud container clusters update dev-cluster \
    --enable-autoprovisioning \
    --max-cpu 10 \
    --max-memory 64 \
    --min-accelerator type=nvidia-tesla-k80,count=1 \
    --max-accelerator type=nvidia-tesla-k80,count=4

File

You can set limits for multiple types of GPUs by using a configuration file. The following YAML configuration sets limits for two different types of GPUs:

  resourceLimits:
    - resourceType: 'cpu'
      minimum: 4
      maximum: 10
    - resourceType: 'memory'
      maximum: 64
    - resourceType: 'nvidia-tesla-k80'
      maximum: 4
    - resourceType: 'nvidia-tesla-v100'
      maximum: 2

To use an auto-provisioning configuration file:

  1. Copy the configuration above to a file in a location where gcloud can access it. Edit the values for cpu and memory. Add as many values for resourceType as you need. Save the file.

  2. Apply the configuration to your cluster by running the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --autoprovisioning-config-file FILE_NAME

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • FILE_NAME: the name of the configuration file.

For more information, see the gcloud container clusters update documentation.

Console

To enable node auto-provisioning with GPU resources, perform the following steps:

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. Select the desired cluster.

  3. Click the Edit icon.

  4. Scroll down to Node auto-provisioning and select Enabled.

  5. Set your desired minimum and maximum CPU and memory usage for the cluster.

  6. Click Add resource.

  7. Select the type of GPU (for example, NVIDIA TESLA K80) you wish to add. Set your desired minimum and maximum number of GPUs to add to the cluster.

  8. Accept the limitations of GPUs in GKE.

  9. Click Save. GKE updates your cluster.
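
Setting limits alone does not create GPU node pools; node auto-provisioning provisions them only when a pending Pod requests GPUs. The following sketch shows a Pod spec excerpt that requests one GPU of the nvidia-tesla-k80 type used in the earlier examples (the container name and image are placeholders):

spec:
  containers:
  - name: gpu-app
    image: example-gpu-image
    resources:
      limits:
        nvidia.com/gpu: 1
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-k80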

Node auto-provisioning locations

You can set the zones where node auto-provisioning can create new node pools. Regional locations are not supported. Zones must belong to the same region as the cluster but are not limited to node locations defined at the cluster level. Changing node auto-provisioning locations doesn't affect any existing node pools.

To set locations where node auto-provisioning can create new node pools, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning --autoprovisioning-locations=ZONE

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • ZONE: the zone where node auto-provisioning can create new node pools. To specify multiple zones, separate the zones by a comma (for example, ZONE1, ZONE2,...).
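
For example, the following command (assuming the dev-cluster cluster used in earlier examples is in the us-central1 region) restricts auto-provisioned node pools to two zones:

gcloud container clusters update dev-cluster \
    --enable-autoprovisioning \
    --autoprovisioning-locations=us-central1-a,us-central1-b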

Disabling node auto-provisioning

When you disable node auto-provisioning for a cluster, node pools are no longer auto-provisioned.

gcloud

To disable node auto-provisioning for a cluster, run the following command:

gcloud container clusters update CLUSTER_NAME \
  --no-enable-autoprovisioning

Replace CLUSTER_NAME with the name of your cluster.

Console

To disable node auto-provisioning using the Google Cloud Console:

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. Select the desired cluster.

  3. Click the Edit icon.

  4. Scroll down to Node auto-provisioning and select Disabled.

Marking node pool as auto-provisioned

After enabling node auto-provisioning on the cluster, you can specify which node pools are auto-provisioned. An auto-provisioned node pool is automatically deleted when no workloads are using it.

To mark a node pool as auto-provisioned, run the following command:

gcloud container node-pools update NODE_POOL_NAME \
  --enable-autoprovisioning

Replace NODE_POOL_NAME with the name of the node pool.

Marking node pool as not auto-provisioned

To mark a node pool as not auto-provisioned, run the following command:

gcloud container node-pools update NODE_POOL_NAME \
  --no-enable-autoprovisioning

Replace NODE_POOL_NAME with the name of the node pool.

Using a custom machine family

Starting with GKE 1.19.7-gke.800, you can choose a machine family for your workloads. This is accomplished by either:

  • Setting the node affinity with the key of cloud.google.com/machine-family, operator In, and the value being the desired machine family (for example, n2).
  • Adding a nodeSelector with the key of cloud.google.com/machine-family and the value being the desired machine family.

Here is an example that sets the nodeAffinity to a machine family of n2:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/machine-family
            operator: In
            values:
            - n2

After applying the changes, node auto-provisioning chooses the best node pool with a machine type within the specified machine family. If multiple values are used for the match expression, one value will be chosen arbitrarily.
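
The nodeSelector form is a shorter equivalent. A minimal sketch for the same n2 machine family:

spec:
  nodeSelector:
    cloud.google.com/machine-family: n2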

What's next