Cluster autoscaler

This page shows you how to autoscale your clusters. To learn how the cluster autoscaler works, refer to Cluster autoscaler.

Cluster autoscaling resizes the number of nodes in a given node pool based on the demands of your workloads. You specify minReplicas and maxReplicas values for each node pool that you want to autoscale.

For an individual node pool, minReplicas must be ≥ 1. However, the total number of untainted user cluster nodes at any given time must be at least 3. This means the sum of the minReplicas values for all autoscaled node pools, plus the sum of the replicas values for all non-autoscaled node pools, must be at least 3.
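
For example, in the following sketch (the pool names are hypothetical), the autoscaled pool can shrink to 1 node, but the non-autoscaled pool always contributes 2 nodes, so the cluster never drops below 3 untainted nodes:

nodePools:
- name: autoscaled-pool   # hypothetical name
  replicas: 3
  autoscaling:
    minReplicas: 1        # this pool can scale down to 1 node
    maxReplicas: 5
- name: static-pool       # hypothetical name; no autoscaling field, so the size is fixed
  replicas: 2             # 1 (minReplicas above) + 2 = 3, satisfying the minimum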

Create a user cluster with autoscaling

To create a user cluster with autoscaling, add the autoscaling field to the nodePools section in the user cluster configuration file.

nodePools:
- name: pool-1
  ...
  replicas: 3
  ...
  autoscaling:
    minReplicas: 1
    maxReplicas: 5

This configuration creates a node pool with 3 replicas and enables autoscaling with a minimum node pool size of 1 and a maximum node pool size of 5.

The minReplicas value must be ≥ 1.
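
After the cluster is created, one way to confirm the node count is to list the nodes, assuming USER_CLUSTER_KUBECONFIG is the path of your user cluster kubeconfig file:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get nodes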

Add a node pool with autoscaling

To add a node pool with autoscaling to an existing cluster:

  1. Edit the user cluster configuration file to add a new node pool, and include the autoscaling field. Adapt the values of minReplicas and maxReplicas as needed.

    nodePools:
    - name: my-new-node-pool
      ...
      replicas: 3
      ...
      autoscaling:
        minReplicas: 1
        maxReplicas: 5
    
  2. Run the following command:

    gkectl update cluster --config USER_CLUSTER_CONFIG \
      --kubeconfig ADMIN_CLUSTER_KUBECONFIG
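
One way to watch the new pool come up is to list the Machine objects in the admin cluster. This assumes, as on GKE on-prem, that each user cluster node is backed by a Machine in the admin cluster namespace named after the user cluster:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get machines -n USER_CLUSTER_NAME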
    

Enable an existing node pool for autoscaling

To enable autoscaling for a node pool in an existing cluster:

  1. Edit a specific nodePool in the user cluster configuration file, and include the autoscaling field. Adapt the values of minReplicas and maxReplicas as needed.

    nodePools:
    - name: my-existing-node-pool
      ...
      replicas: 3
      ...
      autoscaling:
        minReplicas: 1
        maxReplicas: 5
    
  2. Run the following command:

    gkectl update cluster --config USER_CLUSTER_CONFIG \
      --kubeconfig ADMIN_CLUSTER_KUBECONFIG
    

Disable autoscaling for an existing node pool

To disable autoscaling for a specific node pool:

  1. Edit the user cluster configuration file and remove the autoscaling field for that node pool; the sketch after these steps shows the result.

  2. Run the gkectl update cluster command.
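
For example, after you remove the autoscaling field and run gkectl update cluster, the node pool entry is reduced to its fixed size:

nodePools:
- name: my-existing-node-pool
  ...
  replicas: 3    # with autoscaling removed, the pool holds this fixed size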

Check cluster autoscaler behavior

You can determine what the cluster autoscaler is doing in several ways.

Check cluster autoscaler logs

First, find the name of the cluster autoscaler Pod. Run this command, replacing USER_CLUSTER_NAME with the name of your user cluster:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get pods -n USER_CLUSTER_NAME | grep cluster-autoscaler

To view the logs of the cluster autoscaler Pod, run this command, replacing POD_NAME with the Pod name you found in the previous step:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG logs POD_NAME --container cluster-autoscaler -n USER_CLUSTER_NAME

Check the configuration map

The cluster autoscaler publishes the kube-system/cluster-autoscaler-status configuration map. To see this map, run this command:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get configmap cluster-autoscaler-status -n kube-system -o yaml
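
The map's data typically holds the autoscaler status under a key named status (an upstream cluster-autoscaler convention); assuming that layout, you can print just the status text with jsonpath:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get configmap cluster-autoscaler-status \
  -n kube-system -o jsonpath='{.data.status}'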

Check cluster autoscaler events

The cluster autoscaler also publishes events. You can check for autoscaling events:

  • On Pods (particularly Pods that cannot be scheduled, or Pods on underutilized nodes)
  • On nodes
  • On the kube-system/cluster-autoscaler-status config map
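
For example, kubectl describe prints the events recorded on each of these objects; POD_NAME and NODE_NAME are placeholders for whichever objects you are investigating:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG describe pod POD_NAME
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG describe node NODE_NAME
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG describe configmap cluster-autoscaler-status -n kube-system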

Troubleshooting

See the following troubleshooting information for cluster autoscaler:

  • You might be experiencing one of the limitations of the cluster autoscaler.
  • If you have problems scaling down your cluster, see Pod scheduling and disruption. You might have to add a PodDisruptionBudget for the kube-system Pods; for more information about adding one manually, see the Kubernetes cluster autoscaler FAQ and the sketch after this list.
  • When scaling down, the cluster autoscaler respects the scheduling and eviction rules set on Pods. These restrictions can prevent the autoscaler from deleting a node. The autoscaler might not delete a node if it contains a Pod with any of these conditions:
    • The Pod's affinity or anti-affinity rules prevent rescheduling.
    • The Pod has local storage.
    • The Pod is not managed by a controller, such as a Deployment, StatefulSet, Job, or ReplicaSet.
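
As an illustration of the PodDisruptionBudget mentioned above, here is a minimal sketch; the name, label selector, and minAvailable value are assumptions to adapt to your own kube-system Pods (older clusters may need apiVersion: policy/v1beta1):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb            # hypothetical name
  namespace: kube-system
spec:
  minAvailable: 1              # keep at least one replica running during node drains
  selector:
    matchLabels:
      k8s-app: example-app     # hypothetical label; match your Pod's labels

You would apply this to the user cluster with kubectl apply -f.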

For more information about the cluster autoscaler and preventing disruptions, see the relevant questions in the Kubernetes cluster autoscaler FAQ.