Autoscaling groups of instances

Managed instance groups (MIGs) offer autoscaling capabilities that let you automatically add or delete virtual machine (VM) instances from a MIG based on increases or decreases in load. Autoscaling helps your apps gracefully handle increases in traffic and reduces costs when the need for resources is lower. You define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load and the options you configure.

Autoscaling works by adding VMs to your MIG when load increases (scaling out, sometimes referred to as scaling up) and deleting VMs when the need for VMs decreases (scaling in, sometimes referred to as scaling down).

Specifications

  • Autoscaling only works with zonal and regional managed instance groups (MIGs). Unmanaged instance groups are not supported.
  • If you want to autoscale a regional MIG, the following limitation applies: you cannot create instances with specific names while autoscaling is turned on. However, you can turn on the autoscaler after VM instances with specific names are created.

  • You cannot use autoscaling if your MIG has stateful configuration.

  • Do not use Compute Engine autoscaling with MIGs that are owned by Google Kubernetes Engine. For Google Kubernetes Engine groups, use cluster autoscaling instead.

    If you are not sure whether your group is part of a GKE cluster, look for the gke prefix in the MIG name. For example, gke-test-1-3-default-pool-eadji9ah.

  • An autoscaler can make scaling decisions based on multiple metrics, but it can handle only one signal per metric type except in the case of Cloud Monitoring metrics; an autoscaler can handle up to five signals based on Monitoring metrics. The autoscaler calculates the recommended number of virtual machines for each signal and then scales based on the signal that provides the largest number of virtual machines in the group.

  • Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you specify.

  • If you autoscale a regional MIG, an instance might be added to one of the zones and then immediately deleted. This happens when utilization in that zone triggers a scale out, but the overall utilization of the regional MIG does not require the additional instance, or the additional instance is required in a different zone.

Prerequisites

The autoscaler uses the Compute Engine System service account to add and remove instances in the group. Google Cloud automatically creates this service account, as well as its IAM policy binding, when the Compute Engine API is enabled.

If your project is missing this account—for instance, if you have removed it—you can add it manually:

Console

  1. In the Cloud Console, go to the IAM page.

    Go to IAM

  2. Click Add.

  3. Enter service-PROJECT_NUMBER@compute-system.iam.gserviceaccount.com.

  4. Select the Compute Engine Service Agent role.

  5. Click Save.

gcloud

gcloud projects add-iam-policy-binding PROJECT_ID \
   --member serviceAccount:service-PROJECT_NUMBER@compute-system.iam.gserviceaccount.com \
   --role roles/compute.serviceAgent
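Before adding the binding, you can check whether it already exists by listing the project's IAM policy and filtering for the Compute Engine Service Agent role. The following command is a sketch; replace PROJECT_ID with your project ID.

gcloud projects get-iam-policy PROJECT_ID \
   --flatten="bindings[].members" \
   --filter="bindings.role=roles/compute.serviceAgent" \
   --format="value(bindings.members)"

If the output includes serviceAccount:service-PROJECT_NUMBER@compute-system.iam.gserviceaccount.com, the service account already has the required role.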

Fundamentals

Autoscaling uses the following fundamental concepts and services.

Managed instance groups

Autoscaling is a feature of managed instance groups (MIGs). A managed instance group is a collection of virtual machine (VM) instances that are created from a common instance template. An autoscaler adds or deletes instances from a managed instance group based on the group's autoscaling policy. Although Compute Engine has both managed and unmanaged instance groups, only managed instance groups can be used with an autoscaler.

To understand the difference between a managed instance group and an unmanaged instance group, see Instance groups.

To learn how to create a managed instance group, see Creating MIGs.
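For example, assuming you already have an instance template named example-template, a command like the following sketch creates a zonal MIG with 3 VMs that you can later configure an autoscaler for. The group name, template name, and zone are placeholders.

gcloud compute instance-groups managed create example-mig \
   --zone us-central1-a \
   --template example-template \
   --size 3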

Autoscaling policy

When you define an autoscaling policy for your group, you specify one or more signals that the autoscaler uses to scale the group. When you set multiple signals in a policy, the autoscaler calculates the recommended number of VMs for each signal and sets your group's recommended size to the largest number.
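For example, the following sketch configures a policy with two signals, one based on CPU utilization and one based on load balancing serving capacity; the group name example-mig, the zone, and the size limits are placeholders. If the CPU signal recommends 6 VMs and the serving capacity signal recommends 9 VMs, the autoscaler sets the recommended size to 9 VMs.

gcloud compute instance-groups managed set-autoscaling example-mig \
   --zone us-central1-a \
   --min-num-replicas 2 \
   --max-num-replicas 20 \
   --target-cpu-utilization 0.6 \
   --target-load-balancing-utilization 0.8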

The following sections provide an overview of signals based on target utilization metrics and signals based on schedules.

Target utilization metrics

You can autoscale based on one or more of the following metrics that reflect the load of the instance group:

  • Average CPU utilization.
  • HTTP load balancing serving capacity, which can be based on either utilization or requests per second.
  • Cloud Monitoring metrics.

The autoscaler continuously collects usage information based on the selected utilization metric, compares actual utilization to your desired target utilization, and uses this information to determine whether the group needs to remove instances (scale in) or add instances (scale out).

The target utilization level is the level at which you want to maintain your virtual machine (VM) instances. For example, if you scale based on CPU utilization, you can set your target utilization level at 75% and the autoscaler will maintain the CPU utilization of the specified group of instances at or close to 75%. The utilization level for each metric is interpreted differently based on the autoscaling policy.
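For example, to keep CPU utilization at or close to the 75% target described above, you could configure a policy like the following sketch. The group name example-mig, the zone, and the minimum and maximum sizes are placeholders.

gcloud compute instance-groups managed set-autoscaling example-mig \
   --zone us-central1-a \
   --min-num-replicas 2 \
   --max-num-replicas 10 \
   --target-cpu-utilization 0.75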

For more information about scaling based on target utilization metrics, see the following pages:

  • Scaling based on CPU utilization
  • Scaling based on load balancing serving capacity
  • Scaling based on Cloud Monitoring metrics

Schedules

You can use schedule-based autoscaling to allocate capacity for anticipated loads. You can have up to 128 scaling schedules per instance group. For each scaling schedule, specify the following:

  • Capacity: minimum required VM instances
  • Schedule: start time, duration, and recurrence (for example, once, daily, weekly, or monthly)

Each scaling schedule is active from its start time and for the configured duration. During this time, the autoscaler scales the group to have at least as many instances as defined by the scaling schedule.
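For example, the following sketch adds a scaling schedule that requires at least 10 VMs for one hour every weekday morning. The group name, zone, schedule name, and schedule values are placeholders.

gcloud compute instance-groups managed update-autoscaling example-mig \
   --zone us-central1-a \
   --set-schedule weekday-morning-capacity \
   --schedule-cron "30 8 * * Mon-Fri" \
   --schedule-duration-sec 3600 \
   --schedule-min-required-replicas 10 \
   --schedule-time-zone "America/New_York"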

For more information, see Scaling based on schedules.

Cool down period

The cool down period is also known as the application initialization period. Compute Engine uses the cool down period for scaling decisions in two ways:

  • To omit unusual usage data after a VM is created and while its application is initializing.
  • If predictive autoscaling is enabled, to inform the autoscaler how much time in advance to scale out ahead of anticipated load, so that applications are initialized when the load arrives.

While an application is initializing on an instance, the instance's usage might not reflect normal circumstances, so that data might not be reliable for autoscaling decisions and you might want to omit it. Specify a cool down period to let your instances finish initializing before the autoscaler begins collecting usage information from them. By default, the cool down period is 60 seconds.

If you enable predictive mode, the cool down period informs the predictive autoscaler to scale out further in advance of anticipated load, so that applications are initialized when the load arrives. For example, if you set the cool down period to 300 seconds, then predictive autoscaler creates VMs 5 minutes ahead of forecasted load.

Actual initialization times vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create an instance and time the startup process from when the instance becomes RUNNING until the application is ready.
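The following shell sketch illustrates one way to measure this, assuming a test VM named probe-vm in zone us-central1-a whose application serves a health endpoint at /healthz on port 80. The VM name, zone, and endpoint are assumptions; adapt them to your application.

# Wait until the test VM reaches RUNNING, then start the timer.
until [ "$(gcloud compute instances describe probe-vm --zone us-central1-a --format='value(status)')" = "RUNNING" ]; do
  sleep 5
done
START=$(date +%s)

# Wait until the application answers on its health endpoint, then report the elapsed time.
IP=$(gcloud compute instances describe probe-vm --zone us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)')
until curl --silent --fail "http://$IP/healthz" > /dev/null; do
  sleep 5
done
echo "Application initialized in $(( $(date +%s) - START )) seconds"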

If you set a cool down period value that is significantly longer than the time it takes for an instance to initialize, then your autoscaler might ignore legitimate utilization data, and it might underestimate the required size of your group, causing a delay in scaling out.

Stabilization period

For the purposes of scaling in, the autoscaler calculates the group's recommended target size based on peak load over the last 10 minutes. These last 10 minutes are referred to as the stabilization period.

Using the stabilization period, the autoscaler ensures that the recommended size for your managed instance group is always sufficient to serve the peak load observed during the previous 10 minutes.

This 10-minute stabilization period might appear as a delay in scaling in, but it is actually a built-in feature of autoscaling. The delay ensures that the smaller group size is enough to support peak load from the last 10 minutes.

Autoscaling mode

If you need to investigate or configure your group without interference from autoscaler operations, you can temporarily turn off or restrict autoscaling activities. The autoscaler's configuration persists while it is turned off or restricted, and all autoscaling activities resume when you turn it on again or lift the restriction.
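For example, the following sketch temporarily turns off autoscaling for a group named example-mig (a placeholder) without deleting its autoscaling configuration, and the second command turns it back on. If your version of gcloud supports it, you can also set the mode to only-scale-out to let the autoscaler add VMs but never remove them.

gcloud compute instance-groups managed update-autoscaling example-mig \
   --zone us-central1-a \
   --mode off

gcloud compute instance-groups managed update-autoscaling example-mig \
   --zone us-central1-a \
   --mode on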

Predictive autoscaling

If you enable predictive autoscaling to optimize your MIG for availability, the autoscaler forecasts future load based on historical data and scales out a MIG in advance of predicted load, so that new instances are ready to serve when the load arrives.

Predictive autoscaling works best if your workload meets the following criteria:

  • Your application takes a long time to initialize—for example, if you configure a cool down period of more than 2 minutes.
  • Your workload varies predictably with daily or weekly cycles.
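For example, the following sketch turns on predictive autoscaling for CPU-based scaling and sets a 300-second cool down period, so the autoscaler creates VMs about 5 minutes ahead of forecasted load. The group name, zone, and utilization values are placeholders.

gcloud compute instance-groups managed set-autoscaling example-mig \
   --zone us-central1-a \
   --max-num-replicas 20 \
   --target-cpu-utilization 0.6 \
   --cool-down-period 300 \
   --cpu-utilization-predictive-method optimize-availability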

For more information, see Using predictive autoscaling.

Scale-in controls

If your workloads take many minutes to initialize (for example, due to lengthy installation tasks), you can reduce the risk of response latency caused by abrupt scale-in events by configuring scale-in controls. Specifically, if you expect load spikes to follow soon after declines, you can limit the scale-in rate to prevent autoscaling from reducing a MIG's size by more VM instances than your workload can tolerate.

You don't have to configure scale-in controls if your application initializes quickly enough to handle load spikes when the group scales out.

To configure scale-in controls, set the following properties in your autoscaling policy.

  • Maximum allowed reduction. The number of VM instances that your workload can afford to lose (from its peak size) within the specified trailing time window. Use this parameter to limit how much your group can be scaled in so that you can still serve a likely load spike until more instances start serving. The smaller you set the maximum allowed reduction, the longer it takes for your group to scale in.

  • Trailing time window. The period during which the autoscaler monitors the peak size required by your workload. The autoscaler doesn't resize the group below its peak size observed in this window minus the maximum allowed reduction. Use this parameter to define how long the autoscaler waits before removing instances, within the limit set by the maximum allowed reduction. With a longer trailing time window, the autoscaler considers more historical peaks, making scale-in more conservative and stable.
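For example, the following sketch limits scale-in to at most 2 VMs below the peak size observed over a 30-minute (1800-second) trailing window. The group name, zone, and signal values are placeholders.

gcloud compute instance-groups managed set-autoscaling example-mig \
   --zone us-central1-a \
   --max-num-replicas 20 \
   --target-cpu-utilization 0.6 \
   --scale-in-control max-scaled-in-replicas=2,time-window=1800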

For more information, see Configuring scale-in controls and Understanding autoscaler decisions.

The recommended group size is the autoscaler's recommended number of VMs that the managed instance group should maintain, based on peak load observed during the last 10 minutes. These last 10 minutes are referred to as the stabilization period. The recommended size is recalculated constantly. If you set an autoscaling policy with scale-in controls, then the recommended size is constrained by your scale-in controls.

Pricing

There is no additional charge for configuring an autoscaling policy. Autoscaler dynamically adds or deletes VM instances, so you are charged only for the resources that your MIG uses. You can control resource cost by configuring the minimum and maximum number of instances in the autoscaling policy. For Compute Engine pricing information, see Pricing.

What's next

  1. If you don't have an existing MIG, review how to create a managed instance group.
  2. Create an autoscaler that scales based on CPU utilization, load balancing serving capacity, Cloud Monitoring metrics, or schedules.

  3. Manage your autoscaler, for example, to get information about it, to configure scale-in controls, or to temporarily restrict it.