Managed instance groups (MIGs) offer autoscaling capabilities that let you automatically add or delete virtual machine (VM) instances from a MIG based on increases or decreases in load. Autoscaling helps your apps gracefully handle increases in traffic and reduce costs when the need for resources is lower. You define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load and the options you configure.
Autoscaling works by adding VMs to your MIG when load increases (scaling out, sometimes referred to as scaling up) and deleting VMs when the need for VMs decreases (scaling in, or scaling down).
- Autoscaling works with managed instance groups (MIGs) only. Unmanaged instance groups are not supported.
- If you want to autoscale a regional MIG, the following limitations apply:
  - You cannot create instances with specific names while autoscaling is turned on. However, you can turn on the autoscaler after VM instances with specific names are created.
  - You cannot use autoscaling if your MIG has a stateful configuration.
If you are not sure whether your group is part of a GKE cluster, look for the gke prefix in the MIG name. For example, MIGs that are part of a GKE cluster have names that begin with gke-.
An autoscaling policy must always have at least one scaling signal.
An autoscaler can make scaling decisions based on multiple signals, but it can handle only one signal per metric type except in the case of Cloud Monitoring metrics; an autoscaler can handle up to five signals based on Monitoring metrics. The autoscaler calculates the recommended number of virtual machines for each signal and then scales based on the signal that provides the largest number of virtual machines in the group.
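The "largest recommendation wins" rule can be sketched as follows (an illustrative model, not the service's implementation):

```python
def recommended_group_size(per_signal_recommendations: dict[str, int]) -> int:
    """Return the group size implied by multiple scaling signals.

    Per the documented rule, the autoscaler computes a recommended
    number of VMs for each signal and scales the group to the largest
    of those recommendations.
    """
    if not per_signal_recommendations:
        raise ValueError("an autoscaling policy needs at least one signal")
    return max(per_signal_recommendations.values())

# CPU needs 4 VMs, a Monitoring metric needs 7, and load balancing
# serving capacity needs 5, so the group is sized for 7 VMs.
print(recommended_group_size({"cpu": 4, "monitoring": 7, "lb-capacity": 5}))  # 7
```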
Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you specify.
If you autoscale a regional MIG, an instance can be added and then immediately deleted from one of the zones. This happens when utilization in that zone triggers a scale out, but the overall utilization of the regional MIG either doesn't require the additional instance or requires it in a different zone.
The autoscaler uses the Compute Engine Service Agent to add and remove instances in the group. Google Cloud automatically creates this service account, as well as its IAM policy binding to the Compute Engine Service Agent role, when the Compute Engine API is enabled.
If your project is missing this service account—for example, if you removed it—you can add it back manually:

- In the Google Cloud console, go to the IAM page and grant the Compute Engine Service Agent role to the service agent's account (service-PROJECT_NUMBER@compute-system.iam.gserviceaccount.com).
- Alternatively, use the gcloud CLI:

```
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member serviceAccount:service-PROJECT_NUMBER@compute-system.iam.gserviceaccount.com \
    --role roles/compute.serviceAgent
```
Autoscaling uses the following fundamental concepts and services.
Managed instance groups
Autoscaling is a feature of managed instance groups (MIGs). A managed instance group is a collection of virtual machine (VM) instances that are created from a common instance template. An autoscaler adds or deletes instances from a managed instance group based on the group's autoscaling policy. Although Compute Engine has both managed and unmanaged instance groups, only managed instance groups can be used with an autoscaler.
To understand the difference between a managed instance group and an unmanaged instance group, see Instance groups.
To learn how to create a managed instance group, see Creating MIGs.
When you define an autoscaling policy for your group, you specify one or more signals that the autoscaler uses to scale the group. When you set multiple signals in a policy, the autoscaler calculates the recommended number of VMs for each signal and sets your group's recommended size to the largest number.
The following sections provide an overview of signals based on target utilization metrics and signals based on schedules.
Target utilization metrics
You can autoscale based on one or more of the following metrics that reflect the load of the instance group:
- Average CPU utilization.
- HTTP load balancing serving capacity, which can be based on either utilization or requests per second.
- Cloud Monitoring metrics.
The autoscaler continuously collects usage information based on the selected utilization metric, compares actual utilization to your desired target utilization, and uses this information to determine whether the group needs to remove instances (scale in) or add instances (scale out).
The target utilization level is the level at which you want to maintain your virtual machine (VM) instances. For example, if you scale based on CPU utilization, you can set your target utilization level at 75% and the autoscaler will maintain the CPU utilization of the specified group of instances at or close to 75%. The utilization level for each metric is interpreted differently based on the autoscaling policy.
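Sizing the group so that average utilization lands near the target is roughly a proportional calculation. The following is a simplified approximation of that idea, not the exact algorithm the autoscaler uses:

```python
import math

def utilization_recommendation(current_vms: int,
                               actual_util: float,
                               target_util: float) -> int:
    """Approximate recommended size so that average utilization ~= target.

    This is an illustrative proportional model: if actual utilization is
    above the target, the group needs proportionally more VMs.
    """
    return max(1, math.ceil(current_vms * actual_util / target_util))

# 10 VMs averaging 90% CPU against a 75% target -> 12 VMs,
# because 10 * 0.90 / 0.75 = 12.
print(utilization_recommendation(10, 0.90, 0.75))  # 12
```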
For more information about scaling based on target utilization metrics, see the following pages:
- Scaling based on CPU utilization
- Scaling based on the serving capacity of an external HTTP(S) load balancer
- Scaling based on Cloud Monitoring metrics
You can use schedule-based autoscaling to allocate capacity for anticipated loads. You can have up to 128 scaling schedules per instance group. For each scaling schedule, specify the following:
- Capacity: minimum required VM instances
- Schedule: start time, duration, and recurrence (for example, once, daily, weekly, or monthly)
Each scaling schedule is active from its start time and for the configured duration. During this time, autoscaler scales the group to have at least as many instances as defined by the scaling schedule.
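The schedule semantics described above can be sketched like this (the field and function names are illustrative, not the API's):

```python
from dataclasses import dataclass

@dataclass
class ScalingSchedule:
    start: float           # start time, in seconds since some epoch
    duration: float        # seconds the schedule stays active
    min_required_vms: int  # minimum required VM instances while active

def scheduled_minimum(schedules: list[ScalingSchedule],
                      now: float,
                      min_num_replicas: int) -> int:
    """Effective minimum group size: the policy minimum, raised by any
    scaling schedule that is currently active."""
    active = [s.min_required_vms for s in schedules
              if s.start <= now < s.start + s.duration]
    return max([min_num_replicas, *active])

# A 9:00-17:00 schedule requiring 10 VMs is active at noon, so the
# effective minimum is 10 rather than the policy minimum of 2.
print(scheduled_minimum([ScalingSchedule(9 * 3600, 8 * 3600, 10)],
                        now=12 * 3600, min_num_replicas=2))  # 10
```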
For more information, see Scaling based on schedules.
Cool down period
The cool down period is also known as the application initialization period. While an application is initializing on an instance, the instance's usage data might not reflect normal circumstances, so the autoscaler accounts for the cool down period in its scaling decisions in the following ways:
- For scale-in decisions, the autoscaler considers usage data from all instances, including instances that are still within their cool down period. The autoscaler recommends removing instances if the average utilization across all instances is less than the target utilization.
- For scale-out decisions, the autoscaler ignores usage data from instances that are still in their cool down period.
- If you enable predictive mode, the cool down period informs the predictive autoscaler to scale out further in advance of anticipated load, so that applications are initialized when the load arrives. For example, if you set the cool down period to 300 seconds, then predictive autoscaler creates VMs 5 minutes ahead of forecasted load.
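The three behaviors above—scale-in using all instances, scale-out ignoring initializing instances, and predictive mode creating VMs a cool down period ahead of forecast load—can be modeled in a simplified way (this is a sketch, not the autoscaler's implementation):

```python
def usable_utilizations(samples: list[tuple[float, float]],
                        cool_down: float,
                        scaling_out: bool) -> list[float]:
    """samples: (instance_age_seconds, utilization) pairs.

    Scale-in decisions use data from every instance; scale-out
    decisions ignore instances still inside their cool down period.
    """
    if scaling_out:
        return [u for age, u in samples if age >= cool_down]
    return [u for _, u in samples]

def predictive_creation_time(forecast_load_time: float,
                             cool_down: float) -> float:
    """With predictive mode, VMs are created cool_down seconds before
    the forecasted load so they finish initializing in time."""
    return forecast_load_time - cool_down

# One VM is 30 seconds old (still initializing at 95% CPU); a 60-second
# cool down excludes it from scale-out decisions but not scale-in ones.
samples = [(30, 0.95), (600, 0.40)]
print(usable_utilizations(samples, 60, True))   # [0.4]
print(usable_utilizations(samples, 60, False))  # [0.95, 0.4]
```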
Specify a cool down period to indicate how long it takes applications on your instance to initialize. By default, the cool down period is 60 seconds.
Actual initialization times vary because of numerous factors, so we recommend that you test how long your application takes to initialize. To do this, create an instance and time the startup process from when the instance becomes available until the application is ready.
If you set a cool down period value that is significantly longer than the time it takes for an instance to initialize, then your autoscaler might ignore legitimate utilization data, and it might underestimate the required size of your group, causing a delay in scaling out.
For the purposes of scaling in, the autoscaler calculates the group's recommended target size based on peak load over the last 10 minutes. These last 10 minutes are referred to as the stabilization period.
Using the stabilization period, the autoscaler ensures that the recommended size for your managed instance group is always sufficient to serve the peak load observed during the previous 10 minutes.
This 10-minute stabilization period might appear as a delay in scaling in, but it is actually a built-in feature of autoscaling. The delay ensures that the smaller group size is enough to support peak load from the last 10 minutes.
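The stabilization period's effect on scale-in can be sketched as follows (a simplified model of the documented rule):

```python
STABILIZATION_SECONDS = 10 * 60  # the documented 10-minute window

def stabilized_recommendation(required_sizes: list[int]) -> int:
    """required_sizes: the group size each sample in the trailing
    10-minute window needed to serve its load.

    The group shrinks only to the peak requirement in that window.
    """
    return max(required_sizes)

# Load dropped so that the group now needs only 3 VMs, but a sample
# within the last 10 minutes still needed 8, so the group stays at 8.
print(stabilized_recommendation([8, 6, 4, 3]))  # 8
```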
If you need to investigate or configure your group without interference from autoscaler operations, you can temporarily turn off or restrict autoscaling activities. The autoscaler's configuration persists while it is turned off or restricted, and all autoscaling activities resume when you turn it on again or lift the restriction.
If you enable predictive autoscaling to optimize your MIG for availability, the autoscaler forecasts future load based on historical data and scales out a MIG in advance of predicted load, so that new instances are ready to serve when the load arrives.
Predictive autoscaling works best if your workload meets the following criteria:
- Your application takes a long time to initialize—for example, if you configure a cool down period of more than 2 minutes.
- Your workload varies predictably with daily or weekly cycles.
For more information, see Scaling based on predictions.
If your workloads take many minutes to initialize (for example, due to lengthy installation tasks), you can reduce the risk of response latency caused by abrupt scale-in events by configuring scale-in controls. Specifically, if you expect load spikes to follow soon after declines, you can limit the scale-in rate to prevent autoscaling from reducing a MIG's size by more VM instances than your workload can tolerate.
You don't have to configure scale-in controls if your application initializes quickly enough to pick up load spikes on scale out.
To configure scale-in controls, set the following properties in your autoscaling policy.
- Maximum allowed reduction. The number of VM instances that your workload can afford to lose (from its peak size) within the specified trailing time window. Use this parameter to limit how much your group can be scaled in so that you can still serve a likely load spike until more instances start serving. The smaller you set the maximum allowed reduction, the longer it takes for your group to scale in.
- Trailing time window. The period of history within which the autoscaler monitors the peak size required by your workload. The autoscaler does not resize the group below the peak size observed in this window minus the maximum allowed reduction. Use this parameter to define how long the autoscaler waits before removing instances. With a longer trailing time window, the autoscaler considers more historical peaks, making scale-in more conservative and stable.
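Together, these two properties put a floor under the group size. A simplified sketch of that constraint (not the service's implementation):

```python
def scale_in_floor(peak_size_in_window: int, max_allowed_reduction: int) -> int:
    """The group never shrinks below the peak size observed in the
    trailing time window minus the maximum allowed reduction."""
    return max(0, peak_size_in_window - max_allowed_reduction)

def constrained_target(metric_target: int,
                       peak_size_in_window: int,
                       max_allowed_reduction: int) -> int:
    """Apply scale-in controls to the metric-based recommendation."""
    return max(metric_target,
               scale_in_floor(peak_size_in_window, max_allowed_reduction))

# Peak of 20 VMs in the window with a maximum allowed reduction of 5:
# the group can't drop below 15, even if metrics alone would allow 8.
print(constrained_target(8, 20, 5))  # 15
```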
The recommended group size is the autoscaler's recommended number of VMs that the managed instance group should maintain, based on peak load observed during the last 10 minutes. These last 10 minutes are referred to as the stabilization period. The recommended size is recalculated constantly. If you set an autoscaling policy with scale-in controls, then the recommended size is constrained by your scale-in controls.
There is no additional charge for configuring an autoscaling policy. Autoscaler dynamically adds or deletes VM instances, so you are charged only for the resources that your MIG uses. You can control resource cost by configuring the minimum and maximum number of instances in the autoscaling policy. For Compute Engine pricing information, see Pricing.
- If you don't have an existing MIG, review how to create a managed instance group.
- Create an autoscaler that scales based on CPU utilization, load balancing serving capacity, Cloud Monitoring metrics, or schedules.
- Manage your autoscaler, for example, to get information about it, to configure scale-in controls, or to temporarily restrict it.