Distributing instances using regional managed instance groups

This page describes how to create managed instance groups (MIGs) with instances that are distributed across a single region.

Unlike zonal managed instance groups that belong to a single zone, regional managed instance groups improve your app availability by spreading your instances across multiple zones within a single region. For example, by default, a regional managed instance group in the region us-east1 maintains an even distribution of instances in three zones within the region: us-east1-b, us-east1-c, and us-east1-d.

For regions that contain more than three zones, the regional managed instance group selects three of those zones in which to create instances. You can also choose the specific zones in which to create instances, or create instances in regions that have fewer than three zones. For example, to accelerate workloads with GPUs, select zones that support GPUs.

Like zonal managed instance groups, regional managed instance groups support autoscaling, internal load balancing, and external load balancing.

Limitations

  • Each regional managed instance group can contain up to 2,000 instances.
  • When updating a managed instance group, no more than 1,000 instances can be specified in a single request.
  • You can't use regional managed instance groups with a load balancer that uses the maxRate balancing option.

Choosing regional managed instance groups

Google recommends regional managed instance groups over zonal managed instance groups because they allow you to spread your app load across multiple zones, rather than confining your app to a single zone or managing multiple instance groups across different zones. This replication protects against zonal failures and unforeseen scenarios where an entire group of instances in a single zone malfunctions. If that happens, your app can continue serving traffic from instances running in another zone in the same region.

In the case of a zone failure, or if a group of instances in a zone stops responding, a regional managed instance group continues supporting your VM instances as follows:

  • The instances that are part of the regional managed instance group in the remaining zones continue to serve traffic. No new instances are added and no instances are redistributed (unless you have set up autoscaling).

  • After the failed zone has recovered, the instance group starts serving traffic again from that zone.

When designing for robust and scalable apps, use regional managed instance groups.

Proactive instance redistribution

By default, a regional managed instance group attempts to maintain an even distribution of VM instances across zones in the region to maximize the availability of your app in the event of a zone-level failure.

If you delete or abandon VM instances from your group, causing uneven distribution across zones, the group proactively redistributes instances to reestablish an even distribution.

To reestablish an even distribution across zones, the group deletes instances in zones with more VM instances and adds new instances in zones with fewer VM instances. The group automatically picks which instances to delete.

Figure: Proactive redistribution reestablishes an even distribution across zones.
Example of proactive redistribution

For example, suppose you have a regional managed instance group with 12 instances spread across 3 zones: a, b, and c, with 4 instances in each zone. If you delete 3 VM instances in zone c, the group attempts to rebalance so that the instances are again evenly distributed across the zones. In this case, the group deletes 2 instances (one from a and one from b) and creates 2 instances in zone c, so that each zone has 3 instances and even distribution is achieved. There is no way to selectively determine which instances are deleted. The group temporarily loses capacity while the new instances start up.

To prevent automatic redistribution of your instances, you can turn off proactive instance redistribution when creating or updating a regional managed instance group.

This is useful when you need to:

  • Delete or abandon VMs from the group without affecting other running VM instances. For example, you can delete a batch worker VM instance after job completion without affecting other workers.
  • Protect VMs with stateful apps from undesirable automatic deletion by proactive redistribution.
Note: Disabling proactive redistribution can affect capacity during a zonal failure.

Figure: Uneven distribution after disabling proactive redistribution.

If you turn off proactive instance redistribution, the managed instance group does not proactively add or remove instances to achieve balance. However, the group still opportunistically converges toward balance during resize operations, treating each resize as an opportunity to balance the group: when scaling down, the group removes instances from bigger zones; when scaling up, it adds instances to smaller zones.
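For example, a resize command like the following (a sketch that reuses the example group and region names from this page) gives the group such an opportunity while it adds or removes instances:

gcloud compute instance-groups managed resize example-rmig \
    --size 24 --region us-east1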

Provisioning the correct managed instance group size to ensure availability

A variety of events can cause one or more of your instances to become unavailable, and GCP services such as load balancing and autohealing can help mitigate these events.

However, even if you use these services, your users may still experience issues if too many of your instances are simultaneously unavailable.

To be prepared for the extreme case where one zone fails or an entire group of instances stops responding, Google strongly recommends overprovisioning your managed instance group. Overprovisioning, sized to your application's needs, prevents your system from failing entirely if a zone or group of instances becomes unresponsive.

Google makes recommendations for overprovisioning with the priority of keeping your app available for your users. These recommendations include provisioning and paying for more VM instances than your app might need on a day-to-day basis. Base your overprovisioning decisions on app needs and cost limitations.

Estimating the recommended instance group size

Compute Engine recommends that you provision enough instances such that, if all of the instances in any one zone are unavailable, your remaining instances still meet the minimum number of instances that you require.

Use the following table to determine the minimum recommended size for your instance group:

Number of zones | Additional VMs recommended | Total VMs
2               | +100%                      | 200%
3               | +50%                       | 150%
4               | +33%                       | 133%

The recommended number of additional VMs is inversely proportional to the number of zones where your managed instance group is located. Thus, you can reduce the number of additional VMs by evenly distributing your application across a higher number of zones.
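In general, if your app needs N serving instances distributed evenly across n zones, provision at least N × n / (n − 1) instances so that losing any one zone still leaves N serving instances. The following minimal bash sketch computes this; the recommended_size helper is illustrative, not part of any GCP tool:

#!/usr/bin/env bash
# Recommended overprovisioned size: ceil(needed * zones / (zones - 1)).
# Assumes an even distribution and at most one failed zone.
recommended_size() {
  local zones=$1 needed=$2
  echo $(( (needed * zones + zones - 2) / (zones - 1) ))
}

recommended_size 3 20  # prints 30 (+50% for three zones)
recommended_size 2 20  # prints 40 (+100% for two zones)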

Provisioning a regional managed instance group in three or more zones

When you create a regional managed instance group in a region with at least three zones, Google recommends overprovisioning your instance group by at least 50%. By default, a regional managed instance group creates instances in three zones. Having VM instances in three zones already helps you preserve at least 2/3 of your serving capacity, and if a single zone fails, the other two zones in the region can continue to serve traffic without interruption. By overprovisioning to 150%, you can ensure that if 1/3 of the capacity is lost, 100% of traffic is supported by the remaining zones.

For example, if you need 20 VM instances in your managed instance group across three zones, we recommend, at a minimum, an additional 50% of instances. In this case, 50% of 20 is 10 more instances, for a total of 30 instances in the instance group. If you create a regional managed instance group with a size of 30, the instance group distributes your instances as evenly as possible across the three zones, like so:

Zone           | Number of instances
example-zone-1 | 10
example-zone-2 | 10
example-zone-3 | 10

If any single zone fails, you still have 20 instances serving traffic.

Provisioning a regional managed instance group in two zones

To provision your instances in two zones instead of three, Google recommends doubling the number of instances. For example, if you need 20 VM instances for your service, distributed across two zones, we recommend that you configure a regional managed instance group with 40 instances, so that each zone has 20 instances each. If a single zone fails, you still have 20 instances serving traffic.

Zone           | Number of instances
example-zone-1 | 20
example-zone-2 | 20

If the number of instances in your group is not easily divisible across two zones, Compute Engine divides the instances as evenly as possible and randomly puts the remaining instances in one of the zones.

Provisioning a regional managed instance group in one zone

It is possible to create a regional managed instance group with just one zone. This is similar to creating a zonal managed instance group.

Creating a single-zone regional managed instance group is not recommended because it provides the weakest availability guarantee for highly available applications. If the zone fails, your entire managed instance group is unavailable, potentially disrupting your users.

Selecting zones for your group

The default configuration for a regional managed instance group is to distribute instances as evenly as possible across three zones. For various reasons, you might want to select specific zones for your app. For example, if you require GPUs for your instances, you might select only zones that support GPUs. You might have persistent disks that are only available in certain zones, or you might want to start with VM instances in just a few zones, rather than in three random zones within a region.

If you want to choose the number of zones or choose the specific zones the instance group should run in, you must do that when you first create the group. After you choose specific zones during creation, you cannot change or update the zones later.

  • To select more than three zones within a region, you must explicitly specify the individual zones. For example, to select all four zones within a region, you must provide all four zones explicitly in your request. If you do not, Compute Engine selects three zones by default.

  • To select two or fewer zones in a region, you must explicitly specify the individual zones. Even if the region only contains two zones, you must still explicitly specify the zones in your request.

Regardless of whether you choose specific zones or let Compute Engine create instances in all zones within the region, by default, the new VM instances are distributed evenly across all selected zones. As a best practice, make sure you provision enough VM instances to support your apps in the specified number of zones.

Creating a regional managed instance group

Create a regional managed instance group using the GCP Console, the gcloud command-line tool, or the API.

If there is not enough capacity in each zone to support instances from the instance group, Compute Engine creates as many instances as possible and continues attempting to create the remaining instances when additional capacity becomes available.

Because you are creating a regional managed instance group, keep in mind that certain resources are zonal, such as persistent disks. If you are specifying zonal resources in your instance template, like additional persistent disks, the disk must be present in all zones so it can be attached to the instances created by this regional managed instance group.

By default, if you do not explicitly specify individual zones in your request, Compute Engine automatically chooses three zones to create instances in. If you need to create instances in more than or fewer than three zones, or you want to pick which zones are used, provide a list of zones in your request. For more information, see Selecting zones for your group.

By default, proactive instance redistribution is enabled. If you need to manually manage the number of instances in each zone, you can disable proactive instance redistribution, but then you cannot configure autoscaling for the group. For more information, see proactive instance redistribution.

Console

  1. Go to the Instance Groups page on the GCP Console.

    Go to the Instance Groups page

  2. Click Create Instance Group to create a new instance group.
  3. Under Location, select Multiple zones.
  4. Choose a desired region.
  5. If you want to choose specific zones, click Configure zones to select the zones you want to use.
  6. If you want to disable proactive instance redistribution:
    1. Ensure that Autoscaling mode is set to Off.
    2. Set Instance redistribution to Off.
  7. Choose an instance template for the instance group or create a new one.
  8. Specify the number of instances for this group. Remember to provision enough instances to support your application if a zone failure happens.
  9. Continue with the rest of the managed instance group creation process.

gcloud

All managed instance groups require an instance template. If you don't have one, create an instance template. For example, the following command creates a basic instance template with default properties:

gcloud compute instance-templates create example-template

Next, use the instance-groups managed create subcommand with the --region flag. For example, this command creates a regional managed instance group in three zones within the us-east1 region:

gcloud compute instance-groups managed create example-rmig \
    --template example-template --base-instance-name example-instances \
    --size 30 --region us-east1

If you want to select the specific zones the group should use, provide the --zones flag:

gcloud compute instance-groups managed create example-rmig \
    --template example-template --base-instance-name example-instances \
    --size 30 --zones us-east1-b,us-east1-c

If you want to disable proactive instance redistribution, set the --instance-redistribution-type flag to NONE. Because this feature is in Beta, you must use the gcloud beta tool. You cannot disable proactive instance redistribution if autoscaling is enabled.

gcloud beta compute instance-groups managed create example-rmig \
    --template example-template --base-instance-name example-instances \
    --size 30 --zones us-east1-b,us-east1-c \
    --instance-redistribution-type NONE

Note: If you are disabling proactive instance redistribution, use the gcloud beta component because the disable proactive instance redistribution feature is currently in Beta.

API

All managed instance groups require an instance template. If you don't have one, create an instance template.

In the API, construct a POST request to the regionInstanceGroupManagers.insert method. In the request body, include the desired group name, group size, base name for instances in the group, and the URL to the instance template.

POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers

{
  "baseInstanceName": "[BASE_INSTANCE_NAME]",
  "instanceTemplate": "global/instanceTemplates/[INSTANCE_TEMPLATE_NAME]",
  "name": "[INSTANCE_GROUP_NAME]",
  "targetSize": "[TARGET_SIZE]"
}

where:

  • [PROJECT_ID] is the project ID for this request.
  • [REGION] is the region for the instance group.
  • [BASE_INSTANCE_NAME] is the instance name for each instance that is created as part of the instance group. For example, a base instance name of example-instance would create instances that have names like example-instance-[RANDOM_STRING] where [RANDOM_STRING] is generated by the server.
  • [INSTANCE_TEMPLATE_NAME] is the instance template to use.
  • [TARGET_SIZE] is the target number of instances of the instance group.

If you want to select specific zones, or if you are creating instances in a region with fewer or more than three zones, include the distributionPolicy property in your request and supply a list of zones. Replace [ZONE] with the name of the zone to create instances in.

POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers

{
  "baseInstanceName": "[BASE_INSTANCE_NAME]",
  "instanceTemplate": "global/instanceTemplates/[INSTANCE_TEMPLATE_NAME]",
  "name": "[INSTANCE_GROUP_NAME]",
  "targetSize": "[TARGET_SIZE]",
  "distributionPolicy": {
     "zones": [
       {"zone": "zones/[ZONE]"},
       {"zone": "zones/[ZONE]"}
      ]
   }
}

For example, the following creates an instance group named example-rmig with 10 instances distributed across us-east1-b and us-east1-c zones:

POST https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east1/instanceGroupManagers
{
  "baseInstanceName": "example-instance",
  "instanceTemplate": "global/instanceTemplates/example-instance",
  "name": "example-rmig",
  "targetSize": 10,
  "distributionPolicy": {
      "zones": [
        {"zone": "zones/us-east1-b"},
        {"zone": "zones/us-east1-c"}
      ]
   }
}

If you want to disable proactive instance redistribution, include the updatePolicy property in your request and set its instanceRedistributionType to NONE. Because this feature is in Beta, you must use the Beta API. You cannot disable proactive instance redistribution if autoscaling is enabled.

POST https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers

{
  "baseInstanceName": "[BASE_INSTANCE_NAME]",
  "instanceTemplate": "global/instanceTemplates/[INSTANCE_TEMPLATE_NAME]",
  "name": "[INSTANCE_GROUP_NAME]",
  "targetSize": "[TARGET_SIZE]",
  "updatePolicy": {
     "instanceRedistributionType": "NONE"
   },
}

Listing instances in a regional managed instance group

To get a list of instances in your regional managed instance group, use the GCP Console, the instance-groups managed list-instances command in the gcloud command-line tool, or make a request to the regionInstanceGroupManagers.listManagedInstances method.

Console

  1. Go to the Instance Groups page on the GCP Console.

    Go to the Instance Groups page

  2. Click the name of the regional managed instance group you want to view the instances of.

The instance group details page loads with a list of instances in the instance group.

gcloud

Run the instance-groups managed list-instances command:

gcloud compute instance-groups managed list-instances [INSTANCE_GROUP_NAME] --region [REGION]

where:

  • [INSTANCE_GROUP_NAME] is the name of the instance group.
  • [REGION] is the region of the instance group.

For example, the following command lists instances that are part of an instance group named example-rmig in the region us-east1:

gcloud compute instance-groups managed list-instances example-rmig --region us-east1

API

In the API, construct an empty POST request to the regionInstanceGroupManagers.listManagedInstances method.

POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers/[INSTANCE_GROUP_NAME]/listManagedInstances

For example:

POST https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east1/instanceGroupManagers/example-rmig/listManagedInstances

Updating a regional managed instance group

You can update a regional managed instance group using the Instance Group Updater feature. The Updater allows you to update a subset of your instances or all of your instances within a group to a new instance template. You can also use the Updater to perform canary updates and control the speed of your update.
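For example, the following sketch starts a rolling update of all instances in the group to a new template (new-example-template is a placeholder; the Updater is in Beta, so this uses the gcloud beta component):

gcloud beta compute instance-groups managed rolling-action start-update example-rmig \
    --version template=new-example-template \
    --region us-east1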

Relatedly, you can change the instance template of an instance group without updating existing instances by using the set-instance-template command in gcloud or the setInstanceTemplate method in the API. Changing the instance template does not automatically update existing instances; you must recreate individual instances or run the Instance Group Updater to apply the changes. However, new VM instances added to the group use the new instance template.
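For example, the following sketch sets a new template for future instances only (again, new-example-template is a placeholder):

gcloud compute instance-groups managed set-instance-template example-rmig \
    --template new-example-template \
    --region us-east1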

Disabling and reenabling proactive instance redistribution

Proactive instance redistribution maintains an even number of instances across the selected zones in the region. This configuration maximizes the availability of your application in the event of a zone-level failure.

Proactive instance redistribution is turned on by default for regional managed instance groups, but you can turn it off for non-autoscaled managed instance groups. When proactive instance redistribution is turned off, the group does not attempt to proactively redistribute instances across zones. This is useful if you need to:

  • Manually delete or abandon VM instances from the group without affecting other running instances.
  • Automatically delete a batch worker instance upon job completion without affecting other workers.
  • Protect instances with stateful applications from unintended automatic deletion by proactive instance redistribution.

If deleting or abandoning VM instances from the group causes an uneven distribution of instances across zones, you must manually rebalance the group before you can reenable proactive redistribution. To manually rebalance the group, resize it or delete instances from zones that have more instances.
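For example, the following sketch removes a specific instance from the group to even out the distribution (the instance name is a placeholder):

gcloud compute instance-groups managed delete-instances example-rmig \
    --instances example-instances-abcd \
    --region us-east1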

When you resize a managed instance group that has proactive instance redistribution turned off, the group still opportunistically converges toward balance, treating each resize operation as an opportunity to balance the group: when the group grows, it tries to add instances to the zones with the smallest number of VMs; when the group shrinks, it removes instances from the zones with the largest number of instances.

Figure: Manually resizing a group to achieve an even redistribution.

Use the console, the gcloud tool, or the API to create an instance group with proactive instance redistribution disabled, or to turn proactive instance redistribution off or on for an existing group.

Creating a group with proactive redistribution disabled

Console

  1. Go to the Instance Groups page on the GCP Console.

    Go to the Instance Groups page

  2. Click Create Instance Group to create a new instance group.
  3. Under Location, select Multiple zones.
  4. Choose a desired region.
  5. If you want to choose specific zones, click Configure zones to select the zones you want to use.
  6. To disable proactive instance redistribution:
    1. Ensure that Autoscaling mode is set to Off.
    2. Set Instance redistribution to Off.
  7. Choose an instance template for the instance group or create a new one.
  8. Specify the number of instances for this group. Remember to provision enough instances to support your application if a zone failure happens.
  9. Continue with the rest of the managed instance group creation process.

gcloud

To create a new regional managed instance group without proactive instance redistribution, use the gcloud beta compute instance-groups managed create command with the --instance-redistribution-type flag set to NONE.

gcloud beta compute instance-groups managed create [INSTANCE_GROUP_NAME] \
    --template [TEMPLATE] \
    --size [SIZE] \
    --zones [ZONES] \
    --instance-redistribution-type NONE

Where:

  • [INSTANCE_GROUP_NAME] is the name of the instance group.
  • [TEMPLATE] is the name of the instance template to use for the group.
  • [SIZE] is the target size of the instance group.
  • [ZONES] is the list of zones in a single region where you need to deploy VM instances.

For example:

gcloud beta compute instance-groups managed create example-rmig \
    --template example-template \
    --size 30 \
    --zones us-east1-b,us-east1-c \
    --instance-redistribution-type NONE

API

To create a non-autoscaled regional managed instance group without proactive instance redistribution, construct a POST request to invoke the regionInstanceGroupManagers.insert method. In the request body, include the updatePolicy property and set its instanceRedistributionType field to NONE.

POST https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers
{
    "name": "[INSTANCE_GROUP_NAME]",
    "baseInstanceName": "[BASE_INSTANCE_NAME]",
    "instanceTemplate": "global/instanceTemplates/[TEMPLATE]",
    "targetSize": "[TARGET_SIZE]",
    "distributionPolicy": {
        "zones": [
            {"zone": "zones/[ZONE]"},
            {"zone": "zones/[ZONE]"}
        ]
    },
    "updatePolicy": {
        "instanceRedistributionType": "NONE"
    }
}

Where:

  • [PROJECT_ID] is the project ID for this request.
  • [REGION] is the region for the instance group.
  • [INSTANCE_GROUP_NAME] is the name of the instance group.
  • [BASE_INSTANCE_NAME] is the name prefix for each instance. The instance name suffix is automatically generated.
  • [TEMPLATE] is the name of the instance template to use for the group.
  • [TARGET_SIZE] is the target size of the instance group.
  • [ZONE] is the name of a zone in the single region where you need to deploy VM instances.

Turning off proactive instance redistribution

Before you can turn off proactive instance redistribution, you must turn off autoscaling.

Console

  1. Go to the Instance Groups page on the GCP Console.

    Go to the Instance Groups page

  2. Select the instance group to update and click Edit group.
  3. Ensure that Autoscaling mode is set to Off.
  4. Set Instance redistribution to Off to disable automatic redistribution.
  5. Click Save.

gcloud

To turn off proactive instance redistribution for a non-autoscaled regional managed instance group, use the gcloud beta compute instance-groups managed update command with the --instance-redistribution-type flag set to NONE.

gcloud beta compute instance-groups managed update [INSTANCE_GROUP_NAME] \
    --instance-redistribution-type NONE \
    --region [REGION]

where:

  • [INSTANCE_GROUP_NAME] is the name of the instance group.
  • [REGION] is the region of the instance group.

Note: If you are disabling proactive instance redistribution, use the gcloud beta component because turning off instance redistribution is currently in Beta.

API

In the API, construct a PATCH request to the regionInstanceGroupManagers.patch method. In the request body, include the updatePolicy property and set its instanceRedistributionType field to NONE.

PATCH https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers/[INSTANCE_GROUP]

{
    "updatePolicy": {
         "instanceRedistributionType": "NONE"
    }
}

where:

  • [PROJECT_ID] is the project ID for this request.
  • [REGION] is the region for the instance group.
  • [INSTANCE_GROUP] is the name of a non-autoscaled managed instance group.

Note: If you are disabling proactive instance redistribution, use the beta API because turning off instance redistribution is currently in Beta.

Turning on proactive instance redistribution

To turn on proactive instance redistribution, use a similar command as for turning it off, but set the instance redistribution type to PROACTIVE.
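For example, with the gcloud tool (using the beta component, because the feature is in Beta):

gcloud beta compute instance-groups managed update [INSTANCE_GROUP_NAME] \
    --instance-redistribution-type PROACTIVE \
    --region [REGION]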

If you manually deleted or abandoned some instances resulting in an uneven distribution of instances across the region, then, before you can reenable proactive instance redistribution, you must manually rebalance the group. The difference in the number of VM instances between any two zones should not exceed 1 VM.

You can achieve an even distribution of instances across zones manually by deleting VMs from zones with more instances or by resizing the group up to fill up the zones with fewer instances until the distribution is even.

A regional managed instance group does not allow turning on proactive instance redistribution when instances are distributed unevenly across zones (the difference in the number of VM instances between two zones is 2 or more VMs). This is to prevent an unintended automatic deletion of VMs from zones with more instances, which would be triggered to achieve even distribution.

Autoscaling a regional managed instance group

Compute Engine offers autoscaling for managed instance groups, which allows your instance groups to automatically add or remove instances based on increases or decreases in load.

If you enable autoscaling for a regional managed instance group, the feature behaves as follows:

  • An autoscaling policy is applied to the group as a whole (not to individual zones). For example, if you configure the autoscaler to target 66% CPU utilization, the autoscaler tracks all instances in the group to maintain an average 66% utilization across all instances in all zones.

  • Autoscaling attempts to evenly distribute instances across available zones when possible. In general, the autoscaler keeps zones balanced in size by growing smaller zones and expecting that load will get redirected from bigger zones, for example, through a load balancer. We do not recommend configuring a custom load balancer that prefers one zone as this could cause unexpected behavior.

  • If your workload uses instances evenly in 3 zones and a zone experiences a failure, or a group of instances within a zone fails, 1/3 of capacity might be lost, but 2/3 of the capacity remains in the other zones. We recommend that you overprovision your autoscaled regional managed instance group to avoid overloading surviving servers while a zone is lost.

  • If resources (for example, preemptible instances) are temporarily unavailable in a zone, the group continues to try to create those managed instances in that zone. After the resources become available again, the group acquires the desired number of running instances.

  • If load balancing is enabled and resources are unavailable in a zone, causing higher utilization of the existing resources in that zone, new instances might be created in zones with lower utilization rates, which can result in a temporarily uneven distribution.

The autoscaler only adds instances to a zone up to 1/n of the specified maximum for the group, where n is the number of provisioned zones. For example, if you are using the default of 3 zones, and if 15 is the maxNumReplicas configured for autoscaling, the autoscaler can only add up to 1/3 * 15 = 5 instances per zone for the instance group. If one zone fails, the autoscaler only scales up to 2/3 of the maxNumReplicas in the remaining two zones combined.

Provisioning your autoscaler configuration

Similar to the advice on overprovisioning a managed instance group, you should overprovision your autoscaler configuration so that:

  • Set the autoscaling utilization target to 2/3 of your desired utilization target.
  • To accommodate the lowered utilization target, the autoscaler adds more instances, so increase maxNumReplicas to 50% more than the number you would set without accounting for overprovisioning.

For example, if you expect that 20 instances can handle your peak loads and the target utilization is 80%, set the autoscaler to:

  • 2/3 * 0.8 = 0.53 or 53% for target utilization instead of 80%
  • 3/2 * 20 = 30 for max number of instances instead of 20

This setup ensures that in the case of a single-zone failure, your instance group should not run out of capacity because the remaining 2/3 of instances should be able to handle the increased load from the offline zone (since you lowered the target utilization well below its capacity). The autoscaler will also add new instances up to the maximum number of instances you specified to maintain the 2/3 utilization target.
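Applying these numbers with the gcloud tool, reusing the example group from earlier sections, might look like this:

gcloud compute instance-groups managed set-autoscaling example-rmig \
    --target-cpu-utilization 0.53 --max-num-replicas 30 --region us-east1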

However, you shouldn't rely solely on overprovisioning your managed instance groups to handle increased load. As a best practice, Google recommends that you regularly load test your applications to make sure they can handle the increased utilization that a zonal outage removing 1/3 of instances might cause.

Enabling autoscaling

Console

  1. Go to the Instance Groups page on the GCP Console.

    Go to the Instance Groups page

  2. If you do not have an instance group, create one. Otherwise, click the name of an existing regional managed instance group from the list.
  3. On the instance group details page, click the Edit Group button.
  4. Under Autoscaling, select On.
  5. Fill out the properties for the autoscaling configuration.
  6. Save your changes.

gcloud

Using the gcloud command-line tool, use the set-autoscaling subcommand with the --region flag to enable regional autoscaling. For more information on creating an autoscaler, read the autoscaling documentation.

For example, the following snippet sets up an autoscaler for an instance group named example-rmig. Replace us-east1 with the region of your managed instance group and example-rmig with the name of the regional managed instance group:

gcloud compute instance-groups managed set-autoscaling example-rmig \
    --target-cpu-utilization 0.8 --max-num-replicas 5 --region us-east1

API

To set up regional autoscaling in the API, make a POST request to the following URL, with your own project ID and the region of your managed instance group:

POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/autoscalers

Your request body must contain the name, target, and autoscalingPolicy fields. autoscalingPolicy must define cpuUtilization and maxNumReplicas.

{
 "name": "[AUTOSCALER_NAME]",
 "target": "regions/us-east1/instanceGroupManagers/[INSTANCE_GROUP_NAME]",
 "autoscalingPolicy": {
    "maxNumReplicas": [MAX_NUM_INSTANCES],
    "cpuUtilization": {
       "utilizationTarget": [TARGET_UTILIZATION]
     },
    "coolDownPeriodSec": [SECONDS]
  }
}

For example:

{
 "name": "example-autoscaler",
 "target": "regions/us-east1/instanceGroupManagers/example-rmig",
 "autoscalingPolicy": {
    "maxNumReplicas": 10,
    "cpuUtilization": {
       "utilizationTarget": 0.8
     },
    "coolDownPeriodSec": 30
  }
}

Updating an autoscaler

You can update a regional autoscaler as you would a zonal autoscaler. Read the documentation on updating an autoscaler.
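For example, re-running the set-autoscaling command with new values replaces the group's existing autoscaler configuration (a sketch reusing the earlier example group):

gcloud compute instance-groups managed set-autoscaling example-rmig \
    --target-cpu-utilization 0.6 --max-num-replicas 20 --region us-east1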

Adding a regional managed instance group to a load balancer

Google Cloud Platform load balancing uses instance groups to serve traffic. Depending on the type of load balancer you are using, you can add instance groups to a target pool or backend service. To read more about managed instance groups and load balancing, see the Instance Groups Overview.

You can assign a regional managed instance group as a target of a backend service for external load balancing and internal load balancing or as part of a target pool for Network load balancing.

For HTTP(S) load balancing, only maxRatePerInstance and maxUtilization are supported for regional managed instance groups.

Adding a regional managed instance group to a backend service

A backend service is necessary for creating an HTTP(S), SSL proxy, TCP proxy, or internal load balancer. A backend service can contain multiple backends, and an instance group is one type of backend. The instances in the instance group respond to traffic from the load balancer. The backend service, in turn, knows which instances it can use, how much traffic they can handle, and how much traffic they are currently handling. In addition, the backend service monitors health checks and does not send new connections to unhealthy instances.

For instructions to add an instance group to a backend service, read Adding instance groups to a backend service.
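For example, a sketch of adding the regional managed instance group from earlier sections as a backend to an existing global backend service (example-backend-service is a placeholder):

gcloud compute backend-services add-backend example-backend-service \
    --instance-group example-rmig \
    --instance-group-region us-east1 \
    --balancing-mode UTILIZATION --max-utilization 0.8 \
    --global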

Adding a regional managed instance group to a target pool

A target pool is an object that contains one or more virtual machine instances. A target pool is used in Network Load Balancing, where a network load balancer forwards user requests to the attached target pool. The instances that are part of that target pool serve these requests and return a response. You can add a managed instance group to a target pool so that when instances are added or removed from the instance group, the target pool is also automatically updated with the changes.

Before you can add a managed instance group to a target pool, the target pool must exist. For more information, see the documentation for Adding a target pool.

To add an existing managed instance group to a target pool, follow these instructions. This causes all VM instances that are part of the managed instance group to be added to the target pool.

Console

  1. Go to the Target Pools page in the GCP Console.

    Go to the Target Pools page

  2. Click the target pool you want to add the instance group to.
  3. Click the Edit button.
  4. Scroll down to the VM instances section and click on Select instance groups.
  5. Select an instance group from the drop-down menu.
  6. Save your changes.

gcloud

With the gcloud command-line tool, use the set-target-pools command:

gcloud beta compute instance-groups managed set-target-pools [INSTANCE_GROUP] \
    --target-pools [TARGET_POOL,..] [--region REGION]

where:

  • [INSTANCE_GROUP] is the name of the instance group.
  • [TARGET_POOL] is the name of one or more target pools to add this instance group to.
  • [REGION] is the region of the instance group.

API

In the API, make a POST request to the following URI:

POST https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers/[INSTANCE_GROUP]/setTargetPools

where:

  • [PROJECT_ID] is the project ID for this request.
  • [REGION] is the region for the instance group.
  • [INSTANCE_GROUP] is the name of the instance group.

The request body should contain a list of URIs to the target pools you want to add this group to. For example:

{
  "targetPools": [
    "regions/us-central1/targetPools/example-targetpool-1",
    "regions/us-central1/targetPools/example-targetpool-2"
  ]
}

Simulating a zone outage for a regional managed instance group

To test that your regional managed instance group is overprovisioned enough and can survive a zone outage, you can use the following example to simulate a single zone failure.

This script stops and starts Apache as the default scenario. If this doesn't apply to your application, replace the commands that stop and start Apache with your own failure and recovery scenario.

  1. Deploy and run this script continuously in every virtual machine instance in the instance group. You can do this by adding the script to the instance template or by including the script in a custom image and using the image in the instance template.

    #!/usr/bin/env bash
    
    # Copyright 2016 Google Inc. All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    set -o nounset
    set -o errexit
    set -o pipefail
    
    function GetMetadata() {
      curl -s "$1" -H "Metadata-Flavor: Google"
    }
    
    PROJECT_METADATA_URL="http://metadata.google.internal/computeMetadata/v1/project/attributes"
    INSTANCE_METADATA_URL="http://metadata.google.internal/computeMetadata/v1/instance"
    ZONE=$(GetMetadata "$INSTANCE_METADATA_URL/zone" | cut -d '/' -f 4)
    INSTANCE_NAME=$(hostname)
    
    # We keep track of the state to make sure failure and recovery is triggered only once.
    STATE="healthy"
    while true; do
      if [[ "$ZONE" = "$(GetMetadata $PROJECT_METADATA_URL/failed_zone)" ]] && \
         [[ "$INSTANCE_NAME" = *"$(GetMetadata $PROJECT_METADATA_URL/failed_instance_names)"* ]]; then
        if [[ "$STATE" = "healthy" ]]; then
          STATE="failure"
          # Do something to simulate failure here.
          echo "STARTING A FAILURE"
          /etc/init.d/apache2 stop
        fi
      else
        if [[ "$STATE" = "failure" ]] ; then
          STATE="healthy"
          # Do something to recover here.
          echo "RECOVERING FROM FAILURE"
          /etc/init.d/apache2 start
        fi
      fi
      sleep 5
    done
    
    
  2. Simulate a zone failure by setting these two project metadata fields:

    • failed_zone: Sets the zone where you want to simulate the outage (limits the failure to just one zone).
    • failed_instance_names: Sets the instances to take offline by name (limits the failure to instances whose names contain this string).

    You can set this metadata using the gcloud command-line tool. For example, the following command sets the failed zone to europe-west1-b and affects instances whose names contain instance-base-name-:

    gcloud compute project-info add-metadata --metadata failed_zone='europe-west1-b',failed_instance_names='instance-base-name-'
    
  3. After you are done simulating the outage, recover from the failure by removing the metadata keys:

    gcloud compute project-info remove-metadata --keys failed_zone,failed_instance_names
    

Here are some ideas for failure scenarios you can run using this script:

  • Stop your application completely to see how the managed instance group responds.
  • Make your instances return as "unhealthy" on load balancing health checks.
  • Modify iptables to block some of the traffic to and from the instance (see the sketch after this list).
  • Shut down the virtual machine instances. By default, the regional managed instance group recreates each instance shortly after, but as long as the metadata values are set, the new incarnation shuts itself down as soon as the script runs. This results in a crash loop.
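A minimal sketch of the iptables idea, run on an instance itself (this drops incoming HTTP traffic; adjust the port to match your app):

# Block incoming traffic on port 80 to simulate a partial failure.
sudo iptables -A INPUT -p tcp --dport 80 -j DROP

# Later, remove the rule to recover.
sudo iptables -D INPUT -p tcp --dport 80 -j DROP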
