Distributing Instances using Regional Managed Instance Groups

This page describes how to create groups of instances that are distributed across a single region. To learn about instance groups, read the Instance Groups documentation.

Unlike zonal managed instance groups, which belong to a single zone, regional managed instance groups improve your application's availability by spreading your instances across multiple zones within a single region. For example, by default, a regional managed instance group in the region us-east1 creates instances in three zones within the region: us-east1-b, us-east1-c, and us-east1-d. For regions that contain more than three zones, the regional managed instance group chooses three zones to create instances in. You can also selectively choose which zones to create instances in, or create instances in regions with fewer than three zones.

Regional managed instance groups also support autoscaling, internal load balancing, and external load balancing.

Limitations

  • Each regional managed instance group can contain up to 2000 instances.
  • When updating a managed instance group, no more than 1000 instances can be specified in a single request.
  • You cannot use regional managed instance groups with a load balancer that uses the maxRate balancing option.

Choosing regional managed instance groups

Google recommends regional managed instance groups over zonal managed instance groups because they allow you to spread application load across multiple zones, rather than confining your application to a single zone or managing multiple instance groups across different zones. This replication protects against zonal failures and unforeseen scenarios where an entire group of instances in a single zone malfunctions. If that happens, your application can continue serving traffic from instances running in another zone in the same region.

In the case of a zone failure, or if a group of instances in a zone stops responding, regional managed instance groups behave as follows:

  • The instances that are part of the regional managed instance group in the remaining zones continue to serve traffic. No new instances are added and no instances are redistributed (unless you set up autoscaling).

  • After the failed zone recovers, the instance group starts serving traffic again from that zone.

  • The group performs automatic rebalancing, if necessary, to maintain an equal number of instances across all available zones.

When designing for robust and scalable applications, use regional managed instance groups.

Automatic rebalancing

Regional managed instance groups attempt to maintain an equal balance of VM instances across the specified zones to support high availability workloads. If another action, such as a call to the deleteInstances or abandonInstances method, causes an imbalance in VM instances across zones, the group actively works to re-establish the correct balance. This can cause the group to delete instances or add new instances to restore balance.

For example, suppose you have a regional managed instance group with 2 instances in zone us-central1-a, 1 instance in zone us-central1-b, and 1 instance in zone us-central1-c. If you delete the VM instance in us-central1-c, the group attempts to rebalance so that the instances are again evenly distributed across the zones.

In this case, the group removes 1 instance from zone us-central1-a and adds a new instance to zone us-central1-c, so that each zone again has 1 instance. There is no way to selectively determine which instance the group deletes when it rebalances.
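
For example, here is a minimal sketch of the kind of deletion that can trigger rebalancing; the group name, instance name, and region are placeholders for illustration:

gcloud compute instance-groups managed delete-instances example-rmig \
    --instances example-instances-wxyz --region us-central1

After the command completes, the group detects the imbalance and rebalances across its zones as described above.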

This behavior is enabled by default for regional managed instance groups to support high availability workloads, but it is something to be aware of when you delete or remove instances from a regional managed instance group.

Provisioning the correct managed instance group size

To be prepared for the extreme case where one zone fails or an entire group of instances stops responding, Compute Engine strongly recommends overprovisioning your managed instance group. Depending on your application's needs, overprovisioning your group can prevent your system from failing entirely if a zone or a group of instances becomes unresponsive.

Google makes these overprovisioning recommendations with the priority of keeping your application available to your users. Following them means provisioning, and paying for, more VM instances than your application might need on a day-to-day basis. Make the best judgment for overprovisioning based on your application's needs and your cost limitations.

Provisioning a regional managed instance group in three or more zones

If you are creating a regional managed instance group in a region with at least three zones, Google recommends overprovisioning your instance group by at least 50%. By default, a regional managed instance group creates instances in three zones. Having VM instances in three zones already helps you preserve at least 2/3 of your serving capacity: if a single zone fails, the other two zones in the region can continue to serve traffic without interruption. By overprovisioning to 150%, you can ensure that if 1/3 of the capacity is lost, 100% of traffic is supported by the remaining zones.

For example, if you need 20 virtual machine instances in your managed instance group across three zones, we recommend, at a minimum, adding an additional 50% to the number of instances. In this case, 50% of 20 is 10 more instances, for a total of 30 instances in the instance group. If you create a regional managed instance group with a size of 30, the instance group distributes your instances as equally as possible across the three zones, like so:

Zone             Number of instances
example-zone-1   10
example-zone-2   10
example-zone-3   10

In the case of any single zone failures, you would still have 20 instances serving traffic.

Provisioning a regional managed instance group in two zones

If you want to provision your instances in two zones instead of three, Google recommends doubling the number of instances. For example, if you need 20 VM instances for your service, distributed across two zones, you should configure a regional managed instance group with 40 instances, so that each zone has 20 instances. If a single zone fails, you still have 20 instances serving traffic.

Zone             Number of instances
example-zone-1   20
example-zone-2   20

If the number of instances in your group is not easily divisible across two zones, Compute Engine divides the instances as equally as possible and randomly puts the remaining instances in one of the zones. For example, a group of 21 instances across two zones gets 11 instances in one randomly chosen zone and 10 in the other.

Provisioning a regional managed instance group in one zone

It is possible to create a regional managed instance group with just one zone. This is similar to creating a zonal managed instance group but with some differences:

  • If you create a regional managed instance group with one zone, you can add additional zones to the group later. You cannot add additional zones to a zonal managed instance group.

  • Many new features are often available in zonal managed instance groups first.

Creating a single-zone regional managed instance group is not recommended because it is the least available configuration. If the zone or the region fails, your entire managed instance group becomes unavailable, potentially disrupting your users.

Selecting zones for your group

The default configuration for a regional managed instance group is to distribute instances as equally as possible across three zones. For various reasons, you might want to select specific zones for your application. For example, if you require GPUs for your instances, you might select only zones that support GPUs. You might have persistent disks that are only available in certain zones, or you might want to start with VM instances in just a few zones, rather than in three random zones within a region.

If you want to choose the number of zones or the specific zones the instance group runs in, you must do so when you first create the group. After you choose specific zones during creation, you cannot change or update the zones later.

  • To select more than three zones within a region, you must explicitly specify the individual zones. For example, to select all four zones within a region, you must provide all four zones explicitly in your request. If you do not, Compute Engine will select three zones by default.

  • To select two or fewer zones in a region, you must explicitly specify the individual zones. Even if the region only contains two zones, you must still explicitly specify the zones in your request.

Regardless of whether you choose specific zones or select just the region and allow Compute Engine to pick zones within it, the VM instances are distributed as equally as possible across all of the group's zones. As a best practice, make sure you provision enough VM instances to support your applications in the specified number of zones.
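
For example, the following is a sketch of explicitly selecting all four zones in a hypothetical region with four zones, using the beta --zones flag covered later on this page; substitute zones that actually exist in your region:

gcloud beta compute instance-groups managed create example-rmig \
    --template example-template --base-instance-name example-instances \
    --size 40 --zones us-central1-a,us-central1-b,us-central1-c,us-central1-f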

Creating a regional managed instance group

Create a regional managed instance group using the gcloud command-line tool, the Cloud Platform Console, or the API.

If there is not enough capacity in each zone to support all of the instances in the group, Compute Engine creates as many instances as possible and continues attempting to create the remaining instances when additional capacity becomes available.

Since you are creating a regional managed instance group, keep in mind that certain resources are zonal, such as persistent disks. If you are specifying zonal resources in your instance template, like additional persistent disks, the disk must be present in all zones so it can be attached to the instances created by this regional managed instance group.
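
As a hypothetical illustration, the template sketched below attaches an existing persistent disk named example-disk; for a regional managed instance group, this only works if a disk with that name exists in every zone the group uses, and a zonal disk can be attached to multiple instances only in read-only mode:

gcloud compute instance-templates create example-template-with-disk \
    --disk name=example-disk,device-name=example-disk,mode=ro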

By default, if you do not explicitly specify individual zones in your request, Compute Engine chooses three zones to create instances in. If you need to create instances in more or fewer than three zones, or you want to pick which zones are used, provide a list of zones in your request. Read about Selecting zones for your group.

Console

  1. Go to the Instance Groups page on the Cloud Platform Console.

  2. Click Create Instance Group to create a new instance group.
  3. Under Location, select Multi-zone.
  4. Choose a desired region.
  5. If you want to choose specific zones, click Configure zones to select the zones you want to use.
  6. Choose an instance template for the instance group or create a new one.
  7. Specify the number of instances for this group. Remember to provision enough instances to support your application if a zone failure happens.
  8. Continue with the rest of the managed instance group creation process.

gcloud

All managed instance groups require an instance template. If you don't have one, create an instance template. For example, the following command creates a basic instance template with default properties:

gcloud compute instance-templates create example-template

Next, use the instance-groups managed create subcommand with the --region flag. For example, this command creates a regional managed instance group in three zones within the us-east1 region:

gcloud compute instance-groups managed create example-rmig \
    --template example-template --base-instance-name example-instances \
    --size 30 --region us-east1

If you want to select the specific zones the group should use, provide the --zones flag with the gcloud beta component:

gcloud beta compute instance-groups managed create example-rmig \
    --template example-template --base-instance-name example-instances \
    --size 30 --zones us-east1-b,us-east1-c

Note: If you are choosing specific zones, use the gcloud beta component because the zone selection feature is currently in Beta.

API

All managed instance groups require an instance template. If you don't have one, create an instance template.

In the API, construct a POST request to the regionInstanceGroupManagers.insert method. In the request body, include the desired group name, group size, base name for instances in the group, and the URL to the instance template.

POST https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers

{
  "baseInstanceName": "[BASE_INSTANCE_NAME]",
  "instanceTemplate": "global/instanceTemplates/[INSTANCE_TEMPLATE_NAME]",
  "name": "[INSTANCE_GROUP_NAME]",
  "targetSize": "[TARGET_SIZE]"
}

where:

  • [PROJECT_ID] is the project ID for this request.
  • [REGION] is the region for the instance group.
  • [BASE_INSTANCE_NAME] is the instance name for each instance that is created as part of the instance group. For example, a base instance name of example-instance would create instances that have names like example-instance-[RANDOM_STRING] where [RANDOM_STRING] is generated by the server.
  • [INSTANCE_TEMPLATE_NAME] is the instance template to use.
  • [TARGET_SIZE] is the target number of instances of the instance group.

If you want to select specific zones, or if you are creating instances in a region with fewer or more than three zones, include the distributionPolicy property in your request and supply a list of zones. Replace [ZONE] with the names of the zones to create instances in.

POST https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers

{ "baseInstanceName": "[BASE_INSTANCE_NAME]", "instanceTemplate": "global/instanceTemplates/[INSTANCE_TEMPLATE_NAME]", "name": "[INSTANCE_GROUP_NAME]", "targetSize": "[TARGET_SIZE]", "distributionPolicy": { "zones": [ {"zone": "zones/[ZONE]"}, {"zone": "zones/[ZONE]"} ] } }

For example, the following request creates an instance group named example-rmig with 10 instances distributed across the us-east1-b and us-east1-c zones:

POST https://www.googleapis.com/compute/beta/projects/myproject/regions/us-east1/instanceGroupManagers
{
  "baseInstanceName": "example-instance",
  "instanceTemplate": "global/instanceTemplates/example-instance",
  "name": "example-rmig",
  "targetSize": 10,
  "distributionPolicy": {
      "zones": [
        {"zone": "zones/us-east1-b"},
        {"zone": "zones/us-east1-c"}
      ]
   }
}

Listing instances in a regional managed instance group

To get a list of instances in your regional managed instance group, use the Cloud Platform Console, the instance-groups managed list-instances command in the gcloud command-line tool, or make a request to the regionInstanceGroupManagers.listManagedInstances method.

Console

  1. Go to the Instance Groups page on the Cloud Platform Console.

  2. Click on the name of the regional managed instance group you want to view the instances of.

The instance group details page loads with a list of instances in the instance group.

gcloud

Run the instance-groups managed list-instances command:

gcloud compute instance-groups managed list-instances [INSTANCE_GROUP_NAME] --region [REGION]

where:

  • [INSTANCE_GROUP_NAME] is the name of the instance group.
  • [REGION] is the region of the instance group.

For example, the following command lists instances that are part of an instance group named example-rmig in the region us-east1:

gcloud compute instance-groups managed list-instances example-rmig --region us-east1

API

In the API, construct an empty POST request to the regionInstanceGroupManagers.listManagedInstances method.

POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers/[INSTANCE_GROUP_NAME]/listManagedInstances

For example:

POST https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east1/instanceGroupManagers/example-rmig/listManagedInstances
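
The response contains a managedInstances list describing each instance and any current action on it. The following is an abbreviated, illustrative sketch only; the exact fields and values in a real response vary:

{
  "managedInstances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/myproject/zones/us-east1-b/instances/example-instances-abcd",
      "instanceStatus": "RUNNING",
      "currentAction": "NONE"
    }
  ]
}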

Updating a regional managed instance group

The Instance Group Updater does not support regional managed instance groups, so to update a regional managed instance group, you can:

  1. Change the instance template of the instance group (see the example command after these steps).
  2. Recreate instances in the regional instance group to pick up the new instance template. For example, in gcloud, use the recreate-instances subcommand and add the --region flag:

    gcloud compute instance-groups managed recreate-instances [INSTANCE_GROUP] \
        --instances [INSTANCE],[INSTANCE..] --region [REGION]
    
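For step 1, one way to change the group's template from the command line is the set-instance-template subcommand, sketched below; depending on your gcloud version, this may require the beta component:

gcloud compute instance-groups managed set-instance-template [INSTANCE_GROUP] \
    --template [NEW_TEMPLATE] --region [REGION]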

Alternatively, if you have the capacity and want to maintain your existing instances as backup, you can skip step 2, and perform the following steps instead:

  1. Manually resize the instance group to a larger size. Provide the region of your managed instance group in the request. For example, in the gcloud command-line tool, use the resize subcommand and append the --region flag:

    gcloud compute instance-groups managed resize [INSTANCE_GROUP] --size [NEW_SIZE] --region [REGION]
    

    This causes the instance group to create new instances using the new instance template, and also keeps your old instances around.

  2. After you are happy with the new instances, delete the instances running the previous instance template, using the instance-groups managed delete-instances command with the --region flag:

    gcloud compute instance-groups managed delete-instances [INSTANCE_GROUP] \
        --instances [INSTANCE],[INSTANCE..] --region [REGION]
    

Autoscaling a regional managed instance group

Compute Engine offers autoscaling for managed instance groups, which allows your instance groups to automatically add or remove instances based on increases or decreases in load. You can enable autoscaling for regional managed instance groups as well.

If you enable autoscaling for a regional managed instance group, the feature behaves as follows:

  • An autoscaling policy is applied to the group as a whole, not to individual zones. For example, if you set the autoscaler to target 66% CPU utilization, the autoscaler tracks all instances in the group to maintain an average of 66% utilization across all instances in all zones.

  • Autoscaling attempts to spread instances evenly across available zones when possible. In general, the autoscaler keeps zones balanced in size by growing smaller zones and expecting that load will get redirected from bigger zones. We do not recommend configuring a custom load balancer that prefers one zone as this could cause unexpected behavior.

  • If a zone experiences a failure, or a group of instances within a zone fails, 1/3 of capacity may be lost but 2/3 of the capacity will remain in the other zones. We recommend setting your autoscaling policy to overprovision your autoscaled managed instance group to avoid overloading surviving servers during the time a zone is lost.

The autoscaler will only add instances to a zone up to 1/3 of the specified maximum for the group. For example, if 15 is the maxNumReplicas configured for autoscaling, the autoscaler can only add up to 1/3 * 15 = 5 instances per zone for the instance group. If one zone fails, the autoscaler will only scale up to 2/3 of the maxNumReplicas in the remaining two zones combined.

Provisioning your autoscaler configuration

Similar to the advice on overprovisioning a managed instance group, you should overprovision your autoscaler configuration so that:

  • The autoscaling utilization target is 2/3 of your desired utilization target.
  • The maxNumReplicas setting is 50% more than the number you would have set without accounting for overprovisioning. The autoscaler adds more instances to accommodate the lowered utilization target.

For example, if you expect that 20 instances can handle your peak loads and the target utilization is 80%, set the autoscaler to:

  • 2/3 * 0.8 = 0.53 or 53% for target utilization instead of 80%
  • 3/2 * 20 = 30 for max number of instances instead of 20

This setup ensures that in the case of a single-zone failure, your instance group should not run out of capacity because the remaining 2/3 of instances should be able to handle the increased load from the offline zone (since you lowered the target utilization well below its capacity). The autoscaler will also add new instances up to the maximum number of instances you specified to maintain the 2/3 utilization target.
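
Continuing the worked example, a corresponding gcloud command (reusing the example group and region names from earlier on this page) might look like the following sketch:

gcloud compute instance-groups managed set-autoscaling example-rmig \
  --target-cpu-utilization 0.53 --max-num-replicas 30 --region us-east1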

However, you shouldn't rely solely on overprovisioning your managed instance groups to handle increased load. As a best practice, Google recommends that you regularly load test your applications to make sure they can handle the increased utilization that a zonal outage removing 1/3 of your instances might cause.

Enabling autoscaling

Console

  1. Go to the Instance Groups page on the Cloud Platform Console.

  2. If you do not have an instance group, create one. Otherwise, click on an existing regional managed instance group from the list.
  3. On the instance group details page, click the Edit Group button.
  4. Under Autoscaling, check On.
  5. Fill out the properties for the autoscaling configuration.
  6. Save your changes.

gcloud

Using the gcloud command-line tool, use the set-autoscaling subcommand with the --region flag to enable regional autoscaling. For more information on creating an autoscaler, read the autoscaling documentation.

For example, the following command sets up an autoscaler for an example instance group named example-rmig. Replace us-east1 with the region of your managed instance group and example-rmig with the name of the regional managed instance group:

gcloud compute instance-groups managed set-autoscaling example-rmig \
  --target-cpu-utilization 0.8 --max-num-replicas 5 --region us-east1

API

To set up regional autoscaling in the API, make a POST request to the following URL, with your own project ID and the region of your managed instance group:

POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/autoscalers

Your request body must contain the name, target, and autoscalingPolicy fields. autoscalingPolicy must define cpuUtilization and maxNumReplicas.

{
 "name": "[AUTOSCALER_NAME]",
 "target": "regions/us-east1/instanceGroupManagers/[INSTANCE_GROUP_NAME]",
 "autoscalingPolicy": {
    "maxNumReplicas": [MAX_NUM_INSTANCES],
    "cpuUtilization": {
       "utilizationTarget": [TARGET_UTILIZATION]
     },
    "coolDownPeriodSec": [SECONDS]
  }
}

For example:

{
 "name": "example-autoscaler",
 "target": "regions/us-east1/instanceGroupManagers/example-rmig",
 "autoscalingPolicy": {
    "maxNumReplicas": 10,
    "cpuUtilization": {
       "utilizationTarget": 0.8
     },
    "coolDownPeriodSec": 30
  }
}

Updating an autoscaler

You can update a regional autoscaler as you would a zonal autoscaler. Read the documentation on updating an autoscaler.

Adding a regional managed instance group to a load balancer

Google Cloud Platform load balancing uses instance groups to serve traffic. Depending on the type of load balancer you are using, you can add instance groups to a target pool or backend service. To read more about managed instance groups and load balancing, see the Instance Groups Overview.

You can assign a regional managed instance group as a backend of a backend service for external load balancing and internal load balancing, or as part of a target pool for Network load balancing.

For HTTP(S) load balancing, only the maxRatePerInstance and maxUtilization balancing options are supported for regional managed instance groups.

Adding a regional managed instance group to a backend service

A backend service is necessary for creating an HTTP(S), internal, or SSL load balancer. A backend service contains individual backends, each of which contains one instance group, either managed or unmanaged. The instances in the instance group respond to traffic from the load balancer. The backend service, in turn, knows which instances it can use, how much traffic they can handle, and how much traffic they are currently handling. In addition, the backend service monitors health checks and does not send new connections to unhealthy instances.

For instructions to add an instance group to a backend service, read Adding instance groups to a backend service.
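
For reference, here is a sketch of attaching a regional managed instance group to an existing backend service with the gcloud tool; the backend service name is a placeholder, and the exact flags can vary by gcloud release:

gcloud beta compute backend-services add-backend example-backend-service \
    --instance-group example-rmig --instance-group-region us-east1 \
    --balancing-mode UTILIZATION --global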

Adding a regional managed instance group to a target pool

A target pool is an object that contains one or more virtual machine instances. A target pool is used in Network load balancing, where a Network load balancer forwards user requests to the attached target pool. The instances that are part of that target pool serve these requests and return a response. You can add a managed instance group to a target pool so that when instances are added or removed from the instance group, the target pool is also automatically updated with the changes.

Before you can add a managed instance group to a target pool, the target pool must exist. For more information, see the documentation for Adding a target pool.

To add an existing managed instance group to a target pool, follow these instructions. This causes all VM instances that are part of the managed instance group to be added to the target pool.

Console

  1. Go to the Target Pools page in the Cloud Platform Console.

  2. Click on the target pool you want to add the instance group to.
  3. Click the Edit button.
  4. Scroll down to the VM instances section and click on Select instance groups.
  5. Select an instance group from the drop-down menu.
  6. Save your changes.

gcloud

With the gcloud command-line tool, use the set-target-pools command:

gcloud beta compute instance-groups managed set-target-pools [INSTANCE_GROUP] \
    --target-pools [TARGET_POOL,..] [--region REGION]

where:

  • [INSTANCE_GROUP] is the name of the instance group.
  • [TARGET_POOL] is the name of one or more target pools to add this instance group to.
  • [REGION] is the region of the instance group.

API

In the API, make a POST request to the following URI:

POST https://www.googleapis.com/compute/beta/projects/[PROJECT_ID]/regions/[REGION]/instanceGroupManagers/[INSTANCE_GROUP]/setTargetPools

where:

  • [PROJECT_ID] is the project ID for this request.
  • [REGION] is the region for the instance group.
  • [INSTANCE_GROUP] is the name of the instance group.

The request body should contain a list of URIs to the target pools to which you want to add this group. For example:

{
  "targetPools": [
    "regions/us-central1/targetPools/example-targetpool-1",
    "regions/us-central1/targetPools/example-targetpool-2"
  ]
}

Simulating a zone outage for a regional managed instance group

To test that your regional managed instance group is overprovisioned enough and can survive a zone outage, you can use the following example to simulate a single zone failure.

This script stops and starts Apache as the default scenario. If this doesn't apply to your application, replace the commands that stop and start Apache with your own failure and recovery scenario.

  1. Deploy and run this script continuously in every virtual machine instance in the instance group. You can do this by adding the script to the instance template, for example as a startup script (see the sketch after these steps), or by including the script in a custom image and using the image in the instance template.

    #!/usr/bin/env bash
    
    # Copyright 2016 Google Inc. All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    set -o nounset
    set -o errexit
    set -o pipefail
    
    function GetMetadata() {
      curl -s "$1" -H "Metadata-Flavor: Google"
    }
    
    PROJECT_METADATA_URL="http://metadata.google.internal/computeMetadata/v1/project/attributes"
    INSTANCE_METADATA_URL="http://metadata.google.internal/computeMetadata/v1/instance"
    ZONE=$(GetMetadata "$INSTANCE_METADATA_URL/zone" | cut -d '/' -f 4)
    INSTANCE_NAME=$(hostname)
    
    # We keep track of the state to make sure failure and recovery is triggered only once.
    STATE="healthy"
    while true; do
      if [[ "$ZONE" = "$(GetMetadata $PROJECT_METADATA_URL/failed_zone)" ]] && \
         [[ "$INSTANCE_NAME" = *"$(GetMetadata $PROJECT_METADATA_URL/failed_instance_names)"* ]]; then
        if [[ "$STATE" = "healthy" ]]; then
          STATE="failure"
          # Do something to simulate failure here.
          echo "STARTING A FAILURE"
          /etc/init.d/apache2 stop
        fi
      else
        if [[ "$STATE" = "failure" ]] ; then
          STATE="healthy"
          # Do something to recover here.
          echo "RECOVERING FROM FAILURE"
          /etc/init.d/apache2 start
        fi
      fi
      sleep 5
    done
    
    

  2. Simulate a zone failure by setting these two project metadata fields:

    • failed_zone: Sets the zone where you want to simulate the outage, limiting the failure to a single zone.
    • failed_instance_names: Sets the instances to take offline by name, limiting the failure to instances whose names contain this string.

    You can set this metadata using the gcloud command-line tool. For example, the following command sets the zone outage to the europe-west1-b zone and affects instances that have names starting with instance-base-name:

    gcloud compute project-info add-metadata --metadata failed_zone='europe-west1-b',failed_instance_names='instance-base-name-'
    
  3. After you are done simulating the outage, recover from the failure by removing the metadata keys:

    gcloud compute project-info remove-metadata --keys failed_zone,failed_instance_names
    
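For step 1, one hypothetical way to deploy the script is as a startup script in the instance template; the file name zone_failure_sim.sh is a placeholder for wherever you saved the script locally:

gcloud compute instance-templates create example-template \
    --metadata-from-file startup-script=zone_failure_sim.sh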

Here are some ideas for failure scenarios you can run using this script:

  • Stop your application completely to see how the managed instance group responds.
  • Make your instances return as “unhealthy” on load balancing health checks.
  • Modify iptables to block some of the traffic to and from the instance.
  • Shut down the virtual machine instances. By default, the regional managed instance group recreates each instance shortly afterward, but each new incarnation shuts itself down as soon as the script runs, as long as the metadata values are set. This results in a crash loop.
