Instance Groups

You can create and manage groups of virtual machine (VM) instances so that you don't have to individually control each instance in your project. Compute Engine offers two different types of instance groups: managed and unmanaged instance groups.

Managed instance groups

A managed instance group uses an instance template to create a group of identical instances. You control a managed instance group as a single entity. If you want to make changes to instances that are part of a managed instance group, you apply the change to the whole instance group. Because managed instance groups contain identical instances, they offer the following features.

  • When your applications require additional compute resources, managed instance groups can automatically scale the number of instances in the group.
  • Managed instance groups work with load balancing services to distribute traffic to all of the instances in the group.
  • If an instance in the group stops, crashes, or is deleted by an action other than an instance group command, the managed instance group automatically recreates the instance so it can resume its processing tasks. The recreated instance uses the same name and the same instance template as the previous instance, even if the group references a different instance template.
  • Managed instance groups can automatically identify and recreate unhealthy instances in a group to ensure that all of the instances are running optimally.
  • The managed instance group updater allows you to easily deploy new versions of software to instances in your managed instance groups, while controlling the speed and scope of deployment as well as the level of disruption to your service.
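
For example, creating a managed instance group from an instance template with the gcloud command-line tool might look like the following minimal sketch. The template name, group name, zone, and machine type are placeholders, not values from this page.

    # Create an instance template that defines the identical instances in the group.
    gcloud compute instance-templates create example-template \
        --machine-type n1-standard-1

    # Create a zonal managed instance group of three instances based on that template.
    gcloud compute instance-groups managed create example-group \
        --template example-template \
        --size 3 \
        --zone us-central1-a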

Types of managed instance groups

You can create two types of managed instance groups: regional managed instance groups, which spread instances across multiple zones within a region, and zonal managed instance groups, which keep all instances in a single zone.

Regional managed instance groups are generally recommended over zonal managed instance groups because they allow you to spread application load across multiple zones, rather than confining your application to a single zone or having to manage multiple instance groups across different zones. This replication protects against zonal failures and unforeseen scenarios where an entire group of instances in a single zone malfunctions. If that happens, your application can continue serving traffic from instances running in another zone in the same region.

Choose zonal managed instance groups if you want to avoid the slightly higher latency incurred by cross-zone communication or if you need fine-grained control of the sizes of your groups in each zone.
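
As a sketch, a regional managed instance group can be created by specifying a region instead of a zone; the names below are the same placeholders used in the earlier example.

    # Create a regional managed instance group; its instances are spread across zones in the region.
    gcloud compute instance-groups managed create example-regional-group \
        --template example-template \
        --size 3 \
        --region us-central1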

Managed instance groups and the network

By default, instances in the group are placed in the default network and randomly assigned IP addresses from the regional range. Alternatively, you can restrict the IP range of the group by creating a custom mode VPC network and a subnet that uses a smaller IP range, and then specifying this subnet in the instance template. This can simplify the creation of firewall rules.
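
For illustration, restricting the group's IP range might look like the following sketch, which creates a custom mode network and a small subnet and then references that subnet from a new instance template. All names and the IP range are placeholder values.

    # Create a custom mode VPC network and a subnet with a small IP range.
    gcloud compute networks create example-network --subnet-mode custom
    gcloud compute networks subnets create example-subnet \
        --network example-network \
        --range 10.1.2.0/24 \
        --region us-central1

    # Reference the subnet in the instance template so group instances receive addresses from that range.
    gcloud compute instance-templates create example-subnet-template \
        --subnet example-subnet \
        --region us-central1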

After you create a managed instance group, new instances start in the group as soon as the system can provision them. This process can take a significant amount of time depending on the number of instances in the group. You can verify the status of the instances in your managed instance group to confirm that they have been created.
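
For example, assuming the placeholder group from the earlier sketch:

    # List the instances in the group along with their current status and current action.
    gcloud compute instance-groups managed list-instances example-group \
        --zone us-central1-a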

Managed instance groups and autoscaling

Managed instance groups support autoscaling so you can dynamically add or remove instances from a managed instance group in response to increases or decreases in load. You enable autoscaling and choose an autoscaling policy that determines how you want to scale. Applicable autoscaling policies include scaling based on CPU utilization, load balancing capacity, Stackdriver monitoring metrics, or a queue-based workload such as Google Cloud Pub/Sub.

Because autoscaling requires adding and removing instances from a group, you can only use autoscaling with managed instance groups so the autoscaler can maintain identical instances. Autoscaling does not work on unmanaged instance groups, which can contain heterogeneous instances.
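
As a hedged sketch, enabling CPU-based autoscaling on the placeholder group might look like this; the utilization target, replica limits, and cool-down period are illustrative values only.

    # Scale between 1 and 10 instances, targeting 60% average CPU utilization.
    gcloud compute instance-groups managed set-autoscaling example-group \
        --zone us-central1-a \
        --min-num-replicas 1 \
        --max-num-replicas 10 \
        --target-cpu-utilization 0.6 \
        --cool-down-period 90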

For more information, read Autoscaling Groups of Instances.

Managed instance groups and autohealing

Managed instance groups maintain high availability of your applications by proactively keeping your instances available, that is, in the RUNNING state. A managed instance group automatically recreates an instance that is not RUNNING. However, relying on instance state alone may not be sufficient: you may also want to recreate instances when an application freezes, crashes, or runs out of memory.

Application-based autohealing improves application availability by relying on a health checking signal that detects application-specific issues such as freezing, crashing, or overloading. If a health check determines that an application has failed on an instance, the group automatically recreates that instance.

Health checking

The health checks used to monitor managed instance groups are similar to the health checks used for load balancing, with some differences in behavior. Load balancing health checks help direct traffic away from non-responsive instances and toward healthy instances; these health checks do not cause Compute Engine to recreate instances. On the other hand, managed instance group health checks proactively signal the group to delete and recreate instances that become UNHEALTHY.

For the majority of scenarios, use separate health checks for load balancing and for autohealing. Health checking for load balancing can and should be more aggressive because these health checks determine whether an instance receives user traffic. Because customers might rely on your services, you want to catch non-responsive instances quickly so you can redirect traffic if necessary. In contrast, health checking for autohealing causes Compute Engine to proactively replace failing instances, so this health check should be more conservative than a load balancing health check.
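
For example, a relatively conservative HTTP health check intended for autohealing might be created as follows; the name, port, and thresholds are illustrative assumptions rather than recommended values.

    # Create an HTTP health check that tolerates brief unresponsiveness before marking an instance UNHEALTHY.
    gcloud compute health-checks create http example-autohealing-check \
        --port 80 \
        --check-interval 30s \
        --timeout 10s \
        --unhealthy-threshold 3 \
        --healthy-threshold 2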

Autohealing behavior

Autohealing recreates unhealthy instances using the original instance template that was used to create the VM instance (not necessarily the current instance template in the managed instance group). For example, if a VM instance was created using instance-template-a and then you update the managed instance group to use instance-template-b in OPPORTUNISTIC mode, autohealing will still use instance-template-a to recreate the instance. This is because autohealing recreations are not user-initiated so Compute Engine does not want to assume that the VM instance should use the new template. If you want to apply a new template, see Changing the instance template for a managed instance group.

The number of instances that are autohealed concurrently is smaller than the size of the managed instance group. This ensures that the group keeps a subset of instances running in situations where the autohealing policy does not fit the workload, firewall rules are misconfigured, or network connectivity or infrastructure issues cause healthy instances to be falsely identified as unhealthy. However, if a zonal managed instance group has only one instance, or a regional managed instance group has only one instance per zone, autohealing recreates these instances when they become unhealthy.

Autohealing will not recreate an UNHEALTHY instance during that instance's initialization period, as specified by the autoHealingPolicies[].initialDelaySec property. This setting prevents autohealing from checking the instance, and potentially recreating it prematurely, while the instance is still starting up. The initial delay timer starts when the instance has a currentAction of VERIFYING.
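
A sketch of attaching the health check from the previous example to the placeholder group with a five-minute initial delay follows; depending on your gcloud release, this command may be available only in the beta component or through the update command instead.

    # Configure autohealing with an initial delay so instances can finish starting up before being checked.
    gcloud compute instance-groups managed set-autohealing example-group \
        --zone us-central1-a \
        --health-check example-autohealing-check \
        --initial-delay 300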

Autohealing and disks

When recreating an instance based on its template, the autohealer handles different types of disks differently. Some disk configurations can cause the autohealer to fail when attempting to recreate a managed instance.

The behavior during an autohealing operation depends on the type of disk and its autodelete setting:

  • New persistent disk, autodelete true: The disk is recreated as specified in the instance's template. Any data that was written to that disk is lost when the disk and its instance are recreated.
  • New persistent disk, autodelete false: The old disk is detached but stays available. However, VM instance recreation fails because Compute Engine cannot recreate an existing disk.
  • Existing persistent disk, autodelete true: The old disk is deleted. VM instance recreation fails because Compute Engine cannot reattach a deleted disk to the instance.
  • Existing persistent disk, autodelete false: The old disk is reattached as specified in the instance's template, and the data on the disk is preserved. However, for existing read-write disks, a managed instance group can have at most one VM because a single persistent disk cannot be attached to multiple instances in read-write mode.
  • New local SSD, autodelete not applicable: The disk is recreated as specified in the instance's template. The data on a local SSD is lost when the instance is recreated or deleted.

The autohealer does not reattach disks that are not specified in the instance's template, such as disks that you attached to a VM manually after the VM had been created.

To preserve important data that was written to disk, take precautions such as the following:

  • Take regular persistent disk snapshots (an example command follows this list).

  • Export data to another source, such as Cloud Storage.
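
For example, a persistent disk snapshot can be taken with a command along these lines; the disk and snapshot names are placeholders.

    # Snapshot a persistent disk so its data survives instance recreation.
    gcloud compute disks snapshot example-data-disk \
        --zone us-central1-a \
        --snapshot-names example-data-snapshot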

If your instances have important settings that you want to preserve, Google also recommends using a custom image in your instance template that contains those settings. When an instance is recreated, the managed instance group then uses the custom image that you specified.

Updating managed instance groups

You can easily deploy new versions of software to instances in your managed instance groups. The rollout of an update happens automatically according to your specifications: you can control the speed and scope of the update in order to limit disruptions to your application. You can optionally perform a partial rollout, which allows for canary testing. See Updating managed instance groups.
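
As an illustrative sketch, starting a rolling update to the instance-template-b template mentioned earlier might look like the following; depending on your gcloud version, the rolling-action command group may require the beta component.

    # Start a rolling update that replaces instances with new ones based on instance-template-b.
    gcloud compute instance-groups managed rolling-action start-update example-group \
        --zone us-central1-a \
        --version template=instance-template-b \
        --max-unavailable 1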

Unmanaged instance groups

Unmanaged instance groups are groups of dissimilar instances that you can arbitrarily add to and remove from the group. Unmanaged instance groups do not offer autoscaling, rolling update support, or the use of instance templates, so Google recommends creating managed instance groups whenever possible. Use unmanaged instance groups only if you need to apply load balancing to your pre-existing configurations or to groups of dissimilar instances.

If you must create a group of dissimilar instances that do not follow an instance template, see Unmanaged Instance Groups.
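
For illustration, a minimal sketch of creating an unmanaged instance group and adding existing instances to it; all names are placeholders.

    # Create an empty unmanaged instance group.
    gcloud compute instance-groups unmanaged create example-unmanaged-group \
        --zone us-central1-a

    # Add existing, possibly dissimilar, instances to the group.
    gcloud compute instance-groups unmanaged add-instances example-unmanaged-group \
        --zone us-central1-a \
        --instances instance-1,instance-2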

Instance groups and load balancing

All of the load balancing configurations available on Google Cloud Platform require that you specify a backend - an instance group, a target pool, or a network endpoint group - that can serve traffic distributed by the load balancer.

For HTTP(S), Internal, TCP Proxy, and SSL Proxy Load Balancing, you can assign an instance group to the backend service. A backend service is a centralized service for managing backends, which in turn manages instances that handle user requests for your load balancer. Each backend service contains one or more backends, and each backend can contain one instance group. The backend service knows which instances it can use, how much traffic they can handle, and how much traffic they are currently handling. You can assign either a managed or unmanaged instance group to a backend service.
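
As a hedged sketch, adding an instance group as a backend to an existing backend service for HTTP(S) Load Balancing might look like the following; the backend service name, balancing mode, and utilization target are placeholder assumptions, and older gcloud versions may not require the --global flag.

    # Attach the instance group to a global backend service.
    gcloud compute backend-services add-backend example-backend-service \
        --global \
        --instance-group example-group \
        --instance-group-zone us-central1-a \
        --balancing-mode UTILIZATION \
        --max-utilization 0.8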

For Network Load Balancing, you must add individual VM instances to a target pool or assign one or more managed instance groups to a target pool, which causes Compute Engine to add all instances that are part of the instance group to the specified target pool.
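
For example, a managed instance group can be assigned to a target pool roughly as follows; the pool name and region are placeholders.

    # Create a target pool and assign the managed instance group to it.
    gcloud compute target-pools create example-pool --region us-central1
    gcloud compute instance-groups managed set-target-pools example-group \
        --zone us-central1-a \
        --target-pools example-pool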

For more information on different load balancing configurations, see the load balancing documentation.
