Create and manage regional disks


Regional Persistent Disk and Hyperdisk Balanced High Availability are storage options that let you implement high availability (HA) services in Compute Engine. Regional Persistent Disk and Hyperdisk Balanced High Availability synchronously replicate data between two zones in the same region, which keeps disk data available if one of the zones fails. The regional disk can be a boot disk or a non-boot disk.

(Preview): You can also allow different instances to concurrently access a Hyperdisk Balanced High Availability disk by setting the disk access mode. Regional disks can only be attached to instances in the same zones as the disk's replicas. For more information, see Share a disk between instances.

This document explains how to do the following tasks for regional disks:

  • Create a regional disk.
  • Attach a regional disk to an instance.
  • Change a zonal disk to a regional disk.
  • Create an instance with regional disks or a regional boot disk.
  • Attach a regional boot disk to an instance.
  • List, describe, and resize your regional disks.

Before you begin

  • Review the differences between the available disk storage options.
  • Review the basics of synchronous disk replication.
  • Read about regional disk failover.
  • If using multi-writer mode for Hyperdisk Balanced High Availability disks, review the requirements and limitations in Share disks between instances.
  • If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.

    Terraform

    To use the Terraform samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. To initialize the gcloud CLI, run the following command:

      gcloud init
    3. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

    For more information, see Set up authentication for a local development environment.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Required roles and permissions

To get the permissions that you need to create a regional disk, ask your administrator to grant you the following IAM role on the project:

  • Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)

For more information about granting roles, see Manage access to projects, folders, and organizations.

These predefined roles contain the permissions required to create a regional disk. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create a regional disk:

  • Create a disk: compute.disks.create
  • Attach a disk to an instance: compute.instances.attachDisk
  • Use a disk when attaching it to an instance: compute.disks.use
  • Create a snapshot of a disk: compute.disks.createSnapshot
  • View the details for a disk: compute.disks.get
  • Get a list of disks: compute.disks.list
  • Change the size of a disk: compute.disks.update

You might also be able to get these permissions with custom roles or other predefined roles.

Limitations

  • The Mexico, Osaka, and Montreal regions have three zones housed in only one or two physical data centers. Because data stored in these regions could be lost in the rare event of data center destruction, consider backing up business-critical data to a second region for increased data protection.
  • You can attach regional Persistent Disk only to VMs that use E2, N1, N2, and N2D machine types.
  • You can attach Hyperdisk Balanced High Availability only to supported machine types.
  • You cannot create a regional Persistent Disk from an image, or from a disk that was created from an image.
  • When using read-only mode, you can attach a regional balanced Persistent Disk to a maximum of 10 VM instances.
  • The minimum size of a regional standard Persistent Disk is 200 GiB.
  • You can only increase the size of a regional Persistent Disk or Hyperdisk Balanced High Availability volume; you can't decrease its size.
  • Regional Persistent Disk and Hyperdisk Balanced High Availability volumes have different performance characteristics than their corresponding zonal disks. For more information, see Block storage performance.
  • You can't use a Hyperdisk Balanced High Availability volume that's in multi-writer mode as a boot disk.
  • If you create a replicated disk by cloning a zonal disk, then the two zonal replicas aren't fully in sync at the time of creation. After creation, you can use the regional disk clone within 3 minutes, on average. However, you might need to wait for tens of minutes before the disk reaches a fully replicated state and the recovery point objective (RPO) is close to zero. Learn how to check if your replicated disk is fully replicated.

About using a regional disk as a boot disk for an instance

You can attach a regional Persistent Disk or Hyperdisk Balanced High Availability disk as a boot disk for stateful workloads that you provision ahead of a production workload. Regional boot disks are not intended for hot standbys, because a regional boot disk can't be attached to two compute instances at the same time.

You can only create regional Persistent Disk or Hyperdisk Balanced High Availability volumes from snapshots; it isn't possible to create a regional disk from an image.

To use a regional disk as the boot disk for an instance, use either of the following methods:

  1. Create a new instance with a regional boot disk.
  2. Create a regional boot disk, and then attach it to an instance:
    1. Create a regional disk from a snapshot of a boot disk.
    2. Attach a regional boot disk to an instance.

If you need to fail over a regional boot disk to a running standby instance in the replica zone, use the steps described in Attach a regional boot disk to an instance.

Create a regional disk

Create a regional Persistent Disk or Hyperdisk Balanced High Availability volume. The disk must be in the same region as the compute instance that you plan to attach it to.

If you create a Hyperdisk Balanced High Availability volume, you can also allow different instances to concurrently access the disk by setting the disk access mode. For more information, see Share a disk between instances.

For regional Persistent Disk, if you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or REST, the default disk type is pd-standard.

Console

  1. In the Google Cloud console, go to the Disks page.

    Go to Disks

  2. Select the required project.

  3. Click Create disk.

  4. Specify a Name for your disk.

  5. For the Location, choose Regional.

  6. Select the Region and Zone. You must select the same region when you create your instance.

  7. Select the Replica zone in the same region. Make a note of the zones that you select because you must attach the disk to your instance in one of those zones.

  8. Select the Disk source type.

  9. Under Disk settings, choose a Disk type and Size. You can also change the default Provisioned IOPS and Provisioned Throughput settings.

  10. Optional: For Hyperdisk Balanced High Availability volumes, you can enable attaching the disk to multiple instances by creating the disk in multi-writer mode (Preview). Under Access mode, select Multiple VMs read write.

  11. Click Create to finish creating your disk.

gcloud

Create a regional disk by using the compute disks create command.

If you need a regional SSD Persistent Disk for additional throughput or IOPS, include the --type flag and specify pd-ssd.

gcloud compute disks create DISK_NAME \
   --size=DISK_SIZE \
   --type=DISK_TYPE \
   --region=REGION \
   --replica-zones=ZONE1,ZONE2 \
   --access-mode=DISK_ACCESS_MODE

Replace the following:

  • DISK_NAME: the name of the new disk
  • DISK_SIZE: the size, in GiB, of the new disk
  • DISK_TYPE: For regional Persistent Disk, this is the type of the regional disk. The default value is pd-standard. For Hyperdisk, specify the value hyperdisk-balanced-high-availability.
  • REGION: the region for the regional disk to reside in, for example: europe-west1
  • ZONE1,ZONE2: the zones within the region where the two disk replicas are located, for example: europe-west1-b,europe-west1-c
  • DISK_ACCESS_MODE: Optional: How instances can access the data on a Hyperdisk Balanced High Availability disk (Preview). Supported values are:

    • READ_WRITE_SINGLE, for read-write access from one instance. This is the default.
    • READ_WRITE_MANY, for read-write access from multiple instances.

    You can set the access mode only for Hyperdisk Balanced High Availability disks.
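
For example, the following command is a minimal sketch that creates a 100 GiB Hyperdisk Balanced High Availability volume; the disk name, region, and zones are placeholders:

gcloud compute disks create my-ha-disk \
   --size=100GB \
   --type=hyperdisk-balanced-high-availability \
   --region=europe-west1 \
   --replica-zones=europe-west1-b,europe-west1-c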

Terraform

To create a regional Persistent Disk or Hyperdisk Balanced High Availability volume, you can use the google_compute_region_disk resource.

resource "google_compute_region_disk" "regiondisk" {
  name                      = "region-disk-name"
  snapshot                  = google_compute_snapshot.snapdisk.id
  type                      = "pd-ssd"
  region                    = "us-central1"
  physical_block_size_bytes = 4096
  size                      = 11

  replica_zones = ["us-central1-a", "us-central1-f"]
}

REST

To create a regional Persistent Disk or Hyperdisk Balanced High Availability volume, construct a POST request to the compute.regionDisks.insert method.

To create a blank disk, don't specify a snapshot source.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/disks
{
  "name": "DISK_NAME",
  "region": "projects/PROJECT_ID/regions/REGION",
  "replicaZones": [
    "projects/PROJECT_ID/zones/ZONE1",
    "projects/PROJECT_ID/zones/ZONE2"
  ],
  "sizeGb": "DISK_SIZE",
  "type": "projects/PROJECT_ID/regions/REGION/diskTypes/DISK_TYPE",
  "accessMode": "DISK_ACCESS_MODE"
}

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region for the regional disk to reside in, for example: europe-west1
  • DISK_NAME: the name of the new disk
  • ZONE1,ZONE2: the zones where replicas of the new disk should be located
  • DISK_SIZE: the size, in GiB, of the new disk
  • DISK_TYPE: For regional Persistent Disk, this is the type of Persistent Disk. For Hyperdisk, specify the value hyperdisk-balanced-high-availability.
  • DISK_ACCESS_MODE: Optional: how instances can access the data on the Hyperdisk Balanced High Availability disk (Preview). Supported values are:

    • READ_WRITE_SINGLE, for read-write access from one instance. This is the default.
    • READ_WRITE_MANY, for read-write access from multiple instances.

    You can set the access mode only for Hyperdisk Balanced High Availability disks.

Attach a regional disk to your instance

After you create a regional Persistent Disk or Hyperdisk Balanced High Availability volume that isn't a boot disk, you can attach it to an instance. The instance must be in the same region as the disk.

To attach a regional boot disk to an instance, see Attach a regional boot disk to an instance.

To attach a Hyperdisk Balanced High Availability disk to multiple instances, repeat the procedure in this section for each instance. You can attach Hyperdisk Balanced High Availability disks only in read-write mode.

Console

  1. To attach a disk to an instance, go to the VM instances page.

    Go to VM instances

  2. In the Name column, click the name of the instance.

  3. Click Edit.

  4. Click +Attach existing disk.

  5. Choose the previously created regional disk to add to your instance.

  6. If you see a warning that indicates the selected disk is already attached to another instance, select the Force-attach disk box to force-attach the disk to the instance that you are editing.

    Review the use cases for force-attaching regional disks at Regional disk failover.

  7. Optional: If attaching a Hyperdisk Balanced High Availability disk to multiple instances, for Disk attachment mode, select Read/write.

  8. Click Save.

  9. On the Edit VM page, click Save.

gcloud

To attach a regional disk to a running or stopped instance, use the compute instances attach-disk command with the --disk-scope flag set to regional.

If you're attaching a Hyperdisk Balanced High Availability disk in multi-writer mode to multiple instances, the only supported attachment mode is rw, which is the default access mode. You don't need to include the --mode flag.

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk=DISK_NAME \
    --disk-scope=regional \
    --device-name=DEVICE_NAME

Replace the following:

  • INSTANCE_NAME: the name of the instance to which you're adding the regional disk
  • DISK_NAME: the name of the new disk that you're attaching to the instance
  • DEVICE_NAME: Optional: a name that the guest OS uses to create a symlink, which helps identify the disk at the OS level.
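
For example, the following sketch attaches a regional disk named my-ha-disk to an instance named my-instance; both names are placeholders:

gcloud compute instances attach-disk my-instance \
    --disk=my-ha-disk \
    --disk-scope=regional \
    --device-name=my-ha-disk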

Terraform

To attach a regional Persistent Disk or Hyperdisk Balanced High Availability volume to an instance, you can use the google_compute_attached_disk resource.

resource "google_compute_instance" "test_node" {
  name         = "test-node"
  machine_type = "f1-micro"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  attached_disk {
    source      = google_compute_disk.default.id
    device_name = google_compute_disk.default.name
  }

  network_interface {
    network = "default"
    access_config {
      # Ephemeral IP
    }
  }

  # Ignore changes for persistent disk attachments
  lifecycle {
    ignore_changes = [attached_disk]
  }


}

REST

To attach a regional disk to a running or stopped instance, construct a POST request to the compute.instances.attachDisk method and include the URL to the regional disk that you created.

If you're attaching a Hyperdisk Balanced High Availability disk in multi-writer mode to multiple instances, the only supported attachment mode is READ_WRITE, which is the default access mode. You don't need to include the mode property.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
{
  "source": "/projects/PROJECT_ID/regions/REGION/disks/DISK_NAME",
  "deviceName": DEVICE_NAME
}

Replace the following:

  • PROJECT_ID: your project ID
  • ZONE: the location of your instance
  • INSTANCE_NAME: the name of the instance to which you're adding the new regional disk
  • REGION: the region where the regional disk is located
  • DISK_NAME: the name of the regional disk (as shown in the Google Cloud console)
  • DEVICE_NAME: Optional: a name that the guest OS uses to create a symlink, which helps identify the disk at the OS level.

For non-boot disks, after you create and attach a blank regional disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.
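
For example, on a Linux instance you might format and mount the disk as follows. This is a minimal sketch that assumes you attached the disk with the device name my-ha-disk and want an ext4 file system:

# Confirm the device symlink that the guest OS created from the device name.
ls -l /dev/disk/by-id/google-*

# Format the blank disk with an ext4 file system. This erases any existing data.
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-my-ha-disk

# Create a mount point and mount the disk.
sudo mkdir -p /mnt/disks/my-ha-disk
sudo mount -o discard,defaults /dev/disk/by-id/google-my-ha-disk /mnt/disks/my-ha-disk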

Change a zonal disk to a regional disk

To convert an existing zonal Persistent Disk to a regional Persistent Disk, create a new disk by cloning the existing zonal disk. For more information, see Creating a regional disk clone from a zonal disk.

To convert a Hyperdisk to a regional disk, create a new Hyperdisk Balanced High Availability disk from a snapshot of the existing disk, as described in Change a zonal disk to a Hyperdisk Balanced High Availability disk.

Create a new instance with regional disks

When creating an instance, you can optionally include regional Persistent Disk or Hyperdisk Balanced High Availability volumes as additional disks.

To create and attach a regional Persistent Disk or Hyperdisk Balanced High Availability volume to an instance during instance creation, see either of the following:

Create a new instance with a regional boot disk

When setting up a highly available compute instance, you can create the primary instance with a regional boot disk. If a zonal outage occurs, this lets you restart the instance in the secondary zone instead of creating a new instance.

In a high availability setup, where the boot device is a regional disk, Google recommends that you don't pre-create and start the standby instance. Instead, at the failover stage, attach the existing regional disk when you create the standby instance by using the forceAttach option.

To create an instance with a boot disk that is a regional disk, use either of the following methods:

gcloud

Use the gcloud compute instances create command to create an instance, and the --create-disk flag to specify the regional disk.

gcloud compute instances create PRIMARY_INSTANCE_NAME  \
 --zone=ZONE  \
 --create-disk=^:^name=REGIONAL_DISK_NAME:boot=true:type=DISK_TYPE:source-snapshot=SNAPSHOT_NAME:replica-zones=ZONE,REMOTE_ZONE

When specifying the disk parameters, the characters ^:^ specify that the separation character between parameters is a colon (:). This lets you use a comma (,) when specifying the replica-zones parameter.

Replace the following:

  • PRIMARY_INSTANCE_NAME: a name for the instance
  • ZONE: the name of the zone where you want to create the instance
  • REGIONAL_DISK_NAME: a name for the regional disk
  • DISK_TYPE: the type of disk to create, for example, hyperdisk-balanced-high-availability. If using a Persistent Disk, then you must also specify scope=regional within the --create-disk flag to create a Regional Persistent Disk.
  • SNAPSHOT_NAME: the name of the snapshot you created for the boot disk
  • REMOTE_ZONE: the alternate zone for the regional disk
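
For example, the following sketch creates an instance with a regional Hyperdisk Balanced High Availability boot disk restored from a snapshot; all names and zones are placeholders:

gcloud compute instances create primary-instance \
 --zone=europe-west1-b \
 --create-disk=^:^name=regional-boot-disk:boot=true:type=hyperdisk-balanced-high-availability:source-snapshot=boot-snapshot:replica-zones=europe-west1-b,europe-west1-c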

REST

Construct a POST request to the instances.insert method and specify the boot and replicaZones properties for the boot disk. For example:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances
{
 "name": "INSTANCE_NAME",
 "disks": [{
    "boot": true,
    "initializeParams": {
       "sourceSnapshot": "global/snapshots/BOOT_SNAPSHOT_NAME",
       "replicaZones": [
           "projects/PROJECT_ID/zones/ZONE",
           "projects/PROJECT_ID/zones/REMOTE_ZONE"
       ],
       "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/DISK_TYPE"
    }
  }],
 "networkInterfaces": [
    {
      "network": "global/networks/default"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your project ID
  • ZONE: the name of the zone where you want to create the instance
  • INSTANCE_NAME: a name for the instance
  • BOOT_SNAPSHOT_NAME: the name of the boot disk snapshot
  • REMOTE_ZONE: the remote zone for the regional disk
  • DISK_TYPE: the type of disk to create, for example, hyperdisk-balanced-high-availability or pd-balanced

Attach a regional boot disk to an instance

Use the following steps to:

  • Replace the boot disk of an existing instance with a regional boot disk.
  • Fail over a regional boot disk to a hot standby instance that is running in the backup zone. You do this by attaching the regional disk to the instance as the boot disk.

These steps assume that the regional disk and instance already exist.

gcloud

  1. Stop the instance.
    gcloud compute instances stop INSTANCE_NAME  --zone=ZONE
    
  2. Detach the current boot disk from the instance.
    gcloud compute instances detach-disk INSTANCE_NAME \
     --zone=ZONE --disk=CURRENT_BOOT_DEVICE_NAME
    
  3. Attach the regional boot disk to the instance.
    gcloud compute instances attach-disk INSTANCE_NAME \
     --zone=ZONE \
     --disk=REGIONAL_DISK_NAME \
     --disk-scope=regional \
     --boot
    
  4. Restart the instance.

    gcloud compute instances start INSTANCE_NAME --zone=ZONE
    

Replace the variables in the previous commands with the following:

  • INSTANCE_NAME: the name of the instance to which you want to attach the regional boot disk
  • ZONE: the zone in which the instance is located
  • CURRENT_BOOT_DEVICE_NAME: the name of the boot disk being used by the instance. This is usually the same as the name of the instance.
  • REGIONAL_DISK_NAME: the name of the regional disk that you want to attach to the instance as a boot disk

Optional: If you can't successfully detach the regional boot disk from the primary instance due to an outage or failure, include the --force-attach flag in the attach-disk command in step 3.
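
For example, a failover attach with force-attach might look like the following sketch; the instance name, disk name, and zone are placeholders:

gcloud compute instances attach-disk standby-instance \
    --zone=europe-west1-c \
    --disk=regional-boot-disk \
    --disk-scope=regional \
    --boot \
    --force-attach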

REST

  1. Stop the instance.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/stop
    
  2. Detach the current boot disk from the instance.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/detachDisk?deviceName=CURRENT_BOOT_DEVICE_NAME
    
  3. Attach the regional boot disk to the instance.

    Construct a POST request to the compute.instances.attachDisk method, and include the URL to the regional boot disk:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    {
    "source": "compute/v1/projects/PROJECT_ID/regions/REGION/disks/REGIONAL_DISK_NAME",
    "boot": true
    }
  4. Restart the instance.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/start
    

Replace the variables in the previous commands with the following:

  • PROJECT_ID: your project ID
  • INSTANCE_NAME: the name of the instance to which you want to attach the regional disk
  • ZONE: the zone in which the instance is located
  • CURRENT_BOOT_DEVICE_NAME: the name of the boot disk being used by the instance. This is usually the same as the name of the instance.
  • REGION: the region in which the regional disk is located
  • REGIONAL_DISK_NAME: the name of the regional disk that you want to attach to the instance as a boot disk

Optional: If you can't successfully detach the regional boot disk from the instance that it was originally attached to because of an outage or failure, append the forceAttach=true query parameter to the attachDisk request URL.

List and describe your regional disks

You can view a list of all your configured regional disks, and information about their properties, including the following:

  • Disk ID
  • Disk name
  • Size
  • Disk type
  • Region
  • Zonal replicas

To view detailed information about your regional disks, use the Google Cloud console, the gcloud CLI, or REST.
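
For example, the following gcloud CLI commands list all disks in a project and then show the details of one regional disk; the disk name and region are placeholders:

gcloud compute disks list

gcloud compute disks describe my-ha-disk \
    --region=europe-west1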

Resize a regional disk

If instances with regional disks require additional storage space, you can resize the disks. You can resize disks at any time, regardless of whether the disk is attached to a running instance. If you need to separate your data into unique volumes, create multiple secondary disks for the instance. For Hyperdisk Balanced High Availability, you can also increase the IOPS and throughput limits for the disk.

The command for resizing a regional disk is very similar to that for resizing a zonal disk. However, you must specify a region instead of a zone for the disk location.

You can only increase, and not decrease, the size of a disk. To decrease the disk size, you must create a new disk with a smaller size. Until you delete the original, larger disk, you are charged for both disks.
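
For example, the following command is a minimal sketch that increases a regional disk's size to 500 GiB; the disk name and region are placeholders:

gcloud compute disks resize my-ha-disk \
    --region=europe-west1 \
    --size=500GB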

For instructions on how to modify a regional disk, see the following:

What's next