Create and manage synchronously replicated disks


Regional Persistent Disk and Hyperdisk Balanced High Availability are storage options that let you implement high availability (HA) services in Compute Engine. Regional Persistent Disk and Hyperdisk Balanced High Availability synchronously replicate data between two zones in the same region, so disk data remains available if one of the two zones fails. The synchronously replicated disk can be a boot disk or a non-boot disk.

This document explains how to do the following tasks for replicated disks:

  • Create a synchronously replicated disk.
  • Attach a replicated disk to a VM.
  • Change a zonal disk to a replicated disk.
  • Use a replicated disk as a VM boot disk.
  • List, describe, and resize replicated disks.

Before you begin

  • Review the differences between the available disk storage options.
  • Review the basics of synchronous disk replication.
  • Read about replicated disk failover.
  • If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.

    Terraform

    To use the Terraform samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. To initialize the gcloud CLI, run the following command:

      gcloud init
    3. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

    For more information, see Set up authentication for a local development environment.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Required roles and permissions

To get the permissions that you need to create a synchronously replicated disk, ask your administrator to grant you the following IAM role on the project:

  • Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)

For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the following permissions, which are required to create and manage a synchronously replicated disk:

  • Create a disk: compute.disks.create
  • Attach a disk to a VM: compute.instances.attachDisk on the VM, and compute.disks.use on the disk
  • Create a snapshot of a disk: compute.disks.createSnapshot
  • View the details for a disk: compute.disks.get
  • Get a list of disks: compute.disks.list
  • Change the size of a disk: compute.disks.update

You might also be able to get these permissions with custom roles or other predefined roles.

Limitations

  • The Mexico, Osaka, and Montreal regions each have three zones housed in only one or two physical data centers. Because data stored in these regions can be lost in the rare event that a data center is destroyed, consider backing up business-critical data to a second region for increased data protection.
  • You can attach regional Persistent Disk volumes only to VMs that use E2, N1, N2, or N2D machine types.
  • You can attach Hyperdisk Balanced High Availability only to supported machine types.
  • You cannot create a regional Persistent Disk from an image, or from a disk that was created from an image.
  • When using read-only mode, you can attach a regional balanced Persistent Disk to a maximum of 10 VM instances.
  • The minimum size of a regional standard Persistent Disk is 200 GiB.
  • You can only increase the size of a regional Persistent Disk or Hyperdisk Balanced High Availability volume; you can't decrease its size.
  • Regional Persistent Disk and Hyperdisk Balanced High Availability volumes have different performance characteristics than their corresponding zonal disks. For more information, see Block storage performance.
  • You can't use a Hyperdisk Balanced High Availability volume that's in multi-writer mode as a boot disk.
  • If you create a replicated disk by cloning a zonal disk, then the two zonal replicas aren't fully in sync at the time of creation. After creation, you can use the regional disk clone within 3 minutes, on average. However, you might need to wait for tens of minutes before the disk reaches a fully replicated state and the recovery point objective (RPO) is close to zero. Learn how to check if your replicated disk is fully replicated.

About using a replicated disk as a boot disk for a VM

You can attach a regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) disk as a boot disk for stateful workloads that you provision ahead of a production workload. Replicated boot disks are not intended for hot standbys, because a replicated boot disk can't be attached to two VMs simultaneously.

For boot disks, you can create regional Persistent Disk or Hyperdisk Balanced High Availability volumes only from snapshots; it isn't possible to create a replicated disk from an image.

To use a replicated disk as a VM boot disk, use either of the following methods:

  1. Create a new VM with a replicated boot disk.
  2. Create a replicated boot disk, and then attach it to a VM:
    1. Create a replicated disk from a snapshot of a boot disk.
    2. Attach a replicated boot disk to a VM.

If you need to fail over a replicated boot disk to a running standby VM in the replica zone, use the steps described in Attach a replicated boot disk to a VM.
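For example, here's a minimal sketch of the second method using the gcloud CLI. The disk, snapshot, and zone names are placeholders:

# Snapshot the existing zonal boot disk.
gcloud compute disks snapshot my-boot-disk \
    --zone=us-central1-a \
    --snapshot-names=my-boot-snapshot

# Create a replicated disk from the snapshot.
gcloud compute disks create my-replicated-boot-disk \
    --source-snapshot=my-boot-snapshot \
    --type=pd-balanced \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b

You can then attach the new disk as described in Attach a replicated boot disk to a VM.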

Create a synchronously replicated disk

Create a regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) volume. The disk must be in the same region as the VM that you plan to attach it to.

If you create a Hyperdisk Balanced High Availability volume, you can also allow different VMs to simultaneously access the disk by setting the disk access mode. For more information, see Share a disk between VMs.

For regional Persistent Disk, if you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or REST, the default disk type is pd-standard.

Console

  1. In the Google Cloud console, go to the Disks page.

    Go to Disks

  2. Select the required project.

  3. Click Create disk.

  4. Specify a Name for your disk.

  5. For the Location, choose Regional.

  6. Select the Region and Zone. You must select the same region when you create your VM.

  7. Select the Replica zone in the same region. Make a note of the zones that you select because you must attach the disk to your VM in one of those zones.

  8. Select the Disk source type.

  9. Select the Disk type and Size.

  10. Click Create to finish creating your disk.

gcloud

Create a synchronously replicated disk by using the compute disks create command.

If you need a regional SSD Persistent Disk for additional throughput or IOPS, include the --type flag and specify pd-ssd.

gcloud compute disks create DISK_NAME \
   --size=DISK_SIZE \
   --type=DISK_TYPE \
   --region=REGION \
   --replica-zones=ZONE1,ZONE2 \
   --access-mode=DISK_ACCESS_MODE

Replace the following:

  • DISK_NAME: the name of the new disk
  • DISK_SIZE: the size, in GiB, of the new disk
  • DISK_TYPE: For regional Persistent Disk, this is the type of the replicated disk. The default value is pd-standard. For Hyperdisk, specify the value hyperdisk-balanced-high-availability.
  • REGION: the region for the replicated disk to reside in, for example: europe-west1
  • ZONE1,ZONE2: the zones within the region where the two disk replicas are located, for example: europe-west1-b,europe-west1-c
  • DISK_ACCESS_MODE: Optional. How VMs can access the data on the disk. Supported values are:

    • READ_WRITE_SINGLE, for read-write access from one VM. This is the default.
    • READ_WRITE_MANY, for read-write access from multiple VMs.

    You can set the access mode only for Hyperdisk Balanced High Availability disks.
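For example, the following command creates a 100 GiB Hyperdisk Balanced High Availability volume replicated across two zones in europe-west1. The disk name is a placeholder:

gcloud compute disks create my-ha-disk \
    --size=100GB \
    --type=hyperdisk-balanced-high-availability \
    --region=europe-west1 \
    --replica-zones=europe-west1-b,europe-west1-c \
    --access-mode=READ_WRITE_SINGLE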

Terraform

To create a regional Persistent Disk or Hyperdisk Balanced High Availability volume, you can use the google_compute_region_disk resource.

resource "google_compute_region_disk" "regiondisk" {
  name                      = "region-disk-name"
  snapshot                  = google_compute_snapshot.snapdisk.id
  type                      = "pd-ssd"
  region                    = "us-central1"
  physical_block_size_bytes = 4096
  size                      = 11

  replica_zones = ["us-central1-a", "us-central1-f"]
}

REST

To create a regional Persistent Disk or Hyperdisk Balanced High Availability volume, construct a POST request to the compute.regionDisks.insert method.

To create a blank disk, don't specify a snapshot source.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/disks
{
  "name": "DISK_NAME",
  "region": "projects/PROJECT_ID/regions/REGION",
  "replicaZones": [
    "projects/PROJECT_ID/zones/ZONE1",
    "projects/PROJECT_ID/zones/ZONE2"
  ],
  "sizeGb": "DISK_SIZE",
  "type": "projects/PROJECT_ID/regions/REGION/diskTypes/DISK_TYPE",
  "accessMode": "DISK_ACCESS_MODE"
}

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region for the replicated disk to reside in, for example: europe-west1
  • DISK_NAME: the name of the new disk
  • ZONE1,ZONE2: the zones where replicas of the new disk should be located
  • DISK_SIZE: the size, in GiB, of the new disk
  • DISK_TYPE: For regional Persistent Disk, this is the type of Persistent Disk. For Hyperdisk, specify the value hyperdisk-balanced-high-availability.
  • DISK_ACCESS_MODE: how VMs can access the data on the disk. Supported values are:

    • READ_WRITE_SINGLE, for read-write access from one VM. This is the default.
    • READ_WRITE_MANY, for read-write access from multiple VMs.

    You can set the access mode only for Hyperdisk Balanced High Availability disks.
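For example, you can send the request with curl, using the gcloud CLI to obtain an access token. The project, disk, and zone values are placeholders:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
      "name": "my-ha-disk",
      "region": "projects/my-project/regions/europe-west1",
      "replicaZones": [
        "projects/my-project/zones/europe-west1-b",
        "projects/my-project/zones/europe-west1-c"
      ],
      "sizeGb": "100",
      "type": "projects/my-project/regions/europe-west1/diskTypes/hyperdisk-balanced-high-availability"
    }' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/disks"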

Attach a replicated disk to your VM

For disks that are not boot disks, after you create a regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) volume, you can attach it to a VM. The VM must be in the same region as the disk.

To attach a replicated boot disk to a VM, see Attach a replicated boot disk to a VM.

To attach a disk to multiple VMs, repeat the procedure in this section for each VM.

Console

  1. To attach a disk to a VM, go to the VM instances page.

    Go to VM instances

  2. In the Name column, click the name of the VM.

  3. Click Edit.

  4. Click +Attach existing disk.

  5. Choose the previously created replicated disk to add to your VM.

  6. If you see a warning that indicates the selected disk is already attached to another instance, select the Force-attach disk box to force-attach the disk to the VM that you are editing.

    Review the use cases for force-attaching replicated disks at Replicated disk failover.

  7. Click Save.

  8. On the Edit VM page, click Save.

gcloud

To attach a replicated disk to a running or stopped VM, use the compute instances attach-disk command with the --disk-scope flag set to regional.

gcloud compute instances attach-disk VM_NAME \
    --disk=DISK_NAME \
    --disk-scope=regional

Replace the following:

  • VM_NAME: the name of the VM to which you're adding the replicated disk
  • DISK_NAME: the name of the new disk that you're attaching to the VM
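For example, the following command attaches the disk my-ha-disk to a VM named web-server; both names, and the zone, are placeholders:

gcloud compute instances attach-disk web-server \
    --zone=europe-west1-b \
    --disk=my-ha-disk \
    --disk-scope=regional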

Terraform

To attach a regional Persistent Disk or Hyperdisk Balanced High Availability volume to a VM, you can use the google_compute_attached_disk resource.

resource "google_compute_instance" "test_node" {
  name         = "test-node"
  machine_type = "f1-micro"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  attached_disk {
    source      = google_compute_disk.default.id
    device_name = google_compute_disk.default.name
  }

  network_interface {
    network = "default"
    access_config {
      # Ephemeral IP
    }
  }

  # Ignore changes for persistent disk attachments
  lifecycle {
    ignore_changes = [attached_disk]
  }


}

REST

To attach a replicated disk to a running or stopped VM, construct a POST request to the compute.instances.attachDisk method and include the URL to the replicated disk that you created.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/attachDisk
{
  "source": "/projects/PROJECT_ID/regions/REGION/disks/DISK_NAME"
}

Replace the following:

  • PROJECT_ID: your project ID
  • ZONE: the location of your VM
  • VM_NAME: the name of the VM to which you're adding the new replicated disk
  • REGION: the region where the replicated disk is located
  • DISK_NAME: the name of the replicated disk
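For example, with curl, using placeholder project, VM, zone, and disk values:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"source": "/projects/my-project/regions/europe-west1/disks/my-ha-disk"}' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-b/instances/web-server/attachDisk"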

For non-boot disks, after you create and attach a blank replicated disk to a VM, you must format and mount the disk so that the operating system can use the available storage space.
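For example, on a Linux VM you might format the disk with ext4 and mount it. This sketch assumes the disk's device name matches the disk name my-ha-disk; check the symlinks under /dev/disk/by-id/ on your VM for the actual device name:

# Format the disk with ext4. This erases any existing data on the disk.
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard \
    /dev/disk/by-id/google-my-ha-disk

# Create a mount point and mount the disk.
sudo mkdir -p /mnt/disks/my-ha-disk
sudo mount -o discard,defaults /dev/disk/by-id/google-my-ha-disk /mnt/disks/my-ha-disk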

Change a zonal disk to a replicated disk

To convert an existing zonal Persistent Disk to a regional Persistent Disk, create a new disk by cloning the existing zonal disk. For more information, see Creating a regional disk clone from a zonal disk.
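For example, here's a sketch of cloning a zonal disk into a regional disk with the gcloud CLI. The disk names are placeholders, and one of the replica zones must be the zone of the source disk:

gcloud compute disks create my-regional-disk \
    --source-disk=my-zonal-disk \
    --source-disk-zone=us-central1-a \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b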

To convert a Hyperdisk to a synchronously replicated disk, create a new Hyperdisk Balanced High Availability disk from a snapshot of the existing disk, as described in Change the disk type.

Create a new VM with replicated disks

When creating a VM, you can optionally include regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) volumes as additional disks.

To create and attach a regional Persistent Disk or Hyperdisk Balanced High Availability volume to a VM during VM creation, see either of the following:

Create a new VM with a replicated boot disk

When setting up a highly available VM instance, you can create the primary VM with a replicated boot disk. If a zonal outage occurs, you can then restart the VM in the secondary zone instead of creating a new VM.

In a high availability setup where the boot device is a replicated disk, Google recommends that you don't pre-create and start the standby instance. Instead, at the failover stage, attach the existing replicated disk when you create the standby instance by using the forceAttach option.

To create a VM with a boot disk that is a replicated disk, use either of the following methods:

gcloud

Use the gcloud compute instances create command to create a VM, and the --create-disk flag to specify the replicated disk.

gcloud compute instances create PRIMARY_INSTANCE_NAME  \
 --zone=ZONE  \
 --create-disk=^:^name=REPLICATED_DISK_NAME:scope=regional:boot=true:type=DISK_TYPE:source-snapshot=SNAPSHOT_NAME:replica-zones=ZONE,REMOTE_ZONE

When specifying the disk parameters, the characters ^:^ specify that the separation character between parameters is a colon (:). This lets you use a comma (,) when specifying the replica-zones parameter.

Replace the following:

  • PRIMARY_INSTANCE_NAME: a name for the VM
  • ZONE: the name of the zone where you want to create the VM
  • REPLICATED_DISK_NAME: a name for the replicated disk
  • DISK_TYPE: the type of disk to create, for example, hyperdisk-balanced-high-availability (Preview) or pd-balanced
  • SNAPSHOT_NAME: the name of the snapshot you created for the boot disk
  • REMOTE_ZONE: the alternate zone for the replicated disk
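For example, the following command creates a VM in us-central1-a with a pd-balanced replicated boot disk created from a snapshot. The VM, disk, and snapshot names are placeholders:

gcloud compute instances create primary-vm \
 --zone=us-central1-a \
 --create-disk=^:^name=replicated-boot-disk:scope=regional:boot=true:type=pd-balanced:source-snapshot=boot-snapshot:replica-zones=us-central1-a,us-central1-b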

REST

Construct a POST request to the instances.insert method and specify the boot: true and replicaZones properties. For example:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances
{
 "name": "VM_NAME",
 "disks": [{
    "boot": true,
    "initializeParams": {
       "sourceSnapshot": "global/snapshots/BOOT_SNAPSHOT_NAME",
       "replicaZones": [
           "projects/PROJECT_ID/zones/ZONE",
           "projects/PROJECT_ID/zones/REMOTE_ZONE"
       ],
       "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/DISK_TYPE"
    }
  }],
 "networkInterfaces": [
    {
      "network": "global/networks/default"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your project ID
  • ZONE: the name of the zone where you want to create the VM
  • VM_NAME: a name for the VM
  • BOOT_SNAPSHOT_NAME: the name of the boot disk snapshot
  • REMOTE_ZONE: the remote zone for the replicated disk
  • DISK_TYPE: the type of disk to create, for example, hyperdisk-balanced-high-availability (Preview) or pd-balanced

Attach a replicated boot disk to a VM

Use the following steps to:

  • Replace the boot disk of an existing VM with a replicated boot disk.
  • Fail over a replicated boot disk to a hot standby VM that is running in the backup zone. You do this by attaching the replicated disk to the VM as the boot disk.

These steps assume that the replicated disk and VM already exist.

gcloud

  1. Stop the VM.
    gcloud compute instances stop VM_NAME  --zone=ZONE
    
  2. Detach the current boot disk from the VM.
    gcloud compute instances detach-disk VM_NAME \
     --zone=ZONE --disk=CURRENT_BOOT_DEVICE_NAME
    
  3. Attach the replicated boot disk to the VM.
    gcloud compute instances attach-disk VM_NAME  \
     --zone=ZONE  \
     --disk=REPLICATED_DISK_NAME  \
     --disk-scope=regional --force-attach \
     --boot
    
  4. Restart the VM.

    gcloud compute instances start VM_NAME --zone=ZONE
    

Replace the variables in the previous commands with the following:

  • VM_NAME: the name of the VM to which you want to attach the replicated boot disk
  • ZONE: the zone in which the VM is located
  • CURRENT_BOOT_DEVICE_NAME: the name of the boot disk being used by the VM. This is usually the same as the name of the VM.
  • REPLICATED_DISK_NAME: the name of the replicated disk that you want to attach to the VM as a boot disk

The --force-attach flag in step 3 lets you attach the replicated boot disk to the standby VM even if you can't successfully detach it from the primary VM due to an outage or failure.

REST

  1. Stop the VM.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/stop
    
  2. Detach the current boot disk from the VM.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/detachDisk?deviceName=CURRENT_BOOT_DEVICE_NAME
    
  3. Attach the replicated boot disk to the VM.

    Construct a POST request to the compute.instances.attachDisk method, and include the URL to the replicated boot disk:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/attachDisk
    {
    "source": "compute/v1/projects/PROJECT_ID/regions/REGION/disks/REPLICATED_DISK_NAME",
    "boot": true
    }
  4. Restart the VM.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/start
    

Replace the variables in the previous commands with the following:

  • PROJECT_ID: your project ID
  • VM_NAME: the name of the VM to which you want to attach the replicated disk
  • ZONE: the zone in which the VM is located
  • CURRENT_BOOT_DEVICE_NAME: the name of the boot disk being used by the VM. This is usually the same as the name of the VM.
  • REGION: the region in which the replicated disk is located
  • REPLICATED_DISK_NAME: the name of the replicated disk that you want to attach to the VM as a boot disk

Optional: If you can't successfully detach the replicated boot disk from the VM that it was originally attached to because of an outage or failure, include "forceAttach": true in the request body.

List and describe your replicated disks

You can view a list of all your configured replicated disks and information about their properties, including the following:

  • Disk ID
  • Disk name
  • Size
  • Disk type
  • Region
  • Zonal replicas

To view detailed information about your replicated disks, use the following:
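For example, with the gcloud CLI, you can list disks and then describe a specific replicated disk. The disk name and region are placeholders:

# List all disks in the project.
gcloud compute disks list

# View the properties of one replicated disk, including its replica zones.
gcloud compute disks describe my-ha-disk \
    --region=europe-west1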

Resize a replicated disk

If VMs with synchronously replicated disks require additional storage space, you can resize the disks at any time, regardless of whether the disk is attached to a running VM. If you need to separate your data into unique volumes, create multiple secondary disks for the VM. For Hyperdisk Balanced High Availability, you can also increase the IOPS and throughput limits for the disk.

The command for resizing a replicated disk is similar to the command for resizing a non-replicated disk. However, you must specify a region instead of a zone for the disk location.

You can only increase, and not decrease, the size of a disk. To decrease the disk size, you must create a new disk with a smaller size. Until you delete the original, larger disk, you are charged for both disks.
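For example, the following command increases the size of a replicated disk to 200 GiB. The disk name and region are placeholders:

gcloud compute disks resize my-ha-disk \
    --region=europe-west1 \
    --size=200GB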

For instructions on how to modify a replicated disk, see the following:

What's next