Regional Persistent Disk and Hyperdisk Balanced High Availability are storage options that let you implement high availability (HA) services in Compute Engine. These disk types synchronously replicate data between two zones in the same region, keeping disk data available through the failure of a single zone. The synchronously replicated disk can be a boot disk or a non-boot disk.
This document explains how to do the following tasks for replicated disks:
- Create replicated disks.
- Attach a replicated disk to your VM.
- Change a zonal disk to a replicated disk.
- Create a new VM with replicated disks.
- Use a replicated disk as a boot disk for a new VM.
- Attach a replicated boot disk to a VM.
- List and describe your replicated disks.
- Resize a replicated disk.
Before you begin
- Review the differences between different types of disk storage options.
- Review the basics of synchronous disk replication.
- Read about replicated disk failover.
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init

- Set a default region and zone.
Terraform
To use the Terraform samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
1. Install the Google Cloud CLI.
2. To initialize the gcloud CLI, run the following command:

       gcloud init

3. If you're using a local shell, then create local authentication credentials for your user account:

       gcloud auth application-default login

   You don't need to do this if you're using Cloud Shell.
For more information, see Set up authentication for a local development environment.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
    gcloud init
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles and permissions
To get the permissions that you need to create a synchronously replicated disk, ask your administrator to grant you the following IAM roles on the project:
- Compute Instance Admin (v1) (`roles/compute.instanceAdmin.v1`)
- To connect to a VM that can run as a service account: Service Account User (v1) (`roles/iam.serviceAccountUser`)
For more information about granting roles, see Manage access to projects, folders, and organizations.
These predefined roles contain the permissions required to create a synchronously replicated disk. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create a synchronously replicated disk:
- `compute.disks.create`
- `compute.instances.attachDisk`
- `compute.disks.use`
- Create a snapshot of a disk: `compute.disks.createSnapshot`
- View the details for a disk: `compute.disks.get`
- Get a list of disks: `compute.disks.list`
- Change the size of a disk: `compute.disks.update`
You might also be able to get these permissions with custom roles or other predefined roles.
Limitations
- The Mexico, Osaka, and Montreal regions have three zones housed in only one or two physical data centers. Because data stored in these regions can be lost in the rare event of data center destruction, consider backing up business-critical data to a second region for increased data protection.
- You can attach regional Persistent Disk only to VMs that use E2, N1, N2, and N2D machine types.
- You can attach Hyperdisk Balanced High Availability only to supported machine types.
- You cannot create a regional Persistent Disk from an image, or from a disk that was created from an image.
- When using read-only mode, you can attach a regional balanced Persistent Disk to a maximum of 10 VM instances.
- The minimum size of a regional standard Persistent Disk is 200 GiB.
- You can only increase the size of a regional Persistent Disk or Hyperdisk Balanced High Availability volume; you can't decrease its size.
- Regional Persistent Disk and Hyperdisk Balanced High Availability volumes have different performance characteristics than their corresponding zonal disks. For more information, see Block storage performance.
- You can't use a Hyperdisk Balanced High Availability volume that's in multi-writer mode as a boot disk.
- If you create a replicated disk by cloning a zonal disk, then the two zonal replicas aren't fully in sync at the time of creation. After creation, you can use the regional disk clone within 3 minutes, on average. However, you might need to wait for tens of minutes before the disk reaches a fully replicated state and the recovery point objective (RPO) is close to zero. Learn how to check if your replicated disk is fully replicated.
About using a replicated disk as a boot disk for a VM
You can attach a regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) disk as a boot disk for stateful workloads that are provisioned ahead of time, before you provision a production workload. Replicated boot disks are not intended for hot standbys, because the replicated boot disks cannot be attached simultaneously to two VMs.
You can only create regional Persistent Disk or Hyperdisk Balanced High Availability volumes from snapshots; it isn't possible to create a replicated disk from an image.
To use a replicated disk as a VM boot disk, use either of the following methods:
- Create a new VM with a replicated boot disk.
- Create a replicated boot disk, and then attach it to a VM:
- Create a replicated disk from a snapshot of a boot disk.
- Attach a replicated boot disk to a VM.
If you need to fail over a replicated boot disk to a running standby VM in the replica zone, use the steps described in Attach a replicated boot disk to a VM.
Create a synchronously replicated disk
Create a regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) volume. The disk must be in the same region as the VM that you plan to attach it to.
If you create a Hyperdisk Balanced High Availability volume, you can also allow different VMs to simultaneously access the disk by setting the disk access mode. For more information, see Share a disk between VMs.
For regional Persistent Disk, if you create a disk in the Google Cloud console, the default disk type is `pd-balanced`. If you create a disk using the gcloud CLI or REST, the default disk type is `pd-standard`.
Console
1. In the Google Cloud console, go to the Disks page.
2. Select the required project.
3. Click Create disk.
4. Specify a Name for your disk.
5. For the Location, choose Regional.
6. Select the Region and Zone. You must select the same region when you create your VM.
7. Select the Replica zone in the same region. Make a note of the zones that you select, because you must attach the disk to your VM in one of those zones.
8. Select the Disk source type.
9. Select the Disk type and Size.
10. Click Create to finish creating your disk.
gcloud
Create a synchronously replicated disk by using the `gcloud compute disks create` command. If you need a regional SSD Persistent Disk for additional throughput or IOPS, include the `--type` flag and specify `pd-ssd`.
    gcloud compute disks create DISK_NAME \
        --size=DISK_SIZE \
        --type=DISK_TYPE \
        --region=REGION \
        --replica-zones=ZONE1,ZONE2 \
        --access-mode=DISK_ACCESS_MODE
Replace the following:
- `DISK_NAME`: the name of the new disk
- `DISK_SIZE`: the size, in GiB, of the new disk
- `DISK_TYPE`: for regional Persistent Disk, the type of the replicated disk. The default value is `pd-standard`. For Hyperdisk, specify the value `hyperdisk-balanced-high-availability`.
- `REGION`: the region for the replicated disk to reside in, for example: `europe-west1`
- `ZONE1`, `ZONE2`: the zones within the region where the two disk replicas are located, for example: `europe-west1-b,europe-west1-c`
- `DISK_ACCESS_MODE`: Optional: how VMs can access the data on the disk. Supported values are:
  - `READ_WRITE_SINGLE`: read-write access from one VM. This is the default.
  - `READ_WRITE_MANY`: read-write access from multiple VMs.

  You can set the access mode only for Hyperdisk Balanced High Availability disks.
Terraform
To create a regional Persistent Disk or Hyperdisk Balanced High Availability volume, you can use the `google_compute_region_disk` resource.
REST
To create a regional Persistent Disk or Hyperdisk Balanced High Availability volume, construct a `POST` request to the `compute.regionDisks.insert` method. To create a blank disk, don't specify a snapshot source.
    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/disks

    {
      "name": "DISK_NAME",
      "region": "projects/PROJECT_ID/regions/REGION",
      "replicaZones": [
        "projects/PROJECT_ID/zones/ZONE1",
        "projects/PROJECT_ID/zones/ZONE2"
      ],
      "sizeGb": "DISK_SIZE",
      "type": "projects/PROJECT_ID/regions/REGION/diskTypes/DISK_TYPE",
      "accessMode": "DISK_ACCESS_MODE"
    }
Replace the following:
- `PROJECT_ID`: your project ID
- `REGION`: the region for the replicated disk to reside in, for example: `europe-west1`
- `DISK_NAME`: the name of the new disk
- `ZONE1`, `ZONE2`: the zones where replicas of the new disk should be located
- `DISK_SIZE`: the size, in GiB, of the new disk
- `DISK_TYPE`: for regional Persistent Disk, the type of Persistent Disk. For Hyperdisk, specify the value `hyperdisk-balanced-high-availability`.
- `DISK_ACCESS_MODE`: how VMs can access the data on the disk. Supported values are:
  - `READ_WRITE_SINGLE`: read-write access from one VM. This is the default.
  - `READ_WRITE_MANY`: read-write access from multiple VMs.

  You can set the access mode only for Hyperdisk Balanced High Availability disks.
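The request body above can also be assembled programmatically. The following Python sketch builds the same `compute.regionDisks.insert` body as a dict; the helper name and sample project, region, and zone values are illustrative, not part of the API:

```python
def build_region_disk_body(project, region, disk_name, zone1, zone2,
                           size_gb, disk_type, access_mode=None):
    """Return a request body for compute.regionDisks.insert."""
    body = {
        "name": disk_name,
        "region": f"projects/{project}/regions/{region}",
        "replicaZones": [
            f"projects/{project}/zones/{zone1}",
            f"projects/{project}/zones/{zone2}",
        ],
        "sizeGb": str(size_gb),  # the API represents sizeGb as a string
        "type": f"projects/{project}/regions/{region}/diskTypes/{disk_type}",
    }
    if access_mode is not None:
        # Settable only for Hyperdisk Balanced High Availability disks.
        body["accessMode"] = access_mode
    return body


body = build_region_disk_body(
    "my-project", "europe-west1", "my-ha-disk",
    "europe-west1-b", "europe-west1-c", 200, "pd-standard")
```

You can pass the resulting dict as the request body to any HTTP or Google API client of your choice.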
Attach a replicated disk to your VM
For disks that are not boot disks, after you create a regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) volume, you can attach it to a VM. The VM must be in the same region as the disk.
To attach a replicated boot disk to a VM, see Attach a replicated boot disk to a VM.
To attach a disk to multiple VMs, repeat the procedure in this section for each VM.
Console
1. To attach a disk to a VM, go to the VM instances page.
2. In the Name column, click the name of the VM.
3. Click Edit.
4. Click +Attach existing disk.
5. Choose the previously created replicated disk to add to your VM.
6. If you see a warning that indicates the selected disk is already attached to another instance, select the Force-attach disk box to force-attach the disk to the VM that you are editing. Review the use cases for force-attaching replicated disks at Replicated disk failover.
7. Click Save.
8. On the Edit VM page, click Save.
gcloud
To attach a replicated disk to a running or stopped VM, use the `gcloud compute instances attach-disk` command with the `--disk-scope` flag set to `regional`.

    gcloud compute instances attach-disk VM_NAME \
        --disk=DISK_NAME \
        --disk-scope=regional
Replace the following:
- `VM_NAME`: the name of the VM to which you're adding the replicated disk
- `DISK_NAME`: the name of the new disk that you're attaching to the VM
Terraform
To attach a regional Persistent Disk or Hyperdisk Balanced High Availability volume to a VM, you can use the `google_compute_attached_disk` resource.
REST
To attach a replicated disk to a running or stopped VM, construct a `POST` request to the `compute.instances.attachDisk` method and include the URL to the replicated disk that you created.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/attachDisk

    {
      "source": "/projects/PROJECT_ID/regions/REGION/disks/DISK_NAME"
    }
Replace the following:
- `PROJECT_ID`: your project ID
- `ZONE`: the location of your VM
- `VM_NAME`: the name of the VM to which you're adding the new replicated disk
- `REGION`: the region where the replicated disk is located
- `DISK_NAME`: the name of the replicated disk
For non-boot disks, after you create and attach a blank replicated disk to a VM, you must format and mount the disk so that the operating system can use the available storage space.
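The `attachDisk` request body from the REST tab can be built with a small helper. This is an illustrative sketch, not an official client; the optional `boot` and `forceAttach` fields shown here correspond to the boot-disk and force-attach scenarios described later in this document:

```python
def build_attach_disk_body(project, region, disk_name,
                           boot=False, force_attach=False):
    """Return a request body for compute.instances.attachDisk.

    Regional disks are referenced by a region-scoped source URL,
    not a zonal one.
    """
    body = {"source": f"/projects/{project}/regions/{region}/disks/{disk_name}"}
    if boot:
        body["boot"] = True
    if force_attach:
        body["forceAttach"] = True
    return body


attach_body = build_attach_disk_body("my-project", "europe-west1", "my-ha-disk")
```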
Change a zonal disk to a replicated disk
- To convert an existing zonal Persistent Disk to a regional Persistent Disk, create a new disk by cloning the existing zonal disk. For more information, see Creating a regional disk clone from a zonal disk.
- To convert a Hyperdisk volume to a synchronously replicated disk, create a new Hyperdisk Balanced High Availability disk from a snapshot of the existing disk, as described in Change the disk type.
Create a new VM with replicated disks
When creating a VM, you can optionally include regional Persistent Disk or Hyperdisk Balanced High Availability (Preview) volumes as additional disks.
To create and attach a regional Persistent Disk or Hyperdisk Balanced High Availability volume to a VM during VM creation, see either of the following:
Create a new VM with a replicated boot disk
When setting up a highly available VM instance, you can create the primary VM with a replicated boot disk. If a zonal outage occurs, this lets you restart the VM in the secondary zone instead of creating a new VM.
In a high availability setup where the boot device is a replicated disk, Google recommends that you don't pre-create and start the standby instance. Instead, at the failover stage, attach the existing replicated disk when you create the standby instance by using the `forceAttach` option.
To create a VM with a boot disk that is a replicated disk, use either of the following methods:
gcloud
Use the `gcloud compute instances create` command to create a VM, and the `--create-disk` flag to specify the replicated disk.

    gcloud compute instances create PRIMARY_INSTANCE_NAME \
        --zone=ZONE \
        --create-disk=^:^name=REPLICATED_DISK_NAME:scope=regional:boot=true:type=DISK_TYPE:source-snapshot=SNAPSHOT_NAME:replica-zones=ZONE,REMOTE_ZONE
When specifying the disk parameters, the characters `^:^` specify that the separation character between parameters is a colon (`:`). This lets you use a comma (`,`) when specifying the `replica-zones` parameter.
Replace the following:
- `PRIMARY_INSTANCE_NAME`: a name for the VM
- `ZONE`: the name of the zone where you want to create the VM
- `REPLICATED_DISK_NAME`: a name for the replicated disk
- `DISK_TYPE`: the type of disk to create, for example, `hyperdisk-balanced-high-availability` (Preview) or `pd-balanced`
- `SNAPSHOT_NAME`: the name of the snapshot you created for the boot disk
- `REMOTE_ZONE`: the alternate zone for the replicated disk
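The `^:^` delimiter syntax is easy to get wrong when generating the command from a script. As a hedged sketch (the helper name and property order are illustrative), the flag can be assembled so that only the `replica-zones` value contains a comma while colons separate the properties:

```python
def create_disk_flag(name, disk_type, snapshot, zone, remote_zone):
    """Build a --create-disk flag that uses ^:^ to declare ':' as the
    property separator, leaving ',' free for the replica-zones value."""
    props = [
        f"name={name}",
        "scope=regional",
        "boot=true",
        f"type={disk_type}",
        f"source-snapshot={snapshot}",
        f"replica-zones={zone},{remote_zone}",
    ]
    return "--create-disk=^:^" + ":".join(props)


flag = create_disk_flag("ha-boot-disk", "pd-balanced", "boot-snap",
                        "europe-west1-b", "europe-west1-c")
```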
REST
Construct a `POST` request to the `instances.insert` method and specify the properties `boot: 'true'` and `replicaZones`. For example:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

    {
      "name": "VM_NAME",
      "disks": [{
        "boot": true,
        "initializeParams": {
          "sourceSnapshot": "global/snapshots/BOOT_SNAPSHOT_NAME",
          "replicaZones": [
            "projects/PROJECT_ID/zones/ZONE",
            "projects/PROJECT_ID/zones/REMOTE_ZONE"
          ],
          "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/DISK_TYPE"
        }
      }],
      "networkInterfaces": [
        {
          "network": "global/networks/default"
        }
      ]
    }
Replace the following:
- `PROJECT_ID`: your project ID
- `ZONE`: the name of the zone where you want to create the VM
- `VM_NAME`: a name for the VM
- `BOOT_SNAPSHOT_NAME`: the name of the boot disk snapshot
- `REMOTE_ZONE`: the remote zone for the replicated disk
- `DISK_TYPE`: the type of disk to create, for example, `hyperdisk-balanced-high-availability` (Preview) or `pd-balanced`
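The `instances.insert` body above can likewise be generated programmatically. This minimal Python sketch mirrors the documented JSON structure; the helper name and sample values are assumptions for illustration:

```python
def build_vm_body(project, zone, remote_zone, vm_name, snapshot, disk_type):
    """Return a compute.instances.insert request body for a VM whose
    boot disk is a replicated disk created from a snapshot."""
    return {
        "name": vm_name,
        "disks": [{
            "boot": True,
            "initializeParams": {
                "sourceSnapshot": f"global/snapshots/{snapshot}",
                "replicaZones": [
                    f"projects/{project}/zones/{zone}",
                    f"projects/{project}/zones/{remote_zone}",
                ],
                "diskType": f"projects/{project}/zones/{zone}/diskTypes/{disk_type}",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }


vm_body = build_vm_body("my-project", "europe-west1-b", "europe-west1-c",
                        "primary-vm", "boot-snap", "pd-balanced")
```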
Attach a replicated boot disk to a VM
Use the following steps to:
- Replace the boot disk of an existing VM with a replicated boot disk.
- Fail over a replicated boot disk to a hot standby VM that is running in the backup zone. You do this by attaching the replicated disk to the VM as the boot disk.
These steps assume that the replicated disk and VM already exist.
gcloud
1. Stop the VM.

       gcloud compute instances stop VM_NAME --zone=ZONE

2. Detach the current boot disk from the VM.

       gcloud compute instances detach-disk VM_NAME \
           --zone=ZONE \
           --disk=CURRENT_BOOT_DEVICE_NAME

3. Attach the replicated boot disk to the VM.

       gcloud compute instances attach-disk VM_NAME \
           --zone=ZONE \
           --disk=REPLICATED_DISK_NAME \
           --disk-scope=regional \
           --force-attach \
           --boot

4. Restart the VM.

       gcloud compute instances start VM_NAME
Replace the variables in the previous commands with the following:
- `VM_NAME`: the name of the VM to which you want to attach the replicated boot disk
- `ZONE`: the zone in which the VM is located
- `CURRENT_BOOT_DEVICE_NAME`: the name of the boot disk being used by the VM. This is usually the same as the name of the VM.
- `REPLICATED_DISK_NAME`: the name of the replicated disk that you want to attach to the VM as a boot disk

Optional: If you can't successfully detach the replicated boot disk from the primary VM due to an outage or failure, include the `--force-attach` flag.
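For a failover runbook, the four gcloud steps above can be generated as an ordered command list and reviewed before execution. This is an illustrative sketch (the function name and placeholder values are assumptions; the `--zone` flag on the final `start` command is added here for completeness):

```python
def boot_disk_failover_plan(vm_name, zone, current_boot_disk, replicated_disk):
    """Return the gcloud commands for the boot-disk failover steps, in order:
    stop, detach the current boot disk, force-attach the replicated boot
    disk, and restart the VM."""
    return [
        f"gcloud compute instances stop {vm_name} --zone={zone}",
        f"gcloud compute instances detach-disk {vm_name} "
        f"--zone={zone} --disk={current_boot_disk}",
        f"gcloud compute instances attach-disk {vm_name} --zone={zone} "
        f"--disk={replicated_disk} --disk-scope=regional "
        f"--force-attach --boot",
        f"gcloud compute instances start {vm_name} --zone={zone}",
    ]


plan = boot_disk_failover_plan("standby-vm", "europe-west1-c",
                               "standby-vm", "ha-boot-disk")
```

Each returned string can then be run with your preferred process runner once you've confirmed the plan.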
REST
1. Stop the VM.

       POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/stop

2. Detach the current boot disk from the VM.

       POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/detachDisk?deviceName=CURRENT_BOOT_DEVICE_NAME

3. Attach the replicated boot disk to the VM. Construct a `POST` request to the `compute.instances.attachDisk` method, and include the URL to the replicated boot disk:

       POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/attachDisk

       {
         "source": "compute/v1/projects/PROJECT_ID/regions/REGION/disks/REPLICATED_DISK_NAME",
         "boot": true
       }

4. Restart the VM.

       POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/start
Replace the variables in the previous commands with the following:
- `PROJECT_ID`: your project ID
- `VM_NAME`: the name of the VM to which you want to attach the replicated disk
- `ZONE`: the zone in which the VM is located
- `CURRENT_BOOT_DEVICE_NAME`: the name of the boot disk being used by the VM. This is usually the same as the name of the VM.
- `REGION`: the region in which the replicated disk is located
- `REPLICATED_DISK_NAME`: the name of the replicated disk that you want to attach to the VM as a boot disk

Optional: If you can't successfully detach the replicated boot disk from the VM that it was originally attached to because of an outage or failure, include `"forceAttach": true` in the request body.
List and describe your replicated disks
You can view a list of all your configured replicated disks, and information about their properties, including the following:
- Disk ID
- Disk name
- Size
- Disk type
- Region
- Zonal replicas
To view detailed information about your replicated disks, use the following:
- To view the details of all replicated disks in a specific region and project:
  - Construct a `GET` request to the `compute.regionDisks.list` method.
  - Use the `gcloud compute disks list` command and filter the results by region.
- To view the details of a specific replicated disk:
  - Run the `gcloud compute disks describe` command with the `--region` flag, and specify the name of the disk and its region.
  - Construct a `GET` request to the `compute.regionDisks.get` method.
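When processing a `regionDisks.list` response programmatically, the properties listed above (name, size, type, and zonal replicas) can be extracted from each disk resource. A minimal sketch, assuming the response has already been parsed into a dict (the sample response values are illustrative):

```python
def summarize_region_disks(list_response):
    """Extract name, size, disk type, and replica zones from a
    compute.regionDisks.list response dict. Type and zone URLs are
    shortened to their final path segment."""
    return [
        {
            "name": disk["name"],
            "sizeGb": disk["sizeGb"],
            "type": disk["type"].rsplit("/", 1)[-1],
            "replicaZones": [z.rsplit("/", 1)[-1]
                             for z in disk.get("replicaZones", [])],
        }
        for disk in list_response.get("items", [])
    ]


sample_response = {
    "items": [{
        "name": "my-ha-disk",
        "sizeGb": "200",
        "type": "projects/my-project/regions/europe-west1/diskTypes/pd-standard",
        "replicaZones": [
            "projects/my-project/zones/europe-west1-b",
            "projects/my-project/zones/europe-west1-c",
        ],
    }]
}
summary = summarize_region_disks(sample_response)
```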
Resize a replicated disk
If VMs with synchronously replicated disks require additional storage space, you can resize the disks. You can resize disks at any time, regardless of whether the disk is attached to a running VM. If you need to separate your data into unique volumes, create multiple secondary disks for the VM. For Hyperdisk Balanced High Availability, you can also increase the IOPS and throughput limits for the disk.
The command for resizing a replicated disk is very similar to that for resizing a non-replicated disk. However, you must specify a region instead of a zone for the disk location.
You can only increase, and not decrease, the size of a disk. To decrease the disk size, you must create a new disk with a smaller size. Until you delete the original, larger disk, you are charged for both disks.
For instructions on how to modify a replicated disk, see the following:
- Regional Persistent Disk: Increase the size of a persistent disk
- Hyperdisk Balanced High Availability: Modify a Hyperdisk volume
What's next
- Learn about disk pricing.
- Learn how to monitor the replica states of replicated disks.
- Learn how to determine the replication state of a replicated disk.
- Review Share Persistent Disk volumes between VMs as an alternative to regional Persistent Disk for read-only data.
- Create a snapshot of a disk.
- Learn about instance groups for VM instances.
- Learn how to build scalable and resilient web applications on Google Cloud.
- See the Google Cloud disaster recovery planning guide.