You can access the same disk from multiple virtual machine (VM) or bare metal instances by attaching the disk to each instance. You can attach a disk in read-only mode or multi-writer mode to an instance.
With read-only mode, multiple instances can only read data from the disk. None of the instances can write to the disk. Sharing a disk in read-only mode between instances is less expensive than having copies of the same data on multiple disks.
With multi-writer mode, multiple instances can read and write to the same disk. This is useful for highly available (HA) shared file systems and databases like SQL Server Failover Cluster Infrastructure (FCI).
You can share a zonal disk only between instances in the same zone. Regional disks can be shared only with instances in the same zones as the disk's replicas.
There are no additional costs associated with sharing a disk between instances. Compute Engine instances don't have to use the same machine type to share a disk, but each instance must use a machine type that supports disk sharing.
This document discusses multi-writer and read-only disk sharing in Compute Engine, including the supported disk types and performance considerations.
Before you begin
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Enable disk sharing
You can attach an existing Hyperdisk or Persistent Disk volume to multiple instances. However, for Hyperdisk volumes, you must first put the disk in multi-writer or read-only mode by setting its access mode.
A Hyperdisk volume's access mode is a property that determines how instances can access the disk.
The available access modes are as follows:
- Single-writer mode (READ_WRITE_SINGLE): This is the default access mode. It allows the disk to be attached to at most one instance at any time. The instance has read-write access to the disk.
- Read-only mode (READ_ONLY_MANY): Enables simultaneous attachments to multiple instances in read-only mode. Instances can't write to the disk in this mode. Required for read-only sharing.
- Multi-writer mode (READ_WRITE_MANY): Enables simultaneous attachments to multiple instances in read-write mode. Required for multi-writer sharing.
Support for each access mode varies by Hyperdisk type, as stated in the following table. You can't set the access mode for Hyperdisk Throughput or Hyperdisk Extreme volumes.
Hyperdisk type | Supported access modes
---|---
Hyperdisk Balanced, Hyperdisk Balanced High Availability (Preview) | Single-writer mode, multi-writer mode
Hyperdisk ML | Single-writer mode, read-only mode
Hyperdisk Throughput, Hyperdisk Extreme | Single-writer mode only
For disks that can be shared between instances, you can set the access mode at or after disk creation. For instructions, see Set the disk's access mode.
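For example, the following gcloud CLI sketch creates a new Hyperdisk Balanced volume in multi-writer mode and changes an existing, detached Hyperdisk ML volume to read-only mode. The disk names, zone, and size are illustrative assumptions, not values from this guide:

```
# Create a Hyperdisk Balanced volume with the multi-writer access mode.
gcloud compute disks create shared-disk \
    --zone=us-central1-a \
    --type=hyperdisk-balanced \
    --size=100GB \
    --access-mode=READ_WRITE_MANY

# Change the access mode of an existing, detached Hyperdisk ML volume
# so that it can be shared in read-only mode.
gcloud compute disks update existing-ml-disk \
    --zone=us-central1-a \
    --access-mode=READ_ONLY_MANY
```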
Read-only mode for Hyperdisk and Persistent Disk
This section discusses sharing a single disk in read-only mode between multiple instances.
Supported disk types for read-only mode
You can attach these disk types to multiple instances in read-only mode:
- Hyperdisk ML
- Zonal and regional Balanced Persistent Disk
- SSD Persistent Disk
- Standard Persistent Disk
Performance in read-only mode
Attaching a disk in read-only mode to multiple instances doesn't affect the disk's performance. Each instance can still reach the maximum disk performance possible for the instance's machine type.
Limitations for sharing disks in read-only mode
- If you share a Hyperdisk ML volume in read-only mode, you can't re-enable write access to the disk.
- You can attach a Hyperdisk ML volume to up to 100 instances during every 30-second interval.
- The maximum number of instances a disk can be attached to varies by disk type:
  - For Hyperdisk ML volumes, the maximum number of instances depends on the provisioned size, as follows:
    - Volumes less than 256 GiB in size: 2,500 VMs
    - Volumes with capacity of 256 GiB or more, and less than 1 TiB: 1,500 VMs
    - Volumes with capacity of 1 TiB or more, and less than 2 TiB: 600 VMs
    - Volumes with 2 TiB or more of capacity: 30 VMs
  - Zonal or regional Balanced Persistent Disk volumes in read-only mode support at most 10 instances.
  - For SSD Persistent Disk volumes, Google recommends at most 100 instances.
  - For Standard Persistent Disk volumes, the recommended maximum is 10 instances.
How to share a disk in read-only mode between instances
If you're not using Hyperdisk ML, attach the disk to multiple instances by following the instructions in Attach a non-boot disk to an instance.
To attach a Hyperdisk ML volume in read-only mode to multiple instances, you must first set the disk's access mode to read-only mode. After you set the access mode, attach the Hyperdisk ML volume to your instances.
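As a sketch of these two steps (the disk name, instance names, and zone are illustrative assumptions), the following gcloud CLI commands set a detached Hyperdisk ML volume to read-only mode and then attach it to two instances in read-only mode:

```
# Set the access mode of the detached Hyperdisk ML volume to read-only.
gcloud compute disks update training-data-disk \
    --zone=us-central1-a \
    --access-mode=READ_ONLY_MANY

# Attach the volume to each instance in read-only mode.
gcloud compute instances attach-disk training-vm-1 \
    --zone=us-central1-a \
    --disk=training-data-disk \
    --mode=ro

gcloud compute instances attach-disk training-vm-2 \
    --zone=us-central1-a \
    --disk=training-data-disk \
    --mode=ro
```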
Multi-writer mode for Hyperdisk
Disks in multi-writer mode are suitable for use cases like the following:
- Implementing SQL Server Failover Cluster Infrastructure (FCI)
- Clustered file systems where multiple instances all write to the same disk
- Highly available systems in active-active or active-passive mode. Attaching the same disk to multiple instances can prevent disruptions because if one instance fails, other instances still have access to the disk and can continue to run the workload.
If your primary goal is shared file storage among compute instances, consider one of the following options:
- Filestore, Google's managed file storage solution
- Cloud Storage
- A network file server on Compute Engine
Supported Hyperdisk and machine types for multi-writer mode
You can use Hyperdisk Balanced and Hyperdisk Balanced High Availability volumes (Preview) in multi-writer mode. You can attach a single volume in multi-writer mode to at most 8 instances.
Hyperdisk Balanced supports multi-writer mode for the following machine types:
Hyperdisk Balanced High Availability supports multi-writer mode (Preview) for the following machine types:
Multi-writer mode for Hyperdisk supports the NVMe interface. If you're attaching a disk in multi-writer mode to an instance, the instance's boot disk must also use the NVMe interface.
Supported file systems for multi-writer mode
To access a disk from multiple instances, use one of the following options:
- Persistent Reservations (PR), especially for HA systems such as SQL Server FCI and NetApp ONTAP. Google recommends using PR commands to provide I/O fencing and maintain data integrity. For a list of the supported PR commands, see I/O fencing with persistent reservations.
- Clustered file systems that support multiple instances writing to the same volume. Examples of such file systems include OCFS2, VMFS and GFS2.
- Scale-out software systems like Lustre and IBM Spectrum Scale.
- Your own synchronization mechanism to coordinate concurrent reads and writes.
Hyperdisk performance in multi-writer mode
When you attach a disk in multi-writer mode to multiple instances, the disk's provisioned performance is split evenly across all attached instances, including instances that aren't running or that aren't actively using the disk. However, the maximum performance for each instance is ultimately limited by the throughput and IOPS limits of that instance's machine type.
For example, suppose you attach a Hyperdisk Balanced volume provisioned with 100,000 IOPS to 2 instances. Each instance gets 50,000 IOPS concurrently.
The following table shows how much performance each instance in this example would get depending on how many instances you attach the disk to. Each time you attach a disk to another instance, Compute Engine asynchronously adjusts the performance allotted to each previously attached instance.
# of instances attached | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|
Max IOPS per instance | 100,000 | 50,000 | ~33,333 | 25,000 | 20,000 | ~16,667 | ~14,286 | 12,500 |
Max throughput per instance (MiBps) | 1,200 | 600 | 400 | 300 | 240 | 200 | ~172 | 150 |
Limitations for sharing Hyperdisk volumes in multi-writer mode
- You can attach a single Hyperdisk Balanced or Hyperdisk Balanced High Availability volume in multi-writer mode to at most 8 instances.
- You can't clone a disk in multi-writer mode.
- You can't create snapshots, machine images, or disk images from disks in multi-writer mode.
- You can't create a Hyperdisk volume in multi-writer mode when you are creating or editing an instance. You must create the disk separately first, and then attach it to the instance.
- You can't resize a disk in multi-writer mode unless you detach the disk from all instances.
You can make the following changes to a disk that's in multi-writer mode, even if the disk is already attached to multiple instances:
- Modify its provisioned IOPS or throughput
- Attach the disk to additional instances
When you make one of these changes, Compute Engine redistributes the disk's provisioned performance across all the attached instances. This process might take up to 6 hours to complete.
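For example, the following gcloud CLI sketch changes the provisioned performance of a Hyperdisk Balanced volume that's attached in multi-writer mode; the disk name, zone, and values are illustrative assumptions:

```
# Update the provisioned IOPS and throughput of the multi-writer disk.
# Compute Engine redistributes the new performance across all attached instances.
gcloud compute disks update shared-disk \
    --zone=us-central1-a \
    --provisioned-iops=80000 \
    --provisioned-throughput=800
```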
- You can't create an image from a disk in multi-writer mode.
- You can't enable auto-delete for disks in multi-writer mode.
- You can't use a disk in multi-writer mode as the boot disk for an instance.
- You can't use disks in multi-writer mode with instances on sole-tenant nodes.
- You must attach the disk with the same interface type (NVMe) as the instance's boot disk.
- You can't change the machine type of an instance that's attached to a disk in multi-writer mode.
- Storage pools don't support disks in multi-writer mode.
Available regions
Multi-writer mode is supported in all the regions where Hyperdisk Balanced and Hyperdisk Balanced High Availability are available.
I/O fencing with persistent reservations
Google recommends using persistent reservations (PR) with disks in multi-writer mode to provide I/O fencing. Persistent reservations manage access to the disk between instances, which prevents data corruption caused by multiple instances writing to the same portion of the disk at the same time.
Hyperdisk volumes in multi-writer mode support NVMe (spec 1.2.1) reservations.
Supported reservation modes
The following reservation modes are supported:
- Write Exclusive: There is a single reservation holder, which is the only writer. All other registrants and non-registrants have read-only access.
- Write Exclusive - Registrants Only: There is a single reservation holder. All registrants have read and write access to the disk. Non-registrants have read-only access.
The following reservation modes aren't supported:
- Write Exclusive - All registrants
- Exclusive Access
- Exclusive Access - Registrant Only
- Exclusive Access - All registrants
NVMe Get Features - Host Identifier is supported. The instance number is used as the default Host ID.
The following NVMe reservation features are not supported:
- Set Features - Host Identifier
- Reservation notifications:
  - Get Log Page
  - Reservation Notification Mask
Supported commands
NVMe reservations support the following commands:
- Reservation Register Action (RREGA): Replace/Register/Unregister - IEKEY
- Reservation Acquire Action (RACQA): Acquire/Preempt - IEKEY
- Reservation Release Action (RRELA): Release/Clear - IEKEY
- Reservation Report
- Reservation Capabilities (RESCAP) field in the Identify Namespace data structure
NVMe reservations don't support the following commands:
- Preempt and Abort
- Disabling Persist Through Power Loss (PTPL). PTPL is always enabled.
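As an illustrative sketch (not taken from this guide), you can issue these reservation commands from a Linux guest with the nvme-cli utility. The device path, namespace ID, and reservation keys below are assumptions that depend on how the disk appears in your instance:

```
# Register a reservation key for this host on the shared namespace.
sudo nvme resv-register /dev/nvme0n2 --namespace-id=1 \
    --nrkey=0xABCD1234 --rrega=0

# Acquire a Write Exclusive reservation (rtype=1) with the registered key.
sudo nvme resv-acquire /dev/nvme0n2 --namespace-id=1 \
    --crkey=0xABCD1234 --rtype=1 --racqa=0

# Report the current reservation status of the namespace.
sudo nvme resv-report /dev/nvme0n2 --namespace-id=1
```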
How to share a disk in multi-writer mode
Before you attach a disk in multi-writer mode to multiple instances, you must set the disk's access mode to multi-writer. You can set the access mode for a disk when you create it.
You can also set the access mode for an existing disk, but you must first detach the disk from all instances.
To create and use a new disk in multi-writer mode, follow these steps:
- Create the disk, setting its access mode to multi-writer.
- Attach the disk to each instance.
To use an existing disk in multi-writer mode, follow these steps:
- Detach the disk from all instances.
- Set the disk's access mode to multi-writer.
- Attach the disk to each instance.
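The following gcloud CLI sketch walks through these steps for an existing Hyperdisk Balanced volume; the disk name, instance names, and zone are illustrative assumptions:

```
# Detach the disk from every instance it's currently attached to.
gcloud compute instances detach-disk vm-1 --disk=shared-disk --zone=us-central1-a

# Set the disk's access mode to multi-writer.
gcloud compute disks update shared-disk \
    --zone=us-central1-a \
    --access-mode=READ_WRITE_MANY

# Attach the disk to each instance that needs read-write access.
gcloud compute instances attach-disk vm-1 --disk=shared-disk --zone=us-central1-a
gcloud compute instances attach-disk vm-2 --disk=shared-disk --zone=us-central1-a
```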
Multi-writer mode for Persistent Disk volumes
You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk.
If you have more than 2 N2 VMs or you're using any other machine series, you can use one of the following options:
- Connect your instances to Cloud Storage
- Connect your instances to Filestore
- Create a network file server on Compute Engine
To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.
Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that has the ability to coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single VM file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage.
For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine instances.
Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.
The following SCSI PR commands are supported:
- IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
- OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}
For instructions, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.
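As an illustrative sketch (not taken from this guide), you can issue these SCSI PR commands from a Linux guest with the sg_persist utility from the sg3_utils package. The device path and reservation keys are assumptions:

```
# Register a reservation key for this VM on the shared disk.
sudo sg_persist --out --register --param-sark=0x1 /dev/sdb

# Reserve the disk with type 1 (Write Exclusive) using the registered key.
sudo sg_persist --out --reserve --param-rk=0x1 --prout-type=1 /dev/sdb

# Read the registered keys and the current reservation.
sudo sg_persist --in --read-keys /dev/sdb
sudo sg_persist --in --read-reservation /dev/sdb
```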
Supported Persistent Disk types for multi-writer mode
You can simultaneously attach SSD Persistent Disk in multi-writer mode to up to 2 N2 VMs.
Best practices for multi-writer mode
- I/O fencing using SCSI PR commands results in a crash-consistent state of Persistent Disk data. Some file systems don't provide crash consistency and therefore might become corrupted if you use SCSI PR commands.
- Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
- Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple instances.
Persistent Disk performance in multi-writer mode
Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.
Zonal SSD persistent disk in multi-writer mode | Limit
---|---
Maximum sustained IOPS |
Read IOPS per GB | 30
Write IOPS per GB | 30
Read IOPS per instance | 15,000–100,000*
Write IOPS per instance | 15,000–100,000*
Maximum sustained throughput (MB/s) |
Read throughput per GB | 0.48
Write throughput per GB | 0.48
Read throughput per instance | 240–1,200*
Write throughput per instance | 240–1,200*
To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.
Restrictions for sharing a disk in multi-writer mode
- Multi-writer mode is only supported for SSD type Persistent Disk volumes.
- You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
  - australia-southeast1
  - europe-west1
  - us-central1 (us-central1-a and us-central1-c zones only)
  - us-east1 (us-east1-d zone only)
  - us-west1 (us-west1-b and us-west1-c zones only)
- Attached VMs must have an N2 machine type.
- Minimum disk size is 10 GiB.
- Disks in multi-writer mode don't support attaching more than 2 VMs at a time.
- Persistent Disk volumes in multi-writer mode don't support Persistent Disk metrics.
- Disks in multi-writer mode can't change to read-only mode.
- You can't use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
- You can't create snapshots or images from Persistent Disk volumes in multi-writer mode.
- Disks in multi-writer mode have lower IOPS limits. See disk performance for details.
- You can't resize a multi-writer Persistent Disk volume.
- When creating an instance using the Google Cloud CLI, you can't create a multi-writer Persistent Disk volume using the --create-disk flag.
Share an SSD Persistent Disk volume in multi-writer mode between VMs
You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:
gcloud
Create and attach a zonal Persistent Disk volume by using the gcloud CLI:
Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

    gcloud beta compute disks create DISK_NAME \
        --size DISK_SIZE \
        --type pd-ssd \
        --multi-writer
Replace the following:
- DISK_NAME: the name of the new disk
- DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk INSTANCE_NAME \
        --disk DISK_NAME
Replace the following:
- INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
- DISK_NAME: the name of the new disk that you are attaching to the VM
Repeat the gcloud compute instances attach-disk command, but replace INSTANCE_NAME with the name of your second VM.
After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.
REST
Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.
In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for the disk. Include the multiWriter property with a value of True to indicate that the disk must be shareable between the VMs in multi-writer mode.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks
    {
      "name": "DISK_NAME",
      "sizeGb": "DISK_SIZE",
      "type": "zones/ZONE/diskTypes/pd-ssd",
      "multiWriter": "True"
    }
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your VM and new disk are located
- DISK_NAME: the name of the new disk
- DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
To attach the disk to an instance, construct a POST request to the compute.instances.attachDisk method. Include the URL of the zonal Persistent Disk volume that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    {
      "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your VM and new disk are located
- INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume
- DISK_NAME: the name of the new disk
To attach the disk to a second VM, repeat the instances.attachDisk request from the previous step, setting INSTANCE_NAME to the name of the second VM.
After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.
What's next
- Learn about cross-zonal synchronous disk replication.
- Learn about Persistent Disk Asynchronous Replication.