Share disks between VMs


You can access the same disk from multiple virtual machine (VM) instances by attaching the disk to each VM. You can attach a disk in read-only mode or multi-writer mode to a VM.

With read-only mode, multiple VMs can only read data from the disk. None of the VMs can write to the disk. Sharing a disk in read-only mode between VMs is less expensive than having copies of the same data on multiple disks.

With multi-writer mode, multiple VMs can read and write to the same disk. This is useful for highly available (HA) shared file systems and databases like SQL Server Failover Cluster Infrastructure (FCI).

You can share a zonal disk only between VMs in the same zone. Regional disks can be shared only with VMs in the same zones as the disk's replicas.

There are no additional costs associated with sharing a disk between VMs. VMs don't have to use the same machine type to share a disk, but each VM must use a machine type that supports disk sharing.

This document discusses multi-writer and read-only disk sharing in Compute Engine, including the supported disk types and performance considerations.

Before you begin

  • If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.
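       For example, assuming you want to use the us-central1 region and the us-central1-a zone:

         gcloud config set compute/region us-central1
         gcloud config set compute/zone us-central1-a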

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Enable disk sharing

You can attach an existing Hyperdisk or Persistent Disk volume to multiple VMs. However, for Hyperdisk volumes, you must first put the disk in multi-writer or read-only mode by setting its access mode.

A Hyperdisk volume's access mode is a property that determines how VMs can access the disk.

The available access modes are as follows:

  • Single-writer mode (READ_WRITE_SINGLE): This is the default access mode. The disk can be attached to at most one VM at a time, and that VM has read-write access to the disk.
  • Read-only mode (READ_ONLY_MANY): The disk can be attached to multiple VMs simultaneously, each in read-only mode. VMs can't write to the disk in this mode. Required for read-only sharing.
  • Multi-writer mode (READ_WRITE_MANY): The disk can be attached to multiple VMs simultaneously in read-write mode. Required for multi-writer sharing.

Support for each access mode varies by Hyperdisk type, as follows. You can't set the access mode for Hyperdisk Throughput or Hyperdisk Extreme volumes.

  • Hyperdisk Balanced and Hyperdisk Balanced High Availability (Preview): Single-writer mode and Multi-writer mode
  • Hyperdisk ML: Single-writer mode and Read-only mode
  • Hyperdisk Throughput and Hyperdisk Extreme: Single-writer mode only

For disks that can be shared between VMs, you can set the access mode at or after disk creation. For instructions on setting the access mode, see set the disk's access mode.
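For example, you can check a Hyperdisk volume's current access mode by reading the disk's accessMode property with the gcloud CLI. The disk name and zone in the following sketch are placeholders:

  gcloud compute disks describe example-disk \
      --zone=us-central1-a \
      --format="value(accessMode)"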

Read-only mode for Hyperdisk and Persistent Disk

This section discusses sharing a single disk in read-only mode between multiple VMs.

Supported disk types for read-only mode

You can attach these disk types to multiple VMs in read-only mode:

  • Hyperdisk ML
  • Zonal and regional Balanced Persistent Disk
  • SSD Persistent Disk
  • Standard Persistent Disk

Performance in read-only mode

Attaching a disk in read-only mode to multiple VMs doesn't affect the disk's performance. Each VM can still reach the maximum disk performance possible for the VM's machine series.

Limitations for sharing disks in read-only mode

  • If you share a Hyperdisk ML volume in read-only mode, you can't re-enable write access to the disk.
  • You can attach a Hyperdisk ML volume to up to 100 VMs during every 30-second interval.
  • The maximum number of VMs a disk can be attached to varies by disk type:
    • For Hyperdisk ML volumes, the maximum number of VMs depends on the provisioned size, as follows:
      • Volumes less than 256 GiB in size: 2,500 VMs
      • Volumes with capacity of 256 GiB or more, and less than 1 TiB: 1,500 VMs
      • Volumes with capacity of 1 TiB or more, and less than 2 TiB: 600 VMs
      • Volumes with 2 TiB or more of capacity: 30 VMs
    • Zonal or regional Balanced Persistent Disk volumes in read-only mode support at most 10 VMs.
    • For SSD Persistent Disk, Google recommends at most 100 VMs.
    • For Standard Persistent Disk volumes, the recommended maximum is 10 VMs.

How to share a disk in read-only mode between VMs

If you're not using Hyperdisk ML, attach the disk to multiple VMs by following the instructions in Attach a non-boot disk to a VM.

To attach a Hyperdisk ML volume in read-only mode to multiple VMs, you must first set the disk's access mode to read-only mode. After you set the access mode, attach the Hyperdisk ML volume to your VMs.
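For example, the following gcloud sketch puts a detached Hyperdisk ML volume into read-only mode and then attaches it to two VMs. The disk name, VM names, and zone are placeholders, and the sketch assumes the gcloud CLI's --access-mode flag for changing a Hyperdisk volume's access mode:

  # Set the detached Hyperdisk ML volume to read-only mode.
  gcloud compute disks update example-ml-disk \
      --zone=us-central1-a \
      --access-mode=READ_ONLY_MANY

  # Attach the volume to each VM in read-only mode.
  gcloud compute instances attach-disk vm-1 \
      --disk=example-ml-disk \
      --mode=ro \
      --zone=us-central1-a

  gcloud compute instances attach-disk vm-2 \
      --disk=example-ml-disk \
      --mode=ro \
      --zone=us-central1-a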

Multi-writer mode for Hyperdisk

Disks in multi-writer mode are suitable for use cases like the following:

  • Implementing SQL Server Failover Cluster Infrastructure (FCI)
  • Clustered file systems where multiple VMs all write to the same disk
  • Highly available systems in active-active or active-passive mode. Attaching the same disk to multiple VMs can prevent disruptions because if one VM fails, other VMs still have access to the disk and can continue to run the workload.

If your primary goal is shared file storage among VMs, consider a fully managed file storage service, such as Filestore, instead of sharing a disk.

Supported Hyperdisk and machine types for multi-writer mode

You can use Hyperdisk Balanced and Hyperdisk Balanced High Availability (Preview) volumes in multi-writer mode. You can attach a single volume in multi-writer mode to at most 8 VMs.

Hyperdisk Balanced supports multi-writer mode for the following machine types:

Hyperdisk Balanced High Availability supports multi-writer mode for the following machine types:

Multi-writer mode for Hyperdisk supports the NVMe interface. If you attach a disk in multi-writer mode to a VM, the VM's boot disk must also use the NVMe interface.

Supported file systems for multi-writer mode

To access a disk from multiple VMs, use one of the following options:

  • Persistent Reservations (PR), especially for HA systems such as SQL Server FCI and NetApp ONTAP. Google recommends using PR commands to provide I/O fencing and maintain data integrity. For a list of the supported PR commands, see I/O fencing with persistent reservations.
  • Clustered file systems that support multiple instances writing to the same volume. Examples of such file systems include OCFS2, VMFS, and GFS2.
  • Scale-out software systems like Lustre and IBM Spectrum Scale.
  • Your own synchronization mechanism to coordinate concurrent reads and writes.

Hyperdisk performance in multi-writer mode

When you attach a disk in multi-writer mode to multiple VMs, the disk's provisioned performance is shared evenly among all attached VMs, including VMs that aren't running or aren't actively using the disk. However, the maximum performance for each VM is ultimately limited by the throughput and IOPS limits of that VM's machine type.

For example, suppose you attach a Hyperdisk Balanced volume provisioned with 100,000 IOPS to 2 VMs. Each VM gets 50,000 IOPS concurrently.

The following list shows the maximum IOPS and throughput each VM in this example would get, depending on how many VMs you attach the disk to. Each time you attach the disk to another VM, Compute Engine asynchronously adjusts the performance allotted to each previously attached VM.

  • 1 VM: 100,000 IOPS and 1,200 MiBps per VM
  • 2 VMs: 50,000 IOPS and 600 MiBps per VM
  • 3 VMs: ~33,333 IOPS and 400 MiBps per VM
  • 4 VMs: 25,000 IOPS and 300 MiBps per VM
  • 5 VMs: 20,000 IOPS and 240 MiBps per VM
  • 6 VMs: ~16,667 IOPS and 200 MiBps per VM
  • 7 VMs: ~14,285 IOPS and ~172 MiBps per VM
  • 8 VMs: 12,500 IOPS and 150 MiBps per VM

Limitations for sharing Hyperdisk volumes in multi-writer mode

  • You can attach a single Hyperdisk Balanced or Hyperdisk Balanced High Availability volume in multi-writer mode to at most 8 VMs.
  • You can't clone a disk in multi-writer mode.
  • You can't create snapshots, machine images, or disk images from disks in multi-writer mode.
  • You can't create a Hyperdisk volume in multi-writer mode when you are creating or editing a VM. You must create the disk separately first, and then attach it to the VM.
  • You can't resize a disk in multi-writer mode unless you detach the disk from all VMs.
  • You can make the following changes to a disk that's in multi-writer mode, even if the disk is already attached to multiple VMs:

    • Modify its provisioned IOPS or throughput
    • Attach the disk to additional VMs

    When you make one of these changes, Compute Engine redistributes the disk's provisioned performance across all the attached VMs. This process might take up to 6 hours to complete.

  • You can't enable auto-delete for disks in multi-writer mode.

  • You can't use a disk in multi-writer mode as the boot disk for a VM.

  • Disks in multi-writer mode can't be used with sole-tenant VMs.

  • You must attach the disk using the same interface type as the VM's boot disk.

  • You can't change the machine type of a VM that's attached to a disk in multi-writer mode.

  • Storage pools don't support disks in multi-writer mode.

Available regions

Multi-writer mode is supported in all the regions where Hyperdisk Balanced and Hyperdisk Balanced High Availability are available.

I/O fencing with persistent reservations

Google recommends using persistent reservations (PR) with disks in multi-writer mode to provide I/O fencing. Persistent reservations manage access to the disk between VMs, which prevents data corruption caused by multiple VMs writing to the same portion of the disk at the same time.

Hyperdisk volumes in multi-writer mode support NVMe (spec 1.2.1) reservations.

Supported reservation modes

The following reservation modes are supported:

  • Write Exclusive: there is a single reservation holder, which is the only writer. All other registrants and non-registrants have read-only access.
  • Write Exclusive - Registrants Only: there is a single reservation holder. All registrants have read and write access to the disk, and non-registrants have read-only access.

The following reservation modes aren't supported:

  • Write Exclusive - All registrants
  • Exclusive Access
  • Exclusive Access - Registrants Only
  • Exclusive Access - All registrants

NVMe Get Features - Host Identifier is supported. The VM number is used as the default Host ID.

The following NVMe reservation features are not supported:

  • Set Features - Host Identifier
  • Reservation notifications:
    • Get Log Page
    • Reservation Notification Mask

Supported commands

NVMe reservations support the following commands:

  • Reservation Register Action (RREGA) - Replace/Register/Unregister - IEKEY
  • Reservation Acquire Action (RACQA) - Acquire/Preempt - IEKEY
  • Reservation Release Action (RRELA) - Release/Clear - IEKEY
  • Reservation Report
  • Reservation capabilities (RESCAP) field in the identify namespace data structure.

NVMe reservations don't support the following commands:

  • Preempt and Abort
  • Disabling Persist Through Power Loss (PTPL). PTPL is always enabled.
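As an illustration, a registrant can issue these reservation commands from within a VM by using the nvme-cli tool. The following sketch assumes the shared disk is attached as /dev/nvme0n2, that nvme-cli is installed, and uses an arbitrary reservation key; it registers the key, acquires a Write Exclusive - Registrants Only reservation, and reports the reservation state:

  # Register a reservation key for this host (Reservation Register, RREGA: Register).
  sudo nvme resv-register /dev/nvme0n2 --nrkey=0xabc123 --rrega=0

  # Acquire the reservation (Reservation Acquire, RACQA: Acquire) with
  # reservation type 3 (Write Exclusive - Registrants Only).
  sudo nvme resv-acquire /dev/nvme0n2 --crkey=0xabc123 --rtype=3 --racqa=0

  # Report the registered keys and the current reservation status.
  sudo nvme resv-report /dev/nvme0n2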

How to share a disk in multi-writer mode

Before you attach a disk in multi-writer mode to multiple VMs, you must set the disk's access mode to multi-writer. You can set the access mode for a disk when you create it. You can also set the access mode for an existing disk, but you must first detach the disk from all VMs.

To create and use a new disk in multi-writer mode, follow these steps:

  1. Create the disk, setting its access mode to multi-writer.
  2. Attach the disk to each VM.

To use an existing disk in multi-writer mode, follow these steps:

  1. Detach the disk from all VMs.
  2. Set the disk's access mode to multi-writer.
  3. Attach the disk to each VM.
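For example, the following gcloud sketch follows the first procedure: it creates a new Hyperdisk Balanced volume in multi-writer mode and attaches it to two VMs. The disk name, VM names, zone, size, and provisioned performance values are placeholders, and the sketch assumes the gcloud CLI's --access-mode, --provisioned-iops, and --provisioned-throughput flags:

  # Create a Hyperdisk Balanced volume with its access mode set to multi-writer.
  gcloud compute disks create example-mw-disk \
      --type=hyperdisk-balanced \
      --size=100GB \
      --provisioned-iops=10000 \
      --provisioned-throughput=500 \
      --access-mode=READ_WRITE_MANY \
      --zone=us-central1-a

  # Attach the disk to each VM with read-write access.
  gcloud compute instances attach-disk vm-1 \
      --disk=example-mw-disk \
      --zone=us-central1-a

  gcloud compute instances attach-disk vm-2 \
      --disk=example-mw-disk \
      --zone=us-central1-a

The disk's provisioned performance in this example is then shared evenly between the two VMs, as described in Hyperdisk performance in multi-writer mode.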

Multi-writer mode for Persistent Disk volumes

You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 VMs simultaneously so that both VMs can read and write to the disk.

If you need to share a disk between more than two VMs, or you're using a machine series other than N2, consider using Hyperdisk Balanced or Hyperdisk Balanced High Availability volumes in multi-writer mode instead, as described earlier in this document.

To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.

Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage. For more information, see Best practices in this document.

If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.

Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.

The following SCSI PR commands are supported:

  • IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
  • OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}

For instructions, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.
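As an illustration, high-availability software typically issues these SCSI PR commands through a tool such as sg_persist from the sg3_utils package. The following sketch assumes the shared disk is attached as /dev/sdb and uses an arbitrary reservation key; it registers the key, takes a Write Exclusive reservation, and reads back the reservation state:

  # Register a reservation key for this host (OUT REGISTER).
  sudo sg_persist --out --register --param-sark=0x1 /dev/sdb

  # Reserve the disk with a Write Exclusive reservation (OUT RESERVE, type 1).
  sudo sg_persist --out --reserve --param-rk=0x1 --prout-type=1 /dev/sdb

  # Read the registered keys and current reservation (IN READ KEYS, IN READ RESERVATION).
  sudo sg_persist --in --read-keys /dev/sdb
  sudo sg_persist --in --read-reservation /dev/sdb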

Supported Persistent Disk types for multi-writer mode

You can simultaneously attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 VMs.

Best practices for multi-writer mode

  • I/O fencing using SCSI PR commands results in a crash-consistent state of Persistent Disk data. Some file systems don't have crash consistency and therefore might become corrupt if you use SCSI PR commands.
  • Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
  • Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple VMs.

Persistent Disk performance in multi-writer mode

Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.

Zonal SSD Persistent Disk in multi-writer mode:

  • Maximum sustained IOPS:
    • Read IOPS per GB: 30
    • Write IOPS per GB: 30
    • Read IOPS per instance: 15,000–100,000*
    • Write IOPS per instance: 15,000–100,000*
  • Maximum sustained throughput (MB/s):
    • Read throughput per GB: 0.48
    • Write throughput per GB: 0.48
    • Read throughput per instance: 240–1,200*
    • Write throughput per instance: 240–1,200*

* Persistent Disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.
Attaching a multi-writer disk to multiple virtual machine instances does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit.

To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.

Restrictions for sharing a disk in multi-writer mode

  • Multi-writer mode is only supported for SSD type Persistent Disk volumes.
  • You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
    • australia-southeast1
    • europe-west1
    • us-central1 (us-central1-a and us-central1-c zones only)
    • us-east1 (us-east1-d zone only)
    • us-west1 (us-west1-b and us-west1-c zones only)
  • Attached VMs must have an N2 machine type.
  • Minimum disk size is 10 GiB.
  • Disks in multi-writer mode don't support attaching more than 2 VMs at a time.
  • Persistent Disk volumes in multi-writer mode don't support Persistent Disk metrics.
  • Disks in multi-writer mode cannot change to read-only mode.
  • You cannot use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
  • You can't create snapshots or images from Persistent Disk volumes in multi-writer mode.
  • Multi-writer mode disks have lower IOPS and throughput limits. For details, see Persistent Disk performance in multi-writer mode.
  • You can't resize a multi-writer Persistent Disk volume.
  • When creating a VM using the Google Cloud CLI, you can't create a multi-writer Persistent Disk volume using the --create-disk flag.

Share an SSD Persistent Disk volume in multi-writer mode between VMs

You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:

gcloud

Create and attach a zonal Persistent Disk volume by using the gcloud CLI:

  1. Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

    gcloud beta compute disks create DISK_NAME \
       --size DISK_SIZE \
       --type pd-ssd \
       --multi-writer
    

    Replace the following:

    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk INSTANCE_NAME \
       --disk DISK_NAME
    

    Replace the following:

    • INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
    • DISK_NAME: the name of the new disk that you are attaching to the VM
  3. Repeat the gcloud compute instances attach-disk command but replace INSTANCE_NAME with the name of your second VM.

After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.

REST

Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.

  1. In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of true to indicate that the disk must be shareable between the VMs in multi-writer mode.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks
    
    {
    "name": "DISK_NAME",
    "sizeGb": "DISK_SIZE",
    "type": "zones/ZONE/diskTypes/pd-ssd",
    "multiWriter": "True"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. To attach the disk to a VM, construct a POST request to the compute.instances.attachDisk method. Include the URL to the zonal Persistent Disk volume that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    
    {
    "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume.
    • DISK_NAME: the name of the new disk
  3. To attach the disk to a second VM, repeat the instances.attachDisk request from the previous step, replacing INSTANCE_NAME with the name of the second VM.

After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.

What's next