To access the same disk from multiple virtual machine (VM) instances, you can enable read-only or multi-writer sharing.
Read-only sharing allows static access to the data on a disk from multiple VMs. The VMs that the disk is attached to can only read data from the disk, and can't write to the disk.
Multi-writer mode grants multiple VMs read-write access to the same disk.
VMs must be in the same zone to share a zonal disk. Similarly, regional disks can only be shared with VMs in the same zones as the disk's replicas.
This document discusses disk sharing in Compute Engine and how to enable it.
Before you begin
- To share a Hyperdisk ML volume between VMs, you must set the access mode of the Hyperdisk ML volume to read-only. To change a Hyperdisk ML volume's access mode, see Change the access mode of a Hyperdisk ML volume.
- If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:
  gcloud init
- Set a default region and zone.
Java
To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- Create local authentication credentials for your Google Account:
  gcloud auth application-default login
For more information, see Set up authentication for a local development environment.
Python
To use the Python samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- Create local authentication credentials for your Google Account:
  gcloud auth application-default login
For more information, see Set up authentication for a local development environment.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles and permissions
To get the permissions that you need to share a disk between VMs, ask your administrator to grant you the following IAM roles on the project:
- Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
- To connect to a VM that can run as a service account: Service Account User (roles/iam.serviceAccountUser)
For more information about granting roles, see Manage access.
These predefined roles contain the permissions required to share a disk between VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to share a disk between VMs:
- To attach a disk to a VM:
  - compute.instances.attachDisk on the VM
  - compute.disks.use on the disk that you want to attach to the VM
You might also be able to get these permissions with custom roles or other predefined roles.
Overview of read-only mode
To share static data on a disk between multiple VMs, attach the disk to each VM in read-only mode. Sharing a single disk between multiple VMs is less expensive than having copies of the same data on multiple disks.
Supported disk types for read-only mode
The following Persistent Disk and Google Cloud Hyperdisk types support attachment to multiple VMs in read-only mode:
- Hyperdisk ML
- Zonal and regional Balanced Persistent Disk
- SSD Persistent Disk
- Standard Persistent Disk
Performance in read-only mode
Attaching a read-only disk to multiple VMs does not affect the disk's performance. Each VM can still reach the maximum disk performance possible for the VM's machine series.
Restrictions for sharing disks in read-only mode
- If you share a Hyperdisk ML volume in read-only mode, you can't re-enable write access to the disk.
- You can attach a Hyperdisk ML volume to up to 100 VMs during every 30-second interval.
- The maximum number of VMs a disk can be attached to varies by disk type:
  - For Hyperdisk ML volumes, the maximum number of VMs depends on the provisioned size, as follows:
    - Volumes less than 256 GiB in size: 2,500
    - Volumes with capacity of 256 GiB or more, and less than 1 TiB: 1,500
    - Volumes with capacity of 1 TiB or more, and less than 2 TiB: 600
    - Volumes with 2 TiB or more of capacity: 30
  - Zonal or regional Balanced Persistent Disk volumes in read-only mode support at most 10 VMs.
  - For SSD Persistent Disk, Google recommends at most 100 VMs.
  - For Standard Persistent Disk volumes, the recommended maximum is 10 VMs.
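The size-based Hyperdisk ML limits above can be expressed as a small lookup. This is an illustrative helper for capacity planning; the function name and structure are our own, not part of any Google Cloud SDK:

```python
def max_hyperdisk_ml_attachments(size_gib: int) -> int:
    """Return the maximum number of VMs a Hyperdisk ML volume of the
    given provisioned size (in GiB) can be attached to in read-only mode,
    using the documented size thresholds. Illustrative only."""
    if size_gib < 256:
        return 2500
    if size_gib < 1024:   # 256 GiB or more, less than 1 TiB
        return 1500
    if size_gib < 2048:   # 1 TiB or more, less than 2 TiB
        return 600
    return 30             # 2 TiB or more

# Example: a 512 GiB volume supports up to 1,500 read-only attachments.
print(max_hyperdisk_ml_attachments(512))  # → 1500
```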
Prepare to share a disk in read-only mode
If you're not using Hyperdisk ML, you don't need to follow any additional steps. You can share the disk by following the instructions in Share a disk in read-only mode between multiple VMs.
To share a Hyperdisk ML volume in read-only mode, you must set the disk's access mode property to read-only mode. The access mode indicates the type of access granted to any VM the disk is attached to. If you're using a Persistent Disk volume, you don't have to manually set the access mode.
The available access modes for Hyperdisk volumes are as follows:
- Read-only mode (READ_ONLY_MANY): grants read-only access to all VMs attached to the disk.
- Read-write mode (READ_WRITE_SINGLE): allows only one VM to be attached to the disk, and grants that VM read-write access. This is the default access mode.

To share a Hyperdisk ML volume between VMs, change the access mode to READ_ONLY_MANY.
After you enable read-only mode, follow the steps to Share a disk in read-only mode between multiple VMs.
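As a rough sketch, changing the access mode through the REST API amounts to updating the disk resource's accessMode field. The helper below only assembles the request; the use of PATCH, the `paths` query parameter, and the URL shape are assumptions on our part — the linked page on changing a Hyperdisk ML volume's access mode is the authoritative reference:

```python
def build_access_mode_update(project: str, zone: str, disk: str,
                             mode: str = "READ_ONLY_MANY") -> dict:
    """Assemble a disks.update request that sets a Hyperdisk volume's
    accessMode. Illustrative sketch; verify the request format against
    the Compute Engine API documentation before use."""
    return {
        "method": "PATCH",  # assumed; disks.update is a partial update
        "url": (f"https://compute.googleapis.com/compute/v1/projects/"
                f"{project}/zones/{zone}/disks/{disk}?paths=accessMode"),
        "body": {"accessMode": mode},
    }

req = build_access_mode_update("my-project", "us-central1-a", "my-ml-disk")
print(req["body"])  # {'accessMode': 'READ_ONLY_MANY'}
```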
Share a disk in read-only mode between VMs
This section describes how to attach a non-boot Hyperdisk ML or Persistent Disk volume in read-only mode to multiple VMs.
Console
In the Google Cloud console, go to the VM instances page.
In the list of VMs in your project, click the name of the VM where you want to attach the disk. The VM instance details page opens.
On the instance details page, click Edit.
In the Additional disks section, click one of the following:
- Add a disk to add a new disk.
- Attach existing disk to select an existing disk and attach it in read-only mode to your VM.
In the Disk list, select the disk you want to attach. If the disk isn't listed, make sure it's in the same location as the VM. This means the same zone for a zonal disk and the same region for a regional disk.
For Disk attachment mode, select Read-only.
Specify other options for your disk.
To apply the changes to the disk, click Done.
To apply your changes to the VM, click Save.
Connect to the VM and mount the disk.
Repeat this process to attach the disk to other VMs in read-only mode.
gcloud
In the gcloud CLI, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME \
    --mode ro
Replace the following:
- INSTANCE_NAME: the name of the VM where you want to attach the disk
- DISK_NAME: the name of the disk that you want to attach
After you attach the disk, connect to the VM and mount the disk.
Repeat this command for each VM where you want to add this disk in read-only mode.
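If you're scripting the repeat step across many VMs, you can generate one attach-disk invocation per instance. This is a minimal sketch; the helper name and the placeholder VM names are ours:

```python
def attach_disk_commands(vm_names: list[str], disk_name: str) -> list[list[str]]:
    """Build one `gcloud compute instances attach-disk` command per VM,
    attaching the same disk in read-only mode. Illustrative only; run the
    resulting commands with your shell or subprocess.run()."""
    return [
        ["gcloud", "compute", "instances", "attach-disk", vm,
         "--disk", disk_name, "--mode", "ro"]
        for vm in vm_names
    ]

for cmd in attach_disk_commands(["vm-1", "vm-2"], "shared-disk"):
    print(" ".join(cmd))
```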
Java
Before trying this sample, follow the Java setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Java API reference documentation.
To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
Before trying this sample, follow the Python setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Python API reference documentation.
To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
REST
In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
  "source": "zones/ZONE/disks/DISK_NAME",
  "mode": "READ_ONLY"
}
Replace the following:
- INSTANCE_NAME: the name of the VM where you want to attach the disk
- PROJECT_ID: your project ID
- ZONE: the zone where your disk is located
- DISK_NAME: the name of the disk that you are attaching
After you attach the disk, connect to the VM and mount the disk.
Repeat this request for each VM where you want to add this disk in read-only mode.
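If you're issuing this request programmatically, the URL and body can be assembled with a small helper. This sketch uses only the Python standard library; the function name is ours, not part of any Google client library, and sending the request still requires an OAuth access token:

```python
def build_attach_disk_request(project: str, zone: str,
                              instance: str, disk: str,
                              mode: str = "READ_ONLY") -> dict:
    """Assemble the attachDisk POST request shown above, attaching a
    zonal disk in read-only mode. Illustrative only."""
    return {
        "url": (f"https://compute.googleapis.com/compute/v1/projects/"
                f"{project}/zones/{zone}/instances/{instance}/attachDisk"),
        "body": {"source": f"zones/{zone}/disks/{disk}", "mode": mode},
    }

req = build_attach_disk_request("my-project", "us-central1-a", "vm-1", "shared-disk")
print(req["body"])  # {'source': 'zones/us-central1-a/disks/shared-disk', 'mode': 'READ_ONLY'}
```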
Overview of multi-writer mode
You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk. If you have more than two N2 VMs or you're using any other machine series, you can use one of the following options:
- Connect your VMs to Cloud Storage
- Connect your VMs to Filestore
- Create a network file server on Compute Engine
To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.
Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage. For more information, see Best practices in this document.

If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.
Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.
The following SCSI PR commands are supported:
- IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
- OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}
For instructions on sharing a disk in multi-writer mode, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.
Supported disk types for multi-writer mode
You can simultaneously attach SSD Persistent Disk in multi-writer mode to up to 2 N2 VMs.
Best practices for multi-writer mode
- I/O fencing using SCSI PR commands results in a crash consistent state of Persistent Disk data. Some file systems don't have crash consistency and therefore might become corrupt if you use SCSI PR commands.
- Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
- Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple VMs.
Performance in multi-writer mode
Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.
| Zonal SSD persistent disk, multi-writer mode | |
| --- | --- |
| **Maximum sustained IOPS** | |
| Read IOPS per GB | 30 |
| Write IOPS per GB | 30 |
| Read IOPS per instance | 15,000–100,000* |
| Write IOPS per instance | 15,000–100,000* |
| **Maximum sustained throughput (MB/s)** | |
| Read throughput per GB | 0.48 |
| Write throughput per GB | 0.48 |
| Read throughput per instance | 240–1,200* |
| Write throughput per instance | 240–1,200* |
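The per-GB rates in the table translate into per-instance estimates as follows; this helper is illustrative (the function name is ours), and the per-instance ceiling also depends on the VM's machine shape, so the 100,000 IOPS / 1,200 MB/s caps are upper bounds, not guarantees:

```python
def multi_writer_limits(size_gb: int) -> dict:
    """Estimate per-instance limits for a zonal SSD Persistent Disk in
    multi-writer mode, using the table's rates: 30 IOPS per GB and
    0.48 MB/s per GB, capped at the documented per-instance maximums."""
    return {
        "iops": min(30 * size_gb, 100_000),
        "throughput_mbps": min(0.48 * size_gb, 1_200),
    }

print(multi_writer_limits(1000))  # {'iops': 30000, 'throughput_mbps': 480.0}
```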
To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.
Restrictions for sharing a disk in multi-writer mode
- Multi-writer mode is only supported for SSD type Persistent Disk volumes.
- You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
  - australia-southeast1
  - europe-west1
  - us-central1 (us-central1-a and us-central1-c zones only)
  - us-east1 (us-east1-d zone only)
  - us-west1 (us-west1-b and us-west1-c zones only)
- Attached VMs must have an N2 machine type.
- Minimum disk size is 10 GiB.
- Disks in multi-writer mode don't support attaching more than 2 VMs at a time.
- Multi-writer mode Persistent Disk volumes don't support Persistent Disk metrics.
- Disks in multi-writer mode cannot change to read-only mode.
- You cannot use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
- You can't create snapshots or images from Persistent Disk volumes in multi-writer mode.
- Disks in multi-writer mode have lower IOPS and throughput limits. See disk performance for details.
- You can't resize a multi-writer Persistent Disk volume.
- When creating a VM using the Google Cloud CLI, you can't create a multi-writer Persistent Disk volume using the --create-disk flag.
Share an SSD Persistent Disk volume in multi-writer mode between VMs
You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:
gcloud
Create and attach a zonal Persistent Disk volume by using the gcloud CLI:
Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

gcloud beta compute disks create DISK_NAME \
    --size DISK_SIZE \
    --type pd-ssd \
    --multi-writer
Replace the following:
- DISK_NAME: the name of the new disk
- DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME
Replace the following:
- INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
- DISK_NAME: the name of the new disk that you are attaching to the VM
Repeat the gcloud compute instances attach-disk command, but replace INSTANCE_NAME with the name of your second VM.
After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.
REST
Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.
In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of true to indicate that the disk must be shareable between the VMs in multi-writer mode.

POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks

{
  "name": "DISK_NAME",
  "sizeGb": "DISK_SIZE",
  "type": "zones/ZONE/diskTypes/pd-ssd",
  "multiWriter": true
}
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your VM and new disk are located
- DISK_NAME: the name of the new disk
- DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
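If you're building this disks.insert call in code, the URL and body can be assembled with a small helper. This sketch mirrors the request shown above using only the Python standard library; the function name is ours, and sending the request still requires an OAuth access token:

```python
def build_multi_writer_disk_request(project: str, zone: str,
                                    disk_name: str, size_gb: int) -> dict:
    """Assemble the beta disks.insert request for an empty SSD Persistent
    Disk volume in multi-writer mode. Illustrative only."""
    return {
        "url": (f"https://compute.googleapis.com/compute/beta/projects/"
                f"{project}/zones/{zone}/disks"),
        "body": {
            "name": disk_name,
            "sizeGb": str(size_gb),
            "type": f"zones/{zone}/diskTypes/pd-ssd",
            # No sourceImage or sourceSnapshot: the disk is created
            # empty and unformatted.
            "multiWriter": True,
        },
    }

req = build_multi_writer_disk_request("my-project", "us-central1-a", "shared-disk", 100)
print(req["body"]["multiWriter"])  # True
```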
To attach the disk to a VM, construct a POST request to the compute.instances.attachDisk method. Include the URL to the zonal Persistent Disk volume that you just created:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
  "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
}
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your VM and new disk are located
- INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume
- DISK_NAME: the name of the new disk
To attach the disk to a second VM, repeat the instances.attachDisk request from the previous step. Set INSTANCE_NAME to the name of the second VM.
After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.