You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk. To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.
Persistent Disk volumes in multi-writer mode provide shared block storage and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of such storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed for use with shared block storage. For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.
Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.
The following SCSI PR commands are supported:
- IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
- OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}
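As an illustrative sketch only, a high-availability agent on one VM could issue these commands with the sg_persist utility from the sg3_utils package. The device path and reservation key below are placeholders, and type 5 (Write Exclusive, Registrants Only) is just one common choice for failover fencing:

```shell
# Hypothetical device path; run on the VM that should own the reservation.
DEVICE=/dev/sdb

# Register this VM's key (REGISTER), then take a reservation
# (RESERVE, type 5: Write Exclusive - Registrants Only).
sg_persist --out --register --param-sark=0xabc123 "$DEVICE"
sg_persist --out --reserve --param-rk=0xabc123 --prout-type=5 "$DEVICE"

# Inspect reservation state from either VM (READ KEYS / READ RESERVATION).
sg_persist --in --read-keys "$DEVICE"
sg_persist --in --read-reservation "$DEVICE"
```

During failover, the standby VM can use PREEMPT to take over the reservation and fence the failed writer.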
Before you begin
If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.

Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:
  gcloud init
- Set a default region and zone.
Java
To use the Java samples on this page from a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- Create local authentication credentials for your Google Account:
  gcloud auth application-default login
For more information, see Set up authentication for a local development environment.
Python
To use the Python samples on this page from a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- Create local authentication credentials for your Google Account:
  gcloud auth application-default login
For more information, see Set up authentication for a local development environment.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
  gcloud init
Restrictions
- Available only for SSD type Persistent Disk volumes.
- You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
  - australia-southeast1
  - europe-west1
  - us-central1 (us-central1-a and us-central1-c zones only)
  - us-east1 (us-east1-d zone only)
  - us-west1 (us-west1-b and us-west1-c zones only)
- Attached VMs must have an N2 machine type.
- Minimum size: 10 GB
- Maximum attached VMs: 2
- Multi-writer mode Persistent Disk volumes don't support Persistent Disk metrics.
- Disks in multi-writer mode cannot change to read-only mode.
- You cannot use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
- You cannot create snapshots or images from Persistent Disk volumes in multi-writer mode.
- Lower IOPS limits apply. See disk performance for details.
- You cannot resize a multi-writer Persistent Disk volume.
- When creating a VM using the Google Cloud CLI, you cannot create a multi-writer Persistent Disk volume using the --create-disk flag.
Best practices
- I/O fencing using SCSI PR commands results in a crash consistent state of Persistent Disk data. Some file systems don't have crash consistency and therefore might become corrupt if you use SCSI PR commands.
- Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
- Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple VMs.
Performance
Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.
| Zonal SSD persistent disk, multi-writer mode | |
|---|---|
| Maximum sustained IOPS | |
| Read IOPS per GB | 30 |
| Write IOPS per GB | 30 |
| Read IOPS per instance | 15,000–100,000* |
| Write IOPS per instance | 15,000–100,000* |
| Maximum sustained throughput (MB/s) | |
| Read throughput per GB | 0.48 |
| Write throughput per GB | 0.48 |
| Read throughput per instance | 240–1,200* |
| Write throughput per instance | 240–1,200* |
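The per-GB rates in the table scale with provisioned disk size up to the per-instance caps. The following back-of-the-envelope sketch uses the table's per-GB rates; the exact per-instance limit within the 15,000–100,000 IOPS and 240–1,200 MB/s ranges depends on the VM, so the upper caps here are assumptions for illustration:

```python
# Estimate multi-writer SSD PD limits from the per-GB rates in the table above.
# The per-instance caps vary by VM; the maximum values in the ranges are
# used here purely for illustration.
IOPS_PER_GB = 30
THROUGHPUT_MBPS_PER_GB = 0.48
MAX_INSTANCE_IOPS = 100_000
MAX_INSTANCE_THROUGHPUT_MBPS = 1_200

def estimated_limits(size_gb: int) -> tuple:
    """Return (IOPS, MB/s) the disk size can sustain, capped per instance."""
    iops = min(size_gb * IOPS_PER_GB, MAX_INSTANCE_IOPS)
    throughput = min(size_gb * THROUGHPUT_MBPS_PER_GB,
                     MAX_INSTANCE_THROUGHPUT_MBPS)
    return iops, throughput

print(estimated_limits(1000))   # 1 TB disk: per-GB rates dominate
print(estimated_limits(5000))   # large disk: per-instance caps dominate
```

For example, a 1,000 GB disk sustains at most 30,000 IOPS and 480 MB/s per instance, while a 5,000 GB disk is held to the per-instance caps.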
To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.
Share a zonal Persistent Disk volume between VM instances
This section explains the different methods to share zonal Persistent Disk volumes between multiple VMs.
Share a disk in read-only mode between multiple VMs
You can attach a non-boot Persistent Disk volume to more than one VM in read-only mode, which lets you share static data between multiple VMs. Sharing static data between multiple VMs from one Persistent Disk volume is less expensive than replicating your data to unique disks for individual VMs.
If you need to share dynamic storage space between multiple VMs, you can use one of the following options:
- Connect your VMs to Cloud Storage
- Connect your VMs to Filestore
- Create a network file server on Compute Engine
- Create a Persistent Disk volume with multi-writer mode enabled and attach it to up to two VMs.
Console
1. In the Google Cloud console, go to the VM instances page.
2. In the list of VMs in your project, click the name of the VM where you want to attach the disk. The VM instance details page opens.
3. On the instance details page, click Edit.
4. In the Additional disks section, click one of the following:
   - Add a disk to add a disk in read-only mode to the VM.
   - Attach existing disk to select an existing disk and attach it in read-only mode to your VM.
5. Specify other options for your disk.
6. Click Done to apply the changes.
7. Click Save to apply your changes to the VM.
8. Connect to the VM and mount the disk.
9. Repeat this process to add the disk to other VMs in read-only mode.
gcloud
In the gcloud CLI, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME \
    --mode ro
Replace the following:
- INSTANCE_NAME: the name of the VM where you want to attach the zonal Persistent Disk volume
- DISK_NAME: the name of the disk that you want to attach
After you attach the disk, connect to the VM and mount the disk.
Repeat this command for each VM where you want to add this disk in read-only mode.
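To script that repetition, a small sketch (the VM and disk names are hypothetical) that builds one attach-disk command line per VM; each string could then be passed to a shell or to subprocess:

```python
# Build the gcloud command for attaching one disk read-only to several VMs.
# VM and disk names below are placeholders for illustration.
def attach_disk_ro_command(instance: str, disk: str) -> str:
    return (f"gcloud compute instances attach-disk {instance} "
            f"--disk {disk} --mode ro")

vms = ["web-server-1", "web-server-2", "web-server-3"]
commands = [attach_disk_ro_command(vm, "static-assets") for vm in vms]
for cmd in commands:
    print(cmd)
```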
Java
Java
Before trying this sample, follow the Java setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Java API reference documentation.
To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
Python
Before trying this sample, follow the Python setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Python API reference documentation.
To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
REST
In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
  "source": "zones/ZONE/disks/DISK_NAME",
  "mode": "READ_ONLY"
}
Replace the following:
- INSTANCE_NAME: the name of the VM where you want to attach the zonal Persistent Disk volume
- PROJECT_ID: your project ID
- ZONE: the zone where your disk is located
- DISK_NAME: the name of the disk that you are attaching
After you attach the disk, connect to the VM and mount the disk.
Repeat this request for each VM where you want to add this disk in read-only mode.
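Putting the pieces together, a minimal stdlib-only sketch that assembles the request URL and body for the call above (to actually send it you would still need to add an OAuth access token header; all names are placeholders):

```python
# Assemble the compute.instances.attachDisk request for read-only mode.
# Project, zone, and resource names are placeholders.
def build_attach_disk_request(project: str, zone: str,
                              instance: str, disk: str):
    url = (f"https://compute.googleapis.com/compute/v1/projects/{project}"
           f"/zones/{zone}/instances/{instance}/attachDisk")
    body = {"source": f"zones/{zone}/disks/{disk}", "mode": "READ_ONLY"}
    return url, body

url, body = build_attach_disk_request("my-project", "us-central1-a",
                                      "web-server-1", "static-assets")
print(url)
print(body)
```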
Share an SSD Persistent Disk volume in multi-writer mode between VMs
You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:
gcloud
Create and attach a zonal Persistent Disk volume by using the gcloud CLI:
Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

gcloud beta compute disks create DISK_NAME \
    --size DISK_SIZE \
    --type pd-ssd \
    --multi-writer
Replace the following:
- DISK_NAME: the name of the new disk
- DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME
Replace the following:
- INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
- DISK_NAME: the name of the new disk that you are attaching to the VM
Repeat the gcloud compute instances attach-disk command, but replace INSTANCE_NAME with the name of your second VM.
After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.
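As a sanity check before provisioning, the restrictions described earlier on this page can be encoded in a small helper. This is a sketch only, based on the bounds stated in the Restrictions section, not an authoritative API validation:

```python
# Validate multi-writer Persistent Disk parameters against the
# restrictions described on this page (sketch only).
def validate_multi_writer_disk(disk_type: str, size_gb: int,
                               num_vms: int) -> list:
    """Return a list of problems; an empty list means the config looks OK."""
    problems = []
    if disk_type != "pd-ssd":
        problems.append("multi-writer mode is available only for SSD "
                        "Persistent Disk volumes")
    if size_gb < 10:
        problems.append("minimum multi-writer disk size is 10 GB")
    if num_vms > 2:
        problems.append("a multi-writer disk can be attached to at most 2 VMs")
    return problems

print(validate_multi_writer_disk("pd-ssd", 100, 2))     # []
print(validate_multi_writer_disk("pd-standard", 5, 3))  # three problems
```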
REST
Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.
In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of True to indicate that the disk must be shareable between the VMs in multi-writer mode.

POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks

{
  "name": "DISK_NAME",
  "sizeGb": "DISK_SIZE",
  "type": "zones/ZONE/diskTypes/pd-ssd",
  "multiWriter": "True"
}
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your VM and new disk are located
- DISK_NAME: the name of the new disk
- DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
Construct a POST request to the compute.instances.attachDisk method, and include the URL to the zonal Persistent Disk volume that you just created:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
  "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
}
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your VM and new disk are located
- INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume
- DISK_NAME: the name of the new disk
Repeat the compute.instances.attachDisk request, but specify the second VM instead.
After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.
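The two REST calls in this section can be sketched end to end with a small stdlib-only helper that assembles each URL and body. The project, zone, disk, and VM names are placeholders, and sending the requests would additionally require an OAuth access token header:

```python
# Build the two REST calls from this section: create a multi-writer disk,
# then attach it to each of (at most) two N2 VMs. Names are placeholders.
def insert_disk_request(project: str, zone: str, disk: str, size_gb: int):
    url = (f"https://compute.googleapis.com/compute/beta/projects/{project}"
           f"/zones/{zone}/disks")
    body = {
        "name": disk,
        "sizeGb": str(size_gb),
        "type": f"zones/{zone}/diskTypes/pd-ssd",
        "multiWriter": "True",
    }
    return url, body

def attach_disk_request(project: str, zone: str, instance: str, disk: str):
    url = (f"https://compute.googleapis.com/compute/v1/projects/{project}"
           f"/zones/{zone}/instances/{instance}/attachDisk")
    body = {"source": f"/compute/v1/projects/{project}/zones/{zone}/disks/{disk}"}
    return url, body

create = insert_disk_request("my-project", "us-central1-a", "shared-disk", 100)
attaches = [attach_disk_request("my-project", "us-central1-a", vm, "shared-disk")
            for vm in ("n2-vm-1", "n2-vm-2")]
print(create[0])
for url, body in attaches:
    print(url, body["source"])
```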