Share Persistent Disk volumes between VMs


You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk. To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.

Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage. For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.

Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.

The following SCSI PR commands are supported:

  • IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
  • OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}
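
The following sketch shows how these commands might be issued from a Linux VM. It assumes the sg3_utils package is installed and that the multi-writer disk is visible in the guest as /dev/sdb; the key values are placeholders.

# Register this VM's reservation key with the shared device.
sudo sg_persist --out --register --param-sark=0xabc123 /dev/sdb

# Take a write-exclusive (type 1) reservation using the registered key.
sudo sg_persist --out --reserve --param-rk=0xabc123 --prout-type=1 /dev/sdb

# From any attached VM, inspect the registered keys and the current reservation.
sudo sg_persist --in --read-keys /dev/sdb
sudo sg_persist --in --read-reservation /dev/sdb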

Before you begin

  • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.
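
    For example, to default to one of the locations that supports multi-writer disks (listed under Restrictions later in this document), you could run:

      gcloud config set compute/region us-central1
      gcloud config set compute/zone us-central1-a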

    Java

    To use the Java samples on this page from a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. To initialize the gcloud CLI, run the following command:

      gcloud init
    3. Create local authentication credentials for your Google Account:

      gcloud auth application-default login

    For more information, see Set up authentication for a local development environment.

    Python

    To use the Python samples on this page from a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. To initialize the gcloud CLI, run the following command:

      gcloud init
    3. Create local authentication credentials for your Google Account:

      gcloud auth application-default login

    For more information, see Set up authentication for a local development environment.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
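
    For example, once the gcloud CLI is initialized, one way (shown here as a sketch, with placeholder values) to call the Compute Engine REST API with those credentials is to pass an access token in the Authorization header:

      # Read a disk resource using the gcloud CLI's access token.
      curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"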

Restrictions

  • Available only for SSD type Persistent Disk volumes.
  • You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
    • australia-southeast1
    • europe-west1
    • us-central1 (us-central1-a and us-central1-c zones only)
    • us-east1 (us-east1-d zone only)
    • us-west1 (us-west1-b and us-west1-c zones only)
  • Attached VMs must have an N2 machine type.
  • Minimum size: 10 GB
  • Maximum attached VMs: 2
  • Multi-writer mode Persistent Disk volumes don't support Persistent Disk metrics.
  • Disks in multi-writer mode cannot change to read-only mode.
  • You cannot use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
  • You cannot create snapshots or images from Persistent Disk volumes in multi-writer mode.
  • Lower IOPS limits. See disk performance for details.
  • You cannot resize a multi-writer Persistent Disk volume.
  • When creating a VM using the Google Cloud CLI, you cannot create a multi-writer Persistent Disk volume using the --create-disk flag.

Best practices

  • I/O fencing using SCSI PR commands results in a crash consistent state of Persistent Disk data. Some file systems don't have crash consistency and therefore might become corrupt if you use SCSI PR commands.
  • Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
  • Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple VMs.

Performance

Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.

Zonal SSD Persistent Disk in multi-writer mode

Maximum sustained IOPS:

  • Read IOPS per GB: 30
  • Write IOPS per GB: 30
  • Read IOPS per instance: 15,000–100,000*
  • Write IOPS per instance: 15,000–100,000*

Maximum sustained throughput (MB/s):

  • Read throughput per GB: 0.48
  • Write throughput per GB: 0.48
  • Read throughput per instance: 240–1,200*
  • Write throughput per instance: 240–1,200*

* Persistent Disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Attaching a multi-writer disk to multiple virtual machine instances does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit.
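For example, based on the per-GB limits in the preceding table, a 1,000 GB SSD Persistent Disk volume in multi-writer mode can sustain up to 30,000 read or write IOPS (1,000 GB × 30 IOPS per GB) and about 480 MB/s of read or write throughput (1,000 GB × 0.48 MB/s per GB), within the per-instance caps; the two attached VMs share that capacity.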

To learn how to share Persistent Disk volumes between multiple VMs, see Share a zonal Persistent Disk volume between VM instances in this document.

Share a zonal Persistent Disk volume between VM instances

This section explains the different methods to share zonal Persistent Disk volumes between multiple VMs.

Share a disk in read-only mode between multiple VMs

You can attach a non-boot Persistent Disk volume to more than one VM in read-only mode, which lets you share static data between multiple VMs. Sharing static data between multiple VMs from one Persistent Disk volume is less expensive than replicating your data to unique disks for individual VMs.

If you need to share dynamic storage space between multiple VMs, you can mount a Filestore file share on your VMs or share an SSD Persistent Disk volume in multi-writer mode, as described later in this document.

To attach a disk to your VMs in read-only mode, select one of the following methods:

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. In the list of VMs in your project, click the name of the VM where you want to attach the disk. The VM instance details page opens.

  3. On the instance details page, click Edit.

  4. In the Additional disks section, click one of the following:

    • Add a disk to create and attach a new disk in read-only mode to the VM.
    • Attach existing disk to select an existing disk and attach it in read-only mode to your VM.
  5. Specify other options for your disk.

  6. Click Done to apply the changes.

  7. Click Save to apply your changes to the VM.

  8. Connect to the VM and mount the disk.

  9. Repeat this process to add the disk to other VMs in read-only mode.

gcloud

In the gcloud CLI, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk INSTANCE_NAME \
  --disk DISK_NAME \
  --mode ro

Replace the following:

  • INSTANCE_NAME: the name of the VM where you want to attach the zonal Persistent Disk volume
  • DISK_NAME: the name of the disk that you want to attach

After you attach the disk, connect to the VM and mount the disk.

Repeat this command for each VM where you want to add this disk in read-only mode.
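
If you want to confirm the mode of each attached disk, one option (a sketch, not a required step) is to list the instance's disks with a gcloud projection:

# Each entry in the output shows an attached disk and its mode (READ_ONLY or READ_WRITE).
gcloud compute instances describe INSTANCE_NAME \
  --zone ZONE \
  --format="json(disks)"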

Java

Before trying this sample, follow the Java setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Java API reference documentation.

To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.compute.v1.AttachDiskInstanceRequest;
import com.google.cloud.compute.v1.AttachedDisk;
import com.google.cloud.compute.v1.InstancesClient;
import com.google.cloud.compute.v1.Operation;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AttachDisk {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project you want to use.
    String projectId = "your-project-id";

    // Name of the zone in which the instance you want to use resides.
    String zone = "zone-name";

    // Name of the compute instance you want to attach a disk to.
    String instanceName = "instance-name";

    // Full or partial URL of a persistent disk that you want to attach. This can be either
    // a regional or a zonal disk.
    // Valid formats:
    //     * https://www.googleapis.com/compute/v1/projects/{project}/zones/{zone}/disks/{disk_name}
    //     * /projects/{project}/zones/{zone}/disks/{disk_name}
    //     * /projects/{project}/regions/{region}/disks/{disk_name}
    String diskLink = String.format("/projects/%s/zones/%s/disks/%s",
        "project", "zone", "disk_name");

    // Specifies in what mode the disk will be attached to the instance. Available options are
    // `READ_ONLY` and `READ_WRITE`. Disk in `READ_ONLY` mode can be attached to
    // multiple instances at once.
    String mode = "READ_ONLY";

    attachDisk(projectId, zone, instanceName, diskLink, mode);
  }

  // Attaches a non-boot persistent disk to a specified compute instance.
  // The disk might be zonal or regional.
  // You need the following permissions to execute this action:
  // https://cloud.google.com/compute/docs/disks/regional-persistent-disk#expandable-1
  public static void attachDisk(String projectId, String zone, String instanceName, String diskLink,
      String mode)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the `instancesClient.close()` method on the client to safely
    // clean up any remaining background resources.
    try (InstancesClient instancesClient = InstancesClient.create()) {

      AttachDiskInstanceRequest attachDiskInstanceRequest = AttachDiskInstanceRequest.newBuilder()
          .setProject(projectId)
          .setZone(zone)
          .setInstance(instanceName)
          .setAttachedDiskResource(AttachedDisk.newBuilder()
              .setSource(diskLink)
              .setMode(mode)
              .build())
          .build();

      Operation response = instancesClient.attachDiskAsync(attachDiskInstanceRequest)
          .get(3, TimeUnit.MINUTES);

      if (response.hasError()) {
        System.out.println("Attach disk failed! " + response);
        return;
      }
      System.out.println("Attach disk - operation status: " + response.getStatus());
    }
  }
}

Python

Before trying this sample, follow the Python setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Python API reference documentation.

To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def attach_disk(
    project_id: str, zone: str, instance_name: str, disk_link: str, mode: str
) -> None:
    """
    Attaches a non-boot persistent disk to a specified compute instance. The disk might be zonal or regional.

    You need the following permissions to execute this action:
    https://cloud.google.com/compute/docs/disks/regional-persistent-disk#expandable-1

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone in which the instance you want to use resides.
        instance_name: name of the compute instance you want to attach a disk to.
        disk_link: full or partial URL to a persistent disk that you want to attach. This can be
            either a regional or a zonal disk.
            Expected formats:
                * https://www.googleapis.com/compute/v1/projects/[project]/zones/[zone]/disks/[disk_name]
                * /projects/[project]/zones/[zone]/disks/[disk_name]
                * /projects/[project]/regions/[region]/disks/[disk_name]
        mode: Specifies in what mode the disk will be attached to the instance. Available options are `READ_ONLY`
            and `READ_WRITE`. Disk in `READ_ONLY` mode can be attached to multiple instances at once.
    """
    instances_client = compute_v1.InstancesClient()

    request = compute_v1.AttachDiskInstanceRequest()
    request.project = project_id
    request.zone = zone
    request.instance = instance_name
    request.attached_disk_resource = compute_v1.AttachedDisk()
    request.attached_disk_resource.source = disk_link
    request.attached_disk_resource.mode = mode

    operation = instances_client.attach_disk(request)

    wait_for_extended_operation(operation, "disk attachment")

REST

In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
 "source": "zones/ZONE/disks/DISK_NAME",
 "mode": "READ_ONLY"
}

Replace the following:

  • INSTANCE_NAME: the name of the VM where you want to attach the zonal Persistent Disk volume
  • PROJECT_ID: your project ID
  • ZONE: the zone where your disk is located
  • DISK_NAME: the name of the disk that you are attaching

After you attach the disk, connect to the VM and mount the disk.

Repeat this request for each VM where you want to add this disk in read-only mode.
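
After the disk is attached in read-only mode, it still needs to be mounted inside each guest. The following is a minimal sketch for a Linux VM; it assumes the shared disk appears as /dev/sdb (check with lsblk) and already contains an ext4 file system:

# Identify the newly attached device.
lsblk

# Create a mount point and mount the device read-only. The noload option skips
# ext4 journal recovery, which cannot run against a read-only attachment.
sudo mkdir -p /mnt/disks/shared
sudo mount -o ro,noload /dev/sdb /mnt/disks/shared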

Share an SSD Persistent Disk volume in multi-writer mode between VMs

You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:

gcloud

Create and attach a zonal Persistent Disk volume by using the gcloud CLI:

  1. Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

    gcloud beta compute disks create DISK_NAME \
       --size DISK_SIZE \
       --type pd-ssd \
       --multi-writer
    

    Replace the following:

    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk INSTANCE_NAME \
       --disk DISK_NAME
    

    Replace the following:

    • INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
    • DISK_NAME: the name of the new disk that you are attaching to the VM
  3. Repeat the gcloud compute instances attach-disk command, but replace INSTANCE_NAME with the name of your second VM.

After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.
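
If you want to confirm that multi-writer mode is enabled on the new disk, one option (a sketch; the multiWriter field is only exposed through the beta API surface) is to describe the disk:

# Prints the disk's multiWriter setting.
gcloud beta compute disks describe DISK_NAME \
  --zone ZONE \
  --format="value(multiWriter)"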

REST

Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.

  1. In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of true to indicate that the disk must be shareable between the VMs in multi-writer mode.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks
    
    {
    "name": "DISK_NAME",
    "sizeGb": "DISK_SIZE",
    "type": "zones/ZONE/diskTypes/pd-ssd",
    "multiWriter": "True"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. Construct a POST request to the compute.instances.attachDisk method, and include the URL to the zonal Persistent Disk volume that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    
    {
    "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume.
    • DISK_NAME: the name of the new disk
  3. Repeat the compute.instances.attachDisk request, but specify the second VM instead.

After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.
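
As an illustration of how the disks.insert request above can be sent with the credentials you provide to the gcloud CLI (a sketch assuming curl; replace the placeholders with your own values):

# Create the multi-writer disk through the beta API using a gcloud access token.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "DISK_NAME",
        "sizeGb": "DISK_SIZE",
        "type": "zones/ZONE/diskTypes/pd-ssd",
        "multiWriter": true
      }' \
  "https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks"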

What's next