Adding or resizing zonal persistent disks

This page explains how to resize both zonal persistent boot disks and secondary (non-boot) zonal persistent disks. This page also explains how to add and format new zonal persistent disks on your instances.

You can only resize a zonal persistent disk to increase its size. You cannot reduce the size of a zonal persistent disk.

It is a best practice to back up your disks using snapshots to prevent unintended data loss.

Zonal persistent disks are available as either standard hard disk drives (HDD) or solid-state drives (SSD). For more general information about zonal persistent disks and the types of persistent disks that are available, read the persistent disks overview.

Compute Engine manages the hardware behind zonal persistent disks so that you can add and resize your disks without handling striping or redundancy. Either attach one large secondary disk and resize it as you require additional space, or attach multiple smaller disks to separate your data into multiple volumes.

Unless you create a disk from an image, your new zonal persistent disks start with no data or file systems. You must format those disks yourself after you attach them to your instances.

If either zonal standard persistent disks or zonal SSD persistent disks don't meet all of your performance or flexibility requirements, you can add other storage options to your instances.

Every persistent disk you create has a default physical block size of 4 KB. If your database application requires a larger physical block size, you can select 16 KB when you create the disk. This feature is not available for boot disks. You can't directly edit the physical block size of an existing persistent disk; to change it, you must snapshot the disk and then create a new disk from that snapshot.
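
Following the note above, one hedged way to move existing data to a disk with a 16 KB physical block size is to snapshot the source disk and create the new disk from that snapshot. The following is a minimal sketch using the gcloud tool; the disk, snapshot, and zone names are hypothetical:

# Back up the source disk (hypothetical names and zone).
gcloud compute disks snapshot example-disk \
    --snapshot-names example-snapshot \
    --zone us-central1-a

# Create a new disk from the snapshot with a 16 KB physical block size.
gcloud beta compute disks create example-disk-16k \
    --source-snapshot example-snapshot \
    --physical-block-size 16384 \
    --zone us-central1-a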

Adding a zonal persistent disk to your instance

Create a zonal standard persistent disk or a zonal SSD persistent disk and attach it to an existing instance. If you do not have any instances, create and start a new instance. During instance creation, you can attach up to 127 secondary non-boot zonal persistent disks, where you can store your applications, data files, databases, and logs in separate volumes. You can have a total attached capacity of 257 TB per instance. For information about how to ensure maximum performance with large volumes, see Larger logical volume performance.

Create and attach a zonal persistent disk through the Google Cloud Platform Console, the gcloud tool, or the API.

Console

Create and attach a zonal persistent disk in the Google Cloud Platform Console:

  1. Go to the VM instances page.

  2. Click the name of the instance where you want to add a disk.
  3. At the top of the VM instance details page, click Edit.
  4. Under Additional disks, click Add new disk.
  5. Specify a name for the disk, configure the disk's properties, and specify the disk's Source type.

  6. Optionally, you can select your Physical block size (KB). The default physical block size is 4 KB; to increase it, select 16 KB from the drop-down list.

  7. Click Done to complete the disk's configuration.

  8. At the bottom of the VM instance details page, click Save to apply your changes to the instance and add the new disk.

  9. After you create or attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.

gcloud

Create and attach a zonal persistent disk by using the gcloud tool:

  1. Use the gcloud beta compute disks create command to create a zonal persistent disk. If you need a zonal SSD persistent disk for greater throughput or IOPS, include the --type flag and specify pd-ssd. Optionally, add the --physical-block-size flag to set the physical block size.

    gcloud beta compute disks create [DISK_NAME] \
        --size [DISK_SIZE] \
        --type [DISK_TYPE] \
        --physical-block-size [BLOCK_SIZE]
    

    where:

    • [DISK_NAME] is the name of the new disk.
    • [DISK_SIZE] is the size, in GB, of the new disk.
    • [DISK_TYPE] is the type of persistent disk, either pd-standard or pd-ssd.
    • [BLOCK_SIZE] is either 4096 (4 KB) or 16384 (16 KB). 4 KB is the default physical block size. 16 KB is the increased physical block size.
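
    For example, the following command creates a 500 GB SSD disk with a 16 KB physical block size; the disk name, size, and zone are hypothetical values:

    gcloud beta compute disks create example-disk \
        --size 500GB \
        --type pd-ssd \
        --physical-block-size 16384 \
        --zone us-central1-a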

  2. After you create the disk, attach it to any running or stopped instance. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk [INSTANCE_NAME] \
        --disk [DISK_NAME]
    

    where:

    • [INSTANCE_NAME] is the name of the instance where you are adding the new zonal persistent disk.
    • [DISK_NAME] is the name of the new disk that you are attaching to the instance.

    After you create and attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.

  3. Use the gcloud beta compute disks describe command to see a description of your disk. The response includes the disk's physical block size.
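
    For example, assuming a disk named example-disk in zone us-central1-a (hypothetical values):

    gcloud beta compute disks describe example-disk \
        --zone us-central1-a

    The output includes a physicalBlockSizeBytes field with the disk's physical block size, in bytes.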

API

  1. In the API, construct a POST request to create a zonal persistent disk using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, do not specify a source image or a source snapshot for this disk. Optionally, include the physicalBlockSizeBytes property to set the physical block size.

    POST https://compute.googleapis.com/compute/beta/projects/[PROJECT_ID]/zones/[ZONE]/disks
    
    {
     "name": "[DISK_NAME]",
     "sizeGb": "[DISK_SIZE]",
     "type": "zones/[ZONE]/diskTypes/[DISK_TYPE]"
     "physicalBlockSizeBytes": "[BLOCK_SIZE]"
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [ZONE] is the zone where your instance and new disk are located.
    • [DISK_NAME] is the name of the new disk.
    • [DISK_SIZE] is the size, in GB, of the new disk.
    • [DISK_TYPE] is the type of persistent disk, either pd-standard or pd-ssd.
    • [BLOCK_SIZE] is either 4096 (4 KB) or 16384 (16 KB). 4 KB is the default physical block size. 16 KB is an increased physical block size.

  2. Construct a POST request to the compute.instances.attachDisk method, and include the URL to the zonal persistent disk that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instances/[INSTANCE_NAME]/attachDisk
    
    {
     "source": "/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/disks/[DISK_NAME]"
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [ZONE] is the zone where your instance and new disk are located.
    • [INSTANCE_NAME] is the name of the instance where you are adding the new persistent disk.
    • [DISK_NAME] is the name of the new disk.

After you create and attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.

Formatting and mounting a zonal persistent disk

A new zonal persistent disk starts with no data or file system. You must format this disk yourself after you attach it to your instance. The formatting process is different between a Linux instance and a Windows instance.

Linux instances


Format and mount the new disk on your Linux instance. You can use any partition format and configuration that you need, but we recommend a single ext4 file system without a partition table. You can resize your disk later if you need more storage space.

  1. Go to the VM instances page.

  2. Click the SSH button next to the instance that has the new attached disk. The browser opens a terminal connection to the instance.

  3. In the terminal, use the lsblk command to list the disks that are attached to your instance and find the disk that you want to format and mount.

    $ sudo lsblk
    
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   10G  0 disk
    └─sda1   8:1    0   10G  0 part /
    sdb      8:16   0  250G  0 disk
    

    In this example, sdb is the device ID for the new zonal persistent disk.

  4. Format the disk. You can use any file format that you need, but we recommend a single ext4 file system without a partition table. If you resize the zonal persistent disk later, you can resize the file system without having to modify disk partitions.

    Format the disk using the mkfs tool. This command deletes all data from the specified disk, so make sure that you specify the disk device correctly. To maximize disk performance, use the recommended formatting options in the -E flag. It is not necessary to reserve space for root on this secondary disk, so specify -m 0 to use all of the available disk space.

    $ sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/[DEVICE_ID]
    

    where [DEVICE_ID] is the device ID of the zonal persistent disk that you are formatting. For this example, specify sdb to format the entire disk with no partition table.

  5. Create a directory that serves as the mount point for the new disk. You can use any directory that you like. The following example creates a directory under /mnt/disks/.

    $ sudo mkdir -p /mnt/disks/[MNT_DIR]
    

    where [MNT_DIR] is the directory in which to mount your zonal persistent disk.

  6. Use the mount tool to mount the disk to the instance, and enable the discard option:

    $ sudo mount -o discard,defaults /dev/[DEVICE_ID] /mnt/disks/[MNT_DIR]
    

    where:

    • [DEVICE_ID] is the device ID of the zonal persistent disk to mount.
    • [MNT_DIR] is the directory in which to mount your zonal persistent disk.
  7. Configure read and write permissions on the device. For this example, grant write access to the device for all users.

    $ sudo chmod a+w /mnt/disks/[MNT_DIR]
    

    where [MNT_DIR] is the directory where you mounted your zonal persistent disk.

Optionally, you can add the zonal persistent disk to the /etc/fstab file so that the device automatically mounts again when the instance restarts.

  1. Create a backup of your current /etc/fstab file.

    $ sudo cp /etc/fstab /etc/fstab.backup
    
  2. Use the blkid command to find the UUID for the zonal persistent disk. The system generates this UUID when you format the disk. Use UUIDs to mount zonal persistent disks because UUIDs do not change when you move disks between systems.

    $ sudo blkid /dev/[DEVICE_ID]
    
    /dev/[DEVICE_ID]: UUID="[UUID_VALUE]" TYPE="ext4"
    

    where:

    • [DEVICE_ID] is the device ID of the zonal persistent disk that you want to automatically mount. If you created a partition table on the disk, specify the partition that you want to mount.
    • [UUID_VALUE] is the UUID of the zonal persistent disk that you must include in the /etc/fstab file.
  3. Open the /etc/fstab file in a text editor and create an entry that includes the UUID. For example:

    UUID=[UUID_VALUE] /mnt/disks/[MNT_DIR] ext4 discard,defaults,[NOFAIL_OPTION] 0 2
    

    where:

    • [UUID_VALUE] is the UUID of the zonal persistent disk that you must include in the /etc/fstab file.
    • [MNT_DIR] is the directory where you mounted your zonal persistent disk.
    • [NOFAIL_OPTION] is a variable that specifies what the operating system should do if it cannot mount the zonal persistent disk at boot time. To allow the system to continue booting even when it cannot mount the zonal persistent disk, specify this option. For most distributions, specify the nofail option. For Ubuntu 12.04 or Ubuntu 14.04, specify the nobootwait option.

    Optionally, you can complete this step with a single command. For example, the following command creates an entry in /etc/fstab to mount the /dev/sdb zonal persistent disk at /mnt/disks/disk-1 using its UUID.

    $ echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/disk-1 ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
    
    UUID=c994cf26-1853-48ab-a6a5-9d7c0250fed4 /mnt/disks/disk-1 ext4 discard,defaults,nofail 0 2
    
    
  4. Use the cat command to verify that your /etc/fstab entries are correct:

    $ cat /etc/fstab
    
    LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
    UUID=d761bdc5-7e2a-4529-a393-b9aefdb623b6 /mnt/disks/disk-1 ext4 discard,defaults,nofail 0 2
    
    

If you detach this zonal persistent disk or create a snapshot from the boot disk for this instance, edit the /etc/fstab file and remove the entry for this zonal persistent disk. Even with the nofail option in place, keep the /etc/fstab file in sync with the devices that are attached to your instance and remove these entries before you create your boot disk snapshot or when you detach zonal persistent disks.
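
As one hedged example, the following command removes the fstab entry for a disk mounted at /mnt/disks/disk-1 (a hypothetical mount point) and keeps a backup copy of the file:

$ sudo sed -i.bak '\|/mnt/disks/disk-1|d' /etc/fstab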

Windows instances


Use Windows Disk Management to format and mount the new disk on a Windows instance.

  1. Go to the VM instances page.

  2. Click the RDP button next to the instance that has the new attached disk. The browser opens an RDP connection to the instance.

  3. Right-click the Windows Start button and select Disk Management.

  4. If you have not initialized the zonal persistent disk before, Disk Management prompts you to select a partitioning scheme for the new disk. Select GPT and click OK.

  5. After the disk initializes, right-click the unallocated disk space and select New Simple Volume.

  6. Follow the instructions in the New Simple Volume Wizard to configure the new volume. You can use any partition format that you like, but for this example select NTFS. Also, check Perform a quick format to speed up the formatting process. Optionally, set the cluster size in the Allocation unit size field. The cluster size limits the maximum size of the partition. Keep this in mind if you try to resize the zonal persistent disk and this partition later.

  7. After you complete the wizard and the volume is formatted, check the Status column on the list of attached disks to ensure that the new disk has a Healthy status.

You can now write files to the zonal persistent disk.

Resizing a zonal persistent disk

Resize your zonal persistent disks when your instances require more storage space. Attach multiple secondary disks only when you need to separate your data into unique partitions.

You can resize disks at any time, whether or not the disk is attached to a running instance.

Resizing a disk should not delete or modify disk data, but as a best practice, snapshot your disk before you make any changes.
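
For example, to snapshot a disk named example-disk in zone us-central1-a before resizing it (hypothetical names):

gcloud compute disks snapshot example-disk \
    --snapshot-names example-snapshot \
    --zone us-central1-a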

Console

  1. Go to the Disks page to see a list of zonal persistent disks in your project.

  2. Click the name of the disk that you want to resize.
  3. At the top of the disk details page, click Edit.
  4. In the Size field, enter the new size for your disk. Boot disks and secondary disks with MBR partition tables can be resized only up to 2 TB.
  5. At the bottom of the disk details page, click Save to apply your changes to the disk.
  6. After you resize the disk, you must resize the file system so that the operating system can access the additional space.

gcloud


In the gcloud tool, use the disks resize command and specify the --size flag with the desired disk size, in GB.

gcloud compute disks resize [DISK_NAME] --size [DISK_SIZE]

where:

  • [DISK_NAME] is the name of the disk that you are resizing.
  • [DISK_SIZE] is the new size, in GB, for the disk. Boot disks and secondary disks with MBR partition tables can be resized only up to 2 TB.
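
For example, the following command resizes a disk named example-disk (a hypothetical name) to 500 GB:

gcloud compute disks resize example-disk --size 500GB --zone us-central1-a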

After you resize the disk, you must resize the file system so that the operating system can access the additional space.

API


In the API, construct a POST request to the compute.disks.resize method. In the request body, specify the sizeGb parameter and set it to the desired disk size, in GB.

POST https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/disks/[DISK_NAME]/resize

{
 "sizeGb": "[DISK_SIZE]"
}

where:

  • [PROJECT_ID] is your project ID.
  • [ZONE] is the zone where your disk is located.
  • [DISK_NAME] is the name of the disk to resize.
  • [DISK_SIZE] is the new size, in GB, for the disk. Boot disks and secondary disks with MBR partition tables can be resized only up to 2 TB.
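
For example, a minimal request using curl might look like the following; the project, zone, disk name, and size are hypothetical values:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"sizeGb": "500"}' \
    "https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/disks/example-disk/resize"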

After you resize the disk, you must resize the file system so that the operating system can access the additional space.

Resizing the file system and partitions on a zonal persistent disk

After you resize your zonal persistent disk, you must configure the file system on the disk to use the additional disk space. If the disk has a partition table, such as a boot disk, you must grow the partition and resize the file system on that partition. If your zonal persistent disk has only a file system and no partition table, you can just resize the file system.

Alternatively, instances that run the most recent versions of public images can automatically resize their partitions and file systems after a system reboot. The SUSE Linux Enterprise Server (SLES) public images are the only public images that don't support this feature.

Before you modify the file system on your zonal persistent disk, create a snapshot.

Linux instances


On Linux instances, connect to your instance and manually resize your partitions and file systems to use the disk space that you added. You do not need to restart your instance after you complete this manual process.

  1. Go to the VM instances page.

  2. Click the SSH button next to the instance that has the new attached disk. The browser opens a terminal connection to the instance.

  3. Identify the disk with the file system and the partition that you want to resize. If your disk is already mounted, you can use the df command and the lsblk command to compare the size of the file system and find the disk ID. In this example, the /dev/sda1 partition is on a resized 20-GB boot disk, but the partition table and the file system provide only 9.7 GB to the operating system. Additionally, the /dev/sdb secondary disk has no partition table, but the file system on that disk provides only 250 GB to the operating system.

    $ sudo df -h
    
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       9.7G  1.2G  8.5G  12% /
    /dev/sdb        250G   60M  250G   1% /mnt/disks/disk-1
    
    
    $ sudo lsblk
    
    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda       8:0    0   20G  0 disk
    └─sda1    8:1    0   10G  0 part /
    sdb       8:16   0  500G  0 disk /mnt/disks/disk-1
    
    

    In this example, the df command shows that the /dev/sda1 partition is mounted as the root file system, and the /dev/sdb disk is mounted as a secondary disk at /mnt/disks/disk-1.

    If your disk has a file system written to it, and no partition table, you can skip step 4, which describes how to install the growpart utility and how to grow a partition.

  4. If the disk that you want to resize has a partition table, you must grow the partition before you resize the file system. Use the growpart tool to resize the partition.

    1. To install growpart on Debian servers, run:

      $ sudo apt -y install cloud-guest-utils
    2. To install growpart on CentOS servers, run:

      $ sudo yum -y install cloud-utils-growpart

      For help with the tool after installation, run growpart -h.

    3. After you install the growpart tool, you can grow the partition.

      $ sudo growpart /dev/[DEVICE_ID] [PARTITION_NUMBER]

      where [DEVICE_ID] is the device ID, and [PARTITION_NUMBER] is the partition number for that device. For example: sudo growpart /dev/sda 1. Note that there is a space between the device ID and the partition number.

  5. Extend the file system on the disk or partition to use the added space.

    If you are using ext4, use the resize2fs command. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.

    $ sudo resize2fs /dev/[DEVICE_ID][PARTITION_NUMBER]

    where [DEVICE_ID] is the device ID and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system. For example, /dev/sda points to a disk, and /dev/sda1 points to the first partition on that disk.

    If you are using xfs, use the xfs_growfs command to extend the file system, and specify the mount point instead of the device ID:

    $ sudo xfs_growfs /mnt/disks/disk-1

    where /mnt/disks/disk-1 is the mount point.

  6. Optionally, use the df command to verify that the file system is resized.

    $ df -h /dev/[DEVICE_ID]
    
    Filesystem        Size  Used Avail Use% Mounted on
    /dev/[DEVICE_ID]  493G   70M  492G   1% /mnt/disks/disk-1
    

    where [DEVICE_ID] is the device ID for the disk where you want to view the file system configuration.

Windows instances


Use the Windows Disk Management tool to resize partitions on a Windows instance.

  1. Go to the VM instances page.

  2. Click the RDP button next to the instance that has the resized disk. The browser opens an RDP connection to the instance.
  3. Right-click the Windows Start button and select Disk Management to open the Disk Management tool.

  4. Refresh the Disk Management tool so that it recognizes the additional space on your zonal persistent disk. At the top of the Disk Management window, click Action and select Refresh.

  5. On the disk that you resized, right-click the formatted partition and select Extend Volume.

  6. Follow the instructions in the Extend Volume Wizard to extend your existing partition to include the extra disk space. If the existing partition is formatted in NTFS, the maximum partition size is limited by its cluster size settings.

  7. After you complete the wizard and the volume is extended, check the Status column on the list of attached disks to ensure that the disk has a Healthy status.

You can now use the extra zonal persistent disk space to store data.

Recovering an inaccessible instance or a full boot disk

If an instance is completely out of disk space or if it is not running a Linux guest environment, then automatically resizing your root file system isn't possible, even after you've increased the size of the persistent disk that backs it. If you can't connect to your instance, or your boot disk is full and you can't resize it, you must create a new instance and recreate the boot disk from a snapshot to resize it.

You must know the size of the boot disk you're recreating. Find the size of the disk in the Google Cloud Platform Console.

  1. Go to the VM instances page.

    1. Check the box next to the instance you can't connect to.
    2. Click on the instance name to open the VM instance details page.
    3. Click Stop at the top of the VM instance details page to stop the instance.
    4. Scroll down to the Boot disk section and note the boot disk's size.
  2. Go to the Disks page to see a list of persistent disks in your project.

    1. Select the boot disk for that instance from the list.
    2. Click Create Snapshot to snapshot the disk.
  3. Go to the Snapshots page in the Google Cloud Platform Console.
    1. Enter the snapshot Name.
    2. Select the disk from the Source disk drop-down menu.
    3. Enter your disk details.
  4. Go to the VM instances page.

  5. Click Create instance, and then enter the details for the new instance.
  6. Change the Boot disk.
    1. Select Snapshots.
    2. Select the boot disk snapshot.
    3. Select the Boot disk type.
    4. Enter the new size for the disk.
    5. Click Select.
  7. Click Create.
  8. After the new instance starts, resize the file system on the boot disk so that it uses the added disk space.
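
Alternatively, you can script the same recovery flow with the gcloud tool. The following is a minimal sketch, not a definitive procedure; the instance, disk, and snapshot names, the size, and the zone are hypothetical:

# Stop the inaccessible instance.
gcloud compute instances stop example-instance --zone us-central1-f

# Snapshot its boot disk.
gcloud compute disks snapshot example-boot-disk \
    --snapshot-names example-boot-snapshot \
    --zone us-central1-f

# Create a larger boot disk from the snapshot.
gcloud compute disks create example-new-boot-disk \
    --source-snapshot example-boot-snapshot \
    --size 100GB \
    --zone us-central1-f

# Create a new instance that boots from the larger disk.
gcloud compute instances create example-recovered-instance \
    --disk name=example-new-boot-disk,boot=yes \
    --zone us-central1-f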

Setting the auto-delete state of a zonal persistent disk

You can automatically delete read/write zonal persistent disks when the associated virtual machine instance is deleted. This behavior is controlled by the autoDelete property, which is set on the virtual machine instance for a given attached zonal persistent disk, and you can update it at any time. To prevent an attached zonal persistent disk from being deleted, set its autoDelete value to false.

Console

  1. Go to the VM instances page.

  2. Check the box next to the instance that has the disks associated with it.
  3. Click the instance name. The VM instance details page appears.
  4. Click Edit at the top of the page.
  5. Scroll to Additional disks.
  6. Click the pencil to edit the disk's Deletion Rule.
  7. Click Done to save your changes.
  8. Click Save to update your instance.

gcloud


To set the auto-delete state of a zonal persistent disk, use the gcloud compute instances set-disk-auto-delete command:

gcloud compute instances set-disk-auto-delete example-instance \
    [--auto-delete|--no-auto-delete] \
    --disk example-disk

API


If you are using the API, make a POST request to the following URI:

https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/example-instance/setDiskAutoDelete?deviceName=[DEVICE_NAME]&autoDelete=true
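
For example, a minimal request using curl might look like the following; the project, zone, instance, and device names are hypothetical values:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/example-instance/setDiskAutoDelete?deviceName=example-disk&autoDelete=true"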

If you are using the client library, use the instances().setDiskAutoDelete method:

def setAutoDelete(gce_service, auth_http):
  request = gce_service.instances().setDiskAutoDelete(
      project='example-project', zone='us-central1-f',
      deviceName='my-new-device', instance='example-instance', autoDelete=True)
  response = request.execute(http=auth_http)

  print(response)

Share a zonal persistent disk between multiple instances

You can attach a non-boot persistent disk to more than one virtual machine instance in read-only mode, which allows multiple instances to share static data. Sharing static data from one persistent disk is cheaper than replicating your data to unique disks for individual instances.

If you attach a persistent disk to multiple instances, all of those instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode. If you need to share dynamic storage space between multiple instances, use an alternative storage option.

If you have a persistent disk with data that you want to share between multiple instances, detach it from any read-write instances and attach it to one or more instances in read-only mode.

Console

  1. Go to the VM instances page to see the list of instances in your project.

  2. In the Name column, click the name of the instance where you want to attach the disk. The VM instance details page opens.
  3. At the top of the instance details page, click Edit.
  4. In the Additional disks section, click one of the following:
    • Add a disk to add a new disk in read-only mode to the instance.
    • Attach existing disk to select an existing disk and attach it in read-only mode to your instance.
  5. Specify other options for your disk.
  6. Click Done to apply the changes.
  7. At the bottom of the VM instance details page, click Save to apply your changes to the instance.
  8. Connect to the instance and mount the disk.
  9. Repeat this process to add the disk to other instances in read-only mode.

gcloud

In the gcloud tool, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk [INSTANCE_NAME] \
    --disk [DISK_NAME] \
    --mode ro

where:

  • [INSTANCE_NAME] is the name of the instance where you want to attach the zonal persistent disk.
  • [DISK_NAME] is the name of the disk that you want to attach.

After you attach the disk, connect to the instance and mount the disk.
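
For example, the following minimal sketch mounts the shared disk read-only, assuming it appears on the instance as /dev/sdb (a hypothetical device ID). The noload option skips ext4 journal recovery, which can't run on a read-only attachment:

$ sudo mkdir -p /mnt/disks/disk-1
$ sudo mount -o ro,noload /dev/sdb /mnt/disks/disk-1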

Repeat this command for each instance where you want to add this disk in read-only mode.

API

In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instances/[INSTANCE_NAME]/attachDisk

{
 "source": "zones/[ZONE]/disks/[DISK_NAME]",
 "mode": "READ_ONLY"
}

where:

  • [INSTANCE_NAME] is the name of the instance where you want to attach the zonal persistent disk.
  • [PROJECT_ID] is your project ID.
  • [ZONE] is the zone where your disk is located.
  • [DISK_NAME] is the name of the disk that you are attaching.

After you attach the disk, connect to the instance and mount the disk.

Repeat this request for each instance where you want to add this disk in read-only mode.
