Adding or resizing zonal persistent disks

This page explains how to add and format new zonal persistent disks on your instances. This page also explains how to resize both zonal persistent boot disks and secondary (non-boot) zonal persistent disks.

Zonal persistent disks are available as either standard hard disk drives (HDD) or solid-state drives (SSD). For more general information about zonal persistent disks and the types of persistent disks that are available, read the persistent disks overview. If either zonal standard persistent disks or zonal SSD persistent disks don't meet all of your performance or flexibility requirements, you can add other storage options to your instances.

You can either create blank disks, or create disks from a source. If you create a blank disk, your new zonal persistent disks start with no data or file systems. You must format those disks yourself after you attach them to your instances.

You can create new persistent disks from sources such as public or custom images, snapshots, or existing disks.

Restrictions

  • You can only increase the size of a zonal persistent disk; you cannot reduce its size. Compute Engine manages the hardware behind zonal persistent disks, so you can add and resize your disks without handling striping or redundancy. Either attach one large secondary disk and resize it as you require additional space, or attach multiple smaller disks to separate your data into multiple volumes.

  • It is a best practice to back up your disks using snapshots to prevent unintended data loss.

Adding a blank zonal persistent disk to your instance

Create a zonal standard persistent disk or a zonal SSD persistent disk and attach it to an existing instance. If you do not have any instances, create and start a new instance. During instance creation, you can attach up to 127 secondary non-boot zonal persistent disks, on which you can store, in separate volumes, your applications, data files, databases, and logs. You can have a total attached capacity of 257 TB per instance. For information about how to ensure maximum performance with large volumes, see Larger logical volume performance.

Create and attach a zonal persistent disk by using the Google Cloud Console, the gcloud command-line tool, or the Compute Engine API.

Console

Create and attach a zonal persistent disk in the Google Cloud Console:

  1. Go to the VM instances page.


  2. Click the name of the instance where you want to add a disk.

  3. On the VM instance details page, click Edit.

  4. Under Additional disks, click Add new disk.

  5. Specify a name for the disk, configure the disk's properties, and select Blank as the Source type.

  6. Click Done to complete the disk's configuration.

  7. Click Save to apply your changes to the instance and add the new disk.

  8. After you create or attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.

gcloud

Create and attach a zonal persistent disk by using the gcloud tool:

  1. Use the gcloud compute disks create command to create a zonal persistent disk. If you need a zonal SSD persistent disk for greater throughput or IOPS, include the --type flag and specify pd-ssd.

    gcloud compute disks create DISK_NAME \
      --size DISK_SIZE \
      --type DISK_TYPE
    

    Replace the following:

    • DISK_NAME: the name of the new disk.
    • DISK_SIZE: the size, in gigabytes, of the new disk. Acceptable sizes range, in 1 GB increments, from 10 GB to 65,536 GB inclusive.
    • DISK_TYPE: the type of the persistent disk. For example, pd-ssd.
  2. After you create the disk, attach it to any running or stopped instance. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk INSTANCE_NAME \
      --disk DISK_NAME
    

    Replace the following:

    • INSTANCE_NAME: the name of the instance where you are adding the new zonal persistent disk
    • DISK_NAME: the name of the new disk that you are attaching to the instance

    After you create and attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.

  3. Use the gcloud compute disks describe command to see a description of your disk.
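
    For example, assuming your disk is in zone us-central1-a (replace with your disk's zone), you might run:

    gcloud compute disks describe DISK_NAME \
      --zone us-central1-a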

API

  1. In the API, construct a POST request to create a zonal persistent disk by using the disks.insert method. Include the name, sizeGb, and type properties. To create this disk as an empty and unformatted non-boot disk, do not specify a source image or a source snapshot.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks
    
    {
     "name": "DISK_NAME",
     "sizeGb": "DISK_SIZE",
     "type": "zones/ZONE/diskTypes/DISK_TYPE"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID.
    • ZONE: the zone where your instance and new disk are located.
    • DISK_NAME: the name of the new disk.
    • DISK_SIZE: the size, in gigabytes, of the new disk. Acceptable sizes range, in 1 GB increments, from 10 GB to 65,536 GB inclusive.
    • DISK_TYPE: the type of the persistent disk. For example, pd-ssd.
  2. Construct a POST request to the compute.instances.attachDisk method, and include the URL to the zonal persistent disk that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    
    {
     "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your instance and new disk are located
    • INSTANCE_NAME: the name of the instance where you are adding the new persistent disk
    • DISK_NAME: the name of the new disk

After you create and attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.

Formatting and mounting a zonal persistent disk

A new blank zonal persistent disk starts with no data or file system. You must format this disk yourself after you attach it to your instance. The formatting process is different between a Linux instance and a Windows instance.

Linux instances

Format and mount the new disk on your Linux instance. You can use any partition format and configuration that you need, but we recommend a single ext4 file system without a partition table. You can resize your disk later if you need more storage space.

  1. Go to the VM instances page.


  2. Click the SSH button next to the instance that has the new attached disk. The browser opens a terminal connection to the instance.

  3. In the terminal, use the lsblk command to list the disks that are attached to your instance and find the disk that you want to format and mount.

    $ sudo lsblk
    
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   10G  0 disk
    └─sda1   8:1    0   10G  0 part /
    sdb      8:16   0  250G  0 disk
    

    In this example, sdb is the device ID for the new zonal persistent disk.

  4. Format the disk. You can use any file format that you need, but we recommend a single ext4 file system without a partition table. If you resize the zonal persistent disk later, you can resize the file system without having to modify disk partitions.

    Format the disk using the mkfs tool. This command deletes all data from the specified disk, so make sure that you specify the disk device correctly. To maximize disk performance, use the recommended formatting options in the -E flag. It is not necessary to reserve space for root on this secondary disk, so specify -m 0 to use all of the available disk space.

    $ sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/DEVICE_ID
    

    Replace DEVICE_ID with the device ID of the zonal persistent disk that you are formatting. For this example, specify sdb to format the entire disk with no partition table.

  5. Create a directory that serves as the mount point for the new disk. You can use any directory that you like. The following example creates a directory under /mnt/disks/.

    $ sudo mkdir -p /mnt/disks/MNT_DIR
    

    Replace MNT_DIR with the directory at which to mount your zonal persistent disk.

  6. Use the mount tool to mount the disk to the instance, and enable the discard option:

    $ sudo mount -o discard,defaults /dev/DEVICE_ID /mnt/disks/MNT_DIR
    

    Replace the following:

    • DEVICE_ID: the device ID of the zonal persistent disk to mount
    • MNT_DIR: the directory in which to mount your zonal persistent disk
  7. Configure read and write permissions on the device. For this example, grant write access to the device for all users.

    $ sudo chmod a+w /mnt/disks/MNT_DIR
    

    Replace MNT_DIR with the directory where you mounted your zonal persistent disk.

    Optionally, you can add the zonal persistent disk to the /etc/fstab file, so that the device automatically mounts again when the instance restarts.

  8. Create a backup of your current /etc/fstab file.

    $ sudo cp /etc/fstab /etc/fstab.backup
    
  9. Use the blkid command to find the UUID for the zonal persistent disk. The system generates this UUID when you format the disk. Use UUIDs to mount zonal persistent disks because UUIDs do not change when you move disks between systems.

    $ sudo blkid /dev/DEVICE_ID
    
    /dev/DEVICE_ID: UUID="UUID_VALUE" TYPE="ext4"
    

    Replace the following:

    • DEVICE_ID: the device ID of the zonal persistent disk that you want to automatically mount. If you created a partition table on the disk, specify the partition that you want to mount.
    • UUID_VALUE: the UUID of the zonal persistent disk that you must include in the /etc/fstab file.
  10. Open the /etc/fstab file in a text editor and create an entry that includes the UUID. For example:

    UUID=UUID_VALUE /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
    

    Replace the following:

    • UUID_VALUE: the UUID of the zonal persistent disk that you must include in the /etc/fstab file.
    • MNT_DIR: the directory where you mounted your zonal persistent disk.
    • NOFAIL_OPTION: a variable that specifies what the operating system does if it cannot mount the zonal persistent disk at boot time. To let the system boot even if the persistent disk is unavailable, use the nofail option for most distributions or the nobootwait option for Ubuntu 12.04 and Ubuntu 14.04.

    Optionally, you can complete this step with a single command. For example, the following command creates an entry in /etc/fstab to mount the /dev/sdb zonal persistent disk at /mnt/disks/MNT_DIR using its UUID.

    $ echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2 | sudo tee -a /etc/fstab
    
    UUID=c994cf26-1853-48ab-a6a5-9d7c0250fed4 /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
    
    
  11. Use the cat command to verify that your /etc/fstab entries are correct:

    $ cat /etc/fstab
    
    LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
    UUID=d761bdc5-7e2a-4529-a393-b9aefdb623b6 /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
    
    

If you detach this zonal persistent disk or create a snapshot from the boot disk for this instance, edit the /etc/fstab file and remove the entry for this zonal persistent disk. Even with the NOFAIL_OPTION set to nofail or nobootwait, keep the /etc/fstab file in sync with the devices that are attached to your instance and remove these entries before you create your boot disk snapshot or when you detach zonal persistent disks.
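
As a minimal cleanup sketch, assuming the disk is mounted at /mnt/disks/MNT_DIR (replace with your mount directory):

$ sudo umount /mnt/disks/MNT_DIR
$ sudo sed -i '\|/mnt/disks/MNT_DIR|d' /etc/fstab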

Windows instances

Use Windows Disk Management to format and mount the new disk on a Windows instance.

  1. Go to the VM instances page.


  2. Click the RDP button next to the instance that has the new attached disk. The browser opens an RDP connection to the instance.

  3. Right-click the Windows Start button and select Disk Management.


  4. If you have not initialized the zonal persistent disk before, Disk Management prompts you to select a partitioning scheme for the new disk. Select GPT and click OK.


  5. After the disk initializes, right-click the unallocated disk space and select New Simple Volume.


  6. Follow the instructions in the New Simple Volume Wizard to configure the new volume. You can use any partition format that you like, but for this example select NTFS. Also, check Perform a quick format to speed up the formatting process. Optionally, set the cluster size in the Allocation unit size field. The cluster size limits the maximum size of the partition. Keep this in mind if you try to resize the zonal persistent disk and this partition later.


  7. After you complete the wizard and the volume is formatted, check the Status column on the list of attached disks to ensure that the new disk has a Healthy status.


You can now write files to the zonal persistent disk.

Creating a persistent disk clone from a source disk

You can create a new persistent disk clone from an existing persistent disk, even if the existing disk is attached to a VM instance. After you clone a source disk, you can delete the source disk without any risk that the clone will be deleted.

Disk clones are useful for duplicating production data to debug without disturbing production, duplicating disks while scaling out your VMs, and creating replicas for database backup verification. You can also use disk clones to move non-boot disk data to a new project. For scenarios where data protection is required for additional resilience, such as backup and disaster recovery, we recommend using snapshots instead of disk clones.

Restrictions

  • The zone, region, and disk type (pd-standard or pd-ssd) of the clone must be the same as that of the source disk.
  • You cannot create a zonal disk clone from a regional disk. You cannot create a regional disk clone from a zonal disk.
  • The size of the clone must be at least the size of the source disk. If you create a clone using the Google Cloud Console, you cannot specify a disk size and the clone is the same size as the source disk.
  • If you use a customer-supplied encryption key or a customer-managed encryption key to encrypt the source disk, you must use the same key to encrypt the clone. For more information, see Creating a clone of an encrypted source disk.
  • You can create at most one clone of a given source disk every 30 seconds.
  • You can create at most 1000 total disk clones of a given source disk. Exceeding this limit returns an internalError.

Creating a disk clone

Console

  1. In the Google Cloud Console, go to the Disks page to see a list of zonal persistent disks in your project.


  2. Find the disk that you want to clone.

  3. Click the menu button under Actions and select Clone disk.

  4. In the Clone disk panel, specify a name for the new disk.
  5. Under Properties, review other details for the new disk.
  6. Click Save.

gcloud

In the gcloud tool, use the disks create command and specify the --source-disk flag. The following example clones the source disk to a new disk in a different project.


gcloud compute disks create projects/TARGET_PROJECT_ID/zones/ZONE/disks/TARGET_DISK_NAME \
  --description="cloned disk" \
  --source-disk=projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME
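
For example, a same-project clone, with hypothetical disk names and zone, might look like:

gcloud compute disks create my-disk-clone \
  --source-disk my-disk \
  --zone us-central1-a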

API

In the API, construct a POST request to the compute.disks.insert method. In the request body, specify the name and sourceDisk parameters. The clone inherits all omitted properties from the source disk. The following example clones the source disk to a new disk in a different project.


POST https://www.googleapis.com/compute/v1/projects/TARGET_PROJECT_ID/zones/ZONE/disks

{
  "name": "TARGET_DISK_NAME",
  "sourceDisk": "projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME"
}

Creating a clone of an encrypted source disk

If you use a customer-supplied encryption key to encrypt your source disk, you must also use the same key to encrypt the clone.

Console

Under Decryption and encryption, provide the source disk encryption key.

gcloud

Provide the source disk encryption key using the --csek-key-file flag when you create the disk clone. If you are using an RSA-wrapped key, use the gcloud beta component:


gcloud beta compute disks create projects/TARGET_PROJECT_ID/zones/ZONE/disks/TARGET_DISK_NAME \
  --description="cloned disk" \
  --source-disk=projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME \
  --csek-key-file example-key-file.json

API

Provide the source disk encryption key using the diskEncryptionKey property.


POST https://www.googleapis.com/compute/beta/projects/TARGET_PROJECT_ID/zones/ZONE/disks

{
  "name": "TARGET_DISK_NAME",
  "sourceDisk": "projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME",
  "diskEncryptionKey": {
    "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA=="
  }
}

If you use a customer-managed encryption key to encrypt your source disk, you must also use the same key to encrypt the clone.

Console

Compute Engine automatically encrypts the clone using the source disk encryption key.

gcloud

Provide the key for the source disk using the --kms-key flag when you create the disk clone:


gcloud beta compute disks create projects/TARGET_PROJECT_ID/zones/ZONE/disks/TARGET_DISK_NAME \
  --description="cloned disk" \
  --source-disk=projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME \
  --kms-key projects/KMS_PROJECT_ID/locations/REGION/keyRings/KEY_RING/cryptoKeys/KEY

API

Provide the key for the source disk using the kmsKeyName property when you create the disk clone.


POST https://www.googleapis.com/compute/beta/projects/TARGET_PROJECT_ID/zones/ZONE/disks

{
  "name": "TARGET_DISK_NAME",
  "sourceDisk": "projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME",
  "diskEncryptionKey": {
    "kmsKeyName": "projects/KMS_PROJECT_ID/locations/REGION/keyRings/KEY_RING/cryptoKeys/KEY"
  }
}

Resizing a zonal persistent disk

You can resize zonal persistent disks when your instances require more storage, and attach multiple secondary disks only when you need to separate your data into unique partitions.

You can resize disks at any time, whether or not the disk is attached to a running instance.

Resizing a disk doesn't delete or modify disk data, but as a best practice, snapshot your disk before you make any changes.
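
For example, a pre-resize snapshot with the gcloud tool might look like the following, where the disk name, snapshot name, and zone are placeholders for your own values:

gcloud compute disks snapshot DISK_NAME \
  --snapshot-names DISK_NAME-backup \
  --zone ZONE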

Console

  1. Go to the Disks page to see a list of zonal persistent disks in your project.


  2. Click the name of the disk that you want to resize.

  3. On the disk details page, click Edit.

  4. In the Size field, enter the new size for your disk. Boot disks and secondary disks with MBR partition tables can be resized only up to 2 TB.

  5. Click Save to apply your changes to the disk.

  6. After you resize the disk, you must resize the file system so that the operating system can access the additional space.

gcloud

In the gcloud tool, use the disks resize command and specify the --size flag with the desired disk size, in gigabytes.

gcloud compute disks resize DISK_NAME --size DISK_SIZE

Replace the following:

  • DISK_NAME: the name of the disk that you are resizing.
  • DISK_SIZE: the new size, in gigabytes, for the disk. Boot disks and secondary disks with MBR partition tables can be resized only up to 2 TB.

After you resize the disk, you must resize the file system so that the operating system can access the additional space.
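
For example, to grow a hypothetical disk named example-disk to 500 GB in zone us-central1-a, you might run:

gcloud compute disks resize example-disk \
  --size 500GB \
  --zone us-central1-a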

API

In the API, construct a POST request to the compute.disks.resize method. In the request body, specify the sizeGb parameter and set it to the desired disk size, in gigabytes.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME/resize

{
 "sizeGb": "DISK_SIZE"
}

Replace the following:

  • PROJECT_ID: your project ID.
  • ZONE: the zone where your disk is located.
  • DISK_NAME: the name of the disk to resize.
  • DISK_SIZE: the new size, in gigabytes, for the disk. Boot disks and secondary disks with MBR partition tables can be resized only up to 2 TB.

After you resize the disk, you must resize the file system so that the operating system can access the additional space.

Resizing the file system and partitions on a zonal persistent disk

After you resize your zonal persistent disk, you must configure the file system on the disk to use the additional disk space. If the disk has a partition table, such as a boot disk, you must grow the partition and resize the file system on that partition. If your zonal persistent disk has only a file system and no partition table, you can just resize the file system.

Alternatively, instances that use the most recent versions of Public images can automatically resize their partitions and file systems after a system reboot. The SUSE Linux Enterprise Server (SLES) public images are the only images that don't support this feature.

Before you modify the file system on your zonal persistent disk, create a snapshot.

Linux instances

On Linux instances, connect to your instance and manually resize your partitions and file systems to use the disk space that you added. You do not need to restart your instance after you complete this manual process.

  1. Go to the VM instances page.


  2. Click the SSH button next to the instance that has the new attached disk. The browser opens a terminal connection to the instance.

  3. Identify the disk with the file system and the partition that you want to resize. If your disk is already mounted, you can use the df command and the lsblk command to compare the size of the file system and find the disk ID. In this example, the /dev/sda1 partition is on a resized 20-GB boot disk, but the partition table and the file system provide only 9.7 GB to the operating system. Additionally, the /dev/sdb secondary disk has no partition table, but the file system on that disk provides only 250 GB to the operating system. Also, in this example /mnt/disks/disk-1 is the mount directory.

    $ sudo df -Th
    
    Filesystem      Type     Size   Used  Avail  Use%  Mounted on
    /dev/sda1       ext4     9.7G   1.2G   8.5G   12%  /
    /dev/sdb        ext4     250G    60M   250G    1%  /mnt/disks/disk-1
    
    
    $ sudo lsblk
    
    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda       8:0    0   20G  0 disk
    └─sda1    8:1    0   10G  0 part /
    sdb       8:16   0  500G  0 disk /mnt/disks/disk-1
    
    

    In this example, the df command shows that the /dev/sda1 partition is mounted as the root file system, and the /dev/sdb disk is mounted as a secondary disk at /mnt/disks/disk-1.

    If your disk has a file system written to it, and no partition table, you can skip step 4, which describes how to install the growpart utility and how to grow a partition.

  4. If the disk that you want to resize has a partition table, you must grow the partition before you resize the file system. Use growpart to resize your image partition.

    1. To install growpart on Debian servers, run:

      $ sudo apt -y install cloud-guest-utils
    2. To install growpart on CentOS servers, run:

      $ sudo yum -y install cloud-utils-growpart

      For help with the tool after installation, run growpart -h.

    3. After you install the growpart tool, you can grow the partition.

      $ sudo growpart /dev/DEVICE_ID PARTITION_NUMBER

      Replace the following:

      • DEVICE_ID: the device ID.
      • PARTITION_NUMBER: the partition number for that device. For example, sudo growpart /dev/sda 1. Note that there is a space between the device ID and the partition number.
  5. Extend the file system on the disk or partition to use the added space.

    If you are using ext4, use the resize2fs command. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.

    $ sudo resize2fs /dev/DEVICE_IDPARTITION_NUMBER

    Replace the following:

    • DEVICE_ID: the device ID.
    • PARTITION_NUMBER: the partition number for the device where you are resizing the file system. For example, /dev/sda points to a disk, and /dev/sda1 points to the first partition on that disk.

    If you are using xfs, use the xfs_growfs command to extend the file system, and specify the mount point instead of the device ID:

    $ sudo xfs_growfs /mnt/disks/disk-1

    disk-1 is the mount point.

  6. Optionally, use the df command to verify that the file system is resized.

    $ df -h /dev/DEVICE_ID
    
    Filesystem        Size  Used Avail Use% Mounted on
    /dev/DEVICE_ID  493G   70M  492G   1% /mnt/disks/disk-1
    

    Replace DEVICE_ID with the device ID for the disk where you want to view the file system configuration.

Windows instances

Use the Windows Disk Management tool to resize partitions on a Windows instance.

  1. Go to the VM instances page.


  2. Click the RDP button next to the instance that has the resized disk. The browser opens an RDP connection to the instance.

  3. Right-click the Windows Start button and select Disk Management to open the Disk Management tool.


  4. Refresh the Disk Management tool so that it recognizes the additional space on your zonal persistent disk. At the top of the Disk Management window, click Action and select Refresh.


  5. On the disk that you resized, right-click the formatted partition and select Extend Volume.


  6. Follow the instructions in the Extend Volume Wizard to extend your existing partition to include the extra disk space. If the existing partition is formatted in NTFS, the maximum partition size is limited by its cluster size settings.

  7. After you complete the wizard and the volume extension finishes, check the Status column on the list of attached disks to ensure that the disk has a Healthy status.


You can now use the extra zonal persistent disk space to store data.

Recovering an inaccessible instance or a full boot disk

If an instance is completely out of disk space or if it is not running a Linux guest environment, then automatically resizing your root filesystem isn't possible, even after you've increased the size of the persistent disk that backs it. If you can't connect to your instance, or your boot disk is full and you can't resize it, you must create a new instance and recreate the boot disk from a snapshot to resize it.

You must know the size of the boot disk that you're recreating. You can find the disk's size in the Google Cloud Console.
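
Alternatively, you can read the size with the gcloud tool; the disk name and zone below are placeholders for your own values:

gcloud compute disks describe BOOT_DISK_NAME \
  --zone ZONE \
  --format="value(sizeGb)"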

  1. Go to the VM instances page.


    1. Click the instance name to open the VM instance details page.
    2. Click Stop.
    3. In the Boot disk section, note the boot disk's size and name.
  2. In the Google Cloud Console, go to the Create a snapshot page.


    1. Enter a snapshot Name.
    2. Select the boot disk from the Source disk drop-down list.
    3. Click Create.
  3. Go to the Create an instance page.


  4. Enter the instance details.

  5. Create a new boot disk from the snapshot of the old boot disk.

    1. Under Boot disk, select Change.
    2. Select Snapshots.
    3. Select the snapshot of the old boot disk from the Snapshot drop-down list.
    4. Select the Boot disk type.
    5. Enter the new size for the disk.
    6. Click Select to confirm your disk options.
  6. Click Create.

  7. If the operating system doesn't automatically resize the file system on the new boot disk, resize the file system so that it uses the added space.
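
If you prefer the command line, the same recovery flow can be sketched with gcloud; all names, sizes, and zones below are placeholders for your own values:

# Stop the instance, snapshot its boot disk, and recreate the
# disk from the snapshot at a larger size.
gcloud compute instances stop example-instance --zone us-central1-a
gcloud compute disks snapshot example-boot-disk \
  --snapshot-names example-boot-snap --zone us-central1-a
gcloud compute disks create example-boot-disk-new \
  --source-snapshot example-boot-snap --size 100GB --zone us-central1-a

# Create a new instance that boots from the recreated disk.
gcloud compute instances create example-instance-new \
  --disk name=example-boot-disk-new,boot=yes --zone us-central1-a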

Setting the auto-delete state of a zonal persistent disk

You can automatically delete read/write zonal persistent disks when the associated VM instance is deleted. This behavior is controlled by the autoDelete property on the VM instance for a given attached zonal persistent disk, and you can update it at any time. To prevent a zonal persistent disk from being deleted, set its autoDelete value to false.

Console

  1. Go to the VM instances page.


  2. Click the name of the instance that has the attached disks. The VM instance details page opens.

  3. Click Edit.

  4. Scroll to Additional disks.

  5. Click the pencil icon to edit the disk's Deletion rule.

  6. Click Done to save your changes.

  7. Click Save to update your instance.

gcloud

To set the auto-delete state of a zonal persistent disk, use the gcloud compute instances set-disk-auto-delete command:

gcloud compute instances set-disk-auto-delete example-instance \
  [--auto-delete|--no-auto-delete] \
  --disk example-disk
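
For example, to keep example-disk when example-instance is deleted, you might run:

gcloud compute instances set-disk-auto-delete example-instance \
  --no-auto-delete \
  --disk example-disk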

API

If you are using the API, make a POST request to the following URI:

https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/example-instance/setDiskAutoDelete?deviceName=example-disk&autoDelete=true

If you are using the client library, use the instances().setDiskAutoDelete method:

def setAutoDelete(gce_service, auth_http):
  request = gce_service.instances().setDiskAutoDelete(
      project='example-project', zone='us-central1-f',
      instance='example-instance', deviceName='my-new-device',
      autoDelete=True)
  response = request.execute(http=auth_http)

  print(response)

Share a zonal persistent disk between multiple instances

You can attach a non-boot persistent disk to more than one virtual machine instance in read-only mode, which allows you to share static data between multiple instances. Sharing static data between multiple instances from one persistent disk is cheaper than replicating your data to unique disks for individual instances.

If you attach a persistent disk to multiple instances, all of those instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode. If you need to share dynamic storage space between multiple instances, consider another storage option, such as Filestore, Cloud Storage, or a network file server hosted on Compute Engine.

If you have a persistent disk with data that you want to share between multiple instances, detach it from any read-write instances and attach it to one or more instances in read-only mode.
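
As a minimal sketch of that hand-off, with hypothetical instance and disk names:

gcloud compute instances detach-disk vm-writer --disk shared-disk
gcloud compute instances attach-disk vm-reader-1 --disk shared-disk --mode ro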

Console

  1. Go to the VM instances page to see the list of instances in your project.


  2. In the Name column, click the name of the instance where you want to attach the disk. The VM instance details page opens.

  3. On the instance details page, click Edit.

  4. In the Additional disks section, click one of the following:

    1. Add a disk to add a disk in read-only mode to the instance.
    2. Attach existing disk to select an existing disk and attach it in read-only mode to your instance.
  5. Specify other options for your disk.

  6. Click Done to apply the changes.

  7. Click Save to apply your changes to the instance.

  8. Connect to the instance and mount the disk.

  9. Repeat this process to add the disk to other instances in read-only mode.

gcloud

In the gcloud tool, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk INSTANCE_NAME \
  --disk DISK_NAME \
  --mode ro

Replace the following:

  • INSTANCE_NAME: the name of the instance where you want to attach the zonal persistent disk
  • DISK_NAME: the name of the disk that you want to attach

After you attach the disk, connect to the instance and mount the disk.

Repeat this command for each instance where you want to add this disk in read-only mode.
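
For example, a small loop over hypothetical instance names:

for vm in vm-1 vm-2 vm-3; do
  gcloud compute instances attach-disk "$vm" \
    --disk DISK_NAME \
    --mode ro
done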

API

In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
 "source": "zones/ZONE/disks/DISK_NAME",
 "mode": "READ_ONLY"
}

Replace the following:

  • INSTANCE_NAME: the name of the instance where you want to attach the zonal persistent disk.
  • PROJECT_ID: your project ID.
  • ZONE: the zone where your disk is located.
  • DISK_NAME: the name of the disk that you are attaching.

After you attach the disk, connect to the instance and mount the disk.

Repeat this request for each instance where you want to add this disk in read-only mode.

Change the type of your persistent disk

Persistent disk pricing and performance depend on the disk type. To change the type of your persistent disk, use snapshots. For example, to change a standard persistent disk to an SSD persistent disk, use the following process:

Console

  1. Create a snapshot of your standard persistent disk.
  2. Create a new persistent disk based on the snapshot. From the Type drop-down list, select "SSD persistent disk".

gcloud

  1. Create a snapshot of your standard persistent disk.
  2. Create a new persistent disk based on the snapshot. Include the --type flag and specify pd-ssd.
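
For example, with hypothetical disk and snapshot names in zone us-central1-a, those two steps might look like:

gcloud compute disks snapshot example-std-disk \
  --snapshot-names example-snap --zone us-central1-a

gcloud compute disks create example-ssd-disk \
  --source-snapshot example-snap --type pd-ssd --zone us-central1-a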

API

  1. Create a snapshot of your standard persistent disk.
  2. Create a new persistent disk based on the snapshot. In the type field, specify "zones/ZONE/diskTypes/pd-ssd" and replace ZONE with the zone where your instance and new disk are located.
