This page explains how to add and format new zonal persistent disks on your instances. This page also explains how to resize both zonal persistent boot disks and secondary (non-boot) zonal persistent disks.
Zonal persistent disks are available as either standard hard disk drives (HDD) or solid-state drives (SSD). For more general information about zonal persistent disks and the types of persistent disks that are available, read the persistent disks overview. If either zonal standard persistent disks or zonal SSD persistent disks don't meet all of your performance or flexibility requirements, you can add other storage options to your instances.
You can either create blank disks, or create disks from a source. If you create a blank disk, your new zonal persistent disks start with no data or file systems. You must format those disks yourself after you attach them to your instances.
You can create new persistent disks from the following sources:
Existing persistent disks: Create a new persistent disk directly from another persistent disk. Use this option if you need an instantly attachable copy of an existing non-boot persistent disk.
Snapshots: Create a new non-boot persistent disk from a source snapshot. Use this option to restore data from a persistent disk that you've backed up using snapshots.
Images: Create new boot persistent disks from a source image. Use this option to create a boot disk for a new VM or to create a standalone boot persistent disk.
Restrictions
You can only resize a zonal persistent disk to increase its size. You cannot reduce the size of a zonal persistent disk. Compute Engine manages the hardware behind zonal persistent disks, so that you can add and resize your disks without handling striping or redundancy. Either attach one large secondary disk and resize it as you require additional space, or attach multiple smaller disks to separate your data into multiple volumes.
It is a best practice to back up your disks using snapshots to prevent unintended data loss.
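For example, you can snapshot a disk with a single gcloud command. This is a minimal sketch; the disk name, snapshot name, and zone are placeholders for your own values:

$ gcloud compute disks snapshot disk-1 \
    --snapshot-names disk-1-backup \
    --zone us-central1-a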
Before you begin
- If you want to use the command-line examples in this guide:
- Install or update to the latest version of the gcloud command-line tool.
- Set a default region and zone.
- If you want to use the API examples in this guide, set up API access.
- Read about the different types of persistent disks.
Adding a blank zonal persistent disk to your instance
Create a zonal standard persistent disk or a zonal SSD persistent disk and attach it to an existing instance. If you do not have any instances, create and start a new instance. During instance creation, you can attach up to 127 secondary non-boot zonal persistent disks, on which you can store, in separate volumes, your applications, data files, databases, and logs. You can have a total attached capacity of 257 TB per instance. For information about how to ensure maximum performance with large volumes, see Larger logical volume performance.
Create and attach a zonal persistent disk by using the Google Cloud Console, the gcloud command-line tool, or the Compute Engine API.
Console
Create and attach a zonal persistent disk in the Google Cloud Console:
Go to the VM instances page.
Click the name of the instance where you want to add a disk.
On the VM instance details page, click Edit.
Under Additional disks, click Add new disk.
Specify a name for the disk, configure the disk's properties, and select Blank as the Source type.
Click Done to complete the disk's configuration.
Click Save to apply your changes to the instance and add the new disk.
After you create or attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.
gcloud
Create and attach a zonal persistent disk by using the gcloud tool:

Use the gcloud compute disks create command to create a zonal persistent disk. If you need a zonal SSD persistent disk for greater throughput or IOPS, include the --type flag and specify pd-ssd.

gcloud compute disks create DISK_NAME \
    --size DISK_SIZE \
    --type DISK_TYPE
Replace the following:
- DISK_NAME: the name of the new disk.
- DISK_SIZE: the size, in gigabytes, of the new disk. Acceptable sizes range, in 1 GB increments, from 10 GB to 65,536 GB inclusive.
- DISK_TYPE: the type of the persistent disk. For example, pd-ssd.
After you create the disk, attach it to any running or stopped instance. Use the gcloud compute instances attach-disk command:

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME
Replace the following:
- INSTANCE_NAME: the name of the instance where you are adding the new zonal persistent disk
- DISK_NAME: the name of the new disk that you are attaching to the instance
After you create and attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.
Use the gcloud compute disks describe command to see a description of your disk.
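For example, the following command describes a hypothetical disk named disk-1 in us-central1-a; substitute your own disk name and zone:

$ gcloud compute disks describe disk-1 --zone us-central1-a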
API
In the API, construct a POST request to create a zonal persistent disk by using the disks.insert method. Include the name, sizeGb, and type properties. To create this disk as an empty and unformatted non-boot disk, do not specify a source image or a source snapshot.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks
{
  "name": "DISK_NAME",
  "sizeGb": "DISK_SIZE",
  "type": "zones/ZONE/diskTypes/DISK_TYPE"
}
Replace the following:
- PROJECT_ID: your project ID.
- ZONE: the zone where your instance and new disk are located.
- DISK_NAME: the name of the new disk.
- DISK_SIZE: the size, in gigabytes, of the new disk. Acceptable sizes range, in 1 GB increments, from 10 GB to 65,536 GB inclusive.
- DISK_TYPE: the type of the persistent disk. For example, pd-ssd.
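One way to issue this request from a shell is with curl, using an access token from the gcloud tool. This is a minimal sketch, not the only way to authenticate, and all values in the body and URL are placeholders:

$ curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"name": "DISK_NAME", "sizeGb": "DISK_SIZE", "type": "zones/ZONE/diskTypes/DISK_TYPE"}' \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks"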
Construct a POST request to the compute.instances.attachDisk method, and include the URL to the zonal persistent disk that you just created:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
{
  "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
}
Replace the following:
- PROJECT_ID: your project ID
- ZONE: the zone where your instance and new disk are located
- INSTANCE_NAME: the name of the instance where you are adding the new persistent disk
- DISK_NAME: the name of the new disk
After you create and attach a new disk to an instance, you must format and mount the disk so that the operating system can use the available storage space.
Formatting and mounting a zonal persistent disk
A new blank zonal persistent disk starts with no data or file system. You must format this disk yourself after you attach it to your instance. The formatting process is different between a Linux instance and a Windows instance.
Linux instances
Format and mount the new disk on your Linux instance. You can use any partition format and configuration that you need, but we recommend a single ext4 file system without a partition table. You can resize your disk later if you need more storage space.
Go to the VM instances page.
Click the SSH button next to the instance that has the new attached disk. The browser opens a terminal connection to the instance.
In the terminal, use the lsblk command to list the disks that are attached to your instance and find the disk that you want to format and mount.

$ sudo lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   10G  0 disk
└─sda1   8:1    0   10G  0 part /
sdb      8:16   0  250G  0 disk
In this example, sdb is the device ID for the new zonal persistent disk.

Format the disk. You can use any file format that you need, but we recommend a single ext4 file system without a partition table. If you resize the zonal persistent disk later, you can resize the file system without having to modify disk partitions.

Format the disk using the mkfs tool. This command deletes all data from the specified disk, so make sure that you specify the disk device correctly. To maximize disk performance, use the recommended formatting options in the -E flag. It is not necessary to reserve space for root on this secondary disk, so specify -m 0 to use all of the available disk space.

$ sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/DEVICE_ID
Replace DEVICE_ID with the device ID of the zonal persistent disk that you are formatting. For this example, specify sdb to format the entire disk with no partition table.

Create a directory that serves as the mount point for the new disk. You can use any directory that you like. The following example creates a directory under /mnt/disks/.

$ sudo mkdir -p /mnt/disks/MNT_DIR
Replace MNT_DIR with the directory at which to mount your zonal persistent disk.

Use the mount tool to mount the disk to the instance, and enable the discard option:

$ sudo mount -o discard,defaults /dev/DEVICE_ID /mnt/disks/MNT_DIR
Replace the following:
- DEVICE_ID: the device ID of the zonal persistent disk to mount
- MNT_DIR: the directory in which to mount your zonal persistent disk
Configure read and write permissions on the device. For this example, grant write access to the device for all users.
$ sudo chmod a+w /mnt/disks/MNT_DIR
Replace MNT_DIR with the directory where you mounted your zonal persistent disk.

Optionally, you can add the zonal persistent disk to the /etc/fstab file, so that the device automatically mounts again when the instance restarts.

Create a backup of your current /etc/fstab file.

$ sudo cp /etc/fstab /etc/fstab.backup
Use the blkid command to find the UUID for the zonal persistent disk. The system generates this UUID when you format the disk. Use UUIDs to mount zonal persistent disks because UUIDs do not change when you move disks between systems.

$ sudo blkid /dev/DEVICE_ID
/dev/DEVICE_ID: UUID="UUID_VALUE" TYPE="ext4"
Replace the following:
- DEVICE_ID: the device ID of the zonal persistent disk that you want to automatically mount. If you created a partition table on the disk, specify the partition that you want to mount.
- UUID_VALUE: the UUID of the zonal persistent disk that you must include in the /etc/fstab file.
Open the /etc/fstab file in a text editor and create an entry that includes the UUID. For example:

UUID=UUID_VALUE /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
Replace the following:
- UUID_VALUE: the UUID of the zonal persistent disk that you must include in the /etc/fstab file.
- MNT_DIR: the directory where you mounted your zonal persistent disk.
- NOFAIL_OPTION: a variable that specifies what the operating system does if it cannot mount the zonal persistent disk at boot time. To let the system boot even if the persistent disk is unavailable, use the nofail option for most distributions, or the nobootwait option for Ubuntu 12.04 and Ubuntu 14.04.
Optionally, you can complete this step with a single command. For example, the following command creates an entry in /etc/fstab to mount the /dev/sdb zonal persistent disk at /mnt/disks/MNT_DIR using its UUID.

$ echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2 | sudo tee -a /etc/fstab
UUID=c994cf26-1853-48ab-a6a5-9d7c0250fed4 /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
Use the cat command to verify that your /etc/fstab entries are correct:

$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults 0 0
UUID=d761bdc5-7e2a-4529-a393-b9aefdb623b6 /mnt/disks/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
If you detach this zonal persistent disk or create a snapshot from the boot disk for this instance, edit the /etc/fstab file and remove the entry for this zonal persistent disk. Even with NOFAIL_OPTION set to nofail or nobootwait, keep the /etc/fstab file in sync with the devices that are attached to your instance, and remove these entries before you create your boot disk snapshot or when you detach zonal persistent disks.
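For reference, the preceding steps can be combined into a short shell session. This is a minimal sketch that assumes the new disk is /dev/sdb, the mount point is /mnt/disks/disk-1, and nofail is the right boot option for your distribution; adjust these values for your instance:

$ sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb  # format (destroys existing data)
$ sudo mkdir -p /mnt/disks/disk-1                                                 # create the mount point
$ sudo mount -o discard,defaults /dev/sdb /mnt/disks/disk-1                       # mount with discard enabled
$ sudo chmod a+w /mnt/disks/disk-1                                                # grant write access to all users
$ echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/disk-1 ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab  # remount at boot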
Windows instances
Use Windows Disk Management to format and mount the new disk on a Windows instance.
Go to the VM instances page.
Click the RDP button next to the instance that has the new attached disk. The browser opens an RDP connection to the instance.
Right-click the Windows Start button and select Disk Management.
If you have not initialized the zonal persistent disk before, Disk Management prompts you to select a partitioning scheme for the new disk. Select GPT and click OK.
After the disk initializes, right-click the unallocated disk space and select New Simple Volume.
Follow the instructions in the New Simple Volume Wizard to configure the new volume. You can use any partition format that you like, but for this example select NTFS. Also, check Perform a quick format to speed up the formatting process. Optionally, set the cluster size in the Allocation unit size field. The cluster size limits the maximum size of the partition. Keep this in mind if you try to resize the zonal persistent disk and this partition later.

After you complete the wizard and the volume is formatted, check the Status column on the list of attached disks to ensure that the new disk has a Healthy status.
You can now write files to the zonal persistent disk.
Creating a persistent disk clone from a source disk
You can create a new persistent disk clone from an existing persistent disk, even if the existing disk is attached to a VM instance. After you clone a source disk, you can delete the source disk without any risk that the clone will be deleted.
Disk clones are useful for duplicating production data to debug without disturbing production, duplicating disks while scaling out your VMs, and creating replicas for database backup verification. You can also use disk clones to move non-boot disk data to a new project. For scenarios where data protection is required for additional resilience, such as backup and disaster recovery, we recommend using snapshots instead of disk clones.
Restrictions
- The zone, region, and disk type of the clone must be the same as that of the source disk.
- You cannot create a zonal disk clone from a regional disk. You cannot create a regional disk clone from a zonal disk.
- The size of the clone must be at least the size of the source disk. If you create a clone using the Google Cloud Console, you cannot specify a disk size, and the clone is the same size as the source disk.
- If you use a customer-supplied encryption key or a customer-managed encryption key to encrypt the source disk, you must use the same key to encrypt the clone. For more information, see Creating a clone of an encrypted source disk.
- You can create at most one clone of a given source disk every 30 seconds.
- You can create at most 1000 total disk clones of a given source disk. Exceeding this limit returns an internalError.
Creating a disk clone
Console
In the Google Cloud Console, go to the Disks page to see a list of zonal persistent disks in your project.
Find the disk that you want to clone.
Click the menu button under Actions and select Clone disk.
- In the Clone disk panel, specify a name for the new disk.
- Under Properties, review other details for the new disk.
- Click Save.
gcloud
In the gcloud tool, use the disks create command and specify the --source-disk flag. The following example clones the source disk to a new disk in a different project.

gcloud compute disks create projects/TARGET_PROJECT_ID/zones/ZONE/disks/TARGET_DISK_NAME \
    --description="cloned disk" \
    --source-disk=projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME
API
In the API, construct a POST request to the compute.disks.insert method. In the request body, specify the name and sourceDisk parameters. The clone inherits all omitted properties from the source disk. The following example clones the source disk to a new disk in a different project.

POST https://compute.googleapis.com/compute/v1/projects/TARGET_PROJECT_ID/zones/ZONE/disks
{
  "name": "TARGET_DISK_NAME",
  "sourceDisk": "projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME"
}
Creating a clone of an encrypted source disk
If you use a customer-supplied encryption key to encrypt your source disk, you must also use the same key to encrypt the clone.
Console
Under Decryption and encryption, provide the source disk encryption key.
gcloud
Provide the source disk encryption key using the --csek-key-file flag when you create the disk clone. If you are using an RSA-wrapped key, use the gcloud beta component:

gcloud beta compute disks create projects/TARGET_PROJECT_ID/zones/ZONE/disks/TARGET_DISK_NAME \
    --description="cloned disk" \
    --source-disk=projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME \
    --csek-key-file example-key-file.json
API
Provide the source disk encryption key using the diskEncryptionKey property.

POST https://compute.googleapis.com/compute/beta/projects/TARGET_PROJECT_ID/zones/ZONE/disks
{
  "name": "TARGET_DISK_NAME",
  "sourceDisk": "projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME",
  "diskEncryptionKey": {
    "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA=="
  }
}
If you use a customer-managed encryption key to encrypt your source disk, you must also use the same key to encrypt the clone.
Console
Compute Engine automatically encrypts the clone using the source disk encryption key.
gcloud
Provide the key for the source disk using the --kms-key flag when you create the disk clone. If you are using an RSA-wrapped key, use the gcloud beta component:

gcloud beta compute disks create projects/TARGET_PROJECT_ID/zones/ZONE/disks/TARGET_DISK_NAME \
    --description="cloned disk" \
    --source-disk=projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME \
    --kms-key projects/KMS_PROJECT_ID/locations/REGION/keyRings/KEY_RING/cryptoKeys/KEY
API
Provide the key for the source disk using the kmsKeyName property when you create the disk clone.

POST https://compute.googleapis.com/compute/beta/projects/TARGET_PROJECT_ID/zones/ZONE/disks
{
  "name": "TARGET_DISK_NAME",
  "sourceDisk": "projects/SOURCE_PROJECT_ID/zones/ZONE/disks/SOURCE_DISK_NAME",
  "diskEncryptionKey": {
    "kmsKeyName": "projects/KMS_PROJECT_ID/locations/REGION/keyRings/KEY_RING/cryptoKeys/KEY"
  }
}
Resizing a zonal persistent disk
You can resize zonal persistent disks when your instances require more storage, and attach multiple secondary disks only when you need to separate your data into unique partitions.
You can resize disks at any time, whether or not the disk is attached to a running instance.
Resizing a disk doesn't delete or modify disk data, but as a best practice, snapshot your disk before you make any changes.
Console
Go to the Disks page to see a list of zonal persistent disks in your project.
Click the name of the disk that you want to resize.
On the disk details page, click Edit.
In the Size field, enter the new size for your disk. Boot disks and secondary disks with MBR partition tables can resize only up to 2 TB.
Click Save to apply your changes to the disk.
After you resize the disk, you must resize the file system so that the operating system can access the additional space.
gcloud
In the gcloud tool, use the disks resize command and specify the --size flag with the desired disk size, in gigabytes.

gcloud compute disks resize DISK_NAME --size DISK_SIZE
Replace the following:
- DISK_NAME: the name of the disk that you are resizing.
- DISK_SIZE: the new size, in gigabytes, for the disk. Boot disks and secondary disks with MBR partition tables can resize only up to 2 TB.
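For example, to grow a hypothetical disk named disk-1 in us-central1-a to 500 GB (the name, zone, and size here are placeholders):

$ gcloud compute disks resize disk-1 --size 500 --zone us-central1-a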
After you resize the disk, you must resize the file system so that the operating system can access the additional space.
API
In the API, construct a POST request to the compute.disks.resize method. In the request body, specify the sizeGb parameter and set it to the desired disk size, in gigabytes.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME/resize
{
  "sizeGb": "DISK_SIZE"
}
Replace the following:
- PROJECT_ID: your project ID.
- ZONE: the zone where your disk is located.
- DISK_NAME: the name of the disk to resize.
- DISK_SIZE: the new size, in gigabytes, for the disk. Boot disks and secondary disks with MBR partition tables can resize only up to 2 TB.
After you resize the disk, you must resize the file system so that the operating system can access the additional space.
Resizing the file system and partitions on a zonal persistent disk
After you resize your zonal persistent disk, you must configure the file system on the disk to use the additional disk space. If the disk has a partition table, such as a boot disk, you must grow the partition and resize the file system on that partition. If your zonal persistent disk has only a file system and no partition table, you can just resize the file system.
Alternatively, instances that use the most recent versions of public images can automatically resize their partitions and file systems after a system reboot. The SUSE Linux Enterprise Server (SLES) public images are the only images that don't support this feature.
Before you modify the file system on your zonal persistent disk, create a snapshot.
Linux instances
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the disk space that you added. You do not need to restart your instance after you complete this manual process.
Go to the VM instances page.
Click the SSH button next to the instance that has the new attached disk. The browser opens a terminal connection to the instance.
Identify the disk with the file system and the partition that you want to resize. If your disk is already mounted, you can use the df command and the lsblk command to compare the size of the file system and find the disk ID. In this example, the /dev/sda1 partition is on a resized 20-GB boot disk, but the partition table and the file system provide only 9.7 GB to the operating system. Additionally, the /dev/sdb secondary disk has no partition table, but the file system on that disk provides only 250 GB to the operating system. In this example, /mnt/disks/disk-1 is the mount directory.

$ sudo df -Th
Filesystem  Type  Size  Used  Avail  Use%  Mounted on
/dev/sda1   ext4  9.7G  1.2G  8.5G   12%   /
/dev/sdb    ext4  250G   60M  250G    1%   /mnt/disks/disk-1

$ sudo lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
└─sda1   8:1    0   10G  0 part /
sdb      8:16   0  500G  0 disk /mnt/disks/disk-1
In this example, the df command shows that the /dev/sda1 partition is mounted as the root file system, and the /dev/sdb disk is mounted as a secondary disk at /mnt/disks/disk-1.

If your disk has a file system written to it and no partition table, you can skip the next step, which describes how to install the growpart utility and grow a partition.

If the disk that you want to resize has a partition table, you must grow the partition before you resize the file system. Use growpart to resize your image partition.

To install growpart on Debian servers, run:

$ sudo apt -y install cloud-guest-utils
To install growpart on CentOS servers, run:

$ sudo yum -y install cloud-utils-growpart

For help with the tool after installation, run growpart -h.

After you install the growpart tool, you can grow the partition.

$ sudo growpart /dev/DEVICE_ID PARTITION_NUMBER
Replace the following:
- DEVICE_ID: the device ID.
- PARTITION_NUMBER: the partition number for that device. For example, sudo growpart /dev/sda 1. Notice that there is a space between the device ID and the partition number.
Extend the file system on the disk or partition to use the added space.

If you are using ext4, use the resize2fs command. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.

$ sudo resize2fs /dev/DEVICE_IDPARTITION_NUMBER
Replace the following:
- DEVICE_ID: the device ID.
- PARTITION_NUMBER: the partition number for the device where you are resizing the file system. For example, /dev/sda points to a disk, and /dev/sda1 points to the first partition on that disk.
If you are using xfs, use the xfs_growfs command to extend the file system, and specify the mount point instead of the device ID:

$ sudo xfs_growfs /mnt/disks/disk-1

In this example, /mnt/disks/disk-1 is the mount point.
Optionally, use the df command to verify that the file system is resized.

$ df -h /dev/DEVICE_ID
Filesystem      Size  Used Avail Use% Mounted on
/dev/DEVICE_ID  493G   70M  492G   1% /mnt/disks/disk-1

Replace DEVICE_ID with the device ID for the disk where you want to view the file system configuration.
Windows instances
Use the Windows Disk Management tool to resize partitions on a Windows instance.
Go to the VM instances page.
Click the RDP button next to the instance that has the resized disk. The browser opens an RDP connection to the instance.
Right-click the Windows Start button and select Disk Management to open the Disk Management tool.
Refresh the Disk Management tool so that it recognizes the additional space on your zonal persistent disk. At the top of the Disk Management window, click Action and select Refresh.
On the disk that you resized, right-click the formatted partition and select Extend Volume.
Follow the instructions in the Extend Volume Wizard to extend your existing partition to include the extra disk space. If the existing partition is formatted in NTFS, the maximum partition size is limited by its cluster size settings.
After you complete the wizard and the volume finishes resizing, check the Status column on the list of attached disks to ensure that the resized disk has a Healthy status.
You can now use the extra zonal persistent disk space to store data.
Recovering an inaccessible instance or a full boot disk
If an instance is completely out of disk space or if it is not running a Linux guest environment, then the instance cannot automatically resize your root filesystem at boot time, even after you've increased the size of the persistent disk that backs it. If you can't connect to your instance, or your boot disk is full and you can't resize it, you must create a new instance and recreate the boot disk from a snapshot to resize it.
You must know the size of the boot disk you're recreating. Find the size of the disk by looking on the Compute Engine console.
Go to the VM instances page.
- Click the instance name to open the VM instance details page.
- Click Stop.
- In the Boot disk section, note the boot disk's size and name.
In the Google Cloud Console, go to the Snapshots creation page.
- Enter a snapshot Name.
- Select the boot disk from the Source disk drop-down list.
- Click Create.
Go to the VM instances creation page.
Enter the instance details.
Create a new boot disk from the snapshot of the old boot disk.
- Under Boot disk, select Change.
- Select Snapshots.
- Select the snapshot of the old boot disk from the Snapshot drop-down list.
- Select the Boot disk type.
- Enter the new size for the disk.
- Click Select to confirm your disk options.
Click Create.
Mount and format the disk.
Setting the auto-delete state of a zonal persistent disk
You can automatically delete read/write zonal persistent disks when the associated VM instance is deleted. This behavior is controlled by the autoDelete property on the VM instance for a given attached zonal persistent disk, and you can update it at any time. To prevent a zonal persistent disk from being deleted, set its autoDelete value to false.
Console
Go to the VM instances page.
Check the box next to the instance that has the disks associated with it.
Click the instance name.
The VM instance details page appears.
Click Edit.
Scroll to Additional disks.
Click the pencil to edit the disk's Deletion Rule.
Click Done to save your changes.
Click Save to update your instance.
gcloud
To set the auto-delete state of a zonal persistent disk, use the
gcloud compute instances set-disk-auto-delete
command:
gcloud compute instances set-disk-auto-delete example-instance \
    [--auto-delete|--no-auto-delete] \
    --disk example-disk
API
If you are using the API, make a POST request to the following URI:

https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/example-instance/setDiskAutoDelete?deviceName=deviceName&autoDelete=true
If you are using the client library, use the instances().setDiskAutoDelete method:

def setAutoDelete(gce_service, auth_http):
    request = gce_service.instances().setDiskAutoDelete(
        project='example-project',
        zone='us-central1-f',
        deviceName='my-new-device',
        instance='example-instance',
        autoDelete=True)
    response = request.execute(http=auth_http)
    print(response)
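To verify the resulting setting, you can inspect the instance's attached disks. A sketch using the gcloud tool's --format flag; the instance name is a placeholder, and the command prints the autoDelete value of each attached disk:

$ gcloud compute instances describe example-instance \
    --format="value(disks[].autoDelete)"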
Share a zonal persistent disk between instances
You can share persistent disks between instances using the following options:
- Share a disk in read-only mode between multiple instances
- Share an SSD persistent disk in multi-writer mode between instances
Persistent disks enabled for multi-writer mode have some limitations. Read the persistent disk overview to learn more.
Share a disk in read-only mode between multiple instances
You can attach a non-boot persistent disk to more than one virtual machine instance in read-only mode, which allows you to share static data between multiple instances. Sharing static data between multiple instances from one persistent disk is cheaper than replicating your data to unique disks for individual instances.
If you need to share dynamic storage space between multiple instances, you can use one of the following options:
- Connect your instances to Cloud Storage
- Connect your instances to Filestore
- Create a network file server on Compute Engine
- Create a persistent disk with multi-writer mode enabled and attach it to up to two instances.
Console
Go to the VM instances page to see the list of instances in your project.
In the Name column, click the name of the instance where you want to attach the disk. The VM instance details page opens.
On the instance details page, click Edit.
In the Additional disks section, click one of the following:
- Add a disk to add a disk in read-only mode to the instance.
- Attach existing disk to select an existing disk and attach it in read-only mode to your instance.
Specify other options for your disk.
Click Done to apply the changes.
Click Save to apply your changes to the instance.
Connect to the instance and mount the disk.
Repeat this process to add the disk to other instances in read-only mode.
gcloud
In the gcloud tool, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk INSTANCE_NAME \
    --disk DISK_NAME \
    --mode ro
Replace the following:
- INSTANCE_NAME: the name of the instance where you want to attach the zonal persistent disk
- DISK_NAME: the name of the disk that you want to attach
After you attach the disk, connect to the instance and mount the disk.
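When you mount a disk that is attached in read-only mode, mount it read-only in the guest as well. A minimal sketch, assuming an ext4 file system on a hypothetical device /dev/sdb; the noload option skips journal recovery, which cannot run on a read-only device:

$ sudo mkdir -p /mnt/disks/disk-1
$ sudo mount -o ro,noload /dev/sdb /mnt/disks/disk-1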
Repeat this command for each instance where you want to add this disk in read-only mode.
API
In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
{
  "source": "zones/ZONE/disks/DISK_NAME",
  "mode": "READ_ONLY"
}
Replace the following:
- INSTANCE_NAME: the name of the instance where you want to attach the zonal persistent disk.
- PROJECT_ID: your project ID.
- ZONE: the zone where your disk is located.
- DISK_NAME: the name of the disk that you are attaching.
After you attach the disk, connect to the instance and mount the disk.
Repeat this request for each instance where you want to add this disk in read-only mode.
Sharing an SSD persistent disk in multi-writer mode between instances
You can share an SSD persistent disk in multi-writer mode between N2 instances in the same zone. See Persistent disk multi-writer mode for details about how this mode works. You can create and attach multi-writer persistent disks using the following process:
gcloud
Create and attach a zonal persistent disk by using the gcloud tool:

Use the gcloud beta compute disks create command to create a zonal persistent disk. Include the --multi-writer flag to indicate that the disk must be sharable between the instances in multi-writer mode.

gcloud beta compute disks create [DISK_NAME] \
    --size [DISK_SIZE] \
    --type pd-ssd \
    --multi-writer
where:
- [DISK_NAME] is the name of the new disk.
- [DISK_SIZE] is the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD persistent disks, or 200 GB to 65,536 GB for standard persistent disks in multi-writer mode.
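To confirm that the disk was created with multi-writer enabled, you can inspect it with the beta describe command. This is a sketch that assumes the beta disk resource exposes a multiWriter field; the disk name and zone are placeholders:

$ gcloud beta compute disks describe [DISK_NAME] \
    --zone us-central1-a \
    --format="value(multiWriter)"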
After you create the disk, attach it to any running or stopped instance with an N2 machine type. Use the gcloud compute instances attach-disk command:

gcloud compute instances attach-disk [INSTANCE_NAME] \
    --disk [DISK_NAME]
where:
- [INSTANCE_NAME] is the name of the N2 instance where you are adding the new zonal persistent disk.
- [DISK_NAME] is the name of the new disk that you are attaching to the instance.
Repeat the gcloud compute instances attach-disk command, but replace [INSTANCE_NAME] with the name of your second instance.
After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer persistent disk.
API
Use the Compute Engine API to create and attach an SSD persistent disk to N2 instances in multi-writer mode.
In the API, construct a POST request to create a zonal persistent disk using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, do not specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of true to indicate that the disk must be sharable between the instances in multi-writer mode.

POST https://compute.googleapis.com/compute/beta/projects/[PROJECT_ID]/zones/[ZONE]/disks
{
  "name": "[DISK_NAME]",
  "sizeGb": "[DISK_SIZE]",
  "type": "zones/[ZONE]/diskTypes/pd-ssd",
  "multiWriter": true
}
where:
- [PROJECT_ID] is your project ID.
- [ZONE] is the zone where your instance and new disk are located.
- [DISK_NAME] is the name of the new disk.
- [DISK_SIZE] is the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD persistent disks, or 200 GB to 65,536 GB for standard persistent disks in multi-writer mode.
Construct a POST request to the compute.instances.attachDisk method, and include the URL to the zonal persistent disk that you just created:

POST https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instances/[INSTANCE_NAME]/attachDisk
{
  "source": "/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/disks/[DISK_NAME]"
}
where:
- [PROJECT_ID] is your project ID.
- [ZONE] is the zone where your instance and new disk are located.
- [INSTANCE_NAME] is the name of the instance where you are adding the new persistent disk.
- [DISK_NAME] is the name of the new disk.
Repeat the compute.instances.attachDisk request, but specify the second instance instead.
After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer persistent disk.
Change the type of your persistent disk
Persistent disk pricing and performance depend on the type of the persistent disk. You can change the type of your persistent disk using snapshots. For example, to change your standard persistent disk to an SSD persistent disk, use the following process:
Console
- Create a snapshot of your standard persistent disk.
- Create a new persistent disk based on the snapshot. From the Type drop-down list, select "SSD persistent disk".
gcloud
- Create a snapshot of your standard persistent disk.
- Create a new persistent disk based on the snapshot. Include the --type flag and specify pd-ssd.
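As a concrete sketch, the following commands snapshot a hypothetical standard disk named disk-1 in us-central1-a and then create an SSD disk from that snapshot; all names and the zone are placeholders:

$ gcloud compute disks snapshot disk-1 \
    --snapshot-names disk-1-snapshot \
    --zone us-central1-a
$ gcloud compute disks create disk-1-ssd \
    --source-snapshot disk-1-snapshot \
    --type pd-ssd \
    --zone us-central1-a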
API
- Create a snapshot of your standard persistent disk.
- Create a new persistent disk based on the snapshot. In the type field, specify "zones/ZONE/diskTypes/pd-ssd" and replace ZONE with the zone where your instance and new disk are located.
What's next
- Create a snapshot of a persistent disk.
- Create an instance with local SSDs.
- Connect your instance to a Cloud Storage bucket.
- Mount a RAM disk on your instance.