Compute Engine offers several storage options for your instances. Each of the following storage options has unique price and performance characteristics:
- Zonal standard persistent disk and zonal SSD persistent disk: Efficient, reliable block storage.
- Regional standard persistent disk and regional SSD persistent disk: Regional block storage replicated in two zones.
- Local SSD: High performance, transient, local block storage.
- Cloud Storage buckets: Affordable object storage.
- Filestore: High performance file storage for Google Cloud users.
If you are not sure which option to use, the most common solution is to add a persistent disk to your instance.
By default, each Compute Engine instance has a single boot persistent disk that contains the operating system. When your apps require additional storage space, you can add one or more additional storage options to your instance. For cost comparisons, see Disk pricing.
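For example, adding a persistent disk to an existing instance is a two-step process: create the disk, then attach it. The following sketch uses the gcloud CLI; the disk name, instance name, size, and zone are illustrative placeholders.

```shell
# Create a 500 GB standard persistent disk (names and zone are illustrative).
gcloud compute disks create my-data-disk \
    --size=500GB \
    --type=pd-standard \
    --zone=us-central1-a

# Attach the new disk to an existing instance in the same zone.
gcloud compute instances attach-disk my-instance \
    --disk=my-data-disk \
    --zone=us-central1-a
```

After attaching the disk, you still need to format and mount it from within the guest operating system.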
| | Zonal standard persistent disk | Regional standard persistent disk | Zonal SSD persistent disk | Regional SSD persistent disk | Local SSD | Cloud Storage buckets |
|---|---|---|---|---|---|---|
| Storage type | Efficient and reliable block storage | Efficient and reliable block storage with synchronous replication across two zones in a region | Fast and reliable block storage | Fast and reliable block storage with synchronous replication across two zones in a region | High performance local block storage | Affordable object storage |
| Minimum capacity per disk | 10 GB | 200 GB | 10 GB | 10 GB | 375 GB | n/a |
| Maximum capacity per disk | 64 TB | 64 TB | 64 TB | 64 TB | 375 GB | n/a |
| Capacity increment | 1 GB | 1 GB | 1 GB | 1 GB | 375 GB | n/a |
| Maximum capacity per instance | 257 TB* | 257 TB* | 257 TB* | 257 TB* | 3 TB (9 TB in beta) | Almost infinite |
| Scope of access | Zone | Zone | Zone | Zone | Instance | Global |
| Data redundancy | Zonal | Multi-zonal | Zonal | Multi-zonal | None | Regional, dual-regional, or multi-regional |
| Encryption at rest | Yes | Yes | Yes | Yes | Yes | Yes |
| Custom encryption keys | Yes | Yes | Yes | Yes | No | Yes |
| Machine type support | All machine types | All machine types | Most machine types | Most machine types | Most machine types | All machine types |
| How-to | Add a standard persistent disk | Add a regional standard persistent disk | Add an SSD persistent disk | Add a regional SSD persistent disk | Add a local SSD | Connect a bucket |
In addition to the storage options that Google Cloud provides, you can deploy alternative storage solutions on your instances.
- Create a file server or distributed file system on Compute Engine to use as a network file system with NFSv3 and SMB3 capabilities.
- Mount a RAM disk within instance memory to create a block storage volume with high throughput and low latency.
Block storage resources have different performance characteristics. Consider your storage size and performance requirements to help you determine the correct block storage type for your instances.
| | Zonal standard persistent disk | Regional standard persistent disk | Zonal SSD persistent disk | Regional SSD persistent disk | Local SSD (SCSI) | Local SSD (NVMe) |
|---|---|---|---|---|---|---|
| **Maximum sustained IOPS** | | | | | | |
| Read IOPS per GB | 0.75 | 0.75 | 30 | 30 | – | – |
| Write IOPS per GB | 1.5 | 1.5 | 30 | 30 | – | – |
| Read IOPS per instance | 7,500* | 3,000* | 15,000–100,000* | 15,000–100,000* | 900,000 (beta) | 2,400,000 (beta) |
| Write IOPS per instance | 15,000* | 15,000* | 15,000–30,000* | 15,000–30,000* | 800,000 (beta) | 1,200,000 (beta) |
| **Maximum sustained throughput (MB/s)** | | | | | | |
| Read throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | – | – |
| Write throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | – | – |
| Read throughput per instance | 240–1,200* | 240* | 240–1,200* | 240–1,200* | 9,360 (beta) | 9,360 (beta) |
| Write throughput per instance | 76–400** | 38–200** | 204–800* | 102–400* | 4,680 (beta) | 4,680 (beta) |
Zonal persistent disks (standard and SSD)
Persistent disks are durable network storage devices that your instances can access like physical disks in a desktop or a server. The data on each persistent disk is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance. Standard persistent disks are backed by standard hard disk drives (HDD). SSD persistent disks are backed by solid-state drives (SSD).
Persistent disks are located independently from your virtual machine (VM) instances, so you can detach or move persistent disks to keep your data even after you delete your instances. Persistent disk performance scales automatically with size, so you can resize your existing persistent disks or add more persistent disks to an instance to meet your performance and storage space requirements.
Add a persistent disk to your instance when you need reliable and affordable storage with consistent performance characteristics.
Ease of use
Compute Engine handles most disk management tasks for you, so you don't need to deal with partitioning, redundant disk arrays, or subvolume management. Generally, you don't need to create larger logical volumes, but you can extend your secondary attached persistent disk capacity to 257 TB per instance and apply those practices to your persistent disks if you want. To save time and get the best performance, format your persistent disks with a single file system and no partition tables.
If you need to separate your data into multiple unique volumes, create additional disks rather than dividing your existing disks into multiple partitions.
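A minimal sketch of the recommended formatting approach, assuming the new disk appears in the guest as /dev/sdb (the device name and mount point are illustrative and vary by instance):

```shell
# Format the whole disk with a single ext4 file system and no partition table.
# -m 0 reserves no space for root; the -E options speed up formatting on
# persistent disks and enable discard support.
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb

# Create a mount point and mount the disk.
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/data
```

To make the mount persist across reboots, you would also add an entry to /etc/fstab, ideally keyed by the disk's UUID rather than the device name.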
Persistent disk performance is predictable and scales linearly with provisioned capacity until the limits for an instance's provisioned vCPUs are reached. For more information about performance scaling limits and optimization, see Block storage performance.
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD persistent disks. SSD persistent disks are designed for single-digit millisecond latencies. Observed latency is application specific.
Compute Engine optimizes performance and scaling on persistent disks automatically. You don't need to stripe multiple disks together or pre-warm disks to get the best performance. When you need more disk space or better performance, resize your disks and possibly add more vCPUs to add more storage space, throughput, and IOPS. Persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has.
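Because performance scales with provisioned capacity, resizing is often the simplest way to get more IOPS and throughput. A sketch with an illustrative disk name and zone:

```shell
# Grow an existing zonal persistent disk to 1 TB.
# Performance scales up with the new size; disks can only grow, never shrink.
gcloud compute disks resize my-data-disk \
    --size=1TB \
    --zone=us-central1-a
```

After resizing, you also need to grow the file system inside the guest (for example with resize2fs for ext4) before the extra space is usable.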
For boot devices, you can reduce costs by using a standard persistent disk. Small, 10 GB persistent disks can work for basic boot and package management use cases. However, to ensure consistent performance for more general use of the boot device, use either an SSD persistent disk as your boot disk or use a standard persistent disk that is at least 200 GB in size.
Each persistent disk write operation contributes to the cumulative network egress traffic for your instance. This means that persistent disk write operations are capped by the network egress cap for your instance.
Persistent disks have built-in redundancy to protect your data against equipment failure and to ensure data availability through datacenter maintenance events. Checksums are calculated for all persistent disk operations to help ensure that what you read is what you wrote.
Additionally, you can create snapshots of persistent disks to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running instances.
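Creating a snapshot is a single command; the disk, snapshot name, and zone below are illustrative.

```shell
# Snapshot a zonal persistent disk; the disk can stay attached to a
# running instance while the snapshot is created.
gcloud compute disks snapshot my-data-disk \
    --snapshot-names=my-data-disk-snapshot \
    --zone=us-central1-a
```

Because snapshots are incremental, subsequent snapshots of the same disk store only the blocks that changed since the previous snapshot.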
Persistent disk encryption
Compute Engine automatically encrypts your data before it travels outside of your instance to persistent disk storage space. Each persistent disk remains encrypted either with system-defined keys or with customer-supplied keys. Google distributes persistent disk data across multiple physical disks in a manner that users do not control.
When you delete a persistent disk, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible.
If you want to control the encryption keys that are used to encrypt your data, create your disks with your own encryption keys.
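A sketch of creating a disk with a customer-supplied encryption key (CSEK); the disk name, zone, and key file path are illustrative, and the key file must contain your own base64-encoded 256-bit key:

```shell
# Create a disk encrypted with a customer-supplied key.
# my-key-file.json maps the disk resource to your raw key material.
gcloud compute disks create my-encrypted-disk \
    --size=200GB \
    --zone=us-central1-a \
    --csek-key-file=my-key-file.json
```

Any later operation on the disk (attaching it, snapshotting it) must supply the same key file, because Google does not store customer-supplied keys.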
You cannot attach a persistent disk to an instance in another project.
Instances with shared-core machine types are limited to a maximum of 16 persistent disks.
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.
Most instances can have up to 128 persistent disks and up to 257 TB of total persistent disk space attached. Total persistent disk space for an instance includes the size of the boot disk.
Shared-core machine types are limited to 16 persistent disks and 3 TB of total persistent disk space.
Creating logical volumes larger than 64 TB might require special consideration. For more information see larger logical volume performance.
Regional persistent disks (standard and SSD)
Regional persistent disks have storage qualities that are similar to zonal persistent disks (standard and SSD). However, regional persistent disks provide durable storage and replication of data between two zones in the same region. If you are designing robust systems on Compute Engine, consider using regional persistent disks to maintain high availability for resources across multiple zones. Regional persistent disks provide synchronous replication for workloads that might not have application-level replication.
Regional persistent disks are designed for workloads that require redundancy across multiple zones with failover capabilities. Regional persistent disks are also designed to work with regional managed instance groups. Regional persistent disks are an option for high performance databases and enterprise apps that also require high availability.
In the unlikely event of a zonal outage, you can fail over your workload running on regional persistent disks to another zone by using the --force-attach flag with the attach-disk command. The --force-attach flag lets you attach the regional persistent disk to a standby VM instance even if the disk can't be detached from the original VM because that VM is unavailable.
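A sketch of the failover attach; the instance name, disk name, and zone are illustrative placeholders for your standby VM and regional disk:

```shell
# Force-attach a regional persistent disk to a standby VM in the
# surviving zone, even though the disk can't be cleanly detached
# from the original (unavailable) VM.
gcloud compute instances attach-disk my-standby-instance \
    --disk=my-regional-disk \
    --disk-scope=regional \
    --force-attach \
    --zone=us-central1-b
```

The --disk-scope=regional flag tells gcloud to look up the disk as a regional resource rather than a zonal one.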
If both replicas are available, a write is acknowledged back to a VM when it is durably persisted in both replicas. If one of the replicas is unavailable, a write is acknowledged after it is durably persisted in the healthy replica. When the unhealthy replica is back up (as detected by Compute Engine), then it is transparently brought in sync with the healthy replica, and the fully synchronous mode of operation resumes. This operation is transparent to a VM.
In the rare event that both replicas become unavailable at the same time, or the healthy replica becomes unavailable while another one is being brought into sync, the corresponding disk becomes unavailable.
Regional persistent disks are an option when write performance is less critical than data redundancy across multiple zones.
Like standard persistent disks, regional persistent disks can achieve greater IOPS and throughput performance on instances with a greater number of vCPUs. For more information about this and other limitations, see SSD persistent disk performance limits.
When you need more disk space or better performance, you can resize your regional disks to add more storage space, throughput, and IOPS.
Compute Engine replicates data of your regional persistent disk to the zones you selected when you created your disks. The data of each replica is spread across multiple physical machines within the zone to ensure redundancy.
Similar to zonal persistent disks, you can create snapshots of regional persistent disks to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running instances.
Regional persistent disk limits are similar to zonal persistent disks. However, regional standard persistent disks have a 200 GB size minimum.
For information about performance limits, see Block storage performance comparison.
Local SSDs
Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. Each local SSD is 375 GB in size, but you can attach up to eight local SSD partitions for a total of 3 TB of local SSD storage space per instance. In beta, you can attach a maximum of 24 local SSD partitions for a total of 9 TB per instance.
Create an instance with Local SSDs when you need a fast scratch disk or cache and don't want to use instance memory.
Local SSDs are designed to offer very high IOPS and low latency. Unlike persistent disks, you must manage the striping on local SSDs yourself. Combine multiple local SSD partitions into a single logical volume to achieve the best local SSD performance per instance, or format local SSD partitions individually.
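A common way to combine local SSD partitions into one logical volume is a RAID 0 array with mdadm. The sketch below assumes eight NVMe local SSDs, which typically appear in the guest as /dev/nvme0n1 through /dev/nvme0n8 (device names vary by interface and guest OS):

```shell
# Stripe eight NVMe local SSD partitions into a single RAID 0 array.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 \
    /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 \
    /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8

# Format the array with a single file system and mount it.
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/localssd
sudo mount /dev/md0 /mnt/disks/localssd
```

RAID 0 maximizes throughput and IOPS by striping across all partitions, which matches local SSD's role as fast, non-redundant scratch storage.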
The following table provides an overview of local SSD capacity and estimated performance using NVMe. To reach maximum performance limits with 16 or 24 local SSD partitions, use a VM with 32 or more vCPUs.
| Storage capacity | Partitions | Read IOPS | Write IOPS | Read throughput (MB/s) | Write throughput (MB/s) |
|---|---|---|---|---|---|
| 6 TB (beta) | 16 | 1,600,000 | 800,000 | 6,240 | 3,120 |
| 9 TB (beta) | 24 | 2,400,000 | 1,200,000 | 9,360 | 4,680 |
Local SSD encryption
Compute Engine automatically encrypts your data when it is written to local SSD storage space. You can't use customer-supplied encryption keys with local SSDs.
Data persistence on Local SSDs
Read Local SSD data persistence to learn which events preserve your local SSD data and which events can cause your local SSD data to be unrecoverable.
You can create instances with up to eight, 375 GB local SSD partitions for 3 TB of local SSD space on each instance.
In beta, you can create an instance with 16 or 24 local SSD partitions for 6 TB or 9 TB of local SSD space, respectively. This is available on instances with all N1 machine types and custom machine types. To reach the maximum IOPS limits, use a VM instance with 32 or more vCPUs.
Instances with shared-core machine types can't attach any local SSD partitions.
Local SSDs and machine types
You can attach local SSDs to most machine types available on Compute Engine, unless otherwise noted. However, there are constraints on how many local SSD partitions you can attach based on each machine type. For example, if you are using an N2 machine type with 2 vCPUs, then, as shown in the following table, you can attach 1, 2, 4, or 8 local SSD partitions to that VM, but you can't attach 3, 5, 6, 7, 16, or 24 partitions.
| Machine type | Number of local SSD partitions allowed per VM instance |
|---|---|
| **N1 machine types** | |
| All N1 machine types | 1 to 8, 16, or 24 |
| **N2 machine types** | |
| Machine types with 2 to 10 vCPUs, inclusive | 1, 2, 4, or 8 |
| Machine types with 12 to 20 vCPUs, inclusive | 2, 4, or 8 |
| Machine types with 22 to 40 vCPUs, inclusive | 4 or 8 |
| Machine types with 42 to 80 vCPUs, inclusive | 8 |
| **C2 machine types** | |
| Machine types with 4 or 8 vCPUs | 1, 2, 4, or 8 |
| Machine types with 16 vCPUs | 2, 4, or 8 |
| Machine types with 30 vCPUs | 4 or 8 |
| Machine types with 60 vCPUs | 8 |
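Local SSDs are specified at instance creation by repeating the --local-ssd flag once per partition. A sketch with an illustrative instance name, machine type, and zone:

```shell
# Create a VM with four NVMe local SSD partitions (1.5 TB total).
# One --local-ssd flag is passed per 375 GB partition.
gcloud compute instances create my-ssd-instance \
    --machine-type=n2-standard-8 \
    --zone=us-central1-a \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME
```

The partition count must match one of the allowed values for the machine type in the table above; local SSDs can't be added to or removed from an instance after it is created.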
Local SSDs and preemptible VM instances
You can start a preemptible VM instance with a local SSD and Compute Engine charges you preemptible prices for the local SSD usage. Local SSDs attached to preemptible instances work like normal local SSDs, retain the same data persistence characteristics, and remain attached for the life of the instance. You can request a separate quota for preemptible local SSDs, but you can also choose to use your regular local SSD quota when creating preemptible local SSDs.
Compute Engine doesn't charge you for local SSDs if the instances they are attached to are preempted within the first minute after they start running.
For more information about local SSDs, see Adding local SSDs.
Reserving Local SSDs with committed use discounts
To reserve Local SSD resources in a specific zone, see Reserving zonal resources. Reservations are required for committed-use pricing for Local SSDs.
Cloud Storage buckets
Cloud Storage buckets are the most flexible, scalable, and durable storage option for your VM instances. If your apps don't require the lower latency of persistent disks and local SSDs, you can store your data in a Cloud Storage bucket.
Connect your instance to a Cloud Storage bucket when latency and throughput aren't a priority and when you must share data easily between multiple instances or zones.
The performance of Cloud Storage buckets depends on the storage class that you select and the location of the bucket relative to your instance.
The standard storage class used in the same location as your instance gives performance that is comparable to persistent disks but with higher latency and less consistent throughput characteristics. The standard storage class used in a multiregional location stores your data redundantly across at least two regions within a larger multiregional location.
Nearline and coldline storage classes are primarily for long-term data archival. Unlike the standard storage class, these archival classes have minimum storage durations and read charges. Consequently, they are best for long-term storage of data that is accessed infrequently.
All Cloud Storage buckets have built-in redundancy to protect your data against equipment failure and to ensure data availability through datacenter maintenance events. Checksums are calculated for all Cloud Storage operations to help ensure that what you read is what you wrote.
Unlike persistent disks, Cloud Storage buckets aren't restricted to the zone where your instance is located. Additionally, you can read and write data to a bucket from multiple instances simultaneously. For example, you can configure instances in multiple zones to read and write data in the same bucket rather than replicate the data to persistent disks in multiple zones.
Furthermore, you can mount a Cloud Storage bucket to your instance as a file system. Mounted buckets function similarly to a persistent disk when you read or write files. However, Cloud Storage buckets are object stores that don't have the same write constraints as a POSIX file system and can't be used as boot disks. Because multiple instances can write to the same object simultaneously, an instance writing data to a file can overwrite critical data written by other instances.
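Mounting a bucket as a file system is typically done with the Cloud Storage FUSE adapter (gcsfuse); the bucket name and mount point below are illustrative, and gcsfuse must be installed in the guest first:

```shell
# Create a mount point and mount the bucket as a file system.
mkdir -p /mnt/my-bucket
gcsfuse my-bucket /mnt/my-bucket

# Unmount when finished.
fusermount -u /mnt/my-bucket
```

Reads and writes through the mount become object operations on the bucket, so random writes and file locking behave differently than they would on a persistent disk.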
Cloud Storage encryption
Compute Engine automatically encrypts your data before it travels outside of your instance to Cloud Storage buckets. You don't need to encrypt files on your instances before you write them to a bucket.
Just like persistent disks, you can encrypt buckets with your own encryption keys.