Google Cloud Platform
Compute Engine

Block Storage

Google Compute Engine offers several types of data disk that can be used as primary storage for your virtual machine instances:

  • Persistent disk storage, available as both standard (HDD) and solid-state (SSD), exists independently of your virtual machine instances. A persistent disk can be attached to and detached from any instance in the same zone. Data stored in persistent disk storage remains intact regardless of the state of the instance to which it is attached.
  • Local SSD storage is physically attached to the server running a given virtual machine instance. In contrast to persistent disk storage, data stored on local SSD does not persist beyond the lifetime of the instance to which the local SSD is attached.


Overview of disk types

Standard persistent disks (HDD)

Standard persistent disks have the following attributes:

  • Efficient and economical when handling sequential input/output operations
  • Limited to 64 TB of total persistent disk storage per instance, combined with solid-state persistent disks
  • Each individual persistent disk can be up to 64 TB in size
  • Can be resized to add more storage space to each volume
  • Flexible partition sizing
  • Redundant, industry-standard data protection
  • Snapshotting capabilities
  • Can be attached to any machine type
  • Can be used as a boot disk for a virtual machine instance

Use standard persistent disks when your limiting factor is space or streaming throughput.

Solid-state persistent disks (SSD)

Solid-state persistent disks have the following attributes:

  • Efficient and economical when handling high rates of random input/output operations per second (IOPS)
  • Limited to 64 TB of total persistent disk storage per instance, combined with standard persistent disks
  • Each individual persistent disk can be up to 64 TB in size
  • Can be resized to add more storage space to each volume
  • Low latency
  • Flexible partition sizing
  • Redundant, industry-standard data protection
  • Snapshotting capabilities
  • Can be attached to any machine type
  • Can be used as a boot disk for a virtual machine instance

Use solid-state persistent disks when your limiting factor is random IOPS or streaming throughput with low latency.

Local SSDs

Local SSDs have the following attributes:

  • Most economical option per IOPS
  • Extremely efficient when handling high rates of random input/output operations per second (IOPS)
  • Limited to 3 TB of total local SSD storage per instance
  • Available only in 375 GB partitions that you must format and manage yourself
  • Unlike persistent disks, local SSDs do not transfer data across the network and do not consume network bandwidth
  • Extremely low latency
  • Ephemeral, as they are tied to the life of the virtual machine instance on which they are mounted
  • Fixed partition size
  • Can be attached to most machine types, with the exception of shared-core machine types

Use local SSDs for use cases where you need a fast scratch disk or cache and don't want to use pure RAM, or when your workload itself is replicated across instances.

Comparison of disk types

                                Standard persistent disks   SSD persistent disks   Local SSD (SCSI)     Local SSD (NVMe)
Data redundancy                 Yes                         Yes                    No                   No
Disk interface                  Virtio SCSI                 Virtio SCSI            Virtio SCSI          NVMe
Encryption                      Yes                         Yes                    Yes                  Yes
Snapshotting                    Yes                         Yes                    No                   No
Machine type support            All machine types           All machine types      Most machine types   Most machine types
Zone availability               All zones                   All zones              Most zones           Most zones

Maximum sustained IOPS
Read IOPS per GB                0.75                        30                     266.7                453.3
Write IOPS per GB               1.5                         30                     186.7                240
Read IOPS per instance          3,000                       15,000                 400,000              680,000
Write IOPS per instance         15,000                      15,000                 280,000              360,000

Maximum sustained throughput (MB/s)
Read throughput per GB          0.12                        0.48                   1.04                 1.77
Write throughput per GB         0.12                        0.48                   0.73                 0.94
Read throughput per instance    180                         240                    1,560                2,650
Write throughput per instance   120                         240                    1,090                1,400

Persistent disk specifications

Available types

Large sequential I/O describes I/O operations that access locations contiguously. Each I/O operation accesses a large amount of data: approximately 128 KB or more.

Small random I/O describes an I/O operation that accesses 4 KB to 16 KB of data.

Compute Engine offers two types of persistent disk volumes:

  • Standard persistent disks (HDD) are best for applications that require bulk storage or sequential I/O with large block sizes.
  • Solid-state persistent disks are best for applications that require high rates of random input/output operations per second (IOPS).

Choosing between the two disk types depends on your application and data needs. Each volume type has its own performance characteristics and pricing. For a detailed look at the performance characteristics of each disk type, see Performance.

Each virtual machine instance can attach up to 64 TB of total standard persistent disk and SSD persistent disk space combined. You can attach one large 64 TB disk of either type, or you can attach a combination of smaller standard persistent disks and SSD persistent disks that total up to 64 TB in size.


Performance

Persistent disk performance depends on the size of the volume and the type of disk you select. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges; the cost of the I/O capability is included in the price of the persistent disk.

Persistent disk performance can be described as follows:

  • IOPS performance limits grow linearly with the size of the persistent disk volume.
  • Throughput limits also grow linearly, up to the maximum bandwidth for the virtual machine instance to which the persistent disk is attached.
  • Larger virtual machine instances have higher bandwidth limits than smaller virtual machine instances.

These performance characteristics and limitations echo those of a RAID array, but at a greater degree of granularity. Whereas RAID arrays grow volumes in increments of entire disks, persistent disk offers Compute Engine customers granularity at the per-GB level for their volumes.

This persistent disk pricing and performance model provides three main benefits:

Disk striping is a technique to store logically sequential data across multiple drives or storage devices so that multiple disks can be accessed in parallel, allowing for higher throughput.

Operational simplicity
Persistent disk handles much of the complexity of increasing I/O to a virtual machine instance – for example, disk striping – automatically. A single 1 TB volume of persistent disk performs the same as ten 100 GB volumes that would otherwise need to be striped together manually.
Predictable pricing
Volumes are priced on a per-GB basis only. This price pays for both the volume’s space and its total I/O capabilities. Customers’ bills do not vary with usage of the volume.
Predictable performance
This model allows more predictable performance than other possible models for HDD- and SSD-based storage while still keeping the price very low.

Volume I/O limits distinguish between read and write I/O and between IOPS and bandwidth. These limits are described in the following performance chart.

                                        Standard persistent disks   Solid-state persistent disks
Price (USD/GB per month)                $0.04                       $0.17

Maximum sustained IOPS
Read IOPS/GB                            0.75                        30
Write IOPS/GB                           1.5                         30
Read IOPS/volume per VM                 3,000                       15,000
Write IOPS/volume per VM                15,000                      15,000

Maximum sustained throughput
Read throughput/GB (MB/s)               0.12                        0.48
Write throughput/GB (MB/s)              0.12                        0.48
Read throughput/volume per VM (MB/s)    180                         240
Write throughput/volume per VM (MB/s)   120                         240

To illustrate this chart and the difference between standard persistent disks and solid-state persistent disks, consider that, for $1.00/month, you get:

  • Standard persistent disk: 25 GB of space, 18.75 read IOPS, and 37.5 write IOPS
  • Solid-state persistent disk: 5.88 GB of space, 176 read IOPS, and 176 write IOPS

Compared to standard persistent disks, solid-state persistent disks are more expensive per GB and per MB/s of throughput. However, they are far less expensive per IOPS. Use standard persistent disks where the limiting factor is space or streaming throughput, and use solid-state persistent disks where the limiting factor is random IOPS.
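The per-dollar comparison above follows directly from the chart. As a rough sketch (the function name and structure are illustrative, not part of any Google API), a fixed monthly budget buys:

```python
# Illustrative sketch: what a monthly budget buys for each persistent disk
# type, using the per-GB prices and per-GB IOPS rates from the chart above.

def per_dollar(price_per_gb, read_iops_per_gb, write_iops_per_gb, dollars=1.00):
    """Return (GB, read IOPS, write IOPS) purchasable for a monthly budget."""
    gb = dollars / price_per_gb
    return gb, gb * read_iops_per_gb, gb * write_iops_per_gb

standard = per_dollar(0.04, 0.75, 1.5)  # 25 GB, 18.75 read IOPS, 37.5 write IOPS
ssd = per_dollar(0.17, 30, 30)          # ~5.88 GB, ~176 read and write IOPS
```

The same arithmetic works for any budget, since both price and performance scale linearly with volume size.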

Standard persistent disk performance

When considering a standard persistent disk volume for your instance, keep in mind the following information:

  • The performance numbers in the chart above for standard persistent disks are caps for maximum sustained I/O.

    To better serve the many cases where I/O spikes, Compute Engine allows virtual machines to save up I/O capability and burst over the numbers listed for standard persistent disks. In this way, smaller persistent disks can be used for cases where I/O is typically low but periodically bursts well above the average.

  • Performance depends on I/O pattern, volume size, and instance type.

    IOPS and throughput caps have per-GB values. To determine the cap for a given volume, multiply these values by a volume’s size. Note that, in some cases, the virtual machine instance will itself impose throughput caps, regardless of the throughput cap of the volume. In these cases, the observed throughput cap for a volume will be the lower of the volume’s cap and the virtual machine instance's cap.

As an example of how you can use the performance chart to determine the disk volume you want, consider that a 500 GB standard persistent disk will give you:

  • (0.75 x 500) = 375 small random read IOPS
  • (1.5 x 500) = 750 small random write IOPS
  • (0.12 x 500) = 60 MB/s of large sequential reads
  • (0.12 x 500) = 60 MB/s of large sequential writes
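This cap arithmetic, including the rule that the observed cap is the lower of the volume's cap and the per-VM cap, can be sketched as follows (the names are illustrative, not an API):

```python
# Illustrative sketch of standard persistent disk volume caps: per-GB rates
# scale with volume size, bounded by the per-VM limits from the chart above.

STANDARD_PD = {
    "read_iops_per_gb": 0.75, "write_iops_per_gb": 1.5,
    "read_mbs_per_gb": 0.12, "write_mbs_per_gb": 0.12,
    "read_iops_per_vm": 3000, "write_iops_per_vm": 15000,
    "read_mbs_per_vm": 180, "write_mbs_per_vm": 120,
}

def volume_caps(size_gb, disk=STANDARD_PD):
    """Effective caps: the lower of the volume's cap and the per-VM cap."""
    return {
        metric: min(disk[metric + "_per_gb"] * size_gb, disk[metric + "_per_vm"])
        for metric in ("read_iops", "write_iops", "read_mbs", "write_mbs")
    }

# volume_caps(500) reproduces the bullet list above: 375 read IOPS,
# 750 write IOPS, and roughly 60 MB/s of read and write throughput.
```

For very large volumes the per-VM limits dominate; for example, a 10,000 GB standard volume is still capped at 3,000 read IOPS.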

Solid-state persistent disk performance

When using solid-state persistent disks, keep in mind the following information:

  • The performance numbers in the chart above are the expected performance numbers for solid-state persistent disks.

    The performance numbers are caps, but they are also the actual performance numbers you should experience if your application and virtual machine instance have been properly optimized.

  • The IOPS numbers for solid-state persistent disks are published with the assumption of up to 16 KB I/O operation size.

    Compute Engine assumes that each I/O operation will be less than or equal to 16 KB in size. If your I/O operation is larger than 16 KB, the number of IOPS will be proportionally smaller. For example, for a 200 GB solid-state persistent disk, you can perform 6,000 16 KB IOPS (30 IOPS/GB * 200 GB) or 1,500 64 KB IOPS.

    Conversely, the throughput you experience decreases if your I/O operation is less than 16 KB. The same 200 GB volume can perform throughput of 96 MB/s of 16 KB I/Os or 24 MB/s of 4 KB I/Os.
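These two scaling rules can be expressed as a short calculation (function names are illustrative):

```python
# Illustrative sketch of solid-state persistent disk scaling with I/O size:
# the 30 IOPS/GB cap assumes operations of at most 16 KB. Larger operations
# lower IOPS proportionally; smaller operations lower throughput.

BASE_IO_KB = 16
SSD_IOPS_PER_GB = 30

def ssd_pd_iops(size_gb, io_kb):
    """IOPS cap for a volume at a given I/O operation size (KB)."""
    base = SSD_IOPS_PER_GB * size_gb
    return base if io_kb <= BASE_IO_KB else base * BASE_IO_KB / io_kb

def ssd_pd_throughput_mbs(size_gb, io_kb):
    """Throughput cap in MB/s (using 1 MB = 1000 KB) at a given I/O size."""
    return ssd_pd_iops(size_gb, io_kb) * io_kb / 1000

# For a 200 GB volume, matching the figures above:
# ssd_pd_iops(200, 16) -> 6000, ssd_pd_iops(200, 64) -> 1500
# ssd_pd_throughput_mbs(200, 16) -> 96.0, ssd_pd_throughput_mbs(200, 4) -> 24.0
```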

For information about optimizing your virtual machine instance and application for optimal IOPS on solid-state persistent disks, see Optimize your solid-state persistent disk performance.


Encryption

All data written to disk in Compute Engine is encrypted on the fly and then transmitted and stored in encrypted form. Compute Engine has completed ISO 27001, SSAE-16, SOC 1, SOC 2, and SOC 3 certifications, demonstrating our commitment to information security.

Data integrity

Compute Engine uses redundant, industry-standard mechanisms to protect persistent disk users from data corruption and from sophisticated attacks against data integrity.

Disk interface

By default, Compute Engine uses SCSI for attaching persistent disks.

Network egress caps

Each persistent disk write operation contributes to your virtual machine instance's cumulative network egress traffic. This means that persistent disk write operations are capped by your instance's network egress cap.

To calculate the maximum persistent disk write traffic that a virtual machine instance can issue, subtract an instance’s other network egress traffic from its 2 Gbit/s/core network cap. The remaining throughput represents the throughput available to you for persistent disk write traffic.

Because persistent disk storage has 3.3x data redundancy, each write has to be written 3.3 times. This means that a single write operation counts as 3.3 I/O operations.

The following figures are the persistent disk I/O caps per virtual machine instance, based on the network egress caps for the virtual machine. These figures are based on an instance that has no additional IP traffic.

Number of cores   Standard PD write limit (MB/s)   Standard volume size needed to reach limit (GB)   SSD PD write limit (MB/s)   SSD volume size needed to reach limit (GB)
1                 76                               633                                               76                          158
2                 120                              1,000                                             152                         316
4                 120                              1,000                                             240                         500
8                 120                              1,000                                             240                         500
16                120                              1,000                                             240                         500

To derive the figures above, divide the network egress cap – 2 Gbit/s, which is equivalent to 250 MB/s – by the data redundancy multiplier (3.3):

Maximum persistent disk write bandwidth for one core = 250 / 3.3 = ~76 MB/s of write traffic to your standard persistent disk

Using the standard persistent disk write throughput/GB figure provided in the performance chart presented earlier, you can now derive an appropriate disk size as well:

Desired disk size = 76 / 0.12 = ~633 GB
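The derivation can be sketched in a few lines (constants are taken from this section; the names are illustrative). Note that the per-VM throughput caps from the performance chart, such as 120 MB/s of standard persistent disk writes, still apply on top of the egress limit:

```python
# Illustrative sketch of the network egress derivation: persistent disk write
# traffic shares the 2 Gbit/s (250 MB/s) per-core egress cap, and the 3.3x
# redundancy multiplier applies to every write.

EGRESS_MBS_PER_CORE = 250   # 2 Gbit/s = 250 MB/s
REDUNDANCY = 3.3
STD_WRITE_MBS_PER_GB = 0.12

def max_pd_write_mbs(cores=1, other_egress_mbs=0.0):
    """Egress bandwidth left over for persistent disk write traffic."""
    return (EGRESS_MBS_PER_CORE * cores - other_egress_mbs) / REDUNDANCY

def std_volume_size_to_saturate(write_mbs):
    """Standard PD volume size (GB) whose write cap matches that bandwidth."""
    return write_mbs / STD_WRITE_MBS_PER_GB

limit = max_pd_write_mbs(cores=1)          # ~75.8 MB/s, rounded to 76 in the text
size = std_volume_size_to_saturate(limit)  # ~631 GB (~633 GB using the rounded 76)
```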

Determine the size of a persistent disk

To determine what size of volume is required to have the same optimal performance as a typical 7200 RPM SATA drive, you must first identify the I/O pattern of the volume. The chart below describes some I/O patterns and the size of each persistent disk type you would need to create for that I/O pattern.

I/O pattern              Standard persistent disk volume size (GB)   Solid-state persistent disk volume size (GB)
Small random reads       250                                         3
Small random writes      50                                          3
Streaming large reads    1,000                                       250
Streaming large writes   1,333                                       250
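A table like this can be derived by inverting the per-GB rates from the performance chart: divide the target figure by the per-GB rate. The sketch below assumes roughly 75 random write IOPS and about 120 MB/s of streaming reads for a typical 7200 RPM SATA drive; both figures are ballpark assumptions, not values stated in this document:

```python
# Illustrative sketch: invert the per-GB rates to find the volume size needed
# to meet a performance target.

RATES = {  # per-GB rates from the performance chart earlier in this section
    "standard": {"read_iops": 0.75, "write_iops": 1.5, "read_mbs": 0.12, "write_mbs": 0.12},
    "ssd":      {"read_iops": 30,   "write_iops": 30,  "read_mbs": 0.48, "write_mbs": 0.48},
}

def size_for_target(disk_type, metric, target):
    """Smallest volume size (GB) whose cap meets the target for one metric."""
    return target / RATES[disk_type][metric]

# e.g. sustaining ~75 random write IOPS needs a 50 GB standard volume, and
# 120 MB/s of streaming reads needs roughly 250 GB of SSD persistent disk:
# size_for_target("standard", "write_iops", 75) -> 50.0
# size_for_target("ssd", "read_mbs", 120) is about 250 GB
```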

Local SSD specifications

Local SSD offers very high IOPS and low latency, but lacks some features provided by persistent disk options. Local SSD is a good option for:

  • High performance scratch data that lives and dies with the lifetime of the instance.
  • High performance databases where the database can replicate the data between multiple local SSD instances.
  • Temporary storage or cache.
  • Hadoop deployments where the required I/O is more than what the Google Cloud Storage Connector for Hadoop can provide. In such cases, you can use local SSD underneath a Hadoop-optimized filesystem.


Local SSD has the following restrictions when compared to other block storage options:

  • Limited create-time flexibility. Local SSDs must be created at the same time as the virtual machine instance that they are attached to.
  • Limited reliability. Local SSDs are unreplicated and have no data redundancy.
  • No snapshotting. Unlike persistent disks, local SSDs do not offer a snapshot feature.
  • No stop/start ability. Instances in a TERMINATED state with a local SSD attached cannot be restarted.
  • Limited disk-size flexibility. Local SSDs are 375 GB per device. You can attach a maximum of eight local SSDs to each instance for a total of 3 TB per instance. Some zones restrict their instances to a maximum of four local SSD devices for a total of 1.5 TB per instance. See Local SSD limits for details.
  • Restricted usage. Local SSDs cannot be used as root disks.
  • Limited Windows support for NVMe. Although Windows instances can use the NVMe interface, support for NVMe on Windows is in Beta and we do not guarantee the same performance as Linux instances. The SCSI interface is fully supported for Windows.
  • No support for shared-core machine types. Shared-core machine types, such as f1-micro and g1-small, cannot use local SSDs.

In addition, note that local SSD devices present 4 KB sectors to the guest operating system. If you are running a workload that performs direct I/O to the raw device, you must ensure that your workload can adhere to this sector size before you decide to run your workload on a local SSD.
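On a Linux guest, one way to confirm the sector size the kernel reports before running direct-I/O workloads is to read it from sysfs. The sketch below is illustrative, and the device name sdb is an assumption; substitute the device name of your local SSD:

```python
# Illustrative sketch: read the logical block size (bytes) that a Linux guest
# reports for a block device, e.g. to confirm a local SSD's 4 KB sectors
# before doing direct I/O against the raw device.

from pathlib import Path

def parse_block_size(text):
    """Parse the sysfs value, e.g. '4096\n' -> 4096."""
    return int(text.strip())

def logical_block_size(device="sdb"):  # device name is an assumption
    path = Path(f"/sys/block/{device}/queue/logical_block_size")
    return parse_block_size(path.read_text())

# Direct-I/O requests should be aligned to, and a multiple of, this size.
```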


Performance

Local SSD is designed to offer very high IOPS and low latency. The following are the expected local SSD performance numbers for a single local SSD device. If a machine type contains more than one local SSD device, multiply the per-device numbers by the number of devices to see the maximum performance for that machine type.

Local SSD performance varies by interface type.

Type   Maximum sustained read IOPS/device*   First write IOPS/device*
SCSI   100,000                               70,000
NVMe   170,000                               90,000

* See Comparing local SSD performance.

The IOPS figures above assume that the I/O operation size is less than or equal to 4 KB. If your I/O operation is larger than 4 KB, the number of IOPS will be proportionally smaller.
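Combining the per-device figures with this I/O-size rule gives a quick estimate of aggregate local SSD IOPS. This is a sketch only; real instances may hit other limits first, and some zones cap the attachable device count:

```python
# Illustrative sketch: aggregate local SSD read IOPS is the per-device figure
# (from the table above) times the number of attached devices, reduced
# proportionally for I/O operations larger than 4 KB.

READ_IOPS_PER_DEVICE = {"scsi": 100_000, "nvme": 170_000}
BASE_IO_KB = 4

def local_ssd_read_iops(interface, devices, io_kb=4):
    iops = READ_IOPS_PER_DEVICE[interface] * devices
    if io_kb > BASE_IO_KB:
        iops = iops * BASE_IO_KB / io_kb
    return iops

# Four NVMe devices at 4 KB give 680,000 read IOPS, matching the per-instance
# figure in the comparison table earlier:
# local_ssd_read_iops("nvme", 4) -> 680000
```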

First write IOPS numbers are meant as the optimal experience you can attain on the first instance of writing data to the device. With each subsequent random write operation, the IOPS performance might decrease, because it takes the device more work to find empty data blocks or to find and overwrite old data blocks. This is known as write amplification.

In general, you can improve the performance of your SSD device by writing sequentially rather than randomly. If you must perform random writes, make them as large as possible, such as 256 KB blocks instead of 4 KB blocks. An application that limits itself to sequential writes should see little to no write bandwidth degradation.

If your application must write in random 4 KB blocks, consider over-provisioning your SSD device. Over-provisioning sets aside some storage space so that write amplification is reduced, improving the performance of the SSD device at the cost of some usable capacity.

Disk interface

Local SSDs are available through both SCSI and NVMe interfaces. If you choose to use NVMe, you will need to use a special NVMe-enabled image for your instance. For more information, see Choosing a disk interface type.


Encryption

Data written to local SSD disks in Compute Engine is always encrypted at rest. Compute Engine has completed ISO 27001, SSAE-16, SOC 1, SOC 2, and SOC 3 certifications, demonstrating our commitment to information security.

Pricing and quota

Local SSDs are billed at a different rate than persistent disk. Refer to Compute Engine Pricing for more information.

By default, all projects are limited to 2 TB of local SSD disk space per region per project. If you need more quota for your project, you can request more quota.

A virtual machine instance can attach up to eight local SSD devices at a time (four in some zones).