Block storage performance

To configure a storage option for apps that run on your instances, use the following process:

Understand your workload

For workloads that primarily involve small (4 KB to 16 KB) random I/Os, the limiting factor is random input/output operations per second (IOPS).

For workloads that primarily involve sequential or large (256 KB to 1 MB) random I/Os, the limiting factor is throughput.
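Throughput and IOPS are related by the I/O size (throughput = IOPS × I/O size), so a quick calculation shows which limit a given workload hits first. A minimal Python sketch of that arithmetic, with illustrative numbers:

```python
def throughput_mb_per_s(iops: float, io_size_kb: float) -> float:
    """Throughput (MB/s) = IOPS x I/O size."""
    return iops * io_size_kb / 1024

# A 4 KB random workload at 7,500 IOPS moves only ~29 MB/s: IOPS-bound.
print(throughput_mb_per_s(7_500, 4))    # ~29.3

# A 1 MB sequential workload at 400 IOPS moves 400 MB/s: throughput-bound.
print(throughput_mb_per_s(400, 1024))   # 400.0
```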

Choose a storage option

You can provide several different types of block storage for your instances to use. Each type has different price, performance, and durability characteristics. See Storage options for a full comparison.

  • Standard persistent disks are suited for large data processing workloads that primarily use sequential I/Os.
  • SSD persistent disks are suited for enterprise applications and high-performance database needs that require lower latency and more IOPS than standard persistent disks provide. SSD persistent disks are designed for single-digit millisecond latencies; the observed latency is app specific.
  • Local SSDs provide low latency but are nonredundant and exist only for the lifetime of a specific instance.
| | Zonal standard persistent disks | Regional standard persistent disks | Zonal SSD persistent disks | Regional SSD persistent disks | Local SSD (SCSI) | Local SSD (NVMe) |
|---|---|---|---|---|---|---|
| Maximum sustained IOPS | | | | | | |
| Read IOPS per GB | 0.75 | 0.75 | 30 | 30 | 266.7 | 453.3 |
| Write IOPS per GB | 1.5 | 1.5 | 30 | 30 | 186.7 | 240 |
| Read IOPS per instance | 7,500* | 3,000* | 15,000–100,000* | 15,000–100,000* | 400,000 | 680,000 |
| Write IOPS per instance | 15,000* | 15,000* | 15,000–30,000* | 15,000–30,000* | 280,000 | 360,000 |
| Maximum sustained throughput (MB/s) | | | | | | |
| Read throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | 1.04 | 1.77 |
| Write throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | 0.73 | 0.94 |
| Read throughput per instance | 240–1,200* | 240* | 240–1,200* | 240–1,200* | 1,560 | 2,650 |
| Write throughput per instance | 76–400** | 38–200** | 76–800* | 38–400* | 1,090 | 1,400 |
* Persistent disk IOPS and throughput performance depend on disk size, instance vCPU count, and I/O block size, among other factors.
** Persistent disks can achieve greater throughput performance on instances with more vCPUs. See Network egress caps on write throughput.

Attaching a disk to multiple virtual machines does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit.

Note that SSD read bandwidth and IOPS consistency near the maximum performance limits depend largely on network ingress utilization. Some variability in the performance limits is to be expected, especially when operating near the maximum IOPS limits with an I/O size of 16 KB.

Calculate the price per IOPS for a persistent disk

Persistent disks have no per-I/O costs, so you don't need to estimate monthly I/O to budget for what you will spend on disks. However, for IOPS-oriented workloads, you can break down the monthly cost into a price per IOPS for comparison purposes.

The following pricing calculation examples use U.S. persistent disk pricing. Consider the relative costs of standard persistent disks compared to SSD persistent disks. For example, in the zone us-central1, standard persistent disks are priced at $0.040 per GB and SSD persistent disks are priced at $0.170 per GB. When you increase the size of a volume, you also increase the performance caps automatically, at no additional cost.

To determine the cost per IOPS of a persistent disk, divide the price per GB per month by the number of IOPS per GB. The following table calculates the price per random read IOPS per GB. You can use the same calculation to determine the price per write IOPS.

| Disk type | Price per GB / month | Read IOPS per GB | Price per IOPS per GB |
|---|---|---|---|
| Standard persistent disk | $0.040 | 0.75 | $0.040 / 0.75 = $0.0533 |
| SSD persistent disk | $0.170 | 30 | $0.170 / 30 = $0.0057 |
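If you want to compare other disk types or regional prices, the same division is easy to script. A minimal sketch using the example prices above:

```python
# Price per read IOPS = (price per GB per month) / (read IOPS per GB).
# Prices are the us-central1 examples from the table above.
disks = {
    "Standard persistent disk": {"price_per_gb": 0.040, "read_iops_per_gb": 0.75},
    "SSD persistent disk":      {"price_per_gb": 0.170, "read_iops_per_gb": 30},
}

for name, d in disks.items():
    price_per_iops = d["price_per_gb"] / d["read_iops_per_gb"]
    print(f"{name}: ${price_per_iops:.4f} per read IOPS per GB")
# Standard persistent disk: $0.0533 per read IOPS per GB
# SSD persistent disk: $0.0057 per read IOPS per GB
```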

Standard persistent disks offer affordable capacity, while SSD persistent disks offer price-performance ratios suited for IOPS-oriented workloads. Learn more about persistent disk pricing and local SSD pricing.

Configure your disks and instances

To maximize performance, configure the correct disk size, vCPU count, and machine type.

Compare persistent disk to a physical hard drive

When you specify the size of your persistent disks, consider how these disks compare to traditional physical hard drives. The following table compares standard persistent disks and SSD persistent disks to the typical performance that you would expect from a 7200 RPM SATA drive, which typically achieves 75 IOPS or 120 MB/s.

| I/O type | I/O pattern | Standard persistent disk size to match (GB) | SSD persistent disk size to match (GB) |
|---|---|---|---|
| Small random reads | 75 small random reads | 100 | 3 |
| Small random writes | 75 small random writes | 50 | 3 |
| Streaming large reads | 120 MB/s streaming reads | 1,000 | 250 |
| Streaming large writes | 120 MB/s streaming writes | 1,000 | 250 |
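The sizes in this table follow from the per-GB rates listed earlier; for example, matching 75 random reads on a standard persistent disk at 0.75 read IOPS per GB requires 75 / 0.75 = 100 GB (fractional results, such as 2.5 GB for SSD random reads, are rounded up). A minimal sketch of that calculation:

```python
# Disk size at which per-GB scaling matches a target rate:
# size (GB) = target rate / rate per GB. Rates come from the earlier table.
def size_to_match(target_rate: float, rate_per_gb: float) -> float:
    return target_rate / rate_per_gb

print(size_to_match(75, 0.75))   # standard PD random reads: 100 GB
print(size_to_match(75, 1.5))    # standard PD random writes: 50 GB
print(size_to_match(120, 0.12))  # standard PD 120 MB/s streaming: 1,000 GB
print(size_to_match(120, 0.48))  # SSD PD 120 MB/s streaming: 250 GB
```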

Disk size and vCPU count

The performance of a standard persistent disk scales with its size. Performance also depends on the number of vCPUs assigned to your VM instance, because network egress caps limit write throughput. Instances with fewer than 16 vCPUs can have their performance limited by the egress cap; instances with 16 or more vCPUs are not limited. For more information, see Network egress limits.

For SSD persistent disks, performance scales linearly until it reaches either the limits of the disk or the limits of the Compute Engine instance to which the disk is attached. For example, an SSD persistent disk with a volume size of 1,000 GB has a read limit of 30,000 IOPS. However, on an instance with only 4 vCPUs, the read limit is 15,000 IOPS.
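In other words, the effective limit is the lower of the volume-based limit and the per-instance limit. A minimal sketch of the 1,000 GB example, using the 30 read IOPS per GB rate from the comparison table:

```python
SSD_READ_IOPS_PER_GB = 30  # from the comparison table earlier

def effective_read_iops(volume_gb: int, instance_limit_iops: int) -> int:
    # The effective limit is the lower of the volume-based limit and the
    # per-instance limit for the machine's vCPU count.
    return min(volume_gb * SSD_READ_IOPS_PER_GB, instance_limit_iops)

# 1,000 GB SSD persistent disk on an instance capped at 15,000 read IOPS (4 vCPUs):
print(effective_read_iops(1_000, 15_000))  # 15,000: the instance is the bottleneck
# The same disk on an instance capped at 60,000 read IOPS:
print(effective_read_iops(1_000, 60_000))  # 30,000: the disk is the bottleneck
```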

The following tables show performance by disk size in increments where performance changes significantly. However, you can specify a disk size in any 1 GB increment.

Standard persistent disk

| Volume size (GB) | Sustained read IOPS (<=16 KB per I/O) | Sustained write IOPS (<=8 KB per I/O) | Sustained write IOPS (16 KB per I/O) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 10 | * | * | * | * | * |
| 32 | 24 | 48 | 48 | 3 | 3 |
| 64 | 48 | 96 | 96 | 7 | 7 |
| 128 | 96 | 192 | 192 | 15 | 15 |
| 256 | 192 | 384 | 384 | 30 | 30 |
| 512 | 384 | 768 | 768 | 61 | 61 |
| 1,000 | 750 | 1,500 | 1,500 | 120 | 120 |
| 1,500 | 1,125 | 2,250 | 2,250 | 180 | 180 |
| 2,048 | 1,536 | 3,072 | 3,072 | 245 | 245 |
| 4,000 | 3,000 | 6,000 | 6,000 | 480 | 400 |
| 5,000 | 3,750 | 7,500 | 7,500 | 600 | 400 |
| 8,192 | 6,144 | 12,288 | 7,500 | 983 | 400 |
| 10,000–65,536 | 7,500 | 15,000 | 7,500 | 1,200 | 400 |

* Use this volume size only for boot volumes. I/O bursting provides higher performance for boot volumes than the linear scaling described here.
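The rows in this table are the per-GB rates from the comparison table, clamped at the per-instance maximums. A minimal sketch that reproduces them, assuming an instance with enough vCPUs to avoid the network egress cap (the 16 KB write column, which caps at 7,500 IOPS, is omitted for brevity):

```python
# Standard persistent disk limits: per-GB rates clamped at per-instance caps.
def std_pd_limits(volume_gb: int) -> dict:
    return {
        "read_iops":      min(0.75 * volume_gb, 7_500),
        "write_iops_8kb": min(1.5 * volume_gb, 15_000),
        "read_mb_per_s":  min(0.12 * volume_gb, 1_200),
        "write_mb_per_s": min(0.12 * volume_gb, 400),
    }

print(std_pd_limits(1_000))   # 750 read / 1,500 write IOPS, 120 MB/s each way
print(std_pd_limits(10_000))  # every value at its per-instance cap
```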

SSD persistent disk

| Volume size (GB) | Sustained read IOPS (<=8 KB per I/O) | Sustained read IOPS (<=16 KB per I/O) | Sustained write IOPS (<=8 KB per I/O) | Sustained write IOPS (16 KB per I/O) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|---|
| 10 | 300 | 300 | 300 | 300 | 4.8 | 4.8 |
| 32 | 960 | 960 | 960 | 960 | 15 | 15 |
| 64 | 1,920 | 1,920 | 1,920 | 1,920 | 30 | 30 |
| 128 | 3,840 | 3,840 | 3,840 | 3,840 | 61 | 61 |
| 256 | 7,680 | 7,680 | 7,680 | 7,680 | 122 | 122 |
| 500 | 15,000 | 15,000 | 15,000 | 15,000 | 240 | 240 |
| 834 | 25,000 | 25,000 | 25,000 | 25,000 | 400 | 400 |
| 1,000 | 30,000 | 30,000 | 30,000 | 25,000 | 480 | 480 |
| 1,334 | 40,000 | 40,000 | 30,000 | 25,000 | 640 | 640 |
| 1,667 | 50,000 | 50,000 | 30,000 | 25,000 | 800 | 800 |
| 2,048 | 60,000 | 60,000 | 30,000 | 25,000 | 983 | 800 |
| 4,000–65,536 | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |

SSD persistent disk performance also depends on the vCPU count of the instance to which the disk is attached:

| Instance vCPU count | Sustained read IOPS (<=8 KB per I/O) | Sustained read IOPS (<=16 KB per I/O) | Sustained write IOPS (<=8 KB per I/O) | Sustained write IOPS (16 KB per I/O) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|---|
| 1 vCPU | 15,000 | 15,000 | 9,000 | 4,500 | 240 | 72 |
| 2 to 3 vCPUs | 15,000 | 15,000 | 15,000 | 4,500/vCPU | 240 | 72/vCPU |
| 4 to 7 vCPUs | 15,000 | 15,000 | 15,000 | 15,000 | 240 | 240 |
| 8 to 15 vCPUs | 15,000 | 15,000 | 15,000 | 15,000 | 800 | 400 |
| 16 to 31 vCPUs | 25,000 | 25,000 | 25,000 | 25,000 | 1,200 | 800 |
| 32 to 63 vCPUs | 60,000 | 60,000 | 30,000 | 25,000 | 1,200 | 800 |
| 64+ vCPUs* | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |

* Maximum performance might not be achievable at full CPU utilization. SSD read bandwidth and IOPS consistency near the maximum limits largely depend on network ingress utilization; some variability is to be expected, especially for 16 KB I/Os near the maximum IOPS limits.
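To find the effective limit for a specific disk and instance, take the lower of the volume-based row and the vCPU-based row. A minimal sketch for sustained read IOPS at <=8 KB per I/O, using selected rows from the two tables above:

```python
import bisect

# Selected volume-size rows (GB -> read IOPS, <=8 KB per I/O) and
# vCPU-count rows (min vCPUs -> read IOPS) from the tables above.
VOLUME_ROWS = [(500, 15_000), (834, 25_000), (1_000, 30_000), (1_334, 40_000),
               (1_667, 50_000), (2_048, 60_000), (4_000, 100_000)]
VCPU_ROWS = [(1, 15_000), (16, 25_000), (32, 60_000), (64, 100_000)]

def lookup(rows, key):
    """Return the value of the largest row whose key is <= key."""
    keys = [k for k, _ in rows]
    i = bisect.bisect_right(keys, key) - 1
    return rows[max(i, 0)][1]

def effective_read_iops(volume_gb: int, vcpus: int) -> int:
    # The effective limit is the lower of the volume row and the vCPU row.
    return min(lookup(VOLUME_ROWS, volume_gb), lookup(VCPU_ROWS, vcpus))

print(effective_read_iops(2_048, 16))  # 25,000: limited by vCPU count
print(effective_read_iops(1_000, 64))  # 30,000: limited by volume size
```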

Machine type

In Compute Engine, machine types are grouped and curated for different workloads.

In particular, compute-optimized machine types are subject to specific persistent disk limits per vCPU that differ from the limits for other machine types. Note that the performance by volume remains the same as described in the performance by disk size section.

Consider factors that affect performance

Network egress caps on write throughput

Your virtual machine (VM) instance has a network egress cap that depends on the machine type of the VM.

Compute Engine stores data on persistent disks with multiple parallel writes to ensure built-in redundancy. Additionally, each write request has some overhead that uses additional write bandwidth.

The maximum write traffic that a VM instance can issue is the network egress cap divided by a bandwidth multiplier that accounts for the write bandwidth used by this redundancy and overhead.

In a situation where persistent disk is competing with IP traffic for network egress bandwidth, 60% of the maximum write bandwidth goes to persistent disk traffic, leaving 40% for IP traffic. The following example shows how to calculate the maximum persistent disk write traffic that a VM instance can issue.
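A minimal sketch of that calculation follows. The 2 Gbit/s-per-vCPU egress cap and the 3.3 bandwidth multiplier are illustrative assumptions for the example, not constants stated in this section:

```python
# Hypothetical example: maximum PD write traffic given a network egress cap.
# The 2 Gbit/s-per-vCPU cap and 3.3x multiplier are illustrative assumptions;
# the text above only defines the relationship, not the constants.
GBIT_PER_VCPU = 2.0         # assumed egress cap per vCPU
BANDWIDTH_MULTIPLIER = 3.3  # assumed redundancy + overhead factor

def max_pd_write_mb_per_s(vcpus: int, competing_ip_traffic: bool = False) -> float:
    egress_gbit = vcpus * GBIT_PER_VCPU
    egress_mb = egress_gbit * 1000 / 8           # Gbit/s -> MB/s (decimal)
    pd_write = egress_mb / BANDWIDTH_MULTIPLIER  # divide out the multiplier
    if competing_ip_traffic:
        pd_write *= 0.60                         # 60% share goes to PD writes
    return pd_write

print(round(max_pd_write_mb_per_s(4)))        # ~303 MB/s, no competing IP traffic
print(round(max_pd_write_mb_per_s(4, True)))  # ~182 MB/s, with competing IP traffic
```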

Simultaneous reads and writes

For standard persistent disks, simultaneous reads and writes share the same resources. While your instance uses more read throughput or IOPS, it can perform fewer writes. Conversely, instances that use more write throughput or IOPS can perform fewer reads.

SSD persistent disks can reach their maximum throughput limits for reads and writes simultaneously. However, they cannot reach their maximum IOPS limits for reads and writes simultaneously.

Note that throughput = IOPS × I/O size. To take advantage of the maximum throughput limits for simultaneous reads and writes on SSD persistent disks, use an I/O size such that read and write IOPS combined don't exceed the IOPS limit.
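A minimal sketch of that budgeting; the 15,000 IOPS limit and the read/write mix are illustrative figures only:

```python
# Budgeting simultaneous reads and writes on an SSD persistent disk:
# throughput = IOPS x I/O size, and combined read + write IOPS must stay
# within the disk's IOPS limit. The limit below is an illustrative figure.
IOPS_LIMIT = 15_000

def mix_fits(read_iops: int, write_iops: int) -> bool:
    return read_iops + write_iops <= IOPS_LIMIT

def throughput_mb_per_s(iops: int, io_size_kb: int) -> float:
    return iops * io_size_kb / 1024

# 10,000 reads + 5,000 writes stay within the 15,000 IOPS budget;
# using 64 KB I/Os keeps throughput high on both sides.
print(mix_fits(10_000, 5_000))          # True
print(throughput_mb_per_s(10_000, 64))  # 625 MB/s of reads
print(throughput_mb_per_s(5_000, 64))   # ~313 MB/s of writes
```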

Logical volume size

Persistent disks can be up to 64 TB in size, and you can create single logical volumes of up to 257 TB using logical volume management inside your VM. A larger volume size impacts performance in the following ways:

  • Not all local file systems work well at this scale. Common operations, such as mounting and file system checking, might take longer than expected.

  • Maximum persistent disk performance is achieved at smaller sizes. Disks take longer to fully read or write with this much storage on one VM. If your application supports it, consider using multiple VMs for greater total-system throughput.

  • Snapshotting large amounts of persistent disk might take longer than expected to complete and might provide an inconsistent view of your logical volume without careful coordination with your application.

Multiple disks on a single VM instance

Suppose you have multiple disks of the same type (standard or SSD) attached in the same mode (for example, read/write). If you use only one disk, then that single disk can reach the performance limit corresponding to the combined size of the disks. If you use all of the disks at 100%, the aggregate performance limit is split evenly among the disks regardless of relative disk size.

For example, suppose you have a 200-GB standard disk and a 1000-GB standard disk. If you do not use the 1000-GB disk, then the 200-GB disk can reach the performance limit of a 1200-GB standard disk. If you use both disks at 100%, then each will have the performance limit of a 600-GB standard persistent disk (1200 GB / 2 disks = 600-GB disk).
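A minimal sketch of that arithmetic, using the 0.75 read IOPS per GB rate for standard persistent disks:

```python
# Per-disk performance when several standard PDs share one instance.
READ_IOPS_PER_GB = 0.75

def per_disk_read_iops(disk_sizes_gb: list[int], active_disks: int) -> float:
    # The aggregate limit corresponds to the combined size; when several
    # disks are fully used, that limit is split evenly among them.
    combined = sum(disk_sizes_gb)
    return combined * READ_IOPS_PER_GB / active_disks

print(per_disk_read_iops([200, 1_000], 1))  # 900.0: one disk used alone
print(per_disk_read_iops([200, 1_000], 2))  # 450.0: both disks used at 100%
```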

To demonstrate this, consider the following instances. instance-a has one 200-GB standard persistent disk and one 1000-GB standard persistent disk. instance-b has one 200-GB standard persistent disk. Otherwise, the two instances have the same configuration.

Read IOPS are tested separately for the 200-GB disk on instance-a and for the 200-GB disk on instance-b.

The observed read IOPS for the 200-GB disk on instance-a is 902, which matches the expected read IOPS for a combined disk size of 200 GB + 1,000 GB = 1,200 GB (1,200 GB × 0.75 read IOPS per GB = 900 IOPS).

Optimize performance

After you ensure that any bottlenecks are not due to the disk size or machine type of the VM, your app and operating system might still need tuning. See Optimizing persistent disk performance and Optimizing local SSD performance for more information on benchmarking and tuning persistent disk performance.