Configure disks to meet performance requirements

Overview

This page discusses the many factors that determine the performance of the block storage volumes that you attach to your virtual machine (VM) instances. Before you begin, consider the following:

  • Persistent disks are networked storage and generally have higher latency compared to physical disks or local SSDs. To reach the maximum performance limits of your persistent disks, you must issue enough I/O requests in parallel. To check if you're using a high enough queue depth to reach your required performance levels, see I/O queue depth.

  • Make sure that your application is issuing enough I/Os to saturate your disk.

  • For workloads that primarily involve small (from 4 KB to 16 KB) random I/Os, the limiting performance factor is random input/output operations per second (IOPS).

  • For workloads that primarily involve sequential or large (256 KB to 1 MB) random I/Os, the limiting performance factor is throughput.
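The relationship between queue depth, IOPS, and latency follows Little's law: the number of in-flight I/Os roughly equals IOPS multiplied by average latency. The following sketch estimates the queue depth needed to reach a target IOPS level; the 1 ms latency figure is a hypothetical example, not a published number.

```python
import math

def required_queue_depth(target_iops: float, avg_latency_ms: float) -> int:
    """Estimate the minimum number of parallel in-flight I/Os needed
    to sustain target_iops at the given average per-I/O latency,
    using Little's law: in-flight I/Os = IOPS x latency (seconds)."""
    return math.ceil(target_iops * (avg_latency_ms / 1000.0))

# Example: sustaining 30,000 random-read IOPS at ~1 ms average latency
# requires roughly 30 outstanding I/Os at all times.
print(required_queue_depth(30_000, 1.0))  # 30
```

If your benchmark issues only one I/O at a time, it can never exceed 1/latency operations per second, no matter how fast the disk is.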

Choose a storage option

To choose a block storage option that is appropriate for your workload, consider factors such as machine type support, disk size, and performance limits.

Disk types

You can choose from several types of block storage for your instances. When you configure a zonal or regional persistent disk, you can select one of the following disk types. If you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is pd-standard.

  • Standard persistent disks (pd-standard) are suited for large data processing workloads that primarily use sequential I/Os.
  • Balanced persistent disks (pd-balanced) are an alternative to SSD persistent disks that balance performance and cost. With the same maximum IOPS as SSD persistent disks and lower IOPS per GB, a balanced persistent disk offers performance levels suitable for most general-purpose applications at a price point between that of standard and SSD persistent disks.
  • SSD persistent disks (pd-ssd) are suited for enterprise applications and high-performance database needs that require lower latency and more IOPS than standard persistent disks provide. SSD persistent disks are designed for single-digit millisecond latencies; the observed latency is app specific.
  • Extreme persistent disks (pd-extreme) offer consistently high performance for both random access workloads and bulk throughput. They are designed for high-end database workloads, such as Oracle or SAP HANA. Unlike other disk types, you can provision your desired IOPS. For more information, see Extreme persistent disks.

Performance limits

The following tables show performance limits for persistent disks. For information about local SSD performance limits, see Local SSD performance.

Zonal persistent disks

The following table shows maximum sustained IOPS for zonal persistent disks:

                          Zonal         Zonal         Zonal      Zonal        Zonal SSD PD
                          standard PD   balanced PD   SSD PD     extreme PD   (multi-writer mode)
Read IOPS per GB          0.75          6             30         n/a          30
Write IOPS per GB         1.5           6             30         n/a          30
Read IOPS per instance    7,500*        80,000*       100,000*   120,000*     100,000*
Write IOPS per instance   15,000*       80,000*       100,000*   120,000*     100,000*

The following table shows maximum sustained throughput for zonal persistent disks:

                                      Zonal         Zonal         Zonal     Zonal        Zonal SSD PD
                                      standard PD   balanced PD   SSD PD    extreme PD   (multi-writer mode)
Throughput per GB (MB/s)              0.12          0.28          0.48      n/a          0.48
Read throughput per instance (MB/s)   1,200*        1,200*        1,200*    2,200**      1,200**
Write throughput per instance (MB/s)  400**         1,200*        1,200*    2,200**      1,200**

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Regional persistent disks

The following table shows maximum sustained IOPS for regional persistent disks:

                          Regional      Regional      Regional
                          standard PD   balanced PD   SSD PD
Read IOPS per GB          0.75          6             30
Write IOPS per GB         1.5           6             30
Read IOPS per instance    7,500*        60,000*       100,000*
Write IOPS per instance   15,000*       30,000*       80,000*§

The following table shows maximum sustained throughput for regional persistent disks:

                                      Regional      Regional      Regional
                                      standard PD   balanced PD   SSD PD
Throughput per GB (MB/s)              0.12          0.28          0.48
Read throughput per instance (MB/s)   1,200*        1,200*        1,200*
Write throughput per instance (MB/s)  200**         600*          600*

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.
** Requires at least 64 vCPUs and an N1 or N2 machine type. May be lower when the disk is in unreplicated mode.
§ Requires at least 64 vCPUs and an N1 or N2 machine type.

Attaching a disk to multiple virtual machine instances in read-only mode or in multi-writer mode does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit. Persistent disks created in multi-writer mode have specific IOPS and throughput limits. To learn how to share persistent disks between multiple VMs, see Sharing persistent disks between VMs.

Persistent disk I/O operations share a common path with vNIC network traffic within your VM's hypervisor. Therefore, if your VM has significant network traffic, the actual read bandwidth and IOPS consistency might be less than the listed maximum limits. Some variability in the performance limits is to be expected, especially when operating near the maximum IOPS limits with an I/O size of 16 KB. For a summary of bandwidth expectations, see Bandwidth summary table.

Configure your persistent disks and instances

Persistent disk performance scales with the size of the disk and with the number of vCPUs on your VM instance.

Performance scales until it reaches either the limits of the disk or the limits of the VM instance to which the disk is attached. The machine type and the number of vCPUs on the instance determine the VM instance limits.

The following tables show performance limits for zonal persistent disks. For performance limits for regional persistent disks, see Regional persistent disk performance.

Performance by machine type and vCPU count

The following tables show how zonal persistent disk performance varies according to the machine type and number of vCPUs on the VM to which the disk is attached.

A2 VMs

pd-standard

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
a2-highgpu-1g 15,000 5,000 400 800
a2-highgpu-2g 15,000 7,500 400 1,200
a2-highgpu-4g 15,000 7,500 400 1,200
a2-highgpu-8g 15,000 7,500 400 1,200
a2-megagpu-16g 15,000 7,500 400 1,200

pd-balanced

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
a2-highgpu-1g 15,000 15,000 800 800
a2-highgpu-2g 20,000 20,000 1,200 1,200
a2-highgpu-4g 50,000 50,000 1,200 1,200
a2-highgpu-8g 80,000 80,000 1,200 1,200
a2-megagpu-16g 80,000 80,000 1,200 1,200

pd-ssd

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
a2-highgpu-1g 15,000 15,000 800 800
a2-highgpu-2g 25,000 25,000 1,200 1,200
a2-highgpu-4g 60,000 60,000 1,200 1,200
a2-highgpu-8g 100,000 100,000 1,200 1,200
a2-megagpu-16g 100,000 100,000 1,200 1,200

C2 VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
4 4,000 3,000 240 240
8 4,000 3,000 240 240
16 4,000 3,000 240 240
30 8,000 3,000 240 240
60 15,000 3,000 240 240

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
4 4,000 4,000 240 240
8 4,000 4,000 240 240
16 4,000 8,000 480 600
30 8,000 15,000 480 600
60 15,000 15,000 800 1,200

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
4 4,000 4,000 240 240
8 4,000 4,000 240 240
16 4,000 8,000 480 600
30 8,000 15,000 480 600
60 15,000 30,000 800 1,200

C2D VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2 4,590 3,060 245 245
4 4,590 3,060 245 245
8 4,590 3,060 245 245
16 4,590 3,060 245 245
32 8,160 3,060 245 245
56 8,160 3,060 245 245
112 15,300 3,060 245 245

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2 4,590 4,080 245 245
4 4,590 4,080 245 245
8 4,590 4,080 245 245
16 4,590 8,160 245 326
32 8,160 15,300 245 612
56 8,160 15,300 245 612
112 15,300 30,600 408 1,224

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2 4,590 4,080 245 245
4 4,590 4,080 245 245
8 4,590 4,080 245 245
16 4,590 8,160 245 326
32 8,160 15,300 245 612
56 8,160 15,300 245 612
112 15,300 30,600 408 1,224

E2 VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
e2-medium* 10,000 1,000 200 200
2-7 15,000 3,000 240 240
8-15 15,000 5,000 400 800
16 or more 15,000 7,500 400 1,200

*E2 shared-core machine types share one physical core between two vCPUs, each of which runs for a fraction of the time.

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
e2-medium* 10,000 12,000 200 200
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 20,000 20,000 1,000 1,200
32 or more 50,000 50,000 1,000 1,200

*E2 shared-core machine types share one physical core between two vCPUs, each of which runs for a fraction of the time.

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
e2-medium* 10,000 12,000 200 200
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 25,000 25,000 1,000 1,200
32 or more 60,000 60,000 1,000 1,200

*E2 shared-core machine types share one physical core between two vCPUs, each of which runs for a fraction of the time.

N1 VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
1 15,000 Up to 3,000 204 240
2-7 15,000 3,000 240 240
8-15 15,000 5,000 400 800
16 or more 15,000 7,500 400 1,200

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
1 15,000 15,000 204 240
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 20,000 20,000 1,200 1,200
32-63 50,000 50,000 1,200 1,200
64 or more 80,000 80,000 1,200 1,200

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
1 15,000 15,000 204 240
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 25,000 25,000 1,200 1,200
32-63 60,000 60,000 1,200 1,200
64 or more 100,000 100,000 1,200 1,200

N2 VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2-7 15,000 3,000 240 240
8-15 15,000 5,000 400 800
16 or more 15,000 7,500 400 1,200

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 20,000 20,000 1,200 1,200
32-63 50,000 50,000 1,200 1,200
64 or more 80,000 80,000 1,200 1,200

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 25,000 25,000 1,200 1,200
32-63 60,000 60,000 1,200 1,200
64 or more 100,000 100,000 1,200 1,200

pd-extreme

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
n2-standard-64 120,000 120,000 2,200 2,200
n2-highmem-64 120,000 120,000 2,200 2,200
n2-highmem-80 120,000 120,000 2,200 2,200

N2D VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2-7 15,000 3,000 240 240
8-15 15,000 5,000 400 800
16 or more 15,000 7,500 400 1,200

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 20,000 20,000 1,200 1,200
32-63 50,000 50,000 1,200 1,200
64 or more Up to 80,000 Up to 80,000 1,200 1,200

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 25,000 25,000 1,200 1,200
32-63 60,000 60,000 1,200 1,200
64 or more Up to 100,000 Up to 100,000 1,200 1,200

M1 VMs

pd-standard

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m1-megamem-96 15,000 7,500 400 1,200
m1-ultramem-40 15,000 7,500 400 1,200
m1-ultramem-80 15,000 7,500 400 1,200
m1-ultramem-160 15,000 7,500 400 1,200

pd-balanced

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m1-megamem-96 80,000 80,000 1,200 1,200
m1-ultramem-40 60,000 60,000 1,200 1,200
m1-ultramem-80 70,000 70,000 1,200 1,200
m1-ultramem-160 70,000 70,000 1,200 1,200

pd-ssd

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m1-megamem-96 90,000 90,000 1,200 1,200
m1-ultramem-40 60,000 60,000 1,200 1,200
m1-ultramem-80 70,000 70,000 1,200 1,200
m1-ultramem-160 70,000 70,000 1,200 1,200

pd-extreme

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m1-megamem-96 90,000 90,000 2,200 2,200

M2 VMs

pd-standard

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m2-megamem-416 15,000 7,500 400 1,200
m2-ultramem-208 15,000 7,500 400 1,200
m2-ultramem-416 15,000 7,500 400 1,200

pd-balanced

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m2-megamem-416 40,000 40,000 1,200 1,200
m2-ultramem-208 60,000 60,000 1,200 1,200
m2-ultramem-416 40,000 40,000 1,200 1,200

pd-ssd

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m2-megamem-416 40,000 40,000 1,200 1,200
m2-ultramem-208 60,000 60,000 1,200 1,200
m2-ultramem-416 40,000 40,000 1,200 1,200

pd-extreme

Machine type Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
m2-ultramem-208 60,000 60,000 2,200 2,200
m2-ultramem-416 40,000 40,000 1,200 2,200

T2D VMs

pd-standard

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
1 15,000 3,000 204 240
2-7 15,000 3,000 240 240
8-15 15,000 5,000 400 800
16 or more 15,000 7,500 400 1,200

pd-balanced

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
1 15,000 15,000 204 240
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 20,000 20,000 1,200 1,200
32-63 50,000 50,000 1,200 1,200
64 or more Up to 80,000 Up to 80,000 1,200 1,200

pd-ssd

VM vCPU count Maximum write IOPS Maximum read IOPS Maximum write throughput (MB/s) Maximum read throughput (MB/s)
1 15,000 15,000 204 240
2-7 15,000 15,000 240 240
8-15 15,000 15,000 800 800
16-31 25,000 25,000 1,200 1,200
32-63 60,000 60,000 1,200 1,200
64 or more Up to 100,000 Up to 100,000 1,200 1,200

Example

Consider a 1,000 GB zonal SSD persistent disk attached to a VM with an N2 machine type and 4 vCPUs. Based solely on disk size, the read limit is 30,000 IOPS, because SSD persistent disks can reach up to 30 IOPS per GB of disk space. However, because the VM has only 4 vCPUs, the read limit is restricted to 15,000 IOPS.
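The example above reduces to taking the smaller of the disk's size-scaled limit and the instance's limit. A minimal sketch, using the pd-ssd read rate of 30 IOPS per GB from the tables above (the instance limit is looked up from the machine-type tables):

```python
SSD_READ_IOPS_PER_GB = 30  # pd-ssd scaling factor from the tables above

def effective_read_iops(disk_size_gb: int, instance_read_limit: int) -> int:
    """Effective read IOPS is the lesser of the disk's size-scaled limit
    and the per-instance limit for the VM's machine type and vCPU count."""
    disk_limit = disk_size_gb * SSD_READ_IOPS_PER_GB
    return min(disk_limit, instance_read_limit)

# 1,000 GB pd-ssd on an N2 VM with 4 vCPUs (instance read limit 15,000):
# the disk could deliver 30,000 IOPS, but the VM caps it at 15,000.
print(effective_read_iops(1_000, 15_000))  # 15000
```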

Review performance and throttling metrics

You can review persistent disk performance metrics in Cloud Monitoring, Google Cloud's integrated monitoring solution.

Several of these metrics are useful for understanding whether and when your disks are being throttled. Throttling is intended to smooth out bursty I/O: bursts are spread out over a period so that the performance limits of your disk can be met, but not exceeded, at any given instant.

If your workload has a bursty I/O usage pattern, expect to see bursts in throttled bytes corresponding to bursts in read or written bytes. Similarly, expect to see bursts in throttled operations corresponding to bursts in read/write operations.

If your disk limit is 1,000 write IOPS, the disk accepts a write request every 1 millisecond. If you issue write requests faster than that, a small delay is introduced to spread the requests 1 millisecond apart. The IOPS and throughput limits discussed in this document are enforced continuously, not averaged over a per-second or per-minute window.
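The spacing behavior described above can be sketched as a simple admission scheduler: each request is admitted no sooner than a minimum gap after the previous one, where the gap is the inverse of the IOPS limit. This is an illustrative model, not the actual throttling implementation.

```python
def admission_times(request_times_ms, limit_iops):
    """Return when each request is admitted, given an IOPS limit.
    Requests arriving faster than 1000/limit_iops ms apart are delayed
    just enough to restore the minimum spacing."""
    min_gap_ms = 1000.0 / limit_iops
    admitted = []
    next_free = 0.0
    for t in request_times_ms:
        start = max(t, next_free)  # wait if the previous slot is too recent
        admitted.append(start)
        next_free = start + min_gap_ms
    return admitted

# A burst of 4 writes issued at t=0 is spread 1 ms apart at 1,000 IOPS:
print(admission_times([0.0, 0.0, 0.0, 0.0], 1000))  # [0.0, 1.0, 2.0, 3.0]
```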

Databases are a common example of bursty workloads. Databases tend to have short microbursts of I/O operations, which lead to temporary increases in queue depth. Higher queue depth can result in higher latency because more outstanding I/O operation requests are waiting in queue.

If your workload has a uniform I/O usage pattern and you are continuously reaching the performance limits of your disk, you can expect to see uniform levels of throttled bytes and operations.

To learn more, see Reviewing persistent disk performance metrics.

Optimize disk performance

To increase disk performance, start with the following steps:

  • Resize your persistent disks to increase the per-disk IOPS and throughput limits. Persistent disks do not have any reserved, unusable capacity, so you can use the full disk without performance degradation. However, certain file systems and applications might perform worse as the disk becomes full, so you might need to consider increasing the size of your disk to avoid such situations.

  • Change the machine type and number of vCPUs on the instance to increase the per-instance IOPS and throughput limits.
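When resizing a disk to raise its limits, a back-of-envelope calculation tells you how large the disk must be before its per-GB scaling reaches a target IOPS level. This sketch uses the read IOPS per GB rates from the zonal tables above; remember that the instance limit may still cap the result.

```python
import math

# Read IOPS per GB for zonal disk types, from the tables above.
READ_IOPS_PER_GB = {"pd-standard": 0.75, "pd-balanced": 6, "pd-ssd": 30}

def min_size_for_read_iops(disk_type: str, target_iops: int) -> int:
    """Smallest disk size in GB whose per-GB scaling meets target_iops."""
    return math.ceil(target_iops / READ_IOPS_PER_GB[disk_type])

# To scale a pd-balanced disk to 15,000 read IOPS, it must be >= 2,500 GB:
print(min_size_for_read_iops("pd-balanced", 15_000))  # 2500
```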

After you rule out disk size and machine type as the source of a bottleneck, your app and operating system might still need some tuning. See Optimizing persistent disk performance and Optimizing local SSD performance.

Other factors that affect performance

What's next