When you configure a storage option for apps that run on your instances, use the following process:
- Understand your workload: determine how much space you need and what performance characteristics your apps require.
- Determine the correct disk type: compare performance across disk types.
- Configure your instances and disk size for optimal storage performance: see factors that affect storage performance.
Workloads
Performance requirements for a given app are typically separated into two distinct I/O patterns:
- Small reads and writes (< 1 MB)
- Large reads and writes
For small reads and writes, the limiting factor is random input/output operations per second (IOPS). Workloads that involve many small, concurrent I/Os are IOPS-driven workloads.
For large reads and writes, the limiting factor is throughput. Workloads that involve random, large I/Os or primarily sequential I/Os are throughput-driven workloads.
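As a rough rule of thumb, throughput equals IOPS multiplied by I/O size, so the same request rate can be IOPS-bound at small block sizes and throughput-bound at large ones. The following minimal sketch makes that relationship concrete; the function names are illustrative, not part of any API:

```python
# Minimal sketch relating IOPS, I/O size, and throughput. The 1 MB small/large
# threshold follows the split above; the function names are illustrative.

def throughput_mbps(iops: float, io_size_bytes: int) -> float:
    """Throughput (MB/s) implied by an IOPS level at a fixed I/O size."""
    return iops * io_size_bytes / 1e6

def limiting_factor(avg_io_size_bytes: int) -> str:
    """Classify a workload by its average I/O size."""
    return "IOPS" if avg_io_size_bytes < 1_000_000 else "throughput"

# 10,000 IOPS of 8 KB I/Os is only ~82 MB/s: an IOPS-driven workload.
print(limiting_factor(8_192), throughput_mbps(10_000, 8_192))        # IOPS 81.92
# 100 IOPS of 4 MB I/Os is 400 MB/s: a throughput-driven workload.
print(limiting_factor(4_000_000), throughput_mbps(100, 4_000_000))   # throughput 400.0
```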
Block storage performance comparison
You can provide several different types of block storage for your instances to use. Each type has different price, performance, and durability characteristics.
- Standard persistent disks are a cost-sensitive option suited for large data processing workloads.
- SSD persistent disks are a performance-sensitive option suited for enterprise applications and high-performance database needs.
- Local SSDs provide higher performance and lower latency, but are nonredundant and exist only for the lifetime of a specific instance.
The IOPS per GB and throughput numbers represent the total aggregate performance for data on a single disk, whether attached to a single instance or shared across multiple instances. For multiple instances that are reading from the same disk, the aggregate throughput and IOPS capacity of the disk is shared among the instances. For planning purposes, we recommend that you use the following IOPS per GB and throughput rates:
Metric | Zonal standard persistent disks | Regional standard persistent disks | Zonal SSD persistent disks | Regional SSD persistent disks | Local SSD (SCSI) | Local SSD (NVMe)
---|---|---|---|---|---|---
Maximum sustained IOPS | ||||||
Read IOPS per GB | 0.75 | 0.75 | 30 | 30 | 266.7 | 453.3 |
Write IOPS per GB | 1.5 | 1.5 | 30 | 30 | 186.7 | 240 |
Read IOPS per instance | 7,500* | 3,000* | 15,000–100,000* | 15,000–100,000* | 400,000 | 680,000 |
Write IOPS per instance | 15,000* | 15,000* | 15,000–30,000* | 15,000–30,000* | 280,000 | 360,000 |
Maximum sustained throughput (MB/s) | ||||||
Read throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | 1.04 | 1.77 |
Write throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | 0.73 | 0.94 |
Read throughput per instance | 240–1,200* | 240* | 240–1,200* | 240–1,200* | 1,560 | 2,650
Write throughput per instance | 76–400* | 38–200* | 76–800* | 38–400* | 1,090 | 1,400
* Persistent disk IOPS and throughput performance depends on the number of instance vCPUs and I/O block size. See Instance vCPU count and volume size for details.
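To apply these planning rates, multiply the per-GB rate by the volume size and cap the result at the per-instance limit. The following is a minimal sketch using values transcribed from the table above for two zonal disk types; the dataclass and function names are illustrative, and the even split across instances sharing a disk is an assumption:

```python
from dataclasses import dataclass

@dataclass
class DiskType:
    read_iops_per_gb: float
    write_iops_per_gb: float
    throughput_per_gb: float   # MB/s per GB (reads and writes)
    max_read_iops: int         # per-instance caps; actual caps also depend
    max_write_iops: int        # on vCPU count and I/O block size
    max_read_mbps: int
    max_write_mbps: int

# Values transcribed from the table above (upper end of the starred ranges).
ZONAL_STANDARD_PD = DiskType(0.75, 1.5, 0.12, 7_500, 15_000, 1_200, 400)
ZONAL_SSD_PD      = DiskType(30.0, 30.0, 0.48, 100_000, 30_000, 1_200, 800)

def planning_estimate(disk: DiskType, size_gb: int, instances: int = 1) -> dict:
    """Per-GB rate times size, capped, then shared among attached instances
    (assuming an even split of the aggregate capacity)."""
    return {
        "read_iops":  min(disk.read_iops_per_gb * size_gb, disk.max_read_iops) / instances,
        "write_iops": min(disk.write_iops_per_gb * size_gb, disk.max_write_iops) / instances,
        "read_mbps":  min(disk.throughput_per_gb * size_gb, disk.max_read_mbps) / instances,
        "write_mbps": min(disk.throughput_per_gb * size_gb, disk.max_write_mbps) / instances,
    }

print(planning_estimate(ZONAL_STANDARD_PD, 1_000))  # 750 read IOPS, 120 MB/s
```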
Comparing persistent disk to a physical hard drive
When you specify the size of your persistent disks, consider how these disks compare to traditional physical hard drives. The following table compares standard persistent disks and SSD persistent disks to the typical performance that you would expect from a 7200 RPM SATA drive, which typically achieves 75 IOPS or 120 MB/s.
I/O type | I/O pattern | Standard persistent disk size required | SSD persistent disk size required
---|---|---|---
Small random reads | 75 small random reads | 100 GB | 3 GB |
Small random writes | 75 small random writes | 50 GB | 3 GB |
Streaming large reads | 120 MB/s streaming reads | 1,000 GB | 250 GB |
Streaming large writes | 120 MB/s streaming writes | 1,000 GB | 250 GB |
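The sizes in this table follow directly from the per-GB rates listed earlier: divide the target performance by the per-GB rate and round up. A quick sketch of that arithmetic; the function names are illustrative:

```python
# Sketch: derive the "size required to match a 7200 RPM SATA drive" figures
# above from the per-GB rates in the planning table.

import math

def gb_needed_for_iops(target_iops: float, iops_per_gb: float) -> int:
    return math.ceil(target_iops / iops_per_gb)

def gb_needed_for_throughput(target_mbps: float, mbps_per_gb: float) -> int:
    return math.ceil(target_mbps / mbps_per_gb)

# A 7200 RPM SATA drive: ~75 IOPS or ~120 MB/s.
print(gb_needed_for_iops(75, 0.75))          # 100 GB standard PD for reads
print(gb_needed_for_iops(75, 1.5))           # 50 GB standard PD for writes
print(gb_needed_for_iops(75, 30))            # 3 GB SSD PD
print(gb_needed_for_throughput(120, 0.12))   # 1,000 GB standard PD
print(gb_needed_for_throughput(120, 0.48))   # 250 GB SSD PD
```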
Price vs. performance
While you have several inputs to consider when you select a volume type and size for your app, one factor you do not need to estimate is the per-I/O cost of using your volume. Persistent disk has no per-I/O costs, so there is no need to estimate monthly I/O to budget for what you will spend on disks. However, for IOPS-oriented workloads, you can break down the monthly cost into a price per IOPS for comparison purposes.
The following pricing calculation examples use U.S. persistent disk pricing. In these examples, consider the relative costs of standard persistent disks compared to SSD persistent disks. Standard persistent disks are priced at $0.040 per GB, and SSD persistent disks are priced at $0.170 per GB. When you increase the size of a volume, you also increase the performance caps automatically, at no additional cost.
To determine the cost per IOPS of a persistent disk, divide the price per GB per month by the number of IOPS per GB. The following table calculates the price per random read IOPS per GB. You can use the same calculation to determine the price per write IOPS.
Disk type | Price per GB / month | Read IOPS per GB | Price per IOPS per GB |
---|---|---|---|
Standard persistent disk | $0.040 | 0.75 | $0.040 / 0.75 = $0.0533 |
SSD persistent disk | $0.170 | 30 | $0.170 / 30 = $0.0057 |
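The same arithmetic in code form, as a sketch; the prices are the illustrative U.S. prices quoted in this section and may not reflect current pricing:

```python
# Sketch of the price-per-IOPS comparison above.

def price_per_iops(price_per_gb_month: float, iops_per_gb: float) -> float:
    return price_per_gb_month / iops_per_gb

print(round(price_per_iops(0.040, 0.75), 4))  # standard PD: ~$0.0533 per IOPS
print(round(price_per_iops(0.170, 30), 4))    # SSD PD: ~$0.0057 per IOPS
```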
Factors that affect performance limits
Instance vCPU count and volume size
Standard persistent disk
Standard persistent disk IOPS and throughput performance increases linearly with the size of the disk until it reaches the following per-instance limits:
- Read throughput: Up to 1,200 MB/s at a 10 TB disk size.
- Write throughput: Up to 400 MB/s at a 3.4 TB disk size.
- Read IOPS: Up to 7,500 IOPS at a 10 TB disk size.
- Write IOPS: Up to 15,000 IOPS at a 10 TB disk size.
A vCPU count of 16 or more for your instance does not limit the performance of standard persistent disks.
A vCPU count of less than 8 for your instance reduces the write limit for throughput because network egress limits are proportional to the vCPU count. The observed write throughput also depends on the size of I/Os (16 KB I/Os consume more throughput than 8 KB I/Os at the same IOPS level).
To gain persistent disk performance benefits on your existing instances, resize your persistent disks to increase IOPS and throughput per persistent disk.
Volume size (GB) | Sustained read IOPS (<=16 KB/IO) | Sustained write IOPS (<=8 KB/IO) | Sustained write IOPS (16 KB/IO) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s)
---|---|---|---|---|---
10 | * | * | * | * | * |
32 | 24 | 48 | 48 | 3 | 3 |
64 | 48 | 96 | 96 | 7 | 7 |
128 | 96 | 192 | 192 | 15 | 15 |
256 | 192 | 384 | 384 | 30 | 30 |
512 | 384 | 768 | 768 | 61 | 61 |
1,000 | 750 | 1,500 | 1,500 | 120 | 120 |
1,500 | 1,125 | 2,250 | 2,250 | 180 | 180 |
2,048 | 1,536 | 3,072 | 3,072 | 245 | 245 |
4,000 | 3,000 | 6,000 | 6,000 | 480 | 400 |
5,000 | 3,750 | 7,500 | 7,500 | 600 | 400 |
8,192 | 6,144 | 12,288 | 7,500 | 983 | 400 |
10,000** | 7,500 | 15,000 | 7,500 | 1,200 | 400 |
65,536 | 7,500 | 15,000 | 7,500 | 1,200 | 400 |
* Use this volume size only for boot volumes. I/O bursting provides higher performance for boot volumes than the linear scaling described here.
** Throughput near the limit depends on CPU utilization and resource availability, so performance variability is expected.
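The rows in this table follow the linear-scaling rule described above. Here is a minimal sketch that reproduces them from the per-GB rates and per-instance caps; the 16 KB write cap of 7,500 IOPS is taken from the table, and the function name is illustrative:

```python
# Sketch reproducing rows of the standard persistent disk table above from
# the linear-scaling rule. Rows marked * (boot-volume sizes) are not modeled.

def standard_pd_row(size_gb: int) -> tuple:
    read_iops = min(0.75 * size_gb, 7_500)
    write_iops = min(1.5 * size_gb, 15_000)
    write_iops_16k = min(1.5 * size_gb, 7_500)   # 16 KB writes cap at 7,500
    read_mbps = min(0.12 * size_gb, 1_200)
    write_mbps = min(0.12 * size_gb, 400)
    return read_iops, write_iops, write_iops_16k, read_mbps, write_mbps

for size in (256, 1_000, 8_192):
    print(size, standard_pd_row(size))
# 256   -> (192.0, 384.0, 384.0, 30.72, 30.72)
# 1000  -> (750.0, 1500.0, 1500.0, 120.0, 120.0)
# 8192  -> (6144.0, 12288.0, 7500.0, 983.04, 400)
```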
SSD persistent disk
IOPS performance of SSD persistent disks depends on the number of vCPUs in the instance in addition to disk size. Performance scales linearly until it reaches either the limits of the volume or the limits of each Compute Engine instance.
For example, consider an SSD persistent disk with a volume size of 1,000 GB. According to the tables that follow, its read limit is 30,000 IOPS. However, if the disk is attached to an instance with only 4 vCPUs, the read limit is 15,000 IOPS.
VMs with fewer cores have lower write IOPS and throughput limits because of network egress limits on write throughput. For more information, see Network egress caps on write throughput. SSD read bandwidth and IOPS consistency near the maximum limits depend largely on network ingress utilization; some variability is to be expected, especially for 16 KB I/Os near the maximum IOPS limits.
SSD persistent disks are designed for single-digit millisecond latencies. The observed latency is app-specific.
To improve SSD persistent disk performance on your existing instances, change the machine type of the instance to increase the per-VM limits, and resize your persistent disks to increase IOPS and throughput per persistent disk.
Volume size (GB) | Sustained read IOPS (<=8 KB/IO) | Sustained read IOPS (<=16 KB/IO) | Sustained write IOPS (<=8 KB/IO) | Sustained write IOPS (16 KB/IO) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s)
---|---|---|---|---|---|---
10 | 300 | 300 | 300 | 300 | 4.8 | 4.8 |
32 | 960 | 960 | 960 | 960 | 15 | 15 |
64 | 1,920 | 1,920 | 1,920 | 1,920 | 30 | 30 |
128 | 3,840 | 3,840 | 3,840 | 3,840 | 61 | 61 |
256 | 7,680 | 7,680 | 7,680 | 7,680 | 122 | 122 |
500 | 15,000 | 15,000 | 15,000 | 15,000 | 240 | 240 |
834 | 25,000 | 25,000 | 25,000 | 25,000 | 400 | 400 |
1,000 | 30,000 | 30,000 | 30,000 | 25,000 | 480 | 480 |
1,334 | 40,000 | 40,000 | 30,000 | 25,000 | 640 | 640 |
1,667 | 50,000 | 50,000 | 30,000 | 25,000 | 800 | 800 |
2,048 | 60,000 | 60,000 | 30,000 | 25,000 | 983 | 800 |
4,096 | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |
8,192 | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |
16,384 | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |
32,768 | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |
65,536 | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |
Instance vCPU count | Sustained read IOPS (<=8 KB/IO) | Sustained read IOPS (<=16 KB/IO) | Sustained write IOPS (<=8 KB/IO) | Sustained write IOPS (16 KB/IO) | Sustained read throughput (MB/s)* | Sustained write throughput (MB/s)
---|---|---|---|---|---|---
1 vCPU | 15,000 | 15,000 | 9,000 | 4,500 | 240 | 72 |
2 to 3 vCPUs | 15,000 | 15,000 | 15,000 | 4,500/vCPU | 240 | 72/vCPU |
4 to 7 vCPUs | 15,000 | 15,000 | 15,000 | 15,000 | 240 | 240 |
8 to 15 vCPUs | 15,000 | 15,000 | 15,000 | 15,000 | 800 | 400 |
16 to 31 vCPUs | 25,000 | 25,000 | 25,000 | 25,000 | 1,200 | 800 |
32 to 63 vCPUs | 60,000 | 60,000 | 30,000 | 25,000 | 1,200 | 800 |
64+ vCPUs** | 100,000 | 75,000 | 30,000 | 25,000 | 1,200 | 800 |
* Maximum throughput based on I/O block sizes of 256 KB or larger.
** Maximum performance might not be achievable at full CPU utilization.
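In other words, the effective limit is the lower of the volume limit and the vCPU-based limit, as in the 1,000 GB, 4-vCPU example above. A sketch with selected values transcribed from the two tables; the lookup helpers are illustrative:

```python
import bisect

# SSD persistent disk read IOPS limits (<=16 KB I/O), transcribed from above.
# Volume limits (selected rows):
#   500 GB -> 15,000; 1,000 GB -> 30,000; 2,048 GB -> 60,000; 4,096 GB -> 75,000
# vCPU tier limits:
VCPU_TIER_FLOORS = [1, 16, 32, 64]
VCPU_READ_LIMITS = [15_000, 25_000, 60_000, 75_000]

def vcpu_read_limit(vcpus: int) -> int:
    """Read IOPS limit for the vCPU tier containing `vcpus`."""
    return VCPU_READ_LIMITS[bisect.bisect_right(VCPU_TIER_FLOORS, vcpus) - 1]

def effective_read_iops(volume_limit: int, vcpus: int) -> int:
    """The effective limit is the lower of the volume and vCPU limits."""
    return min(volume_limit, vcpu_read_limit(vcpus))

# The example from the text: a 1,000 GB SSD PD (volume limit 30,000) on a
# 4-vCPU instance is limited to 15,000 read IOPS.
print(effective_read_iops(30_000, 4))    # 15000
print(effective_read_iops(30_000, 32))   # 30000
```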
Machine type
Compute-optimized machine types are subject to specific persistent disk limits per vCPU that differ from the limits for other machine types. The following tables show these limits.
Note that per-volume performance remains the same as described in the Standard persistent disk and SSD persistent disk sections.
C2 standard persistent disk
Instance vCPU count | Sustained read IOPS (<=16 KB/IO) | Sustained write IOPS (<=8 KB/IO) | Sustained write IOPS (16 KB/IO) | Sustained read throughput (MB/s)* | Sustained write throughput (MB/s)
---|---|---|---|---|---
4 vCPUs | 3,000 | 4,000 | 4,000 | 240 | 240 |
8 vCPUs | 3,000 | 4,000 | 4,000 | 240 | 240 |
16 vCPUs | 3,000 | 4,000 | 4,000 | 240 | 240 |
30 vCPUs | 3,000 | 8,000 | 8,000 | 240 | 240 |
60 vCPUs | 3,000 | 15,000 | 15,000 | 240 | 240 |
C2 SSD persistent disk
Instance vCPU count | Sustained read IOPS (<=16 KB/IO) | Sustained write IOPS (<=8 KB/IO) | Sustained write IOPS (16 KB/IO) | Sustained read throughput (MB/s)* | Sustained write throughput (MB/s)
---|---|---|---|---|---
4 vCPUs | 4,000 | 4,000 | 4,000 | 240 | 240 |
8 vCPUs | 4,000 | 4,000 | 4,000 | 240 | 240 |
16 vCPUs | 8,000 | 4,000 | 4,000 | 320 | 240 |
30 vCPUs | 15,000 | 8,000 | 8,000 | 600 | 240 |
60 vCPUs | 30,000 | 15,000 | 15,000 | 1,200 | 400 |
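You can treat the C2 rows as a tier lookup keyed on vCPU count. A minimal sketch for the C2 SSD persistent disk table; the values are transcribed from above and the helper names are illustrative:

```python
import bisect

# C2 SSD persistent disk per-instance limits, transcribed from the table above:
# (read IOPS <=16 KB, write IOPS <=8 KB, read MB/s, write MB/s) per vCPU tier.
C2_SSD_TIERS = [4, 8, 16, 30, 60]
C2_SSD_LIMITS = [
    (4_000, 4_000, 240, 240),     # 4 vCPUs
    (4_000, 4_000, 240, 240),     # 8 vCPUs
    (8_000, 4_000, 320, 240),     # 16 vCPUs
    (15_000, 8_000, 600, 240),    # 30 vCPUs
    (30_000, 15_000, 1_200, 400), # 60 vCPUs
]

def c2_ssd_limits(vcpus: int) -> tuple:
    """Return (read IOPS, write IOPS, read MB/s, write MB/s) for a C2 VM."""
    return C2_SSD_LIMITS[bisect.bisect_right(C2_SSD_TIERS, vcpus) - 1]

print(c2_ssd_limits(30))   # (15000, 8000, 600, 240)
```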
Simultaneous reads and writes
For standard persistent disks, simultaneous reads and writes share the same resources. When your instance uses more read throughput or IOPS, it can perform fewer writes; when it uses more write throughput, it can perform fewer reads.
SSD persistent disks can reach their maximum throughput limits for reads and writes simultaneously. The same is not true for IOPS: SSD persistent disks cannot reach their maximum read and write IOPS limits at the same time. To achieve maximum throughput for simultaneous reads and writes, optimize the I/O size so that the volume can reach its throughput limits without hitting an IOPS bottleneck.
Instance IOPS limits for simultaneous reads and writes:
The IOPS numbers in the following table are based on an 8-KB I/O size. Other I/O sizes, such as 16 KB, might have different IOPS numbers but maintain the same read/write distribution.
Standard persistent disk read | Standard persistent disk write | SSD persistent disk (8 vCPUs) read | SSD persistent disk (8 vCPUs) write | SSD persistent disk (32+ vCPUs) read | SSD persistent disk (32+ vCPUs) write
---|---|---|---|---|---
7,500 IOPS | 0 IOPS | 15,000 IOPS | 0 IOPS | 60,000 IOPS | 0 IOPS |
5,625 IOPS | 3,750 IOPS | 11,250 IOPS | 3,750 IOPS | 45,000 IOPS | 7,500 IOPS |
3,750 IOPS | 7,500 IOPS | 7,500 IOPS | 7,500 IOPS | 30,000 IOPS | 15,000 IOPS |
1,875 IOPS | 11,250 IOPS | 3,750 IOPS | 11,250 IOPS | 15,000 IOPS | 22,500 IOPS
0 IOPS | 15,000 IOPS | 0 IOPS | 15,000 IOPS | 0 IOPS | 30,000 IOPS |
Instance throughput limits for simultaneous reads and writes:
Standard persistent disk read | Standard persistent disk write | SSD persistent disk (8 vCPUs) read | SSD persistent disk (8 vCPUs) write | SSD persistent disk (16+ vCPUs) read | SSD persistent disk (16+ vCPUs) write
---|---|---|---|---|---
1,200 MB/s | 0 MB/s | 800 MB/s* | 400 MB/s* | 1,200 MB/s* | 800 MB/s*
900 MB/s | 100 MB/s | | | |
600 MB/s | 200 MB/s | | | |
300 MB/s | 300 MB/s | | | |
0 MB/s | 400 MB/s | | | |
* For SSD persistent disks, the max read throughput and max write throughput are independent of each other, so these limits are constant.
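The standard persistent disk columns in both tables are consistent with a linear trade-off between reads and writes. Assuming linear interpolation between the documented points (an assumption, not a documented formula), you can estimate the remaining read capacity at any write level:

```python
# Sketch: linear read/write trade-off for standard persistent disks, assuming
# the documented rows interpolate linearly. Function name is illustrative.

def standard_pd_available_read(write_level: float, write_max: float,
                               read_max: float) -> float:
    """Read capacity remaining when write_level of write_max is in use."""
    return read_max * (1 - write_level / write_max)

# IOPS at 8 KB I/Os: writing 3,750 IOPS leaves 5,625 of the 7,500 read IOPS.
print(standard_pd_available_read(3_750, 15_000, 7_500))   # 5625.0
# Throughput: writing 100 MB/s leaves 900 of the 1,200 MB/s read maximum.
print(standard_pd_available_read(100, 400, 1_200))        # 900.0
```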
Larger logical volume performance
Persistent disks can be up to 64 TB in size, and you can create single logical volumes of up to 257 TB using logical volume management inside your VM. A larger volume size impacts performance in the following ways:
- Not all local file systems work well at this scale. Common operations, such as mounting and file system checking, might take longer than expected.
- Maximum persistent disk performance is achieved at smaller sizes. Disks take longer to fully read or write with this much storage on one VM. If your application supports it, consider using multiple VMs for greater total-system throughput.
- Snapshotting large amounts of persistent disk might take longer than expected to complete and might provide an inconsistent view of your logical volume without careful coordination with your application.
Network egress caps on write throughput
Each persistent disk write operation contributes to your virtual machine (VM) instance's cumulative network egress cap.
To calculate the maximum persistent disk write traffic that a VM instance can issue, subtract an instance's other network egress traffic from its 2 Gbit/s/vCPU network cap. The remaining throughput represents the throughput available to you for persistent disk write traffic.
Compute Engine stores data on persistent disks with built-in redundancy. To achieve this redundancy, instances write data to persistent disks three times in parallel. Additionally, each write request has a certain amount of overhead, which consumes egress bandwidth.
Each instance has a persistent disk write limit based on the network egress cap for the VM. In a situation where persistent disk is competing with IP traffic for network egress, 60% of the network egress cap goes to persistent disk traffic, leaving 40% for IP traffic. The following table shows the expected persistent disk write bandwidth with and without additional IP traffic:
Number of vCPUs | Standard persistent disk write limit (MB/s) | Standard persistent disk write allocation (MB/s) | Standard volume size needed to reach limit (GB) | SSD persistent disk write limit (MB/s) | SSD persistent disk write allocation (MB/s) | SSD persistent disk size needed to reach limit (GB)
---|---|---|---|---|---|---
1 | 72 | 43 | 600 | 72 | 43 | 150 |
2 | 144 | 86 | 1,200 | 144 | 86 | 300 |
4 | 240 | 173 | 2,000 | 240 | 173 | 500 |
8+ | 400 | 346 | 3,334 | 400 | 346 | 834 |
To understand how the values in this table were calculated, consider the example of 1 vCPU and a standard persistent disk. In this example, we approximate the bandwidth multiplier for every write request as 3.3x, which means that data is written out 3 times with a total overhead of 10%. To calculate the maximum write bandwidth, divide the network egress cap of 2 Gbit/s (equivalent to 238 MB/s) by 3.3:
Max write bandwidth for 1 vCPU = 238 / 3.3 = ~72 MB/s to your standard persistent disk
Using the standard persistent disk write throughput per GB figure provided in the performance chart presented earlier, you can also derive the required disk capacity to achieve this performance:
Required disk capacity to achieve max write bandwidth for 1 vCPU = 72 / 0.12 = ~600 GB
Similar to zonal persistent disks, write traffic from regional persistent disks contributes to a VM instance's cumulative network egress cap. Because regional persistent disks write data to replicas in two zones, use a multiplier of 6.6, twice the zonal factor, to calculate the available network egress.
Max write bandwidth for 1 vCPU = 238 / 6.6 = ~36 MB/s to your standard replicated persistent disk.
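A sketch of these calculations in code, with constants transcribed from this section; the helper names are illustrative:

```python
# Sketch of the egress-cap arithmetic above.

NETWORK_EGRESS_MBPS_PER_VCPU = 238   # 2 Gbit/s is roughly 238 MB/s
ZONAL_WRITE_MULTIPLIER = 3.3         # data written 3x plus ~10% overhead
REGIONAL_WRITE_MULTIPLIER = 6.6      # regional disks replicate to two zones
PD_SHARE_WITH_IP_TRAFFIC = 0.6       # 60% of egress goes to persistent disk

def max_write_mbps(vcpus: int, multiplier: float = ZONAL_WRITE_MULTIPLIER) -> float:
    """Write bandwidth allowed by the egress cap alone. Per-instance and
    per-volume limits from the tables above still apply on top of this."""
    return vcpus * NETWORK_EGRESS_MBPS_PER_VCPU / multiplier

def volume_gb_to_reach(write_mbps: float, write_mbps_per_gb: float = 0.12) -> float:
    """Standard PD volume size whose write throughput limit equals write_mbps."""
    return write_mbps / write_mbps_per_gb

print(round(max_write_mbps(1)))                             # ~72 MB/s zonal
print(round(max_write_mbps(1) * PD_SHARE_WITH_IP_TRAFFIC))  # ~43 MB/s with IP traffic
print(round(volume_gb_to_reach(72)))                        # ~600 GB standard PD
print(round(max_write_mbps(1, REGIONAL_WRITE_MULTIPLIER)))  # ~36 MB/s regional
```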
For VMs with 16 or more cores, the maximum network egress bandwidth that persistent disk writes consume does not grow as write throughput increases from 400 MB/s up to 800 MB/s; it stays at 1,320 MB/s (400 MB/s * 3.3).
What's next
- Learn how to optimize persistent disk performance.
- Learn how to optimize local SSD performance.
- Learn about persistent disk pricing.
- Learn about local SSD pricing.