Block storage performance
Overview
This page discusses the many factors that determine the performance
of the block storage volumes that you attach to your virtual machine (VM)
instances. Before you begin, consider the following:
To reach the performance limits of your persistent disks, use a high I/O
queue depth
(32 or higher). Persistent disks are networked storage and generally have
higher latency than physical disks or local SSDs.
Make sure that your application issues enough I/Os to saturate your
disk.
For workloads that primarily involve small (4 KB to 16 KB) random
I/Os, the limiting performance factor is random input/output operations per
second (IOPS).
For workloads that primarily involve sequential or large
(256 KB to 1 MB) random I/Os, the limiting performance factor is
throughput.
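Little's Law makes the queue-depth guidance concrete: sustained IOPS is
bounded by queue depth divided by average I/O latency, and throughput follows
from IOPS multiplied by I/O size. The following Python sketch illustrates the
arithmetic; the helper name and the latency figure are illustrative
assumptions, not published limits.

```python
# Illustrative sketch of Little's Law applied to disk I/O:
# achievable IOPS is bounded by queue depth / average I/O latency.
def max_iops(queue_depth: int, avg_latency_s: float) -> float:
    """Upper bound on IOPS for a given queue depth and per-I/O latency."""
    return queue_depth / avg_latency_s

# Assuming ~1 ms per-I/O latency (networked storage), queue depth 1 caps
# you near 1,000 IOPS, while queue depth 32 allows up to ~32,000 IOPS.
print(max_iops(1, 0.001))   # 1000.0
print(max_iops(32, 0.001))  # 32000.0

# Throughput = IOPS * I/O size: 32,000 IOPS of 4 KB I/Os is only ~131 MB/s,
# which is why small-I/O workloads are IOPS-bound and large-I/O workloads
# are throughput-bound.
print(max_iops(32, 0.001) * 4 * 1024 / 1e6)  # ~131 MB/s
```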
Choose a storage option
You can provide several different types of block storage
for your instances to use.
When you configure a zonal or regional persistent disk, you must select one of
the following disk types:
Standard persistent disks (pd-standard) are suited for large data
processing workloads that primarily use sequential I/Os.
Balanced persistent disks (pd-balanced) are an alternative to SSD
persistent disks that balance performance and cost. With the same maximum
IOPS as SSD persistent disks and lower IOPS per GB, balanced persistent disks
offer performance levels suitable for most general-purpose applications at a
price point between that of standard and SSD persistent disks.
SSD persistent disks (pd-ssd) are suited for enterprise
applications and high-performance database needs that require lower
latency and more IOPS than standard persistent disks provide. SSD
persistent disks are designed for single-digit millisecond latencies; the
observed latency is application-specific.
Attaching a disk to multiple virtual machine
instances in read-only mode does not affect aggregate performance or cost.
Each machine gets a share of the per-disk performance limit.
Note that SSD read bandwidth and IOPS consistency near the maximum performance
limits depend largely on network ingress utilization. Some variability in the
performance limits is to be expected, especially when operating near the maximum
IOPS limits with an I/O size of 16 KB.
Performance of persistent disks in multi-writer mode
Persistent disks created in multi-writer mode have specific IOPS and
throughput limits.
* Persistent disk IOPS and throughput performance
depends on disk size, instance vCPU count, and I/O block size, among other
factors.
Attaching a multi-writer disk to multiple virtual machine instances does not affect
aggregate performance or cost. Each machine gets a share of the per-disk performance limit.
When you specify the size of your persistent disks, consider how these disks
compare to traditional physical hard drives. The following table compares
standard, balanced, and SSD persistent disks to the performance that you
would expect from a 7200 RPM SATA drive, which typically achieves
75 IOPS or 120 MB per second.
Size required to match a 7200 RPM SATA drive (GB):

| I/O type | I/O pattern | Standard persistent disk | Balanced persistent disk | SSD persistent disk |
|---|---|---|---|---|
| Small random reads | 75 small random reads | 100 | 12 | 3 |
| Small random writes | 75 small random writes | 50 | 12 | 3 |
| Streaming large reads | 120 MB/s streaming reads | 1,000 | 428 | 250 |
| Streaming large writes | 120 MB/s streaming writes | 1,000 | 428 | 250 |
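The sizes in this table follow from each disk type's per-GB performance
rates. The following sketch shows the arithmetic; the streaming rates (0.12,
0.28, and 0.48 MB/s per GB) appear later on this page, while the IOPS-per-GB
figures and the helper name are illustrative assumptions.

```python
import math

def size_to_match(target: float, rate_per_gb: float) -> int:
    """Smallest disk size (GB) whose per-GB rate reaches the target."""
    return math.ceil(target / rate_per_gb)

# Assumed IOPS-per-GB rates: 0.75 read and 1.5 write for standard PD.
print(size_to_match(75, 0.75))   # 100 GB standard PD for 75 random reads
print(size_to_match(75, 1.5))    # 50 GB standard PD for 75 random writes

# Streaming rates taken from the write-throughput figures later on this page.
print(size_to_match(120, 0.12))  # 1,000 GB standard PD for 120 MB/s
print(size_to_match(120, 0.28))  # 429 GB balanced PD (table rounds to 428)
print(size_to_match(120, 0.48))  # 250 GB SSD PD
```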
Configure your persistent disks and instances
Persistent disk performance scales with the size of the disk and with the
number of vCPUs on your VM instance.
Performance scales until it reaches either the limits of the disk or
the limits of the VM instance to which the disk is attached. The VM instance
limits are determined by the machine type and the
number of vCPUs on the instance.
For example, consider a 1,000 GB SSD persistent disk attached to an instance
with an N2 machine type and 4 vCPUs. The read limit based solely on the size of
the disk is 30,000 IOPS. However, because the instance has 4 vCPUs, the read
limit is restricted to 15,000 IOPS.
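The effective limit in this example is the minimum of the per-disk limit,
which scales with size, and the per-instance limit. Here is a minimal sketch
of that rule, with the 30 read IOPS per GB rate implied by the example
(1,000 GB yields 30,000 IOPS) and hypothetical names:

```python
SSD_READ_IOPS_PER_GB = 30          # implied by the example above
N2_4_VCPU_READ_IOPS_CAP = 15_000   # per-instance cap from the example

def effective_read_iops(disk_size_gb: int, vm_cap: int) -> int:
    """Effective read IOPS: the lower of the disk and instance limits."""
    disk_limit = disk_size_gb * SSD_READ_IOPS_PER_GB
    return min(disk_limit, vm_cap)

print(effective_read_iops(1_000, N2_4_VCPU_READ_IOPS_CAP))  # 15000
```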
The following tables show performance by disk size in increments where
performance changes significantly. However, you can specify a disk size in any
1 GB increment. Configure your disks and VM instances according to your
performance requirements.
* Maximum performance might not be achievable at
full CPU utilization. SSD read bandwidth and IOPS consistency near the
maximum limits largely depend on network ingress utilization; some
variability is to be expected, especially for 16 KB I/Os near the maximum
IOPS limits.
C2 VMs
Standard persistent disk
| VM vCPU count | Read IOPS (<=16 KB per I/O) | Write IOPS (<=8 KB per I/O) | Write IOPS (16 KB per I/O) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 4 | 3,000 | 4,000 | 4,000 | 240 | 240 |
| 8 | 3,000 | 4,000 | 4,000 | 240 | 240 |
| 16 | 3,000 | 4,000 | 4,000 | 240 | 240 |
| 30 | 3,000 | 8,000 | 8,000 | 240 | 240 |
| 60 | 3,000 | 15,000 | 15,000 | 240 | 240 |
SSD persistent disk
| VM vCPU count | Read IOPS (<=16 KB per I/O) | Write IOPS (<=8 KB per I/O) | Write IOPS (16 KB per I/O) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 4 | 4,000 | 4,000 | 4,000 | 240 | 240 |
| 8 | 4,000 | 4,000 | 4,000 | 240 | 240 |
| 16 | 8,000 | 4,000 | 4,000 | 320 | 240 |
| 30 | 15,000 | 8,000 | 8,000 | 600 | 240 |
| 60 | 30,000 | 15,000 | 15,000 | 1,200 | 400 |
Review performance and throttling metrics
You can review persistent disk performance metrics in
Cloud Monitoring,
Google Cloud's integrated monitoring solution.
Several of these metrics are useful for understanding if and when your
disks are being throttled. Throttling is intended to smooth out bursty I/Os.
With throttling, bursty I/Os can be spread out over a period of time, so that
the performance limits of your disk can be met but not exceeded at any given
instant.
If your workload has a bursty I/O usage pattern, expect to see bursts
in throttled bytes corresponding to bursts in read or written bytes. Similarly,
expect to see bursts in throttled operations corresponding to bursts in
read/write operations.
For example, if your disk limit is 1,000 write IOPS, the disk accepts a write
request every millisecond. If you issue write requests faster than that, a
small delay is introduced to spread the requests 1 millisecond apart. The
IOPS and throughput limits discussed on this page are enforced at all times,
not on a per-minute or per-second basis.
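To illustrate the smoothing behavior, the following toy pacer spreads
requests so that a 1,000-IOPS limit admits one write per millisecond. This is
only a sketch of the idea; it is not Compute Engine's implementation, and the
class and method names are hypothetical.

```python
import time

class WritePacer:
    """Toy pacer: delays submissions so they stay under an IOPS limit."""

    def __init__(self, iops_limit: int):
        self.interval = 1.0 / iops_limit      # seconds between accepted writes
        self.next_slot = time.monotonic()

    def submit(self) -> None:
        now = time.monotonic()
        if now < self.next_slot:
            time.sleep(self.next_slot - now)  # small delay spreads requests
        self.next_slot = max(now, self.next_slot) + self.interval

pacer = WritePacer(iops_limit=1_000)
for _ in range(5):
    pacer.submit()  # successive calls are paced ~1 ms apart
```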
Databases are a common example of bursty workloads. Databases tend to have short
microbursts of I/O operations, which lead to temporary increases in queue
depth. Higher
queue depth can result in higher latency because more outstanding I/O operation
requests are waiting in queue.
If your workload has a uniform I/O usage pattern and you are continuously
reaching the performance limits of your disk, you can expect to see uniform
levels of throttled bytes and operations.
To increase disk performance, start with the following steps:
Resize your persistent disks
to increase the per-disk IOPS and throughput limits. Persistent disks do not
have any reserved, unusable capacity, so you can use the full disk without
performance degradation. However, certain file systems and applications
might perform worse as the disk becomes full, so you might need to
increase the size of your disk to avoid this.
Change the machine type
and number of vCPUs on the instance to increase the per-instance IOPS and
throughput limits.
Your virtual machine (VM) instance has a
network egress cap that depends on the machine type of the VM.
Compute Engine stores data on persistent disks with multiple parallel writes
to ensure built-in redundancy. Additionally, each write request has some
overhead that uses additional write bandwidth.
The maximum write traffic that a VM instance can issue is the
network egress cap divided by a
bandwidth multiplier that accounts for the write bandwidth used by this
redundancy and overhead. The network egress cap depends on the machine type of
the VM instance. The network egress caps for each machine type are listed in the
Machine type tables in the Network
egress bandwidth (Gbps) column.
In a situation where persistent disks compete with network egress bandwidth,
60% of the maximum network egress bandwidth, defined by the machine type, is
allocated to persistent disk writes, leaving 40% for all other network egress
traffic. The following example shows how to calculate the maximum persistent
disk write traffic that a VM instance can issue. Refer to
egress bandwidth for
details about other network egress traffic.
Standard persistent disk

| Number of vCPUs | Write limit (MB/s) | Write allocation (MB/s) | Disk size needed to reach limit (GB) |
|---|---|---|---|
| 1 | 72 | 43 | 600 |
| 2 | 240 | 227 | 2,000 |
| 4 | 240 | 227 | 2,000 |
| 6+ | 400 | 346 | 3,334 |
| 16+ | 400 | 346 | 3,334 |

SSD persistent disk

| Number of vCPUs | Write limit (MB/s) | Write allocation (MB/s) | Disk size needed to reach limit (GB) |
|---|---|---|---|
| 1 | 204 | 122 | 425 |
| 2 | 240 | 227 | 500 |
| 4 | 240 | 227 | 500 |
| 6+ | 800 | 480 | 1,667 |
| 16+ | 1,200 | 720 | 2,500 |

Balanced persistent disk

| Number of vCPUs | Write limit (MB/s) | Write allocation (MB/s) | Disk size needed to reach limit (GB) |
|---|---|---|---|
| 1 | 204 | 122 | 729 |
| 2 | 240 | 227 | 858 |
| 4 | 240 | 227 | 858 |
| 6+ | 800 | 480 | 2,858 |
The bandwidth multiplier for standard persistent disks is 3.3x, which means
the data is written out three times with a total overhead of 10%. The bandwidth
multiplier for SSD persistent disks is 1.16 because the data is written out
once with a total overhead of 16%.
If you attach a disk to an
N1 VM
instance with 1 vCPU, the network egress cap is 2 gigabits per second
(Gbps). You can calculate the ultimate bandwidth limit using the following
formulas:
Standard persistent disk maximum write bandwidth for 1 vCPU:
= 2 Gbps / 8 bits
= 0.25 GB per second
= 250 MB per second
=> 250 MB per second / 3.35
~= 72 MB per second

SSD and balanced persistent disk maximum write bandwidth for 1 vCPU:
= 2 Gbps / 8 bits
= 0.25 GB per second
= 250 MB per second
=> 250 MB per second / 1.16
~= 204 MB per second
Using the write throughput per GB figures provided in
the performance chart, you can also derive the required disk capacity to
achieve this performance:
Required standard persistent disk size for maximum 1 vCPU write bandwidth:
= 72 MB per second / 0.12 MB per second per GB
~= 600 GB

Required SSD persistent disk size for maximum 1 vCPU write bandwidth:
= 204 MB per second / 0.48 MB per second per GB
~= 425 GB

Required balanced persistent disk size for maximum 1 vCPU write bandwidth:
= 204 MB per second / 0.28 MB per second per GB
~= 729 GB
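The same calculation can be scripted. The following sketch reuses this page's
figures (the 2 Gbps egress cap, the bandwidth multipliers, and the per-GB
write rates); because the page rounds intermediate values, the raw quotients
differ slightly from the quoted 72 and 204 MB per second. Function names are
hypothetical.

```python
def egress_limited_write_mb_s(egress_gbps: float, multiplier: float) -> float:
    """Max PD write bandwidth: egress cap divided by the write multiplier."""
    return egress_gbps / 8 * 1000 / multiplier  # Gbps -> MB/s, then divide

def disk_gb_to_reach(write_mb_s: float, mb_s_per_gb: float) -> float:
    """Disk size needed for the per-disk limit to reach a write bandwidth."""
    return write_mb_s / mb_s_per_gb

# 1-vCPU N1 VM with a 2 Gbps network egress cap:
print(egress_limited_write_mb_s(2, 3.35))  # ~74.6 (page rounds to 72)
print(egress_limited_write_mb_s(2, 1.16))  # ~215.5 (page rounds to 204)

# Disk sizes needed to reach the rounded write bandwidths:
print(disk_gb_to_reach(72, 0.12))   # 600.0 GB standard PD
print(disk_gb_to_reach(204, 0.48))  # 425.0 GB SSD PD
print(disk_gb_to_reach(204, 0.28))  # ~728.6 GB balanced PD (page: 729)
```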
Similar to zonal persistent disks, write traffic from regional persistent disks
contributes to the cumulative network egress cap of the instance. To calculate
the available network egress for regional persistent disks, use a bandwidth
multiplier of 6.6 for standard persistent disks and 2.32 for SSD and balanced
persistent disks.
For VMs with 16 or more vCPUs, the maximum network egress bandwidth consumed
by standard persistent disks is 1,320 MB per second (400 MB/s * 3.3). For SSD
persistent disks, the maximum network egress bandwidth consumed is 1,392 MB
per second (1,200 MB/s * 1.16).
Simultaneous reads and writes
For standard persistent disks, simultaneous reads and writes share the same
resources. While your instance is using more read throughput or IOPS,
it is able to perform fewer writes. Conversely, instances that use more
write throughput or IOPS are able to perform fewer reads.
SSD and balanced persistent disk are capable of achieving maximum throughput
limits for both reads and writes simultaneously. However, SSD and balanced
persistent disks cannot simultaneously reach their maximum IOPS limits for
reads and writes.
Note that throughput = IOPS * I/O size. To take advantage of maximum
throughput limits for simultaneous reads and writes on SSD persistent disks,
use an I/O size such that read and write IOPS combined don't exceed the IOPS
limit.
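Here is a minimal sketch of this shared budget, assuming the linear
read/write tradeoff that is visible in the rows of the IOPS table that
follows; the helper name is hypothetical.

```python
def writes_available(read_iops: float, read_max: float, write_max: float) -> float:
    """Write IOPS still available at a given read rate (linear tradeoff)."""
    return max(0.0, (1 - read_iops / read_max) * write_max)

# Standard persistent disk (read max 7,500, write max 15,000):
print(writes_available(3_750, 7_500, 15_000))    # 7500.0, matches the table
# SSD persistent disk, 8 vCPUs (read max 15,000, write max 15,000):
print(writes_available(11_250, 15_000, 15_000))  # 3750.0, matches the table
```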
Instance IOPS limits for simultaneous reads and writes
| Standard PD read | Standard PD write | SSD PD (8 vCPUs) read | SSD PD (8 vCPUs) write | SSD PD (32+ vCPUs) read | SSD PD (32+ vCPUs) write | SSD PD (64+ vCPUs) read | SSD PD (64+ vCPUs) write |
|---|---|---|---|---|---|---|---|
| 7,500 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 | 0 |
| 5,625 | 3,750 | 11,250 | 3,750 | 45,000 | 15,000 | 75,000 | 25,000 |
| 3,750 | 7,500 | 7,500 | 7,500 | 30,000 | 30,000 | 50,000 | 50,000 |
| 1,875 | 11,250 | 3,750 | 11,250 | 15,000 | 45,000 | 25,000 | 75,000 |
| 0 | 15,000 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 |
Instance throughput limits (MB per second) for simultaneous reads and writes
* For SSD persistent disks, the max read throughput
and max write throughput are independent of each other, so these limits are
constant.
The IOPS numbers in this table are based on an 8 KB I/O size.
Other I/O sizes, such as 16 KB, might have different IOPS numbers
but maintain the same read/write distribution.
Logical volume size
Persistent disks can be up to 64 TB in size, and you can create single logical
volumes of up to 257 TB using logical volume management inside your VM.
A larger volume size impacts performance in the following ways:
Not all local file systems work well at this scale. Common operations, such
as mounting and file system checking, might take longer than expected.
Maximum persistent disk performance is achieved at smaller sizes. Disks take
longer to fully read or write with this much storage on one VM. If your
application supports it, consider using multiple VMs for greater total-system
throughput.
Snapshotting large persistent disk volumes might take longer than
expected to complete and, without careful coordination with your application,
might provide an inconsistent view of your logical volume.
Multiple disks attached to a single VM instance
Multiple disks of the same type
If you have multiple disks of the same type attached to a VM instance in
the same mode (for example, read/write), the disks share an aggregate
performance limit equal to the limit of a single disk with the combined size
of the disks. If you use only one disk, that single disk can reach the
performance limit corresponding to the combined size of the disks. If you use
all of the disks at 100%, the aggregate performance limit is split evenly
among the disks, regardless of relative disk size.
For example, suppose you have a 200 GB standard disk and a 1,000 GB
standard disk. If you do not use the 1,000 GB disk, then the 200 GB
disk can reach the performance limit of a 1,200 GB standard disk. If you
use both disks at 100%, then each will have the performance limit of a
600 GB standard persistent disk (1,200 GB / 2 disks = 600 GB
disk).
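Here is a minimal sketch of this sharing rule, with hypothetical names: an
idle disk's share is available to the active disk, and fully active disks
split the combined limit evenly.

```python
def per_active_disk_share_gb(sizes_gb: list[int], active: list[bool]) -> float:
    """Size-equivalent performance share for each active disk."""
    n_active = sum(active)
    return sum(sizes_gb) / n_active if n_active else 0.0

# 200 GB and 1,000 GB standard disks attached to the same VM:
print(per_active_disk_share_gb([200, 1_000], [True, False]))  # 1200.0
print(per_active_disk_share_gb([200, 1_000], [True, True]))   # 600.0 each
```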
Multiple disks of different types
If you have multiple disks of different types attached to a single VM, then
the total performance limit for the VM is determined by the SSD per-VM limit.
This total performance limit is shared between all disks attached to the VM.
For example, suppose you have one 5,000 GB standard disk and one 1,000 GB SSD
disk attached to an N2 VM with one vCPU. The read IOPS limit for the standard
disk is 3,000 and the read IOPS limit for the SSD disk is 15,000. Because the
limit of the SSD disk determines the overall limit, the total read IOPS limit
for your VM is 15,000. This limit is shared between all attached disks.
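Here is a simplified sketch of the mixed-type rule from this example,
treating the per-VM limit as the highest per-disk limit (the SSD limit); the
dictionary keys and function name are hypothetical.

```python
def vm_read_iops_limit(per_disk_read_limits: dict[str, int]) -> int:
    """Total VM read IOPS limit when attached disk types are mixed."""
    return max(per_disk_read_limits.values())  # the SSD limit dominates

limits = {"pd-standard-5000gb": 3_000, "pd-ssd-1000gb": 15_000}
print(vm_read_iops_limit(limits))  # 15000, shared between both disks
```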