After you provision your Google Cloud Hyperdisk volumes, your application and operating system might require tuning to meet your performance needs.
The following sections describe key elements that you can tune for better performance and how to apply some of them to specific types of workloads.
Use a high I/O queue depth
Because Hyperdisk volumes are network-attached devices, they have higher latency than locally attached disks such as Local SSD disks. They can provide very high IOPS and throughput, but only if enough I/O requests are issued in parallel. The number of I/O requests issued in parallel is referred to as the I/O queue depth.
The following tables show the recommended I/O queue depth needed to achieve a given performance level. The tables use a slight overestimate of typical latency to give conservative recommendations, and assume an I/O size of 16 KB.
| Desired IOPS | Queue depth |
|---|---|
| 500 | 1 |
| 1,000 | 2 |
| 2,000 | 4 |
| 4,000 | 8 |
| 8,000 | 16 |
| 16,000 | 32 |
| 32,000 | 64 |
| 64,000 | 128 |
| 100,000 | 200 |
| 200,000 | 400 |
| 320,000 | 640 |
| Desired throughput (MB/s) | Queue depth |
|---|---|
| 8 | 1 |
| 16 | 2 |
| 32 | 4 |
| 64 | 8 |
| 128 | 16 |
| 256 | 32 |
| 512 | 64 |
| 1,000 | 128 |
| 1,200 | 153 |
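The recommendations above follow from Little's Law: queue depth ≈ IOPS × per-request latency. As a rough sketch, assuming an illustrative round-trip latency of 2 ms and the 16 KB I/O size used in the tables (both assumptions, not guaranteed Hyperdisk figures), you can estimate the queue depth needed for a target performance level:

```python
# Sketch: estimate the I/O queue depth needed for a target performance
# level, using Little's Law (queue depth = IOPS x latency). The 2 ms
# latency and 16 KB I/O size are illustrative assumptions.
import math

ASSUMED_LATENCY_S = 0.002   # assumed per-request latency (2 ms)
IO_SIZE_BYTES = 16_000      # 16 KB I/O size, as in the tables above

def queue_depth_for_iops(target_iops: float) -> int:
    """Queue depth needed to sustain target_iops at the assumed latency."""
    return math.ceil(target_iops * ASSUMED_LATENCY_S)

def queue_depth_for_throughput(target_mb_per_s: float) -> int:
    """Queue depth for a throughput target, converted to IOPS first."""
    iops = target_mb_per_s * 1_000_000 / IO_SIZE_BYTES
    return queue_depth_for_iops(iops)

print(queue_depth_for_iops(32_000))     # 64, matching the IOPS table
print(queue_depth_for_throughput(256))  # 32, matching the throughput table
```

Because the assumed latency is a slight overestimate, the computed depths err on the side of more parallelism, which is the safer direction for sustaining a performance target.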
Ensure you have free CPUs
Reading from and writing to Hyperdisk volumes requires CPU cycles from your VM. If your VM instance is starved for CPU, your application can't drive the IOPS levels described earlier. To achieve very high, consistent IOPS levels, you must have CPU capacity free to process I/O.
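As a quick check of whether the VM has CPU headroom while your workload runs, you can sample aggregate CPU idle time from `/proc/stat` on Linux. This is a minimal sketch, not a Google Cloud tool; for ongoing monitoring, use the CPU metrics in Cloud Monitoring instead.

```python
# Sketch: sample /proc/stat twice and report the percentage of CPU time
# spent idle between the two samples (Linux only). Consistently low idle
# time while your workload runs suggests the VM is CPU-bound and may not
# sustain high IOPS.
import time

def read_cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]   # idle + iowait: CPU not doing work
    return idle, sum(fields)

def idle_percent(interval_s: float = 1.0) -> float:
    """Percentage of CPU time idle over the sampling interval."""
    idle1, total1 = read_cpu_times()
    time.sleep(interval_s)
    idle2, total2 = read_cpu_times()
    return 100.0 * (idle2 - idle1) / max(total2 - total1, 1)

if __name__ == "__main__":
    print(f"CPU idle: {idle_percent():.1f}%")
```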
Review Hyperdisk performance metrics
You can review disk performance metrics in Cloud Monitoring, Google Cloud's integrated monitoring solution. You can use these metrics to observe the performance of your disks and other VM resources under different application workloads.
To learn more, see Reviewing disk performance metrics.
You can also view disk performance metrics on the Observability page for the VM in the Google Cloud console.
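As an illustration, the following Monitoring Query Language (MQL) query sketches one way to chart a disk's read operation rate; the metric name is the standard Compute Engine disk metric, but treat the query shape as an example to adapt to your own instances and metrics:

```
fetch gce_instance
| metric 'compute.googleapis.com/instance/disk/read_ops_count'
| align rate(1m)
| every 1m
```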