Optimizing persistent disk and local SSD performance

Persistent disks are the most common storage option due to their price, performance, and durability. You can also choose local SSDs, which provide even higher performance and lower latency but are nonredundant and exist only for the lifetime of a specific instance. When you configure a storage option for apps that run on your instances, use the following process:

  • Determine how much space you need.
  • Determine what performance characteristics your apps require.
  • Configure your instances to optimize storage performance.

The following sections describe the available block storage options that you can attach to your Compute Engine instances. For a complete list of storage options on Google Cloud Platform, see Cloud Storage products.

Block storage performance comparison

To help you determine the correct disk type and size for your instances, consider your storage size and performance requirements. Performance requirements for a given app are typically separated into two distinct I/O patterns:

  • Small reads and writes
  • Large reads and writes

For small reads and writes, the limiting factor is random input/output operations per second (IOPS).

For large reads and writes, the limiting factor is throughput.

The IOPS per GB and throughput numbers represent the total aggregate performance for data on a single disk, whether attached to a single instance or shared across multiple instances. For multiple instances that are reading from the same disk, the aggregate throughput and IOPS capacity of the disk is shared among the instances. For planning purposes, we recommend that you use the following IOPS per GB and throughput rates:

|  | Zonal standard persistent disks | Regional standard persistent disks | Zonal SSD persistent disks | Regional SSD persistent disks | Local SSD (SCSI) | Local SSD (NVMe) |
|---|---|---|---|---|---|---|
| Maximum sustained IOPS |  |  |  |  |  |  |
| Read IOPS per GB | 0.75 | 0.75 | 30 | 30 | 266.7 | 453.3 |
| Write IOPS per GB | 1.5 | 1.5 | 30 | 30 | 186.7 | 240 |
| Read IOPS per instance | 3,000* | 3,000* | 15,000–60,000* | 15,000–60,000* | 400,000 | 680,000 |
| Write IOPS per instance | 15,000* | 15,000* | 15,000–30,000* | 15,000–30,000* | 280,000 | 360,000 |
| Maximum sustained throughput (MB/s) |  |  |  |  |  |  |
| Read throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | 1.04 | 1.77 |
| Write throughput per GB | 0.12 | 0.12 | 0.48 | 0.48 | 0.73 | 0.94 |
| Read throughput per instance | 240* | 240* | 240–1,200* | 240–1,200* | 1,560 | 2,650 |
| Write throughput per instance | 76–240** | 38–200** | 76–400* | 38–200* | 1,090 | 1,400 |

* Persistent disk IOPS and throughput performance depends on the number of instance vCPUs and the I/O block size. Read SSD persistent disk performance limits for details on SSD persistent disks, and Standard persistent disk performance limits for details on standard persistent disks.

** SSD and standard persistent disks can achieve greater throughput performance on instances with greater numbers of vCPUs. Read Network egress caps on write throughput for details.

Comparing persistent disk to a physical hard drive

When you specify the size of your persistent disks, consider how these disks compare to traditional physical hard drives. The following table compares standard persistent disks and SSD persistent disks to the typical performance that you would expect from a 7200 RPM SATA drive, which typically achieves 75 IOPS or 120 MB/s.

| I/O type | I/O pattern | Size to match a 7200 RPM SATA drive (standard persistent disk) | Size to match a 7200 RPM SATA drive (SSD persistent disk) |
|---|---|---|---|
| Small random reads | 75 small random reads | 100 GB | 3 GB |
| Small random writes | 75 small random writes | 50 GB | 3 GB |
| Streaming large reads | 120 MB/s streaming reads | 1,000 GB | 250 GB |
| Streaming large writes | 120 MB/s streaming writes | 1,000 GB | 250 GB |

Size, price, and performance summary

While you have several inputs to consider when you select a volume type and size for your app, one factor you do not need to consider is the per-I/O cost of using your volume. Persistent disk has no per-I/O costs, so there is no need to estimate monthly I/O to budget for disk spending. However, for IOPS-oriented workloads, you can break down the monthly cost into a price per IOPS for comparison purposes.

The following pricing calculation examples use U.S. persistent-disk pricing. In these examples, consider the relative costs of standard persistent disks compared to SSD persistent disks. Standard persistent disks are priced at $0.040 per GB, and SSD persistent disks are priced at $0.170 per GB. When you increase the size of a volume, you also increase the performance caps automatically, at no additional cost.

To determine the cost per IOPS of a persistent disk, divide the price per GB per month by the number of IOPS per GB. The following table calculates the price per random read IOPS per GB. You can use the same calculation to determine the price per write IOPS.

| Disk type | Price per GB / month | Read IOPS per GB | Price per IOPS per GB |
|---|---|---|---|
| Standard persistent disk | $0.040 | 0.75 | $0.040 / 0.75 = $0.0533 |
| SSD persistent disk | $0.170 | 30 | $0.170 / 30 = $0.0057 |
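The same arithmetic can be expressed as a quick shell check. This is only a sketch using the example US prices above, which change over time:

    # Price per random read IOPS, using the example prices above (placeholders).
    echo "scale=4; 0.040 / 0.75" | bc   # standard persistent disk: ~$0.0533
    echo "scale=4; 0.170 / 30" | bc     # SSD persistent disk: ~$0.0057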

SSD persistent disks reach their limit of 60,000 random read IOPS at 2,000 GB and 30,000 random write IOPS at 1,000 GB. In contrast, standard persistent disks reach their 3,000 random read IOPS limits at 4 TB and 15,000 random write IOPS at 10 TB.

SSD persistent disks are designed for single-digit millisecond latencies. The observed latency is app-specific.

Standard persistent disk

Standard persistent disk performance scales linearly up to the VM performance limits. A vCPU count of 4 or more for your instance does not limit the performance of standard persistent disks.

A vCPU count of less than 4 for your instance reduces the write limit for IOPS because network egress limits are proportional to the vCPU count. The write limit also depends on the size of I/Os (16 KB I/Os consume more bandwidth than 8 KB I/Os at the same IOPS level).

Standard persistent disk IOPS and throughput performance increases linearly with the size of the disk until it reaches the following per-instance limits:

  • Read throughput: Up to 240 MB/s at a 2 TB disk size.
  • Write throughput: Up to 240 MB/s at a 2 TB disk size.
  • Read IOPS: Up to 3,000 IOPS at a 4 TB disk size.
  • Write IOPS: Up to 15,000 IOPS at a 10 TB disk size.

To gain persistent disk performance benefits on your existing instances, resize your persistent disks to increase IOPS and throughput per persistent disk.
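As a rough sketch, the scaling described above can be estimated in shell form. The per-GB rates (0.75 read IOPS, 1.5 write IOPS, and 0.12 MB/s per GB) and the per-instance caps are the planning numbers quoted earlier, and the disk size is a placeholder:

    # Estimate standard persistent disk limits for a given volume size.
    SIZE_GB=1000
    awk -v gb="$SIZE_GB" 'BEGIN {
      read_iops  = gb * 0.75; if (read_iops  > 3000)  read_iops  = 3000
      write_iops = gb * 1.5;  if (write_iops > 15000) write_iops = 15000
      tput_mbps  = gb * 0.12; if (tput_mbps  > 240)   tput_mbps  = 240
      printf "read IOPS: %d, write IOPS: %d, throughput: %d MB/s\n", read_iops, write_iops, tput_mbps
    }'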

| Volume size (GB) | Sustained random read IOPS (<=16 KB/IO) | Sustained random write IOPS (<=8 KB/IO) | Sustained random write IOPS (16 KB/IO) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 10 | * | * | * | * | * |
| 32 | 24 | 48 | 48 | 3 | 3 |
| 64 | 48 | 96 | 96 | 7 | 7 |
| 128 | 96 | 192 | 192 | 15 | 15 |
| 256 | 192 | 384 | 384 | 30 | 30 |
| 512 | 384 | 768 | 768 | 61 | 61 |
| 1,000 | 750 | 1,500 | 1,500 | 120 | 120 |
| 1,500 | 1,125 | 2,250 | 2,250 | 180 | 180 |
| 2,048 | 1,536 | 3,072 | 3,072 | 240 | 240 |
| 4,000 | 3,000 | 6,000 | 6,000 | 240 | 240 |
| 5,000 | 3,000 | 7,500 | 7,500 | 240 | 240 |
| 8,192 | 3,000 | 12,288 | 7,500 | 240 | 240 |
| 10,000 | 3,000 | 15,000 | 7,500 | 240 | 240 |
| 16,384 | 3,000 | 15,000 | 7,500 | 240 | 240 |
| 32,768 | 3,000 | 15,000 | 7,500 | 240 | 240 |
| 65,536 | 3,000 | 15,000 | 7,500 | 240 | 240 |

* Use this volume size only for boot volumes. I/O bursting provides higher performance for boot volumes than the linear scaling described here.

SSD persistent disk

IOPS performance of SSD persistent disks depends on the number of vCPUs in the instance in addition to disk size.

Lower core VMs have lower write IOPS and throughput limits due to network egress limitations on write throughput. For more information, see Network egress caps on write throughput.

SSD persistent disk performance scales linearly until it reaches either the limits of the volume or the limits of each Compute Engine instance. SSD read bandwidth and IOPS consistency near the maximum limits largely depend on network ingress utilization; some variability is to be expected, especially for 16 KB I/Os near the maximum IOPS limits.

| Instance vCPU count | Sustained random read IOPS (<=16 KB/IO) | Sustained random write IOPS (<=8 KB/IO) | Sustained random write IOPS (16 KB/IO) | Sustained read throughput (MB/s)* | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 1 vCPU | 15,000 | 9,000 | 4,500 | 240 | 72 |
| 2 to 3 vCPUs | 15,000 | 15,000 | 4,500/vCPU | 240 | 72/vCPU |
| 4 to 7 vCPUs | 15,000 | 15,000 | 15,000 | 240 | 240 |
| 8 to 15 vCPUs | 15,000 | 15,000 | 15,000 | 800 | 400 |
| 16 to 31 vCPUs | 25,000 | 25,000 | 25,000 | 1,200 | 400 |
| 32 to 63 vCPUs | 60,000 | 30,000 | 25,000 | 1,200 | 400 |
| 64+ vCPUs** | 60,000 | 30,000 | 25,000 | 1,200 | 400 |

* Maximum throughput based on I/O block sizes of 256 KB or larger.

** Maximum performance might not be achievable at full CPU utilization.

To improve SSD persistent disk performance on your existing instances, change the machine type of the instance to increase the per-VM limits, and resize your persistent disks to increase IOPS and throughput per persistent disk.
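For example, both adjustments can be made with gcloud. This is a sketch with placeholder resource names and zone; disks can only be resized upward, and the instance must be stopped before its machine type changes:

    # Grow the SSD persistent disk to raise its per-disk IOPS and throughput caps.
    gcloud compute disks resize example-disk --size=1000GB --zone=us-central1-a

    # Switch to a machine type with more vCPUs to raise the per-VM limits.
    gcloud compute instances stop example-instance --zone=us-central1-a
    gcloud compute instances set-machine-type example-instance \
        --machine-type=n1-standard-16 --zone=us-central1-a
    gcloud compute instances start example-instance --zone=us-central1-a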

| Volume size (GB) | Sustained random read IOPS (<=16 KB/IO) | Sustained random write IOPS (<=8 KB/IO) | Sustained random write IOPS (16 KB/IO) | Sustained read throughput (MB/s) | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 10 | 300 | 300 | 300 | 4.8 | 4.8 |
| 32 | 960 | 960 | 960 | 15 | 15 |
| 64 | 1,920 | 1,920 | 1,920 | 30 | 30 |
| 128 | 3,840 | 3,840 | 3,840 | 61 | 61 |
| 256 | 7,680 | 7,680 | 7,680 | 122 | 122 |
| 500 | 15,000 | 15,000 | 15,000 | 240 | 240 |
| 834 | 25,000 | 25,000 | 25,000 | 400 | 400 |
| 1,000 | 30,000 | 30,000 | 25,000 | 480 | 400 |
| 1,334 | 40,000 | 30,000 | 25,000 | 640 | 400 |
| 1,667 | 50,000 | 30,000 | 25,000 | 800 | 400 |
| 2,048 | 60,000 | 30,000 | 25,000 | 983 | 400 |
| 4,096 | 60,000 | 30,000 | 25,000 | 1,200 | 400 |
| 8,192 | 60,000 | 30,000 | 25,000 | 1,200 | 400 |
| 16,384 | 60,000 | 30,000 | 25,000 | 1,200 | 400 |
| 32,768 | 60,000 | 30,000 | 25,000 | 1,200 | 400 |
| 65,536 | 60,000 | 30,000 | 25,000 | 1,200 | 400 |

C2 disk limits

Compute-optimized machine types are subject to specific persistent disk limits per vCPU that differ from the limits for other machine types. The following tables show these limits.

Note that the performance by volume remains the same as that described in the Standard disk performance and SSD disk performance sections.

Standard persistent disk

| Instance vCPU count | Sustained random read IOPS (<=16 KB/IO) | Sustained random write IOPS (<=8 KB/IO) | Sustained random write IOPS (16 KB/IO) | Sustained read throughput (MB/s)* | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 4 vCPUs | 3,000 | 4,000 | 4,000 | 240 | 240 |
| 8 vCPUs | 3,000 | 4,000 | 4,000 | 240 | 240 |
| 16 vCPUs | 3,000 | 4,000 | 4,000 | 240 | 240 |
| 30 vCPUs | 3,000 | 8,000 | 8,000 | 240 | 240 |
| 60 vCPUs | 3,000 | 15,000 | 15,000 | 240 | 240 |

SSD persistent disk

| Instance vCPU count | Sustained random read IOPS (<=16 KB/IO) | Sustained random write IOPS (<=8 KB/IO) | Sustained random write IOPS (16 KB/IO) | Sustained read throughput (MB/s)* | Sustained write throughput (MB/s) |
|---|---|---|---|---|---|
| 4 vCPUs | 4,000 | 4,000 | 4,000 | 240 | 240 |
| 8 vCPUs | 4,000 | 4,000 | 4,000 | 240 | 240 |
| 16 vCPUs | 8,000 | 4,000 | 4,000 | 320 | 240 |
| 30 vCPUs | 15,000 | 8,000 | 8,000 | 600 | 240 |
| 60 vCPUs | 30,000 | 15,000 | 15,000 | 1,200 | 400 |

Simultaneous reads and writes

For standard persistent disks, simultaneous reads and writes share the same resources. As your instance uses more read throughput or IOPS, it can perform fewer writes. Conversely, as it uses more write throughput or IOPS, it can perform fewer reads.

SSD persistent disks can reach their maximum throughput limits for reads and writes simultaneously. However, the same is not true for IOPS: SSD persistent disks cannot reach their maximum read and write IOPS limits simultaneously. To achieve maximum throughput limits for simultaneous reads and writes, optimize the I/O size so that the volume can reach its throughput limits without hitting an IOPS bottleneck.

Instance IOPS limits for simultaneous reads and writes:

The IOPS numbers in the following table are based on an 8-KB I/O size. Other I/O sizes, such as 16 KB, might have different IOPS numbers but maintain the same read/write distribution.

| Standard persistent disk (read) | Standard persistent disk (write) | SSD persistent disk, 8 vCPUs (read) | SSD persistent disk, 8 vCPUs (write) | SSD persistent disk, 32+ vCPUs (read) | SSD persistent disk, 32+ vCPUs (write) |
|---|---|---|---|---|---|
| 3,000 IOPS | 0 IOPS | 15,000 IOPS | 0 IOPS | 60,000 IOPS | 0 IOPS |
| 2,250 IOPS | 3,750 IOPS | 11,250 IOPS | 3,750 IOPS | 45,000 IOPS | 7,500 IOPS |
| 1,500 IOPS | 7,500 IOPS | 7,500 IOPS | 7,500 IOPS | 30,000 IOPS | 15,000 IOPS |
| 750 IOPS | 11,250 IOPS | 3,750 IOPS | 11,250 IOPS | 15,000 IOPS | 22,500 IOPS |
| 0 IOPS | 15,000 IOPS | 0 IOPS | 15,000 IOPS | 0 IOPS | 30,000 IOPS |

Instance throughput limits for simultaneous reads and writes:

| Standard persistent disk (read) | Standard persistent disk (write) | SSD persistent disk, 8 vCPUs (read) | SSD persistent disk, 8 vCPUs (write) | SSD persistent disk, 16+ vCPUs (read) | SSD persistent disk, 16+ vCPUs (write) |
|---|---|---|---|---|---|
| 240 MB/s | 0 MB/s | 800 MB/s* | 400 MB/s* | 1,200 MB/s* | 400 MB/s* |
| 180 MB/s | 60 MB/s |  |  |  |  |
| 120 MB/s | 120 MB/s |  |  |  |  |
| 60 MB/s | 180 MB/s |  |  |  |  |
| 0 MB/s | 240 MB/s |  |  |  |  |

* For SSD persistent disks, the max read throughput and max write throughput are independent of each other, so these limits are constant. You might notice increased SSD persistent disk write throughput per instance over the published limits due to ongoing improvements.
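To observe this sharing on a test volume, you can run a mixed fio workload. The following sketch assumes a throwaway secondary device at /dev/sdb (as in the benchmarking section later in this document), uses an 8 KB I/O size, and picks an illustrative 75/25 read/write split:

    # Mixed random read/write test. Running this command causes data loss on
    # the second device. We strongly recommend using a throwaway VM and disk.
    sudo fio --name=mixed_iops_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=8K --iodepth=64 --rw=randrw --rwmixread=75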

Network egress caps on write throughput

Each persistent disk write operation contributes to your virtual machine (VM) instance's cumulative network egress cap.

To calculate the maximum persistent disk write traffic that a VM instance can issue, subtract an instance's other network egress traffic from its 2 Gbit/s/vCPU network cap. The remaining throughput represents the throughput available to you for persistent disk write traffic.

Compute Engine persistent disks offer built-in redundancy. Instances write data to persistent disk three times in parallel to achieve this redundancy. Additionally, each write request has a certain amount of overhead, which uses egress bandwidth.

Each instance has a persistent disk write limit based on the network egress cap for the VM. In a situation where persistent disk is competing with IP traffic for network egress, 60% of the network egress cap goes to persistent disk traffic, leaving 40% for IP traffic. The following table shows the expected persistent disk write bandwidth with and without additional IP traffic:

Standard persistent disk Solid-state persistent disk
Number of vCPUs Standard persistent disk write limit (MB/s) Standard persistent disk write allocation (MB/s) Standard volume size needed to reach limit (GB) SSD persistent disk write limit (MB/s) SSD persistent disk write allocation (MB/s) SSD persistent disk size needed to reach limit (GB)
1 72 43 600 72 43 150
2 144 86 1,200 144 86 300
4 240 173 2,000 240 173 500
8+ 240 240 2,000 400 346 834

To understand how the values in this table were calculated, take, for example, 1 vCPU and standard persistent disk. In this example, we approximate that the bandwidth multiplier for every write request is 3.3x, which means that data is written out 3 times and has a total overhead of 10%. To calculate the egress cap, divide the network egress cap—2 Gbit/s, which is equivalent to 238 MB/s—by 3.3:

Max write bandwidth for 1 vCPU = 238 / 3.3 = ~72 MB/s to your standard persistent disk

Using the standard persistent disk write throughput per GB figure provided in the performance chart presented earlier, you can also derive the required disk capacity to achieve this performance:

Required disk capacity to achieve max write bandwidth for 1 vCPU = 72 / 0.12 = ~600 GB

Similar to zonal persistent disks, write traffic from regional persistent disks contributes to a VM instance's cumulative network egress cap. To calculate the available network egress for regional persistent disks, use the factor of 6.6.

Max write bandwidth for 1 vCPU = 238 / 6.6 = ~36 MB/s to your standard replicated persistent disk.
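A small shell sketch of the same calculation. The 238 MB/s-per-vCPU egress figure, the 3.3 (zonal) and 6.6 (regional) multipliers, and the 0.12 MB/s-per-GB standard persistent disk rate are the planning numbers used above; the per-instance throughput caps still apply on top of the result:

    # Estimate the standard PD write bandwidth available to an instance and the
    # disk size needed to reach it. Multiplier: 3.3 for zonal, 6.6 for regional.
    VCPUS=1
    MULTIPLIER=3.3
    awk -v v="$VCPUS" -v m="$MULTIPLIER" 'BEGIN {
      egress_mbps = v * 238          # 2 Gbit/s per vCPU, expressed in MB/s
      write_mbps  = egress_mbps / m  # ~72 MB/s for 1 vCPU on zonal standard PD
      disk_gb     = write_mbps / 0.12
      printf "write bandwidth: ~%d MB/s, disk size to reach it: ~%d GB\n", write_mbps, disk_gb
    }'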

Optimizing persistent disk and local SSD performance

You can optimize persistent disks and local SSDs to handle your data more efficiently.

Optimizing persistent disks

Persistent disks deliver the performance described in the disk type chart only if the VM drives enough I/O to reach the performance caps. After you size your persistent disk volumes to meet your performance needs, your app and operating system might need some tuning.

In the following sections, we describe a few key elements that can be tuned for better performance and how to apply some of them to specific types of workloads.

Disable lazy initialization and enable DISCARD commands

Persistent disks support DISCARD or TRIM commands, which allow operating systems to inform the disks when blocks are no longer in use. DISCARD support allows the OS to mark disk blocks as no longer needed, without incurring the cost of zeroing out the blocks.

On most Linux operating systems, you enable DISCARD when you mount a persistent disk to your instance. Windows 2012 R2 instances enable DISCARD by default when you mount a persistent disk. Windows 2008 R2 does not support DISCARD.

Enabling DISCARD can boost general runtime performance, and it can also speed up the performance of your disk when it is first mounted. Formatting an entire disk volume can be time consuming, so "lazy formatting" is a common practice. The downside of lazy formatting is that the cost is often paid the first time the volume is mounted. By disabling lazy initialization and enabling DISCARD commands, you get fast formatting and mounting.

  • Disable lazy initialization and enable DISCARD during format by passing the following parameters to mkfs.ext4:

    -E lazy_itable_init=0,lazy_journal_init=0,discard
    

    The lazy_journal_init=0 parameter does not work on instances with CentOS 6 or RHEL 6 images. For those instances, format persistent disks without that parameter.

    -E lazy_itable_init=0,discard
    
  • Enable DISCARD commands on mount by passing the following flag to the mount command:

    -o discard
    

Persistent disks work well with the discard option enabled. However, you can optionally run fstrim periodically in addition to, or instead of using the discard option. If you do not use the discard option, run fstrim before you create a snapshot of your disk. Trimming the file system lets you create smaller snapshot images, which reduces the cost of storing snapshots.
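Putting these options together, a typical format-and-mount sequence might look like the following sketch; the device ID and mount directory are placeholders:

    # Format with lazy initialization disabled and DISCARD enabled, then mount
    # with the discard option. The device and directory names are examples.
    sudo mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/[DEVICE_ID]
    sudo mkdir -p /mnt/disks/[MNT_DIR]
    sudo mount -o discard,defaults /dev/[DEVICE_ID] /mnt/disks/[MNT_DIR]

    # If you do not mount with discard, trim the file system periodically and
    # before creating snapshots.
    sudo fstrim /mnt/disks/[MNT_DIR]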

I/O queue depth

Many apps have settings that affect their I/O queue depth. Higher queue depths increase IOPS but can also increase latency. Lower queue depths decrease per-I/O latency, but might result in lower maximum IOPS.
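You can see this tradeoff on your own volume by comparing two fio runs that differ only in queue depth. This sketch reads from a test file rather than a raw device; the path, file size, and queue depths are placeholders:

    # Queue depth 1: lowest per-I/O latency, fewer total IOPS.
    sudo fio --name=qd1_test --filename=/mnt/disks/[MNT_DIR]/fio-test --size=10G \
      --time_based --runtime=30s --ioengine=libaio --direct=1 \
      --bs=4K --iodepth=1 --rw=randread

    # Queue depth 64: higher IOPS at the cost of higher per-I/O latency.
    sudo fio --name=qd64_test --filename=/mnt/disks/[MNT_DIR]/fio-test --size=10G \
      --time_based --runtime=30s --ioengine=libaio --direct=1 \
      --bs=4K --iodepth=64 --rw=randread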

Readahead cache

To improve I/O performance, operating systems employ techniques such as readahead, where more of a file than was requested is read into memory with the assumption that subsequent reads are likely to need that data. Higher readahead increases throughput at the expense of memory and IOPS. Lower readahead increases IOPS at the expense of throughput.

On Linux systems, you can get and set the readahead value with the blockdev command:

$ sudo blockdev --getra /dev/[DEVICE_ID]
$ sudo blockdev --setra [VALUE] /dev/[DEVICE_ID]

The readahead value is <desired_readahead_bytes> / 512 bytes.

For example, for an 8-MB readahead, 8 MB is 8388608 bytes (8 * 1024 * 1024).

8388608 bytes / 512 bytes = 16384

So you set the readahead value to 16384:

$ sudo blockdev --setra 16384 /dev/[DEVICE_ID]

Free CPUs

Reading and writing to persistent disk requires CPU cycles from your VM. To achieve very high, consistent IOPS levels, you must have CPUs free to process I/O.

IOPS-oriented workloads

Databases, whether SQL or NoSQL, have usage patterns of random access to data. Google recommends the following values for IOPS-oriented workloads:

  • I/O queue depth values of 1 per each 400–800 IOPS, up to a limit of 64 on large volumes

  • One free CPU for every 2,000 random read IOPS and one free CPU for every 2,500 random write IOPS

Lower readahead values are typically suggested in best practices documents for MongoDB, Apache Cassandra, and other database applications.
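As a rough sizing sketch, applying the guidance above to a hypothetical target of 30,000 random read IOPS:

    # Hypothetical target of 30,000 random read IOPS.
    awk 'BEGIN {
      target = 30000
      qd     = target / 800    # 1 per every 400-800 IOPS; 800 used here -> ~38
      if (qd > 64) qd = 64     # cap at 64 on large volumes
      cpus   = target / 2000   # one free CPU per 2,000 random read IOPS -> 15
      printf "queue depth: ~%d, free CPUs: ~%d\n", qd, cpus
    }'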

Throughput-oriented workloads

Streaming operations, such as a Hadoop job, benefit from fast sequential reads, and larger I/O sizes can increase streaming performance. For throughput-oriented workloads, we recommend I/O sizes of 256 KB or greater.

Optimizing SSD persistent disk performance

The performance by disk type chart describes the expected, maximum achievable performance for solid-state persistent disks. To optimize your apps and VM instances to achieve these speeds, use the following best practices:

  • Make sure your app is generating enough I/O

    If your app is generating fewer IOPS than the limit described in the earlier chart, you won't reach that level of IOPS. For example, on a 500-GB disk, the expected IOPS limit is 15,000 IOPS. However, if you generate fewer IOPS than that or the I/O operations are larger than 8 KB, you won't achieve 15,000 IOPS.

  • Make sure to provide I/O with enough parallelism

    Use a high-enough queue depth that you're leveraging the parallelism of the OS. If you provide 1,000 IOPS but do so in a synchronous manner with a queue depth of 1, you will achieve far fewer IOPS than the limit described in the chart. At a minimum, your app should have a queue depth of at least 1 per every 400–800 IOPS.

  • Make sure there is enough available CPU on the instance that is generating the I/O

    If your VM instance is starved for CPU, your app won't be able to manage the IOPS described earlier. We recommend that you have one available CPU for every 2,000–2,500 IOPS of expected traffic.

  • Make sure your app is optimized for a reasonable temporal data locality on large disks

    If your app accesses data that is distributed across different parts of a disk over a short period of time (hundreds of GB per vCPU), you won't achieve optimal IOPS. For best performance, optimize for temporal data locality, weighing factors like the fragmentation of the disk and the randomness of accessed parts of the disk.

  • Make sure the I/O scheduler in the OS is configured to meet your specific needs

    On Linux-based systems, you can set the I/O scheduler to noop to achieve the highest number of IOPS on SSD-backed devices.
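    A sketch for checking and changing the scheduler follows; the device name is a placeholder, and on newer kernels that use the multi-queue block layer the equivalent low-overhead choice is none rather than noop:

    # Show the available schedulers; the active one appears in brackets.
    cat /sys/block/sdb/queue/scheduler

    # Switch to noop (or none on multi-queue kernels) for the current boot only.
    echo noop | sudo tee /sys/block/sdb/queue/scheduler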

Benchmarking SSD persistent disk performance

The following commands assume a 2,500 GB PD-SSD device. If your device size is different, modify the value of the --filesize argument. This disk size is necessary to achieve the 32 vCPU VM throughput limits.
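If you need a scratch environment for these tests, a sketch like the following creates a 32-vCPU VM with an attached 2,500 GB SSD persistent disk; the instance name, disk name, and zone are placeholders:

    # Throwaway benchmarking VM with a 2,500 GB SSD persistent disk attached.
    gcloud compute instances create pd-ssd-benchmark \
      --machine-type=n1-standard-32 --zone=us-central1-a \
      --create-disk=name=pd-ssd-benchmark-disk,type=pd-ssd,size=2500GB,auto-delete=yes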

First, install fio, the benchmarking tool used in the following steps:

    # Install dependencies
    sudo apt-get update
    sudo apt-get install -y fio
  1. Fill the disk with nonzero data. Persistent disk reads from empty blocks have a latency profile that is different from blocks that contain data. We recommend filling the disk before running any read latency benchmarks.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=fill_disk \
      --filename=/dev/sdb --filesize=2500G \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=128K --iodepth=64 --rw=randwrite
    
  2. Test write bandwidth by performing sequential writes with multiple parallel streams (8+), using 1 MB as the I/O size and having an I/O depth that is greater than or equal to 64.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=write_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=1M --iodepth=64 --rw=write --numjobs=8
    
  3. Test write IOPS. To achieve maximum PD IOPS, you must maintain a deep I/O queue. If, for example, the write latency is 1 millisecond, the VM can achieve, at most, 1,000 IOPS for each I/O in flight. To achieve 15,000 IOPS, the VM must maintain at least 15 I/Os in flight. If your disk and VM are able to achieve 30,000 IOPS, the number of I/Os in flight must be at least 30 I/Os. If the I/O size is larger than 4 KB, the VM might reach the bandwidth limit before it reaches the IOPS limit.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=write_iops_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=64 --rw=randwrite
    
  4. Test write latency. While testing I/O latency, the VM must not reach maximum bandwidth or IOPS; otherwise, the observed latency won't reflect actual persistent disk I/O latency. For example, if the IOPS limit is reached at an I/O depth of 30 and the fio command has double that, then the total IOPS remains the same and the reported I/O latency doubles.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=write_latency_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=4 --rw=randwrite
    
  5. Test read bandwidth by performing sequential reads with multiple parallel streams (8+), using 1 MB as the I/O size and having an I/O depth that is equal to 64 or greater.

    sudo fio --name=read_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=1M --iodepth=64 --rw=read --numjobs=8
    
  6. Test read IOPS. To achieve the maximum PD IOPS, you must maintain a deep I/O queue. If, for example, the I/O size is larger than 4 KB, the VM might reach the bandwidth limit before it reaches the IOPS limit.

    sudo fio --name=read_iops_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=64 --rw=randread
    
  7. Test read latency. It's important to fill the disk with data to get a realistic latency measurement. It's important that the VM not reach IOPS or throughput limits during this test because after the persistent disk reaches its saturation limit, it pushes back on incoming I/Os and this is reflected as an artificial increase in I/O latency.

    sudo fio --name=read_latency_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=4 --rw=randread
    
  8. Test sequential read bandwidth.

    sudo fio --name=read_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --numjobs=4 --thread --offset_increment=500G \
      --bs=1M --iodepth=64 --rw=read
    
  9. Test sequential write bandwidth.

    sudo fio --name=write_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=1m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --numjobs=4 --thread --offset_increment=500G \
      --bs=1M --iodepth=64 --rw=write
    

Optimizing local SSDs

The performance by disk type chart describes the maximum achievable performance for local SSD devices. To optimize your apps and VM instances to achieve these speeds, use the following best practices:

Using guest environment optimizations for local SSDs

By default, most Compute Engine-provided Linux images automatically run an optimization script that configures the instance for peak local SSD performance. The script enables certain queue sysfs settings that enhance the overall performance of your machine and mask interrupt requests (IRQs) to specific virtual CPUs (vCPUs). This script only optimizes performance for Compute Engine local SSD devices.

Ubuntu, SLES, and other older images might not be configured to include this performance optimization. If you are using any of these images or an image that is earlier than v20141218, you can install the guest environment to enable these optimizations.

Select the best image for NVMe or SCSI interfaces

Local SSDs can expose either an NVMe or SCSI interface, and the best choice depends on the operating system you are using. Choose an interface for your local SSD devices that works best with your boot disk image. If your instances connect to local SSDs using SCSI interfaces, you can enable multi-queue SCSI on the guest OS to achieve optimal performance over the SCSI interface.

Enable multi-queue SCSI on instances with custom images and local SSDs

Some public images support multi-queue SCSI. If you require multi-queue SCSI capability on custom images that you import to your project, you must enable it yourself. Your imported Linux images can use multi-queue SCSI only if they include kernel version 3.19 or later.
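For example, the feature can be attached when you create the image with gcloud, as in the following sketch; the image name and source URI are placeholders, and the guestOsFeatures field in the API request accomplishes the same thing:

    # Create a custom image with the multi-queue SCSI guest OS feature enabled.
    gcloud compute images create my-custom-image \
      --source-uri=gs://my-bucket/my-image.tar.gz \
      --guest-os-features=VIRTIO_SCSI_MULTIQUEUE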

To enable multi-queue SCSI on a custom image, import the image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature enabled and add an entry to your GRUB config:

CentOS

For CentOS 7 only.

  1. Import your custom image using the API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE.

  2. Create an instance using your custom image and attach one or more local SSDs.

  3. Connect to your instance through SSH.

  4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file.

    $ cat /sys/module/scsi_mod/parameters/use_blk_mq
    

    If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system.

    1. Open the /etc/default/grub GRUB config file in a text editor.

      $ sudo vi /etc/default/grub
      
    2. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry.

      GRUB_CMDLINE_LINUX=" vconsole.keymap=us console=ttyS0,38400n8 vconsole.font=latarcyrheb-sun16 scsi_mod.use_blk_mq=Y"
      
    3. Save the config file.

    4. Run the grub2-mkconfig command to regenerate the GRUB file and complete the configuration.

      $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
      
    5. Reboot the instance.

      $ sudo reboot
      

Ubuntu

  1. Import your custom image using the API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE.

  2. Create an instance using your custom image and attach one or more local SSDs using the SCSI interface.

  3. Connect to your instance through SSH.

  4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file.

    $ cat /sys/module/scsi_mod/parameters/use_blk_mq
    

    If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system.

    1. Open the /etc/default/grub GRUB config file in a text editor.

      $ sudo nano /etc/default/grub
      
    2. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry.

      GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=Y"
      
    3. Save the config file.

    4. Run the update-grub command to regenerate the GRUB file and complete the configuration.

      $ sudo update-grub
      
    5. Reboot the instance.

      $ sudo reboot
      

Disable write cache flushing

File systems, databases, and other apps use cache flushing to ensure that data is committed to durable storage at various checkpoints. For most storage devices, this default makes sense. However, write cache flushes are fairly slow on local SSDs. You can increase the write performance for some apps by disabling automatic flush commands in those apps or by disabling flush options at the file system level.

Local SSDs always flush cached writes within two seconds regardless of the flush commands that you set for your file systems and apps, so temporary hardware failures can cause you to lose only two seconds of cached writes at most. Permanent hardware failures can still cause loss of all data on the device whether the data is flushed or not, so you should still back up critical data to persistent disks or Cloud Storage buckets.

To disable write cache flushing on ext4 file systems, include the nobarrier setting in your mount options or in your /etc/fstab entries. For example:

$ sudo mount -o discard,defaults,nobarrier /dev/[LOCAL_SSD_ID] /mnt/disks/[MNT_DIR]

where: [LOCAL_SSD_ID] is the device ID for the local SSD that you want to mount and [MNT_DIR] is the directory in which to mount it.
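To make the setting persistent across reboots, you can add a similar entry to /etc/fstab; this is a sketch using the same placeholder device ID and mount directory:

    # Example /etc/fstab entry that mounts the local SSD with discard and nobarrier.
    /dev/[LOCAL_SSD_ID] /mnt/disks/[MNT_DIR] ext4 discard,defaults,nobarrier 0 2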

Benchmarking local SSD performance

The local SSD performance figures provided in the Performance section were achieved by using specific settings on the local SSD instance. If your instance is having trouble reaching these performance limits and you have already configured the instance using the recommended local SSD settings, you can compare your performance limits against the published limits by replicating the settings used by the Compute Engine team.

  1. Create a local SSD instance that has four or eight vCPUs for each device, depending on your workload. For example, if you want to attach four local SSD devices to an instance, use a machine type with 16 or 32 vCPUs.

    The following command creates a virtual machine with 8 vCPUs and a single local SSD:

    gcloud compute instances create ssd-test-instance \
    --machine-type "n1-standard-8" \
    --local-ssd interface="SCSI"
    
  2. Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the Performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

    # install dependencies
    sudo apt-get update -y
    sudo apt-get install -y build-essential git libtool gettext autoconf \
    libgconf2-dev libncurses5-dev python-dev fio bison autopoint
    
    # blkdiscard
    git clone https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git
    cd util-linux/
    ./autogen.sh
    ./configure --disable-libblkid
    make
    sudo mv blkdiscard /usr/bin/
    sudo blkdiscard /dev/disk/by-id/google-local-ssd-0
    
    # full write pass - measures write bandwidth with 1M blocksize
    sudo fio --name=writefile --size=100G --filesize=100G \
    --filename=/dev/disk/by-id/google-local-ssd-0 --bs=1M --nrfiles=1 \
    --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers --end_fsync=1 \
    --iodepth=200 --ioengine=libaio
    
    # rand read - measures max read IOPS with 4k blocks
    sudo fio --time_based --name=benchmark --size=100G --runtime=30 \
    --filename=/dev/disk/by-id/google-local-ssd-0 --ioengine=libaio --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=4 --rw=randread --blocksize=4k --group_reporting
    
    # rand write - measures max write IOPS with 4k blocks
    sudo fio --time_based --name=benchmark --size=100G --runtime=30 \
    --filename=/dev/disk/by-id/google-local-ssd-0 --ioengine=libaio --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
    
