The performance by disk type chart describes the maximum achievable performance for local SSD partitions. To optimize your apps and VM instances to achieve these speeds, use the following best practices:
Use guest environment optimizations for Local SSDs
By default, most Compute Engine-provided Linux images automatically run an optimization script that configures the instance for peak local SSD performance. The script enables certain queue sysfs settings that enhance the overall performance of your machine and masks interrupt requests (IRQs) to specific virtual CPUs (vCPUs). This script optimizes performance only for Compute Engine local SSD partitions.
Ubuntu, SLES, and other earlier images might not include this performance optimization. If you are using one of these images, or an image earlier than v20141218, install the guest environment to enable these optimizations.
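If you want to confirm what the optimization script configured on a running instance, you can inspect the queue sysfs settings and IRQ affinity directly. The following is a minimal sketch; sdb is an assumed device name, so substitute the local SSD device on your instance:

$ # I/O scheduler and queue settings applied to the assumed device sdb
$ cat /sys/block/sdb/queue/scheduler
$ # vCPU affinity assigned to each interrupt line
$ grep . /proc/irq/*/smp_affinity_list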
Select the best image for NVMe or SCSI interfaces
Local SSDs can expose either an NVMe or SCSI interface, and the best choice depends on the operating system you are using. Choose an interface for your local SSD partitions that works best with your boot disk image. If your instances connect to local SSDs using SCSI interfaces, you can enable multi-queue SCSI on the guest OS to achieve optimal performance over the SCSI interface.
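The interface is selected when you attach the local SSD at instance creation time. The following gcloud sketch shows the idea; the instance name and zone are assumptions for illustration:

$ # Attach a local SSD over NVMe (use interface=SCSI for the SCSI interface)
$ gcloud compute instances create example-instance \
      --zone us-central1-f \
      --local-ssd interface=NVME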
Enable multi-queue SCSI on instances with custom images and Local SSDs
Some public images support multi-queue SCSI. If you require multi-queue SCSI capability on custom images that you import to your project, you must enable it yourself. Your imported Linux images can use multi-queue SCSI only if they include kernel version 3.19 or later. To enable multi-queue SCSI on a custom image, import the image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature enabled and add an entry to your GRUB config:
CentOS
For CentOS 7 only.
1. Import your custom image using the API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE (see the image import sketch after the Ubuntu steps).
2. Create an instance using your custom image and attach one or more local SSDs.
3. Connect to your instance through SSH.
4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file:

   $ cat /sys/module/scsi_mod/parameters/use_blk_mq

5. If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system:

   a. Open the /etc/default/grub GRUB config file in a text editor:

      $ sudo vi /etc/default/grub

   b. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry:

      GRUB_CMDLINE_LINUX="vconsole.keymap=us console=ttyS0,38400n8 vconsole.font=latarcyrheb-sun16 scsi_mod.use_blk_mq=Y"

   c. Save the config file.

   d. Run the grub2-mkconfig command to regenerate the GRUB file and complete the configuration:

      $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

   e. Reboot the instance:

      $ sudo reboot
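After the instance reboots, you can confirm that the change took effect. This brief verification sketch applies equally to the Ubuntu procedure below:

$ # The flag should now appear on the running kernel's command line
$ grep -o 'scsi_mod.use_blk_mq=Y' /proc/cmdline
$ # The module parameter should now read Y
$ cat /sys/module/scsi_mod/parameters/use_blk_mq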
Ubuntu
1. Import your custom image using the Compute Engine API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE (see the image import sketch after these steps).
2. Create an instance using your custom image and attach one or more local SSDs using the SCSI interface.
3. Connect to your instance through SSH.
4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file:

   $ cat /sys/module/scsi_mod/parameters/use_blk_mq

5. If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system:

   a. Open the /etc/default/grub GRUB config file in a text editor:

      $ sudo nano /etc/default/grub

   b. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry:

      GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=Y"

   c. Save the config file.

   d. Run the update-grub command to regenerate the GRUB file and complete the configuration:

      $ sudo update-grub

   e. Reboot the instance:

      $ sudo reboot
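For step 1 of either procedure above, the guest OS feature can be set through the gcloud CLI as well as the API. This is a minimal sketch; the image name and Cloud Storage source URI are hypothetical placeholders:

$ # Create a custom image with the multi-queue SCSI guest OS feature enabled
$ # (my-imported-image and gs://my-bucket/my-image.tar.gz are assumptions)
$ gcloud compute images create my-imported-image \
      --source-uri gs://my-bucket/my-image.tar.gz \
      --guest-os-features VIRTIO_SCSI_MULTIQUEUE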
Disable write cache flushing
File systems, databases, and other apps use cache flushing to ensure that data is committed to durable storage at various checkpoints. For most storage devices, this default makes sense. However, write cache flushes are fairly slow on local SSDs. You can increase the write performance for some apps by disabling automatic flush commands in those apps or by disabling flush options at the file system level.
Local SSDs always flush cached writes within two seconds regardless of the flush commands that you set for your file systems and apps, so temporary hardware failures can cause you to lose only two seconds of cached writes at most. Permanent hardware failures can still cause loss of all data on the device whether the data is flushed or not, so you should still back up critical data to Persistent Disks or Cloud Storage buckets.
To disable write cache flushing on ext4 file systems, include the nobarrier setting in your mount options or in your /etc/fstab entries. For example:

$ sudo mount -o discard,defaults,nobarrier /dev/[LOCAL_SSD_ID] /mnt/disks/[MNT_DIR]

where [LOCAL_SSD_ID] is the device ID for the local SSD that you want to mount and [MNT_DIR] is the directory in which to mount it.
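To make the mount persist across reboots, the same options can go in /etc/fstab. A minimal sketch, keeping the placeholders from the example above:

# Hypothetical /etc/fstab entry; replace the placeholders with your device and mount point
/dev/[LOCAL_SSD_ID] /mnt/disks/[MNT_DIR] ext4 discard,defaults,nobarrier 0 2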
What's next
- Benchmark your local SSDs.
- Learn about local SSD pricing.