The performance by disk type chart describes the maximum achievable performance for local SSD partitions. To optimize your apps and VM instances to achieve these speeds, use the following best practices:
Use guest environment optimizations for Local SSDs
By default, most Compute Engine-provided Linux images automatically run an optimization script that configures the instance for peak Local SSD performance. The script enables certain queue sysfs settings that enhance the overall performance of your machine and masks interrupt requests (IRQs) to specific virtual CPUs (vCPUs). This script optimizes performance only for Compute Engine Local SSD partitions.
Ubuntu, SLES, and other earlier images might not be configured to include this performance optimization. If you are using any of these images or an image that is earlier than v20141218, you can install the guest environment to enable these optimizations.
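If you want to spot-check that these optimizations are in effect on a running instance, you can inspect the block queue settings and IRQ assignments that the script touches. This is only a quick sanity check, and it assumes your Local SSD is exposed as nvme0n1; the actual device name depends on the interface and attachment order:
$ cat /sys/block/nvme0n1/queue/scheduler
$ grep -i nvme /proc/interrupts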
Choose an interface to connect your local SSDs
You can connect Local SSDs to your VMs using either the NVMe interface or the SCSI interface. The best choice depends on the operating system (OS) you are using. For most workload configurations involving Local SSDs, using the NVMe interface leads to better performance.
If you need to use a specific OS, choose an interface for your local SSD partitions that works best with your boot disk image.
If you have an existing setup that requires using a SCSI interface, use an image that supports multi-queue SCSI to achieve better performance over the standard SCSI interface.
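As an illustration, if you create VMs with the gcloud CLI, you choose the interface for each Local SSD partition at instance creation time. The following sketch uses placeholder values for the VM name, zone, and machine type; adjust them to your environment:
$ gcloud compute instances create my-nvme-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME
Each --local-ssd flag attaches one Local SSD partition with the specified interface.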
Enable multi-queue SCSI
Some public images support multi-queue SCSI. To use multi-queue SCSI on custom
images that you import to your project, you must enable it yourself. Your
imported Linux images can use multi-queue SCSI only if they include kernel
version 3.19
or later.
To enable multi-queue SCSI on a custom image, import the image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature enabled and add an entry to your GRUB config, as described in the OS-specific steps below.
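For example, with the gcloud CLI you can set this guest OS feature when you import the image. This is a sketch that assumes your image file has already been uploaded to a Cloud Storage bucket; the image and bucket names are placeholders:
$ gcloud compute images create my-custom-image \
    --source-uri=gs://my-bucket/my-image.tar.gz \
    --guest-os-features=VIRTIO_SCSI_MULTIQUEUE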
CentOS
For CentOS 7 only.
1. Import your custom image using the API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE.
2. Create an instance using your custom image and attach one or more local SSDs.
3. Connect to your instance through SSH.
4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file.
   $ cat /sys/module/scsi_mod/parameters/use_blk_mq
   If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system:
   1. Open the /etc/default/grub GRUB config file in a text editor.
      $ sudo vi /etc/default/grub
   2. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry.
      GRUB_CMDLINE_LINUX="vconsole.keymap=us console=ttyS0,38400n8 vconsole.font=latarcyrheb-sun16 scsi_mod.use_blk_mq=Y"
   3. Save the config file.
   4. Run the grub2-mkconfig command to regenerate the GRUB file and complete the configuration.
      $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
   5. Reboot the instance.
      $ sudo reboot
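After the instance comes back up, you can confirm that the kernel picked up the new parameter by checking the boot command line and re-reading the sysfs file; this is only a sanity check:
$ cat /proc/cmdline
$ cat /sys/module/scsi_mod/parameters/use_blk_mq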
Ubuntu
1. Import your custom image using the Compute Engine API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE.
2. Create an instance using your custom image and attach one or more local SSDs using the SCSI interface, for example with the gcloud command shown after these steps.
3. Connect to your instance through SSH.
4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file.
   $ cat /sys/module/scsi_mod/parameters/use_blk_mq
   If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system:
   1. Open the /etc/default/grub GRUB config file in a text editor.
      $ sudo nano /etc/default/grub
   2. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry.
      GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=Y"
   3. Save the config file.
   4. Run the update-grub command to regenerate the GRUB file and complete the configuration.
      $ sudo update-grub
   5. Reboot the instance.
      $ sudo reboot
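The instance-creation step above can be scripted with the gcloud CLI. This is a sketch; the instance name, zone, machine type, and image name are placeholders, and it assumes the custom image is in the same project:
$ gcloud compute instances create my-scsi-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --image=my-custom-image \
    --local-ssd=interface=SCSI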
Disable write cache flushing
File systems, databases, and other apps use cache flushing to ensure that data is committed to durable storage at various checkpoints. For most storage devices, this default makes sense. However, write cache flushes are fairly slow on local SSDs. You can increase the write performance for some apps by disabling automatic flush commands in those apps or by disabling flush options at the file system level.
Local SSDs always flush cached writes within two seconds regardless of the flush commands that you set for your file systems and apps, so temporary hardware failures can cause you to lose only two seconds of cached writes at most. Permanent hardware failures can still cause loss of all data on the device whether the data is flushed or not, so you should still back up critical data to Persistent Disks or Cloud Storage buckets.
To disable write cache flushing on ext4 file systems, include the nobarrier setting in your mount options or in your /etc/fstab entries. For example:
$ sudo mount -o discard,defaults,nobarrier /dev/[LOCAL_SSD_ID] /mnt/disks/[MNT_DIR]
where [LOCAL_SSD_ID] is the device ID for the local SSD that you want to mount and [MNT_DIR] is the directory in which to mount it.
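To make the setting persist across remounts, you can instead add an equivalent line to /etc/fstab. This sketch keeps the same placeholders as the mount command above:
/dev/[LOCAL_SSD_ID] /mnt/disks/[MNT_DIR] ext4 discard,defaults,nobarrier 0 2
In practice it is often more robust to reference the device by its filesystem UUID rather than by the /dev path, because device names can change across reboots.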
Disable VM automatic restart
Local SSD firmware includes a capability that unlocks faster write performance for sync-heavy workloads. When you enable this performance enhancement, the local SSD disk is more likely to lose all of its data if the VM that it's attached to fails.
To enable this performance enhancement, set your VM's automaticRestart policy to false. The automaticRestart field is a VM scheduling policy that specifies whether Compute Engine should automatically restart the VM if it fails. This means that if you enable the write performance enhancement on a local SSD and the VM that it's attached to fails, Compute Engine does not restart the VM and all data on the local SSD is lost.
You might need to make sure that your workload is set up to detect and restart terminated VMs before setting the automaticRestart field to false.
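One way to set this policy on an existing VM is with the gcloud CLI. This is a sketch; the VM name and zone are placeholders:
$ gcloud compute instances set-scheduling my-vm \
    --zone=us-central1-a \
    --no-restart-on-failure
The equivalent API call sets scheduling.automaticRestart to false on the instance.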