The performance by disk type chart describes the maximum achievable performance for Local SSD partitions. To optimize your apps and VM instances to achieve these speeds, use the following best practices:
Use guest environment optimizations for Local SSDs
By default, most Compute Engine-provided Linux images automatically run an optimization script that configures the instance for peak Local SSD performance. The script enables certain queue sysfs settings that enhance the overall performance of your machine and masks interrupt requests (IRQs) to specific virtual CPUs (vCPUs). This script only optimizes performance for Compute Engine Local SSD partitions.
Ubuntu, SLES, and other earlier images might not include this performance optimization. If you are using any of these images, or an image version earlier than v20141218, you can install the guest environment to enable these optimizations.
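To verify that these optimizations are in place on a running instance, you can check that the guest environment agent is active and inspect the queue sysfs settings and IRQ affinities that the script adjusts. This is a minimal sketch; the service name google-guest-agent and the device name nvme0n1 are assumptions that depend on your image version and on how the Local SSD is attached.

# Check that the guest environment agent is running (service name varies on older images).
$ sudo systemctl status google-guest-agent

# Inspect the queue sysfs settings for a Local SSD device (device name is an example).
$ cat /sys/block/nvme0n1/queue/scheduler

# Review which vCPUs each IRQ is assigned to.
$ grep . /proc/irq/*/smp_affinity_list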
Choose an interface to connect your Local SSDs
You can connect Local SSDs to your VMs using either the NVMe interface or the SCSI interface. The best choice depends on the operating system (OS) you are using. For most workload configurations involving Local SSDs, using the NVMe interface leads to better performance.
If you need to use a specific OS, choose an interface for your local SSD partitions that works best with your boot disk image.
If you have an existing setup that requires using a SCSI interface, use an image that supports multi-queue SCSI to achieve better performance over the standard SCSI interface.
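For example, when you create a VM with the gcloud CLI, you can choose the interface for each Local SSD partition with the --local-ssd flag. The following is a sketch only; the instance name, zone, machine type, and image values are placeholders, and you would substitute interface=scsi if your setup requires the SCSI interface.

$ gcloud compute instances create example-instance \
    --zone us-central1-a \
    --machine-type n1-standard-8 \
    --image-family debian-12 \
    --image-project debian-cloud \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme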
Enable multi-queue SCSI
Some public images support multi-queue SCSI. To use multi-queue SCSI on custom images that you import to your project, you must enable it yourself. Your imported Linux images can use multi-queue SCSI only if they include kernel version 3.19 or later.
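If you are not sure which kernel version an image includes, you can check from a running instance; the following generic command must report version 3.19 or later for multi-queue SCSI to be usable.

$ uname -r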
To enable multi-queue SCSI on a custom image, import the image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature enabled, as shown in the example below, and then add an entry to your GRUB config by following the steps for your operating system.
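The following is a sketch of the import step using the gcloud CLI instead of calling the API directly; the image name and Cloud Storage path are placeholders. The --guest-os-features flag sets the same guestOsFeatures item described in the steps below.

$ gcloud compute images create my-custom-image \
    --source-uri gs://example-bucket/my-image.tar.gz \
    --guest-os-features=VIRTIO_SCSI_MULTIQUEUE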
CentOS

For CentOS 7 only.

1. Import your custom image using the API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE.

2. Create an instance using your custom image and attach one or more local SSDs.

3. Connect to your instance through SSH.

4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file:

   $ cat /sys/module/scsi_mod/parameters/use_blk_mq

5. If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system:

   a. Open the /etc/default/grub GRUB config file in a text editor.

      $ sudo vi /etc/default/grub

   b. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry.

      GRUB_CMDLINE_LINUX=" vconsole.keymap=us console=ttyS0,38400n8 vconsole.font=latarcyrheb-sun16 scsi_mod.use_blk_mq=Y"

   c. Save the config file.

   d. Run the grub2-mkconfig command to regenerate the GRUB file and complete the configuration.

      $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

   e. Reboot the instance.

      $ sudo reboot
Ubuntu

1. Import your custom image using the Compute Engine API and include a guestOsFeatures item with a type value of VIRTIO_SCSI_MULTIQUEUE.

2. Create an instance using your custom image and attach one or more local SSDs using the SCSI interface.

3. Connect to your instance through SSH.

4. Check the value of the /sys/module/scsi_mod/parameters/use_blk_mq file:

   $ cat /sys/module/scsi_mod/parameters/use_blk_mq

5. If the value of this file is Y, then multi-queue SCSI is already enabled on your imported image. If the value of the file is N, include scsi_mod.use_blk_mq=Y in the GRUB_CMDLINE_LINUX entry in your GRUB config file and restart the system:

   a. Open the /etc/default/grub GRUB config file in a text editor.

      $ sudo nano /etc/default/grub

   b. Add scsi_mod.use_blk_mq=Y to the GRUB_CMDLINE_LINUX entry.

      GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=Y"

   c. Save the config file.

   d. Run the update-grub command to regenerate the GRUB file and complete the configuration.

      $ sudo update-grub

   e. Reboot the instance.

      $ sudo reboot
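After the instance reboots, you can repeat the earlier check to confirm that the change took effect; the file should now report Y.

$ cat /sys/module/scsi_mod/parameters/use_blk_mq
Y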