Local SSD performance limits provided in the Choose a storage option section were achieved by using specific settings on the local SSD instance. If your virtual machine (VM) instance is having trouble reaching these performance limits and you have already configured the instance using the recommended local SSD settings, you can compare your performance limits against the published limits by replicating the settings used by the Compute Engine team.
These instructions assume that you are using a Linux operating system with the apt package manager installed.
Create a virtual machine with 8 vCPUs and one local SSD partition
Create a local SSD instance that has four or eight vCPUs for each device, depending on your workload. For example, if you want to attach four local SSD partitions to an instance, use a machine type with 16 or 32 vCPUs.
The following command creates a virtual machine with 8 vCPUs and a single local SSD:
gcloud compute instances create ssd-test-instance \
    --machine-type "n1-standard-8" \
    --local-ssd interface=nvme
Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

# install tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux

# discard local SSD sectors
sudo blkdiscard /dev/disk/by-id/google-local-nvme-ssd-0

# full write pass - measures write bandwidth with 1M blocksize
sudo fio --name=writefile --size=100G --filesize=100G \
  --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --bs=1M --nrfiles=1 \
  --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers --end_fsync=1 \
  --iodepth=200 --ioengine=libaio

# rand read - measures max read IOPS with 4k blocks
sudo fio --time_based --name=benchmark --size=100G --runtime=30 \
  --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --ioengine=libaio --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=4 --rw=randread --blocksize=4k --group_reporting

# rand write - measures max write IOPS with 4k blocks
sudo fio --time_based --name=benchmark --size=100G --runtime=30 \
  --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --ioengine=libaio --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
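To see why the --bs choice matters, note that throughput is approximately IOPS multiplied by the block size. The following sketch shows the arithmetic in shell; the IOPS value is an illustrative example figure, not a published limit:

```shell
# Illustrative only: relate IOPS to throughput at a 4k block size.
IOPS=170000        # assumed example figure, not a published limit
BLOCK_BYTES=4096   # 4k blocks, matching --blocksize=4k above

# Throughput in MiB/s = IOPS * block size / 2^20
echo $(( IOPS * BLOCK_BYTES / 1024 / 1024 ))  # prints 664
```

At larger block sizes the same IOPS corresponds to proportionally higher bandwidth, which is why the sequential write test uses --bs=1M while the IOPS tests use 4k blocks.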
Create a virtual machine with 32 vCPUs and 24 local SSD partitions
If you want to attach 24 local SSD partitions to an instance, use a machine type with 32 or more vCPUs.
The following command creates a virtual machine with 32 vCPUs and 24 local SSD partitions:
gcloud compute instances create ssd-test-instance \
    --machine-type "n1-standard-32" \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme
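Rather than typing the --local-ssd flag 24 times, the flag list can be built with a loop. This is a sketch; NUM_SSDS and SSD_FLAGS are illustrative variable names, not gcloud options:

```shell
# Build 24 repeated "--local-ssd interface=nvme" flags in a loop.
NUM_SSDS=24
SSD_FLAGS=""
for i in $(seq 1 "$NUM_SSDS"); do
  SSD_FLAGS="$SSD_FLAGS --local-ssd interface=nvme"
done

# The generated flags could then be passed to gcloud, for example:
#   gcloud compute instances create ssd-test-instance \
#       --machine-type "n1-standard-32" $SSD_FLAGS
echo "$SSD_FLAGS" | grep -o -- '--local-ssd' | wc -l
```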
Install the mdadm tool. The install process for mdadm includes a user prompt that halts scripts, so run the process manually:

Debian and Ubuntu
sudo apt update && sudo apt install mdadm --no-install-recommends
CentOS and RHEL
sudo yum install mdadm -y
SLES and openSUSE
sudo zypper install -y mdadm
Use the lsblk command to identify all of the local SSDs that you want to mount together:

lsblk
The output looks similar to the following:
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0   10G  0 disk
└─sda1      8:1    0   10G  0 part /
nvme0n1   259:0    0  375G  0 disk
nvme0n2   259:1    0  375G  0 disk
nvme0n3   259:2    0  375G  0 disk
nvme0n4   259:3    0  375G  0 disk
nvme0n5   259:4    0  375G  0 disk
nvme0n6   259:5    0  375G  0 disk
nvme0n7   259:6    0  375G  0 disk
nvme0n8   259:7    0  375G  0 disk
nvme0n9   259:8    0  375G  0 disk
nvme0n10  259:9    0  375G  0 disk
nvme0n11  259:10   0  375G  0 disk
nvme0n12  259:11   0  375G  0 disk
nvme0n13  259:12   0  375G  0 disk
nvme0n14  259:13   0  375G  0 disk
nvme0n15  259:14   0  375G  0 disk
nvme0n16  259:15   0  375G  0 disk
nvme0n17  259:16   0  375G  0 disk
nvme0n18  259:17   0  375G  0 disk
nvme0n19  259:18   0  375G  0 disk
nvme0n20  259:19   0  375G  0 disk
nvme0n21  259:20   0  375G  0 disk
nvme0n22  259:21   0  375G  0 disk
nvme0n23  259:22   0  375G  0 disk
nvme0n24  259:23   0  375G  0 disk
Local SSDs in SCSI mode have standard names similar to sdb. Local SSDs in NVMe mode have names similar to nvme0n1.

Use the mdadm tool to combine multiple local SSD devices into a single array named /dev/md0. This example merges 24 local SSD devices in NVMe mode. For local SSD devices in SCSI mode, specify the names that you obtained from the lsblk command:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 \
  /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 \
  /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8 \
  /dev/nvme0n9 /dev/nvme0n10 /dev/nvme0n11 /dev/nvme0n12 \
  /dev/nvme0n13 /dev/nvme0n14 /dev/nvme0n15 /dev/nvme0n16 \
  /dev/nvme0n17 /dev/nvme0n18 /dev/nvme0n19 /dev/nvme0n20 \
  /dev/nvme0n21 /dev/nvme0n22 /dev/nvme0n23 /dev/nvme0n24
The response is similar to the following:
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
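As with the gcloud flags, the long device list can be generated with a loop instead of typed out, assuming the nvme0nN naming pattern reported by lsblk in the previous step. A sketch, where DEVICES is an illustrative variable name:

```shell
# Build the list of 24 NVMe device paths programmatically.
DEVICES=""
for i in $(seq 1 24); do
  DEVICES="$DEVICES /dev/nvme0n$i"
done
echo "$DEVICES"

# The array could then be created and inspected with:
#   sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 $DEVICES
#   sudo mdadm --detail /dev/md0
```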
Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

# install tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux

# full write pass - measures write bandwidth with 1M blocksize
sudo fio --name=writefile --size=100G --filesize=100G \
  --filename=/dev/md0 --bs=1M --nrfiles=1 \
  --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers --end_fsync=1 \
  --iodepth=200 --ioengine=libaio

# rand read - measures max read IOPS with 4k blocks
sudo fio --time_based --name=benchmark --size=100G --runtime=30 \
  --filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=32 --rw=randread --blocksize=4k --group_reporting --norandommap

# rand write - measures max write IOPS with 4k blocks
sudo fio --time_based --name=benchmark --size=100G --runtime=30 \
  --filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=32 --rw=randwrite --blocksize=4k --group_reporting --norandommap
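If you want to collect results programmatically rather than reading fio's human-readable summary, fio can emit machine-readable output with --output-format=json. The sketch below parses a minimal sample of that format with python3; the field layout matches fio's JSON output, but the sample values are illustrative, not measured results:

```shell
# Create a minimal sample of fio's JSON output (illustrative values only).
cat > /tmp/fio_sample.json <<'EOF'
{"jobs": [{"jobname": "benchmark", "read": {"iops": 168000.5}}]}
EOF

# Extract the read IOPS field with python3.
python3 - <<'EOF'
import json
with open("/tmp/fio_sample.json") as f:
    data = json.load(f)
print(int(data["jobs"][0]["read"]["iops"]))  # prints 168000
EOF
```

Running the benchmark jobs above with --output-format=json and parsing the result this way makes it easier to compare repeated runs against the published limits.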
What's next
- Learn about local SSD pricing.