About local SSDs


Compute Engine offers always-encrypted local solid-state drive (SSD) block storage for virtual machine (VM) instances. Each local SSD is 375 GB in size, and you can attach a maximum of 24 local SSD partitions, for a total of 9 TB, per instance. Optionally, you can format and mount multiple local SSD partitions into a single logical volume.
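For example, on a Linux VM you can combine several local SSD partitions into a single RAID 0 array and mount it as one volume. The following Python sketch shells out to the usual mdadm and mkfs commands; the device paths, array name, and mount point are illustrative assumptions for a VM with two NVMe local SSDs, not values from this page.

```python
# Minimal sketch: combine two NVMe local SSD partitions into one RAID 0
# logical volume on a Linux guest. Device paths, array name, and mount
# point are assumptions; adjust them to match the devices on your VM.
import subprocess

DEVICES = ["/dev/nvme0n1", "/dev/nvme0n2"]  # assumed local SSD device paths
ARRAY = "/dev/md0"
MOUNT_POINT = "/mnt/disks/local-ssd"

def run(cmd):
    """Run a command and fail loudly if it returns a non-zero status."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build a RAID 0 array across the local SSD devices.
run(["mdadm", "--create", ARRAY, "--level=0",
     f"--raid-devices={len(DEVICES)}"] + DEVICES)

# Format the array and mount it as a single logical volume.
run(["mkfs.ext4", "-F", ARRAY])
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", ARRAY, MOUNT_POINT])
run(["chmod", "a+w", MOUNT_POINT])
```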

Unlike persistent disks, local SSDs are physically attached to the server that hosts your VM instance. This tight coupling offers superior performance, very high input/output operations per second (IOPS), and very low latency compared to persistent disks. See Configure disks to meet performance requirements for details.

Local SSDs are suitable only for temporary storage such as caches, processing space, or low-value data. To store data that is not temporary or ephemeral in nature, use one of our durable storage options.

You cannot stop a VM with a local SSD by using the gcloud CLI or the console. However, Compute Engine does not prevent you from shutting down a VM from inside the guest operating system (OS). If you shut down a VM with a local SSD through the guest OS, the data on the local SSD is lost. Make sure that you migrate your critical data from the local SSD to a persistent disk or to another VM before you shut down or delete the VM.
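One straightforward way to do that migration on Linux is to copy the files to a mounted persistent disk. The sketch below wraps rsync for that purpose; both mount points are hypothetical paths assumed for illustration, not values from this page.

```python
# Minimal sketch: copy data from a mounted local SSD to a mounted
# persistent disk before shutting down or deleting the VM. Both mount
# points are assumptions.
import subprocess

LOCAL_SSD_MOUNT = "/mnt/disks/local-ssd"   # assumed local SSD mount point
PERSISTENT_DISK_MOUNT = "/mnt/disks/pd"    # assumed persistent disk mount point

# rsync preserves permissions and timestamps and only copies what changed,
# so it can be re-run safely until the final shutdown.
subprocess.run(
    ["rsync", "-a", "--progress",
     f"{LOCAL_SSD_MOUNT}/", f"{PERSISTENT_DISK_MOUNT}/"],
    check=True,
)
```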

If local SSDs do not meet your redundancy or flexibility requirements, you can use local SSDs in combination with other storage options.

9 TB maximum capacity

You can create an instance with:

  • 16 local SSD partitions for 6 TB of local SSD space and performance of 1.6 million read IOPS
  • 24 local SSD partitions for 9 TB of local SSD space and performance of 2.4 million read IOPS

This is available on instances with N1, N2, N2D, and custom machine types. To achieve maximum performance on N1 machines, select a machine type with 32 or more vCPUs. To achieve maximum performance on N2 and N2D machines, select a machine type with 24 or more vCPUs.

Note that reading from and writing to local SSDs requires CPU cycles from your virtual machine. To achieve high and consistent IOPS levels, you must have free CPUs to process input and output operations. To learn more, see Configure disks to meet performance requirements.
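As a concrete illustration, the following sketch shells out to the gcloud CLI to create an N2 VM with 32 vCPUs and 24 NVMe local SSD partitions (9 TB in total), repeating the --local-ssd flag once per 375 GB partition. The instance name, zone, machine type, and image are placeholder choices, not requirements from this page.

```python
# Minimal sketch: create a VM with 24 NVMe local SSD partitions (9 TB)
# by repeating the --local-ssd flag, one flag per 375 GB partition.
# The instance name, zone, machine type, and image are placeholders.
import subprocess

cmd = [
    "gcloud", "compute", "instances", "create", "example-local-ssd-vm",
    "--zone=us-central1-a",
    "--machine-type=n2-standard-32",   # 32 vCPUs, above the 24-vCPU guidance for N2
    "--image-family=debian-10",
    "--image-project=debian-cloud",
]

# One --local-ssd flag per partition; 24 partitions gives 9 TB in total.
cmd += ["--local-ssd=interface=nvme"] * 24

subprocess.run(cmd, check=True)
```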

Local SSD data persistence

Before you create an instance with a local SSD, you must understand what events preserve your local SSD data and what events can cause your local SSD data to be unrecoverable.

Data on local SSDs persists only through the following events:

  • If you reboot the guest operating system.
  • If you configure your instance for live migration and the instance goes through a host maintenance event (the sketch after this list shows one way to set this policy).
  • If the host system experiences a host error, Compute Engine makes a best effort to reconnect to the VM and preserve the local SSD data, but might not succeed. If the attempt is successful, the VM restarts automatically. However, if the attempt to reconnect fails, the VM restarts without the data. While Compute Engine is recovering your VM and local SSD, which can take up to 60 minutes, the host system and the underlying drive are unresponsive. To configure how your VM instances behave in the event of a host error, see Setting instance availability policies.
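One way to configure a VM for live migration is to set its availability policy with the gcloud CLI, roughly as in the sketch below; the instance name and zone are placeholders.

```python
# Minimal sketch: configure an existing VM to live migrate during host
# maintenance so that local SSD data is preserved. The instance name and
# zone are placeholders.
import subprocess

subprocess.run(
    [
        "gcloud", "compute", "instances", "set-scheduling",
        "example-local-ssd-vm",
        "--zone=us-central1-a",
        "--maintenance-policy=MIGRATE",
    ],
    check=True,
)
```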

Data on local SSDs does not persist through the following events:

  • If you shut down the guest operating system and force the instance to stop.
  • If you configure the instance to be preemptible and the instance goes through the preemption process.
  • If you configure the instance to stop on host maintenance events and the instance goes through a host maintenance event.
  • If the host system experiences a host error and the underlying drive does not recover within 60 minutes, Compute Engine does not attempt to preserve the data on your local SSD.
  • If you misconfigure the local SSD so that it becomes unreachable.
  • If you disable project billing, the instance stops and your data is lost.

Choosing an interface

You can connect local SSDs to your VMs by using either an NVMe interface or a SCSI interface. Most public images include both NVMe and SCSI drivers, and most include a kernel with optimized drivers that let your VM achieve the best performance with NVMe. Imported Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.

The following images support NVMe but do not include all optimizations for NVMe:

  • Debian 9 and earlier
  • CentOS 6 and earlier
  • RHEL 6 and earlier
  • SLES 12 SP3 and earlier
  • Container-Optimized OS (COS) 65 and earlier

If you have an existing setup that requires using a SCSI interface, use an image that supports multi-queue SCSI to achieve better performance over the standard SCSI interface.

The following images support multi-queue SCSI:

  • Debian 9 Stretch images, or image family debian-9
  • Ubuntu 14.04 LTS image ubuntu-1404-trusty-v20170807 and later, or image family ubuntu-1404-lts
  • Ubuntu 16.04 LTS image ubuntu-1604-xenial-v20170803 and later, or image family ubuntu-1604-lts
  • Ubuntu 17.10 image family ubuntu-1710
  • Ubuntu 18.04 LTS image family ubuntu-1804-lts
  • All Windows Server images
  • All SQL Server images

Optionally, you can enable multi-queue SCSI for custom images that you import to your project. To learn more, see Enabling multi-queue SCSI.
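As a rough sketch of what enabling it looks like, the example below creates a custom image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature from an existing disk; the image name, source disk, and zone are placeholders, and the authoritative steps are in that guide.

```python
# Minimal sketch: create a custom image with the multi-queue SCSI guest OS
# feature enabled. The image name, source disk, and zone are placeholders.
import subprocess

subprocess.run(
    [
        "gcloud", "compute", "images", "create", "my-multiqueue-image",
        "--source-disk=my-imported-disk",
        "--source-disk-zone=us-central1-a",
        "--guest-os-features=VIRTIO_SCSI_MULTIQUEUE",
    ],
    check=True,
)
```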

Choosing a valid number of local SSDs

The number of local SSDs that you can attach to a single VM instance depends on the machine type of the VM. Depending on the machine type, you can attach 1 to 8, 16, or 24 local SSDs to a single VM. For more information, see the restrictions for local SSDs and machine types.
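As a quick illustration of that rule, the small helper below checks a requested partition count against the values listed here (1 through 8, 16, or 24); the exact set that a particular machine type accepts can be narrower, so treat it as a sketch.

```python
# Minimal sketch: check a requested local SSD partition count against the
# counts this page lists as valid (1-8, 16, or 24). A given machine type
# may allow only a subset of these; consult the machine type documentation.
VALID_LOCAL_SSD_COUNTS = set(range(1, 9)) | {16, 24}

def is_valid_local_ssd_count(count: int) -> bool:
    return count in VALID_LOCAL_SSD_COUNTS

assert is_valid_local_ssd_count(8)
assert is_valid_local_ssd_count(24)
assert not is_valid_local_ssd_count(12)
```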

Performance

Local SSD performance depends heavily on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces. If you use NVMe, use an image with optimized NVMe drivers to achieve the best performance. For more information, see Choosing an interface.

To reach maximum performance limits with an N1 machine type, use 32 or more vCPUs. To reach maximum performance limits on an N2, N2D, or A2 machine type, use 24 or more vCPUs.

NVMe

Storage space (GB)   Partitions   Read IOPS   Write IOPS   Read throughput (MB/s)   Write throughput (MB/s)
375                  1            170,000     90,000       660                      350
750                  2            340,000     180,000      1,320                    700
1,125                3            510,000     270,000      1,980                    1,050
1,500                4            680,000     360,000      2,650                    1,400
1,875                5            680,000     360,000      2,650                    1,400
2,250                6            680,000     360,000      2,650                    1,400
2,625                7            680,000     360,000      2,650                    1,400
3,000                8            680,000     360,000      2,650                    1,400
6,000                16           1,600,000   800,000      6,240                    3,120
9,000                24           2,400,000   1,200,000    9,360                    4,680

SCSI

Storage space (GB)   Partitions   Read IOPS   Write IOPS   Read throughput (MB/s)   Write throughput (MB/s)
375                  1            100,000     70,000       390                      270
750                  2            200,000     140,000      780                      550
1,125                3            300,000     210,000      1,170                    820
1,500                4            400,000     280,000      1,560                    1,090
1,875                5            400,000     280,000      1,560                    1,090
2,250                6            400,000     280,000      1,560                    1,090
2,625                7            400,000     280,000      1,560                    1,090
3,000                8            400,000     280,000      1,560                    1,090
6,000                16           900,000     800,000      6,240                    3,120
9,000                24           900,000     800,000      9,360                    4,680

Read and write IOPS are expected to be 20% lower on VMs with N2D machine types, compared to VMs with N1, N2, or A2 machine types.
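One common way to see how close a particular VM gets to these numbers is to run a synthetic benchmark. The sketch below uses fio, a widely used I/O benchmarking tool that this page does not prescribe, to measure 4 KB random-read IOPS against a file on an assumed local SSD mount point; the file size and job parameters are also assumptions.

```python
# Minimal sketch: measure random-read IOPS on a mounted local SSD with fio.
# The mount point, test file size, and job parameters are assumptions.
import subprocess

TEST_FILE = "/mnt/disks/local-ssd/fio-test-file"  # assumed local SSD mount

subprocess.run(
    [
        "fio",
        "--name=local-ssd-randread",
        f"--filename={TEST_FILE}",
        "--size=10G",
        "--ioengine=libaio",
        "--direct=1",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=64",
        "--numjobs=8",
        "--time_based",
        "--runtime=60",
        "--group_reporting",
    ],
    check=True,
)
```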

Optimizing local SSD performance

There are several VM and disk configuration settings that can improve local SSD performance. For more information, see Optimizing local SSD performance.