About local SSDs

Compute Engine offers always-encrypted local solid-state drive (SSD) block storage for virtual machine (VM) instances. Each local SSD partition is 375 GB in size, and you can attach a maximum of 24 local SSD partitions, for a total of 9 TB per instance. Optionally, you can format and mount multiple local SSD partitions into a single logical volume.
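
For example, here is a minimal sketch of combining four NVMe local SSD partitions into one RAID 0 logical volume with mdadm. The device names (/dev/nvme0n1 through /dev/nvme0n4) and the mount point are assumptions for illustration and vary by interface and image:

    # Combine four local SSD partitions into a single RAID 0 array (device names are assumptions).
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4

    # Format the combined volume and mount it.
    sudo mkfs.ext4 -F /dev/md0
    sudo mkdir -p /mnt/disks/local-ssd
    sudo mount /dev/md0 /mnt/disks/local-ssd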

Unlike persistent disks, local SSDs are physically attached to the server that hosts your VM instance. This tight coupling offers superior performance, very high input/output operations per second (IOPS), and very low latency compared to persistent disks. See Configure disks to meet performance requirements for details.

Local SSDs are suitable only for temporary storage such as caches, processing space, or low-value data. To store data that is not temporary or ephemeral in nature, use one of our durable storage options.

When you stop or suspend a VM, all data on the local SSD is discarded. To stop or suspend a VM with attached local SSD disks, you can use one of the following methods.

  • You can stop or suspend a VM with a local SSD by including the --discard-local-ssd option in the gcloud compute instances stop command. This option indicates that the contents of the local SSD are discarded when the VM stops. This is the recommended approach (see the example after this list).
  • You can shut down the VM from within the guest OS.
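
For example, assuming a VM named my-vm in zone us-central1-a (both placeholders), the recommended stop command looks like the following; exact flag behavior can vary across gcloud versions:

    gcloud compute instances stop my-vm \
        --zone=us-central1-a \
        --discard-local-ssd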

Before you shut down or delete a VM with local SSD disks, make sure that you migrate your critical data from the local SSD disk to a persistent disk or to another VM.
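
As an illustration, if the local SSD volume is mounted at /mnt/disks/local-ssd and a persistent disk at /mnt/disks/pd (both paths are assumptions for this sketch), you could copy the data before stopping the VM:

    # Copy everything from the local SSD mount to the persistent disk mount (paths are placeholders).
    sudo rsync -a /mnt/disks/local-ssd/ /mnt/disks/pd/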

If local SSDs do not meet your redundancy or flexibility requirements, you can use local SSDs in combination with other storage options.

9 TB maximum capacity

You can create an instance with:

  • 16 local SSD partitions for 6 TB of local SSD space and performance of 1.6 million read IOPS
  • 24 local SSD partitions for 9 TB of local SSD space and performance of 2.4 million read IOPS

This is available on instances with N1, N2, N2D, and custom machine types. To achieve maximum performance on N1 machines, select a machine type with 32 or more vCPUs. To achieve maximum performance on N2 and N2D machines, select a machine type with 24 or more vCPUs.
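
As an illustrative sketch only, the following creates an N2 VM with 32 vCPUs and 24 NVMe local SSD partitions; the VM name, zone, and image values are placeholders, and flag syntax may vary by gcloud version:

    # One --local-ssd flag is required per 375 GB partition; build 24 of them.
    LOCAL_SSD_FLAGS=$(for i in $(seq 1 24); do printf -- '--local-ssd=interface=NVME '; done)

    gcloud compute instances create my-ssd-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-32 \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        $LOCAL_SSD_FLAGS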

Note that reading and writing to local SSD disks requires CPU cycles from your virtual machine. To achieve high and consistent IOPS levels, you must have free CPUs to process input and output operations. To learn more, see Configure disks to meet performance requirements.

Local SSD data persistence

Before you create an instance with a local SSD, you must understand what events preserve your local SSD data and what events can cause your local SSD data to be unrecoverable.

Data on local SSDs persists only through the following events:

  • If you reboot the guest operating system.
  • If you configure your instance for live migration and the instance goes through a host maintenance event.
  • If the host system experiences a host error, Compute Engine makes a best effort to reconnect to the VM and preserve the local SSD data, but might not succeed. If the attempt is successful, the VM restarts automatically. However, if the attempt to reconnect fails, the VM restarts without the data. While Compute Engine is recovering your VM and local SSD, which can take up to 60 minutes, the host system and the underlying drive are unresponsive. To configure how your VM instances behave in the event of a host error, see Setting instance availability policies and the example after this list.
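
For example, to configure a VM to live migrate during host maintenance and to restart automatically after a host error (the VM name and zone below are placeholders), a command like the following might be used; see Setting instance availability policies for the authoritative options:

    gcloud compute instances set-scheduling my-vm \
        --zone=us-central1-a \
        --maintenance-policy=MIGRATE \
        --restart-on-failure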

Data on local SSDs does not persist through the following events:

  • If you shut down the guest operating system and force the instance to stop.
  • If you configure the instance to be preemptible and the instance goes through the preemption process.
  • If you configure the instance to stop on host maintenance events and the instance goes through a host maintenance event.
  • If the host system experiences a host error, and the underlying drive does not recover within 60 minutes, Compute Engine does not attempt to preserve the data on your local SSD. While Compute Engine is recovering your VM and local SSD, which can take up to 60 minutes, the host system and the underlying drive are unresponsive.
  • If you misconfigure the local SSD so that it becomes unreachable.
  • If you disable billing for the project, the instance stops and your data is lost.

Choosing an interface

You can connect local SSDs to your VMs using either an NVMe interface or a SCSI interface. Most public images include both NVMe and SCSI drivers. For public images that support SCSI, multi-queue SCSI is enabled. For a detailed list, see the supported interfaces column in the operating system details documentation.
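
To check which interface a running VM's local SSDs use, you can inspect block devices from inside the guest. This is a minimal sketch, and the by-id naming is an assumption based on common Compute Engine images:

    # Show each block device's transport type (nvme, or a SCSI-family value, under TRAN).
    lsblk -o NAME,SIZE,TRAN

    # On many images, local SSDs also appear under stable by-id symlinks.
    ls -l /dev/disk/by-id/ | grep -i local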

Considerations for NVMe for custom images

Most images include a kernel with optimized drivers that allow your VM to achieve the best performance using NVMe. Your imported custom Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.
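
To confirm that an imported custom Linux image meets this requirement, check the running kernel version from inside the guest:

    # The reported kernel release should be 4.14.68 or later for best NVMe performance.
    uname -r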

Considerations for SCSI for custom images

If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface. If you are using a custom image that you imported, see Enable multi-queue SCSI.
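
For reference, here is a hedged sketch of creating a custom image with the multi-queue SCSI guest OS feature enabled; the image and source disk names are placeholders, and Enable multi-queue SCSI remains the authoritative procedure:

    gcloud compute images create my-custom-image \
        --source-disk=my-source-disk \
        --source-disk-zone=us-central1-a \
        --guest-os-features=VIRTIO_SCSI_MULTIQUEUE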

Choosing a valid number of local SSDs

If you are attaching multiple local SSDs to a single VM instance, there are constraints on the number of local SSDs you can attach, based on the machine type of the VM instance. Depending on the machine type, you can attach 1 to 8, 16, or 24 local SSDs to a single VM. For more information, see Local SSDs and machine types restrictions.

Performance

Local SSD performance depends heavily on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces. If you choose to use NVMe, you must use a special NVMe-enabled image to achieve the best performance. For more information, see Selecting the NVMe or SCSI interfaces.

To reach maximum performance limits with an N1 machine type, use 32 or more vCPUs. To reach maximum performance limits with an N2, N2D, or A2 machine type, use 24 or more vCPUs. Read and write IOPS are expected to be about 20% lower on VMs with N2D machine types, compared to VMs with N1, N2, or A2 machine types.

NVMe

Storage space (GB)   Partitions   Read IOPS    Write IOPS    Read throughput (MB/s)   Write throughput (MB/s)
375                  1            170,000      90,000        660                      350
750                  2            340,000      180,000       1,320                    700
1,125                3            510,000      270,000       1,980                    1,050
1,500                4            680,000      360,000       2,650                    1,400
1,875                5            680,000      360,000       2,650                    1,400
2,250                6            680,000      360,000       2,650                    1,400
2,625                7            680,000      360,000       2,650                    1,400
3,000                8            680,000      360,000       2,650                    1,400
6,000                16           1,600,000    800,000       6,240                    3,120
9,000                24           2,400,000    1,200,000     9,360                    4,680

SCSI

Storage space (GB)   Partitions   Read IOPS   Write IOPS   Read throughput (MB/s)   Write throughput (MB/s)
375                  1            100,000     70,000       390                      270
750                  2            200,000     140,000      780                      550
1,125                3            300,000     210,000      1,170                    820
1,500                4            400,000     280,000      1,560                    1,090
1,875                5            400,000     280,000      1,560                    1,090
2,250                6            400,000     280,000      1,560                    1,090
2,625                7            400,000     280,000      1,560                    1,090
3,000                8            400,000     280,000      1,560                    1,090
6,000                16           900,000     800,000      6,240                    3,120
9,000                24           900,000     800,000      9,360                    4,680

Optimizing local SSD performance

There are several VM and disk configuration settings that can improve local SSD performance. For more information, see Optimizing local SSD performance.
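
To verify whether a configuration change actually improves throughput or IOPS, you can benchmark a mounted local SSD volume with fio; the mount path and job parameters below are illustrative only:

    # Random-read IOPS benchmark against a test file on the local SSD mount (path is a placeholder).
    sudo fio --name=read_iops_test \
        --filename=/mnt/disks/local-ssd/fio_testfile --filesize=10G \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=256 --numjobs=8 --runtime=60 --time_based --group_reporting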