Compute Engine offers always-encrypted local solid-state drive (SSD) block storage for virtual machine (VM) instances. Each Local SSD disk is 375 GiB in size. You can attach a maximum of 12 TiB (or 32 Local SSD disks) to a single VM. Optionally, you can format and mount multiple Local SSD disks into a single logical volume.
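For example, on Linux you could combine two Local SSD disks into a single RAID 0 volume with the standard `mdadm` tool. This is a command sketch, not runnable as-is: the `google-local-nvme-ssd-*` symlink names and the mount point are assumptions for illustration.

```shell
# Combine two NVMe-attached Local SSD disks into one RAID 0 array.
# Note: the combined volume is still ephemeral Local SSD storage.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/disk/by-id/google-local-nvme-ssd-0 \
    /dev/disk/by-id/google-local-nvme-ssd-1

# Format the array and mount it at a hypothetical mount point.
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/local-ssd
sudo mount /dev/md0 /mnt/disks/local-ssd
```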
Unlike Persistent Disk, Local SSD disks are physically attached to the server that hosts your VM instance. This tight coupling offers superior performance, very high input/output operations per second (IOPS), and very low latency compared to Persistent Disk. See Configure disks to meet performance requirements for details.
Local SSDs are suitable only for temporary storage such as caches, processing space, or low-value data. To store data that is not temporary or ephemeral in nature, use one of our durable storage options.
Local SSD data persistence
Before you create a VM with Local SSD storage, you must understand which events preserve your Local SSD data and which events can cause your Local SSD data to be unrecoverable.
The following information applies to each Local SSD disk attached to a VM.
Scenarios where Compute Engine persists Local SSD data
Data on Local SSDs persists only through the following events:
- If you reboot the guest operating system.
- If you configure your VM for live migration and the VM goes through a host maintenance event.
Scenarios where Compute Engine might not persist Local SSD data
Data on Local SSD disks might be lost if a host error occurs on the VM and Compute Engine can't reconnect the VM to the Local SSD disk within a specified time.
You can control how much time, if any, is spent attempting to recover the data with the Local SSD recovery timeout. If Compute Engine cannot reconnect to the disk before the timeout expires, the VM is restarted. When the VM is restarted, the Local SSD data is unrecoverable. Compute Engine attaches a blank Local SSD disk to the restarted VM.
The Local SSD recovery timeout is part of a VM's host maintenance policy. For more information, see Local SSD recovery timeout.
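For illustration, the recovery timeout can be set when you create the VM. This sketch assumes the `--local-ssd-recovery-timeout` flag (value in hours) and placeholder instance values; it is not runnable outside a configured gcloud environment.

```shell
# Hypothetical example: create a VM with one Local SSD and ask Compute Engine
# to spend up to 3 hours trying to reconnect the disk after a host error.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --local-ssd=interface=NVME \
    --local-ssd-recovery-timeout=3
```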
Scenarios where Compute Engine does not persist Local SSD data
Data on Local SSDs does not persist through the following events:
- If you shut down the guest operating system and force the VM to stop.
- If you configure the VM to be preemptible and the VM goes through the preemption process.
- If you configure the VM to stop on host maintenance events and the VM goes through a host maintenance event.
- If you misconfigure the Local SSD so that it becomes unreachable.
- If you disable project billing, causing the VM to stop.
If Compute Engine is unable to recover a VM's Local SSD data, Compute Engine restarts the VM and attaches a blank Local SSD disk for each previously attached Local SSD disk.
Stop a VM with Local SSD
When you stop or suspend a VM, all data on the Local SSD is discarded. To stop or suspend a VM with attached Local SSDs, you can use one of the following methods.
You can stop or suspend a VM with a Local SSD by including the `--discard-local-ssd=True` option in the `gcloud compute instances stop` and `gcloud compute instances suspend` commands. This option indicates that the contents of the Local SSD are discarded when the VM stops.
If you want to preserve the contents of the Local SSD, stop or suspend the VM with `--discard-local-ssd=False`. This begins a managed migration of Local SSD data to persistent storage when the VM stops or is suspended. You are charged for the additional storage used while the VM is not running. For details, see the suspend documentation. You might have to remount the Local SSD into the file system when the VM restarts.
You can also shut down the VM from within the guest OS. This will not preserve Local SSD data.
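For example (placeholder VM name and zone; a command sketch, not runnable outside a configured gcloud environment):

```shell
# Stop the VM and discard Local SSD contents (no extra storage charges):
gcloud compute instances stop my-vm --zone=us-central1-a --discard-local-ssd=True

# Stop the VM and migrate Local SSD contents to persistent storage (Preview):
gcloud compute instances stop my-vm --zone=us-central1-a --discard-local-ssd=False
```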
`--discard-local-ssd=False` is currently in public preview only and is not covered under the GA terms for Compute Engine.
- Currently, Compute Engine supports `--discard-local-ssd=False` only on VMs with at most 16 attached Local SSDs.
- Saving the Local SSD data is a slow process. Copying of the Local SSD data begins only after the `stop` request is received.
- When using Spot VMs or preemptible VMs, preemption can happen at any time and might interrupt a suspend or resume attempt. In this case, the VM is STOPPED (preempted), not SUSPENDED, and no Local SSD data is retained in persistent storage when the VM resumes or restarts.
Before you delete a VM with Local SSDs, make sure that you migrate your critical data from the Local SSD to a persistent disk or to another VM.
If Local SSDs do not meet your redundancy or flexibility requirements, you can use Local SSDs in combination with other storage options.
12 TiB maximum capacity
The following table provides an overview of Local SSD capacity and estimated performance using NVMe. To reach maximum performance limits, use the largest possible machine type.
|Storage space||Partitions||Read IOPS||Write IOPS||Read throughput||Write throughput|
Choosing a valid number of Local SSDs
You can attach Local SSDs to most machine types on Compute Engine. For some machine types, Local SSD disks are attached automatically.
Depending on the machine type of the VM, there are constraints on the valid number of Local SSD disks allowed on the VM instance. Based on the machine type, you can attach from 1 to 24 Local SSDs to a single VM, as shown in the following table.
|Machine types||Local SSD is automatically attached||Number of Local SSD disks allowed per VM instance|
|N1 machine types|
|All N1 machine types||—||1 to 8, 16, or 24|
|N2 machine types|
|Machine types with 2 to 10 vCPUs, inclusive||—||1, 2, 4, 8, 16, or 24|
|Machine types with 12 to 20 vCPUs, inclusive||—||2, 4, 8, 16, or 24|
|Machine types with 22 to 40 vCPUs, inclusive||—||4, 8, 16, or 24|
|Machine types with 42 to 80 vCPUs, inclusive||—||8, 16, or 24|
|Machine types with 82 to 128 vCPUs, inclusive||—||16 or 24|
|N2D machine types|
|Machine types with 2 to 16 vCPUs, inclusive||—||1, 2, 4, 8, 16, or 24|
|Machine types with 32 or 48 vCPUs||—||2, 4, 8, 16, or 24|
|Machine types with 64 or 80 vCPUs||—||4, 8, 16, or 24|
|Machine types with 96 to 224 vCPUs, inclusive||—||8, 16, or 24|
|C2 machine types|
|Machine types with 4 or 8 vCPUs||—||1, 2, 4, or 8|
|Machine types with 16 vCPUs||—||2, 4, or 8|
|Machine types with 30 vCPUs||—||4 or 8|
|Machine types with 60 vCPUs||—||8|
|C2D machine types|
|Machine types with 2 to 16 vCPUs, inclusive||—||1, 2, 4, or 8|
|Machine types with 32 vCPUs||—||2, 4, or 8|
|Machine types with 56 vCPUs||—||4 or 8|
|Machine types with 112 vCPUs||—||8|
|C3 machine types|
|C3D machine types (Preview)|
|A2 standard machine types|
||—||1, 2, 4, or 8|
||—||2, 4, or 8|
||—||4 or 8|
|A2 ultra machine types|
|G2 machine types|
|M1 machine types|
||—||1 to 8|
|M3 machine types|
|E2, Tau T2D, Tau T2A, and M2 machine types||These machine types don't support Local SSD disks.|
Choosing an interface
You can connect Local SSDs to your VMs using either an NVMe interface or a SCSI interface. Most public images include both NVMe and SCSI drivers. For public images that support SCSI, multi-queue SCSI is enabled. For a detailed list, see the Interfaces tab for each table in the operating system details documentation.
Considerations for NVMe for custom images
Most images include a kernel with optimized drivers that allow your VM to achieve the best performance using NVMe. Imported custom Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.
Considerations for SCSI for custom images
If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface.
If you are using a custom image that you imported, see Enable multi-queue SCSI.
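As one possible approach, multi-queue SCSI can typically be enabled on older Linux kernels through a kernel boot parameter. The following is a configuration sketch; the parameter name and GRUB file location are assumptions that depend on your distribution and kernel version:

```
# In /etc/default/grub, append the parameter to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="... scsi_mod.use_blk_mq=Y"

# Then regenerate the GRUB configuration and reboot:
sudo update-grub && sudo reboot
```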
The Linux device names for the disks attached to your VM depend on the interface that you choose when creating the disks. When you use an operating system command to view your disk devices, the output displays the prefix `nvme` for disks attached with the NVMe interface, and the prefix `sd` for disks attached with the SCSI interface.
The ordering of the disk numbers or NVMe controllers is not predictable or consistent across VM restarts. On the first boot, a persistent disk might be `sda`; on the second boot, the device name for the same persistent disk might be different. When accessing attached disks, use the symbolic links created in `/dev/disk/by-id/` instead. These names persist across reboots.
SCSI device names
The format of a SCSI-attached disk device is `sd[a-z]+`, for example, `sda` for the first attached disk. The disk partitions appear as `sda1`, `sda2`, and so on. Each additional disk uses a sequential letter, such as `sdb`, `sdc`, and so on. When `sdz` is reached, the next disks added have the names `sdaa`, `sdab`, `sdac`, and so on.
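The lettering scheme above can be illustrated with a small helper. This function is purely illustrative (not part of any Google tooling); it maps a 0-based disk index to its SCSI device name:

```python
def scsi_device_name(index: int) -> str:
    """Return the Linux SCSI device name for a 0-based disk index.

    Follows the sda..sdz, then sdaa, sdab, ... lettering pattern
    (bijective base-26 on the letters a-z).
    """
    letters = ""
    index += 1  # switch to 1-based for bijective base-26
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("a") + rem) + letters
    return "sd" + letters

print(scsi_device_name(0))   # → sda   (first disk)
print(scsi_device_name(1))   # → sdb   (second disk)
print(scsi_device_name(26))  # → sdaa  (27th disk, after sdz)
```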
NVMe device names
The format of an NVMe-attached disk device in Linux operating systems is `nvme<number>n<namespace>`, where `<number>` represents the NVMe disk controller number and `<namespace>` is an NVMe namespace ID that is assigned by the NVMe disk controller. For partitions, `p<n>` is appended to the device name, where `<n>` is a number, starting with 1, that denotes the partition number.
The controller number starts at 0. A single NVMe disk attached to your VM has a device name of `nvme0n1`. Most machine types use a single NVMe disk controller. The Local SSD device names are then `nvme0n1`, `nvme0n2`, and so on.
Local SSDs on C3 and C3D VMs have a separate NVMe controller for each disk. So, on C3 and C3D VMs, Local SSD NVMe-attached device names look like `nvme1n1`, `nvme2n1`, and so on. The number of attached Local SSDs depends on the machine type of your C3 or C3D VM.
C3 and C3D VMs use NVMe for both persistent disks and Local SSDs. Each VM has 1 NVMe controller for persistent disks and 1 NVMe controller for each Local SSD. The persistent disk NVMe controller has a single NVMe namespace for all attached persistent disks. So, a VM with 2 persistent disks (each with 2 partitions) and 2 Local SSDs uses the following device names on a C3 or C3D VM:
- `nvme0n1`: first Persistent Disk
- `nvme0n2`: second Persistent Disk
- `nvme1n1`: first Local SSD
- `nvme2n1`: second Local SSD
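The naming convention described above can be sketched as a tiny helper (illustrative only, not an official API):

```python
def nvme_device_name(controller: int, namespace: int, partition: int = None) -> str:
    """Compose a Linux NVMe device name from controller number,
    namespace ID, and optional partition number."""
    name = f"nvme{controller}n{namespace}"
    if partition is not None:
        name += f"p{partition}"
    return name

# Most machine types: one controller, disks as successive namespaces.
print(nvme_device_name(0, 1))      # → nvme0n1 (first disk)
print(nvme_device_name(0, 2))      # → nvme0n2 (second disk)
# C3/C3D: each Local SSD gets its own controller.
print(nvme_device_name(1, 1))      # → nvme1n1 (first Local SSD)
print(nvme_device_name(0, 1, 2))   # → nvme0n1p2 (second partition of first disk)
```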
Local SSD performance depends heavily on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces. If you choose to use NVMe, you must use a special NVMe-enabled image to achieve the best performance. For more information, see Choosing an interface.
To reach maximum performance limits with an N1 machine type, use 32 or more vCPUs. To reach maximum performance limits on an N2, N2D, or A2 machine type, use 24 or more vCPUs. Read and write IOPS are expected to be 20% lower on VMs with N2D machine types, compared to VMs with N1, N2, or A2 machine types.
Note that reading and writing to Local SSDs requires CPU cycles from your virtual machine. To achieve high and consistent IOPS levels, you must have free CPUs to process input and output operations. To learn more, see Configure disks to meet performance requirements.
|Storage space (GiB)||Partitions||IOPS||Throughput|
Optimizing Local SSD performance
There are several VM and disk configuration settings that can improve Local SSD performance. For more information, see Optimizing Local SSD performance.
Local SSDs and preemptible VM instances
You can start a Spot VM or preemptible VM instance with a Local SSD, and Compute Engine charges you discounted spot prices for the Local SSD usage. Local SSDs attached to Spot VMs or preemptible instances work like normal Local SSDs, retain the same data persistence characteristics, and remain attached for the life of the VM.
Compute Engine doesn't charge you for Local SSDs if the VMs are preempted in the first minute after they start running.
For more information about Local SSDs, see Adding Local SSDs.
Reserving Local SSDs with committed use discounts
To reserve Local SSD resources in a specific zone, see Reservations of Compute Engine zonal resources.
To receive committed use discounts for Local SSDs in a specific zone, you must create and attach reservations to the commitments that you purchase for those Local SSD resources. For more information, see Attach reservations to commitments.