Compute Engine offers always-encrypted local solid-state drive (SSD) block storage for virtual machine (VM) instances. Each local SSD partition is 375 GB in size, and you can attach up to 24 local SSD partitions, for a total of 9 TB per instance. Optionally, you can format and mount multiple local SSD partitions into a single logical volume.
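One common way to combine multiple local SSD partitions into a single logical volume is a RAID 0 array. The sketch below assumes four NVMe local SSD partitions with the device names `/dev/nvme0n1` through `/dev/nvme0n4`; verify the actual names on your VM with `lsblk` before running anything like this.

```shell
# Combine four local SSD partitions into a single RAID 0 logical volume.
# Device names are illustrative; confirm them with `lsblk` first.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4

# Format the array and mount it at an example mount point.
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/ssd-array
sudo mount /dev/md0 /mnt/disks/ssd-array
```

Because local SSD data does not survive a VM stop, the array must be recreated or remounted by startup scripts rather than relied on as durable storage.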
Unlike persistent disks, local SSDs are physically attached to the server that hosts your VM instance. This tight coupling offers superior performance: very high input/output operations per second (IOPS) and very low latency compared to persistent disks. For details, see Configure disks to meet performance requirements.
Local SSDs are suitable only for temporary storage such as caches, processing space, or low-value data. To store data that is not temporary or ephemeral in nature, use one of our durable storage options.
You cannot stop a VM that has a local SSD by using the gcloud CLI or the console. However, Compute Engine does not prevent you from shutting down a VM from inside the guest operating system (OS). If you shut down a VM with a local SSD through the guest OS, the data on the local SSD is lost. Make sure that you migrate your critical data from the local SSD to a persistent disk or to another VM before you shut down or delete the VM.
If local SSDs do not meet your redundancy or flexibility requirements, you can use local SSDs in combination with other storage options.
9 TB maximum capacity
You can create an instance with:
- 16 local SSD partitions for 6 TB of local SSD space and performance of 1.6 million read IOPS
- 24 local SSD partitions for 9 TB of local SSD space and performance of 2.4 million read IOPS
This is available on instances with N1, N2, N2D, and custom machine types. To achieve maximum performance on N1 machines, select a machine type with 32 or more vCPUs. To achieve maximum performance on N2 and N2D machines, select a machine type with 24 or more vCPUs.
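As a sketch of how such an instance is created, the hedged example below attaches two NVMe local SSD partitions (750 GB total) to an N2 VM with the gcloud CLI. The instance name, zone, and image are placeholders; each additional `--local-ssd` flag attaches another 375 GB partition, up to the maximum your machine type supports.

```shell
# Create an N2 VM with 24 vCPUs and two NVMe local SSD partitions.
# All names and the zone are placeholders; adjust for your project.
gcloud compute instances create example-local-ssd-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-24 \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME
```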
Note that reading and writing to local SSD disks requires CPU cycles from your virtual machine. To achieve high and consistent IOPS levels, you must have free CPUs to process input and output operations. To learn more, see Configure disks to meet performance requirements.
Local SSD data persistence
Before you create an instance with a local SSD, you must understand what events preserve your local SSD data and what events can cause your local SSD data to be unrecoverable.
Data on local SSDs persists only through the following events:
- If you reboot the guest operating system.
- If you configure your instance for live migration and the instance goes through a host maintenance event.
- If the host system experiences a host error, Compute Engine makes a best effort to reconnect to the VM and preserve the local SSD data, but might not succeed. If the attempt is successful, the VM restarts automatically. However, if the attempt to reconnect fails, the VM restarts without the data. While Compute Engine is recovering your VM and local SSD, which can take up to 60 minutes, the host system and the underlying drive are unresponsive. To configure how your VM instances behave in the event of a host error, see Setting instance availability policies.
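The live-migration behavior in the list above is governed by the VM's availability policy, which you can set on an existing instance. A minimal sketch, with the VM name and zone as placeholders:

```shell
# Configure an existing VM to live-migrate during host maintenance events,
# which preserves local SSD data, rather than stopping.
gcloud compute instances set-scheduling example-local-ssd-vm \
    --zone=us-central1-a \
    --maintenance-policy=MIGRATE
```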
Data on local SSDs does not persist through the following events:
- If you shut down the guest operating system and force the instance to stop.
- If you configure the instance to be preemptible and the instance goes through the preemption process.
- If you configure the instance to stop on host maintenance events and the instance goes through a host maintenance event.
- If the host system experiences a host error and the underlying drive does not recover within 60 minutes, Compute Engine does not attempt to preserve the data on your local SSD.
- If you misconfigure the local SSD so that it becomes unreachable.
- If you disable project billing. The instance stops and your data is lost.
Choosing an interface
You can connect local SSDs to your VMs using either an NVMe interface or a SCSI interface. Most public images include both NVMe and SCSI drivers. For public images that support SCSI, multi-queue SCSI is enabled. For a detailed list, see the supported interfaces column in the operating system details documentation.
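The interface is chosen per partition when the VM is created and cannot be changed afterward. A hedged sketch using the `--local-ssd` flag's `interface` option (names and zone are placeholders):

```shell
# Attach one local SSD partition over NVMe; use interface=SCSI instead
# if your setup requires the SCSI interface.
gcloud compute instances create example-nvme-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --local-ssd=interface=NVME
```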
Considerations for NVMe for custom images
Most images include a kernel with optimized drivers that allow your VM to achieve the best performance using NVMe. Your imported custom Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.
Considerations for SCSI for custom images
If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface. If you are using a custom image that you imported, see Enable multi-queue SCSI.
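For an imported custom image, multi-queue SCSI is enabled through a guest OS feature flag on the image itself. A sketch, assuming a source disk named `my-source-disk` and placeholder names throughout:

```shell
# Create a custom image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature,
# so that VMs created from it can use multi-queue SCSI.
gcloud compute images create my-custom-image \
    --source-disk=my-source-disk \
    --source-disk-zone=us-central1-a \
    --guest-os-features=VIRTIO_SCSI_MULTIQUEUE
```

The guest kernel must also support multi-queue SCSI for the feature to take effect.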
Choosing a valid number of local SSDs
If you are attaching multiple local SSDs to a single VM instance, the number of local SSDs you can attach is constrained by the machine type of the VM. Depending on the machine type, you can attach 1 to 8, 16, or 24 local SSDs to a single VM. For more information, see Local SSDs and machine types restrictions.
Local SSD performance depends heavily on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces. If you choose NVMe, use an image whose kernel includes optimized NVMe drivers to achieve the best performance. For more information, see Selecting the NVMe or SCSI interfaces.
To reach maximum performance limits with an N1 machine type, use 32 or more vCPUs. To reach maximum performance limits on an N2, N2D, or A2 machine type, use 24 or more vCPUs.
Optimizing local SSD performance
There are several VM and disk configuration settings that can improve local SSD performance. For more information, see Optimizing local SSD performance.