About Local SSDs


Compute Engine offers always-encrypted local solid-state drive (SSD) block storage for virtual machine (VM) instances. Each Local SSD disk is 375 GiB in size. You can attach a maximum of 32 Local SSD disks, for a total of 12 TB, to a single VM. Optionally, you can format and mount multiple Local SSD disks into a single logical volume.
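
For example, the following sketch combines several attached Local SSD disks into a single RAID 0 volume with mdadm, then formats and mounts it. The device paths, array name, and mount point are illustrative assumptions; substitute the symbolic links that appear under /dev/disk/by-id/ on your VM.

    # Combine four NVMe Local SSD disks into one RAID 0 array (device paths are examples).
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/disk/by-id/google-local-nvme-ssd-0 \
        /dev/disk/by-id/google-local-nvme-ssd-1 \
        /dev/disk/by-id/google-local-nvme-ssd-2 \
        /dev/disk/by-id/google-local-nvme-ssd-3

    # Format the array and mount it (the mount point is an example).
    sudo mkfs.ext4 -F /dev/md0
    sudo mkdir -p /mnt/disks/local-ssd
    sudo mount /dev/md0 /mnt/disks/local-ssd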

Unlike Persistent Disk, Local SSD disks are physically attached to the server that hosts your VM instance. This tight coupling offers superior performance, very high input/output operations per second (IOPS), and very low latency compared to Persistent Disk. See Configure disks to meet performance requirements for details.

Local SSDs are suitable only for temporary storage, such as caches, processing space, or low-value data. To store data that is not temporary or ephemeral in nature, use one of our durable storage options.

Local SSD data persistence

Before you create a VM with Local SSD storage, you must understand which events preserve your Local SSD data and which events can cause your Local SSD data to be unrecoverable.

The following information applies to each Local SSD disk attached to a VM.

Scenarios where Compute Engine persists Local SSD data

Data on Local SSDs persists only through the following events:

  • If you reboot the guest operating system.
  • If you configure your VM for live migration and the VM goes through a host maintenance event.

Scenarios where Compute Engine might not persist Local SSD data

Data on Local SSD disks might be lost if a host error occurs on the VM and Compute Engine can't reconnect the VM to the Local SSD disk within a specified time.

You can control how much time, if any, is spent attempting to recover the data with the Local SSD recovery timeout. If Compute Engine cannot reconnect to the disk before the timeout expires, the VM is restarted. When the VM is restarted, the Local SSD data is unrecoverable. Compute Engine attaches a blank Local SSD disk to the restarted VM.

The Local SSD recovery timeout is part of a VM's host maintenance policy. For more information, see Local SSD recovery timeout.
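
As a sketch, you can set this timeout when you create the VM. The flag name and value shown here (--local-ssd-recovery-timeout, in hours) reflect our understanding of the gcloud option for this setting; confirm the exact spelling and allowed range in the gcloud reference. The VM name, zone, and machine type are placeholders.

    # Hypothetical example: ask Compute Engine to spend up to 3 hours
    # trying to recover Local SSD data after a host error.
    gcloud compute instances create my-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-16 \
        --local-ssd=interface=NVME \
        --local-ssd-recovery-timeout=3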

Scenarios where Compute Engine does not persist Local SSD data

Data on Local SSDs does not persist through the following events:

  • If you shut down the guest operating system and force the VM to stop.
  • If you configure the VM to be preemptible and the VM goes through the preemption process.
  • If you configure the VM to stop on host maintenance events and the VM goes through a host maintenance event.
  • If you misconfigure the Local SSD so that it becomes unreachable.
  • If you disable project billing, causing the VM to stop.

If Compute Engine can't recover a VM's Local SSD data, Compute Engine restarts the VM and attaches a blank Local SSD disk for each previously attached Local SSD disk.

Stop a VM with Local SSD

By default, when you stop or suspend a VM, all data on the Local SSD is discarded. To stop or suspend a VM with attached Local SSDs, use one of the following methods.

  • You can stop or suspend a VM with a Local SSD by including the --discard-local-ssd or --discard-local-ssd=True option in the gcloud compute instances stop and gcloud compute instances suspend commands. This option indicates that the contents of the Local SSD are discarded when the VM stops.

  • If you want to preserve the contents of the Local SSD, you can stop or suspend the VM with --discard-local-ssd=False. This begins a managed migration of the Local SSD data to persistent storage when the VM is stopped or suspended. You are charged for the additional storage used while the VM is not running. See the suspend documentation for more details. You might have to remount the Local SSD in the file system when the VM is restarted. See the sketch after this list.

  • You can also shut down the VM from within the guest OS. This will not preserve Local SSD data.
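
A minimal sketch of both paths follows; the VM name and zone are placeholders.

    # Stop the VM and discard Local SSD contents (the default behavior).
    gcloud compute instances stop my-vm --zone=us-central1-a --discard-local-ssd=True

    # Suspend the VM and preserve Local SSD contents by migrating them
    # to persistent storage (Preview; extra storage charges apply).
    gcloud compute instances suspend my-vm --zone=us-central1-a --discard-local-ssd=False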

Restrictions

  • --discard-local-ssd=False is currently in public Preview and is not covered under the GA terms for Compute Engine.
  • Currently, Compute Engine supports --discard-local-ssd=False only on VMs with at most 16 attached Local SSD disks.
  • Saving the Local SSD data is a slow process. Copying the Local SSD data begins only after the suspend or stop request is received.
  • When using Spot VMs or preemptible VMs, preemption can happen at any time and might interrupt a suspend or resume attempt. In this case, the VM is STOPPED (preempted), not SUSPENDED, and no Local SSD data is retained in persistent storage when the VM resumes or restarts.

Before you delete a VM with Local SSDs, make sure that you migrate your critical data from the Local SSD to a persistent disk or to another VM.
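
For example, assuming the Local SSD volume and a persistent disk are mounted at the hypothetical paths shown below, a plain rsync is enough to copy the data off the Local SSD before you delete the VM.

    # Copy everything from the Local SSD mount point to a persistent disk
    # mount point (both paths are illustrative assumptions).
    sudo rsync -a --info=progress2 /mnt/disks/local-ssd/ /mnt/disks/persistent-disk/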

If Local SSDs do not meet your redundancy or flexibility requirements, you can use Local SSDs in combination with other storage options.

12 TB maximum capacity

The following table provides an overview of Local SSD capacity and estimated performance using NVMe. To reach maximum performance limits, use the largest possible machine type.

Storage space | Partitions | Read IOPS | Write IOPS | Read throughput (MB/s) | Write throughput (MB/s)
3 TB | 8 | 680,000 | 360,000 | 2,650 | 1,400
6 TB | 16 | 1,600,000 | 800,000 | 6,240 | 3,120
9 TB | 24 | 2,400,000 | 1,200,000 | 9,360 | 4,680
12 TB | 32 | 3,200,000 | 1,600,000 | 12,480 | 6,240

Choosing a valid number of Local SSDs

For most machine types on Compute Engine, you can attach Local SSD disks. For some machine types, Local SSD disks are attached automatically when you create the VM.

Depending on the machine type of the VM, there are constraints on the number of Local SSD disks that you can attach to a VM instance. Based on the machine type, you can attach from 1 to 32 Local SSD disks to a single VM, as shown in the following table.

Machine types | Number of Local SSD disks allowed per VM instance

N1 machine types
All N1 machine types | 1 to 8, 16, or 24

N2 machine types
Machine types with 2 to 10 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24
Machine types with 12 to 20 vCPUs, inclusive | 2, 4, 8, 16, or 24
Machine types with 22 to 40 vCPUs, inclusive | 4, 8, 16, or 24
Machine types with 42 to 80 vCPUs, inclusive | 8, 16, or 24
Machine types with 82 to 128 vCPUs, inclusive | 16 or 24

N2D machine types
Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24
Machine types with 32 or 48 vCPUs | 2, 4, 8, 16, or 24
Machine types with 64 or 80 vCPUs | 4, 8, 16, or 24
Machine types with 96 to 224 vCPUs, inclusive | 8, 16, or 24

C2 machine types
Machine types with 4 or 8 vCPUs | 1, 2, 4, or 8
Machine types with 16 vCPUs | 2, 4, or 8
Machine types with 30 vCPUs | 4 or 8
Machine types with 60 vCPUs | 8

C2D machine types
Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, or 8
Machine types with 32 vCPUs | 2, 4, or 8
Machine types with 56 vCPUs | 4 or 8
Machine types with 112 vCPUs | 8

C3 machine types (Local SSD disks are attached automatically)
c3-standard-4-lssd | 1
c3-standard-8-lssd | 2
c3-standard-22-lssd | 4
c3-standard-44-lssd | 8
c3-standard-88-lssd | 16
c3-standard-176-lssd | 32

C3D machine types (Preview; Local SSD disks are attached automatically)
c3d-standard-8-lssd | 1
c3d-standard-16-lssd | 1
c3d-standard-30-lssd | 2
c3d-standard-60-lssd | 4
c3d-standard-90-lssd | 8
c3d-standard-180-lssd | 16
c3d-standard-360-lssd | 32

A2 standard machine types
a2-highgpu-1g | 1, 2, 4, or 8
a2-highgpu-2g | 2, 4, or 8
a2-highgpu-4g | 4 or 8
a2-highgpu-8g or a2-megagpu-16g | 8

A2 ultra machine types
a2-ultragpu-1g | 1
a2-ultragpu-2g | 2
a2-ultragpu-4g | 4
a2-ultragpu-8g | 8

G2 machine types
g2-standard-4 | 1
g2-standard-8 | 1
g2-standard-12 | 1
g2-standard-16 | 1
g2-standard-24 | 2
g2-standard-32 | 1
g2-standard-48 | 4
g2-standard-96 | 8

M1 machine types
m1-ultramem-40 | Not available
m1-ultramem-80 | Not available
m1-megamem-96 | 1 to 8
m1-ultramem-160 | Not available

M3 machine types
m3-ultramem-32 | 4 or 8
m3-megamem-64 | 4 or 8
m3-ultramem-64 | 4 or 8
m3-megamem-128 | 8
m3-ultramem-128 | 8

E2, Tau T2D, Tau T2A, and M2 machine types
These machine types don't support Local SSD disks.

Choosing an interface

You can connect Local SSDs to your VMs using either an NVMe interface or a SCSI interface. Most public images include both NVMe and SCSI drivers. For public images that support SCSI, multi-queue SCSI is enabled. For a detailed list, see the Interfaces tab for each table in the operating system details documentation.
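
You choose the interface per disk when you create the VM. The following is a hedged sketch with placeholder names; use interface=SCSI instead of interface=NVME to attach disks over the SCSI interface.

    # Attach two Local SSD disks over NVMe to a new VM.
    gcloud compute instances create my-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --image-family=debian-12 --image-project=debian-cloud \
        --local-ssd=interface=NVME \
        --local-ssd=interface=NVME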

Considerations for NVMe for custom images

Most images include a kernel with optimized drivers that allow your VM to achieve the best performance using NVMe. Your imported custom Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.
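
To check whether an imported custom Linux image meets that bar, you can inspect the kernel version and confirm that the NVMe driver is present. This is a minimal sketch; the /boot/config path is a distro-specific assumption.

    # Print the running kernel version; it should be 4.14.68 or later for best NVMe performance.
    uname -r

    # Confirm that the NVMe driver is available (loaded as a module or built into the kernel).
    lsmod | grep -i nvme || grep -i CONFIG_BLK_DEV_NVME /boot/config-"$(uname -r)"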

Considerations for SCSI for custom images

If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface.

If you are using a custom image that you imported, see Enable multi-queue SCSI.
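
As a sketch, enabling multi-queue SCSI for an imported image involves tagging the image with the VIRTIO_SCSI_MULTIQUEUE guest OS feature when you create it; the image and source-disk names below are placeholders, and the full procedure is described in Enable multi-queue SCSI.

    # Create a custom image that advertises multi-queue SCSI support to Compute Engine.
    gcloud compute images create my-custom-image \
        --source-disk=my-source-disk \
        --source-disk-zone=us-central1-a \
        --guest-os-features=VIRTIO_SCSI_MULTIQUEUE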

Device naming

The Linux device names for the disks attached to your VM depend on the interface that you choose when creating the disks. When you use the lsblk operating system command to view your disk devices, it displays the prefix nvme for disks attached with the NVMe interface, and the prefix sd for disks attached with the SCSI interface.

The ordering of the disk numbers or NVMe controllers is not predictable or consistent across VM restarts. On the first boot, a persistent disk might be nvme0n1 or sda. On the second boot, the device name for the same persistent disk might be nvme2n1, nvme0n3, or sdc.

When accessing attached disks, you should use the symbolic links created in /dev/disk/by-id/ instead. These names persist across reboots.
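
For example, on images provided by Compute Engine the stable links typically look like the following; the exact names depend on the image and the interface, so treat these as assumptions and check your own VM.

    # List the stable disk symlinks; Local SSD disks usually appear as
    # google-local-nvme-ssd-N (NVMe) or google-local-ssd-N (SCSI).
    ls -l /dev/disk/by-id/google-*

    # Mount a Local SSD by its stable name instead of by nvme0n1 or sdb,
    # so the command keeps working even if device ordering changes.
    sudo mount /dev/disk/by-id/google-local-nvme-ssd-0 /mnt/disks/local-ssd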

SCSI device names

The format of a SCSI-attached disk device is sda for the first attached disk. The disk partitions appear as sda1, sda2, and so on. Each additional disk uses a sequential letter, such as sdb, sdc, and so on. When sdz is reached, the next disks added have the names sdaa, sdab, sdac, and so on, up to sddx.

NVMe device names

The format of an NVMe-attached disk device name in Linux operating systems is nvme<number>n<namespace>. The <number> portion is the NVMe disk controller number, and <namespace> is an NVMe namespace ID assigned by that controller. For partitions, p<n> is appended to the device name, where <n> is a number, starting with 1, that denotes the nth partition.

The controller number starts at 0. A single NVMe disk attached to your VM has a device name of nvme0n1. Most machine types use a single NVMe disk controller. The Local SSD device names are then nvme0n1, nvme0n2, nvme0n3, and so on.

Local SSDs on C3 and C3D VMs have a separate NVMe controller for each disk. So, on C3 and C3D VMs, Local SSD NVMe-attached device names look like nvme0n1, nvme1n1, nvme2n1, and so on. The number of attached Local SSDs depends on the machine type of your C3 VM or C3D VM.

C3 and C3D VMs use NVMe for both persistent disks and Local SSDs. Each VM has 1 NVMe controller for persistent disks and 1 NVMe controller for each Local SSD. The persistent disk NVMe controller has a single NVMe namespace for all attached persistent disks. So, a VM with 2 persistent disks (each with 2 partitions) and 2 Local SSDs uses the following device names on a C3 or C3D VM:

  • nvme0n1 - first Persistent Disk
  • nvme0n1p1
  • nvme0n1p2
  • nvme0n2 - second Persistent Disk
  • nvme0n2p1
  • nvme0n2p2
  • nvme1n1 - first Local SSD
  • nvme2n1 - second Local SSD

Performance

Local SSD performance depends heavily on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces. If you choose to use NVMe, you must use a special NVMe-enabled image to achieve the best performance. For more information, see Choosing an interface.

To reach maximum performance limits with an N1 machine type, use 32 or more vCPUs. To reach maximum performance limits on an N2, N2D, or A2 machine type, use 24 or more vCPUs. Read and write IOPS are expected to be 20% lower on VMs with N2D machine types, compared to VMs with N1, N2, or A2 machine types.

Note that reading and writing to Local SSDs requires CPU cycles from your virtual machine. To achieve high and consistent IOPS levels, you must have free CPUs to process input and output operations. To learn more, see Configure disks to meet performance requirements.

NVMe

Storage space (GiB) | Partitions | Read IOPS | Write IOPS | Read throughput (MB/s) | Write throughput (MB/s)
375 | 1 | 170,000 | 90,000 | 660 | 350
750 | 2 | 340,000 | 180,000 | 1,320 | 700
1,125 | 3 | 510,000 | 270,000 | 1,980 | 1,050
1,500 | 4 | 680,000 | 360,000 | 2,650 | 1,400
1,875 | 5 | 680,000 | 360,000 | 2,650 | 1,400
2,250 | 6 | 680,000 | 360,000 | 2,650 | 1,400
2,625 | 7 | 680,000 | 360,000 | 2,650 | 1,400
3,000 | 8 | 680,000 | 360,000 | 2,650 | 1,400
6,000 | 16 | 1,600,000 | 800,000 | 6,240 | 3,120
9,000 | 24 | 2,400,000 | 1,200,000 | 9,360 | 4,680
12,000 | 32 | 3,200,000 | 1,600,000 | 12,480 | 6,240

SCSI

Storage space (GiB) | Partitions | Read IOPS | Write IOPS | Read throughput (MB/s) | Write throughput (MB/s)
375 | 1 | 100,000 | 70,000 | 390 | 270
750 | 2 | 200,000 | 140,000 | 780 | 550
1,125 | 3 | 300,000 | 210,000 | 1,170 | 820
1,500 | 4 | 400,000 | 280,000 | 1,560 | 1,090
1,875 | 5 | 400,000 | 280,000 | 1,560 | 1,090
2,250 | 6 | 400,000 | 280,000 | 1,560 | 1,090
2,625 | 7 | 400,000 | 280,000 | 1,560 | 1,090
3,000 | 8 | 400,000 | 280,000 | 1,560 | 1,090
6,000 | 16 | 900,000 | 800,000 | 6,240 | 3,120
9,000 | 24 | 900,000 | 800,000 | 9,360 | 4,680

Optimizing Local SSD performance

There are several VM and disk configuration settings that can improve Local SSD performance. For more information, see Optimizing Local SSD performance.
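
A common way to verify what your configuration actually delivers is a quick fio run against a file on the mounted Local SSD volume. The mount point and job parameters below are illustrative assumptions, not tuned recommendations.

    # Measure random-read IOPS with 4 KiB blocks and a deep queue.
    sudo fio --name=read_iops_test \
        --filename=/mnt/disks/local-ssd/fio-test-file --size=10G \
        --ioengine=libaio --direct=1 --rw=randread --bs=4K \
        --iodepth=256 --numjobs=8 --time_based --runtime=60 \
        --group_reporting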

Local SSDs and preemptible VM instances

You can start a Spot VM or preemptible VM instance with a Local SSD, and Compute Engine charges you discounted Spot prices for the Local SSD usage. Local SSDs attached to Spot VMs or preemptible instances work like normal Local SSDs, retain the same data persistence characteristics, and remain attached for the life of the VM.
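
The following is a sketch of creating a Spot VM with a Local SSD disk attached; the VM name, zone, and machine type are placeholders.

    # Create a Spot VM with one NVMe Local SSD disk; Spot pricing applies to the Local SSD too.
    gcloud compute instances create my-spot-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --provisioning-model=SPOT \
        --local-ssd=interface=NVME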

Compute Engine doesn't charge you for Local SSDs if the VMs are preempted in the first minute after they start running.

For more information about Local SSDs, see Adding Local SSDs.

Reserving Local SSDs with committed use discounts

To reserve Local SSD resources in a specific zone, see Reservations of Compute Engine zonal resources.
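
For example, a zonal reservation that includes Local SSD disks might be created as sketched below. The reservation name, counts, and machine type are placeholders, and the exact --local-ssd syntax for reservations should be confirmed in the gcloud reference.

    # Reserve 10 VMs in a single zone, each with two 375 GiB NVMe Local SSD disks.
    gcloud compute reservations create my-reservation \
        --zone=us-central1-a \
        --vm-count=10 \
        --machine-type=n2-standard-32 \
        --local-ssd=size=375,interface=NVME \
        --local-ssd=size=375,interface=NVME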

To receive committed use discounts for Local SSDs in a specific zone, you must create and attach reservations to the commitments that you purchase for those Local SSD resources. For more information, see Attach reservations to commitments.