About Local SSD disks


If your workloads need high-performance, low-latency temporary storage, consider using Local solid-state drive (Local SSD) disks when you create your virtual machine (VM). Local SSD disks are always-encrypted solid-state storage for Compute Engine VMs.

Local SSD disks are ideal when you need storage for any of the following use cases:

  • Caches or storage for transient, low-value data
  • Scratch processing space for high-performance computing or data analytics
  • Temporary data storage, such as the tempdb system database for Microsoft SQL Server

Local SSD disks offer superior I/O operations per second (IOPS) and very low latency compared to Persistent Disk and Google Cloud Hyperdisk, because Local SSD disks are physically attached to the server that hosts your VM. For the same reason, Local SSD disks can provide only temporary storage.

Because Local SSD is suitable only for temporary storage, you must store data that is not temporary or ephemeral on one of the durable storage options.

Each Local SSD disk comes in a fixed size, and you can attach multiple Local SSD disks to a single VM when you create it. The number of Local SSD disks that you can attach to a VM depends on the VM's machine type. For more information, see Choose a valid number of Local SSD disks.
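For example, the following is a minimal sketch of attaching two Local SSD disks at creation time with the gcloud CLI; the VM name, zone, and machine type are placeholders:

    # Repeat --local-ssd once for each disk you want to attach.
    gcloud compute instances create example-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --local-ssd=interface=NVME \
        --local-ssd=interface=NVME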

If Local SSD disks don't meet your redundancy or flexibility requirements, you can use Local SSD disks in combination with other storage options.

Performance

Local SSD performance depends on several factors, including the number of attached Local SSD disks, the selected disk interface (NVMe or SCSI), and the VM's machine type. The available performance increases as you attach more Local SSD disks to your VM.

Local SSD performance by number of attached disks

The following tables list the maximum IOPS and throughput for NVMe- and SCSI-attached Local SSD disks. The metrics are listed by the total capacity of Local SSD disks attached to the VM.

NVMe Local SSD performance

# of attached     Storage space   Capacity     IOPS                      Throughput (MB/s)
Local SSD disks   (TiB)           per disk     Read         Write        Read      Write
1                 0.375           375 GiB      170,000      90,000       660       350
2                 0.750           375 GiB      340,000      180,000      1,320     700
3                 1.125           375 GiB      510,000      270,000      1,980     1,050
4                 1.5             375 GiB      680,000      360,000      2,650     1,400
5                 1.875           375 GiB      680,000      360,000      2,650     1,400
6                 2.25            375 GiB      680,000      360,000      2,650     1,400
7                 2.625           375 GiB      680,000      360,000      2,650     1,400
8                 3               375 GiB      680,000      360,000      2,650     1,400
16                6               375 GiB      1,600,000    800,000      6,240     3,120
24                9               375 GiB      2,400,000    1,200,000    9,360     4,680
32                12              375 GiB      3,200,000    1,600,000    12,480    6,240
12                36              3 TiB *      6,000,000    6,000,000    36,000    30,000

* Only available with the Z3 (Preview) machine series.

SCSI Local SSD performance

# of combined     Storage space   IOPS                      Throughput (MB/s)
Local SSD disks   (TiB)           Read         Write        Read      Write
1                 0.375           100,000      70,000       390       270
2                 0.750           200,000      140,000      780       550
3                 1.125           300,000      210,000      1,170     820
4                 1.500           400,000      280,000      1,560     1,090
5                 1.875           400,000      280,000      1,560     1,090
6                 2.250           400,000      280,000      1,560     1,090
7                 2.625           400,000      280,000      1,560     1,090
8                 3               400,000      280,000      1,560     1,090
16                6               900,000      800,000      6,240     3,120
24                9               900,000      800,000      9,360     4,680

Configure your VM to maximize performance

To reach the stated performance levels, you must configure your VM as follows:

  • Attach the Local SSD disks with the NVMe interface. Disks attached with the SCSI interface have lower performance.

  • The following machine types also require a minimum number of vCPUs to reach these maximums:

    • N2, N2D, or A2 machine types require at least 24 vCPUs.
    • N1 machine types require at least 32 vCPUs.
  • If your VM uses a custom Linux image, the image must use version 4.14.68 or later of the Linux kernel, which you can verify as shown after this list. If you use the public images provided by Compute Engine, you don't have to take any further action.
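To verify the kernel requirement on a custom Linux image, check the running kernel version from inside the VM:

    # Print the running kernel release; it must be 4.14.68 or later
    # for full NVMe Local SSD performance.
    uname -r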

For additional VM and disk configuration settings that can improve Local SSD performance, see Optimizing local SSD performance.

For more information about selecting a disk interface, see Choose a disk interface.

Local SSD data persistence

Compute Engine preserves the data on Local SSD disks in certain scenarios; in others, it doesn't guarantee Local SSD data persistence.

The following information describes these scenarios and applies to each Local SSD disk attached to a VM.

Scenarios where Compute Engine persists Local SSD data

Data on Local SSD disks persists only through the following events:

  • You restart the guest operating system.
  • You configure the VM for live migration and the VM goes through a host maintenance event.

Scenarios where Compute Engine might not persist Local SSD data

Data on Local SSD disks might be lost if a host error occurs on the VM and Compute Engine can't reconnect the VM to the Local SSD disk within a specified time.

With the Local SSD recovery timeout, you can control how much time, if any, Compute Engine spends attempting to recover the data. If Compute Engine can't reconnect to the disk before the timeout expires, the VM is restarted. When the VM restarts, the Local SSD data is unrecoverable, and Compute Engine attaches a blank Local SSD disk to the restarted VM.

The Local SSD recovery timeout is part of a VM's host maintenance policy. For more information, see Local SSD recovery timeout.
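As a sketch, you can set this timeout when you create the VM. This assumes the gcloud --local-ssd-recovery-timeout flag, which takes a value in hours; the VM name, zone, and machine type are placeholders:

    # Wait up to 3 hours for Local SSD recovery after a host error
    # before restarting the VM with blank Local SSD disks.
    gcloud compute instances create example-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --local-ssd=interface=NVME \
        --local-ssd-recovery-timeout=3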

Scenarios where Compute Engine does not persist Local SSD data

Data on Local SSD disks does not persist through the following events:

  • If you shut down the guest operating system and force the VM to stop.
  • If you configure the VM to be preemptible and the VM goes through the preemption process.
  • If you configure the VM to stop on host maintenance events and the VM goes through a host maintenance event.
  • If you misconfigure the Local SSD so that it becomes unreachable.
  • If you disable project billing, causing the VM to stop.

If Compute Engine is unable to recover a VM's Local SSD data, Compute Engine restarts the VM and attaches a blank Local SSD disk for each previously attached Local SSD disk.

Local SSD encryption

Compute Engine automatically encrypts your data when it is written to Local SSD storage space. You can't use customer-supplied encryption keys with Local SSD disks.

Local SSD data backup

Since you can't back up Local SSD data with disk images, standard snapshots, or disk clones, Google recommends that you always store valuable data on a durable storage option.

If you need to preserve the data on a Local SSD disk, attach a Persistent Disk or Google Cloud Hyperdisk to the VM. After you mount the Persistent Disk or Hyperdisk, copy the data from the Local SSD disk to the newly attached disk.
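The copy itself uses standard tools. The following sketch assumes the durable disk is already formatted and mounted; both mount points are placeholders:

    # Copy everything from the Local SSD to the durable disk,
    # preserving permissions and timestamps.
    sudo rsync -a /mnt/disks/local-ssd-0/ /mnt/disks/durable/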

Choose a disk interface

To achieve the highest Local SSD performance, you must attach your disks to the VM with the NVMe interface. Performance is lower if you use the SCSI interface.

The disk interface you choose also depends on the machine type and OS that your VM uses. Some of the available machine types in Compute Engine let you choose between the NVMe and SCSI interfaces, while others support only NVMe or only SCSI. Similarly, some of the public OS images provided by Compute Engine support both NVMe and SCSI, while others support only one of the two.

Disk interface support by machine type and OS image

The following pages provide more information about available machine types and supported public images, as well as performance details.

  • Supported interfaces by machine type: See Machine series comparison. In the Choose VM properties to compare list, select Disk interface type.

  • OS image: For a list of which public OS images provided by Compute Engine support SCSI or NVMe, see the Interfaces tab for each table in the operating system details documentation.

Considerations for NVMe for custom images

If your VM uses a custom Linux image, you must use version 4.14.68 or later of the Linux kernel for optimal NVMe performance.

Considerations for SCSI for custom images

If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface.

If you are using a custom image that you imported, see Enable multi-queue SCSI.
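As a sketch of what enabling the feature can look like when you recreate an imported image, the following assumes the VIRTIO_SCSI_MULTIQUEUE guest OS feature and placeholder image names:

    # Recreate the custom image with multi-queue SCSI enabled.
    gcloud compute images create example-image-mq \
        --source-image=example-image \
        --guest-os-features=VIRTIO_SCSI_MULTIQUEUE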

Choose a valid number of Local SSD disks

Most machine types available on Compute Engine support Local SSD disks. Some machine types always include a fixed number of Local SSD disks by default, while others allow you to add specific numbers of disks. You can only add Local SSD disks when you create the VM. You can't add Local SSD disks to a VM after you create it.

For VMs created with the Z3 (Preview) machine series, each attached disk has 3,000 GiB of capacity. For all other machine series, each disk you attach has 375 GiB of capacity.

Machine types that automatically attach Local SSD disks

The following table lists the machine types that include Local SSD disks by default, and shows how many disks are attached when you create the VM.

Machine type                Number of Local SSD disks
                            automatically attached per VM

C3 machine types (only the -lssd variants of the C3 machine types support Local SSD)
c3-standard-4-lssd          1
c3-standard-8-lssd          2
c3-standard-22-lssd         4
c3-standard-44-lssd         8
c3-standard-88-lssd         16
c3-standard-176-lssd        32

C3D machine types (only the -lssd variants of the C3D machine types support Local SSD)
c3d-standard-8-lssd         1
c3d-standard-16-lssd        1
c3d-standard-30-lssd        2
c3d-standard-60-lssd        4
c3d-standard-90-lssd        8
c3d-standard-180-lssd       16
c3d-standard-360-lssd       32

A3 machine types
a3-highgpu-8g               16

A2 ultra machine types
a2-ultragpu-1g              1
a2-ultragpu-2g              2
a2-ultragpu-4g              4
a2-ultragpu-8g              8

Z3 machine types (Preview) (each disk is 3 TiB in size)
z3-standard-88-lssd         12
z3-standard-176-lssd        12
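Because these machine types attach Local SSD disks automatically, you don't specify --local-ssd flags when you create the VM. A minimal sketch with placeholder names:

    # c3-standard-22-lssd automatically attaches 4 Local SSD disks.
    gcloud compute instances create example-c3-vm \
        --zone=us-central1-a \
        --machine-type=c3-standard-22-lssd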

Machine types that require you to choose a number of Local SSD disks

The machine types listed in the following table don't automatically attach Local SSD disks to a newly created VM. Because you can't add Local SSD disks to a VM after you create it, use the information in this section to determine how many Local SSD disks to attach when you create a VM.

Machine type                                    Number of Local SSD disks allowed per VM

N1 machine types
All N1 machine types                            1 to 8, 16, or 24

N2 machine types
Machine types with 2 to 10 vCPUs, inclusive     1, 2, 4, 8, 16, or 24
Machine types with 12 to 20 vCPUs, inclusive    2, 4, 8, 16, or 24
Machine types with 22 to 40 vCPUs, inclusive    4, 8, 16, or 24
Machine types with 42 to 80 vCPUs, inclusive    8, 16, or 24
Machine types with 82 to 128 vCPUs, inclusive   16 or 24

N2D machine types
Machine types with 2 to 16 vCPUs, inclusive     1, 2, 4, 8, 16, or 24
Machine types with 32 or 48 vCPUs               2, 4, 8, 16, or 24
Machine types with 64 or 80 vCPUs               4, 8, 16, or 24
Machine types with 96 to 224 vCPUs, inclusive   8, 16, or 24

C2 machine types
Machine types with 4 or 8 vCPUs                 1, 2, 4, or 8
Machine types with 16 vCPUs                     2, 4, or 8
Machine types with 30 vCPUs                     4 or 8
Machine types with 60 vCPUs                     8

C2D machine types
Machine types with 2 to 16 vCPUs, inclusive     1, 2, 4, or 8
Machine types with 32 vCPUs                     2, 4, or 8
Machine types with 56 vCPUs                     4 or 8
Machine types with 112 vCPUs                    8

A2 standard machine types
a2-highgpu-1g                                   1, 2, 4, or 8
a2-highgpu-2g                                   2, 4, or 8
a2-highgpu-4g                                   4 or 8
a2-highgpu-8g or a2-megagpu-16g                 8

G2 machine types
g2-standard-4                                   1
g2-standard-8                                   1
g2-standard-12                                  1
g2-standard-16                                  1
g2-standard-24                                  2
g2-standard-32                                  1
g2-standard-48                                  4
g2-standard-96                                  8

M1 machine types
m1-ultramem-40                                  Not available
m1-ultramem-80                                  Not available
m1-megamem-96                                   1 to 8
m1-ultramem-160                                 Not available

M3 machine types
m3-ultramem-32                                  4 or 8
m3-megamem-64                                   4 or 8
m3-ultramem-64                                  4 or 8
m3-megamem-128                                  8
m3-ultramem-128                                 8

E2, Tau T2D, Tau T2A, and M2 machine types      These machine types don't support Local SSD disks.

Pricing

For each Local SSD disk you create, you are billed for the total capacity of the disk for the lifetime of the VM that it is attached to.

For detailed information about Local SSD pricing and available discounts, see Local SSD pricing.

Local SSD disks and Spot VM instances

If you start a Spot VM or preemptible VM with a Local SSD disk, Compute Engine charges discounted spot prices for the Local SSD usage. Local SSD disks that are attached to Spot VMs or preemptible VMs work like normal Local SSD disks, retain the same data persistence characteristics, and remain attached for the life of the VM.

Compute Engine doesn't charge you for Local SSD disk usage on a Spot VM or preemptible VM if the VM is preempted within a minute after it starts running.

Reserving Local SSD disks with committed use discounts

To reserve Local SSD resources in a specific zone, see Reservations of Compute Engine zonal resources.

To receive committed use discounts for Local SSD disks in a specific zone, you must purchase resource-based commitments for the Local SSD resources and also attach reservations that specify matching Local SSD resources to your commitments. For more information, see Attach reservations to resource-based commitments.

Use Local SSD disks with a VM

To use a Local SSD disk with a VM, you must complete the following steps:

  • Create the VM with one or more Local SSD disks attached.
  • Format and mount each Local SSD disk before you use it.
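As a sketch of the format and mount step on a Linux VM, the following uses the google-local-nvme-ssd-0 symlink name, which the guest environment typically creates under /dev/disk/by-id/ (see Device naming on Linux VMs); the mount point is a placeholder:

    # Format the Local SSD disk with ext4 (this destroys any existing data).
    sudo mkfs.ext4 -F /dev/disk/by-id/google-local-nvme-ssd-0

    # Create a mount point and mount the disk with discard enabled.
    sudo mkdir -p /mnt/disks/local-ssd-0
    sudo mount -o discard,defaults /dev/disk/by-id/google-local-nvme-ssd-0 /mnt/disks/local-ssd-0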

Device naming on Linux VMs

The Linux device names for the disks attached to your VM depend on the interface that you choose when creating the disks. When you use the lsblk operating system command to view your disk devices, it displays the prefix nvme for disks attached with the NVMe interface, and the prefix sd for disks attached with the SCSI interface.

The ordering of the disk numbers or NVMe controllers is not predictable or consistent across VM restarts. On the first boot, a persistent disk might be nvme0n1 (or sda for SCSI). On the second boot, the device name for the same persistent disk might be nvme2n1 or nvme0n3 (or sdc for SCSI).

When accessing attached disks, you should use the symbolic links created in /dev/disk/by-id/ instead. These names persist across reboots. For more information about symlinks, see Symbolic links for disks attached to a VM.
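For example, you can list the stable names with the following command. The google-local-nvme-ssd-* naming in the comment is typical for NVMe Local SSD disks but depends on the guest environment:

    # List stable device symlinks; NVMe Local SSD disks typically appear
    # as google-local-nvme-ssd-0, google-local-nvme-ssd-1, and so on.
    ls -l /dev/disk/by-id/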

SCSI device names

The device name of a SCSI-attached disk is sda for the first attached disk, and its partitions appear as sda1, sda2, and so on. Each additional disk uses a sequential letter, such as sdb and sdc. When sdz is reached, the next disks added have names such as sdaa, sdab, and sdac, up to sddx.

NVMe device names

The device name of an NVMe-attached disk in Linux operating systems has the format nvme<number>n<namespace>, where <number> is the NVMe disk controller number and <namespace> is an NVMe namespace ID assigned by the NVMe disk controller. For partitions, p<n> is appended to the device name, where <n> is a number, starting with 1, that denotes the nth partition.

The controller number starts at 0. A single NVMe disk attached to your VM has a device name of nvme0n1. Most machine types use a single NVMe disk controller. The Local SSD device names are then nvme0n1, nvme0n2, nvme0n3, and so on.

Local SSD disks on third generation machine series VMs (C3, C3D, Z3 (Preview), and H3) have a separate NVMe controller for each disk. On these VMs, the Local SSD NVMe-attached device names look like nvme0n1, nvme1n1, and nvme2n1. The number of attached Local SSD disks depends on the machine type of your VM.

Third generation machine series VMs use NVMe for Persistent Disk and Hyperdisk as well as for Local SSD disks. Each VM has one NVMe controller for Persistent Disk and Hyperdisk and one NVMe controller for each Local SSD disk. The Persistent Disk and Hyperdisk NVMe controller has a single NVMe namespace for all attached disks. So, a third generation machine series VM with 2 Persistent Disk volumes (each with 2 partitions) and 2 unformatted Local SSD disks uses the following device names:

  • nvme0n1 - first Persistent Disk
    • nvme0n1p1
    • nvme0n1p2
  • nvme0n2 - second Persistent Disk
    • nvme0n2p1
    • nvme0n2p2
  • nvme1n1 - first Local SSD
  • nvme2n1 - second Local SSD
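To see how these controllers, namespaces, and partitions map to devices on a running VM, you can list the block devices:

    # Show each device, its size, its type (disk or partition),
    # and where it is mounted.
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT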

Stop a VM with Local SSD

When you stop or suspend a VM, Compute Engine discards the data of any Local SSD disks attached to the VM by default.

If you want to preserve the data of the Local SSD disks attached to the VM, you must stop or suspend the VM by using the gcloud CLI and including the --discard-local-ssd=false flag. This begins a managed migration of Local SSD data to persistent storage, and you're charged for the additional storage utilization until you restart the VM. You might have to remount the Local SSD disk into the file system after restarting the VM.
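For example, a minimal sketch with placeholder VM and zone names:

    # Stop the VM and preserve its Local SSD data in persistent storage.
    gcloud compute instances stop example-vm \
        --zone=us-central1-a \
        --discard-local-ssd=false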

Restrictions

  • --discard-local-ssd=false is in public preview only and not covered under the GA terms for Compute Engine.
  • Compute Engine only supports using --discard-local-ssd=false in VMs with at most 16 Local SSD disks attached.
  • You can't preserve Local SSD data if you stop or suspend a VM from the Google Cloud console. You must use the Google Cloud CLI, the Cloud Client Libraries, or the Compute Engine API.
  • Saving the Local SSD data is a slow process. Copying the Local SSD data begins only after the suspend or stop request is received.
  • When using Spot VMs or preemptible VMs, preemption may happen at any time, and might interrupt a suspend or resume attempt. In this case, the VM is STOPPED (preempted), not SUSPENDED, and no Local SSD data is retained in persistent storage when the VM resumes or restarts.

Remove Local SSD disks

To remove or delete Local SSD disks, you delete the VM they are attached to. Before you delete a VM with Local SSD disks, make sure that you migrate any critical data from the Local SSD disk to a Persistent Disk, Hyperdisk, or to another VM.