About Hyperdisks


Hyperdisk is the newest generation of network block storage service in Google Cloud. Designed for the most demanding mission-critical applications, Hyperdisk offers a scalable, high-performance storage service with a comprehensive suite of data persistence and management capabilities. With Hyperdisk you can easily provision, manage, and scale your Compute Engine workloads without the cost and complexity of a typical on-premises storage area network (SAN).

Hyperdisk storage capacity is partitioned and made available to virtual machine (VM) instances as individual volumes. Hyperdisk volumes are decoupled from VMs, so you can attach, detach, and move volumes between VM instances. Data stored in Hyperdisk volumes persists across VM instance reboots and deletions.

Hyperdisk volumes have the following features:

  • A Hyperdisk volume is mounted as a disk in a VM instance using an NVMe or SCSI interface, depending on the machine type of the VM.
  • Hyperdisk volumes feature substantially higher maximum IOPS and throughput than Persistent Disks. Unlike Persistent Disks, where performance for a single VM instance is shared across all volumes attached to the VM, each Hyperdisk volume delivers the maximum rated IOPS and throughput. You can add multiple Hyperdisk volumes to a single VM instance.
  • You can scale performance and capacity independently for Hyperdisk volumes. Once every 4 hours, you can adjust the provisioned IOPS and the size of a volume to match your storage performance and capacity needs. Provisioned IOPS can be increased or decreased, but capacity can only be increased.
  • The maximum Hyperdisk volume size is 64 TB, and you can provision up to 257 TB of total disk space for your VM.
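As a sketch of how provisioning looks in practice, the following gcloud commands create a Hyperdisk Extreme volume with provisioned capacity and IOPS and attach it to an existing VM. The disk name, VM name, zone, size, and IOPS values are illustrative placeholders; check `gcloud compute disks create --help` for the flags available in your gcloud version.

```shell
# Create a Hyperdisk Extreme volume with provisioned capacity and IOPS.
# Names, zone, size, and IOPS values are placeholders for illustration.
gcloud compute disks create my-hyperdisk \
    --type=hyperdisk-extreme \
    --size=500GB \
    --provisioned-iops=50000 \
    --zone=us-central1-a

# Attach the volume to an existing VM in the same zone.
gcloud compute instances attach-disk my-vm \
    --disk=my-hyperdisk \
    --zone=us-central1-a
```

Because IOPS and capacity scale independently, a later `gcloud compute disks update` can raise or lower the provisioned IOPS without resizing the volume, subject to the once-every-4-hours limit described above.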

Machine type support

Hyperdisk Extreme is supported only on VM instances with at least 64 vCPUs; other machine types are not supported. The following table lists the performance limits for supported machine types.

Machine type                   Maximum IOPS   Maximum throughput (MB/s)
M3 VMs*                        350,000        5,000
M2 VMs                         100,000        4,000
M1 VMs with 80 or more vCPUs   100,000        4,000

* M3 VMs with Hyperdisk Extreme have limited availability.

Hyperdisk Extreme regional availability

Hyperdisk Extreme is available in the following regions:

  • Changhua County, Taiwan—asia-east1
  • Tokyo, Japan—asia-northeast1
  • Osaka, Japan—asia-northeast2
  • Seoul, South Korea—asia-northeast3
  • Mumbai, India—asia-south1
  • Delhi, India—asia-south2
  • Jurong West, Singapore—asia-southeast1
  • Jakarta, Indonesia—asia-southeast2
  • Sydney, Australia—australia-southeast1
  • Madrid, Spain—europe-southwest1
  • St. Ghislain, Belgium—europe-west1
  • London, England—europe-west2
  • Frankfurt, Germany—europe-west3
  • Eemshaven, Netherlands—europe-west4
  • Zurich, Switzerland—europe-west6
  • Milan, Italy—europe-west8
  • Paris, France—europe-west9
  • Tel Aviv, Israel—me-west1
  • Montréal, Québec—northamerica-northeast1
  • Toronto, Ontario—northamerica-northeast2
  • Osasco, São Paulo, Brazil—southamerica-east1
  • Council Bluffs, Iowa—us-central1
  • Moncks Corner, South Carolina—us-east1
  • Ashburn, Virginia—us-east4
  • The Dalles, Oregon—us-west1
  • Los Angeles, California—us-west2
  • Salt Lake City, Utah—us-west3
  • Las Vegas, Nevada—us-west4

Performance limits and workload patterns

To reach maximum IOPS and throughput levels offered by Hyperdisk Extreme volumes, you must consider the following workload parameters:

  • I/O size: Maximum IOPS limits assume that you are using an I/O size of 4 KB. Maximum throughput limits assume that you are using an I/O size of at least 64 KB.
  • Queue length: Queue length is the number of pending requests for a volume. To reach maximum performance limits, you must tune your queue length according to the I/O size, IOPS, and latency sensitivity of your workload. Optimal queue length varies for each workload, but typically should be larger than 256.
  • Working set size: Working set size is the amount of data of a volume being accessed within a short period of time. To achieve optimal performance, working set sizes must be greater than or equal to 32 GB.
  • Multiple attached disks: Hyperdisk Extreme volumes share the per-VM maximum IOPS and throughput limits with all Persistent Disk and Hyperdisk volumes attached to the same VM. When monitoring the performance of your Hyperdisk Extreme volumes, take into account any I/O requests that you are sending to other volumes that are attached to the same VM.

For more information, see Optimize performance of Hyperdisks.
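One way to exercise these parameters together is with a benchmarking tool such as fio, which lets you set the I/O size, queue depth, and working set directly. The sketch below assumes fio is installed and that the Hyperdisk Extreme volume is attached as /dev/nvme0n1 (a placeholder device name; substitute your own). It issues random reads only, but you should still run it against a scratch volume rather than one holding production data.

```shell
# Random-read benchmark at 4 KB I/O with a deep queue, approximating the
# conditions under which the maximum IOPS limits are specified:
#   - 4 KB block size (--bs=4k) matches the I/O size assumed for IOPS limits
#   - 256 in-flight I/Os per job x 4 jobs gives an effective queue length
#     well above 256
#   - a 32 GB region (--size=32G) meets the working set size guidance
# /dev/nvme0n1 is a placeholder; substitute your attached Hyperdisk device.
fio --name=hyperdisk-iops \
    --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k \
    --iodepth=256 --numjobs=4 \
    --size=32G --runtime=60 --time_based \
    --group_reporting
```

To probe maximum throughput instead of IOPS, the same command with a larger block size (for example `--bs=64k` or higher) follows the I/O size guidance above.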


For Hyperdisk Extreme volumes, throughput scales with the number of IOPS you provision at a rate of 256 KB of throughput per I/O. However, throughput is ultimately capped by per-instance limits that depend on the number of vCPUs on the VM instance to which your Hyperdisk Extreme volumes are attached.
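The scaling rule can be checked with a quick calculation. Treating 1 MB as 1,024 KB, 256 KB per I/O means the implied throughput in MB/s is the provisioned IOPS divided by 4 (the IOPS value below is illustrative):

```shell
# Throughput implied by provisioned IOPS, at 256 KB per I/O
# (treating 1 MB as 1,024 KB, so MB/s = IOPS * 256 / 1024 = IOPS / 4).
iops=20000
echo "$(( iops * 256 / 1024 )) MB/s"   # prints "5000 MB/s"
```

At 20,000 provisioned IOPS the volume-level rule already implies 5,000 MB/s, which is the highest per-instance cap in the machine type table above; beyond that point, additional provisioned IOPS no longer increase throughput.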

Throughput for Hyperdisk Extreme volumes is not full duplex. The maximum throughput limits listed in this document apply to the combined total of read and write throughput.


Pricing

Hyperdisk Extreme is priced by provisioned capacity and IOPS.

For more pricing information, see Disk pricing.

Restrictions and limitations

  • You can attach a maximum of 8 Hyperdisk volumes per VM instance.
  • You can't create an image or machine image from a Hyperdisk Extreme volume.
  • You can't clone a Hyperdisk Extreme volume.
  • Hyperdisk volumes are zonal only. You can't create regional Hyperdisk volumes.
  • You can't attach multiple VM instances in read-only mode to a Hyperdisk Extreme volume.
  • Hyperdisk volumes can't be used in multi-writer mode or attached to multiple VMs.
  • Hyperdisk Extreme volumes can't be used as boot disks.

What's next