About Hyperdisk for GKE


Google Cloud Hyperdisk is a network block storage option offered on GKE. You can use this storage option in your GKE clusters similarly to other Compute Engine Persistent Disk volumes, with added flexibility to tune performance for your workload. Compared to Persistent Disk storage, Hyperdisk provides substantially higher maximum input/output operations per second (IOPS) and throughput. Unlike Persistent Disk volumes, whose performance is shared across all volumes attached to a node, Hyperdisk lets you specify and tune the level of performance for each volume individually.

You can choose from the following Hyperdisk options on GKE:

  • Hyperdisk Throughput volumes: Optimized for cost-efficient, high-throughput workloads, with up to 3 GB/s of throughput (at an I/O size of 128 KB or larger). This is a good option if your use case targets scale-out analytics (for example, Hadoop or Kafka) or other throughput-oriented, cost-sensitive workloads. This storage option is supported on GKE Autopilot and Standard clusters.
  • Hyperdisk Extreme volumes: Optimized for IOPS performance, with up to 320 K provisioned IOPS and up to 4.8 GB/s of throughput. This is a good option if you are deploying high-performance workloads, such as database management systems. This storage option is supported on Standard clusters only.

In general, we recommend Hyperdisk Throughput if you need high throughput with better price-performance, and Hyperdisk Extreme if you need high IOPS with the best performance.

To learn more about how Hyperdisk storage works, see the Compute Engine documentation.

Benefits

  • With Hyperdisk, you get more predictable performance for the stateful workloads you deploy. Hyperdisk decouples throughput and IOPS provisioning from the node, which ensures that the storage mounted for a Pod respects the performance settings that you specified during provisioning, independent of the node the Pod is deployed on.
  • With Hyperdisk, you can easily provision, manage, and scale your stateful workloads on GKE without the cost and complexity of managing an on-premises storage area network (SAN).
  • Hyperdisk storage capacity is partitioned and made available to GKE nodes as individual volumes. Hyperdisk volumes are decoupled from nodes, enabling you to attach, detach, and move volumes between nodes. Data stored in Hyperdisk volumes persists across node reboots and deletions. You can also attach multiple Hyperdisk volumes to a single GKE node.

Pricing

You are billed for the total provisioned capacity of your Hyperdisk volumes until you delete them. You are charged per GiB per month. Additionally, you are billed for the following:

  • Hyperdisk Extreme charges a monthly rate based on the provisioned IOPS.
  • Hyperdisk Throughput charges a monthly rate based on the provisioned throughput (in MB/s).

For pricing information, refer to Disk pricing in the Compute Engine documentation.

Limitations

  • After you create a volume, you can modify its provisioned throughput (for Hyperdisk Throughput) or provisioned IOPS (for Hyperdisk Extreme) only through the Compute Engine API.
  • You can attach Hyperdisk volumes only to specific machine types; read-only attachments are not supported.
  • See the Restrictions and Limitations section in the Compute Engine documentation for additional information.
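Because of the first limitation above, performance changes after creation go through Compute Engine rather than GKE. As a rough command-line sketch, assuming the `--provisioned-throughput` and `--provisioned-iops` flags of `gcloud compute disks update` (disk names, zone, and values here are placeholders):

```
# Adjust provisioned throughput (MB/s) on a Hyperdisk Throughput volume.
gcloud compute disks update my-throughput-disk \
    --zone=us-central1-a \
    --provisioned-throughput=300

# Adjust provisioned IOPS on a Hyperdisk Extreme volume.
gcloud compute disks update my-extreme-disk \
    --zone=us-central1-a \
    --provisioned-iops=100000
```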

Supported machine types and Autopilot Compute Classes

The following table summarizes the configurations supported for Hyperdisk volume provisioning on GKE Autopilot clusters. Hyperdisk provisioning on GKE is supported only for the machine series listed in the table. Refer to Machine series comparison in the Compute Engine documentation for the most up-to-date information about machine type support for Hyperdisk.

The nodeSelector Specification column indicates how you can control Pod scheduling on Autopilot clusters based on Autopilot Compute Classes; for more information, see Choose Compute Classes for Autopilot Pods.

Machine Type | Hyperdisk Throughput | Autopilot Compute Class | nodeSelector Specification
------------ | -------------------- | ----------------------- | --------------------------
N2 | Supported | Balanced | cloud.google.com/compute-class: "Balanced", supported-cpu-platform.cloud.google.com/Intel_Cascade_Lake: "true"
N2D | Supported | Balanced | cloud.google.com/compute-class: "Balanced", supported-cpu-platform.cloud.google.com/AMD_Rome: "true"
T2D | Supported | Scale-Out | cloud.google.com/compute-class: "Scale-Out", supported-cpu-platform.cloud.google.com/AMD_Milan: "true"
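To make the selectors concrete, here is a minimal Pod sketch that targets the Balanced compute class on N2 nodes. The Pod name and container image are illustrative, and the underscore form of the `supported-cpu-platform` label key is an assumption (Kubernetes label keys cannot contain spaces):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: balanced-n2-pod        # illustrative name
spec:
  nodeSelector:
    cloud.google.com/compute-class: "Balanced"
    supported-cpu-platform.cloud.google.com/Intel_Cascade_Lake: "true"
  containers:
  - name: app
    image: nginx               # placeholder image
```

Pods with this nodeSelector are scheduled only onto nodes that satisfy both labels, which is how you steer a stateful workload toward a machine series that supports Hyperdisk.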

Plan the performance level for your Hyperdisk volumes

Use the following considerations to plan the right level of performance for your Hyperdisk volumes.

Hyperdisk Throughput

With Hyperdisk Throughput, you can provision capacity separately from throughput. To provision throughput, you select the desired level for a given volume. Individual volumes have full throughput isolation—each gets the throughput provisioned to it. However, the throughput is ultimately capped by per-instance limits on the VM instance to which your volumes are attached. To learn more about these limits, see the 'Hyperdisk capacity' and 'Machine type support' sections in the Compute Engine documentation.

Both read and write operations count against the throughput limit provisioned for a Hyperdisk Throughput volume. The throughput provisioned and the maximum limits apply to the combined total of read and write throughput.

When defining a StorageClass, throughput provisioned for Hyperdisk Throughput volumes must follow these rules:

  • At least 10 MB/s per TiB of capacity, and no more than 90 MB/s per TiB of capacity.
  • At most 600 MB/s per volume.

If the total throughput provisioned for one or more Hyperdisk Throughput volumes exceeds the total throughput available at the VM instance level, the throughput is limited to the instance throughput level.
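Putting these rules together: a 4 TiB volume, for example, may be provisioned anywhere from 40 MB/s (10 MB/s × 4 TiB) up to 360 MB/s (90 MB/s × 4 TiB), which is also under the 600 MB/s per-volume cap. A sketch of a matching StorageClass for the GKE Persistent Disk CSI driver follows; the `provisioned-throughput-on-create` parameter is assumed from the driver's Hyperdisk support, and all names and values are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: throughput-example          # illustrative name
provisioner: pd.csi.storage.gke.io  # GKE Persistent Disk CSI driver
parameters:
  type: hyperdisk-throughput
  # ~250 MiB/s, which falls inside the 40-360 MB/s window for a 4 TiB volume
  provisioned-throughput-on-create: "250Mi"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

A PersistentVolumeClaim would then reference this class with `storageClassName: throughput-example` and request a capacity (for example, `4Ti`) that keeps the provisioned throughput within the per-TiB rules above.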

Hyperdisk Extreme

With Hyperdisk Extreme, you can provision capacity separately from the IOPS level. To provision the IOPS level, you select the desired level for a given volume. Individual volumes have full IOPS level isolation—each gets the IOPS level provisioned to it. However, the IOPS is ultimately capped by per-instance limits on the VM instance to which your volumes are attached. To learn more about these limits, see the 'Hyperdisk capacity' and 'Machine type support' sections in the Compute Engine documentation.

Both read and write operations count against the IOPS limit provisioned for a Hyperdisk Extreme volume. The IOPS provisioned, and the maximum limits listed in this document, apply to the total of read and write IOPS.

When defining a StorageClass, the IOPS provisioned for Hyperdisk Extreme volumes must not exceed 320 K IOPS.

If the total IOPS provisioned for one or more Hyperdisk Extreme volumes exceeds the total IOPS available at the VM instance level, the performance is limited to the instance IOPS level. If multiple Hyperdisk and Persistent Disk volumes attached to the same VM request IOPS at the same time, and the VM limits are reached, then each volume receives an IOPS level proportional to its share of the total IOPS provisioned across all attached Hyperdisk Extreme volumes.
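As with Hyperdisk Throughput, the IOPS level is set at provisioning time through the StorageClass. A hedged sketch, assuming the `provisioned-iops-on-create` parameter of the GKE Persistent Disk CSI driver (names and values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: extreme-example             # illustrative name
provisioner: pd.csi.storage.gke.io  # GKE Persistent Disk CSI driver
parameters:
  type: hyperdisk-extreme
  provisioned-iops-on-create: "100000"  # within the 320 K per-volume limit
volumeBindingMode: WaitForFirstConsumer
```

Remember that this per-volume setting is still subject to the per-instance limits described above, so size the provisioned IOPS against the machine type your workload actually lands on.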

What's next