The Compute Engine Persistent Disk CSI driver is the primary way for you to access Hyperdisk storage with Google Kubernetes Engine (GKE) clusters.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
- Set your default region and zone to one of the supported values.
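For example, you can set these defaults with the gcloud CLI. The region and zone shown here are placeholders; substitute values that support the Hyperdisk types you plan to use:

```sh
# Set a default region and zone for subsequent gcloud commands
# (replace with a region and zone that support Hyperdisk).
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
```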
Requirements
To use Hyperdisk volumes in GKE, your clusters must meet the following requirements:
- Use Linux clusters running GKE version 1.26 or later. If you use a release channel, ensure that the channel has the minimum GKE version or later that is required for this driver.
- Make sure that the Compute Engine Persistent Disk CSI driver is enabled. The Compute Engine Persistent Disk CSI driver is enabled by default on new Autopilot and Standard clusters, and cannot be disabled or edited when you use Autopilot. If you need to enable the Compute Engine Persistent Disk CSI driver on an existing cluster, see Enabling the Compute Engine Persistent Disk CSI Driver on an existing cluster; a sample command follows this list.
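The following command sketches how to enable the driver on an existing Standard cluster. CLUSTER_NAME is a placeholder, and the command uses the default location from your gcloud configuration:

```sh
# Enable the Compute Engine Persistent Disk CSI driver add-on on an
# existing Standard cluster (uses your configured default zone or region).
gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcePersistentDiskCsiDriver=ENABLED
```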
Create a Hyperdisk volume for GKE
This section provides an overview of creating a Hyperdisk volume backed by the Compute Engine CSI driver in GKE.
Create a StorageClass
The Compute Engine Persistent Disk CSI driver provides the following Persistent Disk storage type values to support Hyperdisk:
- hyperdisk-balanced
- hyperdisk-throughput
- hyperdisk-extreme
- hyperdisk-ml
To create a new StorageClass with the throughput or IOPS level you want, use pd.csi.storage.gke.io in the provisioner field and specify one of the Hyperdisk storage types.
Each Hyperdisk type has default performance values that are determined by the initially provisioned disk size. When you create the StorageClass, you can optionally specify the following parameters, depending on your Hyperdisk type. If you omit these parameters, GKE uses the capacity-based defaults for the disk type instead. For guidance on allowable throughput or IOPS values, see Plan the performance level for your Hyperdisk volume.
Parameter | Hyperdisk Type | Usage |
---|---|---|
provisioned-throughput-on-create | Hyperdisk Balanced, Hyperdisk Throughput | Express the throughput value in MiBps using the "Mi" qualifier; for example, if your required throughput is 250 MiBps, specify "250Mi" when creating the StorageClass. |
provisioned-iops-on-create | Hyperdisk Balanced, Hyperdisk Extreme | Express the IOPS value without any qualifier; for example, if you require 7,000 IOPS, specify "7000" when creating the StorageClass. |
The following examples show how you can create a StorageClass for each Hyperdisk type:
Hyperdisk Balanced
Save the following manifest in a file named hdb-example-class.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "250Mi"
  provisioned-iops-on-create: "7000"
```
Create the StorageClass:
kubectl create -f hdb-example-class.yaml
Hyperdisk Throughput
Save the following manifest in a file named hdt-example-class.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: throughput-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-throughput
  provisioned-throughput-on-create: "50Mi"
```
Create the StorageClass:
kubectl create -f hdt-example-class.yaml
Hyperdisk Extreme
Save the following manifest in a file named hdx-example-class.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: extreme-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-extreme
  provisioned-iops-on-create: "50000"
```
Create the StorageClass:
kubectl create -f hdx-example-class.yaml
To find the name of the StorageClasses available in your cluster, run the following command:
kubectl get sc
Create a PersistentVolumeClaim
You can create a PersistentVolumeClaim that references the Compute Engine Persistent Disk CSI driver's StorageClass.
Hyperdisk Balanced
In this example, you specify the targeted storage capacity of the Hyperdisk Balanced volume as 20 GiB.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: balanced-storage
  resources:
    requests:
      storage: 20Gi
```
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
Hyperdisk Throughput
In this example, you specify the targeted storage capacity of the Hyperdisk Throughput volume as 2 TiB.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: throughput-storage
  resources:
    requests:
      storage: 2Ti
```
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
Hyperdisk Extreme
In this example, you specify the minimum storage capacity of the Hyperdisk Extreme volume as 64 GiB.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: extreme-storage
  resources:
    requests:
      storage: 64Gi
```
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
Create a Deployment to consume the Hyperdisk volume
When using Pods with PersistentVolumes, we recommend that you use a workload controller (such as a Deployment or StatefulSet).
The following example creates a manifest that configures a Pod for deploying an NGINX web server that uses the PersistentVolumeClaim created in the previous section. Save the following example manifest as hyperdisk-example-deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: podpvc
          readOnly: false
```
To create a Deployment based on the hyperdisk-example-deployment.yaml manifest file, run the following command:

kubectl apply -f hyperdisk-example-deployment.yaml
Confirm the Deployment was successfully created:
kubectl get deployment
It might take a few minutes for Hyperdisk instances to complete provisioning. When the Deployment completes provisioning, it reports a READY status. You can check the progress by monitoring your PersistentVolumeClaim status with the following command:
kubectl get pvc
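If you prefer to block until the claim is bound instead of polling, the following command is one option; it assumes the PersistentVolumeClaim is named podpvc, as in the earlier examples:

```sh
# Wait until the PersistentVolumeClaim reports a Bound phase
# (give up after 10 minutes).
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/podpvc --timeout=10m
```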
Provision a Hyperdisk volume from a snapshot
To create a new Hyperdisk volume from an existing Persistent Disk snapshot, use the Google Cloud console, the Google Cloud CLI, or the Compute Engine API. To learn how to create a Persistent Disk snapshot, see Creating and using volume snapshots.
Console
Go to the Disks page in the Google Cloud console.
Click Create Disk.
Under Disk Type, choose one of the following:
- Hyperdisk Balanced
- Hyperdisk Extreme
- Hyperdisk Throughput
Under Disk source type, click Snapshot.
Select the name of the snapshot to restore.
Select the size of the new disk, in GiB. This value must be equal to or larger than the size of the original source disk for the snapshot.
Set the Provisioned throughput or Provisioned IOPS you want for the disk, if different from the default values.
Click Create to create the Hyperdisk volume.
gcloud
Run the gcloud compute disks create command to create the Hyperdisk volume from a snapshot.
Hyperdisk Balanced
gcloud compute disks create DISK_NAME \
--size=SIZE \
--source-snapshot=SNAPSHOT_NAME \
--provisioned-throughput=THROUGHPUT_LIMIT \
--provisioned-iops=IOPS_LIMIT \
--type=hyperdisk-balanced
Replace the following:
- DISK_NAME: the name of the new disk.
- SIZE: the size, in gibibytes (GiB) or tebibytes (TiB), of the new disk. Refer to the Compute Engine documentation for the latest capacity limitations.
- SNAPSHOT_NAME: the name of the snapshot being restored.
- THROUGHPUT_LIMIT: Optional. For Hyperdisk Balanced disks, an integer that represents the throughput, measured in MiBps, that the disk can handle. Refer to the Compute Engine documentation for the latest performance limitations.
- IOPS_LIMIT: Optional. For Hyperdisk Balanced disks, the number of IOPS that the disk can handle. Refer to the Compute Engine documentation for the latest performance limitations.
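For illustration, a filled-in command might look like the following; the disk name, snapshot name, size, and performance values are hypothetical and only show the expected format:

```sh
# Restore a snapshot into a new 100 GiB Hyperdisk Balanced volume with
# 250 MiBps of throughput and 7,000 IOPS (example values only; the size
# must be at least the size of the snapshot's source disk).
gcloud compute disks create hdb-restored-disk \
    --size=100GB \
    --source-snapshot=my-snapshot \
    --provisioned-throughput=250 \
    --provisioned-iops=7000 \
    --type=hyperdisk-balanced
```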
Hyperdisk Throughput
gcloud compute disks create DISK_NAME \
--size=SIZE \
--source-snapshot=SNAPSHOT_NAME \
--provisioned-throughput=THROUGHPUT_LIMIT \
--type=hyperdisk-throughput
Replace the following:
- DISK_NAME: the name of the new disk.
- SIZE: the size, in gibibytes (GiB or GB) or tebibytes (TiB or TB), of the new disk. Refer to the Compute Engine documentation for the latest capacity limitations.
- SNAPSHOT_NAME: the name of the snapshot being restored.
- THROUGHPUT_LIMIT: Optional. For Hyperdisk Throughput disks, an integer that represents the throughput, measured in MiBps, that the disk can handle. Refer to the Compute Engine documentation for the latest performance limitations.
Hyperdisk Extreme
gcloud compute disks create DISK_NAME \
--size=SIZE \
--source-snapshot=SNAPSHOT_NAME \
--provisioned-iops=IOPS_LIMIT \
--type=hyperdisk-extreme
Replace the following:
- DISK_NAME: the name of the new disk.
- SIZE: the size, in gibibytes (GiB or GB) or tebibytes (TiB or TB), of the new disk. Refer to the Compute Engine documentation for the latest capacity limitations.
- SNAPSHOT_NAME: the name of the snapshot being restored.
- IOPS_LIMIT: Optional. For Hyperdisk Extreme disks, the number of I/O operations per second that the disk can handle. Refer to the Compute Engine documentation for the latest performance limitations.
Create a snapshot for a Hyperdisk volume
To create a snapshot from a Hyperdisk volume, follow the same steps that you would use to create a snapshot from a Persistent Disk volume. For instructions, see Creating and using volume snapshots.
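As a minimal sketch, assuming the podpvc PersistentVolumeClaim from the earlier examples, a snapshot could be declared with a VolumeSnapshotClass and a VolumeSnapshot similar to the following; the resource names are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hyperdisk-snapshot-class   # placeholder name
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: hyperdisk-snapshot         # placeholder name
spec:
  volumeSnapshotClassName: hyperdisk-snapshot-class
  source:
    persistentVolumeClaimName: podpvc
```

Apply the manifest with kubectl apply -f and check progress with kubectl get volumesnapshot.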
Update the provisioned throughput or IOPS of an existing Hyperdisk volume
This section covers how to modify provisioned performance for Hyperdisk volumes.
Throughput
Updating the provisioned throughput is supported for Hyperdisk Balanced and Hyperdisk Throughput volumes only.
To update the provisioned throughput level of your Hyperdisk volume, follow the Google Cloud console, gcloud CLI, or Compute Engine API instructions in Changing the provisioned performance for a Hyperdisk volume.
You can change the provisioned throughput level (up to once every 4 hours) for a Hyperdisk volume after volume creation. New throughput levels might take up to 15 minutes to take effect. During the performance change, any performance SLA and SLO are not in effect. You can change the throughput level of an existing volume at any time, regardless of whether the disk is attached to a running instance or not.
The new throughput level you specify must adhere to the supported values for Hyperdisk volumes.
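For reference, one way to do this with the gcloud CLI is the disks update command; the disk name, zone, and throughput value here are placeholders:

```sh
# Change the provisioned throughput (MiBps) of an existing Hyperdisk
# volume; replace DISK_NAME, ZONE, and the value with your own.
gcloud compute disks update DISK_NAME \
    --zone=ZONE \
    --provisioned-throughput=200
```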
IOPS
Updating the provisioned IOPS is supported for Hyperdisk Balanced and Hyperdisk Extreme volumes only.
To update the provisioned IOPS level of your Hyperdisk volume, follow the Google Cloud console, gcloud CLI, or Compute Engine API instructions in Changing the provisioned performance for a Hyperdisk volume.
You can change the provisioned IOPS level (up to once every 4 hours) for a Hyperdisk volume after volume creation. New IOPS levels might take up to 15 minutes to take effect. During the performance change, any performance SLA and SLO are not in effect. You can change the IOPS level of an existing volume at any time, regardless of whether the disk is attached to a running instance or not.
The new IOPS level you specify must adhere to the supported values for Hyperdisk volumes.
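If you prefer the command line, the following sketch shows one way to find the backing disk from the PersistentVolumeClaim and then update its IOPS. It assumes the claim is named podpvc, as in the earlier examples; DISK_NAME and ZONE are placeholders taken from the volumeHandle output. The console steps that follow show how to find the same information in the Google Cloud console.

```sh
# Find the PersistentVolume bound to the claim, then print its CSI
# volumeHandle (projects/PROJECT/zones/ZONE/disks/DISK_NAME).
PV_NAME=$(kubectl get pvc podpvc -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV_NAME" -o jsonpath='{.spec.csi.volumeHandle}'

# Update the provisioned IOPS of that disk (placeholder values).
gcloud compute disks update DISK_NAME \
    --zone=ZONE \
    --provisioned-iops=10000
```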
To update the provisioned IOPS level for a Hyperdisk volume, you must identify the name of the Persistent Disk backing your PersistentVolumeClaim and PersistentVolume resources:
Go to the Object browser in the Google Cloud console.
Find the entry for your PersistentVolumeClaim object.
Click the Volume link.
Open the YAML tab of the associated PersistentVolume and locate the CSI volumeHandle value. Note the last element of the handle (it has a value like "pvc-XXXXX"); this is the name of the Persistent Disk that backs the volume. Also note the project and zone that appear in the handle.
Monitor throughput or IOPS on a Hyperdisk volume
To monitor the provisioned performance of your Hyperdisk volume, see Analyze provisioned IOPS and throughput in the Compute Engine documentation.
Troubleshooting
This section provides troubleshooting guidance to resolve issues with Hyperdisk volumes on GKE.
Cannot change performance or capacity: ratio out of range
The following errors occur when you attempt to change the provisioned performance level or capacity, but the performance level or capacity that you picked is outside the acceptable range for the volume:
Requested provisioned throughput cannot be higher than <value>.
Requested provisioned throughput cannot be lower than <value>.
Requested provisioned throughput is too high for the requested disk size.
Requested provisioned throughput is too low for the requested disk size.
Requested disk size is too high for current provisioned throughput.
The throughput provisioned for Hyperdisk Throughput volumes must meet the following requirements:
- At least 10 MiBps per TiB of capacity, and no more than 90 MiBps per TiB of capacity.
- At most 600 MiBps per volume.
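For example, a 2 TiB Hyperdisk Throughput volume accepts a provisioned throughput between 20 MiBps (10 MiBps per TiB × 2 TiB) and 180 MiBps (90 MiBps per TiB × 2 TiB), both of which are below the 600 MiBps per-volume cap.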
To resolve this issue, correct the requested throughput or capacity to be within the allowable range and reissue the command.
Cannot change performance: rate limited
The following errors occur when you attempt to change the provisioned performance level, but the performance level has already been changed within the last 4 hours:
Cannot update provisioned throughput due to being rate limited.
Cannot update provisioned iops due to being rate limited.
Hyperdisk volumes can have their provisioned throughput or IOPS updated at most once every 4 hours. To resolve this issue, wait for the cool-down timer for the volume to elapse, and then reissue the command.
What's next
- Learn how to migrate Persistent Disk volumes to Hyperdisk.
- Learn how to use volume expansion.
- Learn how to use volume snapshots.
- Read more about the driver on GitHub.