Google Kubernetes Engine (GKE) provides a simple way for you to automatically deploy and manage the Compute Engine persistent disk Container Storage Interface (CSI) Driver in your clusters.
The Compute Engine persistent disk CSI Driver version is tied to GKE version numbers, and is typically the latest driver available at the time that a GKE version is released. The driver updates automatically when the cluster is upgraded to the latest GKE patch.
Benefits of using the Compute Engine persistent disk CSI Driver
Using the Compute Engine persistent disk CSI Driver instead of the Kubernetes in-tree gcePersistentDisk volume plugin provides the following benefits:
- CSI drivers are the future of storage extension in Kubernetes. Kubernetes has announced that in-tree volume plugins are expected to be removed in Kubernetes version 1.21. For details, see Kubernetes In-Tree to CSI Volume Migration Moves to Beta. After the in-tree plugins are removed, existing volumes that use them will communicate through CSI drivers instead.
- It enables automatic deployment and management of the persistent disk driver, so you don't have to set it up manually or fall back on the in-tree volume plugin.
- It provides additional persistent disk features in GKE.
For example:
- You can use customer-managed encryption keys (CMEKs) with the Compute Engine persistent disk CSI Driver, but not with the in-tree volume plugin. These keys are used to encrypt the data encryption keys that encrypt your data. To learn more about CMEK on GKE, see Using CMEK. A minimal StorageClass sketch appears after this list.
- You can use volume snapshots with the Compute Engine persistent disk CSI Driver. Volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume. A snapshot sketch also appears after this list.
- Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This release schedule typically results in a faster release cadence.
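To make the CMEK and snapshot benefits concrete, here are two minimal sketches. They are illustrative rather than complete procedures: the object names and the Cloud KMS key path are placeholders, and you should confirm the parameter names against the driver documentation for your cluster version.

A StorageClass that asks the driver to create CMEK-protected disks, where the disk-encryption-kms-key parameter carries the full resource name of your Cloud KMS key:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-cmek-example  # illustrative name
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-standard
  # Replace with the full resource name of your Cloud KMS key.
  disk-encryption-kms-key: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

A snapshot of the podpvc claim created later on this page. Depending on your cluster version, the snapshot API group may be snapshot.storage.k8s.io/v1beta1 rather than v1:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: pd-snapshot-class  # illustrative name
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pd-snapshot-example  # illustrative name
spec:
  volumeSnapshotClassName: pd-snapshot-class
  source:
    persistentVolumeClaimName: podpvc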
Requirements
To use the Compute Engine persistent disk CSI Driver, your cluster must run GKE version 1.14 or later.
Enabling the Compute Engine persistent disk CSI Driver on a new cluster
Newly created clusters using version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later, have the Compute Engine persistent disk CSI Driver enabled by default. If you created a cluster with one of these versions, you do not need to take any of the steps in this section.
If you want to create a cluster with a version where the Compute Engine persistent disk CSI Driver is not automatically enabled, you can use the gcloud command-line tool or the Google Cloud Console.
To enable the driver on cluster creation, complete the following steps:
gcloud
gcloud container clusters create CLUSTER-NAME \
--addons=GcePersistentDiskCsiDriver \
--cluster-version=VERSION
Replace the following:
- CLUSTER-NAME: the name of your cluster.
- VERSION: the GKE version number. You must select a version of 1.14 or later to use this feature.
For the full list of flags, see the gcloud container clusters create documentation.
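For example, a complete invocation might look like the following. The cluster name and version here are illustrative; choose a version that is actually available in your zone:

gcloud container clusters create demo-cluster \
    --addons=GcePersistentDiskCsiDriver \
    --cluster-version=1.18.12-gke.1200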
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Click Create.
Configure your cluster as you want. For details on the types of clusters that you can create, see Types of clusters.
From the navigation pane, under Cluster, click Features.
Select the Enable Compute Engine persistent disk CSI Driver checkbox.
Click Create.
After you have enabled the Compute Engine persistent disk CSI Driver, you can use the driver in Kubernetes volumes by specifying the driver and provisioner name: pd.csi.storage.gke.io.
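On clusters where the CSIDriver API is available, you can confirm that the driver is registered by listing CSIDriver objects and looking for pd.csi.storage.gke.io in the output:

kubectl get csidrivers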
Enabling the Compute Engine persistent disk CSI Driver on an existing cluster
To enable the Compute Engine persistent disk CSI Driver on an existing cluster, use the gcloud command-line tool or the Google Cloud Console.
To enable the driver on an existing cluster, complete the following steps:
gcloud
gcloud container clusters update CLUSTER-NAME \
--update-addons=GcePersistentDiskCsiDriver=ENABLED
Replace CLUSTER-NAME with the name of the existing cluster.
Console
In the Cloud Console, go to the Google Kubernetes Engine menu.
Click the cluster's Edit button, which looks like a pencil.
Expand Add-ons.
From the Compute Engine persistent disk CSI Driver dropdown, select Enabled.
Click Save.
Disabling the Compute Engine persistent disk CSI Driver
Disable the Compute Engine persistent disk CSI Driver using the gcloud command-line tool or the Google Cloud Console.
To disable the driver on an existing cluster, complete the following steps:
gcloud
gcloud container clusters update CLUSTER-NAME \
--update-addons=GcePersistentDiskCsiDriver=DISABLED
Replace CLUSTER-NAME with the name of the existing cluster.
Console
In the Cloud Console, go to the Google Kubernetes Engine menu.
Click the cluster's Edit button, which looks like a pencil.
Expand Add-ons.
From the Compute Engine persistent disk CSI Driver dropdown, select Disabled.
Click Save.
Using the Compute Engine persistent disk CSI Driver
The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE.
Create a StorageClass
After you enable the Compute Engine persistent disk CSI Driver, GKE automatically installs a StorageClass for it. The StorageClass name is standard-rwo. Some older cluster versions, however, might use one of the following names instead:
- singlewriter-standard
- standard-singlewriter
You can find the name of your installed StorageClass by running the following command:
kubectl get sc
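The output is similar to the following; the exact set of StorageClasses depends on your cluster version, and additional columns are omitted here:

NAME                 PROVISIONER
standard (default)   kubernetes.io/gce-pd
standard-rwo         pd.csi.storage.gke.io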
You can also install a different StorageClass that uses the Compute Engine persistent disk CSI Driver by specifying pd.csi.storage.gke.io in the provisioner field. For example, you could create a StorageClass using the following file, named pd-example-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
Any persistent disk type can be specified in the type parameter (for example, pd-ssd, pd-standard, or pd-balanced).
After creating the pd-example-class.yaml file, run the following command:
kubectl create -f pd-example-class.yaml
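You can then confirm that the new StorageClass exists (sc is the short name for storageclass):

kubectl get sc pd-example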
Create a PersistentVolumeClaim
You can create a PersistentVolumeClaim that references the Compute Engine persistent disk CSI Driver's StorageClass.
The following file, named pvc-example.yaml, uses the pre-installed StorageClass standard-rwo:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 6Gi
After creating the PersistentVolumeClaim manifest, run the following command:
kubectl create -f pvc-example.yaml
In the pre-installed StorageClass (standard-rwo), volumeBindingMode is set to WaitForFirstConsumer. When volumeBindingMode is set to WaitForFirstConsumer, the PersistentVolume is not provisioned until a Pod referencing the PersistentVolumeClaim is scheduled. If volumeBindingMode in the StorageClass is set to Immediate (or omitted), a persistent-disk-backed PersistentVolume is provisioned as soon as the PersistentVolumeClaim is created.
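You can observe this binding behavior directly. Before any Pod references the claim, the claim created above stays in the Pending state; the output below is illustrative:

kubectl get pvc podpvc

NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
podpvc   Pending                                      standard-rwo   10s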
Create a Pod that consumes the volume
When using Pods with PersistentVolumes, we recommend that you use a workload controller (such as a Deployment or StatefulSet). While you would not typically use a standalone Pod, the following example uses one for simplicity.
The following example consumes the volume that you created in the previous section:
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: podpvc
      readOnly: false
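The example doesn't name a file for this manifest; assuming you save it as pod-example.yaml (a hypothetical name), create the Pod and then check the claim:

kubectl create -f pod-example.yaml
kubectl get pvc podpvc

The claim's STATUS changes from Pending to Bound after the Pod is scheduled and the backing persistent disk is provisioned.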
Known issues
Due to a 128-character restriction in the CSI Node ID specification and the way GKE generates instance names, installing the Compute Engine persistent disk CSI Driver might fail on new and existing GKE clusters for certain node pools. For more information, see this GitHub issue.
This issue is fixed in the following versions:
- 1.16.15-gke.1700 and later
- 1.17.9-gke.6300 and later
- 1.18.6-gke.4801 and later
If you are using a cluster with an earlier version, upgrade to one of the listed versions to resolve the issue.
What's next
- Learn how to use volume expansion.
- Learn how to use volume snapshots.
- Read more about the driver on GitHub.