Using the Compute Engine persistent disk CSI Driver

This page explains how to use the Compute Engine persistent disk CSI Driver.

Google Kubernetes Engine (GKE) provides a simple way for you to automatically deploy and manage the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver in your clusters.

The Compute Engine persistent disk CSI Driver version is tied to the GKE Kubernetes master version number, and is typically the latest driver available at the time the GKE version is released. The driver updates automatically when the cluster is upgraded to the latest GKE patch.

Benefits of using Compute Engine persistent disk CSI Driver

Using the Compute Engine persistent disk CSI Driver instead of the Kubernetes in-tree gcePersistentDisk volume plugin provides the following benefits:

  • CSI drivers are the future of storage extension in Kubernetes, replacing in-tree volume plugins. Kubernetes has announced that the in-tree volume plugins are expected to be removed in Kubernetes version 1.21. For details, see Kubernetes In-Tree to CSI Volume Migration Moves to Beta. After the removal, existing volumes that use in-tree volume plugins will communicate through CSI drivers instead.
  • It enables automatic deployment and management of the persistent disk driver, so you don't have to set the driver up manually or use the in-tree volume plugin.
  • It provides additional persistent disk features in GKE. Customer Managed Encryption Keys (CMEK), which are used to encrypt the data encryption keys that encrypt your data, are available in the Compute Engine persistent disk CSI Driver, but not the in-tree volume plugin. To learn more about CMEK, see Using CMEK.
  • Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This typically results in a faster release cadence.

Requirements

To use this feature, you must use Kubernetes master and node versions that are 1.14 or higher.

Enabling Compute Engine persistent disk CSI Driver on a new cluster

To use the Compute Engine persistent disk CSI Driver in new clusters, enable the feature using gcloud or the Google Cloud Console. This is the only step required.

gcloud

By default, the Compute Engine persistent disk CSI Driver is not enabled when you create a cluster. To enable the driver on cluster creation, run the following command:

gcloud beta container clusters create cluster-name \
  --addons=GcePersistentDiskCsiDriver \
  --cluster-version=version

where:

  • cluster-name is the name of your new cluster.
  • version is the GKE version number. You must select a version of 1.14 or higher to use this feature.

For the full list of optional flags, refer to the gcloud container clusters create documentation.
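
For example, the following command creates a cluster named pd-cluster (a hypothetical name used here for illustration) running version 1.14 with the driver enabled:

gcloud beta container clusters create pd-cluster \
  --addons=GcePersistentDiskCsiDriver \
  --cluster-version=1.14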

Console

By default, the Compute Engine persistent disk CSI Driver is not enabled when you create a cluster. To enable the driver on cluster creation:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click the Create cluster button.

  3. Configure your cluster as desired. For details on the types of clusters you can create, see Types of clusters.

  4. From the navigation pane, under Cluster, click Features.

  5. Select the Enable Compute Engine persistent disk CSI Driver checkbox.

  6. Click Create.

Once you have enabled the Compute Engine persistent disk CSI Driver, you can use it in Kubernetes volumes by specifying the driver and provisioner name: pd.csi.storage.gke.io.
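
To confirm that the driver is registered with your cluster, you can list the cluster's CSIDriver objects and look for the pd.csi.storage.gke.io entry (a quick sanity check; this assumes your kubectl context points at the cluster):

kubectl get csidrivers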

Enabling Compute Engine persistent disk CSI Driver on an existing cluster

The driver can also be enabled on an existing cluster, as long as the master and node versions are 1.14 or above.

gcloud

To enable the driver on an existing cluster, run the following command:

gcloud beta container clusters update cluster-name \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED

where cluster-name is the name of the existing cluster.
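
If you want to verify that the add-on is enabled, you can describe the cluster. The field path below assumes the beta API's addonsConfig layout; the command prints True when the driver is enabled:

gcloud beta container clusters describe cluster-name \
  --format="value(addonsConfig.gcePersistentDiskCsiDriverConfig.enabled)"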

Console

To enable the driver on an existing cluster:

  1. Visit the Google Kubernetes Engine menu in the Cloud Console.

  2. Click the cluster's Edit button, which looks like a pencil.

  3. Expand Add-ons.

  4. From the Compute Engine persistent disk CSI Driver dropdown, select Enabled.

  5. Click Save.

Disabling Compute Engine persistent disk CSI Driver

You can also disable the driver on an existing cluster.

gcloud

To disable the driver on an existing cluster, run the following command:

gcloud beta container clusters update cluster-name \
  --update-addons=GcePersistentDiskCsiDriver=DISABLED

where cluster-name is the name of the existing cluster.

Console

To disable the driver on an existing cluster:

  1. Visit the Google Kubernetes Engine menu in the Cloud Console.

  2. Click the cluster's Edit button, which looks like a pencil.

  3. Expand Add-ons.

  4. From the Compute Engine persistent disk CSI Driver dropdown, select Disabled.

  5. Click Save.

Using Compute Engine persistent disk CSI Driver

The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE.

Create a StorageClass

Once you enable the Compute Engine persistent disk CSI Driver, GKE automatically pre-installs a StorageClass for it. The StorageClass name is standard-rwo. Some older cluster versions, however, may use one of the following names instead:

  • singlewriter-standard
  • standard-singlewriter

You can find the name of the installed StorageClass by running the following command:

kubectl get sc

You can also install a different StorageClass. For example, you could create a StorageClass using the following file named pd-example-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-ssd

After creating the pd-example-class.yaml file, run the following command:

kubectl create -f pd-example-class.yaml
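
To confirm that the StorageClass was created with the expected provisioner and parameters, you can describe it:

kubectl describe storageclass pd-example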

Create a PersistentVolumeClaim

Next, create a PersistentVolumeClaim. The following example uses the pre-installed StorageClass singlewriter-standard:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: singlewriter-standard
  resources:
    requests:
      storage: 6Gi

After creating the PersistentVolumeClaim manifest, run the following command:

kubectl create -f filename.yaml
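
You can then check the claim's status. Because the pre-installed StorageClass binds on first consumer (explained below), the claim typically remains Pending until a Pod that references it is scheduled:

kubectl get pvc podpvc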

In the pre-installed StorageClass, singlewriter-standard, volumeBindingMode is set to WaitForFirstConsumer. When volumeBindingMode is set to WaitForFirstConsumer, the PersistentVolume is not provisioned until a Pod referencing the PersistentVolumeClaim is scheduled. If volumeBindingMode in the StorageClass is set to Immediate (or omitted), a persistent disk-backed PersistentVolume is provisioned as soon as the PersistentVolumeClaim is created.
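
For comparison, the following sketch shows what a StorageClass with immediate binding might look like; the name pd-immediate-example is hypothetical:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-immediate-example  # hypothetical name, for illustration only
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate  # provision the disk as soon as the PVC is created
parameters:
  type: pd-standard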

Create a Pod that consumes the Volume

When using Pods with PersistentVolumes, we recommend you use a workload controller (such as a Deployment or StatefulSet). While you would not typically use a standalone Pod, the following example uses one for simplicity.

The following YAML file describes a Pod that mounts the PersistentVolumeClaim; you can adapt it into a Pod template for a workload controller:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: podpvc
        readOnly: false
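
After saving the manifest (for example, as web-server.yaml, a hypothetical filename), create the Pod and confirm that the claim binds once the Pod is scheduled:

kubectl create -f web-server.yaml
kubectl get pvc podpvc
kubectl get pod web-server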

What's next