Using the Compute Engine persistent disk CSI Driver

Google Kubernetes Engine (GKE) provides a simple way for you to automatically deploy and manage the Compute Engine persistent disk Container Storage Interface (CSI) Driver in your clusters.

The Compute Engine persistent disk CSI Driver version is tied to the GKE version number and is typically the latest driver available at the time that the GKE version is released. The driver updates automatically when the cluster is upgraded to the latest GKE patch.

Benefits

Using the Compute Engine persistent disk CSI Driver provides the following benefits:

  • It enables automatic deployment and management of the persistent disk driver, so you don't have to set the driver up manually.
  • You can use customer-managed encryption keys (CMEKs). These keys are used to encrypt the data encryption keys that encrypt your data. To learn more about CMEK on GKE, see Using CMEK.
  • You can use volume snapshots with the Compute Engine persistent disk CSI Driver. Volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume, as shown in the sketch after this list.
  • Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This release schedule typically results in a faster release cadence.
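
For example, once the driver is enabled and a PersistentVolumeClaim provisioned by it exists, you could request a snapshot of that claim with manifests like the following sketch. The object names (pd-example-snapshot-class, pd-example-snapshot) and the claim name example-pvc are illustrative, and the manifest assumes a cluster version where the snapshot.storage.k8s.io/v1 API is available:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: pd-example-snapshot-class
driver: pd.csi.storage.gke.io    # the CSI driver that takes the snapshot
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pd-example-snapshot
spec:
  volumeSnapshotClassName: pd-example-snapshot-class
  source:
    persistentVolumeClaimName: example-pvc    # an existing PVC provisioned by the driver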

Requirements

To use the Compute Engine persistent disk CSI Driver, your clusters must be using the following versions:

  • Linux clusters: GKE version 1.14 or later.
  • Windows clusters: GKE version 1.18 or later.

In GKE version 1.22 and later, existing volumes that use the in-tree gcePersistentDisk volume plugin (the gce-pd provisioner) are migrated to communicate through the CSI driver instead. No changes to any StorageClass are required. However, volumes that use the gce-pd provisioner still do not support features such as CMEK or volume snapshots; to use these features, the StorageClass must specify the pd.csi.storage.gke.io provisioner.
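
To check which provisioner each of your existing StorageClasses uses, you can list them together with their provisioner field:

kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner

StorageClasses that report kubernetes.io/gce-pd use the in-tree plugin, while those that report pd.csi.storage.gke.io use the Compute Engine persistent disk CSI Driver.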

To use the Compute Engine persistent disk CSI Driver with Workload Identity, your clusters must be using the following versions:

  • Linux clusters: GKE version 1.16 or later.
  • Windows clusters: GKE version 1.20.8-gke.900 or later.

Enabling the Compute Engine persistent disk CSI Driver on a new cluster

On cluster versions where the Compute Engine persistent disk CSI Driver is not enabled automatically, you can enable it when you create a Standard cluster by using the gcloud command-line tool or the Google Cloud Console.

To enable the driver on cluster creation, complete the following steps:

gcloud

gcloud container clusters create CLUSTER-NAME \
    --addons=GcePersistentDiskCsiDriver \
    --cluster-version=VERSION

Replace the following:

  • CLUSTER-NAME: the name of your cluster.
  • VERSION: the GKE version number. You must select a version of 1.14 or higher to use this feature.

For the full list of flags, see the gcloud container clusters create documentation.
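
To confirm that the addon is enabled after the cluster is created, you can inspect the cluster's addons configuration. The following sketch assumes the addonsConfig.gcePersistentDiskCsiDriverConfig field exposed by the GKE API:

gcloud container clusters describe CLUSTER-NAME \
    --format="value(addonsConfig.gcePersistentDiskCsiDriverConfig.enabled)"

The command should print True when the driver addon is enabled.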

Console

  1. Go to the Google Kubernetes Engine page in the Cloud Console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. In the Standard section, click Configure.

  4. Configure the cluster as desired.

  5. From the navigation pane, under Cluster, click Features.

  6. Select the Enable Compute Engine persistent disk CSI Driver checkbox.

  7. Click Create.

After you enable the Compute Engine persistent disk CSI Driver, you can use it in Kubernetes volumes by specifying the driver and provisioner name pd.csi.storage.gke.io.
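
You can also verify that the driver is registered in the cluster by listing the CSIDriver objects and checking that pd.csi.storage.gke.io appears in the output:

kubectl get csidrivers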

Enabling the Compute Engine persistent disk CSI Driver on an existing cluster

To enable the Compute Engine persistent disk CSI Driver in existing Standard clusters, use the gcloud command-line tool or the Google Cloud Console.

To enable the driver on an existing cluster, complete the following steps:

gcloud

gcloud container clusters update CLUSTER-NAME \
   --update-addons=GcePersistentDiskCsiDriver=ENABLED

Replace CLUSTER-NAME with the name of the existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Compute Engine persistent disk CSI Driver field, click Edit Compute Engine CSI driver.

  4. Select the Enable Compute Engine Persistent Disk CSI Driver checkbox.

  5. Click Save Changes.

Disabling the Compute Engine persistent disk CSI Driver

You can disable the Compute Engine persistent disk CSI Driver for Standard clusters by using the gcloud command-line tool or the Google Cloud Console.

If you disable the driver, then any Pods currently using PersistentVolumes owned by the driver do not terminate. Any new Pods that try to use those PersistentVolumes also fail to start.
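
Before disabling the driver, you might want to list the PersistentVolumes that it owns. A sketch using a kubectl JSONPath filter on the CSI driver name:

kubectl get pv -o jsonpath='{range .items[?(@.spec.csi.driver=="pd.csi.storage.gke.io")]}{.metadata.name}{"\n"}{end}'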

To disable the driver on an existing Standard cluster, complete the following steps:

gcloud

gcloud container clusters update CLUSTER-NAME \
    --update-addons=GcePersistentDiskCsiDriver=DISABLED

Replace CLUSTER-NAME with the name of the existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Compute Engine persistent disk CSI Driver field, click Edit Compute Engine CSI driver.

  4. Clear the Enable Compute Engine Persistent Disk CSI Driver checkbox.

  5. Click Save Changes.

Using the Compute Engine persistent disk CSI Driver for Linux clusters

The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE. These sections are specific to clusters using Linux.

Create a StorageClass

After you enable the Compute Engine persistent disk CSI Driver, GKE automatically installs the following StorageClasses:

  • standard-rwo, using balanced persistent disk
  • premium-rwo, using SSD persistent disk

Some older cluster versions (1.17 and earlier) might instead have the singlewriter-standard or standard-singlewriter StorageClass, which uses standard persistent disk.

For Autopilot clusters, the default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver. For Standard clusters, the default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.

You can find the name of your installed StorageClasses by running the following command:

kubectl get sc

You can also install a different StorageClass that uses the Compute Engine persistent disk CSI Driver by specifying pd.csi.storage.gke.io in the provisioner field.

For example, you could create a StorageClass using the following file named pd-example-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced

You can specify any persistent disk type in the type parameter (for example, pd-ssd, pd-standard, or pd-balanced).

After creating the pd-example-class.yaml file, run the following command:

kubectl create -f pd-example-class.yaml

Create a PersistentVolumeClaim

You can create a PersistentVolumeClaim that references the Compute Engine persistent disk CSI Driver's StorageClass.

The following file, named pvc-example.yaml, uses the pre-installed storage class standard-rwo:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 6Gi

After creating the PersistentVolumeClaim manifest, run the following command:

kubectl create -f pvc-example.yaml

In the pre-installed StorageClass (standard-rwo), volumeBindingMode is set to WaitForFirstConsumer. When volumeBindingMode is set to WaitForFirstConsumer, the PersistentVolume is not provisioned until a Pod referencing the PersistentVolumeClaim is scheduled. If volumeBindingMode in the StorageClass is set to Immediate (or it's omitted), a persistent-disk-backed PersistentVolume is provisioned after the PersistentVolumeClaim is created.
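
For example, if you inspect the claim before any Pod references it, kubectl reports it as Pending until a consuming Pod is scheduled:

kubectl get persistentvolumeclaim podpvc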

Create a Pod that consumes the volume

When using Pods with PersistentVolumes, we recommend that you use a workload controller (such as a Deployment or StatefulSet). While you would not typically use a standalone Pod, the following example uses one for simplicity.

The following example consumes the volume that you created in the previous section:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
   - name: web-server
     image: nginx
     volumeMounts:
       - mountPath: /var/lib/www/html
         name: mypvc
  volumes:
   - name: mypvc
     persistentVolumeClaim:
       claimName: podpvc
       readOnly: false
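
To deploy this example, save the manifest to a file and create the Pod. The filename pod-example.yaml below is illustrative:

kubectl create -f pod-example.yaml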

Using the Compute Engine persistent disk CSI Driver for Windows clusters

The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE. These sections are specific to clusters using Windows.

Ensure the following:

  • The cluster version is 1.19.7-gke.2000, 1.20.2-gke.2000, or later.
  • The node version is 1.18.12-gke.1203, 1.19.6-gke.800, or later.

Create a StorageClass

Creating a StorageClass for Windows is very similar to doing so for Linux. However, the StorageClass installed by default does not work for Windows because the file system type is different: the Compute Engine persistent disk CSI Driver for Windows requires NTFS as the file system type.

For example, you could create a StorageClass using the following file named pd-windows-class.yaml. Make sure to add csi.storage.k8s.io/fstype: NTFS to the parameters list:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-sc-windows
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
  csi.storage.k8s.io/fstype: NTFS
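
After saving the pd-windows-class.yaml file, create the StorageClass:

kubectl create -f pd-windows-class.yaml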

Create a PersistentVolumeClaim

After creating a StorageClass for Windows, you can create a PersistentVolumeClaim that references it:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc-windows
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: pd-sc-windows
  resources:
    requests:
      storage: 6Gi
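
As with the StorageClass, save the manifest and create the claim. The filename pvc-windows-example.yaml below is illustrative:

kubectl create -f pvc-windows-example.yaml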

Create a Pod that consumes the volume

The following example consumes the volume that you created in the previous section:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: iis-server
      image: mcr.microsoft.com/windows/servercore/iis
      ports:
      - containerPort: 80
      volumeMounts:
      - mountPath: /var/lib/www/html
        name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: podpvc-windows
        readOnly: false
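
Save the manifest and create the Pod. The filename pod-windows-example.yaml below is illustrative:

kubectl create -f pod-windows-example.yaml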

Using the Compute Engine persistent disk CSI Driver with non-default filesystem types

The default filesystem type for Compute Engine persistent disks in GKE is ext4. You can also use the xfs filesystem type as long as your node image supports it. See Storage driver support for a list of supported drivers by node image.

The following example shows you how to use xfs instead of the default ext4 filesystem type with the Compute Engine persistent disk CSI Driver.

Create a StorageClass

  1. Save the following manifest as a YAML file named pd-xfs-class.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: xfs-class
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-standard
      csi.storage.k8s.io/fstype: xfs
    volumeBindingMode: WaitForFirstConsumer
    
  2. Apply the manifest:

    kubectl apply -f pd-xfs-class.yaml
    

Create a PersistentVolumeClaim

  1. Save the following manifest as pd-xfs-pvc.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: xfs-pvc
    spec:
      storageClassName: xfs-class
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    
  2. Apply the manifest:

    kubectl apply -f pd-xfs-pvc.yaml
    

Create a Pod that consumes the volume

  1. Save the following manifest as pd-xfs-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pd-xfs-pod
    spec:
      containers:
      - name: cloud-sdk
        image: google/cloud-sdk:slim
        args: ["sleep","3600"]
        volumeMounts:
        - mountPath: /xfs
          name: xfs-volume
      volumes:
      - name: xfs-volume
        persistentVolumeClaim:
          claimName: xfs-pvc
    
  2. Apply the manifest:

    kubectl apply -f pd-xfs-pod.yaml
    

Verify that the volume was mounted correctly

  1. Open a shell session in the Pod:

    kubectl exec -it pd-xfs-pod -- /bin/bash
    
  2. Look for xfs partitions:

    df -aTh --type=xfs
    

    The output should be similar to the following:

    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/sdb       xfs    30G   63M   30G   1% /xfs
    

Known issues

Due to a 128-character restriction in the CSI Node ID Spec and the way GKE generates instance names, installing the Compute Engine persistent disk CSI Driver might fail on new and existing GKE clusters for certain node pools. For more information, see this GitHub issue.

This issue is fixed in the following versions:

  • 1.16.15-gke.1700 and later
  • 1.17.9-gke.6300 and later
  • 1.18.6-gke.4801 and later

If you are using a cluster with an earlier version, upgrade to one of the listed versions to resolve the issue.
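
To check which versions your cluster's control plane and nodes are currently running, you can query the cluster with gcloud; the currentMasterVersion and currentNodeVersion fields come from the GKE API:

gcloud container clusters describe CLUSTER-NAME \
    --format="value(currentMasterVersion,currentNodeVersion)"

Replace CLUSTER-NAME with the name of your cluster.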

What's next