Install additional CSI drivers

This page explains how to install a Container Storage Interface (CSI) storage driver on your GKE clusters.

CSI is an open standard API that enables Kubernetes to expose arbitrary storage systems to containerized workloads. CSI also enables the use of modern storage features, such as volume resizing and snapshots.

By default, GKE on Azure provisions Azure Disk volumes with the azuredisk-csi-driver. You can also provision Azure File shares with the azurefile-csi-driver. If you want to use another type of storage volume, you can install a CSI driver.

After installing a CSI driver, you need to create a Kubernetes StorageClass. You set the CSI driver as the provisioner for the StorageClass. Then, you can set the StorageClass as default, or configure your workloads to use the StorageClass.
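As a sketch, a StorageClass for a newly installed driver might look like the following. The provisioner value `csi.example.com` and the parameters are placeholders; use the values from your storage vendor's documentation:

```yaml
# Example StorageClass for a hypothetical CSI driver.
# Replace csi.example.com and the parameters with the values
# documented by your storage vendor.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.example.com
parameters:
  type: fast-tier              # driver-specific parameter; placeholder
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```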

Before you begin

Connect to your cluster

Install a vendor's CSI driver

Storage vendors are responsible for providing installation instructions for their CSI drivers. See the list of CSI drivers in the CSI documentation.

Follow the installation instructions for your CSI driver, and then continue with the next steps on this page.

Google does not provide support or instructions for third-party drivers. Contact your storage vendor for support.

Verify your driver installation

After you install a CSI driver, you can verify the installation by running the following command.

kubectl get csinodes \
    -o jsonpath='{range .items[*]} {.metadata.name}{": "} {range .spec.drivers[*]} {.name}{"\n"} {end}{end}'
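If the driver is registered, the output lists each node name followed by the CSI drivers installed on it. The node and driver names below are illustrative:

```
 gke-node-1:  disk.csi.azure.com
 gke-node-2:  disk.csi.azure.com
```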

Use a CSI driver

To use a CSI driver, do the following:

  1. Create a custom StorageClass which references the driver in its provisioner field.

  2. To provision storage, do one of the following:

     • Set the StorageClass as the cluster default, so that PersistentVolumeClaims that don't specify a storageClassName use it.

     • Configure your workloads to reference the StorageClass by name in their PersistentVolumeClaims.
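For example, a PersistentVolumeClaim that requests storage from a StorageClass named `fast` (a placeholder name) might look like this. Kubernetes dynamically provisions the volume through the CSI driver set as that StorageClass's provisioner:

```yaml
# Example PersistentVolumeClaim that provisions a volume through
# the CSI driver backing the "fast" StorageClass (placeholder name).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 10Gi
```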

Considerations for StorageClass backed by a CSI driver

When you create a StorageClass, consider the following:

  • Check your CSI driver documentation for driver-specific parameters that you provide to your StorageClass, including the provisioner name.

  • You should name the StorageClass after its properties (such as fast or highly-replicated), rather than after the name of the specific driver or appliance behind it. When you name a StorageClass after its properties, you can create StorageClasses with the same name in different clusters and environments. Then, configure your workloads to use the same StorageClass.
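If you choose to make the StorageClass the cluster default, you can set the standard default-class annotation on it. For example, where `fast` is a placeholder StorageClass name:

```
kubectl patch storageclass fast \
    --patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

Only one StorageClass should carry this annotation at a time; remove it from the previous default first.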

What's next?