Manually install a CSI driver


This page explains how to install a Container Storage Interface (CSI) storage driver on Google Kubernetes Engine (GKE) Standard clusters. This page doesn't apply to GKE Autopilot clusters, which automatically use the Compute Engine persistent disk CSI driver.

If you are using the Compute Engine persistent disk CSI driver in your Standard cluster, we recommend enabling automatic deployment of the driver so that GKE manages it for you, reducing your management overhead.

Overview

CSI is an open standard API that enables Kubernetes to expose arbitrary storage systems to containerized workloads. Kubernetes volumes are managed by vendor-specific storage drivers, which have historically been compiled into Kubernetes binaries. Previously, you could not use a storage driver that was not included with Kubernetes. Installing a CSI driver adds support for a storage system that is not natively supported by Kubernetes. Also, CSI enables the use of modern storage features, such as snapshots and resizing.

Installing a vendor's CSI driver

Storage vendors other than Google develop their own CSI drivers and are responsible for providing installation instructions. In simple cases, installation might only involve deploying manifests to your clusters. For a list of available drivers, see the list of CSI drivers in the CSI documentation.
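
For example, if a vendor packages its driver as a set of Kubernetes manifests, installation might be a single command. The URL below is a hypothetical placeholder; use the manifest location from your vendor's documentation:

kubectl apply -f https://vendor.example.com/csi-driver/deploy.yaml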

Verifying a driver installation

After you install a CSI driver, you can verify the installation by running one of the following commands, depending on your cluster's GKE version:

1.14+

kubectl get csinodes \
-o jsonpath='{range .items[*]} {.metadata.name}{": "} {range .spec.drivers[*]} {.name}{"\n"} {end}{end}'
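
The output lists each node followed by the names of the CSI drivers registered on it. For example, with the fictional csi.example.com driver used later on this page, the output is similar to the following (the node name is illustrative):

gke-cluster-1-default-pool-abc12345: csi.example.com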

1.13.x

kubectl get nodes \
-o jsonpath='{.items[*].metadata.annotations.csi\.volume\.kubernetes\.io\/nodeid}'
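
This command prints each node's csi.volume.kubernetes.io/nodeid annotation, whose value is a JSON map of registered driver names to node IDs. With the fictional csi.example.com driver installed, the output is similar to the following (the node ID is illustrative):

{"csi.example.com":"gke-cluster-1-default-pool-abc12345"}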

Using a CSI driver

To use a CSI driver:

  1. If driver installation doesn't create a StorageClass for you, create a Kubernetes StorageClass that references the driver in its provisioner field. Some CSI drivers deploy a StorageClass when you install them.

  2. To provision storage, you can either:

     • Reference the StorageClass in a StatefulSet's volumeClaimTemplates specification, as shown in the example later on this page.
     • Reference the StorageClass in a standalone PersistentVolumeClaim specification, as in the sketch after this list.
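
The following is a minimal sketch of the second option, a standalone PersistentVolumeClaim that requests storage from the premium-rwo StorageClass defined later on this page. The claim name my-claim and the requested size are placeholder values:

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim # Placeholder name
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
  storageClassName: premium-rwo # References the StorageClass by name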

Considerations for StorageClasses backed by a CSI driver

When you create a StorageClass, consider the following:

  • Your CSI driver's documentation should list the driver-specific parameters that you provide to your StorageClass, including the provisioner name.
  • Name the StorageClass after its properties rather than after the specific driver or appliance behind it. Naming by properties lets you create StorageClasses with the same name across multiple clusters and environments, so your applications get storage with the same properties wherever they run.

Example: Reference StorageClass in a StatefulSet

The following example shows how to define a CSI driver in a StorageClass, and then reference the StorageClass in a StatefulSet workload. The example assumes that the driver is already installed in the cluster.

The following is a simple StorageClass named premium-rwo that uses a fictional CSI driver, csi.example.com, as its provisioner:

# fast-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-rwo
provisioner: csi.example.com # CSI driver
parameters: # You provide vendor-specific parameters to this specification
  type: example-parameter # Be sure to follow the vendor's instructions
  datastore: my-datastore
reclaimPolicy: Retain
allowVolumeExpansion: true
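
To create the StorageClass in your cluster, apply the manifest:

kubectl apply -f fast-sc.yaml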

You reference the StorageClass in a StatefulSet's volumeClaimTemplates specification. When you do, Kubernetes provides stable storage by using PersistentVolumes: for each replica, Kubernetes calls the provisioner defined in the StorageClass to create a new storage volume. In this case, Kubernetes calls the fictional csi.example.com driver, which calls out to the storage system's API to create the volume. After the volume is provisioned, Kubernetes automatically creates a PersistentVolume to represent the storage.

Here is a simple StatefulSet that references the StorageClass:

# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # This is the specification in which you reference the StorageClass
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: premium-rwo # This field references the existing StorageClass
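
To deploy the workload and confirm that storage was provisioned, apply the manifest and then list the claims. For a StatefulSet, Kubernetes names each PersistentVolumeClaim by combining the volumeClaimTemplates name with the Pod name, so this example produces claims named www-web-0 and www-web-1:

kubectl apply -f statefulset.yaml
kubectl get persistentvolumeclaims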

What's next