Persistent volumes and dynamic provisioning


This page provides an overview of persistent volumes and claims in Kubernetes, and their use with Google Kubernetes Engine (GKE). This page focuses on storage backed by Compute Engine persistent disks.

PersistentVolumes

PersistentVolume resources are used to manage durable storage in a cluster. In GKE, a PersistentVolume is typically backed by a persistent disk. You can also use other storage solutions such as NFS. Filestore is an NFS solution on Google Cloud. To learn how to set up a Filestore instance as an NFS PV solution for your GKE clusters, see Access Filestore instances with the Filestore CSI driver in the Filestore documentation.

Another storage option is Cloud Volumes Service. This product is a fully managed, cloud-based data storage service that provides advanced data management capabilities and highly scalable performance. For examples, see Cloud Volumes Service for Google Cloud.

The PersistentVolume lifecycle is managed by Kubernetes. A PersistentVolume can be dynamically provisioned; you do not have to manually create and delete the backing storage.

PersistentVolume resources are cluster resources that exist independently of Pods. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated. PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator.

To learn more about PersistentVolume resources, refer to the Kubernetes Persistent Volumes documentation and the Persistent Volumes API reference.

PersistentVolumeClaims

A PersistentVolumeClaim is a request for and claim to a PersistentVolume resource. PersistentVolumeClaim objects request a specific size, access mode, and StorageClass for the PersistentVolume. If a PersistentVolume that satisfies the request exists or can be provisioned, the PersistentVolumeClaim is bound to that PersistentVolume.

Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for the Pod.

Portability is another advantage of using PersistentVolumes and PersistentVolumeClaims. You can easily use the same Pod specification across different clusters and environments because PersistentVolume is an interface to the actual backing storage.

StorageClasses

Volume implementations such as the Compute Engine persistent disk Container Storage Interface (CSI) driver are configured through StorageClass resources.

GKE creates a default StorageClass for you that uses the balanced persistent disk type (pd-balanced) with the ext4 filesystem. The default StorageClass is used when a PersistentVolumeClaim doesn't specify a storageClassName. You can replace the provided default StorageClass with your own. For instructions, see Change the default StorageClass.

You can create your own StorageClass resources to describe different classes of storage. For example, classes might map to quality-of-service levels, or to backup policies. This concept is sometimes called "profiles" in other storage systems.

If you are using a cluster with Windows node pools, you must create a StorageClass and specify a StorageClassName in the PersistentVolumeClaim because the default fstype (ext4) is not supported with Windows. If you are using a Compute Engine persistent disk, you must use NTFS as the file storage type.
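
As a minimal sketch of what such a StorageClass might look like for Windows node pools, the following manifest passes NTFS as the fstype parameter to the Compute Engine persistent disk CSI driver. The class name storageclass-windows is a placeholder, not a name GKE creates for you.

# storageclass-windows.yaml (illustrative sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-windows   # placeholder name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  # NTFS filesystem for Windows node pools instead of the default ext4
  csi.storage.k8s.io/fstype: NTFS
volumeBindingMode: WaitForFirstConsumer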

When defining a StorageClass, you must list a provisioner. On GKE, we recommend that you use one of the following provisioners:

  • pd.csi.storage.gke.io: The Compute Engine persistent disk CSI driver.
  • filestore.csi.storage.gke.io: The Filestore CSI driver.

Dynamically provision PersistentVolumes

Most of the time, you don't need to directly configure PersistentVolume objects or create Compute Engine persistent disks. Instead, you can create a PersistentVolumeClaim and Kubernetes automatically provisions a persistent disk for you.

The following manifest describes a request for a disk with 30 gibibytes (GiB) of storage whose access mode allows it to be mounted as read-write by a single node. It also creates a Pod that consumes the PersistentVolumeClaim as a volume.

# pvc-pod-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: standard-rwo
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-demo
spec:
  volumes:
    - name: pvc-demo-vol
      persistentVolumeClaim:
        claimName: pvc-demo
  containers:
    - name: pod-demo
      image: nginx
      resources:
        limits:
          cpu: 10m
          memory: 80Mi
        requests:
          cpu: 10m
          memory: 80Mi
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pvc-demo-vol

When you apply this manifest with kubectl apply -f pvc-pod-demo.yaml, Kubernetes creates the PersistentVolumeClaim and Pod objects, and dynamically provisions a corresponding PersistentVolume.

Because the storage class standard-rwo uses volume binding mode WaitForFirstConsumer, the PersistentVolume will not be created until a Pod is scheduled to consume the volume.

The following example shows the resulting PersistentVolume.

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: pd.csi.storage.gke.io
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/pd-csi-storage-gke-io
  name: pvc-c9a44c07-cffa-4cd8-b92b-15bac9a9b984
  uid: d52af557-edf5-4f96-8e89-42a3008209e6
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-demo
    namespace: default
    uid: c9a44c07-cffa-4cd8-b92b-15bac9a9b984
  csi:
    driver: pd.csi.storage.gke.io
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1660085000920-8081-pd.csi.storage.gke.io
    volumeHandle: projects/xxx/zones/us-central1-c/disks/pvc-c9a44c07-cffa-4cd8-b92b-15bac9a9b984
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.gke.io/zone
          operator: In
          values:
          - us-central1-c
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard-rwo
  volumeMode: Filesystem
status:
  phase: Bound

Assuming that you haven't replaced the storage class standard-rwo, this PersistentVolume is backed by a new, empty Compute Engine persistent disk.

Deleting persistent storage

By default, deleting a PersistentVolumeClaim for dynamically provisioned volumes like Compute Engine persistent disk will delete both the PersistentVolume object and the actual backing disk. This behavior is controlled by the reclaim policy in the StorageClass and PersistentVolume. For full details, see the open-source Kubernetes documentation.
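
If you want the backing disk to outlive the claim, you can create your own StorageClass with a Retain reclaim policy. The following is a minimal sketch; the class name standard-rwo-retain is a placeholder.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwo-retain    # placeholder name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
# With Retain, deleting the PersistentVolumeClaim leaves the PersistentVolume
# (in the Released phase) and the backing persistent disk in place.
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer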

Access modes

PersistentVolume resources support the following access modes:

  • ReadWriteOnce: The volume can be mounted as read-write by a single node.
  • ReadOnlyMany: The volume can be mounted read-only by many nodes.
  • ReadWriteMany: The volume can be mounted as read-write by many nodes. PersistentVolume resources that are backed by Compute Engine persistent disks don't support this access mode.

Using Compute Engine persistent disks as ReadOnlyMany

ReadWriteOnce is the most common use case for persistent disks and works as the default access mode for most applications. Compute Engine persistent disks also support ReadOnlyMany mode so that many applications or many replicas of the same application can consume the same disk at the same time. An example use case is serving static content across multiple replicas.

For instructions, refer to Use persistent disks with multiple readers.

Use pre-existing persistent disks as PersistentVolumes

Dynamically provisioned PersistentVolume resources are empty when they are created. If you have an existing Compute Engine persistent disk populated with data, you can introduce it to your cluster by manually creating a corresponding PersistentVolume resource. The persistent disk must be in the same zone as the cluster nodes.

Refer to this example of how to create a PersistentVolume backed by a pre-existing persistent disk.
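
As a rough sketch of the general shape, a manifest like the following defines a PersistentVolume that points at an existing disk through the Compute Engine persistent disk CSI driver, plus a PersistentVolumeClaim that binds to it. PROJECT_ID, ZONE, DISK_NAME, and the 100Gi capacity are placeholders that you would replace with the values of your existing disk.

# preexisting-disk.yaml (illustrative sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-existing-disk
spec:
  storageClassName: standard-rwo
  capacity:
    storage: 100Gi            # must match the size of the existing disk
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: pvc-existing-disk
  csi:
    driver: pd.csi.storage.gke.io
    # placeholder project, zone, and disk name
    volumeHandle: projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-existing-disk
spec:
  storageClassName: standard-rwo
  volumeName: pv-existing-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi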

Deployments vs. StatefulSets

You can use a PersistentVolumeClaim in higher-level controllers such as Deployments, or volumeClaimTemplates in StatefulSets.

Deployments are designed for stateless applications, so all replicas of a Deployment share the same PersistentVolumeClaim. Because the replica Pods are identical to each other, only volumes with the ReadWriteMany access mode work in this setting.

Even Deployments with one replica using a ReadWriteOnce volume are not recommended. This is because the default Deployment strategy creates a second Pod before bringing down the first Pod during an update. The Deployment can deadlock: the second Pod can't start because the ReadWriteOnce volume is already in use, and the first Pod isn't removed because the second Pod has not yet started. Instead, use a StatefulSet with ReadWriteOnce volumes.

StatefulSets are the recommended method of deploying stateful applications that require a unique volume per replica. By using StatefulSets with PersistentVolumeClaim templates, you can have applications that scale up automatically with a unique PersistentVolumeClaim associated with each replica Pod.
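
As a minimal sketch of this pattern, the following StatefulSet gives each replica its own disk through a volumeClaimTemplate. The names web-demo and data, the nginx image, and the 10Gi size are illustrative placeholders.

# statefulset-demo.yaml (illustrative sketch)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-demo               # placeholder name
spec:
  serviceName: web-demo
  replicas: 3
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  # Each replica gets its own PersistentVolumeClaim created from this template.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard-rwo
        resources:
          requests:
            storage: 10Gi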

Regional persistent disks

Regional persistent disks are multi-zonal resources that replicate data between two zones in the same region, and can be used similarly to zonal persistent disks. In the event of a zonal outage, or if cluster nodes in one zone become unschedulable, Kubernetes can fail over workloads using the volume to the other zone. You can use regional persistent disks to build highly available solutions for stateful workloads on GKE. You must ensure that both the primary and failover zones are configured with enough resource capacity to run the workload.

Regional SSD persistent disks are an option for applications such as databases that require both high availability and high performance. For more details, see Block storage performance comparison.

As with zonal persistent disks, regional persistent disks can be dynamically provisioned as needed or manually provisioned in advance by the cluster administrator. To learn how to add regional persistent disks, see Provisioning regional persistent disks.
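
As a sketch, a StorageClass for dynamically provisioning regional persistent disks sets the replication-type parameter and can optionally restrict the zones used for replication. The class name regionalpd-class and the two zones below are placeholders.

# regionalpd-class.yaml (illustrative sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-class       # placeholder name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  # Replicate the disk across two zones in the same region.
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - us-central1-a      # placeholder zones
          - us-central1-b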

Zones in persistent disks

Zonal persistent disks are zonal resources and regional persistent disks are multi-zonal resources. When you add persistent storage to your cluster, unless a zone is specified, GKE assigns the disk to a single zone. GKE chooses the zone at random. Once a persistent disk is provisioned, any Pods referencing the disk are scheduled to the same zone as the disk.

Volume binding mode WaitForFirstConsumer

If you dynamically provision a persistent disk in your cluster, we recommend you set the WaitForFirstConsumer volume binding mode on your StorageClass. This setting instructs Kubernetes to provision a persistent disk in the same zone that the Pod gets scheduled to. It respects Pod scheduling constraints such as anti-affinity and node selectors. Anti-affinity on zones allows StatefulSet Pods to be spread across zones along with the corresponding disks.

Following is an example StorageClass for provisioning zonal persistent disks that sets WaitForFirstConsumer:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  csi.storage.k8s.io/fstype: ext4
volumeBindingMode: WaitForFirstConsumer

For an example using regional persistent disks, see Provisioning regional persistent disks.

What's next