Using volume expansion


In Google Kubernetes Engine (GKE) version 1.24 or later, you can use the Kubernetes volume expansion feature to change a persistent volume's capacity after its creation.

For more information on volume expansion, see the open source Kubernetes documentation.

Prerequisites

Volume expansion has the following prerequisites (a quick version check is sketched after this list):

  • If the volume you want to resize is managed by a CSI Driver:
    • Ensure the GKE cluster version is 1.16 or later. If the cluster has Windows node pools, ensure the GKE cluster version is 1.18 or later. If you are using the managed GKE Filestore CSI driver, the cluster version must be 1.21 or later.
    • Check your storage vendor's documentation to verify your CSI driver supports volume expansion. The Compute Engine Persistent Disk CSI driver and the Filestore CSI driver support volume expansion.
  • If the volume you want to resize is managed by an in-tree volume plugin:
    • Ensure the GKE cluster version is 1.11 or later. While GKE cluster versions 1.11-1.14 support expansion of volumes managed by in-tree plugins, they require all Pods using the volume to be terminated and recreated before the expansion completes.
    • Check your storage vendor's documentation to verify your in-tree volume plugin supports volume expansion (the Compute Engine Persistent Disk in-tree plugin does).
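
To confirm that your cluster meets these version requirements, you can check the control plane version; a minimal sketch using gcloud, where CLUSTER_NAME and LOCATION are placeholders for your cluster's name and location:

    gcloud container clusters describe CLUSTER_NAME \
        --location=LOCATION \
        --format="value(currentMasterVersion)"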

Using volume expansion

To use volume expansion, perform the following tasks:

  1. Add allowVolumeExpansion: true to your StorageClass if it doesn't already have the field. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
    provisioner: my.driver
    ...
    allowVolumeExpansion: true
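
    If you prefer not to edit the manifest by hand, you can also set the field non-interactively with kubectl patch; a minimal sketch, assuming the standard StorageClass shown above:

     kubectl patch storageclass standard \
         --patch '{"allowVolumeExpansion": true}'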
    
  2. Request a change in volume capacity by editing your PersistentVolumeClaim's spec.resources.requests.storage field, replacing pvc-name with the name of your claim:

     kubectl edit pvc pvc-name
    

    For example, you could change the following PVC from having a 30 gibibyte (GiB) disk to having a 40 GiB disk.

    Before editing:

     # pvc-demo.yaml
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: pvc-demo
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 30Gi
    

    After editing:

     # pvc-demo.yaml
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: pvc-demo
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 40Gi
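
    Alternatively, you can apply the same change without opening an editor; a minimal sketch using kubectl patch against the pvc-demo claim shown above:

     kubectl patch pvc pvc-demo \
         --patch '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'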
    
  3. Verify the change by viewing the PVC. To view it, run the following command:

    kubectl get pvc pvc-name -o yaml
    

    Eventually, you should see the new size reflected in the status.capacity field. For example:

    ...
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 40Gi
      storageClassName: standard
      volumeMode: Filesystem
      volumeName: pvc-078b7484-cc8d-4077-9bcb-2c17d8d4550c
    status:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 40Gi
    ...
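
    While the resize is in progress, the PVC's events and conditions describe its state; for example, a condition such as FileSystemResizePending can appear, indicating that Pods using the volume must be restarted to finish the file system resize. You can inspect both with:

     kubectl describe pvc pvc-name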
    

If you modify the capacity of a PersistentVolume directly, the container file system can become inconsistent with the declared capacity. To fix these issues, see Troubleshooting volume expansion changes.

Managing volume expansions in StatefulSets

If you need to increase the size of volumes used by Pods in a StatefulSet, adjust the spec.resources.requests.storage field of the PersistentVolumeClaims (PVCs) associated with the Pods. Attempting to modify the volumeClaimTemplates field directly in the StatefulSet object causes an error.
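
For example, to grow every volume of a hypothetical three-replica StatefulSet, you can patch each PVC in turn; a minimal sketch, assuming the default PVC naming convention of TEMPLATE_NAME-STATEFULSET_NAME-ORDINAL, with placeholder names data and web:

    # Patch the PVC of each replica ordinal (0..2) to the new size.
    for i in 0 1 2; do
      kubectl patch pvc "data-web-$i" \
          --patch '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'
    done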

Additionally, if you increase the replica count of the StatefulSet, it will still create PVCs of the original size. To permanently change the size of the volumes provisioned for the Pods managed by the StatefulSet, you must delete and recreate the StatefulSet object with the updated size specified in the volumeClaimTemplates field. Keep in mind that this process will result in the deletion of the old Pods and their corresponding PVCs. Depending on the ReclaimPolicy, the underlying storage might also be deleted.

To keep the original Pods running while adjusting the StatefulSet so that future replicas are provisioned with the new volume size, perform the following steps:

  1. Save the existing StatefulSet to a file:

    kubectl get statefulset statefulset-name -o yaml > sts-backup.yaml
    
  2. Remove the StatefulSet object from the cluster while keeping the Pods running as standalone Pods:

    kubectl delete sts statefulset-name --cascade=orphan
    
  3. Set the new volume size in the locally saved sts-backup.yaml file by editing the value of spec.volumeClaimTemplates.spec.resources.requests.storage.
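
    If you prefer to script this edit, a minimal sketch using yq v4 (an assumption; any YAML editor works) that targets the first claim template. Replace new-size with the target capacity, such as 40Gi:

     yq -i '.spec.volumeClaimTemplates[0].spec.resources.requests.storage = "new-size"' sts-backup.yaml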

  4. Recreate the StatefulSet in the cluster:

    kubectl apply -f sts-backup.yaml
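
    The recreated StatefulSet adopts the orphaned Pods that match its selector. You can confirm that the Pods were not restarted by checking that their AGE values are unchanged:

     kubectl get statefulset statefulset-name
     kubectl get pods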
    

What's next