Create clones of persistent volumes

This document shows you how to use Kubernetes volume cloning to clone
persistent volumes in your Google Kubernetes Engine (GKE) clusters.
Overview
A clone is a new independent volume that is a duplicate of an existing
Kubernetes volume. A clone is similar to a volume snapshot
in that it's a copy of a volume at a specific point in time. However,
rather than creating a snapshot object from the source volume, volume cloning
provisions the clone with all the data from the source volume.
Requirements
To use volume cloning on GKE, you must meet the following
requirements:
- The source PersistentVolumeClaim must be in the same namespace as the
  destination PersistentVolumeClaim.
- Use a CSI driver that supports volume cloning. The in-tree persistent disk
  driver does not support volume cloning. The Compute Engine persistent disk
  CSI Driver supports volume cloning in version 1.4.0 and later, and is
  installed by default on new Linux clusters running GKE version 1.22 or
  later. You can also enable the Compute Engine persistent disk CSI Driver on
  an existing cluster.

To verify the Compute Engine persistent disk CSI Driver version, run the
following command:

    kubectl describe daemonsets pdcsi-node --namespace=kube-system | grep "gke.gcr.io/gcp-compute-persistent-disk-csi-driver"

If the output shows a version earlier than 1.4.0, manually upgrade your
control plane to get the latest version.

Limitations

- Both volumes must use the same volume mode. By default, GKE provisions
  volumes with the Filesystem volume mode and an ext4 filesystem.
- All restrictions for creating a disk clone from an existing disk on
  Compute Engine also apply to GKE.
- You can create a regional disk clone from a zonal disk, but you should be
  aware of the restrictions of this approach.
- Cloning must be done in a compatible zone. Use allowedTopologies in the
  StorageClass to restrict the topology of provisioned volumes to specific
  zones. Alternatively, you can use node selectors or affinity and
  anti-affinity rules to constrain a Pod to run only on nodes in a compatible
  zone (see the StorageClass sketch after this list).
  - For zonal to zonal cloning, the zone of the clone must match the zone of
    the source disk.
  - For zonal to regional cloning, one of the replica zones of the clone must
    match the zone of the source disk.
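For example, the following StorageClass sketch restricts provisioning to a
single zone so that a clone lands in the same zone as its zonal source disk.
The StorageClass name, the pd-balanced disk type, and the us-central1-a zone
are assumptions for illustration; adjust them to match your source disk.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: zonal-clone-example      # assumed name for illustration
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-balanced              # illustrative disk type
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
    - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
        - us-central1-a              # assumed zone; use the zone of your source disk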
Using volume cloning
To provision a volume clone, you reference an existing PersistentVolumeClaim
in the same namespace in the dataSource field of a new PersistentVolumeClaim.
The following exercise shows you how to provision a source volume with data,
create a volume clone, and consume the clone.
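As a minimal sketch of the mechanism before the full exercise, the only
addition to an otherwise ordinary PersistentVolumeClaim spec is the dataSource
stanza shown below; the complete manifest appears in Clone the source volume
later in this document. PVC_NAME stands for the name of the source claim.

    spec:
      dataSource:
        name: PVC_NAME                # the source PersistentVolumeClaim to clone
        kind: PersistentVolumeClaim   # a snapshot restore would use kind: VolumeSnapshot instead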
Create a source volume
To create a source volume, follow the instructions in
Using the Compute Engine persistent disk CSI Driver for Linux clusters
to create a StorageClass, a PersistentVolumeClaim, and a Pod to consume the
new volume. You'll use the PersistentVolumeClaim that you create as the source
for the volume clone.
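If you want a sense of what those resources look like without opening the
linked guide, the following is a minimal sketch. The names podpvc and
web-server and the mount path /var/lib/www/html match what the later steps in
this document assume; the StorageClass name, disk type, and requested size are
illustrative assumptions, not necessarily the exact values from the linked
guide.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: source-sc-example        # illustrative; any PD CSI StorageClass works
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-balanced
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: podpvc                   # source PVC that the clone references later
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: source-sc-example
      resources:
        requests:
          storage: 6Gi               # illustrative size
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-server               # Pod name used in the later kubectl exec steps
    spec:
      containers:
      - name: web-server
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: podpvc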
Add a test file to the source volume
Add a test file to the source volume. You can look for this test file in the
volume clone to verify that cloning was successful.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Create clones of persistent volumes\n\n[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\nThis document shows you how to use Kubernetes volume cloning to clone\n[persistent volumes](/kubernetes-engine/docs/concepts/persistent-volumes)\nin your Google Kubernetes Engine (GKE) clusters.\n\nOverview\n--------\n\nA clone is a new independent volume that is a duplicate of an existing\nKubernetes volume. A clone is similar to a [volume snapshot](/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots)\nin that it's a copy of a volume at a specific point in time. However,\nrather than creating a snapshot object from the source volume, volume cloning\nprovisions the clone with all the data from the source volume.\n\nRequirements\n------------\n\nTo use volume cloning on GKE, you must meet the following\nrequirements:\n\n- The source PersistentVolumeClaim must be in the same namespace as the destination PersistentVolumeClaim.\n- Use a CSI driver that supports volume cloning. The in-tree persistent disk driver does not support volume cloning.\n - The [Compute Engine persistent disk CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) version 1.4.0 and later supports volume cloning, and is installed by default on new Linux clusters running GKE version 1.22 or later. You can also [enable the Compute Engine persistent disk CSI Driver on an existing cluster](/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#enabling_the_on_an_existing_cluster).\n\nTo verify the Compute Engine persistent disk CSI Driver version, run the following command in the\ngcloud CLI: \n\n kubectl describe daemonsets pdcsi-node --namespace=kube-system | grep \"gke.gcr.io/gcp-compute-persistent-disk-csi-driver\"\n\nIf the output shows a version earlier than `1.4.0`,\n[manually upgrade your control plane](/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrade_cp)\nto get the latest version.\n\nLimitations\n-----------\n\n- Both volumes must use the same [volume mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-mode). By default, GKE sets the VolumeMode to `ext4`.\n- All [restrictions for creating a disk clone from an existing disk](/compute/docs/disks/create-disk-from-source#restrictions) on Compute Engine also apply to GKE.\n- You can create a regional disk clone from a zonal disk, but you should be aware of the [restrictions of this approach](/compute/docs/disks/create-disk-from-source#restrictions_2).\n- Cloning must be done in a compatible zone. Use [allowedTopologies](https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies) to restrict the topology of provisioned volumes to specific zones. 
Alternatively, [nodeSelectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) or [Affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) can be used to constrain a Pod so that it is restricted to run on particular node that runs in a compatible zone.\n - For zonal to zonal cloning, the clone zone must match the source disk zone.\n - For zonal to regional cloning, one of the replica zones of the clone must match the zone of the source disk.\n\nUsing volume cloning\n--------------------\n\nTo provision a volume clone, you add a reference to an existing\nPersistentVolumeClaim in the same namespace to the `dataSource` field of a\nnew PersistentVolumeClaim. The following exercise shows you how to provision\na source volume with data, create a volume clone, and consume the clone.\n\n### Create a source volume\n\nTo create a source volume, follow the instructions in\n[Using the Compute Engine persistent disk CSI Driver for Linux clusters](/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#using_the_for_linux_clusters)\nto create a StorageClass, a PersistentVolumeClaim, and a Pod to consume the\nnew volume. You'll use the PersistentVolumeClaim that you create as the source\nfor the volume clone.\n\n### Add a test file to the source volume\n\nAdd a test file to the source volume. You can look for this test file in the\nvolume clone to verify that cloning was successful.\n\n1. Create a test file in a Pod:\n\n kubectl exec \u003cvar translate=\"no\"\u003ePOD_NAME\u003c/var\u003e \\\n -- sh -c 'echo \"Hello World!\" \u003e /var/lib/www/html/hello.txt'\n\n Replace \u003cvar translate=\"no\"\u003ePOD_NAME\u003c/var\u003e with the name of a Pod that consumes\n the source volume. For example, if you followed the instructions in\n [Using the Compute Engine persistent disk CSI Driver for Linux clusters](/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#using_the_for_linux_clusters),\n replace \u003cvar translate=\"no\"\u003ePOD_NAME\u003c/var\u003e with `web-server`.\n2. Verify that the file exists:\n\n kubectl exec \u003cvar translate=\"no\"\u003ePOD_NAME\u003c/var\u003e \\\n -- sh -c 'cat /var/lib/www/html/hello.txt'\n\n The output is similar to the following: \n\n Hello World!\n\n### Clone the source volume\n\n1. 
Save the following manifest as `podpvc-clone.yaml`:\n\n kind: PersistentVolumeClaim\n apiVersion: v1\n metadata:\n name: podpvc-clone\n spec:\n dataSource:\n name: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003ePVC_NAME\u003c/span\u003e\u003c/var\u003e\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n storageClassName: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eSTORAGE_CLASS_NAME\u003c/span\u003e\u003c/var\u003e\n resources:\n requests:\n storage: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eSTORAGE\u003c/span\u003e\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003ePVC_NAME\u003c/var\u003e: the name of the source PersistentVolumeClaim that you created in [Create a source volume](#create-source).\n - \u003cvar translate=\"no\"\u003eSTORAGE_CLASS_NAME\u003c/var\u003e: the name of the StorageClass to use, which must be the same as the StorageClass of the source PersistentVolumeClaim.\n - \u003cvar translate=\"no\"\u003eSTORAGE\u003c/var\u003e: the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim.\n2. Apply the manifest:\n\n kubectl apply -f podpvc-clone.yaml\n\n### Create a Pod that consumes the cloned volume\n\nThe following example creates a Pod that consumes the volume clone that\nyou created.\n\n1. Save the following manifest as `web-server-clone.yaml`:\n\n apiVersion: v1\n kind: Pod\n metadata:\n name: web-server-clone\n spec:\n containers:\n - name: web-server-clone\n image: nginx\n volumeMounts:\n - mountPath: /var/lib/www/html\n name: mypvc\n volumes:\n - name: mypvc\n persistentVolumeClaim:\n claimName: podpvc-clone\n readOnly: false\n\n2. Apply the manifest:\n\n kubectl apply -f web-server-clone.yaml\n\n3. Verify that the test file exists:\n\n kubectl exec web-server-clone \\\n -- sh -c 'cat /var/lib/www/html/hello.txt'\n\n The output is similar to the following: \n\n Hello World!\n\nClean up\n--------\n\nTo avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.\n\n1. Delete the `Pod` objects:\n\n kubectl delete pod \u003cvar translate=\"no\"\u003ePOD_NAME\u003c/var\u003e web-server-clone\n\n2. Delete the `PersistentVolumeClaim` objects:\n\n kubectl delete pvc podpvc podpvc-clone"]]
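As an optional final check (not part of the original steps), you can list the
remaining objects to confirm that the Pods and PersistentVolumeClaims were
deleted:

    # Neither command should list web-server, web-server-clone, podpvc, or
    # podpvc-clone after cleanup completes.
    kubectl get pods
    kubectl get pvc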