Last updated (UTC): 2025-09-04.

This page explains how to install a Container Storage Interface (CSI) storage
driver to clusters created using Google Distributed Cloud (software only) for VMware.

## Overview

[CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md)
is an open standard API that enables Kubernetes to expose arbitrary storage
systems to containerized workloads. When you deploy a CSI-compatible storage
driver to a cluster, the cluster can connect directly to a compatible storage
device without going through vSphere storage.

Kubernetes volumes are managed by vendor-specific storage drivers, which have
historically been
[compiled into Kubernetes binaries](https://kubernetes.io/docs/concepts/storage/).
Previously, you could not use a storage driver that was not included with
Kubernetes. Installing a CSI driver adds support for a storage system that
Kubernetes does not support natively. CSI also enables modern storage
features, such as snapshots and volume resizing.

To use a CSI driver, you create a Kubernetes
[StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)
and set the CSI driver as the provisioner for the StorageClass.
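At its core, that relationship is a single field in the StorageClass manifest. A minimal sketch, using the fictional driver name `csi.example.com` and a hypothetical class name `example-sc` (substitute the name that your vendor's driver registers):

```yaml
# Minimal sketch; csi.example.com and example-sc are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc
provisioner: csi.example.com # must match the installed CSI driver's registered name
```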
Then, you can
[set the StorageClass as the cluster's default](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/default-storage-class),
or configure your workloads to use the StorageClass ([StatefulSet example](#example_statefulset)).

## Before you begin

By default, Google Distributed Cloud uses vSphere datastores through the built-in
[vsphereVolume](https://kubernetes.io/docs/concepts/storage/volumes/#vspherevolume)
driver. Additionally, the built-in drivers for NFS and iSCSI can attach and
mount existing volumes to your workloads.

| **Note:** The built-in [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) and [iSCSI](https://kubernetes.io/docs/concepts/storage/volumes/#iscsi) drivers cannot provision new volumes. For dynamic provisioning, you need a CSI storage driver specific to your storage appliance.

## Installing a vendor's CSI driver

Storage vendors develop their own CSI drivers, and they are responsible for
providing installation instructions. In simple cases, installation might only
involve deploying manifests to your clusters. See the list of
[CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html) in the
CSI documentation.

| **Note:** Google does not provide support for, or instructions for installing, vendors' drivers.

## Verifying a driver installation

After you install a CSI driver, verify the installation by running the
following command, replacing `KUBECONFIG` with the path to your cluster's
kubeconfig file:

```
kubectl get csidrivers --kubeconfig KUBECONFIG
```

## Using a CSI driver

To use a CSI driver:

1. Create a Kubernetes [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)
   that references the driver in its `provisioner` field.

2. Provision storage in one of two ways:

   - Reference the StorageClass in the `volumeClaimTemplates` specification of a [StatefulSet](/kubernetes-engine/docs/concepts/statefulset) object.
   - [Set it as the cluster's default StorageClass](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/default-storage-class).

## Considerations for StorageClasses backed by a CSI driver

When you create a StorageClass, consider the following:

- The CSI driver's documentation should list the [driver-specific parameters](https://kubernetes.io/docs/concepts/storage/storage-classes/#the-storageclass-resource) that you provide to your StorageClass, including the provisioner name.
- Name the StorageClass after its properties, rather than after the specific driver or appliance behind it. Naming the StorageClass after its properties lets you create StorageClasses with the same name across multiple clusters and environments, and lets your applications get storage with the same properties across clusters.

## Example: Reference StorageClass in a StatefulSet

The following example models how to define a CSI driver in a StorageClass, and
then reference the StorageClass in a [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
workload.
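Note that a StatefulSet is not the only consumer: an ordinary PersistentVolumeClaim can also name the class directly in its `storageClassName` field. A minimal sketch, assuming a class named `premium-rwo` like the one defined in the example (`data-claim` and the 10Gi request are hypothetical):

```yaml
# Hypothetical standalone claim; premium-rwo must be an existing StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: premium-rwo
```

A Pod can then mount the resulting volume by referencing `data-claim` in a `persistentVolumeClaim` volume entry.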
The example assumes the driver has already been installed on the cluster.

Below is a simple StorageClass named `premium-rwo` that uses a fictional CSI
driver, `csi.example.com`, as its provisioner:

premium-rwo-sc.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-rwo
provisioner: csi.example.com # CSI driver
parameters: # You provide vendor-specific parameters to this specification
  type: example-parameter # Be sure to follow the vendor's instructions
  datastore: my-datastore
reclaimPolicy: Retain
allowVolumeExpansion: true
```

When you reference a StorageClass in a StatefulSet's `volumeClaimTemplates`
specification, Kubernetes provides stable storage using PersistentVolumes (PVs).
Kubernetes calls the provisioner defined in the StorageClass to create a new
storage volume. In this case, Kubernetes calls the fictional `csi.example.com`
provisioner, which calls out to the storage provider's API to create a volume.
After the volume is provisioned, Kubernetes automatically creates a PV to
represent the storage.

Here is a simple StatefulSet that references the StorageClass:

statefulset.yaml

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # This is the specification in which you reference the StorageClass
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: premium-rwo # This field references the existing StorageClass
```

## What's next

- [Read more about Google Distributed Cloud storage concepts](/kubernetes-engine/distributed-cloud/vmware/docs/concepts/storage)
- [Set a default StorageClass for your cluster](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/default-storage-class)