This page explains Google Distributed Cloud storage concepts.
Summary
Google Distributed Cloud integrates with external block or file storage systems through VMware vSphere storage, Kubernetes in-tree volume plugins (or "drivers"), and Container Storage Interface (CSI) drivers.
Google Distributed Cloud uses a default Kubernetes StorageClass to provision storage for stateful workloads on a vSphere datastore. You can also create additional StorageClasses to provision other types of storage volumes.
vSphere storage
By default, Google Distributed Cloud uses vSphere storage. The admin cluster requires a pre-provisioned vSphere datastore for its etcd data.
When you create a user cluster, Google Distributed Cloud uses the vSphere Kubernetes volume plugin to dynamically provision new virtual machine disks (VMDKs) in a vSphere datastore. (Note that prior to version 1.2, user clusters used the same datastore as admin clusters.)
The vSphere datastores used by the admin and user clusters may be backed by NFS, vSAN, or VMFS on a block device, such as an external storage array. In a multi-host environment, each block device must be attached to all the hosts in the environment, and the datastore must be configured on each host via the Mount Datastore on Additional Hosts option.
Default storage
Google Distributed Cloud includes a default Kubernetes StorageClass, which determines how Kubernetes should provision storage. After Kubernetes provisions storage volumes, they are represented by Kubernetes PersistentVolumes.
The default StorageClass for a user cluster points to a vSphere datastore, which is set in the datastore field of the StorageClass configuration. By default, Kubernetes PersistentVolumes provisioned for the user cluster are VMDKs in that datastore. This is not necessarily the same datastore used by the admin cluster.
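As an illustrative sketch (not the exact manifest your cluster generates), a vSphere-backed StorageClass using the in-tree provisioner looks roughly like this; the datastore name `my-datastore` is a placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Marks this StorageClass as the cluster default.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
  # Placeholder: the vSphere datastore in which VMDKs are provisioned.
  datastore: my-datastore
  diskformat: thin
```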
In Google Distributed Cloud, Kubernetes StatefulSets (stateful workloads that typically require persistent storage) use PersistentVolumeClaims backed by StorageClasses that point to vSphere storage by default.
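For example, a StatefulSet typically requests storage through a volumeClaimTemplate; when the claim omits storageClassName, it is bound using the cluster's default StorageClass. This is a minimal sketch, not a manifest that ships with Google Distributed Cloud:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      # No storageClassName, so the default StorageClass provisions the volume.
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```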
Container Storage Interface
Container Storage Interface (CSI) is an open standard API that enables Kubernetes to expose arbitrary storage systems to containerized workloads. When you deploy a CSI-compatible volume driver to a Google Distributed Cloud cluster, workloads can connect directly to a compatible storage device without having to go through vSphere storage.
To use CSI in your cluster, you must deploy the CSI driver provided by your storage vendor. Then, you can configure workloads to use the driver's StorageClass, or set it as the default StorageClass.
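For example, assuming the vendor's driver installs a StorageClass named `vendor-sc` (a hypothetical name), you can make it the default with the standard Kubernetes annotation:

```shell
# Clear the default flag on the current default StorageClass.
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Mark the vendor's StorageClass (hypothetical name) as the new default.
kubectl patch storageclass vendor-sc \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```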
We have partnered with many storage vendors to qualify their storage systems with Google Distributed Cloud. See the full list of qualified storage partners.
vSphere CSI driver
By default, Google Distributed Cloud uses the in-tree volume plug-in from VMware, vSphere Cloud Provider (VCP), which automatically enables support for VMware datastores, including vSAN. The vSphere CSI driver is the vSphere volume driver implementation of the Container Storage Interface and is a component of VMware Cloud Native Storage. It is automatically deployed in Google Distributed Cloud and is generally available starting with version 1.7.
Volume expansion
Volume expansion is a beta feature in Kubernetes 1.20.
You can expand the size of a persistent volume after it has been provisioned by editing the capacity request in the PersistentVolumeClaim (PVC). You can do an online expansion while the volume is in use by a Pod, or an offline expansion where the volume is not in use.
For the vSphere CSI driver, offline expansion is available in vSphere versions >= 7.0, and online expansion is available in vSphere versions >= 7.0 Update 2.
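As a sketch, an expansion is triggered by raising the capacity request on an existing PVC; the claim name `my-pvc` and the sizes here are placeholders:

```shell
# Raise the capacity request to 20Gi; the CSI driver resizes the
# backing volume. If the volume is mounted by a Pod, this is an
# online expansion.
kubectl patch pvc my-pvc \
  -p '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'
```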
The vSphere CSI driver StorageClass standard-rwo, which is installed in user clusters automatically, sets allowVolumeExpansion to true by default for newly created clusters running on vSphere 7.0 or later. You can use both online and offline expansion for volumes that use this StorageClass. Because StorageClasses are not modified during cluster upgrades, when a cluster is upgraded from version 1.7 to 1.8, the allowVolumeExpansion setting in standard-rwo remains unset, which means volume expansion is not allowed.
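Because allowVolumeExpansion is a mutable field on a StorageClass, one way to enable expansion in that situation is to patch the StorageClass directly; this is a generic Kubernetes sketch, not a documented upgrade step:

```shell
# Allow expansion for volumes bound to standard-rwo.
kubectl patch storageclass standard-rwo \
  -p '{"allowVolumeExpansion": true}'
```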
For more information on volume expansion, see Using volume expansion.
CSI Volume snapshots
You can create snapshots of persistent storage by using the VolumeSnapshot and VolumeSnapshotClass resources. To use this feature on a CSI volume, the CSI driver must support volume snapshots, and the external-snapshotter sidecar container must be included in the CSI driver deployment.
For more information on volume snapshots, see Using volume snapshots.
The CSI snapshot controllers are deployed automatically when you create a cluster.
Starting in Google Distributed Cloud 1.8, v1 versions of the VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass objects are available. The v1beta1 versions are deprecated and will stop being served in a later release.
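A minimal sketch of the two resources using the v1 API; the driver name `csi.example.com` and the claim name `my-pvc` are placeholders for your actual CSI driver and workload:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshot-class
# Placeholder: must match the name of the installed CSI driver.
driver: csi.example.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: my-snapshot-class
  source:
    # Placeholder: the PVC whose volume is snapshotted.
    persistentVolumeClaimName: my-pvc
```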
Volume cleanup
When you delete a user cluster, volumes provisioned by the VMware in-tree volume plug-in might not be deleted; volumes provisioned by the vSphere CSI driver are not deleted. Confirm that all volumes, PVCs, and StatefulSets are deleted before you delete the cluster.
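For example, a quick way to verify that nothing remains before deleting the cluster:

```shell
# Check for leftover StatefulSets, claims, and volumes across all namespaces.
kubectl get statefulsets --all-namespaces
kubectl get pvc --all-namespaces
kubectl get pv
```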
Kubernetes in-tree volume plugins
Kubernetes ships with a number of in-tree volume plugins. You have the option to use any of these to provide block or file storage for your stateful workloads. In-tree plugins enable workloads to connect directly to storage without having to go through vSphere storage.
Whereas vSphere storage automatically provides dynamic provisioning of volumes inside a datastore backed by any iSCSI, FC, or NFS storage device, many of the in-tree plugins don't support dynamic provisioning. They require that you manually create PersistentVolumes.
The following table describes several in-tree volume plugins:
| In-tree volume plugin | Description | Supported access modes | Dynamic provisioning |
|---|---|---|---|
| Fibre Channel | Generic storage plugin | Read/write, single Pod | No |
| iSCSI | Generic storage plugin | Read/write, single Pod | No |
| NFS | Generic storage plugin | Read/write, multiple Pods | No |
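For example, because the NFS plugin does not support dynamic provisioning, you create the PersistentVolume yourself; the server address and export path below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    # Placeholders: your NFS server and its exported path.
    server: nfs.example.com
    path: /exports/data
```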
Configuring cluster storage
If you want to provision storage volumes other than vSphere datastores, you can create a new StorageClass in a cluster that uses a different storage driver. Then, you can set the StorageClass as the cluster's default, or configure your workloads to use the StorageClass (StatefulSet example).
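A sketch of what that might look like, with a hypothetical CSI provisioner name and driver-specific parameters; workloads then select the class through spec.storageClassName:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
# Placeholder: the provisioner name registered by your CSI driver.
provisioner: csi.example.com
parameters:
  # Hypothetical driver-specific parameter.
  type: ssd
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  # Select the new StorageClass explicitly instead of the default.
  storageClassName: fast-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
```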