This topic explains GKE on-prem's storage concepts.
GKE on-prem integrates with external block or file storage systems through VMware vSphere storage, Kubernetes in-tree volume plugins (or "drivers"), and Container Storage Interface (CSI) drivers.
GKE on-prem clusters use a default Kubernetes StorageClass to provision storage for stateful workloads on a vSphere datastore. You can also create additional StorageClasses to provision other kinds of storage volumes.
By default, GKE on-prem clusters use vSphere storage. The admin cluster requires a pre-provisioned vSphere datastore for its etcd data.
When you create a user cluster, GKE on-prem uses the vSphere Kubernetes volume plugin to dynamically provision new virtual machine disks (VMDKs) in a vSphere datastore. (Note that prior to version 1.2, user clusters used the same datastore as the admin cluster.)
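For example, a PersistentVolumeClaim that omits a storageClassName is served by the cluster's default StorageClass, so GKE on-prem dynamically provisions a new VMDK to back it. A minimal sketch, where the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce       # vSphere VMDKs support read/write from a single node
  resources:
    requests:
      storage: 5Gi        # size of the VMDK to provision
  # storageClassName omitted: the cluster's default StorageClass is used
```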
The vSphere datastores used by the admin and user clusters may be backed by NFS, vSAN, or VMFS on a block device, such as an external storage array. In a multi-host environment, each block device must be attached to all the hosts in the environment, and the datastore must be configured on each host via the Mount Datastore on Additional Hosts option.
GKE on-prem clusters include a default Kubernetes StorageClass, which determines how Kubernetes should provision storage. After Kubernetes provisions storage volumes, they are represented by Kubernetes PersistentVolumes.
The default StorageClass for a user cluster points to a vSphere datastore, which is set in the datastore field of the StorageClass's configuration. By default, Kubernetes PersistentVolumes provisioned for the user cluster are VMDKs in that datastore. This is not necessarily the same datastore used by the admin cluster.
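For reference, a StorageClass that uses the in-tree vSphere volume plugin looks roughly like the following sketch; the datastore name is an assumption for illustration:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/vsphere-volume   # in-tree vSphere volume plugin
parameters:
  datastore: my-user-datastore              # illustrative datastore name
  diskformat: thin                          # thin-provision the backing VMDK
```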
Kubernetes in-tree volume plugins
Kubernetes ships with a number of in-tree volume plugins. You can use any of these to provide block or file storage for your stateful workloads. In-tree plugins let workloads connect directly to storage without going through vSphere storage.
Whereas vSphere storage automatically provides dynamic provisioning of volumes inside a datastore backed by any iSCSI, FC, or NFS storage device, many of the in-tree plugins don't support dynamic provisioning. They require that you manually create PersistentVolumes.
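For example, because the NFS plugin does not support dynamic provisioning, you create the PersistentVolume yourself and point it at an existing export. A minimal sketch, where the server address and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # NFS supports read/write from multiple Pods
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.0.2.10     # placeholder NFS server address
    path: /exports/data    # placeholder export path
```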
The following table describes several in-tree volume plugins:
| In-tree volume plugin | Description | Supported access modes | Dynamic provisioning |
|---|---|---|---|
| Fibre Channel | Generic storage plugin | Read/write single Pod | No |
| iSCSI | Generic storage plugin | Read/write single Pod | No |
| NFS | Generic storage plugin | Read/write multiple Pods | No |
| Ceph RBD | Open source software-defined storage | Read/write single Pod | Yes |
| CephFS | Open source software-defined storage | Read/write multiple Pods | No |
| Portworx | Proprietary software-defined storage | Read/write multiple Pods | Yes |
| Quobyte | Proprietary software-defined storage | Read/write single Pod | Yes |
| StorageOS | Proprietary software-defined storage | Read/write single Pod | Yes |
Container Storage Interface
Container Storage Interface (CSI) is an open standard API that enables Kubernetes to expose arbitrary storage systems to containerized workloads. When you deploy a CSI-compatible volume driver to a GKE on-prem cluster, workloads can connect directly to a compatible storage device without having to go through vSphere storage.
GKE on-prem supports CSI v1.0. To use CSI in your cluster, you need to deploy the CSI driver provided by your storage vendor. Then, you can configure workloads to use the driver's StorageClass, or set it as the default StorageClass.
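As a sketch, once a vendor's CSI driver is installed, a StorageClass that references it might look like the following. The provisioner name and the tier parameter are hypothetical; the actual values come from your vendor's documentation. The storageclass.kubernetes.io/is-default-class annotation marks it as the cluster's default:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vendor-csi
  annotations:
    # Optional: make this the cluster's default StorageClass
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.com   # hypothetical vendor CSI driver name
parameters:
  tier: fast                   # hypothetical vendor-specific parameter
```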
Configuring cluster storage
If you want to provision storage volumes other than vSphere datastores, you can create a new StorageClass in a cluster that uses a different storage driver. Then, you can set the StorageClass as the cluster's default, or configure your workloads to use the StorageClass, as in the StatefulSet sketch that follows.
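A minimal sketch of a StatefulSet that requests volumes from a specific StorageClass through volumeClaimTemplates; all names here are illustrative, and vendor-csi refers to the hypothetical StorageClass sketched above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # illustrative name
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: vendor-csi   # illustrative StorageClass name
        resources:
          requests:
            storage: 1Gi
```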