GKE On-Prem integrates with external block or file storage systems through VMware vSphere storage, Kubernetes in-tree volume plugins, and Container Storage Interface (CSI) drivers.
vSphere storage
By default, both GKE On-Prem admin and user clusters use vSphere storage. The admin cluster requires a pre-provisioned VMware datastore for its etcd data.
When you create a user cluster, GKE On-Prem uses the vSphere Kubernetes volume plugin to dynamically provision a new virtual machine disk (VMDK) in the same VMware datastore used by the admin cluster. This VMDK holds the user cluster's etcd data.
The default StorageClass for a user cluster points to a VMware datastore. By default, Kubernetes PersistentVolumes provisioned for the user cluster are VMDKs in that datastore, which is not necessarily the same datastore used by the admin cluster.
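For example, a PersistentVolumeClaim that does not name a StorageClass is served by the cluster's default StorageClass, so the resulting volume is a VMDK in that datastore. This is a minimal sketch; the claim name and size are illustrative:

```yaml
# Minimal sketch: a PVC with no storageClassName falls back to the
# cluster's default StorageClass, which points to a vSphere datastore.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```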
The VMware datastores used by the admin and user clusters may be backed by NFS, vSAN, or VMFS on a block device, such as an external storage array. In a multi-host environment, each block device must be attached to all the hosts in the environment, and the datastore must be configured on each host via the Mount Datastore on Additional Hosts option.
In GKE On-Prem, StatefulSets use PersistentVolumeClaims backed by StorageClasses that point to vSphere storage.
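As a sketch of that pattern, a StatefulSet declares a volumeClaimTemplate, and Kubernetes creates one PersistentVolumeClaim per replica through the referenced StorageClass. The StorageClass name, labels, and image here are assumptions for illustration:

```yaml
# Sketch: each StatefulSet replica gets its own PVC, provisioned as a
# VMDK through the vSphere-backed StorageClass named below (assumed name).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # assumed vSphere-backed StorageClass
      resources:
        requests:
          storage: 1Gi
```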
Kubernetes in-tree volume plugins
Kubernetes ships with a number of in-tree volume plugins, any of which you can use to provide block or file storage for your stateful workloads. In-tree plugins let workloads connect directly to storage without going through vSphere storage.
Whereas vSphere storage dynamically provisions volumes in a datastore backed by any iSCSI, Fibre Channel, or NFS storage device, many of the in-tree plugins do not support dynamic provisioning; they require you to create PersistentVolumes manually.
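For instance, the NFS plugin has no dynamic provisioner, so you create the PersistentVolume by hand and bind it with a claim. The server address and export path below are placeholders:

```yaml
# Sketch of manual provisioning with the in-tree NFS plugin.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5      # placeholder NFS server address
    path: /exports/data   # placeholder export path
---
# A claim that binds to the PV above. The empty storageClassName keeps
# the default dynamic provisioner from intercepting the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
```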
The following table describes several in-tree volume plugins:
| In-tree volume plugin | Description | Supported access modes | Dynamic provisioning |
|---|---|---|---|
| Fibre Channel | Generic storage plugin | Read/write by a single Pod | No |
| iSCSI | Generic storage plugin | Read/write by a single Pod | No |
| NFS | Generic storage plugin | Read/write by multiple Pods | No |
| Ceph RBD | Open source software-defined storage | Read/write by a single Pod | Yes |
| CephFS | Open source software-defined storage | Read/write by multiple Pods | No |
| Portworx | Proprietary software-defined storage | Read/write by multiple Pods | Yes |
| Quobyte | Proprietary software-defined storage | Read/write by a single Pod | Yes |
| StorageOS | Proprietary software-defined storage | Read/write by a single Pod | Yes |
Container Storage Interface
Container Storage Interface (CSI) is a standard API that enables Kubernetes to expose arbitrary storage systems to containerized workloads. When a CSI-compatible volume driver is deployed in a Kubernetes cluster, workloads can connect directly to storage without having to go through vSphere storage.
GKE On-Prem ships with Kubernetes 1.12+, which supports CSI v0.3. Some storage drivers support this version of CSI, and you can use those drivers to provide block or file storage for your stateful workloads.
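As an illustration, workloads consume a CSI driver through a StorageClass whose provisioner field names the deployed driver. The provisioner value and parameter below are hypothetical; use the values published by your driver's vendor:

```yaml
# Sketch: a StorageClass that delegates provisioning to a CSI driver.
# The provisioner name is hypothetical and driver-specific.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-block
provisioner: csi.example.com   # hypothetical CSI driver name
parameters:
  fsType: ext4                 # driver-specific parameter (example)
```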
Future plans
In a future release, GKE On-Prem will ship with Kubernetes 1.13+, and you will be able to deploy CSI v1.0 drivers. This will enable direct integration with even more third-party storage systems. We will provide a list of recommended storage partners qualified by Google and verified to work well with GKE On-Prem.
We are developing advanced data management interfaces that will enable enterprise use cases such as application snapshot, backup, and restore. Stay tuned for more details.