Google Distributed Cloud can use several storage configurations, and provides interfaces for block and file storage management through the following Kubernetes objects:
Ephemeral Storage through Kubernetes volumes
Kubernetes Volume resources are storage units accessible to containers in a Pod. Ephemeral storage backs volume types such as emptyDir volumes.
Ephemeral storage types don't persist after a Pod ceases to exist. Use ephemeral storage for configuration information and as cache storage for applications.
Ephemeral storage types share and consume resources from the node's boot disk. You can manage your local ephemeral storage resources in the same way that you manage CPU and memory resources.
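To illustrate, the following is a minimal sketch of a Pod that mounts an emptyDir volume and declares ephemeral-storage requests and limits, which are managed like CPU and memory requests. The Pod name, image, and sizes are illustrative, not prescribed by Google Distributed Cloud:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox             # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
    resources:
      requests:
        ephemeral-storage: "1Gi"   # counts against the node's boot disk
      limits:
        ephemeral-storage: "2Gi"
  volumes:
  - name: scratch
    emptyDir: {}               # ephemeral; removed when the Pod is deleted
```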
Persistent Storage using PersistentVolume resources
A Kubernetes PersistentVolume is a resource that a Pod can use for durable storage. The lifetime of a persistent volume is independent of the lifetime of a Pod, so the disk and data in a persistent volume continue to exist as the cluster changes and as Pods are deleted and recreated. You can provision PersistentVolume resources dynamically through the PersistentVolumeClaim API, or a cluster administrator can create them explicitly.
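As a sketch of dynamic provisioning, a workload requests durable storage through a PersistentVolumeClaim, and the cluster creates and binds a matching PersistentVolume. The claim name, size, and storage class below are assumptions; substitute a class that exists in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                 # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # illustrative size
  storageClassName: standard-rwo   # assumed class; use one defined in your cluster
```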
Google Distributed Cloud can back persistent storage by using a variety of storage systems, including Container Storage Interface (CSI) drivers and local volumes.
Container Storage Interface (CSI) drivers
Google Distributed Cloud is compatible with CSI v1.0 drivers. CSI is an open standard interface that many major storage vendors support. For production storage, install a CSI driver from a GKE Enterprise Ready storage partner. For the full list, see GKE Enterprise Ready Storage Partners.
To use CSI in your cluster, deploy the CSI driver that your storage vendor provides to your clusters. Then, configure workloads to use the CSI driver through a StorageClass that references the driver, or set that StorageClass as the cluster default.
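A StorageClass for a CSI driver might look like the following sketch. The class name is illustrative, and csi.example.com is a placeholder for your vendor driver's actual provisioner name; the is-default-class annotation is the standard Kubernetes mechanism for marking the default StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-fast                   # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # optional: make this the default
provisioner: csi.example.com       # placeholder; use your vendor driver's provisioner name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```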
Local volumes
For proof of concept and advanced use cases, you can use local PersistentVolume resources. Google Distributed Cloud bundles the sig-storage-local-static-provisioner, which discovers mount points on each node and creates a local persistent volume for each mount point.
Google Distributed Cloud clusters use the local volume provisioner (LVP) to manage local persistent volumes. There are three types of storage classes for local persistent volumes in a Google Distributed Cloud cluster:
- LVP share
- LVP node mounts
- GKE Enterprise system
LVP share
This option creates a local persistent volume backed by subdirectories in a local, shared file system. Cluster creation automatically generates these subdirectories. Workloads using this storage class share capacity and input/output operations per second (IOPS) because the same shared file system backs their persistent volumes. For better isolation, configure disks through LVP node mounts.
For more information, see Configuring an LVP Share.
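As a rough sketch, the LVP share is set in the cluster configuration file. The path, storage class name, and PV count below are assumed defaults; verify the field names and values against the documentation for your product version:

```yaml
# Sketch of an LVP share section in the cluster configuration file.
# All values are assumed defaults; verify for your version.
storage:
  lvpShare:
    path: /mnt/localpv-share        # shared file system path on each node
    storageClassName: local-shared  # class that workloads reference in their claims
    numPVUnderSharedPath: 5         # number of subdirectory-backed PVs to create
```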
LVP node mounts
This option creates a local persistent volume for each mounted disk in the configured directory. You must format and mount each disk before or after cluster creation.
For more information, see Configuring LVP node mounts.
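As a sketch, a stateful workload can claim one of the per-disk volumes through the node-mount storage class. The claim name and size are illustrative, and local-disks is an assumed default class name; verify it in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data-claim             # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi              # illustrative; bounded by the mounted disk's size
  storageClassName: local-disks   # assumed default class name for LVP node mounts
```

Because each backing volume is local to one node, the Pod that binds the claim is scheduled to the node that holds the disk.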
GKE Enterprise system
This storage class creates preconfigured local persistent volumes during cluster creation that the GKE Enterprise system pods use. The storage class name is anthos-system. Do not change or delete this storage class, and do not use this storage class for stateful workloads.
What's next
- Learn more about volumes.
- Learn more about Container Storage Interface in Kubernetes.
- Learn how to take volume snapshots.
- Learn how to increase the capacity of persistent volumes.