Google Kubernetes Engine (GKE) provides a way for you to automatically deploy and manage the Kubernetes Filestore CSI driver in your clusters.
The Filestore CSI driver version is tied to Kubernetes minor version numbers, and is typically the latest driver available at the time that the Kubernetes minor version is released. The driver updates automatically when the cluster is upgraded to the latest GKE patch.
The Filestore CSI driver is enabled by default in Autopilot clusters.
Benefits
Using the Filestore CSI driver provides the following benefits:
- You can use volume snapshots with the Filestore CSI driver. CSI volume snapshots can be used to create Filestore backups, which create a differential copy of the file share, including all file data and metadata, in separate storage. You can restore this copy only to a new Filestore instance; restoring to an existing Filestore instance is not supported. To trigger Filestore backups with the CSI volume snapshot API, add a type:backup field in the volume snapshot class, as shown in the sketch after this list.
- You can use volume expansion with the Filestore CSI driver. Volume expansion lets you resize your volume's capacity.
- You can easily access existing Filestore instances by using pre-provisioned Filestore instances in Kubernetes workloads. You can also dynamically create or delete Filestore instances and use them in Kubernetes workloads with a StorageClass or a Deployment.
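As a sketch of that snapshot-class configuration: the driver name matches the provisioner name used throughout this page, while the metadata name is a placeholder you can change.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: filestore-backup-snapshot-class   # placeholder name
driver: filestore.csi.storage.gke.io
deletionPolicy: Delete
parameters:
  # The type:backup field tells the driver to create a Filestore backup
  # instead of a regular snapshot.
  type: backup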
Requirements
To use the Filestore CSI driver, your clusters must use GKE version 1.21 or later. The Filestore CSI driver is supported for clusters using Linux.
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Cloud Filestore API and the Google Kubernetes Engine API.
- Ensure that you have installed the Google Cloud CLI.
- Set up default Google Cloud CLI settings for your project by using one of the following methods:
  - Use gcloud init, if you want to be walked through setting project defaults.
  - Use gcloud config, to individually set your project ID, zone, and region.

gcloud init

- Run gcloud init and follow the directions:

  gcloud init

  If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

  gcloud init --console-only

- Follow the instructions to authorize the gcloud CLI to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
- Choose a default Compute Engine region.

gcloud config

- Set your default project ID:

  gcloud config set project PROJECT_ID

- Set your default Compute Engine region (for example, us-central1):

  gcloud config set compute/region COMPUTE_REGION

- Set your default Compute Engine zone (for example, us-central1-c):

  gcloud config set compute/zone COMPUTE_ZONE

- Update gcloud to the latest version:

  gcloud components update

By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
Enabling the Filestore CSI driver on a new cluster
To enable the driver on cluster creation, use the Google Cloud CLI or the Google Cloud console.
gcloud
gcloud container clusters create CLUSTER_NAME \
    --addons=GcpFilestoreCsiDriver \
    --cluster-version=VERSION

Replace the following:

- CLUSTER_NAME: the name of your cluster.
- VERSION: the GKE version number. You must select a version of 1.21 or higher to use this feature. Alternatively, you can use the --release-channel flag and specify a release channel.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Choose the Standard cluster mode, then click Configure.
Configure the cluster as desired.
From the navigation pane, under Cluster, click Features.
Select the Enable Filestore CSI driver checkbox.
Click Create.
After you enable the Filestore CSI driver, you can use the driver in Kubernetes volumes using the driver and provisioner name: filestore.csi.storage.gke.io.
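To confirm that the managed driver is running, one option is to list the cluster's CSIDriver objects with standard kubectl. This assumes the add-on registers a CSIDriver object under the provisioner name above:

kubectl get csidriver filestore.csi.storage.gke.io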
Enabling the Filestore CSI driver on an existing cluster
To enable the Filestore CSI driver in existing clusters, use the Google Cloud CLI or the Google Cloud console.
To enable the driver on an existing cluster, complete the following steps:
gcloud
gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=ENABLED
Replace CLUSTER_NAME with the name of the existing cluster.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.
Select the Enable Filestore CSI driver checkbox.
Click Save Changes.
Disabling the Filestore CSI driver
You can disable the Filestore CSI driver on an existing cluster by using the Google Cloud CLI or the Google Cloud console.
gcloud
gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=DISABLED
Replace CLUSTER_NAME with the name of the existing cluster.
Console
In the Google Cloud console, go to the Google Kubernetes Engine menu.
In the cluster list, click the name of the cluster you want to modify.
Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.
Clear the Enable Filestore CSI driver checkbox.
Click Save Changes.
Access a volume using the Filestore CSI driver
The following sections describe the typical process for using a Kubernetes volume backed by the Filestore CSI driver in GKE:
- Create a StorageClass
- Use a PersistentVolumeClaim to access the volume
- Create a Deployment that consumes the volume
Create a StorageClass
After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:
- standard-rwx, using the Basic HDD Filestore service tier
- premium-rwx, using the Basic SSD Filestore service tier
You can find the name of your installed StorageClass by running the following command:

kubectl get sc
You can also install a different StorageClass that uses the Filestore CSI driver by adding filestore.csi.storage.gke.io in the provisioner field.
Save the following manifest as filestore-example-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: standard
  network: default
From the manifest, consider the following parameter configuration:

- Setting volumeBindingMode to Immediate allows the provisioning of the volume to begin immediately. This is possible because Filestore instances are accessible from any zone, so GKE does not need to know the zone where the Pod is scheduled, in contrast with Compute Engine persistent disks. When set to WaitForFirstConsumer, GKE begins provisioning only after the Pod is scheduled. For more information, see VolumeBindingMode.
- Any supported Filestore tier can be specified in the tier parameter (for example, standard, premium, or enterprise).
- The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up. An example StorageClass combining these parameters appears after the next command.
To create a StorageClass resource based on the filestore-example-class.yaml manifest file, run the following command:

kubectl create -f filestore-example-class.yaml
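For instance, a custom StorageClass that combines the tier and network parameters might look like the following sketch. The name filestore-enterprise-example and the VPC name my-vpc are placeholders, and a non-default VPC still needs the firewall rules mentioned above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-enterprise-example   # placeholder name
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: enterprise   # any supported Filestore tier
  network: my-vpc    # placeholder name of a non-default VPC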
Use a PersistentVolumeClaim to access the volume
You can create a PersistentVolumeClaim
resource that references the
Filestore CSI driver's StorageClass
.
You can use either a pre-installed or custom StorageClass
.
The following example manifest file creates a
PersistentVolumeClaim
that references the StorageClass
named filestore-example
.
Save the following manifest file as pvc-example.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: filestore-example
  resources:
    requests:
      storage: 1Ti
To create a PersistentVolumeClaim resource based on the pvc-example.yaml manifest file, run the following command:

kubectl create -f pvc-example.yaml
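Because the filestore-example StorageClass uses Immediate binding, provisioning starts as soon as the claim is created. You can watch the claim move from Pending to Bound with standard kubectl (nothing here is Filestore-specific):

kubectl get pvc podpvc --watch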
Create a Deployment that consumes the volume
The following example Deployment manifest consumes the PersistentVolume
resource
named pvc-example.yaml
.
Multiple Pods can share the same PersistentVolumeClaim
resource.
Save the following manifest as filestore-example-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: podpvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: filestore-example
  resources:
    requests:
      storage: 1Ti
To create a Deployment based on the filestore-example-deployment.yaml manifest file, run the following command:

kubectl apply -f filestore-example-deployment.yaml
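One way to sanity-check that the replicas really share a single file share is to write a file through one Pod and read it back through another. POD_ONE and POD_TWO are placeholders for two Pod names returned by the first command:

# List the Deployment's Pods (uses the app=nginx label from the manifest).
kubectl get pods -l app=nginx

# Write through one Pod, then read through another.
kubectl exec POD_ONE -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl exec POD_TWO -- cat /usr/share/nginx/html/index.html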
Labeling Filestore instances
You can use labels to group related instances and store metadata about an instance. A label is a key-value pair that helps you organize your Filestore instances. You can attach a label to each resource, then filter the resources based on their labels.
You can provide labels by using the labels key in StorageClass.parameters.

A Filestore instance can be labeled with information about what PersistentVolumeClaim/PersistentVolume the instance was created for. Custom label keys and values must comply with the label naming convention.

See the Kubernetes storage class example to apply custom labels to the Filestore instance.
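As an illustration, the following StorageClass sketch passes two custom labels. The key1=value1,key2=value2 syntax is an assumption modeled on the driver's GitHub example, so verify the exact format there:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-labels-example   # placeholder name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
  network: default
  # Assumed comma-separated key=value syntax; confirm against the
  # storage class example linked above.
  labels: key1=value1,key2=value2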
Using fsgroup with Filestore volumes
Kubernetes uses fsGroup to change the permissions and ownership of a volume to match a user-requested fsGroup in the Pod's SecurityContext. An fsGroup is a supplemental group that applies to all containers in a Pod. You can apply an fsGroup to volumes provisioned by the Filestore CSI driver.
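As a minimal sketch, the following Pod sets an fsGroup on a volume backed by the podpvc claim created earlier; the Pod name and the group ID 2000 are arbitrary:

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example   # placeholder name
spec:
  securityContext:
    # Every container in the Pod runs with supplemental group 2000, and
    # the volume's group ownership is adjusted to match.
    fsGroup: 2000
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: podpvc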
What's next
- Learn how to use volume expansion.
- Learn how to use volume snapshots.
- Read more about the driver on GitHub.