Access Filestore instances with the Filestore CSI driver


The Filestore CSI driver is the primary way for you to use Filestore instances with Google Kubernetes Engine (GKE). The Filestore CSI driver provides a fully managed experience powered by the open source Google Cloud Filestore CSI driver.

The Filestore CSI driver version is tied to Kubernetes minor version numbers. The Filestore CSI driver version is typically the latest driver available at the time that the Kubernetes minor version is released. The drivers update automatically when the cluster is upgraded to the latest GKE patch.

Benefits

The Filestore CSI driver provides the following benefits:

  • You have access to fully managed NFS storage through the Kubernetes APIs (kubectl).

  • You can use the GKE Filestore CSI driver to dynamically provision your PersistentVolumes.

  • You can use volume snapshots with the GKE Filestore CSI driver. CSI volume snapshots can be used to create Filestore backups.

    A Filestore backup creates a differential copy of the file share, including all file data and metadata, and stores it separately from the instance. You can restore this copy only to a new Filestore instance; restoring to an existing Filestore instance is not supported. You can use the CSI volume snapshot API to trigger Filestore backups by adding a type: backup field in the volume snapshot class (see the example manifest after this list).

  • You can use volume expansion with the GKE Filestore CSI driver. Volume expansion lets you resize your volume's capacity.

  • You can access existing Filestore instances by using pre-provisioned Filestore instances in Kubernetes workloads. You can also dynamically create or delete Filestore instances and use them in Kubernetes workloads with a StorageClass or a Deployment.

  • You can use Filestore multishares for GKE. This feature lets you create a Filestore instance and allocate multiple smaller NFS-mounted PersistentVolumes for it simultaneously across any number of GKE clusters.
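
The following manifest is a minimal sketch of a VolumeSnapshotClass that triggers Filestore backups instead of CSI snapshots; the class name is illustrative:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: filestore-backup-snap-class
driver: filestore.csi.storage.gke.io
deletionPolicy: Delete
parameters:
  type: backup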

Requirements

  • To use the Filestore CSI driver, your clusters must use GKE version 1.21 or later.
  • To use the Filestore multishares capability, your clusters must use GKE version 1.23 or later.
  • The Filestore CSI driver is supported for clusters using Linux only; Windows Server nodes are not supported.
  • The minimum Filestore instance size is 1 TiB or larger, depending on the Filestore service tier you select. To learn more, see Service tiers.
  • Filestore uses the NFSv3 file system protocol on the Filestore instance and supports any NFSv3-compatible client.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Cloud Filestore API and the Google Kubernetes Engine API (a gcloud example follows this list).
  • Enable APIs
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
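
For reference, you can enable both APIs with the gcloud CLI; a minimal sketch, assuming the gcloud CLI is already pointed at your project:

gcloud services enable file.googleapis.com container.googleapis.com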

Enable the Filestore CSI driver on a new cluster

To enable the Filestore CSI driver when creating a new Standard cluster, follow these steps using the Google Cloud CLI or the Google Cloud console.

gcloud

gcloud container clusters create CLUSTER_NAME \
    --addons=GcpFilestoreCsiDriver \
    --cluster-version=VERSION

Replace the following:

  • CLUSTER_NAME: the name of your cluster.
  • VERSION: the GKE version number. You must select a version of 1.21 or higher to use this feature. Alternatively, you can use the --release-channel flag and specify a release channel.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. Choose the Standard cluster mode, then click Configure.

  4. Configure the cluster as desired.

  5. From the navigation pane, under Cluster, click Features.

  6. Select the Enable Filestore CSI driver checkbox.

  7. Click Create.

If you want to use Filestore on a Shared VPC network, see Enable the Filestore CSI driver on a new cluster with Shared VPC.

After you enable the Filestore CSI driver, you can use the driver in Kubernetes volumes by specifying the driver and provisioner name: filestore.csi.storage.gke.io.

Enable the Filestore CSI driver on an existing cluster

To enable the Filestore CSI driver on an existing cluster, complete the following steps using the Google Cloud CLI or the Google Cloud console.

gcloud

gcloud container clusters update CLUSTER_NAME \
   --update-addons=GcpFilestoreCsiDriver=ENABLED

Replace CLUSTER_NAME with the name of the existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.

  4. Select the Enable Filestore CSI driver checkbox.

  5. Click Save Changes.

Disable the Filestore CSI driver

You can disable the Filestore CSI driver on an existing Autopilot or Standard cluster by using the Google Cloud CLI or the Google Cloud console.

gcloud

gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=DISABLED \
    --region=REGION

Replace the following values:

  • CLUSTER_NAME: the name of the existing cluster.
  • REGION: the region for your cluster (such as us-central1).

Console

  1. In the Google Cloud console, go to the Google Kubernetes Engine menu.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.

  4. Clear the Enable Filestore CSI driver checkbox.

  5. Click Save Changes.

Access pre-existing Filestore instances using the Filestore CSI driver

This section describes the typical process for using a Kubernetes volume to access pre-existing Filestore instances with the Filestore CSI driver in GKE:

Create a PersistentVolume and a PersistentVolumeClaim to access the instance

  1. Create a manifest file like the one shown in the following example, and name it preprov-filestore.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: PV_NAME
    spec:
      storageClassName: ""
      capacity:
        storage: 1Ti
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      volumeMode: Filesystem
      csi:
        driver: filestore.csi.storage.gke.io
        volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
        volumeAttributes:
          ip: FILESTORE_INSTANCE_IP
          volume: FILESTORE_SHARE_NAME
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      volumeName: PV_NAME
      resources:
        requests:
          storage: 1Ti
    
  2. To create the PersistentVolumeClaim and PersistentVolume resources based on the preprov-filestore.yaml manifest file, run the following command:

    kubectl apply -f preprov-filestore.yaml
    

Then, proceed to create a Deployment that consumes the volume.

Create a volume using the Filestore CSI driver

The following sections describe the typical process for using a Kubernetes volume backed by the Filestore CSI driver in GKE:

Create a StorageClass

After you enable the Filestore CSI driver, GKE automatically installs default StorageClasses for provisioning Filestore instances.

You can find the name of your installed StorageClass by running the following command:

kubectl get sc

You can also install a different StorageClass that uses the Filestore CSI driver by adding filestore.csi.storage.gke.io in the provisioner field.

  1. Save the following manifest as filestore-example-class.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: filestore-example
    provisioner: filestore.csi.storage.gke.io
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      tier: standard
      network: default
    

    From the manifest, consider the following parameter configuration:

    • Setting volumeBindingMode to Immediate allows the provisioning of the volume to begin immediately. This is possible because Filestore instances are accessible from any zone. Therefore GKE does not need to know the zone where the Pod is scheduled, in contrast with Compute Engine persistent disk. When set to WaitForFirstConsumer, GKE begins provisioning only after the Pod is scheduled. For more information, see VolumeBindingMode.
    • Any tier can be specified in the tier parameter (for example, standard, premium, zonal, or enterprise).
    • The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up.
  2. To create a StorageClass resource based on the filestore-example-class.yaml manifest file, run the following command:

    kubectl create -f filestore-example-class.yaml
    

If you want to use Filestore on a Shared VPC network, see Create a StorageClass when using the Filestore CSI driver with Shared VPC.

Use a PersistentVolumeClaim to access the volume

You can create a PersistentVolumeClaim resource that references the Filestore CSI driver's StorageClass.

You can use either a pre-installed or custom StorageClass.

The following example manifest file creates a PersistentVolumeClaim that references the StorageClass named filestore-example.

  1. Save the following manifest file as pvc-example.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: filestore-example
      resources:
        requests:
          storage: 1Ti
    
  2. To create a PersistentVolumeClaim resource based on the pvc-example.yaml manifest file, run the following command:

    kubectl create -f pvc-example.yaml
    
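Because the filestore-example StorageClass sets allowVolumeExpansion: true, you can later grow the volume by increasing the requested capacity on the PersistentVolumeClaim. A minimal sketch, assuming the podpvc claim created above and an illustrative target size of 2 TiB:

kubectl patch pvc podpvc --type=merge \
    -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'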

Create a Deployment that consumes the volume

The following example Deployment manifest consumes the PersistentVolumeClaim named podpvc, defined in pvc-example.yaml.

Multiple Pods can share the same PersistentVolumeClaim resource.

  1. Save the following manifest as filestore-example-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-server-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: mypvc
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: podpvc
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: filestore-example
      resources:
        requests:
          storage: 1Ti
    
  2. To create a Deployment based on the filestore-example-deployment.yaml manifest file, run the following command:

    kubectl apply -f filestore-example-deployment.yaml
    
  3. Confirm the Deployment was successfully created:

    kubectl get deployment
    

    It might take a while for the Filestore instance to finish provisioning. Until it does, the Deployment does not report a READY status. You can check progress by monitoring your PVC status with the following command:

    kubectl get pvc
    

    You should see the PVC reach the Bound status when volume provisioning completes.
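
    You can also confirm that the backing Filestore instance was created by listing the instances in your project. A quick check with the gcloud CLI, where PROJECT_ID is your Google Cloud project ID:

    gcloud filestore instances list --project=PROJECT_ID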

Label Filestore instances

You can use labels to group related instances and store metadata about an instance. A label is a key-value pair that helps you organize your Filestore instances. You can attach a label to each resource, then filter the resources based on their labels.

You can provide labels by using the labels key in StorageClass.parameters. A Filestore instance can be labeled with information about what PersistentVolumeClaim/PersistentVolume the instance was created for. Custom label keys and values must comply with the label naming convention. See the Kubernetes storage class example to apply custom labels to the Filestore instance.
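
The following manifest is a minimal sketch of a StorageClass that attaches custom labels to the Filestore instances it provisions; the key-value pairs are illustrative, and this sketch assumes the labels parameter accepts comma-separated key=value pairs:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-labels-example
provisioner: filestore.csi.storage.gke.io
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  tier: standard
  network: default
  labels: team=storage,environment=test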

Use fsgroup with Filestore volumes

Kubernetes uses fsGroup to change permissions and ownership of the volume to match a user-requested fsGroup in the Pod's SecurityContext. An fsGroup is a supplemental group that applies to all containers in a Pod. You can apply an fsGroup to volumes provisioned by the Filestore CSI driver.
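
For example, the following Pod sketch sets fsGroup so that files on the mounted share are group-owned by GID 1000; the GID is illustrative, and the Pod mounts the podpvc claim from earlier on this page:

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    fsGroup: 1000
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: podpvc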

Use Filestore with Shared VPC

This section covers how to use a Filestore instance on a Shared VPC network from a service project.

Set up a cluster with Shared VPC

To set up your clusters with a Shared VPC network, follow these steps:

  1. Create a host and service project.
  2. Enable the Google Kubernetes Engine API on both your host and service projects.
  3. In your host project, create a network and a subnet.
  4. Enable Shared VPC in the host project.
  5. On the host project, grant the Host Service Agent User role to the service project's GKE service account.
  6. Enable private service access on the Shared VPC network, as sketched in the commands that follow.
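
    A minimal sketch of this step with the gcloud CLI, where the reserved range name matches the RESERVED_IP_RANGE_NAME used later on this page and the prefix length is illustrative:

    gcloud compute addresses create RESERVED_IP_RANGE_NAME \
       --project=HOST_PROJECT_ID \
       --global \
       --purpose=VPC_PEERING \
       --prefix-length=16 \
       --network=NETWORK_NAME

    gcloud services vpc-peerings connect \
       --project=HOST_PROJECT_ID \
       --service=servicenetworking.googleapis.com \
       --ranges=RESERVED_IP_RANGE_NAME \
       --network=NETWORK_NAME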

Enable the Filestore CSI driver on a new cluster with Shared VPC

To enable the Filestore CSI driver on a new cluster with Shared VPC, follow these steps:

  1. Verify the usable subnets and secondary ranges. When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services.

    gcloud container subnets list-usable \
       --project=SERVICE_PROJECT_ID \
       --network-project=HOST_PROJECT_ID
    

    The output is similar to the following:

    PROJECT                   REGION       NETWORK     SUBNET  RANGE
    HOST_PROJECT_ID  us-central1  shared-net  tier-1  10.0.4.0/22
    ┌──────────────────────┬───────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
    ├──────────────────────┼───────────────┼─────────────────────────────┤
    │ tier-1-pods          │ 10.4.0.0/14   │ usable for pods or services │
    │ tier-1-services      │ 10.0.32.0/20  │ usable for pods or services │
    └──────────────────────┴───────────────┴─────────────────────────────┘
    
  2. Create a GKE cluster. The following examples show how you can use gcloud CLI to create an Autopilot or Standard cluster configured for Shared VPC. The following examples use the network, subnet, and range names from Creating a network and two subnets.

    Autopilot

    gcloud container clusters create-auto tier-1-cluster \
       --project=SERVICE_PROJECT_ID \
       --region=COMPUTE_REGION \
       --network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
       --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
       --cluster-secondary-range-name=tier-1-pods \
       --services-secondary-range-name=tier-1-services
    

    Standard

    gcloud container clusters create tier-1-cluster \
       --project=SERVICE_PROJECT_ID \
       --region=COMPUTE_REGION \
       --enable-ip-alias \
       --network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
       --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
       --cluster-secondary-range-name=tier-1-pods \
       --services-secondary-range-name=tier-1-services \
       --addons=GcpFilestoreCsiDriver
    
  3. Create firewall rules to allow communication between nodes, Pods, and Services in your cluster. The following example shows how you can create a firewall rule named my-shared-net-rule-2.

    gcloud compute firewall-rules create my-shared-net-rule-2 \
       --project HOST_PROJECT_ID \
       --network=NETWORK_NAME \
       --allow=tcp,udp \
       --direction=INGRESS \
       --source-ranges=10.0.4.0/22,10.4.0.0/14,10.0.32.0/20
    

    In the example, the source IP ranges come from the previous step, where you verified the usable subnets and secondary ranges.

Create a StorageClass when using the Filestore CSI driver with Shared VPC

The following example shows how you can create a StorageClass when using the Filestore CSI driver with Shared VPC:

cat <<EOF | kubectl apply -f -

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-sharedvpc-example
provisioner: filestore.csi.storage.gke.io
parameters:
  network: "projects/HOST_PROJECT_ID/global/networks/SHARED_VPC_NAME"
  connect-mode: PRIVATE_SERVICE_ACCESS
  reserved-ip-range: RESERVED_IP_RANGE_NAME
allowVolumeExpansion: true

EOF

Replace the following:

  • HOST_PROJECT_ID: the ID or name of the host project of the Shared VPC network.
  • SHARED_VPC_NAME: the name of the Shared VPC network you created earlier.
  • RESERVED_IP_RANGE_NAME: the name of the specific reserved IP address range to provision the Filestore instance in. This field is optional. If a reserved IP address range is specified, it must be a named address range instead of a direct CIDR value.

If you want to provision a volume backed by Filestore multishares on GKE clusters running v1.23 or later, see Optimize storage with Filestore multishares for GKE.

Reconnect Filestore single share volumes

If you are using Filestore with the basic HDD, basic SSD, or enterprise (single share) tier, you can follow these instructions to reconnect your existing Filestore instance to your GKE workloads.

  1. Find the details of your pre-provisioned Filestore instance by following the instructions in Getting information about a specific instance.

  2. Redeploy your PersistentVolume specification. In the volumeAttributes field, modify the following fields to use the same values as your Filestore instance from step 1:

    • ip: Modify this value to the pre-provisioned Filestore instance IP address.
    • volume: Modify this value to the pre-provisioned Filestore instance's share name.
  3. Redeploy your PersistentVolumeClaim specification. In the volumeName field, make sure you reference the same PersistentVolume name as in step 2.

  4. Check the binding status of your PersistentVolumeClaim and PersistentVolume by running kubectl get pvc.

  5. Redeploy your Pod specification and ensure that your Pod is able to access the Filestore share again.

What's next