Use the Filestore CSI driver with Shared VPC


This page shows you how to create a Filestore instance on a Shared VPC network from a service project.

Requirements

To use the Filestore CSI driver, your clusters must run Google Kubernetes Engine (GKE) version 1.21 or later. The Filestore CSI driver is supported only on clusters that use Linux nodes; it is not supported for Windows Server node pools.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Google Cloud CLI.
  • Set up default Google Cloud CLI settings for your project by using one of the following methods:
    • Use gcloud init, if you want to be walked through setting project defaults.
    • Use gcloud config, to individually set your project ID, zone, and region.

    gcloud init

    1. Run gcloud init and follow the directions:

      gcloud init

      If you are using SSH on a remote server, use the --console-only flag (deprecated in newer gcloud CLI releases in favor of --no-launch-browser) to prevent the command from launching a browser:

      gcloud init --console-only
    2. Follow the instructions to authorize the gcloud CLI to use your Google Cloud account.
    3. Create a new configuration or select an existing one.
    4. Choose a Google Cloud project.
    5. Choose a default Compute Engine zone.
    6. Choose a default Compute Engine region.

    gcloud config

    1. Set your default project ID:
      gcloud config set project PROJECT_ID
    2. Set your default Compute Engine region (for example, us-central1):
      gcloud config set compute/region COMPUTE_REGION
    3. Set your default Compute Engine zone (for example, us-central1-c):
      gcloud config set compute/zone COMPUTE_ZONE
    4. Update gcloud to the latest version:
      gcloud components update

    By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.

  1. Enable the Filestore, GKE, and Service Networking APIs.


  2. Set up clusters with Shared VPC. For the purposes of this guide, you need only one service project and one GKE cluster.
  3. Enable private service access on the Shared VPC network.
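
If private service access is not yet enabled on the Shared VPC network, the setup can be sketched as follows. This is a minimal sketch run against the host project; RESERVED_IP_RANGE_NAME, SHARED_VPC_NAME, and HOST_PROJECT_ID are placeholders, and the /16 prefix length is only an example:

```shell
# Reserve an internal IP address range for service producers such as Filestore.
gcloud compute addresses create RESERVED_IP_RANGE_NAME \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=SHARED_VPC_NAME \
    --project=HOST_PROJECT_ID

# Peer the Shared VPC network with the Service Networking service
# using the reserved range.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=RESERVED_IP_RANGE_NAME \
    --network=SHARED_VPC_NAME \
    --project=HOST_PROJECT_ID
```

The range name reserved here is the value to supply later as RESERVED_IP_RANGE_NAME in the StorageClass.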

Enable the Filestore CSI driver on an existing cluster

You can enable the Filestore CSI driver in existing clusters using the Google Cloud CLI or the Google Cloud console:

gcloud

gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=ENABLED

Replace CLUSTER_NAME with the name of the existing cluster.
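
To confirm the add-on is enabled, you can inspect the cluster's add-on configuration. This is a sketch; it assumes the describe output exposes the Filestore CSI driver under addonsConfig:

```shell
gcloud container clusters describe CLUSTER_NAME \
    --format="value(addonsConfig.gcpFilestoreCsiDriverConfig.enabled)"
```

The command prints True when the driver is enabled.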

Console

  1. Go to the Google Kubernetes Engine page in Google Cloud console.


  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.

  4. Select the Enable Filestore CSI driver checkbox.

  5. Click Save Changes.

Provision and access a volume

Create a StorageClass

  1. Save the following manifest as filestore-example-class.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: filestore-sharedvpc-example
    provisioner: filestore.csi.storage.gke.io
    parameters:
      network: "projects/HOST_PROJECT_ID/global/networks/SHARED_VPC_NAME"
      connect-mode: PRIVATE_SERVICE_ACCESS
      reserved-ip-range: RESERVED_IP_RANGE_NAME
    allowVolumeExpansion: true
    

    Replace the following:

    • HOST_PROJECT_ID: the project ID of the host project of the Shared VPC network.
    • SHARED_VPC_NAME: the name of the Shared VPC network.
    • RESERVED_IP_RANGE_NAME: the name of the reserved IP address range to provision the Filestore instance in. This field is optional. If specified, it must be the name of an allocated address range, not a direct CIDR value.
  2. Apply the manifest to the cluster:

    kubectl apply -f filestore-example-class.yaml
    
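As a quick check, you can confirm that the StorageClass exists and references the Filestore CSI provisioner:

```shell
kubectl get storageclass filestore-sharedvpc-example
```

The PROVISIONER column should show filestore.csi.storage.gke.io.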

Use a PersistentVolumeClaim to access the volume

  1. Save the following manifest as pvc-example.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: filestore-sharedvpc-example
      resources:
        requests:
          storage: 1Ti
    

    This manifest describes a PersistentVolumeClaim that requests 1Ti of storage.

  2. Create the PersistentVolumeClaim from the pvc-example.yaml manifest file. The Filestore CSI driver dynamically provisions a matching PersistentVolume:

    kubectl create -f pvc-example.yaml
    
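Because the StorageClass does not set a volumeBindingMode, binding defaults to Immediate, so the driver begins provisioning a Filestore instance as soon as the claim is created. Provisioning can take several minutes; you can watch the claim until it binds:

```shell
kubectl get persistentvolumeclaim podpvc --watch
```

The STATUS column changes from Pending to Bound once the backing Filestore instance and its PersistentVolume are ready.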

Create a Deployment that consumes the volume

  1. Save the following manifest as web-server-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-server-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: mypvc
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: podpvc
    

    This manifest describes a Deployment that specifies the PersistentVolumeClaim by name.

  2. Apply the manifest to the cluster:

    kubectl apply -f web-server-deployment.yaml
    
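Because the claim is mounted ReadWriteMany, all three replicas share the same Filestore file share. One way to verify this (the file contents here are only an example) is to write through one pod and read the file back through every replica:

```shell
# Wait for all replicas to become available.
kubectl rollout status deployment/web-server-deployment

# Write a file through an arbitrary pod of the Deployment.
kubectl exec deploy/web-server-deployment -- \
    sh -c 'echo "hello from Filestore" > /usr/share/nginx/html/index.html'

# Read it back through each replica; every pod should print the same contents.
for pod in $(kubectl get pods -l app=nginx -o name); do
  kubectl exec "$pod" -- cat /usr/share/nginx/html/index.html
done
```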

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, perform the following tasks:

  1. Delete the Deployment and PersistentVolumeClaim:

    kubectl delete deployment web-server-deployment
    
    kubectl delete pvc podpvc
    
  2. Optionally, you can delete the cluster.

What's next