GKE on AWS versions 1.6 and later support Amazon Elastic File System (EFS) through the EFS CSI driver. This topic explains how to mount an existing EFS file system as a PersistentVolume on your user clusters.
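To confirm that the EFS CSI driver is available on your user cluster, you can check for its CSIDriver object. This is an optional check, and it assumes the driver registers under the upstream name `efs.csi.aws.com`, as the open source EFS CSI driver does:

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get csidriver efs.csi.aws.com
```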
Before you begin
To perform the steps in this topic, you need the following:
- An existing EFS file system in the same AWS VPC as your GKE on AWS installation.
- At least one EFS mount target in the same AWS VPC as your GKE on AWS installation.
- All of your EFS mount targets must belong to the following:
  - The private subnets for your GKE on AWS installation. By default, GKE on AWS creates subnets named `gke-CLUSTER_ID-private-AWS_ZONE`, where CLUSTER_ID is your user cluster ID and AWS_ZONE is the AWS availability zone.
  - The node pool security group. By default, GKE on AWS creates node pools named `gke-CLUSTER_ID-nodepool`, where CLUSTER_ID is your user cluster ID.
- From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster. Replace CLUSTER_NAME with your user cluster name.

  ```
  cd anthos-aws
  env HTTPS_PROXY=http://localhost:8118 \
    anthos-gke aws clusters get-credentials CLUSTER_NAME
  ```
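To confirm that your mount targets use the expected subnets and security group, you can inspect them with the AWS CLI. This is an optional check, assuming the AWS CLI is configured for the account that hosts your VPC; FILE_SYSTEM_ID and MOUNT_TARGET_ID are placeholders for your own values:

```
# List the mount targets for your EFS file system, including their subnet IDs.
aws efs describe-mount-targets --file-system-id FILE_SYSTEM_ID

# Show the security groups attached to a specific mount target.
aws efs describe-mount-target-security-groups --mount-target-id MOUNT_TARGET_ID
```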
Using an EFS PersistentVolume
To use an EFS file system as a PersistentVolume with your user clusters, you first create a PersistentVolume and then create a PersistentVolumeClaim that you reference in your workload.
Creating a PersistentVolume
To create a PersistentVolume with the EFS CSI driver, perform the following steps.
From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster. Replace CLUSTER_NAME with your user cluster name.

```
cd anthos-aws
env HTTPS_PROXY=http://localhost:8118 \
  anthos-gke aws clusters get-credentials CLUSTER_NAME
```
The PersistentVolume configuration you use depends on whether you connect to the EFS file system directly or through an access point. Select the option that matches your setup.
Connect directly
Copy the following YAML manifest into a file named `efs-volume.yaml`. The manifest references your EFS file system by its file system ID.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: VOLUME_NAME
spec:
  capacity:
    # Note: storage capacity is not used by the EFS CSI driver.
    # It is required by the PersistentVolume spec.
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "" # storageClassName is not required, see note in the following section.
  claimRef:
    name: CLAIM_NAME
    namespace: default
  csi:
    driver: efs.csi.aws.com
    volumeHandle: EFS_FILE_SYSTEM_ID
```
Replace the following:
- VOLUME_NAME with a name for the persistent volume.
- CLAIM_NAME with the name you want to use for the PersistentVolumeClaim.
- EFS_FILE_SYSTEM_ID with your EFS file system ID. For example, `fs-12345678a`.
Access point
Copy the following YAML manifest into a file named `efs-volume.yaml`. The manifest references your EFS file system by its file system ID and access point ID.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: VOLUME_NAME
spec:
  capacity:
    # Note: storage capacity is not used by the EFS CSI driver.
    # It is required by the PersistentVolume spec.
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "" # storageClassName is not required, see note in the following section.
  claimRef:
    name: CLAIM_NAME
    namespace: default
  csi:
    driver: efs.csi.aws.com
    volumeHandle: EFS_FILE_SYSTEM_ID::ACCESS_POINT_ID
```
Replace the following:
- VOLUME_NAME with a name for the persistent volume.
- CLAIM_NAME with the name you want to use for the PersistentVolumeClaim.
- EFS_FILE_SYSTEM_ID with your EFS file system ID. For example, `fs-12345678a`.
- ACCESS_POINT_ID with your access point's ID. For example, `fsap-1234567890abcde`.
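If you need to look up your access point's ID, you can list the access points for your file system with the AWS CLI. This is an optional check, assuming the AWS CLI is configured; FILE_SYSTEM_ID is a placeholder for your own file system ID:

```
# List access points for the file system; the AccessPointId field holds the fsap- ID.
aws efs describe-access-points --file-system-id FILE_SYSTEM_ID
```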
Apply the YAML to your user cluster.
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f efs-volume.yaml
```
The output confirms the PersistentVolume's creation.
```
persistentvolume/VOLUME_NAME created
```
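Optionally, you can check that the volume is registered with `kubectl get`. This is an extra verification step, not part of the required flow; the volume's status should show `Available` until you create the matching PersistentVolumeClaim, then change to `Bound`:

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get persistentvolume VOLUME_NAME
```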
Creating a PersistentVolumeClaim
To use your EFS file system with your workloads, you create a PersistentVolumeClaim.
To create the PersistentVolumeClaim, copy the following YAML manifest into a file named `efs-claim.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: CLAIM_NAME
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
```
Replace CLAIM_NAME with a name for your PersistentVolumeClaim. For example, `efs-claim1`.

Apply the YAML to your user cluster.
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f efs-claim.yaml
```
The output confirms the PersistentVolumeClaim's creation.
```
persistentvolumeclaim/CLAIM_NAME created
```
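To confirm that the claim bound to the PersistentVolume you created earlier, you can query its status. This is an optional check; `Bound` appears in the STATUS column once binding completes:

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get persistentvolumeclaim CLAIM_NAME
```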
Creating a StatefulSet
After you have created a PersistentVolumeClaim, you can use it in a workload. The steps in this section create an example StatefulSet with your EFS file system mounted. You can also use a PersistentVolumeClaim with other workload types, such as Pods and Deployments, by referencing the claim in `spec.volumes`, as in the sketch that follows.
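For example, a minimal Pod that mounts the same claim might look like the following. This is an illustrative sketch, not part of the original steps; the Pod name `efs-app` and the `sleep` command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: efs-app  # illustrative name, not from the original steps
spec:
  containers:
    - name: app
      image: ubuntu:bionic
      # Keep the container running so you can inspect the mounted volume.
      command: ["/bin/sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: efs-volume
          mountPath: /efs-data
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: CLAIM_NAME
```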
To create a StatefulSet that mounts the EFS file system referenced in your PersistentVolumeClaim, perform the following steps.
Copy the following YAML manifest into a file named `efs-statefulset.yaml`. This example manifest launches an Ubuntu Linux container that mounts your EFS file system at `/efs-data`. The container writes the current time to a file named `out.txt` on your EFS file system every five seconds.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: efs-shell
spec:
  selector:
    matchLabels:
      app: test-efs
  serviceName: efs-app
  replicas: 1
  template:
    metadata:
      labels:
        app: test-efs
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: linux
          image: ubuntu:bionic
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo $(date -u) >> /efs-data/out.txt; sleep 5; done"]
          volumeMounts:
            - name: efs-volume
              mountPath: /efs-data
      volumes:
        - name: efs-volume
          persistentVolumeClaim:
            claimName: CLAIM_NAME
```
Replace CLAIM_NAME with the name of the PersistentVolumeClaim you specified previously. For example, `efs-claim1`.
Apply the YAML to your user cluster.
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f efs-statefulset.yaml
```
The output confirms the StatefulSet's creation.
```
statefulset.apps/efs-shell created
```
The StatefulSet might take several minutes to download the container image and launch.
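If you prefer to wait on the rollout rather than poll, you can use `kubectl rollout status`, an optional alternative to the check in the next step:

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl rollout status statefulset/efs-shell
```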
Confirm the StatefulSet's Pod is in `Running` status with `kubectl get pods`.

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get pods -l app=test-efs
```
The output includes the name of the Pod and its status. In the following response, the Pod's name is `efs-shell-0`.

```
NAME          READY   STATUS    RESTARTS   AGE
efs-shell-0   1/1     Running   0          1m
```
After the Pod is in `Running` status, use `kubectl exec` to connect to the Pod hosting the StatefulSet.

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl exec -it efs-shell-0 -- bash
```
The `kubectl` command launches a shell on the Pod.

To confirm that your EFS file system is mounted, check the contents of the `out.txt` file with the `tail` command.

```
tail /efs-data/out.txt
```
The output contains recent timestamps in UTC.
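To watch new entries as the container appends them, you can follow the file instead, a small optional variation (press Ctrl+C to stop):

```
tail -f /efs-data/out.txt
```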
Disconnect from the Pod with the `exit` command.

```
exit
```
Your shell returns to your local machine.
To remove the StatefulSet, use `kubectl delete`.

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f efs-statefulset.yaml
```
Cleaning up
To remove the resources you created in the previous sections, run the following commands. Because the PersistentVolume uses the `Retain` reclaim policy, deleting these Kubernetes resources does not delete your EFS file system or its data.
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f efs-statefulset.yaml

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f efs-claim.yaml

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f efs-volume.yaml
```
What's next
- Learn about additional ways to use EFS volumes in the `aws-efs-csi-driver` examples.