
Use an EFS file system


Anthos clusters on AWS supports the AWS Elastic File System (EFS) through the EFS CSI Driver.

This topic explains how to create an EFS file system and access it through a PersistentVolume resource in your cluster. The steps in this process are:

  • Create your EFS file system
  • Create mount targets and configure your cluster network
  • Create a PersistentVolume
  • Create a PersistentVolumeClaim
  • Configure your workload to access your EFS file system

Before you begin

To complete these steps, you should be familiar with how Kubernetes and Anthos clusters on AWS handle workload storage. For more information, see Using persistent storage in your Anthos clusters on AWS workloads.

Create an AWS EFS file system

  1. Get the AWS region where your cluster runs.

    gcloud container aws clusters describe CLUSTER_NAME \
      --location=LOCATION \
      --format="value(awsRegion)"
    

    Replace the following:

    • CLUSTER_NAME: the name of the AWS cluster.
    • LOCATION: the Google Cloud location of the AWS cluster
  2. Create an EFS file system in the same AWS region with the following command.

    aws efs create-file-system \
      --region AWS_REGION \
      --performance-mode generalPurpose \
      --query 'FileSystemId' \
      --output text
    

    Replace the following:

    • AWS_REGION: the AWS region where the cluster runs

The output includes the file system's ID. Save this value. You'll need it later.
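
If you use a Bash-compatible shell, you can run the previous command inside a variable assignment so that the ID is captured automatically. This is a minimal sketch rather than a required step; EFS_ID is a variable name chosen for this example.

    EFS_ID=$(aws efs create-file-system \
      --region AWS_REGION \
      --performance-mode generalPurpose \
      --query 'FileSystemId' \
      --output text)
    echo "${EFS_ID}"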

Configure your cluster network

The CSI driver for EFS accesses file systems through EFS mount targets. Each mount target provides a private virtual IP within its associated subnet and uses security groups to control access to the underlying EFS file system.

Because each node pool in your cluster can potentially run on a different subnet, you must provision a mount target on each node pool subnet.

In this example, access to each mount target is protected by a single security group that includes both the mount targets and the node pool machines. Depending on your VPC configuration and security requirements, you might want to split this configuration into two or more security groups, for example, one for the mount targets and one for the node pool worker nodes.

  1. Obtain the ID of the AWS VPC where the cluster runs. You need this ID to create a dedicated security group that controls access to the EFS file system.

    gcloud container aws clusters describe CLUSTER_NAME \
      --location=LOCATION \
      --format="value(networking.vpcId)"
    

    Replace the following:

    • CLUSTER_NAME
    • LOCATION

    The output includes the ID of your VPC. Save this value. You'll need it later.

  2. Create a security group to control access to your EFS mount target.

    aws ec2 create-security-group \
      --group-name gke-efs-security-group \
      --description "EFS security group" \
      --vpc-id VPC_ID \
      --output text
    

    Replace VPC_ID with the ID of the AWS VPC where the cluster runs.

    The output includes the ID of the new security group. Save this value. You'll need it later.
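
    If you use a Bash-compatible shell, you can capture the group ID when you create the security group. This is a minimal sketch of the same command; SECURITY_GROUP_ID is a variable name chosen for this example.

    SECURITY_GROUP_ID=$(aws ec2 create-security-group \
      --group-name gke-efs-security-group \
      --description "EFS security group" \
      --vpc-id VPC_ID \
      --query 'GroupId' \
      --output text)
    echo "${SECURITY_GROUP_ID}"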

  3. By default, AWS creates security groups with a default rule that allows all outbound traffic. Remove the default outbound rule.

    aws ec2 revoke-security-group-egress \
      --group-id SECURITY_GROUP_ID \
      --ip-permissions '[{"IpProtocol":"-1","FromPort":-1,"ToPort":-1,"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
    

    Replace SECURITY_GROUP_ID with the ID of the AWS security group.

  4. Authorize inbound and outbound traffic for EFS (port 2049).

    aws ec2 authorize-security-group-ingress \
        --group-id SECURITY_GROUP_ID \
        --protocol tcp \
        --port 2049 \
        --source-group SECURITY_GROUP_ID
    
    aws ec2 authorize-security-group-egress \
        --group-id SECURITY_GROUP_ID \
        --protocol tcp \
        --port 2049 \
        --source-group SECURITY_GROUP_ID
    
  5. Create an EFS mount target on each node pool subnet.

    List subnets associated with all node pools.

    gcloud container aws node-pools list \
      --cluster=CLUSTER_NAME \
      --location=LOCATION \
      --format="value(subnetId)"
    

    Replace the following:

    • CLUSTER_NAME
    • LOCATION

    The output is a list of subnet IDs, one for each node pool. If node pools share a subnet, the subnet appears more than once, and it needs only one mount target. Save these values. You'll need them later.

  6. For each subnet, create an associated EFS mount target with the security group applied.

    aws efs create-mount-target \
        --file-system-id EFS_ID \
        --subnet-id SUBNET_ID \
        --security-groups SECURITY_GROUP_ID
    

    Replace the following:

    • EFS_ID: the ID of your EFS file system.
    • SUBNET_ID: the ID of the node pool's subnet.
    • SECURITY_GROUP_ID
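
    To cover every node pool subnet in one pass, you can loop over the subnet IDs from the earlier listing. The following is a minimal Bash sketch; it assumes the EFS_ID and SECURITY_GROUP_ID shell variables from the earlier sketches, and it deduplicates the subnet list because each subnet needs only one mount target.

    for SUBNET_ID in $(gcloud container aws node-pools list \
        --cluster=CLUSTER_NAME \
        --location=LOCATION \
        --format="value(subnetId)" | sort -u); do
      aws efs create-mount-target \
          --file-system-id "${EFS_ID}" \
          --subnet-id "${SUBNET_ID}" \
          --security-groups "${SECURITY_GROUP_ID}"
    done
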
  7. Add the EFS security group to all cluster node pools. Start by listing the node pools in your cluster.

    gcloud container aws node-pools list \
      --cluster=CLUSTER_NAME \
      --location=LOCATION
    

    Replace the following:

    • CLUSTER_NAME
    • LOCATION

    The output includes the names of your cluster's node pools. Save these values. You'll need them in the next step.

  8. Update each node pool to include the new EFS security group.

    gcloud container aws node-pools update NODE_POOL_NAME \
      --cluster=CLUSTER_NAME \
      --location=LOCATION  \
      --security-group-ids=SECURITY_GROUP_IDS
    

    Replace the following:

    • NODE_POOL_NAME: the name of the node pool.
    • CLUSTER_NAME: the name of your cluster.
    • LOCATION
    • SECURITY_GROUP_IDS: the list of security group IDs for the worker nodes, including the new EFS security group.
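
    To update every node pool in one pass, you can loop over the node pool names. The following is a minimal Bash sketch; it assumes the SECURITY_GROUP_ID shell variable from the earlier sketches. If your node pools already use other security groups that you want to keep, pass them in the same comma-separated --security-group-ids value.

    for NODE_POOL_NAME in $(gcloud container aws node-pools list \
        --cluster=CLUSTER_NAME \
        --location=LOCATION \
        --format="value(name.basename())"); do
      gcloud container aws node-pools update "${NODE_POOL_NAME}" \
        --cluster=CLUSTER_NAME \
        --location=LOCATION \
        --security-group-ids="${SECURITY_GROUP_ID}"
    done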

Create a PersistentVolume

In this section, you create a PersistentVolume that mounts an EFS share.

  1. Choose one of the following options, depending on whether you connect directly to the EFS file system or through an access point.

    Connect directly

    Copy the following YAML manifest into a file named efs-volume.yaml. The manifest references the EFS file system that you created previously.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: VOLUME_NAME
    spec:
      capacity:
        # Note: storage capacity is not used by the EFS CSI driver.
        # It is required by the PersistentVolume spec.
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: "" # storageClassName is not required, see note in the following section.
      claimRef:
        name: CLAIM_NAME
        namespace: default
      csi:
        driver: efs.csi.aws.com
        volumeHandle: EFS_ID
    

    Replace the following:

    • VOLUME_NAME with a name for the persistent volume.
    • CLAIM_NAME with the name you want to use for the PersistentVolumeClaim.
    • EFS_ID with your EFS file system ID. For example, fs-12345678a.

    Access point

    To create a volume that is based on an access point, you must provision the access point manually. See Working with EFS access points for more information.

    Dynamic provisioning of access points is not supported.
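
    For reference, the following is a minimal AWS CLI sketch for provisioning an access point. The POSIX user, group, and root directory values are placeholders that you should adapt to your own requirements.

    aws efs create-access-point \
      --file-system-id EFS_ID \
      --posix-user Uid=1000,Gid=1000 \
      --root-directory 'Path=/example-data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}' \
      --query 'AccessPointId' \
      --output text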

    Copy the following YAML manifest into a file named efs-volume.yaml. The manifest references the EFS file system that you created previously.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: VOLUME_NAME
    spec:
      capacity:
        # Note: storage capacity is not used by the EFS CSI driver.
        # It is required by the PersistentVolume spec.
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: "" # storageClassName is not required, see note in the following section.
      claimRef:
        name: CLAIM_NAME
        namespace: default
      csi:
        driver: efs.csi.aws.com
        volumeHandle: EFS_ID::ACCESS_POINT_ID
    

    Replace the following:

    • VOLUME_NAME with a name for the persistent volume.
    • CLAIM_NAME with the name you want to use for the PersistentVolumeClaim.
    • EFS_ID with your EFS file system ID. For example, fs-12345678a.
    • ACCESS_POINT_ID with your access point's ID. For example, fsap-1234567890abcde.
  2. Apply the YAML to your cluster.

    kubectl apply -f efs-volume.yaml
    

    The output confirms the PersistentVolume's creation.

    persistentvolume/VOLUME_NAME created
    

Create a PersistentVolumeClaim

To use your EFS file system with your workloads, create a PersistentVolumeClaim.

  1. To create the PersistentVolumeClaim, copy the following YAML manifest into a file named efs-claim.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: CLAIM_NAME
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 5Gi
    

    Replace CLAIM_NAME with a name for your PersistentVolumeClaim. For example, efs-claim1.

  2. Apply the YAML to your cluster.

      kubectl apply -f efs-claim.yaml
    

    The output confirms the PersistentVolumeClaim's creation.

    persistentvolumeclaim/CLAIM_NAME created
    
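
    You can optionally verify that the claim is bound to the PersistentVolume you created earlier. A minimal check:

    kubectl get persistentvolumeclaim CLAIM_NAME

    When the binding succeeds, the STATUS column shows Bound and the VOLUME column shows VOLUME_NAME.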

Create a StatefulSet

After you have created a PersistentVolumeClaim, you can use it in a workload. For example, this section creates a StatefulSet that mounts your EFS file system. You can also use a PersistentVolumeClaim with other workload types, such as Pods and Deployments, by referencing the claim in spec.volumes.

To create a StatefulSet that mounts the EFS file system referenced in your PersistentVolumeClaim, perform the following steps.

  1. Copy the following YAML manifest into a file named efs-statefulset.yaml. This example manifest launches an Ubuntu Linux container that mounts your EFS file system at /efs-data. Every five seconds, the container appends the current date to a file named out.txt on your EFS file system.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: efs-shell
    spec:
      selector:
        matchLabels:
          app: test-efs
      serviceName: efs-app
      replicas: 1
      template:
        metadata:
          labels:
            app: test-efs
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: linux
            image: ubuntu:bionic
            command: ["/bin/sh"]
            args: ["-c", "while true; do echo $(date -u) >> /efs-data/out.txt; sleep 5; done"]
            volumeMounts:
            - name: efs-volume
              mountPath: /efs-data
          volumes:
          - name: efs-volume
            persistentVolumeClaim:
              claimName: CLAIM_NAME
    

    Replace the following:

    • CLAIM_NAME with the name of the PersistentVolumeClaim you specified previously. For example, efs-claim1.
  2. Apply the YAML to your cluster.

     kubectl apply -f efs-statefulset.yaml
    

    The output confirms the StatefulSet's creation.

    statefulset.apps/efs-shell created
    

    The Pod might take several minutes to download the container image and start.

  3. Confirm the StatefulSet's Pod is in Running status with kubectl get pods.

      kubectl get pods -l app=test-efs
    

    The output includes the name of the Pod and its status. In the following response, the Pod's name is efs-shell-0.

    NAME          READY   STATUS    RESTARTS   AGE
    efs-shell-0   1/1     Running   0          1m
    
  4. After the Pod is in Running status, use kubectl exec to connect to the Pod hosting the StatefulSet.

      kubectl exec -it efs-shell-0 -- bash
    

    The kubectl command launches a shell on the Pod.

  5. To confirm that your EFS file system is mounted, check the contents of the out.txt file with the tail command.

    tail /efs-data/out.txt
    

    The output contains recent times in UTC.

  6. Disconnect from the Pod with the exit command.

    exit
    

    Your shell returns to your local machine.

  7. To remove the StatefulSet, use kubectl delete.

      kubectl delete -f efs-statefulset.yaml
    

Clean up

To remove the resources you created in the previous sections, perform the following steps:

  1. Use kubectl to remove the resources from your cluster.

    kubectl delete -f efs-statefulset.yaml
    kubectl delete -f efs-claim.yaml
    kubectl delete -f efs-volume.yaml
    
  2. Find the IDs of your mount targets.

    aws efs describe-mount-targets \
      --file-system-id EFS_ID \
      --profile adminuser \
      --region AWS_REGION
    

    Replace the following:

    • EFS_ID: the ID of your EFS file system
    • AWS_REGION: the AWS region where the cluster runs

    The output includes the ID of each mount target. Save these values. You'll need them in the next step.
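
    If you only need the IDs, you can narrow the output with a JMESPath query. The following is a minimal sketch; omit --profile if you don't use named profiles.

    aws efs describe-mount-targets \
      --file-system-id EFS_ID \
      --profile adminuser \
      --region AWS_REGION \
      --query 'MountTargets[*].MountTargetId' \
      --output text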

  3. Delete each EFS mount target.

    aws efs delete-mount-target \
      --mount-target-id MOUNT_TARGET_ID \
      --profile adminuser \
      --region AWS_REGION
    

    Replace the following:

    • MOUNT_TARGET_ID: the ID of your EFS mount target
    • AWS_REGION
  4. Delete any security groups you created.

  5. Delete the file system by using the delete-file-system CLI command. If you don't have the file system ID, you can find it by listing your file systems with the describe-file-systems CLI command, as shown in the following sketch.
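
    The following is a minimal listing sketch that shows the ID and name of each file system; adjust the --query expression to the fields you need, and omit --profile if you don't use named profiles.

    aws efs describe-file-systems \
      --profile adminuser \
      --region AWS_REGION \
      --query 'FileSystems[*].[FileSystemId,Name]' \
      --output table

    Then delete the file system: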

    aws efs delete-file-system \
      --file-system-id EFS_ID \
      --profile adminuser \
      --region AWS_REGION
    

    Replace the following:

    • EFS_ID
    • AWS_REGION

What's next