This document describes how to use NFS, NBD, 9P, CIFS/Samba, and Ceph network file systems with Cloud Run. If you are using Filestore or Cloud Storage FUSE on Cloud Run, refer to the end-to-end tutorials for Filestore on Cloud Run or Cloud Storage FUSE on Cloud Run. You can use a network file system to share and persist data between multiple containers and services in Cloud Run. This feature is only available if you are using the Cloud Run second generation execution environment.
If you need to read and write files in your Cloud Run service using a file system, you have several options:
- If you don't need to persist the data beyond the lifetime of the instance, you can use the built-in memory file system.
- If you need to persist data beyond instance lifetimes, and you want to use standard file system semantics, use a network file system as described in this page. You can use Filestore or self-managed NFS, NBD, 9P, CIFS/Samba, and Ceph network file systems with Cloud Run.
- If you need to access Cloud Storage as if it were a file system, without having to run your own file server, use Cloud Storage FUSE.
- If you need to persist data beyond instance lifetimes, and you don't need standard file system semantics, the simplest option is to use Cloud Storage client libraries. This is also a good option if you need to access data from many instances at the same time.
Costs associated with using network file systems
The costs of using a network file system are the cost of the file system itself, which depends on where it is running, plus the usual cost of running a Cloud Run service. For a discussion of these costs, refer to the cost discussion in the Filestore or Cloud Storage FUSE tutorial.
The following considerations apply to using network file systems on Cloud Run:
You must specify the second generation execution environment when you deploy to Cloud Run.
Cloud Run is designed to scale rapidly to a large number of instances. However, most network file systems are not designed for concurrent use by a large number of clients. Consider using the maximum instances feature to limit the number of Cloud Run instances.
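For example, you can cap scaling with the max instances setting when you deploy or update the service. The service name and limit below are placeholders; adjust them for your workload:

```bash
# Limit the number of instances so the file server isn't overwhelmed by clients.
gcloud run services update my-service --max-instances 5
```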
Cloud Run doesn't support NFS locking, so when mounting NFS, use the -o nolock option. Google recommends designing your application to avoid concurrent writes to the same file location. One way to do this is to use a different system, such as Redis or Memorystore, to store a mutex.
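As a minimal sketch, a shell-level mutex could use a Redis or Memorystore host reachable over the VPC connector. REDIS_HOST, the lock key, the file path, and the TTL are placeholders:

```bash
# Acquire a short-lived lock before writing to a shared file, then release it.
# Simplified sketch: a robust lock would verify ownership before DEL and retry with backoff.
if [ "$(redis-cli -h REDIS_HOST SET shared-file-lock "$HOSTNAME" NX EX 30)" = "OK" ]; then
  echo "new data" >> "$MNT_DIR/shared.log"
  redis-cli -h REDIS_HOST DEL shared-file-lock
else
  echo "Another instance holds the lock; retry later."
fi
```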
Set up a network file system
If you don't already have a file server set up, follow the File servers on Compute Engine solution guide to select and set up the right file system for your needs. If you're using an existing file server, make sure it is accessible from a VPC network.
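For example, if you choose Filestore as your file server, one way to create an instance is with the gcloud CLI. The instance name, zone, tier, share name, and capacity below are example values:

```bash
# Create a Filestore instance whose file share will be mounted from Cloud Run.
gcloud filestore instances create my-filestore \
  --zone=us-central1-a \
  --tier=BASIC_HDD \
  --file-share=name=vol1,capacity=1TB \
  --network=name=default
```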
Configure a Serverless VPC Access connector
You need to use a Serverless VPC Access connector to connect your Cloud Run service to the VPC network where your network file system is running.
To create a Serverless VPC Access connector on the same VPC network to connect to the Cloud Run service, follow the instructions on the page Connecting to a VPC network.
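For example, a connector on the same VPC network as the file server might be created as follows; the connector name, region, and IP range are placeholders:

```bash
# Create a Serverless VPC Access connector that Cloud Run uses to reach the file server.
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28
```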
Mount the file system from your Cloud Run service
To mount a network file system:
Define a startup script that specifies the mount point and starts your application, and reference that script from your Dockerfile. The following samples show startup scripts that use Filestore as the network file system:
```bash
#!/usr/bin/env bash
set -eo pipefail

# Create mount directory for service.
mkdir -p $MNT_DIR

echo "Mounting Cloud Filestore."
mount -o nolock $FILESTORE_IP_ADDRESS:/$FILE_SHARE_NAME $MNT_DIR
echo "Mounting completed."

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow
# Cloud Run to handle instance scaling.
exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
```
```bash
#!/usr/bin/env bash
set -eo pipefail

# Create mount directory for service
mkdir -p $MNT_DIR

echo "Mounting Cloud Filestore."
mount -o nolock $FILESTORE_IP_ADDRESS:/$FILE_SHARE_NAME $MNT_DIR
echo "Mounting completed."

# Start the application in the background
java -jar filesystem.jar &

# Exit immediately when one of the background processes terminate.
wait -n
```
Replace the file mount code in the run.sh startup script with the following examples, replacing the variables as needed:
echo "mounting NFSv4 share" mount -t nfs4 -o nolock IP_ADDRESS:FILE_SHARE_NAME MOUNT_POINT_DIRECTORY
Make sure you use the -o nolock option when mounting NFS. Cloud Run doesn't support NFS locking.
echo "mounting ext4 image via NBD" nbd-client -L -name image IP_ADDRESS DEVICE_NAME mount DEVICE_NAME MOUNT_POINT_DIRECTORY
For PD-SSD via NBD:
echo "mounting PD-SSD via NBD" nbd-client -L -name disk IP_ADDRESS DEVICE_NAME mount DEVICE_NAME MOUNT_POINT_DIRECTORY
echo "mounting 9p export" mount -t 9p -o trans=tcp,aname=/mnt/diod,version=9p2000.L,uname=root,access=user IP_ADDRESS MOUNT_POINT_DIRECTORY
echo "mounting SMB public share" mount -t cifs -ousername=USERNAME,password=PASSWORD,ip=IP_ADDRESS //FILESHARE_NAME MOUNT_POINT_DIRECTORY echo "mounts completed"
Define your environment configuration with the Dockerfile. Use RUN to install any additional system packages you need, such as nbd-client for NBD. Use CMD to specify the command to be executed when running the image (the startup script run.sh) and to provide default arguments for ENTRYPOINT, which specifies the init process binary.
The following are sample Dockerfiles for a Python service and a Java service that use Filestore:
```dockerfile
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.11-slim

# Install system dependencies
RUN apt-get update -y && apt-get install -y \
    tini \
    nfs-common \
    && apt-get clean

# Set fallback mount directory
ENV MNT_DIR /mnt/nfs/filestore

# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./

# Install production dependencies.
RUN pip install -r requirements.txt

# Ensure the script is executable
RUN chmod +x /app/run.sh

# Use tini to manage zombie processes and signal forwarding
# https://github.com/krallin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]

# Pass the startup script as arguments to tini
CMD ["/app/run.sh"]
```
```dockerfile
# Use the official maven image to create a build artifact.
# https://hub.docker.com/_/maven
FROM maven:3-eclipse-temurin-17-alpine as builder

# Copy local code to the container image.
WORKDIR /app
COPY pom.xml .
COPY src ./src

# Build a release artifact.
RUN mvn package -DskipTests

# Use Eclipse Temurin for base image.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM eclipse-temurin:18-jdk-focal

# Install filesystem dependencies
RUN apt-get update -y && apt-get install -y \
    tini \
    nfs-kernel-server \
    nfs-common \
    && apt-get clean

# Set fallback mount directory
ENV MNT_DIR /mnt/nfs/filestore

# Copy the jar to the production image from the builder stage.
COPY --from=builder /app/target/filesystem-*.jar /filesystem.jar

# Copy the startup script
COPY run.sh ./run.sh
RUN chmod +x ./run.sh

# Use tini to manage zombie processes and signal forwarding
# https://github.com/krallin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]

# Run the web service on container startup.
CMD ["/run.sh"]
```
Access a network file system from Cloud Run service code
To access the network file systems in your service code, use file read and write operations as you usually do.
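For example, from the container's shell or the run.sh startup script, files under the mount directory behave like local files. MNT_DIR is the mount point set in the earlier examples:

```bash
# Write a file to the shared file system, then read it back.
echo "hello from Cloud Run" > "$MNT_DIR/hello.txt"
cat "$MNT_DIR/hello.txt"
```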
Containerize and deploy
When your Cloud Run service code is complete, containerize and deploy it as you usually do for a Cloud Run service, making sure you specify the second generation execution environment.
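For example, a deployment with the gcloud CLI might look like the following; the service name, container image, region, and connector name are placeholders:

```bash
# Deploy with the second generation execution environment and the VPC connector
# that connects to the network file system.
gcloud run deploy my-service \
  --image=us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-image \
  --execution-environment=gen2 \
  --vpc-connector=my-connector \
  --region=us-central1
```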
What's next
- Filestore on Cloud Run tutorial
- Cloud Storage FUSE on Cloud Run tutorial
- See the Troubleshooting guide