Learn about expected average performance and recommended performance settings for Filestore.

Expected performance

The following table shows expected performance as a function of the Filestore instance tier and size. For instance sizes not explicitly listed, performance scales linearly with instance size.

Tier      Size (TB)  Read throughput  Read IOPS  Write throughput  Write IOPS
Standard  1          100 MB/s         600        100 MB/s          1,000
Standard  10+        180 MB/s         1,000      120 MB/s          5,000
Premium   2.5+       1.2 GB/s         60,000     350 MB/s          25,000

Performance of any given instance can vary from the numbers above due to factors such as caching on the client or server, the Compute Engine machine type used for the client VM instance, and the workload being tested.
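As a rough illustration of the linear scaling described above, you can estimate expected throughput for an intermediate instance size by interpolating between the table rows. The 5 TB figure below is a hypothetical example for the Standard tier, not a published number:

```shell
# Estimate expected Standard tier read throughput at 5 TB by linear
# interpolation between the 1 TB (100 MB/s) and 10 TB (180 MB/s) rows.
awk 'BEGIN {
  size = 5                                          # instance size in TB
  est = 100 + (180 - 100) * (size - 1) / (10 - 1)   # MB/s
  printf "Estimated read throughput at %d TB: %.1f MB/s\n", size, est
}'
```

This prints an estimate of roughly 135.6 MB/s; actual performance depends on the caveats listed above.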

Recommended client machine type

We recommend a Compute Engine machine type of n1-standard-8 or better for the client VM instance. This allows the client to achieve approximately 16 Gbps of read bandwidth for cache-friendly workloads.
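As an illustration, a client VM with the recommended machine type could be created as follows. The instance name and zone here are placeholders, not values from this guide; substitute your own:

```shell
# Create a client VM with the recommended machine type.
# "nfs-client" and the zone are example values.
gcloud compute instances create nfs-client \
    --machine-type=n1-standard-8 \
    --zone=us-central1-c
```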

Linux client mount options

To achieve the best performance on Linux client VM instances, we recommend using the default NFS mount options, in particular using a hard mount and setting the rsize and wsize options to 1 MB. For more information about NFS mount options, see the nfs(5) man page.
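For example, a share could be mounted with these options as shown below. The server IP address, share name, and mount point are placeholders; the rsize and wsize values are spelled out explicitly here for clarity, even though 1048576 (1 MB) is already the default on most modern Linux clients:

```shell
# Mount an NFS share with a hard mount and 1 MB rsize/wsize.
# 10.0.0.2, /vol1, and /mnt/filestore are placeholder values.
sudo mkdir -p /mnt/filestore
sudo mount -t nfs -o hard,rsize=1048576,wsize=1048576 \
    10.0.0.2:/vol1 /mnt/filestore
```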

Testing performance

If you are using Linux, you can use the fio tool to benchmark read and write throughput and IOPS. The examples in this section show common benchmarks you might want to run. You may need to run fio from multiple client VM instances to achieve maximum performance.

The following example benchmarks maximum write throughput:

fio --ioengine=sync --direct=0 \
--fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=seqwrite --rw=write \
--bs=1m --size=20G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=1 \
--directory=/mnt/gcfs --loops=10

The following example benchmarks maximum write IOPS:

fio --ioengine=sync --direct=0 \
--fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=randwrite --rw=randwrite \
--bs=4K --size=1G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=80 \
--sync=1 --directory=/mnt/standard --loops=10

The following example benchmarks maximum read throughput:

fio --ioengine=sync --direct=0 \
--fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=seqread --rw=read \
--bs=1m --size=240G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=1 \
--directory=/mnt/ssd --invalidate=1 --loops=10

The following example benchmarks maximum read IOPS:

fio --ioengine=sync --direct=0 \
--fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=randread --rw=randread \
--bs=4K --size=1G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=20 \
--sync=1 --invalidate=1 --directory=/mnt/standard --loops=10