Performance

This page describes the expected average performance and the recommended performance settings for Filestore. It also shows how to test the performance of your Filestore instances.

Expected performance

Each Filestore service tier provides a different level of performance. The Basic tiers offer consistent performance beyond a 10 TiB instance capacity. For Enterprise and High Scale tier instances, performance grows or shrinks linearly as the capacity scales up or down.

The performance of any given instance may vary from the reported numbers due to various factors, such as the use of caching, the machine type of the client VM, and the workload tested.

The following table shows the expected performance of Filestore instances based on their service tier and configured capacity:

Tier Capacity (TiB) Read/Write IOPS Read/Write throughput (MiB/s)
BASIC_HDD 1–10 600/1,000 100/100
BASIC_HDD 10–63.9 1,000/5,000 180/120
BASIC_SSD 2.5–63.9 60,000/25,000 1,200/350
HIGH_SCALE_SSD 10 90,000/26,000 2,600/880
HIGH_SCALE_SSD 100 920,000/260,000 26,000/8,800
ENTERPRISE 1 12,000/4,000 120/100
ENTERPRISE 10 120,000/40,000 1,200/1,000

Enterprise and High Scale SSD tier performance scaling

The performance of Enterprise and High Scale tier instances scales linearly with the allocated capacity, in 1 TiB and 2.5 TiB increments, respectively. The following tables show the performance gained with each capacity increment; a worked example follows the tables.

High Scale

Specification Maximum sustained IOPS¹ Maximum sustained throughput² (MiB/s)
Read per 2.5 TiB 23,000 650
Write per 2.5 TiB 6,500 220

Enterprise

Specification Maximum sustained IOPS¹ Maximum sustained throughput² (MiB/s)
Read per 1 TiB 12,000 120
Write per 1 TiB 4,000 100
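
As a worked example of this linear scaling, a 10 TiB Enterprise tier instance provides 10 × 12,000 = 120,000 sustained read IOPS and 10 × 4,000 = 40,000 sustained write IOPS, matching the 10 TiB Enterprise row in the expected performance table above.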

In single- and few-client scenarios, you must increase the number of TCP connections with the nconnect mount option to achieve maximum NFS performance. We recommend specifying up to 7 connections for High Scale tier and up to 2 connections for Enterprise tier. In general, the larger the file share capacity and the fewer the connecting client VMs, the more performance you gain by specifying additional connections with nconnect.
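
For example, on a Linux kernel that supports the nconnect option, a mount command for an Enterprise tier instance might look like the following sketch. The IP address 10.0.0.2, the file share name vol1, and the mount point /mnt/nfs are placeholders; substitute your instance's values.

sudo mount -o nconnect=2 10.0.0.2:/vol1 /mnt/nfs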

Recommended client machine type

We recommend using a Compute Engine machine type that provides an egress bandwidth of 16 Gbps, such as n2-standard-8. This egress bandwidth allows the client to achieve approximately 16 Gbps of read bandwidth for cache-friendly workloads. For additional context, see Network bandwidth.
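
For example, a client VM of this machine type can be created with a command similar to the following sketch; the instance name, zone, and image are illustrative values that you should replace with your own.

gcloud compute instances create nfs-client \
--zone=us-central1-a \
--machine-type=n2-standard-8 \
--image-family=debian-12 \
--image-project=debian-cloud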

Linux client mount options

To achieve the best performance on Linux client VM instances, we recommend using the following NFS mount options, in particular a hard mount, async, and rsize and wsize values of 1 MiB. For more information on NFS mount options, see the nfs man page.

Default option Description
hard The NFS client retries NFS requests indefinitely.
timeo=600 The NFS client waits 600 deciseconds (60 seconds) before retrying an NFS request.
retrans=3 The NFS client attempts NFS requests three times before taking further recovery action.
rsize=1048576 The NFS client can receive a maximum of 1,048,576 bytes (1 MiB) from the NFS server per READ request.
wsize=1048576 The NFS client can send a maximum of 1,048,576 bytes (1 MiB) to the NFS server per WRITE request.
resvport The NFS client uses a privileged source port when communicating with the NFS server for this mount point.
async The NFS client delays sending application writes to the NFS server until certain conditions are met.
Caution: Using the sync option significantly reduces performance.
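
For example, a mount command that applies these options might look like the following. The IP address 10.0.0.2, the file share name vol1, and the mount point /mnt/nfs are placeholders; substitute your instance's values.

sudo mount -o hard,timeo=600,retrans=3,rsize=1048576,wsize=1048576,resvport,async \
10.0.0.2:/vol1 /mnt/nfs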

Testing performance

If you are using Linux, you can use the fio tool to benchmark read and write throughput and IOPS for Basic tier instances. The examples in this section show common benchmarks you might want to run. You may need to run fio from multiple client VM instances to achieve maximum performance. This method for benchmarking performance does not provide accurate results for Enterprise and High Scale tier instances.
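
If fio is not already installed on your client VM, you can typically install it from your distribution's package manager. For example, on Debian or Ubuntu:

sudo apt-get update && sudo apt-get install -y fio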

The following example benchmarks maximum write throughput:

fio --ioengine=libaio --filesize=32G --ramp_time=2s \
--runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
--group_reporting --directory=/mnt/nfs  \
--name=write --blocksize=1m --iodepth=64 --readwrite=write

The following example benchmarks maximum write IOPS:

fio --ioengine=libaio --filesize=32G --ramp_time=2s \
--runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
--group_reporting --directory=/mnt/nfs  \
--name=randwrite --blocksize=4k --iodepth=256 --readwrite=randwrite

The following example benchmarks maximum read throughput:

fio --ioengine=libaio --filesize=32G --ramp_time=2s \
--runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
--group_reporting --directory=/mnt/nfs  \
--name=read --blocksize=1m --iodepth=64 --readwrite=read

The following example benchmarks maximum read IOPS:

fio --ioengine=libaio --filesize=32G --ramp_time=2s \
--runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
--group_reporting --directory=/mnt/nfs  \
--name=randread --blocksize=4k --iodepth=256 --readwrite=randread

¹ Numbers are measured using 4 KiB read/write I/O.

² Numbers are measured using 1 MiB read/write I/O.

What's next