This page describes the expected average performance and recommended performance settings for Filestore. It also shows how to test the performance of your Filestore instances.
Expected performance
Each Filestore service tier provides a different level of performance. The Basic tiers offer consistent performance beyond 10 TB of instance capacity. For High Scale tier instances, performance grows or shrinks linearly as the capacity scales up or down.
The performance of any given instance may vary from the reported numbers due to a variety of factors, such as the use of caching on the client or server, the Compute Engine machine type used for the client VM instance, and the workload being tested.
The following tables show the expected performance of Filestore instances based on their service tier and configured capacity:
Tier | Capacity (TB) | Read/Write IOPS | Read/Write throughput (MB/s) |
---|---|---|---|
BASIC_HDD | 1-10 | 600/1,000 | 100/100 |
BASIC_HDD | 10-63.9 | 1,000/5,000 | 180/120 |
BASIC_SSD | 2.5-63.9 | 60,000/25,000 | 1,200/350 |
HIGH_SCALE_SSD | 60 | 90,000/30,000 | 3,000/660 |
HIGH_SCALE_SSD | 320 | 480,000/160,000 | 16,000/3,520 |
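Tier and capacity are set when an instance is created. As a minimal sketch, assuming placeholder values for the instance name, zone, network, and share name, a 2.5 TB Basic SSD instance could be created with the gcloud CLI:

```
# Sketch only: the instance name, zone, network, and share name are placeholders.
# Creates a Basic SSD instance with 2.5 TB (2560 GB) of capacity.
gcloud filestore instances create my-filestore \
  --zone=us-central1-a \
  --tier=BASIC_SSD \
  --file-share=name=vol1,capacity=2560GB \
  --network=name=default
```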
High Scale SSD tier performance scaling and per-client performance
The performance of High Scale tier instances scales linearly with the allocated capacity in 10 TB steps. The following table shows the amount of performance gained with each 10 TB of capacity.
Specification | Maximum sustained IOPS | Maximum sustained throughput (MB/s) |
---|---|---|
Read per 10 TB | 15,000 | 500 |
Write per 10 TB | 5,000 | 110 |
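For example, a 320 TB High Scale instance contains 32 increments of 10 TB, so its maximum sustained read performance is 32 × 15,000 = 480,000 IOPS and 32 × 500 = 16,000 MB/s, which matches the values in the expected performance table above.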
These performance numbers also correspond to the performance cap per client for
each instance. For example, even though a High Scale tier instance with 60 TB
of capacity has a total maximum sustained read/write throughput of 3,000/660
MB/s, the maximum throughput achievable per client is 500/110 MB/s. There is
no per-client performance cap for Basic tier instances.
Recommended client machine type
We recommend using a Compute Engine machine type of n1-standard-8 or better for the client VM instance. This allows the client to achieve approximately 16 Gbps of read bandwidth for cache-friendly workloads.
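As a minimal sketch, assuming a Debian-based image and placeholder values for the VM name and zone, such a client VM could be created with the gcloud CLI:

```
# Sketch only: the VM name, zone, and image are placeholders; adjust for your project.
gcloud compute instances create nfs-client \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --image-family=debian-11 \
  --image-project=debian-cloud
```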
Linux client mount options
To achieve the best performance on Linux client VM instances, we recommend using the default NFS mount options, in particular a hard mount, async, and rsize and wsize values of 1 MB. For more information on NFS mount options, see the nfs man page.
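As a minimal sketch, assuming a placeholder Filestore IP address (10.0.0.2), share name (vol1), and mount point, a mount command that states these defaults explicitly looks like the following:

```
# Sketch only: the IP address, share name, and mount point are placeholders.
sudo mkdir -p /mnt/filestore
sudo mount -t nfs -o hard,timeo=600,retrans=3,rsize=1048576,wsize=1048576,resvport,async \
  10.0.0.2:/vol1 /mnt/filestore
```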
Default NFS mount options
Default option | Description |
---|---|
hard | The NFS client retries NFS requests indefinitely. |
timeo=600 | The NFS client waits 600 deciseconds (60 seconds) before retrying an NFS request. |
retrans=3 | The NFS client attempts NFS requests three times before taking further recovery action. |
rsize=1048576 | The NFS client can receive a maximum of 1048576 bytes (1 MiB) from the NFS server per READ request. |
wsize=1048576 | The NFS client can send a maximum of 1048576 bytes (1 MiB) to the NFS server per WRITE request. |
resvport | The NFS client uses a privileged source port when communicating with the NFS server for this mount point. |
async | The NFS client delays sending application writes to the NFS server until certain conditions are met. Caution: Using the sync option significantly reduces performance. |
Testing performance
If you are using Linux, you can use the fio tool to benchmark read and write throughput and IOPS for Basic tier instances. The examples in this section show common benchmarks you might want to run. You may need to run fio from multiple client VM instances to achieve maximum performance. This method for benchmarking performance will not provide accurate results for High Scale tier instances.
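If fio is not already installed on the client VM, it is typically available from the distribution's package manager; for example, on Debian-based images:

```
sudo apt-get update && sudo apt-get install -y fio
```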
The following example benchmarks maximum write throughput:
```
fio --ioengine=sync --direct=0 \
  --fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=seqwrite --rw=write \
  --bs=1m --size=20G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=1 \
  --directory=/mnt/gcfs --loops=10
```
The following example benchmarks maximum write IOPS:
```
fio --ioengine=sync --direct=0 \
  --fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=randwrite --rw=randwrite \
  --bs=4K --size=1G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=80 \
  --sync=1 --directory=/mnt/standard --loops=10
```
The following example benchmarks maximum read throughput:
```
fio --ioengine=sync --direct=0 \
  --fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=seqread --rw=read \
  --bs=1m --size=240G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=1 \
  --directory=/mnt/ssd --invalidate=1 --loops=10
```
The following example benchmarks maximum read IOPS:
```
fio --ioengine=sync --direct=0 \
  --fsync_on_close=1 --randrepeat=0 --nrfiles=1 --name=randread --rw=randread \
  --bs=4K --size=1G --end_fsync=1 --fallocate=none --overwrite=0 --numjobs=20 \
  --sync=1 --invalidate=1 --directory=/mnt/standard --loops=10
```
What's next
- Troubleshoot performance-related issues for Filestore.