This page describes the performance expectations of Google Cloud NetApp Volumes' Premium and Extreme service levels.
Maximum volume throughput
The maximum throughput of a volume is limited and depends on the service level and volume size.
The following table provides the maximum throughput per GiB provisioned for NetApp Volumes' Premium and Extreme service levels.
|Service level|Maximum throughput per GiB provisioned|
|---|---|
|Premium|64 KiBps|
|Extreme|128 KiBps|
Maximum volume throughput example
For example, a Premium volume with a size of 1,500 GiB has a maximum throughput of roughly 93.75 MiBps (1,500 GiB x 64 KiBps/GiB / 1,024 KiB/MiB).
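As a sketch, the calculation above can be expressed in Python. The per-GiB rates follow the service-level table; the helper name is hypothetical:

```python
# Maximum throughput per provisioned GiB, per service level
# (values from the service-level table on this page).
THROUGHPUT_PER_GIB_KIBPS = {
    "premium": 64,   # KiBps per provisioned GiB
    "extreme": 128,  # KiBps per provisioned GiB
}

def max_throughput_mibps(service_level: str, size_gib: int) -> float:
    """Return the maximum volume throughput in MiBps."""
    per_gib_kibps = THROUGHPUT_PER_GIB_KIBPS[service_level]
    return size_gib * per_gib_kibps / 1024  # KiBps -> MiBps

print(max_throughput_mibps("premium", 1500))  # 93.75
```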
You can change the size of a volume or assign it to a pool with a different service level, as long as the pool has matching settings, such as the same region, CMEK policy, Active Directory settings, VPC network, and LDAP settings. This process takes a few seconds and doesn't disrupt I/O operations. Volume throughput limits can increase up to a maximum limit. Once you hit that limit, increasing the volume size doesn't increase the throughput limit.
This section provides details about NetApp Volumes performance considerations.
Workload real-life performance depends on multiple volume parameters—the ratio between read, write, and metadata operations, block size, I/O concurrency, and service latency.
Metadata operations considerations
Metadata operations are operations such as listing the contents of a folder, deleting a file, setting permissions, and a number of protocol-specific operations that make the protocol work. Metadata operations are usually small, and their performance is mostly limited by latency.
Read and write ratio, block size, and I/O concurrency
The read and write ratio, block size, and I/O concurrency parameters are defined by your workload. Although the read and write ratio is usually fixed, you can sometimes change the block size by reconfiguring the application. The largest improvement usually comes from increasing I/O concurrency, so that more I/O operations run in parallel without increasing overall runtime.
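One way to raise I/O concurrency is to split a large copy into chunks and issue them in parallel. The following is an illustrative sketch, not a NetApp Volumes API; the paths, chunk size, and worker count are hypothetical, and it assumes the source supports random reads:

```python
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1024 * 1024  # 1 MiB per I/O operation

def copy_chunk(src_path: str, dst_path: str, offset: int, length: int) -> None:
    """Copy one chunk at the given offset from src to dst."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        src.seek(offset)
        dst.seek(offset)
        dst.write(src.read(length))

def parallel_copy(src_path: str, dst_path: str, workers: int = 8) -> None:
    """Copy a file in parallel chunks to increase I/O concurrency."""
    size = os.path.getsize(src_path)
    # Pre-size the destination so chunks can be written at any offset.
    with open(dst_path, "wb") as dst:
        dst.truncate(size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for offset in range(0, size, CHUNK):
            pool.submit(copy_chunk, src_path, dst_path,
                        offset, min(CHUNK, size - offset))
```

With several I/O operations in flight at once, per-operation latency overlaps instead of accumulating, which is why multi-threaded copies can approach the volume's throughput limit while a single-threaded copy cannot.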
Physical infrastructure, the network path between your clients, and the volume define latency. Different zones within the same region might have different latencies.
Latency affects volume parameters in the following ways:
Volume size: Volume size affects latency through throughput limits; small volumes can experience high latency because their throughput limits are low. Increasing volume capacity to one or more TiB raises the throughput limit and reduces this effect.
Block size: Latency has a dependency on block size. It takes longer to read or write larger blocks because it takes more time to send the data over a network link with a given bandwidth.
High I/O rate latency: For volumes with a high I/O load, the observed I/O latency increases due to queuing effects (see Little's law). As a consequence, measuring latency with a maximum-throughput test is misleading.
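Little's law relates these quantities as concurrency = IOPS × latency. The following sketch uses illustrative, not measured, values to show why sustaining a given IOPS rate at higher latency requires more in-flight I/O:

```python
def required_concurrency(iops: float, latency_s: float) -> float:
    """Little's law: in-flight I/O operations = IOPS * latency."""
    return iops * latency_s

# Sustaining 2000 IOPS at 0.5 ms latency needs 1 in-flight I/O...
print(required_concurrency(2000, 0.0005))  # 1.0
# ...but at 5 ms of queuing-inflated latency it needs 10.
print(required_concurrency(2000, 0.005))   # 10.0
```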
To optimize performance, we recommend that you test the connection to various zones in your nearest region and choose the zone with the lowest latency.
Latency formula examples
The following formulas show how these parameters combine to yield performance numbers:

IOPS = 1 s / latency × concurrency

Throughput = IOPS × block size
For this example, assume you use a single-threaded File Explorer copy with a volume latency of 0.5 ms and a 128 KiB block size to copy a large file from a local SSD to a volume. The following calculations apply to your use case:
1 s/0.0005 s * 1 = 2000 IOPS
2000 IOPS * 128 KiB = 256000 KiB/s = 250 MiB/s
If your latency is one millisecond, IOPS and throughput drop by 50%. Latency has a fundamental impact on single-threaded applications. To drive this volume to its maximum performance potential, use multi-threaded applications that provide higher concurrency.
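The formulas above can be sketched in Python to show how single-threaded throughput falls as latency grows (the function names are hypothetical):

```python
def iops(latency_s: float, concurrency: int) -> float:
    """I/O operations per second: (1 s / latency) * concurrency."""
    return (1 / latency_s) * concurrency

def throughput_mibps(latency_s: float, concurrency: int,
                     block_kib: int) -> float:
    """Throughput in MiB/s: IOPS * block size."""
    return iops(latency_s, concurrency) * block_kib / 1024

print(throughput_mibps(0.0005, 1, 128))  # 250.0 MiB/s at 0.5 ms
print(throughput_mibps(0.001, 1, 128))   # 125.0 MiB/s at 1 ms: half
```

Doubling the latency from 0.5 ms to 1 ms halves both IOPS and throughput for a single-threaded workload, while raising the concurrency parameter recovers the lost throughput.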
Read about application resilience.