This page provides an overview of the volumes feature of Google Cloud NetApp Volumes.
About volumes
A volume is a file system container in a storage pool that stores application, database, and user data.
You allocate a volume's capacity from the available capacity in the storage pool, and you can define and resize the capacity without disruption to your processes.
Storage pool settings apply automatically to the volumes they contain. These settings include the service level, location, network (Virtual Private Cloud (VPC)), Active Directory policy, LDAP, and the customer-managed encryption key (CMEK) policy.
Volume performance
Flex storage pools: the performance of a volume depends on the size and capabilities of its storage pool. The pool performance is shared between all volumes in the pool.
Standard storage pools: the volume's performance is defined by the volume size and the service level it inherits from the pool. The volume size can be increased or decreased to optimize performance.
Premium and Extreme storage pools: the volume's performance is defined by the volume size and the service level it inherits from the pool. The volume size can be increased or decreased to optimize performance. Additionally, you can move a volume non-disruptively between Premium and Extreme pools to optimize performance.
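As a rough illustration of how volume size drives performance in Standard, Premium, and Extreme pools, the sketch below estimates a volume's throughput ceiling from its size. The per-GiB factors used here (Standard: 16 KiBps/GiB, Premium: 64 KiBps/GiB, Extreme: 128 KiBps/GiB) are assumptions based on the published service-level figures; check the current service-level documentation for exact values.

```python
# Sketch: estimate a volume's throughput limit from its size and the
# service level it inherits from its pool. The per-GiB throughput
# factors below are assumed from the published service-level figures.

THROUGHPUT_KIBPS_PER_GIB = {
    "standard": 16,
    "premium": 64,
    "extreme": 128,
}

def volume_throughput_mibps(size_gib: int, service_level: str) -> float:
    """Return the approximate throughput ceiling in MiBps."""
    factor = THROUGHPUT_KIBPS_PER_GIB[service_level.lower()]
    return size_gib * factor / 1024  # convert KiBps to MiBps

# Doubling a Premium volume from 1 TiB to 2 TiB doubles its limit:
print(volume_throughput_mibps(1024, "premium"))  # 64.0 MiBps
print(volume_throughput_mibps(2048, "premium"))  # 128.0 MiBps
```

This is why resizing a volume in a Standard, Premium, or Extreme pool directly optimizes its performance: the throughput limit scales linearly with the provisioned size.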
Space provisioning
Provision enough capacity for your volume to hold your data, and leave some empty space as a buffer for growth.
If a volume becomes full, clients receive an out-of-space error when they try to modify or add data, which can lead to problems for your applications or users. You should monitor usage of your volumes and maintain a provisioned space buffer of 20% above your expected volume utilization. For information on monitoring usage, see Monitor NetApp Volumes.
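The 20% buffer guidance above can be sketched as a small sizing helper. The function name and rounding behavior are illustrative, not part of any NetApp Volumes API.

```python
# Sketch: compute a provisioned volume size that keeps a 20% free-space
# buffer above the expected utilization, rounded up to a whole GiB.

import math

def provisioned_size_gib(expected_usage_gib: float, buffer_ratio: float = 0.20) -> int:
    """Return a provisioned size with the given free-space buffer."""
    return math.ceil(expected_usage_gib * (1 + buffer_ratio))

# 500 GiB of expected data plus a 20% buffer -> provision 600 GiB.
print(provisioned_size_gib(500))  # 600
```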
Snapshots consume the volume's capacity. For more information, see Snapshot space use.
Volume reversion
NetApp Volumes lets you revert volumes to a previously created snapshot. When you revert a volume, it restores all volume contents back to the point in time the snapshot was taken. Any snapshot created after the snapshot used for the reversion is lost. If you don't want to lose data, we recommend that you clone a volume or restore data with snapshots instead.
You can use volume reversion to test and upgrade applications or to recover from ransomware attacks. The process is similar to overwriting the volume with a backup, but takes only a few seconds. You can revert a volume to a snapshot regardless of the volume's capacity.
Reversions happen while the volume is online and in use by clients. Because the reversion changes open files without notifying the application, we recommend stopping all critical applications before you revert to avoid potential data corruption.
Block volume from deletion when clients are connected
NetApp Volumes lets you block the deletion of volumes while they are mounted by a client. If you use volumes for Google Cloud VMware Engine (GCVE) datastores, you must enable this setting. When the Block volume from deletion when clients are connected setting is enabled, an error message is displayed when you try to delete a mounted volume.
You can enable this deletion blocking when you create a volume, create a new volume from a snapshot, or create a new volume from a backup.
The following protocols support blocking the deletion of volumes:
NFSv3
NFSv4.1
Both NFSv3 and NFSv4.1
To delete a volume when this option is enabled, all clients must first unmount the volume. After that, you must wait more than 52 hours before you can delete the volume.
Large capacity volumes
Premium and Extreme service levels allow volume sizes between 100 GiB and 102,400 GiB and maximum throughput of up to 4.5 GiBps. Some workloads require larger volumes and higher throughput, which can be achieved by using the large capacity volume option with Premium and Extreme service levels.
Large capacity volumes can be sized between 15 TiB and 1 PiB in increments of 1 GiB and deliver throughput performance of up to 12.5 GiBps.
Large capacity volumes offer six storage endpoints (IP addresses) to load-balance client traffic to the volume and achieve higher performance. The six IP addresses make such volumes an ideal candidate for workloads that require high performance and highly concurrent access to the same data. For recommendations on how to connect your clients, see Connect large capacity volumes with multiple storage endpoints. Regular volumes can't be converted into large capacity volumes after creation, or the reverse.
Large capacity volumes limitations
The following limitations are applicable for large capacity volumes:
You should use a dedicated service project for large capacity volumes.
Volume replication is not supported.
Volume backups are not supported.
CMEK is not supported.
The interval between snapshots must be 30 minutes or longer. This requirement has implications for scheduled snapshots. You must adjust the minute and hour parameters of hourly, daily, and weekly snapshot schedules to make sure that snapshots are taken at least 30 minutes apart from each other.
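The 30-minute spacing rule above can be validated with a small check on the minute-of-hour offsets of your snapshot schedules. The function below is a hypothetical helper, not part of the NetApp Volumes API; it treats each schedule as a minute offset within the hour and checks pairwise spacing, including the wrap around the hour boundary.

```python
# Sketch: verify that snapshot schedules on a large capacity volume
# fire at least 30 minutes apart. Offsets are minute-of-hour values
# taken from hourly/daily/weekly schedules (hypothetical helper).

def min_spacing_ok(minute_offsets: list[int], min_gap_minutes: int = 30) -> bool:
    """Return True if all triggers within an hour are far enough apart.

    The gap between the last and first offset wraps around the hour
    (for example, offsets 0 and 45 are only 15 minutes apart across
    the wrap).
    """
    offsets = sorted(set(minute_offsets))
    if len(offsets) < 2:
        return True
    gaps = [b - a for a, b in zip(offsets, offsets[1:])]
    gaps.append(60 - offsets[-1] + offsets[0])  # wrap-around gap
    return min(gaps) >= min_gap_minutes

# Hourly at :00 and daily at :10 fire only 10 minutes apart -> invalid.
print(min_spacing_ok([0, 10]))  # False
# Hourly at :00 and daily at :30 are 30 minutes apart -> valid.
print(min_spacing_ok([0, 30]))  # True
```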
Auto-tiering
Google Cloud NetApp Volumes lets you enable auto-tiering on a per-volume basis if auto-tiering is enabled on the storage pool. Auto-tiering reduces the overall cost of volume usage. For more information about auto-tiering, see product overview.
After you enable auto-tiering on a volume, you can pause and resume it as needed. However, auto-tiering can't be disabled after it's enabled. You can also adjust the cooling threshold on a per-volume basis, between 7 and 183 days; the default is 31 days. Data that hasn't been accessed for longer than the cooling threshold moves to the cold tier once a day.
As a volume administrator, you set the volume size large enough to hold your data and to meet your performance objectives.
After the cooling threshold passes, infrequently used data is automatically moved to the cold tier. You can query the amount of data in the cold tier; to perform the query, see Lookup tiering statistics. The volume capacity minus the cold tier capacity comprises the hot tier of a volume.
The billing occurs at the storage pool level and includes two components:
Cold tier pricing
Hot tier pricing
The total cold tier size of all volumes within the pool is summed and charged at the cold tier pricing. Any remaining capacity in the pool is charged at the pool's hot tier pricing. However, if the pool's hot tier size is smaller than 2 TiB, then 2 TiB is charged at the hot tier rate, and the remaining capacity is charged at the cold tier rate. Additionally, there is a charge for network traffic associated with moving data to or from the cold tier of a storage pool. For more information about pricing, see Storage pool pricing.
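The billing split described above, including the 2 TiB hot-tier minimum, can be sketched as a small calculation. The function and its return shape are illustrative assumptions, not a billing API.

```python
# Sketch: split a storage pool's billed capacity into hot-tier and
# cold-tier components, applying the 2 TiB minimum hot-tier charge.

HOT_TIER_MINIMUM_GIB = 2 * 1024  # 2 TiB floor billed at the hot tier rate

def billed_capacity_gib(pool_size_gib: int, total_cold_gib: int) -> tuple[int, int]:
    """Return (hot_billed_gib, cold_billed_gib) for a storage pool."""
    hot = pool_size_gib - total_cold_gib
    if hot < HOT_TIER_MINIMUM_GIB:
        # The hot tier is billed at no less than 2 TiB (capped at pool size).
        hot = min(HOT_TIER_MINIMUM_GIB, pool_size_gib)
    cold = pool_size_gib - hot
    return hot, cold

# A 10 TiB pool with 9 TiB cold data: the hot tier would be 1 TiB, so
# the 2 TiB minimum applies and only 8 TiB is billed at the cold rate.
print(billed_capacity_gib(10 * 1024, 9 * 1024))  # (2048, 8192)
```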
Auto-tiering considerations
The following considerations apply while you use auto-tiering:
Data on the cold tier is priced lower than data on the hot tier. Using a shorter cooling threshold can move data sooner to the cold tier, which can reduce the overall cost if the data is not accessed again soon.
Data on the cold tier is slower to access than data on the hot tier. Using a cooling threshold which is too short can make access to your data slower.
Moving the data to and from the cold tier has data transfer costs. If you choose a short cooling threshold, data can move more frequently between the hot and cold tiers which can increase the overall cost.
When you use volume replication, the auto-tiering capability is controlled independently for the source and destination volume.
The performance of an auto-tiered volume depends on the sizes of its hot and cold tiers. Each GiB of hot tier size adds 64 KiBps (Premium) or 128 KiBps (Extreme) of throughput capability to the volume, while each GiB of cold tier size adds only 2 KiBps.
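The per-GiB factors above translate into a simple throughput estimate for an auto-tiered volume. The sketch below uses the stated figures (64 KiBps/GiB hot for Premium, 128 for Extreme, 2 KiBps/GiB cold); the helper itself is illustrative.

```python
# Sketch: estimate an auto-tiered volume's throughput ceiling from its
# hot and cold tier sizes, using the per-GiB factors stated above.

HOT_KIBPS_PER_GIB = {"premium": 64, "extreme": 128}
COLD_KIBPS_PER_GIB = 2

def tiered_throughput_mibps(hot_gib: int, cold_gib: int, service_level: str) -> float:
    """Return the approximate throughput ceiling in MiBps."""
    hot_kibps = hot_gib * HOT_KIBPS_PER_GIB[service_level.lower()]
    cold_kibps = cold_gib * COLD_KIBPS_PER_GIB
    return (hot_kibps + cold_kibps) / 1024  # convert KiBps to MiBps

# A 4 TiB Premium volume with 3 TiB tiered cold keeps far less
# throughput capability than the same volume fully on the hot tier:
print(tiered_throughput_mibps(1024, 3072, "premium"))  # 70.0
print(tiered_throughput_mibps(4096, 0, "premium"))     # 256.0
```

This is why an aggressive cooling threshold can hurt performance: as more of the volume cools to the cold tier, the volume's overall throughput capability shrinks.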
On storage pools with auto-tiering enabled, cold-block tracking might occur on existing volumes that don't have auto-tiering enabled. If you then enable auto-tiering on these volumes, the already-cooled data immediately becomes eligible for tiering and might move to the cold tier on the next day.