This document answers frequently asked questions (FAQs) about NetApp Cloud Volumes Service for Google Cloud.
This section provides answers to FAQs about the security and architecture elements of the Cloud Volumes Service for Google Cloud.
What is the data plane architecture for Cloud Volumes Service for Google Cloud?
Cloud Volumes Service for Google Cloud leverages the Google Cloud private Google access framework. In this framework, you can connect to the Cloud Volumes Service from Virtual Private Cloud (VPC) by using private (RFC 1918) addresses. This framework uses service networking and VPC peering constructs similar to other Google Cloud services like Cloud SQL, in order to help ensure isolation between tenants.
For an architecture overview of Cloud Volumes Service for Google Cloud, see Architecture for Cloud Volumes Service.
For more information about how to establish a private services connection to Cloud Volumes Service for Google Cloud, see Setting up private services access.
Is the data in a cloud volume encrypted?
Yes. Data is encrypted at rest using NetApp Volume Encryption (NVE), which uses AES 256-bit encryption that is FIPS 140-2 compliant. Data is encrypted on a per-volume basis, and each volume has a separate key. Encryption keys are accessible only to the embedded key management service within the storage system.
Is the data encrypted in transit between a cloud volume and a Compute Engine instance?
Between a cloud volume and a Compute Engine instance, data traverses standard Google VPC constructs and inherits any in-transit encryption features those constructs provide.
You can enable SMB encryption on a volume.
You can enable NFSv4.1 Kerberos encryption on a volume of the CVS-Performance service type.
How is access to the data in a cloud volume restricted to specific Compute Engine instances or users?
For NFS volumes, access to mount volumes is controlled by export policies. You can set up to five export policies per volume to control which instances (by IP addresses) have access and what kind of access (read only, read/write) they have.
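Conceptually, export-policy evaluation matches a client's IP address against the volume's list of rules. The following sketch is illustrative only (the rule format and function name are not the Cloud Volumes Service API); it mirrors the matching logic described above:

```python
import ipaddress

# Illustrative export-policy rules: up to five per volume, each granting
# read-only or read/write access to a range of client IP addresses.
EXPORT_RULES = [
    {"allowed": "10.0.1.0/24", "access": "read-write"},
    {"allowed": "10.0.2.0/24", "access": "read-only"},
]

def access_for(client_ip, rules=EXPORT_RULES):
    """Return the access level for a client IP, or None if no rule matches."""
    addr = ipaddress.ip_address(client_ip)
    for rule in rules:
        if addr in ipaddress.ip_network(rule["allowed"]):
            return rule["access"]
    return None  # client cannot mount the volume
```

In the service itself you define the rules per volume through the UI or API; the sketch only illustrates how a client address maps to an access level.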
For SMB volumes, a user-supplied Active Directory (AD) server is required. The AD server handles authentication and authorization to the SMB cloud volume, and you can configure it to handle granular access with NTFS ACLs.
Are there IAM controls to specify who can administer Cloud Volumes Service?
Yes. Cloud Volumes Service provides granular permissions for all objects in the Cloud Volumes Service API, such as volumes, snapshots, and Active Directory. These granular permissions are fully integrated into the IAM framework through two predefined roles: netappcloudvolumes.admin and netappcloudvolumes.viewer. You can assign these roles to users and groups per project to control administration rights for Cloud Volumes Service. For more information, see Permissions for Cloud Volumes Service.
How is access to the Cloud Volumes Service API secured?
There are two layers of security to access the Cloud Volumes Service API:
- Authentication: The caller of the API is required to supply valid JWT tokens. You can generate a valid JWT token only from a private key file of a service account. You can disable or delete service accounts to revoke the ability to make calls to the Cloud Volumes Service API.
- Authorization: Cloud Volumes Service checks the granular permission against IAM for each operation requested on the API (when an authenticated request is received). Authentication alone is not sufficient to access the API.
For more information about how to set up a service account, obtain permissions, and call the Cloud Volumes Service API, see Cloud Volumes APIs.
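To make the token flow concrete, the following sketch assembles a JWT with Python's standard library. It uses HS256 with a demo secret purely to show the header.payload.signature structure; a real Cloud Volumes Service call requires an RS256 token signed with the service account's private key, and the claim values below are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data):
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload, secret):
    """Build header.payload.signature. HS256 is used here only for
    illustration; service-account tokens are signed with RS256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# Hypothetical claims; a real token's "iss" is the service account email.
claims = {"iss": "cvs-admin@my-project.iam.gserviceaccount.com",
          "exp": int(time.time()) + 3600}
token = make_jwt(claims, b"demo-secret")
```

Revoking the service account (or deleting its key) invalidates the ability to mint new tokens, which is why disabling a service account cuts off API access.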
This section provides answers to FAQs about the NFS elements of the Cloud Volumes Service for Google Cloud.
How does NFSv4.1 handle user identifiers?
Cloud Volumes Service enables numerical user identifiers (UIDs) for NFSv4.1
using sec=sys. That means Unix UIDs and group identifiers (GIDs) are used to
identify users and groups, so NFSv4.1 clients should work immediately without complex user identity management. The only exception is the root user, who functions properly as root but is displayed as UID=4294967294 in ls output. To fix this, edit the NFSv4 ID mapping configuration file (typically /etc/idmapd.conf) on your clients to contain the following:
domain = defaultv4iddomain.com
Currently, users cannot change the name of the NFSv4 domain on the Cloud Volumes Service side.
Modern Linux clients are configured to support numerical IDs by default. If your client doesn’t, you can enable numerical IDs with the following command:
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
What are the Linux NFS concurrency recommendations?
NFSv3 does not have a mechanism to negotiate concurrency between the client and
the server. The client and the server each defines its limit without consulting
the other. For the best performance, use the maximum number of client-side
sunrpc slot table entries that doesn't cause pushback on the server. When a
client overwhelms the server network stack’s ability to process a workload, the server responds by decreasing the window size for the connection, which can reduce performance.
By default, modern Linux kernels define the per-connection slot table entry size (sunrpc.max_tcp_slot_table_entries) as supporting 65,536 outstanding operations, and volumes of the CVS service type enforce a limit of 128 slots for each NFS TCP connection. You do not need to use values this high. In queueing theory, Little's law says that concurrency is the product of the operation rate and the latency. This means that the I/O rate is determined by concurrency (that is, outstanding I/O) and latency. 65,536 is orders of magnitude more than the number of slots needed for even extremely demanding workloads.
For example, a latency of 0.5 ms and a concurrency of 128 can achieve up to 256,000 IOPS. A latency of 1 ms and a concurrency of 128 can achieve up to 128,000 IOPS.
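Those figures follow directly from Little's law; a minimal calculation:

```python
def max_iops(concurrency, latency_s):
    """Little's law: concurrency = rate * latency, so rate = concurrency / latency."""
    return concurrency / latency_s

# The examples from the text: 128 outstanding operations (slots)
iops_fast = max_iops(128, 0.0005)  # 0.5 ms latency -> 256,000 IOPS
iops_slow = max_iops(128, 0.001)   # 1 ms latency   -> 128,000 IOPS
```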
- NFS clients with normal traffic infrequently use more than 128 slots, so you do not need to take any action for these clients.
- If you have a few clients that need to push heavy NFS traffic (for example, databases or compute applications), limit their queues to 128 slots by setting the kernel parameter sunrpc.tcp_max_slot_table_entries to 128. The way to make the setting persist depends on the Linux distribution:
- Red Hat Enterprise Linux: How do I configure TCP maximum slot table entries to 128 for RHEL 7/8 NFS clients?
- SUSE Linux Enterprise: SLES 12 (or higher) NFS client has occasional slow performance against NetApp NFS server
If your application requires higher IOPS, consider using the nconnect mount option to use multiple TCP connections.
- If you have many clients (more than 30) that need to push heavy NFS traffic, such as compute farms for high-performance computing (HPC) or electronic design automation (EDA) workflows, consider setting sunrpc.tcp_max_slot_table_entries to 16 to improve aggregated performance. The sunrpc.tcp_max_slot_table_entries kernel parameter is only available after the sunrpc kernel module is loaded, which happens automatically the first time an NFS file system is mounted.
This section provides answers to FAQs about the SMB elements of the Cloud Volumes Service for Google Cloud.
Why does the cloud volumes SMB NetBIOS name not reflect the name configured for the Cloud Volume?
Cloud Volumes Service limits the SMB NetBIOS name of a cloud volume to 10 characters. If the configured name is longer, Cloud Volumes Service reconfigures the NetBIOS name as follows:
Why can't my client resolve the SMB NetBIOS name?
You can map or mount an SMB share created with Cloud Volumes Service by using
its uniform naming convention (UNC) path, which the UI displays. A UNC path
follows the pattern \\<hostname>\<share-name> (for example, \\cvssmb-d2e6.cvsdemo.internal\quirky-youthful-hermann). The hostname
is only resolvable through the AD’s built-in DNS servers. Windows hosts joined
to the AD domain usually use these DNS servers. Servers not belonging to the
domain might use different DNS servers (such as Cloud DNS) and cannot resolve
the hostname. This is very common for Linux VMs in a Google Cloud project
and for Windows clients not joined to the domain.
You can use an IP address instead. You can determine the IP address by querying the AD DNS servers:
dig @AD_SERVER_IP_ADDRESS +short cvssmb-d2e6.cvsdemo.internal
nslookup cvssmb-d2e6.cvsdemo.internal AD_SERVER_IP_ADDRESS
Alternatively, you can create a forwarding zone to your AD DNS servers so that a client using your DNS server can resolve fully qualified domain names from the UNC path. The following example demonstrates the creation of a forwarding zone with Cloud DNS:
gcloud dns managed-zones create my-ad \
    --project=my-project \
    --description="" \
    --dns-name="cvsdemo.internal." \
    --visibility="private" \
    --networks="my-vpc" \
    --private-forwarding-targets="AD_SERVER1_IP_ADDRESS,AD_SERVER2_IP_ADDRESS"
How can I check whether my SMB share is using SMB encryption?
When you map an SMB share, the client and server negotiate the SMB version and features like SMB encryption. After you enable the SMB encryption flag for a volume, the CVS SMB server requires SMB encryption. This means that only SMB3 clients that support SMB encryption can access the volume.
To verify whether your SMB connection is using SMB encryption, map a CVS share and run the following PowerShell command:
Get-SmbConnection -servername [CVS_IP_OR_NETBIOS_NAME] | Select-Object -Property Encrypted
How can I identify Active Directory domain controllers used by the CVS and CVS-Performance service types?
Cloud Volumes Service uses Active Directory connections to configure volumes of the CVS and CVS-Performance service types. Each service type can have one AD connection for each Google Cloud region.
Microsoft recommends placing Active Directory domain controllers (DCs) close to SMB servers for good performance. For Google Cloud, place DCs in the same region where Cloud Volumes Service is used. For the CVS service type, DCs must be in the same region; otherwise, volumes of the CVS service type can’t connect to DCs. NetApp recommends having at least two DCs for each region, for service resilience.
Cloud Volumes Service uses standard DNS-based discovery to find suitable DCs. Use the following query to get the list of all DCs in your domain:
nslookup -type=srv _ldap._tcp.dc._msdcs.<domain-name> <dns-server>
dig @<dns-server> +short SRV _ldap._tcp.dc._msdcs.<domain-name>
For volumes of the CVS service type, you must limit the list to DCs installed only in a given Google Cloud region. For volumes of the CVS-Performance service type, limiting the list is recommended, but not required. When you create an Active Directory site, make sure that it contains only DCs located in the desired Google Cloud region. After you create a site, use the following command to check which DCs are contained in the list:
nslookup -type=srv _ldap._tcp.<site_name>._sites.dc._msdcs.<domain-name> <dns-server>
dig @<dns-server> +short SRV _ldap._tcp.<site_name>._sites.dc._msdcs.<domain-name>
Verify that the returned list only contains DCs that are in the desired Google Cloud region.
In your Active Directory connection configuration, enter the AD site name.
Which permissions are needed to create Active Directory machine accounts?
To add CVS machine objects to a Windows Active Directory domain, you need an account that either has administrative rights to the domain or has delegated permissions to create and modify machine account objects in a specified organizational unit (OU). You can grant these permissions with the Delegation of Control Wizard in Active Directory by creating a custom task that gives a user the ability to create and delete computer objects with the following access permissions:
- Create/Delete All Child Objects
- Read/Write All Properties
- Change/Reset Password
Creating this custom task adds a security ACL for the defined user to the organizational unit in Active Directory and minimizes access to the Active Directory environment. After the delegation, you can provide that username and password as the Active Directory credentials for Cloud Volumes Service.
SMB performance FAQs
This section answers FAQs about SMB performance best practices for Cloud Volumes Service for Google Cloud.
Is SMB Multichannel enabled by default in SMB shares?
Yes, SMB Multichannel is enabled by default on all existing and newly created SMB volumes. To take advantage of SMB Multichannel, reset any SMB connection established before the feature was enabled by disconnecting and reconnecting the SMB share.
Does Cloud Volumes Service support receive-side scaling (RSS)?
Yes. With SMB Multichannel enabled, an SMB3 client establishes multiple TCP connections to the Cloud Volumes Service SMB server over a single RSS-capable network interface card (NIC).
Which Windows versions support SMB Multichannel?
SMB Multichannel requires SMB 3.0 or later, so it is supported starting with Windows 8 and Windows Server 2012.
Do Google Cloud virtual machines support RSS?
To see whether your Google Cloud virtual machine NICs support RSS, run the Get-SmbClientNetworkInterface command from PowerShell, and check the RSS Capable column in the output:
Does Cloud Volumes Service for Google Cloud support SMB Direct?
No, SMB Direct is not currently supported.
What is the benefit of SMB Multichannel?
The SMB Multichannel feature enables an SMB3 client to establish a pool of connections over a single network interface card (NIC) or multiple NICs and to use them to send requests for a single SMB session. In contrast, by design, SMB1 and SMB2 require the client to establish one connection and send all the SMB traffic for a given session over that connection. This single connection limits the overall protocol performance from a single client.
Is there value in configuring multiple NICs on the SMB client?
No. The SMB client matches the NIC count returned by the SMB server. Each storage volume is accessible from only one storage endpoint. That means that only one NIC is used for any given SMB relationship.
In the following example, the output of the PowerShell command Get-SmbClientNetworkInterface shows that the virtual machine has two network interfaces, 15 and 12. The Get-SmbMultichannelConnection command shows that even though there are two RSS-capable NICs, only interface 12 is used in connection with the SMB share; interface 15 is not in use.
Is NIC Teaming supported in Cloud Volumes Service for Google Cloud?
NIC Teaming is not supported in Cloud Volumes Service for Google Cloud. Although multiple network interfaces are supported on Google Cloud virtual machines, each interface is attached to a different VPC network. Also, the bandwidth available to a Google Cloud virtual machine is calculated for the machine itself and not any individual network interface.
What's the performance for SMB Multichannel?
The following tests and graphs demonstrate the power of SMB Multichannel on single-instance workloads.
With SMB Multichannel disabled on the client, 4-KiB read and write tests were performed using FIO and a 40-GiB working set. The SMB share was detached between each test, with the SMB client connection count per RSS network interface setting incremented through 1, 4, 8, and 16. The tests show that the default setting of 4 is sufficient for I/O-intensive workloads; incrementing to 8 and 16 had no effect, so those results are omitted from the following graph.
During the test, the netstat -na | findstr 445 command showed that additional connections were established with increments from 1 to 4, to 8, and to 16. Four CPU cores were fully utilized for SMB during each test, as confirmed by the Per Processor Network Activity Cycles statistic (not included in this article).
Tests similar to the random I/O tests were performed with 64-KiB sequential I/O. As with the random I/O tests, increasing the client connection count per RSS network interface beyond 4 had no noticeable effect on sequential I/O. The following graph compares the sequential throughput tests.
Therefore, it's best to follow the Microsoft best practice to use default RSS network tunables.
What test parameters were used to produce the performance graphs above?
The following configuration file was used with the Flexible IO (fio) load generator. The iodepth parameter is marked as TBD; by modifying the iodepth value test by test, the I/O and throughput maximums were determined.
[global]
name=fio-test
directory=.\ #This is the directory where files are written
direct=1 #Use directio
numjobs=1 #To match how many users on the system
nrfiles=4 #Num files per job
runtime=300 #If time_based is set, run for this amount of time
time_based #This setting says run the jobs until run time elapses
bs=64K||4K
rw=rw||randrw #choose rw if sequential io, choose randrw for random io
rwmixread=100||0 #<-- Modify to get different i/o distributions
iodepth=TBD #<-- Modify this to get the i/o they want (latency * target op count)
size=40G #Aggregate file size per job (if nrfiles = 4, files=2.5GiB)
ramp_time=20 #Warm up
[test]
What performance is expected with a single VM with a 1 TiB dataset?
To provide more detailed insight into workloads with read/write mixes, the
following two charts show the performance of a single, Extreme service-level
cloud volume of 50 TiB with a 1-TiB dataset and an SMB Multichannel connection count of 4. An optimal iodepth of 16 was used, and Flexible IO (fio) parameters were used to ensure the full use of the network bandwidth (numjobs=16).
The following chart shows the results for 4k random I/O, with a single VM instance and a read/write mix at 10% intervals:
The following chart shows the results for sequential I/O:
What performance is expected when scaling out using 5 VMs with a 1 TiB dataset?
These tests with 5 VMs use the same testing environment as the single VM, with each process writing to its own file.
The following chart shows the results for random I/O:
The following chart shows the results for sequential I/O:
How do you monitor VirtIO ethernet adapters and ensure that you maximize network capacity?
One strategy used in testing with fio is to set numjobs=16. Doing so forks each job into 16 specific instances to maximize use of the Google VirtIO Ethernet adapters.
You can check for activity on each of the adapters in Windows Performance Monitor by selecting Performance Monitor > Add Counters > Network Interface > Google VirtIO Ethernet Adapter.
After you have data traffic running in your volumes, you can monitor your adapters in Windows Performance Monitor. If you do not use all of these 16 virtual adapters, you might not maximize your network bandwidth capacity.
Are jumbo frames supported?
No. Compute Engine instances do not support jumbo frames, so Cloud Volumes Service traffic uses standard frame sizes.
What is SMB Signing, and is it supported by Cloud Volumes Service for Google Cloud?
The SMB protocol provides the basis for file and print sharing and other networking operations such as remote Windows administration. To prevent man-in-the-middle attacks that modify SMB packets in transit, the SMB protocol supports the digital signing of SMB packets.
SMB Signing is supported for all SMB protocol versions that are supported by Cloud Volumes Service for Google Cloud.
What is the performance impact of SMB Signing?
SMB Signing has a detrimental effect on SMB performance. Among other potential causes of the degradation, the digital signing of each packet consumes additional client-side CPU; in these tests, Core 0 handled SMB, including SMB Signing. A comparison with the non-multichannel sequential read throughput numbers in the previous section shows that SMB Signing reduces overall throughput from 875 MiB/s to approximately 250 MiB/s.
Capacity management FAQs
This section provides answers to FAQs about the capacity-management elements of the Cloud Volumes Service for Google Cloud.
How do I determine if a directory is approaching the limit size?
You can use the
stat command from a Cloud Volumes Service client to see
whether a directory is approaching the maximum size limit (320 MB).
For a 320 MB directory, the number of 512-byte blocks is 655360 (320x1024x1024/512).
[makam@cycrh6rtp07 ~]$ stat bin
  File: 'bin'
  Size: 4096   Blocks: 8   IO Block: 65536   directory
[makam@cycrh6rtp07 ~]$ stat tmp
  File: 'tmp'
  Size: 12288   Blocks: 24   IO Block: 65536   directory
[makam@cycrh6rtp07 ~]$ stat tmp1
  File: 'tmp1'
  Size: 4096   Blocks: 8   IO Block: 65536   directory
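If you want to automate this check, a small sketch (assuming st_blocks is reported in 512-byte units, as in the stat output above) could look like this:

```python
import os

DIR_LIMIT_BYTES = 320 * 1024 * 1024        # 320 MB directory size limit
DIR_LIMIT_BLOCKS = DIR_LIMIT_BYTES // 512  # 655,360 512-byte blocks

def directory_usage(path, warn_at=0.9):
    """Return (blocks_used, fraction_of_limit, near_limit) for a directory.

    st_blocks counts 512-byte blocks, matching the Blocks column of stat.
    """
    blocks = os.stat(path).st_blocks
    fraction = blocks / DIR_LIMIT_BLOCKS
    return blocks, fraction, fraction >= warn_at
```

Run this against each large directory on the volume; a near_limit result of True means the directory is within 10% of the 320 MB limit.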
This section provides answers to billing-related FAQs about the Cloud Volumes Service for Google Cloud.
Will I be billed for accessing my cloud volume from a Compute Engine instance located in a different region or zone?
Yes, standard Google Cloud inter-region data movement is charged according to the transfer rates.
CVS service-type volumes: If your Compute Engine instance is located within the same zone as the cloud volume, you are not charged for any traffic movement. If your Compute Engine instance is located in a different zone than the cloud volume, you are charged for traffic movement between zones.
CVS-Performance service-type volumes: If your Compute Engine instance is located within the same region as the cloud volume, you are not charged for any traffic movement, even if that movement occurs within different zones in a region.
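The billing rules above can be encoded in a short sketch (the function and its parameters are illustrative only, not part of any billing API):

```python
def cross_location_charge(service_type, same_region, same_zone):
    """Return True if inter-location traffic charges apply, per the rules above."""
    if service_type == "CVS":
        # CVS: traffic is free only when instance and volume share a zone.
        return not same_zone
    if service_type == "CVS-Performance":
        # CVS-Performance: traffic is free anywhere within the same region,
        # even across zones.
        return not same_region
    raise ValueError("unknown service type")
```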