FAQs about NetApp Cloud Volumes Service for Google Cloud

This document answers frequently asked questions (FAQs) about NetApp Cloud Volumes Service for Google Cloud.

Security FAQs

This section provides answers to FAQs about the security and architecture elements of the Cloud Volumes Service for Google Cloud.

What is the data plane architecture for Cloud Volumes Service for Google Cloud?

Cloud Volumes Service for Google Cloud leverages the Google Cloud private services access framework. In this framework, you can connect to Cloud Volumes Service from a Virtual Private Cloud (VPC) by using private (RFC 1918) addresses. The framework uses service networking and VPC peering constructs similar to those of other Google Cloud services, such as Cloud SQL, to help ensure isolation between tenants.

For an architecture overview of Cloud Volumes Service for Google Cloud, see Understanding the Cloud Volumes Service architecture.

For more information about how to establish a private services connection to Cloud Volumes Service for Google Cloud, see Setting up private services access for Cloud Volumes Service.
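As a rough sketch, establishing private services access typically involves reserving an internal (RFC 1918) address range and creating a VPC peering connection to the service producer. The network name, range name, and the NetApp service endpoint below are assumptions for illustration; follow the linked setup documentation for the authoritative steps and values.

```shell
# Sketch only: my-vpc, cvs-peering-range, and the service endpoint name
# are assumptions; consult the setup guide for the exact values.

# Reserve an internal address range for the service producer.
gcloud compute addresses create cvs-peering-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=24 \
    --network=my-vpc

# Create the private services connection (VPC peering) to the service.
gcloud services vpc-peerings connect \
    --service=cloudvolumesgcp-api-network.netapp.com \
    --ranges=cvs-peering-range \
    --network=my-vpc
```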

Is the data in a cloud volume encrypted?

Yes. Data is encrypted at rest using NetApp Volume Encryption (NVE), which uses AES 256-bit encryption that is FIPS 140-2 compliant. Data is encrypted on a per-volume basis, and each volume has a separate key. Encryption keys are accessible only to the embedded key management service within the storage system.

Is the data encrypted in transit between a cloud volume and a Compute Engine instance?

Data in transit between a cloud volume and a Compute Engine instance traverses standard Google VPC constructs and inherits any in-transit encryption features that those constructs provide.

How is access to the data in a cloud volume restricted to specific Compute Engine instances or users?

For NFS volumes, mount access is controlled by export policies. You can set up to five export policies per volume to control which instances (by IP address) have access and what kind of access (read-only or read/write) they have.
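After an export policy grants an instance access, the volume is mounted with a standard NFS client. A minimal sketch, in which the server IP address and export path are placeholders:

```shell
# Sketch only: 10.194.0.4 and /myvol are placeholder values; use the
# mount instructions shown for your volume in the console.
sudo mkdir -p /mnt/myvol
sudo mount -t nfs -o rw,hard,vers=3 10.194.0.4:/myvol /mnt/myvol
```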

For SMB volumes, a user-supplied Active Directory (AD) server is required. The AD server handles authentication and authorization to the SMB cloud volume, and you can configure it to handle granular access with NTFS ACLs.

Are there IAM controls to specify who can administer Cloud Volumes Service?

Yes. Cloud Volumes Service provides granular permissions for all objects in the Cloud Volumes Service API such as volumes, snapshots, and Active Directory. These granular permissions are fully integrated into the Cloud IAM framework. The permissions are integrated into two predefined roles: netappcloudvolumes.admin and netappcloudvolumes.viewer. You can assign these roles to users and groups per project to control administration rights for Cloud Volumes Service. For more information, see Permissions for Cloud Volumes Service.
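Because the roles are integrated into Cloud IAM, they are granted with standard IAM tooling. A minimal sketch, in which the project ID and user email are placeholders:

```shell
# Sketch only: my-project and the member email are placeholders.
gcloud projects add-iam-policy-binding my-project \
    --member="user:admin@example.com" \
    --role="roles/netappcloudvolumes.admin"
```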

How is access to the Cloud Volumes Service API secured?

There are two layers of security to access the Cloud Volumes Service API:

  • Authentication: The caller of the API is required to supply valid JWT tokens. You can generate a valid JWT token only from a private key file of a service account. You can disable or delete service accounts to revoke the ability to make calls to the Cloud Volumes Service API.
  • Authorization: Cloud Volumes Service checks the granular permission against Cloud IAM for each operation requested on the API (when an authenticated request is received). Authentication alone is not sufficient to access the API.

For more information about how to set up a service account, obtain permissions, and call the Cloud Volumes Service API, see Cloud Volumes APIs.
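One common pattern for calling the API from a shell looks roughly like the following. This is a sketch only: the token-acquisition step and the endpoint path are assumptions, and the linked Cloud Volumes APIs documentation describes the exact JWT-based flow.

```shell
# Sketch only: the token flow, endpoint, and path are assumptions; see
# the Cloud Volumes APIs documentation for the authoritative steps.

# 1. Authenticate as the service account using its private key file.
gcloud auth activate-service-account --key-file=key.json

# 2. Obtain a short-lived token for the authenticated account.
TOKEN=$(gcloud auth print-access-token)

# 3. Call the API; IAM permissions are then checked per operation.
curl -s -H "Authorization: Bearer $TOKEN" \
    "https://cloudvolumesgcp-api.netapp.com/v2/projects/my-project/locations/us-central1/Volumes"
```

Revoking the service account (or deleting its key) immediately removes the ability to generate new tokens, which is the revocation mechanism described above.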

SMB FAQs

This section provides answers to FAQs about the SMB elements of the Cloud Volumes Service for Google Cloud.

Why doesn't the cloud volume's SMB NetBIOS name reflect the name configured for the cloud volume?

Cloud Volumes Service limits the SMB NetBIOS name to 10 characters. The name configured for the volume is truncated to 10 characters if necessary, and a generated suffix is appended, producing a NetBIOS name of the form your_netbios-suffix.

Example: \\myvolname becomes \\myvolname-c372.
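The renaming scheme described above can be illustrated with a short shell snippet. This is a hypothetical illustration only: the suffix is generated by the service, and is hard-coded here.

```shell
# Hypothetical illustration: the configured name is truncated to 10
# characters and a service-generated suffix (hard-coded here) is appended.
volname="myverylongvolumename"
suffix="c372"
netbios="${volname:0:10}-${suffix}"
echo "$netbios"   # myverylong-c372
```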

SMB performance FAQs

This section answers FAQs about SMB performance best practices for Cloud Volumes Service for Google Cloud.

Is SMB Multichannel enabled by default in SMB shares?

Yes, SMB Multichannel is enabled by default as of January 24, 2020. All currently existing SMB volumes have the feature enabled, and all newly created volumes also have the feature enabled.

To take advantage of SMB Multichannel, you must reset any SMB connection that was established before the feature was enabled. To reset a connection, disconnect and reconnect the SMB share.

Does Cloud Volumes Service support receive-side scaling (RSS)?

With SMB Multichannel enabled, an SMB3 client establishes multiple TCP connections to the Cloud Volumes Service SMB server over a single RSS-capable network interface card (NIC).

Which Windows versions support SMB Multichannel?

Windows has supported SMB Multichannel since Windows 2012. For more information, see Deploy SMB Multichannel and The basics of SMB Multichannel.

Do Google Cloud virtual machines support RSS?

To see if your Google Cloud virtual machine NICs support RSS, run the command Get-SmbClientNetworkInterface from PowerShell. Then in the output, check the RSS Capable column:

Output showing RSS capability.

Does Cloud Volumes Service for Google Cloud support SMB Direct?

No, SMB Direct is not currently supported.

What is the benefit of SMB Multichannel?

The SMB Multichannel feature enables an SMB3 client to establish a pool of connections over a single network interface card (NIC) or multiple NICs and to use them to send requests for a single SMB session. In contrast, by design, SMB1 and SMB2 require the client to establish one connection and send all the SMB traffic for a given session over that connection. This single connection limits the overall protocol performance from a single client.

Is there value in configuring multiple NICs on the SMB client?

No. The SMB client matches the NIC count returned by the SMB server. Each storage volume is accessible from only one storage endpoint. That means that only one NIC is used for any given SMB relationship.

In the following example, the output of the PowerShell command Get-SmbClientNetworkInterface shows that the virtual machine has two network interfaces, 15 and 12. The command Get-SmbMultichannelConnection shows that even though there are two RSS-capable NICs, only interface 12 is used in connection with the SMB share; interface 15 is not in use.

Output showing NIC count.

Is NIC Teaming supported in Cloud Volumes Service for Google Cloud?

NIC Teaming is not supported in Cloud Volumes Service for Google Cloud. Although multiple network interfaces are supported on Google Cloud virtual machines, each interface is attached to a different VPC network. Also, the bandwidth available to a Google Cloud virtual machine is calculated for the machine itself and not any individual network interface.

What's the performance for SMB Multichannel?

The following tests and graphs demonstrate the power of SMB Multichannel on single-instance workloads.

Random I/O

With SMB Multichannel disabled on the client, 4-KiB read and write tests were performed using FIO and a 40-GiB working set. The SMB share was detached between each test, and the SMB client connection count per RSS network interface was incremented through settings of 1, 4, 8, and 16 by using the command Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface <count>. The tests show that the default setting of 4 is sufficient for I/O-intensive workloads; incrementing the count to 8 and 16 had no effect, so those results are omitted from the following graph.

Bar chart showing difference in read and write IOPS with SMB Multichannel disabled and enabled.

During the test, the command netstat -na | findstr 445 showed that additional connections were established as the setting was incremented from 1 to 4, to 8, and to 16. Four CPU cores were fully utilized for SMB during each test, as confirmed by the perfmon Per Processor Network Activity Cycles statistic (not included in this document).

Sequential I/O

Tests similar to the random I/O tests were performed with 64-KiB sequential I/O. As with the random I/O tests, increasing the client connection count per RSS network interface beyond 4 had no noticeable effect on sequential I/O. The following graph compares the sequential throughput tests.

Bar chart showing difference in read and write throughput with SMB Multichannel disabled and enabled.

Therefore, it is best to follow the Microsoft best practice to use default RSS network tunables.

What test parameters were used to produce the performance graphs above?

The following configuration file was used with the Flexible I/O (fio) load generator. The iodepth parameter is intentionally marked as TBD: the iodepth value was increased test by test to determine the IOPS and throughput maximums.

[global]
    name=fio-test
    directory=.\       #Directory where test files are written
    direct=1           #Use direct I/O (bypass the page cache)
    numjobs=1          #Match the number of users on the system
    nrfiles=4          #Number of files per job
    runtime=300        #If time_based is set, run for this many seconds
    time_based         #Run the jobs until the runtime elapses
    bs=64K||4K         #64K for sequential tests, 4K for random tests
    rw=rw||randrw      #Choose rw for sequential I/O, randrw for random I/O
    rwmixread=100||0   #Modify to get different I/O distributions
    iodepth=TBD        #Modify to reach the target I/O depth (latency * target op count)
    size=40G           #Aggregate file size per job (with nrfiles=4, each file is 10 GiB)
    ramp_time=20       #Warm-up time
    [test]
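The TBD value can be found empirically by sweeping the queue depth. A minimal sketch of such a sweep, assuming the configuration above is saved as fio-test.ini (the file name is an assumption):

```shell
# Sketch only: rerun the job at increasing queue depths until IOPS or
# throughput stops improving; fio-test.ini is an assumed file name.
for depth in 1 2 4 8 16 32; do
    fio --iodepth="$depth" --output="fio-depth-$depth.log" fio-test.ini
done
```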

Are jumbo frames supported?

Jumbo frames are not supported with Compute Engine virtual machines.

What is SMB Signing, and is it supported by Cloud Volumes Service for Google Cloud?

The SMB protocol provides the basis for file and print sharing and other networking operations such as remote Windows administration. To prevent man-in-the-middle attacks that modify SMB packets in transit, the SMB protocol supports the digital signing of SMB packets.

SMB Signing is supported for all SMB protocol versions that are supported by Cloud Volumes Service for Google Cloud.

What is the performance impact of SMB Signing?

SMB Signing has a detrimental effect on SMB performance. Among other potential causes of the performance degradation, the digital signing of each packet consumes additional client-side CPU. In testing, Core 0 appeared to be responsible for SMB processing, including SMB Signing. A comparison with the non-multichannel sequential read throughput numbers in the previous section shows that SMB Signing reduces overall throughput from 875 MiB/s to approximately 250 MiB/s.

Capacity management FAQs

This section provides answers to FAQs about the capacity-management elements of the Cloud Volumes Service for Google Cloud.

How do I determine if a directory is approaching its size limit?

You can use the stat command from a Cloud Volumes Service client to see whether a directory is approaching the maximum size limit (320 MB).

For a 320 MB directory, the number of 512-byte blocks is 655,360 (320 × 1024 × 1024 / 512).

Examples:

[makam@cycrh6rtp07 ~]$ stat bin
File: 'bin'
Size: 4096            Blocks: 8          IO Block: 65536  directory

[makam@cycrh6rtp07 ~]$ stat tmp
File: 'tmp'
Size: 12288           Blocks: 24         IO Block: 65536  directory

[makam@cycrh6rtp07 ~]$ stat tmp1
File: 'tmp1'
Size: 4096            Blocks: 8          IO Block: 65536  directory
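The arithmetic above can also be scripted. A minimal sketch, assuming a GNU coreutils stat (whose %b format prints the allocated 512-byte block count) and using /tmp as a placeholder directory:

```shell
# The 320 MB directory limit expressed in 512-byte blocks.
limit_blocks=$((320 * 1024 * 1024 / 512))
echo "limit: $limit_blocks blocks"

# Compare a directory's allocated block count (GNU stat %b) to the limit.
# The directory path /tmp is a placeholder.
dir_blocks=$(stat --format=%b /tmp)
echo "/tmp uses $dir_blocks of $limit_blocks 512-byte blocks"
```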

Billing FAQs

This section provides answers to billing-related FAQs about the Cloud Volumes Service for Google Cloud.

Will I be billed for accessing my cloud volume from a Compute Engine instance located in a different region?

Yes. Standard Google Cloud inter-region data transfer rates apply. If your Compute Engine instance is located in the same region as the cloud volume, you are not charged for any traffic movement, even if that movement occurs across different zones within the region.