Using an NFS volume hosted by Filestore as a vSphere datastore
You can use Filestore zonal, regional, and enterprise tier instances as external datastores for VMware ESXi hosts in Google Cloud VMware Engine.
To do so, create Filestore instances in regions where both VMware Engine and Filestore are available, and then mount them as external datastores on your existing VMware ESXi hosts in VMware Engine.
VMware Engine offers the following vSphere storage options:
- VMware vSAN. This includes the storage that comes with each VMware Engine node.
- External NFS storage. This includes the following options:
  - Filestore instances used as a vSphere datastore.
  - Google Cloud NetApp Volumes service instances used as a vSphere datastore.
Why external datastores for VMware Engine?
VMware Engine vSAN provides high performance virtual storage for VMs running in VMware Engine. The VMware Engine service uses hardware nodes with local NVMe solid-state drives (SSDs) that are managed by vSAN to offer a virtual infrastructure for VMware VMs. If you want to scale only the storage resources in your cluster, you must purchase an entire node, along with compute and networking capabilities—resources you might not need. This limitation of vSAN-based hyper-converged infrastructure (HCI) creates demand to scale storage independent of other resources.
With external NFS datastores, you can scale storage independently of compute resources while continuing to run all of your VMware workloads on VMware Engine.
Filestore zonal (formerly High Scale) and enterprise tier instances are VMware certified for use as VMware Engine datastores and are available in all VMware Engine regions.
Feature limitations
The following limitations apply:
- Available only for Filestore zonal, regional, and enterprise tier instances. Basic SSD and Basic HDD tier instances are not supported.
- Crash-consistent snapshot support is available in enterprise tier instances only.
- Backup support is available for both zonal and enterprise tiers.
- Copy offload (VAAI) is not available.
- You can't mount Filestore instances that are connected with direct peering to VMware Engine. For more information, see Network configuration and IP resource requirements.
Protocol support
The NFSv3 protocol is supported.
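For reference, an NFSv3 datastore mount on an ESXi host takes roughly the following form. The server IP, export path, and datastore name below are illustrative (they mirror the log example later in this document); in VMware Engine, support performs this step on your behalf.

# Mount an NFSv3 export as an ESXi datastore (illustrative values; in
# VMware Engine, support runs this step for you).
esxcli storage nfs add \
    --host=10.245.17.21 \
    --share=/vol-g-shared-vmware-002 \
    --volume-name=filestore-datastore-01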
Networking
Filestore and VMware Engine services are connected through private service access (PSA). Network charges resulting from storage access within a region don't apply.
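As a minimal sketch, connecting your VPC to Filestore over private service access with the gcloud CLI looks like the following. The network name, range name, and prefix length are placeholders for your environment.

# Reserve an address range for private service access.
gcloud compute addresses create filestore-psa-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=24 \
    --network=my-vpc

# Peer the VPC with the service producer network.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=filestore-psa-range \
    --network=my-vpc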
Before you begin
The steps in this document assume that you have done the following:
- Earmarked a /26 CIDR for the Google Cloud VMware Engine service network to be used for external NFS storage.
Service subnets
When you create a private cloud, VMware Engine creates additional service subnets (for example, service-1, service-2, service-3). Service subnets are targeted for appliance or service deployment scenarios, such as storage, backup and disaster recovery, or media streaming, providing high-scale, linear throughput and packet processing for even the largest private clouds. VM communication across a service subnet travels from the VMware ESXi host directly into the Google Cloud networking infrastructure, enabling high-speed communication.
NSX-T gateway and distributed firewall rules don't apply to any service subnet.
Configuring service subnets
Service subnets don't have a CIDR allocation on initial creation. Instead, you must specify a non-overlapping CIDR range and prefix for each service subnet by using the VMware Engine console or API. The first usable address in the range becomes the gateway address. To allocate a CIDR range and prefix, edit one of the service subnets.
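For example, a sketch using the gcloud CLI, assuming a recent Google Cloud SDK; the private cloud name, location, and CIDR range are placeholders:

# List the service subnets created with the private cloud.
gcloud vmware private-clouds subnets list \
    --private-cloud=my-private-cloud \
    --location=us-central1-a

# Allocate a non-overlapping /26; the first usable address becomes the gateway.
gcloud vmware private-clouds subnets update service-1 \
    --private-cloud=my-private-cloud \
    --location=us-central1-a \
    --ip-cidr-range=10.100.0.0/26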
You can update service subnets if CIDR requirements change. However, modifying an existing service subnet's CIDR range can disrupt network availability for VMs attached to that subnet.
You should add the reserved CIDR allocations for the service subnets you defined in the VMware Engine portal to the list of imported clients in your network's VPC peering connection.
Failing to do so returns an error similar to the following in vmkernel.log:
2022-09-23T04:58:14.266Z cpu23:2103354 opID=be2a0887)NFS: 161: Command: (mount)
Server: (10.245.17.21) IP: (10.245.17.21) Path: (/vol-g-shared-vmware-002) Label:
(NFS) Options: (None)
...
2022-09-23T04:58:14.270Z cpu23:2103354 opID=be2a0887)NFS: 194: NFS mount
10.245.17.21:/vol-g-shared-vmware-002 failed: The mount request was denied by the
NFS server. Check that the export exists and that the client is permitted to
mount it.
Create and manage Filestore instances
To see how to use the Google Cloud console to create and manage a Filestore instance, see Create an instance.
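For example, creating an enterprise tier instance with the gcloud CLI might look like the following; the instance name, location, share name, capacity, and network are placeholders:

# Create an enterprise tier Filestore instance reachable over
# private service access.
gcloud filestore instances create my-datastore-instance \
    --location=us-central1 \
    --tier=ENTERPRISE \
    --file-share=name=vol1,capacity=1TiB \
    --network=name=my-vpc,connect-mode=PRIVATE_SERVICE_ACCESS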
To see how to import the reserved CIDR allocations you created for your service subnets, see Update a peering connection.
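As a sketch, if your VPC reaches Filestore through the default private service access peering (named servicenetworking-googleapis-com), you can enable custom route exchange on that peering as follows; whether this alone satisfies the imported-clients requirement depends on your topology:

# Exchange custom routes over the private service access peering.
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-vpc \
    --import-custom-routes \
    --export-custom-routes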
To mount your Filestore NFS datastores, contact VMware Engine support. After the NFS datastore is mounted to all hosts in a given cluster and becomes available, you can use the vCenter console to provision VMs against the external datastore and to view metrics and logs for I/O operations performed against it.
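After the datastore is mounted, you can also verify it from vCenter, for example with the open-source govc CLI; the vCenter URL, credentials, and datastore name below are placeholders:

# Point govc at your private cloud's vCenter.
export GOVC_URL='https://vcsa.example.com'
export GOVC_USERNAME='CloudOwner@gve.local'
export GOVC_PASSWORD='...'

# Show capacity and mount details for the NFS datastore.
govc datastore.info filestore-datastore-01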
If you're interested in this feature, contact your account team or Google Cloud support.
What's next
- Learn more about Filestore.
- Compare the relative advantages of block, file, and object storage.
- Review the storage options for High Performance Computing (HPC) workloads in Google Cloud.