Filestore as storage for Google Cloud VMware Engine datastores

You can use Filestore High Scale and Enterprise tier instances as external datastores for [VMware ESXi](https://en.wikipedia.org/wiki/VMware_ESXi){: .external} hosts in Google Cloud VMware Engine. To do so, create your Filestore instances in regions where both VMware Engine and Filestore [are available](/about/locations#regions), and then mount them as external datastores on your existing VMware ESXi hosts in VMware Engine.

VMware Engine currently offers the following vSphere storage options:

  • [VMware vSAN](https://www.vmware.com/products/vsan.html){: .external}. This includes the storage that comes with each VMware Engine node.
  • External NFS storage. This includes the following options:
      • [**Filestore High Scale or Enterprise tier instances**](/filestore/docs/service-tiers) used as a vSphere datastore.
      • [**NetApp Cloud Volume Service**](/architecture/partners/netapp-cloud-volumes) instances used as a vSphere datastore.

Why external datastores for VMware Engine?

VMware Engine vSAN provides high-performance virtual storage for VMs running in VMware Engine. The VMware Engine service uses hardware nodes with local NVMe solid-state drives (SSDs) that are managed by vSAN to offer a virtual infrastructure for VMware VMs. If you want to scale only the storage resources in your cluster, you must purchase an entire node, along with compute and networking capabilities—resources you may not need. This limitation of vSAN-based hyper-converged infrastructure (HCI) creates demand to scale storage independently of other resources.

With external NFS datastores, you can scale storage independently of computing resources, relying on VMware Engine for all of your VMware workloads.

High Scale and Enterprise tier instances are VMware certified for use with VMware Engine datastores and are available in all VMware Engine regions.

Feature limitations

The following limitations apply:

  • Available only for Filestore High Scale and Enterprise tier instances. Basic SSD and Basic HDD tier instances are not supported.
  • Crash-consistent snapshot support is available for Filestore Enterprise tier instances only.
  • Backup support is available for both High Scale and Enterprise tiers.
  • Copy offload (VAAI) is not available.

Protocol support

The NFSv3 protocol is supported.

Networking

Filestore and VMware Engine services are connected through private service access (PSA). Network charges resulting from storage access within a region do not apply.

Before you begin

The steps in this document assume that you have done the following:

  • Earmarked a /26 CIDR for the Google Cloud VMware Engine service network to be used for external NFS storage.
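Before reserving the /26 range, it can help to verify that it has the expected size and doesn't collide with ranges already in use in your VPC. The following Python sketch uses the standard `ipaddress` module; all of the address ranges shown are hypothetical placeholders, not values from this document:

```python
import ipaddress

# Hypothetical /26 earmarked for the VMware Engine service subnet.
service_range = ipaddress.ip_network("192.168.64.0/26")

# Hypothetical ranges already allocated elsewhere in the VPC.
existing = [
    ipaddress.ip_network("10.0.0.0/16"),
    ipaddress.ip_network("192.168.0.0/26"),
]

# A /26 provides 64 addresses.
assert service_range.num_addresses == 64

# The earmarked range must not overlap any existing allocation.
assert not any(service_range.overlaps(net) for net in existing)

print(f"{service_range} is free to reserve")
```

The same overlap check is worth repeating whenever you carve additional service-subnet ranges out of your address plan.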

Service subnets

When you create a private cloud, VMware Engine creates additional service subnets (for example, service-1, service-2, service-3). Service subnets are intended for appliance or service deployment scenarios, such as storage, backup and disaster recovery, or media streaming. They provide high-scale, linear throughput and packet processing for even the largest private clouds. VM communication across a service subnet travels from the VMware ESXi host directly into the Google Cloud networking infrastructure, enabling high-speed communication.

NSX-T gateway and distributed firewall rules don't apply to any service subnet.

Configuring service subnets

Service subnets do not have a CIDR allocation on initial creation. Instead, you must specify a non-overlapping CIDR range and prefix for service subnets using the VMware Engine console or API.

The first usable address in the range becomes the gateway address. To allocate a CIDR range and prefix, edit one of the service subnets.
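As a concrete illustration of the gateway rule, the following Python sketch (using the standard `ipaddress` module and a hypothetical CIDR range) shows which address a service subnet would use as its gateway:

```python
import ipaddress

# Hypothetical CIDR range assigned to a service subnet.
subnet = ipaddress.ip_network("10.245.17.0/26")

# The network address itself (10.245.17.0) is not usable; the first
# usable host address becomes the subnet's gateway.
gateway = next(subnet.hosts())

print(f"gateway for {subnet} is {gateway}")  # gateway for 10.245.17.0/26 is 10.245.17.1
```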

Service subnets can be updated if CIDR requirements change. However, modifying the CIDR of an existing service subnet can disrupt network availability for VMs attached to that subnet.

Add the reserved CIDR allocations for the service subnets that you defined in the VMware Engine portal to the list of imported clients in your network's VPC peering connection.

If you don't, the NFS mount fails with an error similar to the following in `vmkernel.log`:

```
2022-09-23T04:58:14.266Z cpu23:2103354 opID=be2a0887)NFS: 161: Command: (mount)
Server: (10.245.17.21) IP: (10.245.17.21) Path: (/vol-g-shared-vmware-002) Label:
(NFS) Options: (None)
...
2022-09-23T04:58:14.270Z cpu23:2103354 opID=be2a0887)NFS: 194: NFS mount
10.245.17.21:/vol-g-shared-vmware-002 failed: The mount request was denied by the
NFS server. Check that the export exists and that the client is permitted to
mount it.
```
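A mount denial like the one above usually means the ESXi host's service-subnet address was never imported over the peering connection. One quick local sanity check, sketched below in Python with entirely hypothetical addresses, is to test whether the host's NFS client address falls inside any of the CIDR ranges you imported:

```python
import ipaddress

# Hypothetical CIDR ranges imported into the VPC peering connection.
imported_ranges = [ipaddress.ip_network("10.245.17.0/26")]

# Hypothetical service-subnet address of the ESXi host mounting the share.
esxi_client = ipaddress.ip_address("10.245.18.5")

if any(esxi_client in net for net in imported_ranges):
    print(f"{esxi_client} is covered by the imported ranges")
else:
    print(f"{esxi_client} is NOT imported; the NFS server will deny the mount")
```

If the address is not covered, update the peering connection's imported client ranges and retry the mount.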

Create and manage Filestore instances

To see how to use the Google Cloud console to create and manage a Filestore instance, see Create an instance.

To see how to import the reserved CIDR allocations you created for your service subnets, see Update a peering connection.

After the NFS datastore is mounted to all hosts in a given cluster and becomes available, you can use the vCenter console to provision VMs against the external datastore, and to view metrics and logs related to I/O operations performed against it.

If you're interested in this feature, contact your account team or Google Cloud support.

What's next