File Servers on Google Compute Engine

A file server, also called a storage filer, provides a way for applications to read and update files that are shared across machines. Typically, a file server uses a protocol that enables client machines to mount a filesystem and access the files as if they were hosted locally. The most common protocols for exporting file shares are Network File System (NFS) for Linux and Server Message Block (SMB), historically known as the Common Internet File System (CIFS), for Windows.

In many cases, you can share files by using Google Cloud Storage or Compute Engine persistent disks and avoid hosting a file server altogether. This solution describes those approaches along with several options for running a dedicated file server:

  • Cloud Storage
  • Compute Engine persistent disks
  • Single Node File Server
  • GlusterFS
  • Avere vFXT

An underlying factor in the performance and predictability of all of the Google Cloud Platform services is the network stack that Google has evolved over many years. With the Jupiter Fabric, Google has built a robust, scalable, and stable networking stack that can continue to evolve without affecting your workloads. As Google improves and bolsters its network abilities internally, your file-sharing solution will benefit from the added performance. For more details on the Jupiter Fabric, see the 2015 paper that describes its evolution.

One feature of Cloud Platform that can help you get the most out of your investment is the ability to specify custom VM types. When choosing the size of your filer, you can pick exactly the right mix of memory and CPU, so that your filer is operating at optimal performance without being oversubscribed. Custom VM types allow you to allocate up to 208 GB of memory and up to 32 cores per machine. If you are replicating or moving your existing filer over to GCP, you can use the exact same specifications to ensure parity between environments.
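
For example, you could provision a filer instance with a custom machine type through the Compute Engine API. The following is a minimal sketch using the googleapiclient Python client; the project, zone, image, and sizing values are placeholder assumptions.

    # Create a filer VM with a custom machine type (vCPUs and memory chosen
    # to match an existing filer). All names here are illustrative.
    from googleapiclient import discovery

    project, zone = "my-project", "us-central1-b"      # assumed project and zone
    vcpus, memory_mb = 16, 65536                       # 16 vCPUs, 64 GB of memory

    compute = discovery.build("compute", "v1")
    body = {
        "name": "filer-1",
        # Custom machine types are addressed as custom-<vCPUs>-<memory in MB>.
        "machineType": f"zones/{zone}/machineTypes/custom-{vcpus}-{memory_mb}",
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }
    compute.instances().insert(project=project, zone=zone, body=body).execute()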

Cloud Storage

Cloud Storage is a great way to store your data with high levels of redundancy at a low cost, while enabling you to scale to petabytes and beyond. With Cloud Storage, you can upload and download objects to and from namespaces, called buckets, which are similar to folders.
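
As a simple illustration, the following sketch uploads and downloads an object with the google-cloud-storage Python client; the bucket and object names are placeholders, and the bucket is assumed to already exist.

    from google.cloud import storage

    client = storage.Client()                       # uses Application Default Credentials
    bucket = client.bucket("example-filer-bucket")  # assumed, pre-existing bucket

    # Upload a local file as an object, then download it to another path.
    blob = bucket.blob("shared/config.txt")
    blob.upload_from_filename("config.txt")
    blob.download_to_filename("/tmp/config.txt")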

Objects uploaded to Cloud Storage can be terabytes in size (up to 5 TB per object) and can be uploaded in parallel by uploading smaller pieces and combining them into composite objects. When your upload succeeds, your object is available globally to all readers, thanks to Cloud Storage's strong consistency. You can configure fine-grained access control policies scoped at the user, account, and even domain levels. To control your costs, Cloud Storage offers several storage classes designed to provide different levels of availability and latency.
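
One way to parallelize a large upload is to upload chunks as separate objects and then combine them with a compose call. The following sketch uses the google-cloud-storage Python client; the bucket, part names, and chunking scheme are illustrative assumptions.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-filer-bucket")   # assumed bucket

    # Suppose a large file was split locally into part files (for example with
    # the Unix "split" utility); upload each part, ideally from parallel workers.
    part_names = ["upload/part-00", "upload/part-01", "upload/part-02"]
    parts = []
    for i, name in enumerate(part_names):
        blob = bucket.blob(name)
        blob.upload_from_filename(f"part-{i:02d}")
        parts.append(blob)

    # Compose the uploaded parts into the final object (up to 32 sources per call).
    final = bucket.blob("upload/large-file.bin")
    final.compose(parts)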

Cloud Storage has a powerful set of features, and in many situations has benefits over using a file server. In some cases, however, a file server might be more appropriate for your situation. Here are some things to consider if you choose to share files using Cloud Storage:

  • Reads and writes are done on the entire file rather than at offsets, which means a full overwrite of the file is necessary when uploading.

  • When multiple writers are operating at the same time, the last write wins and overwrites the other changes to the file, unless you provide your own synchronization mechanism, such as the generation precondition shown in the sketch after this list.

  • If your application requires access to POSIX file metadata attributes, like last-modified timestamps, you must use the Cloud Storage API rather than a stat call on your host.
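
For example, the Cloud Storage client libraries let you guard an upload with a generation precondition so that concurrent writers fail loudly instead of silently overwriting each other, and they expose object metadata such as the last update time. The following is a minimal sketch with placeholder bucket and object names.

    from google.cloud import storage
    from google.api_core.exceptions import PreconditionFailed

    client = storage.Client()
    bucket = client.bucket("example-filer-bucket")   # assumed bucket
    blob = bucket.blob("shared/state.json")

    blob.reload()                        # fetch current metadata, including generation
    print("last modified:", blob.updated)

    try:
        # Only succeed if nobody else has written the object since we read it.
        blob.upload_from_filename("state.json", if_generation_match=blob.generation)
    except PreconditionFailed:
        print("another writer updated the object first; re-read and retry")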

Persistent disks

If you have data that doesn't change over time, you might be able to use Compute Engine's persistent disks, and avoid hosting a file server altogether. With persistent disks, you can attach volumes in both read-write and read-only modes. This means that you can first attach a volume to an instance, load it with the data you need, and then attach it as a read-only disk to hundreds of virtual machines simultaneously. Employing read-only persistent disks does not work for all use cases, but it can greatly reduce complexity, compared to using a file server.
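
For instance, after loading a disk with data you could attach it in read-only mode to a fleet of instances through the Compute Engine API. A minimal sketch using the googleapiclient Python client follows; the project, zone, disk, and instance names are assumptions.

    from googleapiclient import discovery

    project, zone = "my-project", "us-central1-b"
    disk_url = f"projects/{project}/zones/{zone}/disks/shared-dataset"
    instances = ["worker-1", "worker-2", "worker-3"]

    compute = discovery.build("compute", "v1")
    for name in instances:
        # Attach the same disk to each worker in read-only mode.
        compute.instances().attachDisk(
            project=project,
            zone=zone,
            instance=name,
            body={"source": disk_url, "mode": "READ_ONLY"},
        ).execute()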

Compute Engine's persistent disks are a great way to store data in Google Cloud Platform (GCP), because they give you flexibility in balancing scale and performance against cost. Persistent disks can also be resized on the fly, allowing you to start with a low-cost, low-capacity volume and grow capacity without spinning up additional instances or disks. Persistent disk throughput and IOPS scale linearly with disk size, so you can scale performance with a simple resize that requires little to no downtime. You no longer have to stripe together a set of disks with software-based RAID to get the aggregate performance you want.
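
A resize is a single API call against the disk; a minimal sketch with placeholder names is shown below. Note that after the resize you still need to grow the filesystem on the instance, for example with resize2fs or xfs_growfs.

    from googleapiclient import discovery

    project, zone = "my-project", "us-central1-b"

    compute = discovery.build("compute", "v1")
    compute.disks().resize(
        project=project,
        zone=zone,
        disk="filer-data",            # assumed data disk name
        body={"sizeGb": "2000"},      # new, larger size; persistent disks can only grow
    ).execute()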

Another advantage of persistent disks is their consistent performance. All disks of the same size that you attach to your instance have the same performance characteristics. You don't need to pre-warm or test your persistent disks before using them in production.

Performance is not the only thing that is easy to predict: the cost of persistent disks is very easy to determine, because there are no IO costs to consider after provisioning your volume. You can easily balance cost and performance, because you have the option of using three different types of disks with varying costs and performance characteristics.

For some workloads, total capacity is the main scaling factor; for those, you can use cheaper spinning disks, called standard persistent disks, instead of paying for the additional IOPS of SSD persistent disks. If your data is ephemeral and requires sub-millisecond latency and high IOPS, you can attach up to 3 TB of local SSD for extreme performance. Local SSDs allow for up to ~700k IOPS at speeds similar to DDR2 RAM, all without consuming your instance's allotted network capacity. For a comparison of the many disk types available to Compute Engine instances, see the documentation for block storage.

Considerations when choosing a filer solution

Choosing a filer solution requires you to make tradeoffs regarding cost, performance, and scalability. Making the decision is easier if you have a well-defined workload, but unfortunately that often isn't the case. In situations where workloads evolve over time or are highly variable, it is prudent to trade cost savings for flexibility and elasticity, so you can grow into your solution. On the other hand, if you have a workload that is short-lived and well understood, you can create a purpose-built filer architecture that can easily be torn down and rebuilt to meet your immediate storage needs.

One of the first decisions to make is whether you want to pay for a supported filer solution, or whether you have the staff available to create your own solution and maintain it in the long run. Once you have decided on a support model, the next decision involves figuring out the durability requirements of your filer. Can you afford to lose any of the data? If not, how much are you willing to pay to ensure that the data is properly replicated to allow for disaster recovery? Next, consider the overall size of your present and future data sets, as this will heavily influence the cost of your filer. Finally, consider your mount locations. The locations of the compute farms you use to access your data will influence your choice of filer solution, because only some solutions allow hybrid on-premises and in-cloud access.

Filer options

Single Node File Server

The easiest way to get a filer up and running on GCP is to use Single Node File Server, which you can deploy automatically by using Google Cloud Launcher. The deployment includes monitoring via Grafana. In minutes you can have a fully functional filer.

Diagram of Single Node File Server

When you use Cloud Launcher to deploy Single Node File Server, you can configure the type of backing disk you'd like: standard or SSD. You can also configure the instance type and total data disk size. Keep in mind that the performance of your filer depends on both the size and type of disk as well as the instance type. The type and size of disk determine the total throughput. The larger the disk, the more performance you will get. The instance type determines how much network bandwidth is available to your filer. For each core, you will see up to 2 Gb/s of network throughput. With those guidelines in place, you should be able to decide where you need to start.

After your filer is fully deployed, you can mount your shares by using NFS or SMB mounts from any host on the local subnet. Keep in mind that you can start with smaller disks and then resize them as necessary to scale with your performance or capacity needs.
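
As a rough illustration, a client instance could mount the share like this; the server name and export path are assumptions and will depend on how you configured the deployment.

    import subprocess

    # Create a mount point and mount the filer's NFS export on this client.
    subprocess.run(["mkdir", "-p", "/mnt/filer"], check=True)
    subprocess.run(
        ["mount", "-t", "nfs", "singlefs-1-vm:/data", "/mnt/filer"],
        check=True,
    )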

If you can tolerate downtime, you can also scale up your filer by stopping the instance, changing the instance type, and then starting it again. Although Single Node File Server cannot scale horizontally to provide a shared pool of disks, you can create as many of the individual filers as you need. This approach could be useful if you are doing development or testing against a shared filesystem back end.
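
A minimal sketch of that scale-up sequence through the Compute Engine API follows; the instance name and the new machine type are placeholder assumptions, and each step waits for the previous operation to finish.

    from googleapiclient import discovery

    project, zone, instance = "my-project", "us-central1-b", "singlefs-1-vm"
    compute = discovery.build("compute", "v1")

    def wait(op):
        # Block until the zonal operation completes before issuing the next call.
        return compute.zoneOperations().wait(
            project=project, zone=zone, operation=op["name"]).execute()

    wait(compute.instances().stop(
        project=project, zone=zone, instance=instance).execute())

    wait(compute.instances().setMachineType(
        project=project, zone=zone, instance=instance,
        body={"machineType": f"zones/{zone}/machineTypes/n1-highmem-16"}).execute())

    compute.instances().start(
        project=project, zone=zone, instance=instance).execute()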

Although Single Node File Server does not provide redundancy for your data, you can create snapshots of your data disk in order to take periodic backups. There is no official paid support for Single Node File Server, so the costs of running it are tied directly to the instance, disk, and network costs. In general, this option should be very low maintenance and require little to no administration.
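
For example, a periodic backup could snapshot the data disk through the Compute Engine API; a minimal sketch with placeholder project, zone, and disk names:

    import time
    from googleapiclient import discovery

    project, zone = "my-project", "us-central1-b"

    compute = discovery.build("compute", "v1")
    compute.disks().createSnapshot(
        project=project,
        zone=zone,
        disk="filer-data",   # assumed data disk name
        body={"name": "filer-data-" + time.strftime("%Y%m%d-%H%M%S")},
    ).execute()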

With Single Node File Server you can scale up to:

  • 64 TB of persistent disk SSD for data
  • 3.0 TB of local SSD - 700k+ IOPS (similar speed to DDR2 RAM)
  • 240 MB/s disk throughput
  • 15,000 write IOPS, 10,000 read IOPS
  • 8 Gb/s of network bandwidth

GlusterFS

GlusterFS is an open source distributed file system that scales to hundreds of terabytes of data and is supported commercially as Red Hat Gluster Storage. This flexible storage system allows clients to connect via NFSv3 and SMB out of the box, and via NFSv4 by using the NFS-Ganesha file server. Gluster allows for both scaling up and scaling out your filer infrastructure.

Diagram of GlusterFS

Each volume, or share, in the pool of Gluster storage can be configured with varying levels of replication and striping attributes. The three main volume types in Gluster are distributed, replicated, and striped. Distributed volumes are the default; they allow you to use the capacity of multiple servers to provide a single file share, with each file living on a single server. Replicated volumes ensure that your data is persisted to multiple servers.

Striped volumes, like distributed volumes, allow you to use the storage capacity of many nodes as a single share. Unlike distributed volumes, they let you split up files across multiple nodes. This can help with the performance of very large files that require concurrent access across many clients.

In addition to the three types of volumes, Gluster also provides the ability to combine capabilities. For example, if you require the capacity of distributed volumes but would also like the redundancy of replicated volumes, you can create a distributed replicated volume, as seen in the diagram above. If you require all of the features of Gluster and can tolerate the storage and compute costs, you can create a distributed striped replicated volume. In addition to zone-to-zone cluster replication, Gluster provides geo-replication, which can be used as a disaster recovery mechanism or for sharing data between regions within GCP.
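
As an illustration, the following sketch wraps the gluster command line to create and start a distributed, three-way replicated volume across six nodes; the host names and brick paths are assumptions, and the commands are expected to run on one of the Gluster nodes after the peers have been probed.

    import subprocess

    # Six bricks with replica 3 yields a distributed replicated volume
    # (two distribution sets, each replicated three ways).
    bricks = [
        "gluster-1:/data/brick1", "gluster-2:/data/brick1", "gluster-3:/data/brick1",
        "gluster-4:/data/brick1", "gluster-5:/data/brick1", "gluster-6:/data/brick1",
    ]

    subprocess.run(
        ["gluster", "volume", "create", "shared-vol", "replica", "3", *bricks],
        check=True,
    )
    subprocess.run(["gluster", "volume", "start", "shared-vol"], check=True)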

Although setting up a basic Gluster cluster is simple, maintaining a performant and scalable cluster requires some effort. In addition to choosing from the different volume types, you need to consider how individual volumes can be configured with various parameters to tune performance for the particular workload you expect. Red Hat provides professional services, training, and support for Gluster, which can alleviate some of the configuration challenges and provide you with guidance as your cluster grows.

If you want to deploy a Red Hat Gluster Storage cluster on Compute Engine, see this white paper for instructions on how to provision a multi-node cluster that includes cross-zone and cross-region replication.

If you want to deploy an open source Gluster cluster on Compute Engine, see this repository for instructions on how to provision a multi-node cluster using an Ansible playbook.

Avere vFXT

For workloads that require the utmost read performance, Avere Systems provides a best-of-breed solution. With Avere's cloud-based vFXT clustered filesystem, you can provide your users with petabytes of storage and millions of IOPS.

Diagram of Avere vFXT

The Avere vFXT is not only a filer, but also a read/write cache that allows for minimal changes to your existing workflow by putting working data sets as close to your compute cluster as possible. With Avere, you can employ the cost effectiveness of Google Cloud Storage as a backing store, along with the performance, scalability and per-minute pricing of Compute Engine.

Avere also allows you to make the most of your current on-premises footprint. In addition to being able to leverage GCP with the vFXT, you can use Avere’s on-premises FXT series to unify the storage of your legacy devices and storage arrays into an extensible filer with a single namespace.

If you are considering a transition away from your on-premises storage footprint, you can use Avere's FlashMove technology to migrate to Cloud Storage with zero downtime for your clients. If you want to provide a disaster recovery mechanism for your on-premises data, you can use the FlashMirror feature to replicate your on-premises storage in Cloud Storage. If you find yourself in need of a large amount of storage for a brief period of time, you can use Cloud Storage to burst your workload into the cloud. You can use as much storage and compute as you need, and then deprovision it without paying any ongoing costs.

Avere uses fast local devices, like SSDs and RAM, to cache the currently active data set as close to your compute devices as possible. With the vFXT, you can use the global redundancy and immense scale of Cloud Storage, while still providing your users with the illusion that their data is local to their compute cluster.

To get Avere up and running as your filer solution, or to learn more about Avere's products, contact Avere directly.

Summary of file server options

The following table summarizes the features of persistent disks and three file server options:

Filer solution          | Optimal data set   | Throughput          | Managed support        | Export protocols  | Highly available | Hybrid
Single Node File Server | < 64 TB            | 8 Gb/s              | No                     | NFSv3, SMB3       | No               | No
GlusterFS               | 10s to 100s of TB  | 10s to 100s of Gb/s | OSS or Red Hat support | NFSv3, SMB3       | Yes              | No
Avere                   | 10s to 100s of TB  | 10s to 100s of Gb/s | Avere                  | NFSv3, SMB2       | Yes              | Yes
Read-only PD            | < 64 TB            | 240 MB/s            | No                     | Direct attachment | No               | No