The click-to-deploy single-node file server provides a ZFS file server running on a single Google Compute Engine instance. The file server runs on Debian 8, the current stable release of Debian.
This page explains how to deploy, configure, and monitor the file server. To use the file server, you should be familiar with Debian Linux configuration and administration. In particular, you should have experience configuring ZFS, NFS, and Samba.
Deploying the file server
When you click to deploy the file server, you are asked to configure the Compute Engine instance for the file server. Use the following settings to maximize performance:
- Machine type: n1-highmem-8 or greater
- Storage disk type: SSD Persistent Disk to maximize IOPS
- Storage disk size: 1500 GB to maximize throughput; larger to maximize IOPS
The following components are installed on the file server:
- ZFS, with performance tuning for Compute Engine
- NFS and SMB file serving
- A monitoring system with a web interface
The installation process will configure a ZFS storage pool using the attached storage disk. Any local SSDs will be attached to the ZFS pool as L2ARC (cache) devices.
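To confirm the pool layout and the attached cache devices after deployment, you can list the status of all pools on the instance:

zpool status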
After the installation process finishes, the instance reboots and validates the installation. Log files for the setup and validation process are available in:
File server performance

The performance of the single-node file server depends on the type of Compute Engine instance that you deploy, as well as the size and type of disks that are attached to the instance. Real-world performance may vary based on additional factors, such as network latency between the file server and its clients.
If the expected performance does not meet your requirements, you can deploy multiple file servers. You can also investigate using a distributed or clustered file system, such as GlusterFS.
Network throughput

Network performance is limited by the number of cores in your Compute Engine instance. The maximum performance is 2 gigabits per second (Gbps) per core, up to a maximum of 10 Gbps. For sustained workloads, network performance typically ranges from 6 to 8 Gbps. Some of the network capacity is used to read from and write to the attached Persistent Disk.
For more details about network performance, see Egress throughput caps.
Disk throughput

Sustained disk throughput is limited by the performance characteristics of Persistent Disk. Throughput increases with the disk's size, up to a maximum. For reads, the file server can sustain much higher throughput after the data has been cached in memory or on a local SSD.
The following table shows the maximum throughput, excluding caching:
| Throughput limit per instance | Standard Persistent Disk | SSD Persistent Disk |
| --- | --- | --- |
| Read | 180 MB/s | 240 MB/s |
| Write | 120 MB/s | 240 MB/s |
For more details, see Selecting the right Persistent Disk type and size.
Input/output operations per second (IOPS)
IOPS are also limited by the performance characteristics of Persistent Disk. IOPS increase with the disk's size, up to a maximum. For reads, the file server can sustain much higher IOPS after the data has been cached in memory or on a local SSD.
The following table shows the maximum IOPS, excluding caching:
| IOPS limit per instance | Standard Persistent Disk | SSD Persistent Disk |
| --- | --- | --- |
| Read | 7,500 IOPS | 15,000 IOPS |
| Write | 15,000 IOPS | 15,000 IOPS |
For more details, see Selecting the right Persistent Disk type and size.
Configuring the file server
This section describes configuration settings on the file server that you may want to change.
Configuring NFS and SMB sharing
By default, NFS and SMB file sharing is enabled for the local Compute Engine network. Consider updating the configuration to further restrict access. For more information, see the Debian Administrator's Handbook entries for NFS and SMB.
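For example, to restrict an NFS share to a single subnetwork, you could add a hypothetical entry like the following to /etc/exports and then reload the export table; the file system path, address range, and export options shown here are placeholders to adjust for your deployment:

/storagepool/projects 10.240.0.0/16(rw,sync,no_subtree_check)

exportfs -ra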
Adding file systems
When you deploy the file server, a storage pool is automatically configured. You can add file systems to the storage pool using the following command:
zfs create storagepool_name/file_system_name
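For example, assuming the pool is named storagepool, the following command creates a hypothetical file system named projects:

zfs create storagepool/projects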
Changing mount points
You can change the mount point for a file system using the following command:
zfs set mountpoint=/mount_point storagepool_name/file_system_name
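For example, to mount the hypothetical projects file system at /export/projects:

zfs set mountpoint=/export/projects storagepool/projects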
Tuning ZFS settings
The file server comes with the following optimizations to its ZFS settings:
- 75% of available RAM is allocated to the ARC. Another 10% of available RAM is allocated to the ZIL write cache, leaving 15% for the operating system and other services.
- Caching of streaming reads on the local SSD disks is enabled. This setting allows for better performance when a file is written and then read, and when a file is read multiple times.
- The write speed to the L2ARC on the local SSD disks is increased.
- The write buffer is emptied quickly, preventing it from filling up during sustained write workloads.
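To inspect the ARC limit on a running instance, note that ZFS on Linux typically exposes the configured maximum through the zfs_arc_max module parameter, where a value of 0 means the built-in default is in use:

cat /sys/module/zfs/parameters/zfs_arc_max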
If desired, you can make additional changes to the file server's ZFS configuration. However, you should avoid using local SSD disks as ZIL devices, for the following reasons:
- Potential data loss. Local SSD disks do not persist beyond the lifetime of the Compute Engine instance. They are unreplicated and have no data redundancy.
- No increase in performance. During normal operation, a ZIL device is only written to. Data is read only on startup, and only if transactions were not written to disk.
Forcing synchronous writes
By default, ZFS will respect the synchronous or asynchronous nature of writes. You can use one of the following options to force writes to be synchronous:
- Set the sync=always option on the ZFS pool or file system.
- Use the sync option when mounting your NFS share.
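For example, assuming the hypothetical pool and file system names used earlier on this page, either of the following forces synchronous writes:

zfs set sync=always storagepool/projects

mount -t nfs -o sync file_server_hostname:/export/projects /local_mount_point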
Restoring the local SSD cache
In rare cases, the local SSD disk for your file server may become unavailable. If this happens, you must remove the L2ARC device from the storage pool and then restore it. This process does not cause data loss, because data stored on an L2ARC device is already committed to disk. To remove the device, use the following command:
zpool remove zpool_name /dev/disk/by-id/google-local-ssd-X
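After the instance has a working local SSD again, you can restore the cache by re-adding the disk to the pool as a cache device; a sketch, assuming the same device path:

zpool add zpool_name cache /dev/disk/by-id/google-local-ssd-X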
Mounting the file server
NFS client libraries are required to mount the file server from another Compute Engine instance.
To install the NFS client libraries on a client Compute Engine instance running Debian or Ubuntu:
apt-get install nfs-common
For other Linux distributions, consult your distribution's documentation. To mount the file server on the client Compute Engine instance:
mount -t nfs file_server_hostname:/remote_mount_point /local_mount_point
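To mount the share automatically at boot, you can also add an entry to the client's /etc/fstab file; a minimal sketch using the same placeholder paths:

file_server_hostname:/remote_mount_point /local_mount_point nfs defaults 0 0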
Monitoring the file server
The file server provides a web interface for monitoring storage usage and overall system performance.
Setting up access to the web interface
To access the web interface, you must create an SSH tunnel to port 3000 of your Compute Engine instance. You must also find the admin password in your Compute Engine instance's metadata.
To create an SSH tunnel, use the following gcloud command:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=PROJECT --zone=ZONE INSTANCE_NAME
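For example, for a hypothetical instance named fileserver-1 in the us-central1-f zone of the project my-project:

gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=my-project --zone=us-central1-f fileserver-1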
To find the admin password, look for the field ADMIN_PASSWORD in your instance's custom metadata.
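One way to read the password from the command line, assuming the gcloud tool is configured for your project, is to describe the instance and search its metadata:

gcloud compute instances describe INSTANCE_NAME --zone=ZONE | grep -A1 ADMIN_PASSWORD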
Using the web interface
To view the web interface, open http://localhost:3000 in a browser on the machine where your SSH tunnel is running. The username is admin, and the password is the ADMIN_PASSWORD value in your Compute Engine instance's metadata, as described above.
The web interface provides two dashboards:
- Storage shows disk usage, throughput, and IOPS.
- System shows CPU, memory, and network usage.
You should change the admin password after you log in for the first time. To change the admin password:
1. Click Grafana admin, then click Global Users. If Grafana admin is not visible, click the Grafana logo in the upper left corner of the window to open the sidebar.
2. Click the Edit button next to the admin user.
3. In the New password field, type the new password, then click Update.
4. Click Exit admin to return to the list of dashboards.