This page provides an overview of local SSD support in Kubernetes and explains how to use local SSDs with Google Kubernetes Engine.
Overview
Local SSDs provide high-performance, ephemeral storage to every node in the cluster. Local SSDs offer higher throughput and lower latency than standard disks. Workloads such as local caching and processing are well suited to local SSDs.
You can create node pools with local SSDs within your cluster's machine type limits and your project's quotas.
Restrictions
Be aware of the following restrictions as you use local SSDs:
- Because local SSDs are physically attached to the node's host virtual machine instance, any data stored on them exists only on that node. Because the data stored on the disks is local, your application must be resilient to this data being unavailable.
- Data stored on local SSDs is ephemeral. A Pod that writes to a local SSD might lose access to the data stored on the disk if the Pod is rescheduled away from that node. Additionally, if the node is terminated, upgraded, or repaired, the data is erased.
- You cannot add local SSDs to an existing node pool.
Creating a cluster with local SSDs
You can create a cluster with local SSDs using the Google Cloud Console or the gcloud command-line tool.
gcloud
To create a cluster or node pool with local SSDs using gcloud, specify the --local-ssd-count flag.
To create a cluster in which the default pool uses local SSD disks, run the following command:
gcloud container clusters create [CLUSTER_NAME] \
  --num-nodes 2 \
  --local-ssd-count [NUMBER_OF_DISKS]
where [NUMBER_OF_DISKS] is the desired number of disks as an absolute number.
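For example, the following command creates a two-node cluster named example-cluster (a hypothetical name) with one local SSD attached to each node:

gcloud container clusters create example-cluster \
  --num-nodes 2 \
  --local-ssd-count 1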
To create a node pool with local SSD disks in an existing cluster:
gcloud beta container node-pools create [POOL_NAME] \
  --cluster [CLUSTER_NAME] \
  --num-nodes 1 \
  --local-ssd-count [NUMBER_OF_DISKS]
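For example, assuming the example-cluster cluster from above, the following command adds a node pool named ssd-pool (a hypothetical name) whose nodes each have two local SSDs:

gcloud beta container node-pools create ssd-pool \
  --cluster example-cluster \
  --num-nodes 1 \
  --local-ssd-count 2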
The --local-ssd-count flag specifies the number of local SSDs to create per node. The maximum number varies by machine type and region (see Local SSDs). When the nodes are created, the local SSDs are automatically formatted and mounted on the host OS under the /mnt/disks/ subdirectory, with each local SSD mounted at an ssd# directory (for example, ssd0).
Console
You can create a cluster or node pool with local SSDs from the GKE menu in Cloud Console.
To create a cluster in which the default pool uses local SSD disks, perform the following steps:
1. Visit the Google Kubernetes Engine menu in Cloud Console.
2. Click Create cluster.
3. Under Node pools, click Advanced edit.
4. For Number of nodes, enter 2.
5. For Local SSD disks, enter the desired number of SSDs as an absolute number.
6. Click Save, and then click Create.
To create a node pool with local SSD disks in an existing cluster:
1. Visit the Google Kubernetes Engine menu in Cloud Console.
2. Select the desired cluster.
3. Click Edit.
4. Under Node pools, click Add node pool.
5. For Size, enter 1.
6. For Local SSD disks (per node), enter the desired number of SSDs as an absolute number.
7. Click Save.
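After the nodes are created, you can confirm which nodes have local SSDs attached by filtering on the cloud.google.com/gke-local-ssd node label (the same label used in the hostPath example later on this page):

kubectl get nodes -l cloud.google.com/gke-local-ssd=true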
Using local SSDs
The following section explains how to use local SSDs with GKE.
You can access local SSDs with one of two methods:
hostPath volumes are recommended for:
- Workloads that use DaemonSets.
- Workloads that use dedicated node pools. All the local SSDs have to be accessed at the same path across all instances of a DaemonSet.
Local PersistentVolumes are available in beta and are recommended for:
- Workloads using StatefulSets and volumeClaimTemplates.
- Workloads that share node pools. Each local SSD can be reserved through a PersistentVolumeClaim (PVC), and specific host paths are not encoded directly in the Pod spec.
- Pods that require data gravity to the same local SSD. A Pod is always scheduled to the same node as its local PersistentVolume.
Example using hostPath volumes
If you create a node pool with three local SSDs, the host OS mounts the disks at /mnt/disks/ssd0, /mnt/disks/ssd1, and /mnt/disks/ssd2. Your Kubernetes containers access the disks using the hostPath parameter defined in your object's configuration file.
This example Pod configuration file references a local SSD, /mnt/disks/ssd0:
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  containers:
  - name: "shell"
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["echo 'hello world' > /test-ssd/test.txt && sleep 1 && cat /test-ssd/test.txt"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "/mnt/disks/ssd0"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
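To try the example, assuming you save the manifest above as test-ssd.yaml, create the Pod and read its log output:

kubectl apply -f test-ssd.yaml
kubectl logs test-ssd

The Pod writes a file to the local SSD and prints its contents, so the log should contain hello world.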
Example using local PersistentVolumes
Local SSDs can be specified as PersistentVolumes.
You can create PersistentVolumes from local SSDs by manually creating a PersistentVolume, or by running the local volume static provisioner.
Caveats
- Currently, cluster autoscaling and dynamic provisioning are not supported with this feature.
- Upgrading a GKE cluster or repairing nodes deletes the Compute Engine instances, which also deletes all data on the local SSDs.
- Do not enable node auto-upgrades or node auto-repair for clusters or node pools that use local SSDs for persistent data. You must back up your application data first, then restore the data to a new cluster or node pool.
Manually creating the PersistentVolume
You can manually create a PersistentVolume for each local SSD on each node in your cluster.
Use the nodeAffinity field in a PersistentVolume object to reference a local SSD on a specific node. For example, the following is a PersistentVolume specification for a local SSD mounted at /mnt/disks/ssd0 on node gke-test-cluster-default-pool-926ddf80-f166:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "example-local-pv"
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - "ReadWriteOnce"
  persistentVolumeReclaimPolicy: "Retain"
  storageClassName: "local-storage"
  local:
    path: "/mnt/disks/ssd0"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/hostname"
          operator: "In"
          values:
          - "gke-test-cluster-default-pool-926ddf80-f166"
If you delete the PersistentVolume, you need to manually erase the data from the disk.
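As an illustration, a PersistentVolumeClaim along the following lines (the name example-local-claim is hypothetical) could bind to the PersistentVolume above, because it requests the same storageClassName and access mode and no more than the 375Gi capacity:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "example-local-claim"
spec:
  accessModes:
  - "ReadWriteOnce"
  storageClassName: "local-storage"
  resources:
    requests:
      storage: 375Gi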
Running the local volume static provisioner
You can create PersistentVolumes for local SSDs automatically using the local volume static provisioner. The provisioner is a DaemonSet that manages the local SSDs on each node, creates and deletes the PersistentVolumes for them, and cleans up the data on the local SSD when the PersistentVolume is released.
To run the local volume static provisioner:
Download the provisioner_generated_gce_ssd_count.yaml specification from the external-storage repo, and modify the specification's namespace fields as needed. The specification includes:
- ServiceAccount for the provisioner
- ClusterRole and ClusterRoleBindings for permissions to:
  - Create and Delete PersistentVolume objects
  - Get Node objects
- ConfigMap with provisioner settings for GKE
- DaemonSet for running the provisioner
Link your Cloud Identity and Access Management account to the Kubernetes cluster-admin role. This is required to get ClusterRole creation permissions.
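For example, you can bind your account to cluster-admin with a command along these lines (the binding name cluster-admin-binding is arbitrary):

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)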
Deploy the provisioner:
kubectl apply -f provisioner_generated_gce_ssd_count.yaml
After the provisioner is running successfully, it creates a PersistentVolume object for each local SSD in the cluster.
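You can list the resulting PersistentVolumes with:

kubectl get pv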
Enabling delayed volume binding
For improved scheduling, it is recommended to also create a StorageClass with volumeBindingMode: WaitForFirstConsumer. This will delay PersistentVolumeClaim (PVC) binding until Pod scheduling, so that a local SSD is chosen from an appropriate node that can run the Pod. This enhanced scheduling behavior means that Pod CPU and memory requests, node affinity, Pod affinity and anti-affinity, and multiple PVC requests are considered along with which nodes have available local SSDs.
For example, consider the following StorageClass manifest, local_class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-scsi"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
To create a StorageClass with delayed binding, run the following command:
kubectl apply -f local_class.yaml
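To see delayed binding in action, you could create a PersistentVolumeClaim that references the local-scsi class (the claim name ssd-claim below is hypothetical); the claim stays Pending until a Pod that uses it is scheduled:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "ssd-claim"
spec:
  accessModes:
  - "ReadWriteOnce"
  storageClassName: "local-scsi"
  resources:
    requests:
      storage: 375Gi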