This page provides an overview of local SSD support in Kubernetes, and how to use local SSDs with Google Kubernetes Engine (GKE).
Local SSDs provide high-performance, ephemeral storage to every node in the cluster. Local SSDs provide higher throughput and lower latency than standard disks. Local SSDs work well for workloads that perform local caching and processing.
Data written to a local SSD is ephemeral and does not persist when the node is deleted, repaired, or upgraded, or when it experiences an unrecoverable error.
A local SSD is attached to only one node, and nodes themselves are ephemeral. A workload can be scheduled onto a different node at any point in time.
For more information on the benefits and limitations of local SSDs, see Local SSDs in the Compute Engine documentation.
Creating a cluster with local SSDs
You can create or update a cluster to use local SSDs using the Google Cloud Console or the gcloud command-line tool.
You can create a cluster or node pool with local SSDs from the GKE menu in Cloud Console.
To create a cluster in which the default pool uses local SSD disks:
Visit the Google Kubernetes Engine menu in Cloud Console.
Click the Create cluster button.
From the navigation pane, under Node pools, click default-pool.
For Number of nodes, enter the desired number of nodes.
From the navigation pane, under default-pool, click Nodes.
Expand the CPU Platform and GPU menu.
For Local SSD disks, enter the desired number of SSDs as an absolute number.
To create a node pool with local SSD disks in an existing cluster:
Visit the Google Kubernetes Engine menu in Cloud Console.
Select the desired cluster.
Under Node pools, click Add node pool.
For Size, enter the desired number of nodes.
For Local SSD disks (per node), enter the desired number of SSDs as an absolute number.
To create a cluster or node pool with local SSDs using gcloud, specify the --local-ssd-count flag. This flag specifies the number of local SSDs to create per node; the maximum number varies by machine type and region. When the nodes are created, the local SSDs are automatically formatted and mounted on the node's filesystem at mount points such as /mnt/disks/ssd0 for the first disk, /mnt/disks/ssd1 for the second disk, and so on.
To create a cluster in which the default pool uses local SSD disks, run the following command:
gcloud container clusters create cluster-name \
    --num-nodes 2 \
    --local-ssd-count number-of-disks
where number-of-disks is the desired number of local SSD disks per node as an absolute number.
To create a node pool with local SSD disks in an existing cluster, run the following command:
gcloud container node-pools create pool-name \
    --cluster cluster-name \
    --num-nodes 1 \
    --local-ssd-count number-of-disks
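To confirm that the local SSDs were created and mounted, you can connect to one of the nodes and list the mount points. The node name and zone in this sketch are placeholders:

gcloud compute ssh node-name --zone=zone --command='df -h /mnt/disks/ssd*'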
Formatting local SSDs for Windows Server clusters
When you use local SSDs with clusters running Windows Server node pools, you need to log in to the node and format the disk before using it. In the following example, the local SSD is formatted with the NTFS file system. You can also create directories on the disk; in this example, the directory is created under drive D.
PS C:\> Get-Disk | Where partitionstyle -eq 'raw' | Initialize-Disk -PartitionStyle MBR -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem ntfs -Confirm:$false
PS C:\> mkdir D:\test-ssd
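To verify that the volume was formatted with NTFS and assigned the expected drive letter, you can run Get-Volume on the node; drive D matches the example above:

PS C:\> Get-Volume -DriveLetter D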
Using local SSDs
The following section explains how to use local SSDs with GKE.
You can access local SSDs using one of two methods: hostPath volumes or local PersistentVolumes.
hostPath volumes are recommended for:
- Workloads that use DaemonSets.
- Workloads that use dedicated node pools. All the local SSDs have to be accessed at the same path across all instances of a DaemonSet.
Local PersistentVolumes are generally available (GA) in GKE v1.14.x and higher.
They are recommended for:
- Workloads using StatefulSets and volumeClaimTemplates.
- Workloads that share node pools. Each local SSD can be reserved through a PersistentVolumeClaim (PVC), and specific host paths are not encoded directly in the Pod spec.
- Pods that require data gravity to the same local SSD. A Pod is always scheduled to the same node as its local PersistentVolume.
Examples using hostPath volumes
The following examples show you how to use hostPath volumes for Windows and Linux.
If you create a node pool with three local SSDs, the host OS mounts the disks at /mnt/disks/ssd0, /mnt/disks/ssd1, and /mnt/disks/ssd2. Your Kubernetes containers access the disks using the hostPath parameter defined in your object's configuration file.
This example Pod configuration file references a local SSD:
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  containers:
  - name: "shell"
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["echo 'hello world' > /test-ssd/test.txt && sleep 1 && cat /test-ssd/test.txt"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "/mnt/disks/ssd0"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
The following is the equivalent Pod specification for a Windows Server node:
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  containers:
  - name: "test"
    image: "mcr.microsoft.com/windows/servercore/iis"
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "d:\\test-ssd"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
    kubernetes.io/os: windows
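To try either example, save the manifest to a file and apply it, then inspect the Pod. The file name pod.yaml is a placeholder:

kubectl apply -f pod.yaml
kubectl get pod test-ssd
# For the Linux example, the Pod's logs should show the file it wrote:
kubectl logs test-ssd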
Examples using local PersistentVolumes
Local SSDs can be specified as PersistentVolumes.
You can create PersistentVolumes from local SSDs by manually creating a PersistentVolume, or by running the local volume static provisioner.
Upgrading a GKE cluster or repairing nodes deletes the Compute Engine instances, which also deletes all data on the local SSDs.
Do not enable node auto-upgrades or node auto-repair for clusters or node pools that use local SSDs for persistent data. If you must upgrade or replace such nodes, back up your application data first, then restore the data to a new cluster or node pool.
- Local PersistentVolume objects are not automatically cleaned up when a node is deleted, upgraded, repaired, or scaled down. We recommend you periodically scan and delete stale Local PersistentVolume objects associated with deleted nodes.
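One way to find stale objects is to list each local PersistentVolume together with the node it is pinned to and compare that list against the nodes that still exist. The following sketch assumes the kubernetes.io/hostname affinity key used in the examples on this page; stale-pv-name is a placeholder:

# Print each PersistentVolume name and the node encoded in its nodeAffinity.
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}{"\n"}{end}'

# Compare the node column against `kubectl get nodes`, then delete any
# PersistentVolume whose node no longer exists:
kubectl delete pv stale-pv-name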
Manually creating the PersistentVolume
You can manually create a PersistentVolume for each local SSD on each node in your cluster. Use the nodeAffinity field in a PersistentVolume object to reference a local SSD on a specific node. For example, the following is a Linux PersistentVolume specification for a local SSD mounted at /mnt/disks/ssd0 on node gke-test-cluster-default-pool-926ddf80-f166:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "example-local-pv"
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - "ReadWriteOnce"
  persistentVolumeReclaimPolicy: "Retain"
  storageClassName: "local-storage"
  local:
    path: "/mnt/disks/ssd0"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/hostname"
          operator: "In"
          values:
          - "gke-test-cluster-default-pool-926ddf80-f166"
The following is the equivalent PersistentVolume specification for a local SSD on a Windows Server node:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-local-pv
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: "d:\\test-ssd"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - gke-gke-cluster-windows-dds-2263bc7c-wq6m
Create a PersistentVolumeClaim to access the volume:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ssd-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 37Gi
You can now create a Pod to access the disk. To ensure your Pods are correctly scheduled onto Windows Server nodes, you must add a node selector to your Pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  containers:
  - name: "test"
    image: "mcr.microsoft.com/windows/servercore/iis"
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    persistentVolumeClaim:
      claimName: ssd-local-claim
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
  tolerations:
  - key: "node.kubernetes.io/os"
    value: "windows"
    operator: "Equal"
    effect: "NoSchedule"
If you delete the PersistentVolume, you need to manually erase the data from the disk.
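For a Linux node, a minimal sketch of erasing the data is to connect to the node and remove everything under the mount point from the earlier examples; node-name and zone are placeholders:

gcloud compute ssh node-name --zone=zone --command='sudo rm -rf /mnt/disks/ssd0/*'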
Running the local volume static provisioner
You can create PersistentVolumes for local SSDs automatically using the local volume static provisioner. The provisioner is a DaemonSet that manages the local SSDs on each node, creates and deletes the PersistentVolumes for them, and cleans up the data on the local SSD when the PersistentVolume is released.
To run the local volume static provisioner:
Download the gke.yaml specification from the sig-storage-local-static-provisioner repo, and modify the specification's namespace fields as needed.
The specification includes:
- ServiceAccount for the provisioner
- ClusterRole and ClusterRoleBindings for permissions to:
- Create and Delete PersistentVolume objects
- Get Node objects
- ConfigMap with provisioner settings for GKE
- DaemonSet for running the provisioner
Deploy the provisioner:
kubectl apply -f gke.yaml
After the provisioner is running successfully, it creates a PersistentVolume object for each local SSD in the cluster.
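To confirm, you can list the PersistentVolumes; one object should appear per local SSD per node, using the storage class configured in gke.yaml:

kubectl get pv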
Enabling delayed volume binding
For improved scheduling, it is recommended to also create a StorageClass with
volumeBindingMode: WaitForFirstConsumer. This delays PersistentVolumeClaim (PVC) binding
until Pod scheduling, so that a local SSD is chosen from an appropriate node
that can run the Pod. This enhanced scheduling behavior considers Pod CPU and
memory requests, node affinity, Pod affinity and anti-affinity, and multiple PVC
requests, along with which nodes have available local SSDs, when selecting a
node for a runnable Pod.
This example uses delayed volume binding mode:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-scsi"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
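As a usage sketch, a PersistentVolumeClaim that references this StorageClass stays unbound until a Pod using the claim is scheduled; the claim name and requested size below are illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-scsi-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-scsi
  resources:
    requests:
      storage: 375Gi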
When using clusters with Windows Server node pools, you should create a StorageClass, because the default StorageClass uses ext4 as the file system type, which only works for Linux containers.
The following StorageClass, named storageclass-name, uses a Compute Engine persistent disk with NTFS as the file system type. You can use this StorageClass when working with Windows clusters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-name
parameters:
  type: pd-standard
  fstype: NTFS
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
To create a StorageClass with delayed binding, save the YAML manifest to a local file and apply it to the cluster using the following command:
kubectl apply -f filename