Local SSDs

This page provides an overview of local SSD support in Kubernetes, and how to use local SSDs with Google Kubernetes Engine (GKE).

Overview

Local SSDs provide high-performance, ephemeral storage to every node in the cluster. Local SSDs offer higher throughput and lower latency than standard disks, and are well suited to workloads that use them for local caching and processing.

You can create node pools with local SSDs within your cluster's machine type limits and your project's quotas.

  • Data written to a local SSD is ephemeral and does not persist when the node is deleted, repaired, or upgraded, or when the node experiences an unrecoverable error.

  • A local SSD is attached to only one node, and nodes themselves are ephemeral. A workload can be scheduled onto a different node at any point in time.

For more information on the benefits and limitations of local SSDs, see Local SSDs in the Compute Engine documentation.

Creating a cluster with local SSDs

You can create or update a cluster to use local SSDs by using the Google Cloud Console or the gcloud command-line tool.

Console

You can create a cluster or node pool with local SSDs from the GKE menu in Cloud Console.

To create a cluster in which the default pool uses local SSD disks:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. From the navigation pane, under Node pools, click default-pool.

  4. For Number of nodes, enter 2.

  5. From the navigation pane, under default-pool, click Nodes.

  6. Expand the CPU Platform and GPU menu.

  7. For Local SSD disks, enter the desired number of SSDs as an absolute number.

  8. Click Create.

To create a node pool with local SSD disks in an existing cluster:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Select the desired cluster.

  3. Click Edit.

  4. Under Node pools, click Add node pool.

  5. For Size, enter 1.

  6. For Local SSD disks (per node), enter the desired number of SSDs as an absolute number.

  7. Click Save.

gcloud

To create a cluster or node pool with local SSDs using gcloud, specify the --local-ssd-count flag.

To create a cluster in which the default pool uses local SSD disks, run the following command:

gcloud container clusters create cluster-name \
  --num-nodes 2 \
  --local-ssd-count number-of-disks

where number-of-disks is the number of local SSDs to attach to each node, as an absolute number.

To create a node pool with local SSD disks in an existing cluster:

gcloud container node-pools create pool-name \
  --cluster cluster-name \
  --num-nodes 1 \
  --local-ssd-count number-of-disks

The --local-ssd-count flag specifies the number of local SSDs to create per node. The maximum number varies by machine type and region. When nodes are created, the local SSDs are automatically formatted and mounted on the node's filesystem at mount points such as /mnt/disks/ssd0, /mnt/disks/ssd1, and so on.
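
To verify that the disks are mounted, you can connect to one of the node VMs over SSH and list the mounts. This is an optional check; node-name and compute-zone below are placeholders for one of your cluster's node instance names and its zone:

# node-name and compute-zone are placeholders for your own values.
gcloud compute ssh node-name \
  --zone compute-zone \
  --command "df -h | grep /mnt/disks/ssd"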

Using local SSDs

The following sections explain how to use local SSDs with GKE.

You can access local SSDs using one of two methods:

  • hostPath volumes are recommended for:

    • Workloads that use DaemonSets. All the local SSDs must be accessible at the same path across all instances of a DaemonSet.
    • Workloads that use dedicated node pools.
  • Local PersistentVolumes are generally available (GA) in GKE v1.14.x and higher.

    They are recommended for:

    • Workloads using StatefulSets and volumeClaimTemplates.
    • Workloads that share node pools. Each local SSD can be reserved through a PersistentVolumeClaim (PVC), and specific host paths are not encoded directly in the Pod spec.
    • Pods that require data gravity to the same local SSD. A Pod is always scheduled to the same node as its local PersistentVolume.

Example using hostPath volumes

If you create a node pool with three local SSDs, the host OS mounts the disks at /mnt/disks/ssd0, /mnt/disks/ssd1, and /mnt/disks/ssd2. Your Kubernetes containers access the disks through the hostPath parameter defined in the object's configuration file.

The following example Pod configuration file references the local SSD mounted at /mnt/disks/ssd0:

apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  containers:
  - name: "shell"
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["echo 'hello world' > /test-ssd/test.txt && sleep 1 && cat /test-ssd/test.txt"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "/mnt/disks/ssd0"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
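
To try this example, you could save the manifest to a file, apply it, and read the Pod's output. The filename test-ssd.yaml is only an example:

# test-ssd.yaml is a placeholder filename for the manifest above.
kubectl apply -f test-ssd.yaml
# The container writes and reads /test-ssd/test.txt, so the log should show "hello world".
kubectl logs test-ssd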

Example using local PersistentVolumes

Local SSDs can be specified as PersistentVolumes.

You can create PersistentVolumes from local SSDs by manually creating a PersistentVolume, or by running the local volume static provisioner.

Caveats

Currently, cluster autoscaling and dynamic provisioning are not supported with local PersistentVolumes.

Upgrading a GKE cluster or repairing nodes deletes the Compute Engine instances, which also deletes all data on the local SSDs.

Do not enable node auto-upgrade or node auto-repair for clusters or node pools that use local SSDs for persistent data. If you need to keep the data, back up your application data first, then restore it to a new cluster or node pool.

Manually creating the PersistentVolume

You can manually create a PersistentVolume for each local SSD on each node in your cluster.

Use the nodeAffinity field in a PersistentVolume object to reference a local SSD on a specific node. For example, the following is a PersistentVolume specification for a local SSD mounted at /mnt/disks/ssd0 on node gke-test-cluster-default-pool-926ddf80-f166:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: "example-local-pv"
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - "ReadWriteOnce"
  persistentVolumeReclaimPolicy: "Retain"
  storageClassName: "local-storage"
  local:
    path: "/mnt/disks/ssd0"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/hostname"
          operator: "In"
          values: "gke-test-cluster-default-pool-926ddf80-f166"

If you delete the PersistentVolume, you need to manually erase the data from the disk.
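
To consume the disk from a workload, you can create a PersistentVolumeClaim that matches the PersistentVolume's storage class and capacity, and then reference the claim from a Pod. The following is a minimal sketch; the claim name example-local-claim is illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Hypothetical claim name; reference it from a Pod's volumes section.
  name: "example-local-claim"
spec:
  accessModes:
  - "ReadWriteOnce"
  # Must match the storageClassName of the manually created PersistentVolume.
  storageClassName: "local-storage"
  resources:
    requests:
      storage: 375Gi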

Running the local volume static provisioner

You can create PersistentVolumes for local SSDs automatically using the local volume static provisioner. The provisioner is a DaemonSet that manages the local SSDs on each node, creates and deletes the PersistentVolumes for them, and cleans up the data on the local SSD when the PersistentVolume is released.

To run the local volume static provisioner:

  1. Download the gke.yaml specification from the sig-storage-local-static-provisioner repo, and modify the specification's namespace fields as needed.

    The specification includes:

    • ServiceAccount for the provisioner
    • ClusterRole and ClusterRoleBindings for permissions to:
      • Create and Delete PersistentVolume objects
      • Get Node objects
    • ConfigMap with provisioner settings for GKE
    • DaemonSet for running the provisioner
  2. Deploy the provisioner:

    kubectl apply -f gke.yaml
    

After the provisioner is running successfully, it creates a PersistentVolume object for each local SSD in the cluster.
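
To confirm, you can list the PersistentVolumes in the cluster and check that one appears for each local SSD:

kubectl get pv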

Enabling delayed volume binding

For improved scheduling, we recommend also creating a StorageClass with volumeBindingMode: WaitForFirstConsumer. This delays PersistentVolumeClaim (PVC) binding until Pod scheduling, so that a local SSD is chosen from an appropriate node that can run the Pod. This enhanced scheduling behavior considers Pod CPU and memory requests, node affinity, Pod affinity and anti-affinity, and multiple PVC requests, along with which nodes have available local SSDs, when selecting a node for a runnable Pod.

This example uses delayed volume binding mode:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-scsi"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"

To create a StorageClass with delayed binding, save the YAML manifest to a local file and apply it to the cluster using the following command:

kubectl apply -f filename
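
As a sketch of how a workload might consume these volumes (assuming the static provisioner has created PersistentVolumes with the local-scsi class, and that a headless Service matching serviceName exists), the following hypothetical StatefulSet requests one local SSD per replica through volumeClaimTemplates; the names and image are illustrative only:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  # Hypothetical StatefulSet name used for illustration.
  name: "example-local-ssd-app"
spec:
  serviceName: "example-local-ssd-app"
  replicas: 2
  selector:
    matchLabels:
      app: "example-local-ssd-app"
  template:
    metadata:
      labels:
        app: "example-local-ssd-app"
    spec:
      containers:
      - name: "app"
        image: "ubuntu:14.04"
        command: ["/bin/sh", "-c", "sleep 3600"]
        volumeMounts:
        - mountPath: "/scratch"
          name: "scratch"
  volumeClaimTemplates:
  - metadata:
      name: "scratch"
    spec:
      accessModes:
      - "ReadWriteOnce"
      # Matches the StorageClass with delayed volume binding defined above.
      storageClassName: "local-scsi"
      resources:
        requests:
          storage: 375Gi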

What's next