Custom boot disks

This page shows you how to customize a node boot disk in your Google Kubernetes Engine (GKE) clusters and node pools.

Overview

When you create a GKE cluster or node pool, you can choose the type of persistent disk onto which the Kubernetes node filesystem is installed for each node. By default, GKE uses standard persistent disks. You can also specify an SSD persistent disk.

SSD persistent disks can improve the performance of your nodes for certain workloads. However, choosing an SSD persistent disk as your node boot disk incurs additional costs. For more information, see Storage Options.

Benefits of using an SSD boot disk

Using an SSD persistent disk as a boot disk for your nodes offers some performance benefits:

  • Nodes have faster boot times.
  • Binaries and files served from containers are available to the node faster. This can increase performance for I/O-intensive workloads, such as web-serving applications that host static files or short-running, I/O-intensive batch jobs.
  • Files stored on the node's local media (exposed through hostPath or emptyDir volumes) can see improved I/O performance (see the example after this list).
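
For example, an emptyDir volume such as the one in the following Pod manifest (an illustrative sketch; the Pod and volume names are hypothetical) lives on the node's boot disk, so its reads and writes benefit from an SSD:

apiVersion: v1
kind: Pod
metadata:
  name: scratch-example # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {} # stored on the node boot disk unless backed by tmpfs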

For additional information on how SSD persistent disks perform compared to standard persistent disks, see the Block storage performance comparison.

Specifying a node boot disk type

You can specify the boot disk type, standard or SSD, when you create a cluster or node pool.

gcloud

To create a cluster with a custom boot disk, run the following command:

gcloud container clusters create [CLUSTER_NAME] --disk-type [DISK_TYPE]

[DISK_TYPE] can be either:

  • pd-standard, a standard persistent disk (the default)
  • pd-ssd, an SSD persistent disk

To create a node pool in an existing cluster:

gcloud container node-pools create [POOL_NAME] --cluster [CLUSTER_NAME] --disk-type [DISK_TYPE]

For example, the following command creates a cluster, example-cluster, with the SSD persistent disk type, pd-ssd:

gcloud container clusters create example-cluster --disk-type pd-ssd
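
Similarly, the following command creates a node pool, example-pool, with SSD boot disks in the existing cluster example-cluster (the names here are illustrative):

gcloud container node-pools create example-pool --cluster example-cluster --disk-type pd-ssd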

Console

To select the boot disk when creating your cluster with Google Cloud Console, perform the following steps:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.

  4. Configure your cluster as desired.

  5. Click Advanced edit for the default node pool. From the Boot disk type drop-down menu, select Standard persistent disk or SSD persistent disk.

  6. Click Save to close the Advanced edit overlay.

  7. Click Create. The cluster's default node pool uses the specified boot disk type.

To create a node pool with a custom boot disk:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Select the desired cluster.

  3. From the Node pools menu, click Add node pool.

  4. Configure your node pool as desired.

  5. From the Boot disk type drop-down menu, select Standard persistent disk or SSD persistent disk.

  6. Click Create.

Protecting node boot disks

By default, a node boot disk stores container images, some system process logs, Pod logs, and the writable container layer.

If your workloads use configMap, emptyDir, or hostPath volumes, your Pods can write additional data to node boot disks. You can configure emptyDir volumes to be backed by tmpfs to prevent this; to learn how, see the Kubernetes documentation. Because secret, downwardAPI, and projected volumes are backed by tmpfs, Pods that use them don't write data to the node boot disk.
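
For example, setting medium: Memory on an emptyDir volume backs it with tmpfs, so the volume's data is held in node memory instead of being written to the boot disk. The following minimal sketch assumes a hypothetical Pod and volume name:

apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-example # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory # backed by tmpfs; usage counts against the container's memory limits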

By default, Google Cloud encrypts customer content at rest including your node boot disks, and GKE manages encryption for you without any action on your part.

However, when using volumes that write to the node boot disk, you might want further control over how your workload data is protected in GKE. You can do this either by preventing Pods from writing to node boot disks, or by using customer-managed encryption keys (CMEK) for node boot disks.

Preventing Pods from writing to boot disks

You can prevent Pods from writing data directly to node boot disks and instead require them to write only to attached disks. To restrict your Pods from writing to boot disks, exclude the following volume types from the volumes field of your PodSecurityPolicy:

  • configMap
  • emptyDir (if not backed by tmpfs)
  • hostPath

The following example PodSecurityPolicy, prevent-writing-to-boot-disk.yaml, prevents Pods from writing to node boot disks:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-writing-to-boot-disk # PSP names must be lowercase RFC 1123 subdomains
spec:
  volumes: # Exclude configMap, emptyDir (if not backed by tmpfs), and hostPath.
           # Include all other desired volumes.
  - 'persistentVolumeClaim'
  # Required fields.
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
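
After saving the manifest, you can apply the policy with kubectl:

kubectl apply -f prevent-writing-to-boot-disk.yaml

Note that a PodSecurityPolicy takes effect only when the PodSecurityPolicy admission controller is enabled on the cluster and the Pod's service account is authorized to use the policy.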

Customer-managed encryption

If you want to control and manage encryption key rotation yourself, you can use customer-managed encryption keys (CMEK). These keys encrypt the data encryption keys that encrypt your data. To learn how to use CMEK for node boot disks, see Using customer-managed encryption keys.
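
For example, a cluster whose node boot disks are encrypted with a CMEK key can be created with a command like the following (a sketch that assumes the key already exists in Cloud KMS; replace the bracketed placeholders with your own values):

gcloud container clusters create [CLUSTER_NAME] \
    --disk-type [DISK_TYPE] \
    --boot-disk-kms-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]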

There are some limitations when using CMEK for node boot disks:

  • You cannot enable or disable customer-managed encryption for boot disks in existing clusters or node pools.
  • You can only use CMEK for pd-standard and pd-ssd persistent disks.

What's next