Configuring a custom boot disk

This page shows you how to customize a node boot disk in your Google Kubernetes Engine (GKE) clusters and node pools.

Overview

When you create a GKE cluster or node pool, you can choose the type of Persistent Disk onto which the Kubernetes node file system is installed for each node. In GKE version 1.24 and later, GKE uses balanced Persistent Disks by default. You can also specify other Persistent Disk types, such as standard or SSD. For more information, see Storage options.

Balanced and SSD Persistent Disks have disk quotas that are different from standard Persistent Disk quotas. If you are switching from standard to balanced Persistent Disks, you might need to request quota increases. For more information, see Resource quotas.
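
Balanced and SSD Persistent Disks both count toward the SSD_TOTAL_GB quota in each region. As a quick check (using us-central1 as a sample region; substitute your own region), you can list a region's current quotas, including SSD_TOTAL_GB, with the following command:

gcloud compute regions describe us-central1 --format="yaml(quotas)"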

Benefits of using an SSD boot disk

Using an SSD Persistent Disk as a boot disk for your nodes offers the following performance benefits:

  • Nodes have faster boot times.
  • Binaries and files served from containers are available to the node faster. This can increase performance for I/O-intensive workloads, such as web-serving applications that host static files or short-running, I/O-intensive batch jobs.
  • Files stored on the node's local media (exposed through hostPath or emptyDir volumes) can see improved I/O performance.

Specifying a node boot disk type

You can specify the boot disk type when you create a cluster or node pool.

gcloud

To create a cluster with a custom boot disk, run the following command.

[DISK_TYPE] can be one of the following values:

  • pd-balanced (the default in version 1.24 or later)
  • pd-standard (the default in version 1.23 or earlier)
  • pd-ssd
  • hyperdisk-balanced

See Persistent Disk Types for more information about this choice.

gcloud container clusters create [CLUSTER_NAME] --disk-type [DISK_TYPE]

To create a node pool in an existing cluster:

gcloud container node-pools create [POOL_NAME] --cluster [CLUSTER_NAME] --disk-type [DISK_TYPE]

For example, the following command creates a cluster, example-cluster, with the SSD Persistent Disk type, pd-ssd:

gcloud container clusters create example-cluster --disk-type pd-ssd
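
Similarly, assuming the example-cluster cluster from the previous example already exists, the following command creates a node pool named example-pool with SSD boot disks:

gcloud container node-pools create example-pool --cluster example-cluster --disk-type pd-ssd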

Console

To select the boot disk when creating your cluster with the Google Cloud console:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. Configure your cluster as needed.

  4. From the navigation menu, expand default-pool and click Nodes.

  5. In the Boot disk type drop-down list, select a Persistent Disk type.

  6. Click Create.

To create a node pool with a custom boot disk for an existing cluster:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Click Add Node Pool.

  4. Configure your node pool as needed.

  5. From the navigation menu, click Nodes.

  6. In the Boot disk type drop-down list, select a Persistent Disk type.

  7. Click Create.

Protecting node boot disks

By default, a node boot disk stores your container images, some system process logs, Pod logs, and the writable container layer.

If your workloads use configMap, emptyDir, or hostPath volumes, your Pods could write additional data to node boot disks. You can prevent this for emptyDir volumes by configuring them to be backed by tmpfs. To learn how, see the Kubernetes documentation. Because secret, downwardAPI, and projected volumes are backed by tmpfs, Pods that use them don't write data to the node boot disk.
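
For example, the following Pod manifest is a minimal sketch (the Pod, container, and volume names are illustrative) that backs an emptyDir volume with tmpfs by setting medium: Memory, so writes to the volume never touch the boot disk:

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir:
      # Memory-backed (tmpfs): data lives in RAM instead of on the boot disk
      medium: Memory

Keep in mind that data written to a memory-backed emptyDir volume counts against the container's memory limits.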

By default, Google Cloud encrypts customer content at rest, including your node boot disks, and GKE manages this encryption for you without any action on your part.

However, when using volumes that write to the node boot disk, you might want further control over how your workload data is protected in GKE. You can do this by either preventing Pods from writing to node boot disks or by using customer-managed encryption keys (CMEK) for node boot disks.

Prevent Pods from writing to boot disks

To prevent Pods from writing data directly to the node boot disk, use one of the following methods.

Policy Controller

Policy Controller is a feature of GKE Enterprise that lets you declare and enforce custom policies at scale across your GKE clusters in fleets.

  1. Install Policy Controller.
  2. Define a constraint that blocks the configMap, emptyDir, and hostPath volume types by using the K8sPSPVolumeTypes constraint template. In this template, the volumes parameter is an allowlist that mirrors the PodSecurityPolicy volumes field, so you block volume types by listing only the types that you want to permit, as in the example below.

The following example constraint allows only volume types that don't write data to the node boot disk, which blocks configMap, emptyDir, and hostPath for all Pods in the cluster:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPVolumeTypes
metadata:
  name: deny-boot-disk-writes
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    # volumes is an allowlist: any volume type not listed here,
    # including configMap, emptyDir, and hostPath, is rejected.
    volumes:
    - persistentVolumeClaim
    - secret
    - downwardAPI
    - projected
    - csi
    - ephemeral
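
After you save the constraint to a file (the file name here is only an example), apply it to the cluster with kubectl:

kubectl apply -f deny-boot-disk-writes.yaml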

PodSecurity admission controller

The built-in Kubernetes PodSecurity admission controller lets you enforce different levels of the Pod Security Standards in specific namespaces or across the cluster. The Restricted policy blocks hostPath volumes, which prevents Pods from mounting and writing directly to paths on the node boot disk.

To use the PodSecurity admission controller, see Apply predefined Pod-level security policies using PodSecurity.
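
For example, the following manifest is a minimal sketch (the namespace name is illustrative) that enforces the Restricted policy for every Pod created in the namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: restricted-apps
  labels:
    # Reject Pods that don't satisfy the Restricted Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted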

Customer-managed encryption

If you want to control and manage encryption key rotation yourself, you can use customer-managed encryption keys (CMEK). These keys encrypt the data encryption keys that encrypt your data. To learn how to use CMEK for node boot disks, see Using customer-managed encryption keys.

A limitation of CMEK for node boot disks is that the encryption configuration cannot be changed after you create the node pool. This means:

  • If the node pool was created with customer-managed encryption, you cannot subsequently disable encryption on the boot disks.
  • If the node pool was created without customer-managed encryption, you cannot subsequently enable encryption on the boot disks. However, you can create a new node pool with customer-managed encryption enabled and delete the previous node pool.
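
For example, assuming you have an existing Cloud KMS key (the bracketed values follow the same placeholder convention as the earlier commands), the following command creates a node pool whose boot disks are encrypted with that key:

gcloud container node-pools create [POOL_NAME] \
    --cluster [CLUSTER_NAME] \
    --boot-disk-kms-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]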

Limitations

Before configuring a custom boot disk, consider the following limitations:

What's next