This page shows you how to customize a node boot disk in your Google Kubernetes Engine (GKE) clusters and node pools.
Overview
When you create a GKE cluster or node pool, you can choose the type of persistent disk onto which the Kubernetes node filesystem is installed for each node. By default, GKE uses balanced persistent disks in version 1.24 or later. You can also specify other persistent disk types, such as standard or SSD. For more information, see Storage Options.
Balanced and SSD persistent disks have disk quotas that are different from standard persistent disk quotas. If you are switching from standard to balanced persistent disks, you might need to request quota increases. For more information, see Resource quotas.
Benefits of using an SSD boot disk
Using an SSD persistent disk as a boot disk for your nodes offers some performance benefits:
- Nodes have faster boot times.
- Binaries and files served from containers are available to the node faster. This can increase performance for I/O-intensive workloads, such as web-serving applications that host static files or short-running, I/O-intensive batch jobs.
- Files stored on the node's local media (exposed through hostPath or emptyDir volumes) can see improved I/O performance.
Specifying a node boot disk type
You can specify the boot disk type when you create a cluster or node pool.
gcloud
To create a cluster with a custom boot disk, run the following command. [DISK_TYPE] can be one of the following values:
- pd-balanced (the default in version 1.24 or later)
- pd-standard (the default in version 1.23 or earlier)
- pd-ssd
See Persistent Disk Types for more information about this choice.
gcloud container clusters create [CLUSTER_NAME] --disk-type [DISK_TYPE]
To create a node pool in an existing cluster:
gcloud container node-pools create [POOL_NAME] --disk-type [DISK_TYPE]
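For example, the following sketch creates a node pool named example-pool with SSD boot disks in an existing cluster named example-cluster (both names are illustrative; --cluster can be omitted if you have set a default cluster in your gcloud configuration):
gcloud container node-pools create example-pool --cluster example-cluster --disk-type pd-ssd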
Similarly, the following command creates a cluster, example-cluster, with the SSD persistent disk type, pd-ssd:
gcloud container clusters create example-cluster --disk-type pd-ssd
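To confirm which boot disk type a node pool uses, you can describe the node pool. This sketch reads the config.diskType field from the node pool resource; default-pool is the name GKE gives a cluster's initial node pool:
gcloud container node-pools describe default-pool --cluster example-cluster --format="value(config.diskType)"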
Console
To select the boot disk when creating your cluster with the Google Cloud console:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Configure your cluster as desired.
From the navigation pane, expand default-pool and click Nodes.
In the Boot disk type drop-down list, select a persistent disk type.
Click Create.
To create a node pool with a custom boot disk for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Click Add Node Pool.
Configure your node pool as desired.
From the navigation pane, click Nodes.
In the Boot disk type drop-down list, select a persistent disk type.
Click Create.
Protecting node boot disks
A node boot disk stores your container image, some system process logs, Pod logs, and the writable container layer by default.
If your workloads use configMap, emptyDir, or hostPath volumes, your Pods could write additional data to node boot disks. You can configure emptyDir to be backed by tmpfs to stop this. To learn how, see the Kubernetes documentation.
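As a sketch, the following Pod manifest backs its emptyDir volume with tmpfs by setting medium: Memory; the Pod, container, and volume names are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: memory-backed-scratch
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # Backed by tmpfs; writes stay in RAM rather than on the boot disk.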
Because secret, downwardAPI, and projected volumes are backed by tmpfs, Pods that use them don't write data to the node boot disk.
By default, Google Cloud encrypts customer content at rest, including your node boot disks, and GKE manages encryption for you without any action on your part.
However, when using volumes that write to the node boot disk, you might want to further control how your workload data is protected in GKE. You can do this either by preventing Pods from writing to node boot disks, or by using customer-managed encryption keys (CMEK) for node boot disks.
Preventing Pods from writing to boot disks
You can avoid writing data directly from Pods to node boot disks, and instead require them to write only to attached disks. To restrict your Pods from writing to boot disks, exclude the following volumes from the volumes field of your PodSecurityPolicy:
- configMap
- emptyDir (if not backed by tmpfs)
- hostPath
The following example PodSecurityPolicy, preventWritingToBootDisk.yaml, prevents writes to the boot disks:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-writing-to-boot-disk
spec:
  volumes:
  # Exclude configMap, emptyDir (if not backed by tmpfs), and hostPath.
  # Include all other desired volumes.
  - 'persistentVolumeClaim'
  # Required fields.
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
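To create the policy from the manifest, run the following command. Note that a PodSecurityPolicy only takes effect for Pods whose creating identity is authorized to use the policy through RBAC.
kubectl apply -f preventWritingToBootDisk.yaml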
Customer-managed encryption
If you want to control and manage encryption key rotation yourself, you can use customer-managed encryption keys (CMEK). These keys are used to encrypt the data encryption keys that encrypt your data. To learn how to use CMEK for node boot disks, see Using customer-managed encryption keys.
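As a sketch, the following command creates a node pool whose boot disks are protected with a CMEK key by using the --boot-disk-kms-key flag. The pool and cluster names are illustrative, and the key path placeholders follow the standard Cloud KMS resource format:
gcloud container node-pools create example-pool \
    --cluster example-cluster \
    --boot-disk-kms-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]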
A limitation of CMEK for node boot disks is that the setting cannot be changed after node pool creation. This means:
- If the node pool was created with customer-managed encryption, you cannot subsequently disable encryption on the boot disks.
- If the node pool was created without customer-managed encryption, you cannot subsequently enable encryption on the boot disks. However, you can create a new node pool with customer-managed encryption enabled and delete the previous node pool.
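A sketch of that replacement workflow, assuming an existing pool named old-pool (the pool names are illustrative, and the cloud.google.com/gke-nodepool node label is applied by GKE):
# Create a replacement pool whose boot disks use customer-managed encryption.
gcloud container node-pools create new-pool --cluster example-cluster --boot-disk-kms-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]
# Cordon and drain the old pool's nodes so workloads reschedule onto the new pool.
kubectl cordon -l cloud.google.com/gke-nodepool=old-pool
kubectl drain -l cloud.google.com/gke-nodepool=old-pool --ignore-daemonsets --delete-emptydir-data
# Delete the old pool.
gcloud container node-pools delete old-pool --cluster example-cluster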
Limitations
Before configuring a custom boot disk, consider the following limitations:
- The C3 and G2 machine series do not support the pd-standard node boot disk type.
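For example, a node pool on a C3 machine type needs pd-balanced or pd-ssd; the machine type and pool name below are illustrative:
gcloud container node-pools create c3-pool --cluster example-cluster --machine-type c3-standard-4 --disk-type pd-balanced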
What's next
- Learn how to specify a minimum CPU platform.
- Learn more about customer-managed encryption.
- Learn about using customer-managed encryption keys in GKE.