This page describes the node images available for Google Kubernetes Engine (GKE) nodes.
GKE Autopilot nodes always use Container-Optimized OS with containerd (cos_containerd), which is the recommended node operating system. If you use GKE Standard, you can choose the operating system image that runs on each node during cluster or node pool creation. You can also upgrade an existing Standard cluster to use a different node image. For instructions on how to set the node image, see Specifying a node image.
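For example, on GKE Standard you can select the node image when creating a cluster with the gcloud CLI. This is a minimal sketch; CLUSTER_NAME is a placeholder, and the --image-type value can be any of the image identifiers described on this page:
gcloud container clusters create CLUSTER_NAME --image-type=COS_CONTAINERD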
Available node images
GKE offers the following node image options per OS for your cluster:
OS | Node images |
---|---|
Container-Optimized OS | cos_containerd (with containerd), cos (with Docker) |
Ubuntu | ubuntu_containerd (with containerd), ubuntu (with Docker) |
Windows Server | windows_ltsc_containerd, windows_sac_containerd (with containerd); windows_ltsc, windows_sac (with Docker) |
Container-Optimized OS
The Container-Optimized OS node images from Google are based on a recent version of the Linux kernel and are optimized to enhance node security. Container-Optimized OS images are backed by a team at Google that can quickly patch images for security and iterate on features. The Container-Optimized OS images provide better support, security, and stability than other images.
For information about the image project and family, see Node image source projects.
Container-Optimized OS variants
Two container runtimes are offered with Container-Optimized OS. The images are the same, other than the choice of container runtime.
- Container-Optimized OS with containerd (cos_containerd): The cos_containerd image uses containerd as the container runtime directly integrated with Kubernetes. GKE Autopilot clusters always use this image. For more information, see Containerd node images.
- Container-Optimized OS with Docker (cos): The cos image uses the Docker container runtime.
Ubuntu
The Ubuntu node images have been validated against GKE's node image requirements. You should use the Ubuntu node images if your nodes require support for XFS, CephFS, or Debian packages.
For information about the image project and family, see Feature support by operating system.
Ubuntu variants
Two container runtimes are offered with Ubuntu. The images are the same, other than the choice of container runtime.
- Ubuntu with containerd (ubuntu_containerd): The ubuntu_containerd image uses containerd as the container runtime. For more information, see Containerd node images.
- Ubuntu with Docker (ubuntu): The ubuntu image uses Docker as the container runtime.
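As a hedged example, you could create a Standard node pool that uses one of these Ubuntu variants with the gcloud CLI; POOL_NAME and CLUSTER_NAME are placeholders:
gcloud container node-pools create POOL_NAME --cluster=CLUSTER_NAME --image-type=UBUNTU_CONTAINERD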
Windows Server
When creating a cluster using Windows Server node pools, you can use a Windows Server Semi-Annual Channel (SAC) or Windows Server Long-Term Servicing Channel (LTSC) node image. All Windows node images are Windows Server Datacenter Core images. A single cluster can have multiple Windows Server node pools using different Windows Server versions, but each individual node pool can only use one Windows Server version. For more information, see Choose your Windows node image.
Two container runtimes are offered with Windows Server LTSC and SAC node images: Docker and containerd. The images are the same, other than the choice of container runtime.
Containerd runtime images (available in GKE version 1.21 and later):
- Windows Server LTSC with containerd (windows_ltsc_containerd): The windows_ltsc_containerd image uses containerd as the container runtime. Currently, this image type maps to two node images: Windows Server 2022 and Windows Server 2019. You can create Windows Server LTSC2022 node pools through the CLI with the windows-os-version flag, as shown in the example after these lists. For more information about creating Windows Server 2022 node pools, see Create Windows Node Pools. For more information about containerd node images, see Containerd node images.
- Windows Server SAC with containerd (windows_sac_containerd): The windows_sac_containerd image uses containerd as the container runtime. For more information, see Containerd node images.
Docker runtime images (available in GKE version 1.16 and later):
- Windows Server LTSC with Docker (windows_ltsc): The windows_ltsc image uses Docker as the container runtime.
- Windows Server SAC with Docker (windows_sac): The windows_sac image uses Docker as the container runtime.
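The following sketch shows how a Windows Server 2022 node pool might be created using the windows-os-version flag mentioned above; POOL_NAME and CLUSTER_NAME are placeholders, and flag availability depends on your gcloud and GKE versions:
gcloud container node-pools create POOL_NAME --cluster=CLUSTER_NAME --image-type=WINDOWS_LTSC_CONTAINERD --windows-os-version=ltsc2022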
For information about the image project and family, see Feature support by operating system.
Linux node image comparison
The following sections compare the operational aspects of the Container-Optimized OS and Ubuntu node images, including:
- Software package management
- System initialization
- Logs collection
- File system layout
- Storage driver support
Software package manager
The cos and cos_containerd node images use a minimal root file system with built-in support for the Docker (containerd) container runtime, which also serves as the software package manager for installing software on the host. The Ubuntu image uses the APT package manager.
Managing software on Container-Optimized OS
The Container-Optimized OS image does not provide package management software such as apt-get. You can't install arbitrary software onto the nodes using conventional mechanisms. Instead, create a container image that contains the software you need.
On Standard clusters, for debugging purposes only, Container-Optimized OS includes the CoreOS Toolbox for installing and running common debugging tools such as ping, psmisc, or pstree.
For more information about debugging Container-Optimized OS nodes, see the
Container-Optimized OS how-to guides.
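For example, a short debugging session with Toolbox might look like the following. This is a sketch only; Toolbox behavior and package names can vary between Container-Optimized OS releases:
/usr/bin/toolbox
apt-get update && apt-get install -y psmisc
pstree -p
The first command starts the Toolbox container on the node; the remaining commands run inside Toolbox rather than on the host root filesystem.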
Managing software on Ubuntu
The Ubuntu image uses the APT package manager. You can use the apt-get command to install packages on these images. For example, to install ceph packages:
sudo apt-get update
sudo apt-get install ceph
System initialization
Both the Container-Optimized OS and Ubuntu node images use systemd to manage system resources and services during the system initialization process. Both node images use systemd service files to define services on the node, and systemd targets to group boot targets via dependencies.
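For example, you can use standard systemd tooling to inspect services on a node; the kubelet service shown here runs as a systemd service on GKE nodes:
sudo systemctl status kubelet
sudo systemctl list-dependencies multi-user.target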
Logs collection
The Container-Optimized OS and Ubuntu node images use systemd-journald
for collecting system-wide logs.
Viewing logs on Container-Optimized OS and Ubuntu
To view logs on a node with the Container-Optimized OS or Ubuntu node image, you
must use the journalctl
command. For example, to view containerd daemon logs:
sudo journalctl -u containerd
To view kubelet logs:
sudo journalctl -u kubelet
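You can also narrow the output by time, for example to view only recent kubelet log entries:
sudo journalctl -u kubelet --since "1 hour ago"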
File system layout
The Ubuntu node image uses the standard Linux file system layout.
The Container-Optimized OS node image file system layout is optimized to enhance node security. The boot disk space is split into three types of partitions:
- Root partition, which is mounted as read-only
- Stateful partitions, which are writable and stateful
- Stateless partitions, which are writable but the contents do not persist across reboots
When using Container-Optimized OS, be aware of the partitioning if you run your own services that have certain expectations about the filesystem layout outside of containers.
Working with the Container-Optimized OS file system
The following is a list of paths in the Container-Optimized OS node image file system, along with their properties and recommended usage:
Path | Properties | Purpose |
---|---|---|
/ | Read-only, executable | The root filesystem is mounted as read-only to maintain integrity. The kernel verifies the integrity of the root filesystem during boot, and refuses to boot in case of errors. |
/home, /var | Writable, non-executable, stateful | These paths are meant for storing data that persists for the lifetime of the boot disk. They are mounted from /mnt/stateful_partition. |
/var/lib/google, /var/lib/docker, /var/lib/toolbox | Writable, executable, stateful | These paths are working directories for Compute Engine packages (for example, the accounts manager service), Docker, and Toolbox respectively. |
/var/lib/cloud | Writable, executable, stateless | This path is the working directory of the cloud-init package. |
/etc | Writable, non-executable, stateless | Typically holds your configuration (for example, systemd services defined via cloud-init). It's a good idea to capture the desired state of your instances in cloud-init, as cloud-init is applied when an instance is newly created as well as when an instance is restarted. |
/tmp | Writable, non-executable, stateless | Typically used as scratch space and should not be used to store persistent data. |
/mnt/disks | Writable, executable, stateless | You can mount persistent disks at directories under /mnt/disks. |
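To see these properties on a running Container-Optimized OS node, you can inspect the mount options directly. This is a sketch that assumes the findmnt utility is present on the node:
findmnt -no TARGET,OPTIONS /
findmnt -no TARGET,OPTIONS /mnt/stateful_partition
The first command should show ro among the options for the read-only root filesystem, while the second shows the writable stateful partition.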
Storage driver support
Each node image differs in the kinds of storage plugins it supports. The following terms apply when describing a node image's support for a particular storage driver:
- Yes - Fully Tested/Supported: This storage plugin is fully supported and tested with the specified node image.
- Yes - Limited Testing: This storage plugin works with the specified node image, but has been tested only in a limited fashion; you might encounter unexpected behavior. For Container-Optimized OS, these plugins will eventually be fully tested and supported.
- Unsupported: This storage plugin has not been tested or used with the specified node image and GKE cannot provide any guarantee of functionality. There are no plans to test this storage plugin.
- No: This storage plugin does not work with the specified node image due to a limitation inherent to the node OS or Google Cloud.
The following matrix describes how each GKE node image supports some common storage plugins.
Volume Type | Does it work on Container-Optimized OS (cos)? | Does it work on Ubuntu? |
---|---|---|
Compute Engine Persistent Disk (EXT4 or XFS) | Yes - Fully Tested/Supported (XFS is supported in cos-85 and later only; see the GKE release notes.) | Yes - Fully Tested/Supported |
NFSv3 | Yes - Fully Tested/Supported | Yes - Fully Tested/Supported |
NFSv4 | Yes - Fully Tested/Supported | Yes - Fully Tested/Supported |
CephFS | No | Yes - Limited Testing (The driver is not installed by default. You must install the ceph client, preferably via a DaemonSet.) |
Cinder | No | No |
Fibre Channel | No | No |
Flocker | Unsupported | Unsupported |
iSCSI | No | No |
RBD | No | No |
Node VM modifications
Modifications on the boot disk of a node VM do not persist across node re-creations. Nodes are re-created during manual upgrade, auto-upgrade, auto-repair, and auto-scaling. In addition, nodes are re-created when you enable a feature that requires node re-creation, such as GKE Sandbox, intranode visibility, and shielded nodes.
To preserve modifications across node re-creation, use a DaemonSet.
Modifying critical software provided by a node image, such as the kernel or the container runtime (whether containerd or docker), is not recommended. Node images are tested extensively, and modifying critical software provided in the node image puts the node into an unknown and untestable state.
GKE Autopilot nodes don't allow node software
modification.
Map Container-Optimized OS node image versions to GKE patch versions
GKE publishes a JSON mapping of GKE patch versions to Container-Optimized OS node image versions.
You can use this mapping to upgrade to a specific version of GKE to get a specific image version. For example, if your cluster needs a certain feature or fix from an image version, you can find the mapping, and upgrade your cluster to a specific GKE version to get the Container-Optimized OS image version with the changes. For details on Container-Optimized OS image releases, see the Container-Optimized OS release notes.
This list is updated approximately weekly. To check how fresh the information is, refer to the creation_time field in the JSON file.
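For example, you could check the freshness of the mapping with a quick command. This is a sketch; MAPPING_FILE_URL is a placeholder for the published JSON file's URL, and it assumes curl and jq are installed and that creation_time is a top-level field:
curl -s MAPPING_FILE_URL | jq -r '.creation_time'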
Node image release notes
Container-Optimized OS
Google provides comprehensive documentation for Container-Optimized OS.
Ubuntu
Periodically, Google updates the Ubuntu images that are available for use on your cluster's nodes. Refer to the GKE release notes for information about these updates, including a link to a manifest listing the packages that are installed by default.
Known issues
Random connection resets on GKE nodes using Container-Optimized OS with Docker runtime
GKE nodes that use Container-Optimized OS with Docker (cos) may experience random TCP connection resets when two Pods on the same node communicate using a Kubernetes ClusterIP Service.
The following GKE versions are impacted:
- 1.20.5-gke.100 or later
To work around the issue, use one of the following options:
Node image source projects
The available node images for GKE clusters are contained in the following source projects:
- Container-Optimized OS images: gke-node-images
- Ubuntu images: ubuntu-os-gke-cloud
- Windows Server images: gke-windows-node-images
In addition to the source projects listed above, GKE also uses the following source projects, which are reserved for the exclusive use of the GKE team:
- ubuntu-os-gke-cloud-private
- ubuntu-os-gke-cloud-devel
You might need to know the source project names when setting up highly secure clusters. The listed source projects are subject to change.
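For example, you might list the node images available in one of these source projects with the gcloud CLI. This is a sketch; it assumes the project allows your account to list its images:
gcloud compute images list --project=gke-node-images --no-standard-images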