About the Docker node image deprecation


This page gives you information about the containerd container runtime, support for Docker in Google Kubernetes Engine (GKE), and an overview of why you must migrate to node images that use containerd. For instructions on how to migrate to a containerd node image, refer to Migrate from Docker to containerd node images.

Overview

Kubernetes nodes use the container runtime to launch, manage, and stop containers running in Pods. The Kubernetes project is removing built-in support for the Docker runtime in Kubernetes version 1.24 and later. To achieve this, Kubernetes is removing a component called dockershim, which allows Docker to communicate with Kubernetes components like the kubelet.

Containerd, the new default runtime, is an industry-standard container runtime that's supported by Kubernetes and used by many other projects. The containerd runtime provides the layering abstraction that allows for the implementation of features like gVisor and Image streaming, which extend GKE functionality. The runtime is also considered more resource-efficient and secure than the Docker runtime.

Because of this change, GKE does not support node images that use Docker as the runtime in GKE version 1.24 and later. A cluster is impacted if any of its node pools uses a Docker-based node image, or if the cluster uses node auto-provisioning with a Docker-based default node image type.

Before July 31, 2023, the GKE version 1.23 end of life date, GKE pauses automatic upgrades and prevents manual upgrades to version 1.24. To upgrade your cluster's control plane to GKE version 1.24 and later before this date, you must update your node auto-provisioning configuration and all nodes to the containerd runtime. To upgrade a node pool, you must migrate to a node image that uses the containerd runtime.

After 1.23 has reached end of life, GKE unblocks manual control plane upgrades to 1.24 for clusters that have not yet completed migration, and begins gradually upgrading clusters for security and compatibility purposes. To learn more about how GKE upgrades your clusters to 1.24, and what we recommend you do to migrate your clusters from Docker node images, see Automatic migration when version 1.23 reaches end of life.

Docker and containerd node images

Containerd has been the default runtime for all new GKE nodes since version 1.19 on Linux and version 1.21 on Windows. However, GKE Standard clusters have also continued to support node images that use Docker as the runtime. The following table lists the Docker-based node images that won't be supported in GKE version 1.24 and later, and their containerd equivalents:

Docker runtime                                 | containerd runtime
Container-Optimized OS with Docker (cos)       | Container-Optimized OS with containerd (cos_containerd)
Ubuntu with Docker (ubuntu)                    | Ubuntu with containerd (ubuntu_containerd)
Windows Server LTSC with Docker (windows_ltsc) | Windows Server LTSC with containerd (windows_ltsc_containerd)
Windows Server SAC with Docker (windows_sac)   | Windows Server SAC with containerd (windows_sac_containerd)
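
To find out which image type each of your node pools uses, you can run a command similar to the following, where CLUSTER_NAME is a placeholder for your cluster's name (add --zone or --region if your gcloud configuration doesn't set a default location):

  gcloud container node-pools list \
      --cluster=CLUSTER_NAME \
      --format="table(name, config.imageType)"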

Timeline and milestones

In GKE version 1.23, you can no longer do the following:

  • Create new clusters that use Docker-based node images.
  • Add new node pools with Docker-based node images to an existing cluster.
  • Enable node auto-provisioning with the --autoprovisioning-image-type flag set to Docker-based node images.
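
The containerd-based image types listed in the previous table remain available. For example, commands similar to the following create a cluster and a node pool that use Container-Optimized OS with containerd; CLUSTER_NAME and POOL_NAME are placeholder names:

  gcloud container clusters create CLUSTER_NAME \
      --image-type=cos_containerd

  gcloud container node-pools create POOL_NAME \
      --cluster=CLUSTER_NAME \
      --image-type=cos_containerd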

If you're upgrading to GKE version 1.23, you can continue using the following:

  • Existing node pools with Docker-based node images created before the upgrade.
  • Cluster autoscaler on node pools with Docker node images.
  • Node auto-provisioning with the --autoprovisioning-image-type flag set to Docker-based node images, if enabled before upgrading.
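
To prepare for version 1.24, you can point node auto-provisioning at a containerd-based image type ahead of time. As a sketch, assuming node auto-provisioning is already enabled on the cluster and CLUSTER_NAME is a placeholder, a command similar to the following sets the default image type to Container-Optimized OS with containerd (the exact flags can vary by gcloud CLI version):

  gcloud container clusters update CLUSTER_NAME \
      --enable-autoprovisioning \
      --autoprovisioning-image-type=cos_containerd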

In GKE version 1.24, you can no longer do the following:

  • If the control plane runs version 1.24, you cannot use node auto-provisioning with the --autoprovisioning-image-type flag set to Docker-based node images.
  • If the node pool runs version 1.24, the nodes cannot use Docker-based node images.

The following table provides a summary of the changes to expect when you interact with upcoming GKE versions:

Action | GKE version 1.23 | GKE version 1.24
Create new clusters with Docker | No | No
Create new node pools with Docker | No | No
Enable node auto-provisioning with Docker | No | No
Upgrade from previous version with existing Docker node auto-provisioning configuration | Yes | No
Use Docker-based node images | Yes | No

Automatic migration when version 1.23 reaches end of life

If you don't upgrade to version 1.24 and migrate to containerd node images before version 1.23 reaches end of life on July 31, 2023, GKE does the following over time:

  1. If your cluster has node auto-provisioning with a Docker-based default node image type, GKE updates the configuration to use the equivalent node image that uses the containerd runtime. You can't block this operation with a maintenance exclusion. To verify that there's no adverse effect on your workloads, we recommend that you manually update your node auto-provisioning default image type to a containerd-based image before GKE automatically updates the configuration.

  2. GKE upgrades your cluster's control plane to version 1.24.

  3. GKE migrates any node pools that still use Docker to containerd node images starting September 1, 2023. We recommend that you manually migrate your node images before this date. Alternatively, you can request that GKE initiate an operation to migrate your cluster to use containerd images. To make this request, contact Cloud Customer Care.

    To temporarily block the automatic migration, upgrade your cluster to version 1.24 or later and configure a maintenance exclusion. For more information on this maintenance exclusion, see Temporarily delay the automatic migration to containerd node images.

  4. GKE upgrades node pools on version 1.23 to version 1.24, as it does with any unsupported version, to keep clusters healthy and aligned with the open source version skew policy. To learn more, see the GKE minor version life cycle. You can temporarily block this upgrade with a maintenance exclusion.

Temporarily delay the automatic migration to containerd node images

After your cluster's control plane has been upgraded to 1.24 or later, you can configure a maintenance exclusion to temporarily prevent the nodes from being migrated until February 29, 2024, when version 1.25 reaches end of life. To be eligible for this maintenance exclusion, your cluster must be enrolled in a release channel. Configure the maintenance exclusion with the "No minor or node upgrades" scope, and set the --add-maintenance-exclusion-end flag to 2024-02-29T00:00:00Z or earlier. We recommend that you unblock your migration as soon as possible and allow the nodes to be upgraded to version 1.24. Minor versions that have reached end of life no longer receive security patches and bug fixes.
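
As a sketch, a maintenance exclusion of this kind might look like the following, where CLUSTER_NAME, the exclusion name, and the start time are placeholder values that you choose:

  gcloud container clusters update CLUSTER_NAME \
      --add-maintenance-exclusion-name=block-containerd-migration \
      --add-maintenance-exclusion-start=2023-08-01T00:00:00Z \
      --add-maintenance-exclusion-end=2024-02-29T00:00:00Z \
      --add-maintenance-exclusion-scope=no_minor_or_node_upgrades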

Migrate from Docker to containerd node images

See Migrate from Docker to containerd node images to identify clusters and node pools using Docker-based node images and migrate those node pools to containerd node images.
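
As a rough sketch, migrating a Standard node pool means changing its image type to a containerd-based equivalent, which recreates the nodes in that pool. A command along these lines performs the change, where CLUSTER_NAME and POOL_NAME are placeholders; the migration guide documents the exact steps and options:

  gcloud container clusters upgrade CLUSTER_NAME \
      --node-pool=POOL_NAME \
      --image-type=cos_containerd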

Under the GKE shared responsibility model, it is the customer's responsibility to maintain the health of their workloads, and it is Google's responsibility to ensure that the cluster remains functional, which includes running a supported version. We strongly recommend that you test your workloads and migrate your cluster before GKE automatically does so.

Before version 1.23 reaches end of life, GKE prevents automatic and manual upgrades to 1.24 for clusters that have node pools that use Docker node images. After you migrate your cluster to use only containerd images, automatic upgrades can resume within a day, according to the GKE release schedule, or you can manually upgrade your cluster.

After version 1.23 reaches end of life, GKE unblocks automatic and manual upgrades to 1.24 and follows the automatic migration process.

Impact of migrating

The main change when you migrate to containerd node images is that Docker no longer manages the lifecycle of your containers (such as starting and stopping them). You therefore cannot use Docker commands or the Docker API to view or interact with GKE containers running on nodes that use containerd images.

Most user workloads don't have a dependency on the container runtime. The Docker runtime itself uses containerd to run containers, so your workloads behave the same way on containerd node images.

You might be impacted if the following situations apply:

  • You run privileged Pods that execute Docker commands.
  • You run scripts on nodes from outside the Kubernetes infrastructure (for example, using SSH to troubleshoot issues).
  • You run third-party tools that perform similarly privileged operations.
  • Your monitoring system includes tooling that responds to Docker-specific logs.
  • You deploy logging, monitoring, security, or continuous integration tooling supplied by outside vendors into your GKE cluster. Contact these vendors to confirm impact.

You are not impacted in the following situations:

  • You have a build pipeline outside the GKE cluster that uses Docker to build and push container images.
  • You use docker build and docker push commands in your GKE cluster. Linux node images with containerd include the Docker binary and support these commands.

Using privileged Pods to access Docker

If your users access Docker Engine on a node using a privileged Pod, you should update those workloads so that there's no direct reliance on Docker. For example, consider migrating your logging and monitoring extraction process from Docker Engine to GKE system add-ons.

Building container images with containerd

You cannot use containerd to build container images. Linux node images with containerd include the Docker binary, so you can use Docker to build and push images. However, we don't recommend building images by running commands in individual containers or directly on local nodes.

Kubernetes is not aware of system resources used by local processes outside the scope of Kubernetes, and the Kubernetes control plane cannot account for those processes when allocating resources. This can starve your GKE workloads of resources or cause instability on the node.

Consider accomplishing these tasks with services outside the scope of the individual container, such as Cloud Build, or by using a tool such as kaniko to build images as a Kubernetes workload.
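
For example, Cloud Build can build an image from local source and push it to a registry without running Docker on your GKE nodes. In the following sketch, PROJECT_ID and IMAGE_NAME are placeholders, and the command assumes that the Cloud Build API is enabled in your project:

  gcloud builds submit --tag gcr.io/PROJECT_ID/IMAGE_NAME .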

If none of these suggestions work for you, and you understand the risks, you can continue using Docker on the local node to build images. You must push the images to a registry before you can use them in a GKE cluster, because Kubernetes with containerd can't see images that Docker builds and stores locally on the node.

Debugging containers on containerd nodes

For debugging or troubleshooting on Linux nodes, you can interact with containerd using the portable command-line tool built for Kubernetes container runtimes: crictl. crictl supports common functionalities to view containers and images, read logs, and execute commands in the containers. Refer to the crictl user guide for the complete set of supported features and usage information.
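
For example, after you connect to a node (such as over SSH), commands similar to the following cover common debugging tasks; CONTAINER_ID is a placeholder, and depending on the node image you might need to run them with sudo:

  crictl ps                          # list running containers on the node
  crictl images                      # list images available to containerd
  crictl logs CONTAINER_ID           # read a container's logs
  crictl exec -it CONTAINER_ID sh    # run a shell in a running container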

For Windows Server nodes, the containerd daemon runs as a Windows service named containerd.

Logs are available as follows:

  • Windows: C:\etc\kubernetes\logs\containerd.log
  • Linux: run journalctl -u containerd

You can also view logs for Windows and Linux nodes in Logs Explorer under LOG NAME: "container-runtime".

Troubleshooting

For troubleshooting information, see Troubleshoot issues with the containerd runtime.

What's next