Google Kubernetes Engine (GKE) clusters use containerd node images for all worker nodes that run version 1.24 and later. The worker nodes use a specific version of containerd, based on the GKE version:
- Nodes that run GKE 1.32 or earlier, with containerd node images, use containerd 1.7 or earlier versions.
- Nodes that run GKE 1.33 use containerd 2.0.
When GKE nodes are upgraded from 1.32 to 1.33, the nodes migrate from using containerd 1.7 to the new major version, containerd 2.0. You can't change which containerd version a GKE version uses.
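To check which containerd version your nodes currently run, you can list the nodes and inspect the container runtime column:
kubectl get nodes -o wide
The CONTAINER-RUNTIME column shows a value such as containerd://1.7.x or containerd://2.0.x.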
You can skip reading this page if you know that your workloads run as expected on containerd 2.
How GKE is transitioning to containerd 2
Review the following timeline to understand how GKE is transitioning existing clusters to use containerd 2:
- With minor version 1.32, GKE uses containerd 1.7. containerd 1.7 deprecated both Docker Schema 1 images and the Container Runtime Interface (CRI) v1alpha2 API. To learn about other features deprecated in earlier versions, see Deprecated config properties.
- With minor version 1.33, GKE uses containerd 2.0, which removes support for Docker Schema 1 images and the CRI v1alpha2 API.
- The following containerd config properties in the CRI plugin are deprecated and will be removed in containerd 2.1, with a GKE version yet to be announced: registry.auths, registry.configs, and registry.mirrors.
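If you customize the containerd configuration on nodes that you manage, you can check for these properties with a read-only search on the node. This is a hedged sketch that assumes the default config path /etc/containerd/config.toml:
# Print any lines that reference the deprecated registry properties.
grep -nE 'registry\.(auths|configs|mirrors)' /etc/containerd/config.toml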
For approximate timing of automatic upgrades to later minor versions such as 1.33, see the Estimated schedule for release channels.
Impact of the transition to containerd 2
Read the following sections to understand the impact of this transition to containerd 2.
Paused automatic upgrades
GKE pauses automatic upgrades to 1.33 when it detects that a cluster uses these deprecated features. However, this detection isn't guaranteed: if your cluster nodes use these features, we recommend that you create a maintenance exclusion to prevent node upgrades. The maintenance exclusion ensures that your nodes aren't upgraded even if GKE doesn't detect usage.
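For example, you could create a maintenance exclusion that blocks minor and node upgrades with a gcloud command along the following lines. The cluster name, location, exclusion name, and time window are placeholders; adjust them for your cluster:
gcloud container clusters update CLUSTER_NAME \
    --location=us-central1 \
    --add-maintenance-exclusion-name=hold-for-containerd-2-migration \
    --add-maintenance-exclusion-start=2025-01-01T00:00:00Z \
    --add-maintenance-exclusion-end=2025-06-30T23:59:59Z \
    --add-maintenance-exclusion-scope=no_minor_or_node_upgrades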
After you migrate from using these features, if 1.33 is an automatic upgrade target for your cluster nodes and there are no other factors blocking auto-upgrades, GKE resumes automatic minor upgrades to 1.33. For Standard cluster node pools, you can also manually upgrade the node pool.
End of support and the impact of failing to prepare for migration
GKE pauses automatic upgrades until the end of standard support. If your cluster is enrolled in the Extended channel, your nodes can remain on a version until the end of extended support. For more details about automatic node upgrades at the end of support, see Automatic upgrades at the end of support.
If you don't migrate from these features, when 1.32 reaches the end of support, and your cluster nodes are automatically upgraded to 1.33, you could experience the following issues with your clusters:
- Workloads using Docker Schema 1 images fail.
- Applications that call the CRI v1alpha2 API fail when they call the API.
Migrate from deprecated features
Review the following content to understand how to migrate from features deprecated with containerd 2.
Migrate from Docker Schema 1 images
Identify workloads using images that must be migrated, then migrate those workloads.
Find images to be migrated
You can use different tools to find images that must be migrated.
Use Cloud Logging
You can use the following query in Cloud Logging to check containerd logs to find Docker Schema 1 images in your cluster:
jsonPayload.SYSLOG_IDENTIFIER="containerd"
"conversion from schema 1 images is deprecated"
If more than 30 days have passed since the image was pulled, you might not see logs for an image.
Use the ctr command directly on a node
To query a specific node to return all non-deleted images that were pulled as Schema 1, run the following command on a node:
ctr --namespace k8s.io images list 'labels."io.containerd.image/converted-docker-schema1"'
This command can be useful if, for example, you're troubleshooting a specific node and you don't see log entries in Cloud Logging because it's been more than 30 days since the image was pulled.
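To reach a node, you can SSH to it and run the command with root privileges. In the following sketch, NODE_NAME and the zone are placeholders:
# SSH to the node, then run ctr as root.
gcloud compute ssh NODE_NAME --zone=us-central1-a
sudo ctr --namespace k8s.io images list 'labels."io.containerd.image/converted-docker-schema1"'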
Use the crane open-source tool
You can also use open-source tools such as crane to check for images.
Run the following crane command to check the schema version for an image:
crane manifest $tagged_image | jq .schemaVersion
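To check every image that's currently referenced by pods in a cluster, you can combine kubectl, crane, and jq in a small loop. This is a sketch rather than an official tool: it assumes crane and jq are installed locally, that you have pull access to each image, and it only covers regular containers (not init or ephemeral containers):
# List unique images referenced by pod specs, then report any whose
# manifest still declares schemaVersion 1.
kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].image}' \
  | tr ' ' '\n' | sort -u \
  | while read -r image; do
      schema=$(crane manifest "$image" 2>/dev/null | jq -r '.schemaVersion')
      if [ "$schema" = "1" ]; then
        echo "Docker Schema 1 image: $image"
      fi
    done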
Prepare workloads
To prepare workloads that run Docker Schema 1 images, you must migrate those workloads to Docker Schema 2 images or Open Container Initiative (OCI) images. Consider the following options for migrating:
- Find a replacement image: you might be able to find a publicly available open-source or vendor-provided image.
- Convert the existing image: if you can't find a replacement image, you can convert existing Docker Schema 1 images to OCI images with the following steps:
- Pull the Docker image into containerd, which automatically converts it to an OCI image.
- Push the new OCI image to your registry.
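As an illustration only, the following sketch uses ctr on a machine that runs containerd 1.7 to pull a Schema 1 image (which converts it on pull), retag it, and push the converted image. The image references are placeholders, and you might need the --user flag on the pull and push commands if either registry requires authentication:
# Pull the legacy image; containerd 1.7 converts Schema 1 images on pull.
sudo ctr --namespace k8s.io images pull docker.io/REPOSITORY/legacy-app:TAG
# Retag the converted image for the destination registry.
sudo ctr --namespace k8s.io images tag docker.io/REPOSITORY/legacy-app:TAG REGISTRY/REPOSITORY/legacy-app:TAG
# Push the converted (OCI/Schema 2) image.
sudo ctr --namespace k8s.io images push REGISTRY/REPOSITORY/legacy-app:TAG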
Migrate from the CRI v1alpha2 API
The CRI v1alpha2 API was removed in Kubernetes 1.26, and containerd 2.0 removes support for serving it. You must identify workloads that access the containerd socket and update those applications to use the v1 API.
Identify workloads
You can use different techniques to identify workloads that must be migrated.
Use kubectl
The following command helps you find workloads that access the containerd socket. This command returns workloads that mount hostPath directories that include the socket. This query might lead to false positives because some workloads mount these directories or other child directories, but don't access the containerd socket.
Run the following command:
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
select(.spec.volumes[]? | select(.hostPath.path? and (.hostPath.path |
startswith("/run") or startswith("/var/run") or . == "/"))) |
.metadata.namespace + "/" + .metadata.name'
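To reduce false positives, you can inspect the hostPath mounts of each flagged pod and check whether the containerd socket path (typically /run/containerd/containerd.sock on GKE nodes) is reachable from them. POD_NAME and NAMESPACE are placeholders:
# Print the hostPath directories that the pod mounts.
kubectl get pod POD_NAME --namespace NAMESPACE -o jsonpath='{.spec.volumes[*].hostPath.path}'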
Check application code
You can check your application code to see if it's importing CRI v1alpha2 API client libraries.
For example, see the following Go code:
package main

import (
    ...
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func foo() {
    ...
    client := runtimeapi.NewRuntimeServiceClient(conn)
    version, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    ...
}
Here, the application imports the v1alpha2 library and uses it to issue RPCs. If the RPCs use the connection to the containerd socket, then this application is causing GKE to pause auto-upgrades for the cluster.
Follow these steps to search for and update your application code:
Identify problematic Go applications by running the following command to search for the v1alpha2 import path:
grep -r "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
If the output of this command shows that the v1alpha2 library is used in the file, you must update the file.
For example, replace the following application code:
runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
Update the code to use v1:
runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"