GKE release notes (new features)

This page documents new features in Google Kubernetes Engine (GKE). You can check this page periodically for new feature announcements. This information is also included in the overall GKE release notes.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/gke-new-features-release-notes.xml

April 16, 2024

The Z3 machine family is generally available in GKE Standard clusters running GKE version 1.25 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool. The following limitations apply:

  • Node auto-provisioning for Z3 is supported in 1.29 and later.
  • GKE Autopilot is supported in 1.29 and later.
  • Z3 machines are gracefully terminated during host maintenance.
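For example, a Z3 node pool might be created as follows. This is a minimal sketch: the cluster name, location, and the z3-highmem-88 machine type are placeholder assumptions, so substitute values that match your environment.

gcloud container node-pools create z3-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --machine-type=z3-highmem-88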

April 12, 2024

GPUDirect-TCPX is now supported on GKE version 1.27 and later and requires the following patch versions:

  • For GKE version 1.27, use GKE patch version 1.27.7-gke.1121000 or later.
  • For GKE version 1.28, use GKE patch version 1.28.8-gke.1095000 or later.
  • For GKE version 1.29, use GKE patch version 1.29.3-gke.1093000 or later.

To use GPUDirect-TCPX, see Maximize GPU network bandwidth with GPUDirect-TCPX and multi-networking.

April 10, 2024

The N4 machine family is generally available in GKE Standard clusters running on GKE 1.29 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool. The following limitations apply:

  • Confidential GKE nodes are not supported.
  • Local SSD is not supported.
  • hyperdisk-balanced is the only supported boot disk type.
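For example, an N4 node pool that uses the required Hyperdisk Balanced boot disk might look like the following sketch. The cluster name, location, and the n4-standard-8 machine type are placeholder assumptions.

gcloud container node-pools create n4-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --machine-type=n4-standard-8 \
    --disk-type=hyperdisk-balanced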

April 09, 2024

Cloud Tensor Processing Units (TPUs) are now available in GKE Autopilot clusters running version 1.29.2-gke.1521000 or later. To learn more, visit Deploy TPU workloads on GKE Autopilot.

April 05, 2024

NVIDIA Multi-Process Service (MPS) for GPUs is available in GKE version 1.27.7-gke.1088000 and later. MPS lets multiple workloads share a single NVIDIA GPU hardware accelerator.
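As an illustration, a GPU node pool with MPS sharing enabled might be created along these lines. The machine type, accelerator type, and client count are placeholder assumptions; check the GPU sharing documentation for the options supported in your version.

gcloud container node-pools create mps-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --machine-type=g2-standard-12 \
    --accelerator=type=nvidia-l4,count=1,gpu-sharing-strategy=mps,max-shared-clients-per-gpu=2,gpu-driver-version=default

In this sketch, up to two workloads can share each physical L4 GPU.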

April 03, 2024

The GKE compliance dashboard now offers compliance evaluation for CIS Kubernetes Benchmark 1.5, Pod Security Standards (PSS) Baseline, and PSS Restricted standards in Preview. To learn more, see About the compliance dashboard.

GKE threat detection is now available in Preview. Threats against the Kubernetes control plane that affect your GKE Enterprise clusters are now visible in the GKE security posture dashboard. To learn more, see About GKE threat detection.

April 02, 2024

Observability for Google Kubernetes Engine: Added a dashboard for Tensor Processing Unit (TPU) metrics on the Observability tab of both the cluster listing and cluster details pages for GKE clusters. The charts on this dashboard are populated with data only if the cluster has TPU nodes and GKE system metrics are enabled. For more information, see View observability metrics.

March 19, 2024

Cilium cluster-wide network policies are now generally available with the following GKE versions:

  • 1.28.6-gke.1095000 or later
  • 1.29.1-gke.1016000 or later

You can now control your GKE workloads' ingress and egress traffic cluster-wide, without being bound to a namespace for your network policies. This new capability is intended to streamline network policies for GKE platform administrators looking for a uniform way to apply policies across namespaces or application teams.

Cilium cluster-wide network policy is available in all GKE editions.

To learn more, read Control cluster-wide communication using network policies.
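As a sketch of what a cluster-wide policy looks like, the following CiliumClusterwideNetworkPolicy allows ingress to Pods labeled role=backend only from Pods labeled role=frontend, in any namespace. The policy name and labels are illustrative assumptions.

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend
EOF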

March 11, 2024

Private clusters created on GKE versions 1.29.0-gke.1384000 and later use Private Service Connect (PSC) for nodes to privately communicate with the control plane. There is no price increase for using GKE private clusters running on PSC.

Private clusters created on other GKE versions continue to use VPC Network Peering for node-to-control-plane communication.

The Secret Manager add-on for GKE is now available. With the add-on, you can access the secrets stored in Secret Manager as volumes mounted in Kubernetes Pods. The add-on is supported on Standard and Autopilot clusters running version 1.29 and later. For more information, see Use Secret Manager add-on with GKE.
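Enabling the add-on on an existing cluster is typically a single update command. The following is a sketch only: the --enable-secret-manager flag name, cluster name, and location are assumptions to verify against the linked documentation. After the add-on is enabled, you reference Secret Manager secrets through the CSI volume configuration described in that guide.

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --enable-secret-manager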

Opportunistic bursting and lower Pod minimums are now available on newly created GKE Autopilot clusters at version 1.29.2-gke.1060000 or later, and on existing clusters created at 1.26 or later that have been fully upgraded (including all nodes) to 1.29.2-gke.1060000 or later. To learn more, see Configure Pod bursting on GKE.
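Bursting works by setting a container's limits above its requests, so the Pod can temporarily use unused capacity on the node. A minimal sketch of a burstable Pod, with illustrative resource values, looks like the following:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: burstable-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m       # capacity the Pod is scheduled and billed for
        memory: 256Mi
      limits:
        cpu: 500m       # the Pod can burst up to this amount
        memory: 256Mi
EOF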

March 07, 2024

You can now preload data or container images onto new nodes to speed up workload deployment and autoscaling. This feature is available in Preview starting in GKE version 1.28.3-gke.1067000.

March 04, 2024

NVIDIA H100 (80 GB) GPUs are now available in GKE Autopilot mode in versions 1.28.6-gke.1369000 or later, and 1.29.1-gke.1575000 or later.

GPU workloads running in Autopilot mode can now be configured using the Accelerator Compute Class. This configuration supports resource reservations, Compute Engine committed use discounts, and a new pricing model in GKE versions 1.28.6-gke.1095000 and later, and 1.29.1-gke.1143000 and later.

February 28, 2024

The Performance Compute Class, designed for running whole-machine CPU workloads, is available in Autopilot mode from versions 1.28.6-gke.1369000 and 1.29.1-gke.1575000 and later.

February 26, 2024

GKE now supports Gemma (2B, 7B), Google's new state-of-the-art open models. To learn more, refer to the guides for serving Gemma on GKE.

Deployment to GKE is also supported via Vertex AI Model Garden as part of our Hugging Face, Vertex AI, and GKE integration.

February 21, 2024

The GKE Stateful HA Operator is now generally available starting in GKE versions 1.28.5-gke.1113000 and later, or 1.29.0-gke.1272000 and later. The operator is enabled in new Autopilot clusters and is opt-in for new Standard clusters.

February 20, 2024

You can now use the GKE API to apply Resource Manager tags to your GKE nodes. GKE attaches these tags to the underlying Compute Engine VMs. You can use these tags to selectively enforce Cloud Firewall network firewall policies. This feature is generally available in GKE version 1.28 and later.
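For example, a node pool whose VMs carry a Resource Manager tag might be created as follows. The tag key and value IDs, cluster name, and location are placeholder assumptions, and the firewall policy must separately reference the same tag; see the tags documentation for the accepted tag formats.

gcloud container node-pools create tagged-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --resource-manager-tags=tagKeys/123456789012=tagValues/987654321098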

Kubernetes Engine best-practice observability packages, including control plane logs, control plane metrics, and kube state metrics, are now enabled by default for new managed GKE Enterprise clusters. This ensures that the necessary data is available when it's needed for troubleshooting or optimization. Control plane metrics and kube state metrics are included in GKE Enterprise edition at no additional charge.

GKE now delivers insights and recommendations if your cluster's Certificate Authority (CA) has expired or will expire within the next 180 days. To learn more, see Find clusters with expiring or expired credentials.

February 02, 2024

FQDN network policies are now generally available with the following GKE versions:

  • 1.26.4-gke.500 and later.
  • 1.27.1-gke.400 and later.
  • 1.28 and later.

You can further control your GKE workloads' egress traffic to a public or private service or endpoint by using a network policy matching a fully-qualified domain name or a regular expression.

FQDN Network Policy is only available and supported with GKE Enterprise.

To learn more, read Control Pod egress traffic using FQDN network policies.
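FQDN network policies are evaluated by GKE Dataplane V2, so the feature must be enabled on the cluster before FQDN policies take effect. A minimal sketch, assuming an existing Dataplane V2 cluster named example-cluster:

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --enable-fqdn-network-policy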

February 01, 2024

You can now encrypt Pod-to-Pod traffic between nodes in the same cluster or in a multi-cluster environment natively with GKE. Inter-node transparent encryption is now generally available, only with GKE Enterprise, for GKE clusters in the following versions:

  • 1.26.9-gke.1024000 and later.
  • 1.27.6-gke.1506000 and later.
  • 1.28.2-gke.1098000 and later.
  • 1.29 and later.

To learn more, see Encrypt your data in-transit in GKE with user-managed encryption keys.

January 31, 2024

The africa-south1 region in Johannesburg, South Africa, is now available.

December 19, 2023

You can now modify the vm.max_map_count Linux kernel attribute for nodes in a GKE Standard cluster node pool using the node system configuration. To learn more, see Sysctl configuration options.
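For example, you might raise vm.max_map_count for an Elasticsearch-style workload by passing a node system configuration file when creating a node pool. The file contents, names, and value below are a sketch based on the node system configuration format; adjust them for your workload.

cat <<EOF > node-system-config.yaml
linuxConfig:
  sysctl:
    vm.max_map_count: "262144"
EOF

gcloud container node-pools create sysctl-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --system-config-from-file=node-system-config.yaml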

December 18, 2023

The GKE NEG controller now supports IPv6 endpoints with GKE version 1.28.4-gke.1083000 and later.

With this new capability, when you create a dual-stack Service in a dual-stack GKE cluster, any NEGs associated with the Service contain both IPv4 and IPv6 endpoints. Existing dual-stack Services that use NEGs (for example, Ingress and Services using standalone NEGs) will be migrated from "IPv4 only" endpoints to "IPv4 + IPv6" endpoints.

The migration completes in approximately one hour. If a NEG contains a single endpoint, you might experience brief downtime of approximately one to two minutes while that endpoint is migrated.

Note that having IPv6 endpoints in NEGs doesn't necessarily mean that the load balancer uses IPv6 for communication. How the load balancer communicates with your Pods depends on how the BackendService is configured, including fields like IpAddressSelectionPolicy.

All newly created Google Kubernetes Engine (GKE) Autopilot clusters starting with 1.27.4-gke.900 will automatically collect and send metrics from the kube-state-metrics package to Managed Service for Prometheus.

December 15, 2023

The Observability tab in the cluster details page for each cluster and in the GKE cluster list page now shows GPU metrics if the cluster has GPU nodes. For more information, see View observability metrics.

November 29, 2023

Starting in GKE version 1.27.6-gke.1248000, clusters in Autopilot mode detect nodes that can't fit all DaemonSets and, over time, migrate workloads to larger nodes that can fit all DaemonSets. For more information, see Best practices for DaemonSets on Autopilot.

Starting in GKE 1.27.7, you can configure your workloads to use TPU reservations with node auto-provisioning.

November 17, 2023

You can now run workloads on L4 GPUs in Autopilot clusters that use GKE version 1.28.3-gke.1203000 and later. For instructions, see Deploy GPU workloads in Autopilot.

November 15, 2023

Dynamic Workload Scheduler support on GKE through the Provisioning Request API launched in Preview in version 1.28. Use Dynamic Workload Scheduler to obtain large, atomic sets of GPU resources in GKE Standard clusters. For more information, see Deploy GPUs for batch workloads with ProvisioningRequest.

November 10, 2023

The Observability tab for a GKE deployment now shows application performance metrics if the metrics are available. The supported metric sources include Istio, GKE Ingress, NGINX Ingress, and gRPC and HTTP metrics collected by using Google Cloud Managed Service for Prometheus. For more information, see Use application performance metrics.

November 09, 2023

GKE Infrastructure Dashboards and Metrics Packages are now generally available for both GKE Autopilot and Standard clusters with control plane version 1.27.2-gke.1200 and later.

You can now configure your Autopilot or Standard clusters to export a predefined list of metrics emitted by GKE-managed kube-state-metrics (KSM) for workload state and persistent storage. The component runs in the gke-managed-cim GKE system namespace, collects the metrics by using Google Cloud Managed Service for Prometheus, and sends them to Cloud Monitoring. You can view the metrics in the new Persistent and Workloads State dashboards on the Observability tab.

November 08, 2023

New inference-focused Cloud Tensor Processing Unit (TPU) v5e machine types are available in GKE. These single-host TPU VMs are designed for inference workloads and contain one, four, or eight TPU v5e chips. These three new TPU v5e machine types (ct5l-hightpu-1t, ct5l-hightpu-4t, and ct5l-hightpu-8t) are currently available in the us-central1-a and europe-west4-b zones.

Cloud Tensor Processing Unit (TPU) v5e is generally available in clusters running GKE version 1.27.2-gke.2100 and later.

TPU v5e is purpose-built to bring the cost-efficiency and performance required for medium- and large-scale training and inference. TPU v5e delivers up to 2x higher training performance per dollar and up to 2.5x inference performance per dollar for LLMs and gen AI models compared to Cloud TPU v4. At less than half the cost of TPU v4, TPU v5e makes it possible for more organizations to train and deploy larger, more complex AI models.

October 31, 2023

GKE multi-cluster Gateway is now generally available in GKE versions 1.24 and later for GKE Standard clusters, and versions 1.26 and later for GKE Autopilot clusters. Use the Gateway API to express the intent of your inbound HTTP(S) traffic into your fleet of GKE clusters. The multi-cluster Gateway controller deploys and manages the Application Load Balancers that forward traffic to your applications. To learn more, see Enable multi-cluster Gateways. For the list of supported Cloud Load Balancers and their features, refer to GatewayClass capabilities.

October 20, 2023

You can now use the GKE API to apply Resource Manager tags to your GKE resources. GKE attaches these tags to the underlying Compute Engine VMs. You can use these tags to selectively enforce Cloud Firewall network firewall policies. This feature is available in Public Preview in GKE version 1.28 and later.

October 19, 2023

Compute resources can now be reserved in advance for use with GKE. Create a future reservation to request assurance of important or difficult-to-obtain capacity in advance. There are no additional costs for creating future reservation requests. You only start to pay when Compute Engine provisions the reserved resources, and you're charged at the same cost as on-demand reservations.

October 16, 2023

Filestore Enterprise now supports backups on GKE, allowing you to make reliable copies of your data to be stored for later use. To trigger backups on Filestore Enterprise, use Kubernetes volume snapshots. Backups are currently not supported for Filestore Enterprise instances with multishares enabled.

October 13, 2023

Starting in GKE 1.28.1-gke.1066000, two new TPU usage metrics are available: TensorCore utilization and Memory Bandwidth utilization.

October 09, 2023

If you are using a third-generation machine series (for example, C3), GKE configures Local SSD volumes as the local ephemeral storage by default. You no longer need to specify the --ephemeral-storage-local-ssd flag when provisioning clusters or node pools. When you configure Local SSD volumes as raw block storage with the --local-nvme-ssd-block flag, specifying the count value is now optional.
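For example, creating a node pool on a C3 machine type that includes Local SSD no longer requires any storage flags for ephemeral storage, and the raw-block flag can be passed without a count. The machine type, cluster name, and location below are placeholder assumptions.

# Local SSD backs ephemeral storage by default on -lssd machine types.
gcloud container node-pools create c3-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --machine-type=c3-standard-8-lssd

# Alternatively, expose the Local SSD as raw block storage; count is now optional.
gcloud container node-pools create c3-block-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --machine-type=c3-standard-8-lssd \
    --local-nvme-ssd-block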

October 02, 2023

GKE now delivers insights and recommendations if users have installed webhooks that intercept system resources or webhooks that have no available endpoints. To learn more, see Ensure control plane stability when using webhooks.

September 21, 2023

The Observability dashboards on the GKE Clusters List, Cluster Details, and Workload List pages are now customizable. Additionally, the Cluster Details dashboards can be customized across the entire project, or per-cluster for specific use cases.

September 19, 2023

The me-central2 region in Dammam, Saudi Arabia, is now available.

September 12, 2023

You can now use node auto-provisioning for TPU slices. With this feature, Standard clusters with GKE version 1.28 and later provision TPU node pools and multi-host TPU accelerators automatically to ensure the capacity required to schedule AI/ML workloads. To learn more, see Configuring TPU node auto-provisioning.

August 30, 2023

GKE now supports creating nodes and workloads with multiple network interfaces. You can create new clusters with version 1.27 and later with multi-networking enabled. The additional network interfaces on the Pods can be regular interfaces or high-performance interfaces where the network interface is directly attached to the Pod. For more information, see Set up multi-network support for Pods.

Your clusters can now perform operations, such as node auto-provisioning or version upgrades, on multiple node pools in parallel. You no longer have to wait for an operation to complete before you initiate another operation. This feature is enabled for all GKE versions. This change provides you with benefits like the following:

  • More efficient scaling, which results in improved savings and faster workload deployment
  • Faster, less disruptive node pool upgrades
  • Fewer "operation already in progress" messages that could delay subsequent planned operations
  • More reliable rollback behavior to fix upgrade-related disruptions in production
  • Automatic control plane resize operations won't block other operations on the cluster

The Google Cloud Platform Terraform provider has also been updated to take advantage of this change.

August 29, 2023

You can now create Cloud Tensor Processing Unit (TPU) nodes in GKE to run AI workloads, from training to inference. GKE manages your cluster by automating TPU resource provisioning, scaling, scheduling, repairing, and upgrading. GKE provides TPU infrastructure metrics in Cloud Monitoring, TPU logs, and error reports for better visibility and monitoring of TPU node pools in GKE clusters. TPUs are available with GKE Standard clusters. GKE supports TPU v4 in version 1.26.1-gke.1500 and later, and TPU v5e in version 1.27.2-gke.1500 and later. To learn more, see About TPUs in GKE.

You can now sequence the rollout of cluster upgrades across fleets or across scopes. To learn more, see About cluster upgrades with rollout sequencing.

August 25, 2023

GKE now delivers insights and recommendations to ensure your workloads are ready for disruption using features such as Pod Disruption Budgets. To learn more, see Ensure stateful workloads are disruption-ready.

August 22, 2023

The europe-west10 region in Berlin, Germany, is now available.

August 17, 2023

You can now easily identify clusters that use deprecated Kubernetes APIs removed in versions 1.25, 1.26, and 1.27. Kubernetes deprecation insights are now available for these versions.

August 16, 2023

GKE Infrastructure Dashboards and Metrics Packages are now available for both GKE Autopilot and Standard clusters with control plane version 1.27.2-gke.1200 and later. You can now configure Autopilot or Standard clusters to export a predefined list of metrics emitted by GKE managed KSM (kube-state-metrics) for workloads state and Persistent Storage. These metrics are collected by Google Cloud Managed Service for Prometheus and are sent to Cloud Monitoring. You can also view new dashboards (Persistent and Workloads state) rendering those metrics in the Observability tab. For more information, see View observability metrics.

You can now troubleshoot issues with CPU limit utilization and Memory limit utilization of containers running in GKE by using the new "interactive playbook" dashboards in Cloud Monitoring.

August 09, 2023

The Filestore CSI driver now supports smaller share sizes (10 GiB) for Filestore multishares for GKE on enterprise-tier instances, starting in GKE version 1.27.

August 02, 2023

You can now run workloads on A100 80GB GPUs in Autopilot clusters that use GKE version 1.27 and later.

July 25, 2023

Kubernetes control plane logs and Kubernetes control plane metrics are now available for GKE Autopilot clusters with control plane versions 1.22.0 and later and 1.22.13 and later, respectively. You can now configure Autopilot clusters to export logs and certain metrics emitted by the Kubernetes API server, scheduler, and controller manager to Cloud Logging and Cloud Monitoring.

July 24, 2023

GKE Autopilot supports extended duration Pods in version 1.27 and later with the cluster-autoscaler.kubernetes.io/safe-to-evict=false annotation. To learn more, see how to extend the run time of Autopilot Pods.
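A minimal sketch of an extended duration Pod follows; the Pod name, image, and command are illustrative. The annotation requests that the Pod not be evicted by autoscaler-driven events.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: long-running-task
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "sleep 86400"]   # illustrative long-running task
EOF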

July 13, 2023

The managed Cloud Storage FUSE CSI driver for GKE is now GA in versions 1.26.5 and later. You can use this driver to consume Cloud Storage buckets for GKE workloads.

July 12, 2023

GKE Dataplane V2 observability is now available in Public Preview starting in GKE versions 1.26.4-gke.500 or later, or 1.27.1-gke.400 or later. You can now enable Dataplane V2 metrics and observability tools on your cluster. Dataplane V2 metrics are included in new Autopilot clusters and are opt-in for new Standard clusters. You can opt in to enable Dataplane V2 observability tools for Autopilot and Standard clusters. Existing clusters can also be updated to enable metrics and observability tooling.

For more information, check out GKE Dataplane V2 observability.

In GKE version 1.24 and later, new beta APIs are disabled by default in new clusters. Starting in version 1.27, which is the first minor version since 1.24 to introduce new beta APIs, you can enable new beta APIs at cluster creation or for an existing cluster.

For more information, see how to Use Kubernetes beta APIs with GKE clusters.

July 11, 2023

You can now troubleshoot common GKE issues by using the new "interactive playbook" dashboards in Cloud Monitoring: unschedulable Pods and crashlooping containers. You can also access the interactive playbooks from GKE UI insights and set alerts that notify you when those issues occur.

For information about using these dashboards, see the GKE troubleshooting documentation for unschedulable Pods and crashlooping containers.

Starting in GKE version 1.27, the cluster autoscaler always considers Compute Engine reservations when making scale-up decisions. Node pools with matching unused reservations are prioritized when choosing which node pool to scale up, even when that node pool is not the most efficient one. Additionally, unused reservations are always prioritized when balancing multi-zonal scale-ups.

For more information, see how to use cluster autoscaler.

July 10, 2023

The new release of the GKE Gateway controller (2023-R2) is now generally available. With this release, the GKE Gateway controller provides the following new capabilities:

  • New GatewayClasses supporting the regional external Application Load Balancer
  • Identity-aware Proxy (IAP) Integration
  • Custom request and response headers
  • URL Rewrites and Path Redirects

To learn more, see the supported capabilities per GatewayClass.

June 26, 2023

Managed Service for Prometheus is enabled by default in new GKE Standard clusters running version 1.27 and later. Existing clusters that upgrade to 1.27 will not automatically enable this feature. For more information, see Enable managed collection: GKE.

June 23, 2023

Automatic GPU driver installation is available in version 1.27.2-gke.1200 and later. This lets you install NVIDIA GPU drivers on nodes without manually applying a DaemonSet.

For instructions, see Running GPUs.
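For example, driver installation can be requested as part of node pool creation by adding a gpu-driver-version option to the --accelerator flag. The machine type, GPU type, and driver channel below are placeholder assumptions.

gcloud container node-pools create gpu-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1,gpu-driver-version=default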

June 22, 2023

GKE Autopilot now supports deploying your own service mesh. Many service meshes, such as Istio or Linkerd, require the CAP_NET_ADMIN Linux capability to function, which is disabled on Autopilot clusters by default to reduce the attack surface. You can now optionally enable NET_ADMIN on your Autopilot clusters if you need this capability for your service mesh or other opt-in use cases. See Autopilot security for more information about how to enable NET_ADMIN.
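As a sketch, the capability is enabled through a cluster-level workload policy. The flag value shown, cluster name, and location are assumptions to confirm against the Autopilot security documentation.

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --workload-policies=allow-net-admin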

June 21, 2023

GKE support for Hyperdisk Throughput and Hyperdisk Extreme as an attached persistent disk option is now generally available. Support is available for both Autopilot and Standard clusters running GKE versions 1.26 and later.

June 14, 2023

You can now identify clusters with low or no utilization by using idle cluster insights.

June 12, 2023

Dual-stack LoadBalancer Services are now available in Preview. Dual-stack LoadBalancer Services are supported on both GKE Standard and Autopilot dual-stack clusters. To learn more, see Single-stack and dual-stack Services.
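A dual-stack LoadBalancer Service is requested with the standard Kubernetes dual-stack fields; the following is a minimal sketch with an assumed app label and ports.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-dual-stack
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF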

You can now use deprecation insights to identify clusters on versions 1.21 to 1.24 that use Pod Security Policy, which is unsupported on GKE version 1.25 and later.

June 09, 2023

In addition to the existing egress network policy GKE already supports, you can now control the egress traffic of your Pods by using a network policy that matches a fully-qualified domain name or a regular expression. FQDN Network Policy is now available in Preview for clusters in version 1.26.4-gke.500 and later, and 1.27.1-gke.400 and later. For more information, see Control Pod egress traffic using FQDN network policies.

June 01, 2023

Users running Agones on GKE now receive insights and recommendations if the Agones controller is not installed on dedicated nodes.

May 26, 2023

The Observability tab for each of your GKE clusters now includes metrics for ephemeral storage. For more information, see View observability metrics.

May 22, 2023

The C3 machine family is generally available for GKE Standard clusters running on version 1.22 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool.

The following features are not supported for this machine family:

  • Node auto-provisioning.
  • Confidential GKE nodes.
  • Local SSD.
  • Standard persistent disks (pd-standard).

For more information, refer to the C3 machine series documentation.

May 12, 2023

The g2-standard machine family with NVIDIA L4 GPUs is generally available for node pools in clusters running GKE version 1.22 and later. To select the machine family, use the --machine-type flag in your create command.

May 09, 2023

You can now add more IPv4 secondary Pod ranges to a new or existing cluster with the --additional-pod-ipv4-ranges flag. This feature is generally available for both GKE Standard and Autopilot clusters with GKE version 1.26 and later. To learn more, see Adding Pod IP addresses.
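For example, assuming a secondary range named extra-pod-range already exists on the cluster's subnet, it might be added to an existing cluster as follows:

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --additional-pod-ipv4-ranges=extra-pod-range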

May 02, 2023

The managed Cloud Storage FUSE CSI driver for GKE is now available in Preview in GKE versions 1.26.3 and later. You can use this driver to consume Cloud Storage buckets for GKE workloads.