You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.
To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly.
November 20, 2024
You can now specify a custom resource policy as a compact placement policy with node auto-provisioning in clusters running GKE version 1.31.1-gke.2010000 or later. To learn more, see Use compact placement for node auto-provisioning.
November 19, 2024
GKE version 1.31 introduces increased scalability, allowing users to create clusters with up to 65,000 nodes. For clusters exceeding 5,000 nodes, a quota increase is required. Contact Google Cloud support to request this increase.
November 18, 2024
Performance horizontal Pod autoscaling (HPA) profile is now available in Preview for new and existing GKE clusters running version 1.31.2-gke.1138000 or later. This feature speeds up HPA reaction time and enables quick recalculation of up to 1,000 HPA objects. To learn more, see Configuring Performance HPA profile.
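For illustration, the profile recalculates standard HPA objects; a minimal autoscaling/v2 manifest (assuming an existing Deployment named web) looks like this:

    # hpa.yaml (apply with: kubectl apply -f hpa.yaml)
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web              # assumes a Deployment named "web" exists
      minReplicas: 2
      maxReplicas: 50
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 60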
November 11, 2024
DNS-based access for GKE clusters control plane is now generally available. This capability provides each cluster with a unique domain name system (DNS) name or fully-qualified domain name (FQDN). Access to clusters is controlled through IAM policies, eliminating the need for bastion hosts or proxy nodes. Authorized users can connect to the control plane from different cloud networks, on-prem deployments, or from remote locations, without relying on proxies.
To learn more, see About network isolation in GKE.
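As a sketch of the workflow (the --dns-endpoint flag and the names here are assumptions; see the linked page for the exact invocation), fetching credentials that point kubectl at the DNS-based endpoint might look like:

    # Cluster name and location are placeholders.
    gcloud container clusters get-credentials my-cluster \
        --location us-central1 \
        --dns-endpoint
    kubectl get nodes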
November 07, 2024
GKE clusters running version 1.28 or later now support automatic application monitoring in public preview. Enabling this feature automatically deploys PodMonitoring configurations to capture key metrics for supported workloads like Apache Airflow, Istio, and RabbitMQ. These metrics are integrated with Cloud Monitoring dashboards for observability. To learn more, see Configure automatic application monitoring for workloads.
November 06, 2024
The GKE Volume Populator is generally available on GKE clusters running version 1.31.1-gke.1729000 or later. This feature provides a way to automate data transfer from a Google Cloud Storage bucket source storage to a destination PersistentVolumeClaim backed by a Parallelstore instance. To learn more, see Transfer data from Cloud Storage during dynamic provisioning using GKE Volume Populator.
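A hypothetical sketch of the flow: the destination PersistentVolumeClaim points its dataSourceRef at an object describing the source bucket (the GCPDataSource kind, its API group, and the StorageClass name below are assumptions; see the linked guide for the real schema):

    # pvc.yaml (apply with: kubectl apply -f pvc.yaml)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: training-data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: parallelstore-class   # assumed Parallelstore StorageClass
      resources:
        requests:
          storage: 12Ti
      dataSourceRef:
        apiGroup: datalayer.gke.io   # API group is an assumption
        kind: GCPDataSource          # custom resource describing the Cloud Storage source
        name: gcs-bucket-source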
November 05, 2024
Generally available: In GKE version 1.26 and later, Hyperdisk Balanced volumes can be created in Confidential mode for custom boot disks and persistent volumes and attached to Confidential GKE Nodes.
Cloud TPU v6e machine types are now in public preview for GKE clusters running version 1.30.4-gke.1167000 or later. These TPU VMs (ct6e-standard) are available in the following zones: us-east5-b, europe-west4-a, us-east1-d, asia-northeast1-b, and us-south1-a. To learn more, see Plan TPUs in GKE.
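For example, a node pool with one of these machine types might be created as follows (the ct6e-standard-4t variant is an assumed shape of the ct6e-standard family; see the linked page for the supported shapes):

    gcloud container node-pools create tpu-v6e-pool \
        --cluster my-cluster \
        --location us-east5-b \
        --machine-type ct6e-standard-4t \
        --num-nodes 1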
October 31, 2024
For GKE clusters running version 1.31.1-gke.1146000 or later, Cloud Tensor Processing Unit (TPU) v3 machine types are generally available. These TPU VMs (ct3-hightpu-4t and ct3p-hightpu-4t) are currently available in us-east1-d, europe-west4-a, us-central1-a, us-central1-b, and us-central1-f. To learn more, see TPUs in GKE.
GKE control plane authority is now generally available with version 1.31.1-gke.1846000 or later. GKE control plane authority provides enhanced visibility, security controls, and customization of the GKE control plane. For more information, see About GKE control plane authority.
October 30, 2024
Weighted load balancing for GKE External LoadBalancer Services is now available in Preview. Weighted load balancing is a more efficient way to distribute traffic to nodes based on the number of serving Pods they have backing the Service. To learn more, see About LoadBalancer Services.
October 29, 2024
Three new metrics are added for measuring node and workload startup latency (an example Metrics Explorer filter follows this list):
- kubernetes.io/node/latencies/startup: The total startup latency of a node, from the GCE instance's CreationTimestamp to Kubernetes Node Ready for the first time.
- kubernetes.io/pod/latencies/pod_first_ready: The Pod end-to-end startup latency (from Pod Created to Ready), including image pulls. This metric is available for clusters with GKE version 1.31.1-gke.1678000 or later.
- kubernetes.io/autoscaler/latencies/per_hpa_recommendation_scale_latency_seconds: Horizontal Pod Autoscaling (HPA) scaling recommendation latency (the time between metrics being created and the corresponding scaling recommendation being applied to the API server) for the HPA target. This metric is available for clusters running the following versions or later:
  - 1.30.4-gke.1348001
  - 1.31.0-gke.1324000
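For instance, you can chart the node startup metric in Metrics Explorer with a filter like the following (a sketch; the resource type is assumed to be k8s_node):

    metric.type="kubernetes.io/node/latencies/startup"
    resource.type="k8s_node"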
October 28, 2024
The A3 Edge (a3-edgegpu-8g) machine type with H100 80GB GPUs attached is now available on GKE Standard clusters. To learn more, see About GPUs.
October 17, 2024
You can now use NVIDIA H100 80GB GPUs on GKE in the following smaller machine types:
- a3-highgpu-1g (1 GPU)
- a3-highgpu-2g (2 GPUs)
- a3-highgpu-4g (4 GPUs)
These machine types are available through Dynamic Workload Scheduler Flex Start mode, Spot VMs in GKE Standard mode clusters, or Spot Pods in GKE Autopilot mode clusters. You can only provision these machine types if there's available capacity in your region.
GKE continues to support the 8-GPU H100 80GB machine types: a3-highgpu-8g and a3-megagpu-8g.
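As a hedged example, a Standard node pool using the smallest of these machine types on Spot VMs might be created like this (the accelerator type string is an assumption):

    gcloud container node-pools create a3-spot-pool \
        --cluster my-cluster \
        --location us-central1-a \
        --machine-type a3-highgpu-1g \
        --accelerator type=nvidia-h100-80gb,count=1 \
        --spot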
The new release of the GKE Gateway controller (2024-R2) is now generally available. With this release, the GKE Gateway controller provides the following new capabilities:
- ReferenceGrant support
- Cloud Armor backend security policy support for internal Gateways
- Regional selection for regional multi-cluster Gateways
- Traffic-based autoscaling for single cluster Gateways
To learn more about our GKE Gateway controller capabilities, see the supported capabilities per GatewayClass.
In GKE clusters with the control plane running version 1.29.1-gke.1425000 or later, TPU slice nodes support SIGTERM signals that alert the node of an imminent shutdown. The imminent shutdown notification is configurable up to five minutes in TPU nodes. To configure GKE to terminate your workloads gracefully within this notification timeframe, see Manage GKE node disruption for GPUs and TPUs.
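To use the notification window, raise the workload's grace period toward the five-minute maximum; a minimal sketch (the image is a placeholder):

    # tpu-job.yaml (apply with: kubectl apply -f tpu-job.yaml)
    apiVersion: v1
    kind: Pod
    metadata:
      name: tpu-job
    spec:
      terminationGracePeriodSeconds: 300   # matches the up-to-5-minute notice
      containers:
      - name: trainer
        image: us-docker.pkg.dev/my-project/repo/trainer:latest   # placeholder
        # The container should trap SIGTERM and checkpoint before exiting.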
October 15, 2024
You can now create workloads with multiple network interfaces in GKE Autopilot clusters running version 1.29.5-gke.1091000 and later or version 1.30.1-gke.1280000 and later. For more information, see Set up multi-network support for Pods.
October 04, 2024
The following beta APIs were added in Kubernetes 1.31 and are available in GKE version 1.31.1-gke.1361000 and later:
- networking.k8s.io/v1beta1/ipaddresses
- networking.k8s.io/v1beta1/servicecidrs
Enabling both APIs at the same time enables the Multiple Service CIDRs Kubernetes feature in a GKE cluster.
During the beta phase, you can only create Service CIDRs in the 34.118.224.0/20 reserved IP address range to avoid possible issues with overlapping IP address ranges.
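For illustration, enabling both APIs on an existing cluster might look like the following (the --enable-kubernetes-unstable-apis flag is an assumption; check the GKE documentation for the exact syntax):

    gcloud container clusters update my-cluster \
        --location us-central1 \
        --enable-kubernetes-unstable-apis=networking.k8s.io/v1beta1/ipaddresses,networking.k8s.io/v1beta1/servicecidrs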
Ray Operator on GKE is now generally available on 1.29 and later. Ray Operator is a GKE add-on that lets you manage and scale Ray applications. To learn more, see the Ray Operator documentation.
October 01, 2024
GKE now supports the Parallelstore CSI driver in allowlisted general availability (GA), which means that you can reach out to your Google support team to use the service under GA terms.
Parallelstore accelerates AI/ML training and excels at saturating individual compute clients, ensuring that expensive compute resources are used efficiently. In testing, the product demonstrated a 3.9x improvement in training time and 3.7x higher throughput compared to native ML framework data loaders, and it saturates a single client's NIC bandwidth at over 90%.
For details, see About the GKE Parallelstore CSI driver.
In GKE version 1.30.3-gke.1639000 and later and 1.31.0-gke.1058000 and later, GKE can handle GPU and TPU node disruptions by notifying you in advance of a shutdown and by gracefully terminating your workloads. This feature is generally available. For details, see Manage GKE node disruption for GPUs and TPUs.
September 11, 2024
For GPU node pools created in GKE Standard clusters running version 1.30.1-gke.115600 or later, GKE automatically installs the default NVIDIA GPU driver version corresponding to the GKE version if you don't specify the gpu-driver-version flag.
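For example, to pin a driver flavor explicitly rather than accept the default, pass gpu-driver-version inside the --accelerator flag (the machine and accelerator values are illustrative):

    gcloud container node-pools create gpu-pool \
        --cluster my-cluster \
        --location us-central1-a \
        --machine-type g2-standard-8 \
        --accelerator type=nvidia-l4,count=1,gpu-driver-version=latest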
August 27, 2024
Starting from version 1.30.3-gke.1451000, new and upgraded GKE clusters support the GKE Metrics Server updates where the addon-resizer runs in the cluster's control plane instead of on worker nodes.
August 21, 2024
GKE support for Hyperdisk ML as an attached persistent disk option is now generally available. Support is available for both Autopilot and Standard clusters running GKE versions 1.30.2-gke.1394000 and later.
August 20, 2024
The C4 machine family is generally available in the following versions:
- Standard clusters in version 1.29.2-gke.1521000 and later. To use this family in GKE Standard, use the --machine-type flag when creating a cluster or node pool.
- Autopilot clusters in 1.30.3-gke.1225000 and later. To use this family in GKE Autopilot, use the Performance compute class when scheduling your workloads.
- Cluster autoscaler and node auto-provisioning are supported in 1.30.3-gke.1225000 and later.
August 13, 2024
Custom compute classes are a new set of capabilities in GKE that provide an API for fine-grained control over fallback compute priorities, autoscaling configuration, obtainability, and node consolidation. Custom compute classes offer enhanced flexibility and control over your GKE compute infrastructure so that you can ensure optimal resource allocation for your workloads. You can use custom compute classes in GKE version 1.30.3-gke.1451000 and later. To learn more, see About custom compute classes.
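A sketch of what a compute class with fallback priorities could look like (the kind, API group, and field names below are assumptions inferred from the feature description; see About custom compute classes for the real schema):

    # compute-class.yaml (apply with: kubectl apply -f compute-class.yaml)
    apiVersion: cloud.google.com/v1   # API group is an assumption
    kind: ComputeClass
    metadata:
      name: batch-class
    spec:
      priorities:                  # tried in order until capacity is obtained
      - machineFamily: n2
        spot: true
      - machineFamily: n2d
        spot: true
      activeMigration:
        optimizeRulePriority: true   # consolidate back onto higher priorities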
August 06, 2024
You can now keep a GKE Standard cluster on a minor version for longer with the Extended release channel. Clusters running 1.27 or later can be enrolled in the Extended channel, and automatically receive security patches during the extended support period after the end of standard support. To learn more, see Get long-term support with the Extended channel.
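For example, enrolling an existing cluster (assuming the channel value is extended):

    gcloud container clusters update my-cluster \
        --location us-central1 \
        --release-channel extended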
August 02, 2024
The NVIDIA GPU Operator can now be used as an alternative to GKE's fully managed GPU stack on both Container-Optimized OS and Ubuntu node images. Choose this option to manage your GPU stack yourself if you want a consistent multi-cloud experience, already use the NVIDIA GPU Operator, or have software that relies on it.
August 01, 2024
You can now enable NCCL Fast Socket on your multi-GPU Autopilot workloads. NCCL Fast Socket is a transport layer plugin designed to improve NVIDIA Collective Communication Library (NCCL) performance on Google Cloud. To enable NCCL Fast Socket on GKE Autopilot, you must use a GKE Autopilot cluster with control plane version 1.30.2-gke.1023000 or later. For more information, see Improve workload efficiency using NCCL Fast Socket.
July 16, 2024
Compute flexible committed use discounts (CUDs), previously known as Compute Engine Flexible CUDs, have been expanded to include several GKE Autopilot and Cloud Run SKUs (see the GKE CUD documentation for details). The legacy GKE Autopilot CUD will be removed from sale on October 15, 2024. GKE Autopilot CUDs purchased before this date will continue to apply through their term.
July 08, 2024
Ray Operator on GKE is now generally available in the Rapid channel. Ray Operator is a GKE add-on that allows you to manage and scale Ray applications. To learn more, see the Ray Operator documentation.
July 03, 2024
You can now preload data or container images in new nodes on GKE, enabling faster workload deployment and autoscaling. This feature is Generally Available and production-ready, with support for Autopilot and Terraform. To learn more, see Use secondary boot disks to preload data or container images.
GKE Managed DCGM Metrics Package is now available in Preview for both GKE Standard and Autopilot clusters running version 1.30.1-gke.1204000 and later.
You can now configure Autopilot and Standard clusters to export a predefined list of DCGM metrics emitted by the GKE-managed DCGM exporter, including metrics for GPU performance, utilization, and I/O, in GPU node pools with GKE-managed NVIDIA drivers. These metrics are collected by Google Cloud Managed Service for Prometheus. You can view the curated DCGM metrics on the Observability tab of the Kubernetes Clusters page or in Cloud Monitoring.
For more information, see Collect and view DCGM metrics.
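For illustration, the package might be enabled through the cluster's monitoring components like so (the DCGM component name is an assumption; see Collect and view DCGM metrics for the exact values):

    gcloud container clusters update my-cluster \
        --location us-central1 \
        --monitoring SYSTEM,DCGM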
June 07, 2024
Fully managed cAdvisor/Kubelet metrics are now available on GKE clusters running version 1.29.3-gke.1093000 or later.
May 24, 2024
GKE now provides insights and recommendations to create a backup plan for unprotected clusters that have existed for more than 7 days. These insights and recommendations are currently available in us-central1-a. For details, see the Backup for GKE and Protect clusters with Backup for GKE documentation.
May 22, 2024
The C4 machine family is available in Public Preview for Standard clusters running GKE version 1.29.2-gke.1521000 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool. The following limitations apply:
- GKE versions prior to 1.29.2-gke.1521000 might encounter a volume device path mounting error, which can cause Pods to be stuck in a Pending state. If you encounter this issue, try deleting and re-creating the Pod to trigger re-processing of the volume mount.
- Confidential GKE nodes are not supported in Public Preview.
- Local SSD is not supported.
- Nested virtualization is not supported in Public Preview.
May 10, 2024
In new Standard clusters running GKE version 1.29 and later, GKE assigns IP addresses for GKE Services from a Google-managed range, 34.118.224.0/20, by default. With this feature, you don't need to specify your own IP address range for Services. For more information, see Subnet secondary IP address range for Services.
May 02, 2024
The new release of the GKE Gateway controller (2024-R1) is now generally available. With this release, the GKE Gateway controller provides the following new capabilities and fixes:
New capabilities:
- Gateway API CRDs v1.0.0
- Cloud Armor backend security policy support for Regional external Gateways
- Self-managed certificates with Certificate Manager on Regional internal & external Gateways
- Google-managed certificates with Certificate Manager on Regional internal & external Gateways [Preview]
Bug fixes:
- Fixed missing permissions to MCI service agent role for regional SSL policy
To learn more about our GKE Gateway controller capabilities, see the supported capabilities per GatewayClass.
Starting in GKE 1.30, the metric scheduler_pod_scheduling_duration_seconds in the control plane metrics package will no longer be available, as a result of its deprecation in upstream open source Kubernetes. The replacement metric scheduler_pod_scheduling_sli_duration_seconds will be exported as part of the control plane metrics package instead.
April 30, 2024
You can now configure access to private image registries that use private certificates using a containerd configuration file. For details, see Customize containerd configuration in GKE nodes.
In GKE 1.29.2-gke.1355000 and later, GPU workloads using the Accelerator compute class in GKE Autopilot support scheduling multiple GPU Pods on a single node. To schedule multiple GPU Pods on the same node, specify the gke-accelerator-count node selector with a value that's higher than the Pod GPU request. For details, see Deploy GPU workloads in GKE Autopilot.
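A minimal sketch of such a Pod, which requests one GPU on a node that keeps two attached (the accelerator type is illustrative):

    # gpu-pod.yaml (apply with: kubectl apply -f gpu-pod.yaml)
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-pod-a
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Accelerator
        cloud.google.com/gke-accelerator: nvidia-tesla-a100   # illustrative GPU type
        cloud.google.com/gke-accelerator-count: "2"           # node holds 2 GPUs
      containers:
      - name: cuda-app
        image: nvidia/cuda:12.3.1-base-ubuntu22.04
        command: ["sleep", "infinity"]
        resources:
          limits:
            nvidia.com/gpu: 1   # this Pod uses 1 of the node's 2 GPUs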
A Quick Start Solution and Reference Architecture are now available for developing and deploying Retrieval Augmented Generation (RAG) applications on GKE. RAG improves the quality of Large Language Model (LLM) responses for a specific application. For example, RAG can enable a customer service chatbot to access help center articles, a shopping assistant to tap into product catalogs and customer reviews, or a travel booking agent to access up-to-date flight and hotel information.
April 29, 2024
Dual-stack LoadBalancer Services are now generally available with GKE. You can now create a dual-stack GKE cluster and expose GKE Services using IPv4, IPv6, or a combination of both, depending on your ipFamilyPolicy and ipFamilies specs.
To learn more, see GKE LoadBalancer Service parameters.
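For example, a dual-stack LoadBalancer Service using those standard fields:

    # service.yaml (apply with: kubectl apply -f service.yaml)
    apiVersion: v1
    kind: Service
    metadata:
      name: web-dual-stack
    spec:
      type: LoadBalancer
      ipFamilyPolicy: RequireDualStack
      ipFamilies: ["IPv4", "IPv6"]
      selector:
        app: web        # assumes Pods labeled app=web
      ports:
      - port: 80
        targetPort: 8080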
Cloud DNS additive VPC scope is now available in Preview. You can now configure your GKE clusters to add GKE headless Service entries to your Cloud DNS private zone visible from your VPC networks, on top of using Cloud DNS (cluster scope) as your GKE DNS provider.
To learn more, see Cloud DNS scopes for GKE.
April 26, 2024
You can now use the node system configuration file in GKE to enable and use Linux huge pages in your Pods. For instructions, see Linux huge page configuration options.
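After huge pages are enabled on the node, a Pod consumes them through the standard Kubernetes hugepages-2Mi resource (requests must equal limits, and a memory request is also required); a minimal sketch:

    # hugepages-pod.yaml (apply with: kubectl apply -f hugepages-pod.yaml)
    apiVersion: v1
    kind: Pod
    metadata:
      name: hugepages-demo
    spec:
      containers:
      - name: app
        image: busybox      # placeholder image
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /hugepages-2Mi
          name: hugepage-2mi
        resources:
          requests:
            hugepages-2Mi: 128Mi
            memory: 128Mi
          limits:
            hugepages-2Mi: 128Mi
            memory: 128Mi
      volumes:
      - name: hugepage-2mi
        emptyDir:
          medium: HugePages-2Mi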
GKE Standard clusters now support nested virtualization. For details, including requirements and limitations, see Use nested VMs with GKE Standard clusters.
GKE Sandbox supports the use of NVIDIA GPUs (H100, A100, L4, and T4) in Public Preview in GKE version 1.29.2-gke.1108000 and later on both Standard and Autopilot clusters. GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes. For GPUs, while GKE Sandbox doesn't mitigate all NVIDIA driver vulnerabilities, it helps protect against Linux kernel vulnerabilities. For details, see GPUs in GKE Sandbox.
April 16, 2024
The Z3 machine family is generally available in Standard clusters running GKE 1.25 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool. The following limitations apply:
- Node auto-provisioning for Z3 is supported in 1.29 and later.
- GKE Autopilot is supported in 1.29 and later.
- Z3 machines are gracefully terminated during host maintenance.
April 12, 2024
GPUDirect-TCPX is now supported on GKE version 1.27 and later and requires the following patch versions:
- For GKE version 1.27, use GKE patch version 1.27.7-gke.1121000 or later.
- For GKE version 1.28, use GKE patch version 1.28.8-gke.1095000 or later.
- For GKE version 1.29, use GKE patch version 1.29.3-gke.1093000 or later.
To use GPUDirect-TCPX, see Maximize GPU network bandwidth with GPUDirect-TCPX and multi-networking.
April 10, 2024
The N4 machine family is generally available in GKE Standard clusters running GKE 1.29.3-gke.1121000 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool. The following limitations apply:
- Confidential GKE nodes are not supported.
- Local SSD is not supported.
- hyperdisk-balanced is the only supported boot disk type.
This note was updated on June 3, 2024. The GKE version required for N4 machine type support has been updated.
April 09, 2024
Cloud Tensor Processing Units (TPUs) are now available in GKE Autopilot clusters running version 1.29.2-gke.1521000 or later. To learn more, visit Deploy TPU workloads on GKE Autopilot.
April 05, 2024
NVIDIA Multi-Process Service (MPS) for GPUs is available in version 1.27.7-gke.1088000 and later, which allows multiple workloads to share a single NVIDIA GPU hardware accelerator with NVIDIA MPS.
April 03, 2024
The GKE compliance dashboard now offers compliance evaluation for CIS Kubernetes Benchmark 1.5, Pod Security Standards (PSS) Baseline, and PSS Restricted standards in Preview. To learn more, see About the compliance dashboard.
GKE threat detection is now available in Preview. Threats against the Kubernetes control plane impacting your GKE Enterprise clusters are now visible in the GKE security posture dashboard. To learn more, see About GKE threat detection.
April 02, 2024
Observability for Google Kubernetes Engine: Added a dashboard for Tensor Processing Unit (TPU) metrics on the Observability tab of both the cluster listing and cluster details pages for GKE clusters. The charts on this dashboard are populated with data only if the cluster has TPU nodes and GKE system metrics are enabled. For more information, see View observability metrics.
March 19, 2024
Cilium cluster-wide network policies are now generally available with the following GKE versions:
- 1.28.6-gke.1095000 or later
- 1.29.1-gke.1016000 or later
You can now control your GKE workloads' ingress and egress traffic cluster-wide, without being bound to a namespace for your network policies. This new capability is intended to streamline network policies for GKE platform administrators looking for a uniform way to apply policies across namespaces or application teams.
Cilium cluster-wide network policy is available in all GKE editions.
To learn more, read Control cluster-wide communication using network policies.
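For example, a cluster-wide policy that restricts egress for all Pods with a given label to in-cluster DNS (CiliumClusterwideNetworkPolicy is the upstream Cilium kind; the labels are placeholders):

    # ccnp.yaml (apply with: kubectl apply -f ccnp.yaml)
    apiVersion: cilium.io/v2
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: allow-dns-egress
    spec:
      endpointSelector:
        matchLabels:
          app: restricted      # placeholder workload label
      egress:
      - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP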
March 11, 2024
Private clusters created on GKE versions 1.29.0-gke.1384000 and later use Private Service Connect (PSC) for nodes to privately communicate with the control plane. There is no price increase for using GKE private clusters running on PSC.
For private clusters created with a different GKE version, the clusters continue to use VPC Peering for node-to-control plane communication.
Secret Manager add-on for GKE is now available. With the add-on, you can access the secrets stored in Secret Manager as volumes mounted in Kubernetes Pods. The add-on is supported on Standard and Autopilot clusters versioned 1.29 and later. For more info, see Use Secret Manager add-on with GKE.
Opportunistic bursting and lower Pod minimums are now available on newly created GKE Autopilot clusters at version 1.29.2-gke.1060000 or later, and on existing clusters created at 1.26 or later that have been fully upgraded (including all nodes) to 1.29.2-gke.1060000 or later. To learn more, see Configure Pod bursting on GKE.
March 07, 2024
You can now preload data or container images in new nodes to get fast workload deployment and auto scaling. This feature is available in Preview starting from GKE version 1.28.3-gke.1067000.
March 04, 2024
NVIDIA H100 (80 GB) GPUs are now available in GKE Autopilot mode in versions 1.28.6-gke.1369000 or later, and 1.29.1-gke.1575000 or later.
GPU workloads running in Autopilot mode can now be configured using the Accelerator Compute Class. This configuration supports resource reservations, Compute Engine committed use discounts, and a new pricing model in GKE versions 1.28.6-gke.1095000 and later, and 1.29.1-gke.1143000 and later.
February 28, 2024
The Performance Compute Class, designed for running whole-machine CPU workloads, is available in Autopilot mode from versions 1.28.6-gke.1369000 and 1.29.1-gke.1575000 and later.
February 26, 2024
GKE now supports Gemma (2B, 7B), Google's new state-of-the-art open models. To learn more, refer to the following guides:
- Serve Gemma on GKE with GPUs using Hugging Face TGI
- Serve Gemma on GKE with GPUs using vLLM
- Serve Gemma on GKE with GPUs using TensorRT-LLM
- Serve Gemma on GKE with TPUs using SaxML
Deployment to GKE is also supported via Vertex AI Model Garden as part of our Hugging Face, Vertex AI, and GKE integration.
February 21, 2024
The GKE Stateful HA Operator is now generally available starting in GKE versions 1.28.5-gke.1113000 and later, or 1.29.0-gke.1272000 and later. The operator is enabled by default in new Autopilot clusters and is opt-in for new Standard clusters.
February 20, 2024
You can now use the GKE API to apply Resource Manager tags to your GKE nodes. GKE attaches these tags to the underlying Compute Engine VMs. You can use these tags to selectively enforce Cloud Firewall network firewall policies. This feature is generally available in GKE version 1.28 and later.
Kubernetes Engine best practice observability packages, including control plane logs, control plane metrics, and kube-state metrics, are now enabled by default for new managed GKE Enterprise clusters to ensure that the necessary data is available for troubleshooting or optimization. Control plane metrics and kube-state metrics are included in GKE Enterprise Edition at no additional charge.
GKE now delivers insights and recommendations if your cluster's Certificate Authority (CA) is expired or will expire in the next 180 days. To learn more, see Find clusters with expiring or expired credentials.
February 02, 2024
FQDN network policies are now generally available with the following GKE versions:
- 1.26.4-gke.500 and later.
- 1.27.1-gke.400 and later.
- 1.28 and later.
You can further control your GKE workloads' egress traffic to a public or private service or endpoint by using a network policy matching a fully-qualified domain name or a regular expression.
FQDN Network Policy is only available and supported with GKE Enterprise.
To learn more, read Control Pod egress traffic using FQDN network policies.
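A hypothetical sketch of such a policy (the FQDNNetworkPolicy kind, API version, and field names are assumptions; see the linked page for the actual schema):

    # fqdn-policy.yaml (apply with: kubectl apply -f fqdn-policy.yaml)
    apiVersion: networking.gke.io/v1alpha3   # API version is an assumption
    kind: FQDNNetworkPolicy
    metadata:
      name: allow-googleapis
    spec:
      podSelector:
        matchLabels:
          app: api-client        # placeholder label
      egress:
      - matches:
        - pattern: "*.googleapis.com"   # FQDN wildcard match
        ports:
        - protocol: TCP
          port: 443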
February 01, 2024
You can now encrypt Pod-to-Pod traffic between nodes in the same cluster or in a multi-cluster environment natively with GKE. Inter-node transparent encryption is now generally available, only with GKE Enterprise, for GKE clusters in the following versions:
- 1.26.9-gke.1024000 and later.
- 1.27.6-gke.1506000 and later.
- 1.28.2-gke.1098000 and later.
- 1.29 and later.
To learn more, see Encrypt your data in-transit in GKE with user-managed encryption keys.
January 31, 2024
The africa-south1 region in Johannesburg, South Africa is now available.
December 19, 2023
You can now modify the vm.max_map_count Linux kernel attribute for nodes in a GKE Standard cluster node pool using the node system configuration. To learn more, see Sysctl configuration options.
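For example, a node system configuration file could set the attribute like this (a sketch; see Sysctl configuration options for the supported keys and value ranges):

    # node-config.yaml; pass it when creating a node pool with:
    #   gcloud container node-pools create POOL --cluster CLUSTER \
    #       --system-config-from-file=node-config.yaml
    linuxConfig:
      sysctl:
        vm.max_map_count: "262144"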
December 18, 2023
The GKE NEG controller now supports IPv6 endpoints with GKE version 1.28.4-gke.1083000 and later.
With this new capability, when you create a dual-stack Service in a dual-stack GKE cluster, any NEGs associated with the Service will contain both IPv4 and IPv6 endpoints. Existing dual-stack Services that use NEGs (for example, Ingress or Services using standalone NEGs) will be migrated from IPv4-only endpoints to IPv4 + IPv6 endpoints.
The migration completes in approximately one hour. If a NEG contains a single endpoint, you might experience brief downtime of approximately 1-2 minutes while that endpoint is migrated.
Note that having IPv6 endpoints in NEGs doesn't necessarily mean that the load balancer uses IPv6 for communication. How the load balancer communicates with your Pods depends on how the BackendService is configured, such as fields like IpAddressSelectionPolicy.
All newly created Google Kubernetes Engine (GKE) Autopilot clusters starting with 1.27.4-gke.900 will automatically collect and send metrics from the kube-state-metrics package to Managed Service for Prometheus.
December 15, 2023
The Observability tab in the cluster details page for each cluster and in the GKE cluster list page now shows GPU metrics if the cluster has GPU nodes. For more information, see View observability metrics.
November 29, 2023
Starting in GKE version 1.27.6-gke.1248000, clusters in Autopilot mode detect nodes that can't fit all DaemonSets and, over time, migrate workloads to larger nodes that can fit all DaemonSets. For more information, see Best practices for DaemonSets on Autopilot.
Starting in GKE 1.27.7, you can configure your workloads to use TPU reservations with node auto-provisioning.
November 17, 2023
You can now run workloads on L4 GPUs in Autopilot clusters that use GKE version 1.28.3-gke.1203000 and later. For instructions, see Deploy GPU workloads in Autopilot.
November 15, 2023
Dynamic Workload Scheduler support on GKE through the Provisioning Request API launched in Preview in version 1.28. Use the Dynamic Workload Scheduler to get large atomic sets of available GPU models in GKE Standard clusters. For more information, see Deploy GPUs for batch workloads with ProvisioningRequest.
November 10, 2023
The Observability tab for a GKE Deployment now shows application performance metrics if the metrics are available. The supported metric sources include Istio, GKE Ingress, NGINX Ingress, and gRPC and HTTP metrics collected by using Google Cloud Managed Service for Prometheus. For more information, see Use application performance metrics.
November 09, 2023
GKE Infrastructure Dashboards and Metrics Packages are now generally available for both GKE Autopilot and Standard clusters with control plane version 1.27.2-gke.1200 and later.
You can now configure your Autopilot or Standard clusters to export a predefined list of metrics emitted by GKE-managed kube-state-metrics (KSM) for workload state and persistent storage. The component runs in the GKE system namespace gke-managed-cim, collects the metrics using Google Cloud Managed Service for Prometheus, and sends them to Cloud Monitoring. You can view the metrics in the new Persistent and Workloads State dashboards in the Observability tab.
November 08, 2023
New inference-focused Cloud Tensor Processing Unit (TPU) v5e machine types are available in GKE. These single-host TPU VMs are designed for inference workloads and contain one, four, or eight TPU v5e chips. These three new TPU v5e machine types (ct5l-hightpu-1t, ct5l-hightpu-4t, and ct5l-hightpu-8t) are currently available in the us-central1-a and europe-west4-b zones.
Cloud Tensor Processing Unit (TPU) v5e is generally available in clusters running GKE version 1.27.2-gke.2100 and later.
TPU v5e is purpose-built to bring the cost-efficiency and performance required for medium- and large-scale training and inference. TPU v5e delivers up to 2x higher training performance per dollar and up to 2.5x inference performance per dollar for LLMs and gen AI models compared to Cloud TPU v4. At less than half the cost of TPU v4, TPU v5e makes it possible for more organizations to train and deploy larger, more complex AI models.
October 31, 2023
GKE multi-cluster Gateway is now generally available in GKE versions 1.24 and later for GKE Standard clusters, and versions 1.26 and later for GKE Autopilot clusters. Use the Gateway API to express the intent of your inbound HTTP(S) traffic into your fleet of GKE clusters. The multi-cluster Gateway controller deploys and manages the Application Load Balancers that forward traffic to your applications. To learn more, see Enable multi-cluster Gateways. For the list of supported Cloud Load Balancers and their features, refer to GatewayClass capabilities.
October 20, 2023
You can now use the GKE API to apply Resource Manager tags to your GKE resources. GKE attaches these tags to the underlying Compute Engine VMs. You can use these tags to selectively enforce Cloud Firewall network firewall policies. This feature is available in Public Preview in GKE version 1.28 and later.
October 19, 2023
Compute resources can now be reserved in advance for use with GKE. Create a future reservation to request assurance of important or difficult-to-obtain capacity in advance. There are no additional costs for creating future reservation requests. You only start to pay when Compute Engine provisions the reserved resources, and you're charged at the same cost as on-demand reservations.
October 16, 2023
Filestore Enterprise now supports backups on GKE, allowing you to make reliable copies of your data to be stored for later use. To trigger backups on Filestore Enterprise, use Kubernetes volume snapshots. Backups are currently not supported for Filestore Enterprise instances with multishares enabled.
October 13, 2023
Starting in GKE 1.28.1-gke.1066000, two new TPU usage metrics are available: TensorCore utilization and Memory Bandwidth utilization.
October 09, 2023
If you are using a third-generation machine series (for example, C3), GKE configures Local SSD volumes as the local ephemeral storage by default. You no longer need to specify the --ephemeral-storage-local-ssd flag when provisioning clusters or node pools. When you configure Local SSD volumes as raw block storage with the --local-nvme-ssd-block flag, specifying the count value is now optional.
October 02, 2023
GKE now delivers insights and recommendations if users have installed webhooks that intercept system resources or webhooks that have no available endpoints. To learn more, see Ensure control plane stability when using webhooks.
September 21, 2023
The Observability dashboards on the GKE Clusters List, Cluster Details, and Workload List pages are now customizable. Additionally, the Cluster Details dashboards can be customized across the entire project, or per-cluster for specific use cases.
September 19, 2023
The me-central2 region in Dammam, Saudi Arabia is now available.
September 12, 2023
You can now use node auto-provisioning for TPU slices. With this feature, Standard clusters with GKE version 1.28 and later provision TPU node pools and multi-host TPU accelerators automatically to ensure the capacity required to schedule AI/ML workloads. To learn more, see Configuring TPU node auto-provisioning.
August 30, 2023
GKE now supports the ability to create nodes and workloads with multiple network interfaces. You can create new clusters with version 1.27 and later with multi-networking enabled. The additional network interfaces on the Pods can be regular interfaces or high-performance interfaces where the network interface is directly attached to the Pod. For more information, see Set up multi-network support for Pods.
Your clusters can now perform operations, such as node auto-provisioning or version upgrades, on multiple node pools in parallel. You no longer have to wait for an operation to complete before you initiate another operation. This feature is enabled for all GKE versions. This change provides you with benefits like the following:
- More efficient scaling, which results in improved savings and faster workload deployment
- Faster, less disruptive node pool upgrades
- Fewer "operation already in progress" messages that could delay subsequent planned operations
- More reliable rollback behavior to fix upgrade-related disruptions in production
- Automatic control plane resize operations won't block other operations on the cluster
The Google Cloud Platform Terraform provider has also been updated to take advantage of this change.
August 29, 2023
You can now create Cloud Tensor Processing Unit (TPU) nodes in GKE to run AI workloads, from model training to inference. GKE manages your cluster by automating TPU resource provisioning, scaling, scheduling, repairing, and upgrading. GKE provides TPU infrastructure metrics in Cloud Monitoring, TPU logs, and error reports for better visibility and monitoring of TPU node pools in GKE clusters. TPUs are available with GKE Standard clusters. GKE supports TPU v4 in version 1.26.1-gke.1500 and later, and supports TPU v5e in version 1.27.2-gke.1500 and later. To learn more, see About TPUs in GKE.
You can now sequence the rollout of cluster upgrades across fleets or across scopes. To learn more, see About cluster upgrades with rollout sequencing.
August 25, 2023
GKE now delivers insights and recommendations to ensure your workloads are ready for disruption using features such as Pod Disruption Budgets. To learn more, see Ensure stateful workloads are disruption-ready.
August 22, 2023
The europe-west10 region in Berlin, Germany is now available.
August 17, 2023
You can now easily identify clusters that use deprecated Kubernetes APIs removed in versions 1.25, 1.26, and 1.27. Kubernetes deprecation insights are now available for these versions.
August 16, 2023
GKE Infrastructure Dashboards and Metrics Packages are now available for both GKE Autopilot and Standard clusters with control plane version 1.27.2-gke.1200 and later. You can now configure Autopilot or Standard clusters to export a predefined list of metrics emitted by GKE-managed kube-state-metrics (KSM) for workload state and persistent storage. These metrics are collected by Google Cloud Managed Service for Prometheus and are sent to Cloud Monitoring. You can also view the new dashboards (Persistent and Workloads State) that render those metrics in the Observability tab. For more information, see View observability metrics.
You can now troubleshoot issues with CPU limit utilization and Memory limit utilization of containers running in GKE by using the new "interactive playbook" dashboards in Cloud Monitoring.
August 09, 2023
The Filestore CSI driver now supports smaller share sizes (10Gi) with Filestore multishares for GKE on enterprise-tier instances, starting in version 1.27.
August 02, 2023
You can now run workloads on A100 80GB GPUs in Autopilot clusters that use GKE version 1.27 and later.
July 25, 2023
Kubernetes control plane logs and Kubernetes control plane metrics are now available for GKE Autopilot clusters with control plane versions 1.22.0 and later and 1.22.13 and later, respectively. You can now configure Autopilot clusters to export logs and certain metrics emitted by the Kubernetes API server, scheduler, and controller manager to Cloud Logging and Cloud Monitoring.
July 24, 2023
GKE Autopilot supports extended duration Pods in versions 1.27 or later with the cluster-autoscaler.kubernetes.io/safe-to-evict=false annotation. To learn more, see how to extend the run time of Autopilot Pods.
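For example, applying the annotation from this note to a Pod spec:

    # long-job.yaml (apply with: kubectl apply -f long-job.yaml)
    apiVersion: v1
    kind: Pod
    metadata:
      name: long-running-job
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"   # extends run time
    spec:
      containers:
      - name: worker
        image: busybox      # placeholder image
        command: ["sleep", "86400"]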
July 13, 2023
The managed Cloud Storage FUSE CSI driver for GKE is now GA in versions 1.26.5 and later. You can use this driver to consume Cloud Storage buckets for GKE workloads.
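A minimal sketch of a Pod mounting a bucket through the driver (the sidecar annotation, driver name, and bucket are assumptions or placeholders; see the driver documentation for exact values):

    # gcsfuse-pod.yaml (apply with: kubectl apply -f gcsfuse-pod.yaml)
    apiVersion: v1
    kind: Pod
    metadata:
      name: gcsfuse-demo
      annotations:
        gke-gcsfuse/volumes: "true"    # assumed sidecar-injection annotation
    spec:
      serviceAccountName: gcs-reader   # assumes a Workload Identity binding
      containers:
      - name: reader
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: gcs-data
          mountPath: /data
      volumes:
      - name: gcs-data
        csi:
          driver: gcsfuse.csi.storage.gke.io   # assumed driver name
          volumeAttributes:
            bucketName: my-bucket            # placeholder bucket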
July 12, 2023
GKE Dataplane V2 observability is now available in Public Preview starting in GKE versions 1.26.4-gke.500 or later, or 1.27.1-gke.400 or later. You can now enable Dataplane V2 metrics and observability tools on your cluster. Dataplane V2 metrics are included in new Autopilot clusters and are opt-in for new Standard clusters. You can opt in to enable Dataplane V2 observability tools for both Autopilot and Standard clusters. Existing clusters can also be updated to enable metrics and observability tooling.
For more information, check out GKE Dataplane V2 observability.
In GKE version 1.24 and later, new beta APIs are, by default, disabled in new clusters. Starting in version 1.27, which is the first new minor version since 1.24 where new beta APIs are introduced, you can enable new APIs on cluster creation or for an existing cluster.
For more information, see how to Use Kubernetes beta APIs with GKE clusters.
July 11, 2023
You can now troubleshoot common GKE issues by using the new "interactive playbook" dashboards in Cloud Monitoring: unschedulable pods and crashlooping containers. You can also access the interactive playbooks from GKE UI insights and set alerts that notify you when those issues occur.
For information about using these dashboards, see the GKE troubleshooting documentation for unschedulable pods and crashlooping.
Starting in GKE version 1.27, the cluster autoscaler always considers Compute Engine reservations when making scale-up decisions. Node pools with matching unused reservations are prioritized when choosing the node pool to scale up, even when the node pool is not the most efficient one. Additionally, unused reservations are always prioritized when balancing multi-zonal scale-ups.
For more information, see how to use cluster autoscaler.
July 10, 2023
The new release of the GKE Gateway controller (2023-R2) is now generally available. With this release, the GKE Gateway controller provides the following new capabilities:
- New GatewayClasses supporting the regional external Application Load Balancer
- Identity-aware Proxy (IAP) Integration
- Custom request and response headers
- URL Rewrites and Path Redirects
To learn more, see the supported capabilities per GatewayClass.
June 26, 2023
Managed Service for Prometheus is enabled by default in new GKE Standard clusters running version 1.27 and later. Existing clusters that upgrade to 1.27 will not automatically enable this feature. For more information, see Enable managed collection: GKE.
June 23, 2023
Automatic GPU driver installation is available in version 1.27.2-gke.1200 and later, which enables you to install NVIDIA GPU drivers on nodes without manually applying a DaemonSet.
For instructions, see Running GPUs.
June 22, 2023
GKE Autopilot now supports deploying your own service mesh. Many service meshes, such as Istio or Linkerd, require the CAP_NET_ADMIN Linux capability to function, which is disabled on Autopilot clusters by default to reduce the attack surface. You can now optionally enable NET_ADMIN on your Autopilot clusters if you need this capability for your service mesh or other opt-in use cases. See Autopilot security for more information about how to enable NET_ADMIN.
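For illustration, enabling the capability on an existing Autopilot cluster might look like this (the --workload-policies flag and the allow-net-admin value are assumptions; see Autopilot security for the supported method):

    gcloud container clusters update my-autopilot-cluster \
        --location us-central1 \
        --workload-policies allow-net-admin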
June 21, 2023
GKE support for Hyperdisk Throughput and Hyperdisk Extreme as an attached persistent disk option is now generally available. Support is available for both Autopilot and Standard clusters running GKE versions 1.26 and later.
June 14, 2023
Clusters with low or no utilization can be identified by Idle Cluster insights.
June 12, 2023
Dual-stack LoadBalancer Services are now available in Preview. Dual-stack LoadBalancer Services are supported on both GKE Standard and Autopilot dual-stack clusters. To learn more, see Single-stack and dual-stack Services.
You can now use deprecation insights to identify clusters on versions 1.21 to 1.24 that use Pod Security Policy, which is unsupported on GKE version 1.25 and later.
June 09, 2023
In addition to the existing egress network policy GKE already supports, you can now control the egress traffic of your Pods by using a network policy that matches a fully-qualified domain name or a regular expression. FQDN Network Policy is now available in Preview for clusters in version 1.26.4-gke.500 and later, and 1.27.1-gke.400 and later. For more information, see Control Pod egress traffic using FQDN network policies.
June 01, 2023
Users running Agones on GKE now receive insights and recommendations if the Agones controller isn't installed on dedicated nodes.
May 26, 2023
The Observability tab for each of your GKE clusters now includes metrics for ephemeral storage. For more information, see View observability metrics.
May 22, 2023
The C3 machine family is generally available for GKE Standard clusters running version 1.22 and later. You can select this family by using the --machine-type flag when creating a cluster or node pool.
The following features are not supported for this machine family:
- Node auto-provisioning.
- Confidential GKE nodes.
- Local SSD.
- Standard persistent disks (pd-standard).
For more information, refer to the C3 machine series documentation.
May 12, 2023
The g2-standard machine family with NVIDIA L4 GPUs is generally available for node pools in clusters running GKE version 1.22 and later. To select the machine family, use the --machine-type flag in your create command.
May 09, 2023
Now in GA for both GKE Standard and Autopilot clusters with GKE version 1.26 and later, you can add more IPv4 secondary Pod ranges to a new or existing cluster with the --additional-pod-ipv4-ranges flag. To learn more, see Adding Pod IP addresses.
May 02, 2023
The managed Cloud Storage FUSE CSI driver for GKE is now available in Preview in GKE versions 1.26.3 and later. You can use this driver to consume Cloud Storage buckets for GKE workloads.