Release notes

This page documents production updates to Google Kubernetes Engine (GKE). You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.

Release notes for the Rapid release channel and the early-access program for GKE are included in this page and also in the Rapid channel release notes.

Other resources

For more detailed information about security-related known issues, see the security bulletin page.

To view release notes for versions prior to 2020, see the Release notes archive.

You can see the latest product updates for all of Google Cloud on the Google Cloud release notes page.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/kubernetes-engine-release-notes.xml

July 28, 2020 (R25)

Change default machine type to E2

GKE is changing the default machine type for new clusters and node pools from n1-standard-1 to e2-medium. This change affects new node pools created with versions 1.17.6 and later. If you do not specify a machine type when creating a cluster or node pool at version 1.17.6 or later, the newly provisioned clusters and node pools default to e2-medium VMs. This change does not affect existing node pools that are auto-upgraded or manually upgraded to version 1.17.6 or later.

What do I need to do?

Use the interface of your choice to explicitly set the machine type for newly provisioned machines to anything other than the new e2-medium default.
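For example, with the gcloud CLI (a sketch; the cluster, node pool, and zone names are hypothetical):

```shell
# Pin the machine type explicitly so the new e2-medium default
# does not apply to newly provisioned nodes.
gcloud container clusters create my-cluster \
    --machine-type=n1-standard-1 \
    --zone=us-central1-a

# The same flag works when adding a node pool to an existing cluster.
gcloud container node-pools create my-pool \
    --cluster=my-cluster \
    --machine-type=n1-standard-1 \
    --zone=us-central1-a
```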

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

Version 1.15.12-gke.13 is now available.

Version 1.16.13-gke.1 is now available.

This version includes node image upgrades for Ubuntu (ubuntu-gke-1804-1-16-v20200610).

Version 1.18.6-gke.500 is now available.

This version is available in preview. Before creating GKE v1.18 clusters, you must review the known issues and urgent upgrade notes.

Stable channel

GKE continues to upgrade control planes in clusters on the Stable channel to 1.15.12-gke.2. Upgrades will proceed gradually over several GKE releases.
GKE begins to upgrade nodes in clusters on the Stable channel to 1.15.12-gke.2. Upgrades will proceed gradually over several GKE releases.

Regular channel

Version 1.16.13-gke.1 is now available in the Regular channel.

Auto-upgrading nodes in the Regular channel automatically upgrade to version 1.16.11-gke.5 with this release.

Rapid channel

Version 1.17.9-gke.600 is now available in the Rapid channel.

Version 1.17.8-gke.17 is now available in the Rapid channel.

This version is now the default.

Auto-upgrading nodes in the Rapid channel automatically upgrade to version 1.17.8-gke.17 with this release.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.17.7-gke.15

Changes

GKE nodes now have the label cloud.google.com/machine-family applied. The value of this label is the Compute Engine instance family.
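You can use this label to filter or inspect nodes by instance family, for example:

```shell
# List only nodes backed by E2-family Compute Engine instances.
kubectl get nodes -l cloud.google.com/machine-family=e2

# Show the label value as an extra column for every node.
kubectl get nodes -L cloud.google.com/machine-family
```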

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • In the next release (R26), GKE will begin to upgrade control planes in clusters not enrolled in a channel to 1.15. Upgrades will proceed gradually over several GKE releases.

July 22, 2020 (R24)

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.15.12-gke.2 (previously 1.14.10-gke.36).

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.14.0 to 1.14.10-gke.41 1.14.10-gke.42
1.15.0 to 1.15.12-gke.1 1.15.12-gke.2
1.16.0 to 1.16.9-gke.5 1.16.9-gke.6

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

Version 1.18.4-gke.1201 is now available.

This version is available in preview. Before creating GKE v1.18 clusters, you must review the known issues and urgent upgrade notes.

Stable channel

Version 1.15.12-gke.9 is now available in the Stable channel.

Auto-upgrading nodes in the Stable channel automatically upgrade to version 1.15.12-gke.2 with this release.

Regular channel

Version 1.16.12-gke.3 is now available in the Regular channel.

Version 1.16.11-gke.5 is now available in the Regular channel.

This version is now the default.

Rapid channel

Version 1.17.8-gke.17 is now available in the Rapid channel.

This version includes node image upgrades for Ubuntu (ubuntu-gke-1804-1-17-v20200610) and Windows Server (windows-server-1909-dc-core-uefi-gke-v1592940889 and windows-server-2019-dc-core-uefi-gke-v1592939281).

Version 1.17.7-gke.15 is now available in the Rapid channel.

This version is now the default.

Auto-upgrading nodes in the Rapid channel automatically upgrade to version 1.17.7-gke.15 with this release.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.16.8-gke.15
  • 1.16.9-gke.2

  • 1.15.9-gke.24

  • 1.15.11-gke.15

  • 1.15.11-gke.17

  • 1.14.10-gke.36

  • 1.14.10-gke.37

  • 1.14.10-gke.40

  • 1.14.10-gke.41

Security bulletin

A privilege escalation vulnerability was recently discovered in Kubernetes. This vulnerability allows an attacker that has already compromised a node to execute a command in any Pod in the cluster. For more information, see the GCP-2020-009 security bulletin.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • In the next release (R25), GKE will begin to upgrade nodes in clusters on the Stable channel to 1.15.12-gke.2. Upgrades will proceed gradually over several GKE releases.

  • In the next release (R25), GKE will begin to upgrade nodes in clusters on the Regular channel to 1.16.11-gke.5. Upgrades will proceed gradually over several GKE releases.

  • In the next release (R25), GKE will begin to upgrade nodes in 1.16 clusters not on a release channel to 1.16.9-gke.5. Upgrades will proceed gradually over several GKE releases.

  • GKE version 1.14 will be deprecated in R26.

Changes

Starting September 1, 2020, we will automatically delete Google Kubernetes Engine (GKE) clusters that have ERROR status.

What do I need to know?

GKE clusters might end up with ERROR status (a red exclamation mark on the cluster status page) in rare cases when a cluster creation or deletion operation encounters an unexpected error. Previously, such clusters remained in your account and could have been partially usable. ERROR status clusters are excluded from the GKE cluster management fee.

Starting September 1, 2020, we will begin blocking access to such ERROR status clusters and deleting them automatically.

What do I need to do?

If you are relying on any of the clusters with ERROR status in your projects, stop using them by September 1, 2020.

July 17, 2020

New features

Up to 50 TCP/UDP ports are supported per internal TCP/UDP load balancer IP when deploying through GKE Services with shared IP addresses. This also permits multi-protocol TCP and UDP support for the same Service IP. Shared IP is now available in Beta for all existing GKE versions.

SSL Policies which allow policy-enforced TLS and cipher settings are available in Beta for external Ingress and multi-cluster Ingress. Custom health checks, which allow users to declaratively customize parameters of the load balancer health check, are also now available for external, internal, and multi-cluster Ingress. For feature support status and version compatibility see Ingress Features.

The BackendConfig CRD is now GA in GKE 1.16-gke.3+ clusters, which promotes most BackendConfig features (IAP, timeouts, affinity, user-defined request headers, and so on) to GA across internal, external, and multi-cluster Ingress. See Ingress Features for details on explicit version support.

Container-native Ingress using Network Endpoint Groups (NEGs) is now default (with some exceptions) for new Services deployed in GKE 1.17.6-gke.7+ clusters. The cloud.google.com/neg: '{"ingress": true}' annotation will be automatically applied to any Services deployed in these clusters without any explicit action from users to enable container-native Ingress.
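To confirm whether a given Service has received the annotation, or to apply it yourself on an older version, one approach (the Service name here is hypothetical):

```shell
# Check whether the NEG annotation was applied to a Service.
kubectl get service my-service -o yaml | grep 'cloud.google.com/neg'

# On older versions, the annotation can be applied manually to enable
# container-native load balancing for a Service.
kubectl annotate service my-service \
    cloud.google.com/neg='{"ingress": true}'
```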

Customer Managed Encryption Keys (CMEK) are now generally available for GKE. CMEK for GKE lets you secure your node boot disks as well as attached persistent storage by encrypting the data encryption keys that encrypt your data. To learn more, see Using customer-managed encryption keys.

Changes

The Kubernetes Engine Monitoring feature has been renamed to Cloud Operations for GKE in the Google Cloud Console and documentation. This is a rename only; no functionality has changed. Enabling Cloud Operations for GKE continues to collect logs and metrics for your cluster and workloads as before.

Known issues

If you have node pools with kubernetes.io or k8s.io labels and want to upgrade to 1.16, you must remove the labels before upgrading.

For more information on this change, see the Kubernetes Node Restriction enhancement.
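One way to surface such labels before upgrading (a sketch; note that built-in labels such as kubernetes.io/hostname are managed by Kubernetes itself and are not the issue here, so look for custom labels under the reserved prefixes):

```shell
# Dump each node's name and labels, then filter for the reserved prefixes.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.labels}{"\n"}{end}' \
  | grep -E '(kubernetes|k8s)\.io'
```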

July 13, 2020 (R23)

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

Version 1.14.10-gke.46 is now available.

Version 1.15.12-gke.9 is now available.

Version 1.16.11-gke.5 is now available.

Stable channel

Version 1.14.10-gke.46 is now available in the Stable channel.

Version 1.15.12-gke.2 is now available in the Stable channel.

Regular channel

Version 1.16.11-gke.5 is now available in the Regular channel.

Auto-upgrading nodes in the Regular channel automatically upgrade to version 1.16.9-gke.6 with this release.

Rapid channel

Version 1.17.7-gke.15 is now available in the Rapid channel.

Fixed issues

A bug in gVisor has been fixed. Default gVisor node labels are now applied even when user-specified labels are present.

Other updates

Beginning with this release, GKE releases also include a release number to reference changes. This release is R23 for 2020.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • In the next release (R24), GKE will begin to upgrade control planes in clusters on the Stable channel to 1.15.12-gke.2. Upgrades will proceed gradually over several GKE releases.

  • In an upcoming release, GKE will begin to upgrade nodes in clusters on the Stable channel to 1.15.12-gke.2. Upgrades will proceed gradually over several GKE releases.

July 2, 2020

New features

NodeLocal DNSCache is now generally available.

GKE Node System Configuration is now beta. With this feature you can specify custom configurations for Kubelet and Kernel settings on your node pools.

Starting with GKE 1.17.6, Vertical Pod Autoscaler recommendations are more fine-grained, starting from 1 mCPU and 1 MiB.

June 29, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.15.x 1.15.11-gke.15

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.45 is now available.

1.15.12-gke.6 is now available.

1.16.10-gke.8 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

1.16.9-gke.6 is now available.

Rapid channel

1.17.6-gke.11 is now available.

Node image changes

GKE 1.14

The COS image for GKE 1.14.10-gke.45 clusters is cos-73-11647-534-0.

GKE 1.15

The COS image for GKE 1.15.12-gke.6 clusters is cos-77-12371-251-0.

June 24, 2020

Known issue

There is a known issue that may cause multiple Pods on the same node to be allocated the same IPv4 address, leading to possible service disruption. We will automatically upgrade your cluster master to the next available patch version, which includes a fix for the issue.

What do I need to know?

  • Ensure that your clusters are subscribed to a release channel or have node auto-upgrade enabled. If so, your clusters will be upgraded automatically as described below.
  • If you are experiencing any issues or do not want to use auto-upgrade, you can manually initiate an upgrade at your earliest convenience.

What do I need to do?

If you are experiencing issues and wish to update proactively:

  • Follow the steps in the Manually upgrading a cluster page to upgrade the cluster master.
  • Upgrade your node pool by applying the latest patch available for your node version.
  • Consider using surge upgrades for your node pool upgrade. A surge upgrade lets you set the number of additional nodes created temporarily during the upgrade process, which reduces the disruption to running workloads.
  • Use the following table to determine which patch version is applicable for your cluster(s):
Channel    | Action required                                    | Upgrade target                               | Date available
No channel | Upgrade to the recommended patch version available | 1.15: Node pool v1.15.12-gke.3 or higher     | June 23, 2020
No channel | Upgrade to the recommended patch version available | 1.16: Node pool v1.16.9-gke.6 or higher      | June 30, 2020
Rapid      | Patch will be applied automatically                | Master and node pool v1.17.6-gke.4 or higher | June 16, 2020
Regular    | Patch will be applied automatically                | Node pool v1.16.9-gke.6 or higher            | June 30, 2020
Stable     | No action is required                              | Patch is not required                        | N/A
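If you choose to upgrade manually, the steps can be sketched with the gcloud CLI (cluster, node pool, and zone names hypothetical; pick the patch version from the table above):

```shell
# Upgrade the cluster master to a specific patch version.
gcloud container clusters upgrade my-cluster \
    --master --cluster-version=1.15.12-gke.3 \
    --zone=us-central1-a

# Then upgrade a node pool to match the master's version.
gcloud container clusters upgrade my-cluster \
    --node-pool=my-pool \
    --zone=us-central1-a
```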

June 23, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.15.x 1.15.11-gke.15

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.42 is now available.

1.15.12-gke.3 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.17.6-gke.7 is now available.

Node image changes

GKE 1.14

The COS image for GKE 1.14.10-gke.43 clusters is cos-73-11647-459-0.

June 15, 2020

New features

Node auto-repair is now enabled by default by the Google Kubernetes Engine API for new node pools.

June 8, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.41 is now available.

1.15.11-gke.17 is now available.

1.15.12-gke.2 is now available.

1.16.9-gke.6 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.17.6-gke.4 is now available.

Versions no longer available

  • 1.15.11-gke.3
  • 1.15.11-gke.5
  • 1.15.11-gke.9
  • 1.15.11-gke.11
  • 1.15.11-gke.12

Node image changes

GKE 1.15

The COS image for GKE 1.15.12-gke.2 clusters and up is now cos-77-12371-227-0.

The Ubuntu image for GKE 1.15.11-gke.17 clusters and up is ubuntu-gke-1804-1-15-v20200330.

GKE release channels

The COS image for GKE clusters in the Rapid release channel is now cos-81-12871-119-0.

New features

The region asia-southeast2 in Jakarta is now available.

June 2, 2020

Known issue

To better represent the resources actually available on e2 burstable node types, GKE is reducing the allocatable CPU resources available for scheduling user workloads (known as the node allocatable resources) on e2-micro, e2-small, and e2-medium machine types.

What do I need to know?

Today, e2-micro, e2-small, and e2-medium machine types have 1930 mCPU of allocatable resources per node for Kubernetes to schedule Pods on; after this change, they will have 940 mCPU. Kubernetes uses the node allocatable resources during scheduling to decide how many Pods it can place on a node. If your workloads currently request more CPU resources than will be available after upgrading, they may become unschedulable after the upgrade.

We are making this change to more accurately represent the resources available on these machine types. These machine types can temporarily burst to 2 vCPUs, but this is not sustained. The underlying compute capabilities and resources are not changing; the machines retain the ability to temporarily burst to 2 vCPUs. This change only affects how many resources the Kubernetes scheduler considers when allocating Pods to nodes.

When your cluster is upgraded to 1.14.10-gke.42, 1.15.11-gke.18, 1.16.8-gke.17, or 1.17.5-gke.5 (whether you perform the upgrade manually or it happens automatically), your workloads may become unschedulable if there are not enough allocatable resources in the cluster.

What do I need to do?

Prior to upgrading your nodes to version 1.16.8-gke.17 or 1.17.5-gke.5 or later:

Take a moment to review your Pod resource requests. To see the allocated resources on your node, you can open Kubernetes Engine in the Google Cloud Console and select your cluster. On the Nodes tab for your cluster, the column CPU requested shows the total CPU requests on the node.

Alternatively, from the command line:

  1. Run kubectl get nodes to get a list of node names.
  2. Run kubectl describe node node-name and look at the Allocated resources section. Under the column Requests, find the row for cpu.

If you have nodes of type e2-micro, e2-small, or e2-medium where more than 940 mCPU is requested, Pods will be rescheduled onto other nodes after the upgrade. You must have enough allocatable capacity on other nodes.

To ensure you have enough allocatable capacity, you can:

  • Enable autoscaling on your node pool. Autoscaling automatically provisions the right number of nodes, provided the Pod requests do not exceed the capacity of an entire node.
  • Increase the number of nodes in the cluster, or add larger node types if you have Pods whose CPU requests exceed the capacity of existing nodes.
  • Decrease the resource requests made by your workloads on these nodes so that they still fit after the upgrade, by modifying the CPU resource requests in the PodSpec. Pods can still burst to the original CPU capacity for short periods as long as resource limits are not changed.
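For example, lowering a Deployment's CPU request while leaving its limit in place (the Deployment name and value here are hypothetical):

```shell
# Reduce the CPU request so Pods fit within the new 940 mCPU allocatable
# capacity; keeping the limit unchanged preserves the ability to burst.
kubectl set resources deployment my-deployment \
    --requests=cpu=200m
```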

After you have upgraded your nodes to versions 1.14.10-gke.42, 1.15.11-gke.18, 1.16.8-gke.17, or 1.17.5-gke.5 or later:

Review the status of your Pods by running kubectl get pods.

If any are shown as Pending, there may not have been enough resources available to schedule them. Follow the steps above to either add more nodes or reduce the CPU resource requests in the PodSpec.
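To list only Pending Pods across all namespaces:

```shell
# Pods stuck in Pending often indicate insufficient allocatable resources.
kubectl get pods --all-namespaces \
    --field-selector=status.phase=Pending
```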

June 1, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.40 is now available.

1.15.11-gke.15 is now available.

1.16.9-gke.2 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

Auto-upgrading nodes in the Regular release channel automatically upgrade to version 1.16.8-gke.15 in this release.

Rapid channel

1.17.5-gke.9 is now available.

Node image changes

GKE 1.14

The Ubuntu image for GKE 1.14.10-gke.40 clusters is ubuntu-gke-1804-1-14-v20200219.

May 27, 2020

Known issue

Due to a newly discovered issue, the following versions are no longer available:

  • 1.17.5-gke.6
  • 1.16.8-gke.17
  • 1.15.11-gke.14

May 19, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.14.x 1.14.10-gke.36

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.16.8-gke.17 is now available.

1.15.11-gke.14 is now available. In clusters using version 1.15.11-gke.14, NodeLocal DNSCache now reaches out to custom stubDomains using UDP. For more information, see Setting up NodeLocal DNSCache.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

Existing clusters in the Regular release channel automatically upgrade to version 1.16.8-gke.15 in this release.

Rapid channel

1.17.5-gke.6 is now available. In clusters using version 1.17.5-gke.6, NodeLocal DNSCache now reaches out to custom stubDomains using UDP. For more information, see Setting up NodeLocal DNSCache.

Node image changes

GKE 1.17

The COS image for GKE 1.17 clusters is now cos-81-12871-96-0.

etcd version changes

  • In the Rapid release channel, all GKE clusters running 1.17.3-gke.3 and up will have etcd upgraded to 3.4.7-0-gke.1. All new GKE clusters with 1.17.3-gke.3 and up will be created with etcd 3.4.7-0-gke.1.

New features

Google Kubernetes Engine now supports the use of non-RFC 1918 private address ranges and the private reuse of public IP addresses in VPC-native clusters. For details and caveats about enabling these addresses, see Enabling non-RFC 1918 reserved IP address ranges.

May 15, 2020

New features

Container Threat Detection is now available in Beta. Container Threat Detection can detect the most common container runtime attacks and alert you in Security Command Center and optionally in Cloud Logging. Container Threat Detection includes several detection capabilities, an analysis tool, and an API.

Container Threat Detection currently supports the following versions on the Regular and Rapid channels:

  • 1.15.9-gke.12 and higher
  • 1.16.5-gke.2 and higher
  • 1.17 and higher

In a future update, Container Threat Detection will support version 1.14 and the Stable channel.

May 13, 2020

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.14.10-gke.36.

Scheduled automatic upgrades

Masters with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.14.10-gke.27 1.14.10-gke.36

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.15.11-gke.13 is now available.

1.16.8-gke.15 is now generally available for new clusters. Existing clusters and nodes will not automatically upgrade in this release.

Important

Before you migrate to GKE v1.16, you must review:

Stable channel

1.14.10-gke.36 is now available in the Stable release channel.

Regular channel

1.16.8-gke.15 is now generally available for new clusters. Existing clusters and nodes will not automatically upgrade in this release.

Important

Before you migrate to GKE v1.16, you must review:

Rapid channel

1.17.5-gke.0 is now available in the Rapid release channel.

Node image changes

GKE 1.16

The COS image for GKE 1.16 clusters is now cos-77-12371-251-0.

GKE 1.17

The COS image for GKE 1.17 clusters is now cos-81-12871-69-0.

Versions no longer available

  • 1.14.10-gke.27
  • 1.14.10-gke.31
  • 1.14.10-gke.32
  • 1.14.10-gke.34

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

Google Kubernetes Engine will gradually upgrade clusters in the Regular channel to GKE 1.16 beginning in an upcoming release. To read more about API deprecations in 1.16, see Kubernetes 1.16 deprecated APIs.

May 08, 2020

New features

Specifying a VPC subnet for internal Load Balancer Service IPs is now supported as a per-Service annotation in GKE clusters 1.16.8-gke.10+ and 1.17+.
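A sketch of the per-Service annotation (the annotation name follows GKE's internal load balancing documentation; the Service and subnet names here are hypothetical):

```shell
# Place the internal load balancer's IP in a specific VPC subnet.
kubectl annotate service my-internal-service \
    networking.gke.io/internal-load-balancer-subnet=my-ilb-subnet
```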

May 04, 2020

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.15.11-gke.12 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

There are no new releases in the Rapid release channel.

Node image changes

GKE 1.16

The Ubuntu image for GKE 1.16 clusters is now ubuntu-gke-1804-1-16-v20200330.

The COS image for GKE 1.16 clusters is now cos-77-12371-227-0.

GKE 1.17

The COS image for GKE 1.17 clusters is now cos-81-12871-69-0.

etcd default version changes

  • The default etcd version for new GKE 1.13 and 1.14 clusters is etcd 3.2.27-0-gke.6
  • The default etcd version for new GKE 1.15 and 1.16 clusters is etcd 3.3.18-0-gke.4
  • The default etcd version for new GKE 1.17 and higher clusters is etcd 3.4.7-0-gke.1

Autoupgrades in existing clusters will occur at a later date.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

Google Kubernetes Engine will gradually upgrade clusters in the Regular channel to GKE 1.16 beginning in an upcoming release. To read more about API deprecations in 1.16, see Kubernetes 1.16 deprecated APIs.

April 29, 2020

New features

Ingress for Anthos is now Generally Available (GA) for all GKE versions 1.14 and up. Ingress for Anthos provides a Kubernetes-native interface to deploy Ingress resources for internet traffic across multiple clusters and multiple regions.

Ingress is helpful for use cases including:

  • A global and stable anycast VIP, independent of cluster backends.
  • Multi-regional and multi-cluster high availability.
  • Low latency serving of traffic to the closest cluster.
  • Intelligent and safe traffic management across many clusters.

April 27, 2020

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.37 is now available.

1.15.11-gke.11 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.17.4-gke.10 is now available in the Rapid release channel.

Upgrading

Although clusters in the Rapid channel upgrade automatically, you should still review:

Changes

To improve the safety of upgrades and reduce disruption, all new node pools have surge upgrades turned on by default with the configuration: maxSurge=1 maxUnavailable=0. For more information, see Determining your optimal surge configuration.

GKE is also gradually reconfiguring existing node pools to use surge upgrades with the same configuration. Node pools that already have upgrade_settings defined remain unaffected.
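Existing node pools can be given an explicit surge configuration with the gcloud CLI (a sketch; names are hypothetical, and older gcloud releases may require the beta track for these flags):

```shell
# Match the new default surge settings explicitly on an existing node pool.
gcloud container node-pools update my-pool \
    --cluster=my-cluster \
    --max-surge-upgrade=1 \
    --max-unavailable-upgrade=0
```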

1.17 Changes

The following notable changes are coming in 1.17:

The RunAsUsername feature in 1.17 is now beta and allows specifying the username when running a Windows container.

The RuntimeClass scheduler in 1.17 simplifies scheduling Windows Pods to appropriate nodes.

The following node labels are deprecated in 1.17:

Cluster Versions Deprecated Label New Label
1.14+ beta.kubernetes.io/os kubernetes.io/os
1.14+ beta.kubernetes.io/arch kubernetes.io/arch
1.17+ beta.kubernetes.io/instance-type node.kubernetes.io/instance-type
1.17+ failure-domain.beta.kubernetes.io/zone topology.kubernetes.io/zone
1.17+ failure-domain.beta.kubernetes.io/region topology.kubernetes.io/region

You must identify any node selectors using beta labels and modify them to use GA labels.
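A quick way to surface deprecated selectors in checked-in manifests (the directory path here is hypothetical):

```shell
# Find node selectors and affinity rules that still use the beta labels.
grep -rnE 'beta\.kubernetes\.io/(os|arch|instance-type)|failure-domain\.beta\.kubernetes\.io' \
    ./manifests
```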

RBAC resources in the rbac.authorization.k8s.io/v1alpha1 and rbac.authorization.k8s.io/v1beta1 API versions are deprecated in 1.17 and will no longer be served in 1.20. Update your manifests and API clients to use the rbac.authorization.k8s.io/v1 APIs before v1.20 to avoid any issues.

Known issues with 1.15 and higher

A known kernel bug in Linux kernel 4.18, 4.19, 4.20 and 5.0 may cause softlockup when running eBPF workloads. This may affect nodes with GKE version 1.15 or higher using cos-77-*, and GKE version 1.15 using Ubuntu. Before the fix is released, please avoid upgrading nodes to these affected versions if you run eBPF workloads.

Coming soon

Google Kubernetes Engine will gradually upgrade clusters in the regular channel to GKE 1.16.

Versions no longer available

  • 1.15.9-gke.24
  • 1.15.9-gke.26
  • 1.15.11-gke.1

April 24, 2020

New features

The ability to create new GKE clusters or update existing GKE clusters with node pools running Windows Server is now generally available.

Master global access for private clusters is now available in beta. With master global access, you can access the master's private endpoint from any Google Cloud region or on-premises environment, regardless of the private cluster's region.

Issues

A known kernel bug in Linux kernel 4.18, 4.19, 4.20 and 5.0 may cause softlockup when running eBPF workloads. This may affect nodes with GKE version 1.15 or higher using cos-77-*, and GKE version 1.15 using Ubuntu. Before the fix is released, please avoid upgrading nodes to these affected versions if you run eBPF workloads.

April 20, 2020

New features

The region us-west4 in Las Vegas is now available.

April 15, 2020

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.36 is now available.

1.15.11-gke.9 is now available. This version updates Calico to 3.8.7. This version fixes an issue where Calico Pods would fail to initialize after restarting. The issue occurred because the Calico CNI script tried to overwrite a file which was referenced by Kubelet at the same time. For more information on the fix, see the open source documentation.

1.17.4-gke.6 is now available in alpha clusters.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.16.8-gke.9 is now available in the Rapid release channel.

Coming soon

Google Kubernetes Engine will gradually upgrade clusters in the regular channel to GKE 1.16.

April 10, 2020

New features

Ingress access logging is now a configurable feature called logging in versions 1.16.8-gke.10 and later. This allows Ingress access logging to be toggled on or off through the BackendConfig resource.
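A sketch of toggling access logging through BackendConfig (hypothetical names; the apiVersion in use at the time of this release is cloud.google.com/v1beta1, and sampleRate is optional):

```yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backendconfig        # hypothetical name
spec:
  logging:
    enable: true                # turn Ingress access logging on
    sampleRate: 0.5             # optional: log roughly half of requests
```

The BackendConfig is then attached to the backing Service, for example through the beta.cloud.google.com/backend-config annotation.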

Changes

HTTP access logging for newly created Ingress resources is being deprecated across various GKE versions on May 12th, 2020. Any new Ingress resources created with the following versions after May 12th will have access logging disabled for that Ingress resource and will not record Ingress HTTP requests in Cloud Logging. Note that existing Ingress resources will continue to log HTTP requests unless the Ingress resource is redeployed. The following GKE versions are affected:

  • 1.12
  • 1.13
  • 1.14 clusters less than 1.14.10-gke.30
  • 1.15 clusters less than 1.15.9-gke.22
  • 1.16 clusters less than 1.16.6-gke.12

Clusters whose masters are on 1.14.10-gke.30, 1.15.9-gke.22, 1.16.6-gke.12 or later versions are not impacted and HTTP access logging remains defaulted to "on" for all new and existing Ingress resources. If you're currently using access logging for GKE Ingress, we highly recommend upgrading to these versions or higher before May 12th to avoid loss of HTTP access logs for new Ingress resources.

In GKE 1.18, access logging will default to "off" for GKE Ingress. To enable it for an Ingress resource, you must set the logging parameter.

Coming soon

Google Kubernetes Engine will gradually upgrade clusters in the regular channel to GKE 1.16 beginning on or after April 13, 2020.

April 07, 2020

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.15.11-gke.5 is now available.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.16.8-gke.8 is now available in the Rapid release channel. The node image for Container-Optimized OS is updated to cos-77-12371-208-0.

Known issues

Due to the recent Windows Server security update provided by Microsoft in February 2020, a container incompatibility issue was introduced. To avoid disruption to your workloads, we have turned off Google Kubernetes Engine (GKE) auto-upgrade for the impacted clusters.

What do I need to know?

As a consequence of Microsoft's security update, your workloads may end up in a failed state due to broken compatibility if the host Windows Server image has the security update and the container base image does not have the update.

We have turned off auto-upgrade on the impacted GKE clusters to prevent this compatibility issue from affecting your workloads.

The security update will be available in the rapid channel in GKE starting April 6, 2020. Beginning April 20, 2020, Windows Server container support in GKE, along with the security update, will be available on the regular channel.

What do I need to do?

We strongly recommend that you rebuild your container images with base Windows images that include Windows Updates from March 2020, then manually upgrade your node pool to the latest GKE version. Follow these steps:

  1. Disable auto-upgrade on the Windows node pool(s).
  2. When the first step is complete, Google will restart the cluster auto-upgrade. Note that this could take a few days. The cluster's master and the Linux node pool(s) will be upgraded; the Windows node pool will not be upgraded because auto-upgrade was disabled in step 1.
  3. After the cluster master upgrade is complete and you have rebuilt your container images, manually upgrade your Windows node pool.
  4. After completing step 3, you can turn auto-upgrade back on. If you do, use multi-arch (or multi-platform) images to take full advantage of the auto-upgrade feature.
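The multi-arch recommendation in step 4 can be sketched with docker manifest, which combines per-OS-version image tags into a single manifest list so each node pulls a base matching its host (hypothetical image names and tags):

```shell
# Build and push one image per Windows Server version first, then:
docker manifest create example.com/myapp:latest \
    example.com/myapp:1809 \
    example.com/myapp:1909
docker manifest push example.com/myapp:latest
```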

Incompatibility issues such as this one are rare, because they run counter to Microsoft's typical guidance for security updates. Rest assured that if such an issue occurs again, we will keep you posted. Stay up to date with GKE's release notes for the latest information.

If you have any questions or require assistance, please email us or contact Google Cloud Support.

April 1, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.15.9-gke.22 1.15.9-gke.24

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.34 is now available.

1.15.11-gke.3 is now available.

1.17.4-gke.2 preview is now available.

Before creating GKE v1.17 clusters, you must review the known issues and urgent upgrade notes.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

1.15.9-gke.24 is now available in the Regular release channel.

Rapid channel

1.16.8-gke.4 is now available in the Rapid release channel.

Versions no longer available

  • 1.15.9-gke.22

Security bulletin

A vulnerability was recently discovered in Kubernetes that allows any user authorized to make POST requests to execute a remote Denial-of-Service attack on a Kubernetes API server. For more information, see the GCP-2020-003 security bulletin.

March 26, 2020

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.14.10-gke.27.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.14.0 to 1.14.10-gke.26 1.14.10-gke.27
1.15.0 to 1.15.9-gke.21 1.15.9-gke.22

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.15.11-gke.1 is now available. The node image for Container-Optimized OS is updated to cos-77-12371-183-0.

Stable channel

1.14.10-gke.27 is now available in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.16.8-gke.3 is now available in the Rapid release channel.

Versions no longer available

  • 1.14.10-gke.17
  • 1.14.10-gke.21
  • 1.14.10-gke.22
  • 1.14.10-gke.24
  • 1.15.8-gke.3
  • 1.15.9-gke.12

March 23, 2020

Changes

You can no longer apply the labels of kubernetes.io or k8s.io to node pools. Existing node pools with these labels aren't affected. For more information on this change, see the Kubernetes Node Restriction enhancement.

March 20, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.13.0 to 1.13.12-gke.25 1.14.10-gke.17

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.32 is now available.

1.15.9-gke.26 is now available.

1.17.3-gke.7 is now available for preview.

Before creating GKE v1.17 clusters, you must review the known issues and urgent upgrade notes.

Stable channel

There are no new releases in the Stable release channel.

Regular channel

There are no new releases in the Regular release channel.

Rapid channel

1.16.6-gke.18 is now available in the Rapid release channel.

Versions no longer available

  • 1.15.9-gke.8
  • 1.15.9-gke.9

March 16, 2020

New features

Workload Identity is now generally available in versions 1.14.10-gke.27 and above, 1.15.9-gke.22 and above, and 1.16.6-gke.12 and above. Workload Identity is the recommended way to access Google Cloud services from within GKE clusters.

You can now use node auto-provisioning to create node pools with preemptible VMs from clusters running in the Regular release channel.

Enabling TPUs on existing clusters is now in Beta. With this feature, you can toggle Cloud TPU support on an existing cluster instead of creating a new cluster and migrating your workloads.

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.14.10-gke.24.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.13.12 or lower 1.14.10-gke.17

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

1.14.10-gke.30 is now available.

1.15.9-gke.24 is now available.

1.17.3-gke.4 is now available for preview.

Before creating GKE v1.17 clusters, you must review the known issues and urgent upgrade notes.

Stable channel

1.14.10-gke.24 is now available in the Stable release channel.

Regular channel

1.15.9-gke.22 is now available in the Regular release channel.

Rapid channel

1.16.6-gke.13 is now available in the Rapid release channel.

Versions no longer available

The following version is no longer available to create a new cluster:

  • 1.13.12-gke.30

Fixed issues

The issue reported February 14 with private clusters with VPC peering reuse enabled has been resolved.

March 6, 2020

New features

The user interface for creating clusters in Google Cloud Console has been redesigned. The new design makes it easier to follow GKE best practices.

You can now configure automated deployment for your existing GKE workloads with Cloud Build.

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.13.12-gke.25 1.14.10-gke.17
1.14.8, 1.14.9 1.14.10-gke.17
1.15.7, 1.15.8-gke.2 1.15.8-gke.3

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

No channel

v.1.13.x
  • There are no new 1.13 versions this week.
v.1.14.x
v.1.15.x

Stable channel

  • There are no new versions in the Stable channel this week.

Regular channel

Rapid channel

Coming Soon

1.16 will be moving to the regular channel.

The v1.16 release stops serving the following API versions in favor of newer and more stable API versions:

  • NetworkPolicy in the extensions/v1beta1 API version, deprecated since v1.9, is no longer served. Migrate to the networking.k8s.io/v1 API version, available since v1.8.
  • PodSecurityPolicy in the extensions/v1beta1 API version, deprecated since v1.10, is no longer served. Migrate to the policy/v1beta1 API version, available since v1.10.
  • DaemonSet in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.templateGeneration is removed.
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
    • spec.updateStrategy.type now defaults to RollingUpdate.
  • Deployment in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.rollbackTo is removed.
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
    • spec.progressDeadlineSeconds now defaults to 600 seconds.
    • spec.revisionHistoryLimit now defaults to 10.
    • maxSurge and maxUnavailable now default to 25%.
  • StatefulSet in the apps/v1beta1 and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
    • spec.updateStrategy.type now defaults to RollingUpdate.
  • ReplicaSet in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
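As an illustration of the Deployment changes above, a manifest migrated to apps/v1 must declare an explicit selector matching the template labels (a sketch with hypothetical names and image):

```yaml
apiVersion: apps/v1             # previously extensions/v1beta1
kind: Deployment
metadata:
  name: web                     # hypothetical name
spec:
  replicas: 3
  selector:                     # now required and immutable
    matchLabels:
      app: web                  # must match the template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17       # hypothetical image
```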

February 27, 2020

New features

Container-native load balancing with standalone network endpoint groups (NEGs) is generally available. You can use Standalone NEGs to create load balancers for several use cases including backends composed of Kubernetes and non-Kubernetes workloads.

Ingress for Anthos is now Beta for GKE versions 1.14.x+ in the Rapid and Regular release channels. Ingress for Anthos supports Internet-facing Ingress shared across multiple backend GKE clusters and multiple Google Cloud regions, enabling use cases such as multi-regional and multi-cluster availability, low backend-to-user latency, and seamless cluster migrations.

February 25, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.13.12-gke.25 1.14.10-gke.17
1.14.8 1.14.10-gke.17

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

No channel

v.1.13.x
  • There are no new 1.13 versions this week.
v.1.14.x
  • 1.14.10-gke.17
v.1.15.x
  • 1.15.9-gke.12

Stable channel

  • There are no new versions in the Stable channel this week.

Regular channel

  • 1.15.8-gke.3

Rapid channel

  • 1.16.6-gke.4

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.13.12-gke.25
  • 1.14.8-gke.33

Coming Soon

1.16 will be moving to the regular channel.

The v1.16 release stops serving the following API versions in favor of newer and more stable API versions:

  • NetworkPolicy in the extensions/v1beta1 API version, deprecated since v1.9, is no longer served. Migrate to the networking.k8s.io/v1 API version, available since v1.8.
  • PodSecurityPolicy in the extensions/v1beta1 API version, deprecated since v1.10, is no longer served. Migrate to the policy/v1beta1 API version, available since v1.10.
  • DaemonSet in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.templateGeneration is removed.
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
    • spec.updateStrategy.type now defaults to RollingUpdate.
  • Deployment in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.rollbackTo is removed.
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
    • spec.progressDeadlineSeconds now defaults to 600 seconds.
    • spec.revisionHistoryLimit now defaults to 10.
    • maxSurge and maxUnavailable now default to 25%.
  • StatefulSet in the apps/v1beta1 and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.
    • spec.updateStrategy.type now defaults to RollingUpdate.
  • ReplicaSet in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions, deprecated since v1.9, is no longer served. Migrate to the apps/v1 API version, available since v1.9. Notable changes:
    • spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades.

February 24, 2020

New features

The region us-west3 in Salt Lake City is now available.

The ability to use the Google Cloud Compute Engine Persistent Disk CSI driver in GKE is now in Beta. This feature provides a simple mechanism for users to enable the driver in GKE.

Ingress for Internal HTTP(S) Load Balancing is now available in Beta in the Rapid release channel. This enables private L7 load balancing inside the VPC that can be deployed with Ingress resources.

February 21, 2020

Starting February 24, 2020, GKE will gradually enable node auto-upgrade on all node pools running version 1.10.x and older to ensure the reliability and supportability of these clusters.

February 18, 2020

Version updates

GKE cluster versions have been updated.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

No channel

v.1.13.x
  • 1.13.12-gke.30
v.1.14.x
  • 1.14.10-gke.24
v.1.15.x
  • 1.15.9-gke.9

Node image for Container-Optimized OS updated to cos-77-12371-141-0.

Stable channel

  • There are no new versions in the Stable channel this week.

Regular channel

  • There are no new versions in the Regular channel this week.

Rapid channel

  • 1.16.5-gke.2

Node image for Container-Optimized OS updated to cos-77-12371-141-0.

New features

The --node-locations flag is now generally available. This flag enables you to specify zones for your node pools independently of setting the zone for a cluster.
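For example (a sketch with hypothetical cluster and zone names; for a zonal cluster, the list must include the cluster's own zone):

```shell
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --node-locations us-central1-a,us-central1-b
```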

February 14, 2020

Known Issues

Private clusters created on or after January 15, 2020 that use VPC peering reuse might experience an issue where VPC peering is removed when a cluster deletion is rescheduled after the first deletion attempt fails.

To mitigate this issue, create a private cluster in the same location as your existing private clusters. Creating a new cluster recreates the required VPC peering. You can delete the new cluster after VPC peering is recreated.

February 11, 2020

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.14.10-gke.17.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.13.x 1.13.12-gke.25

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

No channel

v.1.13.x
  • 1.13.12-gke.25
v.1.14.x
  • 1.14.10-gke.17
v.1.15.x
  • 1.15.9-gke.8

Stable channel

  • 1.14.10-gke.17

Regular channel

  • 1.15.7-gke.23

Rapid channel

  • 1.16.4-gke.30

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.13.11-gke.14
  • 1.13.11-gke.15
  • 1.13.11-gke.23 (moved to LEGACY version)
  • 1.13.12-gke.8
  • 1.13.12-gke.13
  • 1.13.12-gke.14
  • 1.13.12-gke.16
  • 1.13.12-gke.17

New features

  • Surge upgrades are generally available. Surge upgrades allow you to configure speed and disruption of node upgrades.
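A sketch of configuring surge settings on an existing node pool (hypothetical cluster and pool names; one surge node and zero unavailable nodes minimizes disruption at the cost of slower upgrades):

```shell
gcloud container node-pools update default-pool \
    --cluster my-cluster --zone us-central1-a \
    --max-surge-upgrade 1 --max-unavailable-upgrade 0
```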

February 4, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.12.x 1.13.12-gke.13

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

No channel

v.1.13.x
  • There are no new 1.13 versions this week.
v.1.14.x
  • 1.14.10-gke.21
v.1.15.x
  • 1.15.8-gke.3

Stable channel

  • There are no new versions in the Stable channel this week.

Regular channel

  • 1.15.7-gke.23

Rapid channel

  • 1.16.4-gke.27

Features

Autoscaling profiles for GKE are now available in Beta. Autoscaling profiles let you choose whether the cluster autoscaler should optimize for resource utilization or resource availability when deciding to remove nodes from a cluster.

Changes

All GKE clusters running 1.15 and up will have etcd upgraded to etcd 3.3.18-0-gke.1, and all new GKE clusters with 1.15 and up will be created with etcd 3.3.18-0-gke.1.

January 29, 2020

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.12.x 1.13.12-gke.13
1.14.0.x to 1.14.8-gke.32 1.14.8-gke.33
1.14.9.x to 1.14.9-gke.22 1.14.9-gke.23
1.14.10.x to 1.14.10-gke.16 1.14.10-gke.17

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

v.1.13.x
  • There are no new 1.13 versions this week.
v.1.14.x
  • There are no new 1.14 versions this week.
v.1.15.x
  • 1.15.8-gke.2

Stable channel

  • There are no new versions in the Stable channel this week.

Regular channel

  • There are no new versions in the Regular channel this week.

Rapid channel

  • 1.16.4-gke.25

Versions no longer available

The following version is no longer available for new clusters or upgrades.

  • 1.14.7-gke.40

New features

Config Connector is now generally available. Config Connector is a GKE addon that allows you to manage your Google Cloud resources through Kubernetes configuration.

Config Sync is now generally available. Config Sync allows you to manage Kubernetes deployments using files stored in a Git repository.

GKE Sandbox is now generally available. GKE Sandbox protects the host kernel on your nodes when containers in the Pod execute unknown or untrusted code.

January 27, 2020

The ability to create clusters with node pools running Microsoft Windows Server is now in Beta. This feature is currently only available in the Rapid release channel.

January 24, 2020

This issue was resolved January 27, 2020.

Do not create a cluster with versions 1.15.7-gke.23, 1.14.10-gke.17, or 1.14.9-gke.23 if you depend on Workload Identity. Workload Identity is not working for newly created clusters in these versions due to a recently discovered issue. Clusters upgraded to one of these versions are not affected. A fix will be released in the next GKE release. As a workaround, you can create a cluster at a lower version, then upgrade it.

The region asia-northeast3 in Seoul is now available.

January 22, 2020

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.13.11-gke.23.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.12.x 1.13.12-gke.13
v1.15.x 1.15.7-gke.23

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

v.1.13.x
  • 1.13.11-gke.23
  • 1.13.12-gke.25
v.1.14.x
  • 1.14.7-gke.40
  • 1.14.8-gke.33
  • 1.14.9-gke.23
  • 1.14.10-gke.17
v.1.15.x
  • 1.15.7-gke.23

Stable channel and 1.13.x

Stable channel
  • 1.13.11-gke.23
No channel
  • 1.13.11-gke.23
  • 1.13.12-gke.25

Regular channel and 1.14.x

Regular channel
  • 1.14.8-gke.33
No channel
  • 1.14.7-gke.40
  • 1.14.8-gke.33
  • 1.14.9-gke.23
  • 1.14.10-gke.17

Rapid channel and 1.16.x

Rapid channel

  • 1.16.4-gke.22

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.14.7-gke.23
  • 1.14.7-gke.25
  • 1.14.8-gke.12
  • 1.14.8-gke.14
  • 1.14.8-gke.17
  • 1.14.8-gke.18
  • 1.14.8-gke.21
  • 1.14.9-gke.2
  • 1.14.10-gke.0
  • 1.15.4-gke.22
  • 1.15.7-gke.2
  • 1.16.0-gke.11 (preview)
  • 1.16.0-gke.20 (preview)
  • 1.16.4-gke.3 (preview)

Node image changes

The COS kernel issue previously reported on November 22, 2019 was discovered to cause kernel panics in certain workloads. The 1.13 and 1.14 versions available in this release were rolled back to a known stable version of COS. GKE 1.13 and 1.14 will continue to use cos-u-73-11647-293-0 while our team works on a permanent fix.

New features

Application Delivery is now in Beta. This feature manages configurations for your GKE workloads declaratively with Git. For more information, see Application Delivery.

NodeLocal DNSCache is now in Beta for GKE clusters 1.15 and above. NodeLocal DNS is an optional feature for local DNS resolution to every GKE node for enhanced DNS scale and capacity.

Object Browser is now available to inspect resources on GKE clusters in Google Cloud Console. For more information, go to Dashboards.

Changes

All private clusters you create now reuse VPC Network Peering connections.

January 8, 2020

Do not update to version 1.16.0-gke.20 if you depend on HPA. Horizontal Pod Autoscaling is not working in this version due to a recently discovered issue. A fix will be released with GKE 1.16.3+.

January 7, 2020

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.14.8-gke.12 (previously 1.13.11-gke.14).

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.12.x 1.13.12-gke.13

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

v.1.13.x
  • 1.13.12-gke.17
v.1.14.x
  • 1.14.10-gke.0
v.1.15.x
  • 1.15.7-gke.2

Stable channel and 1.13.x

Stable channel

There are no changes to the Stable channel this week.

No channel
  • 1.13.12-gke.17

Regular channel and 1.14.x

Regular channel

There are no changes to the Regular channel this week.

No channel
  • 1.14.10-gke.0

Rapid channel and 1.16.x

Rapid channel

There are no changes to the Rapid channel this week.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.10-gke.17
  • 1.12.10-gke.20
  • 1.12.10-gke.22

New features

You can now use Customer-managed encryption keys (beta) to control the encryption used for node boot disks as well as attached persistent disks in your clusters.

Consuming reservations in GKE is now generally available. Reservations allow you to reserve resources in a specific zone to ensure sufficient capacity is available for your workloads.

Changes

New clusters and node pools created with the GKE API will have node auto-upgrade enabled by default. This change ensures that your clusters have the most recent default Kubernetes version, bug fixes, and security patches. Existing scripts running against the gcloud CLI or integrating with the GKE API will follow this new default behavior.

Node auto-upgrade keeps the nodes in your cluster up to date with the cluster master version when the master is updated on your behalf. To disable it explicitly, set autoUpgrade to false in the NodeManagement object.
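For example, the equivalent gcloud command for disabling auto-upgrade on a node pool (hypothetical cluster and pool names):

```shell
gcloud container node-pools update default-pool \
    --cluster my-cluster --zone us-central1-a \
    --no-enable-autoupgrade
```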