GKE release notes archive

This page contains a historical archive of all release notes for Google Kubernetes Engine prior to 2020. To view more recent release notes, see the Release notes.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or you can programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/gke-main-release-notes.xml

December 23, 2019

Rapid channel
(1.16.x)

Global access for internal TCP/UDP load balancing Services is now Beta. Global access allows internal load balancing IP addresses to be accessed from any region within a VPC.

December 13, 2019

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No channel

v1.12.x
1.12.10-gke.22

v1.15.x
1.15.4-gke.22

GKE 1.15 is generally available for new clusters.

Upgrading

Before creating GKE v1.15 clusters, you must review the known issues and urgent upgrade notes.

New features

By default, firewall rules restrict your cluster master to initiating TCP connections to your nodes only on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules that allow access on additional ports. For example, in Kubernetes 1.9 and older, kubectl top accesses heapster, which needs a firewall rule allowing TCP connections on port 8080.
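As a sketch, a rule like the following could open port 8080 from the master to the nodes; the rule name, network, master CIDR, and node tag below are placeholders you would substitute for your own cluster:

```shell
# Hypothetical example: allow the cluster master (source range) to reach
# nodes on TCP 8080 (heapster). All names and ranges are placeholders.
gcloud compute firewall-rules create allow-master-to-heapster \
    --network my-network \
    --source-ranges 172.16.0.0/28 \
    --target-tags my-node-tag \
    --allow tcp:8080 \
    --direction INGRESS
```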

Node-local DNS caching is now available in beta. Note that the node-local cache is a single point of failure: if it goes down, DNS resolution for all Pods on that node is broken until the cache comes back up.
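As a sketch (flag availability may vary by gcloud version, and the feature was beta at the time), the add-on could be enabled at cluster creation; the cluster name and zone are placeholders:

```shell
# Hypothetical example: create a cluster with the NodeLocalDNS add-on enabled.
gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --addons NodeLocalDNS
```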

Known Issues

There is a low risk that consumers of the published OpenAPI document who assumed the absence of schema information for a given type (for example, "no schema info means a resource is a custom resource") could have those assumptions broken once custom resources start publishing schema definitions.

Stable channel (1.13.x)

Stable channel

There are no changes to the Stable channel this week.

No channel
  • 1.13.11-gke.15
  • 1.13.12-gke.16

Regular channel (1.14.x)

Regular channel

There are no changes to the Regular channel, but 1.15 will be available in this channel in January 2020.

No channel
  • 1.14.7-gke.25
  • 1.14.8-gke.21
  • 1.14.9-gke.2

Rapid channel
(1.16.x)

Rapid channel
1.16.0-gke.20

GKE 1.16.0-gke.20 (alpha) is now available for testing and validation in the Rapid release channel.

Retired APIs

extensions/v1beta1, apps/v1beta1, and apps/v1beta2 won't be served by default.

  • All resources under apps/v1beta1 and apps/v1beta2 - use apps/v1 instead.
  • daemonsets, deployments, replicasets resources under extensions/v1beta1 - use apps/v1 instead.
  • networkpolicies resources under extensions/v1beta1 - use networking.k8s.io/v1 instead.
  • podsecuritypolicies resources under extensions/v1beta1 - use policy/v1beta1 instead.
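To see which of your objects are still served through the retired groups before upgrading, you could query the old and new groups explicitly. This is a sketch; the output depends on your workloads:

```shell
# Query Deployments through the deprecated group (fails once it is retired):
kubectl get deployments.v1beta1.extensions --all-namespaces

# The same objects through the replacement group:
kubectl get deployments.v1.apps --all-namespaces

# Then update manifests accordingly:
#   apiVersion: extensions/v1beta1  ->  apiVersion: apps/v1
```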

Changes

New clusters have the cos-metrics-enabled flag enabled by default. This change allows kernel crash logs to be collected. You can disable this behavior by adding --metadata cos-metrics-enabled=false when you create clusters.
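A minimal sketch of opting out at creation time, using the flag named above; the cluster name and zone are placeholders:

```shell
# Create a cluster with kernel crash log collection disabled.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --metadata cos-metrics-enabled=false
```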

Fixed

All of the versions made available include a fix for the issue where newly created node pools are created successfully but are incorrectly shown as PROVISIONING, as reported on December 6th, 2019.

New features

Maintenance windows and exclusions, which were previously available in beta, are now generally available.

Changes

The beta version of Stackdriver Kubernetes Engine Monitoring is no longer supported.

Legacy Stackdriver support for Google Kubernetes Engine (GKE) is deprecated. If you're using Legacy Stackdriver for logging or monitoring, you must migrate to Stackdriver Kubernetes Engine Monitoring before Legacy Stackdriver is decommissioned. For more information, see Legacy Stackdriver support for GKE deprecation.

December 6, 2019

The December 4, 2019 rollout is paused. Versions that were made available for upgrades and new clusters in that release will no longer be available. This is to address an issue where newly created node pools are created successfully but are incorrectly shown as PROVISIONING.

December 4, 2019

Fixed

We have fixed an issue with cluster upgrade from a version earlier than 1.14.2-gke.10 when gVisor is enabled in the cluster. It's now safe to upgrade to any version greater than 1.14.7-gke.17. This issue was originally noted in the release notes for October 30, 2019.

Version updates

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

No new v1.12.x versions this week.

Stable channel (1.13.x)

Stable channel

There are no changes to the Stable channel this week.

No channel
1.13.12-gke.14

This version updates COS to cos-stable-73-11647-348-0.

Regular channel (1.14.x)

Regular channel

There are no changes to the Regular channel this week.

No channel
1.14.8-gke.18

This version updates COS to cos-stable-73-11647-348-0.

Rapid channel
(1.15.x)

Rapid channel

There are no changes to the Rapid channel this week.

November 22, 2019

Fixed

The known issue in the COS kernel that may cause kernel panic, previously reported on November 5th, 2019, is resolved. The versions available in this release use updated versions of COS. GKE 1.12 uses cos-69-10895-348-0 and versions 1.13 and 1.14 use cos-stable-73-11647-348-0.

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.12.10-gke.15 1.12.10-gke.17

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.20

This version uses cos-69-10895-348-0 which fixes the known issue that may cause kernel panics, previously reported on November 5th, 2019.

Stable channel (1.13.x)

Stable channel

There are no changes to the Stable channel this week.

No channel
1.13.12-gke.13

This version uses cos-stable-73-11647-348-0 which fixes the known issue that may cause kernel panics, previously reported on November 5th, 2019.

Regular channel (1.14.x)

Regular channel

There are no changes to the Regular channel this week.

No channel
1.14.8-gke.17

This version uses cos-stable-73-11647-348-0 which fixes the known issue that may cause kernel panics, previously reported on November 5th, 2019.

Rapid channel
(1.15.x)

Rapid channel

There are no changes to the Rapid channel this week.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.10-gke.15
  • 1.13.11-gke.5
  • 1.13.11-gke.9
  • 1.13.11-gke.11
  • 1.13.12-gke.2
  • 1.14.7-gke.10
  • 1.14.7-gke.14
  • 1.14.7-gke.17
  • 1.14.8-gke.2

November 18, 2019

Fixed

The known issue in the COS kernel that may cause nodes to crash, previously reported on November 5th, 2019, is resolved. This release downgrades COS to cos-73-11647-293-0.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.13.0-gke.0 to 1.13.11-gke.13 1.13.11-gke.14 (Stable channel)
1.13.12-gke.0 to 1.13.12-gke.7 1.13.12-gke.8
1.14.0-gke.0 to 1.14.7-gke.22 1.14.7-gke.23
1.14.8-gke.0 to 1.14.8-gke.11 1.14.8-gke.12 (Regular channel)

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.17

No new v1.12.x versions this week.

Stable channel (1.13.x)

Stable channel
1.13.11-gke.14

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

No channel
1.13.12-gke.8

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

Regular channel (1.14.x)

Regular channel
1.14.8-gke.12

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

No channel
1.14.7-gke.23

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

Rapid channel
(1.15.x)

1.15.4-gke.15

No new v1.15.x versions this week.

November 11, 2019

Changes

After November 11, 2019, new clusters and node pools created with gcloud have node auto-upgrade enabled by default.
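To keep the previous behavior for a new node pool, you could disable auto-upgrade explicitly; the pool, cluster, and zone names are placeholders:

```shell
# Create a node pool with node auto-upgrade turned off.
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --no-enable-autoupgrade
```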

November 05, 2019

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.12.x 1.12.10-gke.15
v1.13.x 1.13.11-gke.5
v1.14.x 1.14.7-gke.10

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

v1.12.10-gke.17

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Updated containerd to 1.2.10

Stable channel
(1.13.x)

v1.13.11-gke.11

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Updated containerd to 1.2.10

v1.13.12-gke.2

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Updated containerd to 1.2.10

Regular channel
(1.14.x)

v1.14.7-gke.17

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

v1.14.8-gke.2

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Rapid channel
(1.15.x)

v1.15.4-gke.18

GKE 1.15.4-gke.18 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.15.

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Known issues

We have found an issue in COS that might cause kernel panics on nodes.

This impacts node versions:
  • 1.13.11-gke.9
  • 1.13.11-gke.11
  • 1.13.11-gke.12
  • 1.13.12-gke.1
  • 1.13.12-gke.2
  • 1.13.12-gke.3
  • 1.13.12-gke.4
  • 1.14.7-gke.14
  • 1.14.7-gke.17
  • 1.14.8-gke.1
  • 1.14.8-gke.2
  • 1.14.8-gke.6
  • 1.14.8-gke.7

A patch is being tested and will roll out soon, but we recommend that customers avoid these node versions or downgrade to previous, unaffected patches.

New features

Surge upgrades are now in beta. Surge upgrades allow you to configure the speed and disruption of node upgrades.
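As a sketch (the feature was beta at the time, so the flags may require the gcloud beta component), surge settings could be tuned per node pool; the pool, cluster, and zone names are placeholders:

```shell
# One extra node during upgrades, and no nodes unavailable at a time.
gcloud beta container node-pools update my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --max-surge-upgrade 1 \
    --max-unavailable-upgrade 0
```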

Changes

Node auto-provisioning has reached General Availability. Node auto-provisioning creates or deletes node pools from your cluster based upon resource requests.

October 30, 2019

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now v1.13.11-gke.9 (previously v1.13.10-gke.0). Clusters enrolled in the stable release channel will be auto-upgraded to this version.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.12.x versions 1.12.10-gke.17
1.13.x versions 1.13.11-gke.5
1.14.x versions 1.14.7-gke.10

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

No new v1.12.x versions this week.

Stable channel (1.13.x)

Stable channel
1.13.11-gke.9

Update containerd to 1.2.10.

Update COS to cos-u-73-11647-329-0.

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Regular channel (1.14.x)

Regular channel
1.14.7-gke.10

This version was generally available on October 18, 2019 and is now available in the Regular release channel.

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

No channel
1.14.7-gke.14

Update COS to cos-u-73-11647-329-0.

Rapid channel
(1.15.x)

1.15.4-gke.17

GKE 1.15.4-gke.17 (alpha) is now available for testing and validation in the Rapid release channel.

Fixes a known issue reported on October 11, 2019 regarding an fdatasync performance regression on COS/Ubuntu. Node image for Container-Optimized OS updated to cos-77-12371-89-0. Node image for Ubuntu updated to ubuntu-gke-1804-d1903-0-v20191011a.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.10-gke.15
  • 1.13.7-gke.24
  • 1.13.9-gke.3
  • 1.13.9-gke.11
  • 1.13.10-gke.0
  • 1.13.10-gke.7
  • 1.14.6-gke.1
  • 1.14.6-gke.2
  • 1.14.6-gke.13

Known Issues

If you use Sandbox Pods in your GKE cluster and plan to upgrade from a version less than 1.14.2-gke.10 to a version greater than 1.14.2-gke.10, you need to manually run kubectl delete mutatingwebhookconfiguration gvisor-admission-webhook-config after the upgrade.
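The cleanup described above amounts to a single command once the upgrade completes:

```shell
# Remove the stale gVisor admission webhook after upgrading past 1.14.2-gke.10.
kubectl delete mutatingwebhookconfiguration gvisor-admission-webhook-config
```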

October 18, 2019

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.12.x versions 1.13.7-gke.24
1.14.x versions (1.14.6-gke.0 and older) 1.14.6-gke.1

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.15

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Stable channel (1.13.x)

Stable channel

There are no changes to the Stable channel this week.

No channel
1.13.11-gke.5

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Regular channel (1.14.x)

Regular channel

There are no changes to the Regular channel this week.

No channel
1.14.7-gke.10

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Rapid channel
(1.15.x)

1.15.4-gke.15

GKE 1.15.4-gke.15 (alpha) is now available for testing and validation in the Rapid release channel.

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.9-gke.15
  • 1.12.9-gke.16
  • 1.12.10-gke.5
  • 1.12.10-gke.11

Security bulletin

A vulnerability was recently discovered in Kubernetes, described in CVE-2019-11253, which allows any user authorized to make POST requests to execute a remote Denial-of-Service attack on a Kubernetes API server. For more information, see the security bulletin.

October 11, 2019

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now v1.13.10-gke.0 (previously v1.13.7-gke.24). Clusters enrolled in the stable release channel will be auto-upgraded to this version.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
versions older than 1.12.9-gke.13 1.12.9-gke.15
1.13.x versions older than 1.13.7-gke.19 1.13.7-gke.24
1.14.x versions older than 1.14.6-gke.0 1.14.6-gke.1

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.11

Upgrade containerd to 1.2.9

Node image for Container-Optimized OS updated to cos-69-10895-348-0.

Node image for Ubuntu updated to ubuntu-gke-1804-d1703-0-v20190917.

Stable channel
(1.13.x)

Stable channel
1.13.10-gke.0

This version was generally available on September 16, 2019 and is now available in the Stable release channel.

This release includes a patch for CVE-2019-9512 and CVE-2019-9514. For more information, see the security bulletin for September 16, 2019.

No channel
1.13.10-gke.7

Upgrade containerd to 1.2.9

Node image for Container-Optimized OS updated to cos-u-73-11647-293-0.

Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190918. Upgrades Nvidia GPU driver to 418 driver, adds Vulkan ICD for graphical workloads, and fixes nvidia-uvm installation order.

Regular channel
(1.14.x)

Regular channel
1.14.6-gke.1

This version was generally available on September 9, 2019 and is now available in the Regular release channel.

No channel
1.14.6-gke.13

Enable SecureBoot on master VMs.

Node image for Container-Optimized OS updated to cos-u-73-11647-293-0.

Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190918. Upgrades Nvidia GPU driver to 418 driver, adds Vulkan ICD for graphical workloads, and fixes nvidia-uvm installation order.

Upgrades GPU device plugin to the latest version with Vulkan support.

Do not upgrade to this version if you use Workload Identity. There is a known issue where the gke-metadata-server Pods crash loop if you create or upgrade a cluster to 1.14.6-gke.13.

Fixes an issue where cronjobs cannot be scheduled when the total number of existing jobs exceeds 500.

Rapid channel
(1.15.x)

1.15.3-gke.18

GKE 1.15.3-gke.18 (alpha) is now available for testing and validation in the Rapid release channel.

Upgraded Istio to 1.2.5.

Improvements to gVisor.

Node image for Container-Optimized OS updated to cos-rc-77-12371-44-0. This update includes upgrading the kernel to 4.19 from 4.14 and upgrading Docker to 19.03 from 18.09.

Node image for Ubuntu updated to ubuntu-gke-1804-d1903-0-v20190917a. This update includes upgrading the kernel to 5.0 from 4.15 and upgrading Docker to 19.03 from 18.09.

Do not update to this version if you have clusters with hundreds of nodes per cluster or with I/O intensive workloads. Clusters with these characteristics may be impacted by a known issue in versions 4.19 and 5.0 of the Linux kernel that introduces performance regressions in the fdatasync system call.

Versions no longer available

v1.14.3-gke.11 is no longer available for new clusters or upgrades.

Features

Node auto-provisioning is now generally available.

Vertical Pod Autoscaler is now generally available.

Changes

Upgrade Cloud Run on GKE to 0.9.0.

Fixed issues

Fixed a bug with fluentd that would prevent new nodes from starting on large clusters with over 1000 nodes on v1.12.6.

October 2, 2019

Maintenance windows and exclusions now give you granular control over when automatic maintenance occurs on your clusters. You can specify the start time, duration, and recurrence of a cluster's maintenance window. You can also designate specific periods of time when non-essential automatic maintenance should not occur.
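As a sketch, a recurring weekend window could be configured like this; the cluster name, times, and recurrence rule are placeholders:

```shell
# Hypothetical example: a 4-hour maintenance window recurring every weekend.
gcloud container clusters update my-cluster \
    --maintenance-window-start 2019-10-05T04:00:00Z \
    --maintenance-window-end 2019-10-05T08:00:00Z \
    --maintenance-window-recurrence 'FREQ=WEEKLY;BYDAY=SA,SU'
```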

September 26, 2019

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now v1.13.7-gke.24 (previously v1.13.7-gke.8). Clusters enrolled in the stable release channel will be auto-upgraded to this version.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
versions older than 1.12.9-gke.13 1.12.9-gke.15
1.13.x versions older than 1.13.7-gke.19 1.13.7-gke.24

Auto-upgrades are currently occurring two days behind the rollout schedule. Some 1.11 clusters will be upgraded to 1.12 in the week of October 7th.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

1.12.x

No new v1.12.x versions this week.

Stable channel
(1.13.x)

No new v1.13.x versions this week.

v1.13.7-gke.24 is now available in the Stable release channel.

Regular channel
(1.14.x)

There are no changes to the Regular channel in this release.

No channel

1.14.6-gke.2

This release includes a patch for CVE-2019-9512 and CVE-2019-9514.

Reduces startup time for GPU nodes running Container-Optimized OS.

Rapid channel
(1.15.x)

GKE 1.15.3-gke.1 (alpha) is now available for testing and validation in the Rapid release channel.

For more details, refer to the release notes for Kubernetes v1.15.

Starting with GKE v1.15, the open source Kubernetes Dashboard is no longer natively supported in GKE as a managed add-on. To deploy it manually, follow the deployment instructions in the Kubernetes Dashboard documentation.

Resizing PersistentVolumes is now a beta feature. As part of this change, resizing a PersistentVolume no longer requires you to restart the Pod.
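A minimal sketch of an in-place resize, assuming the claim's StorageClass has allowVolumeExpansion enabled; the claim name and size are placeholders:

```shell
# Request a larger size on an existing PersistentVolumeClaim.
kubectl patch pvc my-claim -p \
    '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```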

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.7-gke.25
  • 1.12.7-gke.26
  • 1.12.8-gke.10
  • 1.12.8-gke.12
  • 1.12.9-gke.7
  • 1.12.9-gke.13
  • 1.13.6-gke.13
  • 1.13.7-gke.8
  • 1.13.7-gke.19

September 20, 2019

Ingress Controller v1.6, which was previously available in beta, is generally available for clusters running v1.13.7-gke.5 and higher.

Along with Ingress Controller, the following are also generally available:

This note has been corrected. Using Google-managed SSL certificates is currently in Beta.

September 16, 2019

Version updates

GKE cluster versions have been updated.

The release notes for September 16, 2019 were incorrectly published early, on September 9. The incorrect release notes included an announcement of the availability of a security patch that was not actually made available on that date. For more information about the security patch, see the security bulletin for September 16, 2019.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
v1.11 v1.12

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

v1.12.10-gke.5

Fixes an issue where Vertical Pod Autoscaler would reject valid Pod patches.

Stable channel
(1.13.x)

1.13.10-gke.0

This release includes a patch for CVE-2019-9512 and CVE-2019-9514. For more information, see the security bulletin for September 16, 2019.

Reduces startup time for GPU nodes running Container-Optimized OS.

v1.13.7-gke.8

This version was generally available on June 27, 2019 and is now available in the Stable release channel.

Regular channel
(1.14.x)

v1.14.6-gke.1

This release includes a patch for CVE-2019-9512 and CVE-2019-9514. For more information, see the security bulletin for September 16, 2019.

Reduces startup time for GPU nodes running Container-Optimized OS.

v1.14.3-gke.11

This version was generally available on September 5, 2019 and is now available in the Regular release channel.

Rapid channel
(1.14.x)

v1.14.6-gke.1

GKE v1.14.6-gke.1 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.6.

This release includes a patch for CVE-2019-9512 and CVE-2019-9514. For more information, see the security bulletin for September 16, 2019.

Reduces startup time for GPU nodes running Container-Optimized OS.

New features

Ingress Controller v1.6, which was previously available in beta, is generally available for clusters running v1.13.7-gke.5 and higher.

Network Endpoint Groups, which allow HTTP(S) load balancers to target Pods directly, are now generally available.

Release channels, which provide more control over which automatic upgrades your cluster receives, are generally available. In addition to the Rapid channel, you can now enroll your clusters in the Regular or Stable channel.

September 9, 2019

Correction

The release notes for September 16, 2019 were incorrectly published early, on September 9. The incorrect release notes included an announcement of the availability of a security patch that was not actually made available until the week of September 16, 2019. For more information about the patch, see the security bulletin for September 16, 2019.

No GKE releases occurred the week of September 9, 2019.

September 5, 2019

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters is now 1.13.7-gke.8 (previously 1.12.8-gke.10).

Scheduled automatic upgrades

Auto-upgrades are no longer paused.

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version upgrade version
1.11.x 1.12.7-gke.25

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.10-gke.6

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.9-gke.16

Minor bug fixes and performance improvements.

v1.13.x

1.13.9-gke.3

Bug fixes and performance improvements.

v1.14.x

1.14.3-gke.11

GKE 1.14 is generally available.

Upgrading

Before upgrading clusters to GKE v1.14, you must review the known issues and urgent upgrade notes.

For example, the default RBAC policy no longer grants access to discovery and permission-checking APIs, and you must take specific action to preserve the old behavior for newly-created cluster users.
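One way to restore the old discovery behavior is to bind the system:discovery role back to all authenticated users. This is a sketch; the binding name is arbitrary:

```shell
# Grant discovery API access to all authenticated users, as in pre-1.14 defaults.
kubectl create clusterrolebinding discovery-for-authenticated \
    --clusterrole=system:discovery \
    --group=system:authenticated
```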

Differences between GKE v1.14.x and Kubernetes 1.14

GKE v1.14.x has the following differences from Kubernetes 1.14.x:

  • Storage Migrator is not supported on GKE v1.14.x.

  • CSI Inline Volumes (Alpha) are not supported on GKE v1.14.x.

  • Huge Pages is not supported on GKE 1.14.x. If you are interested in support for Huge Pages, register your interest.

New features

Pod Ready++ is generally available and supported on GKE v1.14.x.

Pod priority and preemption is generally available and supported on GKE v1.14.x.

The RunAsGroup feature has been promoted to beta and enabled by default. PodSpec and PodSecurityPolicy objects can be used to control the primary GID of containers on Docker and containerd runtimes.
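A minimal sketch of a Pod using the beta field; the Pod name, image, and numeric IDs are placeholders:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: rungroup-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000   # primary GID for the containers in this Pod
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "id && sleep 3600"]
EOF
```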

Early-access to test Windows containers is now available. If you are interested in testing Windows containers, fill out this form.

Other changes

The node.k8s.io API group and runtimeclasses.node.k8s.io resource have been migrated to a built-in API. If you were using RuntimeClasses, you must recreate each of them after upgrading, and also delete the runtimeclasses.node.k8s.io CRD. RuntimeClasses can no longer be created without a defined handler.
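The migration could look like the following sketch; the class name and handler (gvisor) are illustrative examples, not a prescribed configuration:

```shell
# Back up existing definitions, drop the CRD, then recreate as built-in objects.
kubectl get runtimeclasses.node.k8s.io -o yaml > runtimeclasses-backup.yaml
kubectl delete crd runtimeclasses.node.k8s.io

kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor    # a handler is now required
EOF
```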

When creating a new GKE cluster, Stackdriver Kubernetes Engine Monitoring is now the default Stackdriver support option. This is a change from prior versions where Stackdriver Logging and Stackdriver Monitoring were the default Stackdriver support option. For more information, see Overview of Stackdriver support for GKE.

OS and Arch information is now recorded in kubernetes.io/os and kubernetes.io/arch labels on Node objects. The previous labels (beta.kubernetes.io/os and beta.kubernetes.io/arch) are still recorded, but are deprecated and targeted for removal in Kubernetes 1.18.

Known Issues

Users with the Quobyte Volume plugin are advised not to upgrade between GKE 1.13.x and 1.14.x due to an issue with Kubernetes 1.14. This will be fixed in an upcoming release.


Rapid

The following versions are available to clusters enrolled in the Rapid release channel.

1.14.5-gke.5

GKE 1.14.5-gke.5 is now available in the Rapid release channel. It includes bug fixes and performance improvements. For more details, refer to the release notes for Kubernetes v1.14.

New features

Intranode visibility is generally available.

You can now use Customer-managed encryption keys (beta) to control the encryption used for attached persistent disks in your clusters. This is available as a dynamically provisioned PersistentVolume.
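A sketch of a StorageClass requesting customer-managed encryption for dynamically provisioned disks; the provisioner and parameter names assume the Compute Engine persistent disk CSI driver, and the class name and key path are placeholders:

```shell
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  # Cloud KMS key used to encrypt the provisioned disks (placeholder path).
  disk-encryption-kms-key: projects/PROJECT/locations/REGION/keyRings/RING/cryptoKeys/KEY
EOF
```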

Rollout schedule

The rollout schedule is now included in Upgrades.

August 22, 2019

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Auto-upgrades are currently paused.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

1.11.10-gke.6

This version was previously released and is available again. It mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

v1.12.x

Multiple v1.12.x versions are available this week:

1.12.9-gke.13

This version mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

1.12.9-gke.15

Fixes an issue that can cause Horizontal Pod Autoscaler to increase the replica count to the maximum, regardless of other autoscaling factors.

Upgrade Istio to 1.1.13, to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount a Denial of Service (DoS) attack against services using Istio.

The node image for Container-Optimized OS (COS) is now cos-69-10895-329-0.

v1.13.x

Multiple v1.13.x versions are available this week:

1.13.7-gke.19

This version mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

1.13.7-gke.24

Fixes an issue that can cause Horizontal Pod Autoscaler to increase the replica count to the maximum during a rolling update, regardless of other autoscaling factors.

Upgrade Istio to 1.1.13, to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount a Denial of Service (DoS) attack against services using Istio.

The node image for Container-Optimized OS (COS) is now cos-73-11647-267-0.

Rapid channel

1.14.3-gke.11

GKE 1.14.3-gke.11 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.

This version mitigates the vulnerability described in the security bulletin published on August 5, 2019.

Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount a Denial of Service (DoS) attack against services using Istio.

The node image for Container-Optimized OS (COS) is now cos-73-11647-267-0.

New features

Config Connector is a Kubernetes addon that allows you to manage your Google Cloud resources through Kubernetes configuration.

Rollout schedule

The rollout schedule is now included in Upgrades.

August 12, 2019

Version updates

GKE cluster versions have been updated.

Important information about v1.10.x nodes

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

Specifically, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly. Additionally, security patches are not applied to v1.10.x and below.

We previously published a notice that Google would enable node auto-upgrade to node pools running v1.10.x or lower, to bring those clusters into a supported configuration and mitigate the incompatibility risk described above. To allow for sufficient time for customers to complete the upgrade themselves, Google postponed upgrading cluster control planes to 1.13 until mid-September 2019. Please plan your manual node upgrade to keep your clusters healthy and up to date.
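One way to perform that manual node upgrade is with `gcloud container clusters upgrade`, which upgrades the nodes of a node pool to a chosen version. The cluster name, zone, node pool, and target version below are placeholders; pick a supported v1.11.x or higher version for your cluster.

```shell
# Manually upgrade the nodes of one node pool to a supported version.
# my-cluster, us-central1-a, and default-pool are placeholders.
gcloud container clusters upgrade my-cluster \
    --zone us-central1-a \
    --node-pool default-pool \
    --cluster-version 1.11.10-gke.6
```

Repeat the command for each node pool still running v1.10.x or lower.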

Scheduled automatic upgrades

Auto-upgrades are currently paused.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

1.11.10-gke.6

Fixes the vulnerability announced in the security bulletin for August 5, 2019.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

v1.12.x

Multiple v1.12.x versions are available this week:

1.12.9-gke.13

Fixes the vulnerability announced in the security bulletin for August 5, 2019.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

1.12.9-gke.10

Fixes a problem where Vertical Pod Autoscaler would reject valid patches to Pods.

Improvements to Cluster Autoscaler.

Updates Istio to v1.0.9-gke.0.

1.12.8-gke.12

Updates Istio to v1.0.9-gke.0.

1.12.7-gke.2

Updates Istio to v1.0.9-gke.0.

Fixes a problem where the kubelet could fail to start a Pod for the first time if the node was not completely configured and the Pod's restart policy was NEVER.

v1.13.x

Multiple v1.13.x versions are available this week:

1.13.7-gke.19

Fixes the vulnerability announced in the security bulletin for August 5, 2019.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

1.13.7-gke.15

Fixes a problem where Vertical Pod Autoscaler would reject valid patches to Pods.

Improvements to Cluster Autoscaler.

You can now use Vulkan with GPUs to process graphics workloads. The Vulkan configuration directory is mounted on /etc/vulkan/icd.d in the container.

Updates Istio to v1.1.10-gke.0.

Fixes a problem where the kubelet could fail to start a Pod for the first time if the node was not completely configured and the Pod's restart policy was NEVER.

Rapid (v1.14.x)

1.14.3-gke.10

GKE 1.14.3-gke.10 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.

Fixes the vulnerability announced in the security bulletin for August 5, 2019.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

In v1.14.3-gke.10 and higher, GKE Sandbox uses the gvisor.config.common-webhooks.networking.gke.io webhook, which is created when the cluster starts and makes sandboxed nodes available faster.

Security bulletin

Kubernetes recently discovered a vulnerability, CVE-2019-11247, which allows cluster-scoped custom resource instances to be acted on as if they were namespaced objects existing in all Namespaces. This vulnerability is fixed in GKE versions also announced today. For more information, see the security bulletin.

New features

Clusters running v1.13.6-gke.0 or higher can use Shielded GKE Nodes (beta), which provide strong, verifiable node identity and integrity to increase the security of your nodes.

Rollout schedule

The rollout schedule is now included in Upgrades.

August 1, 2019

New versions available for upgrades and new clusters

During the week of July 8, 2019, a release resulted in a partial rollout. Release notes were not published at that time. Changes discussed in the rest of this entry were applied only to the following zones:

  • europe-west2-a
  • us-east1
  • us-east1-d

In those zones only, the following new versions are available:

  • 1.13.7-gke.15
  • 1.12.9-gke.10
  • 1.12.7-gke.26
  • 1.12.8-gke.12

In those zones only, the following versions are no longer available for new clusters or nodes:

  • 1.11.10-gke.5

In those zones only, clusters running v1.11.x with auto-upgrade enabled were upgraded to v1.12.7-gke.25.

Security bulletin

GKE v1.13.7.x includes patches that mitigate multiple vulnerabilities present in v1.13.6. Clusters running any v1.13.6.x version should upgrade to v1.13.7.x to mitigate these vulnerabilities, which are described in the associated security bulletins.

New features

GKE usage metering (Beta) now supports tracking actual consumption, in addition to resource requests, for clusters running v1.12.8-gke.8 and higher, v1.13.6-gke.7 and higher, or 1.14.2-gke.8 and higher. A new BigQuery table, gke_cluster_resource_consumption, is created automatically in the BigQuery dataset. For more information about this and other improvements to Usage Metering, see Usage metering (Beta).
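To inspect the new consumption data, you can query the table with the `bq` CLI. The project and dataset names below are placeholders; the schema is not described here, so this sketch just previews rows rather than assuming specific columns.

```shell
# Preview the automatically created consumption-metering table.
# my-project and my_usage_dataset are placeholders for your own
# project and the BigQuery dataset configured for usage metering.
bq query --use_legacy_sql=false \
  'SELECT * FROM `my-project.my_usage_dataset.gke_cluster_resource_consumption` LIMIT 10'
```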

Node auto-provisioning is supported on regional clusters running v1.12.x or higher.

July 29, 2019

VPC-native is no longer the default cluster network mode for new clusters created using gcloud v256.0.0 or higher. Instead, the routes-based cluster network mode is used by default. We recommend manually enabling VPC-native, to avoid exhausting routes quota.

VPC-native clusters are created by default when you use Google Cloud console or gcloud versions 251.0.0 through 255.0.0. Routes-based clusters are created by default when using the REST API.
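To avoid depending on default behavior that varies by tool and version, you can request VPC-native explicitly at creation time. A minimal sketch, with the cluster name and zone as placeholders:

```shell
# Explicitly create a VPC-native (alias IP) cluster, regardless of which
# gcloud version or API surface you use. Name and zone are placeholders.
gcloud container clusters create my-vpc-native-cluster \
    --zone us-central1-a \
    --enable-ip-alias
```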

June 27, 2019

Version updates

GKE cluster versions have been updated.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly. Additionally, security patches are not applied to v1.10.x and below.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

NOTE: As of 1.12 all kubelets are issued certificates from the cluster CA and verification of kubelet certificates is enabled automatically if all nodepools are 1.12+. We have observed that introducing older (pre 1.12) nodepools after certificate verification has started may cause connection problems for kubectl logs/exec/attach/portforward commands, and should be avoided.

Versions no longer available for upgrades and new clusters

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.8-gke.10
  • 1.11.10-gke.4
  • 1.12.7-gke.10
  • 1.12.7-gke.21
  • 1.12.7-gke.22
  • 1.12.8-gke.6
  • 1.12.8-gke.7
  • 1.12.9-gke.3
  • 1.13.6-gke.5
  • 1.13.6-gke.6
  • 1.13.7-gke.0

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

1.11.10-gke.5

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

v1.12.x

1.12.7-gke.25

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

1.12.8-gke.10

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

1.12.9-gke.7

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

v1.13.x

1.13.6-gke.13

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

1.13.7-gke.8

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

Rapid channel

1.14.3-gke.9

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associated security bulletin for more information.

Security bulletins

Patched versions are now available to address TCP vulnerabilities in the Linux kernel. For more information, see the security bulletin. In accordance with the documented support policy, patches will not be applied to GKE versions 1.10 and older.

Kubernetes recently discovered a vulnerability in kubectl, CVE-2019-11246. For more information, see the security bulletin.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Early access to test Windows Containers
  • Usage metering will become generally available

June 4, 2019

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.11.9 1.12.7-gke.10

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.8-gke.6
  • 1.11.9-gke.8
  • 1.11.9-gke.13
  • 1.14.2-gke.1 [Preview]

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

No v1.11.x versions this week.

v1.12.x

v1.12.8-gke.7 includes the following changes:

Improved Node Auto-Provisioning support for multi-zonal clusters with GPUs.

Cloud Run 0.6

v1.13.x

v1.13.6-gke.6 includes the following changes:

Improved Node Auto-Provisioning support for multi-zonal clusters with GPUs.

Cloud Run 0.6

COS images now use the NVIDIA GPU 418.67 driver. NVIDIA drivers on COS are now pre-compiled, greatly reducing driver installation time.

GKE nodes running Kubernetes v1.13.6 are affected by CVE-2019-11245. Information about the impact and mitigation of this vulnerability is available in this Kubernetes issue report. In addition to security concerns, this bug can cause Pods that must run as a specific UID to fail.

Rapid channel

v1.14.1-gke.5 is the default for new Rapid channel clusters. This version includes patched node images that address CVE-2019-11245.

GKE nodes running Kubernetes v1.14.2 are affected by CVE-2019-11245. Information about the impact and mitigation of this vulnerability is available in this Kubernetes issue report. In addition to security concerns, this bug can cause Pods that must run as a specific UID to fail.

Security bulletin

GKE nodes running Kubernetes v1.13.6 and v1.14.2 are affected by CVE-2019-11245. Information about the impact and mitigation of this vulnerability is available in this Kubernetes issue report. In addition to security concerns, this bug can cause Pods that must run as a specific UID to fail.

Changes

Currently, VPC-native is the default for new clusters created with gcloud or the Google Cloud console. However, VPC-native is not the default for new clusters created with the REST API.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Early access to test Windows Containers
  • Usage metering will become generally available
  • New clusters will begin to default to VPC-native

June 3, 2019

Corrections

Basic authentication and client certificate issuance are disabled by default for clusters created with GKE 1.12 and higher. We recommend switching your clusters to use OpenID Connect instead. However, you can still enable basic authentication and client certificate issuance manually.
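If you still need the legacy credentials on a new cluster, a sketch of the opt-in flags at creation time (the cluster name and zone are placeholders):

```shell
# Opt back in to basic auth and client certificate issuance on a
# 1.12+ cluster. my-legacy-auth-cluster and the zone are placeholders.
gcloud container clusters create my-legacy-auth-cluster \
    --zone us-central1-a \
    --enable-basic-auth \
    --issue-client-certificate
```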

To learn more about cluster security, see Hardening your cluster.

This information was inadvertently omitted from the February 27, 2019 release note. However, the documentation about cluster routing was updated.

The rollout dates for the May 28, 2019 releases are incorrect. Day 2 spanned May 29-30, day 3 is May 31, and day 4 is June 3.

May 28, 2019

Version updates

GKE cluster versions have been updated.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

Scheduled automatic upgrades

No new automatic upgrades this week; previously-announced automatic upgrades may still be ongoing.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

v1.11.10-gke.4 includes the following changes:

The node image for Container-Optimized OS (COS) is now cos-69-10895-242-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1703-0-v20190517.

Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

v1.12.x

v1.12.8-gke.6 includes the following changes:

The node image for Container-Optimized OS (COS) is now cos-69-10895-242-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1703-0-v20190517.

Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel.

The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

v1.13.x

v1.13.6-gke.5 includes the following changes:

The node image for Container-Optimized OS (COS) is now cos-u-73-11647-182-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1809-0-v20190517.

Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

Rapid channel

v1.14.2-gke.2 is the default for new Rapid channel clusters, and includes the following changes:

GKE Sandbox is supported on v1.14.x clusters running v1.14.2-gke.2 or higher.

The node image for Container-Optimized OS (COS) is now cos-u-73-11647-182-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1809-0-v20190517.

  • Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

    The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

  • Nodes using these images are now shielded VMs.

The following IP ranges have been added to default non-IP-masq iptables rules:

  • 100.64.0.0/10
  • 192.0.0.0/24
  • 192.0.2.0/24
  • 192.88.99.0/24
  • 198.18.0.0/15
  • 198.51.100.0/24
  • 203.0.113.0/24
  • 240.0.0.0/4

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Cloud Run will be upgraded
  • Istio will be upgraded for v1.13.x clusters
  • Early access to test Windows Containers, expected in early June
  • New clusters will begin to default to VPC-native

May 20, 2019

Version updates

GKE cluster versions have been updated.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
1.10.x (nodes only, completing) 1.11.8-gke.6
1.12.6-gke.10 1.12.6-gke.11
1.14.x (Alpha) versions 1.14.1-gke.4 and older 1.14.1-gke.5 (Alpha)

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

No v1.11.x versions this week.

v1.12.x

No v1.12.x versions this week.

Correction: Istio was not upgraded to 1.1.3 in v1.12.7-gke.17. The release note for May 13, 2019 has been corrected.

v1.13.x

v1.13.6-gke.0 is available.

This version includes support for GKE Sandbox.

Update Istio to v1.1.3.

Node images have been updated.

Nodes using these images are now shielded VMs.

Rapid channel

No v1.14.x versions this week.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.12.6-gke.10

New features

Google Cloud's operations suite Kubernetes Engine Monitoring is now generally available for clusters using the following GKE versions:

  • 1.12.x clusters v1.12.7-gke.17 and newer
  • 1.13.x clusters v1.13.5-gke.10 and newer
  • 1.14.x (Alpha) clusters v1.14.1-gke.5 and newer

Users of the legacy Google Cloud's operations suite are encouraged to migrate to Google Cloud's operations suite Kubernetes Engine Monitoring before support for the legacy Google Cloud's operations suite is removed.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • GKE Sandbox support for v1.14.x (Alpha) clusters
  • v1.14.x nodes will be shielded VMs
  • Early access to test Windows Containers, expected in early June

May 13, 2019

Version updates

GKE cluster versions have been updated.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

New default version

The default version for new clusters is now 1.12.7-gke.10 (previously 1.11.8-gke.6). If your cluster is using v1.12.6-gke.10, upgrade to this version to avoid a potential issue that causes auto-repairing nodes to fail.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 (continuing after unpausing node auto-upgrade) v1.11.8-gke.6
v1.11.x versions older than v1.11.8-gke.6 v1.11.8-gke.6

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

v1.11.9-gke.13
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Cloud Run for GKE now uses the default Istio sidecar injection behavior
  • Fix an issue that prevented the kubelet from seeing all GPUs available to nodes using the Ubuntu node image.

v1.12.x

v1.12.7-gke.17
  • Upgrade Ingress controller to 1.5.2
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Fix an issue that prevented the kubelet from seeing all GPUs available to nodes using the Ubuntu node image
  • Fix an issue that sets the dynamic maximum volume count to 16 if your nodes use a custom machine type. The value is now set to 128.

v1.13.x

v1.13.5-gke.10
Upgrading to GKE v1.13.x

To prepare to upgrade your clusters, read the Kubernetes 1.13 release notes and the following information. You may need to modify your cluster before upgrading.

scheduler.alpha.kubernetes.io/critical-pod is deprecated. To mark Pods as critical, use Pod priority and preemption.
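A minimal sketch of the replacement mechanism: define a PriorityClass (available as scheduling.k8s.io/v1beta1 in Kubernetes 1.13) and reference it from the Pod spec. The class name and priority value below are illustrative.

```shell
# Create a PriorityClass to use instead of the deprecated
# critical-pod annotation. workload-critical is a placeholder name.
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: workload-critical
value: 1000000
globalDefault: false
description: "Priority class for workloads that must preempt others"
EOF
```

Pods then opt in by setting `priorityClassName: workload-critical` in their spec.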

node.status.volumes.attached.devicePath is deprecated for Container Storage Interface (CSI) volumes and will not be enabled in future releases.

The built-in system:csi-external-provisioner and system:csi-external-attacher Roles are no longer automatically created. You can create your own Roles and modify your Deployments to use them.

Support for CSI drivers using 0.3 and older versions of the CSI API is deprecated. Users should upgrade CSI drivers to use the 1.0 API during the deprecation period.

Kubernetes cannot distinguish between manually-provisioned zonal and regional persistent disks with the same name. Ensure that persistent disks have unique names across the Google Cloud project. This issue does not occur when using dynamically provisioned persistent disks.

If kubelet fails to register a CSI driver, it does not make a second attempt. To work around this issue, restart the CSI driver Pod.

After resizing a PersistentVolumeClaim (PVC), the PVC is sometimes left with a spurious RESIZING condition when expansion has already completed. The condition is spurious as long as the PVC's reported size is correct. If the value of pvc.spec.capacity['storage'] matches pvc.status.capacity['storage'], the condition is spurious and you can delete or ignore it.
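One way to compare the two values with kubectl (the PVC name and namespace are placeholders; the spec side is read from the claim's storage request):

```shell
# Print the requested and reported sizes side by side. If they match,
# a lingering RESIZING condition is spurious and can be ignored.
# my-pvc and my-namespace are placeholders.
kubectl get pvc my-pvc -n my-namespace \
  -o jsonpath='requested: {.spec.resources.requests.storage}{"\n"}reported:  {.status.capacity.storage}{"\n"}'
```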

The CSI driver-registrar external sidecar container v1.0.0 has a known issue where it takes up to a minute to restart.

DaemonSets now use scheduling features that require kubelet version 1.11 or higher. Google will update kubelet to 1.11 before upgrading clusters to v1.13.x.

kubelets can no longer delete their own Node API objects.

Use of the --node-labels flag to set labels under the kubernetes.io/ and k8s.io/ prefix will be subject to restriction by the NodeRestriction admission plugin in future releases. See the admission plugin documentation for the list of allowed labels.

Rapid channel

1.14.1-gke.5

GKE v1.14.1-gke.5 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.

GKE v1.14.x has the following differences from Kubernetes 1.14.x.

You cannot yet create an alpha cluster running GKE v1.14.x. If you attempt to use the --enable-kubernetes-alpha flag, cluster creation fails.

Security bulletin

If you run untrusted code in your own multi-tenant services within Google Kubernetes Engine, we recommend that you disable Hyper-Threading to mitigate Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

New features

With GKE 1.13.5-gke.10, GKE 1.13 is now generally available for use in production. You can upgrade clusters running older v1.13.x versions manually.

GKE v1.13.x has the following differences from Kubernetes 1.13:

For information about upgrading from v1.12.x, see Upgrading to GKE v1.13.x in New versions available for upgrades and new clusters.

We are introducing Release channels, a new way to keep your GKE clusters up to date. The Rapid release channel is available, and includes v1.14.1-gke.5 (alpha). You can sign up to try release channels and preview GKE v1.14.x.

GKE Sandbox (Beta) is now available for clusters running v1.12.7-gke.17 and higher and v1.13.5-gke.15 and higher. You can use GKE Sandbox to isolate untrusted workloads in a sandbox to protect your nodes, other workloads, and cluster metadata from defective or malicious code.

Changes

For clusters running v1.12.x or higher and using nodes with less than 1 GB of memory, GKE reserves 255 MiB of memory. This is not a new change, but it was not previously noted. For more details about node resources, see Allocatable memory and CPU resources.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

April 29, 2019

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters only with auto-upgrade enabled will be upgraded as follows:

Current version Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 (continuing) 1.11.8-gke.6
1.13.4-gke.x 1.13.5-gke.10

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.12.6-gke.11
    • Nodes continue to use Docker as the default runtime.
    • Fix a performance regression introduced in 1.12.6-gke.10. This regression caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.9-gke.5
  • 1.12.7-gke.7
  • 1.13.4-gke.10
  • 1.13.5-gke.7

Fixed issues

A problem was fixed in the Stackdriver Kubernetes Monitoring (Beta) Metadata agent. This problem caused the agent to generate unnecessary log messages.

Changes

Alpha clusters running Kubernetes 1.13 and higher created with the Google Cloud CLI version 242.0.0 and higher have auto-upgrade and auto-repair disabled. Previously, you were required to disable these features manually.

Known issues

Under certain circumstances, Google-managed SSL certificates (Beta) are not being provisioned in regional clusters. If this happens, you are unable to create or update managed certificates. If you are experiencing this issue, contact Google Cloud support.

Node auto-upgrade is currently disabled. You can still upgrade node pools manually.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Node auto-upgrade will be re-enabled
  • etcd will be upgraded
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Improvements to Managed Certificates

April 26, 2019

Due to delays during the April 22 GKE release rollout, the release will not complete by April 26, 2019 as originally planned. Rollout is expected to complete by April 29, 2019 GMT.

April 25, 2019

Changes

Google Cloud's operations suite Kubernetes Monitoring users: Google Cloud's operations suite Kubernetes Monitoring logging label fields change when you upgrade your GKE clusters to GKE v1.12.6 or higher. The following changes were effective the week of March 26, 2019:

  • Kubernetes Pod labels, currently located in the metadata.userLabels field, are moved to the labels field in the LogEntry, and the label keys have a prefix of k8s-pod/. The filter expressions in your sinks, logs-based metrics, log exclusions, or queries might need to change.
  • Google Cloud's operations suite system labels that are in the metadata.systemLabels field are no longer available.

For detailed information about what changed, see the release guide for Google Cloud's operations suite Beta Monitoring and Logging, also known as Google Cloud's operations suite Kubernetes Monitoring (Beta).
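As an illustration, suppose your Pods carry a label app (the key app and the value frontend here are hypothetical). A filter expression in a sink, logs-based metric, or query that matches this label would change as follows:

```
# Before the upgrade: Pod labels live under metadata.userLabels
metadata.userLabels."app"="frontend"

# After the upgrade: the same label moves to the labels field with a k8s-pod/ prefix
labels."k8s-pod/app"="frontend"
```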

April 22, 2019

Version updates

GKE cluster versions have been updated.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

  • All 1.10.x versions, including v1.10.12-gke.14, will be upgraded to 1.11.8-gke.6

This roll-out will be phased across multiple weeks.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.11.9-gke.8

    • Node image for Container-Optimized OS updated to cos-69-10895-211-0
      • Fix a performance regression introduced in v1.11.x node images older than 1.11.9-gke.8. This regression caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
    • Upgrade Node Problem Detector to 0.6.3
  • 1.12.7-gke.10

    • Node image for Container-Optimized OS updated to cos-69-10895-211-0
      • Fix a performance regression introduced in v1.12.x node images older than v1.12.6-gke.10. This regression caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
    • Upgrade Node Problem Detector to 0.6.3
  • 1.13.5-gke.10 (Preview)

    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:

      gcloud container clusters create my-alpha-cluster \
        --cluster-version=1.13.5-gke.10 \
        --enable-kubernetes-alpha \
        --no-enable-autorepair
      
    • Upgrade Node Problem Detector to 0.6.3

The following versions are no longer available for new clusters or cluster upgrades:

  • All 1.10.x versions, including v1.10.12-gke.14

Fixed issues

A known issue in v1.12.6-gke.10 and older has been fixed in 1.12.7-gke.10. The issue caused node auto-repair to fail. Upgrading is recommended.

A known issue in 1.12.7-gke.7 and older has been fixed in 1.12.7-gke.10. The currentMetrics field now reports the correct value. The problem only affected reporting and did not impact the functionality of Horizontal Pod Autoscaler.

Deprecations

GKE v1.10.x has been deprecated, and is no longer available for new clusters, master upgrades, or node upgrades.

The Cluster.FIELDS.initial_node_count field has been deprecated in favor of nodePool.initial_node_count in the v1 and v1beta1 GKE APIs.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • etcd will be upgraded
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Improvements to Managed Certificates

April 19, 2019

You can now use Usage metering with GKE 1.12.x and 1.13.x clusters.

April 18, 2019

You can now run GKE clusters in region asia-northeast2 (Osaka, Japan) with zones asia-northeast2-a, asia-northeast2-b, and asia-northeast2-c.

The new region and zones will be included in future rollout schedules.

April 15, 2019

Version updates

GKE cluster versions have been updated.

New default version

The default version for new clusters has been updated to 1.11.8-gke.6 (previously 1.11.7-gke.12).

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

  • 1.10.x versions 1.10.12-gke.13 and older will be upgraded to 1.10.12-gke.14
  • 1.11.x versions 1.11.8-gke.5 and older will be upgraded to 1.11.8-gke.6
  • 1.12.x versions 1.12.6-gke.9 and older will be upgraded to 1.12.6-gke.10
  • 1.13.x versions 1.13.4-gke.9 and older will be upgraded to 1.13.4-gke.10 (Preview)

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.11.9-gke.5
    • Node image for Container-Optimized OS updated to cos-69-10895-201-0
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Node image for Ubuntu updated to ubuntu-gke-1804-d1703-0-v20190409
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Upgrade Cloud Run on GKE to 0.5.0
    • Upgrade containerd to 1.1.7
  • 1.12.7-gke.7
    • Node image for Container-Optimized OS updated to cos-69-10895-201-0
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Node image for Ubuntu updated to ubuntu-gke-1804-d1703-0-v20190409
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Upgrade Cloud Run on GKE to 0.5.0
    • Upgrade containerd to 1.2.6
  • 1.13.5-gke.7 (Preview)

    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:

      gcloud container clusters create my-alpha-cluster \
        --cluster-version=1.13.5-gke.7 \
        --enable-kubernetes-alpha \
        --no-enable-autorepair
      
    • Node image for Container-Optimized OS updated to cos-u-73-11647-121-0

      • Upgrade Docker from 17.03 to 18.09
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190402a

      • Upgrade Docker from 17.03 to 18.09
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Upgrade Cloud Run on GKE to 0.5.0

    • Upgrade containerd to 1.2.6

    • Improvements to volume operation metrics

    • Cluster Autoscaler is now supported for GKE 1.13 clusters

    • Fix a problem that caused the currentMetrics field for Horizontal Pod Autoscaler with 'AverageValue' target to always report unknown. The problem only affected reporting and did not impact the functionality of Horizontal Pod Autoscaler.

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.12-gke.7
  • 1.10.12-gke.9
  • 1.11.6-gke.11
  • 1.11.6-gke.16
  • 1.11.7-gke.12
  • 1.11.7-gke.18
  • 1.11.8-gke.2
  • 1.11.8-gke.4
  • 1.11.8-gke.5
  • 1.12.5-gke.5
  • 1.12.6-gke.7
  • 1.13.4-gke.1
  • 1.13.4-gke.5

Changes

Improvements have been made to the automated rules for the add-on resizer. It now uses 5 nodes as the inflection point.

Known issues

GKE 1.12.7-gke.7 and older, and 1.13.4-gke.10 and older have a known issue where the currentMetrics field for Horizontal Pod Autoscaler with AverageValue target always reports unknown. The problem only affects reporting and does not impact the functionality of Horizontal Pod Autoscaler.

This issue has already been fixed in GKE 1.13.5-gke.7.
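For reference, the reporting bug affected HorizontalPodAutoscaler objects that use an AverageValue target. A minimal sketch of such an object (the metric and workload names here are hypothetical):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment         # hypothetical workload
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: packets-per-second     # hypothetical custom metric
      target:
        type: AverageValue           # the target type affected by the reporting issue
        averageValue: "1k"
```

On affected versions, inspecting such an object could show the current metric value as unknown, even though scaling decisions were still made correctly.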

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Version 1.10.x will soon be unavailable for new clusters.
  • The known issue published this week about Horizontal Pod Autoscaler metrics will be fixed in GKE 1.12.x as well.
  • etcd will be upgraded.

April 2, 2019

The following GKE releases contain a security update that addresses CVE-2019-9900 and CVE-2019-9901. For more information, see the security bulletin.

  • 1.10.12-gke.14
  • 1.11.6-gke.16
  • 1.11.7-gke.18
  • 1.11.8-gke.6
  • 1.12.6-gke.10
  • 1.13.4-gke.10 (Public preview)
    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:
      gcloud container clusters create my-alpha-cluster \
        --cluster-version=1.13.4-gke.10 \
        --enable-kubernetes-alpha \
        --no-enable-autorepair
          

Rollout schedule

The rollout schedule is now included in Upgrades.

March 26, 2019

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.11.8-gke.5
    • Improvements to Cluster Autoscaler
    • Improvements to gVisor
  • 1.12.6-gke.7
    • Improvements to Cluster Autoscaler
    • Update Ingress controller to 1.5.1
    • Update containerd to 1.2.5
  • 1.13.4-gke.5 (public preview)

    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:
      gcloud container clusters create my-alpha-cluster \
      --cluster-version=1.13.4-gke.5 \
      --enable-kubernetes-alpha \
      --no-enable-autorepair
      
    • Improvements to Vertical Pod Autoscaler
    • Improvements to gVisor
    • Update Ingress controller to 1.5.1

    • Update containerd to 1.2.5

    • Cluster Autoscaler is not operational in this GKE version.

Rollout schedule

The rollout schedule is now included in Upgrades.

March 19, 2019

GKE 1.13 public preview

GKE 1.13.4-gke.1 is available for alpha clusters as a public preview. The preview period helps Google Cloud to improve the quality of the final GA release, and allows you to test the new version earlier.

To create a cluster using this version, use the following command, replacing my-alpha-cluster with the name of your cluster. Use the exact cluster version provided in the command. You can add other configuration options, but do not change any of the ones below.

gcloud container clusters create my-alpha-cluster \
  --cluster-version=1.13.4-gke.1 \
  --enable-kubernetes-alpha \
  --no-enable-autorepair

Alpha clusters become unavailable after 30 days.

Changes

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.5-gke.5
  • 1.11.6-gke.2
  • 1.11.6-gke.3
  • 1.11.6-gke.6
  • 1.11.6-gke.8
  • 1.11.7-gke.4
  • 1.11.7-gke.6

GKE 1.12.5-gke.10 is no longer available for new clusters, master upgrades, or node upgrades.

Last week, we began to make GKE 1.12.5-gke.10 unavailable for new clusters or upgrades, due to increased error rates. That process completes this week.

If you have already upgraded to 1.12.5-gke.10 and are experiencing elevated error rates, you can contact support.

Automated master and node upgrades

The following versions will be updated for masters and nodes with auto-upgrade enabled. Automated upgrades are rolled out over multiple weeks to ensure cluster stability.

  • 1.11.6 Masters and nodes with auto-upgrade enabled which are using versions 1.11.6-gke.10 or earlier will begin to be upgraded to 1.11.7-gke.12.
  • 1.11.7 Masters and nodes with auto-upgrade enabled which are using version 1.11.7-gke.11 or earlier will begin to be upgraded to 1.11.7-gke.12.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Nodes with auto-upgrade enabled and masters running 1.11.x will be upgraded to 1.11.7-gke.12
  • GKE 1.12.x masters will begin using the containerd runtime with an upcoming release.

March 14, 2019

GKE 1.12.5-gke.10 is no longer available for new clusters or master upgrades.

We have received reports of master nodes experiencing elevated error rates when upgrading to version 1.12.5-gke.10 in all regions. Therefore, we have begun the process of making it unavailable for new clusters or upgrades.

If you have already upgraded to 1.12.5-gke.10 and are experiencing elevated error rates, you can contact support.

March 11, 2019

You can now run GKE clusters in region europe-west6 (Zürich, Switzerland) with zones europe-west6-a, europe-west6-b, and europe-west6-c.

The new region and zones will be included in future rollout schedules.

March 5, 2019

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.12-gke.7 - This version is being made available again after being previously removed.
  • 1.10.12-gke.9
  • 1.11.7-gke.12
  • 1.12.5-gke.10

Node image updates

Container-Optimized OS with containerd image for GKE 1.11 clusters

The Container-Optimized OS with containerd node image has been upgraded from cos-69-10895-138-0-c115 to cos-69-10895-138-0-c116 for clusters running Kubernetes 1.11+.

See COS image release notes and the containerd v1.1.5 to v1.1.6 changelog for more information.

Container-Optimized OS with containerd image for GKE 1.12 clusters

The Container-Optimized OS with containerd node image has been upgraded from cos-69-10895-138-0-c123 to cos-69-10895-138-0-c124 for clusters running Kubernetes 1.12.5-gke.10+ and alpha clusters running Kubernetes 1.13+.

cos-69-10895-138-0-c124 upgrades Docker to v18.09.0.

See COS image release notes and the containerd v1.2.3 to v1.2.4 changelog for more information.

Other Updates

  • GKE Ingress has been upgraded from v1.4.3 to v1.5.0 for clusters running 1.12.5-gke.10+. For details, see the detailed changelog and release notes.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Nodes with auto-upgrade enabled and masters running 1.11.x will be upgraded to 1.11.7-gke.12

February 27, 2019

GKE 1.12.5-gke.5 is generally available and includes Kubernetes 1.12. Kubernetes 1.12 provides faster auto-scaling, faster affinity scheduling, topology-aware dynamic provisioning of storage, and advanced audit logging. For more information, see Digging into Kubernetes 1.12 on the Google Cloud blog.

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.12.5-gke.5

Rollout schedule

The rollout schedule is now included in Upgrades.

Known issues

A known issue in GKE 1.12.5-gke.5 and all 1.11.x versions below 1.11.6 can cause significant delays when the cluster autoscaler adds new nodes to the cluster, if the cluster has hundreds of unschedulable Pods due to resource starvation. It may require a few minutes before all Pods are scheduled, depending on the number of unschedulable Pods and the size of the cluster. The workaround is to add an adequate number of nodes manually. If adding nodes does not resolve the issue, contact support.

A known issue in GKE 1.12.5-gke.5 can cause unbounded memory usage. This is caused by a memory leak in ReflectorMetricsProvider. See this issue for further details. This will be fixed in an upcoming patch.

A known issue in GKE 1.12.5-gke.5 slows down or stops Pod scheduling in clusters with large numbers of terminated Pods. See this issue for further details. This will be fixed in an upcoming patch.
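As a sketch of the manual workaround for the first issue above, you can grow the affected node pool with gcloud; the cluster, node pool, zone, and node count shown are placeholders:

```
# Manually add nodes to relieve resource starvation while the
# cluster autoscaler catches up; substitute your own values.
gcloud container clusters resize my-cluster \
  --node-pool my-pool \
  --num-nodes 10 \
  --zone us-central1-a
```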

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Nodes with auto-upgrade enabled and masters running 1.10 will begin to be upgraded to 1.11

February 18, 2019

Version updates

GKE cluster versions have been updated.

New default version for new clusters

Kubernetes version 1.11.7-gke.4 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.7-gke.6

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.x

Node image updates

The Container-Optimized OS node image has been upgraded from cos-69-10895-123-0 to cos-69-10895-138-0. See COS image release notes for more information.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

GKE Ingress has been upgraded from v1.4.2 to v1.4.3 for clusters running 1.11.7-gke.6+. For details, see the detailed changelog and release notes.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • GKE 1.12 will be made generally available.
  • Nodes with auto-upgrade enabled and masters running 1.10 will begin to be upgraded to 1.11.7-gke.4.

February 11, 2019

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions will be available for new clusters and for opt-in master upgrades of existing clusters this week according to the rollout schedule:

  • 1.11.6-gke.11
  • 1.11.7-gke.4
  • 1.10.12-gke.7

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.6-gke.0

Node image updates

The Ubuntu node image has been upgraded to ubuntu-gke-1604-d1703-0-v20190124 for clusters running 1.10.12-gke.7.

The Ubuntu node image has been upgraded to ubuntu-gke-1804-d1703-0-v20190124 for clusters running 1.11.6-gke.11, 1.11.7-gke.4 and 1.12.5-gke.5 (EAP).

Rollout schedule

The rollout schedule is now included in Upgrades.

January 28, 2019

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See these instructions to get a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

For information about changes expected in the coming weeks, see Coming soon.

New default version for new clusters

GKE version 1.11.6-gke.2 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes Engine versions are available, according to this week's rollout schedule, for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.6-gke.6

GKE Ingress controller update

GKE Ingress has been upgraded from v1.4.1 to v1.4.2 for clusters running 1.11.6-gke.6+. For details, see the change log and the release notes.

Fixed Issues

A bug in version 1.10.x and 1.11.x may lead to periodic persistent disk commit latency spikes exceeding one second. This may trigger master re-elections of GKE components and cause short (a few seconds) periods of unavailability in the cluster control plane. The issue is fixed in version 1.11.6-gke.6.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • 25% of the upgrades from 1.10 to 1.11.6-gke.2 will be complete.
  • Version 1.11.6-gke.8 will be made available.
  • Version 1.10 will be made unavailable.

January 21, 2019

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See these instructions to get a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

For information about changes expected in the coming weeks, see Coming soon.

New default version for new clusters

Kubernetes version 1.10.11-gke.1 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes Engine versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.12-gke.1
  • 1.11.6-gke.3

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.6-gke.13
  • 1.10.7-gke.11
  • 1.10.7-gke.13
  • 1.10.9-gke.5
  • 1.10.9-gke.7
  • 1.11.2-gke.26
  • 1.11.3-gke.24
  • 1.11.4-gke.13

Scheduled master auto-upgrades

  • Cluster masters running 1.10.x will be upgraded to 1.10.11-gke.1.
  • Cluster masters running 1.11.2 through 1.11.4 will be upgraded to 1.11.5-gke.5.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.10.x nodes with auto-upgrade enabled will be upgraded to 1.10.11-gke.1.
  • 1.11.2 through 1.11.4 nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.5.

Changes

GKE no longer sets --max-nodes-total, because that limit is inaccurate when the cluster uses Flexible Pod CIDR ranges. This change will be gated in 1.11.7+.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • GKE 1.11.6-gke.6 will be available.
  • A new COS image will be available.

January 14, 2019

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See these instructions to get a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes Engine versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.12-gke.0
  • 1.11.6-gke.0
  • 1.11.6-gke.2

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

Scheduled master auto-upgrades

  • Cluster masters running 1.9.x will be upgraded to 1.10.9-gke.5.
  • Cluster masters running 1.11.2-gke.25 will be upgraded to 1.11.2-gke.26.
  • Cluster masters running 1.11.3-gke.23 will be upgraded to 1.11.3-gke.24.
  • Cluster masters running 1.11.4-gke.12 will be upgraded to 1.11.4-gke.13.
  • Cluster masters running 1.11.5-gke.4 will be upgraded to 1.11.5-gke.5.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.9.x nodes with auto-upgrade enabled will be upgraded to 1.10.9-gke.5.
  • 1.11.2-gke.25 nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.26.
  • 1.11.3-gke.23 nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.24.
  • 1.11.4-gke.12 nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.13.
  • 1.11.5-gke.4 nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.5.

GKE Ingress controller update

The GKE Ingress controller has been upgraded from v1.4.0 to v1.4.1 for clusters running 1.11.6-gke.2+. For details, see the change log and the release notes.

Fixed Issues

If you use Stackdriver Kubernetes Monitoring Beta with structured JSON logging, an issue with the parsing of structured JSON log entries was introduced in GKE v1.11.4-gke.12. See the release guide for Stackdriver Kubernetes Monitoring. This is fixed by upgrading your cluster to the following version:

  • 1.11.6-gke.2

Users of GKE 1.11.2.x, 1.11.3-gke.18, 1.11.4-gke.8, or 1.11.5-gke.2 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This is fixed by the automatic upgrades to masters and nodes that have auto-upgrade enabled.

A problem in Endpoints API object validation could prevent updates during an upgrade, leading to stale network information for Services. Symptoms of the problem include failed healthchecks with a 502 status code or a message such as Forbidden: Cannot change NodeName. This is fixed by the automatic upgrades to masters and nodes that have auto-upgrade enabled.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • All GKE 1.10.x masters will be upgraded to the latest 1.10 version.
  • All GKE 1.11.0 through 1.11.4 masters will be upgraded to the latest 1.11.5 version.

January 8, 2019

The rollout beginning January 8, 2019 has been paused after two days. This is being done as a caution, so that we can investigate an issue that will be fixed in next week's rollout. This is not a bug in any GKE version currently available or planned to be made available.

December 17, 2018

Version updates

GKE cluster versions have been updated.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.2-gke.26
  • 1.11.3-gke.24
  • 1.11.4-gke.13
  • 1.11.5-gke.5

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.2-gke.18
  • 1.11.2-gke.20
  • 1.11.3-gke.18
  • 1.11.4-gke.8

Scheduled master auto-upgrades

Remaining cluster masters running GKE 1.9.x will be upgraded to GKE 1.10.9-gke.5 in January 2019.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.11.2-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.25
  • 1.11.3-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.23
  • 1.11.4-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.12
  • 1.11.5-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.4

Fixed Issues

Users upgrading to GKE 1.11.2.x, 1.11.3-gke.18, 1.11.4-gke.8, or 1.11.5-gke.2 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is fixed by upgrading your cluster to one of the following versions:

  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

A problem in Endpoints API object validation could prevent updates during an upgrade, leading to stale network information for Services. Symptoms of the problem include failed healthchecks with a 502 status code or a message such as Forbidden: Cannot change NodeName. If you encounter this problem, upgrade your cluster to one of the following versions:

  • 1.11.2-gke.26
  • 1.11.3-gke.24
  • 1.11.4-gke.13
  • 1.11.5-gke.5

This problem can also affect earlier versions of GKE, but the fix is not yet available for those versions. If you are running an earlier version and encounter this issue, contact support.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Remaining GKE 1.9.x masters are expected to be upgraded in January 2019.

December 10, 2018

Version updates

GKE cluster versions have been updated.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.11-gke.1
  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.x
  • 1.10.6-gke.11

Scheduled master auto-upgrades

We will begin upgrading cluster masters running GKE 1.9.x to GKE 1.10.9-gke.5. The upgrade will be completed in January 2019.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.11.2-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.25
  • 1.11.3-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.23
  • 1.11.4-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.12
  • 1.11.5-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.4

Node image updates

The Container-Optimized OS node image has been upgraded to cos-stable-69-10895-91-0 for clusters running Kubernetes 1.11.2, 1.11.3, 1.11.4, and 1.11.5.

Fixed Issues

Users upgrading to GKE 1.11.3 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is fixed by upgrading your GKE 1.11.3 clusters to 1.11.3-gke.23.

Users modifying or upgrading existing GKE 1.11.x clusters that use Alias IP may experience network failures due to a mismatch between the new IP range assigned to the Pods and the alias IP address range for the nodes. This is fixed by upgrading your GKE 1.11.x clusters to one of the following versions:

  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

Changes

Node Problem Detector (NPD) has been upgraded from 0.5.0 to 0.6.0 for clusters running GKE 1.10.11-gke.1+ and 1.11.5-gke.1+. For details, see the upstream pull request.

Known Issues

In GKE v1.11.4-gke.12 and later, if you use Stackdriver Kubernetes Monitoring Beta with structured JSON logging, there is an issue with the parsing of structured JSON log entries. As a workaround, you can downgrade to GKE 1.11.3. For more information, see the release guide for Stackdriver Kubernetes Monitoring.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • All GKE 1.9.x masters will be upgraded to 1.10.9-gke.5.

December 4, 2018

Version updates

GKE cluster versions have been updated.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.4-gke.8

Node image updates

Ubuntu node image has been upgraded to ubuntu-gke-1804-d1703-0-v20181113.manifest for clusters running Kubernetes 1.11.4-gke.8.

Changes:
  • The following warning is now displayed to SSH clients that connect to Nodes using SSH or to run remote commands on Nodes over an SSH connection:
    WARNING: Any changes on the boot disk of the node must be made via
    DaemonSet in order to preserve them across node (re)creations.
    Node will be (re)created during manual-upgrade, auto-upgrade,
    auto-repair or auto-scaling.

Changes

  • You can now drain node pools and delete Nodes in parallel.
  • GKE data in Cloud Asset Inventory and Search is now available in near-real-time. Previously, data was dumped at 6-hour intervals.

Fixed Issues

When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem with provisioning the ExternalIP on one or more Nodes causes some kubectl commands to fail. The following error is logged in the kube-apiserver log:

Failed to getAddresses: no preferred addresses found; known addresses

This issue is fixed in GKE 1.11.4-gke.8. If you can't upgrade to that version, you can work around this issue by following these steps:

  1. Determine which Nodes have no ExternalIP set:

    kubectl get nodes -o wide

    Look for entries where the last column is <none>.

  2. Restart affected nodes.
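The check in step 1 can be scripted. The snippet below runs a filter over sample `kubectl get nodes -o wide` output (the node names and addresses are hypothetical); with `-o wide`, EXTERNAL-IP is typically the seventh column, but column positions vary across kubectl versions, so adjust the `$7` accordingly:

```shell
# Hypothetical sample of `kubectl get nodes -o wide` output.
kubectl_out='NAME    STATUS  ROLES   AGE  VERSION  INTERNAL-IP  EXTERNAL-IP
node-a  Ready   <none>  1d   v1.11.3  10.0.0.1     35.1.2.3
node-b  Ready   <none>  1d   v1.11.3  10.0.0.2     <none>'

# Print the names of nodes whose EXTERNAL-IP column is <none>.
# Against a live cluster, pipe `kubectl get nodes -o wide` into the same awk filter.
printf '%s\n' "$kubectl_out" | awk 'NR > 1 && $7 == "<none>" { print $1 }'
```

In this sample only `node-b` is printed, so only that instance would need a restart.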

Known Issues

Users upgrading to GKE 1.11.3 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is expected to be fixed in the coming weeks.

To work around this problem, you can create the BGPConfigurations.crd.projectcalico.org resource manually:

  1. Copy the following script into a file named bgp.yaml:
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: bgpconfigurations.crd.projectcalico.org
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: BGPConfiguration
        plural: bgpconfigurations
        singular: bgpconfiguration
        
  2. Apply the change to the affected cluster using the following command:
    kubectl apply -f bgp.yaml

Users modifying or upgrading existing GKE 1.11.x clusters that use Alias IP may experience network failures due to a mismatch between the new IP range assigned to the Pods and the alias IP address range for the nodes. This is expected to be fixed in the coming weeks.

To work around this problem, follow these steps. Use the name of your node in place of [NODE_NAME], and use your cluster's zone in place of [ZONE].

  1. Cordon the affected node:
    kubectl cordon [NODE_NAME]
  2. Drain the node of all workloads:
    kubectl drain [NODE_NAME]
  3. Delete the Node object from Kubernetes:
    kubectl delete nodes [NODE_NAME]
  4. Reboot the node. This step is not optional:
    gcloud compute instances reset --zone [ZONE] [NODE_NAME]
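As a convenience, the four steps can be wrapped in a small script. This is a sketch, not an official tool: the node name and zone defaults below are hypothetical placeholders, `DRY_RUN` defaults to 1 so the script only prints the commands until you set `DRY_RUN=0`, and `--ignore-daemonsets` is added because drain otherwise refuses to evict DaemonSet-managed Pods:

```shell
# Sketch of the cordon/drain/delete/reset workaround above.
DRY_RUN="${DRY_RUN:-1}"                      # 1 = print commands only
NODE_NAME="${NODE_NAME:-gke-example-node}"   # hypothetical node name
ZONE="${ZONE:-us-central1-a}"                # hypothetical zone

run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run kubectl cordon "$NODE_NAME"
run kubectl drain "$NODE_NAME" --ignore-daemonsets
run kubectl delete nodes "$NODE_NAME"
# The reboot is mandatory; deleting the Node object alone does not resolve the mismatch.
run gcloud compute instances reset --zone "$ZONE" "$NODE_NAME"
```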

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • We expect to begin upgrading cluster masters running GKE 1.9.x to 1.10.9-gke.5.
  • An updated Container-Optimized OS node image, including containerd 1.1.5
  • Support for enabling Node auto-upgrade and auto-repair when creating or modifying node pools for GKE 1.11 clusters running Ubuntu node images

November 26, 2018

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.6-gke.13
  • 1.10.7-gke.13
  • 1.10.9-gke.7
  • 1.11.2-gke.20
  • 1.11.3-gke.18

Node image updates

Container-Optimized OS node image has been upgraded to cos-stable-69-10895-91-0 for clusters running Kubernetes 1.10.9 and Kubernetes 1.11.3.

Changes:

  • Bug fix for pod hanging when executing a file in NFS path

See COS image release notes for more information.

Ubuntu node image has been upgraded to ubuntu-gke-1804-bionic-20180921 for clusters running Kubernetes 1.11.3.

Changes:
  • Add GPU support on Ubuntu

Known Issues

When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem with provisioning the ExternalIP on one or more Nodes causes some kubectl commands to fail. The following error is logged in the kube-apiserver log:

Failed to getAddresses: no preferred addresses found; known addresses

You can work around this issue by following these steps:

  1. Determine which Nodes have no ExternalIP set:
    kubectl get nodes -o wide

    Look for entries where the last column is <none>.

  2. Restart affected nodes.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Vertical Pod Autoscaler (beta) is now available on 1.11.3-gke.11 and higher.

November 12, 2018

Version updates

GKE cluster versions have been updated.

New default version for new clusters

Kubernetes version 1.9.7-gke.11 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.9.7-gke.11
  • 1.10.6-gke.11
  • 1.10.7-gke.11
  • 1.10.9-gke.5
  • 1.11.2-gke.18

Scheduled master auto-upgrades

Cluster masters will be auto-upgraded as described below:

  • All clusters running 1.9.7 will be upgraded to 1.9.7-gke.11
  • All clusters running 1.10.6 will be upgraded to 1.10.6-gke.11
  • All clusters running 1.10.7 will be upgraded to 1.10.7-gke.11
  • All clusters running 1.10.9 will be upgraded to 1.10.9-gke.5
  • All clusters running 1.11.2 will be upgraded to 1.11.2-gke.18

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.7-gke.7
  • 1.10.6-gke.9
  • 1.10.7-gke.9
  • 1.10.9-gke.3
  • 1.11.2-gke.15

Known Issues

When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem with provisioning the ExternalIP on one or more Nodes causes some kubectl commands to fail. The following error is logged in the kube-apiserver log:

Failed to getAddresses: no preferred addresses found; known addresses

You can work around this issue by following these steps:

  1. Determine which Nodes have no ExternalIP set:
    kubectl get nodes -o wide

    Look for entries where the last column is <none>.
  2. Restart affected nodes.

Other Updates

Patch 2 for Tigera Technical Advisory TTA-2018-001. See the security bulletin for further details.

Patch for Kubernetes vulnerability CVE-2018-1002105. See the security bulletin for more details.

November 5, 2018

Version updates

GKE cluster versions have been updated.

New default version for new clusters

Kubernetes version 1.9.7-gke.7 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.9.7-gke.7
  • 1.10.6-gke.9
  • 1.10.7-gke.9
  • 1.10.9-gke.3
  • 1.11.2-gke.15

Scheduled master auto-upgrades

Cluster masters will be auto-upgraded as described below:

  • All clusters running 1.9.x will be upgraded to 1.9.7-gke.7
  • All clusters running 1.10.6 will be upgraded to 1.10.6-gke.9
  • All clusters running 1.10.7 will be upgraded to 1.10.7-gke.9
  • All clusters running 1.10.9 will be upgraded to 1.10.9-gke.3
  • All clusters running 1.11.2 will be upgraded to 1.11.2-gke.15

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.7-gke.6
  • 1.10.6-gke.6
  • 1.10.7-gke.6
  • 1.10.9-gke.0
  • 1.11.2-gke.9

Other Updates

Patch 1 for Tigera Technical Advisory TTA-2018-001. See the security bulletin for further details. The November 12 release contains additional fixes that address TTA-2018-001, and we recommend that customers upgrade to that release.

Rollout schedule

The rollout schedule is now included in Upgrades.

November 1, 2018

New Features

Node auto-provisioning is now available in beta.

October 30, 2018

Version updates

GKE cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the Kubernetes versions you can run on your GKE masters and nodes.

New versions available for upgrades and new clusters

GKE 1.11.2-gke.9 is now generally available.

  • You can now select Container-Optimized OS with containerd images when creating, modifying, or upgrading a cluster to GKE v1.11. Visit Using Container-Optimized OS with containerd for details.

  • The CustomResourceDefinition API supports a versions list field (and deprecates the previous singular version field) that you can use to support multiple versions of custom resources you have developed, to indicate the stability of a given custom resource. All versions must currently use the same schema, so if you need to add a field, you must add it to all versions. Currently, versions only indicate the stability of your custom resource, and do not allow for any difference in functionality among versions. For more information, visit Versions of CustomResourceDefinitions.

  • Kubernetes 1.11 introduces beta support for increasing the size of an existing PersistentVolume. To increase the size of a PersistentVolume, edit the PersistentVolumeClaim (PVC) object. Kubernetes expands the file system automatically.

    Kubernetes 1.11 also includes alpha support for expanding an online PersistentVolume (one which is in use by a running deployment). To test this feature, use an alpha cluster.

    Shrinking persistent volumes is not supported. For more details, visit Resizing a volume containing a file system.

  • Subresources allow you to add capabilities to custom resources. You can enable /status and /scale REST endpoints for a given custom resource. You can access these endpoints to view or modify the behavior of the custom resource, using PUT, POST, or PATCH requests. Visit Subresources for details.
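As an illustration of the `versions` list described above, a hypothetical CRD manifest (the `widgets.example.com` group and kind are made up for this sketch) might look like:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical CRD
spec:
  group: example.com
  scope: Namespaced
  versions:                        # replaces the deprecated singular `version` field
  - name: v1beta1
    served: true
    storage: false
  - name: v1
    served: true
    storage: true                  # exactly one version is marked as the storage version
  names:
    kind: Widget
    plural: widgets
    singular: widget
```

Remember that in this release all listed versions must share the same schema.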
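The volume-expansion flow described in the list above is driven entirely from the PVC: you raise `spec.resources.requests.storage` and Kubernetes does the rest. A hedged sketch, where the claim name and sizes are hypothetical and the claim's StorageClass must have `allowVolumeExpansion: true`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim          # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi            # raised from 10Gi; the volume and filesystem grow to match
```

Applying the edited manifest triggers the resize; lowering the request to shrink the volume is not supported.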

Also, 1.10.9-gke.0 is available.

Scheduled master auto-upgrades

  • Cluster masters running GKE 1.10.6 will be upgraded to 1.10.6-gke.6.
  • Cluster masters running GKE 1.10.7 will be upgraded to 1.10.7-gke.6.

Fixed Issues

GKE 1.10.7-gke.6 and 1.11.2-gke.9 fix an issue that is present in GKE 1.10.6-gke.2 and higher and 1.11.2-gke.4 and higher, where master component logs are missing from Stackdriver Logging.

Other Updates

Container-Optimized OS node image has been upgraded to `cos-beta-69-10895-52-0` for clusters running Kubernetes 1.11.2-gke.9, 1.10.9-gke.0, or 1.10.7-gke.6. See COS image release notes for more information.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Cluster templates are now available when creating new GKE clusters in Google Cloud console.

Changes

The kubectl command on new nodes has been upgraded from version 1.9 to 1.10. The kubectl version is always one version behind the highest GKE version, to ensure compatibility with all supported versions.

Known Issues

In GKE 1.10.6-gke.2 and higher and 1.11.2-gke.4 and higher, master component logs are missing from Stackdriver Logging. This is due to an issue in the version of fluentd used in those versions of GKE.

Update: This issue is fixed in GKE 1.10.7-gke.6 and 1.11.2-gke.9, available from October 30, 2018.

October 22, 2018

Fixed

Kubernetes 1.11.0+: Fixes a bug in kube-dns where hostnames in SRV records were incorrectly compressed.

Version updates

GKE cluster versions have been updated.

Scheduled master auto-upgrades

  • 20% of cluster masters running Kubernetes versions 1.10.6-gke.x will be updated to Kubernetes 1.10.6-gke.6, according to this week's rollout schedule.
  • 20% of cluster masters running Kubernetes versions 1.10.7-gke.x will be updated to Kubernetes 1.10.7-gke.6, according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.6-gke.2
  • 1.11.1-gke.1 (EAP)

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Authorized networks is now generally available.

You can now run GKE clusters in region asia-east2 (Hong Kong) with zones asia-east2-a, asia-east2-b, and asia-east2-c.

October 18, 2018

Changes

Node auto-upgrades are enabled by default for clusters and node pools created with the Google Cloud console.

October 8, 2018

Known Issues

All GKE v1.10.6 releases include a problem with Ingress load balancing. The problem was first reported in the release notes for September 18, 2018.

The problem is fixed in GKE v1.10.7 and higher. However, it cannot be fixed in GKE v1.10.6. If your cluster uses Ingress, do not upgrade to v1.10.6. Do not use GKE v1.10.6 for new clusters. If your cluster does not use Ingress for load balancing and you cannot upgrade to GKE v1.10.7 or higher, you can still use GKE v1.10.6.

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.6-gke.6
  • 1.10.7-gke.6
  • 1.11.2-gke.9 as EAP

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.6-gke.4
  • 1.10.7-gke.2

Node image updates

Container-Optimized OS node image cos-dev-69-10895-23-0 is now available. See COS image release notes for more information.

Container-Optimized OS with containerd node image cos-b-69-10895-52-0-c110 is now available. See COS image release notes for more information.

Rollout schedule

The rollout schedule is now included in Upgrades.

October 2, 2018

New Features

Private clusters is now generally available.

September 21, 2018

New Features

Container-native load balancing is now available in beta.

September 18, 2018

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.6-gke.4
  • 1.10.7-gke.2
  • 1.11.2-gke.4 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.11.2-gke.4 in Alpha Clusters.

Scheduled master auto-upgrades

20% of cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.7-gke.6, according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.6-gke.2
  • 1.9.7-gke.5
  • 1.10.6-gke.3
  • 1.10.7-gke.1
  • 1.11.2-gke.2 (EAP version)
  • 1.11.2-gke.3 (EAP version)

Rollout schedule

The rollout schedule is now included in Upgrades.

September 5, 2018

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.6-gke.3
  • 1.10.7-gke.1

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.10.x will be updated to Kubernetes 1.10.6-gke.2 according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.9.7-gke.5 according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.2-gke.4
  • 1.10.4-gke.3
  • 1.10.5-gke.4
  • 1.10.6-gke.1

Rollout schedule

The rollout schedule is now included in Upgrades.

Fixes

  • 1.10.7-gke.1 fixes an issue where preempted GPU Pods would restart without proper GPU libraries.

August 20, 2018

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

  • 1.11.2-gke.3 (preview)
  • 1.10.6-gke.2
  • 1.9.7-gke.6

Scheduled master auto-upgrades

Auto-upgrades of Kubernetes 1.8.x clusters to 1.9.7-gke.5 continues for the second week. You can always upgrade your Kubernetes 1.8 masters manually.

Node image updates

Container-Optimized OS node image has been upgraded from cos-stable-66-10452-109-0 to cos-dev-69-10895-23-0 for clusters running Kubernetes 1.10.6-gke.2 and Kubernetes 1.11.2-gke.3. See COS image release notes for more information.

Container-Optimized OS node image has been upgraded from cos-stable-65-10323-98-0-p2 to cos-stable-65-10323-99-0-p2 for clusters running Kubernetes 1.9.7-gke.6. See COS image release notes for more information.

These images contain a fix for an L1 Terminal Fault vulnerability.

Ubuntu node image has been upgraded from ubuntu-gke-1804-bionic-20180718 to ubuntu-gke-1804-bionic-20180814 for clusters running Kubernetes 1.11.2-gke.3.

Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-20180731-1 to ubuntu-gke-1604-xenial-20180814-1 for clusters running Kubernetes 1.10.6-gke.2 and Kubernetes 1.9.7-gke.6.

These images contain a fix for an L1 Terminal Fault vulnerability.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

  • Cloud binary authorization is promoted to Beta for GKE clusters.
  • GCE-Ingress has been upgraded to version 1.3.0. HTTP2 support for Ingress is promoted to Beta.
  • Private endpoints are promoted to Beta, for customers using private clusters. At cluster creation time, customers can now choose to use the Kubernetes master's private IP address as their API server endpoint.

Fixes

  • This week's releases address an L1 Terminal Fault vulnerability. Customers running containers from different customers on the same GKE Node, as well as customers using COS images, should prioritize updating those environments.

August 13, 2018

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • Kubernetes 1.11.2-gke.2 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.11.0-gke.1 in Alpha Clusters.
  • 1.10.6-gke.1

Scheduled master auto-upgrades

  • 10% of cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.9.7-gke.5, according to this week's rollout schedule.

  • Cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.7-gke.5, according to this week's rollout schedule.

  • Cluster masters running Kubernetes versions 1.10.x will be updated to Kubernetes 1.10.6-gke.1, according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.8.x

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

  • Containerd integration on the Container-Optimized OS (COS) image is now beta. You can now create a cluster or a node pool with image type cos_containerd. Refer to Container-Optimized OS with containerd for details.

August 6, 2018

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • Kubernetes 1.9.7-gke.5 is now generally available for use with Kubernetes Engine clusters.

New default version for new clusters

Kubernetes version 1.9.7-gke.5 is the default version for new clusters, available according to this week's rollout schedule.

Scheduled master auto-upgrades

Cluster masters running Kubernetes version 1.8.10-gke.0 will be updated to Kubernetes 1.8.10-gke.2, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.8.12-gke.1 and 1.8.12-gke.2 will be updated to Kubernetes 1.8.12-gke.3, according to this week's rollout schedule.

Cluster masters running Kubernetes version 1.9.6-gke.1 will be updated to Kubernetes 1.9.6-gke.2, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.9.7-gke.0, 1.9.7-gke.1, 1.9.7-gke.3, and 1.9.7-gke.4 will be updated to Kubernetes 1.9.7-gke.5, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.10.2-gke.0, 1.10.2-gke.1, and 1.10.2-gke.3 will be updated to Kubernetes 1.10.2-gke.4, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.10.4-gke.0 and 1.10.4-gke.2 will be updated to Kubernetes 1.10.4-gke.3, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.10.5-gke.0 and 1.10.5-gke.3 will be updated to Kubernetes 1.10.5-gke.4, according to this week's rollout schedule.

Rollout schedule

The rollout schedule is now included in Upgrades.

Fixes

A patch for Kubernetes vulnerability CVE-2018-5390 is now available according to this week's rollout schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.

August 3, 2018

New Features

In a future release, all newly-created Google Kubernetes Engine clusters will be VPC-native by default.

July 30, 2018

Version updates

GKE cluster versions have been updated.

  • Kubernetes 1.10.5-gke.3 is now generally available for use with Google Kubernetes Engine clusters.

July 12, 2018

New Features

Cloud TPU is now available with GKE in Beta. Run your machine learning workload in a Kubernetes cluster on Google Cloud, and let GKE manage and scale the Cloud TPU resources for you.

Version updates

GKE cluster versions have been updated.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • Kubernetes 1.8.12-gke.2 is now generally available for use with Google Kubernetes Engine clusters.
  • Kubernetes 1.9.7-gke.4 is now generally available for use with Google Kubernetes Engine clusters.
  • Kubernetes 1.10.5-gke.2 is now generally available for use with Google Kubernetes Engine clusters.
  • Kubernetes 1.11.0-gke.1 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.11.0-gke.1 in Alpha Clusters. Enabling or disabling network policy on already-created 1.11 clusters may not work properly.

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.8.10-gke.0 according to this week's rollout schedule.

Rollout schedule

The rollout schedule is now included in Upgrades.

July 10, 2018

New Features

You can now run GKE clusters in region us-west2 (Los Angeles) with zones us-west2-a, us-west2-b, and us-west2-c.

June 28, 2018

Version Updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

The following versions are now available according to this week's rollout schedule:

Kubernetes 1.10.5-gke.0 is now generally available for use with GKE clusters.

New default version for new clusters

Kubernetes version 1.9.7-gke.3 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • 1.10.5-gke.0

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions older than 1.8.10-gke.0 will be updated to Kubernetes 1.8.10-gke.0 according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.8.8-gke.0
  • 1.10.4-gke.0

Rollout schedule

The rollout schedule is now included in Upgrades.

Known Issues

Currently, OS Login is not fully compatible with Google Kubernetes Engine clusters running Kubernetes version 1.10.x. The following functionalities of kubectl might not work properly when OS Login is enabled: kubectl logs, proxy, exec, attach, and port-forward. Until OS Login is fully supported, project-level OS Login settings are ignored at the node level in Kubernetes Engine.

June 18, 2018

Version Updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

The following versions are now available according to this week's rollout schedule:

Kubernetes 1.10.4-gke.2 is now generally available for use with GKE clusters.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • 1.10.4-gke.2

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.7-gke.1
  • 1.10.2-gke.3

New Features

GPUs for Google Kubernetes Engine are now generally available.

Rollout schedule

The rollout schedule is now included in Upgrades.

June 11, 2018

Version Updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

The following versions are now available according to this week's rollout schedule:

Kubernetes 1.10.4-gke.0 is now generally available for use with GKE clusters.

The base image for this version is cos-stable-66-10452-101-0, which contains a fix for an issue that causes deadlock in the Linux kernel.

New Features

You can now run GKE clusters in region europe-north1 (Finland) with zones europe-north1-a, europe-north1-b, and europe-north1-c.

Refer to the rollout schedule below for the specific rollout dates in each zone.

A new `cos_containerd` image is now available and set by default for trying out the containerd integration in alpha clusters running Kubernetes 1.10.4-gke.0 and above. See the containerd runtime alpha user guide for more information, or learn about the containerd integration in the recent Kubernetes blog post.

Rollout schedule

The rollout schedule is now included in Upgrades.

June 04, 2018

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • 1.9.7-gke.3

Rollout schedule

The rollout schedule is now included in Upgrades.

May 22, 2018

New versions available for upgrades and new clusters

Kubernetes 1.10.2-gke.3 is now available for use with Kubernetes Engine clusters.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.8.12-gke.0
  • 1.9.7-gke.0
  • 1.10.2-gke.1

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Custom Boot Disks are now available in Beta.

Alias IPs are now generally available.

May 16, 2018

New Features

Kubernetes Engine Shared VPC is now available in Beta.

May 15, 2018

The rollout of the release has been delayed. Refer to the revised rollout schedule below.

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

Clusters running Kubernetes 1.9.0 - 1.9.6-gke.0 that have opted into automatic node upgrades will be upgraded to Kubernetes 1.9.6-gke.1 according to this week's rollout schedule.

Kubernetes 1.10.2-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

Kubernetes 1.9.7-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

Kubernetes 1.8.12-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

New default version for new clusters

The following versions are now default according to this week's rollout schedule:

Kubernetes 1.8.10-gke.0 is now the default version for new clusters.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Load balancers and ingresses are now automatically deleted upon cluster deletion.

Other Updates

The base image has been changed to cos-stable-66-10452-89-0 for clusters running Kubernetes 1.10.2-gke.1.

This image contains a fix for Linux kernel CVE-2018-1000199 and CVEs in ext4 (CVE-2018-1092, CVE-2018-1093, CVE-2018-1094, CVE-2018-1095).

The base image has been changed to cos-stable-65-10323-85-0 for clusters running Kubernetes 1.8.12-gke.0 and Kubernetes 1.9.7-gke.1.

This image contains a fix for Linux kernel CVE-2018-1000199.

The base image has been changed to ubuntu-gke-1604-xenial-20180509-1 for clusters running Kubernetes 1.9.7-gke.1 and Kubernetes 1.10.2-gke.1.

The base image has been changed to ubuntu-gke-1604-xenial-20180509 for clusters running Kubernetes 1.8.12-gke.1.

These images contain a fix for Linux kernel CVE-2018-1000199. Refer to USN-3641-1 for more information.

May 7, 2018

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

Scheduled master auto-upgrades

100% of cluster masters running Kubernetes versions 1.7.0 and 1.7.12-gke.2 will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.

100% of cluster masters running Kubernetes versions 1.7.14-gke.1 and 1.7.15-gke.0 will be updated to Kubernetes 1.8.10-gke.0, according to this week's rollout schedule.

100% of cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.6, according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.7.15-gke.0
  • 1.9.3-gke.0

Rollout schedule

The rollout schedule is now included in Upgrades.

Known Issues

The Kubernetes Dashboard in version 1.8.8-gke.0 isn't compatible with nodes running versions 1.7.13 through 1.7.15.

May 1, 2018

Known Issues

In Kubernetes versions 1.9.7, 1.10.0, and 1.10.2, if an NVIDIA GPU device plugin restarts but the associated kubelet does not, then the node allocatable for the GPU resource nvidia.com/gpu stays zero until the kubelet restarts. This prevents new Pods from consuming GPU devices.

The most likely scenario when this problem occurs is after a cluster is created or upgraded with Kubernetes 1.9.7, 1.10.0, or 1.10.2 and the cluster master is upgraded to a new version, which triggers an NVIDIA GPU device plugin DaemonSet upgrade. The DaemonSet upgrade causes the NVIDIA GPU device plugin to restart itself.

If you use the GPU feature, do not create or upgrade your cluster with Kubernetes 1.9.7, 1.10.0, or 1.10.2. This issue will be addressed in an upcoming release.

April 30, 2018

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

The following versions are now available according to this week's rollout schedule:

Kubernetes 1.8.12-gke.0 is now generally available for use with Google Kubernetes Engine clusters.

Kubernetes 1.9.7-gke.0 is now generally available for use with Google Kubernetes Engine clusters.

Kubernetes 1.10.2-gke.0 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.10.2-gke.0 in Alpha Clusters.

Scheduled master auto-upgrades

100% of cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.

Rollout schedule

The rollout schedule is now included in Upgrades.

Other Updates

The base image has been changed to cos-stable-65-10323-75-0-p for clusters running Kubernetes 1.8.12-gke.0.

The base image has been changed to cos-stable-65-10323-75-0-p2 for clusters running Kubernetes 1.9.7-gke.0.

    The base image has been changed to cos-stable-66-10452-74-0 for clusters running Kubernetes 1.10.2-gke.0.

    April 24, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Scheduled master auto-upgrades

    • 10% of cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.
    • Cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.
    • Cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.3-gke.0, according to this week's rollout schedule.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.7-gke.1
    • Kubernetes 1.9.2-gke.1
    • Kubernetes 1.9.6-gke.0

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    April 16, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:
    • Kubernetes 1.7.14-gke.1
    • Kubernetes 1.8.9-gke.1
    • Kubernetes 1.9.4-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    April 9, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.6-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    Kubernetes 1.10.0-gke.0 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.10.0-gke.0 in Alpha Clusters.

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes 1.7.12-gke.2, according to this week's rollout schedule.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:
    • Kubernetes 1.7.12-gke.1

    Other Updates

    Container-Optimized OS node image has been upgraded to cos-stable-65-10323-69-0-p2 for clusters running Kubernetes 1.9.6-gke.1. See COS image release notes for more information.

    Container-Optimized OS node image is using cos-beta-66-10452-28-0 for clusters running Kubernetes 1.10.0-gke.0. See COS image release notes for more information.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    March 30, 2018

    Note: The March 27, 2018 release has been rolled back, so this release supersedes the rollout schedule and cluster default version previously stated.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.6-gke.0, Kubernetes 1.8.10-gke.0, and Kubernetes 1.7.15-gke.0 are now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    The default version has been reverted from the March 27, 2018 release. Kubernetes 1.8.8-gke.0 is now the default version for new zonal and regional clusters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.9.2-gke.1
    • Kubernetes 1.7.12-gke.2

    Other Updates

    The following updates are the same as in the March 27, 2018 release. They have not been changed by the rollback.

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180317-1 for clusters running Kubernetes 1.9.6-gke.0.

    Issues fixed:

    • In ubuntu-gke-1604-xenial-v20180207-1, used by Kubernetes 1.9.3-gke.0 and 1.9.4-gke.1, new pods could not be scheduled to a node after Docker was restarted on it.
    • Security fix for USN-3586-1

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180308 for clusters running Kubernetes 1.8.10-gke.0 and 1.7.15-gke.0.

    Container-Optimized OS node image has been upgraded to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.7.15-gke.0. See COS image release notes for more information.

    March 27, 2018

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.6-gke.0, Kubernetes 1.8.10-gke.0, and Kubernetes 1.7.15-gke.0 are now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes 1.8.9-gke.1 is now the default version for new zonal and regional clusters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.9.2-gke.1
    • Kubernetes 1.8.8-gke.0
    • Kubernetes 1.7.12-gke.2

    Other Updates

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180317-1 for clusters running Kubernetes 1.9.6-gke.0.

    Issues fixed:

    • In ubuntu-gke-1604-xenial-v20180207-1, used by Kubernetes 1.9.3-gke.0 and 1.9.4-gke.1, new pods could not be scheduled to a node after Docker was restarted on it.
    • Security fix for USN-3586-1

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180308 for clusters running Kubernetes 1.8.10-gke.0 and 1.7.15-gke.0.

    Container-Optimized OS node image has been upgraded to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.7.15-gke.0. See COS image release notes for more information.

    March 21, 2018

    New Features

    Private Clusters are now available in Beta.

    March 19, 2018

    Fixed

    Kubernetes 1.9.4+: Fixes a bug that prevented the masters of clusters with IP aliases from starting properly.

    March 13, 2018

    Fixed

    A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 is now available according to this week's rollout schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.

    Issues

    Breaking Change: Do not upgrade your cluster if your application requires mounting a secret, configMap, downwardAPI, or projected volume with write access.

    To fix security vulnerability CVE-2017-1002102, Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 changed secret, configMap, downwardAPI, and projected volumes to mount read-only, instead of allowing applications to write data and then reverting it automatically. We recommend that you modify your application to accommodate these changes before you upgrade your cluster.

    If your cluster uses IP Aliases and was created with the --enable-ip-alias flag, upgrading the master to Kubernetes 1.9.4-gke.1 will prevent it from starting properly. This issue will be addressed in an upcoming release.

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 are now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes 1.8.8-gke.0 is now the default version for new zonal and regional clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Regional clusters running Kubernetes 1.7.x will be upgraded to Kubernetes 1.8.7-gke.1.

    This upgrade applies to cluster masters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.7-gke.1

    New Features

    You can now use version aliases with gcloud's --cluster-version option to specify Kubernetes versions. Version aliases allow you to specify the latest version or a specific version, without including the `-gke.0` version suffix. See versioning and upgrades for a complete overview of version aliases.
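    As a sketch of how the aliases work (the cluster name below is hypothetical), the same flag accepts an exact version, the `latest` alias, or a minor-version alias that resolves to the newest available patch release:

    ```shell
    # Hypothetical cluster name; requires the gcloud SDK and a configured project.

    # Pin to an exact version, including the -gke suffix:
    gcloud container clusters create my-cluster --cluster-version=1.9.4-gke.1

    # "latest" resolves to the newest version currently available:
    gcloud container clusters create my-cluster --cluster-version=latest

    # "1.9" resolves to the newest available 1.9.x patch release:
    gcloud container clusters create my-cluster --cluster-version=1.9
    ```
    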

    March 12, 2018

    Issues

    A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 will be available in the upcoming release. We recommend that you manually upgrade your nodes as soon as the patch becomes available.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    March 08, 2018

    New Features

    You can now easily debug your Kubernetes services from the Google Cloud console with port-forwarding and web preview.

    March 06, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.7.12-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    February 27, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.3-gke.0, Kubernetes 1.8.8-gke.0, and Kubernetes 1.7.12-gke.2 are now generally available for use with Google Kubernetes Engine clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.8.x will be upgraded to Kubernetes 1.8.7-gke.1.
    • Regional clusters running Kubernetes 1.8.x will have etcd upgraded to etcd 3.1.11.

    This upgrade applies to cluster masters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.5-gke.0


    Rollout schedule

    The rollout schedule is now included in Upgrades.

    New Features

    Beginning with Kubernetes version 1.9.3, you can enable metadata concealment to prevent user Pods from accessing certain VM metadata for your cluster's nodes. For more information, see Protecting Cluster Metadata.
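    A minimal sketch of enabling metadata concealment at cluster creation, assuming the beta `--workload-metadata-from-node` flag available at the time (cluster name is hypothetical):

    ```shell
    # Hypothetical cluster name; metadata concealment was a beta feature,
    # so the gcloud beta track is used. SECURE hides sensitive VM metadata
    # (such as the kubelet's credentials) from user Pods on the node.
    gcloud beta container clusters create my-cluster \
        --cluster-version=1.9.3-gke.0 \
        --workload-metadata-from-node=SECURE
    ```
    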

    Other Updates

    Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-v20180122 to ubuntu-gke-1604-xenial-v20180207 for clusters running Kubernetes 1.7.12-gke.2 and 1.8.8-gke.0.

    Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-v20180122 to ubuntu-gke-1604-xenial-v20180207-1 for clusters running Kubernetes 1.9.3-gke.0.

    • Security fix for USN-3548-2
    • Docker upgraded from 1.12 to 17.03 and default storage driver changed to overlay2
    • Known issue: When Docker gets restarted on a node, new pods cannot be scheduled on that node and they will be stuck in `ContainerCreating` state.

    Container-Optimized OS node image has been upgraded from cos-stable-63-10032-71-0 to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.9.3-gke.0 and 1.8.8-gke.0. See COS image release notes for more information.

    February 13, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes version 1.8.7-gke.1 is now the default version for new zonal and regional clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.13-gke.1 and 1.7.12-gke.0 will be upgraded to Kubernetes 1.7.12-gke.1.
    • Clusters running Kubernetes 1.9.1-gke.0 and 1.9.2-gke.0 will be upgraded to Kubernetes 1.9.2-gke.1.
    • Clusters running etcd 2.* will be upgraded to etcd 3.0.17-gke.2.

    This upgrade applies to cluster masters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    February 8, 2018

    New Features

    GPUs on Kubernetes Engine are now available in Beta.

    February 5, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.2-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes version 1.7.12-gke.1 is now the default version for new zonal clusters.

    Regional clusters

    Kubernetes version 1.8.7-gke.1 is now the default version for new regional clusters.

    The new cluster versions can be used with the latest Ubuntu node image version, ubuntu-gke-1604-xenial-v20180122, which includes the following changes:

    • Kernel upgraded from 4.4 to 4.13
    • Security fixes for Spectre and Meltdown
    • Support for Alias IPs

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.13-gke.1 and 1.7.x will be upgraded to Kubernetes 1.7.12-gke.0.

    This upgrade applies to cluster masters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    New Features

    Beginning with Kubernetes version 1.9.x on Google Kubernetes Engine, you can now perform horizontal pod autoscaling based on custom metrics from Stackdriver Monitoring (in addition to the default scaling based on CPU utilization). For more information, see Scaling an Application and the custom metrics autoscaling tutorial.

    Known Issues

    Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.

    You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster.
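    One way to replicate the older behavior is an ingress firewall rule that allows traffic from the cluster's Pod address range to the other VMs on the network. The rule name, network, and CIDR below are placeholders; substitute your VPC network and the Pod range shown in your cluster's details:

    ```shell
    # Hypothetical rule name, network, and source range: replace
    # --network with your VPC network and --source-ranges with your
    # cluster's Pod CIDR before running.
    gcloud compute firewall-rules create allow-from-gke-pods \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp,udp,icmp \
        --source-ranges=10.4.0.0/14
    ```
    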

    January 31, 2018

    New Features

    PodSecurityPolicies are now available in Beta.

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Kubernetes version 1.7.12-gke.0 is now the default version for new zonal clusters.

    Kubernetes version 1.8.6-gke.0 is now the default version for new regional clusters.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    • Kubernetes 1.8.7-gke.0
    • Kubernetes 1.9.2-gke.0 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.9.2-gke.0 in Alpha Clusters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    January 16, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    • Kubernetes 1.9.1 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.9.1 in Alpha Clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.x will be upgraded to 1.7.11-gke.1.

    This upgrade applies to cluster masters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    January 10, 2018

    New Features

    You can now run Container Engine clusters in region europe-west4 (Netherlands).

    You can now run Container Engine clusters in region northamerica-northeast1 (Montréal).

    January 9, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    Kubernetes version 1.7.11-gke.1 is now the default version for new clusters, available according to this week's rollout schedule.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.x will be upgraded to 1.6.13-gke.1.
    • Clusters running Kubernetes 1.7.x will be upgraded to 1.7.11-gke.1.
    • Clusters running Kubernetes 1.8.x will be upgraded to 1.8.5-gke.0.

    This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.6-gke.0
    • Kubernetes 1.7.12-gke.0

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.4-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    January 2, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    Kubernetes version 1.7.11-gke.1 is now the default version for new clusters, available according to this week's rollout schedule.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.5-gke.0

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.6.x (all versions)
    • Kubernetes 1.7.8
    • Kubernetes 1.7.9

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    December 14, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.4-gke.1
    • Kubernetes 1.7.11-gke.1
    • Kubernetes 1.6.13-gke.1

    These version updates change the default node image for Kubernetes Engine nodes to Container-Optimized OS version cos-stable-63-10032-71-0-p.

    Versions no longer available

    The following versions are no longer available for new clusters or opt-in master and node upgrades:

    • Kubernetes 1.8.4-gke.0
    • Kubernetes 1.7.11-gke.0
    • Kubernetes 1.6.13-gke.0

    Rollout schedule

    • 2017-12-14: europe-west2-a, us-east1-d
    • 2017-12-15: asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    • 2017-12-18: asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    • 2017-12-19: asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    December 5, 2017

    New Features

    Regional Clusters are now available in Beta.

    December 1, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New Features

    Audit Logging is now available in Beta.

    November 28, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.4-gke.0
    • Kubernetes 1.7.11-gke.0
    • Kubernetes 1.6.13-gke.0

    Rollout schedule

    • 2017-11-28: europe-west2-a, us-east1-d
    • 2017-11-29: asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    • 2017-11-30: asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    • 2017-12-01: asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Other Updates

    Container-Optimized OS version m63 is now available for use as a Google Kubernetes Engine node image.

    November 13, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.7.10-gke.0
    • Kubernetes 1.8.3-gke.0

    Other Updates

    Container Engine is now named Kubernetes Engine. See the Google Cloud blog post.

    Kubernetes Engine's kubectl version has been updated from 1.8.2 to 1.8.3.

    November 7, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.2-gke.0

    Rollout schedule

    • 2017-11-07: europe-west2-a, us-east1-d
    • 2017-11-08: asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    • 2017-11-09: asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    • 2017-11-10: asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    Added an option to the gcloud container clusters create command: --enable-basic-auth. This option allows you to create a cluster with basic authorization enabled.

    Added options to the gcloud container clusters update command: --enable-basic-auth, --username, and --password. These options allow you to enable or disable basic authorization and to change the username and password for an existing cluster.
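    A sketch of the two commands together (cluster name and credentials are placeholders):

    ```shell
    # Hypothetical cluster name and credentials; requires the gcloud SDK
    # and a configured project.

    # Create a cluster with basic auth enabled:
    gcloud container clusters create my-cluster --enable-basic-auth \
        --username=admin --password=EXAMPLE_PASSWORD

    # Later, rotate the credentials on the existing cluster:
    gcloud container clusters update my-cluster \
        --username=admin --password=NEW_EXAMPLE_PASSWORD
    ```
    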

    October 31, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.7.9-gke.0

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.x will be upgraded to 1.6.11-gke.0.
    • Clusters running Kubernetes 1.7.x will be upgraded to 1.7.8-gke.0.
    • Clusters running Kubernetes 1.8.x will be upgraded to 1.8.1-gke.1.

    This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.

    New default version for new clusters

    Kubernetes version 1.7.8-gke.0 is now the default version for new clusters, available according to this week's rollout schedule.

    Rollout schedule

    • 2017-10-31: europe-west2-a, us-east1-d
    • 2017-11-01: asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    • 2017-11-02: asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    • 2017-11-03: asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now run Container Engine clusters in region asia-south1 (Mumbai).

    Fixes

    Clusters using the Container-Optimized OS node image version cos-stable-61 can be affected by Docker daemon crashes and restarts and become unable to schedule pods.

    To mitigate this issue, clusters running Kubernetes versions 1.6.x, 1.7.x, and 1.8.x are slated to automatically upgrade to versions 1.6.11-gke.0, 1.7.8-gke.0, and 1.8.1-gke.1 respectively. These versions have been remapped to use the cos-stable-60-9592-90-0 node image.

    Known Issues

    Clusters running Kubernetes version 1.7.6 might see inaccurate memory usage metrics for pods running on the cluster. Clusters are slated to automatically upgrade to version 1.7.8-gke.0 to mitigate this issue. If node auto-upgrades are not enabled for your cluster, you can manually upgrade to 1.7.8-gke.0.

    October 24, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    Kubernetes version 1.8.1 is now generally available, according to this week's rollout schedule. See the Google Cloud blog post on Container Engine 1.8 for more information on the Kubernetes capabilities highlighted in this release.

    Rollout schedule

    Date Available zones
    2017-10-24 europe-west2-a, us-east1-d
    2017-10-25 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-10-26 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-10-27 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now run CronJobs on your Container Engine cluster. CronJob is a Beta feature in Kubernetes version 1.8.

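    As an illustrative sketch only (the name, schedule, and image below are placeholders), a Beta CronJob on a 1.8 cluster can be created by applying a batch/v1beta1 manifest with kubectl:

    ```shell
    # Hypothetical example: create a CronJob that prints a message every 5 minutes.
    # All names and values are illustrative placeholders.
    kubectl apply -f - <<'EOF'
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/5 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                args: ["/bin/sh", "-c", "date; echo Hello"]
              restartPolicy: OnFailure
    EOF
    ```
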
    You can now view the status of your cluster's nodes using the Google Cloud console.

    The Google Cloud console browser-integrated cloud shell can now automatically generate commands for the kubectl command-line interface.

    You can now edit your cluster's workloads when viewing them with the Google Cloud console.

    Known Issues

    Kubernetes Third-party Resources, previously deprecated, have been removed in version 1.8. These resources will cease to function on clusters upgrading to version 1.8.1 or later.

    Audit Logging, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.

    Horizontal Pod Autoscaling with Custom Metrics, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.

    Other Updates

    Beta features in the Container Engine API (and gcloud command-line interface) are now exposed via the new v1beta1 API surface. To use beta features on Container Engine, you must configure the gcloud command-line interface to use the Beta API surface to run gcloud beta container commands. See API organization for more information.

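    For example, after installing the beta component, beta surface commands take the `gcloud beta container` form (the cluster name and zone below are placeholders):

    ```shell
    # Install the gcloud beta command group.
    gcloud components install beta
    # Example beta invocation; "my-cluster" and the zone are placeholder values.
    gcloud beta container clusters describe my-cluster --zone us-central1-a
    ```
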
    October 10, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters, according to this week's rollout schedule:

    • 1.7.8
    • 1.6.11

    Clusters running Kubernetes version 1.6.11 can safely upgrade to Kubernetes versions 1.7.x.

    Rollout schedule

    Date Available zones
    2017-10-10 europe-west2-a, us-east1-d
    2017-10-11 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-10-12 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-10-13 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Other Updates

    Clusters running Kubernetes versions 1.7.8 and 1.6.11 have upgraded the version of Container-Optimized OS running on cluster nodes from version cos-stable-60-9592-84-0 to cos-stable-61-9765-66-0. See the release notes for more details.

    This upgrade updates the node's Docker version from 1.13 to 17.03. See the Docker documentation for details on feature deprecations.

    October 3, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    Kubernetes version 1.8.0-gke.0 is now available for early access partners and alpha clusters only. To try out v1.8.0-gke.0, sign up for the early access program.

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.7.x will be automatically upgraded to Kubernetes v1.7.6-gke.1 according to this week's rollout schedule.

    Rollout schedule

    Date Available zones
    2017-10-03 europe-west2-a, us-east1-d
    2017-10-04 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-10-05 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-10-06 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now rotate your username for basic authentication on existing clusters, or disable basic authentication by providing an empty username.

    Fixes

    Kubernetes 1.7.6-gke.1: Fixed a regression in fluentd.

    Kubernetes 1.7.6-gke.1: Updated the kube-dns add-on to patch dnsmasq vulnerabilities announced on October 2. For more information on the vulnerability, see the associated Kubernetes Security Announcement.

    Known Issues

    Kubernetes 1.8.0-gke.0 (early access and alpha clusters only): Clusters created with a subnetwork with an automatically-generated name that contains a hash (e.g. "default-38b01f54907a15a7") might encounter issues where their internal load balancers fail to sync.

    This issue also affects clusters that run legacy networks.

    Container Engine clusters can enter a bad state if you convert your automatically-configured network to a manually-configured one. In this state, internal load balancers might fail to sync, and node pool upgrades might fail.

    September 27, 2017

    New Features

    You can now configure a maintenance window for your Container Engine clusters. You can use the maintenance window feature to designate specific spans of time for scheduled maintenance and upgrades to your master and nodes. Maintenance window is a beta feature on Container Engine.

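    A minimal sketch of configuring the feature with gcloud, assuming the Beta `--maintenance-window` flag on `clusters update` (the cluster name, zone, and start time are placeholders):

    ```shell
    # Set a daily 4-hour maintenance window starting at 03:00 UTC (placeholder values).
    gcloud beta container clusters update my-cluster \
        --zone us-central1-a \
        --maintenance-window 03:00
    ```
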
    Container Engine's node auto upgrade feature is now generally available.

    The Ubuntu node image is now generally available for use on your Container Engine cluster nodes.

    September 25, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.7.x will be automatically upgraded to Kubernetes v1.7.5 according to this week's rollout schedule.

    Cluster masters running Kubernetes versions 1.6.x will be automatically upgraded to Kubernetes v1.6.10 according to this week's rollout schedule.

    Rollout schedule

    Date Available zones
    2017-09-25 europe-west2-a, us-east1-d
    2017-09-26 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-27 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-28 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Fixes

    Kubernetes v1.7.5: Fixed an issue with Kubernetes v1.7.0 to v1.7.4 in which controller-manager could become unhealthy and enter a repair loop.

    Kubernetes v1.6.10: Fixed an issue in which a Google Cloud Load Balancer could enter a persistently bad state if an API call failed while the ingress controller was starting.

    September 18, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New default version for new clusters

    Kubernetes v1.7.5 is the default version for new clusters, available according to this week's rollout schedule below.

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

    • 1.7.6
    • 1.6.10

    New versions available for node upgrades and downgrades

    The following Kubernetes versions are now available for node upgrades and downgrades:

    • 1.7.6
    • 1.6.10

    Rollout schedule

    Date Available zones
    2017-09-19 europe-west2-a, us-east1-d
    2017-09-20 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-21 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-22 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    Starting in Kubernetes version 1.7.6, the available resources on cluster nodes have been updated to account for the CPU and memory requirements of Kubernetes node daemons. See the Node documentation in the cluster architecture overview for more information.

    You can now set a cluster network policy on your Container Engine clusters running Kubernetes version 1.7.6 or later.

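    As a sketch, network policy enforcement can be enabled at cluster creation with the `--enable-network-policy` flag (the cluster name and zone below are placeholders):

    ```shell
    # Create a 1.7.6 cluster with network policy enforcement enabled (placeholder names).
    gcloud beta container clusters create my-cluster \
        --zone us-central1-a \
        --cluster-version 1.7.6 \
        --enable-network-policy
    ```
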
    Other Updates

    The deprecated container-vm node image type has been removed from the list of valid Container Engine node images. Existing clusters and node pools will continue to function, but you can no longer create new clusters and node pools that run the container-vm node image.

    Clusters that use the deprecated container-vm as a node image cannot be upgraded to Kubernetes v1.7.6 or later.

    September 12, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

    • 1.7.5
    • 1.6.9
    • 1.6.7

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.6.x will be upgraded to Kubernetes v1.6.9 according to this week's rollout schedule.

    Rollout schedule

    Date Available zones
    2017-09-12 europe-west2-a, us-east1-d
    2017-09-13 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-14 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-17 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now use IP aliases with an existing subnetwork when creating a cluster. IP aliases are a Beta feature in Google Kubernetes Engine version 1.7.5.

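    A minimal sketch of creating an IP-alias cluster on an existing subnetwork, assuming the Beta `--enable-ip-alias` flag (the cluster, network, and subnetwork names are placeholders):

    ```shell
    # Create a cluster that uses alias IPs on an existing subnetwork (placeholder names).
    gcloud beta container clusters create my-cluster \
        --zone us-central1-a \
        --enable-ip-alias \
        --network my-network \
        --subnetwork my-subnet
    ```
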
    September 05, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New default version for new clusters

    Kubernetes v1.6.9 is the default version for new clusters, available according to this week's rollout schedule.

    New versions available for upgrades and new clusters

    Kubernetes v1.7.5 is now available for new clusters and opt-in master upgrades.

    Versions no longer available

    The following Kubernetes versions are no longer available for new clusters or upgrades to existing cluster masters:

    • 1.7.3
    • 1.7.4

    Rollout schedule

    Date Available zones
    2017-09-05 europe-west2-a, us-east1-d
    2017-09-06 asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-07 asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-08 asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Other Updates

    Container Engine's kubectl version has been updated from 1.7.4 to 1.7.5.

    You can now run Container Engine clusters in region southamerica-east1 (São Paulo).

    August 28, 2017

    • Kubernetes v1.7.4 is available for new clusters and opt-in master upgrades.

    • Kubernetes v1.6.9 is available for new clusters and opt-in master upgrades.

    • Clusters with a master version of v1.6.7 and Node Auto-Upgrades enabled will have nodes upgraded to v1.6.7.

    • Clusters with a master version of v1.7.3 and Node Auto-Upgrades enabled will have nodes upgraded to v1.7.3.

    • Starting at version v1.7.4, when Cloud Monitoring is enabled for a cluster, Heapster begins pushing container system metrics to the Stackdriver Monitoring API. The metrics remain free, though they count against Stackdriver Monitoring API quota.

    • Clusters running Kubernetes v1.6.9 and v1.7.4 have updated node images:

      • The COS node image was upgraded from cos-stable-59-9460-73-0 to cos-stable-60-9592-84-0. Please see the COS image release notes for details.
        • The new COS image includes an upgrade of Docker, from v1.11.2 to v1.13.1. This Docker upgrade contains many stability and performance fixes. A full list of the Docker features that have been deprecated between v1.11.2 and v1.13.1 is available on Docker's website.
        • Three features in Docker v1.13.1 are disabled by default in the COS m60 image, but are planned to be enabled in a later node image release: live-restore, shared PID namespaces and overlay2.
        • Known issue: Docker v1.13.1 supports HEALTHCHECK, which was previously ignored by Docker v1.11.2 on COS m59. Kubernetes supports more powerful liveness/readiness checks for containers, and it currently does not surface or consume the HEALTHCHECK status reported by Docker. We encourage users to disable HEALTHCHECK in Docker images to reduce unnecessary overhead, especially if performance degradation is observed after node upgrade. Note that HEALTHCHECK could be inherited from the base image.
      • Ubuntu node image was upgraded from ubuntu-gke-1604-xenial-v20170420-1 to ubuntu-gke-1604-xenial-v20170816-1.
        • This patch release is based on Ubuntu 16.04.3 LTS.
        • It includes a fix for the Stackdriver Logging issues in ubuntu-gke-1604-xenial-v20170420-1.
        • Known issue: Alias IPs is not supported.
    • Known Issues upgrading to v1.7:

    There is a known issue with StatefulSets in 1.7.X that causes StatefulSet pods to become unavailable in DNS upon upgrade. We are currently recommending that you not upgrade to 1.7.X if you are using DNS with StatefulSets. A fix is being prepared. Additional information can be found here: https://github.com/kubernetes/kubernetes/issues/48327


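    Where the HEALTHCHECK overhead noted above is a concern, it can be disabled in a derived image; `HEALTHCHECK NONE` is the standard Dockerfile instruction for overriding a check inherited from a base image (the base image name below is a placeholder):

    ```dockerfile
    # Derived image that disables a HEALTHCHECK inherited from the base image.
    FROM my-base-image:latest
    HEALTHCHECK NONE
    ```
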
    August 21, 2017

    • When using IP aliases, you can now represent service CIDR blocks by using a secondary range instead of a subnetwork. This means you can use IP aliases without specifying the --create-subnetwork option.
    • Cluster etcd fragmentation/compaction fixes.

    • Known Issues upgrading to v1.7.3:

    There is a known issue with StatefulSets in 1.7.X regarding annotations, so we are currently recommending that you not upgrade to 1.7.X if you are using them. A fix is being prepared. Additional information can be found here: https://github.com/kubernetes/kubernetes/issues/48327

    August 14, 2017

    • Cluster masters running Kubernetes versions 1.7.X will be upgraded to v1.7.3 according to the following schedule:

      • 2017-08-15: europe-west2-a us-east1-d
      • 2017-08-16: asia-east1-a asia-northeast1-a asia-southeast1-a australia-southeast1-a europe-west1-c europe-west3-a us-central1-b us-east4-b us-west1-a
      • 2017-08-17: asia-east1-c asia-northeast1-b asia-southeast1-b australia-southeast1-b europe-west1-b europe-west2-b europe-west3-b us-central1-f us-east1-c us-east4-c us-west1-b
      • 2017-08-18: asia-east1-b asia-northeast1-c australia-southeas