To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/kubernetes-engine-release-notes.xml
December 14, 2017
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:
- Kubernetes 1.8.4-gke.1
- Kubernetes 1.7.11-gke.1
- Kubernetes 1.6.13-gke.1
These version updates change the default node image for Kubernetes Engine nodes to Container-Optimized OS version cos-stable-63-10032-71-0-p.
Versions no longer available
The following versions are no longer available for new clusters or opt-in master and node upgrades:
- Kubernetes 1.8.4-gke.0
- Kubernetes 1.7.11-gke.0
- Kubernetes 1.6.13-gke.0
Rollout schedule
Date | Available zones |
---|---|
2017-12-14 | europe-west2-a , us-east1-d |
2017-12-15 | asia-east1-a , asia-northeast1-a , asia-south1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-12-18 | asia-east1-c , asia-northeast1-b , asia-south1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-12-19 | asia-east1-b , asia-northeast1-c , asia-south1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
December 5, 2017
New Features
Regional Clusters are now available in Beta.
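As a sketch (the cluster name and region are placeholders, and the --region flag on the beta gcloud surface is an assumption rather than something stated in this note):

```
# Create a regional cluster whose masters and nodes span the region's zones.
gcloud beta container clusters create my-regional-cluster \
    --region us-central1
```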
December 1, 2017
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.
New Features
Audit Logging is now available in Beta.
November 28, 2017
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:
- Kubernetes 1.8.4-gke.0
- Kubernetes 1.7.11-gke.0
- Kubernetes 1.6.13-gke.0
Rollout schedule
Date | Available zones |
---|---|
2017-11-28 | europe-west2-a , us-east1-d |
2017-11-29 | asia-east1-a , asia-northeast1-a , asia-south1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-11-30 | asia-east1-c , asia-northeast1-b , asia-south1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-12-1 | asia-east1-b , asia-northeast1-c , asia-south1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
Other Updates
Container-Optimized OS version m63 is now available for use as a Google Kubernetes Engine node image.
November 13, 2017
Version updates
Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:
- Kubernetes 1.7.10-gke.0
- Kubernetes 1.8.3-gke.0
Other Updates
Container Engine is now named Kubernetes Engine. See the Google Cloud Platform blog post.
Kubernetes Engine's kubectl version has been updated from 1.8.2 to 1.8.3.
November 7, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:
- Kubernetes 1.8.2-gke.0
Rollout schedule
Date | Available zones |
---|---|
2017-11-07 | europe-west2-a , us-east1-d |
2017-11-08 | asia-east1-a , asia-northeast1-a , asia-south1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-11-09 | asia-east1-c , asia-northeast1-b , asia-south1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-11-10 | asia-east1-b , asia-northeast1-c , asia-south1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
New Features
Added an option to the gcloud container clusters create command: --enable-basic-auth. This option allows you to create a cluster with basic authorization enabled.
Added options to the gcloud container clusters update command: --enable-basic-auth, --username, and --password. These options allow you to enable or disable basic authorization and change the username and password for an existing cluster.
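As a rough sketch of the new flags (the cluster name and credentials are placeholders, not values from this note):

```
# Create a new cluster with basic authorization enabled.
gcloud container clusters create my-cluster --enable-basic-auth

# Change the username and password on an existing cluster.
gcloud container clusters update my-cluster \
    --username admin \
    --password "$(openssl rand -base64 18)"
```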
October 31, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New versions available for upgrades and new clusters
The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:
- Kubernetes 1.7.9-gke.0
Scheduled auto-upgrades
Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:
- Clusters running Kubernetes 1.6.x will be upgraded to 1.6.11-gke.0.
- Clusters running Kubernetes 1.7.x will be upgraded to 1.7.8-gke.0.
- Clusters running Kubernetes 1.8.x will be upgraded to 1.8.1-gke.1
This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.
New default version for new clusters
Kubernetes version 1.7.8-gke.0 is now the default version for new clusters, available according to this week's rollout schedule.
Rollout schedule
Date | Available zones |
---|---|
2017-10-31 | europe-west2-a , us-east1-d |
2017-11-1 | asia-east1-a , asia-northeast1-a , asia-south1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-11-2 | asia-east1-c , asia-northeast1-b , asia-south1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-11-3 | asia-east1-b , asia-northeast1-c , asia-south1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
New Features
You can now run Container Engine clusters in region asia-south1 (Mumbai).
Fixes
Clusters using the Container-Optimized OS node image version cos-stable-61 can be affected by Docker daemon crashes and restarts and become unable to schedule pods. To mitigate this issue, clusters running Kubernetes versions 1.6.x, 1.7.x, and 1.8.x are slated to automatically upgrade to versions 1.6.11-gke.0, 1.7.8-gke.0, and 1.8.1-gke.1 respectively. These versions have been remapped to use the cos-stable-60-9592-90-0 node image.
Known Issues
Clusters running Kubernetes version 1.7.6 might see inaccurate memory usage metrics for pods running on the cluster. Clusters are slated to automatically upgrade to version 1.7.8-gke.0 to mitigate this issue. If node auto-upgrades are not enabled for your cluster, you can manually upgrade to 1.7.8-gke.0.
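A manual upgrade can be done with gcloud; the cluster name and zone below are placeholders, and this is a sketch rather than a prescribed procedure:

```
# Upgrade the master to the fixed version first...
gcloud container clusters upgrade my-cluster --zone us-central1-b \
    --master --cluster-version 1.7.8-gke.0

# ...then upgrade the nodes to match.
gcloud container clusters upgrade my-cluster --zone us-central1-b \
    --cluster-version 1.7.8-gke.0
```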
October 24, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New versions available for upgrades and new clusters
Kubernetes version 1.8.1 is now generally available, according to this week's rollout schedule. See the Google Cloud Platform blog post on Container Engine 1.8 for more information on the Kubernetes capabilities highlighted in this release.
Rollout schedule
Date | Available zones |
---|---|
2017-10-24 | europe-west2-a , us-east1-d |
2017-10-25 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-10-26 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-10-27 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
New Features
You can now run CronJobs on your Container Engine cluster. CronJob is a Beta feature in Kubernetes version 1.8.
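For illustration, a minimal CronJob against the Beta API group in Kubernetes 1.8 (batch/v1beta1) might look like the following; the name, schedule, and image are placeholders:

```
kubectl apply -f - <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"   # run every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure
EOF
```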
You can now view the status of your cluster's nodes using the Google Cloud Console.
The Google Cloud Console browser-integrated cloud shell can now automatically generate commands for the kubectl command-line interface.
You can now edit your cluster's workloads when viewing them with the Google Cloud Console.
Known Issues
Kubernetes Third-party Resources, previously deprecated, have been removed in version 1.8. These resources will cease to function on clusters upgrading to version 1.8.1 or later.
Audit Logging, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.
Horizontal Pod Autoscaling with Custom Metrics, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.
Other Updates
Beta features in the Container Engine API (and gcloud command-line interface) are now exposed via the new v1beta1 API surface. To use beta features on Container Engine, you must configure the gcloud command-line interface to use the Beta API surface to run gcloud beta container commands. See API organization for more information.
October 10, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters, according to this week's rollout schedule:
- 1.7.8
- 1.6.11
Clusters running Kubernetes version 1.6.11 can safely upgrade to Kubernetes versions 1.7.x.
Rollout schedule
Date | Available zones |
---|---|
2017-10-10 | europe-west2-a , us-east1-d |
2017-10-11 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-10-12 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-10-13 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
Other Updates
Clusters running Kubernetes versions 1.7.8 and 1.6.11 have upgraded the version of Container-Optimized OS running on cluster nodes from version cos-stable-60-9592-84-0 to cos-stable-61-9765-66-0. See the release notes for more details.
This upgrade updates the node's Docker version from 1.13 to 17.03. See the Docker documentation for details on feature deprecations.
October 3, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New versions available for upgrades and new clusters
Kubernetes version 1.8.0-gke.0 is now available for early access partners and alpha clusters only. To try out v1.8.0-gke.0, sign up for the early access program.
Scheduled master auto-upgrades
Cluster masters running Kubernetes versions 1.7.x will be automatically upgraded to Kubernetes v1.7.6-gke.1 according to this week's rollout schedule.
Rollout schedule
Date | Available zones |
---|---|
2017-10-03 | europe-west2-a , us-east1-d |
2017-10-04 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-10-05 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-10-06 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
New Features
You can now rotate your username for basic authorization on existing clusters, or disable basic authorization by providing an empty username.
Fixes
Kubernetes 1.7.6-gke.1: Fixed a regression in fluentd.
Kubernetes 1.7.6-gke.1: Updated the kube-dns add-on to patch dnsmasq vulnerabilities announced on October 2. For more information on the vulnerability, see the associated Kubernetes Security Announcement.
Known Issues
Kubernetes 1.8.0-gke.0 (early access and alpha clusters only): Clusters created with a subnetwork with an automatically-generated name that contains a hash (e.g. "default-38b01f54907a15a7") might encounter issues where their internal load balancers fail to sync.
This issue also affects clusters that run legacy networks.
Container Engine clusters can enter a bad state if you convert your automatically-configured network to a manually-configured one. In this state, internal load balancers might fail to sync, and node pool upgrades might fail.
September 27, 2017
New Features
You can now configure a maintenance window for your Container Engine clusters. You can use the maintenance window feature to designate specific spans of time for scheduled maintenance and upgrades to your master and nodes. Maintenance window is a beta feature on Container Engine.
Container Engine's node auto upgrade feature is now generally available.
The Ubuntu node image is now generally available for use on your Container Engine cluster nodes.
September 25, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
Scheduled master auto-upgrades
Cluster masters running Kubernetes versions 1.7.x will be automatically upgraded to Kubernetes v1.7.5 according to this week's rollout schedule.
Cluster masters running Kubernetes versions 1.6.x will be automatically upgraded to Kubernetes v1.6.10 according to this week's rollout schedule.
Rollout schedule
Date | Available zones |
---|---|
2017-09-25 | europe-west2-a , us-east1-d |
2017-09-26 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-09-27 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-09-28 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
Fixes
Kubernetes v1.7.5: Fixed an issue with Kubernetes v1.7.0 to v1.7.4 in which controller-manager could become unhealthy and enter a repair loop.
Kubernetes v1.6.10: Fixed an issue in which a GCP Load Balancer could enter a persistently bad state if an API call failed while the ingress controller was starting.
September 18, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New default version for new clusters
Kubernetes v1.7.5 is the default version for new clusters, available according to this week's rollout schedule below.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:
- 1.7.6
- 1.6.10
New versions available for node upgrades and downgrades
The following Kubernetes versions are now available for node upgrades and downgrades:
- 1.7.6
- 1.6.10
Rollout schedule
Date | Available zones |
---|---|
2017-09-19 | europe-west2-a , us-east1-d |
2017-09-20 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-09-21 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-09-22 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
New Features
Starting in Kubernetes version 1.7.6, the available resources on cluster nodes have been updated to account for the CPU and memory requirements of Kubernetes node daemons. See the Node documentation in the cluster architecture overview for more information.
You can now set a cluster network policy on your Container Engine clusters running Kubernetes version 1.7.6 or later.
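As a sketch (the cluster name is a placeholder; the --enable-network-policy flag is assumed from the beta gcloud surface rather than quoted from this note):

```
# Create a 1.7.6 cluster with NetworkPolicy enforcement enabled.
gcloud beta container clusters create my-cluster \
    --cluster-version 1.7.6 \
    --enable-network-policy
```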
Other Updates
The deprecated container-vm node image type has been removed from the list of valid Container Engine node images. Existing clusters and node pools will continue to function, but you can no longer create new clusters and node pools that run the container-vm node image.
Clusters that use the deprecated container-vm as a node image cannot be upgraded to Kubernetes v1.7.6 or later.
September 12, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:
- 1.7.5
- 1.6.9
- 1.6.7
Scheduled master auto-upgrades
Cluster masters running Kubernetes versions 1.6.x will be upgraded to Kubernetes v1.6.9 according to this week's rollout schedule.
Rollout schedule
Date | Available zones |
---|---|
2017-09-12 | europe-west2-a , us-east1-d |
2017-09-13 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-09-14 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-09-17 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
New Features
You can now use IP aliases with an existing subnetwork when creating a cluster. IP aliases are a Beta feature in Google Kubernetes Engine version 1.7.5.
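A minimal sketch; the cluster and subnetwork names are placeholders, and the secondary-range flags are assumptions about the gcloud surface rather than flags named in this note:

```
# Create a cluster that uses IP aliases on an existing subnetwork whose
# secondary ranges for pods and services were created ahead of time.
gcloud beta container clusters create my-cluster \
    --enable-ip-alias \
    --subnetwork my-existing-subnet \
    --cluster-secondary-range-name pods \
    --services-secondary-range-name services
```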
September 05, 2017
Version updates
Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.
New default version for new clusters
Kubernetes v1.6.9 is the default version for new clusters, available according to this week's rollout schedule.
New versions available for upgrades and new clusters
Kubernetes v1.7.5 is now available for new clusters and opt-in master upgrades.
Versions no longer available
The following Kubernetes versions are no longer available for new clusters or upgrades to existing cluster masters:
- 1.7.3
- 1.7.4
Rollout schedule
Date | Available zones |
---|---|
2017-09-05 | europe-west2-a , us-east1-d |
2017-09-06 | asia-east1-a , asia-northeast1-a , asia-southeast1-a , australia-southeast1-a , europe-west1-c , europe-west3-a , southamerica-east1-a , us-central1-b , us-east4-b , us-west1-a |
2017-09-07 | asia-east1-c , asia-northeast1-b , asia-southeast1-b , australia-southeast1-b , europe-west1-b , europe-west2-b , europe-west3-b , southamerica-east1-b , us-central1-f , us-east1-c , us-east4-c , us-west1-b |
2017-09-08 | asia-east1-b , asia-northeast1-c , australia-southeast1-c , europe-west1-d , europe-west2-c , europe-west3-c , southamerica-east1-c , us-central1-a , us-central1-c , us-east1-b , us-east4-a , us-west1-c |
Other Updates
Container Engine's kubectl version has been updated from 1.7.4 to 1.7.5.
You can now run Container Engine clusters in region southamerica-east1 (São Paulo).
August 28, 2017
Kubernetes v1.7.4 is available for new clusters and opt-in master upgrades.
Kubernetes v1.6.9 is available for new clusters and opt-in master upgrades.
Clusters with a master version of v1.6.7 and Node Auto-Upgrades enabled will have nodes upgraded to v1.6.7.
Clusters with a master version of v1.7.3 and Node Auto-Upgrades enabled will have nodes upgraded to v1.7.3.
Starting at version v1.7.4, when Cloud Monitoring is enabled for a cluster, container system metrics will start to be pushed by Heapster to the Stackdriver Monitoring API. The metrics remain free, though Stackdriver Monitoring API quota will be affected.
Clusters running Kubernetes v1.6.9 and v1.7.4 have updated node images:
- The COS node image was upgraded from cos-stable-59-9460-73-0 to cos-stable-60-9592-84-0. Please see the COS image release notes for details.
  - The new COS image includes an upgrade of Docker, from v1.11.2 to v1.13.1. This Docker upgrade contains many stability and performance fixes. A full list of the Docker features that have been deprecated between v1.11.2 and v1.13.1 is available on Docker's website.
  - Three features in Docker v1.13.1 are disabled by default in the COS m60 image, but are planned to be enabled in a later node image release: live-restore, shared PID namespaces, and overlay2.
  - Known issue: Docker v1.13.1 supports HEALTHCHECK, which was previously ignored by Docker v1.11.2 on COS m59. Kubernetes supports more powerful liveness/readiness checks for containers, and it currently does not surface or consume the HEALTHCHECK status reported by Docker. We encourage users to disable HEALTHCHECK in Docker images to reduce unnecessary overhead, especially if performance degradation is observed after node upgrade. Note that HEALTHCHECK could be inherited from the base image.
- The Ubuntu node image was upgraded from ubuntu-gke-1604-xenial-v20170420-1 to ubuntu-gke-1604-xenial-v20170816-1.
  - This patch release is based on Ubuntu 16.04.3 LTS.
  - It includes a fix for the Stackdriver Logging issues in ubuntu-gke-1604-xenial-v20170420-1.
  - Known issue: Alias IPs are not supported.
Known Issues upgrading to v1.7:
There is a known issue with StatefulSets in 1.7.X that causes StatefulSet pods to become unavailable in DNS upon upgrade. We are currently recommending that you not upgrade to 1.7.X if you are using DNS with StatefulSets. A fix is being prepared. Additional information can be found here: https://github.com/kubernetes/kubernetes/issues/48327
August 21, 2017
- When using IP aliases, you can now represent service CIDR blocks by using a secondary range instead of a subnetwork. This means you can use IP aliases without specifying the --create-subnetwork option.
- Cluster etcd fragmentation/compaction fixes.
Known Issues upgrading to v1.7.3:
There is a known issue with StatefulSets in 1.7.X regarding annotations, so we are currently recommending that you not upgrade to 1.7.X if you are using them. A fix is being prepared. Additional information can be found here: https://github.com/kubernetes/kubernetes/issues/48327
August 14, 2017
Cluster masters running Kubernetes versions 1.7.X will be upgraded to v1.7.3 according to the following schedule:
- 2017-08-15: europe-west2-a us-east1-d
- 2017-08-16: asia-east1-a asia-northeast1-a asia-southeast1-a australia-southeast1-a europe-west1-c europe-west3-a us-central1-b us-east4-b us-west1-a
- 2017-08-17: asia-east1-c asia-northeast1-b asia-southeast1-b australia-southeast1-b europe-west1-b europe-west2-b europe-west3-b us-central1-f us-east1-c us-east4-c us-west1-b
- 2017-08-18: asia-east1-b asia-northeast1-c australia-southeast1-c europe-west1-d europe-west2-c europe-west3-c us-central1-a us-central1-c us-east1-b us-east4-a us-west1-c
- You can now specify a minimum CPU size/class for Alpha clusters by using the --min-cpu-platform flag with gcloud alpha container commands.
- Cluster resize commands (gcloud alpha container clusters resize or gcloud beta container clusters resize) now safely drain nodes before removal.
- Updated Google Container Engine's kubectl from version 1.7.2 to 1.7.3.
- Added the --logging-service flag to gcloud beta container clusters update. This flag controls the enabling and disabling of Stackdriver Logging integration. Use --logging-service=logging.googleapis.com to enable and --logging-service=none to disable (see the example after this list).
- Modified the --scopes flag in the gcloud beta container clusters create and gcloud beta container node-pools create commands to default to logging.write,monitoring and to support passing an empty list.
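For example, toggling the Stackdriver Logging integration on an existing cluster might look like this (the cluster name is a placeholder; a sketch of the flag described above):

```
# Disable the Stackdriver Logging integration for a cluster...
gcloud beta container clusters update my-cluster --logging-service none

# ...and re-enable it later.
gcloud beta container clusters update my-cluster \
    --logging-service logging.googleapis.com
```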
August 7, 2017
Kubernetes v1.7.3 is available for new clusters and opt-in master upgrades.
Kubernetes v1.6.8 is available for new clusters and opt-in master upgrades.
Cluster masters running Kubernetes version v1.6.6 or older will be upgraded to v1.6.7 according to the following schedule:
- 2017-08-08: europe-west2-a us-east1-d
- 2017-08-09: asia-east1-a asia-northeast1-a asia-southeast1-a australia-southeast1-a europe-west1-c europe-west3-a us-central1-b us-east4-b us-west1-a
- 2017-08-10: asia-east1-c asia-northeast1-b asia-southeast1-b australia-southeast1-b europe-west1-b europe-west2-b europe-west3-b us-central1-f us-east1-c us-east4-c us-west1-b
- 2017-08-11: asia-east1-b asia-northeast1-c australia-southeast1-c europe-west1-d europe-west2-c europe-west3-c us-central1-a us-central1-c us-east1-b us-east4-a us-west1-c
Node pools can now be created with an initial node count of 0.
Cloud monitoring can only be enabled in clusters that have monitoring scope enabled in all node pools.
Known Issues upgrading to v1.6.7:
- Kubernetes 1.6.7 includes version 0.9.5 of the GCP Ingress Controller. This version contains a fix for a bug that caused the controller to incorrectly synchronize GCP URL Maps. Changes to the ingress resource may not have caused the GCP URL Map to update. Using the fixed controller will ensure maps reflect the host and path rules. To avoid potential disruption, validate that all ingress objects contain the desired host or path rules.
August 3, 2017
- Users with access to Kubernetes Secret objects can no longer view the secrets' values in the Google Container Engine UI. The recommended way to access them is with the kubectl tool.
August 1, 2017
The VM firewall rule (e.g. cluster-<hash>-vms) for non-legacy auto-mode networks now includes both the primary and reserved VM ranges (10.128/9) if the primary range lies outside of the reserved range.
You can now use the beta Ubuntu node image with clusters running Kubernetes version 1.6.4 or higher.
You can now run Container Engine clusters in region europe-west3 (Frankfurt).
July 26, 2017
- You can now use the Google Cloud Console to add additional zones to, or remove zones from, your existing multi-zone clusters. For more information, see Multi-Zone Clusters.
- You can now use the Google Cloud Console to define Master Authorized Networks - a restricted range of IP addresses that are permitted to access your container cluster's Kubernetes master endpoint.
July 25, 2017
Kubernetes v1.7.2 is available for new clusters and opt-in master upgrades.
Known Issues upgrading to v1.7.2:
- If you are upgrading nodes from 1.7.0 or 1.7.1 to 1.7.2, you may experience service disruption if you have services of type=LoadBalancer. To mitigate this potential disruption, see the upgrade instructions for versions 1.7.0 and 1.7.1.
Kubernetes v1.6.7 is the default version for new clusters, released according to the following schedule:
- 2017-07-25: us-east1-d europe-west2-a
- 2017-07-26: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a
- 2017-07-27: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
- 2017-07-28: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c asia-northeast1-c
- gcloud beta container clusters create now supports enabling authorized networks for the Kubernetes master via the --enable-master-authorized-networks and --master-authorized-networks flags (see the example after this list).
- gcloud beta container clusters update now supports configuring authorized networks for the Kubernetes master via the --enable-master-authorized-networks, --no-enable-master-authorized-networks, and --master-authorized-networks flags.
- gcloud container clusters create now allows the Kubernetes Dashboard to be disabled for a new cluster via the --disable-addons=KubernetesDashboard flag.
- gcloud container clusters update now allows the Kubernetes Dashboard to be disabled on existing clusters via the --update-addons=KubernetesDashboard=DISABLED flag.
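Taken together, the new flags might be used as follows (the cluster name and CIDR block are placeholders; a sketch, not an exhaustive reference):

```
# Restrict access to the Kubernetes master endpoint to one CIDR block.
gcloud beta container clusters create my-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24

# Disable the Kubernetes Dashboard add-on on an existing cluster.
gcloud container clusters update my-cluster \
    --update-addons=KubernetesDashboard=DISABLED
```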
July 18, 2017
Kubernetes v1.7.1 is available for new clusters and opt-in master upgrades.
Cluster masters running Kubernetes version v1.7.0 will be upgraded to v1.7.1 according to the following schedule:
- 2017-07-18: us-east1-d europe-west2-a
- 2017-07-19: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a
- 2017-07-20: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
- 2017-07-21: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c asia-northeast1-c
Container Engine now respects Kubernetes Pod Disruption Budgets, making stateful workloads more stable during upgrades. This also reduces disruptions during node auto-upgrades.
gcloud container clusters get-credentials now correctly respects the HOMEDRIVE/HOMEPATH and USERPROFILE environment variables when generating the kubectl config file on Windows.
Known Issues with v1.7.1:
GCP Internal Load Balancers created through Kubernetes services (a Beta feature in 1.7) have an issue that causes health checks to fail, preventing them from functioning. This will be fixed in a future patch release.
Services of type=LoadBalancer in clusters that have nodes running Kubernetes v1.7 may fail GCP Load Balancer health checks. However, the Load Balancers will continue to forward traffic to backends. This issue will be fixed in a future patch release and may require special upgrade actions.
July 13, 2017
- New views available in the Google Container Engine UI, allowing cross-cluster overview and inspection of various Kubernetes objects. This new UI will be rolling out in the coming week:
- Workloads: inspect and diagnose your pods and their controllers.
- Discovery and load balancing: view details of your services, ingresses and load balancers.
- Configuration: survey all config maps and secrets your containers are using.
- Storage: browse all storage classes, persistent volumes and claims that your clusters use.
July 11, 2017
Kubernetes v1.7.0: This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:
- 2017-07-11: europe-west2-a
- 2017-07-12: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a
- 2017-07-13: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
- 2017-07-14: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c asia-northeast1-c
Kubernetes 1.7 is being made available as an optional version for clusters. Please see the release announcement for more details on new features.
You can now use HTTP re-encryption through Google Cloud Load Balancing to allow HTTPS access from the GCP Load Balancer to your service backend. This feature ensures that your data is fully encrypted in all phases of transit, even after it enters Google's global network.
Support for all-private IP (RFC-1918) addresses is generally available. These addresses allow you to create clusters and access resources in all-private IP ranges, and extend your ability to use Container Engine clusters with existing networks.
Support for external source IP preservation is now generally available. This feature allows applications to be fully aware of client IP addresses for Kubernetes services you expose.
Cluster autoscaler now supports scaling node pools to 0 or 1 nodes, for when you don't need capacity.
Cluster autoscaler can now use a pricing-based expander, which applies additional cost-based constraints to let you use auto-scaling in the most cost-effective manner. This is the default as of 1.7.0 and is not user-configurable.
Cluster autoscaler now supports balanced scale-outs of similar node groups. This is useful for clusters that span multiple zones.
You can now use API Aggregation to extend the Kubernetes API with custom APIs. For example, you can now add existing API solutions such as service catalog, or build your own.
The following new features are available on Alpha clusters running Kubernetes version 1.7:
- Local storage
- External webhook admission controllers
Known Issues with v1.7.0:
- Kubelet certificate rotation is not enabled for Alpha clusters. This issue will be fixed in a future release.
- Kubernetes services with network load balancers using static IP will cause the kube-controller-manager to crash loop, leading to multiple master repairs. See issue#48848 for more details. This issue will be fixed in a future release.
July 10, 2017
You can now create clusters with preemptible nodes using the Google Cloud Console. For more information, see Preemptible VMs.
You can now disable basic authentication for new clusters using the Google Cloud Console.
You can now disable client certificate generation for new clusters using the Google Cloud Console.
June 26, 2017
- Known Issues with v1.6.6: A bug in the version of fluentd bundled with Kubernetes v1.6.6 causes JSON-formatted logs to be exported as plain text. This issue will be fixed in v1.6.7. Meanwhile, v1.6.6 will remain available as an optional version for new cluster creation and opt-in master upgrades, but will not be made the default. See issue #48018 for more details.
- There will be no release for the week of July 3rd, since this is a holiday in the US. The next release is planned for the week of July 10th.
June 20, 2017
- You can now use v1.6.6 for creating new clusters.
- The original plan to upgrade container cluster masters to 1.6 this week has been postponed due to a bug in the GLBC ingress controller that causes unintentional overwrites of manual health check edits (see known issues for v1.6.4). This bug is fixed in 1.6.6.
- DeleteNodepool now drains all nodes in the pool before deletion.
- You can now run Container Engine clusters in region australia-southeast1 (Sydney).
June 13, 2017
- v1.5.7 will no longer be available for new clusters and master upgrades.
- All cluster masters will be upgraded to v1.6.4 in the week of 2017-06-19.
June 5, 2017
- Cluster masters running Kubernetes versions v1.6.0 - v1.6.3 will be upgraded to v1.6.4 according to the following schedule:
- 2017-06-05: us-east1-d asia-northeast1-c
- 2017-06-06: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a europe-west2-a
- 2017-06-07: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
- 2017-06-08: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c
- After 2017-06-12, v1.5.7 will no longer be available for new clusters and master upgrades.
- You can now run Container Engine clusters in region europe-west2 (London).
June 1, 2017
- You can now specify a Kubernetes version when creating a cluster using the Google Cloud Console.
- You can now change the automatic repair setting for existing node pools in Google Cloud Console. For more information, see Node Auto-Repair.
- You can now change the image type for existing node pools in Google Cloud Console. For more information, see Node Image Migration.
May 30, 2017
- Kubernetes v1.6.4 is the default version for new clusters, released according to the following schedule:
- 2017-05-30: us-east1-d asia-northeast1-c
- 2017-05-31: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a europe-west2-a
- 2017-06-01: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
- 2017-06-02: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c
May 24, 2017
- Kubernetes v1.6.4 is available for new clusters and opt-in master upgrades.
- v1.6.1 is no longer available for container cluster node upgrades/downgrades.
- The default cluster version for new clusters will be changed to Kubernetes v1.6.4 in the week of May 29th.
- Kubernetes v1.6.3 was skipped due to known issues that have been fixed in v1.6.4.
May 17, 2017
- You can now create clusters with more than 500 nodes in zones europe-west1-b and us-central1-a.
- Fixed the known issue with Container Engine's IP Rotation feature where the cluster SSH firewall rule was not being updated.
- Container Engine integration with Google Cloud Platform Labels is now available in Beta. For more information, see Cluster Labeling.
May 12, 2017
- You can now use the Google Cloud Console to choose whether clusters should use legacy authorization permissions. This option is available in clusters running version 1.6 or later. See the Role-Based Access Control Documentation for more information.
May 10, 2017
- Cluster masters running Kubernetes versions v1.5.6 and below will be upgraded to v1.5.7 according to the following schedule:
- 2017-05-09: us-east1-d asia-northeast1-c
- 2017-05-10: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b
- 2017-05-11: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c
- 2017-05-12: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c
- v1.6.0 is no longer available for container cluster node upgrades/downgrades.
Known Issues
- A known issue with Container Engine's IP Rotation feature can cause it to break Kubernetes features that depend on the proxy endpoint (such as kubectl exec and kubectl logs), as well as cluster metrics exports into Stackdriver. This issue only affects your cluster if you ran CompleteIPRotation and have also disabled the default SSH firewall rule for cluster nodes. There is a simple manual fix; see IP Rotation known issues for details.
May 3, 2017
- You can now use the Google Cloud Console to choose whether existing node pools should be automatically upgraded when a new Kubernetes version becomes available. See Node Auto-Upgrade documentation for more information.
- You can now use the Google Cloud Console to scale existing clusters running Kubernetes version 1.6.0 or later up to 5000 nodes in most zones.
May 2, 2017
- Kubernetes v1.5.7 is the default version for new clusters. This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:
- 2017-05-02: us-east1-d asia-northeast1-c
- 2017-05-03: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b
- 2017-05-04: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c
- 2017-05-05: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c
- Cluster masters running Kubernetes versions v1.6.0 and v1.6.1 will be upgraded to v1.6.2.
April 26, 2017
- Kubernetes v1.6.2: This version will be available for new clusters and opt-in master upgrades.
- You can create a cluster with HTTP basic authentication disabled by passing an empty username: gcloud container clusters create CLUSTER_NAME --username="". This feature only works with version 1.6.0 and later.
- Fixed a bug where SetMasterAuth would fail silently on clusters below v1.6.0. SetMasterAuth is only allowed for clusters at v1.6.0 and above.
- Fixed a bug for clusters at v1.6.0 and above where fluentd pods were mistakenly created on all nodes when logging was disabled.
- gcloud kubectl version is now 1.6.2 instead of 1.6.0.
April 12, 2017
- Kubernetes v1.6.1: This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:
- 2017-04-12: us-east1-d asia-northeast1-c
- 2017-04-13: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
- 2017-04-14: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
- 2017-04-17: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
- Kubernetes v1.5.6 is still the default version for new clusters.
- Container Engine hosted masters will be upgraded to v1.5.6 according to the planned schedule mentioned above.
- Known issue: gcloud container clusters update --set-password (or --generate-password), for setting or rotating your cluster admin password, does not work on clusters running Kubernetes version 1.5.x or earlier. Please use this method only on clusters running Kubernetes version 1.6.x or later.
April 4, 2017
- Kubernetes v1.6.0: This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:
- 2017-04-04: us-east1-d asia-northeast1-c
- 2017-04-05: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
- 2017-04-06: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
- 2017-04-07: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
- Kubernetes v1.5.6 is still the default version for new clusters.
- Container-Optimized OS is now generally available. You can create or upgrade clusters and node pools that use Container-Optimized OS by specifying imageType values of either COS or GCI.
- A new system daemon, node problem detector, is introduced in Kubernetes v1.6 on COS node images. It detects node problems (e.g. kernel/network/container runtime issues) and reports them as node conditions and events.
- Starting in 1.6, a default StorageClass instance with the gce-pd provisioner is installed. All unbound PVCs that don't specify a StorageClass will automatically use the default provisioner. This is different behavior from previous releases and can be disabled by modifying the default StorageClass and removing the "storageclass.beta.kubernetes.io/is-default-class" annotation. This feature replaces alpha dynamic provisioning, but the alpha annotation will still be allowed and will retain the same behavior.
- gcloud container clusters create|get-credentials will now configure kubectl to use the credentials of the active gcloud account by default, instead of using application default credentials. This requires kubectl 1.6.0 or higher. You can update kubectl by running gcloud components update kubectl. If you prefer to use application default credentials to authenticate kubectl to Google Container Engine clusters, you can revert to the previous behavior by setting the container/use_application_default_credentials property: run gcloud config set container/use_application_default_credentials true or export CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS=true.
- gcloud command-line tool kubectl version updating to 1.6.0.
- New clusters launched at 1.6.0 will be using etcd3 in the master. Existing cluster masters will be automatically updated to use etcd3 in a future release.
- Starting in 1.6, RBAC can be used to grant permissions for users and Service Accounts to the cluster's API. To help transition to using RBAC, the cluster's legacy authorization permissions are enabled by default, allowing Kubernetes Service Accounts full access to the API like they had in previous versions of Kubernetes. An option will be rolled out soon to allow the legacy authorization mode to be disabled in order to take full advantage of RBAC.
- You can now use gcloud to set or rotate the admin password for Container clusters by running gcloud container clusters update --set-password or gcloud container clusters update --generate-password.
- During node upgrades, Container Engine will now verify and recreate the Managed Instance Group for a node pool (at size 0) if required.
March 29, 2017
Kubernetes v1.5.6 is the default version for new clusters. This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:
- 2017-03-29: us-east1-d asia-northeast1-c
- 2017-03-30: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
- 2017-03-31: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
- 2017-04-03: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
Cluster and node pool create requests will return a 4xx error (instead of 5xx) if an invalid service account is specified.
Return a more accurate error message for cluster requests if the Container API is not enabled.
March 20, 2017
- Update Google Container Engine's kubectl from version 1.5.3 to 1.5.4.
- Container Engine hosted masters will be upgraded to v1.5.4 according to the following planned schedule:
- 2017-03-23: us-east1-d asia-northeast1-c
- 2017-03-24: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
- 2017-03-27: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
- 2017-03-28: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
March 16, 2017
- Kubernetes v1.5.4 is the default version for new clusters.
- Added the --enable-autorepair flag to gcloud beta container clusters create, gcloud beta container node-pools create, and gcloud beta container node-pools update.
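For example (the cluster and node pool names are placeholders; a sketch of the new flag):

```
# Create a node pool with automatic node repair enabled.
gcloud beta container node-pools create my-pool \
    --cluster my-cluster \
    --enable-autorepair
```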
March 6, 2017
- Container Engine node auto-repair now available in Beta. For more information, see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair
- Google Cloud Console now allows enabling automatic repair for new clusters and node pools.
March 1, 2017
Container Engine hosted masters will be upgraded according to the following planned schedule:
- Masters running v1.4 will be upgraded to v1.4.9.
- Masters running v1.5 will be upgraded to v1.5.3.
- 2017-03-02: us-east1-d asia-northeast1-c
- 2017-03-03: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
- 2017-03-06: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
- 2017-03-07: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
February 23, 2017
- Kubernetes v1.5.3 is the default version for new clusters.
- gcloud command-line tool kubectl version updating to 1.5.3.
February 14, 2017
- It is no longer necessary to disable the HttpLoadBalancing add-on when you create a cluster without adding the compute read/write scope to nodes. Previously, when you created a cluster without adding the compute read/write scope, you were required to disable HttpLoadBalancing.
January 31, 2017
gcloud command-line tool kubectl version updating to 1.5.2.
January 26, 2017
Kubernetes v1.5.2 is the default version for new clusters.
The gcloud command-line tool and kubectl 1.5+ support using gcloud credentials for authentication. Currently, gcloud container clusters create and gcloud container clusters get-credentials configure kubectl to use Application Default Credentials to authenticate to Container Clusters. If these differ from the Cloud Identity and Access Management (Cloud IAM) role that the gcloud command-line tool is using, kubectl requests can fail authentication (#30617). With Google Cloud SDK 140.0.0 and kubectl 1.5+, the gcloud command-line tool can configure kubectl to use its own credentials. This means that if, e.g., the gcloud command-line tool is configured to use a service account, kubectl will authenticate as the same service account.
To enable using the gcloud command-line tool's own credentials, set the container/use_application_default_credentials property to false:
export CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS=false # or gcloud config set container/use_application_default_credentials false
The current default behavior is to continue using application default credentials. The gcloud command-line tool credentials will be made the default for kubectl configuration (via gcloud container clusters create|get-credentials) in a future release.
January 17, 2017
Kubernetes v1.4.8 is the default version for new clusters.
Kubernetes v1.5.2 is available for new clusters.
January 10, 2017
Rollout of Kubernetes v1.5 as the default for new clusters is postponed until v1.5.2 to fix known issues with v1.5.1.
Fixed an issue where Node Upgrades would fail if one of the nodes was not registered with the Master.
gcloud command-line tool kubectl version updating to 1.5.1.
Known Issues with Kubernetes v1.5.1
#39680 Defining a pod with a resource request of 0 will cause Controller Manager to crashloop.
#38322 Kubelet can evict or refuse to admit critical pods (kube-proxy, static pods) when under memory pressure.
January 4, 2017
- Default cluster version for new clusters will be changed to Kubernetes v1.5.1 in the week of January 9th.
January 3, 2017
- Google Cloud Console now allows setting newly created clusters and node pools to automatically upgrade when a new Kubernetes version becomes available. See documentation for details.
December 14, 2016
- Kubernetes v1.4.7 is the default version for new clusters.
- Kubernetes v1.5.1 is available for new clusters.
- Node pools can now opt in to automatically upgrade when a new Kubernetes version becomes available. See documentation for details.
- Node pool upgrades can now be rolled back using the gcloud alpha container node-pools rollback <pool-name> command. See gcloud alpha container node-pools rollback --help for more details.
December 7, 2016
- Google Cloud Console now allows choosing between Container-VM Image (GCI) and the deprecated container-vm when adding new node pools to existing clusters. To learn more about image types, click here.
December 5, 2016
- Container Engine hosted masters running v1.4 will be upgraded to v1.4.6.
November 29, 2016
Increase master disk size in large Google Container Engine clusters. This is needed because etcd needs much more IOPS in large clusters.
Change the gcloud container list-tags command to support user-specified filters on occurrences and expose a column summarizing vulnerability information.
November 15, 2016
- Kubernetes v1.4.6 is the default version for new clusters.
November 8, 2016
Container Engine hosted masters running v1.4 have been upgraded to v1.4.5.
Container Engine hosted masters running v1.3 will be upgraded to v1.4.5 according to the following planned schedule:
- 2016-11-09: us-east1-d
- 2016-11-10: asia-east1-a, asia-northeast1-a, europe-west1-c, us-central1-b, us-west1-a
- 2016-11-11: asia-east1-c, asia-northeast1-b, europe-west1-b, us-central1-f, us-east1-c, us-west1-b
- 2016-11-14: asia-east1-b, asia-northeast1-c, europe-west1-d, us-central1-a, us-central1-c, us-east1-b
November 7, 2016
- Google Cloud Console now allows creating temporary clusters with Kubernetes alpha features enabled.
November 2, 2016
Google Cloud Console now allows choosing between Container-VM Image (GCI) and the deprecated container-vm on cluster creation. To learn more about image types, click here.
Google Cloud Console supports Cluster Autoscaling on creating clusters, node pools, and editing existing node pools.
Google Cloud Console now allows specifying boot disk size of node VMs.
November 1, 2016
Kubernetes v1.4.5 is the default version for new clusters.
Kubernetes v1.4.5 and v1.3.10 include fixes for CVE-2016-5195 (Dirty Cow), which is a Linux kernel vulnerability that allows privilege escalation. If your clusters are running nodes with lower versions, we strongly encourage you to upgrade them to a version of Kubernetes that includes a node image that is not vulnerable, such as Kubernetes 1.3.10 or 1.4.5. To upgrade a cluster, see https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade.
Upgrade operations can now be cancelled using gcloud alpha container operations cancel <operation_id>. See gcloud alpha container operations cancel --help for more details.
October 17, 2016
Kubernetes v1.4.3 is the default version for new clusters.
Reminder that the base OS image for nodes has changed in the 1.4 release. A set of known issues have been identified and have been documented here. If you suspect that your application or workflow is having problems with new clusters, you may select the old ContainerVM by following the opt-out instructions documented here.
Rewrote the node upgrade logic to make it less disruptive by waiting for the node to register with the Kubernetes master before upgrading the next node.
Added support for new clusters and node-pools to use preemptible VM instances by using the --preemptible flag. See gcloud beta container clusters create --help and gcloud beta container node-pools create --help for more details.
October 10, 2016
Kubernetes v1.4.1 is becoming the default version for new clusters.
Reminder that the base OS image for nodes has changed in the 1.4 release. A set of known issues have been identified and have been documented here. If you suspect that your application or workflow is having problems with new clusters, you may select the old ContainerVM by following the opt-out instructions documented here.
Fixed a bug in `gcloud beta container images list-tags`.
Added support for Kubernetes labels on new clusters and node pools by passing `--node-labels=label1=value1,label2=value2...` (a sketch appears below). See `gcloud container clusters create --help` and `gcloud container nodepools create --help` for more details and examples.
Updated kubectl to version 1.4.1.
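A minimal sketch of the label flag, with placeholder label keys and values:

```sh
# Create a cluster whose nodes carry custom Kubernetes labels.
gcloud container clusters create labeled-cluster \
    --node-labels=env=test,team=backend

# Pods can then target those nodes with a matching nodeSelector.
```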
October 5, 2016
Can now specify the cluster-version when creating Google Container Engine clusters.
Update kubectl to version 1.4.0.
Introduce 1.3.8 as a valid cluster version. 1.3.8 fixes a log rotation leak on the master.
September 27, 2016
Kubernetes v1.4.0 is becoming the default version for new clusters.
Container-VM Image (GCI), which was introduced earlier this year, is now the default ImageType for new clusters and node-pools. The old container-vm is now deprecated; it will be supported for a limited time. To learn more about how to use GCI, click here.
Can now create temporary clusters with all Kubernetes alpha features enabled via `gcloud alpha container clusters create --enable-kubernetes-alpha`. See documentation for details.
Can now add custom Kubernetes labels on new clusters and node pools via `gcloud alpha container clusters create --node-labels=key1=value1,key2=value2...`. See `gcloud alpha container clusters create --help` for details.
Known Issues with v1.4.0 masters and older nodes
init-containers are now supported on Container Engine, but only when master and nodes are running 1.4.0 or higher. Other configurations are not supported.
Customers manually upgrading masters to 1.4 should be aware that the lowest node version supported with it is 1.2.
September 20, 2016
Container Engine hosted masters will be upgraded to v1.3.7 in zones according to the following planned schedule:
- 2016-09-21: us-east1-d
- 2016-09-22: asia-east1-a, europe-west1-c, us-central1-b, us-west1-a
- 2016-09-23: asia-east1-c, europe-west1-b, us-central1-f, us-east1-c, us-west1-b
- 2016-09-26: asia-east1-b, europe-west1-d, us-central1-a, us-central1-c, us-east1-b
`gcloud` command-line tool kubectl version updated to v1.3.7.
September 15, 2016
Kubernetes v1.3.7 is the default version for new clusters.
Container Engine hosted masters have been upgraded to v1.3.6.
Known Issues with v1.3.6 fixed in v1.3.7
September 6, 2016
- Cluster updates that add node locations (API: `rest/v1/projects.zones.clusters/update`, CLI: `gcloud beta container clusters update --additional-zones`) will now wait for all nodes to be healthy before marking the operation completed (`DONE`).
August 30, 2016
Kubernetes v1.3.5 is the default version for new clusters.
Known Issues with v1.3.5 fixed in v1.3.6
Known Issues with older versions fixed in v1.3.6
`cluster.master_auth.password` is no longer required in a `clusters.create` request. If a password is not specified for a cluster, one will be generated.
`gcloud` command-line tool kubectl version updated to v1.3.5.
Image Type selection for `gcloud container` commands is now GA. Can now use `gcloud container clusters create --image-type=...` and `gcloud container clusters upgrade --image-type=...`.
August 17, 2016
Kubernetes v1.3.5 is the default version for new clusters.
`gcloud` command-line tool changed the `container/use_client_certificate` property default value to `false`. This makes the `gcloud container clusters create` and `gcloud container clusters get-credentials` commands configure `kubectl` to use Google OAuth2 credentials by default instead of the legacy client certificate.
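If the old behavior is needed, the property can be flipped back explicitly; this is only a sketch of the documented property, not a recommendation:

```sh
# Make kubectl authenticate with the legacy client certificate again
# instead of Google OAuth2 credentials.
gcloud config set container/use_client_certificate true
```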
August 8, 2016
Kubernetes v1.3.4 is the default version for new clusters.
`gcloud` command-line tool kubectl version updated to v1.3.3.
July 29, 2016
- Kubernetes v1.3.3 is the default version for new clusters.
July 22, 2016
Kubernetes v1.3.2 is the default version for new clusters.
1.3.0 clusters have been upgraded to 1.3.2 to pick up the fix for bad route creation.
Fixed the issues where clusters with a non-default master auth username were unable to authenticate using HTTP Basic Auth.
The DNS Replica Auto-Sizer now creates a minimum of 2 replicas except for single node clusters.
`gcloud` command-line tool kubectl version updated to v1.2.5.
Google Cloud Console now supports CIDR ranges with mask sizes from /8 to /19 on cluster creation.
Google Cloud Console now supports specifying additional zones on cluster creation.
Google Cloud Console now supports creating clusters with up to 2000 nodes (across multiple node pools).
Google Cloud Console now supports specifying a local SSD count on cluster creation and while creating and editing node pools.
Known Issues
#29051 PVC volume not detached if pod is deleted via namespace deletion.
#29358 Google Compute Engine PD Detach fails if node no longer exists.
#28616 Mounting (only 'default-token') volume takes a long time when creating a batch of pods (parallelization issue).
#28750 Error while tearing down pod, "device or resource busy" on service account secret.
July 11, 2016
Kubernetes v1.3.0 is becoming the default version for new clusters.
Existing Google Container Engine cluster masters were upgraded to Kubernetes v1.2.5 over the previous week.
Improved error messages when a cluster is already being operated on.
Now supports creating clusters and node pools with local SSDs attached to nodes. See Container Cluster Operations for examples.
Cluster autoscaling is now available for clusters running v1.3.0. Autoscaling options can be specified on cluster create and update. See Container Cluster Operations for examples.
Existing single-zone clusters can now be updated to multi-zone clusters by running `gcloud beta container clusters update --additional-zones`. See Container Cluster Operations for examples, and the sketch at the end of this entry.
Known issues:
Scaling v1.3.0 clusters after creation (including via cluster autoscaling) can cause bad routes to be created with colliding target CIDRs. Bad routes can be detected and manually fixed as follows:
  1. List routes with duplicate destination ranges: `gcloud compute routes list --filter="name ~ gke-$CLUSTER_NAME" --format='value(destRange)' | uniq -d`. If this returns any values, the bad routes can be fixed by deleting one of the target instances; a new one will be automatically recreated with a working route.
  2. Replace `$DUPE_RANGE` with a destination range from step 1: `gcloud compute routes list --filter="destRange:$DUPE_RANGE"`.
  3. Delete one of the target instances listed by step 2: `gcloud compute instances delete $TARGET_INSTANCE`.
kubectl authorization for v1.3.0 clusters fails if the cluster is created with a non-default master auth username (`gcloud container clusters create --username ...`). This can be worked around by authenticating with the cluster certificate instead: run `kubectl config unset users.gke_$PROJECT_$ZONE_$NAME.username` on the machine from which you want to run `kubectl`, where `$PROJECT`, `$ZONE`, and `$NAME` are the cluster's project ID, zone, and name, respectively.
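A minimal sketch of the multi-zone update mentioned above, with placeholder cluster and zone names:

```sh
# Spread an existing single-zone cluster's nodes across two more zones.
gcloud beta container clusters update my-cluster \
    --zone us-central1-a \
    --additional-zones us-central1-b,us-central1-c
```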
July 1, 2016
Kubernetes v1.2.5 is the default version for new clusters.
Fixed a bug where Google Cloud Console didn't properly render clusters with no node pools.
Google Cloud Console supports editing clusters with no node pools.
June 20, 2016
Google Cloud Console supports creation and deletion of node pools.
(Breaking change) The `--wait` flag for the `gcloud container clusters` command group is now deprecated; please use the `--async` flag instead.
June 13, 2016
- Bug fixes.
June 7, 2016
Fixed a bug where `kubectl` for the wrong architecture was installed on Windows. We now install 32- and 64-bit versions.
Google Cloud Console supports resizing and upgrading node pools.
June 3, 2016
- Bug fixes.
May 27, 2016
The `gcloud container clusters update` command is now available for updating cluster settings of an existing container cluster.
The `gcloud container node-pools` commands are now available for creating, deleting, describing, and listing node pools of a cluster.
Google Cloud Console supports listing node pools. Listed node pools can also be upgraded/downgraded to supported Kubernetes versions.
May 18, 2016
`gcloud alpha container` commands (e.g. create) now support specifying alternate ImageTypes, such as the newly-available Beta Container-VM Image. To try it out, update to the latest gcloud (`gcloud components install alpha ; gcloud components update`) and then create a new cluster: `gcloud alpha container clusters create --image-type=GCI $NAME`. Support for ImageTypes in Google Cloud Console will follow at a later date.
The `gcloud container clusters list` command now sorts the clusters based on zone and then on cluster name.
The `gcloud container clusters create` command now allows specifying `--max-nodes-per-pool` (default 1000) to create multiple node pools for large clusters.
May 16, 2016
Container Engine hosted masters have been upgraded to v1.2.4.
`gcloud` command-line tool `kubectl` version updated to v1.2.4.
CreateCluster calls now accept multiple NodePool objects.
May 6, 2016
Container Engine hosted masters have been upgraded to v1.2.3.
`gcloud` command-line tool `kubectl` version updated to v1.2.3.
April 29, 2016
Kubernetes v1.2.3 is the default version for new clusters.
`gcloud container clusters resize` now allows specifying a node pool via `--node-pool`.
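A minimal sketch of a pool-scoped resize; the cluster name, pool name, and node count are placeholders, and the flag for the target size has changed across gcloud releases (`--size` originally, `--num-nodes` in current releases):

```sh
# Resize only the "default-pool" node pool of "my-cluster" to 5 nodes.
gcloud container clusters resize my-cluster \
    --node-pool default-pool \
    --num-nodes 5
```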
April 21, 2016
Can now create a multi-zone cluster, which is a cluster whose nodes span multiple zones, enabling higher availability of applications running in the cluster. More details on multi-zone clusters can be found at http://kubernetes.io/docs/admin/multiple-zones/. The ability to convert existing clusters to be multi-zone will be coming soon.
`gcloud container clusters create` now allows specifying multiple zones within a region for your cluster's nodes to be created in, using the `--additional-zones` flag (a sketch appears after this list).
Fixed a bug that caused the `kubectl` component to be missing from `gcloud components list` on Windows.
`gcloud` command-line tool `kubectl` version updated to v1.2.2.
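A minimal sketch of a multi-zone cluster creation, with placeholder names:

```sh
# Create a cluster whose nodes span three zones in the us-central1 region.
gcloud container clusters create multi-zone-cluster \
    --zone us-central1-a \
    --additional-zones us-central1-b,us-central1-c
```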
April 13, 2016
- Known issue: the "bastion route" workaround for accessing services from outside of a kubernetes cluster no longer works with 1.2.0 - 1.2.2 nodes, due to a change in kube-proxy. If you are using this workaround, we recommend not upgrading nodes to 1.2.x at this time. This will be addressed in a future patch release.
April 11, 2016
Kubernetes v1.2.2 is the default version for new clusters.
`gcloud alpha container clusters update` now allows enabling/disabling addons for Container Engine clusters via the `--update-addons` flag.
`gcloud container clusters create` now supports disabling the HPA and Ingress controller addons via the `--disable-addons` flag (see the sketch after this list).
Google Cloud Console supports a "Google Kubernetes Engine master upgrade" option, which allows proactive upgrade of cluster masters. Note this is the same functionality available via `gcloud container clusters upgrade --master`.
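A minimal sketch of disabling addons at creation time; the addon names shown are assumptions based on the API's addon identifiers:

```sh
# Create a cluster without the HTTP load balancing (Ingress) and HPA addons.
gcloud container clusters create lean-cluster \
    --disable-addons HttpLoadBalancing,HorizontalPodAutoscaling
```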
April 4, 2016
- Kubernetes v1.2.1 is the default version for new clusters.
March 29, 2016
The API Discovery Doc and Client Libraries have been updated.
`gcloud container clusters create|get-credentials` will warn|fail respectively if the HOME env var isn't set. The variable is required to store kubectl credentials (kubeconfig).
The `gcloud` command-line tool `kubectl` component is now available for Windows.
March 21, 2016
Kubernetes v1.2.0 is the default version for new clusters. This update contains significant changes from v1.1, described in detail at releases-1.2.0. Major changes include:
- Increased cluster scale by 400% to 1000 nodes with 30,000 pods per cluster.
- Kubelet supports 100 pods per node with 4x reduced system overhead.
- Deployment and DaemonSet API now Beta. Job and HorizontalPodAutoscaler APIs moved from Beta to GA.
- Ingress supports HTTPS.
- Kube-Proxy now defaults to an iptables-based proxy.
- Docker v1.9.1.
- Dynamic configuration for applications via the ConfigMap API provides an alternative to baking command-line flags into the container image at build time.
- New Kubernetes GUI that enables the same functionality as the CLI.
- Graceful node shutdown via the `kubectl drain` command, which gracefully evicts pods from nodes (see the sketch at the end of this entry).
Access scopes `service.management` and `servicecontrol` are now enabled by default for new Container Engine clusters.
Clusters created without compute read/write node scopes must also disable HttpLoadBalancing. Note that disabling compute read/write is only possible via the raw API, not the `gcloud` command-line tool or the Google Cloud Console. Cluster updates to clusters whose node scopes do not have compute read/write must also specify an AddonsConfig with HttpLoadBalancing disabled.
`gcloud` command-line tool `kubectl` version updated to 1.2.0.
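A minimal sketch of the drain workflow, with a placeholder node name:

```sh
# Evict pods from a node before maintenance, then return it to scheduling.
kubectl drain gke-my-cluster-node-1
kubectl uncordon gke-my-cluster-node-1
```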
March 16, 2016
CreateCluster will now succeed if the Kubernetes API reports that at least 99% of nodes have registered and are healthy within a startup deadline.
`gcloud container clusters create` prints a warning if cluster creation finished with more than 99% but less than 100% of nodes registered/healthy.
March 2, 2016
- Container Engine hosted master upgrades from v1.1.7 to v1.1.8 were completed this week.
February 26, 2016
Kubernetes v1.1.8 is the default version for new clusters.
DeleteCluster will fail fast with an error if there are backend services that target the cluster's node group, as existence of such services will block deletion of the nodes.
You can now self-initiate an upgrade of a cluster's hosted master to the latest supported Kubernetes version by running `gcloud container clusters upgrade --master`. This lets you access versions ahead of automatic Container Engine hosted master upgrades.
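A minimal sketch of a self-initiated master upgrade, with a placeholder cluster name:

```sh
# Upgrade the hosted master of "my-cluster" to the latest supported version.
gcloud container clusters upgrade my-cluster --master
```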
February 10, 2016
Container Engine hosted master upgrades from v1.1.3, v1.1.4 to v1.1.7 were completed this week.
`gcloud` command-line tool `kubectl` version is 1.1.7.
January 28, 2016
- Kubernetes v1.1.7 is the default version for new clusters.
January 15, 2016
Kubernetes v1.1.4 is the default version for new clusters.
Can now run `gcloud container clusters resize` to resize Container Engine clusters.
`gcloud container clusters describe` and `list` now notify the user when a node upgrade is available.
`gcloud` command-line tool `kubectl` version is 1.1.3.
January 5, 2016
Fixed an issue where Google Cloud Console incorrectly disallowed users from creating clusters with Cloud Monitoring enabled.
Fixed an issue where users could not create clusters in domain-scoped projects.
December 8, 2015
Kubernetes v1.1.3 is the default version for new clusters.
Added support for custom machine types (see the sketch at the end of this entry).
Create cluster now checks that the network for the cluster has a route to the default internet gateway. If no such route exists, the request returns with an error immediately, instead of timing out waiting for the nodes to register.
`gcloud container clusters upgrade` now prompts for confirmation.
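A minimal sketch of using a custom machine type for cluster nodes; the `custom-VCPUS-MEMORY_MB` naming follows Compute Engine's convention and the values are placeholders:

```sh
# Create a cluster whose nodes have 4 vCPUs and 5 GB of memory each.
gcloud container clusters create custom-cluster \
    --machine-type custom-4-5120
```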
December 3, 2015
The Google Container Engine v1beta1 API, which was previously deprecated, is now disabled.
Container Engine hosted masters were upgraded to v1.1.2 this week, except for clusters with nodes older than v1.0.1, which will be upgraded once v1.1.3 is available.
November 30, 2015
Kubernetes v1.1.2 is the default version for new clusters.
Container Engine now supports manual-subnet networks. Subnetworks are an Alpha feature of Google Compute Engine and you must be whitelisted to use them. See the Subnetworks documentation for whitelist information.
Once whitelisted, the subnetwork is specified in the cluster create request. In the REST API, this is specified as the value of the `subnetwork` field of the cluster object; when using `gcloud container` commands, pass a `--subnetwork` flag to `gcloud container clusters create` (a sketch appears below).
Improved reliability of cluster creation and deletion.
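A minimal sketch of the gcloud form, assuming the project is whitelisted and `my-subnetwork` already exists:

```sh
# Create a cluster whose nodes live in an existing subnetwork.
gcloud container clusters create subnet-cluster \
    --subnetwork my-subnetwork
```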
November 18, 2015
`kubectl` version is 1.1.1.
New API options for disabling the HTTP load balancer controller and horizontal pod autoscaling.
November 12, 2015
The release documented below is being rolled out over the next few days.
Clusters can now be created with up to 250 nodes.
The Google Compute Engine load balancer controller addon is added by default to new clusters. Learn more.
Kubernetes v1.1.1 is the default version for new clusters.
Important Note: The packaged `kubectl` is version 1.0.7; consequently, new Kubernetes 1.1 APIs like autoscaling will not be available via `kubectl` until next week's push of the `kubectl` binary. Users who want access before then can manually download a 1.1 `kubectl` and then run `chmod a+x kubectl; cp kubectl $(which kubectl)` to install it.
Kubernetes v0.19.3 and v0.21.4 are no longer supported for nodes.
New clusters using the `f1-micro` machine type must contain at least three nodes. This ensures that there is enough memory in the cluster to run more than just a couple of very small pods.
`kubectl` version is 1.0.7.
November 4, 2015
Kubernetes v1.0.7 is the default version for new clusters.
Existing clusters will have their masters upgraded from v1.0.6 to v1.0.7 over the coming week.
Added support for subnetworks (Alpha).
October 27, 2015
Added a `detail` field to operation objects to show progress details for long-running operations (such as cluster updates).
Better categorization of errors caused by projects not being fully initialized with the default service accounts.
October 19, 2015
The `--container-ipv4-cidr` flag has been deprecated in favor of `--cluster-ipv4-cidr`.
The current node count of Container Engine clusters is available from the REST API.
Metrics in Cloud Monitoring are now available with a much shorter delay.
Cluster names now only need to be unique within each zone, not within the entire project.
Error messages involving regular expressions have more useful, human-readable hints.
October 12, 2015
- You can now specify custom metadata to be added to the nodes when creating a cluster with the REST API.
September 25, 2015
Cluster self links now contain the project ID rather than the project number.
`kubectl` version is 1.0.6.
September 18, 2015
Kubernetes v1.0.6 is the default version for new clusters.
Existing clusters will have their masters upgraded from v1.0.4 to v1.0.6 over the coming week.
September 4, 2015
- Fixed a bug where a `CreateCluster` request would be rejected if it contained a `ClusterApiVersion`. Since the field is output-only, it is now silently ignored.
August 31, 2015
To avoid creating clusters without any space for non-system containers, there are now limits on clusters consisting of f1-micro instances:
- A single-node f1-micro cluster must disable both logging and monitoring.
- A two-node f1-micro cluster must disable at least one of logging and monitoring.
August 26, 2015
Google Container Engine is out of beta.
All `gcloud beta container` commands are now in the `gcloud container` command group instead.
You can now use the Google Container Engine API to enable or disable Google Cloud Monitoring on your cluster. Use the `desiredMonitoringService` field of the cluster update method. When updating this field, the Kubernetes apiserver will see a brief outage as the master is updated.
August 14, 2015
Kubernetes v1.0.3 is the default version for new clusters.
The `compute` and `devstorage.read_only` auth scopes are no longer required and are no longer automatically added server-side to new clusters. The `gcloud` command and Cloud Console still add these scopes on the client side when creating new clusters; the REST API does not.
Listing container clusters in a non-existent zone now results in a `404: Not Found` error instead of an empty list.
The `get-credentials` command has moved to `gcloud beta container clusters get-credentials`. Running `gcloud beta container get-credentials` prints an error redirecting to the new location.
The new `gcloud beta container get-server-config` command returns:
- The default Kubernetes version currently used for new clusters.
- The list of supported versions for node upgrades (via `gcloud beta container clusters upgrade`).
August 4, 2015
Kubernetes v1.0.1 is the default version for new clusters.
`kubectl` version is 1.0.1.
Removed the v1beta1 API discovery doc in preparation for deprecation.
The `gcloud alpha container` commands target the Container Engine v1 API. The options for `gcloud alpha container clusters create` have been updated accordingly (a sketch follows the list):
- `--user` renamed `--username`.
- `--cluster-api-version` removed. Cluster version is not selectable in the v1 API; new clusters are always created at the latest supported version.
- `--image` option removed. Source image is not selectable in the v1 API; clusters are always created with the latest supported ContainerVM image. Note that using an unsupported image (i.e. not ContainerVM) would result in an unusable cluster in most cases anyway.
- Added `--no-enable-cloud-monitoring` to turn off cloud monitoring (on by default).
- Added `--disk-size` option for specifying boot disk size of node VMs.
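A minimal sketch combining the updated options, with placeholder values:

```sh
# Create a v1-API cluster with a custom username, a 100 GB boot disk per
# node, and Cloud Monitoring turned off.
gcloud alpha container clusters create example-cluster \
    --username admin \
    --disk-size 100 \
    --no-enable-cloud-monitoring
```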
July 27, 2015
A firewall rule is now created at the time of cluster creation to make node VMs accessible via SSH. This ensures that the Kubernetes proxy functionality works.
Updated the admission controllers list to match the recommended list for v1.0.
Disabled the `--source-image` option in the v1beta1 API. Attempting to run `gcloud alpha container clusters create --source-image` now returns an error.
Removed the option to create clusters in the 172.16.0.0/12 private IP block.
July 24, 2015
Upgrade to Kubernetes v1 - Action Required
Users must upgrade their configuration files to the v1 Kubernetes API before August 5th, 2015. This applies to any Beta Container Engine cluster created before July 21st.
Google Container Engine will upgrade container cluster masters beginning on August 5th, to use the v1 Kubernetes API. If you'd like to upgrade prior, please sign up for an early upgrade.
This upgrade removes support for the v1beta3 API. All configuration files must be formatted according to the v1 specification to ensure that your cluster remains functional. The v1 API represents the production-ready set of APIs for Kubernetes and Container Engine.
Some helpful resources are:
An upgrade script to convert your v1beta3 configuration files to v1.
The official v1 specification reference.
If your configuration files already use the v1 specification, no action is required.
July 15, 2015
Kubernetes v0.21.2 is the default version for new clusters.
Existing masters running versions 0.19.3 or higher will be upgraded to 0.21.2. Customers should upgrade their container clusters at their convenience. Clusters running versions older than 0.19.3 cannot be updated.
The `kubectl` version is now 0.20.2.
July 10, 2015
Kubernetes v0.21.1 is the default version for new clusters.
The `kubectl` version is now 0.20.1.
Known issue:
The `rolling-update` command will fail when using `kubectl` v0.20.1 with clusters running v0.19.3 of the Kubernetes API. To resolve the issue, specify `--api-version=v1beta3` as a flag to the `rolling-update` command: `kubectl rolling-update --api-version=v1beta3 --image=<foo> ...`
To find your version of `kubectl`: `kubectl version`
To find your cluster version: `gcloud container clusters describe CLUSTER_NAME`
June 25, 2015
The Google Container Engine REST API has been updated to v1.
The REST API returns a more accurate error message when the region is out of quota.
`gcloud container clusters create` supports specifying disk size for nodes with the `--disk-size` flag.
June 22, 2015
Google Container Engine is now in Beta.
Kubernetes master VMs are no longer created for new clusters. They are now run as a hosted service. There is no Compute Engine instance charge for the hosted master. Read more about pricing details.
Kubernetes v0.19.3 is the default version for new clusters.
For projects with default regional Compute Engine CPUs quota, container clusters are limited to 3 per region.
Documentation updated to use the `gcloud beta` command group.
Documentation updated to use `apiVersion: v1` in all samples.
Known issue:
`kubectl exec` is broken for cluster version 0.19.3.
June 10, 2015
Documentation updated to use v1beta3.
Kubernetes v0.18.2 is the default version for new clusters.
June 3, 2015
Kubernetes v0.18.0 is the default version for new clusters.
Clusters launched with 0.18.0 and above are deployed using Managed Instance Groups.
New clusters can no longer be created at v0.16.0.
Fixed a race condition that could cause routes to be leaked on cluster deletion.
Fail faster and with a helpful message if the project is lacking specific resource quota to create a functioning cluster.
`gcloud` command-line tool:
The `gcloud alpha container clusters create` command always sets `kubectl`'s current context to the newly created cluster.
The `clusters create` and `get-credentials` commands look for and write `kubectl` configuration to the path given by the `KUBECONFIG` environment variable (a sketch appears after this list). This matches the behavior of `kubectl config *` commands.
The `gcloud alpha container kubectl` command is disabled. Use `kubectl` directly instead.
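A minimal sketch of the `KUBECONFIG` behavior, with a placeholder file path and cluster name:

```sh
# Write credentials for a new cluster to an alternate kubeconfig file.
export KUBECONFIG=$HOME/.kube/gke-test-config
gcloud alpha container clusters create test-cluster
kubectl config current-context   # reports the context of the new cluster
```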
May 22, 2015
Kubernetes v0.17.1 is the default version for new clusters.
Kubernetes v0.16.0 is still supported. However, new clusters can no longer be created at Kubernetes v0.17.0 due to the bug listed below.
Fixes a bug that was preventing containers from accessing the Google Compute Engine metadata service.
Kubernetes service DNS names are now suffixed with `.<namespace>.svc.cluster.local` instead of `.<namespace>.kubernetes.local`.
kubectl 0.17.0 notes:
Updated `kubectl cluster-info` to show v1beta3 addresses.
Added `kubectl log --previous` support to view the last terminated container's log.
Added displaying external IPs to `kubectl cluster-info`.
Print container statuses in `kubectl get pods`.
Added `kubectl_label` to custom functions in bash completion.
Changed `IP` to `IP(S)` in service columns for `kubectl get`.
Added `TerminationGracePeriod` field to PodSpec and `grace-period` flag to `kubectl stop`.
May 13, 2015
Kubernetes v0.17.0 is the default version for new clusters.
New clusters can no longer be created at Kubernetes version 0.15.0.
Standalone `kubectl` works with Container Engine created clusters without needing to set the `KUBECONFIG` env var.
`gcloud alpha container kubectl` is deprecated. The command still works, but prints a warning with directions for using `kubectl` directly.
Added a new command, `gcloud alpha container get-credentials`. The command fetches cluster auth and updates the local `kubectl` configuration.
`gcloud alpha container kubectl` and `clusters delete|describe` print more helpful error messages when the cluster cannot be found due to an incorrect zone flag/default.
`gcloud alpha container clusters create` exits with a non-zero return code if cluster creation succeeded but cert data could not be fetched.
kubectl 0.16.1 notes:
Improvements to `kubectl rolling-update`.
Default global `kubeconfig` location changed to `~/.kube/config` from `~/.kube/.kubeconfig`.
`kubectl delete` now stops resources by default (deletes child resources, e.g. pods managed by a replication controller).
Flag word separators `-` and `_` made equivalent.
Recognize `.yml` extension for schema files.
`kubectl get pods` now prints container statuses.
Simplified loading rules for `kubeconfig` (see `kubectl config --help` for details).
`--flatten` and `--minify` options for `kubectl config view`.
Various bugfixes.
May 8, 2015
- Master VMs are now created with a data persistent disk to store important cluster data, leaving the boot disk for the OS / software.
May 2, 2015
Kubernetes v0.16.0 is the default version for new clusters.
Clusters that don't have nginx will use bearer token auth instead of basic auth.
`KUBE_PROXY_TOKEN` added to `kube-env` metadata.
April 22, 2015
A CIDR can now be requested during cluster creation when using the `gcloud` command-line tool or the REST API. For the `gcloud` command-line tool, use the `--container-ipv4-cidr` flag. If not set, the server will choose a CIDR for the cluster (a sketch appears after this list).
Standalone `kubectl` instructions are now available from `gcloud alpha container kubectl --help`.
When fetching cluster credentials after creating a cluster using the `gcloud` command-line tool, you'll never have to enter the passphrase for your SSH key more than once.
The `gcloud alpha container clusters ...` commands default to human-readable (table) output.
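A minimal sketch of requesting a specific cluster CIDR, with placeholder values:

```sh
# Request a /14 container address range at creation time; omit the flag to
# let the server pick one.
gcloud alpha container clusters create cidr-cluster \
    --container-ipv4-cidr 10.100.0.0/14
```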
April 16, 2015
Container Engine:
Kubernetes v0.15.0 is the default version for new clusters. v0.14.2 is still supported.
The Kubernetes v1beta3 API is now enabled for new clusters.
New clusters can no longer be created at kubernetes version 0.13.2.
`gcloud` command-line tool:
The `kubectl` version is now v0.14.1.
The deprecated `gcloud alpha container pods|services|replicationcontrollers` commands have been removed. Use `gcloud alpha container kubectl` instead.
April 9, 2015
Container Engine:
Kubernetes v0.14.2 is the default version for new clusters.
New clusters can no longer be created at kubernetes version 0.14.1.
Cluster creation is more reliable.
Clusters created via the Google Cloud Console will pre-fill the cluster name with a project-unique name instead of a zone-unique name.
API endpoint no longer included in cluster list.
April 2, 2015
Container Engine:
Kubernetes v0.14.1 is the default version for new clusters.
New clusters can no longer be created at version 0.11.0.
Container Engine's cluster firewall no longer specifies target-tags. This allows pods to make outgoing connections by default (in the private network).
`gcloud` command-line tool:
Clusters created by the `gcloud` command-line tool now automatically send logs to Google Cloud Logging unless explicitly disabled using the `--no-enable-cloud-logging` flag. Logs are visible in the logs section of the Cloud Console once your project has enabled the Google Cloud Logging API.
You can now access Container Engine clusters with standalone `kubectl` (i.e. without `gcloud alpha container`) after setting an environment variable, which is printed after successful cluster creation and/or the first time accessing a cluster with `gcloud alpha container kubectl`.
gcloud will always try to fetch certificate files for the cluster if they are missing. The "WARNING: No certificate files found in..." message will resolve itself on a subsequent `gcloud alpha container kubectl` command run if the cluster is healthy.
Known issue: `container` commands are included in the `alpha` component, but the Kubernetes client (`kubectl`) is still installed with the `preview` component, so users will need both.
April 1, 2015
- All Container Engine commands have moved from `gcloud preview` to `gcloud alpha`. Run `gcloud components update alpha` to install this command group. Documentation has been updated to use the `alpha` commands.
March 25, 2015
Kubernetes v0.13.2 is the default version for new clusters.
The `kubectl` version is now v0.13.1.
Updated to `container-vm-v20150317`, which starts up more reliably.
The default boot disk size for cluster nodes has been increased from 10 GB to 100 GB.
February 25, 2015
`gcloud` command-line tool:
The `kubectl` wrapper commands (`gcloud preview container pods|services|replicationcontrollers`) have been deprecated in favor of using `gcloud preview container kubectl` directly. Calling the deprecated commands prints the equivalent `kubectl` command.
The `kubectl` version has been bumped to 0.11.0.
Fixed a bug that prevented `kubectl update` with `--patch` from working.
The `kubectl` command now automatically tries refetching the configuration if the command fails with a stale configuration error.
February 19, 2015
Google Container Engine:
Kubernetes v0.11.0 is the default version for new clusters.
Removed support for creating clusters at Kubernetes v0.9.2.
Nodes now use the container-vm-v20150129 image.
`gcloud` command-line tool:
Pods created with `gcloud preview container pods create` no longer bind to a host port. As a result, the scheduler can assign more than one pod to each host.
The version of `kubectl` used by the `gcloud preview container kubectl` command is 0.10.1.
February 12, 2015
Kubernetes v0.10.1 is the default version for new clusters.
Removed support for creating clusters at Kubernetes v0.10.0.
Improved API enablement flow and error messages when first visiting the Container Engine page of the Google Cloud Console.
February 5, 2015
Google Container Engine:
Kubernetes v0.10.0 is the default version for new clusters.
Removed support for creating clusters at Kubernetes version 0.8.1.
`gcloud` command-line tool:
The `gcloud preview container kubectl` command is upgraded to version 0.9.1:
- `kubectl create` handles bulk creation from a file or directory.
- The `createall` command has been removed.
- Added the `kubectl rollingupdate` command, which runs controlled updates of replicated pods.
- Added the `kubectl run-container` command, which simplifies creation of an (optionally replicated) pod from an image.
- Added the `kubectl stop` command to cleanly shut down a replication controller.
- Added `kubectl config ...` commands for managing config for multiple clusters/users. (Note: this is not yet compatible with `gcloud preview container kubectl`.)
Refer to the `kubectl` reference documentation for more details.
January 29, 2015
Kubernetes v0.9.2 is the default version for new clusters.
Removed support for creating clusters at v0.7.1. Existing clusters at this version can still be used and deleted.
SkyDNS is supported for services on clusters using v0.9.2 onwards.
January 21, 2015
Improved error messages during pod creation when the source image is invalid.
Fixed a bug affecting Compute Engine routes whose `destRange` fields are plain IP addresses.
Improved the reliability of cluster creation when provisioning is slow.
January 15, 2015
Kubernetes v0.8.1 is the default version for newly created clusters. Our v0.8.1 support includes changes on the 0.8 branch at 0.8.1.
Removed support for creating clusters at Kubernetes v0.8.0. Existing clusters at this version can still be used and deleted.
Service accounts and auth scopes can be added to node instances at the time of creation for all pods to use.
The command line interface now renders multiple error messages across newlines and tabs, instead of using a comma separator.
Machine type information has been fixed in the cluster details page of the Google Cloud Console.
January 8, 2015
Kubernetes v0.8.0 is the default version for newly created clusters. Kubernetes v0.7.1 is also supported. Refer to the Kubernetes release notes for information about each release. Our v0.7.1 support includes changes on the 0.7 branch at 0.7.1. Our v0.8.0 support includes changes in the 0.7.2 and 0.8.0 releases.
Removed support for creating clusters at Kubernetes v0.6.1 and v0.7.0. Existing clusters at these versions can still be used and deleted.
The `pods|services|replicationcontrollers create` commands now validate the resource type when creating with `--config-file`. This fixes the known issue in the December 12, 2014 release.
December 19, 2014
Kubernetes v0.7.0 is the default version for newly created clusters.
Removed support for creating clusters at Kubernetes v0.4.4 and v0.5.5. Existing clusters at these versions can still be used and deleted.
December 12, 2014
Known issues:
- The `pods|services|replicationcontrollers create` commands do not validate the resource type when creating with `--config-file`. The command creates the resource specified in the configuration file, regardless of the command group specified. For example, calling `pods create` and passing a service configuration file creates a service instead of failing.
Updates:
Kubernetes v0.6.1 is the default version for newly created clusters.
Google Container Engine now reserves a /14 CIDR range for new clusters. Previously, a /16 was reserved.
New clusters created with Kubernetes v0.4.4 now use the backports-debian-7-wheezy-v20141108 image. This replaces the previous backports-debian-7-wheezy-v20141021 image.
New clusters created with Kubernetes v0.5.5 or v0.6.1 now use the container-vm image, instead of the Debian backports image.
The Service Operations documentation has been updated to describe the `createExternalLoadBalancer` option.
A new `gcloud preview container kubectl` command has been added to the CLI. This is a pass-through command to call the native Kubernetes kubectl client with arbitrary commands, using the `gcloud` command-line tool to handle authentication.
The `--cluster-name` flag in all CLI commands has been renamed to `--cluster`.
New `describe` and `list` support for cluster operations.
December 5, 2014
The syntax for creating a pod with the Google Container Engine command line interface has changed. The name of the pod is now specified as the value of a `--name` flag. See the Pod Operations page for details.
Clusters and Operations returned by the API now include a `selfLink` field, and Operations also include a `targetLink` field, which contain the full URL of the given resource.
Added support for Kubernetes v0.4.4 and Kubernetes v0.5.5. The default version is now v0.4.4. Refer to the Kubernetes release notes for information about each release. Our v0.4.4 support includes changes on the 0.4 branch from 0.4.2 through 0.4.4. Our v0.5.5 support includes changes on the 0.5 branch through 0.5.5.
Removed support for creating clusters at Kubernetes v0.4.2. Existing clusters at this version can still be used and deleted.
November 20, 2014
Updates to the `gcloud preview container` commands:
New error message that catches cluster creation failure due to a missing `default` network.
You can specify a default zone and cluster with `gcloud config set compute/zone ZONE` and `gcloud config set container/cluster CLUSTER_NAME`. There is currently a bug preventing the default cluster name from working if the local configuration cache is missing. If you see a stack trace when omitting `--cluster-name`, repeat the command once with the flag specified. Subsequent commands can omit the flag.
The default cluster name is set to the value of the new cluster when a cluster is successfully created.
The `gcloud preview container clusters list` command lists clusters across all zones if no `--zone` flag is specified. The `list` command ignores any default zone that may be set.
Documentation updates:
- A new Getting Started guide has been added: Hello Wordpress.
Cloud Console updates:
- Cluster error state information is available in the Cloud Console.
November 4, 2014
(Updated November 10, 2014: Added two additional known issues with Google Container Engine.)
Google Container Engine is a new service that creates and manages Kubernetes clusters for Google Cloud Platform users.
Container Engine is currently in Alpha state; it is suitable for experimentation and is intended to provide an early view of the production service, but customers are strongly encouraged not to run production workloads on it.
The underlying open source Kubernetes project is being actively developed by the community and is not considered ready for production use. This version of Google Container Engine is based on Kubernetes public build v0.4.2. While the Kubernetes community is working hard to address community-reported issues as they are reported, there are some known issues in the v0.4.2 release that will be addressed in v0.5 and that will be incorporated into Google Container Engine in the coming days.
Known issues with the Kubernetes 0.4.2 release
(Issue #1730) External health checks that use in-container scripts (exec) do not work. Process, HTTP, and TCP health checks work properly. Health checks that use in-container shell execution are not functioning; they always report Unknown. This is a result of the transition to `docker exec` introduced in Docker version 1.3. At this time process-level health checks, TCP socket health checks, and HTTP-level health checks are functional. This has been addressed in v0.5 and will be available shortly.
(Issue #1712) Pod update operations fail. In v0.4.2, pod update functionality is not implemented, and a call to the update API returns an unimplemented error. Pods must be updated by tearing them down and recreating them. This will be implemented in v0.5.
(Issue #974) Silent failure on internal service port number collision: Each Kubernetes service needs a unique network port assignment. Currently if you try to create a second service with a port number that conflicts with an existing service, the operation succeeds but the second service will not receive network traffic. This has been fixed, and will be available in v0.5.
(Issue #1161) External service load balancing. The current Kubernetes design includes a model that does a 1:1 mapping between an externally-exposed port number at the cluster level, and a service. This means that only a single external service can exist on a given port. For now this is a hard limitation of the service.
Known issues with Google Container Engine
In addition to issues with the underlying Kubernetes project, there are some known issues with the Google Container Engine tools and API that will be addressed in subsequent releases.
Kubecfg binary conflicts: During the Google Cloud Platform SDK installation, kubecfg v0.4.1 is installed and placed on the path by the Google Cloud SDK. Depending on your $PATH variable, this version may conflict with other installed versions from the open source Kubernetes product.
Containers are assigned private IPs in the range 10.40.0.0/16 to 10.239.0.0/16. If you have changed your default network settings from 10.240.0.0/16, clusters may create successfully, but fail during operation.
All Container Engine nodes are started with and require project level read-write scope. This is temporarily required to support the dynamic mounting of PD-based volumes to nodes. In future releases nodes will revert to default read-only project scope.
Windows is not currently supported. The `gcloud preview container` command is built on top of the Kubernetes client's `kubecfg` binary, which is not yet available on Windows.
The default network is required. Container Engine relies on the existence of the default network, and tries to create routes that use it. If you don't have a default network, Container Engine cluster creation will fail.
To recreate it:
- Go to the Networks page in the Cloud Console and select your project.
- Click New network.
- Enter the following values:
  - Name: `default`
  - Address range: `10.240.0.0/16`
  - Gateway: `10.240.0.1`
- Click Create.
Next, recreate the firewall rules:
- Click `default` in the All networks list.
- Click Create new next to Firewall rules.
- Enter the following values:
  - Name: `default-allow-internal`
  - Source IP ranges: `10.240.0.0/16`
  - Protocols & ports: `tcp:1-65535; udp:1-65535; icmp`
- Click Create.
- Create a second firewall rule with the following values:
  - Name: `default-allow-ssh`
  - Source IP ranges: `0.0.0.0/0`
  - Protocols & ports: `tcp:22`
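For reference, a rough gcloud equivalent of the Console steps above. Treat it as a sketch only: the legacy (non-subnet) network form may not be creatable with current gcloud releases, and the firewall-rule flags assume today's `gcloud compute` syntax.

```sh
# Recreate the legacy "default" network (may be rejected by newer gcloud).
gcloud compute networks create default --subnet-mode legacy --range 10.240.0.0/16

# Allow all internal traffic within the network.
gcloud compute firewall-rules create default-allow-internal \
    --network default \
    --source-ranges 10.240.0.0/16 \
    --allow tcp:1-65535,udp:1-65535,icmp

# Allow SSH from anywhere.
gcloud compute firewall-rules create default-allow-ssh \
    --network default \
    --source-ranges 0.0.0.0/0 \
    --allow tcp:22
```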