A new version of GKE on AWS was released on October 2. For details, see the release notes.

Release notes

This page documents production updates to GKE on AWS. Check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.

You can see the latest product updates for all of Google Cloud on the Google Cloud release notes page.

October 12, 2020

GKE on AWS 1.5.0 supports volume snapshots.

October 02, 2020

Anthos GKE on AWS 1.5.0-gke.6 is now available. Clusters run on Kubernetes 1.16.15-gke.700 and 1.17.9-gke.2800. To upgrade your clusters, perform the following steps:

  1. Upgrade your Management service to 1.5.0-gke.6.
  2. Upgrade your user clusters to 1.16.15-gke.700 or 1.17.9-gke.2800.

Workload identity (preview) lets you bind Kubernetes service accounts to AWS IAM roles with specific permissions, blocking unwanted access to cloud resources. With workload identity, you can assign a different IAM role to each workload. Fine-grained permissions control lets you follow the principle of least privilege. For more details, see Creating a user cluster with workload identity.
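On the AWS side, this kind of binding is typically expressed as an IAM role trust policy that lets a federated Kubernetes service account assume the role. The sketch below uses the standard `sts:AssumeRoleWithWebIdentity` pattern; the account ID, provider name, namespace, and service account name are placeholders, and the exact provider wiring for GKE on AWS may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/example-oidc-provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "example-oidc-provider:sub": "system:serviceaccount:my-namespace/my-ksa"
        }
      }
    }
  ]
}
```

Scoping the `Condition` to a single `system:serviceaccount:<namespace>/<name>` subject is what lets each workload carry its own IAM role.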

You can now route traffic from the GKE on AWS management service and Connect through an HTTP/HTTPS proxy. For more details, see Using a proxy with GKE on AWS.

Improved installation experience

  • This version enables installation and upgrade by using any Google Cloud–authenticated service account. You no longer need to be on the allowlist to access GKE on AWS components.

  • Additional preflight checks enforce enablement of required Google Cloud APIs. See Google Cloud requirements for more information.

When creating multiple management clusters, users may have seen collisions with S3 bucket names. You can now specify a custom name for your S3 bucket to avoid these conflicts.

September 17, 2020

GKE on AWS 1.4.3-gke.7 is now available. GKE on AWS 1.4.3-gke.7 clusters run on Kubernetes 1.16.13-gke.1402.

To upgrade:

  1. Upgrade your Management service to 1.4.3-gke.7.
  2. Upgrade your user clusters to 1.16.13-gke.1402.

A vulnerability, described in CVE-2020-14386, was recently discovered in the Linux kernel. The vulnerability may allow container escape to obtain root privileges on the host node.

All GKE on AWS nodes are affected.

To fix this vulnerability, upgrade your management service and user clusters to a patched version. The following GKE on AWS version contains the fix:

  • GKE on AWS 1.4.3

For more information, see the Security Bulletin.

August 27, 2020

GKE on AWS 1.4.2-gke.1 is released. This release includes Kubernetes version 1.16.13-gke.1401.

This release includes bug fixes and security improvements. We recommend you update your clusters to this version.

To upgrade your clusters, perform the following steps:

  1. Upgrade your management service to aws-1.4.2-gke.1.
  2. Upgrade your user cluster's AWSCluster and AWSNodePools to 1.16.13-gke.1401.
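Step 2 amounts to editing the version fields on the cluster resources. The fragment below is illustrative only: the `apiVersion`, resource names, and exact field paths are assumptions and may differ from the CRDs installed in your environment:

```yaml
# Illustrative sketch: verify apiVersion and field paths against your installed CRDs.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSCluster
metadata:
  name: my-cluster            # placeholder name
spec:
  controlPlane:
    version: 1.16.13-gke.1401
---
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: my-cluster-pool-0     # placeholder name
spec:
  version: 1.16.13-gke.1401
```

Because the control plane and node pools are versioned separately, you can roll the AWSCluster forward first and follow with each AWSNodePool.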

August 04, 2020

Anthos GKE on AWS 1.4.1-gke.17 is released. This release fixes a memory leak that causes clusters to become unresponsive.

To upgrade your clusters, perform the following steps:

  1. Restart your control plane instances.
  2. Upgrade your management service to aws-1.4.1-gke.17.
  3. Upgrade your user cluster's AWSCluster and AWSNodePools to 1.16.9-gke.15.

Use version 1.16.9-gke.15 for creating new clusters.

August 03, 2020

Anthos GKE on AWS 1.4.1-gke.15 clusters will experience a memory leak that results in an unresponsive cluster. A fix for this issue is in development.

If you are planning to deploy an Anthos GKE on AWS cluster, wait until the fix is ready.

July 24, 2020

Anthos GKE on AWS is now generally available.

Clusters support in-place upgrades, with the ability to upgrade the control plane and node pools separately.

Clusters can be deployed in a high availability (HA) configuration, where control plane instances and node pools are spread across multiple availability zones.

Clusters have been validated to support up to 200 nodes and 6000 pods.

The number of nodes can be scaled dynamically based on traffic volume to increase utilization, reduce cost, and improve performance.

Anthos can be deployed within existing AWS VPCs, leveraging existing security groups to secure those clusters. Customers can route ingress traffic using Network Load Balancers (NLBs) and Application Load Balancers (ALBs). Additionally, Anthos on AWS supports AWS IAM and OIDC. This makes deploying Anthos easy, eliminates the need to provision new accounts, and minimizes configuration of the environment.
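Ingress through an NLB uses the standard in-tree AWS load balancer annotation on a `LoadBalancer` Service. A minimal sketch (the Service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                  # placeholder
  annotations:
    # Standard in-tree AWS annotation selecting a Network Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app                     # placeholder
  ports:
    - port: 80
      targetPort: 8080
```

Omitting the annotation falls back to the default (Classic) load balancer; ALB ingress is configured separately through an Ingress controller.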

With Anthos Config Management, enterprises can set policies on their AWS workloads, and with Anthos Service Mesh they can monitor, manage, and secure them.

Kubernetes settings (flags and sysctl settings) have been updated to match GKE.

Upgrades from beta versions are not supported. To install Anthos GKE on AWS, you must remove your user and management clusters, then reinstall them.

May 29, 2020

A new build of Anthos GKE on AWS has been released. This build removes the need to check AWS IAM privileges when creating a management cluster. You don't need to update if you have not encountered this issue.

To install this build, download the anthos-gke tool by running the following command:

gsutil cp gs://gke-multi-cloud-release/bin/aws-0.2.1-gke.8/anthos-gke .

Then, recreate your Terraform configuration and continue with your installation.

May 07, 2020

To upgrade your Anthos GKE on AWS clusters, you need to uninstall all your management and user clusters. You also need to download the new version of the anthos-gke CLI tool.

Anthos GKE on AWS now supports auto-scaling. You can enable auto-scaling by changing settings in your AWSNodePools, or scale your clusters manually by adding new AWSNodePools.
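Enabling autoscaling comes down to setting a minimum and maximum node count on the AWSNodePool. The field names below (`minNodeCount`, `maxNodeCount`) and the `apiVersion` are assumptions based on typical node pool specs; check your installed CRD:

```yaml
# Illustrative sketch: field names and apiVersion may differ in your CRD.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: my-pool           # placeholder
spec:
  minNodeCount: 3         # autoscaler lower bound
  maxNodeCount: 10        # autoscaler upper bound
```

Setting the two bounds equal pins the pool at a fixed size, which matches the manual-scaling path of adding separate AWSNodePools.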

Built-in EBS StorageClass names have been changed to standard-rwo and premium-rwo. If you declare the singlewriter-standard or singlewriter-premium StorageClasses with your workloads, you must update your workloads when upgrading.
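Workload manifests that reference the old names need only the `storageClassName` field updated. For example, a PersistentVolumeClaim (the claim name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                    # placeholder
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo   # previously singlewriter-standard
  resources:
    requests:
      storage: 10Gi
```

The same one-line change applies to `singlewriter-premium`, which becomes `premium-rwo`.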

Anthos GKE on AWS now supports application-layer secrets encryption with AWS KMS by passing a KMS key ARN to your AWSCluster.
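A sketch of where the key ARN goes on the AWSCluster resource. The field path under `spec` and the `apiVersion` are assumptions, and the ARN is a placeholder:

```yaml
# Illustrative sketch: verify the field path against your installed AWSCluster CRD.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSCluster
metadata:
  name: my-cluster        # placeholder
spec:
  controlPlane:
    databaseEncryption:
      kmsKeyARN: arn:aws:kms:us-east-1:123456789012:key/example-key-id  # placeholder ARN
```

With a key configured, Kubernetes Secrets are encrypted with the KMS key before being written to etcd, rather than stored base64-encoded only.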

April 02, 2020

Initial beta release of Anthos GKE on AWS

The release improves upon earlier releases with:

  • Improved reliability: User clusters are now deployed in a high availability (HA) fashion, where both control plane instances as well as node pools can be placed across multiple availability zones. AWS Auto Scaling groups are also now used for resiliency.

  • Improved security: Control plane instances for different user clusters are now isolated in separate security groups. Instance Metadata Service Version 2 (IMDSv2) is enabled to protect against SSRF attacks, and sensitive fields in EC2 metadata are now encrypted.

  • Easier to deploy: The installation process for the management layer has been simplified and performs additional validation checks. It uses Terraform modules for flexible integration into different AWS environments, and customers can now leverage existing security groups and IAM resources to secure clusters. Documentation has been improved and expanded.

  • Future-proof storage stack: We're now using the EBS CSI driver to manage all AWS EBS volumes. The legacy, in-tree Kubernetes EBS driver has been removed entirely, and all upcoming storage features, such as snapshots, will be provided using CSI.

  • Updated Kubernetes version: User clusters are now based on Kubernetes 1.15 and have passed open-source Kubernetes conformance tests.