Google Cloud release notes

The following release notes cover the most recent changes over the last 30 days. For a comprehensive list, see the individual product release note pages.

You can see the latest product updates for all of Google Cloud on the Google Cloud release notes page.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/gcp-release-notes.xml

July 13, 2020

AI Platform Training

You can now configure a training job to run using a custom service account. Using a custom service account can help you customize which Google Cloud resources your training code can access.

This feature is available in beta.
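
As a sketch of where the new setting lives, a training job request body might look like the following; all resource names here are hypothetical examples, and serviceAccount is the beta field this note describes:

```python
# Sketch of an AI Platform Training job request body that pins a custom
# service account. All resource names are hypothetical examples;
# "serviceAccount" is the beta TrainingInput field described above.
training_job = {
    "jobId": "census_training_20200713",
    "trainingInput": {
        "scaleTier": "BASIC",
        "packageUris": ["gs://example-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
        # Restricts the training code to the resources this account can access.
        "serviceAccount": "trainer@example-project.iam.gserviceaccount.com",
    },
}
```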

App Engine standard environment Java
  • Updated Java SDK to version 1.9.81
BigQuery

The Standard SQL statement ASSERT is now supported. You can use ASSERT to validate that data matches specified expectations.
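
For illustration, ASSERT takes a boolean expression and an AS description, and fails the query job if the expression is not TRUE. The SQL string below uses a hypothetical table, and the run_assert helper is only a local stand-in for that fail-the-job behavior:

```python
# The Standard SQL form of ASSERT (the table name is a hypothetical example):
sql = (
    "ASSERT (SELECT COUNT(*) FROM mydataset.orders WHERE order_id IS NULL) = 0 "
    "AS 'orders must not contain NULL order_ids'"
)

def run_assert(condition: bool, description: str) -> None:
    # Local stand-in for BigQuery's behavior: the job fails with the
    # description when the asserted expression is not TRUE.
    if not condition:
        raise AssertionError(description)

# Sample data that satisfies the expectation:
rows = [{"order_id": 1}, {"order_id": 2}]
run_assert(
    all(r["order_id"] is not None for r in rows),
    "orders must not contain NULL order_ids",
)
```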

Cloud Bigtable

The default data points used for CPU utilization charts on the Cloud Bigtable Monitoring page have changed. Previously, data points on the charts reflected the mean for a displayed alignment period. Now the data points reflect the maximum for the alignment period. This change ensures that charts clearly show the peaks that are important for monitoring the health of a Cloud Bigtable instance.
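
To see why this matters, compare the two aggregations over one alignment period (the sample values below are made up):

```python
# Illustrative only: CPU utilization samples within one alignment period.
cpu_samples = [0.30, 0.35, 0.92, 0.33]

mean_point = sum(cpu_samples) / len(cpu_samples)  # old chart data point
max_point = max(cpu_samples)                      # new chart data point

# The mean (0.475) hides the 0.92 spike that the max surfaces.
```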

Cloud CDN

Added a new setup guide for custom (external) origins with Cloud CDN and external HTTP(S) Load Balancing.

Cloud Load Balancing

Internal TCP/UDP load balancers now support regional health checks. To configure them, see Health checks for backend services. This feature is Generally Available.

Dataprep by Trifacta

Introducing Cloud Dataprep Premium by TRIFACTA INC. and Cloud Dataprep Standard by TRIFACTA INC.: You can now upgrade your existing Cloud Dataprep by TRIFACTA INC. projects to unlock advanced features, such as broader API access and relational connectivity. For the full set of new capabilities and use cases, see Google Cloud Dataprep by Trifacta Pricing.

  • These two product tiers are available through the Google Cloud Platform Marketplace.
  • Important: All existing Cloud Dataprep by TRIFACTA INC. projects are unchanged. You can upgrade individual projects through the GCP Marketplace to unlock the new functionality.

Relational connectivity: Connect to relational sources to import data and, where supported, write results.

Advanced Cloud Dataflow execution options: Specify additional job execution options at the project level or for individual jobs.

  • Feature Availability: This feature is available in Cloud Dataprep Premium by TRIFACTA INC.
  • Assign scaling algorithms for managing Google Compute Engine instances or define minimum and maximum workers to use.
  • Specify the service account and any billing labels to apply to your jobs.
  • For more information:

Introducing plans: A plan is a sequence of tasks on one or more flows that can be scheduled.

  • Feature Availability: This feature is available in Cloud Dataprep Premium by TRIFACTA INC.
  • NOTE: In this release, the only type of task that is supported is Run Flow.
  • For more information on plans, see Plans Page.

  • For more information on orchestration in general, see Overview of Operationalization.

Dataflow execution in non-local VPC: You can now execute your Cloud Dataflow jobs on a non-local or shared Virtual Private Cloud (VPC) network.

  • NOTE: To accommodate a wider range of shared VPC configurations, subnetworks must be specified by full URL. See Changes below.
  • Project owners can set these execution options for the entire project. See Project Settings Page.

Subnetwork specified by URL: When you specify the subnetwork in which to execute your Cloud Dataflow jobs, you must now specify it by URL.

  • Tip: This feature can be used when Cloud Dataprep by TRIFACTA INC. is configured to run Cloud Dataflow jobs within a shared VPC hosted in a project other than the current project.
  • Previously, you could specify the subnetwork by name. However, non-local subnetwork values could not be specified in this manner.
  • For more information, see Dataflow Execution Settings.
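
A minimal sketch of the full-URL form, assuming a hypothetical shared-VPC host project and subnetwork:

```python
def subnetwork_url(host_project: str, region: str, subnetwork: str) -> str:
    # Builds the full Compute Engine resource URL that is now required
    # instead of a bare subnetwork name.
    return (
        "https://www.googleapis.com/compute/v1/"
        f"projects/{host_project}/regions/{region}/subnetworks/{subnetwork}"
    )

# Hypothetical shared-VPC host project and subnetwork:
url = subnetwork_url("vpc-host-project", "us-central1", "dataflow-subnet")
```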
Google Cloud Marketplace

The IAM permissions required for purchasing the following solutions from Google Cloud Marketplace have changed:

  • Apache Kafka® on Confluent Cloud™
  • DataStax Astra for Apache Cassandra
  • Elasticsearch Service on Elastic Cloud
  • NetApp Cloud Volumes Service
  • Redis Enterprise Cloud

If you use custom roles to purchase these solutions, you must update the custom roles to include the permissions described in Access Control for Google Cloud Marketplace.

Specifically, if your custom role includes the billing.subscriptions.create permission, you must update it to include the consumerprocurement.orders.place and the consumerprocurement.accounts.create permissions.

If you use the Billing Administrator role to purchase these solutions, you don't need to take any action.
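
As a rough audit sketch (the helper below is hypothetical, but the permission names are the ones listed above):

```python
# The two permissions named above that a custom role now needs in place
# of relying on billing.subscriptions.create alone.
REQUIRED_PROCUREMENT_PERMS = {
    "consumerprocurement.orders.place",
    "consumerprocurement.accounts.create",
}

def missing_marketplace_permissions(role_permissions):
    """Return the procurement permissions a custom role still lacks,
    but only if the role was built around billing.subscriptions.create."""
    perms = set(role_permissions)
    if "billing.subscriptions.create" not in perms:
        return []  # role isn't used for these purchases; nothing to do
    return sorted(REQUIRED_PROCUREMENT_PERMS - perms)

# Example: a role that only has the old permission is missing both new ones.
gaps = missing_marketplace_permissions(["billing.subscriptions.create"])
```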

Secret Manager

Secret Manager resources can now be stored in the australia-southeast1 region. To learn more, see Locations.

July 10, 2020

Anthos Service Mesh

1.6.5-asm.1, 1.5.8-asm.0, and 1.4.10-asm.4

Fixes the security issue, ISTIO-SECURITY-2020-008, with the same fixes as Istio 1.6.5 and Istio 1.5.8. These fixes were backported to 1.4.10-asm.4. For more information, see the Istio release notes:

Cloud Billing

The Cost Table report functionality has been updated to add a Table configuration interface that replaces the previous Group by and Label selectors. Use the new Table configuration dialog to choose a Label key and select your Group by options. Additionally, the available Group by options have been enhanced to include a new Custom grouping option. Use custom grouping to view a nested cost table with your costs grouped by up to three dimensions that you choose, including label values. See the documentation for more details.

Cloud Functions

Cloud Functions is now available in the following regions:

  • us-west2 (Los Angeles)
  • us-west4
  • southamerica-east1 (São Paulo)
  • asia-northeast2 (Osaka)

See Cloud Functions Locations for details.

Cloud Monitoring

SLO monitoring for microservices is now Generally Available in the Cloud Console. This feature lets you create service-level objectives (SLOs) and set up alerting policies to monitor their performance using auto-generated dashboards with metrics, logs, and alerts in a single place. For more information, see SLO monitoring.

Dataproc

Added --temp-bucket flag to gcloud dataproc clusters create and gcloud dataproc workflow-templates set-managed-cluster to allow users to configure a Cloud Storage bucket that stores ephemeral cluster and jobs data, such as Spark and MapReduce history files.

Extended Jupyter to support notebooks stored on VM persistent disk. This change modifies the Jupyter contents manager to create two virtual top-level directories, named GCS and Local Disk. The GCS directory points to the Cloud Storage location used by previous versions, and the Local Disk directory points to the persistent disk of the VM running Jupyter.

Dataproc images now include the oauth2l command line tool. The tool is installed in /usr/local/bin, which is available to all users in the VM.

New sub-minor versions of Dataproc images: 1.2.102-debian9, 1.3.62-debian9, 1.4.33-debian9, 1.3.62-debian10, 1.4.33-debian10, 1.5.8-debian10, 1.3.62-ubuntu18, 1.4.33-ubuntu18, 1.5.8-ubuntu18, 2.0.0-RC4-debian10, 2.0.0-RC4-ubuntu18

  • Images 1.3 - 1.5:

    • Fixed HIVE-11920: ADD JAR failing with URL schemes other than file/ivy/hdfs.
  • Images 1.3 - 2.0 preview:

    • Fixed TEZ-4108: NullPointerException during speculative execution race condition.

Fixed a race condition that could nondeterministically cause Hive-WebHCat to fail at startup when HBase is not enabled.

July 09, 2020

Cloud SQL for PostgreSQL

Cloud SQL now supports point-in-time recovery (PITR) for PostgreSQL. Point-in-time recovery helps you recover an instance to a specific point in time. For example, if an error causes a loss of data, you can recover a database to its state before the error occurred.

Config Connector

Added support for SecretManagerSecret.

Managed Service for Microsoft Active Directory

The Managed Microsoft AD SLA has been published.

July 08, 2020

App Engine flexible environment .NET

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Go

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Java

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Node.js

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment PHP

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Ruby

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment custom runtimes

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Go

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Java

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Node.js

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment PHP

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Python

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Ruby

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

July 07, 2020

Cloud Composer
  • New versions of Cloud Composer images: composer-1.10.6-airflow-1.10.2, composer-1.10.6-airflow-1.10.3, and composer-1.10.6-airflow-1.10.6. The default is composer-1.10.6-airflow-1.10.3. Upgrade your Cloud SDK to use features in this release.

  • For Airflow 1.10.6 and later: The Airflow config property [celery] pool is now blocked.

  • Fixed an issue with Airflow 1.10.6 environments where task logs were not visible in the UI when DAG serialization was enabled.
Cloud Functions

External HTTP(S) Load Balancing is now supported for Google Cloud Functions via Serverless network endpoint groups.

Notably, this feature allows you to use Cloud CDN and Cloud Armor with Google Cloud Functions.

This feature is available in Beta.

Cloud Load Balancing

External HTTP(S) Load Balancing is now supported for App Engine, Cloud Functions, and Cloud Run services. To configure it, use a new type of network endpoint group (NEG) called a Serverless NEG.

This feature is available in Beta.

Cloud Monitoring

Monitoring Query Language (MQL) is now Generally Available. MQL is an expressive, text-based interface to Cloud Monitoring time-series data. With MQL, you can create charts you can't create any other way. You can access MQL from both the Cloud Console and the Monitoring API. For more information, see Introduction to Monitoring Query Language.
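
For a taste of the language, here is a small MQL query in the style of Google's examples, embedded in the kind of request body that the Monitoring API's timeSeries.query method accepts; the project ID is a placeholder:

```python
# Illustrative MQL: mean GCE CPU utilization aligned to 1-minute windows.
mql_query = (
    "fetch gce_instance\n"
    "| metric 'compute.googleapis.com/instance/cpu/utilization'\n"
    "| group_by 1m, [value_utilization_mean: mean(value.utilization)]"
)

# Sketch of the timeSeries.query request; "example-project" is a placeholder.
request = {
    "name": "projects/example-project",
    "body": {"query": mql_query},
}
```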

Cloud Run

External HTTP(S) Load Balancing is now supported for Cloud Run services via Serverless network endpoint groups.
Notably, this feature allows you to use Cloud CDN and multi-region load balancing.
This feature is available in Beta.

Dataproc

Announcing the General Availability (GA) release of Dataproc Component Gateway, which provides secure access to web endpoints for Dataproc default and optional components.

Traffic Director

Traffic Director now supports automated Envoy deployment for Google Compute Engine VMs. This feature is available in Beta.

July 06, 2020

App Engine standard environment Node.js

The Node.js 12 runtime for the App Engine standard environment is now generally available.

App Engine standard environment Python

The Python 3.8 runtime for the App Engine standard environment is now generally available.

App Engine standard environment Ruby

The Ruby 2.6 and 2.7 runtimes for the App Engine standard environment are now available in Beta.

BigQuery

An updated version of the Magnitude Simba ODBC driver is available. This version includes performance improvements and bug fixes, and it catches up with the JDBC driver by adding support for user-defined functions and for variable time zones in the connection string.

Compute Engine

E2 machine types now offer up to 32 vCPUs. See E2 machine types for more information.

Dialogflow

The Dialogflow Console has been upgraded with an improved Analytics page (Beta) that provides new metrics and data views.

July 04, 2020

Document AI

Invoice Parsing Beta model upgrade

The Invoice Parsing Beta model has been upgraded. The upgraded model produces higher-quality results for entities and entityRelations. There is no API change.

See the product documentation for more information.

July 01, 2020

BigQuery ML

BigQuery ML now supports time series models as a beta release. For more information, see CREATE MODEL statement for time series models.

Config Connector

Config Connector now supports --server-dry-run for resource CRDs.

Fixed a bug for the BigtableInstance resource that caused constant reconciliation.

Deprecated the BigtableInstance spec.deletionProtection field.

Identity and Access Management

The organization policy constraint to prevent automatic role grants to Cloud IAM service accounts is now generally available. To improve security, we strongly recommend that you enable this constraint.

Starting on July 27, 2020, IAM policies will identify deleted members that are bound to a role. Deleted members have the prefix deleted: and the suffix ?uid=numeric-id.

For example, if you delete the account for the user tamika@example.com, and a policy binds that user to a role, the policy shows an identifier similar to deleted:user:tamika@example.com?uid=123456789012345678901.
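
A small sketch of how the new identifier format decomposes (the parse_member helper is hypothetical, not part of any Google library):

```python
def parse_member(member: str) -> dict:
    # Splits an IAM member string, including the deleted-member form
    # "deleted:user:tamika@example.com?uid=123456789012345678901".
    deleted = member.startswith("deleted:")
    body = member[len("deleted:"):] if deleted else member
    body, _, uid = body.partition("?uid=")
    member_type, _, identifier = body.partition(":")
    return {
        "deleted": deleted,
        "type": member_type,
        "id": identifier,
        "uid": uid or None,
    }

info = parse_member("deleted:user:tamika@example.com?uid=123456789012345678901")
```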

For SetIamPolicy requests, you can use this new syntax starting on July 27. For GetIamPolicy and SetIamPolicy responses, you might see the new prefix and suffix in some, but not all, responses until we finish rolling out the change. We expect to complete the rollout by July 31, 2020.

See the documentation for a detailed example, as well as guidance on updating policies that contain deleted members.

Starting on July 27, 2020, if a binding in a policy refers to a deleted member (for example, deleted:user:tamika@example.com?uid=123456789012345678901), you cannot add a binding for a newly created member with the same name (in this case, user:tamika@example.com). If you try to add a binding for the newly created member, IAM will apply the binding to the deleted member instead.

To resolve this issue, see our guidance on updating policies that contain deleted members.

Network Intelligence Center

Connectivity Tests now supports running tests from the Network interface details screen of a Compute Engine VM instance in the Google Cloud Console.

Resource Manager

The Organization Policy for restricting automatic IAM permission grants to new service accounts has launched into general availability.

June 30, 2020

Anthos Service Mesh

1.6.4-asm.9 is now available.

ASM 1.6 is compatible with and has the feature set of Istio 1.6 (see Istio release notes), subject to the list of ASM Supported Features.

1.5.7-asm.0 and 1.4.10-asm.3

Fixes the security issue, ISTIO-SECURITY-2020-007, with the same fixes as Istio 1.6.4. For information, see the Istio release notes.

Description

The vulnerability affects Anthos Service Mesh (ASM) versions 1.4.0 to 1.4.10, 1.5.0 to 1.5.5, and 1.6.4, whether running in Anthos GKE on-prem or on GKE, potentially exposing your application to denial-of-service (DoS) attacks. This vulnerability is referenced in these publicly disclosed Istio security bulletins:

  • ISTIO-SECURITY-2020-007:
    • CVE-2020-12603 (CVSS score 7.0, High): Envoy through 1.14.1 may consume excessive amounts of memory when proxying HTTP/2 requests or responses with many small (e.g., 1 byte) data frames.
    • CVE-2020-12605 (CVSS score 7.0, High): Envoy through 1.14.1 may consume excessive amounts of memory when processing HTTP/1.1 headers with long field names or requests with long URLs.
    • CVE-2020-8663 (CVSS score 7.0, High): Envoy version 1.14.1 or earlier may exhaust file descriptors and/or memory when accepting too many connections.
    • CVE-2020-12604 (CVSS score 7.0, High): Envoy through 1.14.1 is susceptible to increased memory usage in the case where an HTTP/2 client requests a large payload but does not send enough window updates to consume the entire stream and does not reset the stream. The attacker can cause data associated with many streams to be buffered forever.

Mitigation

If you use ASM 1.6.4:

  • Apply the additional configuration changes specified in ISTIO-SECURITY-2020-007 to prevent denial-of-service (DoS) attacks on your mesh.

If you use ASM 1.4.0 to 1.4.10 or 1.5.0 to 1.5.5:

  • Upgrade your clusters to ASM 1.4.10-asm.3 or ASM 1.5.7-asm.0 as soon as possible, and apply the additional configuration changes specified in ISTIO-SECURITY-2020-007 to prevent denial-of-service (DoS) attacks on your mesh.

Anthos Service Mesh now supports multi-cluster meshes (beta) when running on GKE on Google Cloud.

Users who configure multiple clusters in their mesh can now see unified, multi-cluster views of their services in the Anthos Service Mesh pages in the Cloud Console. Note that multi-cluster support is in Beta, and not all UI features are supported in multi-cluster mode.

ASM 1.6 is supported in a single cluster configuration in Anthos Attached Clusters in the following environments: Amazon Elastic Kubernetes Service (EKS) and Microsoft Azure Kubernetes Service (AKS).

The profile to install ASM on GKE has been renamed from asm to asm-gcp; see Upgrading Anthos Service Mesh on GKE. The profile to install ASM on GKE on-prem clusters has been renamed from asm-onprem to asm-multicloud; see Upgrading Anthos Service Mesh on premises.

In the asm-multicloud profile, ASM now installs a complete observability stack (Prometheus, Grafana, and Kiali).

Anthos Service Mesh now supports cross-cluster load balancing (beta) for your multi-cluster mesh when running on GKE on Google Cloud.

Anthos Service Mesh now supports cross-cluster security policies (beta) for your multi-cluster mesh when running on GKE on Google Cloud.

You can now upgrade from ASM 1.5 to ASM 1.6 without downtime by using a dual control plane upgrade.

Known Issue: If you upgrade from Istio to ASM 1.6 and have set SLOs on your service metrics, those SLOs might be lost and need to be recreated after the upgrade.

Cloud Build

Cloud Build now provides open-source notifiers for Slack and SMTP. These notifiers can be configured to securely alert users about build status.

Cloud Composer

Cloud Composer support for VPC Service Controls is now in Beta.

Cloud Logging

Cloud Logging now includes a Logs Dashboard page that provides a high-level overview of the health of the systems running within a project. To learn more, see Logs Dashboard.

Cloud Run

Cloud Run (fully managed) support for connecting to a VPC network with Serverless VPC Access is now at general availability (GA).

VPC Service Controls

General availability of dry run mode for service perimeters.

This release introduces dry run configurations for your service perimeters, allowing you to test changes to perimeters before enforcing the changes. For more information, read about dry run mode.

Beta release of the VPC Service Controls Troubleshooter.

The VPC Service Controls Troubleshooter allows you to use the unique identifiers generated by VPC Service Controls errors to understand and resolve common denials to services in your perimeters.

During the beta period, the following error types are supported:

  • NO_MATCHING_ACCESS_LEVEL
  • NETWORK_NOT_IN_SAME_SERVICE_PERIMETER

For more information, read about the VPC Service Controls Troubleshooter.

Beta stage support for the following integrations:

June 29, 2020

BigQuery

The BigQuery SLA has been updated to >= 99.99% Monthly Uptime Percentage for all users.

Cloud Debugger

Cloud Debugger now lets you canary snapshots and logpoints on your Node.js applications. To learn more, see the Node.js page for setting up Cloud Debugger.

Cloud Load Balancing

You can now create an internal HTTP(S) load balancer in a Shared VPC service project.

This feature is available in Alpha. Please contact your Google account team to get access to this feature.

Cloud Run

Cloud Run is now available in the following regions:

  • asia-northeast2 (Osaka)
  • australia-southeast1 (Sydney)
  • northamerica-northeast1 (Montréal)
Dialogflow

The V1 API is in the process of a gradual shutdown. See the November 14, 2019 release note for details.

June 26, 2020

App Engine standard environment Go

The Go 1.14 runtime Beta for the App Engine standard environment is now available.

BigQuery

Starting in mid-July, unqualified INFORMATION_SCHEMA queries for SCHEMATA and SCHEMATA_OPTIONS views will default to returning metadata from the US multi-region. For information about how to specify a region, see region qualifier syntax.

Compute Engine

To support a wide variety of bring-your-own-license (BYOL) scenarios, you can now configure VMs to live migrate within a sole-tenant node group during host maintenance events. This feature is Generally Available.

VPC Service Controls

Beta stage support for the following integration:

June 25, 2020

Anthos

Anthos 1.4.0 is now available.

Updated components:

Anthos Config Management

Anthos Config Management is now Generally Available on AKS (Kubernetes v1.16 or higher) and EKS (Kubernetes v1.16 or higher).

Config Connector is not currently supported on EKS or AKS, as it is unable to run on these providers.

The following Policy Controller constraint templates have been added to the Default Template Library:

  • allowedserviceportname
  • destinationruletlsenabled
  • disallowedauthzprefix
  • policystrictonly
  • sourcenotallauthz

The following constraint templates have been updated:

  • k8sblockprocessnamespacesharing
  • k8sdisallowedrolebindingsubjects
  • k8semptydirhassizelimit
  • k8slocalstoragerequiresafetoevict
  • k8smemoryrequestequalslimit
  • k8snoexternalservices
  • k8spspallowedusers
  • k8spspallowprivilegeescalationcontainer
  • k8spspapparmor
  • k8spspcapabilities
  • k8spspflexvolumes
  • k8spspforbiddensysctls
  • k8spspfsgroup
  • k8spsphostfilesystem
  • k8spsphostnamespace
  • k8spsphostnetworkingports
  • k8spspprivilegedcontainer
  • k8spspprocmount
  • k8spspreadonlyrootfilesystem
  • k8spspseccomp
  • k8spspselinux
  • k8spspvolumetypes

See the Default Template Library documentation for more information.

Anthos Policy Controller has been updated to include a more recent build of OPA Gatekeeper (hash: 25ca799).

This new build of OPA Gatekeeper includes a number of bug fixes and performance improvements, and adds three new monitoring metrics:

  • gatekeeper_sync
  • gatekeeper_sync_duration_seconds
  • gatekeeper_sync_last_run_time

The nomos CLI tool now supports the KUBECONFIG environment variable in a way that matches the kubectl behavior with multiple delimited configuration files.

Anthos Config Management no longer gets into a continuous PATCH loop when encountering unmanaged resources with config-management annotations and a missing last-applied-configuration annotation.

Anthos Config Management does not issue errors when it encounters certain types of malformed configurations in a resource definition. This can result in the Kubernetes API server ignoring the malformed fields and applying the default value for the field instead.

Policy Controller may fail to start successfully when synced resources are marked for deletion.

This issue will be addressed in the upstream OPA Gatekeeper project in a future release. For more information see the relevant issue in the Gatekeeper project.

This release includes several logging and performance improvements.

Anthos GKE on-prem

Anthos GKE on-prem 1.4.0-gke.13 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.4.0-gke.13 clusters run on Kubernetes 1.16.8-gke.6.

Updated to Kubernetes 1.16:

Simplified upgrade:

  • This release provides a simplified upgrade experience via the following changes:

    • Automatically migrates information from the previous version of the admin workstation using gkeadm.
    • Extends preflight checks to better prepare for upgrades.
    • Supports skip-version upgrades, enabling you to upgrade a cluster from any patch release of a minor release to any patch release of the next minor release. For more information about the detailed upgrade procedure and limitations, see upgrading GKE on-prem.
    • The alternate upgrade scenario for Common Vulnerabilities and Exposures has been deprecated. All upgrades starting with version 1.3.2 need to upgrade the entire admin workstation.
    • The bundled load balancer is now automatically upgraded during cluster upgrade.

Improved installation and cluster configuration:

  • The user cluster node pools feature is now generally available.
  • This release improves the installation experience via the following changes:

    • Supports gkeadm for Windows OS.
    • Introduces a standalone command for creating admin clusters.
  • Introduces a new version of the configuration files to separate admin and user cluster configurations and commands. This is designed to provide a consistent user experience and better configuration management.

Improved disaster recovery capabilities:

  • This release provides enhanced disaster recovery functionality to support backup and restore of HA user clusters with etcd.
  • This release also provides a manual process to recover from a single etcd replica failure in an HA cluster without any data loss.

Enhanced monitoring with Cloud Monitoring (formerly Stackdriver):

  • This release provides better product monitoring and resource usage management via the following changes:

  • Ubuntu Image now conforms with PCI DSS, NIST Baseline High, and DoD SRG IL2 compliance configurations.

Functionality changes:

  • Enabled Horizontal Pod Autoscaler (HPA) for the Istio ingress gateway.
  • Removed ingress controller from admin cluster.
  • Consolidated sysctl configs with Google Kubernetes Engine.
  • Added etcd defrag pod in admin cluster and user cluster, which will be responsible for monitoring etcd's database size and defragmenting it as needed. This helps reclaim etcd database size and recover etcd when its disk space is exceeded.

Support for a vSphere folder (Preview):

  • This release allows customers to install GKE on-prem in a vSphere folder, reducing the scope of the permission required for the vSphere user.

Improved scale:

Fixes:

  • Fixed the issue of the user cluster's Kubernetes API server not being able to connect to kube-etcd after admin nodes and user cluster master reboot. In previous versions, kube-dns in admin clusters was configured through kubeadm. In 1.4, this configuration is moved from kubeadm to bundle, which enables deploying two kube-dns replicas on two admin nodes. As a result, a single admin node reboot/failure won't disrupt user cluster API access.
  • Fixed the issue that controllers such as calico-typha can't be scheduled on an admin cluster master node, when the admin cluster master node is under disk pressure.
  • Resolved pods failure with MatchNodeSelector on admin cluster master after node reboot or kubelet restart.
  • Tuned etcd quota limit settings based on the etcd data disk size and the settings in GKE Classic.

Known issues:

  • If a user cluster is created without any node pool named the same as the cluster, managing the node pools using gkectl update cluster would fail. To avoid this issue, when creating a user cluster, you need to name one node pool the same as the cluster.
  • The gkectl command might exit with panic when converting config from "/path/to/config.yaml" to v1 config files. When that occurs, you can resolve the issue by removing the unused bundled load balancer section ("loadbalancerconfig") in the config file.
  • When using gkeadm to upgrade an admin workstation on Windows, the info file filled out from this template needs to have the line endings converted to use Unix line endings (LF) instead of Windows line endings (CRLF). You can use Notepad++ to convert the line endings.
  • After upgrading an admin workstation with a static IP using gkeadm, you need to run ssh-keygen -R <admin-workstation-ip> to remove the IP from the known hosts, because the host identification changed after VM re-creation.
  • We have added Horizontal Pod Autoscaler for istio-ingress and istio-pilot deployments. HPA can scale up unnecessarily for istio-ingress and istio-pilot deployments during cluster upgrades. This happens because the metrics server is not able to report usage of some pods (newly created and terminating; for more information, see this Kubernetes issue). No actions are needed; scale down will happen five minutes after the upgrade finishes.
  • When running a preflight check for config.yaml that contains both admincluster and usercluster sections, the "data disk" check in the "user cluster vCenter" category might fail with the message: [FAILURE] Data Disk: Data disk is not in a folder. Use a data disk in a folder when using vSAN datastore. User clusters don't use data disks, and it's safe to ignore the failure.
  • When upgrading the admin cluster, the preflight check for user cluster OS image validation will fail. The user cluster OS image is not used during an admin cluster upgrade, so it's safe to ignore the "User Cluster OS Image Exists" failure.
  • A Calico-node pod might be stuck in an unready state after node IP changes. To resolve this issue, you need to delete any unready Calico-node pods.
  • The BIG-IP controller might fail to update the F5 VIP after the admin cluster master IP changes. To resolve this, use the admin cluster master node IP in the kubeconfig and delete the bigip-controller pod from the admin master.
  • The stackdriver-prometheus-k8s pod could enter a crash loop after a host failure. To resolve this, remove any corrupted PersistentVolumes that the stackdriver-prometheus-k8s pod uses.
  • After a node IP change, pods running with hostNetwork don't get their podIP corrected until the kubelet restarts. To resolve this, restart the kubelet or delete the pods that are using the previous IPs.
  • An admin cluster fails if the admin cluster master node IP address changes. To avoid this, keep the admin master IP address stable by using a static IP or a non-expiring DHCP lease. If you encounter this issue and need further assistance, contact Google Support.
  • User cluster upgrade might be stuck with the error: Failed to update machine status: no matches for kind "Machine" in version "cluster.k8s.io/v1alpha1". To resolve this, you need to delete the clusterapi pod in the user cluster namespace in the admin cluster.
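
As an alternative to Notepad++ for the line-ending conversion noted above, you can strip carriage returns from the command line. This is a sketch; the file name and contents are illustrative:

```shell
# Convert the admin workstation info file from Windows (CRLF) to Unix (LF)
# line endings. "info.yaml" stands in for your actual info file.
printf 'name: admin-ws\r\nnetwork: default\r\n' > info.yaml   # sample CRLF file
tr -d '\r' < info.yaml > info-unix.yaml && mv info-unix.yaml info.yaml
```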

If your vSphere environment has fewer than three hosts, a user cluster upgrade might fail. To resolve this, disable antiAffinityGroups in the cluster config before upgrading the user cluster: for a v1 config, set antiAffinityGroups.enabled = false; for a v0 config, set usercluster.antiaffinitygroups.enabled = false.

Note: Disabling antiAffinityGroups in the cluster config during upgrade is allowed only for the 1.3.2 to 1.4.x upgrade, as a way to resolve this upgrade issue; support for this workaround might be removed in the future.
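
As a sketch, the change above looks like the following config fragment (surrounding fields omitted; only the field names given in the text are assumed):

```yaml
# v1 config file: disable anti-affinity groups before the user cluster upgrade.
antiAffinityGroups:
  enabled: false

# v0 config file equivalent:
# usercluster:
#   antiaffinitygroups:
#     enabled: false
```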

Cloud Load Balancing

The introductory period during which you can use Internal HTTP(S) Load Balancing without charge is coming to an end. Starting on July 25, 2020, your usage of Internal HTTP(S) Load Balancing will be billed to your project.

Config Connector

Added an iam-format option to config-connector to control IAM output. Valid values are policy, policymember, and none.

ComputeForwardingRule's target field now supports referencing a ComputeTargetSSLProxy or a ComputeTargetTCPProxy.

DataFlowJob's serviceAccountEmail, network, subnetwork, machineType, and ipConfiguration fields now support updates.

Fix an issue where config-connector would error on a Project resource.

June 24, 2020

Cloud Composer
  • New versions of Cloud Composer images: composer-1.10.5-airflow-1.10.2, composer-1.10.5-airflow-1.10.3, and composer-1.10.5-airflow-1.10.6. The default is composer-1.10.5-airflow-1.10.3. Upgrade your Cloud SDK to use features in this release.
  • Cloud Composer now uses the Kubernetes v1 API and is compatible with GKE 1.16.
  • An updated haproxy configuration for Composer increases the maximum number of connections to 2000, and changes load balancing to be based on the number of connections. These settings can be configured with environment variables.
  • Error messages for TP_APP_ENGINE_CREATING timeout and RPC delivery issues have been expanded.
  • Airflow Providers can now be installed inside Cloud Composer.
  • Error handling for rendering templates in the Airflow web server UI has been improved.
  • Fixed an issue with rendering task instance details (logs, task instance template, params) in the Airflow web server UI when DAG serialization is enabled.
  • Fixed an issue with DataFlowJavaOperator, so it can now be used with Apache Beam 2.20.
  • Improved error reporting for failing operations.
  • Memory consumption of the gcs-syncd container is now constrained to prevent system instability.
Dataproc

New subminor image versions: 1.2.100-debian9, 1.3.60-debian9, 1.4.31-debian9, 1.3.60-debian10, 1.4.31-debian10, 1.5.6-debian10, 1.3.60-ubuntu18, 1.4.31-ubuntu18, 1.5.6-ubuntu18, preview 2.0.0-RC2-debian10, and preview 2.0.0-RC2-ubuntu18.

  • Image 2.0 preview:

    • SPARK-22404: The spark.yarn.unmanagedAM.enabled property is now set to true on clusters where Kerberos is not enabled, so that the Spark Application Master runs in the driver (not managed by YARN), which improves job execution time.
    • Updated R version to 3.6

    • Updated Spark to version 3.0.0

  • Image 1.5

    • Updated R version to 3.6

Fixed a quota validation bug where accelerator counts were squared before validation. For example, if you requested 8 GPUs, Dataproc previously validated whether your project had quota for 8^2 = 64 GPUs.
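
To illustrate the arithmetic of the old behavior, a quick sketch (numbers only, not an actual API interaction):

```shell
requested=8
# Buggy validation: the requested count was squared before the quota check.
echo "old check required quota for $(( requested * requested )) GPUs"
# Fixed validation checks the requested count itself.
echo "fixed check requires quota for $requested GPUs"
```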

June 23, 2020

AI Platform Deep Learning VM Image

M50 release

Miscellaneous bug fixes.

Cloud Billing

Committed use discounts (CUDs) are now available to purchase for Cloud SQL. CUDs provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. With spend-based committed use discounts for Cloud SQL, you can earn a deep discount off your cost of use in exchange for committing to continuously use database instances in a particular region for a 1- or 3-year term. See the blog and documentation for more details.

Cloud SQL for MySQL

Committed use discounts (CUDs) are now available to purchase for Cloud SQL. CUDs provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. With committed use discounts for Cloud SQL, you can earn a deep discount off your cost of use in exchange for committing to continuously use database instances in a particular region for a 1- or 3-year term. See the documentation for more details.

Cloud SQL for PostgreSQL

Committed use discounts (CUDs) are now available to purchase for Cloud SQL. CUDs provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. With committed use discounts for Cloud SQL, you can earn a deep discount off your cost of use in exchange for committing to continuously use database instances in a particular region for a 1- or 3-year term. See the documentation for more details.

Cloud SQL for SQL Server

Committed use discounts (CUDs) are now available to purchase for Cloud SQL. CUDs provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. With committed use discounts for Cloud SQL, you can earn a deep discount off your cost of use in exchange for committing to continuously use database instances in a particular region for a 1- or 3-year term. See the documentation for more details.

Google Cloud Armor

Promotional pricing for Google Cloud Armor is extended to July 31, 2020.

June 22, 2020

AI Platform Training

You can now use Cloud TPUs for training jobs in the europe-west4 region. TPU v2 accelerators are generally available, and TPU v3 accelerators are available in beta.

Learn how to configure your training job to use TPUs, and read about TPU pricing on AI Platform Training.

Anthos Service Mesh

1.5.6-asm.0 and 1.4.10-asm.2

Contains the same fixes as OSS Istio 1.5.6. Non-critical, minor improvements were also backported to ASM 1.4.10. See Announcing Istio 1.5.6 for more information.

Cloud Build

Cloud Build's substitution variables can now refer to other substitution variables, manipulate them using bash-style string operations, and pull information from a trigger event payload. To learn more, see Using bash-style string operations and payload bindings in substitutions.
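
As a sketch of the new capability, a cloudbuild.yaml fragment might look like the following. The substitution names (_VERSION, _TAG) are hypothetical; see the linked documentation for the exact syntax:

```yaml
# Hypothetical cloudbuild.yaml fragment: _TAG is derived from _VERSION using a
# bash-style string operation (replace every "." with "-").
substitutions:
  _VERSION: '1.4.0'
  _TAG: '${_VERSION//./-}'
options:
  dynamic_substitutions: true   # opt in to evaluating derived substitutions
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:${_TAG}', '.']
```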

Cloud Key Management Service

Keys hosted by Thales are now supported in Cloud EKM. To learn more, see Cloud EKM.

Compute Engine

N2D machine types are now available in all three zones of europe-west1 (Belgium). Read more information on the VM instance pricing page.

Firestore

The Google Cloud console now includes a Firestore usage dashboard.

Identity and Access Management

Using the Cloud IAM API to sign JSON Web Tokens (JWTs) or binary blobs is now deprecated.

June 19, 2020

Cloud Data Loss Prevention

Added support for location-based processing.

Cloud Functions

Cloud Functions is now available in the following regions:

  • australia-southeast1 (Sydney)
  • northamerica-northeast1 (Montreal)

See Cloud Functions Locations for details.

Cloud Run for Anthos

Cloud Run for Anthos on Google Cloud version 0.14.0-gke.5 is now available for the following cluster versions (and later):

  • 1.17.6-gke.4

June 17, 2020

Cloud Debugger

Cloud Debugger now lets you canary snapshots and logpoints on your Python applications. To learn more, see the Python page for setting up Cloud Debugger.

Memorystore for Memcached

Added new Memorystore for Memcached regions: Finland (europe-north1), Hong Kong (asia-east2), Jakarta (asia-southeast2), Las Vegas (us-west4), Montréal (northamerica-northeast1), Mumbai (asia-south1), Osaka (asia-northeast2), Salt Lake City (us-west3), São Paulo (southamerica-east1), Seoul (asia-northeast3), and Zurich (europe-west6).

June 16, 2020

BigQuery BigQuery Data Transfer Service

The Top Brands report for Google Merchant Center Best Sellers exports is now in beta.

BigQuery ML

BigQuery ML now supports beta integration with AI Platform. The following models are supported in beta:

Cloud Interconnect

The public documentation for Cloud Interconnect is now located under the Network Connectivity page.

Cloud Router

The public documentation for Cloud Router is now located under the Network Connectivity page.

Cloud Run

The Cloud Run user interface now allows you to copy a Cloud Run service.

Cloud VPN

The public documentation for Cloud VPN is now located under the Network Connectivity page.

Config Connector

You can now use the config-connector tool to export existing Google Cloud resources into Config Connector resource configurations. See the documentation for details.

Bug fixes

Pub/Sub

Retry policies for Pub/Sub subscriptions are now available at the GA launch stage.

June 15, 2020

AI Platform Training

AI Platform Training now supports private services access in beta. You can use VPC Network Peering to create a private connection so that training jobs can connect to your network over private IP.

Learn how to set up VPC Network Peering with AI Platform Training.

Anthos Config Management

A regression in Anthos Config Management 1.3.2 results in unnecessary patches to the API server for the gatekeeper-system namespace and spurious logging for error KNV2005. This "fight" results when the gatekeeper-system namespace is managed in the Git repo, and two Anthos Config Management components (the operator and syncer) are both trying to reconcile the state of the namespace with the API server. The only workaround at this time is to unmanage the gatekeeper-system namespace. The issue will be fixed in Anthos Config Management 1.4.1.

Anthos Service Mesh

1.5.5-asm.2

Fixes a bug in the istioctl HorizontalPodAutoscaling setting that caused Anthos Service Mesh installations to fail.

Cloud Data Loss Prevention

Added infoType detector:

  • VEHICLE_IDENTIFICATION_NUMBER
Cloud Monitoring

The Service Monitoring API is now Generally Available. You can use this feature to create services, set service-level objectives (SLOs), and create alerting policies to monitor your SLOs. See Service monitoring for documentation, and services for reference material.

Cloud VPN

Cloud VPN now supports an org-level policy that restricts peer IP addresses through a Cloud VPN tunnel.

Compute Engine Resource Manager

The Organization Policy for restricting peer IP addresses through a Cloud VPN tunnel has been launched into general availability.