Google Cloud release notes

The following release notes cover the most recent changes over the last 30 days. For a comprehensive list, see the individual product release note pages.

You can see the latest product updates for all of Google Cloud on the Google Cloud release notes page.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly:

August 13, 2020


BigQuery

The exports per day (Extract Bytes) default quota has been raised from 10 TB to 50 TB per day.

Cloud Monitoring

The new, out-of-the-box Infrastructure Summary dashboard for Compute Engine VMs provides a single-pane-of-glass view into your VM fleet and load balancers. At a glance, you can see the top 5 VMs across a variety of key metrics including memory, CPU, sent/received traffic, latency, disk read/write, and more.

Google Cloud Marketplace Partners

Google Cloud Marketplace supports filters, called Category IDs, that are available to customers within the Google Cloud console. When you add a Category ID, your solution shows up in the filtered view for that category when customers search for solutions in Cloud Marketplace. You can select up to two categories for each of your solutions.

To add or modify categories, go to the Solutions Details page and edit the Category ID section.

Virtual Private Cloud

GRE support for VPC networks is now available in Beta.

August 12, 2020

Cloud Billing

Recommendations for Compute Engine committed use discounts are now available in beta. Recommendations give you opportunities to optimize your compute costs by analyzing your VM spending trends and recommending committed use discount contracts. To understand and purchase committed use discount recommendations, see the documentation.

Cloud Monitoring

Enhancements to the pre-configured Compute Engine VM Instances dashboard. The inventory table now includes a Monitoring Agent Status column, and the Monitoring agent can be installed by using a UI workflow from the table. The Explore tab gives an overview of additional metrics being sent (including agent metrics, custom metrics, and logs-based metrics) as well as a set of quick links to learn more about each type of metric. You can also use the Recommended Alerts button on the dashboard to configure fleet-wide alerts.

Compute Engine

Compute Engine committed use discount recommendations are available in beta. Committed use recommendations give you opportunities to optimize your compute costs by analyzing your VM spending trends. For additional information, see Understanding commitment recommendations.

CPU overcommit on sole-tenant nodes lets you overprovision sole-tenant node resources and schedule more VM CPUs on a sole-tenant node than are normally available. This feature is Generally Available.

Key metrics for persistent disks in the new disk-level Monitoring tab are now Generally Available. Select any persistent disk attached to a single VM from Disks to see mean throughput, peak throughput, mean operations, and peak operations. You can also open each metric in Monitoring for querying, browsing, adding to a dashboard, or configuring alerts.

Migrate for Compute Engine

V4.11 offers integration with Secret Manager. You can store Migrate for Compute Engine passwords and encryption keys as objects in Secret Manager for a higher level of security and control. See Configuring the migration manager for more information.

V4.11 introduces a Windows upgrade with bring-your-own-license (BYOL) feature. When you migrate Windows Server 2008 R2 with a customer-owned license (BYOL), you can upgrade to Windows Server 2012 R2 using BYOL as part of the migration process. See Upgrading Windows Server VMs for more information.

V4.11 introduces automatic deployment of the Google Cloud OS Config agent to migrating VMs. This lets you get insights into the patch status of your migrated VMs and automate deployment of software patches to them. See Adapting VMs to run on Google Cloud for more information.

The network connectivity requirements from the Migrate backend to the Migrate Manager and Cloud Extensions have been reduced: all traffic on this channel now travels over port 443 (HTTPS and TLS) instead of port 9111. See Network access requirements for more information.

Usability enhancements in the following flows:

  • Automatic adjustment of VDDK max open sessions when accessing vSphere 6.5, to avoid exceeding the VDDK max connections limit.
  • Support for vCenter certificate update flow.
  • Enhancement of automatic license assignment feature to offline migration flow.

#160405343: Due to a change in behavior on the activation flow for SUSE, configuring repositories on SUSE Enterprise Linux instances post-detach now fails.

Workaround: The following workaround can be applied prior to detach (either before migration, or after migration but before detach).

  1. Follow the instructions described for Situation 4 to download the required packages for Compute Engine as a tar.gz file.

  2. For SLES 12.x, run the following commands:

    tar -xf late_instance_offline_update_gce_SLE12.tar.gz
    cd x86_64/
    zypper --no-refresh --no-remote --non-interactive in *.rpm

  3. For SLES 15.x, run the following commands:

    tar -xf late_instance_offline_update_gce_SLE15.tar.gz
    cd x86_64/
    zypper --no-refresh --no-remote --non-interactive in *.rpm

#149004085: Ubuntu 14 migrated from on-premises may fail to start networking post-detach.

Workaround: Connect via the serial console and manually add the network interface with DHCP.

#145086776: In rare cases, older versions of RHEL 7 may hang during streaming or reach a kernel panic. These issues were resolved in later versions of RHEL 7.

Workaround: Run sudo yum update before migrating to update the system.

#145644737: Clones created on Azure or AWS from instances of Linux distributions that use cloud-init may experience issues in booting after installing the Linux prep package.

Workaround: Uninstall the package before cloning and reinstall when preparing to migrate.

#143313211: Customers migrating RHEL 6.8 VMs may experience boot issues in the cloud destination.

RHEL 6.x systems using kernel versions 2.6.32-xxx and LVM may reach a kernel panic when booting in Compute Engine during migration.

Workaround: Upgrade the kernel to 2.6.32-754 or later before migrating.

#143262721: Migration of a VM from Azure fails when a data disk is larger than 4 TB.

At this time, Migrate for Compute Engine does not support migration of Azure VMs with data disks larger than 4 TB.

Workaround: Make sure the VM's data disks are smaller than 4 TB.

#131532690: Run-in-cloud and migration operations may fail for Windows Server 2016 workloads when Symantec Endpoint Protection (SEP) is installed. This may also happen when SEP appears to be disabled.

Workaround: Modify the workload's network interface bindings to remove the SEP option.

  1. Download Microsoft Network VSP Bind (nvspbind)
  2. Install Microsoft_Nvspbind_package.EXE into c:\temp.
  3. Open a command prompt as an Administrator and run the following:

    nvspbind.exe /d * symc_teefer2

#131614405: When the Velostrata Prep RPM is installed on SUSE Linux Enterprise Server 11, the VM obtains a DHCP IP address in addition to an existing static IP configuration. This issue occurs when the VM is started on-premises in a subnet that is enabled with DHCP services.

Note: The issue does not occur when the subnet has no DHCP services. There is no connectivity impact for communications with the original static IP address.

#131637800: After registering the Velostrata plug-in, running the Cloud Extension wizard might generate an error "XXXXXXXXXX" upon "Finish".

Workaround: Un-register the Velostrata plug-in and restart the vSphere Web client service, then re-register the plug-in. Contact support if the issue persists.

#131548730: In some cases, when a VM is moved to Run-in-Cloud while a 3rd party VM-level backup solution holds a temporary snapshot, the Migrate for Compute Engine periodic write-back operations will not complete even after the backup solution deletes the temporary snapshot. The uncommitted writes counter on the VM will show an increasing size and no consistency checkpoint will be created on-premises.

Workaround: Select the Run On-Premises action for the VM and wait for the task to complete, which will commit all pending writes. Then select the Run-in-Cloud action again. Note that committing many pending writes may take a while. Do not use the Force option as this will result in the loss of the uncommitted writes.

#131605387: A vCenter reboot causes Velostrata tasks in vCenter to disappear from the UI. This is a vCenter limitation.

Workaround: Use the Velostrata PowerShell module to monitor Velostrata managed VMs or Cloud Extensions tasks that are currently running.

#131638716: With an ESXi host in maintenance mode, if a VM is moved to cloud, the operation will fail and get stuck in the rollback phase.

Workaround: Manually cancel the Run-in-Cloud task, migrate the VM to another ESXi host in the cluster and retry the Run-in-Cloud operation.

#131638455: A Run-in-Cloud operation fails with the error - "Failed to create virtual machine snapshot. The attempted operation cannot be performed in the current state (Powered off)".

Workaround: The VMware VM snapshot file may be pointing to a non-existent snapshot. Contact support for assistance in correcting the issue.

#131534862: In rare cases, after running a workload back on-premises, the workload's VMDKs are locked. In certain cases, this is due to network disruptions between the Velostrata management appliance and the ESXi host on which the workload is running.

Note: The issue will resolve itself after 1-2 hours.

#131550214: During Detach, the operation might fail with the following error message: "Operation was canceled".

Workaround: Retry the Detach operation.

#131650367: When performing a detach after a cancel detach operation, the action may fail.

Workaround: Retry the operation.

#131649978: In the event of certain system failures, Velostrata components disconnect from vCenter. In this case, an event may not be sent, resulting in the alarm either not being set properly or not being cleared properly.

Workaround: Clear the alarm manually in vCenter.

#131532549: For workloads with a Windows machine using a retail license, when returning from the cloud, the license is not present.

Workaround: Reinstall the license.

#131555885: The vCenter "Export OVA" operation is available while a VM in the cloud is running in cache mode; however, this operation results in a corrupted OVA.

Workaround: Only create an OVA after detach.

#131647857: In rare cases, when a cloud component instance is created and the system fails before the instance is tagged, the instance remains untagged. This prevents full clean-up or repair of the Cloud Extension.

Workaround: Manually tag the instance, and then run "Repair".

#131537125: Cloud Extension high availability does not work for workloads running Ubuntu OS with LVM configuration.

Workaround: Update the kernel to 3.13.0-161 or higher.

#131560126: SUSE 12: Due to a bug in SUSE kernels older than 4.2, configurations that include BTRFS mounts with subvolumes are not supported.

Workaround: Upgrade to a SUSE version with kernel >= 4.2 (SP2).

#131533480: When using the Create Cloud Extension wizard, entering an invalid HTTP proxy address does not generate a warning message.

Workaround: Delete the CE and then create the CE with a valid HTTP proxy address.

#131647654: The Run on-premises operation succeeds, but the status is marked as failed with the error "Failed to consolidate snapshots".

Workaround: Consolidate snapshots via vCenter, and clear the error manually.

#131558198: The PowerShell client for cloud-to-cloud Runbooks reports errors when running on PowerShell 3.0.

Workaround: Upgrade to PowerShell 4.0.

#131533056: When migrating RHEL 7.4 from AWS to Google Cloud, the Google Cloud agent is not installed automatically.

Workaround: Manually remove the AWS agent and install the Google Cloud agent.

#131532713: After an offline migration of Windows 2003 R2, if a NIC is manually deleted, it may be impossible to auto-detect and automatically reinstall it.

Workaround: The VM storage can be attached to a different VM, and the NIC registry entry can be imported manually using a similar VM as a reference. Contact support for assistance.

#131532666: Linux versions running with kernel version 2.6.32 may experience a kernel panic on ephemeral storage access failures; these are more likely while streaming over iSCSI.

Workaround: Upgrade your kernel. The issue will also reduce in likelihood after Detach.

#131532846: Certain firewall and antivirus products may cause Windows VMs to fail when moved to the cloud by blocking iSCSI traffic.

Workaround: Disable the interfering service while migrating and reinstall it after detach.

#131532882: In certain cases, initiating Run in Cloud during a Windows update may cause the update to terminate abruptly and cause a failure to boot in the cloud.

Workaround: Allow the system to finish Windows update and/or suspend Windows updates before migrating.

#135664281: When completing or canceling Azure to Google Cloud migration, if Velostrata Management failed to start the importer, Velostrata-created resources may be left in the original instance's resource group.

#133137658: Scenario: No network connection between the Migration Manager and vSphere.

Customer impact: The RunInCloud task stays stuck due to a failed call to getReadSessions on vSphere.

Workaround: Fix the network connection. If that is not possible, cancel the task and try again.

#135573857: Scenario: When moving a VM back on-premises with the "force" flag, a failure to consolidate snapshots causes the VM to remain managed by Velostrata. RunInCloud on the same VM may fail, since it is not allowed on managed VMs.

Workaround: Wait a couple of minutes and try again.

#137082702: In rare cases, the Cancel detach operation succeeds but the VM instance will fail to start.

Workaround: Move the instance back on-premises, and then move it to the cloud again.

August 11, 2020


BigQuery

For flat-rate pricing, the minimum slot purchase is now 100 slots. Slots can be purchased in 100-slot increments.
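As a quick illustration of the new constraint, a flat-rate purchase amount is valid when it is at least 100 slots and a multiple of 100. The helper below is a hypothetical sketch, not part of any BigQuery API:

```python
def is_valid_slot_purchase(slots: int) -> bool:
    """Check a flat-rate slot purchase against the new minimum and increment.

    Hypothetical helper: purchases must be at least 100 slots and
    made in 100-slot increments.
    """
    return slots >= 100 and slots % 100 == 0

print(is_valid_slot_purchase(100))  # True: the new minimum
print(is_valid_slot_purchase(150))  # False: not a 100-slot increment
print(is_valid_slot_purchase(500))  # True
```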

Cloud Logging

Users now manage logs exclusions through logs sinks. As a result, custom roles that have the logging.sinks.* permissions can now control the volume of logs ingested into Cloud Logging through logs sinks.

We recommend that you review any custom roles with the logging.sinks.* permissions so that you can make adjustments as needed.
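For example, an exclusion on a sink might be managed with the gcloud CLI roughly as follows. The sink name, exclusion name, filter, and the `--add-exclusion` flag shape are assumptions; check the `gcloud logging sinks` reference for your SDK version:

```shell
# Sketch: add an exclusion to the _Default sink so that debug-level
# Compute Engine instance logs are not ingested into Cloud Logging.
# The flag shape and filter are illustrative assumptions.
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-gce-debug,filter='resource.type="gce_instance" AND severity<=DEBUG'
```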

Beta release: You can now use Logs Buckets to centralize or divide your logs based on your needs. For information about this feature, refer to the Managing logs buckets guide.

August 10, 2020

AI Platform Deep Learning VM Image

M54 release

  • Added support for the europe-west3 region
  • Updated the Explainable AI sdk and added explainers
  • Fixed llvm-openmp support
  • Added support for instance auto upgrade
  • Made Deep Learning VM images and Deep Learning Containers more consistent for TPU
  • Updated NCCL to 2.7.6 in CU110 images
  • Added the scikit-learn package and container
  • Added JRE to R images
  • Limited custom container memory utilization
Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.2-airflow-1.10.3, composer-1.11.2-airflow-1.10.6, and composer-1.11.2-airflow-1.10.9. The default is composer-1.11.2-airflow-1.10.6. Upgrade your Cloud SDK to use features in this release.
  • Airflow 1.10.6 and 1.10.9: You can now specify a location argument when creating a BigQueryCheckOperator to use it in a different region from the Composer environment.
  • Fixed GKE setting incompatibilities that broke environment creation for Composer versions between 1.7.2 and 1.8.3.
  • When DAG serialization is on, plugins and DAGs are no longer synced when the Airflow web server starts up. This fixes web server failures when plugins use custom PyPI packages.
  • Fixed intermittent failures when triggering a DAG from the Airflow Web UI with DAG serialization turned on.
  • Fixed update operations (installing Python dependencies and upgrading environments) for domain-scoped projects.
  • Fixed a broken link to the Airflow documentation in Airflow 1.10.9.

August 08, 2020

Config Connector

Added support for BigtableTable

Fixed a bug where a CRD would be marked as uninstalling on a dry-run delete

August 07, 2020

Cloud Billing

You can now view a summary of all your spend-based committed use discounts (CUD) and purchase new commitments in the commitment dashboard. The dashboard lists the type of commitment, the region it's located in, current active commitments, term length, and the commitment's start and end dates. See the documentation for more details.

Compute Engine

You can now update multiple instance properties using a single request from the command-line tool or the Compute Engine API. For more information, see Updating instance properties.
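As an illustration, a single update request against the Compute Engine REST API uses an HTTP PUT on the instance resource. The project, zone, instance name, and body file below are placeholders; see Updating instance properties for the authoritative request shape:

```shell
# Sketch: update several instance properties in one PUT request.
# instance.json must contain the full desired instance configuration;
# names and paths here are placeholders.
curl -X PUT \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @instance.json \
  "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances/my-vm"
```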

August 06, 2020

AI Platform Deep Learning VM Image

M53 release

TensorFlow Enterprise 2.3 images, including images that support CUDA 11.0, are now available.


BigQuery

BigQuery is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

BigQuery BI Engine

BigQuery BI Engine is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

BigQuery Data Transfer Service

BigQuery Data Transfer Service is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

BigQuery ML

BigQuery ML is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

Cloud Billing

If you have a negotiated pricing contract associated with your Cloud Billing account, starting with your July 2020 invoice, the Cloud Billing report and the Cost Breakdown report now support displaying your costs calculated using list prices, displaying your negotiated savings as a separate credit. This view helps you see how much money you are saving on your Google Cloud costs because of your negotiated pricing contract.

For information on how to view your list costs and negotiated savings in reports, see the documentation:

Cloud Spanner

A new multi-region instance configuration is now available in North America - nam10 (Iowa/Salt Lake).

August 05, 2020

Cloud Functions

Cloud Functions Java 11, Python 3.7 or 3.8, and Go 1.13 runtimes now build container images in the user's project, providing direct access to build logs and removing the preset build-time quota.

See Building Cloud Functions for details.

Istio on Google Kubernetes Engine

Starting with version 1.6, the Istio on GKE add-on uses the Istio Operator for installation and configuration. When you upgrade your cluster to 1.17.7-gke.8+, 1.17.8-gke.6+, or higher, the Istio 1.6 Operator and control plane are installed alongside the existing 1.4.x Istio control plane. The upgrade requires user action and follows the dual control plane upgrade process (referred to as canary upgrades in the Istio documentation). With a dual control plane upgrade, you can migrate to the 1.6 version by setting a label on your workloads to point to the new control plane and performing a rolling restart. To learn more, see Upgrading to Istio 1.6 with Operator.


Pub/Sub

Pub/Sub message ordering is now available at the beta launch stage.

August 04, 2020

AI Platform Training

Read a new guide to distributed PyTorch training. You can use this guide with pre-built PyTorch containers, which are in beta.

Anthos GKE on AWS

Anthos GKE on AWS 1.4.1-gke.17 is released. This release fixes a memory leak that causes clusters to become unresponsive.

To upgrade your clusters, perform the following steps:

  1. Restart your control plane instances.
  2. Upgrade your management service to aws-1.4.1-gke.17.
  3. Upgrade your user cluster's AWSCluster and AWSNodePools to 1.16.9-gke.15.

Use version 1.16.9-gke.15 for creating new clusters.

Compute Engine

You can attach a maximum of 24 local SSD partitions for 9 TB per instance. This is generally available on instances with N1 machine types. For more information, see Local SSDs.
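The 9 TB figure follows from the fixed 375 GB size of each local SSD partition (a documented Compute Engine constant, stated here for illustration):

```python
PARTITION_GB = 375    # each local SSD partition is 375 GB
MAX_PARTITIONS = 24   # new maximum per instance on N1 machine types

total_gb = PARTITION_GB * MAX_PARTITIONS
print(total_gb)       # 9000 GB, i.e. 9 TB per instance
```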

August 03, 2020

Anthos GKE on AWS

Anthos GKE on AWS 1.4.1-gke.15 clusters will experience a memory leak that results in an unresponsive cluster. A fix for this issue is in development.

If you are planning to deploy an Anthos GKE on AWS cluster, wait until the fix is ready.

Cloud Asset Inventory fields deprecation

The following two fields for assets are now deprecated in the output exported to Cloud Storage and BigQuery.

  • metadata.resourceVersion
  • status.conditions.lastHeartbeatTime
Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.1-airflow-1.10.3, composer-1.11.1-airflow-1.10.6, and composer-1.11.1-airflow-1.10.9. The default is composer-1.11.1-airflow-1.10.6. Upgrade your Cloud SDK to use features in this release.
  • Composer now enforces iam.serviceAccounts.actAs permission checks on the service account specified during Composer environment creation. See Creating environments for details.
  • Private IP environments can now be created using non-RFC 1918 CGN ranges.
  • New PyPI packages have been added for Composer version composer-1.11.0-airflow-1.10.6. These make it possible to install apache-airflow-backport-providers-google with no additional package upgrades.
  • The PyPI package google-cloud-datacatalog can now be installed on Composer environments running Airflow 1.10.6 and Python 3.
  • Cloud Composer 1.11.1+: Backport providers are installed by default for Airflow 1.10.6 and 1.10.9.
  • You can now use the label.worker_id filter in Cloud Monitoring logs to see logs sent out of a specific Airflow worker Pod.
  • With the Composer Beta API, you can now upgrade an environment to any of the three latest Composer versions (instead of just the latest).
  • You can now modify these previously blocked Airflow configurations: [scheduler] scheduler_heartbeat_sec, [scheduler] job_heartbeat_sec, [scheduler] run_duration
  • A more informative error message was added for environment creation failures caused by issues with Cloud SQL instance creation.
  • Improved error reporting has been added for update operations that change the web server image in cases where the error occurs before the new web server image is created.
  • The Airflow-worker liveness check has been changed so that a task just added to a queue will not fire an alert.
  • Reduced the amount of non-informative logs thrown by the environment in Composer 1.10.6.
  • Improved the syncing procedure for env_var.json in Airflow 1.10.9 (it should no longer throw "missing file:" errors).
  • Airflow-worker and airflow-scheduler will no longer throw "missing env_var.json" errors in Airflow 1.10.6.
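The previously blocked scheduler settings listed above can now be overridden like any other Airflow configuration, for example with a command of roughly this shape. The environment name and location are placeholders, and the `section-property` key format follows gcloud's convention for `--update-airflow-configs`:

```shell
# Sketch: override a previously blocked Airflow scheduler setting.
# "my-environment" and the location are placeholders.
gcloud composer environments update my-environment \
  --location us-central1 \
  --update-airflow-configs=scheduler-scheduler_heartbeat_sec=10
```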
Cloud Logging

Alpha release: You can now use Logs Buckets to centralize or divide your logs based on your needs. For information about this feature, refer to the Managing logs buckets guide. To participate in the alpha or to get notified when Logs Buckets goes beta, fill out the sign up form.

Cloud Run

When setting up Continuous Deployment in the Cloud Run user interface, you can now select a repository that contains Go, Node.js, Python, Java, or .NET Core code. It will be built using Google Cloud Buildpacks without needing a Dockerfile.

Compute Engine

You can now access C2 machine types in the following zones: Taiwan: asia-east1-a, Singapore: asia-southeast1-a, Sao Paulo: southamerica-east1-b,c, and Oregon: us-west1-b. For more information, see VM instance pricing.


Dataproc

Dataproc users are required to have service account ActAs permission to deploy Dataproc resources, for example, to create clusters and submit jobs. See Managing service account impersonation for more information.

Opt-in for existing Dataproc customers: This change does not automatically apply to current Dataproc customers without ActAs permission. To opt in, see Securing Dataproc, Dataflow, and Cloud Data Fusion.

July 31, 2020


BigQuery

An updated version of the Magnitude Simba ODBC driver includes performance improvements and bug fixes.

Cloud Functions

Cloud Functions is now available in the following regions:

  • asia-south1 (Mumbai)
  • asia-southeast2 (Jakarta)
  • asia-northeast3 (Seoul)

See Cloud Functions Locations for details.

Compute Engine

N2D machine types are now available in asia-east1 in all three zones. For more information, see the VM instance pricing page.

Config Connector

Added support for ArtifactRegistryRepository

Changed DataflowJob to allow spec.parameters and spec.ipConfiguration to be updatable

Fixed an issue that caused ContainerNodePool and SQLDatabase to display UpdateFailed due to the referenced ContainerCluster or SQLDatabase not being ready

Fixed an issue preventing the creation of BigQuery resources that read from Google Drive files due to insufficient OAuth 2.0 scopes

Fixed an issue causing SourceRepoRepository to constantly update even when there were no changes


Dataproc

Enabled the Kerberos automatic-configuration feature. When creating a cluster, users can enable Kerberos by setting the dataproc:kerberos.beta.automatic-config.enable cluster property to true. When using this feature, users do not need to specify the Kerberos root principal password with the --kerberos-root-principal-password and --kerberos-kms-key-uri flags.
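Using the cluster property named above, enabling the feature at cluster creation might look like the following. The cluster name and region are placeholders:

```shell
# Sketch: enable Kerberos automatic configuration at cluster creation.
gcloud dataproc clusters create my-cluster \
  --region us-central1 \
  --properties dataproc:kerberos.beta.automatic-config.enable=true
```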

New sub-minor versions of Dataproc images: 1.3.65-debian10, 1.3.65-ubuntu18, 1.4.36-debian10, 1.4.36-ubuntu18, 1.5.11-debian10, 1.5.11-ubuntu18, 2.0.0-RC7-debian10, and 2.0.0-RC7-ubuntu18.

1.3+ images (includes Preview image):

  • HADOOP-16984: Added support to read history files only from the done directory.

  • MAPREDUCE-7279: Display the Resource Manager name on the HistoryServer web page.

  • SPARK-32135: Show the Spark driver name on the Spark history web page.

  • SPARK-32097: Allow reading Spark history log files via the Spark history server from multiple directories.

Images 1.3 - 1.5:

  • HIVE-20600: Fixed Hive Metastore connection leak.

Images 1.5 - 2.0 preview:

Fixed an issue where optional components that depend on HDFS failed on single node clusters.

Fixed an issue that caused workflows to be stuck in the RUNNING state when managed clusters (created by the workflow) were deleted while the workflow was running.

Identity and Access Management

We are delaying the upcoming changes for deleted members that are bound to a role. These changes will take effect starting on September 14, 2020.

Storage Transfer Service

Transfers from Microsoft Azure Blob Storage are now generally available.

July 30, 2020


Anthos

Anthos 1.3.3 is now available.

Updated components:

Anthos Config Management

Updated the git-sync image to fix security vulnerability CVE-2019-5482.

Anthos GKE on-prem

Anthos GKE on-prem 1.3.3-gke.0 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.3.3-gke.0 clusters run on Kubernetes 1.15.12-gke.9.


Cloud Composer

Cloud Composer is now available in Osaka (asia-northeast2).

Cloud Logging

The Logs field explorer panel is now generally available (GA). To learn more, see the Logs field explorer section on the Logs Viewer (Preview) interface page.

Cloud Run

You can now tag Cloud Run revisions. Tagged revisions get a dedicated URL, allowing developers to reach these specific revisions without needing to allocate traffic to them.

Cloud Spanner

The Cloud Spanner emulator is now generally available, enabling you to develop and test applications locally. For more information, see Using the Cloud Spanner Emulator.

Compute Engine

When creating patch jobs, you can now choose whether to deploy zones concurrently or one at a time. You can also now specify a disruption budget for your VMs. For more information, see Patch rollout options.
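For example, a patch job that rolls out zone by zone with a disruption budget might be started roughly as follows. The flag names and values are assumptions; see Patch rollout options and the `gcloud compute os-config patch-jobs execute` reference for the exact syntax:

```shell
# Sketch: run a patch job one zone at a time, allowing at most 25%
# of targeted VMs to be disrupted at once. Flag shapes are assumptions.
gcloud compute os-config patch-jobs execute \
  --instance-filter-all \
  --rollout-mode=zone-by-zone \
  --rollout-disruption-budget-percent=25
```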

N2 machine types are now available in Sao Paulo (southamerica-east1) in all three zones. For more information, see VM instance pricing.

You can access m2-megamem memory-optimized machine types in all zones that already have m2-ultramem memory-optimized machine types. These two machine types have also been added to asia-south1-b. You can use m1-ultramem machine types in asia-south1-a. To learn more, read Memory-optimized machine type family.


Dialogflow

GA (general availability) launch of mega agents.

Beta launch of the Facebook Workplace integration.

Network Intelligence Center

Network Topology no longer supports infrastructure segments. This feature is deprecated and will be completely removed after 90 days. If you have any questions, see Getting support.

July 28, 2020

Compute Engine

Improved validation checks will be introduced on API calls starting on August 3, 2020, to increase the reliability and REST API compliance of the Compute Engine platform for all users. Learn how to Validate API Requests to ensure your requests are properly formed.

Memorystore for Redis

Support for VPC Service Controls on Memorystore for Redis is now Generally Available.

Migrate for Anthos

The migctl migration cleanup command has been removed and is no longer necessary.

In previous releases, you used a command in the form: migctl source create ce my-ce-src --project my-project --zone zone to create a migration for Compute Engine. The --zone option has been removed when creating a Compute Engine migration. Using the --zone option in this release causes an error.

The migctl migration logs command has been removed. You now use the Google Cloud Console to view logs.

Added the new --json-key sa.json option to the migctl source create ce command to create a migration for Compute Engine, where sa.json specifies a service account. See Optionally creating a service account when using Compute Engine as a migration source for more.

To edit the migration plan, you must now use the migctl migration get my-migration command to download the plan. After you are done editing the plan, you have to upload it by using the migctl migration update my-migration command. See Customizing a migration plan for more.
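Put together, the edit flow described above looks like this, using the commands named in the release note ("my-migration" is a placeholder):

```shell
# Download the migration plan for local editing.
migctl migration get my-migration

# ... edit the downloaded plan file ...

# Upload the edited plan back to the cluster.
migctl migration update my-migration
```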

Added support for Anthos GKE on-prem clusters running on VMware. On-prem support lets you migrate source VM workloads in a vCenter/vSphere environment to a GKE on-prem cluster running in the same vCenter/vSphere environment. See Migration prerequisites for the requirements for on-prem migration.

The Google Cloud Console provides a web-based, graphical user interface that you can use to manage your Google Cloud projects and resources. Migrate for Anthos now supports the migration of workloads by using the Google Cloud Console.

In this release, Migrate for Anthos on the Cloud Console does not support migrations for Windows or for on-prem, including monitoring Windows or on-prem migrations.

Migrate for Anthos now includes Custom Resource Definitions (CRDs) that enable you to easily create and manage migrations using an API. For example, you can use these CRDs to build your own automated tools.

Added the node-selectors and tolerations options to the migctl setup install installation command, which let you install Migrate for Anthos on a specific set of nodes or node pools in a cluster. See Installing Migrate for Anthos.

You can use Migrate for Anthos to migrate Windows VMs to workloads on GKE. This process clones your Compute Engine VM disks and uses the clone to generate artifacts (including a Dockerfile and a zip archive with extracted workload files and settings) you can use to build a deployable GKE image. See Adding a Windows migration source.

160309992: Editing a migration plan from the GUI console might fail if it was also edited using migctl.

161135630: Attempting multiple migrations of the same remote VM (from VMware, AWS, or Azure) simultaneously might result in a stuck migration process.

Workaround: Delete the stuck migration.

161214397: For Anthos on-prem, if the service account used to upload container images to the Container Registry is missing, the migration might get stuck.

Workaround: Add the service account. If you are using the Migrate for Anthos CRD API, delete the GenerateArtifactsTask and recreate it. If you are using the migctl CLI tool, delete the migration and recreate it. You can first download the migration YAML by using migctl migration get to back up any customizations you have made.

161110816: migctl migration create with a source that doesn't exist fails with a non-informative error message: request was denied.

161104564: Creating a Linux migration with an incorrect os-type specification causes the migration process to get stuck until it is deleted.

160858543, 160836394, 160844377, 154430477, 154403665, 153241390, 153239696, 152408818, 151516642, 132002453: Unstable network in Migrate for Anthos infrastructure, or a GKE node restart, might cause migration to get stuck.

Workaround: Delete the migration and re-create it. If recreating the migration does not solve the issue, please contact support.

161787358: In some cases, upgrading from version v1.3 to v1.4 might fail with Failed to convert source message.

Workaround: Re-run the upgrade command.

153811691, 153439420: Migrate for Anthos support for older Java versions does not handle OpenJDK 7 and 8 CPU resource calculations.

152974631: Using GKE nodes with CPU and Memory configurations below the recommended values might cause migrations to get stuck.

GKE on-prem preview: If a source was created with migctl source create using the wrong credentials, you could not delete the migration with migctl migration delete. This issue has been fixed in the GA release of on-prem support.

In version 1.4, Migrate for Anthos installs to and performs migrations in the v2k-system namespace by default. In previous releases, you could specify the namespace. The option to specify a namespace has been removed.

157890913, 160082702, 161125635, 159693579: A migration might continue to indicate that it is running even though an issue has prevented further processing.

Workaround: Check event messages on the migration object by using the verbose migctl status command: migctl migration status migration_name -v. You might be able to correct the issue so that the migration can continue; if an Error event is listed without further retries, delete the migration and recreate it.

An example is creating a Windows migration on a cluster with no Windows nodes. In this case, the event message shows: Warning FailedScheduling 10s Pod discover-xyz 0/1 nodes are available: 1 node(s) didn't match node selector.

VPC Service Controls

General availability for the following integration:

July 27, 2020


INFORMATION_SCHEMA views for streaming metadata are now in alpha. You can use these views to retrieve historical and real-time information about streaming data into BigQuery.

Cloud Run

Cloud Run is now available in asia-southeast1 (Singapore).


Dataflow now supports Dataflow Shuffle, Streaming Engine, FlexRS, and the following regional endpoints in GA:

  • northamerica-northeast1 (Montréal)
  • asia-southeast1 (Singapore)
  • australia-southeast1 (Sydney)

Beta launch of Dialogflow Messenger. This new integration provides a customizable chat dialog for your agent that can be embedded in your website.

Security Command Center

Security Command Center v1beta1 API will be disabled on Jan. 31, 2021. All users will be required to migrate to Security Command Center v1 API, which is now in general availability.

  • Update to Google-provided v1 API client libraries.
  • Move your client libraries and HTTP/grpc calls to v1 by following instructions in the reference documentation for service endpoints and SDK configuration.
  • If you call this service using your own libraries, follow the guidance in our Security Command Center API Overview when making API requests.
  • To use ListFindings calls in the v1 API, update your response handling to respond to an extra layer of object nesting, as shown below:
    • v1beta1: response.getFindings().forEach( x -> ....)
    • v1: response.getListFindingsResults().forEach(x -> { x.getFinding(); .... })

Additional changes to the v1 API are listed below. Learn more about Using the Security Command Center API.

The SeverityLevel finding source property for all Security Health Analytics findings will be removed and replaced with a field named Severity, which retains the same values.

  • Impact: Finding notification filters, post-processing, and alerting based on the SeverityLevel finding source property will no longer be possible.
  • Recommendation: Replace the SeverityLevel finding source property with the Severity finding attribute property to retain existing functionality.

The nodePools finding source property will be removed from OVER_PRIVILEGED_SCOPES findings and replaced with a source property named VulnerableNodePools.

  • Impact: Finding notification filters, post-processing, and alerting based on this finding source property may fail.
  • Recommendation: Modify workflows as necessary to utilize the new VulnerableNodePools source property.

The finding category 2SV_NOT_ENFORCED is being renamed MFA_NOT_ENFORCED.

  • Impact: Case-sensitive finding notification filters, post-processing, and alerting based on the previous finding category name may fail.
  • Recommendation: Update any post-processing to use the new category name.

The ExceptionInstructions source property will be removed from all Security Health Analytics findings.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property may fail.
  • In progress: A new property that will indicate the current state of findings is being developed.

The ProjectId source property will be removed from all Security Health Analytics findings.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property may fail.
  • Recommendation: Update workflows to utilize the project ID in the resource.project_display_name field of a ListFindingsResult.

The AssetSettings finding source property will be removed from PUBLIC_SQL_INSTANCE, SQL_PUBLIC_IP, SSL_NOT_ENFORCED, AUTO_BACKUP_DISABLED, SQL_NO_ROOT_PASSWORD, and SQL_WEAK_ROOT_PASSWORD finding types, as it contains data duplicated from the asset entity.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Replace the AssetSettings finding source property with the Settings resource property from the asset underlying the finding to retain existing functionality.

The Allowed finding source property from OPEN_FIREWALL findings will be replaced with a new field named ExternallyAccessibleProtocolsAndPorts, which will contain a subset of the values from the Allowed property.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Modify your workflows as necessary to utilize the new ExternallyAccessibleProtocolsAndPorts source property.

The SourceRanges finding source property from OPEN_FIREWALL findings will be replaced with a new field named ExternalSourceRanges, which will contain a subset of the values from the SourceRanges property.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Modify your workflows as necessary to utilize the new ExternalSourceRanges source property.

As of Jan. 31, 2021, the UpdateFinding API will no longer support storing string properties that are longer than 7,000 characters.

  • Impact: Calls to UpdateFinding that seek to store string properties longer than 7,000 characters will be rejected with an invalid argument error.
  • Recommendation: Consider storing string properties longer than 7,000 characters as JSON structs or JSON lists. Learn more about writing findings.

As of Sept. 1, 2020, the ListFindings API will no longer support searching on finding properties that are longer than 7,000 characters.

  • Impact: Searches on strings that are longer than 7,000 characters will not return expected results. For example, if a partial string match filter has a match at the 7,005th character on a property in a finding, that finding will not be returned because the match is past the 7,000-character threshold. An exception will not be returned.
  • Recommendation: Customers can remove filter restrictions (e.g. x : "some-value") that are supposed to match very long properties. The results can then be filtered locally to remove findings whose strings do not match designated criteria. Learn more about filtering findings.

The OffendingIamRoles source property in extensions of IAM Scanner Configurations will use structured data instead of a JSON-formatted string.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type on findings of the following categories: ADMIN_SERVICE_ACCOUNT, NON_ORG_IAM_MEMBER, PRIMITIVE_ROLES_USED, OVER_PRIVILEGED_SERVICE_ACCOUNT_USER, REDIS_ROLE_USED_ON_ORG, SERVICE_ACCOUNT_ROLE_SEPARATION, KMS_ROLE_SEPARATION.
  • Recommendation: Update workflows to utilize the new data type.

The QualifiedLogMetricNames source property in specific Monitoring findings from Security Health Analytics will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings of the following categories: AUDIT_CONFIG_NOT_MONITORED, BUCKET_IAM_NOT_MONITORED, CUSTOM_ROLE_NOT_MONITORED, FIREWALL_NOT_MONITORED, NETWORK_NOT_MONITORED, OWNER_NOT_MONITORED, ROUTE_NOT_MONITORED, SQL_INSTANCE_NOT_MONITORED.
  • Recommendation: Update workflows to utilize the new data type.

The AlertPolicyFailureReasons source property in specific Monitoring findings from Security Health Analytics will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings of the following categories: AUDIT_CONFIG_NOT_MONITORED, BUCKET_IAM_NOT_MONITORED, CUSTOM_ROLE_NOT_MONITORED, FIREWALL_NOT_MONITORED, NETWORK_NOT_MONITORED, OWNER_NOT_MONITORED, ROUTE_NOT_MONITORED, SQL_INSTANCE_NOT_MONITORED.
  • Recommendation: Update workflows to utilize the new data type.

The CompatibleFeatures source property in WEAK_SSL_POLICY findings will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings.
  • Recommendation: Update workflows to utilize the new data type.

July 25, 2020

Cloud Load Balancing

The introductory period during which you could use Internal HTTP(S) Load Balancing without charge has ended. Starting July 25, 2020, your usage of Internal HTTP(S) Load Balancing will be billed to your project.

July 24, 2020

Anthos GKE on AWS

Anthos GKE on AWS is now generally available.

Clusters support in-place upgrades, with the ability to upgrade the control plane and node pools separately.

Clusters can be deployed in a high availability (HA) configuration, where control plane instances and node pools are spread across multiple availability zones.

Clusters have been validated to support up to 200 nodes and 6000 pods.

The number of nodes can be scaled dynamically based on traffic volume, which increases utilization, reduces cost, and improves performance.

Anthos can be deployed within existing AWS VPCs, leveraging existing security groups to secure those clusters. Customers can route ingress traffic by using NLBs and ALBs. Additionally, Anthos on AWS supports AWS IAM and OIDC. This makes deploying Anthos easy, eliminates the need to provision new accounts, and minimizes configuration of the environment.

With Anthos Config Management, enterprises can set policies on their AWS workloads, and with Anthos Service Mesh, they can monitor, manage, and secure them.

Kubernetes settings (flags and sysctl settings) have been updated to match GKE.

Upgrades from beta versions are not supported. To install Anthos GKE on AWS, you must remove your user and management clusters, then reinstall them.

Anthos Service Mesh

Anthos Service Mesh on GKE on AWS is supported.

For more information, see Installing Anthos Service Mesh on GKE on AWS.


BigQuery Data Transfer Service

BigQuery Data Transfer Service is now available in the following regions: Montréal (northamerica-northeast1), Frankfurt (europe-west3), Mumbai (asia-south1), and Seoul (asia-northeast3).

Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.0-airflow-1.10.2, composer-1.11.0-airflow-1.10.3, composer-1.11.0-airflow-1.10.6, and composer-1.11.0-airflow-1.10.9. The default is composer-1.11.0-airflow-1.10.3. Upgrade your Cloud SDK to use features in this release.
  • Airflow 1.10.9 is now supported.
  • Environment upgrades have been enabled for the latest two Composer versions (1.11.0 and 1.10.6).
  • Added a retry feature to the Airflow CeleryExecutor (disabled by default). You can configure the number of times Celery will attempt to execute a task by setting the [celery] max_command_attempts property. The delay between each retry can also be adjusted with [celery] command_retry_wait_duration (default: 5 seconds).
  • New PyPI packages have been added for Composer version composer-1.11.0-airflow-1.10.6. These make it possible to install apache-airflow-backport-providers-google with no additional package upgrades.
  • The PyPI package google-cloud-datacatalog can now be installed on Composer environments running Airflow 1.10.6 and Python 3.
  • Fixed synchronization of environment variables to the web server.
  • Improved error reporting when PyPI package installation fails.
  • Composer versions 1.6.1, 1.7.0, and 1.7.1 are now deprecated.
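As a sketch, assuming the usual section-property key format of the gcloud --update-airflow-configs flag, the Celery retry settings above could be overridden like this (environment name, location, and values are placeholders):

```shell
# Allow up to 3 Celery execution attempts with a 10-second wait between retries.
gcloud composer environments update my-environment \
    --location us-central1 \
    --update-airflow-configs=celery-max_command_attempts=3,celery-command_retry_wait_duration=10
```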
Compute Engine
  • NVIDIA® Tesla® T4 GPUs are now available in the following additional regions and zones:

    • Ashburn, Northern Virginia, USA: us-east4-b

    For information about using T4 GPUs on Compute Engine, see GPUs on Compute Engine.

N2 machines are now available in Northern Virginia (us-east4-c). For more information, see the VM instance pricing page.

Data Catalog

Data Catalog is now available in Seoul (asia-northeast3).


Terminals started in Jupyter and JupyterLab now use login shells. The terminals behave as if you SSH'd into the cluster as root.

Upgraded the jupyter-gcs-contents-manager package to the latest version. This upgrade fixes a bug where an attempt to create a file in the virtual top-level directory returned a 404 (NOT FOUND) error message instead of the expected 403 (PERMISSION DENIED) error message.

New sub-minor versions of Dataproc images: 1.3.64-debian10, 1.3.64-ubuntu18, 1.4.35-debian10, 1.4.35-ubuntu18, 1.5.10-debian10, 1.5.10-ubuntu18, 2.0.0-RC6-debian10, and 2.0.0-RC6-ubuntu18.

Fixed a bug in which the HDFS DataNode daemon was enabled on secondary workers but not started (except on VM reboot if started automatically by systemd).

Fixed a bug in which StartLimitIntervalSec=0 appeared in the Service section instead of the Unit section for systemd services, which disabled rate limiting for retries when systemd restarted a service.

July 23, 2020

Anthos
Anthos Config Management

Config Connector has been updated in Anthos Config Management to version 1.13.1.

Anthos Config Management now includes Hierarchy Controller as a beta feature. For more information on this component, see the Hierarchy Controller overview.

Policy Controller users can now enable --log-denies to log all denies and dry-run failures. This is useful for seeing what is being denied or failing dry run, and for keeping a log to debug cluster problems without looking through the status of all constraints. This is configured by setting spec.policyController.logDeniesEnabled: true in the configuration file for the Operator. There is an example in the section on Installing Policy Controller.
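A minimal sketch of that Operator configuration, assuming an existing ConfigManagement object named config-management:

```shell
# Enable deny logging for Policy Controller via the Operator config.
cat <<EOF | kubectl apply -f -
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  policyController:
    enabled: true
    logDeniesEnabled: true
EOF
```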

This release includes several logging and performance improvements.

This release includes several fixes and improvements for the nomos command line utility.

The use of unsecured HTTP for GitHub repo connections or in an http_proxy is now discouraged, and support for unsecured HTTP will be removed in a future release. HTTPS will continue to be supported for GitHub repo and local proxy connections.

This release improves the handling of GitHub repositories with very large histories.

Prior to this release, Config Sync and kubectl controllers and processes used the same annotation to calculate three-way merge patches. The shared annotation sometimes resulted in resource fights, causing unnecessary removal of each other's fields. Config Sync now uses its own annotation, which prevents resource clashes.

In most cases, this change will be transparent to you. However, there are two cases where some previously unspecified behavior will change.

The first case is when you have run kubectl apply on an unmanaged resource in a cluster, and you later add that same resource to the GitHub repo. Previously, Config Sync would have pruned any fields that were previously applied but not declared in the GitHub repo. Now, Config Sync writes the declared fields to the resource and leaves undeclared fields in place. If you want to remove those fields, do one of the following:

  • Get a local copy of the resource from GitHub and kubectl apply it.
  • Use kubectl edit --save-config to remove the fields directly.

The second case is when you stop managing a resource on the cluster or even stop all of Config Sync on a cluster. In this case, if you want to prune fields from a previously managed resource, you will see different behavior. Previously, you could get a local copy of the resource from GitHub, remove the unwanted fields, and kubectl apply it. Now, kubectl apply no longer prunes the missing fields. If you want to remove those fields, do one of the following:

  • Call kubectl apply set-last-applied with the unmodified resource from GitHub, then remove unwanted fields and kubectl apply it again without the set-last-applied flag.
  • Use kubectl edit --save-config to remove the fields directly.
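The first option above can be sketched as follows (file names are placeholders):

```shell
# Reset the last-applied state from the unmodified resource in GitHub.
kubectl apply set-last-applied -f resource-from-git.yaml --create-annotation=true

# Then remove the unwanted fields in a local copy and apply it normally,
# so that kubectl prunes the now-missing fields.
kubectl apply -f resource-edited.yaml
```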

In error messages, links to error docs are now more concise.

Anthos GKE on-prem

Anthos GKE on-prem 1.4.1-gke.1 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.4.1-gke.1 clusters run on Kubernetes 1.16.9-gke.14.

Anthos Identity Service LDAP authentication is now available in Alpha for GKE on-prem.

Contact support if you are interested in a trial of the LDAP authentication feature in GKE on-prem.

Support for F5 BIG-IP load balancer credentials update

This preview release enables customers to manage and update the F5 BIG-IP load balancer credentials by using the gkectl update credentials f5bigip command.

Functionality changes:

  • The Ubuntu image is upgraded to include the newest packages.
  • Preflight checks are updated to validate that the gkectl version matches the target cluster version for cluster creation and upgrade.
  • Preflight checks are updated to validate the Windows OS version used for running gkeadm. The gkeadm command-line tool is only available for Linux, Windows 10, and Windows Server 2019.
  • gkeadm is updated to populate network.vCenter.networkName in both admin cluster and user cluster configuration files.


Fixes:

  • Removed the static IP address used by the admin workstation from ~/.ssh/known_hosts after upgrade, to avoid a manual workaround.
  • Resolved a known issue that network.vCenter.networkName is not populated in the user cluster configuration file during user cluster creation.
  • Resolved a user cluster upgrade–related issue to only wait for the machines and pods in the same namespace within the cluster to be ready to complete the cluster upgrade.
  • Updated the default value for ingressHTTPNodePort and ingressHTTPSNodePort in the loadBalancer.manualLB section of the admin cluster configuration file.
  • Fixed CVE-2020-8558 and CVE-2020-8559 described in Security bulletins.
  • Logging and monitoring: Resolved an issue that stackdriver-log-forwarder was not scheduled on the master node on the admin cluster.
  • Resolved the following known issues published in the 1.4.0 release notes:
    • If a user cluster is created without any node pool named the same as the cluster, managing the node pools using gkectl update cluster would fail. To avoid this issue, when creating a user cluster, you need to name one node pool the same as the cluster.
    • The gkectl command might exit with panic when converting config from "/path/to/config.yaml" to v1 config files. When that occurs, you can resolve the issue by removing the unused bundled load balancer section ("loadbalancerconfig") in the config file.
    • When using gkeadm to upgrade an admin workstation on Windows, the info file filled out from this template needs to have the line endings converted to use Unix line endings (LF) instead of Windows line endings (CRLF). You can use Notepad++ to convert the line endings.
    • When running a preflight check for config.yaml that contains both admincluster and usercluster sections, the "data disk" check in the "user cluster vCenter" category might fail with the message: [FAILURE] Data Disk: Data disk is not in a folder. Use a data disk in a folder when using vSAN datastore. User clusters don't use data disks, and it's safe to ignore the failure.
    • When upgrading the admin cluster, the preflight check for the user cluster OS image validation will fail. The user cluster OS image is not used in this case, and it's safe to ignore the "User Cluster OS Image Exists" failure in this case.
    • User cluster creation and upgrade might be stuck with the error: Failed to update machine status: no matches for kind "Machine" in version "". To resolve this, you need to delete the clusterapi pod in the user cluster namespace in the admin cluster.

Known issues:

  • During reboots, the data disk is not remounted on the admin workstation when using GKE on-prem 1.4.0 or 1.4.1 because the startup script is not run after the initial creation. To resolve this, you can run sudo mount /dev/sdb1 /home/ubuntu.
App Engine standard environment for Go, Java, Node.js, PHP, Python, and Ruby

Cloud Billing

Export your Cloud Billing account SKU prices to BigQuery. You can now export your pricing information for Google Cloud and Google Maps Platform SKUs to BigQuery. Exporting your pricing data allows you to audit, analyze, and/or join your pricing data with your exported cost data. The pricing export includes list prices, pricing tiers, and, when applicable, any promotional or negotiated pricing. See the documentation for more details.

Cloud Functions
Cloud Run
Dialogflow

Amazon Alexa importer and exporter are no longer supported.

Network Intelligence Center

Network Topology includes two new metrics for connections between entities: packet loss and latency. Additionally, you can now use a drop-down menu to select which metric Network Topology overlays on traffic paths. For more information, see Viewing metrics for traffic between entities and Network Topology metrics reference.

Virtual Private Cloud

Serverless VPC Access support for Shared VPC is now available in Beta.

July 22, 2020

Anthos Service Mesh

1.6.5-asm.7, 1.5.8-asm.7, and 1.4.10-asm.15 are now available.

This release provides these features and fixes:

  • Builds Istiod (Pilot), Citadel Agent, Pilot Agent, Galley, and Sidecar Injector with Go+BoringCrypto.
  • Builds Istio Proxy (Envoy) with the --define boringssl=fips option.
  • Ensures the components listed above use FIPS-compliant algorithms.
Cloud Bigtable

Cloud Bigtable's fully integrated backups feature is now generally available. Backups let you save a copy of a table's schema and data and restore the backup to a new table at a later time.

July 21, 2020

AutoML Video Intelligence Object Tracking

In April 2020, a model upgrade for the AutoML Video Object Tracking feature was released. This release is for non-downloadable models only. Models trained after April 2020 may show improvements in the evaluation results.

Cloud Run

Cloud Run resources are now available in Cloud Asset Inventory.

Compute Engine

You can now create balanced persistent disks, in addition to standard and SSD persistent disks. Balanced persistent disks are an alternative to SSD persistent disks that balance performance and cost. For more information, see Persistent disk types.
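For example, a balanced persistent disk uses the pd-balanced disk type (disk name, size, and zone below are placeholders):

```shell
# Create a 100 GB balanced persistent disk in us-central1-a.
gcloud compute disks create my-balanced-disk \
    --type=pd-balanced \
    --size=100GB \
    --zone=us-central1-a
```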

Config Connector

Bug fixes and performance improvements.

Istio on Google Kubernetes Engine

Istio 1.4.10-gke.4 is now available.

This release fixes known security issues with the same fixes as OSS Istio 1.4.10.

Recommendations AI

Recommendations AI public beta

Recommendations AI is now in public beta.

New pricing available

Pricing for Recommendations AI has been updated for public beta. For new pricing and free trial details, see Pricing.

UI redesign

The Recommendations AI console has a new look. You'll see a new layout, including a redesigned dashboard and improved alerts setup.

New support resources

We have new support resources available:

See Getting support for all support resources.

New FAQ page

A Frequently Asked Questions page is now available. See the FAQ.

Traffic Director

Traffic Director supports proxyless gRPC applications in General Availability. In this deployment model, gRPC applications can participate in a service mesh without needing a sidecar proxy.

July 20, 2020

AI Platform Training

You can now train a PyTorch model on AI Platform Training by using a pre-built PyTorch container. Pre-built PyTorch containers are available in beta.

Cloud Storage
Data Catalog

Data Catalog is now available in Salt Lake City (us-west3) and Las Vegas (us-west4).

Identity and Access Management

We are delaying the upcoming changes for deleted members that are bound to a role. These changes will take effect starting on August 31, 2020.

Resource Manager

The Organization Policy for enabling detailed Cloud Audit Logs has launched into general availability.

Secret Manager

Secret Manager adds support for the following curated Cloud IAM roles:

  • Secret Manager Secret Version Adder (roles/secretmanager.secretVersionAdder)
  • Secret Manager Secret Version Manager (roles/secretmanager.secretVersionManager)

To learn more, see IAM and access control.
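As a sketch, granting one of these roles on a single secret might look like the following (secret name and member address are placeholders):

```shell
# Allow a service account to add new versions to one secret only.
gcloud secrets add-iam-policy-binding my-secret \
    --member="serviceAccount:deployer@my-project.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretVersionAdder"
```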

VPC Service Controls

General availability for the following integration:

July 17, 2020

App Engine standard environment Java
  • Updated Java SDK to version 1.9.81
AutoML Translation

For test data, added support for the .tmx file type when evaluating existing models. For more information, see Evaluating models.

Compute Engine

The Organization Policy for restricting protocol forwarding creation has launched into Beta.


Dataproc now uses Shielded VMs for Debian 10 and Ubuntu 18.04 clusters by default.

The Proxy-Authorization header is accepted in place of Authorization to authenticate through Component Gateway to the backend for programmatic API calls. If Proxy-Authorization is set to a bearer token, Component Gateway forwards the Authorization header to the backend if it does not contain a bearer token.

For example, this allows setting both Proxy-Authorization: Bearer <google-access-token> as well as setting Authorization: Basic ... to authenticate to HiveServer2 with HTTP basic auth.
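A hedged sketch of such a call, with placeholder Component Gateway URL and HiveServer2 credentials:

```shell
# Authenticate to Component Gateway with a Google access token, and to
# HiveServer2 behind it with HTTP basic auth.
curl \
  -H "Proxy-Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Authorization: Basic $(printf 'hive-user:hive-password' | base64)" \
  "https://COMPONENT_GATEWAY_URL/hiveserver2/"
```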

Added support for Zeppelin Spark and shell interpreters in Kerberized clusters by default.

New sub-minor versions of Dataproc images: 1.3.63-debian10, 1.3.63-ubuntu18, 1.4.34-debian10, 1.4.34-ubuntu18, 1.5.9-debian10, 1.5.9-ubuntu18, 2.0.0-RC5-debian10, and 2.0.0-RC5-ubuntu18.

Image 2.0 preview:

If a project's regional Dataproc staging bucket is manually deleted, it will be recreated automatically when a cluster is subsequently created in that region.

Resource Manager

The Organization Policy for restricting protocol forwarding creation has launched into public beta.

July 16, 2020


BigQuery GIS now supports two new functions, ST_CONVEXHULL and ST_DUMP:

  • ST_CONVEXHULL returns the smallest convex GEOGRAPHY that covers the input.
  • ST_DUMP returns an ARRAY of simple GEOGRAPHYs where each element is a component of the input GEOGRAPHY.

For more information, see the ST_CONVEXHULL and ST_DUMP reference pages.
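For example, both functions can be tried from the bq command-line tool (a sketch; the input geographies are arbitrary):

```shell
bq query --use_legacy_sql=false '
SELECT
  ST_CONVEXHULL(ST_GEOGFROMTEXT("MULTIPOINT(0 0, 2 0, 0 2, 1 1)")) AS hull,
  ST_DUMP(ST_GEOGFROMTEXT("MULTIPOINT(0 0, 2 0)")) AS components
'
```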

Cloud Data Fusion

Cloud Data Fusion version 6.1.3 is now available. This version includes performance improvements and minor bug fixes.

  • Improved performance of Joiner plugins, aggregators, program startup, and previews.
  • Added support for custom images. You can select a custom Dataproc image by specifying the image URI.
  • Added support for rendering large schemas (>1000 fields) in the pipelines UI.
  • Added payload compression support to the messaging service.
Cloud Load Balancing

The Organization Policy for restricting load balancer creation has launched into Beta.

Compute Engine

SSD persistent disks on certain machine types now have a maximum write throughput of 1,200 MB/s. To learn more about the requirements to reach these limits, see Block storage performance.

You can now suspend and resume your VM instances. This feature is available in Beta.
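As a sketch, the corresponding Beta gcloud commands look like the following (instance name and zone are placeholders):

```shell
# Suspend a running VM instance.
gcloud beta compute instances suspend my-instance --zone=us-central1-a

# Resume it later.
gcloud beta compute instances resume my-instance --zone=us-central1-a
```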

Config Connector

Added support for allowing fields not specified by the user to be externally managed (that is, changeable outside of Config Connector). This feature can be enabled for a resource by enabling Kubernetes server-side apply for the resource, which will be the default for all Kubernetes resources starting in Kubernetes 1.18. More detailed documentation about the feature is coming soon.

Operator improvement: added support for cluster-mode setups, which allow users to use one Google Service Account for all namespaces in their cluster. This is very similar to the traditional "Workload Identity" installation setup.

Fix ContainerCluster validation issue (Issue #242).

Fix OOM issue for the cnrm-resource-stats-recorder pod (Issue #239).

Add support for projectViewer prefix for members in IAMPolicy and IAMPolicyMember (Issue #234).

Reduce spec.revisionHistoryLimit for the cnrm-stats-recorder and cnrm-webhook-manager Deployments from 10 (the default) to 1.