Google Cloud release notes

The following release notes cover the most recent changes over the last 30 days. For a comprehensive list, see the individual product release note pages.

You can see the latest product updates for all of Google Cloud on the Google Cloud release notes page.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/gcp-release-notes.xml

August 03, 2020

Anthos GKE on AWS

Anthos GKE on AWS 1.4.1 clusters experience a memory leak that results in an unresponsive cluster. A fix for this issue is in development.

If you are planning to deploy an Anthos GKE on AWS cluster, wait until the fix is ready.

Cloud Asset Inventory

k8s.io/Node fields deprecation

The following two fields for assets of type k8s.io/Node are now deprecated in output exported to Cloud Storage and BigQuery.

  • metadata.resourceVersion
  • status.conditions.lastHeartbeatTime
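
If downstream pipelines diff exports across snapshots, the deprecated fields can be stripped first so comparisons don't churn on them. A minimal Python sketch, assuming a simplified record shape for the exported Node asset (the function name and sample record are illustrative, not part of any API):

```python
# Sketch: drop the two deprecated k8s.io/Node fields from an exported asset
# record before diffing or re-importing it. Record shape is simplified from
# the Cloud Storage/BigQuery export; status.conditions is a list of objects.
def strip_deprecated(node):
    node.get("metadata", {}).pop("resourceVersion", None)
    for condition in node.get("status", {}).get("conditions", []):
        condition.pop("lastHeartbeatTime", None)
    return node

record = {
    "metadata": {"name": "node-1", "resourceVersion": "12345"},
    "status": {"conditions": [
        {"type": "Ready", "lastHeartbeatTime": "2020-08-03T00:00:00Z"},
    ]},
}
strip_deprecated(record)
assert record == {
    "metadata": {"name": "node-1"},
    "status": {"conditions": [{"type": "Ready"}]},
}
```
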
Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.1-airflow-1.10.3, composer-1.11.1-airflow-1.10.6, and composer-1.11.1-airflow-1.10.9. The default is composer-1.11.1-airflow-1.10.6. Upgrade your Cloud SDK to use features in this release.
  • Composer now enforces iam.serviceAccounts.actAs permission checks on the service account specified during Composer environment creation. See Creating environments for details.
  • Private IP environments can now be created using non-RFC 1918 CGN ranges (100.64.0.0/10).
  • New PyPI packages have been added for Composer version composer-1.11.0-airflow-1.10.6. These make it possible to install apache-airflow-backport-providers-google with no additional package upgrades.
  • The PyPI package google-cloud-datacatalog can now be installed on Composer environments running Airflow 1.10.6 and Python 3.
  • Cloud Composer 1.11.1+: Backport providers are installed by default for Airflow 1.10.6 and 1.10.9.
  • You can now use the label.worker_id filter in Cloud Monitoring logs to see logs sent out of a specific Airflow worker Pod.
  • With the Composer Beta API, you can now upgrade an environment to any of the three latest Composer versions (instead of just the latest).
  • You can now modify these previously blocked Airflow configurations: [scheduler] scheduler_heartbeat_sec, [scheduler] job_heartbeat_sec, [scheduler] run_duration
  • A more informative error message was added for environment creation failures caused by issues with Cloud SQL instance creation.
  • Improved error reporting has been added for update operations that change the web server image in cases where the error occurs before the new web server image is created.
  • The Airflow-worker liveness check has been changed so that a task just added to a queue will not fire an alert.
  • Reduced the amount of non-informative logs thrown by the environment in Composer 1.10.6.
  • Improved the syncing procedure for env_var.json in Airflow 1.10.9 (it should no longer throw "missing file:" errors).
  • Airflow-worker and airflow-scheduler will no longer throw "missing env_var.json" errors in Airflow 1.10.6.
  • Added validation in the v1 API to provide meaningful error messages when creating an environment with domain restricted sharing.
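
The three newly unblocked scheduler settings are set like any other Airflow configuration override. As a sketch (values are illustrative, and the section-property key form shown is an assumption based on how Composer's --update-airflow-configs flag names overrides):

```python
# Illustrative values only: the three previously blocked [scheduler] settings,
# keyed in the "section-property" form that Composer configuration overrides use.
overrides = {
    "scheduler-scheduler_heartbeat_sec": "10",
    "scheduler-job_heartbeat_sec": "10",
    "scheduler-run_duration": "600",
}

# e.g. (hypothetical invocation):
#   gcloud composer environments update ENV --location LOCATION \
#       --update-airflow-configs=scheduler-scheduler_heartbeat_sec=10
assert all(key.startswith("scheduler-") for key in overrides)
```
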
Compute Engine

You can now access C2 machine types in the following zones: Taiwan: asia-east1-a; Singapore: asia-southeast1-a; São Paulo: southamerica-east1-b and southamerica-east1-c; and Oregon: us-west1-b. For more information, see VM instance pricing.

July 31, 2020

BigQuery

An updated version of the Magnitude Simba ODBC driver includes performance improvements and bug fixes.

Cloud Functions

Cloud Functions is now available in the following regions:

  • asia-south1 (Mumbai)
  • asia-southeast2 (Jakarta)
  • asia-northeast3 (Seoul)

See Cloud Functions Locations for details.

Compute Engine

N2D machine types are now available in asia-east1 in all three zones. For more information, see the VM instance pricing page.

Config Connector

Added support for ArtifactRegistryRepository.

Changed DataflowJob to allow spec.parameters and spec.ipConfiguration to be updated.

Fixed an issue that caused ContainerNodePool and SQLDatabase to display UpdateFailed when the referenced ContainerCluster or SQLInstance was not ready.

Fixed an issue preventing the creation of BigQuery resources that read from Google Drive files due to insufficient OAuth 2.0 scopes.

Fixed an issue causing SourceRepoRepository to update constantly even when there were no changes.

Dataproc

Enabled the Kerberos automatic configuration feature. When creating a cluster, users can enable Kerberos by setting the dataproc:kerberos.beta.automatic-config.enable cluster property to true. When using this feature, users do not need to specify the Kerberos root principal password with the --kerberos-root-principal-password and --kerberos-kms-key-uri flags.
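
As a sketch of where that property lands, here is a Dataproc clusters.create request body expressed as a plain dict in the v1 REST shape (config.softwareConfig.properties); the cluster name and everything besides the property itself are illustrative assumptions:

```python
# Hedged sketch: a minimal Dataproc cluster request body (plain dict) that
# turns on automatic Kerberos configuration via the cluster property named
# in this note. "demo-cluster" is a hypothetical name.
cluster = {
    "clusterName": "demo-cluster",
    "config": {
        "softwareConfig": {
            "properties": {
                # With this set, no root principal password or KMS key URI
                # flags are needed.
                "dataproc:kerberos.beta.automatic-config.enable": "true",
            }
        }
    },
}
assert cluster["config"]["softwareConfig"]["properties"][
    "dataproc:kerberos.beta.automatic-config.enable"] == "true"
```
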

New sub-minor versions of Dataproc images: 1.3.65-debian10, 1.3.65-ubuntu18, 1.4.36-debian10, 1.4.36-ubuntu18, 1.5.11-debian10, 1.5.11-ubuntu18, 2.0.0-RC7-debian10, and 2.0.0-RC7-ubuntu18.

1.3+ images (including the Preview image):

  • HADOOP-16984: Added support to read history files only from the done directory.

  • MAPREDUCE-7279: Display the Resource Manager name on the HistoryServer web page.

  • SPARK-32135: Show the Spark driver name on the Spark history web page.

  • SPARK-32097: Allow reading Spark history log files via the Spark history server from multiple directories.

Images 1.3 - 1.5:

  • HIVE-20600: Fixed Hive Metastore connection leak.

Images 1.5 - 2.0 preview:

Fixed an issue where optional components that depend on HDFS failed on single node clusters.

Fixed an issue that caused workflows to be stuck in the RUNNING state when managed clusters (created by the workflow) were deleted while the workflow was running.

Identity and Access Management

We are delaying the upcoming changes for deleted members that are bound to a role. These changes will take effect starting on September 14, 2020.

Storage Transfer Service

Transfers from Microsoft Azure Blob Storage are now generally available.

July 30, 2020

Anthos

Anthos 1.3.3 is now available.

Updated components:

Anthos GKE on-prem

Anthos GKE on-prem 1.3.3-gke.0 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.3.3-gke.0 clusters run on Kubernetes 1.15.12-gke.9.

Fixes:

Cloud Composer

Cloud Composer is now available in Osaka (asia-northeast2).

Cloud Logging

The Logs field explorer panel is now generally available (GA). To learn more, see the Logs field explorer section on the Logs Viewer (Preview) interface page.

Cloud Run

You can now tag Cloud Run revisions. Tagged revisions get a dedicated URL, allowing developers to reach specific revisions without needing to allocate traffic to them.

Cloud Spanner

The Cloud Spanner emulator is now generally available, enabling you to develop and test applications locally. For more information, see Using the Cloud Spanner Emulator.

Compute Engine

When creating patch jobs, you can now choose whether to deploy zones concurrently or one at a time. You can also now specify a disruption budget for your VMs. For more information, see Patch rollout options.

N2 machines are now available in São Paulo (southamerica-east1) in all three zones. For more information, see VM instance pricing.

You can access m2-megamem memory-optimized machine types in all zones that already have m2-ultramem memory-optimized machine types. These two machine types have also been added to asia-south1-b. You can use m1-ultramem machine types in asia-south1-a. To learn more, read Memory-optimized machine type family.

Dialogflow

GA (general availability) launch of mega agents.

Beta launch of the Facebook Workplace integration.

Network Intelligence Center

Network Topology no longer supports infrastructure segments. This feature is deprecated and will be completely removed after 90 days. If you have any questions, see Getting support.

July 28, 2020

Compute Engine

Improved validation checks will be introduced on API calls to compute.googleapis.com starting on August 3, 2020 to increase reliability and REST API compliance of the Compute Engine platform for all users. Learn how to Validate API Requests to ensure your requests are properly formed.

Memorystore for Redis

Support for VPC Service Controls on Memorystore for Redis is now Generally Available.

Migrate for Anthos

The migctl migration cleanup command has been removed and is no longer necessary.

In previous releases, you used a command of the form migctl source create ce my-ce-src --project my-project --zone zone to create a migration for Compute Engine. The --zone option has been removed when creating a Compute Engine migration; using it in this release causes an error.

The migctl migration logs command has been removed. You now use the Google Cloud Console to view logs.

Added the new --json-key sa.json option to the migctl source create ce command to create a migration for Compute Engine, where sa.json specifies a service account. See Optionally creating a service account when using Compute Engine as a migration source for more.

To edit the migration plan, you must now use the migctl migration get my-migration command to download the plan. After you are done editing the plan, you have to upload it by using the migctl migration update my-migration command. See Customizing a migration plan for more.

Added support for Anthos GKE on-prem clusters running on VMware. On-prem support lets you migrate source VM workloads in a vCenter/vSphere environment to a GKE on-prem cluster running in the same vCenter/vSphere environment. See Migration prerequisites for the requirements for on-prem migration.

The Google Cloud Console provides a web-based, graphical user interface that you can use to manage your Google Cloud projects and resources. Migrate for Anthos now supports the migration of workloads by using the Google Cloud Console.

In this release, Migrate for Anthos on the Cloud Console does not support migrations for Windows or for on-prem, including monitoring Windows or on-prem migrations.

Migrate for Anthos now includes Custom Resource Definitions (CRDs) that enable you to easily create and manage migrations using an API. For example, you can use these CRDs to build your own automated tools.

Added the node-selectors and tolerations options to the migctl setup install command, letting you install Migrate for Anthos on a specific set of nodes or node pools in a cluster. See Installing Migrate for Anthos.

You can use Migrate for Anthos to migrate Windows VMs to workloads on GKE. This process clones your Compute Engine VM disks and uses the clone to generate artifacts (including a Dockerfile and a zip archive with extracted workload files and settings) you can use to build a deployable GKE image. See Adding a Windows migration source.

160309992: Editing a migration plan from the GUI console might fail if it was also edited using migctl.

161135630: Attempting multiple migrations of the same remote VM (from VMware, AWS, or Azure) simultaneously might result in a stuck migration process.

Workaround: Delete the stuck migration.

161214397: For Anthos on-prem, if the service account used to upload container images to the Container Registry is missing, the migration might get stuck.

Workaround: Add the service-account. If you are using the Migrate for Anthos CRD API, delete the GenerateArtifactsTask and recreate it. If using the migctl CLI tool, delete the migration and recreate it. You can first download the migration YAML using migctl migration get to back up any customizations you have made.

161110816: migctl migration create with a source that doesn't exist fails with a non-informative error message: request was denied.

161104564: Creating a Linux migration with a wrong os-type specification causes the migration process to get stuck until it is deleted.

160858543, 160836394, 160844377, 154430477, 154403665, 153241390, 153239696, 152408818, 151516642, 132002453: Unstable network in Migrate for Anthos infrastructure, or a GKE node restart, might cause migration to get stuck.

Workaround: Delete the migration and re-create it. If recreating the migration does not solve the issue, please contact support.

161787358: In some cases, upgrading from version v1.3 to v1.4 might fail with a Failed to convert source message.

Workaround: Re-run the upgrade command.

153811691, 153439420: Migrate for Anthos support for older Java does not handle OpenJDK 7 and 8 CPU resource calculations.

152974631: Using GKE nodes with CPU and Memory configurations below the recommended values might cause migrations to get stuck.

GKE on-prem preview: If a source was created with migctl source create using the wrong credentials, you could not delete the migration with migctl migration delete. This issue has been fixed in the GA release of on-prem support.

In version 1.4, by default Migrate for Anthos installs to and performs migrations in the v2k-system namespace. In previous releases, you could specify the namespace. The option to specify a namespace has been removed.

157890913, 160082702, 161125635, 159693579: A migration might continue to indicate that it is running, while an issue encountered prevents further processing.

Workaround: Check event messages on the migration object by using the verbose migctl status command: migctl migration status migration_name -v. You might be able to correct the issue so that the migration can continue; if an Error event is listed without further retries, delete and recreate the migration.

An example is when creating a Windows migration on a cluster with no Windows nodes. In this case the event message will show: Warning FailedScheduling 10s Pod discover-xyz 0/1 nodes are available: 1 node(s) didn't match node selector.

VPC Service Controls

General availability for the following integration:

July 27, 2020

BigQuery

INFORMATION_SCHEMA views for streaming metadata are now in alpha. You can use these views to retrieve historical and real-time information about streaming data into BigQuery.

Cloud Run

Cloud Run is now available in asia-southeast1 (Singapore).

Dataflow

Dataflow now supports Dataflow Shuffle, Streaming Engine, FlexRS, and the following regional endpoints in GA:

  • northamerica-northeast1 (Montréal)
  • asia-southeast1 (Singapore)
  • australia-southeast1 (Sydney)
Dialogflow

Beta launch of Dialogflow Messenger. This new integration provides a customizable chat dialog for your agent that can be embedded in your website.

Security Command Center

Security Command Center v1beta1 API will be disabled on Jan. 31, 2021. All users will be required to migrate to Security Command Center v1 API, which is now in general availability.

  • Update to Google-provided v1 API client libraries.
  • Move your client libraries and HTTP/grpc calls to v1 by following instructions in the reference documentation for service endpoints and SDK configuration.
  • If you call this service using your own libraries, follow the guidance in our Security Command Center API Overview when making API requests.
  • To use ListFindings calls in the v1 API, update your response handling to account for an extra layer of object nesting, as shown below:
    • v1beta1: response.getFindings().forEach( x -> ....)
    • v1: response.getListFindingsResults().forEach(x -> { x.getFinding(); .... })
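
The nesting change can also be seen with plain JSON-style dicts standing in for the two API responses (a sketch of the response shapes, not the client library itself; field values are illustrative):

```python
# Plain dicts standing in for JSON responses from the two API versions.
v1beta1_response = {"findings": [{"name": "f1", "category": "OPEN_FIREWALL"}]}
v1_response = {
    "listFindingsResults": [
        {"finding": {"name": "f1", "category": "OPEN_FIREWALL"}}
    ]
}

def categories_v1beta1(response):
    return [f["category"] for f in response["findings"]]

def categories_v1(response):
    # v1 wraps each finding in a ListFindingsResult, adding one nesting level.
    return [r["finding"]["category"] for r in response["listFindingsResults"]]

assert categories_v1beta1(v1beta1_response) == categories_v1(v1_response)
```
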

Additional changes to the v1 API are listed below. Learn more about Using the Security Command Center API.

The SeverityLevel finding source property for all Security Health Analytics findings will be removed and replaced with a field named Severity, which retains the same values.

  • Impact: Finding notification filters, post-processing, and alerting based on the SeverityLevel finding source property will no longer be possible.
  • Recommendation: Replace the SeverityLevel finding source property with the Severity finding attribute property to retain existing functionality.

The nodePools finding source property will be removed from OVER_PRIVILEGED_SCOPES findings and replaced with a source property named VulnerableNodePools.

  • Impact: Finding notification filters, post-processing, and alerting based on this finding source property may fail.
  • Recommendation: Modify workflows as necessary to utilize the new VulnerableNodePools source property.

The finding category 2SV_NOT_ENFORCED is being renamed MFA_NOT_ENFORCED.

  • Impact: Case-sensitive finding notification filters, post-processing, and alerting based on the previous finding category name may fail.
  • Recommendation: Update any post-processing to use the new category name.

The ExceptionInstructions source property will be removed from all Security Health Analytics findings.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property may fail.
  • In progress: A new property that will indicate the current state of findings is being developed.

The ProjectId source property will be removed from all Security Health Analytics findings.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property may fail.
  • Recommendation: Update workflows to utilize the project ID in the resource.project_display_name field of a ListFindingsResult.

The AssetSettings finding source property will be removed from the PUBLIC_SQL_INSTANCE, SQL_PUBLIC_IP, SSL_NOT_ENFORCED, AUTO_BACKUP_DISABLED, SQL_NO_ROOT_PASSWORD, and SQL_WEAK_ROOT_PASSWORD finding types, as it contains data duplicated from the asset entity.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Replace the AssetSettings finding source property with the Settings resource property from the asset underlying the finding to retain existing functionality.

The Allowed finding source property in OPEN_FIREWALL findings will be replaced with a new field named ExternallyAccessibleProtocolsAndPorts, which will contain a subset of the values from the Allowed property.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Modify your workflows as necessary to utilize the new ExternallyAccessibleProtocolsAndPorts source property.

The SourceRanges finding source property in OPEN_FIREWALL findings will be replaced with a new field named ExternalSourceRanges, which will contain a subset of the values from the SourceRanges property.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Modify your workflows as necessary to utilize the new ExternalSourceRanges source property.

As of Jan. 31, 2021, the UpdateFinding API will no longer support storing string properties that are longer than 7,000 characters.

  • Impact: Calls to UpdateFinding that seek to store string properties longer than 7,000 characters will be rejected with an invalid argument error.
  • Recommendation: Consider storing string properties longer than 7,000 characters as JSON structs or JSON lists. Learn more about writing findings.

As of Sept. 1, 2020, the ListFindings API will no longer support searching on finding properties that are longer than 7,000 characters.

  • Impact: Searches on strings that are longer than 7,000 characters will not return expected results. For example, if a partial string match filter has a match at the 7,005th character of a property in a finding, that finding will not be returned because the match is past the 7,000-character threshold. An exception will not be returned.
  • Recommendation: Remove filter restrictions (for example, x : "some-value") that are supposed to match very long properties, then filter the results locally to remove findings whose strings do not match the designated criteria. Learn more about filtering findings.

The OffendingIamRoles source property in extensions of IAM Scanner Configurations will use structured data instead of a JSON-formatted string.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type on findings of the following categories: ADMIN_SERVICE_ACCOUNT, NON_ORG_IAM_MEMBER, PRIMITIVE_ROLES_USED, OVER_PRIVILEGED_SERVICE_ACCOUNT_USER, REDIS_ROLE_USED_ON_ORG, SERVICE_ACCOUNT_ROLE_SEPARATION, KMS_ROLE_SEPARATION.
  • Recommendation: Update workflows to utilize the new data type.

The QualifiedLogMetricNames source property in specific Monitoring findings from Security Health Analytics will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings of the following categories: AUDIT_CONFIG_NOT_MONITORED, BUCKET_IAM_NOT_MONITORED, CUSTOM_ROLE_NOT_MONITORED, FIREWALL_NOT_MONITORED, NETWORK_NOT_MONITORED, OWNER_NOT_MONITORED, ROUTE_NOT_MONITORED, SQL_INSTANCE_NOT_MONITORED.
  • Recommendation: Update workflows to utilize the new data type.

The AlertPolicyFailureReasons source property in specific Monitoring findings from Security Health Analytics will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings of the following categories: AUDIT_CONFIG_NOT_MONITORED, BUCKET_IAM_NOT_MONITORED, CUSTOM_ROLE_NOT_MONITORED, FIREWALL_NOT_MONITORED, NETWORK_NOT_MONITORED, OWNER_NOT_MONITORED, ROUTE_NOT_MONITORED, SQL_INSTANCE_NOT_MONITORED.
  • Recommendation: Update workflows to utilize the new data type.

The CompatibleFeatures source property in WEAK_SSL_POLICY findings will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type.
  • Recommendation: Update workflows to utilize the new data type.
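
The client-side fallback recommended for the ListFindings 7,000-character search limit can be sketched in Python. The finding shapes below are simplified dicts standing in for real ListFindings results, and local_match is a hypothetical helper, not an API:

```python
# Sketch: drop the server-side filter on very long properties and match
# them locally instead. LIMIT mirrors the documented 7,000-character threshold.
LIMIT = 7000

findings = [
    {"name": "f1", "sourceProperties": {"blob": "x" * (LIMIT + 500)}},
    {"name": "f2", "sourceProperties": {"blob": "some-value"}},
]

def local_match(findings, key, needle):
    # Server-side search is unreliable past the threshold, so filter locally.
    return [f for f in findings if needle in f["sourceProperties"].get(key, "")]

assert [f["name"] for f in local_match(findings, "blob", "some-value")] == ["f2"]
```
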

July 25, 2020

Cloud Load Balancing

The introductory period during which you could use Internal HTTP(S) Load Balancing without charge has ended. Starting July 25, 2020, your usage of Internal HTTP(S) Load Balancing will be billed to your project.

July 24, 2020

Anthos GKE on AWS

Anthos GKE on AWS is now generally available.

Clusters support in-place upgrades, with the ability to upgrade the control plane and node pools separately.

Clusters can be deployed in a high availability (HA) configuration, where control plane instances and node pools are spread across multiple availability zones.

Clusters have been validated to support up to 200 nodes and 6000 pods.

The number of nodes can be scaled dynamically based on traffic volume, which increases utilization, reduces cost, and improves performance.

Anthos can be deployed within existing AWS VPCs, leveraging existing security groups to secure those clusters. Customers can route ingress traffic using NLBs and ALBs. Additionally, Anthos GKE on AWS supports AWS IAM and OIDC. This makes deploying Anthos easy, eliminates the need to provision new accounts, and minimizes configuration of the environment.

With Anthos Config Management, enterprises can set policies on their AWS workloads, and with Anthos Service Mesh, they can monitor, manage, and secure those workloads.

Kubernetes settings (flags and sysctl settings) have been updated to match GKE.

Upgrades from beta versions are not supported. To install Anthos GKE on AWS, you must remove your user and management clusters, then reinstall them.

Anthos Service Mesh

Anthos Service Mesh on GKE on AWS is supported.

For more information, see Installing Anthos Service Mesh on GKE on AWS.

BigQuery

BigQuery Data Transfer Service is now available in the following regions: Montréal (northamerica-northeast1), Frankfurt (europe-west3), Mumbai (asia-south1), and Seoul (asia-northeast3).

Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.0-airflow-1.10.2, composer-1.11.0-airflow-1.10.3, composer-1.11.0-airflow-1.10.6, and composer-1.11.0-airflow-1.10.9. The default is composer-1.11.0-airflow-1.10.3. Upgrade your Cloud SDK to use features in this release.
  • Airflow 1.10.9 is now supported.
  • Environment upgrades have been enabled for the latest two Composer versions (1.11.0 and 1.10.6).
  • Added a retry feature to the Airflow CeleryExecutor (disabled by default). You can configure the number of times Celery will attempt to execute a task by setting the [celery] max_command_attempts property. The delay between each retry can also be adjusted with [celery] command_retry_wait_duration (default: 5 seconds).
  • New PyPI packages have been added for Composer version composer-1.11.0-airflow-1.10.6. These make it possible to install apache-airflow-backport-providers-google with no additional package upgrades.
  • The PyPI package google-cloud-datacatalog can now be installed on Composer environments running Airflow 1.10.6 and Python 3.
  • Fixed synchronization of environment variables to the web server.
  • Improved error reporting when PyPI package installation fails.
  • Composer versions 1.6.1, 1.7.0, and 1.7.1 are now deprecated.
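
The new Celery retry behavior can be sketched as a toy loop (this is an illustration of the semantics described above, not Celery's implementation; the function and the flaky task are hypothetical):

```python
# Toy sketch: run the task command up to max_command_attempts times. A real
# worker would sleep [celery] command_retry_wait_duration seconds (default 5)
# between attempts; the sleep is elided here.
def run_with_retries(command, max_command_attempts):
    for attempt in range(1, max_command_attempts + 1):
        try:
            return command()
        except RuntimeError:
            if attempt == max_command_attempts:
                raise
            # real Celery waits command_retry_wait_duration seconds here

calls = {"count": 0}

def flaky():
    # Fails once, then succeeds, to exercise the retry path.
    calls["count"] += 1
    if calls["count"] < 2:
        raise RuntimeError("transient failure")
    return "ok"

assert run_with_retries(flaky, max_command_attempts=2) == "ok"
assert calls["count"] == 2
```
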
Compute Engine
  • NVIDIA® Tesla® T4 GPUs are now available in the following additional regions and zones:

    • Ashburn, Northern Virginia, USA: us-east4-b

    For information about using T4 GPUs on Compute Engine, see GPUs on Compute Engine.

N2 machines are now available in Northern Virginia us-east4-c. For more information, see the VM instance pricing page.

Dataproc

Terminals started in Jupyter and JupyterLab now use login shells. The terminals behave as if you SSH'd into the cluster as root.

Upgraded the jupyter-gcs-contents-manager package to the latest version. This upgrade includes a bug fix: an attempt to create a file in the virtual top-level directory now returns the expected 403 (PERMISSION DENIED) error instead of 404 (NOT FOUND).

New sub-minor versions of Dataproc images: 1.3.64-debian10, 1.3.64-ubuntu18, 1.4.35-debian10, 1.4.35-ubuntu18, 1.5.10-debian10, 1.5.10-ubuntu18, 2.0.0-RC6-debian10, and 2.0.0-RC6-ubuntu18.

Fixed a bug in which the HDFS DataNode daemon was enabled on secondary workers but not started (except on VM reboot if started automatically by systemd).

Fixed a bug in which StartLimitIntervalSec=0 appeared in the Service section instead of the Unit section for systemd services, which disabled rate limiting for retries when systemd restarted a service.

July 23, 2020

Anthos Config Management

Config Connector has been updated in Anthos Config Management to version 1.13.1.

Anthos Config Management now includes Hierarchy Controller as a beta feature. For more information on this component, see the Hierarchy Controller overview.

Policy Controller users may now enable --log-denies to log all denies and dry-run failures. This is useful for seeing what is being denied or failing dry-run, and for keeping a log to debug cluster problems without looking through the status of all constraints. This is configured by setting spec.policyController.logDeniesEnabled: true in the configuration file for the Operator. There is an example in the section on Installing Policy Controller.

This release includes several logging and performance improvements.

This release includes several fixes and improvements for the nomos command line utility.

The use of unsecured HTTP for GitHub repo connections or in an http_proxy is now discouraged, and support for unsecured HTTP will be removed in a future release. HTTPS will continue to be supported for GitHub repo and local proxy connections.

This release improves the handling of GitHub repositories with very large histories.

Prior to this release, Config Sync and kubectl controllers and processes used the same annotation (kubectl.kubernetes.io/last-applied-configuration) to calculate three-way merge patches. The shared annotation sometimes resulted in resource fights, causing unnecessary removal of each other's fields. Config Sync now uses its own annotation, which prevents resource clashes.

In most cases, this change will be transparent to you. However, there are two cases where some previously unspecified behavior will change.

The first case is when you have run kubectl apply on an unmanaged resource in a cluster, and you later add that same resource to the GitHub repo. Previously, Config Sync would have pruned any fields that were previously applied but not declared in the GitHub repo. Now, Config Sync writes the declared fields to the resource and leaves undeclared fields in place. If you want to remove those fields, do one of the following:

  • Get a local copy of the resource from GitHub and kubectl apply it.
  • Use kubectl edit --save-config to remove the fields directly.

The second case is when you stop managing a resource on the cluster or even stop all of Config Sync on a cluster. In this case, if you want to prune fields from a previously managed resource, you will see different behavior. Previously, you could get a local copy of the resource from GitHub, remove the unwanted fields, and kubectl apply it. Now, kubectl apply no longer prunes the missing fields. If you want to remove those fields, do one of the following:

  • Call kubectl apply set-last-applied with the unmodified resource from GitHub, then remove unwanted fields and kubectl apply it again without the set-last-applied flag.
  • Use kubectl edit --save-config to remove the fields directly.
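
The pruning behavior described in these two cases can be sketched as a toy three-way merge over flat dicts (real kubectl apply operates on nested manifests; the field names and values here are illustrative):

```python
# Toy three-way merge: a field is pruned from the live object only when the
# previous last-applied configuration declared it and the new desired config
# does not.
def apply(live, desired, last_applied):
    merged = dict(live)
    merged.update(desired)
    for key in last_applied:
        if key not in desired:
            merged.pop(key, None)  # this applier owned the field; prune it
    return merged

live = {"replicas": 3, "paused": True}            # current cluster state
desired = {"replicas": 5}                          # "paused" removed from manifest
shared_last_applied = {"replicas": 3, "paused": True}

# With the shared annotation, kubectl prunes the field the other applier set:
assert apply(live, desired, shared_last_applied) == {"replicas": 5}

# With Config Sync on its own annotation, kubectl's last-applied no longer
# claims "paused", so the undeclared field is left in place:
assert apply(live, desired, last_applied={"replicas": 3}) == {
    "replicas": 5,
    "paused": True,
}
```
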

In error messages, links to error docs are now more concise.

Anthos GKE on-prem

Anthos GKE on-prem 1.4.1-gke.1 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.4.1-gke.1 clusters run on Kubernetes 1.16.9-gke.14.

Anthos Identity Service LDAP authentication is now available in Alpha for GKE on-prem

Contact support if you are interested in a trial of the LDAP authentication feature in GKE on-prem.

Support for F5 BIG-IP load balancer credentials update

This preview release enables customers to manage and update the F5 BIG-IP load balancer credentials by using the gkectl update credentials f5bigip command.

Functionality changes:

  • The Ubuntu image is upgraded to include the newest packages.
  • Preflight checks are updated to validate that the gkectl version matches the target cluster version for cluster creation and upgrade.
  • Preflight checks are updated to validate the Windows OS version used for running gkeadm. The gkeadm command-line tool is only available for Linux, Windows 10, and Windows Server 2019.
  • gkeadm is updated to populate network.vCenter.networkName in both admin cluster and user cluster configuration files.

Fixes:

  • Removed the static IP address used by the admin workstation from ~/.ssh/known_hosts after an upgrade, to avoid a manual workaround.
  • Resolved a known issue where network.vCenter.networkName was not populated in the user cluster configuration file during user cluster creation.
  • Resolved a user cluster upgrade–related issue to only wait for the machines and pods in the same namespace within the cluster to be ready to complete the cluster upgrade.
  • Updated the default value for ingressHTTPNodePort and ingressHTTPSNodePort in the loadBalancer.manualLB section of the admin cluster configuration file.
  • Fixed CVE-2020-8558 and CVE-2020-8559 described in Security bulletins.
  • Logging and monitoring: Resolved an issue that stackdriver-log-forwarder was not scheduled on the master node on the admin cluster.
  • Resolved the following known issues published in the 1.4.0 release notes:
    • If a user cluster is created without any node pool named the same as the cluster, managing the node pools using gkectl update cluster would fail. To avoid this issue, when creating a user cluster, you need to name one node pool the same as the cluster.
    • The gkectl command might exit with panic when converting config from "/path/to/config.yaml" to v1 config files. When that occurs, you can resolve the issue by removing the unused bundled load balancer section ("loadbalancerconfig") in the config file.
    • When using gkeadm to upgrade an admin workstation on Windows, the info file filled out from this template needs to have the line endings converted to use Unix line endings (LF) instead of Windows line endings (CRLF). You can use Notepad++ to convert the line endings.
    • When running a preflight check for config.yaml that contains both admincluster and usercluster sections, the "data disk" check in the "user cluster vCenter" category might fail with the message: [FAILURE] Data Disk: Data disk is not in a folder. Use a data disk in a folder when using vSAN datastore. User clusters don't use data disks, and it's safe to ignore the failure.
    • When upgrading the admin cluster, the preflight check for the user cluster OS image validation will fail. The user cluster OS image is not used in this case, and it's safe to ignore the "User Cluster OS Image Exists" failure in this case.
    • User cluster creation and upgrade might be stuck with the error: Failed to update machine status: no matches for kind "Machine" in version "cluster.k8s.io/v1alpha1". To resolve this, you need to delete the clusterapi pod in the user cluster namespace in the admin cluster.

Known issues:

  • During reboots, the data disk is not remounted on the admin workstation when using GKE on-prem 1.4.0 or 1.4.1 because the startup script is not run after the initial creation. To resolve this, you can run sudo mount /dev/sdb1 /home/ubuntu.
App Engine standard environment Go
App Engine standard environment Java
App Engine standard environment Node.js
App Engine standard environment PHP
App Engine standard environment Python
App Engine standard environment Ruby

Cloud Billing

Export your Cloud Billing account SKU prices to BigQuery. You can now export your pricing information for Google Cloud and Google Maps Platform SKUs to BigQuery. Exporting your pricing data allows you to audit, analyze, and/or join your pricing data with your exported cost data. The pricing export includes list prices, pricing tiers, and, when applicable, any promotional or negotiated pricing. See the documentation for more details.

Cloud Functions
Cloud Run
Dialogflow

Amazon Alexa importer and exporter are no longer supported.

Network Intelligence Center

Network Topology includes two new metrics for connections between entities: packet loss and latency. Additionally, you can now use a drop-down menu to select which metric Network Topology overlays on traffic paths. For more information, see Viewing metrics for traffic between entities and Network Topology metrics reference.

Virtual Private Cloud

Serverless VPC Access support for Shared VPC is now available in Beta.

July 22, 2020

Anthos Service Mesh

1.6.5-asm.7, 1.5.8-asm.7, and 1.4.10-asm.15 are now available

This release provides these features and fixes:

  • Builds Istiod (Pilot), Citadel Agent, Pilot Agent, Galley, and Sidecar Injector with Go+BoringCrypto.
  • Builds Istio Proxy (Envoy) with the --define boringssl=fips option.
  • Ensures the components listed above use FIPS-compliant algorithms.
Cloud Bigtable

Cloud Bigtable's fully integrated backups feature is now generally available. Backups let you save a copy of a table's schema and data and restore the backup to a new table at a later time.

July 21, 2020

AutoML Video Intelligence Object Tracking

In April 2020, a model upgrade for the AutoML Video Object Tracking feature was released. This release is for non-downloadable models only. Models trained after April 2020 may show improvements in the evaluation results.

Cloud Run

Cloud Run resources are now available in Cloud Asset Inventory

Compute Engine

You can now create balanced persistent disks , in addition to standard and SSD persistent disks. Balanced persistent disks are an alternative to SSD persistent disks that balance performance and cost. For more information, see Persistent disk types.

Config Connector

Bug fixes and performance improvements.

Istio on Google Kubernetes Engine

Istio 1.4.10-gke.4

Includes the same security fixes as OSS Istio 1.4.10.

Recommendations AI

Recommendations AI public beta

Recommendations AI is now in public beta.

New pricing available

Pricing for Recommendations AI has been updated for public beta. For new pricing and free trial details, see Pricing.

UI redesign

The Recommendations AI console has a new look. You'll see a new layout, including a redesigned dashboard and improved alerts setup.

New support resources

We have new support resources available:

See Getting support for all support resources.

New FAQ page

A Frequently Asked Questions page is now available. See the FAQ.

Traffic Director

Traffic Director supports proxyless gRPC applications in General Availability. In this deployment model, gRPC applications can participate in a service mesh without needing a sidecar proxy.

July 20, 2020

AI Platform Training

You can now train a PyTorch model on AI Platform Training by using a pre-built PyTorch container. Pre-built PyTorch containers are available in beta.

Cloud Storage Identity and Access Management

We are delaying the upcoming changes for deleted members that are bound to a role. These changes will take effect starting on August 31, 2020.

Resource Manager

The Organization Policy for enabling detailed Cloud Audit Logs has launched into general availability.

Secret Manager

Secret Manager adds support for the following curated Cloud IAM roles:

  • Secret Manager Secret Version Adder (roles/secretmanager.secretVersionAdder)
  • Secret Manager Secret Version Manager (roles/secretmanager.secretVersionManager)

To learn more, see IAM and access control.
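For illustration, granting one of these roles amounts to adding a binding to a secret's IAM policy. The sketch below manipulates a policy dict in the standard Cloud IAM policy shape; the service account name is a placeholder:

```python
def add_binding(policy, role, member):
    """Add member to the binding for role in a Cloud IAM policy dict,
    creating the binding if it does not exist yet."""
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

# Grant the new Secret Version Adder role to a (hypothetical) service account.
policy = add_binding({}, "roles/secretmanager.secretVersionAdder",
                     "serviceAccount:app@my-project.iam.gserviceaccount.com")
```

In practice you would fetch the policy with getIamPolicy, apply a change like this, and write it back with setIamPolicy.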

VPC Service Controls

General availability for the following integration:

July 17, 2020

App Engine standard environment Java
  • Updated Java SDK to version 1.9.81
AutoML Translation

For test data, added support for the .tmx file type when evaluating existing models. For more information, see Evaluating models.

Compute Engine

The Organization Policy for restricting protocol forwarding creation has launched into Beta.

Dataproc

Dataproc now uses Shielded VMs for Debian 10 and Ubuntu 18.04 clusters by default.

The Proxy-Authorization header is now accepted in place of the Authorization header to authenticate to Component Gateway for programmatic API calls. If Proxy-Authorization is set to a bearer token, Component Gateway forwards the Authorization header to the backend if that header does not contain a bearer token.

For example, you can set Proxy-Authorization: Bearer <google-access-token> to authenticate to Component Gateway while also setting Authorization: Basic ... to authenticate to HiveServer2 with HTTP basic auth.
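As a minimal sketch, the two-header combination from the example above can be built like this (the token and HiveServer2 credentials are placeholders, and the helper function is illustrative, not part of any Dataproc client library):

```python
import base64


def build_component_gateway_headers(google_access_token, hive_user, hive_password):
    """Build headers for a programmatic call through Component Gateway.

    Proxy-Authorization carries the Google access token that authenticates
    to Component Gateway; Authorization carries HTTP basic credentials that
    Component Gateway forwards to the backend (for example, HiveServer2).
    """
    basic = base64.b64encode(f"{hive_user}:{hive_password}".encode()).decode()
    return {
        "Proxy-Authorization": f"Bearer {google_access_token}",
        "Authorization": f"Basic {basic}",
    }


headers = build_component_gateway_headers("ya29.example-token", "hive", "s3cret")
print(headers["Proxy-Authorization"])  # Bearer ya29.example-token
```

You would pass a dict like this to whatever HTTP client issues the request against the Component Gateway URL.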

Added support for Zeppelin Spark and shell interpreters in Kerberized clusters by default.

New sub-minor versions of Dataproc images: 1.3.63-debian10, 1.3.63-ubuntu18, 1.4.34-debian10, 1.4.34-ubuntu18, 1.5.9-debian10, 1.5.9-ubuntu18, 2.0.0-RC5-debian10, and 2.0.0-RC5-ubuntu18.

Image 2.0 preview:

If a project's regional Dataproc staging bucket is manually deleted, it will be recreated automatically when a cluster is subsequently created in that region.

Resource Manager

The Organization Policy for restricting protocol forwarding creation has launched into public beta.

July 16, 2020

BigQuery

BigQuery GIS now supports two new functions, ST_CONVEXHULL and ST_DUMP:

  • ST_CONVEXHULL returns the smallest convex GEOGRAPHY that covers the input.
  • ST_DUMP returns an ARRAY of simple GEOGRAPHYs where each element is a component of the input GEOGRAPHY.

For more information, see the ST_CONVEXHULL and ST_DUMP reference pages.
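To illustrate what a convex hull is, here is a local, self-contained Python sketch of the planar convex-hull operation (Andrew's monotone chain) on point tuples. BigQuery's ST_CONVEXHULL operates on spherical GEOGRAPHY values, so this is only an analogy for the geometric concept, not the BigQuery implementation:

```python
def convex_hull(points):
    """Return the convex hull of 2D points as a list of vertices in
    counter-clockwise order (Andrew's monotone chain algorithm)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors OA and OB; positive means a
        # counter-clockwise turn at o.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate, dropping each chain's last point (it repeats).
    return lower[:-1] + upper[:-1]


# The interior point (1, 1) is dropped; only the square's corners remain.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```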

Cloud Data Fusion

Cloud Data Fusion version 6.1.3 is now available. This version includes performance improvements and minor bug fixes.

  • Improved performance of Joiner plugins, aggregators, program startup, and previews.
  • Added support for custom images. You can select a custom Dataproc image by specifying the image URI.
  • Added support for rendering large schemas (>1000 fields) in the pipelines UI.
  • Added payload compression support to the messaging service.
Cloud Load Balancing

The Organization Policy for restricting load balancer creation has launched into Beta.

Compute Engine

SSD persistent disks on certain machine types now have a maximum write throughput of 1,200 MB/s. To learn more about the requirements to reach these limits, see Block storage performance.

You can now suspend and resume your VM instances. This feature is available in Beta.

Config Connector

Add support for allowing fields not specified by the user to be externally managed (that is, changeable outside of Config Connector). This feature can be enabled for a resource by enabling Kubernetes server-side apply for the resource, which will be the default for all Kubernetes resources starting in Kubernetes 1.18. More detailed documentation about this feature is coming soon.

Operator improvement: add support for cluster-mode setups, which allow users to use one Google Service Account for all namespaces in their cluster. This is very similar to the traditional "Workload Identity" installation setup.

Fix ContainerCluster validation issue (Issue #242).

Fix OOM issue for the cnrm-resource-stats-recorder pod (Issue #239).

Add support for projectViewer prefix for members in IAMPolicy and IAMPolicyMember (Issue #234).

Reduce spec.revisionHistoryLimit for the cnrm-stats-recorder and cnrm-webhook-manager Deployments from 10 (the default) to 1.

July 15, 2020

AutoML Vision Image Classification (ICN)

TFLite Edge model update

TFLite Edge models are now enhanced with metadata. Models trained in the next 6 months will remain backward compatible because separate metadata and label files are included. TFLite models trained after this period may not be backward compatible.

For more information see:

BigQuery ML

Data split and validation options are now available for AutoML Table model training.

Cloud Data Loss Prevention

Added infoType detector:

  • ISRAEL_IDENTITY_CARD_NUMBER
Cloud Functions

Cloud Functions has added support for a new runtime, Node 12, in Beta.

Cloud Functions has added support for a new runtime, Python 3.8, in Beta.

Cloud Spanner

You can now run SQL queries to retrieve read statistics for your database over recent one-minute, 10-minute, and one-hour time periods.

July 14, 2020

AI Platform Prediction

VPC Service Controls now supports AI Platform Prediction. Learn how to use a service perimeter to protect online prediction. This functionality is in beta.

Artifact Registry

You can now use Customer-Managed Encryption Keys (CMEK) to protect repository data in Artifact Registry. For more information, see Enabling customer-managed encryption keys.

Cloud Key Management Service

Cloud HSM resources are available in the us-west4 and asia-southeast2 regions. Cloud KMS resources were already available in these regions.

For information about which Cloud Locations are supported by Cloud KMS, Cloud HSM, and Cloud EKM, see the Cloud KMS regional locations.

VPC Service Controls

Beta stage support for the following integration:

July 13, 2020

AI Platform Deep Learning VM Image

M51 release

Allow removing sudo access from Deep Learning Containers.

Debian-10-based images are released. You can create Shielded VM instances from these images.

AI Platform Training

You can now configure a training job to run using a custom service account. Using a custom service account can help you customize which Google Cloud resources your training code can access.

This feature is available in beta.

BigQuery

The Standard SQL statement ASSERT is now supported. You can use ASSERT to validate that data matches specified expectations.

Cloud Bigtable

The default data points used for CPU utilization charts on the Cloud Bigtable Monitoring page have changed. Previously, data points on the charts reflected the mean for a displayed alignment period. Now the data points reflect the maximum for the alignment period. This change ensures that charts clearly show the peaks that are important for monitoring the health of a Cloud Bigtable instance.
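The difference can be illustrated with a toy alignment window. Assuming one alignment period of raw CPU-utilization samples (values below are made up), the max aligner surfaces a brief spike that the mean smooths away:

```python
def align(samples, aligner):
    """Reduce one alignment period of raw samples to a single data point."""
    if aligner == "mean":
        return sum(samples) / len(samples)
    if aligner == "max":
        return max(samples)
    raise ValueError(f"unknown aligner: {aligner}")


# One alignment period containing a brief CPU spike to 95%.
window = [20, 22, 21, 95, 23]
mean_point = align(window, "mean")  # 36.2 — the spike is smoothed away
max_point = align(window, "max")    # 95 — the spike stays visible
```

A chart built from max-aligned points therefore shows the peaks that matter for instance health, which is the motivation behind the change described above.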

Cloud CDN

Added a new setup guide for custom (external) origins with Cloud CDN and external HTTP(S) Load Balancing.

Cloud Load Balancing

Internal TCP/UDP load balancers now support regional health checks. To configure, see Health checks for backend services. This feature is supported in General Availability.

Cloud Run

The Cloud Run user interface now allows you to easily set up Continuous Deployment from Git using Cloud Build.

Dataprep by Trifacta

Introducing Cloud Dataprep Premium by TRIFACTA INC. and Cloud Dataprep Standard by TRIFACTA INC.: You can now upgrade your existing Cloud Dataprep by TRIFACTA INC. projects to unlock advanced features, such as broader API access and relational connectivity. To see the full set of new capabilities and use cases, see Google Cloud Dataprep by Trifacta Pricing.

  • These two product tiers are available through the Google Cloud Platform Marketplace.
  • Important: All existing Cloud Dataprep by TRIFACTA INC. projects are unchanged. You can upgrade individual projects through the GCP Marketplace to unlock the new functionality.

Relational connectivity: Connect to relational sources to import data and, where supported, write results.

Advanced Cloud Dataflow execution options: Specify additional job execution options at the project level or for individual jobs.

  • Feature Availability: This feature is available in Cloud Dataprep Premium by TRIFACTA® INC.
  • Assign scaling algorithms for managing Google Compute Engine instances or define minimum and maximum workers to use.
  • Specify the service account and any billing labels to apply to your jobs.
  • For more information:

Introducing plans: A plan is a sequence of tasks on one or more flows that can be scheduled.

  • Feature Availability: This feature is available in Cloud Dataprep Premium by TRIFACTA INC.
  • NOTE: In this release, the only type of task that is supported is Run Flow.
  • For more information on plans, see Plans Page.

  • For more information on orchestration in general, see Overview of Operationalization.

Dataflow execution in non-local VPC: You can now execute your Cloud Dataflow jobs on a non-local or shared Virtual Private Cloud (VPC) network.

  • NOTE: To accommodate a wider range of shared VPC configurations, subnetworks must be specified by full URL. See Changes below.
  • Project owners can set these execution options for the entire project. See Project Settings Page.

Subnetwork specified by URL: When specifying the subnetwork in which to execute your Cloud Dataflow jobs, you must now specify the subnetwork by using a URL.

  • Tip: This feature can be used when Cloud Dataprep by TRIFACTA INC. is configured to execute Cloud Dataflow jobs to run within a shared VPC hosted in a project other than the current project.
  • Previously, you could specify the subnetwork by name. However, non-local subnetwork values could not be specified in this manner.
  • For more information, see Dataflow Execution Settings.
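As a sketch, a fully qualified subnetwork URL follows the Compute Engine resource URL form shown below; the project, region, and subnetwork names here are placeholders, and the helper function is illustrative:

```python
def subnetwork_url(project, region, subnetwork):
    """Build the fully qualified Compute Engine URL for a subnetwork,
    for example one hosted in a shared VPC host project."""
    return (
        "https://www.googleapis.com/compute/v1/"
        f"projects/{project}/regions/{region}/subnetworks/{subnetwork}"
    )


url = subnetwork_url("host-project", "us-central1", "shared-subnet")
```

For a shared VPC, the project in the URL is the host project that owns the subnetwork, not the project running the Dataflow job.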
Google Cloud Marketplace

The IAM permissions required for purchasing the following solutions from Google Cloud Marketplace have changed:

  • Apache Kafka® on Confluent Cloud™
  • DataStax Astra for Apache Cassandra
  • Elasticsearch Service on Elastic Cloud
  • NetApp Cloud Volumes Service
  • Redis Enterprise Cloud

If you use custom roles to purchase these solutions, you must update the custom roles to include the permissions described in Access Control for Google Cloud Marketplace.

Specifically, if your custom role includes the billing.subscriptions.create permission, you must update it to include the consumerprocurement.orders.place and the consumerprocurement.accounts.create permissions.
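The permission change described above amounts to a simple transformation of a custom role's permission list. The sketch below uses the permission names from this note; the helper function itself is illustrative:

```python
def update_marketplace_permissions(permissions):
    """If a custom role grants billing.subscriptions.create, add the two
    consumerprocurement permissions now required for Marketplace purchases."""
    updated = list(permissions)
    if "billing.subscriptions.create" in updated:
        for perm in ("consumerprocurement.orders.place",
                     "consumerprocurement.accounts.create"):
            if perm not in updated:
                updated.append(perm)
    return updated


role_perms = update_marketplace_permissions(["billing.subscriptions.create"])
```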

If you use the Billing Administrator role to purchase these solutions, you don't need to take any action.

Secret Manager

Secret Manager resources can now be stored in the australia-southeast1 region. To learn more, see Locations.

July 10, 2020

Anthos Service Mesh

1.6.5-asm.1, 1.5.8-asm.0, and 1.4.10-asm.4

Fixes the security issue, ISTIO-SECURITY-2020-008, with the same fixes as Istio 1.6.5 and Istio 1.5.8. These fixes were backported to 1.4.10-asm.4. For more information, see the Istio release notes:

Cloud Billing

The Cost Table report functionality has been updated to add a Table configuration interface that replaces the previous Group by and Label selectors. Use the new Table configuration dialog to choose a Label key and select your Group by options. Additionally, the available Group by options have been enhanced to include a new Custom grouping option. Use custom grouping to view a nested cost table with your costs grouped by up to three dimensions that you choose, including label values. See the documentation for more details.

Cloud Functions

Cloud Functions is now available in the following regions:

  • us-west2 (Los Angeles)
  • us-west4 (Las Vegas)
  • southamerica-east1 (São Paulo)
  • asia-northeast2 (Osaka)

See Cloud Functions Locations for details.

Cloud Monitoring

SLO monitoring for microservices is now Generally Available in the Cloud Console. This feature lets you create service-level objectives (SLOs) and set up alerting policies to monitor their performance using auto-generated dashboards with metrics, logs, and alerts in a single place. For more information, see SLO monitoring.

Dataproc

Added --temp-bucket flag to gcloud dataproc clusters create and gcloud dataproc workflow-templates set-managed-cluster to allow users to configure a Cloud Storage bucket that stores ephemeral cluster and jobs data, such as Spark and MapReduce history files.

Extended Jupyter to support notebooks stored on VM persistent disk. This change modifies the Jupyter contents manager to create two virtual top-level directories, named GCS and Local Disk. The GCS directory points to the Cloud Storage location used by previous versions, and the Local Disk directory points to the persistent disk of the VM running Jupyter.

Dataproc images now include the oauth2l command line tool. The tool is installed in /usr/local/bin, which is available to all users in the VM.

New sub-minor versions of Dataproc images: 1.2.102-debian9, 1.3.62-debian9, 1.4.33-debian9, 1.3.62-debian10, 1.4.33-debian10, 1.5.8-debian10, 1.3.62-ubuntu18, 1.4.33-ubuntu18, 1.5.8-ubuntu18, 2.0.0-RC4-debian10, 2.0.0-RC4-ubuntu18

  • Images 1.3 - 1.5:

    • Fixed HIVE-11920: ADD JAR failing with URL schemes other than file/ivy/hdfs.
  • Images 1.3 - 2.0 preview:

    • Fixed TEZ-4108: NullPointerException during speculative execution race condition.

Fixed a race condition that could nondeterministically cause Hive-WebHCat to fail at startup when HBase is not enabled.

July 09, 2020

Cloud SQL for PostgreSQL

Cloud SQL now supports point-in-time recovery (PITR) for PostgreSQL. Point-in-time recovery helps you recover an instance to a specific point in time. For example, if an error causes a loss of data, you can recover a database to its state before the error occurred.

Config Connector

Added support for SecretManagerSecret

Managed Service for Microsoft Active Directory

The Managed Microsoft AD SLA has been published.

July 08, 2020

App Engine flexible environment .NET

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Go

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Java

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Node.js

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment PHP

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment Ruby

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine flexible environment custom runtimes

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Go

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Java

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Node.js

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment PHP

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Python

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

App Engine standard environment Ruby

External HTTP(S) Load Balancing is now supported for App Engine via Serverless network endpoint groups. Notably, this feature allows you to use Cloud CDN with App Engine.
This feature is available in Beta.

July 07, 2020

Cloud Composer
  • New versions of Cloud Composer images: composer-1.10.6-airflow-1.10.2, composer-1.10.6-airflow-1.10.3 and composer-1.10.6-airflow-1.10.6. The default is composer-1.10.6-airflow-1.10.3. Upgrade your Cloud SDK to use features in this release.

  • For Airflow 1.10.6 and later: The Airflow config property [celery] pool is now blocked.

  • The [core]sql_alchemy_pool_recycle Airflow setting has been modified to improve SQL connection reliability.

  • Fixed an issue with Airflow 1.10.6 environments where task logs were not visible in the UI when DAG serialization was enabled.
  • It is now possible to upgrade from Composer versions 1.1.1, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.4.2, 1.5.0, and 1.5.2 to the newest version.
Cloud Functions

External HTTP(S) Load Balancing is now supported for Google Cloud Functions via Serverless network endpoint groups.

Notably, this feature allows you to use Cloud CDN and Cloud Armor with Google Cloud Functions.

This feature is available in Beta.

Cloud Load Balancing

External HTTP(S) Load Balancing is now supported for App Engine, Cloud Functions, and Cloud Run services. To configure this, you will need to use a new type of network endpoint group (NEG) called a Serverless NEG.

This feature is available in Beta.

Cloud Monitoring

Monitoring Query Language (MQL) is now Generally Available. MQL is an expressive, text-based interface to Cloud Monitoring time-series data. With MQL, you can create charts you can't create any other way. You can access MQL from both the Cloud Console and the Monitoring API. For more information, see Introduction to Monitoring Query Language.

Cloud Run

External HTTP(S) Load Balancing is now supported for Cloud Run services via Serverless network endpoint groups.
Notably, this feature allows you to use Cloud CDN and multi-region load balancing.
This feature is available in Beta.

Dataproc

Announcing the General Availability (GA) release of Dataproc Component Gateway, which provides secure access to web endpoints for Dataproc default and optional components.

Traffic Director

Traffic Director now provides the option of automated Envoy deployment.

Traffic Director now supports automated Envoy deployments for Google Compute Engine VMs in Beta.

July 06, 2020

App Engine standard environment Node.js

The Node.js 12 runtime for the App Engine standard environment is now generally available.

App Engine standard environment Python

The Python 3.8 runtime for the App Engine standard environment is now generally available.

App Engine standard environment Ruby

The Ruby 2.6 and 2.7 runtimes for the App Engine standard environment are now available in Beta.

BigQuery

Updated version of Magnitude Simba ODBC driver. This version includes some performance improvements and bug fixes, and it catches up with the JDBC driver by adding support for user-defined functions and for variable time zones specified in the connection string.

Compute Engine

E2 machine types now offer up to 32 vCPUs. See E2 machine types for more information.

Dialogflow

The Dialogflow Console has been upgraded with an improved Analytics page (Beta) that provides new metrics and data views.