Release notes

This page documents production updates to Vertex AI. You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud Console, or programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/vertex-ai-release-notes.xml

December 02, 2021

You can now use a pre-built container to perform custom training with TensorFlow 2.7.
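
For illustration, a minimal sketch of custom training with a pre-built container, using the Vertex AI SDK for Python (google-cloud-aiplatform). The project, bucket, script, and exact image URI below are assumptions; check the pre-built containers page for the current TensorFlow 2.7 image.

  # Sketch: run a local script on a pre-built TensorFlow 2.7 training container.
  from google.cloud import aiplatform

  aiplatform.init(
      project="my-project",             # assumed project ID
      location="us-central1",
      staging_bucket="gs://my-bucket",  # assumed staging bucket
  )

  job = aiplatform.CustomTrainingJob(
      display_name="tf27-training",
      script_path="train.py",           # your local training script
      container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest",
  )
  job.run(replica_count=1, machine_type="n1-standard-4")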

December 01, 2021

Vertex AI TensorBoard is generally available (GA).
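
As a quick sketch with the Python SDK (the project and display name are assumptions), you can create a TensorBoard instance and then pass its resource name to a training job to stream logs to it.

  # Sketch: create a Vertex AI TensorBoard instance.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  tensorboard = aiplatform.Tensorboard.create(display_name="my-tensorboard")
  print(tensorboard.resource_name)  # pass this to a training job's tensorboard argument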

November 19, 2021

The autopackaging feature of the gcloud ai custom-jobs create command is generally available (GA). Autopackaging lets you use a single command to run code on your local computer as a custom training job in Vertex AI.

The gcloud ai custom-jobs local-run command is generally available (GA). You can use this command to containerize and run training code locally.
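
The commands above are gcloud features; as a rough Python-SDK parallel (not the gcloud commands themselves), CustomJob.from_local_script also packages a local script into a container-based custom job. The names and dependencies below are assumptions.

  # Sketch: build and run a CustomJob from a local script with the Python SDK.
  from google.cloud import aiplatform

  aiplatform.init(
      project="my-project",
      location="us-central1",
      staging_bucket="gs://my-bucket",  # assumed staging bucket
  )

  job = aiplatform.CustomJob.from_local_script(
      display_name="packaged-job",
      script_path="task.py",            # assumed local training script
      container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest",
      requirements=["pandas"],          # extra pip dependencies, if any
      machine_type="n1-standard-4",
  )
  job.run()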

November 09, 2021

Vertex AI Pipelines is generally available (GA).
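
For illustration, a minimal sketch of submitting a compiled pipeline with the Python SDK; the template path, pipeline root, and parameter are assumptions.

  # Sketch: run a compiled pipeline spec as a Vertex AI PipelineJob.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  pipeline_job = aiplatform.PipelineJob(
      display_name="my-pipeline",
      template_path="pipeline.json",             # compiled pipeline spec
      pipeline_root="gs://my-bucket/pipeline-root",
      parameter_values={"learning_rate": 0.01},  # assumed pipeline parameter
  )
  pipeline_job.run()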

November 04, 2021

Vertex Explainable AI Preview support available for AutoML image classification models

Vertex Explainable AI now offers Preview support for AutoML image classification models.

November 02, 2021

Using interactive shells to inspect custom training jobs is generally available (GA).

You can use these interactive shells with VPC Service Controls.
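
As a hedged sketch with the Python SDK, assuming the SDK's enable_web_access flag and illustrative names, you can turn on the interactive shell when you launch a job and then open the shell from the console while the job runs.

  # Sketch: enable the interactive shell for a custom training job.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1",
                  staging_bucket="gs://my-bucket")

  job = aiplatform.CustomTrainingJob(
      display_name="debuggable-job",
      script_path="train.py",
      container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest",
  )
  # With web access enabled, the running job exposes an interactive shell URI.
  job.run(enable_web_access=True, replica_count=1, machine_type="n1-standard-4")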

October 25, 2021

Vertex ML Metadata is generally available (GA).

October 05, 2021

Vertex Feature Store is generally available (GA).
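
As a sketch, assuming a Python SDK version that includes the high-level Featurestore classes (the IDs and node count are illustrative):

  # Sketch: create a featurestore, an entity type, and a feature.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  fs = aiplatform.Featurestore.create(
      featurestore_id="movies",
      online_store_fixed_node_count=1,  # serving nodes for online reads
  )
  users = fs.create_entity_type(entity_type_id="users")
  users.create_feature(feature_id="age", value_type="INT64")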

September 24, 2021

Vertex Matching Engine is generally available (GA).
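
As a hedged sketch, assuming a Python SDK version that includes the MatchingEngineIndex class (the Cloud Storage path, dimensions, and neighbor count are illustrative):

  # Sketch: create a Tree-AH index for approximate nearest-neighbor search.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
      display_name="my-index",
      contents_delta_uri="gs://my-bucket/embeddings/",  # embedding files
      dimensions=128,
      approximate_neighbors_count=50,
  )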

September 21, 2021

Vertex Vizier is generally available (GA).

September 15, 2021

Vertex Explainable AI is generally available (GA).
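
For illustration, a minimal sketch of requesting online explanations with the Python SDK from a model deployed with an explanation spec; the endpoint resource name and instance format are assumptions that depend on your model.

  # Sketch: get predictions with feature attributions from a deployed model.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  endpoint = aiplatform.Endpoint(
      "projects/123456789/locations/us-central1/endpoints/987654321"
  )
  response = endpoint.explain(instances=[{"feature_a": 1.0, "feature_b": 2.5}])
  for explanation in response.explanations:
      print(explanation.attributions)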

September 10, 2021

Vertex Model Monitoring is generally available (GA).

When you perform custom training, you can access Cloud Storage buckets by reading and writing to the local filesystem. This feature, based on Cloud Storage Fuse, is available in Preview.
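
Inside a training container, mounted buckets appear under /gcs/. A minimal sketch of training code, with an assumed bucket and file layout:

  # Sketch: read and write Cloud Storage objects through the /gcs/ mount.
  import pandas as pd

  # Read training data directly from a bucket via the local filesystem.
  df = pd.read_csv("/gcs/my-bucket/data/train.csv")

  # Write an artifact back to the bucket the same way.
  df.describe().to_csv("/gcs/my-bucket/output/summary.csv")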

August 30, 2021

You can now use a pre-built container to perform custom training with TensorFlow 2.6 and PyTorch 1.9.

August 24, 2021

New tools for creating embeddings to use with Vertex Matching Engine are available in Preview.

August 02, 2021

Vertex Pipelines is available in the following regions:

  • us-east1 (South Carolina)
  • europe-west2 (London)
  • asia-southeast1 (Singapore)

See all the locations where Vertex Pipelines is available.

July 28, 2021

You can use the Reduction Server algorithm (Preview) to increase throughput and reduce latency during distributed custom training.
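
As a hedged sketch, assuming the Python SDK's reduction-server arguments on the job's run() method (the machine shapes, replica counts, and image are illustrative; check the Reduction Server docs for supported values):

  # Sketch: multi-worker GPU training with Reduction Server replicas.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1",
                  staging_bucket="gs://my-bucket")

  job = aiplatform.CustomTrainingJob(
      display_name="dist-training",
      script_path="train.py",
      container_uri="us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-7:latest",
  )
  job.run(
      replica_count=4,
      machine_type="n1-standard-16",
      accelerator_type="NVIDIA_TESLA_V100",
      accelerator_count=2,
      reduction_server_replica_count=2,               # assumed SDK parameter
      reduction_server_machine_type="n1-highcpu-16",  # assumed SDK parameter
  )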

July 20, 2021

Private endpoints for online prediction are now available in preview. After you set up VPC Network Peering with Vertex AI, you can create private endpoints for low-latency online prediction within your private network.

Additionally, the documentation for VPC Network Peering with custom training has moved. The general instructions for setting up VPC Network Peering with Vertex AI are available at the original link, https://cloud.google.com/vertex-ai/docs/general/vpc-peering. The documentation for custom training is now available here: Using private IP with custom training.

July 19, 2021

You can now use an interactive shell to inspect your custom training container while it runs. The interactive shell can be helpful for monitoring and debugging training.

This feature is available in preview.

July 14, 2021

You can now use the gcloud beta ai custom-jobs create command to build a Docker image based on local training code, push the image to Container Registry, and create a CustomJob resource.

July 08, 2021

You can now containerize and run your training code locally by using the new gcloud beta ai custom-jobs local-run command. This feature is available in preview.

June 25, 2021

You can now use NVIDIA A100 GPUs and several accelerator-optimized (A2) machine types for training. You must use A100 GPUs and A2 machine types together. Learn about their pricing.
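
For illustration, a minimal sketch of requesting an A100 with its paired A2 machine type via the Python SDK; the project, bucket, script, and image are assumptions.

  # Sketch: custom training on a2-highgpu-1g with one A100 GPU.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1",
                  staging_bucket="gs://my-bucket")

  job = aiplatform.CustomTrainingJob(
      display_name="a100-training",
      script_path="train.py",
      container_uri="us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-7:latest",
  )
  job.run(
      machine_type="a2-highgpu-1g",          # A2 shape with one A100 attached
      accelerator_type="NVIDIA_TESLA_A100",
      accelerator_count=1,
  )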

May 18, 2021

AI Platform (Unified) is now Vertex AI.

Vertex AI has added support for custom model training, custom model batch prediction, custom model online prediction, and a limited number of other services in the following regions:

  • us-west1
  • us-east1
  • us-east4
  • northamerica-northeast1
  • europe-west2
  • europe-west1
  • asia-southeast1
  • asia-northeast1
  • australia-southeast1
  • asia-northeast3

Vertex AI now supports forecasting with time series data for AutoML tabular models, in Preview. You can use forecasting to predict a series of numeric values that extend into the future.

Vertex Pipelines is now available in Preview. Vertex Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow.

Vertex Model Monitoring is now available in Preview. Vertex Model Monitoring enables you to monitor model quality over time.

Vertex Feature Store is now available in Preview. Vertex Feature Store provides a centralized repository for organizing, storing, and serving ML features.

Vertex ML Metadata is now available in Preview. Vertex ML Metadata lets you record the metadata and artifacts produced by your ML system so you can analyze the performance of your ML system.

Vertex Matching Engine is now available in Preview. Vertex Matching Engine enables vector similarity search.

Vertex TensorBoard is now available in Preview. Vertex TensorBoard enables you to track, visualize, and compare ML experiments.

April 27, 2021

Vizier is now available in preview. Vizier is a feature of AI Platform (Unified) that you can use to perform black-box optimization. You can use Vizier to tune hyperparameters or optimize any evaluable system.

April 15, 2021

The Python client library for AI Platform (Unified) is now called the AI Platform (Unified) SDK. With the release of version 0.7 (Preview), the AI Platform (Unified) SDK provides two levels of support. The high-level aiplatform library is designed to simplify common data science workflows by using wrapper classes and opinionated defaults. The lower-level aiplatform.gapic library remains available for those times when you need more flexibility or control. Learn more.
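
For illustration, a minimal sketch of the two levels side by side; the project and region values are assumptions.

  # Sketch: high-level aiplatform vs. lower-level aiplatform.gapic.
  from google.cloud import aiplatform

  # High-level surface with opinionated defaults.
  aiplatform.init(project="my-project", location="us-central1")
  models = aiplatform.Model.list()

  # Lower-level GAPIC surface for finer control; note the regional endpoint.
  client = aiplatform.gapic.ModelServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  gapic_models = client.list_models(
      parent="projects/my-project/locations/us-central1"
  )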

March 31, 2021

AI Platform (Unified) is now generally available (GA).

AI Platform (Unified) has added support for the following regions for custom model training, as well as batch and online prediction for custom-trained models:

  • us-west1 (Oregon)
  • us-east1 (South Carolina)
  • us-east4 (N. Virginia)
  • northamerica-northeast1 (Montreal)
  • europe-west2 (London)
  • europe-west1 (Belgium)
  • asia-southeast1 (Singapore)
  • asia-northeast1 (Tokyo)
  • australia-southeast1 (Sydney)
  • asia-northeast3 (Seoul)

March 02, 2021

CMEK compliance using the client libraries

You can now use the client libraries to create resources with a customer-managed encryption key (CMEK).

For more information on creating a resource with an encryption key using the client libraries, see Using customer-managed encryption keys (CMEK).
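
As a sketch using the Python SDK's high-level surface (the key ring, key, project, and data paths are assumptions), many create and upload calls accept an encryption_spec_key_name argument:

  # Sketch: create a dataset protected by a customer-managed encryption key.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1")

  KMS_KEY = (
      "projects/my-project/locations/us-central1/"
      "keyRings/my-key-ring/cryptoKeys/my-key"
  )

  dataset = aiplatform.TabularDataset.create(
      display_name="cmek-dataset",
      gcs_source=["gs://my-bucket/data.csv"],
      encryption_spec_key_name=KMS_KEY,
  )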

March 01, 2021

The client library for Java now includes enhancements to improve usage of training and prediction features. The client library includes additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.

To use these enhancements, you must install the latest version of the client library.

February 25, 2021

AI Platform (Unified) now supports Access Transparency in beta. Google Cloud organizations with certain support packages can use this feature. Learn more about using Access Transparency with AI Platform (Unified).

The client libraries for Node.js and Python now include enhancements to improve usage of training and prediction features. These client libraries include additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.

To use these enhancements, you must install the latest version of the client libraries.

The predict and explain method calls no longer require the use of a different service endpoint (for example, https://us-central1-prediction-aiplatform.googleapis.com). These methods are now available on the same endpoint as all other methods.

In addition to Docker images hosted on Container Registry, you can now use Docker images hosted on Artifact Registry and Docker Hub for custom container training on AI Platform (Unified).

The Docker images for pre-built training containers and pre-built prediction containers are now available on Artifact Registry.

January 19, 2021

Preview: Select AI Platform (Unified) resources can now be configured to use customer-managed encryption keys (CMEK).

Currently, you can create resources with CMEK only in the Google Cloud Console; this functionality is not yet available in the client libraries.

January 11, 2021

The default boot disk type for virtual machine instances used for custom training has changed from pd-standard to pd-ssd. Learn more about disk types for custom training and read about pricing for different disk types.

If you previously used the default disk type for custom training and want to continue training with the same disk type, make sure to explicitly specify the pd-standard boot disk type when you perform custom training.
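
For illustration, a sketch of pinning the boot disk type with the Python SDK, assuming the boot_disk_type and boot_disk_size_gb arguments on the job's run() method; the other values are illustrative.

  # Sketch: keep the previous default by explicitly requesting pd-standard.
  from google.cloud import aiplatform

  aiplatform.init(project="my-project", location="us-central1",
                  staging_bucket="gs://my-bucket")

  job = aiplatform.CustomTrainingJob(
      display_name="pd-standard-training",
      script_path="train.py",
      container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest",
  )
  job.run(
      machine_type="n1-standard-4",
      boot_disk_type="pd-standard",  # the default is now pd-ssd
      boot_disk_size_gb=100,
  )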

December 17, 2020

AI Platform (Unified) now stores and processes your data only in the region you specify for most features. Learn more.

November 16, 2020

Preview release

AI Platform (Unified) is now available in Preview.

For more information, see the product documentation.