Release notes

This page documents production updates to Vertex AI. You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or you can programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/vertex-ai-release-notes.xml

November 30, 2022

AutoML image model updates

AutoML image classification and object detection now support a higher-accuracy model type. This model is available in Preview.

For information about how to train a model using the higher-accuracy model type, see Begin AutoML model training.

Batch prediction is currently not supported for this model type.

Cloud Logging for Vertex AI Pipelines is now generally available (GA). For more information, see View pipeline job logs.

November 18, 2022

Vertex AI Prediction

You can now perform some simple filtering and transformation on the batch input in your BatchPredictionJob requests without having to write any code in the prediction container. This feature is in Preview. For more information, see Filter and transform input data.

November 17, 2022

The Vertex AI Pipelines email notification component is now generally available (GA). This component enables you to configure your pipeline to send up to three emails upon success or failure of a pipeline run. For more information, see Configure email notifications and the Email notification component.
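
As a rough illustration of how the component is typically wired in (a minimal sketch assuming the KFP v2-style DSL and the google_cloud_pipeline_components.v1 import path; the recipient addresses are placeholders), the notification task is attached with an exit handler so the email is sent on both success and failure:

```python
from kfp import dsl
from google_cloud_pipeline_components.v1.vertex_notification_email import (
    VertexNotificationEmailOp,
)


@dsl.component
def train_step() -> str:
    # Placeholder training step for illustration.
    return "done"


@dsl.pipeline(name="notification-example")
def pipeline():
    # The exit task runs when the pipeline run finishes, regardless of outcome.
    notify_task = VertexNotificationEmailOp(
        recipients=["alice@example.com", "bob@example.com"]  # placeholder recipients
    )
    with dsl.ExitHandler(notify_task):
        train_step()
```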

November 10, 2022

AutoML Image Classification Error Analysis

Error analysis lets you examine error cases from within the model evaluation page after you train a model. This feature is available in Preview.

For each image you can inspect similar images from the training set to help identify the following:

  • Label inconsistencies between visually similar images
  • Outliers, where a test sample has no visually similar images in the training set

After fixing any data issues, you can retrain the model to improve model performance.

November 09, 2022

Feature Transform Engine is available in Preview. For documentation, refer to Feature engineering.

November 04, 2022

Vertex AI Prediction

You can now use A2 machine types to serve predictions.

Custom training on Vertex AI now supports NVIDIA A100 80GB GPUs on a2-ultragpu-1g/2g/4g/8g machines. For details, see Configure compute resources for custom training.
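
A minimal sketch of requesting this hardware through the Vertex AI SDK for Python (the project, bucket, and container image URI are placeholders, and the NVIDIA_A100_80GB accelerator enum name is stated here as an assumption):

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",             # placeholder project
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder staging bucket
)

# One a2-ultragpu-1g machine with a single NVIDIA A100 80GB GPU attached.
worker_pool_specs = [
    {
        "machine_spec": {
            "machine_type": "a2-ultragpu-1g",
            "accelerator_type": "NVIDIA_A100_80GB",
            "accelerator_count": 1,
        },
        "replica_count": 1,
        "container_spec": {
            "image_uri": "us-docker.pkg.dev/my-project/my-repo/trainer:latest"  # placeholder image
        },
    }
]

job = aiplatform.CustomJob(
    display_name="a100-80gb-training",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```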

November 03, 2022

Vertex AI Prediction

Custom prediction routines (CPR) are now generally available (GA). CPR lets you easily build custom containers for prediction with support for preprocessing and postprocessing.

October 27, 2022

Vertex AI Prediction

You can now use E2 machine types to serve predictions.

October 12, 2022

Tabular Workflow for TabNet Training is available in Preview. For documentation, refer to Tabular Workflow for TabNet Training.

Tabular Workflow for Wide & Deep Training is available in Preview. For documentation, refer to Tabular Workflow for Wide & Deep Training.

October 11, 2022

Vertex AI Feature Store streaming ingestion is available in Preview.

October 10, 2022

The Vertex AI Model Registry is generally available (GA). Vertex AI Model Registry is a searchable repository where you can manage the lifecycle of your ML models. From the Vertex AI Model Registry, you can better organize your models, train new versions, and deploy directly to endpoints.

The Vertex AI Model Registry and BigQuery ML integration is generally available (GA). With this integration, BigQuery ML models can be managed alongside other ML models in Vertex AI to easily version, evaluate, and deploy for prediction.

October 06, 2022

Incrementally train an AutoML model

You can now incrementally train an AutoML image classification or object detection model by selecting a previously trained model. This feature is in Preview. For more information, see Train an AutoML image classification model.

October 04, 2022

Vertex AI model evaluation is now available in Preview. Model evaluation provides metrics, such as precision and recall, to help you determine the performance of your models.

September 26, 2022

Vertex AI Model Monitoring

Vertex AI Model Monitoring now offers Preview support for batch prediction jobs. For more details, see Vertex AI Model Monitoring for batch predictions.

Vertex AI Feature Store

Feature value monitoring is now generally available (GA).

September 22, 2022

Vertex AI Matching Engine

Vertex AI Matching Engine now offers Preview support for updating your indices using Streaming Update, which is real-time indexing for the Approximate Nearest Neighbor (ANN) service.

September 20, 2022

The option to configure pipeline run caching (enable_caching) is now available in the Cloud console.
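
For reference, the same option in the Vertex AI SDK for Python looks roughly like this (a sketch; the template path, bucket, and project are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project

job = aiplatform.PipelineJob(
    display_name="my-pipeline-run",
    template_path="gs://my-bucket/pipeline.json",   # compiled pipeline spec (placeholder path)
    pipeline_root="gs://my-bucket/pipeline-root",   # placeholder pipeline root
    enable_caching=False,  # disable task result caching for this run
)
job.submit()
```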

September 14, 2022

You can now limit the number of task runs that execute in parallel within a dsl.ParallelFor loop in a pipeline run. For more information, see the Kubeflow Pipelines SDK documentation.
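
A minimal sketch with the KFP v2-style DSL (the loop-argument keyword differs between SDK versions, so treat the exact signature as an assumption and check the linked documentation):

```python
from kfp import dsl


@dsl.component
def process(item: str) -> str:
    # Placeholder per-item processing step.
    return item


@dsl.pipeline(name="parallel-for-example")
def pipeline():
    # At most two loop iterations run at the same time.
    with dsl.ParallelFor(items=["a", "b", "c", "d"], parallelism=2) as item:
        process(item=item)
```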

The performance of the ListPipelineJobs API has been improved with a new readMask that lets you exclude large fields from the response. To take advantage of this in the Python SDK, use the new enable_simple_view parameter.
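
A short sketch of the Python SDK usage (the project and location are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project

# Return only lightweight fields per job instead of the full pipeline spec and
# runtime detail, which makes listing large numbers of runs much faster.
for job in aiplatform.PipelineJob.list(enable_simple_view=True):
    print(job.resource_name, job.state)
```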

August 12, 2022

Vertex Explainable AI

Vertex Explainable AI now offers Preview support for example-based explanations. For more information, see Configure example-based explanations for custom training.

August 01, 2022

TensorFlow Profiler integration: Debug model training performance for your custom training jobs. For details, see Profile model training performance using Profiler.

July 29, 2022

Vertex AI now offers Preview support for Custom prediction routines (CPR). CPR lets you easily build custom containers for prediction with pre/post processing support.

July 18, 2022

NFS support for custom training is GA. For details, see Mount an NFS share for custom training.
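
A sketch of attaching an NFS share to a custom training job through the Vertex AI SDK for Python (the server address, export path, image URI, and project are placeholders, and the /mnt/nfs/<mount_point> path convention is stated as an assumption):

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",             # placeholder project
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder staging bucket
)

worker_pool_specs = [
    {
        "machine_spec": {"machine_type": "n1-standard-8"},
        "replica_count": 1,
        "container_spec": {
            "image_uri": "us-docker.pkg.dev/my-project/my-repo/trainer:latest"  # placeholder image
        },
        "nfs_mounts": [
            {
                "server": "10.0.0.2",        # NFS server reachable from the peered VPC
                "path": "/shared_datasets",  # export path on the server
                "mount_point": "datasets",   # visible at /mnt/nfs/datasets in the container
            }
        ],
    }
]

job = aiplatform.CustomJob(
    display_name="nfs-training",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```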

July 14, 2022

The Pipeline Templates feature is available in Preview. For documentation, refer to Create, upload, and use a pipeline template.

The features supported by pipeline templates include the following:

  • Create a template registry using Artifact Registry (AR).
  • Compile and publish a pipeline template.
  • Create a pipeline run using the template and filter the runs.
  • Manage (create, update, or delete) the pipeline template resources.

July 12, 2022

You can now use a pre-built container to perform custom training with TensorFlow 2.9.
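
As an illustration, a pre-built training container is referenced by its image URI when you create a custom training job. The sketch below uses the Vertex AI SDK for Python; the container URI pattern, package path, and module name are placeholders/assumptions:

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",             # placeholder project
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder staging bucket
)

job = aiplatform.CustomPythonPackageTrainingJob(
    display_name="tf29-training",
    python_package_gcs_uri="gs://my-bucket/trainer-0.1.tar.gz",  # placeholder package
    python_module_name="trainer.task",                           # placeholder module
    # Assumed URI pattern for the TensorFlow 2.9 pre-built CPU training container.
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-9:latest",
)
job.run(replica_count=1, machine_type="n1-standard-4")
```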

July 11, 2022

Vertex AI Pipelines now lets you configure task-level retries. You can set the number of times a task is retried before it fails. For more information about this option, see the Kubeflow Pipelines SDK Documentation.
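
A minimal sketch with the KFP v2-style DSL (check the linked documentation for the retry options available in your SDK version):

```python
from kfp import dsl


@dsl.component
def flaky_step() -> str:
    # Placeholder step that might fail transiently.
    return "ok"


@dsl.pipeline(name="retry-example")
def pipeline():
    # Retry the task up to three times before marking it as failed.
    flaky_step().set_retry(num_retries=3)
```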

July 06, 2022

Tabular Workflows is available in Preview. For documentation, refer to Tabular Workflows on Vertex AI.

End-to-End AutoML workflow is available in Public Preview. For documentation, refer to End-to-End AutoML.

June 30, 2022

Vertex AI Experiments is generally available (GA). Vertex AI Experiments helps you track and compare multiple experiment runs and analyze key model metrics.

Features supported by Experiments include:

  • Vary and track parameters and metrics.
  • Compare parameters, metrics, and artifacts between pipeline runs.
  • Track steps and artifacts to capture the lineage of experiments.
  • Compare Vertex AI Pipelines runs against notebook experiments.

June 17, 2022

Support for IAM resource-level policies for Vertex AI featurestore and entityType resources is available in Preview.

May 24, 2022

You can now configure the failure policy for a pipeline run.
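
A sketch of setting the policy through the Vertex AI SDK for Python (the template path, bucket, and project are placeholders; the "fast"/"slow" string values mirror the underlying API's failure-policy options and are listed here as assumptions):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project

job = aiplatform.PipelineJob(
    display_name="fail-fast-run",
    template_path="gs://my-bucket/pipeline.json",   # placeholder compiled pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root",   # placeholder pipeline root
    failure_policy="fast",  # stop scheduling new tasks after the first task failure
)
job.submit()
```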

May 18, 2022

The ability to configure Vertex AI private endpoints is now generally available (GA). Vertex AI private endpoints provide a low-latency, secure connection to the Vertex AI online prediction service. You can configure Vertex AI private endpoints by using VPC Network Peering. For more information, see Use private endpoints for online prediction.
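
A sketch of creating and using a private endpoint with the Vertex AI SDK for Python (the project number, network name, and model resource name are placeholders; the VPC network must already be peered with Vertex AI):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project

# The network must already be peered with Vertex AI through VPC Network Peering.
endpoint = aiplatform.PrivateEndpoint.create(
    display_name="my-private-endpoint",
    network="projects/123456789/global/networks/my-vpc",  # placeholder network
)

# Deploy a previously uploaded model to the private endpoint (placeholder model).
model = aiplatform.Model("projects/123456789/locations/us-central1/models/456")
endpoint.deploy(model=model, machine_type="n1-standard-4")
```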

April 26, 2022

You can now train your custom models using the Cloud TPU VM architecture (TPU VMs).

April 21, 2022

You can now use a pre-built container to perform custom training with PyTorch 1.11.

April 06, 2022

Vertex AI Model Registry is available in Preview. Vertex AI Model Registry is a searchable repository where you can manage the lifecycle of your ML models. From the Vertex AI Model Registry, you can better organize your models, train new versions, and deploy directly to endpoints.

March 07, 2022

Vertex AI Feature Store online store autoscaling is available in Preview. The online store nodes automatically scale to balance performance and cost with different traffic patterns. The offline store already scales automatically.

You can now mount Network File System (NFS) shares to access remote files when you run a custom training job. For more information, see Mount an NFS share for custom training.

This feature is in Preview.

Google Cloud Pipeline Components SDK v1.0 is now generally available.

February 16, 2022

You can now use a pre-built container to perform custom training with TensorFlow 2.8.

February 10, 2022

For Vertex AI featurestore resources, the online store is optional. You can set the number of online nodes to 0. For more information, see Manage featurestores.

January 04, 2022

You can now use a pre-built container to perform custom training with PyTorch 1.10.

December 23, 2021

There are now three Vertex AI release note feeds. Add any of the following to your feed reader:

  • For both Vertex AI and Vertex AI Workbench: https://cloud.google.com/feeds/vertex-ai-product-group-release-notes.xml
  • For Vertex AI only: https://cloud.google.com/feeds/vertex-ai-release-notes.xml
  • For Vertex AI Workbench only: https://cloud.google.com/feeds/aiplatformnotebooks-release-notes.xml

December 02, 2021

You can now use a pre-built container to perform custom training with TensorFlow 2.7.

December 01, 2021

Vertex AI TensorBoard is generally available (GA).

November 19, 2021

The autopackaging feature of the gcloud ai custom-jobs create command is generally available (GA). Autopackaging lets you use a single command to run code on your local computer as a custom training job in Vertex AI.

The gcloud ai custom-jobs local-run command is generally available (GA). You can use this command to containerize and run training code locally.

November 09, 2021

Vertex AI Pipelines is generally available (GA).

November 04, 2021

Vertex Explainable AI Preview support available for AutoML image classification models

Vertex Explainable AI now offers Preview support for AutoML image classification models.

November 02, 2021

Using interactive shells to inspect custom training jobs is generally available (GA).

You can use these interactive shells with VPC Service Controls.

October 25, 2021

Vertex ML Metadata is generally available (GA).

October 05, 2021

Vertex Feature Store is generally available (GA).

September 24, 2021

Vertex Matching Engine is generally available (GA).

September 21, 2021

Vertex AI Vizier is generally available (GA).

September 15, 2021

Vertex Explainable AI is generally available (GA).

September 10, 2021

Vertex Model Monitoring is generally available (GA).

When you perform custom training, you can access Cloud Storage buckets by reading and writing to the local filesystem. This feature, based on Cloud Storage Fuse, is available in Preview.
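
For example, training code can treat a bucket as a local directory (a sketch; the bucket name and file paths are placeholders, and pandas is assumed to be installed in the training container):

```python
# Inside a Vertex AI custom training container, accessible Cloud Storage buckets
# are exposed through Cloud Storage FUSE under /gcs/<bucket-name>.
import pandas as pd

df = pd.read_csv("/gcs/my-bucket/data/train.csv")  # read training data (placeholder path)

# ... train a model on df ...

with open("/gcs/my-bucket/output/notes.txt", "w") as f:
    f.write("training complete\n")  # write results back to the bucket
```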

August 30, 2021

You can now use a pre-built container to perform custom training with TensorFlow 2.6 and PyTorch 1.9.

August 24, 2021

The following tools for creating embeddings to use with Vertex Matching Engine are available in Preview:

  • The Two-Tower built-in algorithm
  • The Swivel pipeline template

August 02, 2021

Vertex Pipelines is available in the following regions:

  • us-east1 (South Carolina)
  • europe-west2 (London)
  • asia-southeast1 (Singapore)

See all the locations where Vertex Pipelines is available.

July 28, 2021

You can use the Reduction Server algorithm (Preview) to increase throughput and reduce latency during distributed custom training.

July 20, 2021

Private endpoints for online prediction are now available in Preview. After you set up VPC Network Peering with Vertex AI, you can create private endpoints for low-latency online prediction within your private network.

Additionally, the documentation for VPC Network Peering with custom training has moved. The general instructions for setting up VPC Network Peering with Vertex AI are available at the original link, https://cloud.google.com/vertex-ai/docs/general/vpc-peering. The documentation for custom training is now available here: Using private IP with custom training.

July 19, 2021

You can now use an interactive shell to inspect your custom training container while it runs. The interactive shell can be helpful for monitoring and debugging training.

This feature is available in Preview.

July 14, 2021

You can now use the gcloud beta ai custom-jobs create command to build a Docker image based on local training code, push the image to Container Registry, and create a CustomJob resource.

July 08, 2021

You can now containerize and run your training code locally by using the new gcloud beta ai custom-jobs local-run command. This feature is available in Preview.

June 25, 2021

You can now use NVIDIA A100 GPUs and several accelerator-optimized (A2) machine types for training. You must use A100 GPUs and A2 machine types together. Learn about their pricing.

May 18, 2021

AI Platform (Unified) is now Vertex AI.

Vertex AI has added support for custom model training, custom model batch prediction, custom model online prediction, and a limited number of other services in the following regions:

  • us-west1
  • us-east1
  • us-east4
  • northamerica-northeast1
  • europe-west2
  • europe-west1
  • asia-southeast1
  • asia-northeast1
  • australia-southeast1
  • asia-northeast3

Vertex AI now supports forecasting with time series data for AutoML tabular models, in Preview. You can use forecasting to predict a series of numeric values that extend into the future.

Vertex Pipelines is now available in Preview. Vertex Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow.

Vertex Model Monitoring is now available in Preview. Vertex Model Monitoring enables you to monitor model quality over time.

Vertex Feature Store is now available in Preview. Vertex Feature Store provides a centralized repository for organizing, storing, and serving ML features.

Vertex ML Metadata is now available in Preview. Vertex ML Metadata lets you record the metadata and artifacts produced by your ML system so you can analyze the performance of your ML system.

Vertex Matching Engine is now available in Preview. Vertex Matching Engine enables vector similarity search.

Vertex TensorBoard is now available in Preview. Vertex TensorBoard enables you to track, visualize, and compare ML experiments.

April 27, 2021

AI Platform Vizier is now available in Preview. Vizier is a feature of AI Platform (Unified) that you can use to perform black-box optimization. You can use Vizier to tune hyperparameters or optimize any evaluable system.

April 15, 2021

The Python client library for AI Platform (Unified) is now called the AI Platform (Unified) SDK. With the release of version 0.7 (Preview), the AI Platform (Unified) SDK provides two levels of support. The high-level aiplatform library is designed to simplify common data science workflows by using wrapper classes and opinionated defaults. The lower-level aiplatform.gapic library remains available for those times when you need more flexibility or control. Learn more.
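
A rough sketch of the two levels of the SDK, using today's google.cloud.aiplatform package names (the project and endpoint values are placeholders, and method availability varied across early SDK versions):

```python
from google.cloud import aiplatform

# High-level, opinionated interface.
aiplatform.init(project="my-project", location="us-central1")  # placeholder project
models = aiplatform.Model.list()

# Lower-level generated (GAPIC) client, for finer-grained control over requests.
client = aiplatform.gapic.ModelServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
response = client.list_models(parent="projects/my-project/locations/us-central1")
```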

March 31, 2021

AI Platform (Unified) is now generally available (GA).

AI Platform (Unified) has added support for the following regions for custom model training, as well as batch and online prediction for custom-trained models:

  • us-west1 (Oregon)
  • us-east1 (South Carolina)
  • us-east4 (N. Virginia)
  • northamerica-northeast1 (Montreal)
  • europe-west2 (London)
  • europe-west1 (Belgium)
  • asia-southeast1 (Singapore)
  • asia-northeast1 (Tokyo)
  • australia-southeast1 (Sydney)
  • asia-northeast3 (Seoul)

March 02, 2021

CMEK compliance using the client libraries

You can now use the client libraries to create resources with a customer-managed encryption key (CMEK).

For more information on creating a resource with an encryption key using the client libraries, see Using customer-managed encryption keys (CMEK).
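
A sketch using today's Vertex AI SDK for Python naming (the project, bucket, and Cloud KMS key resource name are placeholders; the key is passed through an encryption_spec_key_name parameter on resource-creation calls):

```python
from google.cloud import aiplatform

# Cloud KMS key resource name (placeholder values):
# projects/<project>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key>
KMS_KEY = (
    "projects/my-project/locations/us-central1/"
    "keyRings/my-ring/cryptoKeys/my-key"
)

aiplatform.init(project="my-project", location="us-central1")  # placeholder project

# Create a dataset whose underlying data is encrypted with the customer-managed key.
dataset = aiplatform.TabularDataset.create(
    display_name="cmek-dataset",
    gcs_source=["gs://my-bucket/data.csv"],  # placeholder source
    encryption_spec_key_name=KMS_KEY,
)
```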

March 01, 2021

The client library for Java now includes enhancements to improve usage of training and prediction features. The client library includes additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.

To use these enhancements, you must install the latest version of the client library.

February 25, 2021

AI Platform (Unified) now supports Access Transparency in beta. Google Cloud organizations with certain support packages can use this feature. Learn more about using Access Transparency with AI Platform (Unified).

The client libraries for Node.js and Python now include enhancements to improve usage of training and prediction features. These client libraries include additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.

To use these enhancements, you must install the latest version of the client libraries.

The predict and explain method calls no longer require the use of a different service endpoint (for example, https://us-central1-prediction-aiplatform.googleapis.com). These methods are now available on the same endpoint as all other methods.

In addition to Docker images hosted on Container Registry, you can now use Docker images hosted on Artifact Registry and Docker Hub for custom container training on AI Platform (Unified).

The Docker images for pre-built training containers and pre-built prediction containers are now available on Artifact Registry.

January 19, 2021

Preview: Select AI Platform (Unified) resources can now be configured to use customer-managed encryption keys (CMEK).

Currently, you can create resources with CMEK only in the UI; this functionality is not yet available through the client libraries.

January 11, 2021

The default boot disk type for virtual machine instances used for custom training has changed from pd-standard to pd-ssd. Learn more about disk types for custom training and read about pricing for different disk types.

If you previously used the default disk type for custom training and want to continue training with the same disk type, make sure to explicitly specify the pd-standard boot disk type when you perform custom training.
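
A sketch of pinning the boot disk type on a custom job, using the current Vertex AI SDK for Python (the project, bucket, and image URI are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",             # placeholder project
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder staging bucket
)

worker_pool_specs = [
    {
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {
            "image_uri": "us-docker.pkg.dev/my-project/my-repo/trainer:latest"  # placeholder image
        },
        # Explicitly request the previous default boot disk type.
        "disk_spec": {"boot_disk_type": "pd-standard", "boot_disk_size_gb": 100},
    }
]

job = aiplatform.CustomJob(
    display_name="pd-standard-training",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```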

December 17, 2020

AI Platform (Unified) now stores and processes your data only in the region you specify for most features. Learn more.

November 16, 2020

Preview release

AI Platform (Unified) is now available in Preview.

For more information, see the product documentation.