This page documents production updates to Vertex AI. You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.
You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.
To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly:
November 30, 2022
AutoML image model updates
AutoML image classification and object detection now support a higher-accuracy model type. This model is available in Preview.
For information about how to train a model using the higher accuracy model type, see Begin AutoML model training.
Batch prediction is currently not supported for this model type.
November 18, 2022
Vertex AI Prediction
You can now perform some simple filtering and transformation on the batch input in your BatchPredictionJob requests without having to write any code in the prediction container. This feature is in Preview. For more information, see Filter and transform input data.
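As a sketch of what such a request might look like, the snippet below builds a REST-style `BatchPredictionJob` body whose `instanceConfig` drops bookkeeping columns before they reach the model. The display name, bucket URI, and excluded field names are hypothetical; the `instanceConfig` field names follow the documented message shape.

```python
# Hedged sketch: a BatchPredictionJob request body with input filtering.
# All resource names and URIs below are hypothetical placeholders.
batch_job_body = {
    "displayName": "filtered-batch-job",
    "inputConfig": {
        "instancesFormat": "jsonl",
        "gcsSource": {"uris": ["gs://example-bucket/input.jsonl"]},
    },
    # Drop columns the model does not expect, without changing the
    # prediction container.
    "instanceConfig": {
        "excludedFields": ["row_id", "ingest_time"],
    },
}
```

An `includedFields` list can be used instead of `excludedFields` when it is easier to name the fields to keep.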
November 17, 2022
The Vertex AI Pipelines email notification component is now generally available (GA). This component enables you to configure your pipeline to send up to three emails upon success or failure of a pipeline run. For more information, see Configure email notifications and the Email notification component.
November 10, 2022
AutoML Image Classification Error Analysis
Error analysis allows you to examine error cases after training a model from within the model evaluation page. This feature is available in Preview.
For each image you can inspect similar images from the training set to help identify the following:
- Label inconsistencies between visually similar images
- Outliers if a test sample has no visually similar images in the training set
After fixing any data issues, you can retrain the model to improve model performance.
November 09, 2022
November 04, 2022
Vertex AI Prediction
You can now use A2 machine types to serve predictions.
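To illustrate, a deployed-model spec requesting an A2 machine might look like the following. The model resource name and replica counts are hypothetical; A2 machine types bundle NVIDIA A100 GPUs, so the accelerator fields match the machine type.

```python
# Hedged sketch: dedicated resources for serving predictions on an A2 machine.
# The model resource name below is a hypothetical placeholder.
deployed_model = {
    "model": "projects/123/locations/us-central1/models/456",
    "dedicatedResources": {
        "machineSpec": {
            "machineType": "a2-highgpu-1g",          # A2 machine with one A100
            "acceleratorType": "NVIDIA_TESLA_A100",  # A100s come with A2 types
            "acceleratorCount": 1,
        },
        "minReplicaCount": 1,
        "maxReplicaCount": 2,
    },
}
```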
Vertex ML Metadata
November 03, 2022
Vertex AI Prediction
October 27, 2022
Vertex AI Prediction
You can now use E2 machine types to serve predictions.
October 12, 2022
October 11, 2022
October 10, 2022
The Vertex AI Model Registry is generally available (GA). Vertex AI Model Registry is a searchable repository where you can manage the lifecycle of your ML models. From the Vertex AI Model Registry, you can better organize your models, train new versions, and deploy directly to endpoints.
The Vertex AI Model Registry and BigQuery ML integration is generally available (GA). With this integration, BigQuery ML models can be managed alongside other ML models in Vertex AI to easily version, evaluate, and deploy for prediction.
October 06, 2022
Incrementally train an AutoML model
You can now incrementally train an AutoML image classification or object detection model by selecting a previously trained model. This feature is in Preview. For more information, see Train an AutoML image classification model.
October 05, 2022
Vertex AI Feature Store
- Delete feature values from specified entities
- Delete feature values from specified features within a time range
Links to additional resources:
October 04, 2022
Vertex AI model evaluation is now available in Preview. Model evaluation provides metrics, such as precision and recall, to help you determine the performance of your models.
September 26, 2022
Vertex AI Model Monitoring
September 22, 2022
Vertex AI Matching Engine
September 20, 2022
The option to configure pipeline run caching (enable_caching) is now available in the Cloud console.
September 14, 2022
You can now limit the number of parallel task runs in a pipeline run by using dsl.ParallelFor. For more information, see the Kubeflow Pipelines SDK documentation.
August 12, 2022
Vertex Explainable AI
Vertex Explainable AI now offers Preview support for example-based explanations. For more information, see Configure example-based explanations for custom training.
August 01, 2022
TensorFlow Profiler integration: Debug model training performance for your custom training jobs. For details, see Profile model training performance using Profiler.
July 29, 2022
July 18, 2022
NFS support for custom training is GA. For details, see Mount an NFS share for custom training.
July 14, 2022
The features supported by pipeline templates include the following:
- Create a template registry using Artifact Registry (AR).
- Compile and publish a pipeline template.
- Create a pipeline run using the template and filter the runs.
- Manage (create, update, or delete) the pipeline template resources.
July 12, 2022
You can now use a pre-built container to perform custom training with TensorFlow 2.9.
July 11, 2022
Vertex AI Pipelines now lets you configure task-level retries. You can set the number of times a task is retried before it fails. For more information about this option, see the Kubeflow Pipelines SDK Documentation.
July 06, 2022
June 30, 2022
Features supported by Experiments include:
- Vary and track parameters and metrics.
- Compare parameters, metrics, and artifacts between pipeline runs.
- Track steps and artifacts to capture the lineage of experiments.
- Compare Vertex AI Pipelines runs against notebook experiments.
June 28, 2022
Vertex AI Forecasting is available in GA. The following features are available:
June 17, 2022
May 24, 2022
You can now configure the failure policy for a pipeline run.
May 18, 2022
The ability to configure Vertex AI private endpoints is now generally available (GA). Vertex AI private endpoints provide a low-latency, secure connection to the Vertex AI online prediction service. You can configure Vertex AI private endpoints by using VPC Network Peering. For more information, see Use private endpoints for online prediction.
April 26, 2022
You can now train your custom models using the Cloud TPU VM architecture (TPU VMs).
April 21, 2022
You can now use a pre-built container to perform custom training with PyTorch 1.11.
April 06, 2022
Vertex AI Model Registry is available in Preview. Vertex AI Model Registry is a searchable repository where you can manage the lifecycle of your ML models. From the Vertex AI Model Registry, you can better organize your models, train new versions, and deploy directly to endpoints.
March 07, 2022
Vertex AI Feature Store online store autoscaling is available in Preview. The online store nodes automatically scale to balance performance and cost with different traffic patterns. The offline store already scales automatically.
You can now mount Network File System (NFS) shares to access remote files when you run a custom training job. For more information, see Mount an NFS share for custom training.
This feature is in Preview.
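As a sketch under assumed values, an NFS mount can be declared in a custom training worker pool spec like this. The server address, export path, and container image are hypothetical placeholders; the mount point is exposed inside the training container under `/mnt/nfs/`.

```python
# Hedged sketch: a worker pool spec with an NFS mount for custom training.
# Server IP, export path, and image URI are hypothetical placeholders.
worker_pool_spec = {
    "machineSpec": {"machineType": "n1-standard-4"},
    "replicaCount": 1,
    "nfsMounts": [
        {
            "server": "10.76.0.10",            # NFS server in the peered VPC
            "path": "/exports/training-data",  # exported directory on the server
            "mountPoint": "nfs",               # appears as /mnt/nfs/ in the job
        }
    ],
    "containerSpec": {"imageUri": "gcr.io/example/trainer:latest"},
}
```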
Google Cloud Pipeline Components SDK v1.0 is now generally available.
February 16, 2022
You can now use a pre-built container to perform custom training with TensorFlow 2.8.
February 10, 2022
For Vertex AI featurestore resources, the online store is optional. You can set the number of online nodes to 0. For more information, see Manage featurestores.
January 04, 2022
You can now use a pre-built container to perform custom training with PyTorch 1.10.
December 23, 2021
There are now three Vertex AI release note feeds. Add any of the following to your feed reader:
- For both Vertex AI and Vertex AI Workbench:
- For Vertex AI only:
- For Vertex AI Workbench only:
December 02, 2021
You can now use a pre-built container to perform custom training with TensorFlow 2.7.
December 01, 2021
November 19, 2021
The autopackaging feature of the gcloud ai custom-jobs create command is generally available (GA). Autopackaging lets you use a single command to run code on your local computer as a custom training job in Vertex AI.
November 09, 2021
November 04, 2021
Vertex Explainable AI Preview support available for AutoML image classification models
Vertex Explainable AI offers Preview support for the following model type:
November 02, 2021
You can use these interactive shells with VPC Service Controls.
October 25, 2021
October 05, 2021
September 24, 2021
September 21, 2021
September 15, 2021
September 13, 2021
September 10, 2021
When you perform custom training, you can access Cloud Storage buckets by reading and writing to the local filesystem. This feature, based on Cloud Storage Fuse, is available in Preview.
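With the Cloud Storage FUSE integration, a bucket is visible inside the training container under `/gcs/BUCKET_NAME/`. The small helper below sketches that path mapping; it is plain string manipulation, not part of any SDK.

```python
def to_fuse_path(gcs_uri: str) -> str:
    """Map a gs:// URI to the local path where Cloud Storage FUSE exposes it
    inside a Vertex AI custom training container (/gcs/<bucket>/<object>)."""
    if not gcs_uri.startswith("gs://"):
        raise ValueError(f"not a gs:// URI: {gcs_uri}")
    return "/gcs/" + gcs_uri[len("gs://"):]
```

For example, training code could open `to_fuse_path("gs://my-bucket/data/train.csv")` with ordinary file I/O instead of using a Cloud Storage client library.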
August 30, 2021
August 24, 2021
- the Two Tower built-in algorithm
- the Swivel pipeline template
August 02, 2021
Vertex Pipelines is available in the following regions:
See all the locations where Vertex Pipelines is available.
July 28, 2021
July 27, 2021
The following features are generally available (GA):
- Access Transparency for Vertex AI
- Using a custom service account for custom training and prediction
- Using VPC Service Controls with Vertex AI
- Setting up VPC Network Peering with Vertex AI and using private IP for custom training (Using private IP for prediction and vector matching with Matching Engine remains in preview.)
July 20, 2021
Private endpoints for online prediction are now available in preview. After you set up VPC Network Peering with Vertex AI, you can create private endpoints for low-latency online prediction within your private network.
Additionally, the documentation for VPC Network Peering with custom training has moved. The general instructions for setting up VPC Network Peering with Vertex AI are available at the original link, https://cloud.google.com/vertex-ai/docs/general/vpc-peering. The documentation for custom training is now available here: Using private IP with custom training.
July 19, 2021
You can now use an interactive shell to inspect your custom training container while it runs. The interactive shell can be helpful for monitoring and debugging training.
This feature is available in preview.
July 14, 2021
You can now use the gcloud beta ai custom-jobs create command to build a Docker image based on local training code, push the image to Container Registry, and create a custom job.
July 08, 2021
You can now containerize and run your training code locally by using the new gcloud beta ai custom-jobs local-run command. This feature is available in preview.
June 25, 2021
June 11, 2021
You can now use a pre-built container to serve predictions from TensorFlow 2.5 models.
You can now use a pre-built container to serve predictions from XGBoost 1.4 models.
May 18, 2021
AI Platform (Unified) is now Vertex AI.
Vertex AI has added support for custom model training, custom model batch prediction, custom model online prediction, and a limited number of other services in the following regions:
May 03, 2021
You can now use a pre-built container to serve predictions from TensorFlow 2.4 models.
You can now use a pre-built container to serve predictions from scikit-learn 0.24 models.
You can now use a pre-built container to serve predictions from XGBoost 1.3 models.
April 27, 2021
AI Platform Vizier is now available in preview. Vizier is a feature of AI Platform (Unified) that you can use to perform black-box optimization. You can use Vizier to tune hyperparameters or optimize any evaluable system.
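As an illustration of what a Vizier study definition can contain, the sketch below builds a REST-style study body that maximizes an accuracy metric over two hyperparameters. The study name, metric ID, and parameter ranges are hypothetical; the field names follow the documented StudySpec shape.

```python
# Hedged sketch: a Vizier study body for black-box hyperparameter tuning.
# Display name, metric ID, and ranges are hypothetical placeholders.
study = {
    "displayName": "tune-lr-demo",
    "studySpec": {
        "metrics": [{"metricId": "accuracy", "goal": "MAXIMIZE"}],
        "parameters": [
            {
                "parameterId": "learning_rate",
                "doubleValueSpec": {"minValue": 1e-5, "maxValue": 1e-1},
                "scaleType": "UNIT_LOG_SCALE",  # search the range on a log scale
            },
            {
                "parameterId": "batch_size",
                "discreteValueSpec": {"values": [32, 64, 128]},
            },
        ],
        "algorithm": "ALGORITHM_UNSPECIFIED",  # let Vizier choose its default
    },
}
```

Because Vizier treats the objective as a black box, the system being tuned only needs to report the metric value for each suggested trial.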
April 15, 2021
The Python client library for AI Platform (Unified) is now called the AI Platform (Unified) SDK. With the release of version 0.7 (Preview), the AI Platform (Unified) SDK provides two levels of support. The aiplatform library is designed to simplify common data science workflows by using wrapper classes and opinionated defaults. The aiplatform.gapic library remains available for those times when you need more flexibility or control.
March 31, 2021
AI Platform (Unified) is now generally available (GA).
AI Platform (Unified) has added support for the following regions for custom model training, as well as batch and online prediction for custom-trained models:
- us-west1 (Oregon)
- us-east1 (South Carolina)
- us-east4 (N. Virginia)
- northamerica-northeast1 (Montreal)
- europe-west2 (London)
- europe-west1 (Belgium)
- asia-southeast1 (Singapore)
- asia-northeast1 (Tokyo)
- australia-southeast1 (Sydney)
- asia-northeast3 (Seoul)
March 15, 2021
You can now use a pre-built container to perform custom training with PyTorch 1.7.
March 02, 2021
CMEK compliance using the client libraries
You can now use the client libraries to create resources with a customer-managed encryption key (CMEK).
For more information on creating a resource with an encryption key using the client libraries, see Using customer-managed encryption keys (CMEK).
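As a sketch under assumed names, CMEK is specified by attaching an encryption spec that references a Cloud KMS key when creating a resource. The project, key ring, and key names below are hypothetical placeholders; the dataset body shows where the spec fits.

```python
# Hedged sketch: attaching a customer-managed encryption key (CMEK) to a
# resource at creation time. All resource names are hypothetical placeholders.
encryption_spec = {
    "kmsKeyName": (
        "projects/example-project/locations/us-central1/"
        "keyRings/example-ring/cryptoKeys/example-key"
    )
}
dataset_body = {
    "displayName": "cmek-dataset",
    "metadataSchemaUri": (
        "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
    ),
    "encryptionSpec": encryption_spec,  # data at rest is encrypted with this key
}
```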
March 01, 2021
The client library for Java now includes enhancements to improve usage of training and prediction features. The client library includes additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.
To use these enhancements, you must install the latest version of the client library.
February 25, 2021
AI Platform (Unified) now supports Access Transparency in beta. Google Cloud organizations with certain support packages can use this feature. Learn more about using Access Transparency with AI Platform (Unified).
The client libraries for Node.js and Python now include enhancements to improve usage of training and prediction features. These client libraries include additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.
To use these enhancements, you must install the latest version of the client libraries.
Predict and explain method calls no longer require the use of a different service endpoint (for example, https://us-central1-prediction-aiplatform.googleapis.com). These methods are now available on the same endpoint as all other methods.
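The helper below sketches the regional endpoint construction this change implies: one hostname per region serves every method, with no separate `-prediction-` host.

```python
def service_endpoint(region: str) -> str:
    """Build the regional AI Platform (Unified) service endpoint that now
    serves predict and explain calls along with all other methods."""
    return f"https://{region}-aiplatform.googleapis.com"
```

For example, `service_endpoint("us-central1")` returns `https://us-central1-aiplatform.googleapis.com`.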
You can now use a pre-built container to perform custom training with TensorFlow 2.4.
You can now use a pre-built container to serve predictions from TensorFlow 2.3 models.
You can now use a pre-built container to serve predictions from XGBoost 1.2 models.
February 01, 2021
You can now use a pre-built container to perform custom training with PyTorch 1.6.
January 19, 2021
Preview: Select AI Platform (Unified) resources can now be configured to use customer-managed encryption keys (CMEK). Currently, you can create resources with a CMEK key only in the UI; this functionality is not yet available in the client libraries.
January 11, 2021
The default boot disk type for virtual machine instances used for custom training has changed from pd-standard to pd-ssd. Learn more about disk types for custom training and read about pricing for different disk types.
If you previously used the default disk type for custom training and want to continue training with the same disk type, make sure to explicitly specify the pd-standard boot disk type when you perform custom training.
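To make the disk type explicit, a custom training worker pool spec can set it in its disk spec, as in the sketch below. The machine type, disk size, and container image are hypothetical placeholders.

```python
# Hedged sketch: pinning the boot disk type for a custom training job so the
# changed default (pd-ssd) does not apply. Image URI is a placeholder.
worker_pool_spec = {
    "machineSpec": {"machineType": "n1-standard-4"},
    "replicaCount": 1,
    "diskSpec": {
        "bootDiskType": "pd-standard",  # explicit: the default is now pd-ssd
        "bootDiskSizeGb": 100,
    },
    "containerSpec": {"imageUri": "gcr.io/example/trainer:latest"},
}
```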
January 06, 2021
You can now use a pre-built container to perform custom training with TensorFlow 2.3.
December 17, 2020
AI Platform (Unified) now stores and processes your data only in the region you specify for most features. Learn more.