This page documents production updates to Vertex AI. You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.
To get the latest product updates delivered to you, add the URL of this page to your feed reader.
August 02, 2021
Vertex Pipelines is available in additional regions. See all the locations where Vertex Pipelines is available.
July 28, 2021
July 27, 2021
The following features are generally available (GA):
- Access Transparency for Vertex AI
- Using a custom service account for custom training and prediction
- Using VPC Service Controls with Vertex AI
- Setting up VPC Network Peering with Vertex AI and using private IP for custom training (Using private IP for prediction and vector matching with Matching Engine remains in preview.)
July 20, 2021
Private endpoints for online prediction are now available in preview. After you set up VPC Network Peering with Vertex AI, you can create private endpoints for low-latency online prediction within your private network.
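As a sketch of how this fits together, the request body for creating such an endpoint attaches it to the peered VPC network. The helper below builds that body as a plain dict; the project number and network name are placeholders, and the exact field set may differ from the current API.

```python
# Hedged sketch: JSON body for an endpoints.create call that makes the
# endpoint private by attaching it to a VPC network peered with Vertex AI.
def private_endpoint_body(project_number: str, network: str,
                          display_name: str) -> dict:
    """Build an Endpoint creation body pointing at a peered VPC network."""
    return {
        "displayName": display_name,
        # Full resource name of the peered VPC network.
        "network": f"projects/{project_number}/global/networks/{network}",
    }

body = private_endpoint_body("123456789", "my-network", "private-prediction")
```

Because the endpoint lives inside the peered network, prediction traffic never traverses the public internet, which is what yields the lower latency.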
Additionally, the documentation for VPC Network Peering with custom training has moved. The general instructions for setting up VPC Network Peering with Vertex AI are available at the original link, https://cloud.google.com/vertex-ai/docs/general/vpc-peering. The documentation for custom training is now available here: Using private IP with custom training.
July 19, 2021
You can now use an interactive shell to inspect your custom training container while it runs. The interactive shell can be helpful for monitoring and debugging training.
This feature is available in preview.
July 14, 2021
You can now use the gcloud beta ai custom-jobs create command to build a Docker image based on local training code, push the image to Container Registry, and create a custom job.
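Conceptually, the command packages local code into an image and then submits a CustomJob that runs it. The sketch below shows, under my reading of the REST API, roughly what that job payload looks like; the display name and image URI are placeholders.

```python
# Hedged sketch of the CustomJob request body that results after the
# gcloud command has built and pushed the training image.
def custom_job_body(display_name: str, image_uri: str) -> dict:
    """Build a minimal CustomJob body with one worker pool running
    the container image produced from the local training code."""
    return {
        "displayName": display_name,
        "jobSpec": {
            "workerPoolSpecs": [
                {
                    "machineSpec": {"machineType": "n1-standard-4"},
                    "replicaCount": 1,
                    # Image pushed to Container Registry by the command.
                    "containerSpec": {"imageUri": image_uri},
                }
            ]
        },
    }

job = custom_job_body("my-training-job", "gcr.io/my-project/my-trainer:latest")
```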
July 08, 2021
You can now containerize and run your training code locally by using the new gcloud beta ai custom-jobs local-run command. This feature is available in preview.
June 25, 2021
June 11, 2021
You can now use a pre-built container to serve predictions from TensorFlow 2.5 models.
You can now use a pre-built container to serve predictions from XGBoost 1.4 models.
May 18, 2021
AI Platform (Unified) is now Vertex AI.
Vertex AI has added support for custom model training, custom model batch prediction, custom model online prediction, and a limited number of other services in additional regions.
May 03, 2021
You can now use a pre-built container to serve predictions from TensorFlow 2.4 models.
You can now use a pre-built container to serve predictions from scikit-learn 0.24 models.
You can now use a pre-built container to serve predictions from XGBoost 1.3 models.
April 27, 2021
Vizier is now available in preview. Vizier is a feature of AI Platform (Unified) that you can use to perform black-box optimization. You can use Vizier to tune hyperparameters or optimize any evaluable system.
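To make the black-box setup concrete, here is a hedged sketch of a Vizier study configuration with one metric to maximize and one tunable parameter. Vizier only sees parameter values going in and metric values coming out, which is why any evaluable system can be tuned; field names follow the Vizier REST API as I recall it and may differ in detail.

```python
# Hedged sketch of a Vizier studySpec: metric ids, goal, and parameter
# bounds below are illustrative placeholders.
def study_spec(metric_id: str, goal: str, param_id: str,
               min_value: float, max_value: float) -> dict:
    """Build a minimal black-box study spec: what to optimize (metrics)
    and what Vizier may vary (parameters)."""
    return {
        "metrics": [{"metricId": metric_id, "goal": goal}],
        "parameters": [
            {
                "parameterId": param_id,
                "doubleValueSpec": {"minValue": min_value,
                                    "maxValue": max_value},
            }
        ],
    }

spec = study_spec("accuracy", "MAXIMIZE", "learning_rate", 1e-4, 1e-1)
```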
April 15, 2021
The Python client library for AI Platform (Unified) is now called the AI Platform (Unified) SDK. With the release of version 0.7 (Preview), the AI Platform (Unified) SDK provides two levels of support. The high-level aiplatform library is designed to simplify common data science workflows by using wrapper classes and opinionated defaults. The aiplatform.gapic library remains available for those times when you need more flexibility or control.
March 31, 2021
AI Platform (Unified) is now generally available (GA).
AI Platform (Unified) has added support for the following regions for custom model training, as well as batch and online prediction for custom-trained models:
- us-west1 (Oregon)
- us-east1 (South Carolina)
- us-east4 (N. Virginia)
- northamerica-northeast1 (Montreal)
- europe-west2 (London)
- europe-west1 (Belgium)
- asia-southeast1 (Singapore)
- asia-northeast1 (Tokyo)
- australia-southeast1 (Sydney)
- asia-northeast3 (Seoul)
March 15, 2021
You can now use a pre-built container to perform custom training with PyTorch 1.7.
March 02, 2021
CMEK compliance using the client libraries
You can now use the client libraries to create resources with a customer-managed encryption key (CMEK).
For more information on creating a resource with an encryption key using the client libraries, see Using customer-managed encryption keys (CMEK).
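As a sketch, a CMEK-protected resource is created by attaching an encryption spec that names a Cloud KMS key. The helper below builds that fragment using the standard KMS key resource-name format; all path components are placeholders.

```python
# Hedged sketch: the encryptionSpec fragment attached to a resource
# creation request so the resource is encrypted with a CMEK key.
def encryption_spec(project: str, location: str,
                    key_ring: str, key: str) -> dict:
    """Build an encryptionSpec naming a customer-managed Cloud KMS key."""
    return {
        "encryptionSpec": {
            "kmsKeyName": (
                f"projects/{project}/locations/{location}"
                f"/keyRings/{key_ring}/cryptoKeys/{key}"
            )
        }
    }

spec = encryption_spec("my-project", "us-central1", "my-ring", "my-key")
```

The key must live in the same location as the resource it protects, which is why the location appears in both the request path and the key name.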
March 01, 2021
The client library for Java now includes enhancements to improve usage of training and prediction features. The client library includes additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.
To use these enhancements, you must install the latest version of the client library.
February 25, 2021
AI Platform (Unified) now supports Access Transparency in beta. Google Cloud organizations with certain support packages can use this feature. Learn more about using Access Transparency with AI Platform (Unified).
The client libraries for Node.js and Python now include enhancements to improve usage of training and prediction features. These client libraries include additional types and utility functions for sending training requests, sending prediction requests, and reading prediction results.
To use these enhancements, you must install the latest version of the client libraries.
predict and explain method calls no longer require the use of a different service endpoint (for example, https://us-central1-prediction-aiplatform.googleapis.com). These methods are now available on the same endpoint as all other methods.
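In other words, every method can now share the one regional API host. A small helper sketch, using the standard regional Vertex AI endpoint pattern:

```python
# All methods, including predict and explain, share the regional host;
# no separate "-prediction-" host is needed.
def regional_endpoint(region: str) -> str:
    """Return the shared regional Vertex AI API host for a region."""
    return f"{region}-aiplatform.googleapis.com"

host = regional_endpoint("us-central1")
```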
You can now use a pre-built container to perform custom training with TensorFlow 2.4.
You can now use a pre-built container to serve predictions from TensorFlow 2.3 models.
You can now use a pre-built container to serve predictions from XGBoost 1.2 models.
February 01, 2021
You can now use a pre-built container to perform custom training with PyTorch 1.6.
January 19, 2021
Preview: Select AI Platform (Unified) resources can now be configured to use Customer-managed encryption keys (CMEK).
Currently you can only create resources with a CMEK key in the UI; this functionality is not currently available using the client libraries.
January 11, 2021
The default boot disk type for virtual machine instances used for custom training has changed from pd-standard to pd-ssd. Learn more about disk types for custom training and read about pricing for different disk types.
If you previously used the default disk type for custom training and want to continue training with the same disk type, make sure to explicitly specify the pd-standard boot disk type when you perform custom training.
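To pin the old default, the worker pool's disk spec can name pd-standard explicitly. A hedged sketch of the relevant fragment, with field names per the CustomJob REST API as I understand it:

```python
# Hedged sketch of the diskSpec block for a custom-training worker pool;
# setting bootDiskType keeps the pre-change default instead of pd-ssd.
def worker_pool_disk_spec(disk_type: str = "pd-standard",
                          size_gb: int = 100) -> dict:
    """Build a diskSpec that explicitly selects the boot disk type."""
    return {
        "diskSpec": {
            "bootDiskType": disk_type,   # "pd-standard" or "pd-ssd"
            "bootDiskSizeGb": size_gb,
        }
    }

spec = worker_pool_disk_spec()
```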
January 06, 2021
You can now use a pre-built container to perform custom training with TensorFlow 2.3.
December 17, 2020
AI Platform (Unified) now stores and processes your data only in the region you specify for most features. Learn more.