Vertex AI brings together AI Platform and AutoML into a single interface. This page compares Vertex AI and AI Platform for users who are familiar with AI Platform.
Custom training
With Vertex AI, you can train models with AutoML, or you can do custom training, which is a workflow more similar to AI Platform Training.
Task | AI Platform Training | Vertex AI |
---|---|---|
Select the machine learning framework version to use (Google Cloud console users set the framework name and framework version.) | **Runtime versions**: When submitting a training job, specify the number of a runtime version that includes your desired framework and framework version. | **Prebuilt containers**: When submitting a custom training job, specify the Artifact Registry URI of a prebuilt container that corresponds to your framework and framework version. |
Submit a training job using a custom container | Build your own custom container, host it on Artifact Registry, and use it to run your training app. | Build your own custom container, host it on Artifact Registry, and use it to run your training app. |
Set the Google Cloud region to use | Specify the name of a region when submitting a training job to a global endpoint (`ml.googleapis.com`). | Submit your custom training job to a regional endpoint, such as `us-central1-aiplatform.googleapis.com`. There is no global endpoint. Some regions that are available in AI Platform are not available in Vertex AI; see the list of supported regions on the Locations page. |
Specify machine configurations for distributed training | Specify configurations named after specific roles of your training cluster (`masterConfig`, `workerConfig`, `parameterServerConfig`, and `evaluatorConfig`). | The configuration is a generic list: specify machine configurations in `CustomJobSpec.workerPoolSpecs[]` (see the first sketch after this table). |
Submit a training job using a Python package | Fields related to your Python package are top-level within `TrainingInput`. | Fields related to your Python package are organized within `pythonPackageSpec`. |
Specify machine types | Specify AI Platform Training machine types or scale tiers. | Specify Compute Engine machine types. |
Submit a hyperparameter tuning job | Submit a training job with a `hyperparameters` configuration. Whether a training job is submitted with or without hyperparameter tuning, it creates a `TrainingJob` API resource. | Submit a hyperparameter tuning job with a `studySpec` configuration. This creates a top-level API resource (`HyperparameterTuningJob`). Custom training jobs submitted without hyperparameter tuning create a top-level `CustomJob` API resource (see the second sketch after this table). |
Create a training pipeline to orchestrate training jobs with other operations | No built-in API resource for orchestration; use AI Platform Pipelines, Kubeflow, or another orchestration tool. | Create a `TrainingPipeline` resource to orchestrate a training job with model deployment. |
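
As a concrete illustration of the regional endpoint, `workerPoolSpecs[]`, and `pythonPackageSpec` rows above, here is a minimal sketch that submits a distributed custom training job with the Vertex AI SDK for Python. The project ID, bucket, package path, module name, and container tag are illustrative placeholders, not values from this page.

```python
from google.cloud import aiplatform

# Point the SDK at a regional endpoint; Vertex AI has no global endpoint.
aiplatform.init(
    project="my-project",             # placeholder
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder
)

# worker_pool_specs is a generic list: pool 0 is the primary replica and
# pool 1 holds the workers, replacing the role-specific masterConfig and
# workerConfig fields from AI Platform Training. Python package fields
# live inside python_package_spec instead of top-level TrainingInput.
job = aiplatform.CustomJob(
    display_name="my-custom-job",
    worker_pool_specs=[
        {
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "python_package_spec": {
                # Prebuilt training container; check the current container
                # list for the exact framework/version URI you need.
                "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-11:latest",
                "package_uris": ["gs://my-bucket/trainer-0.1.tar.gz"],
                "python_module": "trainer.task",
            },
        },
        {
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 2,
            "python_package_spec": {
                "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-11:latest",
                "package_uris": ["gs://my-bucket/trainer-0.1.tar.gz"],
                "python_module": "trainer.task",
            },
        },
    ],
)
job.run()  # creates a top-level CustomJob resource
```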
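
Likewise, a hedged sketch of the hyperparameter tuning flow: in the SDK, `metric_spec` and `parameter_spec` together fill the role of `studySpec`, and running the job creates a top-level `HyperparameterTuningJob` resource. The metric name and tuned parameter are assumptions about the trainer, not fixed by the API.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-bucket")  # placeholders

# A minimal single-pool spec for the CustomJob to tune.
worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "python_package_spec": {
        "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-11:latest",
        "package_uris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "python_module": "trainer.task",
    },
}]

base_job = aiplatform.CustomJob(
    display_name="hp-base-job",
    worker_pool_specs=worker_pool_specs,
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="my-hp-job",
    custom_job=base_job,
    # Assumes the trainer reports a metric named "accuracy" and accepts
    # a --learning_rate flag for the tuned parameter.
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
    },
    max_trial_count=8,
    parallel_trial_count=2,
)
hp_job.run()  # creates a HyperparameterTuningJob resource, not a CustomJob
```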
Prediction
Task | AI Platform Prediction | Vertex AI |
---|---|---|
Select the machine learning framework version to use (Google Cloud console users set the framework name and framework version.) | **Runtime versions**: When deploying a model, specify the number of a runtime version that includes your desired framework and framework version. | **Prebuilt containers**: When deploying a model, specify the Artifact Registry URI of a prebuilt container that corresponds to your framework and framework version. Use the multi-regional option that matches your regional endpoint, for example `us-docker.pkg.dev` for a `us-central1` endpoint. |
Run custom code with prediction | Use custom prediction routines. | Use custom prediction routines on Vertex AI. |
Set the Google Cloud region to use | Specify the name of a region when creating a model on a global API endpoint (`ml.googleapis.com`). | Create your model on a regional endpoint, such as `us-central1-aiplatform.googleapis.com`. There is no global endpoint. Some regions that are available in AI Platform are not available in Vertex AI; see the list of supported regions on the Locations page. |
Store model artifacts | Model artifacts are stored in Cloud Storage. There is no associated API resource for model artifacts. | Managed model storage is available for model artifacts and is associated with the `Model` resource. You can still deploy models stored in Cloud Storage without using a Vertex AI managed dataset. |
Model deployment | You deploy a model directly to make it available for online predictions. | You create an `Endpoint` object, which provides resources for serving online predictions, and then deploy the model to the endpoint. To request predictions, you call the `predict()` method (see the first sketch after this table). |
Request batch predictions | You can request batch predictions on models stored in Cloud Storage, and specify a runtime version in your request. Alternatively, you can request batch predictions on deployed models, and use the runtime version you specified during model deployment. | You upload your model to Vertex AI, and then specify either a prebuilt container or a custom container to serve the predictions (see the third sketch after this table). |
Online prediction requests | The JSON structure includes a list of instances. | The JSON structure includes a list of instances and a field for parameters (see the second sketch after this table). |
Specify machine types | Specify any available machine type when creating a version. | Legacy online prediction machine types from AI Platform (MLS1) are not supported. Only Compute Engine machine types are available. |
Deploy models | Create a model resource, and then create a version resource. | Create a model resource, create an endpoint resource, and deploy the model to the endpoint. Specify traffic splitting in the endpoint. |
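
To make the Vertex AI flow concrete, here is a minimal sketch using the Vertex AI SDK for Python. The project, bucket, and container tag are illustrative placeholders; check the prebuilt container list for the exact image URI for your framework and region.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # regional endpoint

# Upload model artifacts from Cloud Storage together with a prebuilt
# serving container. The image host is the multi-regional option that
# matches the regional endpoint (us-docker.pkg.dev for us-central1).
model = aiplatform.Model.upload(
    display_name="my-model",
    artifact_uri="gs://my-bucket/model/",  # placeholder
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-11:latest",
)

# Deployment is a two-step flow: create an Endpoint, then deploy the
# model to it. The machine type must be a Compute Engine type (no MLS1).
endpoint = aiplatform.Endpoint.create(display_name="my-endpoint")
model.deploy(
    endpoint=endpoint,
    machine_type="n1-standard-2",
    traffic_percentage=100,  # traffic splitting is configured on the endpoint
)
```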
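
The online prediction request then carries a list of instances plus the new parameters field; both values below are hypothetical and depend entirely on the deployed model's schema.

```python
# Request an online prediction from the endpoint created above.
prediction = endpoint.predict(
    instances=[{"feature_a": 1.0, "feature_b": "x"}],  # model-specific shape
    parameters={"confidence_threshold": 0.5},          # hypothetical parameter
)
print(prediction.predictions)
```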
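
Batch prediction likewise starts from the uploaded `Model` and its serving container rather than from a runtime version; the input and output URIs here are placeholders.

```python
# Run a batch prediction job against the uploaded model; the serving
# container specified at upload time does the work, not a runtime version.
batch_job = model.batch_predict(
    job_display_name="my-batch-job",
    gcs_source="gs://my-bucket/batch-inputs.jsonl",        # placeholder
    gcs_destination_prefix="gs://my-bucket/batch-output",  # placeholder
    machine_type="n1-standard-2",
)
batch_job.wait()
```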
Vertex Explainable AI
You can get feature attributions for tabular and image models in both AI Explanations for AI Platform and Vertex Explainable AI (see the sketch after the following table).
Task | AI Explanations for AI Platform | Vertex Explainable AI |
---|---|---|
Get feature attributions for tabular models | Use Sampled Shapley or integrated gradients to get feature attributions for tabular models. | Use Sampled Shapley or integrated gradients to get feature attributions for tabular models. |
Get feature attributions for image models | Use integrated gradients or XRAI to get feature attributions for image models. | Use integrated gradients or XRAI to get feature attributions for image models. |
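
On the Vertex AI side, a hedged sketch of requesting attributions, assuming the model was uploaded with explanation metadata and parameters (for example, Sampled Shapley for a tabular model) and deployed to an endpoint as in the prediction sketches; the instance shape is a placeholder.

```python
# Request feature attributions from a deployed model that was uploaded
# with explanation metadata and parameters.
response = endpoint.explain(
    instances=[{"feature_a": 1.0, "feature_b": "x"}],  # model-specific shape
)
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)  # per-feature attribution values
```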