A prediction is the output of a trained machine learning model. This page provides an overview of the workflow for getting predictions from your models on Vertex AI.
Vertex AI offers two methods for getting predictions:
Online predictions are synchronous requests made to a model endpoint. Before sending a request, you must first deploy the model resource to an endpoint. This associates compute resources with the model so that it can serve online predictions with low latency. Use online predictions when you are making requests in response to application input or in situations that require timely inference.
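As a minimal sketch, assuming the Vertex AI SDK for Python (`google-cloud-aiplatform`) with placeholder project, region, endpoint ID, and feature names, an online prediction request might look like this:

```python
def build_instances(rows):
    """Shape raw feature tuples into the instance dicts an endpoint expects.

    The field names here are placeholders; use your model's input schema.
    """
    return [{"feature_a": a, "feature_b": b} for a, b in rows]


def predict_online(project, location, endpoint_id, instances):
    """Send a synchronous (online) prediction request to a deployed endpoint."""
    # Requires `pip install google-cloud-aiplatform` and application credentials.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    endpoint = aiplatform.Endpoint(endpoint_id)  # the endpoint the model is deployed to
    return endpoint.predict(instances=instances).predictions
```

For example, `predict_online("my-project", "us-central1", "1234567890", build_instances([(0.5, "red")]))` returns the predictions for one instance; all of those identifiers are illustrative.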
Batch predictions are asynchronous requests. You request a batchPredictionsJob directly from the model resource without needing to deploy the model to an endpoint. Use batch predictions when you don't require an immediate response and want to process accumulated data by using a single request.
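A corresponding sketch for batch predictions, again assuming the Vertex AI SDK for Python; the model ID, job name, and Cloud Storage paths are placeholders:

```python
def predict_batch(project, location, model_id, gcs_source, gcs_destination):
    """Run an asynchronous batch prediction directly from a Model resource."""
    # Requires `pip install google-cloud-aiplatform` and application credentials.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    model = aiplatform.Model(model_id)  # no endpoint deployment is needed
    job = model.batch_predict(
        job_display_name="accumulated-data-batch",   # illustrative name
        gcs_source=gcs_source,                       # e.g. "gs://my-bucket/input.jsonl"
        gcs_destination_prefix=gcs_destination,      # e.g. "gs://my-bucket/output/"
        machine_type="n1-standard-4",
    )
    job.wait()  # blocks until the asynchronous job finishes
    return job.state
```

Because the job is asynchronous, `wait()` is optional; you can also submit the job and poll its state later.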
Get predictions from custom trained models
First, import your trained model into the Vertex AI Model Registry. Then, read the following documentation to learn how to get predictions:
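The import step can be sketched as follows with the Vertex AI SDK for Python; the display name, artifact path, and serving container image are placeholders you would replace with your own:

```python
def register_custom_model(project, location, artifact_uri, serving_image):
    """Import a custom trained model into the Vertex AI Model Registry."""
    # Requires `pip install google-cloud-aiplatform` and application credentials.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    return aiplatform.Model.upload(
        display_name="my-custom-model",             # illustrative name
        artifact_uri=artifact_uri,                  # e.g. "gs://my-bucket/model/"
        serving_container_image_uri=serving_image,  # prebuilt or custom container
    )
```

Once registered, the model can be deployed to an endpoint for online predictions or used directly for batch predictions.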
Get predictions from AutoML models
Unlike custom trained models, AutoML models are automatically imported into the Vertex AI Model Registry after training.
Otherwise, the workflow for AutoML models is similar, but it varies slightly based on your data type and model objective. The documentation for getting AutoML predictions is located alongside the rest of the AutoML documentation:
Learn how to get predictions from the following types of image AutoML models:
Learn how to get predictions from the following types of tabular AutoML models:
- Tabular classification/regression models
- Tabular forecasting models (batch predictions only)
Learn how to get predictions from the following types of text AutoML models:
Learn how to get predictions from the following types of video AutoML models:
- Video action recognition models (batch predictions only)
- Video classification models (batch predictions only)
- Video object tracking models (batch predictions only)
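Because AutoML models are already in the Model Registry after training, a deployment sketch skips the import step; the model ID below is a placeholder:

```python
def deploy_automl_model(project, location, model_id):
    """Deploy an already-registered AutoML model to a new endpoint."""
    # Requires `pip install google-cloud-aiplatform` and application credentials.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)
    model = aiplatform.Model(model_id)  # AutoML training registered this model
    # Applies only to objectives that support online predictions; the
    # forecasting and video models above are batch-only.
    return model.deploy()
```

For the batch-only objectives, request a batch prediction from the model resource instead of deploying it.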