Batch prediction component


The BatchPredictionJob component lets you run an asynchronous prediction request. You request batch predictions directly from the model resource; you don't need to deploy the model to an endpoint. For data types that support both batch and online predictions, use batch prediction when you don't require an immediate response and want to process accumulated data with a single request.

To make a batch prediction, you specify an input source and an output location where Vertex AI stores the prediction results. The inputs and outputs depend on the model type that you're working with. For example, batch predictions for the AutoML image model type require an input JSON Lines file and the name of a Cloud Storage bucket to store the output. For more information about batch prediction, see Get batch predictions.
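For instance, each line of the JSON Lines input file for an AutoML image model references a single image in Cloud Storage. The bucket and object paths below are placeholders:

```
{"content": "gs://my-bucket/images/photo_0001.jpg", "mimeType": "image/jpeg"}
{"content": "gs://my-bucket/images/photo_0002.png", "mimeType": "image/png"}
```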

You can use the ModelBatchPredictOp component to access this functionality through Vertex AI Pipelines.
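The following is a minimal sketch of a pipeline that runs a batch prediction against an existing Model resource. It assumes KFP v2 and the google-cloud-pipeline-components v1 import paths; the project, region, bucket, and model IDs are placeholders:

```python
from kfp import dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.batch_predict_job import (
    ModelBatchPredictOp,
)


@dsl.pipeline(name="batch-predict-demo")
def batch_predict_pipeline(
    project: str = "my-project",       # placeholder project ID
    location: str = "us-central1",
):
    # Import an existing Model resource; no endpoint deployment is needed.
    model_importer = dsl.importer(
        artifact_uri=(
            "https://us-central1-aiplatform.googleapis.com/v1/"
            "projects/my-project/locations/us-central1/models/1234567890"
        ),
        artifact_class=artifact_types.VertexModel,
        metadata={
            "resourceName": (
                "projects/my-project/locations/us-central1/models/1234567890"
            )
        },
    )

    # Create the asynchronous BatchPredictionJob. Instances are read from
    # Cloud Storage, and results are written under the output prefix.
    ModelBatchPredictOp(
        project=project,
        location=location,
        job_display_name="example-batch-predict",
        model=model_importer.outputs["artifact"],
        instances_format="jsonl",
        gcs_source_uris=["gs://my-bucket/batch_inputs/instances.jsonl"],
        predictions_format="jsonl",
        gcs_destination_output_uri_prefix="gs://my-bucket/batch_outputs/",
    )
```

The component creates the batch prediction job and waits for it to complete, so downstream pipeline steps can rely on the results being available under the specified Cloud Storage prefix.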

API reference

Tutorials

Version history and changelog

Date     Version    Notes
03/2022  GCPC v1.0  Version 1.0 release of the components. Added support for prediction using the google.UnmanagedContainerModel artifact.
02/2022  GCPC v0.3  New experimental version of the components.
11/2021  GCPC v0.2  Experimental release of the components.

Technical Support Contacts

If you have any questions, please reach out to kubeflow-pipelines-components@google.com.