This page shows you how to get batch predictions from your video classification models using the Google Cloud console or the Vertex AI API. Batch predictions are asynchronous requests. You request batch predictions directly from the model resource without needing to deploy the model to an endpoint.
AutoML video models do not support online predictions.
Get batch predictions
To make a batch prediction request, you specify an input source and an output format and location where Vertex AI stores the prediction results.
Input data requirements
The input for batch requests specifies the items to send to your model for prediction. Batch predictions for the AutoML video model type use a JSON Lines file to specify a list of videos to make predictions for, and then store the JSON Lines file in a Cloud Storage bucket. You can specify Infinity for the timeSegmentEnd field to indicate the end of the video. The following sample shows a single line in an input JSON Lines file:
{"content": "gs://sourcebucket/datasets/videos/source_video.mp4", "mimeType": "video/mp4", "timeSegmentStart": "0.0s", "timeSegmentEnd": "2.366667s"}
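For example, an input file that lists multiple videos might look like the following sketch; the second line uses Infinity to classify from the 10-second mark through the end of the video (the bucket and file names are placeholders):
{"content": "gs://sourcebucket/datasets/videos/source_video.mp4", "mimeType": "video/mp4", "timeSegmentStart": "0.0s", "timeSegmentEnd": "2.366667s"}
{"content": "gs://sourcebucket/datasets/videos/another_video.mp4", "mimeType": "video/mp4", "timeSegmentStart": "10.0s", "timeSegmentEnd": "Infinity"}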
Request a batch prediction
For batch prediction requests, you can use the Google Cloud console or the Vertex AI API. Depending on the number of input items that you've submitted, a batch prediction task can take some time to complete.
Google Cloud console
Use the Google Cloud console to request a batch prediction.
In the Google Cloud console, in the Vertex AI section, go to the Batch predictions page.
Click Create to open the New batch prediction window and complete the following steps:
- Enter a name for the batch prediction.
- For Model name, select the name of the model to use for this batch prediction.
- For Source path, specify the Cloud Storage location where your JSON Lines input file is located.
- For Destination path, specify a Cloud Storage location where the batch prediction results are stored. The Output format is determined by your model's objective. AutoML video models output JSON Lines files.
API
Use the Vertex AI API to send batch prediction requests.
REST
Before using any of the request data, make the following replacements:
- LOCATION_ID: Region where the Model is stored and the batch prediction job is executed. For example, us-central1.
- PROJECT_ID: Your project ID.
- BATCH_JOB_NAME: Display name for the batch job.
- MODEL_ID: The ID for the model to use for making predictions.
- THRESHOLD_VALUE (optional): The model returns only predictions that have confidence scores of at least this value.
- SEGMENT_CLASSIFICATION (optional): A Boolean value that determines whether to request segment-level classification. Vertex AI returns labels and their confidence scores for the entire time segment of the video that you specified in the input instance. The default is true.
- SHOT_CLASSIFICATION (optional): A Boolean value that determines whether to request shot-level classification. Vertex AI determines the boundaries for each camera shot in the entire time segment of the video that you specified in the input instance. Vertex AI then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.
- ONE_SEC_INTERVAL_CLASSIFICATION (optional): A Boolean value that determines whether to request classification for a video at one-second intervals. Vertex AI returns labels and their confidence scores for each second of the entire time segment of the video that you specified in the input instance. The default is false.
- URI: Cloud Storage URI where your input JSON Lines file is located.
- BUCKET: Your Cloud Storage bucket where Vertex AI stores the prediction output.
- PROJECT_NUMBER: Your project's automatically generated project number.
HTTP method and URL:
POST https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs
Request JSON body:
{ "displayName": "BATCH_JOB_NAME", "model": "projects/PROJECT_ID/locations/LOCATION_ID/models/MODEL_ID", "modelParameters": { "confidenceThreshold": THRESHOLD_VALUE, "segmentClassification": SEGMENT_CLASSIFICATION, "shotClassification": SHOT_CLASSIFICATION, "oneSecIntervalClassification": ONE_SEC_INTERVAL_CLASSIFICATION }, "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": ["URI"], }, }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "OUTPUT_BUCKET", }, }, }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "name": "projects/PROJECT_NUMBER/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID", "displayName": "BATCH_JOB_NAME", "model": "projects/PROJECT_NUMBER/locations/us-central1/models/MODEL_ID", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "CONTENT" ] } }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "BUCKET" } }, "state": "JOB_STATE_PENDING", "createTime": "2020-05-30T02:58:44.341643Z", "updateTime": "2020-05-30T02:58:44.341643Z", "modelDisplayName": "MODEL_NAME", "modelObjective": "MODEL_OBJECTIVE" }
You can poll for the status of the batch job using the BATCH_JOB_ID until the job state is JOB_STATE_SUCCEEDED.
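If you prefer to poll programmatically instead of re-sending the REST call, a minimal sketch with the Vertex AI SDK for Python might look like the following; the project ID, project number, region, and job ID are placeholders.

import time

from google.cloud import aiplatform

# Initialize the SDK with your project and the region of the batch job.
aiplatform.init(project="PROJECT_ID", location="us-central1")

# Full resource name from the "name" field of the creation response.
JOB_NAME = "projects/PROJECT_NUMBER/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID"

TERMINAL_STATES = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

while True:
    # Re-fetch the job to read its current state.
    job = aiplatform.BatchPredictionJob(batch_prediction_job_name=JOB_NAME)
    state = job.state.name
    print("Current state:", state)
    if state in TERMINAL_STATES:
        break
    time.sleep(60)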
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
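As a rough illustration, a batch prediction request with the Vertex AI SDK for Python might look like the following sketch; the project, region, model ID, Cloud Storage paths, and parameter values are placeholders, and the model parameters mirror the optional fields shown in the REST request body above.

from google.cloud import aiplatform

# Initialize the SDK with your project and region.
aiplatform.init(project="PROJECT_ID", location="us-central1")

# Reference the trained AutoML video classification model by its ID.
model = aiplatform.Model(model_name="MODEL_ID")

# Start a batch prediction job that reads the JSON Lines input from
# Cloud Storage and writes JSON Lines results under the destination prefix.
batch_prediction_job = model.batch_predict(
    job_display_name="BATCH_JOB_NAME",
    gcs_source="gs://BUCKET/input/instances.jsonl",
    gcs_destination_prefix="gs://BUCKET/output/",
    instances_format="jsonl",
    predictions_format="jsonl",
    model_parameters={
        "confidenceThreshold": 0.5,
        "segmentClassification": True,
        "shotClassification": False,
        "oneSecIntervalClassification": False,
    },
    sync=True,  # Block until the job finishes.
)

print(batch_prediction_job.display_name)
print(batch_prediction_job.state)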
Retrieve batch prediction results
Vertex AI sends batch prediction output to your specified destination.
When a batch prediction task is complete, the prediction output is stored in the Cloud Storage bucket that you specified in your request.
Example batch prediction results
The following is an example of batch prediction results from a video classification model.
{ "instance": { "content": "gs://bucket/video.mp4", "mimeType": "video/mp4", "timeSegmentStart": "1s", "timeSegmentEnd": "5s" } "prediction": [{ "id": "1", "displayName": "cat", "type": "segment-classification", "timeSegmentStart": "1s", "timeSegmentEnd": "5s", "confidence": 0.7 }, { "id": "1", "displayName": "cat", "type": "shot-classification", "timeSegmentStart": "1s", "timeSegmentEnd": "4s", "confidence": 0.9 }, { "id": "2", "displayName": "dog", "type": "shot-classification", "timeSegmentStart": "4s", "timeSegmentEnd": "5s", "confidence": 0.6 }, { "id": "1", "displayName": "cat", "type": "one-sec-interval-classification", "timeSegmentStart": "1s", "timeSegmentEnd": "1s", "confidence": 0.95 }, { "id": "1", "displayName": "cat", "type": "one-sec-interval-classification", "timeSegmentStart": "2s", "timeSegmentEnd": "2s", "confidence": 0.9 }, { "id": "1", "displayName": "cat", "type": "one-sec-interval-classification", "timeSegmentStart": "3s", "timeSegmentEnd": "3s", "confidence": 0.85 }, { "id": "2", "displayName": "dog", "type": "one-sec-interval-classification", "timeSegmentStart": "4s", "timeSegmentEnd": "4s", "confidence": 0.6 }] }