Method: projects.locations.models.batchPredict

Performs a batch prediction and returns the ID of a long-running operation. Poll the operation by using the operations.get method; when the operation has completed, its response field contains a BatchPredictResult.

HTTP request

POST https://automl.googleapis.com/v1beta1/{name}:batchPredict

Path parameters

Parameters
name

string

Name of the model requested to serve the batch prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict

Request body

The request body contains data with the following structure:

JSON representation
{
  "inputConfig": {
    object(BatchPredictInputConfig)
  },
  "outputConfig": {
    object(BatchPredictOutputConfig)
  },
  "params": {
    string: string,
    ...
  }
}
Fields
inputConfig

object(BatchPredictInputConfig)

Required. The input configuration for batch prediction.

outputConfig

object(BatchPredictOutputConfig)

Required. The configuration specifying where the output predictions should be written.

params

map (key: string, value: string)

Can include the following keys:

score_threshold

(float) A value from 0.0 to 1.0. When the model detects objects in video frames, it only produces bounding boxes that have at least this confidence score. The default is 0.5.

max_bounding_box_count

(int64) The maximum number of bounding boxes to return per video frame. The default is 100. The requested value might be limited by the server.

min_bounding_box_size

(float) Only return bounding boxes where the shortest edge of the bounding box is at least min_bounding_box_size long, relative to the size of the video frame. The value is a fraction of the frame size, from 0 to 1. For example, if you set min_bounding_box_size to 0.2, the model only returns bounding boxes whose shortest edge is at least 20 percent of the length of the corresponding edge of the video frame. The default is 0.

See Annotating videos for more details.
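As an illustration, the request body above can be assembled with Python's standard library. The model name, bucket paths, and parameter values below are placeholders, and the GcsSource/GcsDestination field names (inputUris, outputUriPrefix) come from the wider AutoML v1beta1 API rather than this page:

```python
import json

# Hypothetical model resource name; substitute your own project, location, and model ID.
model_name = "projects/my-project/locations/us-central1/models/VOT1234567890"
url = f"https://automl.googleapis.com/v1beta1/{model_name}:batchPredict"

# Every value in "params" must be a string, even numeric thresholds.
request_body = {
    "inputConfig": {
        "gcsSource": {"inputUris": ["gs://my-bucket/batch.csv"]}
    },
    "outputConfig": {
        "gcsDestination": {"outputUriPrefix": "gs://my-bucket/output/"}
    },
    "params": {
        "score_threshold": "0.8",
        "max_bounding_box_count": "50",
        "min_bounding_box_size": "0.1",
    },
}
print(json.dumps(request_body, indent=2))
```

The serialized body is what you would POST to the URL above with an OAuth bearer token carrying the cloud-platform scope.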

Response body

If successful, the response body contains an instance of Operation.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

BatchPredictInputConfig

Input configuration for models.batchPredict action. The input is one or more CSV files stored in Google Cloud Storage where the CSV files are in the following format:

GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
  • GCS_FILE_PATH identifies the Google Cloud Storage path to a video up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.

  • TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and end has to be after the start. Both are measured in seconds from the beginning of the video.

Three sample rows:

gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf

See Annotating videos for more information.
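For example, rows in this format can be generated and sanity-checked with a short Python sketch. The validation below simply mirrors the format rules stated above (a gs:// path, a supported extension, an end time after the start time) and is not an official validator:

```python
import csv
import io

SUPPORTED_EXTENSIONS = (".mov", ".mpeg4", ".mp4", ".avi")

def make_row(gcs_path, start, end):
    """Build one input CSV row, checking it against the documented format."""
    if not gcs_path.startswith("gs://"):
        raise ValueError("GCS_FILE_PATH must be a gs:// URI")
    if not gcs_path.lower().endswith(SUPPORTED_EXTENSIONS):
        raise ValueError("unsupported video extension")
    # float("inf") handles the open-ended "inf" end time.
    if float(end) <= float(start):
        raise ValueError("TIME_SEGMENT_END must be after TIME_SEGMENT_START")
    return [gcs_path, str(start), str(end)]

rows = [
    make_row("gs://folder/video1.mp4", 10, 40),
    make_row("gs://folder/video1.mp4", 20, 60),
    make_row("gs://folder/vid2.mov", 0, "inf"),
]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```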

JSON representation
{
  "gcsSource": {
    object(GcsSource)
  }
}
Fields
gcsSource

object(GcsSource)

The Google Cloud Storage location for the input content.

BatchPredictOutputConfig

Output configuration for the models.batchPredict action.

AutoML Video Intelligence creates a directory specified in the gcsDestination. The name of the directory is "prediction-<model-display-name>-<timestamp-of-prediction-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format.

AutoML Video Intelligence creates a file named videoObjectTracking.csv in the new directory, plus one JSON file for each object tracking request (that is, for each row in the input CSV file).

The format of the videoObjectTracking.csv file is as follows:

GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
  • The GCS_FILE_PATH, TIME_SEGMENT_START, TIME_SEGMENT_END match the same fields from the input CSV file.

  • JSON_FILE_NAME is the name of the JSON file in the output directory that contains prediction responses for each video time segment. The JSON files are named video_object_tracking_1.json, video_object_tracking_2.json, and so on up to the number of object tracking requests. These files include the AnnotationPayload in JSON format.

  • STATUS contains "OK" if the prediction completed successfully; otherwise contains error information. If STATUS is not "OK" then the JSON file for that prediction might be empty or the file might not exist.

Each JSON file where STATUS is "OK" contains a list of AnnotationPayload protos in JSON format, which are the predictions for the video time segment that the file is assigned to in videoObjectTracking.csv. All AnnotationPayload protos have a videoObjectTracking field and are sorted by the videoObjectTracking.type field. The types returned are determined by the object_tracking_types parameter of BatchPredictRequest.params.
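A minimal sketch of reading a videoObjectTracking.csv index and separating successful segments from failed ones; the index contents below are illustrative values in the documented five-column format, not real output:

```python
import csv
import io

# Illustrative index contents in the documented format:
# GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
index_csv = """\
gs://folder/video1.mp4,10,40,video_object_tracking_1.json,OK
gs://folder/video1.mp4,20,60,video_object_tracking_2.json,OK
gs://folder/vid2.mov,0,inf,video_object_tracking_3.json,ERROR
"""

succeeded, failed = [], []
for path, start, end, json_name, status in csv.reader(io.StringIO(index_csv)):
    # Only rows with STATUS "OK" are guaranteed a non-empty JSON prediction file.
    (succeeded if status == "OK" else failed).append(json_name)

print(succeeded, failed)
```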

JSON representation
{

  // Union field destination can be only one of the following:
  "gcsDestination": {
    object(GcsDestination)
  },
  "bigqueryDestination": {
    object(BigQueryDestination)
  }
  // End of list of possible types for union field destination.
}
Fields
Union field destination. Required. The destination of the output. destination can be only one of the following:
gcsDestination

object(GcsDestination)

The Google Cloud Storage location of the directory where the output is written.

bigqueryDestination

object(BigQueryDestination)

The BigQuery location where the output is written.

BigQueryDestination

The BigQuery location for the output content.

JSON representation
{
  "outputUri": string
}
Fields
outputUri

string

Required. BigQuery URI to a project, up to 2000 characters long. For example: bq://projectId
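As a sketch, the documented constraints on outputUri (the bq:// scheme and the 2,000-character limit) can be checked before sending a request; this is a client-side convenience check, not part of the API:

```python
def is_valid_output_uri(uri: str) -> bool:
    """Check the documented outputUri constraints: bq:// scheme, at most 2000 characters."""
    return uri.startswith("bq://") and len(uri) > len("bq://") and len(uri) <= 2000

print(is_valid_output_uri("bq://projectId"))
```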