Method: projects.locations.models.predict

Not available for AutoML Video Intelligence.

HTTP request

POST https://automl.googleapis.com/v1beta1/{name}:predict

Path parameters

Parameters
name

string

Name of the model requested to serve the prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict

Request body

The request body contains data with the following structure:

JSON representation
{
  "payload": {
    object(ExamplePayload)
  },
  "params": {
    string: string,
    ...
  }
}
Fields
payload

object(ExamplePayload)

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params

map (key: string, value: string)

Additional domain-specific parameters. Any string must be no longer than 25,000 characters.
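
For illustration, the documented endpoint and request body can be exercised with a plain HTTP call. The sketch below is Python using the requests library; the model name, the access token, and the payload shape (an image with base64-encoded bytes) are assumptions for this example and are not defined in this reference.

import requests

# Hypothetical values; substitute your own project, location, and model ID.
MODEL_NAME = "projects/my-project/locations/us-central1/models/ICN0000000000"
ACCESS_TOKEN = "..."  # e.g. the output of `gcloud auth print-access-token`

# Request body as documented above: a payload matching the model's problem
# type, plus optional domain-specific params (string keys and values).
# The payload shape below assumes an image classification model.
body = {
    "payload": {
        "image": {"imageBytes": "<base64-encoded image bytes>"}
    },
    "params": {}
}

response = requests.post(
    f"https://automl.googleapis.com/v1beta1/{MODEL_NAME}:predict",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=body,
)
response.raise_for_status()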

Response body

If successful, the response body contains data with the following structure:

Response message for PredictionService.Predict.

JSON representation
{
  "payload": [
    {
      object(AnnotationPayload)
    }
  ],
  "metadata": {
    string: string,
    ...
  }
}
Fields
payload[]

object(AnnotationPayload)

Prediction result.

metadata

map (key: string, value: string)

Additional domain-specific prediction response metadata.
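
As a sketch of consuming this structure, assuming response is the successful HTTP response from the request example above:

prediction = response.json()

# `payload` is a list of AnnotationPayload objects (documented below).
for annotation in prediction.get("payload", []):
    print(annotation.get("annotationSpecId"), annotation.get("displayName"))

# `metadata` is a flat string-to-string map of domain-specific response data.
for key, value in prediction.get("metadata", {}).items():
    print(f"metadata: {key}={value}")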

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
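
One way to obtain a token carrying this scope is through Application Default Credentials. The sketch below assumes the google-auth Python library is installed and credentials are configured in the environment; it is not the only way to authenticate.

import google.auth
from google.auth.transport.requests import Request

# Application Default Credentials limited to the documented scope.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())  # populates credentials.token

# credentials.token can now be sent as the Bearer token in the
# Authorization header of the predict request shown earlier.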

ExamplePayload

Example data used for training or prediction.

AnnotationPayload

Contains annotation information that is relevant to AutoML.

JSON representation
{
  "annotationSpecId": string,
  "displayName": string,

  // Union field detail can be only one of the following:
  "classification": {
    object(ClassificationAnnotation)
  },
  "videoClassification": {
    object(VideoClassificationAnnotation)
  }
  // End of list of possible types for union field detail.
}
Fields
annotationSpecId

string

Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.

displayName

string

Output only. The value of displayName at the time the model was trained. Because this field returns the value from model training time, different models trained using the same dataset could return different values, since the model owner could update the displayName between any two trainings.

Union field detail. Output only. Additional information about the annotation specific to the AutoML domain. detail can be only one of the following:
classification

object(ClassificationAnnotation)

Annotation details for classification predictions.

videoClassification

object(VideoClassificationAnnotation)

Annotation details for video classification. Returned for Video Classification predictions.
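
Because detail is a union field, exactly one of the keys above appears on a given AnnotationPayload. A minimal sketch of dispatching on it, with field access following the JSON representation above:

def describe_annotation(annotation):
    """Summarize an AnnotationPayload dict, dispatching on whichever
    union field `detail` is present."""
    label = annotation.get("displayName", "")
    if "classification" in annotation:
        score = annotation["classification"]["score"]
        return f"{label}: classification score {score:.3f}"
    if "videoClassification" in annotation:
        video = annotation["videoClassification"]
        return f"{label}: video classification of type {video['type']}"
    return f"{label}: no recognized annotation detail"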

ClassificationAnnotation

Contains annotation details specific to classification.

JSON representation
{
  "score": number
}
Fields
score

number

Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative or 1 for positive.
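
Since score is a confidence in the range 0.0 to 1.0, callers typically apply their own cutoff when consuming predictions. A minimal sketch, with the threshold value chosen arbitrarily for illustration:

SCORE_THRESHOLD = 0.5  # arbitrary cutoff for this illustration

def confident_labels(annotations, threshold=SCORE_THRESHOLD):
    """Return displayNames of classification annotations whose score
    meets the threshold."""
    return [
        a["displayName"]
        for a in annotations
        if "classification" in a and a["classification"]["score"] >= threshold
    ]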

VideoClassificationAnnotation

Contains annotation details specific to video classification.

JSON representation
{
  "type": string,
  "classificationAnnotation": {
    object(ClassificationAnnotation)
  },
  "timeSegment": {
    object(TimeSegment)
  }
}
Fields
type

string

Output only. Expresses the type of video classification. Possible values:

  • segment - Classification is done on a user-specified time segment of a video. The AnnotationSpec is considered present in that time segment if it is present in any part of it. Video ML model evaluations are done only for this type of classification.

  • shot - Shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. WARNING: Model evaluation is not done for this classification type; its quality depends on the training data, but there are no metrics provided to describe that quality.

  • 1s_interval - AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. WARNING: Model evaluation is not done for this classification type; its quality depends on the training data, but there are no metrics provided to describe that quality.

classificationAnnotation

object(ClassificationAnnotation)

Output only. The classification details of this annotation.

timeSegment

object(TimeSegment)

Output only. The time segment of the video to which the annotation applies.
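
A sketch of collecting videoClassification annotations by their type value and reading the associated time segment. Field names follow the JSON representations above; the input is assumed to be the payload list of a prediction response:

from collections import defaultdict

def group_video_annotations(annotations):
    """Group videoClassification annotations by their documented type
    ('segment', 'shot', or '1s_interval')."""
    grouped = defaultdict(list)
    for a in annotations:
        video = a.get("videoClassification")
        if video is None:
            continue
        segment = video.get("timeSegment", {})
        grouped[video["type"]].append({
            "label": a.get("displayName"),
            "score": video.get("classificationAnnotation", {}).get("score"),
            "start": segment.get("startTimeOffset"),
            "end": segment.get("endTimeOffset"),
        })
    return dict(grouped)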

TimeSegment

A time period inside of an example that has a time dimension (e.g. video).

JSON representation
{
  "startTimeOffset": string,
  "endTimeOffset": string
}
Fields
startTimeOffset

string (Duration format)

Start of the time segment (inclusive), represented as the duration since the example start.

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

endTimeOffset

string (Duration format)

End of the time segment (exclusive), represented as the duration since the example start.

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".
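
Since both offsets use the Duration string format described above, converting them to numeric seconds only requires stripping the trailing 's'. A minimal sketch:

def duration_to_seconds(duration):
    """Convert a Duration-format string such as "3.5s" or "42s" into seconds."""
    if not duration.endswith("s"):
        raise ValueError("unexpected Duration format: " + duration)
    return float(duration[:-1])

# Example: length of a time segment given the documented offsets.
print(duration_to_seconds("7s") - duration_to_seconds("3.5s"))  # 3.5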