REST Resource: videos

Resource: AnnotateVideoRequest

Video annotation request.

JSON representation
{
  "inputUri": string,
  "inputContent": string,
  "features": [
    enum(Feature)
  ],
  "videoContext": {
    object(VideoContext)
  },
  "outputUri": string,
  "locationId": string
}
Fields
inputUri

string

Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as inputContent. If set, inputContent should be unset.

inputContent

string

The video data bytes, encoded as base64. If unset, the input video(s) should be specified via inputUri. If set, inputUri should be unset.

features[]

enum(Feature)

Requested video annotation features.

videoContext

object(VideoContext)

Additional video context and/or feature-specific parameters.

outputUri

string

Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.

locationId

string

Optional cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.
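The fields above can be assembled into a request body in any language. A minimal sketch in Python (the bucket, object names, and helper function are hypothetical, not part of the API):

```python
import base64
import json

def build_annotate_request(input_uri=None, video_bytes=None,
                           features=("LABEL_DETECTION",),
                           output_uri=None, location_id=None):
    """Build an AnnotateVideoRequest body as a JSON-serializable dict.

    Exactly one of input_uri or video_bytes must be provided, because
    inputUri and inputContent are mutually exclusive.
    """
    if (input_uri is None) == (video_bytes is None):
        raise ValueError("set exactly one of input_uri or video_bytes")

    request = {"features": list(features)}
    if input_uri is not None:
        request["inputUri"] = input_uri  # e.g. gs://bucket-id/object-id
    else:
        # inputContent carries the raw video bytes, base64-encoded.
        request["inputContent"] = base64.b64encode(video_bytes).decode("ascii")
    if output_uri is not None:
        request["outputUri"] = output_uri
    if location_id is not None:
        request["locationId"] = location_id
    return request

# Example: annotate every .mp4 under a (hypothetical) bucket prefix,
# using the '*' wildcard in object-id.
body = build_annotate_request(
    input_uri="gs://my-bucket/videos/*.mp4",
    features=["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
    location_id="us-east1",
)
print(json.dumps(body, indent=2))
```

Note that the helper enforces the documented constraint that inputUri and inputContent are mutually exclusive.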

Feature

Video annotation feature.

Enums
FEATURE_UNSPECIFIED Unspecified.
LABEL_DETECTION Label detection. Detect objects, such as dog or flower.
SHOT_CHANGE_DETECTION Shot change detection.
SAFE_SEARCH_DETECTION Safe search detection.

VideoContext

Video context and/or feature-specific parameters.

JSON representation
{
  "segments": [
    {
      object(VideoSegment)
    }
  ],
  "labelDetectionMode": enum(LabelDetectionMode),
  "stationaryCamera": boolean,
  "labelDetectionModel": string,
  "shotChangeDetectionModel": string,
  "safeSearchDetectionModel": string
}
Fields
segments[]

object(VideoSegment)

Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.

labelDetectionMode

enum(LabelDetectionMode)

If label detection has been requested, which labels should be detected in addition to video-level and segment-level labels. If unspecified, defaults to SHOT_MODE.

stationaryCamera

boolean

Whether the video has been shot from a stationary (i.e., non-moving) camera. When set to true, this can improve detection accuracy for moving objects.

labelDetectionModel

string

Model to use for label detection. Supported values: "latest" and "stable" (the default).

shotChangeDetectionModel

string

Model to use for shot change detection. Supported values: "latest" and "stable" (the default).

safeSearchDetectionModel

string

Model to use for safe search detection. Supported values: "latest" and "stable" (the default).
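Putting the fields above together, a VideoContext payload might look like the following sketch. VideoSegment is not defined in this section, so the startTimeOffset/endTimeOffset field names and microsecond units below are assumptions for illustration only:

```python
# A sketch of a VideoContext payload. VideoSegment's exact fields are
# defined elsewhere; startTimeOffset/endTimeOffset (in microseconds)
# are assumed here for illustration.
video_context = {
    "segments": [
        {"startTimeOffset": 0, "endTimeOffset": 5000000},          # first 5 s
        {"startTimeOffset": 10000000, "endTimeOffset": 15000000},  # 10 s to 15 s
    ],
    # Request both shot-level and frame-level labels.
    "labelDetectionMode": "SHOT_AND_FRAME_MODE",
    # The camera does not move, which can help moving-object detection.
    "stationaryCamera": True,
    # Per-feature model selection; "stable" is the default.
    "labelDetectionModel": "latest",
    "shotChangeDetectionModel": "stable",
}
```

The two segments overlap with neither each other nor the whole video, which is allowed: segments may overlap and need not be contiguous or span the entire input.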

LabelDetectionMode

Label detection mode.

Enums
LABEL_DETECTION_MODE_UNSPECIFIED Unspecified.
SHOT_MODE Detect shot-level labels.
FRAME_MODE Detect frame-level labels.
SHOT_AND_FRAME_MODE Detect both shot-level and frame-level labels.

Methods

annotate

Performs asynchronous video annotation. Progress and results can be retrieved through the google.longrunning.Operations interface.
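The annotate method is invoked with an HTTP POST carrying the AnnotateVideoRequest as JSON. A minimal stdlib sketch, assuming the videos:annotate endpoint path and a placeholder OAuth bearer token (check the official documentation for the current service URL and API version):

```python
import json
import urllib.request

# Assumed endpoint; verify the API version against the current docs.
ENDPOINT = "https://videointelligence.googleapis.com/v1beta1/videos:annotate"

def make_annotate_http_request(body, access_token):
    """Prepare (but do not send) an HTTP POST for videos:annotate.

    Pass the result to urllib.request.urlopen() to actually start the
    asynchronous annotation operation.
    """
    data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = make_annotate_http_request(
    {"inputUri": "gs://my-bucket/video.mp4",
     "features": ["SHOT_CHANGE_DETECTION"]},
    access_token="ya29.example-token",  # placeholder, not a real token
)
```

Because annotation is asynchronous, the response to this call is a long-running operation to poll, not the annotation results themselves.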