Method: videos.annotate

Performs asynchronous video annotation. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains AnnotateVideoProgress (progress). Operation.response contains AnnotateVideoResponse (results).

HTTP request

POST https://videointelligence.googleapis.com/v1/videos:annotate

The URL uses gRPC Transcoding syntax.

Request body

The request body contains data with the following structure:

JSON representation
{
  "inputUri": string,
  "inputContent": string,
  "features": [
    enum(Feature)
  ],
  "videoContext": {
    object(VideoContext)
  },
  "outputUri": string,
  "locationId": string
}
Fields
inputUri

string

Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as inputContent. If set, inputContent should be unset.

inputContent

string (bytes format)

The video data bytes. If unset, the input video(s) should be specified via inputUri. If set, inputUri should be unset.

A base64-encoded string.
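
If the video is sent inline instead of via Cloud Storage, the bytes must be base64-encoded. A minimal sketch in Python (the file name is hypothetical; remember that inputUri must be unset when inputContent is set):

import base64

with open("sample.mp4", "rb") as f:  # hypothetical local file
    video_bytes = f.read()

body = {
    "inputContent": base64.b64encode(video_bytes).decode("utf-8"),
    "features": ["LABEL_DETECTION"],
}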

features[]

enum(Feature)

Requested video annotation features.

videoContext

object(VideoContext)

Additional video context and/or feature-specific parameters.

outputUri

string

Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.

locationId

string

Optional cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.
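
Putting these fields together, here is a minimal sketch of the call in Python using the requests library. The bucket and object names are placeholders, and the access token is assumed to come from your own OAuth 2.0 flow (for example, gcloud auth print-access-token); none of these placeholders are part of the API itself.

import requests

ACCESS_TOKEN = "..."  # placeholder; obtain a real token via your OAuth flow

body = {
    "inputUri": "gs://my-bucket/my-video.mp4",  # hypothetical bucket/object
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
    "outputUri": "gs://my-bucket/annotations.json",  # optional
    "locationId": "us-east1",  # optional
}

resp = requests.post(
    "https://videointelligence.googleapis.com/v1/videos:annotate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()
operation = resp.json()  # a google.longrunning Operation
print(operation["name"])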

Response body

If successful, the response body contains an instance of Operation.
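
Assuming the standard google.longrunning REST mapping (operations.get served at GET /v1/{operation.name}), the returned Operation can be polled until it completes. This is a sketch, not an endpoint documented on this page; consult the Operations reference for the exact path.

import time
import requests

def wait_for_operation(name, access_token, poll_seconds=10):
    # Poll until done is true. While running, metadata carries
    # AnnotateVideoProgress; on success, response carries
    # AnnotateVideoResponse.
    url = f"https://videointelligence.googleapis.com/v1/{name}"
    headers = {"Authorization": f"Bearer {access_token}"}
    while True:
        op = requests.get(url, headers=headers).json()
        if op.get("done"):
            return op
        time.sleep(poll_seconds)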

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

Feature

Video annotation feature.

Enums
FEATURE_UNSPECIFIED Unspecified.
LABEL_DETECTION Label detection. Detect objects, such as dog or flower.
SHOT_CHANGE_DETECTION Shot change detection.
EXPLICIT_CONTENT_DETECTION Explicit content detection.

VideoContext

Video context and/or feature-specific parameters.

JSON representation
{
  "segments": [
    {
      object(VideoSegment)
    }
  ],
  "labelDetectionConfig": {
    object(LabelDetectionConfig)
  },
  "shotChangeDetectionConfig": {
    object(ShotChangeDetectionConfig)
  },
  "explicitContentDetectionConfig": {
    object(ExplicitContentDetectionConfig)
  }
}
Fields
segments[]

object(VideoSegment)

Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.

labelDetectionConfig

object(LabelDetectionConfig)

Config for LABEL_DETECTION.

shotChangeDetectionConfig

object(ShotChangeDetectionConfig)

Config for SHOT_CHANGE_DETECTION.

explicitContentDetectionConfig

object(ExplicitContentDetectionConfig)

Config for EXPLICIT_CONTENT_DETECTION.
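
For illustration, a videoContext that annotates two (possibly overlapping) segments and selects a model per feature might look like the following sketch; all values are examples only:

video_context = {
    "segments": [
        {"startTimeOffset": "0s", "endTimeOffset": "30.5s"},
        {"startTimeOffset": "60s", "endTimeOffset": "90s"},
    ],
    "shotChangeDetectionConfig": {"model": "builtin/latest"},
    "explicitContentDetectionConfig": {"model": "builtin/stable"},
}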

VideoSegment

Video segment.

JSON representation
{
  "startTimeOffset": string,
  "endTimeOffset": string
}
Fields
startTimeOffset

string (Duration format)

Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

endTimeOffset

string (Duration format)

Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".
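
A small helper sketch for producing Duration-format strings from numeric seconds (trimming trailing zeros is a convenience, not a requirement of the format):

def to_duration(seconds):
    # Keep up to nine fractional digits, then strip trailing zeros,
    # e.g. 3.5 -> "3.5s", 12.0 -> "12s".
    s = f"{seconds:.9f}".rstrip("0").rstrip(".")
    return s + "s"

segment = {
    "startTimeOffset": to_duration(12.0),   # "12s"
    "endTimeOffset": to_duration(15.25),    # "15.25s"
}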

LabelDetectionConfig

Config for LABEL_DETECTION.

JSON representation
{
  "labelDetectionMode": enum(LabelDetectionMode),
  "stationaryCamera": boolean,
  "model": string
}
Fields
labelDetectionMode

enum(LabelDetectionMode)

What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE.

stationaryCamera

boolean

Whether the video has been shot from a stationary (i.e., non-moving) camera. When set to true, this may improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled.

model

string

Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

LabelDetectionMode

Label detection mode.

Enums
LABEL_DETECTION_MODE_UNSPECIFIED Unspecified.
SHOT_MODE Detect shot-level labels.
FRAME_MODE Detect frame-level labels.
SHOT_AND_FRAME_MODE Detect both shot-level and frame-level labels.
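
Because stationaryCamera is intended to be paired with SHOT_AND_FRAME_MODE, a labelDetectionConfig for fixed-camera footage might look like this sketch:

label_detection_config = {
    # Frame-level labels are only produced in FRAME_MODE or SHOT_AND_FRAME_MODE.
    "labelDetectionMode": "SHOT_AND_FRAME_MODE",
    # Hint that the camera does not move; may improve accuracy for moving objects.
    "stationaryCamera": True,
    "model": "builtin/stable",  # the default; "builtin/latest" is the alternative
}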

ShotChangeDetectionConfig

Config for SHOT_CHANGE_DETECTION.

JSON representation
{
  "model": string
}
Fields
model

string

Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

ExplicitContentDetectionConfig

Config for EXPLICIT_CONTENT_DETECTION.

JSON representation
{
  "model": string
}
Fields
model

string

Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
