Method: videos.annotate

Performs asynchronous video annotation. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains AnnotateVideoProgress (progress). Operation.response contains AnnotateVideoResponse (results).

HTTP request

POST https://videointelligence.googleapis.com/v1p3beta1/videos:annotate

The URL uses gRPC Transcoding syntax.

Request body

The request body contains data with the following structure:

JSON representation
{
  "inputUri": string,
  "inputContent": string,
  "features": [
    enum (Feature)
  ],
  "videoContext": {
    object (VideoContext)
  },
  "outputUri": string,
  "locationId": string
}
Fields
inputUri

string

Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as inputContent. If set, inputContent should be unset.

inputContent

string (bytes format)

The video data bytes. If unset, the input video(s) should be specified via inputUri. If set, inputUri should be unset.

A base64-encoded string.

features[]

enum (Feature)

Required. Requested video annotation features.

videoContext

object (VideoContext)

Additional video context and/or feature-specific parameters.

outputUri

string

Optional. Location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.

locationId

string

Optional. Cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.
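
As an illustration, a minimal request body for this method might look like the following; the bucket, object, and output names are placeholders, not values defined by this reference:

{
  "inputUri": "gs://example-bucket/example-video.mp4",
  "features": [
    "LABEL_DETECTION",
    "SHOT_CHANGE_DETECTION"
  ],
  "outputUri": "gs://example-bucket/output/annotations.json",
  "locationId": "us-east1"
}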

Response body

If successful, the response body contains data with the following structure:

This resource represents a long-running operation that is the result of a network API call.

JSON representation
{
  "name": string,
  "metadata": {
    "@type": string,
    field1: ...,
    ...
  },
  "done": boolean,

  // Union field result can be only one of the following:
  "error": {
    object (Status)
  },
  "response": {
    "@type": string,
    field1: ...,
    ...
  }
  // End of list of possible types for union field result.
}
Fields
name

string

The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.

metadata

object

Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

done

boolean

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

Union field result. The operation result, which can be either an error or a valid response. If done == false, neither error nor response is set. If done == true, exactly one of error or response is set. result can be only one of the following:
error

object (Status)

The error result of the operation in case of failure or cancellation.

response

object

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.
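
For illustration, a completed operation returned by videos.annotate might look like the following sketch. The operation name is a placeholder following the operations/{unique_id} convention, the @type URIs follow the AnnotateVideoProgress/AnnotateVideoResponse pattern described at the top of this page, and the ellipses stand for fields omitted here:

{
  "name": "projects/PROJECT_NUMBER/locations/us-east1/operations/1234567890",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.videointelligence.v1p3beta1.AnnotateVideoProgress",
    ...
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.videointelligence.v1p3beta1.AnnotateVideoResponse",
    ...
  }
}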

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

Feature

Video annotation feature.

Enums
FEATURE_UNSPECIFIED Unspecified.
LABEL_DETECTION Label detection. Detect objects, such as dog or flower.
SHOT_CHANGE_DETECTION Shot change detection.
EXPLICIT_CONTENT_DETECTION Explicit content detection.
FACE_DETECTION Human face detection.
SPEECH_TRANSCRIPTION Speech transcription.
TEXT_DETECTION OCR text detection and tracking.
OBJECT_TRACKING Object detection and tracking.
LOGO_RECOGNITION Logo detection, tracking, and recognition.
PERSON_DETECTION Person detection.
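
For example, several of these features can be requested in a single call by listing the enum names in the features array of the request body (illustrative snippet):

"features": [
  "LABEL_DETECTION",
  "EXPLICIT_CONTENT_DETECTION",
  "TEXT_DETECTION"
]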

VideoContext

Video context and/or feature-specific parameters.

JSON representation
{
  "segments": [
    {
      object (VideoSegment)
    }
  ],
  "labelDetectionConfig": {
    object (LabelDetectionConfig)
  },
  "shotChangeDetectionConfig": {
    object (ShotChangeDetectionConfig)
  },
  "explicitContentDetectionConfig": {
    object (ExplicitContentDetectionConfig)
  },
  "faceDetectionConfig": {
    object (FaceDetectionConfig)
  },
  "speechTranscriptionConfig": {
    object (SpeechTranscriptionConfig)
  },
  "textDetectionConfig": {
    object (TextDetectionConfig)
  },
  "personDetectionConfig": {
    object (PersonDetectionConfig)
  },
  "objectTrackingConfig": {
    object (ObjectTrackingConfig)
  }
}
Fields
segments[]

object (VideoSegment)

Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.

labelDetectionConfig

object (LabelDetectionConfig)

Config for LABEL_DETECTION.

shotChangeDetectionConfig

object (ShotChangeDetectionConfig)

Config for SHOT_CHANGE_DETECTION.

explicitContentDetectionConfig

object (ExplicitContentDetectionConfig)

Config for EXPLICIT_CONTENT_DETECTION.

faceDetectionConfig

object (FaceDetectionConfig)

Config for FACE_DETECTION.

speechTranscriptionConfig

object (SpeechTranscriptionConfig)

Config for SPEECH_TRANSCRIPTION.

textDetectionConfig

object (TextDetectionConfig)

Config for TEXT_DETECTION.

personDetectionConfig

object (PersonDetectionConfig)

Config for PERSON_DETECTION.

objectTrackingConfig

object (ObjectTrackingConfig)

Config for OBJECT_TRACKING.
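
As a sketch, a videoContext that restricts annotation to a single segment and tunes label detection might look like the following; the offsets and settings are illustrative:

{
  "segments": [
    {
      "startTimeOffset": "0s",
      "endTimeOffset": "30s"
    }
  ],
  "labelDetectionConfig": {
    "labelDetectionMode": "SHOT_AND_FRAME_MODE",
    "stationaryCamera": false
  }
}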

VideoSegment

Video segment.

JSON representation
{
  "startTimeOffset": string,
  "endTimeOffset": string
}
Fields
startTimeOffset

string (Duration format)

Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

endTimeOffset

string (Duration format)

Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".
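
For example, a segment covering the span from 10 seconds to 2 minutes 30.5 seconds into the video would be written as:

{
  "startTimeOffset": "10s",
  "endTimeOffset": "150.5s"
}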

LabelDetectionConfig

Config for LABEL_DETECTION.

JSON representation
{
  "labelDetectionMode": enum (LabelDetectionMode),
  "stationaryCamera": boolean,
  "model": string,
  "frameConfidenceThreshold": number,
  "videoConfidenceThreshold": number
}
Fields
labelDetectionMode

enum (LabelDetectionMode)

What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE.

stationaryCamera

boolean

Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled.

model

string

Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

frameConfidenceThreshold

number

The confidence threshold used to filter labels from frame-level detection. If not set, it defaults to 0.4. The valid range for this threshold is [0.1, 0.9]; any value outside this range is clipped. Note: for best results, use the default threshold. The default may be updated when a new model is released.

videoConfidenceThreshold

number

The confidence threshold used to filter labels from video-level and shot-level detections. If not set, it defaults to 0.3. The valid range for this threshold is [0.1, 0.9]; any value outside this range is clipped. Note: for best results, use the default threshold. The default may be updated when a new model is released.
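
An illustrative labelDetectionConfig that uses the documented defaults for both thresholds might look like this:

{
  "labelDetectionMode": "SHOT_AND_FRAME_MODE",
  "stationaryCamera": true,
  "model": "builtin/stable",
  "frameConfidenceThreshold": 0.4,
  "videoConfidenceThreshold": 0.3
}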

LabelDetectionMode

Label detection mode.

Enums
LABEL_DETECTION_MODE_UNSPECIFIED Unspecified.
SHOT_MODE Detect shot-level labels.
FRAME_MODE Detect frame-level labels.
SHOT_AND_FRAME_MODE Detect both shot-level and frame-level labels.

ShotChangeDetectionConfig

Config for SHOT_CHANGE_DETECTION.

JSON representation
{
  "model": string
}
Fields
model

string

Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

ExplicitContentDetectionConfig

Config for EXPLICIT_CONTENT_DETECTION.

JSON representation
{
  "model": string
}
Fields
model

string

Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

FaceDetectionConfig

Config for FACE_DETECTION.

JSON representation
{
  "model": string,
  "includeBoundingBoxes": boolean,
  "includeAttributes": boolean
}
Fields
model

string

Model to use for face detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

includeBoundingBoxes

boolean

Whether bounding boxes are included in the face annotation output.

includeAttributes

boolean

Whether to enable face attributes detection, such as glasses, dark_glasses, mouth_open etc. Ignored if 'includeBoundingBoxes' is false.
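
An illustrative faceDetectionConfig that requests bounding boxes and face attributes might look like this:

{
  "model": "builtin/stable",
  "includeBoundingBoxes": true,
  "includeAttributes": true
}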

SpeechTranscriptionConfig

Config for SPEECH_TRANSCRIPTION.

JSON representation
{
  "languageCode": string,
  "maxAlternatives": integer,
  "filterProfanity": boolean,
  "speechContexts": [
    {
      object (SpeechContext)
    }
  ],
  "enableAutomaticPunctuation": boolean,
  "audioTracks": [
    integer
  ],
  "enableSpeakerDiarization": boolean,
  "diarizationSpeakerCount": integer,
  "enableWordConfidence": boolean
}
Fields
languageCode

string

Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.

maxAlternatives

integer

Optional. Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechTranscription. The server may return fewer than maxAlternatives. Valid values are 0-30. A value of 0 or 1, or omitting the field, returns a maximum of one.

filterProfanity

boolean

Optional. If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.

speechContexts[]

object (SpeechContext)

Optional. A means to provide context to assist the speech recognition.

enableAutomaticPunctuation

boolean

Optional. If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages; setting it for requests in other languages has no effect. The default value of 'false' does not add punctuation to result hypotheses. Note: this is currently offered as an experimental service, complimentary to all users. In the future it may be available exclusively as a premium feature.

audioTracks[]

integer

Optional. For file formats that support multiple audio tracks, such as MXF or MKV, specify up to two tracks to transcribe. Default: track 0.

enableSpeakerDiarization

boolean

Optional. If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result, using a speakerTag provided in the WordInfo. Note: when this is true, all words from the beginning of the audio are sent for the top alternative in every consecutive response, so that speaker tags improve as the model learns to identify the speakers in the conversation over time.

diarizationSpeakerCount

integer

Optional. If set, specifies the estimated number of speakers in the conversation. If not set, defaults to '2'. Ignored unless enableSpeakerDiarization is set to true.

enableWordConfidence

boolean

Optional. If true, the top result includes a list of words and the confidence for those words. If false, no word-level confidence information is returned. The default is false.
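
As a sketch, a speechTranscriptionConfig for English audio with punctuation, word confidence, and diarization for two speakers might look like the following; the values are illustrative:

{
  "languageCode": "en-US",
  "enableAutomaticPunctuation": true,
  "enableSpeakerDiarization": true,
  "diarizationSpeakerCount": 2,
  "enableWordConfidence": true,
  "audioTracks": [
    0
  ]
}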

SpeechContext

Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

JSON representation
{
  "phrases": [
    string
  ]
}
Fields
phrases[]

string

Optional. A list of strings containing word and phrase "hints" so that the speech recognizer is more likely to recognize them. This can be used to improve accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. It can also be used to add words to the recognizer's vocabulary. See usage limits.
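
For example, hints are supplied through the speechContexts field of SpeechTranscriptionConfig; the phrases below are illustrative:

"speechContexts": [
  {
    "phrases": [
      "Cloud Video Intelligence",
      "shot change detection"
    ]
  }
]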

TextDetectionConfig

Config for TEXT_DETECTION.

JSON representation
{
  "languageHints": [
    string
  ],
  "model": string
}
Fields
languageHints[]

string

A language hint can be specified if the language to be detected is known a priori; it can increase the accuracy of the detection. The hint must be a language code in BCP-47 format.

Automatic language detection is performed if no hint is provided.

model

string

Model to use for text detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
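
An illustrative textDetectionConfig with two language hints might look like this:

{
  "languageHints": [
    "en",
    "fr"
  ],
  "model": "builtin/stable"
}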

PersonDetectionConfig

Config for PERSON_DETECTION.

JSON representation
{
  "includeBoundingBoxes": boolean,
  "includePoseLandmarks": boolean,
  "includeAttributes": boolean
}
Fields
includeBoundingBoxes

boolean

Whether bounding boxes are included in the person detection annotation output.

includePoseLandmarks

boolean

Whether to enable pose landmarks detection. Ignored if 'includeBoundingBoxes' is false.

includeAttributes

boolean

Whether to enable person attributes detection, such as clothing color (black, blue, etc.), type (coat, dress, etc.), or pattern (plain, floral, etc.). Ignored if 'includeBoundingBoxes' is false.
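
An illustrative personDetectionConfig that requests bounding boxes and pose landmarks but not attributes might look like this:

{
  "includeBoundingBoxes": true,
  "includePoseLandmarks": true,
  "includeAttributes": false
}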

ObjectTrackingConfig

Config for OBJECT_TRACKING.

JSON representation
{
  "model": string
}
Fields
model

string

Model to use for object tracking. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

Status

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details.

You can find out more about this error model and how to work with it in the API Design Guide.

JSON representation
{
  "code": integer,
  "message": string,
  "details": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
Fields
code

integer

The status code, which should be an enum value of google.rpc.Code.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.
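
For illustration, the error field of a failed operation might carry a Status such as the following; code 3 corresponds to google.rpc.Code.INVALID_ARGUMENT, and the message text is a placeholder:

{
  "code": 3,
  "message": "Request contains an invalid argument.",
  "details": []
}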