Performs asynchronous video annotation. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains AnnotateVideoProgress (progress). Operation.response contains AnnotateVideoResponse (results).
HTTP request
POST https://videointelligence.googleapis.com/v1/videos:annotate
The URL uses gRPC Transcoding syntax.
Request body
The request body contains data with the following structure:
JSON representation:

```
{
  "inputUri": string,
  "inputContent": string,
  "features": [
    enum (Feature)
  ],
  "videoContext": {
    object (VideoContext)
  },
  "outputUri": string,
  "locationId": string
}
```

| Field | Description |
|---|---|
| `inputUri` | Input video location. Currently, only Cloud Storage URIs are supported. URIs must be specified in the following format: `gs://bucket-id/object-id`. If unset, the input video should be embedded in the request as `inputContent`. If set, `inputContent` must be unset. |
| `inputContent` | The video data bytes. If unset, the input video(s) should be specified via `inputUri`. If set, `inputUri` must be unset. A base64-encoded string. |
| `features[]` | Required. Requested video annotation features. |
| `videoContext` | Additional video context and/or feature-specific parameters. |
| `outputUri` | Optional. Location where the output (in JSON format) should be stored. Currently, only Cloud Storage URIs are supported. These must be specified in the following format: `gs://bucket-id/object-id`. |
| `locationId` | Optional. Cloud region where annotation should take place. Supported cloud regions are: `us-east1`, `us-west1`, `europe-west1`, and `asia-east1`. If no region is specified, the region will be determined based on video file location. |
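For illustration, a request body that runs label and shot-change detection on a Cloud Storage video might look like the following sketch (the bucket and object names are placeholders):

```json
{
  "inputUri": "gs://example-bucket/input-video.mp4",
  "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
  "outputUri": "gs://example-bucket/output/annotations.json",
  "locationId": "us-east1"
}
```

The call returns immediately with a long-running Operation; the annotation results become available through the google.longrunning.Operations interface once the operation completes.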
Response body
If successful, the response body contains an instance of Operation.
Authorization Scopes
Requires the following OAuth scope:
https://www.googleapis.com/auth/cloud-platform
For more information, see the Authentication Overview.
Feature
Video annotation feature.
| Enum | Description |
|---|---|
| `FEATURE_UNSPECIFIED` | Unspecified. |
| `LABEL_DETECTION` | Label detection. Detect objects, such as dog or flower. |
| `SHOT_CHANGE_DETECTION` | Shot change detection. |
| `EXPLICIT_CONTENT_DETECTION` | Explicit content detection. |
| `FACE_DETECTION` | Human face detection. |
| `SPEECH_TRANSCRIPTION` | Speech transcription. |
| `TEXT_DETECTION` | OCR text detection and tracking. |
| `OBJECT_TRACKING` | Object detection and tracking. |
| `LOGO_RECOGNITION` | Logo detection, tracking, and recognition. |
| `PERSON_DETECTION` | Person detection. |
VideoContext
Video context and/or feature-specific parameters.
JSON representation:

```
{
  "segments": [
    {
      object (VideoSegment)
    }
  ],
  "labelDetectionConfig": {
    object (LabelDetectionConfig)
  },
  "shotChangeDetectionConfig": {
    object (ShotChangeDetectionConfig)
  },
  "explicitContentDetectionConfig": {
    object (ExplicitContentDetectionConfig)
  },
  "faceDetectionConfig": {
    object (FaceDetectionConfig)
  },
  "speechTranscriptionConfig": {
    object (SpeechTranscriptionConfig)
  },
  "textDetectionConfig": {
    object (TextDetectionConfig)
  },
  "personDetectionConfig": {
    object (PersonDetectionConfig)
  },
  "objectTrackingConfig": {
    object (ObjectTrackingConfig)
  }
}
```

| Field | Description |
|---|---|
| `segments[]` | Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment. |
| `labelDetectionConfig` | Config for LABEL_DETECTION. |
| `shotChangeDetectionConfig` | Config for SHOT_CHANGE_DETECTION. |
| `explicitContentDetectionConfig` | Config for EXPLICIT_CONTENT_DETECTION. |
| `faceDetectionConfig` | Config for FACE_DETECTION. |
| `speechTranscriptionConfig` | Config for SPEECH_TRANSCRIPTION. |
| `textDetectionConfig` | Config for TEXT_DETECTION. |
| `personDetectionConfig` | Config for PERSON_DETECTION. |
| `objectTrackingConfig` | Config for OBJECT_TRACKING. |
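As a sketch, a VideoContext that restricts annotation to the first minute of the video and tunes label detection might look like this (the segment bounds and model choice are illustrative):

```json
{
  "segments": [
    { "startTimeOffset": "0s", "endTimeOffset": "60s" }
  ],
  "labelDetectionConfig": {
    "labelDetectionMode": "SHOT_AND_FRAME_MODE",
    "model": "builtin/latest"
  }
}
```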
VideoSegment
Video segment.
JSON representation:

```
{
  "startTimeOffset": string,
  "endTimeOffset": string
}
```

| Field | Description |
|---|---|
| `startTimeOffset` | Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive). A duration in seconds with up to nine fractional digits, terminated by '`s`'. Example: `"3.5s"`. |
| `endTimeOffset` | Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive). A duration in seconds with up to nine fractional digits, terminated by '`s`'. Example: `"3.5s"`. |
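For example, a segment covering 2:00 through 2:30.75 of the video would be expressed with 's'-terminated duration strings:

```json
{
  "startTimeOffset": "120s",
  "endTimeOffset": "150.750s"
}
```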
LabelDetectionConfig
Config for LABEL_DETECTION.
JSON representation:

```
{
  "labelDetectionMode": enum (LabelDetectionMode),
  "stationaryCamera": boolean,
  "model": string,
  "frameConfidenceThreshold": number,
  "videoConfidenceThreshold": number
}
```

| Field | Description |
|---|---|
| `labelDetectionMode` | What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`. |
| `stationaryCamera` | Whether the video has been shot from a stationary (i.e., non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled. |
| `model` | Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
| `frameConfidenceThreshold` | The confidence threshold used to filter labels from frame-level detection. If not set, it defaults to 0.4. The valid range for this threshold is [0.1, 0.9]; any value outside this range will be clipped. Note: for best results, use the default threshold. The default threshold is updated every time a new model is released. |
| `videoConfidenceThreshold` | The confidence threshold used to filter labels from video-level and shot-level detections. If not set, it defaults to 0.3. The valid range for this threshold is [0.1, 0.9]; any value outside this range will be clipped. Note: for best results, use the default threshold. The default threshold is updated every time a new model is released. |
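A sketch of a config for footage from a fixed camera, with an illustrative frame-level threshold (in most cases the defaults are preferable):

```json
{
  "labelDetectionMode": "SHOT_AND_FRAME_MODE",
  "stationaryCamera": true,
  "frameConfidenceThreshold": 0.5
}
```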
LabelDetectionMode
Label detection mode.
| Enum | Description |
|---|---|
| `LABEL_DETECTION_MODE_UNSPECIFIED` | Unspecified. |
| `SHOT_MODE` | Detect shot-level labels. |
| `FRAME_MODE` | Detect frame-level labels. |
| `SHOT_AND_FRAME_MODE` | Detect both shot-level and frame-level labels. |
ShotChangeDetectionConfig
Config for SHOT_CHANGE_DETECTION.
JSON representation:

```
{
  "model": string
}
```

| Field | Description |
|---|---|
| `model` | Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
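For instance, opting in to the newest shot-change model rather than the default:

```json
{ "model": "builtin/latest" }
```

The same one-field pattern applies to ExplicitContentDetectionConfig and ObjectTrackingConfig below.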
ExplicitContentDetectionConfig
Config for EXPLICIT_CONTENT_DETECTION.
JSON representation:

```
{
  "model": string
}
```

| Field | Description |
|---|---|
| `model` | Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
FaceDetectionConfig
Config for FACE_DETECTION.
JSON representation:

```
{
  "model": string,
  "includeBoundingBoxes": boolean,
  "includeAttributes": boolean
}
```

| Field | Description |
|---|---|
| `model` | Model to use for face detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
| `includeBoundingBoxes` | Whether bounding boxes are included in the face annotation output. |
| `includeAttributes` | Whether to enable face attributes detection, such as glasses, dark_glasses, mouth_open, etc. Ignored if 'includeBoundingBoxes' is set to false. |
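Since attribute detection is ignored without bounding boxes, a config that requests face attributes would enable both flags, e.g.:

```json
{
  "model": "builtin/stable",
  "includeBoundingBoxes": true,
  "includeAttributes": true
}
```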
SpeechTranscriptionConfig
Config for SPEECH_TRANSCRIPTION.
JSON representation:

```
{
  "languageCode": string,
  "maxAlternatives": integer,
  "filterProfanity": boolean,
  "speechContexts": [
    {
      object (SpeechContext)
    }
  ],
  "enableAutomaticPunctuation": boolean,
  "audioTracks": [
    integer
  ],
  "enableSpeakerDiarization": boolean,
  "diarizationSpeakerCount": integer,
  "enableWordConfidence": boolean
}
```

| Field | Description |
|---|---|
| `languageCode` | Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes. |
| `maxAlternatives` | Optional. Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechTranscription. The server may return fewer than maxAlternatives. A value of 0 or 1, or omitting the field, will return a maximum of one. |
| `filterProfanity` | Optional. If set to 'true', the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to 'false' or omitted, profanities won't be filtered out. |
| `speechContexts[]` | Optional. A means to provide context to assist the speech recognition. |
| `enableAutomaticPunctuation` | Optional. If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages; setting it for requests in other languages has no effect. The default 'false' value does not add punctuation to result hypotheses. NOTE: "This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature." |
| `audioTracks[]` | Optional. For file formats, such as MXF or MKV, that support multiple audio tracks, specify up to two tracks. Default: track 0. |
| `enableSpeakerDiarization` | Optional. If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result, using a speakerTag provided in the WordInfo. Note: When this is true, we send all the words from the beginning of the audio for the top alternative in every consecutive response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. |
| `diarizationSpeakerCount` | Optional. If set, specifies the estimated number of speakers in the conversation. If not set, defaults to '2'. Ignored unless enableSpeakerDiarization is set to true. |
| `enableWordConfidence` | Optional. If 'true', the top result includes a list of words and the confidence for those words. If 'false', no word-level confidence information is returned. The default is 'false'. |
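Putting these fields together, a transcription config for a two-speaker English recording might look like the following sketch (the hint phrases and track number are placeholders):

```json
{
  "languageCode": "en-US",
  "maxAlternatives": 2,
  "filterProfanity": false,
  "speechContexts": [
    { "phrases": ["Cloud Video Intelligence", "annotate"] }
  ],
  "enableAutomaticPunctuation": true,
  "audioTracks": [0],
  "enableSpeakerDiarization": true,
  "diarizationSpeakerCount": 2,
  "enableWordConfidence": true
}
```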
SpeechContext
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
JSON representation:

```
{
  "phrases": [
    string
  ]
}
```

| Field | Description |
|---|---|
| `phrases[]` | Optional. A list of strings containing word and phrase "hints" so that the speech recognizer is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits. |
TextDetectionConfig
Config for TEXT_DETECTION.
JSON representation:

```
{
  "languageHints": [
    string
  ],
  "model": string
}
```

| Field | Description |
|---|---|
| `languageHints[]` | Language hints can be specified if the language to be detected is known a priori; they can increase the accuracy of the detection. Language hints must be language codes in BCP-47 format. Automatic language detection is performed if no hint is provided. |
| `model` | Model to use for text detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
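For example, hinting that on-screen text is expected in English or French (the hint values are illustrative):

```json
{
  "languageHints": ["en", "fr"],
  "model": "builtin/stable"
}
```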
PersonDetectionConfig
Config for PERSON_DETECTION.
JSON representation:

```
{
  "includeBoundingBoxes": boolean,
  "includePoseLandmarks": boolean,
  "includeAttributes": boolean
}
```

| Field | Description |
|---|---|
| `includeBoundingBoxes` | Whether bounding boxes are included in the person detection annotation output. |
| `includePoseLandmarks` | Whether to enable pose landmarks detection. Ignored if 'includeBoundingBoxes' is set to false. |
| `includeAttributes` | Whether to enable person attributes detection, such as clothing color (black, blue, etc.), type (coat, dress, etc.), pattern (plain, floral, etc.), hair, etc. Ignored if 'includeBoundingBoxes' is set to false. |
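Because the pose and attribute options are ignored without bounding boxes, a config requesting pose landmarks would look like this sketch:

```json
{
  "includeBoundingBoxes": true,
  "includePoseLandmarks": true,
  "includeAttributes": false
}
```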
ObjectTrackingConfig
Config for OBJECT_TRACKING.
JSON representation:

```
{
  "model": string
}
```

| Field | Description |
|---|---|
| `model` | Model to use for object tracking. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |