- HTTP request
- Request body
- Response body
- Authorization Scopes
- Feature
- VideoContext
- VideoSegment
- LabelDetectionConfig
- LabelDetectionMode
- ShotChangeDetectionConfig
- ExplicitContentDetectionConfig
- SpeechTranscriptionConfig
- SpeechContext
- TextDetectionConfig
- Status
Performs asynchronous video annotation. Progress and results can be retrieved through the `google.longrunning.Operations` interface. `Operation.metadata` contains `AnnotateVideoProgress` (progress). `Operation.response` contains `AnnotateVideoResponse` (results).
HTTP request
POST https://videointelligence.googleapis.com/v1p2beta1/videos:annotate
The URL uses gRPC Transcoding syntax.
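As a sketch of how a client might assemble a call to this endpoint (the access token and bucket path below are placeholders, not values from this page), the request can be built with Python's standard library:

```python
import json
import urllib.request

# Placeholder values -- substitute a real OAuth token (e.g. from
# `gcloud auth print-access-token`) and a real Cloud Storage URI.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"
ENDPOINT = "https://videointelligence.googleapis.com/v1p2beta1/videos:annotate"

body = {
    "inputUri": "gs://example-bucket/example-video.mp4",  # hypothetical object
    "features": ["LABEL_DETECTION"],
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Sending is deliberately omitted here; urllib.request.urlopen(req) would
# return the long-running Operation described under "Response body".
```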
Request body
The request body contains data with the following structure:
JSON representation

```
{
  "inputUri": string,
  "inputContent": string,
  "features": [
    enum(Feature)
  ],
  "videoContext": {
    object(VideoContext)
  },
  "outputUri": string,
  "locationId": string
}
```
Fields | Description
---|---
`inputUri` | Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`.
`inputContent` | The video data bytes. If unset, the input video(s) should be specified via `inputUri`. If set, `inputUri` should be unset. A base64-encoded string.
`features[]` | Requested video annotation features.
`videoContext` | Additional video context and/or feature-specific parameters.
`outputUri` | Optional. Location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`.
`locationId` | Optional. Cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location.
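Putting the fields above together, a fuller request body might look like the following sketch (bucket and object names are placeholders):

```python
import json

# A fuller AnnotateVideoRequest body; all names here are illustrative.
request_body = {
    "inputUri": "gs://example-bucket/videos/clip.mp4",
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
    "videoContext": {
        "labelDetectionConfig": {"labelDetectionMode": "SHOT_MODE"},
    },
    "outputUri": "gs://example-bucket/output/clip.json",
    "locationId": "us-east1",
}

# inputUri and inputContent are mutually exclusive: exactly one should be set.
assert ("inputUri" in request_body) != ("inputContent" in request_body)

print(json.dumps(request_body, indent=2))
```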
Response body
If successful, the response body contains data with the following structure:
This resource represents a long-running operation that is the result of a network API call.
JSON representation

```
{
  "name": string,
  "metadata": {
    "@type": string,
    field1: ...,
    ...
  },
  "done": boolean,

  // Union field result can be only one of the following:
  "error": {
    object(Status)
  },
  "response": {
    "@type": string,
    field1: ...,
    ...
  }
  // End of list of possible types for union field result.
}
```
Fields | Description
---|---
`name` | The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should have the format of `operations/some/unique/name`.
`metadata` | Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. An object containing fields of an arbitrary type. An additional field `"@type"` contains a URI identifying the type.
`done` | If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
Union field `result`. The operation result, which can be either an `error` or a valid `response`. If `done` == `false`, neither `error` nor `response` is set. If `done` == `true`, exactly one of `error` or `response` is set. `result` can be only one of the following: |
`error` | The error result of the operation in case of failure or cancellation.
`response` | The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method's name. An object containing fields of an arbitrary type. An additional field `"@type"` contains a URI identifying the type.
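The `done`/`error`/`response` union contract above can be captured in a small helper. The sample operation dicts here are illustrative shapes, not captured API output:

```python
def operation_state(op: dict) -> str:
    """Classify a long-running Operation per the union-field contract:
    not done -> running; done + error -> failed; done + response -> succeeded."""
    if not op.get("done", False):
        # While done is false, neither error nor response is set.
        assert "error" not in op and "response" not in op
        return "running"
    # When done is true, exactly one of error/response is set.
    assert ("error" in op) != ("response" in op)
    return "failed" if "error" in op else "succeeded"

# Illustrative Operation shapes (names are hypothetical):
running = {"name": "operations/example-123", "done": False}
failed = {"name": "operations/example-123", "done": True,
          "error": {"code": 3, "message": "bad inputUri"}}
succeeded = {"name": "operations/example-123", "done": True,
             "response": {"@type": "...AnnotateVideoResponse"}}
```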
Authorization Scopes
Requires the following OAuth scope:
https://www.googleapis.com/auth/cloud-platform
For more information, see the Authentication Overview.
Feature
Video annotation feature.
Enums | Description
---|---
`FEATURE_UNSPECIFIED` | Unspecified.
`LABEL_DETECTION` | Label detection. Detect objects, such as dog or flower.
`SHOT_CHANGE_DETECTION` | Shot change detection.
`EXPLICIT_CONTENT_DETECTION` | Explicit content detection.
`SPEECH_TRANSCRIPTION` | Speech transcription.
`TEXT_DETECTION` | OCR text detection and tracking.
`OBJECT_TRACKING` | Object detection and tracking.
VideoContext
Video context and/or feature-specific parameters.
JSON representation

```
{
  "segments": [
    {
      object(VideoSegment)
    }
  ],
  "labelDetectionConfig": {
    object(LabelDetectionConfig)
  },
  "shotChangeDetectionConfig": {
    object(ShotChangeDetectionConfig)
  },
  "explicitContentDetectionConfig": {
    object(ExplicitContentDetectionConfig)
  },
  "speechTranscriptionConfig": {
    object(SpeechTranscriptionConfig)
  },
  "textDetectionConfig": {
    object(TextDetectionConfig)
  }
}
```
Fields | Description
---|---
`segments[]` | Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.
`labelDetectionConfig` | Config for LABEL_DETECTION.
`shotChangeDetectionConfig` | Config for SHOT_CHANGE_DETECTION.
`explicitContentDetectionConfig` | Config for EXPLICIT_CONTENT_DETECTION.
`speechTranscriptionConfig` | Config for SPEECH_TRANSCRIPTION.
`textDetectionConfig` | Config for TEXT_DETECTION.
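As a sketch, a `videoContext` that scopes annotation to one segment and tunes two features could be assembled like this (offsets and model names follow the formats documented in the sections below):

```python
# Illustrative videoContext; only the configs for the requested features
# need to be present.
video_context = {
    "segments": [
        # Annotate only the first 30.5 seconds of the video.
        {"startTimeOffset": "0s", "endTimeOffset": "30.5s"},
    ],
    "labelDetectionConfig": {
        "labelDetectionMode": "SHOT_AND_FRAME_MODE",
        "stationaryCamera": True,  # hint: footage from a fixed camera
    },
    "shotChangeDetectionConfig": {"model": "builtin/stable"},
}
```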
VideoSegment
Video segment.
JSON representation

```
{
  "startTimeOffset": string,
  "endTimeOffset": string
}
```
Fields | Description
---|---
`startTimeOffset` | Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive). A duration in seconds with up to nine fractional digits, terminated by '`s`'. Example: `"3.5s"`.
`endTimeOffset` | Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive). A duration in seconds with up to nine fractional digits, terminated by '`s`'. Example: `"3.5s"`.
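The time offsets are duration strings: seconds with up to nine fractional digits, terminated by 's' (for example, "3.5s"). A small helper can produce them from numeric seconds:

```python
def to_duration(seconds: float) -> str:
    """Format seconds as the Duration string this API expects:
    up to nine fractional digits, terminated by 's' (e.g. '3.5s')."""
    text = f"{seconds:.9f}".rstrip("0").rstrip(".")
    return text + "s"

segment = {
    "startTimeOffset": to_duration(0),    # "0s"
    "endTimeOffset": to_duration(12.25),  # "12.25s"
}
```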
LabelDetectionConfig
Config for LABEL_DETECTION.
JSON representation

```
{
  "labelDetectionMode": enum(LabelDetectionMode),
  "stationaryCamera": boolean,
  "model": string,
  "frameConfidenceThreshold": number,
  "videoConfidenceThreshold": number
}
```
Fields | Description
---|---
`labelDetectionMode` | What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`.
`stationaryCamera` | Whether the video has been shot from a stationary (i.e., non-moving) camera. When set to `true`, might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled.
`model` | Model to use for label detection. Supported values: `"builtin/stable"` (the default if unset) and `"builtin/latest"`.
`frameConfidenceThreshold` | The confidence threshold used to filter labels from frame-level detection. If not set, it defaults to 0.4. The valid range for this threshold is [0.1, 0.9]; any value outside this range will be clipped. Note: for best results, keep the default threshold; the default may change whenever a new model is released.
`videoConfidenceThreshold` | The confidence threshold used to filter labels from video-level and shot-level detections. If not set, it defaults to 0.3. The valid range for this threshold is [0.1, 0.9]; any value outside this range will be clipped. Note: for best results, keep the default threshold; the default may change whenever a new model is released.
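The clipping behavior described above can also be mirrored client-side, so out-of-range thresholds are caught before the request is sent. A minimal sketch:

```python
def clamp_threshold(value: float) -> float:
    """Clamp a confidence threshold into the documented [0.1, 0.9] range,
    mirroring the server-side clipping described above."""
    return min(max(value, 0.1), 0.9)

label_config = {
    "labelDetectionMode": "SHOT_MODE",
    "frameConfidenceThreshold": clamp_threshold(0.05),  # clipped up to 0.1
    "videoConfidenceThreshold": clamp_threshold(0.3),   # already in range
}
```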
LabelDetectionMode
Label detection mode.
Enums | Description
---|---
`LABEL_DETECTION_MODE_UNSPECIFIED` | Unspecified.
`SHOT_MODE` | Detect shot-level labels.
`FRAME_MODE` | Detect frame-level labels.
`SHOT_AND_FRAME_MODE` | Detect both shot-level and frame-level labels.
ShotChangeDetectionConfig
Config for SHOT_CHANGE_DETECTION.
JSON representation

```
{
  "model": string
}
```

Fields | Description
---|---
`model` | Model to use for shot change detection. Supported values: `"builtin/stable"` (the default if unset) and `"builtin/latest"`.
ExplicitContentDetectionConfig
Config for EXPLICIT_CONTENT_DETECTION.
JSON representation

```
{
  "model": string
}
```

Fields | Description
---|---
`model` | Model to use for explicit content detection. Supported values: `"builtin/stable"` (the default if unset) and `"builtin/latest"`.
SpeechTranscriptionConfig
Config for SPEECH_TRANSCRIPTION.
JSON representation

```
{
  "languageCode": string,
  "maxAlternatives": number,
  "filterProfanity": boolean,
  "speechContexts": [
    {
      object(SpeechContext)
    }
  ],
  "enableAutomaticPunctuation": boolean,
  "audioTracks": [
    number
  ],
  "enableSpeakerDiarization": boolean,
  "diarizationSpeakerCount": number,
  "enableWordConfidence": boolean
}
```
Fields | Description
---|---
`languageCode` | Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
`maxAlternatives` | Optional. Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of `SpeechRecognitionAlternative` messages within each `SpeechTranscription`. The server may return fewer than `maxAlternatives`. Valid values are 0-30. A value of 0 or 1, or omitting the field, will return a maximum of one.
`filterProfanity` | Optional. If set to `true`, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to `false` or omitted, profanities won't be filtered out.
`speechContexts[]` | Optional. A means to provide context to assist the speech recognition.
`enableAutomaticPunctuation` | Optional. If `true`, adds punctuation to recognition result hypotheses. This feature is only available in select languages; setting it for requests in other languages has no effect. The default `false` value does not add punctuation to result hypotheses. NOTE: "This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature."
`audioTracks[]` | Optional. For file formats, such as MXF or MKV, supporting multiple audio tracks, specify up to two tracks. Default: track 0.
`enableSpeakerDiarization` | Optional. If `true`, enables speaker detection for each recognized word in the top alternative of the recognition result, using a `speakerTag` provided in the `WordInfo`. Note: when this is `true`, all the words from the beginning of the audio are sent for the top alternative in every consecutive response, so that speaker tags can improve as the models learn to identify the speakers in the conversation over time.
`diarizationSpeakerCount` | Optional. If set, specifies the estimated number of speakers in the conversation. Defaults to 2 if not set. Ignored unless `enableSpeakerDiarization` is set to `true`.
`enableWordConfidence` | Optional. If `true`, the top result includes a list of words and the confidence for those words. If `false`, no word-level confidence information is returned. The default is `false`.
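Tying the fields above together, a transcription config that enables punctuation, diarization, and word confidence might look like this sketch (the hint phrase is illustrative):

```python
# Illustrative SpeechTranscriptionConfig; only languageCode is required.
speech_config = {
    "languageCode": "en-US",        # required, BCP-47 tag
    "maxAlternatives": 2,
    "filterProfanity": False,
    "speechContexts": [
        {"phrases": ["Cloud Video Intelligence"]}  # hypothetical hint phrase
    ],
    "enableAutomaticPunctuation": True,
    "audioTracks": [0],             # up to two tracks; track 0 is the default
    "enableSpeakerDiarization": True,
    "diarizationSpeakerCount": 2,   # ignored unless diarization is enabled
    "enableWordConfidence": True,
}
```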
SpeechContext
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
JSON representation

```
{
  "phrases": [
    string
  ]
}
```

Fields | Description
---|---
`phrases[]` | Optional. A list of strings containing word and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. It can also be used to add additional words to the vocabulary of the recognizer. See usage limits.
TextDetectionConfig
Config for TEXT_DETECTION.
JSON representation

```
{
  "languageHints": [
    string
  ]
}
```

Fields | Description
---|---
`languageHints[]` | Language hints can be specified if the language to be detected is known a priori; they can increase the accuracy of detection. Each hint must be a language code in BCP-47 format. Automatic language detection is performed if no hint is provided.
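A client can sanity-check hint tags before sending them. The validator below is my own loose shape check, not part of this API, and deliberately does not attempt full BCP-47 validation:

```python
import re

def is_bcp47ish(tag: str) -> bool:
    """Loose shape check for a BCP-47-style tag (a primary subtag plus
    optional hyphen-separated subtags); not a full BCP-47 validator."""
    return re.fullmatch(r"[A-Za-z]{2,8}(-[A-Za-z0-9]{1,8})*", tag) is not None

text_config = {"languageHints": ["en", "fr-CA"]}
assert all(is_bcp47ish(tag) for tag in text_config["languageHints"])
```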
Status
The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. The error model is designed to be:
- Simple to use and understand for most users
- Flexible enough to meet unexpected needs
Overview
The `Status` message contains three pieces of data: error code, error message, and error details. The error code should be an enum value of `google.rpc.Code`, but it may accept additional error codes if needed. The error message should be a developer-facing English message that helps developers understand and resolve the error. If a localized user-facing error message is needed, put the localized message in the error details or localize it in the client. The optional error details may contain arbitrary information about the error. There is a predefined set of error detail types in the package `google.rpc` that can be used for common error conditions.
Language mapping
The `Status` message is the logical representation of the error model, but it is not necessarily the actual wire format. When the `Status` message is exposed in different client libraries and different wire protocols, it can be mapped differently. For example, it will likely be mapped to some exceptions in Java, but more likely mapped to some error codes in C.
Other uses
The error model and the `Status` message can be used in a variety of environments, either with or without APIs, to provide a consistent developer experience across different environments.
Example uses of this error model include:
- Partial errors. If a service needs to return partial errors to the client, it may embed the `Status` in the normal response to indicate the partial errors.
- Workflow errors. A typical workflow has multiple steps. Each step may have a `Status` message for error reporting.
- Batch operations. If a client uses batch request and batch response, the `Status` message should be used directly inside batch response, one for each error sub-response.
- Asynchronous operations. If an API call embeds asynchronous operation results in its response, the status of those operations should be represented directly using the `Status` message.
- Logging. If some API errors are stored in logs, the message `Status` could be used directly after any stripping needed for security/privacy reasons.
JSON representation

```
{
  "code": number,
  "message": string,
  "details": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
```
Fields | Description
---|---
`code` | The status code, which should be an enum value of `google.rpc.Code`.
`message` | A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the `details` field, or localized by the client.
`details[]` | A list of messages that carry the error details. There is a common set of message types for APIs to use. An object containing fields of an arbitrary type. An additional field `"@type"` contains a URI identifying the type.
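When an annotation Operation fails, its `error` field carries a `Status` of this shape. As a sketch, a client might flatten it into a log line like so (the sample dict is illustrative, not real API output):

```python
def summarize_status(status: dict) -> str:
    """Render a google.rpc.Status dict as a one-line developer message."""
    line = f"error {status.get('code', 0)}: {status.get('message', '')}"
    details = status.get("details", [])
    if details:
        line += f" ({len(details)} detail message(s))"
    return line

# Illustrative Status payload, e.g. taken from Operation.error:
sample = {"code": 3, "message": "Invalid inputUri", "details": []}
```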