Enumerations

Feature (static, number)
Video annotation feature.

Values:
  FEATURE_UNSPECIFIED: Unspecified.
  LABEL_DETECTION: Label detection. Detect objects, such as dog or flower.
  FACE_DETECTION: Human face detection and tracking.
  SHOT_CHANGE_DETECTION: Shot change detection.
  SAFE_SEARCH_DETECTION: Safe search detection.

LabelDetectionMode (static, number)
Label detection mode.

Values:
  LABEL_DETECTION_MODE_UNSPECIFIED: Unspecified.
  SHOT_MODE: Detect shot-level labels.
  FRAME_MODE: Detect frame-level labels.
  SHOT_AND_FRAME_MODE: Detect both shot-level and frame-level labels.

LabelLevel (static, number)
Label level (scope).

Values:
  LABEL_LEVEL_UNSPECIFIED: Unspecified.
  VIDEO_LEVEL: Video-level. Corresponds to the whole video.
  SEGMENT_LEVEL: Segment-level. Corresponds to one of AnnotateSpec.segments.
  SHOT_LEVEL: Shot-level. Corresponds to a single shot (i.e. a series of frames without a major camera position or background change).
  FRAME_LEVEL: Frame-level. Corresponds to a single video frame.

Likelihood (static, number)
Bucketized representation of likelihood.

Values:
  UNKNOWN: Unknown likelihood.
  VERY_UNLIKELY: Very unlikely.
  UNLIKELY: Unlikely.
  POSSIBLE: Possible.
  LIKELY: Likely.
  VERY_LIKELY: Very likely.
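A common task is comparing a numeric Likelihood against a threshold. The sketch below mirrors the enum locally; the numeric values are an assumption based on the proto's declaration order (UNKNOWN = 0 through VERY_LIKELY = 5), so verify against the actual client constants before relying on them.

```javascript
// Hypothetical local mirror of the Likelihood enum; numeric values assume
// the proto's declaration order (UNKNOWN = 0 ... VERY_LIKELY = 5).
const Likelihood = {
  UNKNOWN: 0,
  VERY_UNLIKELY: 1,
  UNLIKELY: 2,
  POSSIBLE: 3,
  LIKELY: 4,
  VERY_LIKELY: 5,
};

// Convert a numeric likelihood back to its name for display.
const likelihoodName = (value) =>
  Object.keys(Likelihood).find((k) => Likelihood[k] === value) || 'UNKNOWN';

// Treat LIKELY and above as a positive signal.
const atLeastLikely = (value) => value >= Likelihood.LIKELY;
```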


Abstract types

AnnotateVideoProgress (static)
Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Properties:
  annotationProgress (Array of Object): Progress metadata for all videos specified in AnnotateVideoRequest. Each object should have the same structure as VideoAnnotationProgress.

See also: google.cloud.videointelligence.v1beta1.AnnotateVideoProgress definition in proto format.

AnnotateVideoRequest (static)
Video annotation request.

Properties:
  inputUri (string): Input video location. Currently, only Google Cloud Storage URIs are supported; they must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as input_content. If set, input_content should be unset.
  inputContent (string): The video data bytes. Encoding: base64. If unset, the input video(s) should be specified via input_uri. If set, input_uri should be unset.
  features (Array of number): Requested video annotation features. Each number should be among the values of Feature.
  videoContext (Object): Additional video context and/or feature-specific parameters. This object should have the same structure as VideoContext.
  outputUri (string): Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported; they must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.
  locationId (string): Optional cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region is determined based on the video file's location.

See also: google.cloud.videointelligence.v1beta1.AnnotateVideoRequest definition in proto format.
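A minimal sketch of assembling such a request as a plain object, including the documented inputUri/inputContent mutual exclusion. The local Feature mirror assumes the proto's declaration order (LABEL_DETECTION = 1 through SAFE_SEARCH_DETECTION = 4), and the bucket and object names are placeholders.

```javascript
// Hypothetical local mirror of the Feature enum (values assume proto order).
const Feature = {
  FEATURE_UNSPECIFIED: 0,
  LABEL_DETECTION: 1,
  FACE_DETECTION: 2,
  SHOT_CHANGE_DETECTION: 3,
  SAFE_SEARCH_DETECTION: 4,
};

// Build a request object, enforcing that exactly one of inputUri or
// inputContent is set, as the docs require.
function buildRequest({ inputUri, inputContent, features, videoContext }) {
  if (Boolean(inputUri) === Boolean(inputContent)) {
    throw new Error('Set exactly one of inputUri or inputContent');
  }
  if (inputUri && !inputUri.startsWith('gs://')) {
    throw new Error('Only Google Cloud Storage URIs (gs://...) are supported');
  }
  return { inputUri, inputContent, features, videoContext };
}

// Placeholder bucket/object names, for illustration only.
const request = buildRequest({
  inputUri: 'gs://my-bucket/my-video.mp4',
  features: [Feature.LABEL_DETECTION, Feature.SHOT_CHANGE_DETECTION],
});
```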

AnnotateVideoResponse (static)
Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Properties:
  annotationResults (Array of Object): Annotation results for all videos specified in AnnotateVideoRequest. Each object should have the same structure as VideoAnnotationResults.

See also: google.cloud.videointelligence.v1beta1.AnnotateVideoResponse definition in proto format.

BoundingBox (static)
Bounding box.

Properties:
  left (number): Left X coordinate.
  right (number): Right X coordinate.
  bottom (number): Bottom Y coordinate.
  top (number): Top Y coordinate.

See also: google.cloud.videointelligence.v1beta1.BoundingBox definition in proto format.

FaceAnnotation (static)
Face annotation.

Properties:
  thumbnail (string): Thumbnail of a representative face view (in JPEG format). Encoding: base64.
  segments (Array of Object): All locations where a face was detected. Faces are detected and tracked on a per-video basis (as opposed to across multiple videos). Each object should have the same structure as VideoSegment.
  locations (Array of Object): Face locations at one frame per second. Each object should have the same structure as FaceLocation.

See also: google.cloud.videointelligence.v1beta1.FaceAnnotation definition in proto format.

FaceLocation (static)
Face location.

Properties:
  boundingBox (Object): Bounding box in a frame. This object should have the same structure as BoundingBox.
  timeOffset (number): Video time offset in microseconds.

See also: google.cloud.videointelligence.v1beta1.FaceLocation definition in proto format.
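Since FaceLocation entries arrive at roughly one per second with timeOffset in microseconds, a small sketch for picking the location nearest a given time can be useful. The object shapes below are plain-object stand-ins matching the structure documented above.

```javascript
const MICROS_PER_SECOND = 1e6;

// Find the FaceLocation whose timeOffset is closest to a time in seconds.
function nearestLocation(locations, seconds) {
  const target = seconds * MICROS_PER_SECOND;
  return locations.reduce((best, loc) =>
    Math.abs(loc.timeOffset - target) < Math.abs(best.timeOffset - target)
      ? loc
      : best
  );
}

// Illustrative data: a face tracked over three one-second samples.
const locations = [
  { boundingBox: { left: 10, right: 20, top: 5, bottom: 15 }, timeOffset: 0 },
  { boundingBox: { left: 12, right: 22, top: 5, bottom: 15 }, timeOffset: 1e6 },
  { boundingBox: { left: 14, right: 24, top: 5, bottom: 15 }, timeOffset: 2e6 },
];
const atOneSecond = nearestLocation(locations, 1.2);
```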

LabelAnnotation (static)
Label annotation.

Properties:
  description (string): Textual description, e.g. Fixed-gear bicycle.
  languageCode (string): Language code for description in BCP-47 format.
  locations (Array of Object): Where the label was detected and with what confidence. Each object should have the same structure as LabelLocation.

See also: google.cloud.videointelligence.v1beta1.LabelAnnotation definition in proto format.

LabelLocation (static)
Label location.

Properties:
  segment (Object): Video segment. Set to [-1, -1] for video-level labels. Set to [timestamp, timestamp] for frame-level labels. Otherwise, corresponds to one of AnnotateSpec.segments (if specified) or to shot boundaries (if requested). This object should have the same structure as VideoSegment.
  confidence (number): Confidence that the label is accurate. Range: [0, 1].
  level (number): Label level. The number should be among the values of LabelLevel.

See also: google.cloud.videointelligence.v1beta1.LabelLocation definition in proto format.
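A typical consumer filters label locations by level and confidence. The sketch below assumes LabelLevel's numeric values follow the proto's declaration order (VIDEO_LEVEL = 1 through FRAME_LEVEL = 4); the sample data is illustrative.

```javascript
// Hypothetical local mirror of the LabelLevel enum (values assume proto order).
const LabelLevel = {
  LABEL_LEVEL_UNSPECIFIED: 0,
  VIDEO_LEVEL: 1,
  SEGMENT_LEVEL: 2,
  SHOT_LEVEL: 3,
  FRAME_LEVEL: 4,
};

// Keep only shot-level locations at or above a confidence cutoff.
function confidentShotLocations(locations, minConfidence = 0.7) {
  return locations.filter(
    (loc) =>
      loc.level === LabelLevel.SHOT_LEVEL && loc.confidence >= minConfidence
  );
}

// Illustrative data: two shot-level locations and one video-level location.
const sampleLocations = [
  { segment: { startTimeOffset: 0, endTimeOffset: 5e6 }, confidence: 0.9, level: 3 },
  { segment: { startTimeOffset: 5e6, endTimeOffset: 9e6 }, confidence: 0.4, level: 3 },
  { segment: { startTimeOffset: -1, endTimeOffset: -1 }, confidence: 0.8, level: 1 },
];
```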

SafeSearchAnnotation (static)
Safe search annotation (based on per-frame visual signals only). If no unsafe content has been detected in a frame, no annotations are present for that frame. If only some types of unsafe content have been detected in a frame, the likelihood is set to UNKNOWN for all other types of unsafe content.

Properties:
  adult (number): Likelihood of adult content. The number should be among the values of Likelihood.
  spoof (number): Likelihood that an obvious modification was made to the original version to make it appear funny or offensive. The number should be among the values of Likelihood.
  medical (number): Likelihood of medical content. The number should be among the values of Likelihood.
  violent (number): Likelihood of violent content. The number should be among the values of Likelihood.
  racy (number): Likelihood of racy content. The number should be among the values of Likelihood.
  timeOffset (number): Video time offset in microseconds.

See also: google.cloud.videointelligence.v1beta1.SafeSearchAnnotation definition in proto format.
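A sketch of scanning per-frame annotations for content at or above a likelihood threshold. It assumes Likelihood's numeric values follow the proto order (so LIKELY = 4); the frame data is illustrative.

```javascript
// Assumed numeric value of Likelihood.LIKELY per the proto's declaration order.
const LIKELY = 4;

// Return frames whose adult or violent likelihood reaches LIKELY or above.
function flaggedFrames(annotations) {
  return annotations.filter((a) => a.adult >= LIKELY || a.violent >= LIKELY);
}

// Illustrative data: one clean frame, one frame with VERY_LIKELY adult content.
const frameAnnotations = [
  { adult: 1, violent: 1, timeOffset: 0 },
  { adult: 5, violent: 0, timeOffset: 1e6 },
];
```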

VideoAnnotationProgress (static)
Annotation progress for a single video.

Properties:
  inputUri (string): Video file location in Google Cloud Storage.
  progressPercent (number): Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
  startTime (Object): Time when the request was received. This object should have the same structure as Timestamp.
  updateTime (Object): Time of the most recent update. This object should have the same structure as Timestamp.

See also: google.cloud.videointelligence.v1beta1.VideoAnnotationProgress definition in proto format.

VideoAnnotationResults (static)
Annotation results for a single video.

Properties:
  inputUri (string): Video file location in Google Cloud Storage.
  labelAnnotations (Array of Object): Label annotations. There is exactly one element for each unique label. Each object should have the same structure as LabelAnnotation.
  faceAnnotations (Array of Object): Face annotations. There is exactly one element for each unique face. Each object should have the same structure as FaceAnnotation.
  shotAnnotations (Array of Object): Shot annotations. Each shot is represented as a video segment. Each object should have the same structure as VideoSegment.
  safeSearchAnnotations (Array of Object): Safe search annotations. Each object should have the same structure as SafeSearchAnnotation.
  error (Object): If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail. This object should have the same structure as Status.

See also: google.cloud.videointelligence.v1beta1.VideoAnnotationResults definition in proto format.

VideoContext (static)
Video context and/or feature-specific parameters.

Properties:
  segments (Array of Object): Video segments to annotate. The segments may overlap and are not required to be contiguous or to span the whole video. If unspecified, each video is treated as a single segment. Each object should have the same structure as VideoSegment.
  labelDetectionMode (number): If label detection has been requested, which labels should be detected in addition to video-level or segment-level labels. If unspecified, defaults to SHOT_MODE. The number should be among the values of LabelDetectionMode.
  stationaryCamera (boolean): Whether the video was shot from a stationary (i.e. non-moving) camera. When set to true, may improve detection accuracy for moving objects.
  labelDetectionModel (string): Model to use for label detection. Supported values: "latest" and "stable" (the default).
  faceDetectionModel (string): Model to use for face detection. Supported values: "latest" and "stable" (the default).
  shotChangeDetectionModel (string): Model to use for shot change detection. Supported values: "latest" and "stable" (the default).
  safeSearchDetectionModel (string): Model to use for safe search detection. Supported values: "latest" and "stable" (the default).

See also: google.cloud.videointelligence.v1beta1.VideoContext definition in proto format.
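A sketch of a VideoContext that requests both shot- and frame-level labels on two segments. The LabelDetectionMode mirror assumes the proto's declaration order (SHOT_AND_FRAME_MODE = 3), and the segment offsets (in microseconds) are illustrative.

```javascript
// Hypothetical local mirror of the LabelDetectionMode enum
// (values assume proto order).
const LabelDetectionMode = {
  LABEL_DETECTION_MODE_UNSPECIFIED: 0,
  SHOT_MODE: 1,
  FRAME_MODE: 2,
  SHOT_AND_FRAME_MODE: 3,
};

// Annotate the first 10 seconds and the 30s-40s window of the video.
const videoContext = {
  segments: [
    { startTimeOffset: 0, endTimeOffset: 10e6 },
    { startTimeOffset: 30e6, endTimeOffset: 40e6 },
  ],
  labelDetectionMode: LabelDetectionMode.SHOT_AND_FRAME_MODE,
  stationaryCamera: false,
  labelDetectionModel: 'stable', // the documented default
};
```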

VideoSegment (static)
Video segment.

Properties:
  startTimeOffset (number): Start offset in microseconds (inclusive). Unset means 0.
  endTimeOffset (number): End offset in microseconds (inclusive). Unset means 0.

See also: google.cloud.videointelligence.v1beta1.VideoSegment definition in proto format.
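Since both offsets are microsecond values with unset meaning 0, a segment's duration in seconds can be sketched as:

```javascript
// Compute a segment's duration in seconds, treating unset offsets as 0
// per the documentation above.
function segmentDurationSeconds(segment) {
  const start = segment.startTimeOffset || 0;
  const end = segment.endTimeOffset || 0;
  return (end - start) / 1e6;
}
```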