Package types (2.3.3)
=====================

API documentation for videointelligence_v1beta2.types package.

Classes
-------

AnnotateVideoProgress
~~~~~~~~~~~~~~~~~~~~~

Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

AnnotateVideoRequest
~~~~~~~~~~~~~~~~~~~~

Video annotation request.

.. attribute:: input_uri

   Input video location. Currently, only `Google Cloud Storage
   <https://cloud.google.com/storage/>`__ URIs are supported, which must be
   specified in the following format: ``gs://bucket-id/object-id`` (other
   URI formats return google.rpc.Code.INVALID_ARGUMENT). For more
   information, see `Request URIs
   <https://cloud.google.com/storage/docs/request-endpoints>`__. A video
   URI may include wildcards in ``object-id``, and thus identify multiple
   videos. Supported wildcards: '*' to match 0 or more characters; '?' to
   match 1 character. If unset, the input video should be embedded in the
   request as ``input_content``. If set, ``input_content`` should be unset.

   :type: str
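Proto-plus client methods generally accept plain dicts shaped like the message types. As an illustrative sketch only (the bucket name, object pattern, and chosen features below are hypothetical), an ``AnnotateVideoRequest`` payload might look like:

```python
# Sketch of an AnnotateVideoRequest payload expressed as a plain dict.
# Field names mirror the message; the URI and feature choices are
# invented for the example.
request = {
    # Wildcards are allowed in the object id: '*' matches 0 or more
    # characters, '?' matches exactly one.
    "input_uri": "gs://bucket-id/videos/*.mp4",
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
    "video_context": {
        "segments": [
            # Annotate only the first 30 seconds of each video.
            {
                "start_time_offset": {"seconds": 0},
                "end_time_offset": {"seconds": 30},
            },
        ],
    },
}
```

Note that ``input_uri`` and ``input_content`` are mutually exclusive: a request embedding the video bytes would set ``input_content`` and leave ``input_uri`` unset.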

AnnotateVideoResponse
~~~~~~~~~~~~~~~~~~~~~

Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Entity
~~~~~~

Detected entity from video analysis.

.. attribute:: entity_id

   Opaque entity ID. Some IDs may be available in the `Google Knowledge
   Graph Search API <https://developers.google.com/knowledge-graph/>`__.

   :type: str

ExplicitContentAnnotation
~~~~~~~~~~~~~~~~~~~~~~~~~

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.

ExplicitContentDetectionConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Config for EXPLICIT_CONTENT_DETECTION.

.. attribute:: model

   Model to use for explicit content detection. Supported values:
   "builtin/stable" (the default if unset) and "builtin/latest".

   :type: str

ExplicitContentFrame
~~~~~~~~~~~~~~~~~~~~

Video frame level annotation results for explicit content.

.. attribute:: time_offset

   Time-offset, relative to the beginning of the video, corresponding to
   the video frame for this location.

   :type: google.protobuf.duration_pb2.Duration

FaceAnnotation
~~~~~~~~~~~~~~

Face annotation.

.. attribute:: thumbnail

   Thumbnail of a representative face view (in JPEG format).

   :type: bytes

FaceDetectionConfig
~~~~~~~~~~~~~~~~~~~

Config for FACE_DETECTION.

.. attribute:: model

   Model to use for face detection. Supported values: "builtin/stable"
   (the default if unset) and "builtin/latest".

   :type: str

FaceFrame
~~~~~~~~~

Video frame level annotation results for face detection.

.. attribute:: normalized_bounding_boxes

   Normalized bounding boxes in a frame. There can be more than one box
   if the same face is detected in multiple locations within the current
   frame.

   :type: Sequence[google.cloud.videointelligence_v1beta2.types.NormalizedBoundingBox]

FaceSegment
~~~~~~~~~~~

Video segment level annotation results for face detection.

.. attribute:: segment

   Video segment where a face was detected.

   :type: google.cloud.videointelligence_v1beta2.types.VideoSegment

Feature
~~~~~~~

Video annotation feature.

LabelAnnotation
~~~~~~~~~~~~~~~

Label annotation.

.. attribute:: entity

   Detected entity.

   :type: google.cloud.videointelligence_v1beta2.types.Entity

LabelDetectionConfig
~~~~~~~~~~~~~~~~~~~~

Config for LABEL_DETECTION.

.. attribute:: label_detection_mode

   What labels should be detected with LABEL_DETECTION, in addition to
   video-level labels or segment-level labels. If unspecified, defaults
   to SHOT_MODE.

   :type: google.cloud.videointelligence_v1beta2.types.LabelDetectionMode

LabelDetectionMode
~~~~~~~~~~~~~~~~~~

Label detection mode.

LabelFrame
~~~~~~~~~~

Video frame level annotation results for label detection.

.. attribute:: time_offset

   Time-offset, relative to the beginning of the video, corresponding to
   the video frame for this location.

   :type: google.protobuf.duration_pb2.Duration

LabelSegment
~~~~~~~~~~~~

Video segment level annotation results for label detection.

.. attribute:: segment

   Video segment where a label was detected.

   :type: google.cloud.videointelligence_v1beta2.types.VideoSegment

Likelihood
~~~~~~~~~~

Bucketized representation of likelihood.
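Bucketized likelihoods are typically consumed by thresholding. As a sketch, the helper below orders the level names (the ordering follows the proto enum) and filters ``ExplicitContentFrame``-shaped dicts; the frame data is invented for the example:

```python
# Likelihood buckets, from least to most likely, matching the proto
# enum's ordering.
LIKELIHOOD_ORDER = [
    "LIKELIHOOD_UNSPECIFIED",
    "VERY_UNLIKELY",
    "UNLIKELY",
    "POSSIBLE",
    "LIKELY",
    "VERY_LIKELY",
]

def at_least(likelihood: str, threshold: str) -> bool:
    """True if `likelihood` is at or above `threshold` in the bucket order."""
    return LIKELIHOOD_ORDER.index(likelihood) >= LIKELIHOOD_ORDER.index(threshold)

# Illustrative ExplicitContentFrame-shaped dicts (invented data).
frames = [
    {"time_offset": {"seconds": 1}, "pornography_likelihood": "VERY_UNLIKELY"},
    {"time_offset": {"seconds": 2}, "pornography_likelihood": "LIKELY"},
]

# Keep only frames flagged at POSSIBLE or above.
flagged = [f for f in frames if at_least(f["pornography_likelihood"], "POSSIBLE")]
```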

NormalizedBoundingBox
~~~~~~~~~~~~~~~~~~~~~

Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
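Since the coordinates are normalized to [0, 1], mapping a box back to pixels is a simple scale by the frame dimensions. A minimal sketch, assuming the message's ``left``/``top``/``right``/``bottom`` field names; the frame size is an example:

```python
def to_pixels(box: dict, width: int, height: int) -> tuple:
    """Scale a normalized bounding box to integer pixel coordinates.

    `box` is a NormalizedBoundingBox-shaped dict with left/top/right/bottom
    in [0, 1]; `width` and `height` are the frame dimensions in pixels.
    """
    return (
        round(box["left"] * width),
        round(box["top"] * height),
        round(box["right"] * width),
        round(box["bottom"] * height),
    )

# Example: a centered box in a 1920x1080 frame.
box = {"left": 0.25, "top": 0.1, "right": 0.75, "bottom": 0.9}
print(to_pixels(box, width=1920, height=1080))  # (480, 108, 1440, 972)
```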

ShotChangeDetectionConfig
~~~~~~~~~~~~~~~~~~~~~~~~~

Config for SHOT_CHANGE_DETECTION.

.. attribute:: model

   Model to use for shot change detection. Supported values:
   "builtin/stable" (the default if unset) and "builtin/latest".

   :type: str

VideoAnnotationProgress
~~~~~~~~~~~~~~~~~~~~~~~

Annotation progress for a single video.

.. attribute:: input_uri

   Video file location in `Google Cloud Storage
   <https://cloud.google.com/storage/>`__.

   :type: str

VideoAnnotationResults
~~~~~~~~~~~~~~~~~~~~~~

Annotation results for a single video.

.. attribute:: input_uri

   Video file location in `Google Cloud Storage
   <https://cloud.google.com/storage/>`__.

   :type: str

VideoContext
~~~~~~~~~~~~

Video context and/or feature-specific parameters.

.. attribute:: segments

   Video segments to annotate. The segments may overlap and are not
   required to be contiguous or span the whole video. If unspecified,
   each video is treated as a single segment.

   :type: Sequence[google.cloud.videointelligence_v1beta2.types.VideoSegment]

VideoSegment
~~~~~~~~~~~~

Video segment.

.. attribute:: start_time_offset

   Time-offset, relative to the beginning of the video, corresponding
   to the start of the segment (inclusive).

   :type: google.protobuf.duration_pb2.Duration
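The offsets are protobuf Durations (whole seconds plus nanos). A small sketch of computing a segment's length from Duration-shaped dicts; the segment values here are illustrative:

```python
from datetime import timedelta

def duration_to_timedelta(d: dict) -> timedelta:
    """Convert a protobuf Duration-shaped dict ({seconds, nanos}) to a timedelta."""
    return timedelta(
        seconds=d.get("seconds", 0),
        microseconds=d.get("nanos", 0) / 1000,  # 1000 nanos per microsecond
    )

# Illustrative VideoSegment-shaped dict: 12.5s through 45s.
segment = {
    "start_time_offset": {"seconds": 12, "nanos": 500_000_000},
    "end_time_offset": {"seconds": 45},
}
length = (
    duration_to_timedelta(segment["end_time_offset"])
    - duration_to_timedelta(segment["start_time_offset"])
)
print(length.total_seconds())  # 32.5
```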