Package types (2.2.0)

API documentation for videointelligence_v1p1beta1.types package.

Classes

AnnotateVideoProgress

Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

AnnotateVideoRequest

Video annotation request.

.. attribute:: input_uri

Input video location. Currently, only Google Cloud Storage <https://cloud.google.com/storage/> URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs <https://cloud.google.com/storage/docs/request-endpoints>. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as input_content. If set, input_content should be unset.

:type: str
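
A minimal sketch of building a request around input_uri and starting the long-running annotation operation (the bucket and object names are placeholders)::

    from google.cloud import videointelligence_v1p1beta1 as vi

    client = vi.VideoIntelligenceServiceClient()

    # input_uri and input_content are mutually exclusive; a gs:// URI is used
    # here, so input_content stays unset.
    request = vi.AnnotateVideoRequest(
        input_uri="gs://my-bucket/videos/my-video.mp4",
        features=[vi.Feature.LABEL_DETECTION],
    )

    operation = client.annotate_video(request=request)
    # The operation's metadata carries AnnotateVideoProgress; its result is
    # an AnnotateVideoResponse.
    response = operation.result(timeout=300)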

AnnotateVideoResponse

Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Entity

Detected entity from video analysis.

.. attribute:: entity_id

Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API <https://developers.google.com/knowledge-graph/>.

:type: str

ExplicitContentAnnotation

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.

ExplicitContentDetectionConfig

Config for EXPLICIT_CONTENT_DETECTION.

.. attribute:: model

Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

:type: str
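
A sketch of selecting the explicit content model through the video context (the model names are those listed above; the URI is a placeholder)::

    from google.cloud import videointelligence_v1p1beta1 as vi

    context = vi.VideoContext(
        explicit_content_detection_config=vi.ExplicitContentDetectionConfig(
            model="builtin/latest",  # "builtin/stable" is the default if unset
        ),
    )
    request = vi.AnnotateVideoRequest(
        input_uri="gs://my-bucket/videos/my-video.mp4",
        features=[vi.Feature.EXPLICIT_CONTENT_DETECTION],
        video_context=context,
    )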

ExplicitContentFrame

Video frame level annotation results for explicit content.

.. attribute:: time_offset

Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.

:type: google.protobuf.duration_pb2.Duration

Feature

Video annotation feature.

LabelAnnotation

Label annotation.

.. attribute:: entity

Detected entity.

:type: google.cloud.videointelligence_v1p1beta1.types.Entity

LabelDetectionConfig

Config for LABEL_DETECTION.

.. attribute:: label_detection_mode

What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE.

:type: google.cloud.videointelligence_v1p1beta1.types.LabelDetectionMode
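
A sketch of requesting frame-level labels in addition to shot-level labels via the label detection mode::

    from google.cloud import videointelligence_v1p1beta1 as vi

    config = vi.LabelDetectionConfig(
        label_detection_mode=vi.LabelDetectionMode.SHOT_AND_FRAME_MODE,
    )
    context = vi.VideoContext(label_detection_config=config)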

LabelDetectionMode

Label detection mode.

LabelFrame

Video frame level annotation results for label detection.

.. attribute:: time_offset

Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.

:type: google.protobuf.duration_pb2.Duration

LabelSegment

Video segment level annotation results for label detection.

.. attribute:: segment

Video segment where a label was detected.

:type: google.cloud.videointelligence_v1p1beta1.types.VideoSegment
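
As a sketch of how the label result types relate, assuming response is an AnnotateVideoResponse returned for a LABEL_DETECTION request::

    # Each VideoAnnotationResults entry holds the labels for one input video.
    for result in response.annotation_results:
        for annotation in result.segment_label_annotations:
            print("label:", annotation.entity.description)
            for segment in annotation.segments:
                print("  confidence:", segment.confidence)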

Likelihood

Bucketized representation of likelihood.

ShotChangeDetectionConfig

Config for SHOT_CHANGE_DETECTION.

.. attribute:: model

Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

:type: str

SpeechContext

Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

SpeechRecognitionAlternative

Alternative hypotheses (a.k.a. n-best list).

.. attribute:: transcript

Output only. Transcript text representing the words that the user spoke.

:type: str

SpeechTranscription

A speech recognition result corresponding to a portion of the audio.

SpeechTranscriptionConfig

Config for SPEECH_TRANSCRIPTION.

.. attribute:: language_code

Required. The language of the supplied audio as a BCP-47 <https://www.rfc-editor.org/rfc/bcp/bcp47.txt> language tag. Example: "en-US". See Language Support <https://cloud.google.com/speech/docs/languages> for a list of the currently supported language codes.

:type: str
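
A sketch of a speech transcription configuration; the phrase hint is a placeholder, and passing SpeechContext messages through a speech_contexts field is an assumption based on the SpeechContext type above::

    from google.cloud import videointelligence_v1p1beta1 as vi

    config = vi.SpeechTranscriptionConfig(
        language_code="en-US",  # required BCP-47 language tag
        # Assumed field: phrase "hints" passed as SpeechContext messages.
        speech_contexts=[vi.SpeechContext(phrases=["Cloud Video Intelligence"])],
    )
    context = vi.VideoContext(speech_transcription_config=config)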

VideoAnnotationProgress

Annotation progress for a single video.

.. attribute:: input_uri

Output only. Video file location in Google Cloud Storage <https://cloud.google.com/storage/>.

:type: str

VideoAnnotationResults

Annotation results for a single video.

.. attribute:: input_uri

Output only. Video file location in Google Cloud Storage <https://cloud.google.com/storage/>.

:type: str

VideoContext

Video context and/or feature-specific parameters.

.. attribute:: segments

Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.

:type: Sequence[google.cloud.videointelligence_v1p1beta1.types.VideoSegment]
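
A sketch of restricting annotation to part of a video by passing segments in the video context (the 30-second window is arbitrary)::

    from google.protobuf import duration_pb2
    from google.cloud import videointelligence_v1p1beta1 as vi

    # Annotate only the first 30 seconds of the video.
    segment = vi.VideoSegment(
        start_time_offset=duration_pb2.Duration(seconds=0),
        end_time_offset=duration_pb2.Duration(seconds=30),
    )
    context = vi.VideoContext(segments=[segment])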

VideoSegment

Video segment.

.. attribute:: start_time_offset

Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).

:type: google.protobuf.duration_pb2.Duration

WordInfo

Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as enable_word_time_offsets.
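
A sketch of reading word-level timing from the results, assuming response is an AnnotateVideoResponse for a SPEECH_TRANSCRIPTION request whose config asked for word time offsets::

    for result in response.annotation_results:
        for transcription in result.speech_transcriptions:
            for alternative in transcription.alternatives:
                print("transcript:", alternative.transcript)
                for word in alternative.words:
                    print(word.word, word.start_time, word.end_time)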