Resource: AnnotateVideoRequest
Video annotation request.
JSON representation

```json
{
  "inputUri": string,
  "inputContent": string,
  "features": [
    enum(Feature)
  ],
  "videoContext": {
    object(VideoContext)
  },
  "outputUri": string,
  "locationId": string
}
```
| Fields | |
|---|---|
| inputUri | Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
| inputContent | The video data bytes. Encoding: base64. If unset, the input video(s) should be specified via `inputUri`. |
| features[] | Requested video annotation features (see `Feature` below). |
| videoContext | Additional video context and/or feature-specific parameters. |
| outputUri | Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
| locationId | Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. |
Feature
Video annotation feature.
| Enums | |
|---|---|
| FEATURE_UNSPECIFIED | Unspecified. |
| LABEL_DETECTION | Label detection. Detect objects, such as dog or flower. |
| SHOT_CHANGE_DETECTION | Shot change detection. |
| SAFE_SEARCH_DETECTION | Safe search detection. |
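As an illustration of how the request fields and features fit together, the sketch below builds an AnnotateVideoRequest body in Python. The bucket paths, chosen features, and region are placeholder assumptions, not values taken from this reference.

```python
import base64

# Placeholder AnnotateVideoRequest body; the gs:// paths, features,
# and region below are illustrative assumptions.
request_body = {
    "inputUri": "gs://my-bucket/my-video.mp4",
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
    "videoContext": {
        "labelDetectionMode": "SHOT_MODE",
    },
    "outputUri": "gs://my-bucket/annotations.json",
    "locationId": "us-east1",
}

# Alternatively, embed the video bytes directly instead of pointing at
# Cloud Storage: set inputContent (base64) and leave inputUri unset.
with open("my-video.mp4", "rb") as f:
    inline_body = dict(
        request_body,
        inputContent=base64.b64encode(f.read()).decode("ascii"),
    )
    inline_body.pop("inputUri")
```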
VideoContext
Video context and/or feature-specific parameters.
JSON representation

```json
{
  "segments": [
    {
      object(VideoSegment)
    }
  ],
  "labelDetectionMode": enum(LabelDetectionMode),
  "stationaryCamera": boolean,
  "labelDetectionModel": string,
  "shotChangeDetectionModel": string,
  "safeSearchDetectionModel": string
}
```
| Fields | |
|---|---|
| segments[] | Video segments to annotate. The segments may overlap and are not required to be contiguous or to span the whole video. If unspecified, each video is treated as a single segment. |
| labelDetectionMode | If label detection has been requested, what labels should be detected in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`. |
| stationaryCamera | Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. |
| labelDetectionModel | Model to use for label detection. Supported values: "latest" and "stable" (the default). |
| shotChangeDetectionModel | Model to use for shot change detection. Supported values: "latest" and "stable" (the default). |
| safeSearchDetectionModel | Model to use for safe search detection. Supported values: "latest" and "stable" (the default). |
LabelDetectionMode
Label detection mode.
| Enums | |
|---|---|
| LABEL_DETECTION_MODE_UNSPECIFIED | Unspecified. |
| SHOT_MODE | Detect shot-level labels. |
| FRAME_MODE | Detect frame-level labels. |
| SHOT_AND_FRAME_MODE | Detect both shot-level and frame-level labels. |
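To show how labelDetectionMode combines with the other VideoContext fields, here is a minimal Python sketch. VideoSegment is not defined in this section, so the startTimeOffset/endTimeOffset field names (and the microsecond units) are assumptions for illustration only.

```python
# A videoContext sketch. The segment field names and units are assumed,
# since VideoSegment is not documented in this section.
video_context = {
    "segments": [
        # Assumed: annotate only the first 30 seconds, in microseconds.
        {"startTimeOffset": 0, "endTimeOffset": 30_000_000},
    ],
    "labelDetectionMode": "SHOT_AND_FRAME_MODE",  # shot-level and frame-level labels
    "stationaryCamera": True,         # hint that the camera does not move
    "labelDetectionModel": "stable",  # "stable" is the documented default
}
```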
| Methods | |
|---|---|
| annotate | Performs asynchronous video annotation. |
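For completeness, a hedged sketch of calling the annotate method over REST with the google-auth library. The endpoint URL (including the v1beta1 version in it) is an assumption based on the resource described above; the call returns a long-running operation that must be polled for results.

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Assumed endpoint; adjust the API version to the one you are targeting.
ENDPOINT = "https://videointelligence.googleapis.com/v1beta1/videos:annotate"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

request_body = {
    "inputUri": "gs://my-bucket/my-video.mp4",  # placeholder path
    "features": ["LABEL_DETECTION"],
}

# annotate is asynchronous: the response is an Operation, not the results.
response = session.post(ENDPOINT, json=request_body)
response.raise_for_status()
operation = response.json()
print("Started operation:", operation.get("name"))
```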