Action recognition identifies actions in video clips, such as walking or dancing. An action may span the entire video or occur in only part of it.
Using an AutoML model
Before you begin
For background on creating an AutoML model, check out the Vertex AI Beginner's guide. For instructions on how to create your AutoML model, see Video data in "Develop and use ML models" in the Vertex AI documentation.
Use your AutoML model
The following code sample demonstrates how to use your AutoML model for action recognition with the streaming client library.
import io

from google.cloud import videointelligence_v1p3beta1 as videointelligence

# path = 'path_to_file'
# project_id = 'project_id'
# model_id = 'automl_action_recognition_model_id'

client = videointelligence.StreamingVideoIntelligenceServiceClient()

model_path = "projects/{}/locations/us-central1/models/{}".format(
    project_id, model_id
)

automl_config = videointelligence.StreamingAutomlActionRecognitionConfig(
    model_name=model_path
)

video_config = videointelligence.StreamingVideoConfig(
    feature=videointelligence.StreamingFeature.STREAMING_AUTOML_ACTION_RECOGNITION,
    automl_action_recognition_config=automl_config,
)

# config_request should be the first in the stream of requests.
config_request = videointelligence.StreamingAnnotateVideoRequest(
    video_config=video_config
)

# Set the chunk size to 5MB (recommended less than 10MB).
chunk_size = 5 * 1024 * 1024

def stream_generator():
    yield config_request
    # Load file content.
    # Note: Input videos must have supported video codecs. See
    # https://cloud.google.com/video-intelligence/docs/streaming/streaming#supported_video_codecs
    # for more details.
    with io.open(path, "rb") as video_file:
        while True:
            data = video_file.read(chunk_size)
            if not data:
                break
            yield videointelligence.StreamingAnnotateVideoRequest(
                input_content=data
            )

requests = stream_generator()

# streaming_annotate_video returns a generator.
# The default timeout is about 300 seconds.
# To process longer videos it should be set to
# larger than the length (in seconds) of the video.
responses = client.streaming_annotate_video(requests, timeout=900)

# Each response corresponds to about 1 second of video.
for response in responses:
    # Check for errors.
    if response.error.message:
        print(response.error.message)
        break

    for label in response.annotation_results.label_annotations:
        for frame in label.frames:
            print(
                "At {:3d}s segment, {:5.1%} {}".format(
                    frame.time_offset.seconds,
                    frame.confidence,
                    label.entity.entity_id,
                )
            )
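The final loop in the sample prints every frame-level result. If you only want confident detections, you can aggregate the same response fields instead. The following minimal sketch could replace that printing loop; it keeps the highest-confidence label for each one-second segment. The 0.7 threshold is a hypothetical value you would tune for your own model, not something the API prescribes.

from collections import defaultdict

# Hypothetical confidence cutoff; tune it for your own model.
CONFIDENCE_THRESHOLD = 0.7

# Map each one-second offset to its best (confidence, entity_id) pair.
best_per_second = defaultdict(lambda: (0.0, None))

for response in responses:
    # Check for errors.
    if response.error.message:
        print(response.error.message)
        break

    for label in response.annotation_results.label_annotations:
        for frame in label.frames:
            seconds = frame.time_offset.seconds
            if (
                frame.confidence >= CONFIDENCE_THRESHOLD
                and frame.confidence > best_per_second[seconds][0]
            ):
                best_per_second[seconds] = (frame.confidence, label.entity.entity_id)

for seconds, (confidence, entity_id) in sorted(best_per_second.items()):
    print("{:3d}s: {} ({:5.1%})".format(seconds, entity_id, confidence))

Note that responses is a generator tied to the open stream, so run only one consuming loop per streaming_annotate_video call.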
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-01-30 UTC."],[],[]]