Action recognition

Action recognition identifies different actions in video clips, such as walking or dancing. Each action may be performed over the entire duration of the video.

Use an AutoML model

Note

For background information on creating an AutoML model, see the Vertex AI beginner's guide. For instructions on creating an AutoML model with the console or the API, see "Create a dataset".

Use your AutoML model

The following code sample demonstrates how to use your AutoML model for action recognition with the streaming client library.
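The sample assumes that the google-cloud-videointelligence client library is installed (for example, with pip install google-cloud-videointelligence); the streaming AutoML features used here come from its v1p3beta1 module.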

Python

import io

from google.cloud import videointelligence_v1p3beta1 as videointelligence

# path = 'path_to_file'
# project_id = 'project_id'
# model_id = 'automl_action_recognition_model_id'

client = videointelligence.StreamingVideoIntelligenceServiceClient()

model_path = "projects/{}/locations/us-central1/models/{}".format(
    project_id, model_id
)

automl_config = videointelligence.StreamingAutomlActionRecognitionConfig(
    model_name=model_path
)

video_config = videointelligence.StreamingVideoConfig(
    feature=videointelligence.StreamingFeature.STREAMING_AUTOML_ACTION_RECOGNITION,
    automl_action_recognition_config=automl_config,
)

# config_request should be the first in the stream of requests.
config_request = videointelligence.StreamingAnnotateVideoRequest(
    video_config=video_config
)

# Set the chunk size to 5 MB (a value below 10 MB is recommended).
chunk_size = 5 * 1024 * 1024

def stream_generator():
    yield config_request
    # Load file content.
    # Note: Input videos must have supported video codecs. See
    # https://cloud.google.com/video-intelligence/docs/streaming/streaming#supported_video_codecs
    # for more details.
    with io.open(path, "rb") as video_file:
        while True:
            data = video_file.read(chunk_size)
            if not data:
                break
            yield videointelligence.StreamingAnnotateVideoRequest(
                input_content=data
            )

requests = stream_generator()

# streaming_annotate_video returns a generator.
# The default timeout is about 300 seconds.
# To process longer videos, set the timeout to a value
# larger than the length (in seconds) of the video.
responses = client.streaming_annotate_video(requests, timeout=900)

# Each response corresponds to about 1 second of video.
for response in responses:
    # Check for errors.
    if response.error.message:
        print(response.error.message)
        break

    for label in response.annotation_results.label_annotations:
        for frame in label.frames:
            print(
                "At {:3d}s segment, {:5.1%} {}".format(
                    frame.time_offset.seconds,
                    frame.confidence,
                    label.entity.entity_id,
                )
            )
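
The sample leaves path, project_id, and model_id as commented placeholders. As a minimal sketch, they could be defined before the client is created; all three values below are hypothetical and must be replaced with your own file path, Google Cloud project ID, and AutoML model ID.

path = "my_video.mp4"  # hypothetical: local video file with a supported codec
project_id = "my-gcp-project"  # hypothetical: your Google Cloud project ID
model_id = "1234567890"  # hypothetical: ID of your trained AutoML model

Given the format string in the response loop, each printed line then has the form "At  12s segment, 87.5% <entity_id>", with roughly one line per labeled second of video.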