Recognize text

The Text Detection feature performs optical character recognition (OCR). It detects and extracts text that appears in an input video.

Text detection is available for all of the languages supported by the Cloud Vision API.

Request text detection for a video on Google Cloud Storage

The following examples demonstrate text detection on a file located in Google Cloud Storage.

Protocol

Refer to the videos:annotate API endpoint for complete details.

To perform text detection, make a POST request to the v1p2beta1/videos:annotate endpoint.

The following example uses the gcloud auth application-default print-access-token command to obtain an access token for a service account set up for the project using the Google Cloud Platform Cloud SDK. For instructions on installing the Cloud SDK and setting up a project with a service account, see the Quickstart.

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     --data "{
      'input_uri': 'gs://cloud-videointelligence-demo/assistant.mp4',
      'features': ['TEXT_DETECTION'],
    }" "https://videointelligence.googleapis.com/v1/videos:annotate"

If the Video Intelligence annotation request succeeds, the response contains a single name field:

{
  "name": "us-west1.12542045171217819670"
}

This name represents a long-running operation, which can be queried using the v1.operations API.

To retrieve the operation results, replace your-operation-name in the command below with the name value from the response you just received:

curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer  $(gcloud auth application-default print-access-token)" \
"https://videointelligence.googleapis.com/v1/operations/your-operation-name"

Once the operation has completed, the response reports done: true, and you should receive a list of text detection annotations.
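
If you are calling the REST API directly, the following Python snippet is a minimal sketch of polling the operation until done: true is reported and then printing each detected text string. It assumes the requests package is installed; ACCESS_TOKEN and OPERATION_NAME are placeholders you fill in from the previous steps.

import time

import requests

# Placeholders (assumptions for this sketch):
# ACCESS_TOKEN   - output of `gcloud auth application-default print-access-token`
# OPERATION_NAME - the "name" value returned by the videos:annotate request
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
OPERATION_NAME = "YOUR_OPERATION_NAME"

url = "https://videointelligence.googleapis.com/v1/operations/" + OPERATION_NAME
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

# Poll the long-running operation until it reports done: true.
operation = requests.get(url, headers=headers).json()
while not operation.get("done"):
    time.sleep(10)
    operation = requests.get(url, headers=headers).json()

# Text annotations are nested under response.annotationResults[0].textAnnotations.
annotations = operation["response"]["annotationResults"][0].get("textAnnotations", [])
for annotation in annotations:
    print(annotation["text"])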

Text detection annotations are returned as a textAnnotations list:

"textAnnotations": [
  {
    "text": "Hair Salon",
    "segments": [
      {
        "segment": {
          "startTimeOffset": "0.833333s",
          "endTimeOffset": "2.291666s"
        },
        "confidence": 0.99438506,
        "frames": [
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.7015625,
                  "y": 0.59583336
                },
                {
                  "x": 0.7984375,
                  "y": 0.59583336
                },
                {
                  "x": 0.7984375,
                  "y": 0.64166665
                },
                {
                  "x": 0.7015625,
                  "y": 0.64166665
                }
              ]
            },
            "timeOffset": "0.833333s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.041666s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.250s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6319444
                },
                {
                  "x": 0.70234376,
                  "y": 0.6319444
                }
              ]
            },
            "timeOffset": "1.458333s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.666666s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.875s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "2.083333s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "2.291666s"
          }
        ]
      }
    ]
  },
  {
    "text": "\"Sure, give me one second.\"",
    "segments": [
      {
        "segment": {
          "startTimeOffset": "10.625s",
          "endTimeOffset": "13.333333s"
        },
        "confidence": 0.98716676,
        "frames": [
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.60859376,
                  "y": 0.59583336
                },
                {
                  "x": 0.8952959,
                  "y": 0.5903528
                },
                {
                  "x": 0.89560676,
                  "y": 0.6417387
                },
                {
                  "x": 0.60890454,
                  "y": 0.64721924
                }
              ]
            },
            "timeOffset": "10.625s"
          },
  ...

    ]
  }
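
The rotatedBoundingBox vertices are normalized coordinates in the [0, 1] range, relative to the frame dimensions. The following Python sketch converts the first frame's vertices from the response above to pixel coordinates, assuming a hypothetical 1280x720 video (substitute your video's actual resolution):

# Convert normalized rotatedBoundingBox vertices to pixel coordinates.
# The 1280x720 resolution is an assumption for illustration only.
frame_width, frame_height = 1280, 720

vertices = [
    {"x": 0.7015625, "y": 0.59583336},
    {"x": 0.7984375, "y": 0.59583336},
    {"x": 0.7984375, "y": 0.64166665},
    {"x": 0.7015625, "y": 0.64166665},
]

for vertex in vertices:
    print("({:.0f}, {:.0f})".format(vertex["x"] * frame_width,
                                    vertex["y"] * frame_height))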

C#

public static object DetectTextGcs(string gcsUri)
{
    var client = VideoIntelligenceServiceClient.Create();
    var request = new AnnotateVideoRequest
    {
        InputUri = gcsUri,
        Features = { Feature.TextDetection },
    };

    Console.WriteLine("\nProcessing video for text detection.");
    var op = client.AnnotateVideo(request).PollUntilCompleted();

    // Retrieve the first result because only one video was processed.
    var annotationResults = op.Result.AnnotationResults[0];

    // Get only the first result.
    var textAnnotation = annotationResults.TextAnnotations[0];
    Console.WriteLine($"\nText: {textAnnotation.Text}");

    // Get the first text segment.
    var textSegment = textAnnotation.Segments[0];
    var startTime = textSegment.Segment.StartTimeOffset;
    var endTime = textSegment.Segment.EndTimeOffset;
    Console.Write(
        $"Start time: {startTime.Seconds + startTime.Nanos / 1e9 }, ");
    Console.WriteLine(
        $"End time: {endTime.Seconds + endTime.Nanos / 1e9 }");

    Console.WriteLine($"Confidence: {textSegment.Confidence}");

    // Show the result for the first frame in this segment.
    var frame = textSegment.Frames[0];
    var timeOffset = frame.TimeOffset;
    Console.Write("Time offset for the first frame: ");
    Console.WriteLine(timeOffset.Seconds + timeOffset.Nanos / 1e9);
    Console.WriteLine("Rotated Bounding Box Vertices:");
    foreach (var vertex in frame.RotatedBoundingBox.Vertices)
    {
        Console.WriteLine(
            $"\tVertex x: {vertex.X}, Vertex.y: {vertex.Y}");
    }
    return 0;
}

Go

import (
	"fmt"
	"io"
	"log"

	"context"

	"github.com/golang/protobuf/ptypes"

	video "cloud.google.com/go/videointelligence/apiv1"
	videopb "google.golang.org/genproto/googleapis/cloud/videointelligence/v1"
)

// textDetectionGCS analyzes a video and extracts the text that appears in it.
func textDetectionGCS(w io.Writer, gcsURI string) error {
	// gcsURI := "gs://python-docs-samples-tests/video/googlework_short.mp4"

	ctx := context.Background()

	// Creates a client.
	client, err := video.NewClient(ctx)
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	op, err := client.AnnotateVideo(ctx, &videopb.AnnotateVideoRequest{
		InputUri: gcsURI,
		Features: []videopb.Feature{
			videopb.Feature_TEXT_DETECTION,
		},
	})
	if err != nil {
		log.Fatalf("Failed to start annotation job: %v", err)
	}

	resp, err := op.Wait(ctx)
	if err != nil {
		log.Fatalf("Failed to annotate: %v", err)
	}

	// Only one video was processed, so get the first result.
	result := resp.GetAnnotationResults()[0]

	for _, annotation := range result.TextAnnotations {
		fmt.Fprintf(w, "Text: %q\n", annotation.GetText())

		// Get the first text segment.
		segment := annotation.GetSegments()[0]
		start, _ := ptypes.Duration(segment.GetSegment().GetStartTimeOffset())
		end, _ := ptypes.Duration(segment.GetSegment().GetEndTimeOffset())
		fmt.Fprintf(w, "\tSegment: %v to %v\n", start, end)

		fmt.Fprintf(w, "\tConfidence: %f\n", segment.GetConfidence())

		// Show the result for the first frame in this segment.
		frame := segment.GetFrames()[0]
		seconds := float32(frame.GetTimeOffset().GetSeconds())
		nanos := float32(frame.GetTimeOffset().GetNanos())
		fmt.Fprintf(w, "\tTime offset of the first frame: %fs\n", seconds+nanos/1e9)

		fmt.Fprintf(w, "\tRotated bounding box vertices:\n")
		for _, vertex := range frame.GetRotatedBoundingBox().GetVertices() {
			fmt.Fprintf(w, "\t\tVertex x=%f, y=%f\n", vertex.GetX(), vertex.GetY())
		}
	}

	return nil
}

Java

/**
 * Detect Text in a video.
 *
 * @param gcsUri the path to the video file to analyze.
 */
public static VideoAnnotationResults detectTextGcs(String gcsUri) throws Exception {
  try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
    // Create the request
    AnnotateVideoRequest request = AnnotateVideoRequest.newBuilder()
        .setInputUri(gcsUri)
        .addFeatures(Feature.TEXT_DETECTION)
        .build();

    // asynchronously perform text detection on the video
    OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
        client.annotateVideoAsync(request);

    System.out.println("Waiting for operation to complete...");
    // The first result is retrieved because a single video was processed.
    AnnotateVideoResponse response = future.get(300, TimeUnit.SECONDS);
    VideoAnnotationResults results = response.getAnnotationResults(0);

    // Get only the first annotation for demo purposes.
    TextAnnotation annotation = results.getTextAnnotations(0);
    System.out.println("Text: " + annotation.getText());

    // Get the first text segment.
    TextSegment textSegment = annotation.getSegments(0);
    System.out.println("Confidence: " + textSegment.getConfidence());
    // For the text segment, display its time offset
    VideoSegment videoSegment = textSegment.getSegment();
    Duration startTimeOffset = videoSegment.getStartTimeOffset();
    Duration endTimeOffset = videoSegment.getEndTimeOffset();
    // Display the offset times in seconds; dividing nanos by 1e9 converts them to seconds
    System.out.println(String.format("Start time: %.2f",
        startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9));
    System.out.println(String.format("End time: %.2f",
        endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));

    // Show the first result for the first frame in the segment.
    TextFrame textFrame = textSegment.getFrames(0);
    Duration timeOffset = textFrame.getTimeOffset();
    System.out.println(String.format("Time offset for the first frame: %.2f",
        timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));

    // Display the rotated bounding box for where the text is on the frame.
    System.out.println("Rotated Bounding Box Vertices:");
    List<NormalizedVertex> vertices = textFrame.getRotatedBoundingBox().getVerticesList();
    for (NormalizedVertex normalizedVertex : vertices) {
      System.out.println(String.format(
          "\tVertex.x: %.2f, Vertex.y: %.2f",
          normalizedVertex.getX(),
          normalizedVertex.getY()));
    }
    return results;
  }
}

Node.js

// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/video-intelligence').v1p2beta1;
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const gcsUri = 'GCS URI of the video to analyze, e.g. gs://my-bucket/my-video.mp4';

const request = {
  inputUri: gcsUri,
  features: ['TEXT_DETECTION'],
};
// Detects text in a video
const [operation] = await video.annotateVideo(request);
console.log('Waiting for operation to complete...');
const results = await operation.promise();
// Gets annotations for video
const textAnnotations = results[0].annotationResults[0].textAnnotations;
textAnnotations.forEach(textAnnotation => {
  console.log(`Text ${textAnnotation.text} occurs at:`);
  textAnnotation.segments.forEach(segment => {
    const time = segment.segment;
    if (time.startTimeOffset.seconds === undefined) {
      time.startTimeOffset.seconds = 0;
    }
    if (time.startTimeOffset.nanos === undefined) {
      time.startTimeOffset.nanos = 0;
    }
    if (time.endTimeOffset.seconds === undefined) {
      time.endTimeOffset.seconds = 0;
    }
    if (time.endTimeOffset.nanos === undefined) {
      time.endTimeOffset.nanos = 0;
    }
    console.log(
      `\tStart: ${time.startTimeOffset.seconds}` +
        `.${(time.startTimeOffset.nanos / 1e6).toFixed(0)}s`
    );
    console.log(
      `\tEnd: ${time.endTimeOffset.seconds}.` +
        `${(time.endTimeOffset.nanos / 1e6).toFixed(0)}s`
    );
    console.log(`\tConfidence: ${segment.confidence}`);
    segment.frames.forEach(frame => {
      const timeOffset = frame.timeOffset;
      if (timeOffset.seconds === undefined) {
        timeOffset.seconds = 0;
      }
      if (timeOffset.nanos === undefined) {
        timeOffset.nanos = 0;
      }
      console.log(
        `Time offset for the frame: ${timeOffset.seconds}` +
          `.${(timeOffset.nanos / 1e6).toFixed(0)}s`
      );
      console.log(`Rotated Bounding Box Vertices:`);
      frame.rotatedBoundingBox.vertices.forEach(vertex => {
        console.log(`Vertex.x:${vertex.x}, Vertex.y:${vertex.y}`);
      });
    });
  });
});

Python

"""Detect text in a video stored on GCS."""
from google.cloud import videointelligence_v1p2beta1 as videointelligence
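# input_uri = 'gs://YOUR_BUCKET_ID/path/to/your/video.mp4'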

video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.TEXT_DETECTION]

operation = video_client.annotate_video(
    input_uri=input_uri,
    features=features)

print('\nProcessing video for text detection.')
result = operation.result(timeout=300)

# The first result is retrieved because a single video was processed.
annotation_result = result.annotation_results[0]

# Get only the first result
text_annotation = annotation_result.text_annotations[0]
print('\nText: {}'.format(text_annotation.text))

# Get the first text segment
text_segment = text_annotation.segments[0]
start_time = text_segment.segment.start_time_offset
end_time = text_segment.segment.end_time_offset
print('start_time: {}, end_time: {}'.format(
    start_time.seconds + start_time.nanos * 1e-9,
    end_time.seconds + end_time.nanos * 1e-9))

print('Confidence: {}'.format(text_segment.confidence))

# Show the result for the first frame in this segment.
frame = text_segment.frames[0]
time_offset = frame.time_offset
print('Time offset for the first frame: {}'.format(
    time_offset.seconds + time_offset.nanos * 1e-9))
print('Rotated Bounding Box Vertices:')
for vertex in frame.rotated_bounding_box.vertices:
    print('\tVertex.x: {}, Vertex.y: {}'.format(vertex.x, vertex.y))

Ruby

# path = "Path to a video file on Google Cloud Storage: gs://bucket/video.mp4"

require "google/cloud/video_intelligence"

video = Google::Cloud::VideoIntelligence.new

# Register a callback during the method call
operation = video.annotate_video input_uri: path, features: [:TEXT_DETECTION] do |operation|
  raise operation.error.message if operation.error?
  puts "Finished Processing."

  text_annotations = operation.results.annotation_results.first.text_annotations

  text_annotations.each do |text_annotation|
    puts "Text: #{text_annotation.text}"

    # Print information about the first segment of the text.
    text_segment = text_annotation.segments.first
    start_time_offset = text_segment.segment.start_time_offset
    end_time_offset = text_segment.segment.end_time_offset
    start_time = (start_time_offset.seconds +
                   start_time_offset.nanos / 1e9)
    end_time =   (end_time_offset.seconds +
                   end_time_offset.nanos / 1e9)
    puts "start_time: #{start_time}, end_time: #{end_time}"

    puts "Confidence: #{text_segment.confidence}"

    # Print information about the first frame of the segment.
    frame = text_segment.frames.first
    time_offset = (frame.time_offset.seconds +
                    frame.time_offset.nanos / 1e9)
    puts "Time offset for the first frame: #{time_offset}"

    puts "Rotated bounding box vertices:"
    frame.rotated_bounding_box.vertices.each do |vertex|
      puts "\tVertex.x: #{vertex.x}, Vertex.y: #{vertex.y}"
    end
  end
end

puts "Processing video for text detection:"
operation.wait_until_done!

Request text detection for a local video file

The following examples demonstrate text detection on a file stored locally.

Protocol

Refer to the videos:annotate API endpoint for complete details.

To perform text detection, base64-encode your video, then include the base64-encoded output in the input_content field of the POST request you send to the v1p2beta1/videos:annotate endpoint. For more information about base64 encoding, see Base64 Encoding.
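
As an illustration, the following Python sketch produces the base64 string for the input_content field; the file name video.mp4 is an assumption and should be replaced with your own file:

import base64

# Read the local video file and base64-encode its bytes for input_content.
with open("video.mp4", "rb") as video_file:
    input_content = base64.b64encode(video_file.read()).decode("utf-8")

# Print a truncated preview of the encoded payload.
print(input_content[:60] + "...")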

The following example uses the gcloud auth application-default print-access-token command to obtain an access token for a service account set up for the project using the Google Cloud Platform Cloud SDK. For instructions on installing the Cloud SDK and setting up a project with a service account, see the Quickstart.

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     --data "{
      'input_content': 'UklGRg41AwBBVkkgTElTVAwBAABoZHJsYXZpaDgAAAA1ggAAxPMBAAAAAAAQCAA...',
      'features': ['TEXT_DETECTION'],
    }" "https://videointelligence.googleapis.com/v1/videos:annotate"

If the Video Intelligence annotation request succeeds, the response contains a single name field:

{
  "name": "us-west1.12542045171217819670"
}

This name represents a long-running operation, which can be queried using the v1.operations API.

To retrieve the operation results, replace your-operation-name in the command below with the name value from the response you just received:

curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer  $(gcloud auth application-default print-access-token)" \
"https://videointelligence.googleapis.com/v1/operations/your-operation-name"

Once the operation has completed, the response reports done: true, and you should receive a list of text detection annotations.

Text detection annotations are returned as a textAnnotations list:

"textAnnotations": [
  {
    "text": "Hair Salon",
    "segments": [
      {
        "segment": {
          "startTimeOffset": "0.833333s",
          "endTimeOffset": "2.291666s"
        },
        "confidence": 0.99438506,
        "frames": [
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.7015625,
                  "y": 0.59583336
                },
                {
                  "x": 0.7984375,
                  "y": 0.59583336
                },
                {
                  "x": 0.7984375,
                  "y": 0.64166665
                },
                {
                  "x": 0.7015625,
                  "y": 0.64166665
                }
              ]
            },
            "timeOffset": "0.833333s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.041666s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.250s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6319444
                },
                {
                  "x": 0.70234376,
                  "y": 0.6319444
                }
              ]
            },
            "timeOffset": "1.458333s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.666666s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "1.875s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "2.083333s"
          },
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.70234376,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6
                },
                {
                  "x": 0.7992188,
                  "y": 0.6333333
                },
                {
                  "x": 0.70234376,
                  "y": 0.6333333
                }
              ]
            },
            "timeOffset": "2.291666s"
          }
        ]
      }
    ]
  },
  {
    "text": "\"Sure, give me one second.\"",
    "segments": [
      {
        "segment": {
          "startTimeOffset": "10.625s",
          "endTimeOffset": "13.333333s"
        },
        "confidence": 0.98716676,
        "frames": [
          {
            "rotatedBoundingBox": {
              "vertices": [
                {
                  "x": 0.60859376,
                  "y": 0.59583336
                },
                {
                  "x": 0.8952959,
                  "y": 0.5903528
                },
                {
                  "x": 0.89560676,
                  "y": 0.6417387
                },
                {
                  "x": 0.60890454,
                  "y": 0.64721924
                }
              ]
            },
            "timeOffset": "10.625s"
          },
  ...

    ]
  }

C#

public static object DetectText(string filePath)
{
    var client = VideoIntelligenceServiceClient.Create();
    var request = new AnnotateVideoRequest
    {
        InputContent = Google.Protobuf.ByteString.CopyFrom(File.ReadAllBytes(filePath)),
        Features = { Feature.TextDetection },
    };

    Console.WriteLine("\nProcessing video for text detection.");
    var op = client.AnnotateVideo(request).PollUntilCompleted();

    // Retrieve the first result because only one video was processed.
    var annotationResults = op.Result.AnnotationResults[0];

    // Get only the first result.
    var textAnnotation = annotationResults.TextAnnotations[0];
    Console.WriteLine($"\nText: {textAnnotation.Text}");

    // Get the first text segment.
    var textSegment = textAnnotation.Segments[0];
    var startTime = textSegment.Segment.StartTimeOffset;
    var endTime = textSegment.Segment.EndTimeOffset;
    Console.Write(
        $"Start time: {startTime.Seconds + startTime.Nanos / 1e9 }, ");
    Console.WriteLine(
        $"End time: {endTime.Seconds + endTime.Nanos / 1e9 }");

    Console.WriteLine($"Confidence: {textSegment.Confidence}");

    // Show the result for the first frame in this segment.
    var frame = textSegment.Frames[0];
    var timeOffset = frame.TimeOffset;
    Console.Write("Time offset for the first frame: ");
    Console.WriteLine(timeOffset.Seconds + timeOffset.Nanos / 1e9);
    Console.WriteLine("Rotated Bounding Box Vertices:");
    foreach (var vertex in frame.RotatedBoundingBox.Vertices)
    {
        Console.WriteLine(
            $"\tVertex x: {vertex.X}, Vertex.y: {vertex.Y}");
    }
    return 0;
}

Go

import (
	"fmt"
	"io"
	"io/ioutil"
	"log"

	"context"

	"github.com/golang/protobuf/ptypes"

	video "cloud.google.com/go/videointelligence/apiv1"
	videopb "google.golang.org/genproto/googleapis/cloud/videointelligence/v1"
)

// textDetection analyzes a video and extracts the text that appears in it.
func textDetection(w io.Writer, filename string) error {
	// filename := "resources/googlework_short.mp4"

	ctx := context.Background()

	// Creates a client.
	client, err := video.NewClient(ctx)
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	fileBytes, err := ioutil.ReadFile(filename)
	if err != nil {
		return err
	}

	op, err := client.AnnotateVideo(ctx, &videopb.AnnotateVideoRequest{
		InputContent: fileBytes,
		Features: []videopb.Feature{
			videopb.Feature_TEXT_DETECTION,
		},
	})
	if err != nil {
		log.Fatalf("Failed to start annotation job: %v", err)
	}

	resp, err := op.Wait(ctx)
	if err != nil {
		log.Fatalf("Failed to annotate: %v", err)
	}

	// Only one video was processed, so get the first result.
	result := resp.GetAnnotationResults()[0]

	for _, annotation := range result.TextAnnotations {
		fmt.Fprintf(w, "Text: %q\n", annotation.GetText())

		// Get the first text segment.
		segment := annotation.GetSegments()[0]
		start, _ := ptypes.Duration(segment.GetSegment().GetStartTimeOffset())
		end, _ := ptypes.Duration(segment.GetSegment().GetEndTimeOffset())
		fmt.Fprintf(w, "\tSegment: %v to %v\n", start, end)

		fmt.Fprintf(w, "\tConfidence: %f\n", segment.GetConfidence())

		// Show the result for the first frame in this segment.
		frame := segment.GetFrames()[0]
		seconds := float32(frame.GetTimeOffset().GetSeconds())
		nanos := float32(frame.GetTimeOffset().GetNanos())
		fmt.Fprintf(w, "\tTime offset of the first frame: %fs\n", seconds+nanos/1e9)

		fmt.Fprintf(w, "\tRotated bounding box vertices:\n")
		for _, vertex := range frame.GetRotatedBoundingBox().GetVertices() {
			fmt.Fprintf(w, "\t\tVertex x=%f, y=%f\n", vertex.GetX(), vertex.GetY())
		}
	}

	return nil
}

Java

/**
 * Detect text in a video.
 *
 * @param filePath the path to the video file to analyze.
 */
public static VideoAnnotationResults detectText(String filePath) throws Exception {
  try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
    // Read file
    Path path = Paths.get(filePath);
    byte[] data = Files.readAllBytes(path);

    // Create the request
    AnnotateVideoRequest request = AnnotateVideoRequest.newBuilder()
        .setInputContent(ByteString.copyFrom(data))
        .addFeatures(Feature.TEXT_DETECTION)
        .build();

    // asynchronously perform text detection on the video
    OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
        client.annotateVideoAsync(request);

    System.out.println("Waiting for operation to complete...");
    // The first result is retrieved because a single video was processed.
    AnnotateVideoResponse response = future.get(300, TimeUnit.SECONDS);
    VideoAnnotationResults results = response.getAnnotationResults(0);

    // Get only the first annotation for demo purposes.
    TextAnnotation annotation = results.getTextAnnotations(0);
    System.out.println("Text: " + annotation.getText());

    // Get the first text segment.
    TextSegment textSegment = annotation.getSegments(0);
    System.out.println("Confidence: " + textSegment.getConfidence());
    // For the text segment, display its time offset
    VideoSegment videoSegment = textSegment.getSegment();
    Duration startTimeOffset = videoSegment.getStartTimeOffset();
    Duration endTimeOffset = videoSegment.getEndTimeOffset();
    // Display the offset times in seconds; dividing nanos by 1e9 converts them to seconds
    System.out.println(String.format("Start time: %.2f",
        startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9));
    System.out.println(String.format("End time: %.2f",
        endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));

    // Show the first result for the first frame in the segment.
    TextFrame textFrame = textSegment.getFrames(0);
    Duration timeOffset = textFrame.getTimeOffset();
    System.out.println(String.format("Time offset for the first frame: %.2f",
        timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));

    // Display the rotated bounding box for where the text is on the frame.
    System.out.println("Rotated Bounding Box Vertices:");
    List<NormalizedVertex> vertices = textFrame.getRotatedBoundingBox().getVerticesList();
    for (NormalizedVertex normalizedVertex : vertices) {
      System.out.println(String.format(
          "\tVertex.x: %.2f, Vertex.y: %.2f",
          normalizedVertex.getX(),
          normalizedVertex.getY()));
    }
    return results;
  }
}

Node.js

// Imports the Google Cloud Video Intelligence library + Node's fs library
const Video = require('@google-cloud/video-intelligence').v1p2beta1;
const fs = require('fs');
const util = require('util');
// Creates a client
const video = new Video.VideoIntelligenceServiceClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const path = 'Local file to analyze, e.g. ./my-file.mp4';

// Reads a local video file and converts it to base64
const file = await util.promisify(fs.readFile)(path);
const inputContent = file.toString('base64');

const request = {
  inputContent: inputContent,
  features: ['TEXT_DETECTION'],
};
// Detects text in a video
const [operation] = await video.annotateVideo(request);
console.log('Waiting for operation to complete...');
const results = await operation.promise();

// Gets annotations for video
const textAnnotations = results[0].annotationResults[0].textAnnotations;
textAnnotations.forEach(textAnnotation => {
  console.log(`Text ${textAnnotation.text} occurs at:`);
  textAnnotation.segments.forEach(segment => {
    const time = segment.segment;
    if (time.startTimeOffset.seconds === undefined) {
      time.startTimeOffset.seconds = 0;
    }
    if (time.startTimeOffset.nanos === undefined) {
      time.startTimeOffset.nanos = 0;
    }
    if (time.endTimeOffset.seconds === undefined) {
      time.endTimeOffset.seconds = 0;
    }
    if (time.endTimeOffset.nanos === undefined) {
      time.endTimeOffset.nanos = 0;
    }
    console.log(
      `\tStart: ${time.startTimeOffset.seconds}` +
        `.${(time.startTimeOffset.nanos / 1e6).toFixed(0)}s`
    );
    console.log(
      `\tEnd: ${time.endTimeOffset.seconds}.` +
        `${(time.endTimeOffset.nanos / 1e6).toFixed(0)}s`
    );
    console.log(`\tConfidence: ${segment.confidence}`);
    segment.frames.forEach(frame => {
      const timeOffset = frame.timeOffset;
      if (timeOffset.seconds === undefined) {
        timeOffset.seconds = 0;
      }
      if (timeOffset.nanos === undefined) {
        timeOffset.nanos = 0;
      }
      console.log(
        `Time offset for the frame: ${timeOffset.seconds}` +
          `.${(timeOffset.nanos / 1e6).toFixed(0)}s`
      );
      console.log(`Rotated Bounding Box Vertices:`);
      frame.rotatedBoundingBox.vertices.forEach(vertex => {
        console.log(`Vertex.x:${vertex.x}, Vertex.y:${vertex.y}`);
      });
    });
  });
});

Python

"""Detect text in a local video."""
import io

from google.cloud import videointelligence_v1p2beta1 as videointelligence
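# path = 'path/to/your/video.mp4'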

video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.TEXT_DETECTION]
video_context = videointelligence.types.VideoContext()

with io.open(path, 'rb') as file:
    input_content = file.read()

operation = video_client.annotate_video(
    input_content=input_content,  # the bytes of the video file
    features=features,
    video_context=video_context)

print('\nProcessing video for text detection.')
result = operation.result(timeout=300)

# The first result is retrieved because a single video was processed.
annotation_result = result.annotation_results[0]

# Get only the first result
text_annotation = annotation_result.text_annotations[0]
print('\nText: {}'.format(text_annotation.text))

# Get the first text segment
text_segment = text_annotation.segments[0]
start_time = text_segment.segment.start_time_offset
end_time = text_segment.segment.end_time_offset
print('start_time: {}, end_time: {}'.format(
    start_time.seconds + start_time.nanos * 1e-9,
    end_time.seconds + end_time.nanos * 1e-9))

print('Confidence: {}'.format(text_segment.confidence))

# Show the result for the first frame in this segment.
frame = text_segment.frames[0]
time_offset = frame.time_offset
print('Time offset for the first frame: {}'.format(
    time_offset.seconds + time_offset.nanos * 1e-9))
print('Rotated Bounding Box Vertices:')
for vertex in frame.rotated_bounding_box.vertices:
    print('\tVertex.x: {}, Vertex.y: {}'.format(vertex.x, vertex.y))

Ruby

# "Path to a local video file: path/to/file.mp4"

require "google/cloud/video_intelligence"

video = Google::Cloud::VideoIntelligence.new

video_contents = File.binread path

# Register a callback during the method call
operation = video.annotate_video input_content: video_contents, features: [:TEXT_DETECTION] do |operation|
  raise operation.error.message if operation.error?
  puts "Finished Processing."

  text_annotations = operation.results.annotation_results.first.text_annotations

  text_annotations.each do |text_annotation|
    puts "Text: #{text_annotation.text}"

    # Print information about the first segment of the text.
    text_segment = text_annotation.segments.first
    start_time_offset = text_segment.segment.start_time_offset
    end_time_offset = text_segment.segment.end_time_offset
    start_time = (start_time_offset.seconds +
                   start_time_offset.nanos / 1e9)
    end_time =   (end_time_offset.seconds +
                   end_time_offset.nanos / 1e9)
    puts "start_time: #{start_time}, end_time: #{end_time}"

    puts "Confidence: #{text_segment.confidence}"

    # Print information about the first frame of the segment.
    frame = text_segment.frames.first
    time_offset = (frame.time_offset.seconds +
                    frame.time_offset.nanos / 1e9)
    puts "Time offset for the first frame: #{time_offset}"

    puts "Rotated bounding box vertices:"
    frame.rotated_bounding_box.vertices.each do |vertex|
      puts "\tVertex.x: #{vertex.x}, Vertex.y: #{vertex.y}"
    end
  end
end

puts "Processing video for text detection:"
operation.wait_until_done!