Getting word timestamps

This page describes how to get time offset values for audio transcribed by Speech-to-Text.

Speech-to-Text can include time offset (timestamp) values in the response text of your recognition request. Time offset values show the beginning and end of each spoken word recognized in the supplied audio. A time offset value represents the amount of time that has elapsed since the beginning of the audio, in increments of 100 ms. For example, a word with a startTime of "1.400s" begins 1.4 seconds into the audio.

Time offsets are especially useful for analyzing longer audio files, where you may need to search the recognized text for a particular word and locate (seek to) it in the original audio (a minimal lookup sketch follows below). Speech-to-Text supports time offsets for all of its speech recognition methods: speech:recognize, speech:longrunningrecognize, and Streaming.

Time offset values are included only for the first alternative provided in the recognition response.

To include time offsets in the results of your request, set the enableWordTimeOffsets parameter to true in your request configuration.
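For the search-and-seek use case described above, the following minimal Python sketch scans the words list of the first alternative in a JSON response (the same shape as the responses shown later on this page) and returns the offsets for a given word. The to_seconds and find_word helpers and the sample data are illustrative only, not part of the API.

# Minimal sketch: look up a word in the "words" list of a recognition
# response and return its start/end offsets so a player can seek to it.
# The helpers and sample data below are illustrative, not part of the API.

def to_seconds(offset):
    """Convert an offset string such as "1.400s" or "3s" to a float."""
    return float(offset.rstrip("s"))

def find_word(words, target):
    """Return (start, end) in seconds for the first occurrence of target."""
    for info in words:
        if info["word"].lower() == target.lower():
            return to_seconds(info["startTime"]), to_seconds(info["endTime"])
    return None

# "words" as returned in the first alternative of the responses below.
words = [
    {"startTime": "1.400s", "endTime": "1.800s", "word": "okay"},
    {"startTime": "1.800s", "endTime": "2.300s", "word": "so"},
]

print(find_word(words, "so"))  # -> (1.8, 2.3)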

Protocol

Refer to the speech:longrunningrecognize API endpoint for complete details.

To perform asynchronous speech recognition, make a POST request and provide the appropriate request body. The following shows an example POST request using curl. The example uses the access token for a service account set up for the project using the Google Cloud SDK. For instructions on installing the Cloud SDK, setting up a project with a service account, and obtaining an access token, see the quickstart page.

curl -X POST \
     -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
     -H "Content-Type: application/json; charset=utf-8" \
     --data "{
  'config': {
    'language_code': 'en-US',
    'enableWordTimeOffsets': true
  },
  'audio':{
    'uri':'gs://gcs-test-data/vr.flac'
  }
}" "https://speech.googleapis.com/v1/speech:longrunningrecognize"

For information on configuring the request body, see the RecognitionConfig and RecognitionAudio reference documentation.

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format:

{
  "name": "7612202767953098924"
}

where name is the name of the long-running operation created for the request.

Processing the vr.flac file takes approximately 30 seconds. To retrieve the result of the operation, make a GET request to the https://speech.googleapis.com/v1/operations/ endpoint. Replace your-operation-name with the name value received from your longrunningrecognize request.

curl -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
     -H "Content-Type: application/json; charset=utf-8"
     "https://speech.googleapis.com/v1/operations/your-operation-name"

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format:

{
  "name": "7612202767953098924",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeMetadata",
    "progressPercent": 100,
    "startTime": "2017-07-20T16:36:55.033650Z",
    "lastUpdateTime": "2017-07-20T16:37:17.158630Z"
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeResponse",
    "results": [
      {
        "alternatives": [
          {
            "transcript": "okay so what am I doing here...(etc)...",
            "confidence": 0.96596134,
            "words": [
              {
                "startTime": "1.400s",
                "endTime": "1.800s",
                "word": "okay"
              },
              {
                "startTime": "1.800s",
                "endTime": "2.300s",
                "word": "so"
              },
              {
                "startTime": "2.300s",
                "endTime": "2.400s",
                "word": "what"
              },
              {
                "startTime": "2.400s",
                "endTime": "2.600s",
                "word": "am"
              },
              {
                "startTime": "2.600s",
                "endTime": "2.600s",
                "word": "I"
              },
              {
                "startTime": "2.600s",
                "endTime": "2.700s",
                "word": "doing"
              },
              {
                "startTime": "2.700s",
                "endTime": "3s",
                "word": "here"
              },
              {
                "startTime": "3s",
                "endTime": "3.300s",
                "word": "why"
              },
              {
                "startTime": "3.300s",
                "endTime": "3.400s",
                "word": "am"
              },
              {
                "startTime": "3.400s",
                "endTime": "3.500s",
                "word": "I"
              },
              {
                "startTime": "3.500s",
                "endTime": "3.500s",
                "word": "here"
              },
              ...
            ]
          }
        ]
      },
      {
        "alternatives": [
          {
            "transcript": "so so what am I doing here...(etc)...",
            "confidence": 0.9642093,
          }
        ]
      }
    ]
  }
}

If the operation has not completed, you can poll the endpoint by repeatedly making the GET request until the done property of the response is true.
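If you prefer to poll from code rather than rerunning curl by hand, the sketch below repeats the GET request until done is true. It assumes the third-party requests package and an access token obtained the same way as in the curl examples; the operation name shown is a placeholder.

# Sketch: poll the operations endpoint until the long-running operation
# completes. Assumes the "requests" package and a valid access token
# (e.g. from gcloud auth application-default print-access-token).
import time
import requests

ACCESS_TOKEN = "..."                      # replace with a real access token
OPERATION_NAME = "7612202767953098924"    # the "name" returned earlier

url = f"https://speech.googleapis.com/v1/operations/{OPERATION_NAME}"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

while True:
    operation = requests.get(url, headers=headers).json()
    if operation.get("done"):
        break
    time.sleep(5)  # wait before polling again

# On success, the transcript and word offsets are under response.results.
words = operation["response"]["results"][0]["alternatives"][0]["words"]
print(words[0])  # e.g. {'startTime': '1.400s', 'endTime': '1.800s', 'word': 'okay'}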

gcloud

Refer to the recognize-long-running command for complete details.

To perform asynchronous speech recognition, use the gcloud command-line tool and provide the path of a local file or a Google Cloud Storage URL. Include the --include-word-time-offsets flag.

gcloud ml speech recognize-long-running \
    'gs://cloud-samples-tests/speech/brooklyn.flac' \
    --language-code='en-US' --include-word-time-offsets --async

If the request is successful, the server returns the ID of the long-running operation in JSON format.

{
  "name": OPERATION_ID
}

You can then get information about the operation by running the following command:

gcloud ml speech operations describe OPERATION_ID

You can also poll the operation until it completes by running the following command:

gcloud ml speech operations wait OPERATION_ID

After the operation completes, the server returns a transcription of the audio in JSON format.

{
  "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeResponse",
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.9840146,
          "transcript": "how old is the Brooklyn Bridge",
          "words": [
            {
              "endTime": "0.300s",
              "startTime": "0s",
              "word": "how"
            },
            {
              "endTime": "0.600s",
              "startTime": "0.300s",
              "word": "old"
            },
            {
              "endTime": "0.800s",
              "startTime": "0.600s",
              "word": "is"
            },
            {
              "endTime": "0.900s",
              "startTime": "0.800s",
              "word": "the"
            },
            {
              "endTime": "1.100s",
              "startTime": "0.900s",
              "word": "Brooklyn"
            },
            {
              "endTime": "1.500s",
              "startTime": "1.100s",
              "word": "Bridge"
            }
          ]
        }
      ]
    }
  ]
}

C#

static object AsyncRecognizeGcsWords(string storageUri)
{
    var speech = SpeechClient.Create();
    var longOperation = speech.LongRunningRecognize(new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "en",
        EnableWordTimeOffsets = true,
    }, RecognitionAudio.FromStorageUri(storageUri));
    longOperation = longOperation.PollUntilCompleted();
    var response = longOperation.Result;
    foreach (var result in response.Results)
    {
        foreach (var alternative in result.Alternatives)
        {
            Console.WriteLine($"Transcript: { alternative.Transcript}");
            Console.WriteLine("Word details:");
            Console.WriteLine($" Word count:{alternative.Words.Count}");
            foreach (var item in alternative.Words)
            {
                Console.WriteLine($"  {item.Word}");
                Console.WriteLine($"    WordStartTime: {item.StartTime}");
                Console.WriteLine($"    WordEndTime: {item.EndTime}");
            }
        }
    }
    return 0;
}

Go


func asyncWords(client *speech.Client, out io.Writer, gcsURI string) error {
	ctx := context.Background()

	// Send the contents of the audio file with the encoding and
	// sample rate information to be transcribed.
	req := &speechpb.LongRunningRecognizeRequest{
		Config: &speechpb.RecognitionConfig{
			Encoding:              speechpb.RecognitionConfig_LINEAR16,
			SampleRateHertz:       16000,
			LanguageCode:          "en-US",
			EnableWordTimeOffsets: true,
		},
		Audio: &speechpb.RecognitionAudio{
			AudioSource: &speechpb.RecognitionAudio_Uri{Uri: gcsURI},
		},
	}

	op, err := client.LongRunningRecognize(ctx, req)
	if err != nil {
		return err
	}
	resp, err := op.Wait(ctx)
	if err != nil {
		return err
	}

	// Print the results.
	for _, result := range resp.Results {
		for _, alt := range result.Alternatives {
			fmt.Fprintf(out, "\"%v\" (confidence=%.3f)\n", alt.Transcript, alt.Confidence)
			for _, w := range alt.Words {
				fmt.Fprintf(out,
					"Word: \"%v\" (startTime=%3f, endTime=%3f)\n",
					w.Word,
					float64(w.StartTime.Seconds)+float64(w.StartTime.Nanos)*1e-9,
					float64(w.EndTime.Seconds)+float64(w.EndTime.Nanos)*1e-9,
				)
			}
		}
	}
	return nil
}

Java

/**
 * Performs non-blocking speech recognition on remote FLAC file and prints the transcription as
 * well as word time offsets.
 *
 * @param gcsUri the path to the remote FLAC audio file to transcribe.
 */
public static void asyncRecognizeWords(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  try (SpeechClient speech = SpeechClient.create()) {

    // Configure remote file request for FLAC
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.FLAC)
            .setLanguageCode("en-US")
            .setSampleRateHertz(16000)
            .setEnableWordTimeOffsets(true)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();

    // Use non-blocking call for getting file transcription
    OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
        speech.longRunningRecognizeAsync(config, audio);
    while (!response.isDone()) {
      System.out.println("Waiting for response...");
      Thread.sleep(10000);
    }

    List<SpeechRecognitionResult> results = response.get().getResultsList();

    for (SpeechRecognitionResult result : results) {
      // There can be several alternative transcripts for a given chunk of speech. Just use the
      // first (most likely) one here.
      SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
      System.out.printf("Transcription: %s\n", alternative.getTranscript());
      for (WordInfo wordInfo : alternative.getWordsList()) {
        System.out.println(wordInfo.getWord());
        System.out.printf(
            "\t%s.%s sec - %s.%s sec\n",
            wordInfo.getStartTime().getSeconds(),
            wordInfo.getStartTime().getNanos() / 100000000,
            wordInfo.getEndTime().getSeconds(),
            wordInfo.getEndTime().getNanos() / 100000000);
      }
    }
  }
}

Node.js

// Imports the Google Cloud client library
const speech = require('@google-cloud/speech');

// Creates a client
const client = new speech.SpeechClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const gcsUri = 'gs://my-bucket/audio.raw';
// const encoding = 'Encoding of the audio file, e.g. LINEAR16';
// const sampleRateHertz = 16000;
// const languageCode = 'BCP-47 language code, e.g. en-US';

const config = {
  enableWordTimeOffsets: true,
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};

const audio = {
  uri: gcsUri,
};

const request = {
  config: config,
  audio: audio,
};

// Detects speech in the audio file. This creates a recognition job that you
// can wait for now, or get its result later.
const [operation] = await client.longRunningRecognize(request);

// Get a Promise representation of the final result of the job
const [response] = await operation.promise();
response.results.forEach(result => {
  console.log(`Transcription: ${result.alternatives[0].transcript}`);
  result.alternatives[0].words.forEach(wordInfo => {
    // NOTE: If you have a time offset exceeding 2^32 seconds, use the
    // wordInfo.{x}Time.seconds.high to calculate seconds.
    const startSecs =
      `${wordInfo.startTime.seconds}` +
      '.' +
      wordInfo.startTime.nanos / 100000000;
    const endSecs =
      `${wordInfo.endTime.seconds}` +
      '.' +
      wordInfo.endTime.nanos / 100000000;
    console.log(`Word: ${wordInfo.word}`);
    console.log(`\t ${startSecs} secs - ${endSecs} secs`);
  });
});

PHP

use Google\Cloud\Speech\V1\SpeechClient;
use Google\Cloud\Speech\V1\RecognitionAudio;
use Google\Cloud\Speech\V1\RecognitionConfig;
use Google\Cloud\Speech\V1\RecognitionConfig\AudioEncoding;

/** Uncomment and populate these variables in your code */
// $audioFile = 'path to an audio file';

// change these variables if necessary
$encoding = AudioEncoding::LINEAR16;
$sampleRateHertz = 32000;
$languageCode = 'en-US';

if (!extension_loaded('grpc')) {
    throw new \Exception('Install the grpc extension (pecl install grpc)');
}

// When true, time offsets for every word will be included in the response.
$enableWordTimeOffsets = true;

// get contents of a file into a string
$content = file_get_contents($audioFile);

// set string as audio content
$audio = (new RecognitionAudio())
    ->setContent($content);

// set config
$config = (new RecognitionConfig())
    ->setEncoding($encoding)
    ->setSampleRateHertz($sampleRateHertz)
    ->setLanguageCode($languageCode)
    ->setEnableWordTimeOffsets($enableWordTimeOffsets);

// create the speech client
$client = new SpeechClient();

// create the asynchronous recognize operation
$operation = $client->longRunningRecognize($config, $audio);
$operation->pollUntilComplete();

if ($operation->operationSucceeded()) {
    $response = $operation->getResult();

    // each result is for a consecutive portion of the audio. iterate
    // through them to get the transcripts for the entire audio file.
    foreach ($response->getResults() as $result) {
        $alternatives = $result->getAlternatives();
        $mostLikely = $alternatives[0];
        $transcript = $mostLikely->getTranscript();
        $confidence = $mostLikely->getConfidence();
        printf('Transcript: %s' . PHP_EOL, $transcript);
        printf('Confidence: %s' . PHP_EOL, $confidence);
        foreach ($mostLikely->getWords() as $wordInfo) {
            $startTime = $wordInfo->getStartTime();
            $endTime = $wordInfo->getEndTime();
            printf('  Word: %s (start: %s, end: %s)' . PHP_EOL,
                $wordInfo->getWord(),
                $startTime->serializeToJsonString(),
                $endTime->serializeToJsonString());
        }
    }
} else {
    print_r($operation->getError());
}

$client->close();

Python

def transcribe_gcs_with_word_time_offsets(gcs_uri):
    """Transcribe the given audio file asynchronously and output the word time
    offsets."""
    from google.cloud import speech

    client = speech.SpeechClient()

    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_word_time_offsets=True,
    )

    operation = client.long_running_recognize(config=config, audio=audio)

    print("Waiting for operation to complete...")
    result = operation.result(timeout=90)

    for result in result.results:
        alternative = result.alternatives[0]
        print("Transcript: {}".format(alternative.transcript))
        print("Confidence: {}".format(alternative.confidence))

        for word_info in alternative.words:
            word = word_info.word
            start_time = word_info.start_time
            end_time = word_info.end_time

            print(
                f"Word: {word}, start_time: {start_time.total_seconds()}, end_time: {end_time.total_seconds()}"
            )

Ruby

# storage_path = "Path to file in Cloud Storage, e.g. gs://bucket/audio.raw"

require "google/cloud/speech"

speech = Google::Cloud::Speech.speech

config = { encoding:                 :LINEAR16,
           sample_rate_hertz:        16_000,
           language_code:            "en-US",
           enable_word_time_offsets: true }
audio  = { uri: storage_path }

operation = speech.long_running_recognize config: config, audio: audio

puts "Operation started"

operation.wait_until_done!

raise operation.results.message if operation.error?

results = operation.response.results

results.first.alternatives.each do |alternative|
  puts "Transcription: #{alternative.transcript}"

  alternative.words.each do |word|
    start_time = word.start_time.seconds + word.start_time.nanos / 1_000_000_000.0
    end_time   = word.end_time.seconds + word.end_time.nanos / 1_000_000_000.0

    puts "Word: #{word.word} #{start_time} #{end_time}"
  end
end