Separating different speakers in an audio recording

This page describes how to get labels for different speakers in audio data transcribed by Speech-to-Text.

Sometimes, audio data contains samples of more than one person talking. For example, audio from a telephone call usually features voices from two or more people. A transcription of the call ideally includes who speaks at which times.

Speaker diarization

Speech-to-Text can recognize multiple speakers in the same audio clip. When you send an audio transcription request to Speech-to-Text, you can include a parameter telling Speech-to-Text to identify the different speakers in the audio sample. This feature, called speaker diarization, detects when speakers change and labels by number the individual voices detected in the audio.

When you enable speaker diarization in your transcription request, Speech-to-Text attempts to distinguish the different voices included in the audio sample. The transcription result tags each word with a number assigned to individual speakers. Words spoken by the same speaker bear the same number. A transcription result can include numbers up to as many speakers as Speech-to-Text can uniquely identify in the audio sample.

When you use speaker diarization, Speech-to-Text produces a running aggregate of all the results provided in the transcription. Each result includes the words from the previous result. Thus, the words array in the final result provides the complete, diarized results of the transcription.
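As a concrete illustration of this aggregation, the complete diarized word list can be read from the final result alone. The sketch below assumes a response already parsed into Python dicts shaped like the REST JSON shown later on this page; the sample data is hand-written and illustrative, not real API output.

```python
def diarized_words(response):
    """Return the complete diarized word list from a parsed response.

    Each result's words list aggregates all words so far, so only
    the final result needs to be read.
    """
    return response["results"][-1]["alternatives"][0]["words"]

# Illustrative, hand-written response fragment (not real API output).
response = {
    "results": [
        {"alternatives": [{"words": [{"word": "hi", "speakerTag": 2}]}]},
        {"alternatives": [{"words": [
            {"word": "hi", "speakerTag": 2},
            {"word": "certainly", "speakerTag": 1},
        ]}]},
    ]
}

for info in diarized_words(response):
    print(info["word"], info["speakerTag"])
```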

Enabling speaker diarization in a request

To enable speaker diarization, you need to set the enableSpeakerDiarization field to true in the RecognitionConfig parameters for the request. To improve your transcription results, you should also specify the number of speakers present in the audio clip by setting the diarizationSpeakerCount field in the RecognitionConfig parameters. Speech-to-Text uses a default value if you do not provide a value for diarizationSpeakerCount.
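Using the REST field names described above, the diarization-related portion of a `RecognitionConfig` for a two-person conversation might look like the following fragment (a sketch; the surrounding fields are whatever your request already uses):

```
{
  "config": {
    "enableSpeakerDiarization": true,
    "diarizationSpeakerCount": 2
  }
}
```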

Cloud Speech-to-Text supports speaker diarization for all speech recognition methods: speech:recognize, speech:longrunningrecognize, and streaming.

The following code snippets demonstrate how to enable speaker diarization in a transcription request to Speech-to-Text.


Refer to the speech:recognize API endpoint for complete details.

To perform synchronous speech recognition, make a POST request and provide the appropriate request body. The following shows an example of a POST request using curl. The example uses the access token for a service account set up for the project using the Google Cloud SDK. For instructions on installing the Cloud SDK, setting up a project with a service account, and obtaining an access token, see the quickstart.

curl -s -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    --data '{
    "config": {
        "encoding": "LINEAR16",
        "languageCode": "en-US",
        "enableSpeakerDiarization": true,
        "diarizationSpeakerCount": 2,
        "model": "phone_call"
    },
    "audio": {
        "uri": "gs://cloud-samples-tests/speech/commercial_mono.wav"
    }
}' "https://speech.googleapis.com/v1p1beta1/speech:recognize" > speaker-diarization.txt

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format, saved to a file named speaker-diarization.txt.

{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "hi I'd like to buy a Chromecast and I was wondering whether you could help me with that certainly which color would you like we have blue black and red uh let's go with the black one would you like the new Chromecast Ultra model or the regular Chrome Cast regular Chromecast is fine thank you okay sure we like to ship it regular or Express Express please terrific it's on the way thank you thank you very much bye",
          "confidence": 0.92142606,
          "words": [
            {
              "startTime": "0s",
              "endTime": "1.100s",
              "word": "hi",
              "speakerTag": 2
            },
            {
              "startTime": "1.100s",
              "endTime": "2s",
              "word": "I'd",
              "speakerTag": 2
            },
            {
              "startTime": "2s",
              "endTime": "2s",
              "word": "like",
              "speakerTag": 2
            },
            {
              "startTime": "2s",
              "endTime": "2.100s",
              "word": "to",
              "speakerTag": 2
            },
            ...
            {
              "startTime": "6.500s",
              "endTime": "6.900s",
              "word": "certainly",
              "speakerTag": 1
            },
            {
              "startTime": "6.900s",
              "endTime": "7.300s",
              "word": "which",
              "speakerTag": 1
            },
            {
              "startTime": "7.300s",
              "endTime": "7.500s",
              "word": "color",
              "speakerTag": 1
            },
            ...
          ]
        }
      ],
      "languageCode": "en-us"
    }
  ]
}
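Once saved, the file can be post-processed like any JSON document. The following sketch (assuming speaker-diarization.txt contains a response shaped like the one above) extracts each word with its speaker label; words_by_speaker is a hypothetical helper written for this page, not part of any client library.

```python
import json

def words_by_speaker(path):
    """Read a saved Speech-to-Text JSON response and return
    (word, speakerTag) pairs from the final, complete result."""
    with open(path) as f:
        response = json.load(f)
    # The words list in the final result holds the complete diarized output.
    words = response["results"][-1]["alternatives"][0]["words"]
    return [(w["word"], w["speakerTag"]) for w in words]
```

For the response above, `words_by_speaker("speaker-diarization.txt")` would yield pairs such as `("hi", 2)` and `("certainly", 1)`.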


/**
 * Please include the following imports to run this sample.
 *
 * import com.google.api.gax.longrunning.OperationFuture;
 * import com.google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata;
 * import com.google.cloud.speech.v1p1beta1.LongRunningRecognizeRequest;
 * import com.google.cloud.speech.v1p1beta1.LongRunningRecognizeResponse;
 * import com.google.cloud.speech.v1p1beta1.RecognitionAudio;
 * import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
 * import com.google.cloud.speech.v1p1beta1.SpeechClient;
 * import com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative;
 * import com.google.cloud.speech.v1p1beta1.SpeechRecognitionResult;
 * import com.google.cloud.speech.v1p1beta1.WordInfo;
 * import com.google.protobuf.ByteString;
 * import java.nio.file.Files;
 * import java.nio.file.Path;
 * import java.nio.file.Paths;
 */

public static void sampleLongRunningRecognize() {
  // TODO(developer): Replace these variables before running the sample.
  String localFilePath = "resources/commercial_mono.wav";
  sampleLongRunningRecognize(localFilePath);
}

/**
 * Separating different speakers in an audio file recording
 *
 * @param localFilePath Path to local audio file, e.g. /path/audio.wav
 */
public static void sampleLongRunningRecognize(String localFilePath) {
  try (SpeechClient speechClient = SpeechClient.create()) {

    // If enabled, each word in the first alternative of each result will be
    // tagged with a speaker tag to identify the speaker.
    boolean enableSpeakerDiarization = true;

    // Optional. Specifies the estimated number of speakers in the conversation.
    int diarizationSpeakerCount = 2;

    // The language of the supplied audio
    String languageCode = "en-US";
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEnableSpeakerDiarization(enableSpeakerDiarization)
            .setDiarizationSpeakerCount(diarizationSpeakerCount)
            .setLanguageCode(languageCode)
            .build();
    Path path = Paths.get(localFilePath);
    byte[] data = Files.readAllBytes(path);
    ByteString content = ByteString.copyFrom(data);
    RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(content).build();
    LongRunningRecognizeRequest request =
        LongRunningRecognizeRequest.newBuilder().setConfig(config).setAudio(audio).build();
    OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =
        speechClient.longRunningRecognizeAsync(request);

    System.out.println("Waiting for operation to complete...");
    LongRunningRecognizeResponse response = future.get();
    for (SpeechRecognitionResult result : response.getResultsList()) {
      // First alternative has words tagged with speakers
      SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
      System.out.printf("Transcript: %s\n", alternative.getTranscript());
      // Print the speakerTag of each word
      for (WordInfo word : alternative.getWordsList()) {
        System.out.printf("Word: %s\n", word.getWord());
        System.out.printf("Speaker tag: %s\n", word.getSpeakerTag());
      }
    }
  } catch (Exception exception) {
    System.err.println("Failed to create the client due to: " + exception);
  }
}


const fs = require('fs');

// Imports the Google Cloud client library
const speech = require('@google-cloud/speech').v1p1beta1;

// Creates a client
const client = new speech.SpeechClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const fileName = 'Local path to audio file, e.g. /path/to/audio.raw';

const config = {
  encoding: `LINEAR16`,
  sampleRateHertz: 8000,
  languageCode: `en-US`,
  enableSpeakerDiarization: true,
  diarizationSpeakerCount: 2,
  model: `phone_call`,
};

const audio = {
  content: fs.readFileSync(fileName).toString('base64'),
};

const request = {
  config: config,
  audio: audio,
};

const [response] = await client.recognize(request);
const transcription = response.results
  .map(result => result.alternatives[0].transcript)
  .join('\n');
console.log(`Transcription: ${transcription}`);
console.log(`Speaker Diarization:`);
const result = response.results[response.results.length - 1];
const wordsInfo = result.alternatives[0].words;
// Note: The transcript within each result is separate and sequential per result.
// However, the words list within an alternative includes all the words
// from all the results thus far. Thus, to get all the words with speaker
// tags, you only have to take the words list from the last result:
wordsInfo.forEach(a =>
  console.log(` word: ${a.word}, speakerTag: ${a.speakerTag}`)
);


from google.cloud import speech_v1p1beta1
import io

def sample_long_running_recognize(local_file_path):
    """
    Separating different speakers in an audio file recording

    Args:
      local_file_path Path to local audio file, e.g. /path/audio.wav
    """

    client = speech_v1p1beta1.SpeechClient()

    # local_file_path = 'resources/commercial_mono.wav'

    # If enabled, each word in the first alternative of each result will be
    # tagged with a speaker tag to identify the speaker.
    enable_speaker_diarization = True

    # Optional. Specifies the estimated number of speakers in the conversation.
    diarization_speaker_count = 2

    # The language of the supplied audio
    language_code = "en-US"
    config = {
        "enable_speaker_diarization": enable_speaker_diarization,
        "diarization_speaker_count": diarization_speaker_count,
        "language_code": language_code,
    }
    with io.open(local_file_path, "rb") as f:
        content = f.read()
    audio = {"content": content}

    operation = client.long_running_recognize(config, audio)

    print(u"Waiting for operation to complete...")
    response = operation.result()

    for result in response.results:
        # First alternative has words tagged with speakers
        alternative = result.alternatives[0]
        print(u"Transcript: {}".format(alternative.transcript))
        # Print the speaker_tag of each word
        for word in alternative.words:
            print(u"Word: {}".format(word.word))
            print(u"Speaker tag: {}".format(word.speaker_tag))
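Beyond per-word printing, the flat word list is often easier to read when consecutive words that share a speaker tag are merged into conversation turns. This is a minimal sketch; group_by_speaker is a hypothetical helper built on the (word, speaker tag) pairs the samples above print, not part of any client library.

```python
def group_by_speaker(words):
    """Group consecutive (word, speaker_tag) pairs into speaker turns."""
    turns = []
    for word, tag in words:
        if turns and turns[-1][0] == tag:
            # Same speaker as the previous word: extend the current turn.
            turns[-1][1].append(word)
        else:
            # Speaker changed: start a new turn.
            turns.append((tag, [word]))
    return [(tag, " ".join(ws)) for tag, ws in turns]

words = [("hi", 2), ("I'd", 2), ("like", 2), ("certainly", 1), ("which", 1)]
for tag, text in group_by_speaker(words):
    print(f"Speaker {tag}: {text}")
# Speaker 2: hi I'd like
# Speaker 1: certainly which
```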