Chirp: Universal speech model

Chirp is the next generation of Google's speech-to-text models. The first version of Chirp, representing the culmination of years of research, is now available for Speech-to-Text. We intend to improve and expand Chirp to more languages and domains. For details, see our paper, Google USM.

We trained Chirp models with a different architecture than our current speech models. A single model unifies data from multiple languages; however, users still specify the language in which the model should recognize speech. Chirp doesn't support some of the Google Speech features that other models have. See the Feature support and limitations section below for a complete list.

Model identifiers

Chirp is available in the Cloud Speech-to-Text API v2. You can use it like any other model.

The model identifier for Chirp is: chirp.

You can specify this model when creating a recognizer to use Chirp.
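If you use the client libraries, a recognizer creation request looks roughly like the following minimal Python sketch. The project ID, recognizer ID, region, and language code are illustrative placeholders:

```python
# Minimal sketch: create a recognizer that uses the chirp model with the
# Speech-to-Text v2 API. Project, recognizer ID, region, and language
# code are illustrative placeholders.
from google.api_core.client_options import ClientOptions
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

project_id = "my-project"       # placeholder
recognizer_id = "chirp-en-us"   # placeholder
region = "us-central1"          # Chirp is only available in certain regions

# Point the client at a regional endpoint that supports Chirp.
client = SpeechClient(
    client_options=ClientOptions(api_endpoint=f"{region}-speech.googleapis.com")
)

operation = client.create_recognizer(
    request=cloud_speech.CreateRecognizerRequest(
        parent=f"projects/{project_id}/locations/{region}",
        recognizer_id=recognizer_id,
        recognizer=cloud_speech.Recognizer(
            language_codes=["en-US"],  # one recognizer per language
            model="chirp",             # the Chirp model identifier
        ),
    )
)
# create_recognizer returns a long-running operation; wait for the result.
recognizer = operation.result()
print(recognizer.name)
```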

Available API methods

Chirp processes speech in much larger chunks than other models do, so it might not be suitable for true real-time use. Chirp is available through the following API methods (a usage sketch follows the list):

  • v2 Speech.Recognize (good for short audio < 1 min)
  • v2 Speech.BatchRecognize (good for long audio 1 min to 8 hrs)
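As a rough illustration, the following Python sketch calls both methods against an existing Chirp recognizer using the google-cloud-speech v2 client; the recognizer path, file name, and Cloud Storage URIs are placeholders:

```python
# Sketch: short-audio recognition (Speech.Recognize) and long-audio batch
# recognition (Speech.BatchRecognize) with an existing Chirp recognizer.
from google.api_core.client_options import ClientOptions
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

region = "us-central1"  # placeholder; must match the recognizer's region
recognizer_name = (
    "projects/my-project/locations/us-central1/recognizers/chirp-en-us"  # placeholder
)

client = SpeechClient(
    client_options=ClientOptions(api_endpoint=f"{region}-speech.googleapis.com")
)

# Let the service detect the audio encoding automatically.
config = cloud_speech.RecognitionConfig(
    auto_decoding_config=cloud_speech.AutoDetectDecodingConfig()
)

# Speech.Recognize: good for short audio (< 1 min), sent inline.
with open("short_audio.wav", "rb") as f:  # placeholder file
    response = client.recognize(
        request=cloud_speech.RecognizeRequest(
            recognizer=recognizer_name, config=config, content=f.read()
        )
    )
for result in response.results:
    print(result.alternatives[0].transcript)

# Speech.BatchRecognize: good for long audio (1 min to 8 hrs), read from
# and written back to Cloud Storage.
operation = client.batch_recognize(
    request=cloud_speech.BatchRecognizeRequest(
        recognizer=recognizer_name,
        config=config,
        files=[
            cloud_speech.BatchRecognizeFileMetadata(
                uri="gs://my-bucket/long_audio.wav"  # placeholder
            )
        ],
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            gcs_output_config=cloud_speech.GcsOutputConfig(
                uri="gs://my-bucket/results/"  # placeholder
            )
        ),
    )
)
batch_response = operation.result(timeout=600)  # long-running operation
```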

Chirp is not available on the following API methods:

  • v2 Speech.StreamingRecognize
  • v1 Speech.StreamingRecognize
  • v1 Speech.Recognize
  • v1 Speech.LongRunningRecognize
  • v1p1beta1 Speech.StreamingRecognize
  • v1p1beta1 Speech.Recognize
  • v1p1beta1 Speech.LongRunningRecognize

Chirp is available only in certain regions. For the list of supported regions, see the supported languages page.

Languages

You can see the supported languages in the full language list.

Feature support and limitations

Chirp doesn't currently support many of the Speech-to-Text API features. See the following lists for specific restrictions.

  • Confidence scores: The API returns a value, but it isn't truly a confidence score.
  • Speech adaptation: No adaptation features are supported.
  • Diarization: Neither automatic diarization nor channel separation is supported.
  • Forced normalization: Not supported.
  • Word level confidence: Not supported.
  • Language detection: Not supported.

Chirp does support the following features (see the sketch after this list):

  • Automatic punctuation: Punctuation is predicted by the model and can be disabled.
  • Word timings: Optionally returned.
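As a minimal sketch, both features can be toggled through the v2 RecognitionFeatures message on a request's RecognitionConfig; apart from the field names, everything below is illustrative:

```python
# Sketch: enable word timings and control automatic punctuation on a
# Chirp request via the v2 RecognitionFeatures message.
from google.cloud.speech_v2.types import cloud_speech

config = cloud_speech.RecognitionConfig(
    auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
    features=cloud_speech.RecognitionFeatures(
        enable_automatic_punctuation=True,  # model-predicted; set False to disable
        enable_word_time_offsets=True,      # request optional per-word timings
    ),
)
```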

Getting started with Chirp in the Google Cloud console

  1. Ensure you have signed up for a Google Cloud account and created a project.
  2. Go to Speech in Google Cloud console.
  3. Enable the API if it's not already enabled.
  4. Create an STT Recognizer that uses Chirp.

    a. Go to the Recognizers tab and click Create.

    Screenshot of the Speech-to-text Recognizer list.

    b. From the Create Recognizer page, enter the necessary fields for Chirp.

    Screenshot of the Speech-to-text create Recognizer page.

    i. Name your recognizer.

    ii. Select chirp as the Model.

    iii. Select the language you want to use. You must use one recognizer per language that you plan to test.

    iv. Do not select any other features.

  5. Make sure that you have an STT UI Workspace. If you do not have one already, you need to create a workspace.

    a. Visit the transcriptions page, and click New Transcription.

    b. Open the Workspace dropdown and click New Workspace to create a workspace for transcription.

    c. From the Create a new workspace navigation sidebar, click Browse.

    d. Click to create a new bucket.

    e. Enter a name for your bucket and click Continue.

    f. Click Create to create your Cloud Storage bucket.

    g. Once the bucket is created, click Select to select your bucket for use.

    h. Click Create to finish creating your workspace for the speech-to-text UI.
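    If you prefer, the Cloud Storage bucket in steps d through f can also be created programmatically. The following sketch uses the google-cloud-storage Python client with a placeholder project, bucket name, and region; the workspace itself is still created in the UI:

```python
# Optional sketch: create the Cloud Storage bucket from code instead of
# the console. Project, bucket name, and region are placeholders.
from google.cloud import storage

client = storage.Client(project="my-project")  # placeholder project
bucket = client.create_bucket("my-stt-workspace-bucket", location="us-central1")
print(f"Created bucket {bucket.name}")
```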

  6. Perform a transcription of your audio.

    Screenshot of the Speech-to-text transcription creation page, showing file selection or upload.

    a. From the New Transcription page, select your audio file by uploading it (Local upload) or by specifying an existing Cloud Storage file (Cloud storage). Note: The UI tries to assess your audio file parameters automatically.

    b. Click Continue to move to the Transcription options.

    Screenshot of the Speech-to-text transcription creation page showing selecting the Chirp model and submitting a transcription job.

    c. For Spoken language, select the language that you plan to use for recognition with Chirp; it must match the language of your previously created recognizer.

    d. In the model dropdown, select Chirp - Universal Speech Model.

    e. In the Recognizer dropdown, select your newly created recognizer.

    f. Click Submit to run your first recognition request using Chirp.

  7. View your Chirp transcription result.

    a. From the Transcriptions page, click the name of the transcription to view its result.

    b. In the Transcription details page, view your transcription result, and optionally play back the audio in the browser.