Chirp is the next generation of Google's speech-to-text models. Representing the culmination of years of research, the first version of Chirp is now available for Speech-to-Text. We intend to improve and expand Chirp to more languages and domains. For details, see our paper, Google USM.
We trained Chirp models with a different architecture than our current speech models. A single model unifies data from multiple languages. However, users still specify the language in which the model should recognize speech. Chirp does not support some of the Google Speech features that other models have. See Feature support and limitations for a complete list.
Model identifiers
Chirp is available in the Speech-to-Text API v2. You can use it like any other model.
The model identifier for Chirp is: chirp.
You can specify this model in synchronous or batch recognition requests.
Available API methods
Chirp processes speech in much larger chunks than other models do. This means it might not be suitable for true real-time use. Chirp is available through the following API methods:
- v2 Speech.Recognize (good for short audio, less than 1 minute)
- v2 Speech.BatchRecognize (good for long audio, 1 minute to 8 hours)
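For long audio in Cloud Storage, the batch path above can be sketched as follows, assuming the google-cloud-speech v2 Python client library; the project ID and gs:// URI are placeholders, and the inline-response output shape should be checked against the client version you install:

```python
"""Batch recognition with Chirp for long audio stored in Cloud Storage.

Sketch only: assumes the google-cloud-speech v2 client library and
Application Default Credentials; project_id and gcs_uri are placeholders.
"""

# Chirp supports audio from about 1 minute up to 8 hours via BatchRecognize.
MAX_BATCH_AUDIO_HOURS = 8


def batch_transcribe_chirp(project_id: str, gcs_uri: str) -> None:
    # Imported inside the function so the module stays importable
    # without the client library installed.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    # Chirp is regional: point the client at a supported regional endpoint.
    client = SpeechClient(
        client_options=ClientOptions(
            api_endpoint="us-central1-speech.googleapis.com"
        )
    )
    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp",  # the Chirp model identifier
    )
    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{project_id}/locations/us-central1/recognizers/_",
        config=config,
        files=[cloud_speech.BatchRecognizeFileMetadata(uri=gcs_uri)],
        # Return transcripts inline rather than writing them to a bucket.
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            inline_response_config=cloud_speech.InlineOutputConfig()
        ),
    )
    # BatchRecognize is a long-running operation; block until it finishes.
    operation = client.batch_recognize(request=request)
    response = operation.result(timeout=3 * 60 * 60)
    for uri, file_result in response.results.items():
        for result in file_result.transcript.results:
            print(uri, result.alternatives[0].transcript)
```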
Chirp is not available on the following API methods:
- v2 Speech.StreamingRecognize
- v1 Speech.StreamingRecognize
- v1 Speech.Recognize
- v1 Speech.LongRunningRecognize
- v1p1beta1 Speech.StreamingRecognize
- v1p1beta1 Speech.Recognize
- v1p1beta1 Speech.LongRunningRecognize
Regions
Chirp is available in the following regions:
us-central1
europe-west4
asia-southeast1
See the languages page for more information.
Languages
You can see the supported languages in the full language list.
Feature support and limitations
Chirp does not support some of the Speech-to-Text API features:
- Confidence scores: The API returns a value, but it isn't truly a confidence score.
- Speech adaptation: No adaptation features are supported.
- Diarization: Automatic diarization isn't supported.
- Forced normalization: Not supported.
- Word level confidence: Not supported.
- Language detection: Not supported.
Chirp does support the following features:
- Automatic punctuation: Punctuation is predicted by the model and can be disabled.
- Word timings: Optionally returned.
- Language-agnostic audio transcription: The model automatically infers the spoken language in your audio file and adds it to the results.
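The supported features above are set through the v2 RecognitionFeatures message. A minimal configuration sketch, assuming the google-cloud-speech v2 Python client library (the field values shown are illustrative defaults):

```python
"""Configuring the Chirp-supported features (word timings, automatic
punctuation) in a v2 RecognitionConfig.

Sketch only: assumes the google-cloud-speech v2 client library.
"""


def chirp_config(enable_timings: bool = True, punctuation: bool = True):
    # Imported inside the function so the module stays importable
    # without the client library installed.
    from google.cloud.speech_v2.types import cloud_speech

    return cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp",
        features=cloud_speech.RecognitionFeatures(
            # Word timings are optional and returned per word when enabled.
            enable_word_time_offsets=enable_timings,
            # Punctuation is predicted by the model and can be turned off.
            enable_automatic_punctuation=punctuation,
        ),
    )
```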
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the Speech-to-Text APIs.
- Make sure that you have the following role or roles on the project: Cloud Speech Administrator
Check for the roles
- In the Google Cloud console, go to the IAM page.
- Select the project.
- In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.
- For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.
Grant the roles
- In the Google Cloud console, go to the IAM page.
- Select the project.
- Click Grant access.
- In the New principals field, enter your user identifier. This is typically the email address for a Google Account.
- In the Select a role list, select a role.
- To grant additional roles, click Add another role and add each additional role.
- Click Save.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
gcloud init
- If you're using a local shell, then create local authentication credentials for your user account:
gcloud auth application-default login
You don't need to do this if you're using Cloud Shell.
Client libraries can use Application Default Credentials to easily authenticate with Google APIs and send requests to those APIs. With Application Default Credentials, you can test your application locally and deploy it without changing the underlying code. For more information, see Authenticate for using client libraries.
Also, make sure that you have installed the client library.
Perform synchronous speech recognition with Chirp
Here is an example of performing synchronous speech recognition on a local audio file using Chirp:
Python
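A minimal sketch of this request, assuming the google-cloud-speech v2 Python client library and Application Default Credentials; the project ID and audio file path are placeholders:

```python
"""Synchronous recognition with Chirp (Speech-to-Text API v2).

Sketch only: assumes the google-cloud-speech v2 client library;
project_id and audio_file are placeholders you must replace.
"""


def recognizer_path(project_id: str, location: str = "us-central1") -> str:
    # Chirp uses the default recognizer ("_") in a Chirp-supported region.
    return f"projects/{project_id}/locations/{location}/recognizers/_"


def transcribe_chirp(project_id: str, audio_file: str) -> None:
    # Imported inside the function so the module stays importable
    # without the client library installed.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    # Chirp is regional: point the client at a supported regional endpoint.
    client = SpeechClient(
        client_options=ClientOptions(
            api_endpoint="us-central1-speech.googleapis.com"
        )
    )

    with open(audio_file, "rb") as f:
        audio_content = f.read()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp",  # the Chirp model identifier
    )
    request = cloud_speech.RecognizeRequest(
        recognizer=recognizer_path(project_id),
        config=config,
        content=audio_content,
    )
    response = client.recognize(request=request)
    for result in response.results:
        print(f"Transcript: {result.alternatives[0].transcript}")
```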
Make a request with language-agnostic transcription enabled
The following code samples demonstrate how to make a request with language-agnostic transcription enabled.
Python
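Language-agnostic transcription is enabled by setting the language code to "auto", which tells Chirp to infer the spoken language. A sketch assuming the google-cloud-speech v2 Python client library; the project ID and audio path are placeholders:

```python
"""Language-agnostic transcription with Chirp (Speech-to-Text API v2).

Sketch only: assumes the google-cloud-speech v2 client library;
project_id and audio_file are placeholders you must replace.
"""

# Setting the language code to "auto" asks Chirp to detect the language.
CHIRP_AUTO_LANGUAGE = "auto"


def transcribe_auto(project_id: str, audio_file: str) -> None:
    # Imported inside the function so the module stays importable
    # without the client library installed.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    client = SpeechClient(
        client_options=ClientOptions(
            api_endpoint="us-central1-speech.googleapis.com"
        )
    )
    with open(audio_file, "rb") as f:
        content = f.read()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=[CHIRP_AUTO_LANGUAGE],
        model="chirp",
    )
    request = cloud_speech.RecognizeRequest(
        recognizer=f"projects/{project_id}/locations/us-central1/recognizers/_",
        config=config,
        content=content,
    )
    response = client.recognize(request=request)
    for result in response.results:
        # Each result carries the detected language code alongside the text.
        print(result.language_code, result.alternatives[0].transcript)
```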
Get started with Chirp in the Google Cloud console
- Ensure you have signed up for a Google Cloud account and created a project.
- Go to Speech in Google Cloud console.
- Enable the API if it's not already enabled.
- Go to the Transcriptions subpage.
- Click New Transcription.
- Make sure that you have an STT workspace. If you don't have one, create one:
  - Open the Workspace drop-down and click New Workspace.
  - From the Create a new workspace navigation sidebar, click Browse.
  - Click to create a bucket.
  - Enter a name for your bucket and click Continue.
  - Click Create.
  - After the bucket is created, click Select to select your bucket.
  - Click Create to finish creating your workspace for Speech-to-Text.
Perform a transcription on your audio.
- From the New Transcription page, choose an option for selecting your audio file:
- Upload by clicking Local upload.
- Click Cloud storage to specify an existing Cloud Storage file.
- Click Continue.
- From the Transcription options section, select the Spoken language that you plan to use for recognition with Chirp from your previously created recognizer.
- In the Model drop-down, select Chirp.
- In the Region drop-down, select a region, such as us-central1.
- Click Continue.
- To run your first recognition request using Chirp, in the main section, click Submit.
View your Chirp transcription result.
- From the Transcriptions page, click the name of the transcription.
- On the Transcription details page, view your transcription result, and optionally play back the audio in the browser.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
- Optional: Revoke the authentication credentials that you created, and delete the local credential file:
  gcloud auth application-default revoke
- Optional: Revoke credentials from the gcloud CLI:
  gcloud auth revoke
Delete a Google Cloud project:
gcloud projects delete PROJECT_ID
What's next
- Practice transcribing short audio files.
- Learn how to transcribe streaming audio.
- Learn how to transcribe long audio files.
- For best performance, accuracy, and other tips, see the best practices documentation.