Creating translated subtitles with AI
Contributed by Google employees.
This tutorial shows how to do the following:
- Transcribe audio files with spoken dialog into text and SRT subtitle files.
- Get accurate timings of spoken sentences for subtitles.
- Translate subtitles to other languages.
This tutorial uses billable components of Google Cloud, including the following:
- Cloud Storage
- Cloud Speech-to-Text
- Cloud Translation
Watch the companion video
To see this tutorial in action, you can watch the Google Cloud Level Up episode first, and then follow the steps in this tutorial yourself.
Before you begin
This tutorial assumes that you already have a Google Cloud account set up.
Create a Google Cloud project
- Go to the Cloud Console.
- Click the project selector in the upper-left corner and select New Project.
- Give the project a name and click Create.
- Click the project selector again and select your new project.
On your local development machine, install the following tools:
- Google Cloud SDK (gcloud command-line tool)
Configure the gcloud command-line tool to use your new Google Cloud project.
Export an environment variable with your current Google Cloud project ID:
export PROJECT_ID=$(gcloud info --format='value(config.project)')
Enable the services used in this tutorial:
gcloud services enable speech.googleapis.com texttospeech.googleapis.com translate.googleapis.com storage-component.googleapis.com
Clone the GitHub repository associated with the community tutorials:
git clone https://github.com/GoogleCloudPlatform/community.git
Change to the tutorial directory:
Create a new Python 3 virtual environment:
python3 -m venv venv
Activate the virtual environment:
source venv/bin/activate
Install the required Python modules:
pip3 install -r requirements.txt
Create two Cloud Storage buckets: one for input, one for output. Because bucket names share a single global namespace, you must choose unique bucket names.
Export the two bucket names into environment variables. Replace [YOUR_FIRST_BUCKET] and [YOUR_SECOND_BUCKET] with your custom bucket names:
export BUCKET_IN=[YOUR_FIRST_BUCKET]
export BUCKET_OUT=[YOUR_SECOND_BUCKET]
Create the buckets:
gsutil mb gs://$BUCKET_IN
gsutil mb gs://$BUCKET_OUT
Create a Service Account and JSON key
In this section, you create a Service Account in your Google Cloud project and grant sufficient permissions to it so that it can use the AI services. You also download a JSON key for the Service Account. The JSON key is used by the Python utilities to authenticate with the Cloud services.
Create a new Service Account:
gcloud iam service-accounts create ml-dev --description="ML APIs developer access" --display-name="ML Developer Service Account"
Grant the ML Developer role to the Service Account:
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:ml-dev@$PROJECT_ID.iam.gserviceaccount.com --role roles/ml.developer
Grant the Project Viewer role to the Service Account:
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:ml-dev@$PROJECT_ID.iam.gserviceaccount.com --role roles/viewer
Grant the Storage Object Admin role to the Service Account, so that it can upload and download objects to and from Cloud Storage:
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:ml-dev@$PROJECT_ID.iam.gserviceaccount.com --role roles/storage.objectAdmin
Create a JSON key for the Service Account:
gcloud iam service-accounts keys create ./ml-dev.json --iam-account ml-dev@$PROJECT_ID.iam.gserviceaccount.com
The key file is written to the current working directory.
Export the path of your Service Account JSON key in the GOOGLE_APPLICATION_CREDENTIALS environment variable, so that the utilities can authenticate with the Cloud AI services:
export GOOGLE_APPLICATION_CREDENTIALS=ml-dev.json
The SRT subtitle format
Example SRT subtitle file with two subtitle entries:
1
00:00:00,000 --> 00:00:01,800
This is an example text file.

2
00:00:01,800 --> 00:00:04,300
It can be used to test the artificial intelligence.
Each entry contains the following items:
- incrementing index number, starting from 1
- start and stop times for the subtitle, in the format hh:mm:ss,mmm (hours, minutes, seconds, milliseconds)
- subtitle body in one or more lines of text
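The timestamp format used by SRT can be produced with a small helper function. This is an illustrative sketch, not part of the tutorial's utilities:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: hh:mm:ss,mmm."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

print(srt_timestamp(1.8))  # 00:00:01,800
print(srt_timestamp(4.3))  # 00:00:04,300
```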
Preparing the dialog audio
The input data for the solution is an audio file that contains spoken dialog. The first step is to transcribe the audio file's speech to text.
Extracting the dialog track and optimizing the audio for Speech-to-Text
Your starting point may be any of the following:
- a new video in post-production, being edited before publishing
- an existing video encoded as a playback-optimized video file
- an existing multichannel audio file in which one track contains the dialog
- an audio recording of just the spoken dialog
Regardless of the source data type, you need to prepare an audio file for transcribing that contains only the spoken dialog. If possible, the file should not contain any other audio (such as music) or video tracks. The audio file should be in a format that can be used by the Cloud Speech-to-Text API. To prepare an optimized audio file, follow the steps in Optimizing audio files for Speech-to-Text. The quality of the audio input can greatly affect the quality of the transcribed output.
This tutorial includes a pre-created audio file, example.wav, which the next steps use for demonstration.
About the utility used to transcribe audio files
To transcribe audio files, this tutorial uses the example utility speech2srt.py.
The utility performs the following steps:
Configures the API request and sets the following parameters:
- enable_word_time_offsets: gives millisecond-accurate start and stop times of each spoken word.
- enable_automatic_punctuation: adds punctuation marks, such as commas and periods.
Calls the Cloud Speech-to-Text API and passes input parameters to the service:
- URI of the source audio file in Cloud Storage (example:
- Sample rate of the audio in Hertz (default:
- Language code of the spoken dialog (default:
- Max characters per line, before breaking to the next line (default:
Receives the transcribed text from the service, including metadata such as the timing of each spoken word.
Writes two output files:
- a plain text file, with each sentence on a separate line (a new line starts after sentence-ending punctuation, or when the sentence exceeds the configured maximum characters per line)
- an SRT subtitle file, with each sentence as a separate subtitle entry
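The subtitle-building step described above can be sketched as follows, assuming the Speech-to-Text response has been reduced to (word, start_seconds, end_seconds) tuples. This illustrates the approach; it is not the actual speech2srt.py implementation, and the MAX_CHARS value is a hypothetical limit:

```python
MAX_CHARS = 40  # hypothetical per-line limit; the utility's real default may differ

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: hh:mm:ss,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words):
    """Break timed words into subtitle entries at sentence-ending
    punctuation, or when a line would exceed MAX_CHARS."""
    entries, line, start = [], [], None
    for word, w_start, w_end in words:
        if start is None:
            start = w_start
        line.append(word)
        text = " ".join(line)
        if word[-1] in ".?!" or len(text) > MAX_CHARS:
            entries.append((start, w_end, text))
            line, start = [], None
    if line:  # flush any words left over after the last break
        entries.append((start, w_end, " ".join(line)))
    out = []
    for i, (s0, s1, text) in enumerate(entries, start=1):
        out.append(f"{i}\n{srt_timestamp(s0)} --> {srt_timestamp(s1)}\n{text}\n")
    return "\n".join(out)

words = [("This", 0.0, 0.3), ("is", 0.3, 0.5), ("an", 0.5, 0.7),
         ("example", 0.7, 1.2), ("text", 1.2, 1.5), ("file.", 1.5, 1.8)]
print(words_to_srt(words))
```

Run against the sample word timings, this reproduces the first entry of the example SRT file shown earlier in the tutorial.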
Transcribe dialog to plain text and SRT subtitles
To transcribe the audio file, do the following:
Upload your dialog audio file to the Cloud Storage bucket:
gsutil cp example.wav gs://$BUCKET_IN/
View the command-line options for the transcribing utility:
python3 speech2srt.py -h
Transcribe the file:
python3 speech2srt.py --storage_uri gs://$BUCKET_IN/example.wav --sample_rate_hertz 24000
If successful, the command should output the following:
Transcribing gs://[YOUR_FIRST_BUCKET]/example.wav ...
Transcribing finished
Writing en-US subtitles to: en.srt
Writing text to: en.txt
Example plain text output
This is an example text file. It can be used to test the artificial intelligence solution the solution can transcribe spoken dialogue in to text. It can convert text into subtitles and it can translate subtitles to multiple Target languages.
Example SRT subtitles output
1
00:00:00,000 --> 00:00:01,800
This is an example text file.

2
00:00:01,800 --> 00:00:04,300
It can be used to test the artificial intelligence

3
00:00:04,300 --> 00:00:08,300
solution the solution can transcribe spoken dialogue

4
00:00:08,300 --> 00:00:08,900
in to text.

5
00:00:08,900 --> 00:00:12,100
It can convert text into subtitles and it can translate

6
00:00:12,100 --> 00:00:14,500
subtitles to multiple Target languages.
The en.txt file is used as the input file for translating into other languages in later steps. The en.srt file is the subtitles file for your video in the original language.
Open both output files, en.txt and en.srt, in a text editor and fix any transcribing mistakes where necessary.
Load the SRT subtitles file in your video player, enable subtitles, and verify that the subtitles are displayed correctly. Refer to the Level Up YouTube episode for an example on how to load the subtitles to YouTube Studio.
About the translation utilities
Now that you have created subtitles in the original language, you can use the
Cloud Translation API to generate subtitles in other languages. To achieve this,
you can use the included utilities translate_txt.py and txt2srt.py.
The utilities perform the following steps:
- Queries and prints the list of languages that the Translation API can translate to and from.
- Calls the API's batch translation method:
- Uses the source text file in Cloud Storage as the input.
- Specifies the source text file's original language.
- Specifies the target languages for the translation operation.
- Specifies the output bucket for the translated text files and the descriptive index.csv file.
- Reads the translation output file index.csv to identify the translated output text files.
- Opens the original language SRT subtitle file to read the timings for each subtitle entry.
- For each translated text file, does the following:
- Replaces the original language subtitle's body text with the translated text.
- Writes the translated subtitles as SRT output files.
The tools match the sentences in the plain text files and SRT subtitle files by their line and index numbers. For example, line 1 in the plain text file has the same content as the SRT subtitle entry at index 1. For this reason, the current versions of the utilities support only one line of text per subtitle entry.
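The index-based matching can be sketched as follows: keep the index and timing lines from the original SRT, and swap in the translated sentence with the same position. This is illustrative only; the real txt2srt.py reads the files named in index.csv:

```python
def retime_translation(source_srt: str, translated_lines: list) -> str:
    """Keep the index and timing lines from the source SRT; replace each
    entry's body with the translated line at the same position."""
    out = []
    for block, text in zip(source_srt.strip().split("\n\n"), translated_lines):
        index, timing = block.split("\n")[:2]
        out.append(f"{index}\n{timing}\n{text}\n")
    return "\n".join(out)

en_srt = (
    "1\n00:00:00,000 --> 00:00:01,800\nThis is an example text file.\n\n"
    "2\n00:00:01,800 --> 00:00:04,300\nIt can be used to test the artificial intelligence\n"
)
fi_lines = ["Tämä on esimerkki tekstitiedostosta.",
            "Sitä voidaan käyttää tekoälyn testaamiseen"]
print(retime_translation(en_srt, fi_lines))
```

Because the match is purely positional, a transcription fix that adds or removes a sentence in one file must be mirrored in the other, or the timings drift out of sync.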
Translate subtitles into other languages
To generate subtitles for multiple target languages, do the following:
Upload the transcribed original language text to Cloud Storage:
gsutil cp en.txt gs://$BUCKET_IN/
View the command-line options for the translate_txt.py utility:
python3 translate_txt.py -h
The output bucket must be empty before executing the translation step. To empty the bucket, use the following command:
gsutil rm gs://$BUCKET_OUT/*
Call the Translation API service and specify the list of target languages:
python3 translate_txt.py --project_id $PROJECT_ID --source_lang en --target_lang ko,fi --input_uri gs://$BUCKET_IN/en.txt --output_uri gs://$BUCKET_OUT/
This example command specifies that you want to translate the plain text file gs://$BUCKET_IN/en.txt into Korean and Finnish, and store the output files in the Cloud Storage bucket gs://$BUCKET_OUT.
If everything went well, the output should look like the following:
Supported Languages: af am ar az be bg bn bs ca ceb co cs cy da de el en eo es et eu fa fi fr fy ga gd gl gu ha haw hi hmn hr ht hu hy id ig is it iw ja jw ka kk km kn ko ku ky la lb lo lt lv mg mi mk ml mn mr ms mt my ne nl no ny or pa pl ps pt ro ru rw sd si sk sl sm sn so sq sr st su sv sw ta te tg th tk tl tr tt ug uk ur uz vi xh yi yo zh-CN zh-TW zu
Waiting for operation to complete...
Total Characters: 484
Translated Characters: 484
Copy the output files to your local machine:
gsutil cp gs://$BUCKET_OUT/* .
View the index.csv file, which contains information about the translation operation's output files:
cat index.csv
In the output, you can see that the service translated the source file en.txt and wrote two output files, in Finnish and Korean, respectively.
Create SRT subtitles from the Finnish and Korean plain text files:
python3 txt2srt.py --srt en.srt --index index.csv
You should see the following command output:
Loading en.srt
Updating subtitles for each translated language
Wrote SRT file fi.srt
Wrote SRT file ko.srt
The txt2srt.py utility generated the translated subtitles by loading the original en.srt English subtitles for the timing information, and replacing each subtitle entry's body text with the corresponding line of text from the Finnish and Korean translated files.
Check the translated subtitles:
head -8 fi.srt
The output should look like the following:
1
00:00:00,000 --> 00:00:01,800
Tämä on esimerkki tekstitiedostosta.

2
00:00:01,800 --> 00:00:04,300
Sitä voidaan käyttää tekoälyn testaamiseen
As with the original-language speech-to-text transcription, check the output files and fix any mistakes using a text editor.
Now you have subtitles for your video in multiple languages.
Delete the Google Cloud project
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the project.
Caution: Deleting a project has the following consequences:
- If you used an existing project, you'll also delete any other work you've done in the project.
- You can't reuse the project ID of a deleted project. If you created a custom project ID that you plan to use in the future, delete the resources inside the project instead. This ensures that URLs that use the project ID, such as an appspot.com URL, remain available.
To delete a project, do the following:
- In the Cloud Console, go to the Projects page.
- In the project list, select the project you want to delete and click Delete project.
- In the dialog, type the project ID, and then click Shut down to delete the project.