The speech recognition model, which performs speech-to-text conversion, is just one of the models in the Tensor2Tensor library. Tensor2Tensor (T2T) is a library of deep learning models and datasets, as well as a set of scripts that let you train the models and download and prepare the data.
Before you begin
Before starting this tutorial, follow the steps below to check that your Google Cloud Platform project is correctly set up.
Sign in to your Google Account.
If you don't already have one, sign up for a new account.
Select or create a Google Cloud Platform project.
Make sure that billing is enabled for your Google Cloud Platform project.
This walkthrough uses billable components of Google Cloud Platform. Check the Cloud TPU pricing page to estimate your costs. Be sure to clean up resources you create when you've finished with them to avoid unnecessary charges.
Set up your resources
This section provides information on setting up Cloud Storage, Compute Engine VM, and Cloud TPU resources for this tutorial.
Create a Cloud Storage bucket
You need a Cloud Storage bucket to store the data you use to train
your model and the training results. The
ctpu up tool used in this tutorial
sets up default permissions for the Cloud TPU service account. If you want
finer-grained permissions, review the access level permissions.
Go to the Cloud Storage page on the GCP Console.
Create a new bucket, specifying the following options:
- A unique name of your choosing.
- Default storage class: accept the default presented.
- Location: If you want to use a Cloud TPU device, accept the default presented. If you want to use a Cloud TPU Pod slice, you must specify a region where Cloud TPU Pods are available.
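If you prefer the command line, you can also create the bucket with gsutil. This is a sketch; the bucket name and location below are placeholders, so substitute your own (for a Cloud TPU Pod slice, pass a region where Cloud TPU Pods are available):

$ gsutil mb -l us-central1 gs://YOUR-BUCKET-NAME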
Use the ctpu tool
This section demonstrates using the Cloud TPU provisioning tool, ctpu, for creating and managing Cloud TPU project resources. The resources consist of a virtual machine (VM) and a Cloud TPU resource that have the same name. These resources must reside in the same region/zone as the bucket you just created.
You can also set up your VM and TPU resources using
gcloud commands or through
the Cloud Console. See the
managing VM and TPU resources page
to learn all the ways you can set up and manage your Compute Engine VM
and Cloud TPU resources.
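For example, here is a sketch of creating just the TPU resource with gcloud; the TPU name, zone, and accelerator type below are assumptions to adapt to your project, and you would still create the Compute Engine VM separately:

$ gcloud compute tpus create your-tpu-name \
    --zone=us-central1-b \
    --version=1.13 \
    --accelerator-type=v2-8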
Run ctpu up to create resources
Open a Cloud Shell window.
Run ctpu up, specifying the flags shown for either a Cloud TPU device or Pod slice. Refer to the CTPU Reference for flag options and descriptions.
Set up a Cloud TPU device:
$ ctpu up
The following configuration message appears:
ctpu will use the following configuration:

  Name: [your TPU's name]
  Zone: [your project's zone]
  GCP Project: [your project's name]
  TensorFlow Version: 1.13
  VM:
    Machine Type: [your machine type]
    Disk Size: [your disk size]
    Preemptible: [true or false]
  Cloud TPU:
    Size: [your TPU size]
    Preemptible: [true or false]

OK to create your Cloud TPU resources with the above configuration? [Yn]:
Press y to create your Cloud TPU resources.
The ctpu up command creates a virtual machine (VM) and a Cloud TPU resource.
From this point on, a prefix of
(vm)$ means you should run the command on the
Compute Engine VM instance.
Verify your Compute Engine VM
When the ctpu up command has finished executing, verify that your shell
prompt has changed from username@projectname to username@vm-name. This
change shows that you are now logged into your Compute Engine VM.
Add disk space to your VM
T2T conveniently packages data generation for many common open-source datasets
in its t2t-datagen script. The script downloads the data, preprocesses it, and
makes it ready for training. To do so, it needs local disk space.
You can skip this step if you run t2t-datagen on your local machine
(install tensor2tensor, then see the t2t-datagen commands below).
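If you take the local route, a typical install looks like the following (this assumes a Python environment with pip on your local machine, not the VM):

$ pip install tensor2tensor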
- Follow the Compute Engine guide to add a disk to your Compute Engine VM.
- Set the disk size to 200 GB (the recommended minimum size).
- Set "When deleting instance" to "Delete disk" to ensure that the disk is removed when you remove the VM.
Make a note of the path to your new disk. For example: /mnt/disks/mnt-dir.
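A new disk must be formatted and mounted before t2t-datagen can use it. Here is a minimal sketch, following the Compute Engine guide referenced above, assuming the new disk appears as /dev/sdb (verify with lsblk) and using the example mount point:

(vm)$ sudo mkfs.ext4 -F /dev/sdb          # format the blank disk (erases it)
(vm)$ sudo mkdir -p /mnt/disks/mnt-dir    # create the mount point
(vm)$ sudo mount -o discard,defaults /dev/sdb /mnt/disks/mnt-dir
(vm)$ sudo chmod a+w /mnt/disks/mnt-dir   # allow non-root writes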
Generate the training dataset
On your Compute Engine VM:
Create the following environment variables for directories:
(vm)$ STORAGE_BUCKET=gs://YOUR-BUCKET-NAME
(vm)$ DATA_DIR=$STORAGE_BUCKET/data/
(vm)$ TMP_DIR=/mnt/disks/mnt-dir/t2t_tmp
where:
- YOUR-BUCKET-NAME is the name of your Cloud Storage bucket.
- DATA_DIR is a location on Cloud Storage.
- TMP_DIR is a location on the disk that you added to your Compute Engine VM at the start of the tutorial.
Create a temporary directory on the disk that you added to your Compute Engine VM at the start of the tutorial:
(vm)$ mkdir $TMP_DIR
Use the t2t-datagen script to generate both the full dataset and the small clean version, which you will use for evaluation.
As the audio import in t2t-datagen uses sox to generate normalized waveforms,
first install it on your Compute Engine VM (for example,
apt-get install sox), and then run the following commands:
(vm)$ t2t-datagen --problem=librispeech --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR
(vm)$ t2t-datagen --problem=librispeech_clean --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR
The librispeech_train_full_test_clean problem trains on the full dataset
but evaluates on the clean dataset.
You can also use
librispeech_clean_small, which is a small version
of the clean dataset.
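For example, to generate only that small dataset, the t2t-datagen invocation follows the same pattern as above (a sketch reusing the same directories):

(vm)$ t2t-datagen --problem=librispeech_clean_small --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR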
You can view the data on Cloud Storage by going to the Google Cloud Platform Console and choosing Storage from the left-hand menu. Click the name of the bucket that you created for this tutorial.
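As a quick command-line check, you can also list the generated files from the VM, reusing the variables set earlier:

(vm)$ gsutil ls $DATA_DIR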
Training the model
To train a model on Cloud TPU, first set up OUT_DIR and TPU_NAME.
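Here is a sketch of that setup: the output subdirectory below is a placeholder of your choosing, and since ctpu gives the VM and Cloud TPU the same name, replace your-tpu-name with the name ctpu reported:

(vm)$ OUT_DIR=$STORAGE_BUCKET/training/transformer_asr
(vm)$ TPU_NAME=your-tpu-name

Then run the trainer with big batches and truncated sequences: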
(vm)$ t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_librispeech_tpu \
  --problem=librispeech_train_full_test_clean \
  --train_steps=210000 \
  --eval_steps=3 \
  --local_eval_frequency=100 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu \
  --cloud_tpu_name=$TPU_NAME
After this step completes, run the training again for more steps with a smaller batch size and full sequences:
(vm)$ t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_librispeech_tpu \
  --hparams=max_length=295650,max_input_seq_length=3650,max_target_seq_length=650,batch_size=6 \
  --problem=librispeech_train_full_test_clean \
  --train_steps=230000 \
  --eval_steps=3 \
  --local_eval_frequency=100 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu \
  --cloud_tpu_name=$TPU_NAME
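Optionally, you can monitor either run by pointing TensorBoard at the output directory. This is a sketch, not a step from this tutorial; it assumes TensorBoard is available on the VM (it typically ships with TensorFlow) and that you view it through an SSH tunnel or the Cloud Shell web preview:

(vm)$ tensorboard --logdir=$OUT_DIR &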
Clean up
To avoid incurring charges to your GCP account for the resources used in this topic:
Disconnect from the Compute Engine VM:
(vm)$ exit
Your prompt should now be
user@projectname, showing you are in the Cloud Shell.
In your Cloud Shell, run
ctpu delete with the --zone flag you used when you set up the Cloud TPU, to delete your Compute Engine VM and your Cloud TPU:
$ ctpu delete [optional: --zone]
Run ctpu status to make sure you have no instances allocated, so you avoid unnecessary charges for TPU usage:
$ ctpu status
The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:
2018/04/28 16:16:23 WARNING: Setting zone to "us-central1-b"
No instances currently exist.
        Compute Engine VM:     --
        Cloud TPU:             --
Run gsutil as shown, replacing
YOUR-BUCKET-NAME with the name of the Cloud Storage bucket you created for this tutorial:
$ gsutil rm -r gs://YOUR-BUCKET-NAME
What's next
- Learn more about ctpu, including how to install it on a local machine.
- Explore more Tensor2Tensor models for TPU.
- Experiment with more TPU samples.
- Explore the TPU tools in TensorBoard.