The Transformer model uses stacks of self-attention layers and feed-forward layers to process sequential input like text. It supports the following variants:
- transformer (encoder-decoder) for sequence-to-sequence modeling. Example use case: translation.
- transformer (decoder-only) for single-sequence modeling. Example use case: language modeling.
- transformer_encoder (encoder-only) runs only the encoder, for sequence-to-class modeling. Example use case: sentiment classification.
The Transformer is just one of the models in the Tensor2Tensor library. Tensor2Tensor (T2T) is a library of deep learning models and datasets as well as a set of scripts that allow you to train the models and to download and prepare the data.
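For a quick look at what the library contains, you can list its registered models, problems, and hyperparameter sets. This is a minimal sketch, assuming you install tensor2tensor with pip and that your version of the trainer supports the --registry_help flag:
$ pip install tensor2tensor
$ t2t-trainer --registry_help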
Before you begin
Before starting this tutorial, follow the steps below to check that your Google Cloud Platform project is correctly set up.
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- Select or create a GCP project.
- Make sure that billing is enabled for your project.
This walkthrough uses billable components of Google Cloud Platform. Check the Cloud TPU pricing page to estimate your costs. Be sure to clean up resources you create when you've finished with them to avoid unnecessary charges.
Set up your resources
This section provides information on setting up the Cloud Storage bucket, VM, and Cloud TPU resources for this tutorial.
Create a Cloud Storage bucket
You need a Cloud Storage bucket to store the data you use to train
your model and the training results. The
ctpu up tool used in this tutorial
sets up default permissions for the Cloud TPU service account. If you want
finer-grained permissions, review the access level permissions.
Go to the Cloud Storage page on the GCP Console.
Create a new bucket, specifying the following options:
- A unique name of your choosing.
- Default storage class: Regional.
- Location: If you want to use a Cloud TPU device, accept the default presented. If you want to use a Cloud TPU Pod slice, you must specify a region where Cloud TPU Pods are available.
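If you prefer the command line, you can create the bucket with gsutil instead. This is a sketch only; the bucket name is a placeholder, and us-central1 stands in for whatever region you chose above:
$ gsutil mb -c regional -l us-central1 gs://YOUR-BUCKET-NAME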
This section demonstrates using the Cloud TPU provisioning tool, ctpu, for creating and managing Cloud TPU project resources. The resources consist of a virtual machine (VM) and a Cloud TPU resource that have the same name. These resources must reside in the same region/zone as the bucket you just created.
Run ctpu up to create resources
Open a Cloud Shell window.
Run ctpu up and specify options for either a Cloud TPU device or Pod slice:
You can use flags to change the following options:
- --name - name of your Cloud TPU resource and your VM.
- --zone - region and zone of the physical assets. The zone must be the same for the VM and Cloud TPU. The bucket must be in the same region.
- --project - name of an existing project.
- --tpu-size - version and size of the Cloud TPU. The default is one device with 8 cores.
- --disk-size-gb - disk size. Use only if your dataset requires more than the default 250GB.
- --machine-type - the machine type (CPUs and memory) of the Compute Engine VM.
- --preemptible - creates an interruptible, but lower-cost, Cloud TPU.
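As a hedged example, a single invocation combining several of these flags might look like the following; the name, zone, and v2-8 size are illustrative placeholders, not recommendations:
$ ctpu up --name=my-transformer-tpu \
  --zone=us-central1-b \
  --tpu-size=v2-8 \
  --preemptible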
Set up a Cloud TPU device:
$ ctpu up
The following configuration message appears:
ctpu will use the following configuration:

  Name:               [your TPU's name]
  Zone:               [your project's zone]
  GCP Project:        [your project's name]
  TensorFlow Version: 1.12
  VM:
    Machine Type:     [your machine type]
    Disk Size:        [your disk size]
    Preemptible:      [true or false]
  Cloud TPU:
    Size:             [your TPU size]
    Preemptible:      [true or false]

OK to create your Cloud TPU resources with the above configuration? [Yn]:
Press y to create your Cloud TPU resources.
The ctpu up command creates a virtual machine (VM) and a Cloud TPU that share the same name.
From this point on, a prefix of
(vm)$ means you should run the command on the
Compute Engine VM instance.
Verify your Compute Engine VM
When the ctpu up command has finished executing, verify that your shell prompt has changed from user@projectname to a prompt that includes the name of your VM. This change shows that you are now logged into your Compute Engine VM.
Add disk space to your VM
T2T conveniently packages data generation for many common open-source datasets in its t2t-datagen script. The script downloads the data, preprocesses it, and makes it ready for training. To do so, it needs local disk space.
You can skip this step if you run t2t-datagen on your local machine (install tensor2tensor and then see the t2t-datagen command below).
- Follow the Compute Engine guide to add a disk to your Compute Engine VM.
- Set the disk size to 200GB (the recommended minimum size).
- Set When deleting instance to Delete disk to ensure that the disk is removed when you remove the VM.
Make a note of the path to your new disk. For example: /mnt/disks/mnt-dir.
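Formatting and mounting the disk is covered in the Compute Engine guide linked above. As a rough sketch, assuming the new disk shows up as /dev/sdb (check the actual device name with lsblk before running these), the commands look like:
(vm)$ sudo mkfs.ext4 -F /dev/sdb
(vm)$ sudo mkdir -p /mnt/disks/mnt-dir
(vm)$ sudo mount /dev/sdb /mnt/disks/mnt-dir
(vm)$ sudo chmod a+w /mnt/disks/mnt-dir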
Generate the training dataset
On your Compute Engine VM:
Create the following environment variables:
(vm)$ STORAGE_BUCKET=gs://YOUR-BUCKET-NAME
(vm)$ DATA_DIR=$STORAGE_BUCKET/data/
(vm)$ TMP_DIR=/mnt/disks/mnt-dir/t2t_tmp
where:
- YOUR-BUCKET-NAME is the name of your Cloud Storage bucket.
- DATA_DIR is a location on Cloud Storage.
- TMP_DIR is a location on the disk that you added to your Compute Engine VM at the start of the tutorial.
Create a temporary directory on the disk that you added to your Compute Engine VM at the start of the tutorial:
(vm)$ mkdir /mnt/disks/mnt-dir/t2t_tmp
Run the t2t-datagen script to generate the training and evaluation data on the Cloud Storage bucket, so that the Cloud TPU can access the data:
(vm)$ t2t-datagen --problem=translate_ende_wmt32k_packed --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR
You can view the data on Cloud Storage by going to the Google Cloud Platform Console and
choosing Storage from the left-hand menu. Click the name of the bucket that
you created for this tutorial. You should see sharded files whose names begin with the problem name, translate_ende_wmt32k_packed.
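You can also verify from the command line. For example, with gsutil:
(vm)$ gsutil ls $DATA_DIR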
Train an English-German translation model
Run the following commands on your Compute Engine VM:
Set up an environment variable for the training directory, which must be a Cloud Storage location. For example:
(vm)$ OUT_DIR=$STORAGE_BUCKET/training/transformer_ende_1
Run t2t-trainer to train and evaluate the model:
(vm)$ t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=translate_ende_wmt32k_packed \
  --train_steps=10 \
  --eval_steps=3 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME
The above command runs 10 training steps, then 3 evaluation steps. You can (and should) increase the number of training steps by adjusting the --train_steps flag. Translations usually begin to be reasonable after ~40k steps. The model typically converges to its maximum quality after ~250k steps.
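Once the model has trained for enough steps, you can try it on your own sentences. The following is a sketch using the t2t-decoder tool that ships with Tensor2Tensor; source.en is a hypothetical plain-text file of English sentences, one per line, and the flags mirror the trainer's:
(vm)$ t2t-decoder \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=translate_ende_wmt32k_packed \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --decode_from_file=source.en \
  --decode_to_file=translation.de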
View the output in your Cloud Storage bucket by going to the Google Cloud Platform Console and choosing Storage from the left-hand menu. Click the name of the bucket that you created for this tutorial. Within the bucket, navigate to the training directory, for example, /training/transformer_ende_1, to see the model output.
To see training and evaluation metrics, launch TensorBoard and point it at the training directory in Cloud Storage.
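As a sketch, assuming TensorBoard is installed on your VM (it ships with TensorFlow) and your environment can read from Cloud Storage, you could run:
(vm)$ tensorboard --logdir=$OUT_DIR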
Train a language model
You can use the
transformer model for language modeling as well. Run the
following commands to generate the training data and specify the output file:
(vm)$ t2t-datagen --problem=languagemodel_lm1b8k_packed --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR
(vm)$ OUT_DIR=$STORAGE_BUCKET/training/transformer_lang_model
Run the following command to train and evaluate the model:
(vm)$ t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=languagemodel_lm1b8k_packed \
  --train_steps=10 \
  --eval_steps=8 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME
This model converges after approximately 250,000 steps.
Train a sentiment classifier
You can use the
transformer_encoder model for sentiment classification. Run
the following commands to generate the training data and specify the output file:
(vm)$ t2t-datagen --problem=sentiment_imdb --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR
(vm)$ OUT_DIR=$STORAGE_BUCKET/training/transformer_sentiment_classifier
Run the following command to train and evaluate the model:
(vm)$ t2t-trainer \
  --model=transformer_encoder \
  --hparams_set=transformer_tiny_tpu \
  --problem=sentiment_imdb \
  --train_steps=10 \
  --eval_steps=1 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME
This model achieves approximately 85% accuracy after about 2,000 steps.
Clean up
To avoid incurring charges to your GCP account for the resources used in this tutorial:
Disconnect from the Compute Engine VM:
(vm)$ exit
Your prompt should now be
user@projectname, showing you are in the Cloud Shell.
In your Cloud Shell, run
ctpu delete with the --zone flag you used when you set up the Cloud TPU. This deletes your Compute Engine VM and your Cloud TPU:
$ ctpu delete [optional: --zone]
The operation may take a few moments. A message like the one below indicates there are no more allocated instances:
2018/04/28 16:16:23 WARNING: Setting zone to "us-central1-b"
No instances currently exist.
Compute Engine VM:     --
Cloud TPU:             --
Run ctpu status with the --zone flag you used when you set up the Cloud TPU. This checks that your instance was deleted so you can avoid unnecessary charges for TPU usage:
$ ctpu status [optional: --zone]
Run gsutil as shown, replacing YOUR-BUCKET-NAME with the name of the Cloud Storage bucket you created for this tutorial:
$ gsutil rm -r gs://YOUR-BUCKET-NAME
What's next
- Learn more about ctpu, including how to install it on a local machine.
- Explore more Tensor2Tensor models for TPU.
- Experiment with more TPU samples.
- Explore the TPU tools in TensorBoard.