Running the Transformer with Tensor2Tensor on Cloud TPU

This tutorial shows you how to train the Transformer model (from Attention Is All You Need) with Tensor2Tensor on a Cloud TPU.

Model description

The Transformer model uses stacks of self-attention layers and feed-forward layers to process sequential input like text. It supports the following variants:

  • transformer (decoder-only) for single-sequence modeling. Example use case: language modeling.
  • transformer (encoder-decoder) for sequence-to-sequence modeling. Example use case: translation.
  • transformer_encoder (encoder-only) for sequence-to-class modeling. Example use case: sentiment classification.

The Transformer is just one of the models in the Tensor2Tensor library. Tensor2Tensor (T2T) is a library of deep learning models and datasets, as well as a set of scripts that let you train the models and download and prepare the data.
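
A quick way to see everything the library provides is to print its registry. This is a minimal sketch: on the Compute Engine VM used later in this tutorial, Tensor2Tensor is already installed, so the pip install step is only needed on other machines.

    # Install Tensor2Tensor (already present on the tutorial VM image).
    pip install tensor2tensor

    # List all registered models, hyperparameter sets, and problems.
    t2t-trainer --registry_help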

Objectives

  • Generate the training dataset
  • Train a language model on a single Cloud TPU or a Cloud TPU Pod
  • Train an English-German translation model on a single Cloud TPU
  • Train a sentiment classifier on a single Cloud TPU
  • Clean up Cloud TPU resources

Costs

This tutorial uses billable components of Google Cloud, including:

  • Compute Engine
  • Cloud TPU
  • Cloud Storage

Use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.

Before you begin

If you plan to train on a TPU Pod slice, make sure you read Training on TPU Pods, which explains the special considerations involved when training on a Pod slice.

Before starting this tutorial, follow the steps below to check that your Google Cloud project is correctly set up.

This section provides information on setting up a Cloud Storage bucket and a Compute Engine VM.

  1. Open a Cloud Shell window.

  2. Create a variable for your project's name. Replace project-name in the following command with your Google Cloud project name.

    export PROJECT_NAME=project-name
    
  3. Configure the gcloud command-line tool to use the project where you want to create the Cloud TPU.

    gcloud config set project ${PROJECT_NAME}
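
    To confirm the setting, print the active project; the output should match the name you set:

    gcloud config get-value project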
    
  4. Create a Cloud Storage bucket using the following command, replacing bucket-name with a name of your choice:

    gsutil mb -p ${PROJECT_NAME} -c standard -l europe-west4 -b on gs://bucket-name
    

    This Cloud Storage bucket stores the data you use to train your model and the training results.
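
    To verify the bucket and check its location, you can list its metadata:

    gsutil ls -L -b gs://bucket-name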

  5. Launch a Compute Engine VM using the ctpu up command. The --vm-only flag creates only the VM; the Cloud TPU itself is created later, after the training data has been generated.

    ctpu up --zone=europe-west4-a \
    --vm-only \
    --disk-size-gb=300 \
    --machine-type=n1-standard-8 \
    --tf-version=1.15 \
    --name=transformer-tutorial
    
  6. The configuration you specified appears. Enter y to approve or n to cancel.

  7. When the ctpu up command has finished executing, verify that your shell prompt has changed from username@projectname to username@vm-name. This change shows that you are now logged into your Compute Engine VM. If you are not automatically connected, you can log in by running the following command:

    gcloud compute ssh transformer-tutorial --zone=europe-west4-a
    

    As you continue these instructions, run each command that begins with (vm)$ in your VM session window.

On your Compute Engine VM:

  1. Create the following environment variables, replacing bucket-name with the name of the Cloud Storage bucket you created earlier:

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
    (vm)$ export MODEL_DIR=${STORAGE_BUCKET}/transformer
    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/data
    (vm)$ export TMP_DIR=${HOME}/t2t_tmp
  2. Create a directory to store temporary files:

    (vm)$ mkdir ${TMP_DIR}
  3. Add the path to the tensor2tensor scripts used to process the model data:

    (vm)$ export PATH=${HOME}/.local/bin:${PATH}
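
    You can check that the scripts are now on your PATH:

    (vm)$ which t2t-trainer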

Train a language model on a single Cloud TPU

  1. Generate the training dataset for the language model.

    (vm)$ t2t-datagen --problem=languagemodel_lm1b32k_packed \
     --data_dir=${DATA_DIR} \
     --tmp_dir=${TMP_DIR}
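
    This step downloads and preprocesses the One Billion Word benchmark corpus, which can take a while. When it finishes, you can confirm that the vocabulary and training files are in your bucket:

    (vm)$ gsutil ls ${DATA_DIR}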
  2. Run the following command to create your Cloud TPU resource.

    (vm)$ ctpu up --tpu-only \
     --zone=europe-west4-a \
     --tf-version=1.15 \
     --name=transformer-tutorial
  3. Set an environment variable for the TPU name.

    (vm)$ export TPU_NAME=transformer-tutorial
  4. Run the training script.

    (vm)$ t2t-trainer \
     --model=transformer \
     --hparams_set=transformer_tpu \
     --problem=languagemodel_lm1b32k_packed \
     --eval_steps=3 \
     --data_dir=${DATA_DIR} \
     --output_dir=${MODEL_DIR}/language_lm1b32k \
     --use_tpu=True \
     --cloud_tpu_name=${TPU_NAME} \
     --train_steps=10

    The above command runs 10 training steps followed by 3 evaluation steps, and completes in approximately 5 minutes on a v3-8 TPU node. To make the model more accurate, increase the number of training steps with the --train_steps flag. We recommend training for at least 40,000 steps; the model typically converges to its maximum quality after about 250,000 steps.
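
    t2t-trainer saves checkpoints to the directory given by --output_dir, so rerunning the same command with a larger --train_steps value should resume from the latest checkpoint rather than start over. For example, to continue this model to the recommended 40,000 steps:

    (vm)$ t2t-trainer \
     --model=transformer \
     --hparams_set=transformer_tpu \
     --problem=languagemodel_lm1b32k_packed \
     --eval_steps=3 \
     --data_dir=${DATA_DIR} \
     --output_dir=${MODEL_DIR}/language_lm1b32k \
     --use_tpu=True \
     --cloud_tpu_name=${TPU_NAME} \
     --train_steps=40000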

  5. Delete the Cloud TPU resource you created.

    (vm)$ ctpu delete --tpu-only --zone=europe-west4-a --name=transformer-tutorial

Train a language model on a Cloud TPU Pod

  1. Run the ctpu up command, using the --tpu-size parameter to specify the Pod slice you want to use. For example, the following command uses a v2-32 Pod slice.

    (vm)$ ctpu up --tpu-only \
     --tpu-size=v2-32 \
     --zone=europe-west4-a \
     --tf-version=1.15 \
     --name=transformer-tutorial-pod
  2. Set an environment variable for the new TPU name.

    (vm)$ export TPU_NAME=transformer-tutorial-pod
  3. Run the training script.

    (vm)$ t2t-trainer \
     --model=transformer \
     --hparams_set=transformer_tpu \
     --problem=languagemodel_lm1b32k_packed \
     --eval_steps=3 \
     --data_dir=${DATA_DIR} \
     --output_dir=${MODEL_DIR}/language_lm1b32k_pod \
     --use_tpu=True \
     --cloud_tpu_name=${TPU_NAME} \
     --tpu_num_shards=32 \
     --schedule=train \
     --train_steps=25000

    The above command runs 25,000 training steps. Because --schedule=train is specified, the trainer runs training only and skips evaluation, so the --eval_steps flag has no effect here. The --tpu_num_shards value matches the 32 cores of the v2-32 slice. Training takes approximately 30 minutes on a Cloud TPU v2-32.

    We recommend training the model for at least 40,000 steps. The model typically converges to its maximum quality after about 250,000 steps.

  4. Delete the Cloud TPU Pod slice you created.

    (vm)$ ctpu delete --tpu-only \
     --zone=europe-west4-a \
     --name=transformer-tutorial-pod

Train an English-German translation model on a single Cloud TPU

  1. Use the t2t-datagen script to generate the training and evaluation data for the translation model in your Cloud Storage bucket:

    (vm)$ t2t-datagen \
     --problem=translate_ende_wmt32k_packed \
     --data_dir=${DATA_DIR} \
     --tmp_dir=${TMP_DIR}
  2. Run the following command to create your Cloud TPU resource.

    (vm)$ ctpu up --tpu-only \
     --zone=europe-west4-a \
     --tf-version=1.15 \
     --name=transformer-tutorial
  3. Set an environment variable for the new TPU name.

    (vm)$ export TPU_NAME=transformer-tutorial
  4. Run t2t-trainer to train and evaluate the model:

    (vm)$ t2t-trainer \
     --model=transformer \
     --hparams_set=transformer_tpu \
     --problem=translate_ende_wmt32k_packed \
     --eval_steps=3 \
     --data_dir=${DATA_DIR} \
     --output_dir=${MODEL_DIR}/translate_ende \
     --use_tpu=True \
     --cloud_tpu_name=${TPU_NAME} \
     --train_steps=10

    The above command runs 10 training steps followed by 3 evaluation steps, and completes in approximately 5 minutes on a v3-8 TPU node. You can (and should) increase the number of training steps with the --train_steps flag. Translations usually begin to be reasonable after about 40,000 steps; the model typically converges to its maximum quality after about 250,000 steps.
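
    After longer training, you can try the model on your own sentences with Tensor2Tensor's t2t-decoder script. The sketch below assumes a hypothetical input file ${HOME}/inputs.en containing one English sentence per line; beam_size=4 and alpha=0.6 are typical decoding settings. Depending on your Tensor2Tensor version, you may need to pass the unpacked problem name, translate_ende_wmt32k, when decoding:

    (vm)$ t2t-decoder \
     --model=transformer \
     --hparams_set=transformer_tpu \
     --problem=translate_ende_wmt32k_packed \
     --data_dir=${DATA_DIR} \
     --output_dir=${MODEL_DIR}/translate_ende \
     --decode_hparams="beam_size=4,alpha=0.6" \
     --decode_from_file=${HOME}/inputs.en \
     --decode_to_file=${HOME}/outputs.de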

  5. Delete the Cloud TPU resource you created for training the model on a single device.

    (vm)$ ctpu delete --tpu-only --zone=europe-west4-a --name=transformer-tutorial

Train a sentiment classifier model on a single Cloud TPU

  1. Generate the dataset for the sentiment classifier model.

    (vm)$ t2t-datagen --problem=sentiment_imdb \
     --data_dir=${DATA_DIR} \
     --tmp_dir=${TMP_DIR}
  2. Run the following command to create your Cloud TPU resource.

    (vm)$ ctpu up --tpu-only \
     --zone=europe-west4-a \
     --tf-version=1.15 \
     --name=transformer-tutorial
  3. Run the training script. (This reuses the TPU_NAME variable you set earlier; if you opened a new shell, re-export it first.)

    (vm)$ t2t-trainer \
     --model=transformer_encoder \
     --hparams_set=transformer_tiny_tpu \
     --problem=sentiment_imdb \
     --eval_steps=1 \
     --data_dir=${DATA_DIR} \
     --output_dir=${MODEL_DIR}/sentiment_classifier \
     --use_tpu=True \
     --cloud_tpu_name=${TPU_NAME} \
     --train_steps=10
    

    The above command runs 10 training steps, then 1 evaluation step, and completes in approximately 5 minutes on a v3-8 TPU node. The model reaches approximately 85% accuracy after about 2,000 training steps.
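
    For any of the models in this tutorial, you can monitor training progress by pointing TensorBoard (installed with TensorFlow on the VM) at your model directory; it can read gs:// paths directly. To view the UI in a browser you will also need to forward port 6006, for example with an SSH tunnel:

    (vm)$ tensorboard --logdir=${MODEL_DIR}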

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  1. Disconnect from the Compute Engine instance, if you have not already done so:

    (vm)$ exit
    

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  2. In your Cloud Shell, run ctpu delete with the --zone flag you used when you set up the Cloud TPU. This deletes both your Compute Engine VM and your Cloud TPU:

    $ ctpu delete --zone=europe-west4-a \
    --name=transformer-tutorial
    
  3. Run the following command to verify the Compute Engine VM and Cloud TPU have been shut down:

    $ ctpu status --zone=europe-west4-a
    

    The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:

    2018/04/28 16:16:23 WARNING: Setting zone to "europe-west4-a"
    No instances currently exist.
     Compute Engine VM:     --
     Cloud TPU:             --
    
  4. Run gsutil as shown, replacing bucket-name with the name of the Cloud Storage bucket you created for this tutorial:

    $ gsutil rm -r gs://bucket-name
    

What's next