This tutorial focuses on the FairSeq version of the Transformer model and the WMT 18 translation task, translating English to German.
Objectives
- Prepare the dataset.
- Run the training job.
- Verify the output results.
Costs
This tutorial uses the following billable components of Google Cloud:
- Compute Engine
- Cloud TPU
To generate a cost estimate based on your projected usage,
use the pricing calculator.
Before you begin
Before starting this tutorial, check that your Google Cloud project is correctly set up.
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
This walkthrough uses billable components of Google Cloud. Check the Cloud TPU pricing page to estimate your costs. Be sure to clean up resources you create when you've finished with them to avoid unnecessary charges.
Set up a Compute Engine instance
Open a Cloud Shell window.
Create a variable for your project's ID.
export PROJECT_ID=project-id
Configure the Google Cloud CLI to use the project where you want to create the Cloud TPU.
gcloud config set project ${PROJECT_ID}
The first time you run this command in a new Cloud Shell VM, an Authorize Cloud Shell page is displayed. Click Authorize at the bottom of the page to allow gcloud to make API calls with your credentials.
From Cloud Shell, launch the Compute Engine resource required for this tutorial.
gcloud compute --project=${PROJECT_ID} instances create transformer-tutorial \
--zone=us-central1-a \
--machine-type=n1-standard-16 \
--image-family=torch-xla \
--image-project=ml-images \
--boot-disk-size=200GB \
--scopes=https://www.googleapis.com/auth/cloud-platform
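Optionally, confirm that the instance was created before connecting to it. This check runs in Cloud Shell and simply lists the new VM; the filter value is the instance name used above.
$ gcloud compute instances list --filter="name=transformer-tutorial" --zones=us-central1-a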
Connect to the new Compute Engine instance.
gcloud compute ssh transformer-tutorial --zone=us-central1-a
Launch a Cloud TPU resource
From the Compute Engine virtual machine, launch a Cloud TPU resource using the following command:
(vm) $ gcloud compute tpus create transformer-tutorial \
--zone=us-central1-a \
--network=default \
--version=pytorch-1.13 \
--accelerator-type=v3-8
Identify the IP address for the Cloud TPU resource.
(vm) $ gcloud compute tpus list --zone=us-central1-a
The IP address is located under the NETWORK_ENDPOINTS column. You will need this IP address when you create and configure the PyTorch environment.
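If you prefer to capture the IP address in a shell variable instead of copying it from the table, a sketch like the following should work; it assumes the TPU node reports its endpoint under the networkEndpoints field of gcloud compute tpus describe. You can then use this value in place of ip-address in the environment configuration later in this tutorial.
(vm) $ export TPU_IP_ADDRESS=$(gcloud compute tpus describe transformer-tutorial \
--zone=us-central1-a \
--format='value(networkEndpoints[0].ipAddress)')
(vm) $ echo ${TPU_IP_ADDRESS}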
Download the data
Create a directory, pytorch-tutorial-data, to store the model data.
(vm) $ mkdir $HOME/pytorch-tutorial-data
Navigate to the pytorch-tutorial-data directory.
(vm) $ cd $HOME/pytorch-tutorial-data
Download the model data.
(vm) $ wget https://dl.fbaipublicfiles.com/fairseq/data/wmt18_en_de_bpej32k.zip
Extract the data.
(vm) $ sudo apt-get install unzip && \
unzip wmt18_en_de_bpej32k.zip
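The training command later in this tutorial reads its data from $HOME/pytorch-tutorial-data/wmt18_en_de_bpej32k, so you can confirm that the archive extracted to that directory:
(vm) $ ls $HOME/pytorch-tutorial-data/wmt18_en_de_bpej32k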
Create and configure the PyTorch environment
Start a conda environment.
(vm) $ conda activate torch-xla-1.13
Configure environmental variables for the Cloud TPU resource.
(vm) $ export TPU_IP_ADDRESS=ip-address; \
export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470"
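Before starting a long training run, it can help to verify that PyTorch/XLA can reach the TPU. This one-liner uses the torch_xla.core.xla_model API; if XRT_TPU_CONFIG is set correctly, it should print an XLA device (for example, xla:1) rather than raising an error.
(vm) $ python -c "import torch_xla.core.xla_model as xm; print(xm.xla_device())"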
Train the model
To train the model, run the following script:
(vm) $ python /usr/share/torch-xla-1.13/tpu-examples/deps/fairseq/train.py \
$HOME/pytorch-tutorial-data/wmt18_en_de_bpej32k \
--save-interval=1 \
--arch=transformer_vaswani_wmt_en_de_big \
--max-target-positions=64 \
--attention-dropout=0.1 \
--no-progress-bar \
--criterion=label_smoothed_cross_entropy \
--source-lang=en \
--lr-scheduler=inverse_sqrt \
--min-lr 1e-09 \
--skip-invalid-size-inputs-valid-test \
--target-lang=de \
--label-smoothing=0.1 \
--update-freq=1 \
--optimizer adam \
--adam-betas '(0.9, 0.98)' \
--warmup-init-lr 1e-07 \
--lr 0.0005 \
--warmup-updates 4000 \
--share-all-embeddings \
--dropout 0.3 \
--weight-decay 0.0 \
--valid-subset=valid \
--max-epoch=25 \
--input_shapes 128x64 \
--num_cores=8 \
--metrics_debug \
--log_steps=100
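After training finishes, you can verify the output results by checking for saved checkpoints. The command above does not override fairseq's --save-dir option, which by default writes checkpoints to a checkpoints/ directory under the current working directory, so a quick check looks like the following. With --save-interval=1 and --max-epoch=25 you should see one checkpoint per epoch plus checkpoint_best.pt and checkpoint_last.pt.
(vm) $ ls checkpoints/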
Clean up
Perform a cleanup to avoid incurring unnecessary charges to your account after using the resources you created:
Disconnect from the Compute Engine instance, if you have not already done so:
(vm) $ exit
Your prompt should now be user@projectname, showing you are in the Cloud Shell.
In your Cloud Shell, use the Google Cloud CLI to delete the Compute Engine instance.
$ gcloud compute instances delete transformer-tutorial --zone=us-central1-a
Use Google Cloud CLI to delete the Cloud TPU resource.
$ gcloud compute tpus delete transformer-tutorial --zone=us-central1-a
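To confirm that both resources are gone and no longer incurring charges, list them; once deletion finishes, both commands should return no results for this tutorial's resources.
$ gcloud compute tpus list --zone=us-central1-a
$ gcloud compute instances list --zones=us-central1-a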
What's next
Try the PyTorch colabs:
- Getting Started with PyTorch on Cloud TPUs
- Training MNIST on TPUs
- Training ResNet18 on TPUs with Cifar10 dataset
- Inference with Pretrained ResNet50 Model
- Fast Neural Style Transfer
- MultiCore Training AlexNet on Fashion MNIST
- Single Core Training AlexNet on Fashion MNIST