This tutorial shows how to train DLRM and DCN v2 ranking models, which can be used for tasks such as click-through rate (CTR) prediction. See the note in Set up to run the DLRM or DCN model with synthetic data for how to set the parameter that trains either a DLRM or a DCN v2 ranking model.
The model inputs are numerical and categorical features, and the output is a scalar (for example, a click probability). The model can be trained and evaluated on Cloud TPU. Deep ranking models are both memory intensive (for embedding tables and lookups) and compute intensive (for the deep MLP networks); TPUs are designed for both.
The model uses a TPUEmbedding layer for categorical features. TPU embedding supports large embedding tables with fast lookup, and the size of the embedding tables scales linearly with the size of a TPU Pod. Up to 90 GB of embedding tables can be used for a TPU v3-8, 5.6 TB for a v3-512 Pod, and 22.4 TB for a v3-2048 Pod.
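To make the categorical-feature handling concrete, here is a minimal sketch of how embedding tables can be described with TensorFlow's TPU embedding API. The table name, vocabulary size, and dimension are illustrative values taken from the training configuration used later in this tutorial, not part of the model code itself:

import tensorflow as tf

# One table per categorical feature; vocabulary_size comes from the dataset
# and dim matches the embedding_dim used in the training configuration below.
table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=39884406,  # first entry of vocab_sizes in this tutorial
    dim=32,                    # the EMBEDDING_DIM used later in this tutorial
    name='table_0')

# A feature config maps an input feature to its embedding table.
feature = tf.tpu.experimental.embedding.FeatureConfig(
    table=table, name='feature_0')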
The model code is in the TensorFlow Recommenders library, while the input pipeline, configuration, and training loop are described in the TensorFlow Model Garden.
Objectives
- Set up the training environment
- Run the training job using synthetic data
- Verify the output results
Costs
In this document, you use the following billable components of Google Cloud:
- Compute Engine
- Cloud TPU
- Cloud Storage
To generate a cost estimate based on your projected usage,
use the pricing calculator.
Before you begin
Before starting this tutorial, check that your Google Cloud project is correctly set up.
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
This walkthrough uses billable components of Google Cloud. Check the Cloud TPU pricing page to estimate your costs. Be sure to clean up the TPU resources you create when you've finished with them to avoid unnecessary charges.
Set up your resources
This section provides information on setting up the Cloud Storage bucket, VM, and Cloud TPU resources used by this tutorial.
Open a Cloud Shell window.
Create a variable for your project's ID.
export PROJECT_ID=project-id
Configure Google Cloud CLI to use the project where you want to create Cloud TPU. For more information on the gcloud command, see the Google Cloud CLI Reference.

gcloud config set project ${PROJECT_ID}
The first time you run this command in a new Cloud Shell VM, an Authorize Cloud Shell page is displayed. Click Authorize at the bottom of the page to allow gcloud to make API calls with your credentials.

Create a Service Account for the Cloud TPU project.
gcloud beta services identity create --service tpu.googleapis.com --project $PROJECT_ID
The command returns a Cloud TPU Service Account with the following format:
service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com
Create a Cloud Storage bucket using the following command, where the --location option specifies the region where the bucket should be created. See the types and zones for more details on zones and regions:

gcloud storage buckets create gs://bucket-name --project=${PROJECT_ID} --location=europe-west4
This Cloud Storage bucket stores the data you use to train your model and the training results. The gcloud compute tpus tpu-vm tool used in this tutorial sets up default permissions for the Cloud TPU Service Account you set up in the previous step. If you want finer-grained permissions, review the access level permissions.

The bucket location must be in the same region as your Compute Engine (VM) and your Cloud TPU node.
Launch a Compute Engine VM and Cloud TPU using the gcloud command.

$ gcloud compute tpus tpu-vm create dlrm-dcn-tutorial \
  --zone=europe-west4-a \
  --accelerator-type=v3-8 \
  --version=tpu-vm-tf-2.18.0-se
Command flag descriptions
zone
- The zone where you plan to create your Cloud TPU.
accelerator-type
- The accelerator type specifies the version and size of the Cloud TPU you want to create. For more information about supported accelerator types for each TPU version, see TPU versions.
version
- The Cloud TPU software version.
Connect to the Compute Engine instance using SSH. When you are connected to the VM, your shell prompt changes from username@projectname to username@vm-name:

gcloud compute tpus tpu-vm ssh dlrm-dcn-tutorial --zone=europe-west4-a
Set Cloud Storage bucket variables
Set up the following environment variables, replacing bucket-name with the name of your Cloud Storage bucket:
(vm)$ export STORAGE_BUCKET=gs://bucket-name
(vm)$ export PYTHONPATH="/usr/share/tpu/models/:${PYTHONPATH}"
(vm)$ export EXPERIMENT_NAME=dlrm-exp
Set an environment variable for the TPU name.
(vm)$ export TPU_NAME=local
The training application expects your training data to be accessible in Cloud Storage. The training application also uses your Cloud Storage bucket to store checkpoints during training.
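Optionally, you can verify that the bucket is reachable from the VM before training. Here is a minimal check using TensorFlow's file I/O API; the bucket name is a placeholder for your own:

import tensorflow as tf

# Illustrative check: list the bucket contents to confirm the VM can read it.
# Replace gs://bucket-name with your own bucket.
print(tf.io.gfile.listdir('gs://bucket-name'))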
Set up to run the DLRM or DCN model with synthetic data
The model can be trained on various datasets; two commonly used ones are Criteo Terabyte and Criteo Kaggle. This tutorial trains on synthetic data by setting the flag use_synthetic_data=True.

The synthetic dataset is only useful for understanding how to use a Cloud TPU and validating end-to-end performance. The accuracy numbers and saved model won't be meaningful.

Note: The interaction parameter in the training configuration selects which model is trained: interaction: 'dot' trains a DLRM model, and interaction: 'cross' trains a DCN v2 model.
Visit the Criteo Terabyte and Criteo Kaggle websites for information on how to download and preprocess these datasets.
Install required packages.
(vm)$ pip3 install tensorflow-recommenders
(vm)$ pip3 install -r /usr/share/tpu/models/official/requirements.txt
Change to the script directory.
(vm)$ cd /usr/share/tpu/models/official/recommendation/ranking
Run the training script. This uses a fake, Criteo-like dataset to train the DLRM model.
export EMBEDDING_DIM=32

python3 train.py --mode=train_and_eval \
  --model_dir=${STORAGE_BUCKET}/model_dirs/${EXPERIMENT_NAME} \
  --params_override="
runtime:
  distribution_strategy: 'tpu'
task:
  use_synthetic_data: true
  train_data:
    input_path: '${DATA_DIR}/train/*'
    global_batch_size: 16384
  validation_data:
    input_path: '${DATA_DIR}/eval/*'
    global_batch_size: 16384
  model:
    num_dense_features: 13
    bottom_mlp: [512,256,${EMBEDDING_DIM}]
    embedding_dim: ${EMBEDDING_DIM}
    top_mlp: [1024,1024,512,256,1]
    interaction: 'dot'
    vocab_sizes: [39884406, 39043, 17289, 7420, 20263, 3, 7120, 1543, 63, 38532951, 2953546, 403346, 10, 2208, 11938, 155, 4, 976, 14, 39979771, 25641295, 39664984, 585935, 12972, 108, 36]
trainer:
  use_orbit: false
  validation_interval: 1000
  checkpoint_interval: 1000
  validation_steps: 500
  train_steps: 1000
  steps_per_loop: 1000
"
This training runs for approximately 10 minutes on a v3-8 TPU. When it completes, you will see messages similar to the following:
I0621 21:32:58.519792 139675269142336 tpu_embedding_v2_utils.py:907] Done with log of TPUEmbeddingConfiguration.
I0621 21:32:58.540874 139675269142336 tpu_embedding_v2.py:389] Done initializing TPU Embedding engine.
1000/1000 [==============================] - 335s 335ms/step - auc: 0.7360 - accuracy: 0.6709 - prediction_mean: 0.4984 - label_mean: 0.4976 - loss: 0.0734 - regularization_loss: 0.0000e+00 - total_loss: 0.0734 - val_auc: 0.7403 - val_accuracy: 0.6745 - val_prediction_mean: 0.5065 - val_label_mean: 0.4976 - val_loss: 0.0749 - val_regularization_loss: 0.0000e+00 - val_total_loss: 0.0749
Model: "ranking"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
tpu_embedding (TPUEmbedding) multiple                  1
_________________________________________________________________
mlp (MLP)                    multiple                  154944
_________________________________________________________________
mlp_1 (MLP)                  multiple                  2131969
_________________________________________________________________
dot_interaction (DotInteract multiple                  0
_________________________________________________________________
ranking_1 (Ranking)          multiple                  0
=================================================================
Total params: 2,286,914
Trainable params: 2,286,914
Non-trainable params: 0
_________________________________________________________________
I0621 21:43:54.977140 139675269142336 train.py:177] Train history: {'auc': [0.7359596490859985], 'accuracy': [0.67094486951828], 'prediction_mean': [0.4983849823474884], 'label_mean': [0.4975697994232178], 'loss': [0.07338511198759079], 'regularization_loss': [0], 'total_loss': [0.07338511198759079], 'val_auc': [0.7402724623680115], 'val_accuracy': [0.6744520664215088], 'val_prediction_mean': [0.5064718723297119], 'val_label_mean': [0.4975748658180237], 'val_loss': [0.07486172765493393], 'val_regularization_loss': [0], 'val_total_loss': [0.07486172765493393]}
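The dot_interaction layer in the model summary implements the DLRM feature interaction: it takes the bottom MLP output and the embedding vectors and computes their pairwise dot products. The following is a minimal illustrative sketch of that idea, not the library implementation; the function name and shapes are for illustration only:

import tensorflow as tf

def dot_interaction(features):
    # features: list of [batch, dim] tensors (bottom MLP output + embeddings).
    x = tf.stack(features, axis=1)                # [batch, n, dim]
    pairwise = tf.matmul(x, x, transpose_b=True)  # [batch, n, n] dot products
    n = len(features)
    ones = tf.ones((n, n))
    # Keep each unordered pair once: the strictly lower-triangular entries.
    strict_lower = tf.linalg.band_part(ones, -1, 0) - tf.linalg.band_part(ones, 0, 0)
    mask = tf.cast(strict_lower, tf.bool)
    return tf.boolean_mask(pairwise, mask, axis=1)  # [batch, n*(n-1)/2]

# Example: one dense vector and two embeddings, all of dimension 4.
out = dot_interaction([tf.random.normal([8, 4]) for _ in range(3)])
print(out.shape)  # (8, 3)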
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Disconnect from the Compute Engine instance, if you have not already done so:
(vm)$ exit
Your prompt should now be username@projectname, showing you are in the Cloud Shell.

Delete your Cloud TPU resources.
$ gcloud compute tpus tpu-vm delete dlrm-dcn-tutorial \
  --zone=europe-west4-a
Verify the resources have been deleted by running gcloud compute tpus tpu-vm list. The deletion might take several minutes. The output from the following command shouldn't include any of the resources created in this tutorial:

$ gcloud compute tpus tpu-vm list --zone=europe-west4-a
Delete your Cloud Storage bucket using the gcloud CLI. Replace bucket-name with the name of your Cloud Storage bucket.
$ gcloud storage rm gs://bucket-name --recursive
What's next
The TensorFlow Cloud TPU tutorials generally train the model using a sample dataset. The results of this training are not usable for inference. To use a model for inference, you can train the model on a publicly available dataset or your own dataset. TensorFlow models trained on Cloud TPUs generally require datasets to be in TFRecord format.
You can use the dataset conversion tool sample to convert an image classification dataset into TFRecord format. If you are not using an image classification model, you will have to convert your dataset to TFRecord format yourself. For more information, see TFRecord and tf.Example.
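For tabular ranking data, a record is typically a serialized tf.train.Example. The following is a minimal, illustrative sketch of writing one record; the feature names and values are placeholders, not the exact schema the ranking input pipeline expects:

import tensorflow as tf

# Illustrative record: a label, one dense (float) feature, and one
# categorical (integer ID) feature.
example = tf.train.Example(features=tf.train.Features(feature={
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
    'dense_0': tf.train.Feature(float_list=tf.train.FloatList(value=[0.5])),
    'cat_0': tf.train.Feature(int64_list=tf.train.Int64List(value=[12345])),
}))

# Write the serialized example to a TFRecord file.
with tf.io.TFRecordWriter('/tmp/sample.tfrecord') as writer:
    writer.write(example.SerializeToString())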
Hyperparameter tuning
To improve the model's performance with your dataset, you can tune the model's hyperparameters. You can find information about hyperparameters common to all TPU-supported models on GitHub. Information about model-specific hyperparameters can be found in the source code for each model. For more information on hyperparameter tuning, see Overview of hyperparameter tuning and Tune hyperparameters.
Inference
Once you have trained your model, you can use it for inference (also called prediction). You can use the Cloud TPU inference converter tool to prepare and optimize a TensorFlow model for inference on Cloud TPU v5e. For more information about inference on Cloud TPU v5e, see Cloud TPU v5e inference introduction.
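Once a model has been exported as a SavedModel, it can be loaded for prediction with standard TensorFlow APIs. A minimal sketch, assuming you have exported a SavedModel to a path like the placeholder below:

import tensorflow as tf

# Load an exported SavedModel; the path is a placeholder, not something
# produced by this tutorial's synthetic-data run.
model = tf.saved_model.load('gs://bucket-name/model_dirs/dlrm-exp/export')

# Inspect the available serving signatures before running prediction.
print(list(model.signatures.keys()))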