The AI Platform Training service manages computing resources in the cloud to train your models. This page describes the process of training an XGBoost model using AI Platform Training.
This tutorial trains a simple model to predict a person's income level based on the Census Income Data Set. You create a training application locally, upload it to Cloud Storage, and submit a training job. The AI Platform Training service writes its output to your Cloud Storage bucket and creates logs in Logging.
This content is also available on GitHub as a Jupyter notebook.
How to train your model on AI Platform Training
You can train your model on AI Platform Training in three steps:
- Create your Python model file:
  - Add code to download your data from Cloud Storage so that AI Platform Training can use it.
  - Add code to export and save the model to Cloud Storage after AI Platform Training finishes training the model.
- Prepare a training application package.
- Submit the training job.
Before you begin
Complete the following steps to set up a GCP account, activate the AI Platform Training API, and install and activate the Cloud SDK.
Set up your GCP project
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the AI Platform Training & Prediction and Compute Engine APIs.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
gcloud init
Set up your environment
Choose one of the options below to set up your environment locally on macOS or in a remote environment on Cloud Shell.
For macOS users, we recommend that you set up your environment using the MACOS tab below. Cloud Shell, shown on the CLOUD SHELL tab, is available on macOS, Linux, and Windows. Cloud Shell provides a quick way to try AI Platform Training, but isn't suitable for ongoing development work.
macOS
- Check Python installation
Confirm that you have Python installed and, if necessary, install it:
python -V
- Check pip installation
pip is Python's package manager, included with current versions of Python. Check if you already have pip installed by running pip --version. If not, see how to install pip.
You can upgrade pip using the following command:
pip install -U pip
See the pip documentation for more details.
- Install virtualenv
virtualenv is a tool to create isolated Python environments. Check if you already have virtualenv installed by running virtualenv --version. If not, install virtualenv:
pip install --user --upgrade virtualenv
To create an isolated development environment for this guide, create and activate a new virtual environment in virtualenv. For example, the following commands create and activate an environment named aip-env:
virtualenv aip-env
source aip-env/bin/activate
- For the purposes of this tutorial, run the rest of the commands within your virtual environment. See more information about using virtualenv. To exit virtualenv, run deactivate.
Cloud Shell
- Open the Google Cloud console.
- Click the Activate Google Cloud Shell button at the top of the console window.
A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. It can take a few seconds for the shell session to be initialized. Your Cloud Shell session is ready to use.
- Configure the gcloud command-line tool to use your selected project:
gcloud config set project [selected-project-id]
where [selected-project-id] is your project ID. (Omit the enclosing brackets.)
Install frameworks
macOS
Within your virtual environment, run the following command to install the versions of scikit-learn, XGBoost, and pandas used in AI Platform Training runtime version 2.11:
(aip-env)$ pip install scikit-learn==1.0.2 xgboost==1.6.2 pandas==1.3.5
By providing version numbers in the preceding command, you ensure that the dependencies in your virtual environment match the dependencies in the runtime version. This helps prevent unexpected behavior when your code runs on AI Platform Training.
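To double-check that the pinned versions took effect, you can print each installed version from Python. This quick sanity check is not part of the tutorial's published steps:
# Confirm that the installed versions match runtime version 2.11.
import pandas
import sklearn
import xgboost

print('scikit-learn:', sklearn.__version__)  # expect 1.0.2
print('xgboost:', xgboost.__version__)       # expect 1.6.2
print('pandas:', pandas.__version__)         # expect 1.3.5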
For more details, installation options, and troubleshooting information, refer to the installation instructions for scikit-learn, XGBoost, and pandas.
Cloud Shell
Run the following command to install scikit-learn, XGBoost, and pandas:
pip install --user scikit-learn xgboost pandas
For more details, installation options, and troubleshooting information, refer to the installation instructions for scikit-learn, XGBoost, and pandas.
Set up your Cloud Storage bucket
You need a Cloud Storage bucket to store your training code and dependencies. For the purposes of this tutorial, it is easiest to use a dedicated Cloud Storage bucket in the same project you're using for AI Platform Training.
If you're using a bucket in a different project, you must ensure that the AI Platform Training service accounts can access your training code and dependencies in Cloud Storage; without the appropriate permissions, your training job fails. See how to grant permissions for storage.
Make sure to use or set up a bucket in the same region where you plan to run training jobs. See the available regions for AI Platform Training services.
This section shows you how to create a new bucket. You can use an existing bucket, as long as it meets the region and access requirements above.
- Specify a name for your new bucket. The name must be unique across all buckets in Cloud Storage.
BUCKET_NAME="YOUR_BUCKET_NAME"
For example, use your project name with -aiplatform appended:
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET_NAME=${PROJECT_ID}-aiplatform
- Check the bucket name that you created:
echo $BUCKET_NAME
- Select a region for your bucket and set a REGION environment variable. Use the same region where you plan to run AI Platform Training jobs. See the available regions for AI Platform Training services.
For example, the following code creates REGION and sets it to us-central1:
REGION=us-central1
- Create the new bucket:
gcloud storage buckets create gs://$BUCKET_NAME --location=$REGION
About the data
The Census Income Data Set that this sample uses for training is hosted by the UC Irvine Machine Learning Repository.
Census data courtesy of: Lichman, M. (2013). UCI Machine Learning Repository http://archive.ics.uci.edu/ml. Irvine, CA: University of California, School of Information and Computer Science. This dataset is publicly available for anyone to use under the following terms provided by the Dataset Source - http://archive.ics.uci.edu/ml - and is provided "AS IS" without any warranty, express or implied, from Google. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
For your convenience, we have hosted the data in a public Cloud Storage bucket, gs://cloud-samples-data/ai-platform/sklearn/census_data/, which you can download within your Python training file.
Create your Python model file
You can find all the training code for this section on GitHub: train.py.
The rest of this section provides an explanation of what the training code does.
Setup
Import the following libraries from the Python standard library, XGBoost, pandas, scikit-learn, and the Google Cloud client library for Cloud Storage. Then set a variable for the name of your Cloud Storage bucket.
import xgboost as xgb
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import subprocess
from google.cloud import storage
# Fill in your Cloud Storage bucket name
BUCKET_ID = 'YOUR_BUCKET_NAME'
Download data from Cloud Storage
During the typical development process, you upload your own data to Cloud Storage so that AI Platform Training can access it. The data for this tutorial is hosted in a public bucket: gs://cloud-samples-data/ai-platform/sklearn/census_data/
The code below downloads the training dataset, adult.data. (Evaluation data is available in adult.test, but is not used in this tutorial.)
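The published train.py performs this download. As a minimal sketch, assuming the object path shown above, you could use the google.cloud.storage client imported in the Setup section:
# Download the public training data to the local working directory.
# The bucket and object path below match the public dataset location
# given above; adjust them if you host the data elsewhere.
bucket = storage.Client().bucket('cloud-samples-data')
blob = bucket.blob('ai-platform/sklearn/census_data/adult.data')
blob.download_to_filename('adult.data')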
Add your model code
The model training code does a few basic steps:
- Define and load data
- Convert categorical features to numerical features
- Extract numerical features with a scikit-learn pipeline
- Export and save the model to Cloud Storage
Define and load data
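The published train.py contains the authoritative version of this step. The following is a minimal sketch, assuming the standard column names of the Census Income Data Set and that adult.data has no header row:
# Column names for the Census Income Data Set; the last column is the label.
COLUMNS = (
    'age', 'workclass', 'fnlwgt', 'education', 'education-num',
    'marital-status', 'occupation', 'relationship', 'race', 'sex',
    'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
    'income-level',
)

# Load the training set; adult.data has no header row.
raw_training_data = pd.read_csv('adult.data', header=None, names=COLUMNS)

# Separate the features from the label. Values in this file carry a
# leading space, hence ' >50K'.
train_features = raw_training_data.drop('income-level', axis=1)
train_labels = (raw_training_data['income-level'] == ' >50K')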
Convert categorical features to numerical features
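XGBoost requires numerical inputs. Here is a sketch of one common approach, using the LabelEncoder imported in the Setup section to replace each categorical (object-dtype) column with integer codes:
# Encode each categorical column in place with its own LabelEncoder.
encoders = {}
for col in train_features.columns:
    if train_features[col].dtype == 'object':
        encoders[col] = LabelEncoder()
        train_features[col] = encoders[col].fit_transform(train_features[col])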
Train, export, and save the model to Cloud Storage
If your Cloud Storage bucket is in the same project you're using for AI Platform Training, then AI Platform Training can read from and write to your bucket. If not, you need to make sure that the project you are using to run AI Platform Training can access your Cloud Storage bucket. See how to grant permissions for storage.
Make sure to name your model file model.pkl, model.joblib, or model.bst if you want to use it to request online predictions with AI Platform Prediction.
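With those requirements in mind, here is a minimal sketch of the training and export step. The census_[DATE]_[TIME] folder name is illustrative (it matches the verification step below), and the empty parameter dict and 20 boosting rounds are placeholder settings, not tuned values:
import datetime

# Train the model.
dtrain = xgb.DMatrix(train_features, label=train_labels)
bst = xgb.train({}, dtrain, 20)

# Export the model to a file; the name model.bst lets you request
# online predictions with AI Platform Prediction later.
model_filename = 'model.bst'
bst.save_model(model_filename)

# Upload the model to a timestamped folder in your Cloud Storage bucket.
folder = datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S')
bucket = storage.Client().bucket(BUCKET_ID)
blob = bucket.blob('{}/{}'.format(folder, model_filename))
blob.upload_from_filename(model_filename)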
Verify model file upload to Cloud Storage (optional)
In the command line, view the contents of the destination model folder to verify that your model file has been uploaded to Cloud Storage. Set an environment variable (BUCKET_ID) for the name of your bucket, if you have not already done so.
gcloud storage ls gs://$BUCKET_ID/census_*
The output should appear similar to the following:
gs://[YOUR-PROJECT-ID]/census_[DATE]_[TIME]/model.bst
Create training application package
The easiest (and recommended) way to create a training application package is to use gcloud to package and upload the application when you submit your training job. This method lets you create a very simple file structure with only two files. For this tutorial, the file structure of your training application package should appear similar to the following:
census_training/
__init__.py
train.py
- Create a directory locally:
mkdir census_training
- Create a blank file named __init__.py:
touch census_training/__init__.py
- Save your training code in one Python file within your census_training directory. See the example code for train.py. You can use cURL to download and save the file:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/cloudml-samples/master/xgboost/notebooks/census_training/train.py > census_training/train.py
Learn more about packaging a training application.
Submit training job
In this section, you use gcloud ai-platform jobs submit training to submit your training job.
Specify training job parameters
Set the following environment variables for each parameter in your training job request:
- PROJECT_ID - Use the PROJECT_ID that matches your Google Cloud project.
- BUCKET_ID - The name of your Cloud Storage bucket.
- JOB_NAME - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%S").
- JOB_DIR - The path to a Cloud Storage location to use for your training job's output files. For example: gs://$BUCKET_ID/xgboost_job_dir.
- TRAINING_PACKAGE_PATH - The local path to the root directory of your training application. In this case: ./census_training/.
- MAIN_TRAINER_MODULE - Specifies which file the AI Platform Training service should run. This is formatted as [YOUR_FOLDER_NAME.YOUR_PYTHON_FILE_NAME]. In this case: census_training.train.
- REGION - The name of the region you're using to run your training job. Use one of the available regions for the AI Platform Training service. Make sure your Cloud Storage bucket is in the same region. This tutorial uses us-central1.
- RUNTIME_VERSION - You must specify an AI Platform Training runtime version that supports scikit-learn. In this example: 2.11.
- PYTHON_VERSION - The Python version to use for the job. For this tutorial, specify Python 3.7.
- SCALE_TIER - A predefined cluster specification for machines to run your training job. In this case: BASIC. You can also use custom scale tiers to define your own cluster configuration for training.
For your convenience, the environment variables for this tutorial are below. Replace [VALUES-IN-BRACKETS] with the appropriate values:
PROJECT_ID=[YOUR-PROJECT-ID]
BUCKET_ID=[YOUR-BUCKET-ID]
JOB_NAME=census_training_$(date +"%Y%m%d_%H%M%S")
JOB_DIR=gs://$BUCKET_ID/xgboost_job_dir
TRAINING_PACKAGE_PATH="[YOUR-LOCAL-PATH-TO-TRAINING-PACKAGE]/census_training/"
MAIN_TRAINER_MODULE=census_training.train
REGION=us-central1
RUNTIME_VERSION=2.11
PYTHON_VERSION=3.7
SCALE_TIER=BASIC
Submit the request:
gcloud ai-platform jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--package-path $TRAINING_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier $SCALE_TIER
You should see output similar to the following:
Job [census_training_[DATE]_[TIME]] submitted successfully.
Your job is still active. You may view the status of your job with the command

  $ gcloud ai-platform jobs describe census_training_[DATE]_[TIME]

or continue streaming the logs with the command

  $ gcloud ai-platform jobs stream-logs census_training_[DATE]_[TIME]

jobId: census_training_[DATE]_[TIME]
state: QUEUED
Viewing your training logs (optional)
AI Platform Training captures all stdout and stderr streams and logging statements. These logs are stored in Logging; they are visible both during and after execution.
To view the logs for your training job:
Console
- Open your AI Platform Training Jobs page.
- Select the name of the training job to inspect. This brings you to the Job details page for your selected training job.
- Within the job details, select the View logs link. This brings you to the Logging page, where you can search and filter logs for your selected training job.
gcloud
You can view logs in your terminal with gcloud ai-platform jobs stream-logs:
gcloud ai-platform jobs stream-logs $JOB_NAME
What's next
- Get online predictions with XGBoost on AI Platform Training.
- See how to use custom scale tiers to define your own cluster configuration for training.