This page shows you how to train a classification or regression model from a tabular dataset using either the Google Cloud console or the Vertex AI API.
Before you begin
Before you can train a model, you must complete the following:
Train a model
Google Cloud console
In the Google Cloud console, in the Vertex AI section, go to the Datasets page.
Click the name of the dataset you want to use to train your model to open its details page.
If your data type uses annotation sets, select the annotation set you want to use for this model.
Click Train new model.
Select Other.
On the Train new model page, complete the following steps:
Select the model training method.
AutoML is a good choice for a wide range of use cases.
Click Continue.
Enter the display name for your new model.
Select your target column.
The target column is the value that the model will predict.
Learn more about target column requirements.
Optional: To export your test dataset to BigQuery, check Export test dataset to BigQuery and provide the name of the table.
Optional: To choose how to split the data between training, test, and validation sets, open the Advanced options. You can choose between the following data split options:
- Random (Default): Vertex AI randomly selects the rows associated with each of the data sets. By default, Vertex AI selects 80% of your data rows for the training set, 10% for the validation set, and 10% for the test set.
- Manual: Vertex AI selects data rows for each of the data sets based on the values in a data split column. Provide the name of the data split column.
- Chronological: Vertex AI splits data based on the timestamp in a time column. Provide the name of the time column.
Learn more about data splits.
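To make the default random split concrete, here is an illustrative Python sketch of an 80/10/10 shuffle. It approximates the idea of the Random option; it is not the exact sampling Vertex AI performs, and the function name is ours.

```python
import random

def random_split(rows, train=0.8, validation=0.1, test=0.1, seed=42):
    """Illustrative 80/10/10 random split; Vertex AI's own sampling may differ."""
    assert abs(train + validation + test - 1.0) < 1e-9, "fractions must sum to 1.0"
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * validation)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = random_split(list(range(1000)))
print(len(train_set), len(val_set), len(test_set))  # 800 100 100
```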
Click Continue.
Optional: Click Generate statistics. Generating statistics populates the Transformation dropdown menus.
On the Training options page, review your column list and exclude any columns that should not be used to train the model.
Review the transformations selected for your included features, along with whether invalid data is allowed, and make any required updates.
Learn more about transformations and invalid data.
If you want to specify a weight column, or change your optimization objective from the default, open the Advanced options and make your selections.
Learn more about weight columns and optimization objectives.
Click Continue.
In the Compute and pricing window, configure as follows:
Enter the maximum number of hours you want your model to train for.
This setting helps you put a cap on the training costs. The actual time elapsed can be longer than this value, because there are other operations involved in creating a new model.
Suggested training time is related to the size of your training data. The table below shows suggested training time ranges by row count; a large number of columns will also increase the required training time.
| Rows | Suggested training time |
|---|---|
| Less than 100,000 | 1-3 hours |
| 100,000 - 1,000,000 | 1-6 hours |
| 1,000,000 - 10,000,000 | 1-12 hours |
| More than 10,000,000 | 3-24 hours |

Click Start Training.
Model training can take many hours, depending on the size and complexity of your data and your training budget, if you specified one. You can close this tab and return to it later. You will receive an email when your model has completed training.
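As a sketch of the budget math, the following illustrative Python helpers convert node hours to the milli-node-hours unit the API expects and mirror the suggested-training-time table above. The helper names are ours, not part of any SDK.

```python
def milli_node_hours(hours):
    """Convert node hours to milli node hours (1 node hour = 1,000 milli node hours)."""
    return int(hours * 1000)

def suggested_training_hours(row_count):
    """Suggested training-time range in hours, per the table above."""
    if row_count < 100_000:
        return (1, 3)
    if row_count < 1_000_000:
        return (1, 6)
    if row_count < 10_000_000:
        return (1, 12)
    return (3, 24)

print(milli_node_hours(2))                # 2000
print(suggested_training_hours(500_000))  # (1, 6)
```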
API
Select a tabular data type objective.
Classification
Select a tab for your language or environment:
REST
You use the trainingPipelines.create command to train a model.
Train the model.
Before using any of the request data, make the following replacements:
- LOCATION: Your region.
- PROJECT: Your project ID.
- TRAININGPIPELINE_DISPLAY_NAME: Display name for the training pipeline created for this operation.
- TARGET_COLUMN: The column (value) you want this model to predict.
- WEIGHT_COLUMN: (Optional) The weight column. Learn more.
- TRAINING_BUDGET: The maximum amount of time you want the model to train, in milli node hours (1,000 milli node hours equals one node hour).
- OPTIMIZATION_OBJECTIVE: Required only if you do not want the default optimization objective for your prediction type. Learn more.
- TRANSFORMATION_TYPE: The transformation type is provided for each column used to train the model. Learn more.
- COLUMN_NAME: The name of the column with the specified transformation type. Every column used to train the model must be specified.
- MODEL_DISPLAY_NAME: Display name for the newly trained model.
- DATASET_ID: ID for the training Dataset.
- PROJECT_NUMBER: Your project's automatically generated project number.

You can provide a Split object to control your data split. For information about controlling the data split, see Control the data split using REST.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines
Request JSON body:
{
  "displayName": "TRAININGPIPELINE_DISPLAY_NAME",
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tabular_1.0.0.yaml",
  "trainingTaskInputs": {
    "targetColumn": "TARGET_COLUMN",
    "weightColumn": "WEIGHT_COLUMN",
    "predictionType": "classification",
    "trainBudgetMilliNodeHours": TRAINING_BUDGET,
    "optimizationObjective": "OPTIMIZATION_OBJECTIVE",
    "transformations": [
      {"TRANSFORMATION_TYPE_1": {"column_name": "COLUMN_NAME_1"}},
      {"TRANSFORMATION_TYPE_2": {"column_name": "COLUMN_NAME_2"}},
      ...
    ]
  },
  "modelToUpload": {"displayName": "MODEL_DISPLAY_NAME"},
  "inputDataConfig": {
    "datasetId": "DATASET_ID"
  }
}
You should receive a JSON response similar to the following:
{
  "name": "projects/PROJECT_NUMBER/locations/us-central1/trainingPipelines/4567",
  "displayName": "myModelName",
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tabular_1.0.0.yaml",
  "modelToUpload": {
    "displayName": "myModelName"
  },
  "state": "PIPELINE_STATE_PENDING",
  "createTime": "2020-08-18T01:22:57.479336Z",
  "updateTime": "2020-08-18T01:22:57.479336Z"
}
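To see the request shape with concrete values, here is an illustrative Python sketch that assembles the URL and JSON body for a classification pipeline. The project, column, and display names are hypothetical placeholders, not values from this guide; sending the request (for example with an authenticated HTTP client) is omitted.

```python
import json

# Hypothetical placeholder values for illustration only.
location = "us-central1"
project = "my-project"

url = (f"https://{location}-aiplatform.googleapis.com/v1/"
       f"projects/{project}/locations/{location}/trainingPipelines")

body = {
    "displayName": "my-training-pipeline",
    "trainingTaskDefinition": ("gs://google-cloud-aiplatform/schema/trainingjob/"
                               "definition/automl_tabular_1.0.0.yaml"),
    "trainingTaskInputs": {
        "targetColumn": "label",
        "predictionType": "classification",
        "trainBudgetMilliNodeHours": 1000,  # one node hour
        "transformations": [
            {"auto": {"column_name": "feature_1"}},
            {"auto": {"column_name": "feature_2"}},
        ],
    },
    "modelToUpload": {"displayName": "my-model"},
    "inputDataConfig": {"datasetId": "1234567890"},
}

print(url)
print(json.dumps(body, indent=2))
```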
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Regression
Select a tab for your language or environment:
REST
You use the trainingPipelines.create command to train a model.
Train the model.
Before using any of the request data, make the following replacements:
- LOCATION: Your region.
- PROJECT: Your project ID.
- TRAININGPIPELINE_DISPLAY_NAME: Display name for the training pipeline created for this operation.
- TARGET_COLUMN: The column (value) you want this model to predict.
- WEIGHT_COLUMN: (Optional) The weight column. Learn more.
- TRAINING_BUDGET: The maximum amount of time you want the model to train, in milli node hours (1,000 milli node hours equals one node hour).
- OPTIMIZATION_OBJECTIVE: Required only if you do not want the default optimization objective for your prediction type. Learn more.
- TRANSFORMATION_TYPE: The transformation type is provided for each column used to train the model. Learn more.
- COLUMN_NAME: The name of the column with the specified transformation type. Every column used to train the model must be specified.
- MODEL_DISPLAY_NAME: Display name for the newly trained model.
- DATASET_ID: ID for the training Dataset.
- PROJECT_NUMBER: Your project's automatically generated project number.

You can provide a Split object to control your data split. For information about controlling the data split, see Control the data split using REST.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines
Request JSON body:
{
  "displayName": "TRAININGPIPELINE_DISPLAY_NAME",
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tabular_1.0.0.yaml",
  "trainingTaskInputs": {
    "targetColumn": "TARGET_COLUMN",
    "weightColumn": "WEIGHT_COLUMN",
    "predictionType": "regression",
    "trainBudgetMilliNodeHours": TRAINING_BUDGET,
    "optimizationObjective": "OPTIMIZATION_OBJECTIVE",
    "transformations": [
      {"TRANSFORMATION_TYPE_1": {"column_name": "COLUMN_NAME_1"}},
      {"TRANSFORMATION_TYPE_2": {"column_name": "COLUMN_NAME_2"}},
      ...
    ]
  },
  "modelToUpload": {"displayName": "MODEL_DISPLAY_NAME"},
  "inputDataConfig": {
    "datasetId": "DATASET_ID"
  }
}
You should receive a JSON response similar to the following:
{
  "name": "projects/PROJECT_NUMBER/locations/us-central1/trainingPipelines/4567",
  "displayName": "myModelName",
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tabular_1.0.0.yaml",
  "modelToUpload": {
    "displayName": "myModelName"
  },
  "state": "PIPELINE_STATE_PENDING",
  "createTime": "2020-08-18T01:22:57.479336Z",
  "updateTime": "2020-08-18T01:22:57.479336Z"
}
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Control the data split using REST
You can control how your training data is split between the training, validation, and test sets. When you use the Vertex AI API, use the Split object to determine your data split. The Split object can be included in the inputDataConfig object as one of several object types, each of which provides a different way to split the training data.

The methods you can use to split your data depend on your data type:
FractionSplit:
- TRAINING_FRACTION: The fraction of the training data to be used for the training set.
- VALIDATION_FRACTION: The fraction of the training data to be used for the validation set.
- TEST_FRACTION: The fraction of the training data to be used for the test set.

If any of the fractions are specified, all must be specified. The fractions must add up to 1.0. Learn more.

"fractionSplit": {
  "trainingFraction": TRAINING_FRACTION,
  "validationFraction": VALIDATION_FRACTION,
  "testFraction": TEST_FRACTION
},
PredefinedSplit:
- DATA_SPLIT_COLUMN: The column containing the data split values (TRAIN, VALIDATION, TEST).

Manually specify the data split for each row by using a split column. Learn more.

"predefinedSplit": {
  "key": DATA_SPLIT_COLUMN
},
TimestampSplit:
- TRAINING_FRACTION: The fraction of the training data to be used for the training set. Defaults to 0.80.
- VALIDATION_FRACTION: The fraction of the training data to be used for the validation set. Defaults to 0.10.
- TEST_FRACTION: The fraction of the training data to be used for the test set. Defaults to 0.10.
- TIME_COLUMN: The column containing the timestamps.

If any of the fractions are specified, all must be specified. The fractions must add up to 1.0. Learn more.

"timestampSplit": {
  "trainingFraction": TRAINING_FRACTION,
  "validationFraction": VALIDATION_FRACTION,
  "testFraction": TEST_FRACTION,
  "key": TIME_COLUMN
}
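Before building the request, a small helper can enforce the sum-to-1.0 rule on the fractions. This is an illustrative sketch; the function name is ours, not part of any SDK.

```python
def fraction_split(training, validation, test):
    """Build a fractionSplit object, enforcing that the fractions sum to 1.0."""
    total = training + validation + test
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"fractions must sum to 1.0, got {total}")
    return {"fractionSplit": {"trainingFraction": training,
                              "validationFraction": validation,
                              "testFraction": test}}

split = fraction_split(0.8, 0.1, 0.1)
print(split)
```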
Optimization objectives for classification or regression models
When you train a model, Vertex AI selects a default optimization objective based on your model type and the data type used for your target column.
Classification models:

| Optimization objective | API value | Use this objective if you want to... |
|---|---|---|
| AUC ROC | maximize-au-roc | Maximize the area under the receiver operating characteristic (ROC) curve. Distinguishes between classes. Default value for binary classification. |
| Log loss | minimize-log-loss | Keep prediction probabilities as accurate as possible. Only supported objective for multi-class classification. |
| AUC PR | maximize-au-prc | Maximize the area under the precision-recall curve. Optimizes results for predictions for the less common class. |
| Precision at Recall | maximize-precision-at-recall | Optimize precision at a specific recall value. |
| Recall at Precision | maximize-recall-at-precision | Optimize recall at a specific precision value. |
Regression models:

| Optimization objective | API value | Use this objective if you want to... |
|---|---|---|
| RMSE | minimize-rmse | Minimize root-mean-squared error (RMSE). Captures more extreme values accurately. Default value. |
| MAE | minimize-mae | Minimize mean-absolute error (MAE). Views extreme values as outliers with less impact on the model. |
| RMSLE | minimize-rmsle | Minimize root-mean-squared log error (RMSLE). Penalizes error on relative size rather than absolute value. Useful when both predicted and actual values can be quite large. |
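The defaults in these tables can be captured in a small lookup. The prediction-type keys below are illustrative labels of ours, not API values; the objective strings are the API values from the tables above.

```python
# Default optimization objectives per the tables above.
DEFAULT_OBJECTIVES = {
    "binary_classification": "maximize-au-roc",      # default for binary classification
    "multi_class_classification": "minimize-log-loss",  # only supported multi-class objective
    "regression": "minimize-rmse",                   # default for regression
}

def default_objective(prediction_type):
    """Return the default optimizationObjective for an (illustrative) prediction type."""
    return DEFAULT_OBJECTIVES[prediction_type]

print(default_objective("regression"))  # minimize-rmse
```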
What's next
- Evaluate your model.
- Learn how to export your model.