The CREATE MODEL statement for deep neural network (DNN) models

This document describes the CREATE MODEL statement for creating deep neural network (DNN) models in BigQuery. DNN models are built using the TensorFlow DNNEstimator. DNN models support hyperparameter tuning.

For information about the supported SQL statements and functions for each model type, see End-to-end user journey for each model.

CREATE MODEL syntax

{CREATE MODEL | CREATE MODEL IF NOT EXISTS | CREATE OR REPLACE MODEL} model_name
OPTIONS(model_option_list)
AS query_statement

model_option_list:
MODEL_TYPE = { 'DNN_CLASSIFIER' | 'DNN_REGRESSOR' }
    [, LEARN_RATE = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) } ]
    [, OPTIMIZER = { { 'ADAGRAD' | 'ADAM' | 'FTRL' | 'RMSPROP' | 'SGD' } | HPARAM_CANDIDATES([candidates]) } ]
    [, L1_REG = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) } ]
    [, L2_REG = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) } ]
    [, ACTIVATION_FN = { { 'RELU' | 'RELU6' | 'CRELU' | 'ELU' | 'SELU' | 'SIGMOID' | 'TANH' } | HPARAM_CANDIDATES([candidates]) } ]
    [, BATCH_SIZE = { int64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) } ]
    [, DROPOUT = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) } ]
    [, HIDDEN_UNITS = { int_array | HPARAM_CANDIDATES([candidates]) } ]
    [, INTEGRATED_GRADIENTS_NUM_STEPS = int64_value ]
    [, TF_VERSION = { '1.15' | '2.8.0' } ]
    [, AUTO_CLASS_WEIGHTS = { TRUE | FALSE } ]
    [, CLASS_WEIGHTS = struct_array ]
    [, ENABLE_GLOBAL_EXPLAIN = { TRUE | FALSE } ]
    [, EARLY_STOP = { TRUE | FALSE } ]
    [, MIN_REL_PROGRESS = float64_value ]
    [, INPUT_LABEL_COLS = string_array ]
    [, MAX_ITERATIONS = int64_value ]
    [, WARM_START = { TRUE | FALSE } ]
    [, DATA_SPLIT_METHOD = { 'AUTO_SPLIT' | 'RANDOM' | 'CUSTOM' | 'SEQ' | 'NO_SPLIT' } ]
    [, DATA_SPLIT_EVAL_FRACTION = float64_value ]
    [, DATA_SPLIT_TEST_FRACTION = float64_value ]
    [, DATA_SPLIT_COL = string_value ]
    [, NUM_TRIALS = int64_value ]
    [, MAX_PARALLEL_TRIALS = int64_value ]
    [, HPARAM_TUNING_ALGORITHM = { 'VIZIER_DEFAULT' | 'RANDOM_SEARCH' | 'GRID_SEARCH' } ]
    [, HPARAM_TUNING_OBJECTIVES = { 'ROC_AUC' | 'R2_SCORE' | ... } ]

CREATE MODEL

Creates and trains a new model in the specified dataset. If the model name exists, CREATE MODEL returns an error.

CREATE MODEL IF NOT EXISTS

Creates and trains a new model only if the model doesn't exist in the specified dataset.

CREATE OR REPLACE MODEL

Creates and trains a model and replaces an existing model with the same name in the specified dataset.
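
For example, the following statement creates a DNN classifier, replacing any existing model of the same name. This is a minimal sketch; the dataset, model, table, and label column names (mydataset, sample_model, sample_table, label) are assumptions for illustration:

CREATE OR REPLACE MODEL `mydataset.sample_model`
OPTIONS(
  MODEL_TYPE = 'DNN_CLASSIFIER',
  HIDDEN_UNITS = [64, 32],
  INPUT_LABEL_COLS = ['label']
) AS
SELECT * FROM `mydataset.sample_table`;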

model_name

The name of the model you're creating or replacing. The model name must be unique in the dataset: no other model or table can have the same name. The model name must follow the same naming rules as a BigQuery table. A model name can:

  • Contain up to 1,024 characters
  • Contain letters (upper or lower case), numbers, and underscores

model_name is not case-sensitive.

If you don't have a default project configured, then you must prepend the project ID to the model name in the following format, including backticks:

`[PROJECT_ID].[DATASET].[MODEL]`

For example, `myproject.mydataset.mymodel`.

MODEL_TYPE

Syntax

MODEL_TYPE = { 'DNN_CLASSIFIER' | 'DNN_REGRESSOR' }

Description

Specifies the model type. This option is required.

LEARN_RATE

Syntax

LEARN_RATE = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) }

Description

The initial learn rate for training.

Arguments

If you aren't running hyperparameter tuning, then you can specify a FLOAT64 value. The default value is 0.001.

If you are running hyperparameter tuning, use one of the following options:

  • Use the HPARAM_RANGE keyword and specify two FLOAT64 values that define the range to use for the hyperparameter. For example, LEARN_RATE = HPARAM_RANGE(0.001, 0.005).
  • Use the HPARAM_CANDIDATES keyword and specify an array of FLOAT64 values to provide discrete values to use for the hyperparameter. For example, LEARN_RATE = HPARAM_CANDIDATES([0, 0.001, 0.01, 0.1]).

When running hyperparameter tuning, the valid range is [0, 1.0], the default range is [0, 1.0], and the scale type is LINEAR.
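
For example, the following statement tunes the learn rate over a continuous range across 10 trials. This is a sketch; the dataset, model, table, and label column names are assumptions for illustration:

CREATE OR REPLACE MODEL `mydataset.tuned_dnn_model`
OPTIONS(
  MODEL_TYPE = 'DNN_REGRESSOR',
  NUM_TRIALS = 10,
  LEARN_RATE = HPARAM_RANGE(0.001, 0.005),
  INPUT_LABEL_COLS = ['label']
) AS
SELECT * FROM `mydataset.sample_table`;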

OPTIMIZER

Syntax

OPTIMIZER = { { 'ADAGRAD' | 'ADAM' | 'FTRL' | 'RMSPROP' | 'SGD' } | HPARAM_CANDIDATES([candidates]) }

Description

The optimizer for training the model.

Arguments

This option accepts the following values:

  • 'ADAGRAD': the Adagrad optimizer
  • 'ADAM': the Adam optimizer. This is the default value.
  • 'FTRL': the FTRL (follow the regularized leader) optimizer
  • 'RMSPROP': the RMSProp optimizer
  • 'SGD': the stochastic gradient descent optimizer

If you are running hyperparameter tuning, then you can provide more than one value for this option by using HPARAM_CANDIDATES and specifying an array. For example, OPTIMIZER = HPARAM_CANDIDATES(['ADAM', 'FTRL', 'SGD']).

L1_REG

Syntax

L1_REG = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) }

Description

The L1 regularization strength of the OPTIMIZER. You can only use this option when OPTIMIZER is set to one of the following values:

  • 'ADAGRAD'
  • 'FTRL'
  • 'SGD'

Arguments

If you aren't running hyperparameter tuning, then you can specify a FLOAT64 value. The default value is 0.

If you are running hyperparameter tuning, then you can use one of the following options:

  • The HPARAM_RANGE keyword and two FLOAT64 values that define the range to use for the hyperparameter. For example, L1_REG = HPARAM_RANGE(0, 5.0).
  • The HPARAM_CANDIDATES keyword and an array of FLOAT64 values that provide discrete values to use for the hyperparameter. For example, L1_REG = HPARAM_CANDIDATES([0, 1.0, 3.0, 5.0]).

When running hyperparameter tuning, the valid range is (0, ∞), the default range is (0, 10.0], and the scale type is LOG.

L2_REG

Syntax

L2_REG = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) }

Description

The L2 regularization strength of the OPTIMIZER. You can only use this option when OPTIMIZER is set to one of the following values:

  • 'ADAGRAD'
  • 'FTRL'
  • 'SGD'

Arguments

If you aren't running hyperparameter tuning, then you can specify a FLOAT64 value. The default value is 0.

If you are running hyperparameter tuning, then you can use one of the following options:

  • The HPARAM_RANGE keyword and two FLOAT64 values that define the range to use for the hyperparameter. For example, L2_REG = HPARAM_RANGE(1.5, 5.0).
  • The HPARAM_CANDIDATES keyword and an array of FLOAT64 values that provide discrete values to use for the hyperparameter. For example, L2_REG = HPARAM_CANDIDATES([0, 1.0, 3.0, 5.0]).

When running hyperparameter tuning, the valid range is (0, ∞), the default range is (0, 10.0], and the scale type is LOG.

ACTIVATION_FN

Syntax

ACTIVATION_FN = { { 'RELU' | 'RELU6' | 'CRELU' | 'ELU' | 'SELU' | 'SIGMOID' | 'TANH' } | HPARAM_CANDIDATES([candidates]) }

Description

The activation function of the neural network.

Arguments

This option accepts the following values:

  • 'RELU': rectified linear unit. This is the default value.
  • 'RELU6': rectified linear unit capped at 6
  • 'CRELU': concatenated rectified linear unit
  • 'ELU': exponential linear unit
  • 'SELU': scaled exponential linear unit
  • 'SIGMOID': the sigmoid function
  • 'TANH': the hyperbolic tangent function

If you are running hyperparameter tuning, then you can provide more than one value for this option by using HPARAM_CANDIDATES and specifying an array. For example, ACTIVATION_FN = HPARAM_CANDIDATES(['RELU', 'RELU6', 'TANH']).

BATCH_SIZE

Syntax

BATCH_SIZE = { int64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) }

Description

The mini-batch size of samples that are fed to the neural network.

Arguments

If you aren't running hyperparameter tuning, specify an INT64 value that is positive and is less than or equal to 8192. The default value is 32 or the number of samples, whichever is smaller.

If you are running hyperparameter tuning, use one of the following options:

  • The HPARAM_RANGE keyword and two INT64 values that define the range to use for the hyperparameter. For example, BATCH_SIZE = HPARAM_RANGE(16, 64).
  • The HPARAM_CANDIDATES keyword and an array of INT64 values that provide discrete values to use for the hyperparameter. For example, BATCH_SIZE = HPARAM_CANDIDATES([32, 64, 256, 1024]).

When running hyperparameter tuning, the valid range is (0, ∞), the default range is [16, 1024], and the scale type is LOG.

DROPOUT

Syntax

DROPOUT = { float64_value | HPARAM_RANGE(range) | HPARAM_CANDIDATES([candidates]) }

Description

The dropout rate of units in the neural network.

Arguments

If you aren't running hyperparameter tuning, then you can specify a FLOAT64 value in the range [0, 1.0). The default value is 0.

If you are running hyperparameter tuning, then you must use one of the following options:

  • The HPARAM_RANGE keyword and two FLOAT64 values that define the range to use for the hyperparameter. For example, DROPOUT = HPARAM_RANGE(0, 0.6).
  • The HPARAM_CANDIDATES keyword and an array of FLOAT64 values that provide discrete values to use for the hyperparameter. For example, DROPOUT = HPARAM_CANDIDATES([0.1, 0.3, 0.6]).

When running hyperparameter tuning, the valid range is [0, 1.0), the default range is [0, 0.8], and the scale type is LINEAR.

HIDDEN_UNITS

Syntax

HIDDEN_UNITS = { int_array | HPARAM_CANDIDATES([candidates]) }

Description

The hidden layers of the neural network.

Arguments

An array of integers that represents the architecture of the hidden layers. If not specified, BigQuery ML applies a single hidden layer that contains no more than 128 units. The number of units is calculated as [min(128, num_samples / (10 * (num_input_units + num_output_units)))]. The upper bound of this rule helps ensure that the model doesn't overfit.

The following example defines a model architecture that uses three hidden layers with 256, 128, and 64 nodes, respectively.

HIDDEN_UNITS = [256, 128, 64]

If you are running hyperparameter tuning, then you must use the HPARAM_CANDIDATES keyword and specify an array in the form ARRAY<STRUCT<ARRAY<INT64>>> to provide discrete values to use for the hyperparameter. Each struct value in the outer array represents a candidate neural architecture. The array of INT64 values in each struct represents a hidden layer.

The following example represents a neural architecture search with three candidates: a single layer of 8 neurons; two layers of 8 and 16 neurons; and three layers of 16, 32, and 64 neurons.

HIDDEN_UNITS = HPARAM_CANDIDATES([STRUCT([8]), STRUCT([8, 16]), STRUCT([16, 32, 64])])

The valid range for the INT64 arrays is [1, ∞).

INTEGRATED_GRADIENTS_NUM_STEPS

Syntax

INTEGRATED_GRADIENTS_NUM_STEPS = int64_value

Description

Specifies the number of steps to sample between the example being explained and its baseline, which is used to approximate the integral in the integrated gradients attribution method.

Arguments

An INT64 value that is less than or equal to 30 and greater than 0. The default value is 15.

You can only set this option if ENABLE_GLOBAL_EXPLAIN is TRUE.
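
For example, the following OPTIONS fragment (a sketch; the surrounding CREATE MODEL statement is omitted) shows the two options paired:

ENABLE_GLOBAL_EXPLAIN = TRUE,
INTEGRATED_GRADIENTS_NUM_STEPS = 20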

TF_VERSION

Syntax

TF_VERSION = { '1.15' | '2.8.0' }

Description

Specifies the TensorFlow version for model training. The default value is '1.15'.

Set TF_VERSION to '2.8.0' to use TensorFlow 2 with the Keras API.

AUTO_CLASS_WEIGHTS

Syntax

AUTO_CLASS_WEIGHTS = { TRUE | FALSE }

Description

Determines whether to balance class labels by using weights for each class in inverse proportion to the frequency of that class.

Only use this option with classifier models.

By default, the training data used to create the model is unweighted. If the labels in the training data are imbalanced, the model might learn to predict the most popular class of labels more heavily, which you might not want.

To balance every class, set this option to TRUE. Balance is accomplished using the following formula:

total_input_rows / (input_rows_for_class_n * number_of_unique_classes)
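
For example, in a training set of 100 rows with two classes, where class 'a' has 90 rows and class 'b' has 10 rows, the weight for 'a' is 100 / (90 * 2) ≈ 0.56 and the weight for 'b' is 100 / (10 * 2) = 5.0, so errors on the rarer class count more heavily during training.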

Arguments

A BOOL value. The default value is FALSE.

CLASS_WEIGHTS

Syntax

CLASS_WEIGHTS = struct_array

Description

The weights to use for each class label. You can't specify this option if AUTO_CLASS_WEIGHTS is TRUE.

Arguments

An ARRAY of STRUCT values. Each STRUCT contains a STRING value that specifies the class label and a FLOAT64 value that specifies the weight for that class label.
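
For example, the following sketch weights two classes; the labels 'a' and 'b' and the weight values are assumptions for illustration:

CLASS_WEIGHTS = [STRUCT('a', 0.2), STRUCT('b', 0.8)]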