This tutorial introduces data analysts to BigQuery ML. BigQuery ML enables users to create and execute machine learning models in BigQuery using SQL queries. The goal is to democratize machine learning by enabling SQL practitioners to build models using their existing tools and to increase development speed by eliminating the need for data movement.
In this tutorial, you use the penguins table to create a model that predicts the weight of a penguin based on the penguin's species, island of residence, culmen length and depth, flipper length, and sex.
Objectives
In this tutorial, you use:
- BigQuery ML to create a linear regression model using the CREATE MODEL statement
- The ML.EVALUATE function to evaluate the ML model
- The ML.PREDICT function to make predictions using the ML model
Costs
This tutorial uses billable components of Google Cloud, including the following:
- BigQuery
- BigQuery ML
For more information on BigQuery costs, see the BigQuery pricing page.
For more information on BigQuery ML costs, see BigQuery ML pricing.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
- BigQuery is automatically enabled in new projects. To activate BigQuery in a pre-existing project, go to Enable the BigQuery API.
Step one: Create your dataset
The first step is to create a BigQuery dataset to store your ML model. To create your dataset:
In the Google Cloud console, go to the BigQuery page.
In the navigation panel, in the Resources section, click your project name.
On the right side, in the details panel, click Create dataset.
On the Create dataset page:
- For Dataset ID, enter bqml_tutorial.
- For Data location, choose United States (US). Currently, the public datasets are stored in the US multi-region location. For simplicity, you should place your dataset in the same location.
- Leave all of the other default settings in place and click Create dataset.
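If you prefer SQL to the console, the same dataset can be created with a DDL statement. The following is a minimal sketch, assuming your billing-enabled project is selected as the default; the console steps above accomplish the same thing.

#standardSQL
-- Create the dataset that will hold the tutorial's ML model,
-- colocated with the US public datasets.
CREATE SCHEMA `bqml_tutorial`
OPTIONS (location = 'US');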
Step two: Create your model
Next, you create a linear regression model using the penguins table from the BigQuery public datasets. The following GoogleSQL query creates the model that you use to predict the weight of a penguin.
#standardSQL
CREATE OR REPLACE MODEL `bqml_tutorial.penguins_model`
OPTIONS
  (model_type='linear_reg',
  input_label_cols=['body_mass_g']) AS
SELECT
  *
FROM
  `bigquery-public-data.ml_datasets.penguins`
WHERE
  body_mass_g IS NOT NULL
In addition to creating the model, running the CREATE MODEL statement trains the model that you create.
Query details
The CREATE MODEL clause is used to create and train the model named bqml_tutorial.penguins_model.
The OPTIONS(model_type='linear_reg', input_label_cols=['body_mass_g']) clause indicates that you are creating a linear regression model. A linear regression is a type of regression model that generates a continuous value from a linear combination of input features. The body_mass_g column is the input label column. For linear regression models, the label column must be real-valued (the column values must be real numbers).
This query's SELECT statement uses all the columns in the bigquery-public-data.ml_datasets.penguins table.
This table contains the following columns, all of which are used to predict a penguin's weight:
- species — Species of the penguin (STRING).
- island — Island where the penguin resides (STRING).
- culmen_length_mm — Length of the culmen in millimeters (FLOAT64).
- culmen_depth_mm — Depth of the culmen in millimeters (FLOAT64).
- flipper_length_mm — Length of the flipper in millimeters (FLOAT64).
- sex — Sex of the penguin (STRING).
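If you want a quick look at the input data before training, a simple preview query like the following works. It is not part of the original tutorial steps; it only samples a handful of rows from the public table named above.

#standardSQL
-- Preview a few rows of the penguins table before training.
SELECT
  *
FROM
  `bigquery-public-data.ml_datasets.penguins`
LIMIT 5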
The FROM clause — bigquery-public-data.ml_datasets.penguins — indicates that you are querying the penguins table in the ml_datasets dataset. This dataset is in the bigquery-public-data project.
The WHERE clause — WHERE body_mass_g IS NOT NULL — excludes rows where body_mass_g is NULL.
Run the CREATE MODEL query
To run the CREATE MODEL query to create and train your model:
In the Google Cloud console, click the Compose new query button.
Enter the following GoogleSQL query in the Query editor text area.
#standardSQL
CREATE OR REPLACE MODEL `bqml_tutorial.penguins_model`
OPTIONS
  (model_type='linear_reg',
  input_label_cols=['body_mass_g']) AS
SELECT
  *
FROM
  `bigquery-public-data.ml_datasets.penguins`
WHERE
  body_mass_g IS NOT NULL
Click Run.
The query takes about 30 seconds to complete, after which your model (penguins_model) appears in the navigation panel. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
Step three (optional): Get training statistics
To see the results of the model training, you can use the ML.TRAINING_INFO function, or you can view the statistics in the Google Cloud console. In this tutorial, you use the Google Cloud console.
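If you prefer to stay in SQL, a minimal query like the following returns the same training statistics as the console view described below; it is an optional alternative, not a required step.

#standardSQL
-- Return the training statistics (iterations, loss, duration) for the model.
SELECT
  *
FROM
  ML.TRAINING_INFO(MODEL `bqml_tutorial.penguins_model`)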
A machine learning algorithm builds a model by examining many examples and attempting to find a model that minimizes loss. This process is called empirical risk minimization.
Loss is the penalty for a bad prediction — a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater. The goal of training a model is to find a set of weights and biases that have low loss, on average, across all examples.
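As a made-up illustration of squared-error loss: if a penguin's actual body mass is 4000 g and the model predicts 4300 g, the squared error for that single example is (4300 − 4000)² = 90,000. For a linear regression, the reported training loss is this value averaged over all training examples, which is the mean squared error.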
To see the model training statistics that were generated when you ran the CREATE MODEL query:
In the Google Cloud console navigation panel, in the Resources section, expand [PROJECT_ID] > bqml_tutorial and then click penguins_model.
Click the Training tab, and then click Table. The results should look like the following:
The Training Data Loss column represents the loss metric calculated after the model is trained on the training dataset. Since you performed a linear regression, this column is the mean squared error. A "normal_equation" optimization strategy is automatically used for this training, so only one iteration is required to converge to the final model. For more details on the optimize_strategy option, see the CREATE MODEL statement for generalized linear models. For more details on the ML.TRAINING_INFO function and the "optimize_strategy" training option, see the BigQuery ML syntax reference.
Step four: Evaluate your model
After creating your model, you evaluate the performance of the model using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
The query used to evaluate the model is as follows:
#standardSQL
SELECT
  *
FROM
  ML.EVALUATE(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL))
Query details
The top-most SELECT statement retrieves the columns from your model.
The FROM clause uses the ML.EVALUATE function against your model: bqml_tutorial.penguins_model.
This query's nested SELECT statement and FROM clause are the same as those in the CREATE MODEL query.
The WHERE clause — WHERE body_mass_g IS NOT NULL — excludes rows where body_mass_g is NULL.
A proper evaluation would be on a subset of the penguins table that is separate from the data used to train the model. You can also call ML.EVALUATE without providing the input data. ML.EVALUATE will retrieve the evaluation metrics calculated during training, which uses the automatically reserved evaluation dataset:
#standardSQL
SELECT
  *
FROM
  ML.EVALUATE(MODEL `bqml_tutorial.penguins_model`)
You can also use the Google Cloud console to view the evaluation metrics calculated during training. The results should look like the following:
Run the ML.EVALUATE query
To run the ML.EVALUATE query that evaluates the model:
In the Google Cloud console, click the Compose new query button.
Enter the following GoogleSQL query in the Query editor text area.
#standardSQL
SELECT
  *
FROM
  ML.EVALUATE(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL))
(Optional) To set the processing location, click More > Query settings. For Processing location, choose US. This step is optional because the processing location is automatically detected based on the dataset's location.
Click Run.
When the query is complete, click the Results tab below the query text area. The results should look like the following:
Because you performed a linear regression, the results include the following columns:
- mean_absolute_error
- mean_squared_error
- mean_squared_log_error
- median_absolute_error
- r2_score
- explained_variance
An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that indicates how well the linear regression predictions approximate the actual data. A value of 0 indicates that the model explains none of the variability of the response data around the mean, and a value of 1 indicates that the model explains all of that variability.
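For reference, the R2 score follows the standard statistical definition (this formulation is not spelled out in the tutorial itself), where $y_i$ are the actual label values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the actual values:

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$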
Step five: Use your model to predict outcomes
Now that you have evaluated your model, the next step is to use it to predict an outcome. You use your model to predict the body mass in grams of all penguins that reside on Biscoe island.
The query used to predict the outcome is as follows:
#standardSQL
SELECT
  *
FROM
  ML.PREDICT(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL
      AND island = "Biscoe"))
Query details
The top-most SELECT statement retrieves the predicted_body_mass_g column along with the columns in bigquery-public-data.ml_datasets.penguins. This column is generated by the ML.PREDICT function. When you use the ML.PREDICT function, the output column name for the model is predicted_<label_column_name>. For linear regression models, predicted_label is the estimated value of label. For logistic regression models, predicted_label is one of the two input labels, depending on which label has the higher predicted probability.
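If you only care about the prediction and a few identifying columns, you can select them explicitly from the ML.PREDICT output instead of using SELECT *. The following is an optional variation, not a tutorial step; the column names come from the table schema described earlier.

#standardSQL
-- Return only the predicted body mass plus two identifying columns.
SELECT
  predicted_body_mass_g,
  species,
  sex
FROM
  ML.PREDICT(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL
      AND island = "Biscoe"))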
The ML.PREDICT function is used to predict results using your model: bqml_tutorial.penguins_model.
This query's nested SELECT statement and FROM clause are the same as those in the CREATE MODEL query.
The WHERE clause — WHERE island = "Biscoe" — indicates that you are limiting the prediction to the island of Biscoe.
Run the ML.PREDICT query
To run the query that uses the model to predict an outcome:
In the Google Cloud console, click the Compose new query button.
Enter the following GoogleSQL query in the Query editor text area.
#standardSQL
SELECT
  *
FROM
  ML.PREDICT(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL
      AND island = "Biscoe"))
(Optional) To set the processing location, click More > Query settings. For Processing location, choose US. This step is optional because the processing location is automatically detected based on the dataset's location.
Click Run.
When the query is complete, click the Results tab below the query text area. The results should look like the following:
Step six: Explain prediction results with explainable AI methods
To understand why your model is generating these prediction results, you can use the ML.EXPLAIN_PREDICT function. ML.EXPLAIN_PREDICT is an extended version of ML.PREDICT: it returns prediction results with additional columns that explain those results. You can run ML.EXPLAIN_PREDICT without ML.PREDICT. For an in-depth explanation of Shapley values and explainable AI in BigQuery ML, see the BigQuery ML explainable AI overview.
The query used to generate explanations is as follows:
#standardSQL
SELECT
  *
FROM
  ML.EXPLAIN_PREDICT(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL
      AND island = "Biscoe"),
    STRUCT(3 as top_k_features))
Query details
This query uses the ML.EXPLAIN_PREDICT function against your model, bqml_tutorial.penguins_model. Its nested SELECT statement and WHERE clause are the same as those in the ML.PREDICT query. The STRUCT(3 as top_k_features) option limits the output to the three features that contribute most to each prediction.
Run the ML.EXPLAIN_PREDICT query
To run the ML.EXPLAIN_PREDICT query that explains the model:
In the Google Cloud console, click the Compose new query button.
Enter the following GoogleSQL query in the Query editor.
#standardSQL
SELECT
  *
FROM
  ML.EXPLAIN_PREDICT(MODEL `bqml_tutorial.penguins_model`,
    (
    SELECT
      *
    FROM
      `bigquery-public-data.ml_datasets.penguins`
    WHERE
      body_mass_g IS NOT NULL
      AND island = "Biscoe"),
    STRUCT(3 as top_k_features))
Click Run.
When the query is complete, click the Results tab below the query editor. The results should look like the following:
For linear regression models, Shapley values are used to generate a feature attribution value for each feature in the model. ML.EXPLAIN_PREDICT outputs the top three feature attributions per row of the supplied table because top_k_features was set to 3 in the query. These attributions are sorted by the absolute value of the attribution in descending order. In all examples, the feature sex contributed the most to the overall prediction. For detailed explanations of the output columns of the ML.EXPLAIN_PREDICT query, see the ML.EXPLAIN_PREDICT syntax documentation.
(Optional) Step seven: Globally explain your model
To know which features are the most important for determining the weights of the penguins in general, you can use the ML.GLOBAL_EXPLAIN function. In order to use ML.GLOBAL_EXPLAIN, the model must be retrained with the option ENABLE_GLOBAL_EXPLAIN=TRUE. Rerun the training query with this option by using the following query:
#standardSQL
CREATE OR REPLACE MODEL `bqml_tutorial.penguins_model`
OPTIONS
  (model_type='linear_reg',
  input_label_cols=['body_mass_g'],
  enable_global_explain=TRUE) AS
SELECT
  *
FROM
  `bigquery-public-data.ml_datasets.penguins`
WHERE
  body_mass_g IS NOT NULL
Access global explanations through ML.GLOBAL_EXPLAIN
The query used to generate global explanations is as follows:
#standardSQL
SELECT
  *
FROM
  ML.GLOBAL_EXPLAIN(MODEL `bqml_tutorial.penguins_model`)
Query details
This query uses the ML.GLOBAL_EXPLAIN function against the retrained model, bqml_tutorial.penguins_model, to return the overall importance of each feature to the model's predictions.
Run the ML.GLOBAL_EXPLAIN query
To run the ML.GLOBAL_EXPLAIN query:
In the Google Cloud console, click the Compose new query button.
Enter the following GoogleSQL query in the Query editor text area.
#standardSQL
SELECT
  *
FROM
  ML.GLOBAL_EXPLAIN(MODEL `bqml_tutorial.penguins_model`)
(Optional) To set the processing location, click More > Query settings. For Processing location, choose US. This step is optional because the processing location is automatically detected based on the dataset's location.
Click Run.
When the query is complete, click the Results tab below the query text area. The results should look like the following:
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
- You can delete the project you created.
- Or you can keep the project and delete the dataset.
Delete your dataset
Deleting your project removes all datasets and all tables in the project. If you prefer to reuse the project, you can delete the dataset you created in this tutorial:
If necessary, open the BigQuery page in the Google Cloud console.
In the navigation, click the bqml_tutorial dataset you created.
Click Delete dataset on the right side of the window. This action deletes the dataset, the table, and all the data.
In the Delete dataset dialog box, confirm the delete command by typing the name of your dataset (bqml_tutorial) and then click Delete.
Delete your project
To delete the project:
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- To learn more about machine learning, see the Machine learning crash course.
- For an overview of BigQuery ML, see Introduction to BigQuery ML.
- To learn more about the Google Cloud console, see Using the Google Cloud console.