The ML.GLOBAL_EXPLAIN function
This document describes the `ML.GLOBAL_EXPLAIN` function, which provides explanations for an entire model by aggregating the local explanations of the evaluation data. You can only use `ML.GLOBAL_EXPLAIN` with models that are trained with the `ENABLE_GLOBAL_EXPLAIN` option set to `TRUE`.
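For context, a model that supports this function might be created as follows. This is a minimal sketch: the dataset `mydataset`, source table `mytable`, and label column `label` are assumed names, and any model type that supports explanations can be substituted.

```sql
CREATE OR REPLACE MODEL `mydataset.mymodel`
  OPTIONS (
    MODEL_TYPE = 'BOOSTED_TREE_REGRESSOR',  -- any explainable model type
    ENABLE_GLOBAL_EXPLAIN = TRUE,           -- required for ML.GLOBAL_EXPLAIN
    INPUT_LABEL_COLS = ['label'])           -- assumed label column name
AS
SELECT * FROM `mydataset.mytable`;
```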
Syntax
```sql
ML.GLOBAL_EXPLAIN(
  MODEL `PROJECT_ID.DATASET.MODEL`,
  STRUCT(
    [CLASS_LEVEL_EXPLAIN AS class_level_explain]))
```
Arguments
`ML.GLOBAL_EXPLAIN` takes the following arguments:

- PROJECT_ID: your project ID.
- DATASET: the BigQuery dataset that contains the model.
- MODEL: the name of the model.
- CLASS_LEVEL_EXPLAIN: a `BOOL` value that specifies whether global feature importances are returned for each class. This argument applies only to non-AutoML Tables classification models. When set to `FALSE`, the global feature importance of the entire model is returned rather than that of each class. The default value is `FALSE`.

Regression models and AutoML Tables classification models only have model-level global feature importance.
Output
The output of `ML.GLOBAL_EXPLAIN` has two formats:

- For classification models with `class_level_explain` set to `FALSE`, and for regression models, the following columns are returned:
  - `feature`: a `STRING` value that contains the feature name.
  - `attribution`: a `FLOAT64` value that contains the feature importance to the model overall.
- For classification models with `class_level_explain` set to `TRUE`, the following columns are returned:
  - `<class_name>`: a `STRING` value that contains the name of the class in the label column.
  - `feature`: a `STRING` value that contains the feature name.
  - `attribution`: a `FLOAT64` value that contains the feature importance to this class.

For each class, only the top 10 most important features are returned.
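As an illustration of working with this output, a query such as the following could surface the most influential features. This is a hedged sketch that assumes a regression model `mymodel` in `mydataset` trained with `ENABLE_GLOBAL_EXPLAIN` set to `TRUE`:

```sql
-- Return the five features with the largest aggregated attribution
SELECT
  feature,
  attribution
FROM
  ML.GLOBAL_EXPLAIN(MODEL `mydataset.mymodel`)
ORDER BY
  attribution DESC
LIMIT 5;
```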
Examples
The following examples assume your model is in your default project.
Regression model
This example gets global feature importance for the boosted tree regression model `mymodel` in `mydataset`. The dataset is in your default project.
```sql
SELECT
  *
FROM
  ML.GLOBAL_EXPLAIN(MODEL `mydataset.mymodel`)
```

Classifier model

This example gets global feature importance for the boosted tree classifier model `mymodel` in `mydataset`. The dataset is in your default project.

```sql
SELECT
  *
FROM
  ML.GLOBAL_EXPLAIN(MODEL `mydataset.mymodel`, STRUCT(TRUE AS class_level_explain))
```

What's next

- For information about Explainable AI, see BigQuery Explainable AI overview.
- For information about the supported SQL statements and functions for each model type, see End-to-end user journey for each model.

Last updated 2025-08-25 UTC.