TABLE_OPTIONS view
The INFORMATION_SCHEMA.TABLE_OPTIONS
view contains one row for each option,
for each table or view in a dataset. The TABLES
and TABLE_OPTIONS
views also contain high-level information about views.
For detailed information, query the
INFORMATION_SCHEMA.VIEWS
view.
Required permissions
To query the INFORMATION_SCHEMA.TABLE_OPTIONS
view, you need the following
Identity and Access Management (IAM) permissions:
bigquery.tables.get
bigquery.tables.list
bigquery.routines.get
bigquery.routines.list
Each of the following predefined IAM roles includes the preceding permissions:
roles/bigquery.admin
roles/bigquery.dataViewer
roles/bigquery.metadataViewer
For more information about BigQuery permissions, see Access control with IAM.
Schema
When you query the INFORMATION_SCHEMA.TABLE_OPTIONS
view, the query results
contain one row for each option, for each table or view in a dataset. For
detailed information about
views, query the
INFORMATION_SCHEMA.VIEWS
view
instead.
The INFORMATION_SCHEMA.TABLE_OPTIONS
view has the following schema:
| Column name | Data type | Value |
|---|---|---|
| TABLE_CATALOG | STRING | The project ID of the project that contains the dataset |
| TABLE_SCHEMA | STRING | The name of the dataset that contains the table or view, also referred to as the datasetId |
| TABLE_NAME | STRING | The name of the table or view, also referred to as the tableId |
| OPTION_NAME | STRING | One of the name values in the options table |
| OPTION_TYPE | STRING | One of the data type values in the options table |
| OPTION_VALUE | STRING | One of the value options in the options table |
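For example, the following query sketch lists every recorded option for the tables and views in a dataset, using the columns described above; the dataset name mydataset is a placeholder.

```sql
-- Minimal sketch: list every recorded option for the tables and views in a
-- dataset. mydataset is a placeholder dataset name.
SELECT
  table_name,
  option_name,
  option_type,
  option_value
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
ORDER BY
  table_name, option_name;
```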
Options table
| OPTION_NAME | OPTION_TYPE | OPTION_VALUE |
|---|---|---|
| description | STRING | A description of the table |
| enable_refresh | BOOL | Whether automatic refresh is enabled for a materialized view |
| expiration_timestamp | TIMESTAMP | The time when this table expires |
| friendly_name | STRING | The table's descriptive name |
| kms_key_name | STRING | The name of the Cloud KMS key used to encrypt the table |
| labels | ARRAY<STRUCT<STRING, STRING>> | An array of STRUCTs that represent the labels on the table |
| partition_expiration_days | FLOAT64 | The default lifetime, in days, of all partitions in a partitioned table |
| refresh_interval_minutes | FLOAT64 | How frequently a materialized view is refreshed |
| require_partition_filter | BOOL | Whether queries over the table require a partition filter |
| tags | ARRAY<STRUCT<STRING, STRING>> | Tags attached to a table in a namespaced <key, value> syntax. For more information, see Tags and conditional access. |
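As an illustration of how these options surface in the view, the following DDL sketch creates a table with several of the options above set; the table name and option values are placeholders, not recommended settings.

```sql
-- Minimal sketch: set a few of the options listed above so that they surface
-- as rows in INFORMATION_SCHEMA.TABLE_OPTIONS.
-- mydataset.mytable and the option values are placeholders.
CREATE TABLE mydataset.mytable (
  id INT64,
  name STRING
)
OPTIONS (
  description = 'test data',
  friendly_name = 'My table',
  expiration_timestamp = TIMESTAMP '2030-01-01 00:00:00 UTC',
  labels = [('env', 'dev')]
);
```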
For external tables, the following options are possible:
| Options | Description |
|---|---|
| allow_jagged_rows | If true, allow rows that are missing trailing optional columns. Applies to CSV data. |
| allow_quoted_newlines | If true, allow quoted data sections that contain newline characters in the file. Applies to CSV data. |
| bigtable_options | Only required when creating a Bigtable external table. Specifies the schema of the Bigtable external table in JSON format. For a list of Bigtable table definition options, see BigtableOptions in the REST API reference. |
| column_name_character_map | Defines the scope of supported column name characters and the handling behavior of unsupported characters. The default setting is STRICT, which means that unsupported characters cause BigQuery to throw errors. Supported values include STRICT, V1, and V2. Applies to CSV and Parquet data. |
| compression | The compression type of the data source. Supported values include GZIP. If not specified, the data source is uncompressed. Applies to CSV and JSON data. |
| decimal_target_types | Determines how to convert a Decimal type. Example: ["NUMERIC", "BIGNUMERIC"]. |
| description | A description of this table. |
| enable_list_inference | If true, use schema inference specifically for the Parquet LIST logical type. Applies to Parquet data. |
| enable_logical_types | If true, convert Avro logical types into their corresponding SQL types. Applies to Avro data. |
| encoding | The character encoding of the data. Supported values include UTF8 (or UTF-8) and ISO_8859_1 (or ISO-8859-1). Applies to CSV data. |
| enum_as_string | If true, infer the Parquet ENUM logical type as STRING instead of BYTES by default. Applies to Parquet data. |
| expiration_timestamp | The time when this table expires. If not specified, the table does not expire. Example: "2025-01-01 00:00:00 UTC". |
| field_delimiter | The separator for fields in a CSV file. Applies to CSV data. |
| format | The format of the external data. Supported values for CREATE EXTERNAL TABLE include AVRO, CLOUD_BIGTABLE, CSV, DATASTORE_BACKUP, DELTA_LAKE, GOOGLE_SHEETS, NEWLINE_DELIMITED_JSON (or JSON), ORC, and PARQUET. Supported values for LOAD DATA include AVRO, CSV, DELTA_LAKE, NEWLINE_DELIMITED_JSON (or JSON), ORC, and PARQUET. The value JSON is equivalent to NEWLINE_DELIMITED_JSON. |
| hive_partition_uri_prefix | A common prefix for all source URIs before the partition key encoding begins. Applies only to hive-partitioned external tables. Applies to Avro, CSV, JSON, Parquet, and ORC data. Example: "gs://bucket/path". |
| file_set_spec_type | Specifies how to interpret source URIs for load jobs and external tables. Supported values include FILE_SYSTEM_MATCH, which expands source URIs by listing files from the object store (the default), and NEW_LINE_DELIMITED_MANIFEST, which indicates that the provided URIs are newline-delimited manifest files with one URI per line. For example, if you have a source URI of "gs://bucket/path/file" and the file_set_spec_type is FILE_SYSTEM_MATCH, then the file is used directly as a data file. If the file_set_spec_type is NEW_LINE_DELIMITED_MANIFEST, then each line in the file is interpreted as a URI that points to a data file. |
| ignore_unknown_values | If true, ignore extra values that are not represented in the table schema, without returning an error. Applies to CSV and JSON data. |
| json_extension | For JSON data, indicates a particular JSON interchange format. If not specified, BigQuery reads the data as generic JSON records. Supported values include GEOJSON (newline-delimited GeoJSON data). |
| max_bad_records | The maximum number of bad records to ignore when reading the data. Applies to CSV, JSON, and Google Sheets data. |
| max_staleness | Applicable for BigLake tables and object tables. Specifies whether cached metadata is used by operations against the table, and how fresh the cached metadata must be in order for the operation to use it. To disable metadata caching, specify 0. This is the default. To enable metadata caching, specify an interval literal value between 30 minutes and 7 days. For example, specify INTERVAL 4 HOUR for a 4 hour staleness interval. |
| null_marker | The string that represents NULL values in a CSV file. Applies to CSV data. |
| object_metadata | Only required when creating an object table. Set the value of this option to SIMPLE when creating an object table. |
| preserve_ascii_control_characters | If true, embedded ASCII control characters (the first 32 characters in the ASCII table, ranging from '\x00' to '\x1F') are preserved. Applies to CSV data. |
| projection_fields | A list of entity properties to load. Applies to Datastore data. |
| quote | The string used to quote data sections in a CSV file. If your data contains quoted newline characters, also set the allow_quoted_newlines property to true. Applies to CSV data. |
| reference_file_schema_uri | User-provided reference file with the table schema. Applies to Parquet, ORC, and Avro data. Example: |
| require_hive_partition_filter | If true, all queries over this table require a partition filter that can be used to eliminate partitions when reading data. Applies only to hive-partitioned external tables. Applies to Avro, CSV, JSON, Parquet, and ORC data. |
| sheet_range | Range of a Google Sheets spreadsheet to query from. Applies to Google Sheets data. Example: |
| skip_leading_rows | The number of rows at the top of a file to skip when reading the data. Applies to CSV and Google Sheets data. |
| uris | For external tables, including object tables, that aren't Bigtable tables: an array of fully qualified URIs for the external data locations. Each URI can contain one asterisk (*) wildcard character, which must come after the bucket name. For Bigtable tables: the URI identifying the Bigtable table to use as a data source. You can only specify one Bigtable URI. Example: For more information on constructing a Bigtable URI, see Retrieve the Bigtable URI. |
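As a minimal sketch of how several of these options combine, the following statement defines a CSV-backed external table; the Cloud Storage path and option values are placeholders rather than a definitive configuration.

```sql
-- Minimal sketch: a CSV-backed external table using a few of the options
-- listed above. The Cloud Storage path and option values are placeholders.
CREATE EXTERNAL TABLE mydataset.my_external_table
OPTIONS (
  format = 'CSV',
  uris = ['gs://mybucket/path/*.csv'],
  skip_leading_rows = 1,
  field_delimiter = ',',
  ignore_unknown_values = true
);
```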
Scope and syntax
Queries against this view must include a dataset or a region qualifier. For queries with a dataset qualifier, you must have permissions for the dataset. For queries with a region qualifier, you must have permissions for the project. For more information, see Syntax. The following table explains the region and resource scopes for this view:
| View name | Resource scope | Region scope |
|---|---|---|
| [PROJECT_ID.]`region-REGION`.INFORMATION_SCHEMA.TABLE_OPTIONS | Project level | REGION |
| [PROJECT_ID.]DATASET_ID.INFORMATION_SCHEMA.TABLE_OPTIONS | Dataset level | Dataset location |
- `PROJECT_ID` (optional): the ID of your Google Cloud project. If not specified, the default project is used.
- `REGION`: any dataset region name. For example, `region-us`.
- `DATASET_ID`: the ID of your dataset. For more information, see Dataset qualifier.
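For example, the following sketch shows the region-qualified form; the region `region-us` and the filter on option_name are placeholder choices.

```sql
-- Minimal sketch: a region-qualified query against TABLE_OPTIONS.
-- `region-us` is a placeholder; substitute your datasets' region.
SELECT
  table_schema,
  table_name,
  option_name,
  option_value
FROM
  `region-us`.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'description';
```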
Examples
Example 1:
The following example retrieves the default table expiration times for all
tables in mydataset in your default project (myproject) by querying the
INFORMATION_SCHEMA.TABLE_OPTIONS view.
To run the query against a project other than your default project, add the
project ID to the dataset in the following format:
`project_id`.dataset.INFORMATION_SCHEMA.view; for example,
`myproject`.mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS.
```sql
SELECT
  *
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'expiration_timestamp';
```
The result is similar to the following:
```
+----------------+---------------+------------+----------------------+-------------+--------------------------------------+
| table_catalog  | table_schema  | table_name | option_name          | option_type | option_value                         |
+----------------+---------------+------------+----------------------+-------------+--------------------------------------+
| myproject      | mydataset     | mytable1   | expiration_timestamp | TIMESTAMP   | TIMESTAMP "2020-01-16T21:12:28.000Z" |
| myproject      | mydataset     | mytable2   | expiration_timestamp | TIMESTAMP   | TIMESTAMP "2021-01-01T21:12:28.000Z" |
+----------------+---------------+------------+----------------------+-------------+--------------------------------------+
```
Example 2:
The following example retrieves metadata about all tables in mydataset that
contain test data. The query uses the values in the description option to find
tables that contain "test" anywhere in the description. mydataset is in your
default project, myproject.
To run the query against a project other than your default project, add the
project ID to the dataset in the following format:
`project_id`.dataset.INFORMATION_SCHEMA.view; for example,
`myproject`.mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS.
```sql
SELECT
  *
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'description'
  AND option_value LIKE '%test%';
```
The result is similar to the following:
```
+----------------+---------------+------------+-------------+-------------+--------------+
| table_catalog  | table_schema  | table_name | option_name | option_type | option_value |
+----------------+---------------+------------+-------------+-------------+--------------+
| myproject      | mydataset     | mytable1   | description | STRING      | "test data"  |
| myproject      | mydataset     | mytable2   | description | STRING      | "test data"  |
+----------------+---------------+------------+-------------+-------------+--------------+
```