TABLE_STORAGE view
The INFORMATION_SCHEMA.TABLE_STORAGE view provides a current snapshot of storage usage for tables and materialized views. When you query the INFORMATION_SCHEMA.TABLE_STORAGE view, the query results contain one row for each table or materialized view for the current project. The data in the INFORMATION_SCHEMA.TABLE_STORAGE view is not kept in real time, and updates are typically delayed by a few seconds to a few minutes. Storage changes that are caused by partition or table expiration alone, or that are caused by modifications to the dataset time travel window, might take up to a day to be reflected in the INFORMATION_SCHEMA.TABLE_STORAGE view.
The table storage views give you a convenient way to observe your current storage consumption, and they also break down your storage into logical uncompressed bytes, physical compressed bytes, and time travel bytes. This information can help you with tasks like planning for future growth and understanding the update patterns for tables.
Data included in the *_BYTES columns
The *_BYTES columns in the table storage views include information about your use of storage bytes. This information is determined by looking at your storage usage for materialized views and the following types of tables:
- Permanent tables created through any of the methods described in Create and use tables.
- Temporary tables created in sessions. These tables are placed into datasets with generated names like "_c018003e063d09570001ef33ae401fad6ab92a6a".
- Temporary tables created in multi-statement queries ("scripts"). These tables are placed into datasets with generated names like "_script72280c173c88442c3a7200183a50eeeaa4073719".
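For example, the following sketch sums the storage used by those temporary tables. The dataset-name prefixes in the filter are an assumption based on the generated names shown above, not a documented naming contract:
-- Sketch: storage used by session and multi-statement query ("script")
-- temporary tables. The prefix match below is an assumption based on the
-- example dataset names above.
SELECT
  table_schema AS dataset_name,
  COUNT(*) AS table_count,
  SUM(total_logical_bytes) AS logical_bytes,
  SUM(total_physical_bytes) AS physical_bytes
FROM
  `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE
WHERE
  STARTS_WITH(table_schema, '_c')
  OR STARTS_WITH(table_schema, '_script')
GROUP BY
  dataset_name
ORDER BY
  physical_bytes DESC;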
Data stored in the query results cache is not billed to you and so is not included in the *_BYTES column values.
Clones and snapshots show *_BYTES column values as if they were complete tables, rather than showing the delta from the storage used by the base table, so the values are an overestimate. Your bill does correctly account for this delta in storage usage. For more information on the delta bytes stored and billed by clones and snapshots, see the TABLE_STORAGE_USAGE_TIMELINE view.
Forecast storage billing
To forecast the monthly storage billing for a dataset, you can use either the logical or physical *_BYTES columns in this view, depending on the dataset storage billing model used by the dataset. Note that this is only a rough forecast; the precise billing amounts are calculated by the BigQuery storage billing infrastructure and are visible in Cloud Billing.
For datasets that use a logical billing model, you can forecast your monthly storage costs as follows:
((ACTIVE_LOGICAL_BYTES value / POW(1024, 3)) * active logical bytes pricing) +
((LONG_TERM_LOGICAL_BYTES value / POW(1024, 3)) * long-term logical bytes pricing)
The ACTIVE_LOGICAL_BYTES value for a table reflects the active bytes currently used by that table.
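As a rough per-dataset sketch of this formula (not an official billing calculation), the following query uses the illustrative us-central1 prices that also appear in the example later on this page; substitute the prices for your region, and note that the forecast_monthly_logical_cost alias is just an illustrative name:
-- Sketch: forecast monthly logical storage cost per dataset.
-- Prices are illustrative us-central1 values; adjust for your region.
DECLARE active_logical_gib_price FLOAT64 DEFAULT 0.02;
DECLARE long_term_logical_gib_price FLOAT64 DEFAULT 0.01;

SELECT
  table_schema AS dataset_name,
  ROUND(
    (SUM(active_logical_bytes) / POW(1024, 3)) * active_logical_gib_price
    + (SUM(long_term_logical_bytes) / POW(1024, 3)) * long_term_logical_gib_price,
    2) AS forecast_monthly_logical_cost
FROM
  `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE
WHERE
  deleted = false
GROUP BY
  dataset_name
ORDER BY
  forecast_monthly_logical_cost DESC;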
For datasets that use a physical billing model, you can forecast your storage costs as follows:
(((ACTIVE_PHYSICAL_BYTES value + FAIL_SAFE_PHYSICAL_BYTES value) / POW(1024, 3)) * active physical bytes pricing) +
((LONG_TERM_PHYSICAL_BYTES value / POW(1024, 3)) * long-term physical bytes pricing)
The ACTIVE_PHYSICAL_BYTES value for a table reflects the active bytes currently used by that table plus the bytes used for time travel for that table. To see the active bytes of the table alone, subtract the TIME_TRAVEL_PHYSICAL_BYTES value from the ACTIVE_PHYSICAL_BYTES value.
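As a rough per-dataset sketch of the physical formula (again using the illustrative us-central1 prices from the example later on this page), the following query adds fail-safe bytes to the active charge; ACTIVE_PHYSICAL_BYTES already includes time travel bytes, so they are not added separately:
-- Sketch: forecast monthly physical storage cost per dataset.
-- Prices are illustrative us-central1 values; adjust for your region.
DECLARE active_physical_gib_price FLOAT64 DEFAULT 0.04;
DECLARE long_term_physical_gib_price FLOAT64 DEFAULT 0.02;

SELECT
  table_schema AS dataset_name,
  ROUND(
    (SUM(active_physical_bytes + fail_safe_physical_bytes) / POW(1024, 3)) * active_physical_gib_price
    + (SUM(long_term_physical_bytes) / POW(1024, 3)) * long_term_physical_gib_price,
    2) AS forecast_monthly_physical_cost
FROM
  `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE
GROUP BY
  dataset_name
ORDER BY
  forecast_monthly_physical_cost DESC;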
For more information, see Storage pricing.
Required roles
To get the permissions that you need to query the INFORMATION_SCHEMA.TABLE_STORAGE view, ask your administrator to grant you the BigQuery Metadata Viewer (roles/bigquery.metadataViewer) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to query the INFORMATION_SCHEMA.TABLE_STORAGE view.
Required permissions
The following permissions are required to query the INFORMATION_SCHEMA.TABLE_STORAGE view:
- bigquery.tables.get
- bigquery.tables.list
You might also be able to get these permissions with custom roles or other predefined roles.
Schema
The INFORMATION_SCHEMA.TABLE_STORAGE view has the following schema:
Column name | Data type | Value |
---|---|---|
PROJECT_ID | STRING | The project ID of the project that contains the dataset. |
PROJECT_NUMBER | INT64 | The project number of the project that contains the dataset. |
TABLE_CATALOG | STRING | The project ID of the project that contains the dataset. |
TABLE_SCHEMA | STRING | The name of the dataset that contains the table or materialized view, also referred to as the datasetId. |
TABLE_NAME | STRING | The name of the table or materialized view, also referred to as the tableId. |
CREATION_TIME | TIMESTAMP | The creation time of the table. |
TOTAL_ROWS | INT64 | The total number of rows in the table or materialized view. |
TOTAL_PARTITIONS | INT64 | The number of partitions present in the table or materialized view. Unpartitioned tables return 0. |
TOTAL_LOGICAL_BYTES | INT64 | Total number of logical (uncompressed) bytes in the table or materialized view. |
ACTIVE_LOGICAL_BYTES | INT64 | Number of logical (uncompressed) bytes that are younger than 90 days. |
LONG_TERM_LOGICAL_BYTES | INT64 | Number of logical (uncompressed) bytes that are older than 90 days. |
CURRENT_PHYSICAL_BYTES | INT64 | Total number of physical bytes for the current storage of the table across all partitions. |
TOTAL_PHYSICAL_BYTES | INT64 | Total number of physical (compressed) bytes used for storage, including active, long-term, and time travel (deleted or changed data) bytes. Fail-safe (deleted or changed data retained after the time-travel window) bytes aren't included. |
ACTIVE_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes younger than 90 days, including time travel (deleted or changed data) bytes. |
LONG_TERM_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes older than 90 days. |
TIME_TRAVEL_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes used by time travel storage (deleted or changed data). |
STORAGE_LAST_MODIFIED_TIME | TIMESTAMP | The most recent time that data was written to the table. |
DELETED | BOOLEAN | Indicates whether or not the table is deleted. |
TABLE_TYPE | STRING | The type of table. For example, EXTERNAL or BASE TABLE. |
FAIL_SAFE_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes used by the fail-safe storage (deleted or changed data). |
LAST_METADATA_INDEX_REFRESH_TIME | TIMESTAMP | The last metadata index refresh time of the table. |
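For example, the following sketch uses these columns to list the ten largest non-deleted base tables by physical storage, with time travel and fail-safe bytes broken out:
-- Sketch: the ten largest non-deleted base tables by physical storage.
SELECT
  table_schema AS dataset_name,
  table_name,
  total_physical_bytes,
  time_travel_physical_bytes,
  fail_safe_physical_bytes
FROM
  `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE
WHERE
  deleted = false
  AND table_type = 'BASE TABLE'
ORDER BY
  total_physical_bytes DESC
LIMIT 10;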
Scope and syntax
Queries against this view must include a region qualifier. The following table explains the region scope for this view:
View name | Resource scope | Region scope |
---|---|---|
[`PROJECT_ID`.]`region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE[_BY_PROJECT] | Project level | REGION |
- Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
- REGION: any dataset region name. For example, `region-us`.
The following example shows how to return storage information for tables in a specified project:
SELECT * FROM `myProject`.`region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE;
The following example shows how to return storage information for tables in a specified region:
SELECT * FROM `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE_BY_PROJECT;
Examples
Example 1:
The following example shows you the total logical bytes billed for the current project.
SELECT SUM(total_logical_bytes) AS total_logical_bytes FROM `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE;
The result is similar to the following:
+---------------------+
| total_logical_bytes |
+---------------------+
|     971329178274633 |
+---------------------+
Example 2:
The following example shows you how to forecast the price difference per dataset between logical and physical billing models for the next 30 days. This example assumes that future storage usage is constant over the next 30 days from the moment the query was run. Note that the forecast is limited to base tables; it excludes all other types of tables within a dataset.
The prices used in the pricing variables for this query are for the us-central1 region. If you want to run this query for a different region, update the pricing variables appropriately. See Storage pricing for pricing information.
Open the BigQuery page in the Google Cloud console.
Enter the following GoogleSQL query in the Query editor box.
INFORMATION_SCHEMA requires GoogleSQL syntax. GoogleSQL is the default syntax in the Google Cloud console.
DECLARE active_logical_gib_price FLOAT64 DEFAULT 0.02;
DECLARE long_term_logical_gib_price FLOAT64 DEFAULT 0.01;
DECLARE active_physical_gib_price FLOAT64 DEFAULT 0.04;
DECLARE long_term_physical_gib_price FLOAT64 DEFAULT 0.02;

WITH storage_sizes AS (
  SELECT
    table_schema AS dataset_name,
    -- Logical
    SUM(IF(deleted=false, active_logical_bytes, 0)) / power(1024, 3) AS active_logical_gib,
    SUM(IF(deleted=false, long_term_logical_bytes, 0)) / power(1024, 3) AS long_term_logical_gib,
    -- Physical
    SUM(active_physical_bytes) / power(1024, 3) AS active_physical_gib,
    SUM(active_physical_bytes - time_travel_physical_bytes) / power(1024, 3) AS active_no_tt_physical_gib,
    SUM(long_term_physical_bytes) / power(1024, 3) AS long_term_physical_gib,
    -- Restorable previously deleted physical
    SUM(time_travel_physical_bytes) / power(1024, 3) AS time_travel_physical_gib,
    SUM(fail_safe_physical_bytes) / power(1024, 3) AS fail_safe_physical_gib,
  FROM
    `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE_BY_PROJECT
  WHERE
    total_physical_bytes + fail_safe_physical_bytes > 0
    -- Base the forecast on base tables only for highest precision results
    AND table_type = 'BASE TABLE'
  GROUP BY 1
)
SELECT
  dataset_name,
  -- Logical
  ROUND(active_logical_gib, 2) AS active_logical_gib,
  ROUND(long_term_logical_gib, 2) AS long_term_logical_gib,
  -- Physical
  ROUND(active_physical_gib, 2) AS active_physical_gib,
  ROUND(long_term_physical_gib, 2) AS long_term_physical_gib,
  ROUND(time_travel_physical_gib, 2) AS time_travel_physical_gib,
  ROUND(fail_safe_physical_gib, 2) AS fail_safe_physical_gib,
  -- Compression ratio
  ROUND(SAFE_DIVIDE(active_logical_gib, active_no_tt_physical_gib), 2) AS active_compression_ratio,
  ROUND(SAFE_DIVIDE(long_term_logical_gib, long_term_physical_gib), 2) AS long_term_compression_ratio,
  -- Forecast costs logical
  ROUND(active_logical_gib * active_logical_gib_price, 2) AS forecast_active_logical_cost,
  ROUND(long_term_logical_gib * long_term_logical_gib_price, 2) AS forecast_long_term_logical_cost,
  -- Forecast costs physical
  ROUND((active_no_tt_physical_gib + time_travel_physical_gib + fail_safe_physical_gib) * active_physical_gib_price, 2) AS forecast_active_physical_cost,
  ROUND(long_term_physical_gib * long_term_physical_gib_price, 2) AS forecast_long_term_physical_cost,
  -- Forecast costs total
  ROUND(((active_logical_gib * active_logical_gib_price) + (long_term_logical_gib * long_term_logical_gib_price)) -
    (((active_no_tt_physical_gib + time_travel_physical_gib + fail_safe_physical_gib) * active_physical_gib_price) +
    (long_term_physical_gib * long_term_physical_gib_price)), 2) AS forecast_total_cost_difference
FROM
  storage_sizes
ORDER BY
  (forecast_active_logical_cost + forecast_active_physical_cost) DESC;
Click Run.
The result is similar to the following:
+--------------+--------------------+-----------------------+---------------------+------------------------+--------------------------+-----------------------------+------------------------------+---------------------------------+-------------------------------+----------------------------------+--------------------------------+
| dataset_name | active_logical_gib | long_term_logical_gib | active_physical_gib | long_term_physical_gib | active_compression_ratio | long_term_compression_ratio | forecast_active_logical_cost | forecast_long_term_logical_cost | forecast_active_physical_cost | forecast_long_term_physical_cost | forecast_total_cost_difference |
+--------------+--------------------+-----------------------+---------------------+------------------------+--------------------------+-----------------------------+------------------------------+---------------------------------+-------------------------------+----------------------------------+--------------------------------+
| dataset1     | 10.0               | 10.0                  | 1.0                 | 1.0                    | 10.0                     | 10.0                        | 0.2                          | 0.1                             | 0.04                          | 0.02                             | 0.24                           |
+--------------+--------------------+-----------------------+---------------------+------------------------+--------------------------+-----------------------------+------------------------------+---------------------------------+-------------------------------+----------------------------------+--------------------------------+