This document describes how to write or save query results.
Temporary and permanent tables
BigQuery saves all query results to a table, which can be either permanent or temporary.
BigQuery uses temporary tables to cache query results that aren't written to a permanent table. The tables are created in a special dataset and named randomly. You can also create temporary tables for your own use. For further information, see Temporary tables.
After a query finishes, the temporary table exists for up to 24 hours. To view table structure and data, go to the BigQuery console, click Query history, and choose the query that created the temporary table. Then, in the Destination table row, click Temporary table.
You cannot query or share temporary tables, and they are not visible when you use any of the standard list or other table manipulation methods. You are not charged for storing temporary tables.
A permanent table can be a new or existing table in any dataset to which you have access. If you write query results to a new table, you are charged for storing the data. When you write query results to a permanent table, the tables you're querying must be in the same location as the dataset that contains the destination table.
Required permissions
At a minimum, to write query results to a table, you must be granted the following permissions:
- bigquery.tables.create permissions to create a new table
- bigquery.tables.updateData permissions to write data to a new table, overwrite a table, or append data to a table
- bigquery.jobs.create permissions to run a query job

Additional permissions such as bigquery.tables.getData may be required to access the data you're querying.
The following predefined IAM roles include both bigquery.tables.create and bigquery.tables.updateData permissions:
- bigquery.dataEditor
- bigquery.dataOwner
- bigquery.admin

The following predefined IAM roles include bigquery.jobs.create permissions:
- bigquery.user
- bigquery.jobUser
- bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to create and update tables in the dataset.
For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions.
Writing query results to a permanent table
When you write query results to a permanent table, you can create a new table, append the results to an existing table, or overwrite an existing table. You can write query results to a permanent table in the following ways:
- Using the Cloud Console.
- Using the bq command-line tool's bq query command.
- Calling the jobs.insert API method and configuring a query job.
- Using the client libraries.
Writing query results
Use the following procedure to write your query results to a permanent table. To help control costs, you can preview data before running the query.
Console
Open the BigQuery page in the Cloud Console.
In the Explorer panel, expand your project and select a dataset.
If the query editor is hidden, click Show editor at the top right of the window.
Enter a valid SQL query in the Query editor text area.
Click More and then select Query options.
Check the box to Set a destination table for query results.
In the Destination section, select the appropriate Project name and Dataset name where the table will be created, and choose a Table name.
In the Destination table write preference section, choose one of the following:
- Write if empty: Writes the query results to the table only if the table is empty.
- Append to table: Appends the query results to an existing table.
- Overwrite table: Overwrites an existing table with the same name using the query results.
(Optional) For Processing location, click Auto-select and choose your location.
Click Run query. This creates a query job that writes the query results to the table you specified.
Alternatively, if you forget to specify a destination table before running your query, you can copy the cached results table to a permanent table by clicking the Save Results button below the editor.
SQL
Data definition language (DDL) statements allow you to create and modify tables using standard SQL query syntax.
For more information, see the CREATE TABLE statement page and the CREATE TABLE example: Creating a new table from an existing table.
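For example, a minimal sketch of a DDL statement that saves query results into a new table. The destination table name top_male_names is a placeholder chosen for illustration; the query reuses the USA Name Data public dataset referenced elsewhere on this page:

```sql
-- Create mydataset.top_male_names from a query over a public dataset.
-- The destination table name is a placeholder for illustration.
CREATE TABLE mydataset.top_male_names AS
SELECT
  name,
  SUM(number) AS total
FROM
  `bigquery-public-data`.usa_names.usa_1910_current
WHERE
  gender = 'M'
GROUP BY
  name
ORDER BY
  total DESC;
```

Because the destination table is part of the statement itself, no query settings or destination flags are needed.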
bq
Enter the bq query command and specify the --destination_table flag to create a permanent table based on the query results. Specify the --use_legacy_sql=false flag to use standard SQL syntax. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: project_id:dataset.

(Optional) Supply the --location flag and set the value to your location.
To control the write disposition for an existing destination table, specify one of the following optional flags:
- --append_table: If the destination table exists, the query results are appended to it.
- --replace: If the destination table exists, it is overwritten with the query results.
bq --location=location query \
--destination_table project_id:dataset.table \
--use_legacy_sql=false \
'query'
Replace the following:
- location is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location by using the .bigqueryrc file.
- project_id is your project ID.
- dataset is the name of the dataset that contains the table to which you are writing the query results.
- table is the name of the table to which you're writing the query results.
- query is a query in standard SQL syntax.
If no write disposition flag is specified, the default behavior is to write the results to the table only if it is empty. If the table exists and is not empty, the following error is returned:

BigQuery error in query operation: Error processing job 'project_id:bqjob_123abc456789_00000e1234f_1': Already Exists: Table project_id:dataset.table
Examples:
Enter the following command to write query results to a destination table named mytable in mydataset. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset.
bq query \
--destination_table mydataset.mytable \
--use_legacy_sql=false \
'SELECT
  name,
  number
FROM
  `bigquery-public-data`.usa_names.usa_1910_current
WHERE
  gender = "M"
ORDER BY
  number DESC'
Enter the following command to use query results to overwrite a destination table named mytable in mydataset. The dataset is in your default project. The command uses the --replace flag to overwrite the destination table.
bq query \
--destination_table mydataset.mytable \
--replace \
--use_legacy_sql=false \
'SELECT
  name,
  number
FROM
  `bigquery-public-data`.usa_names.usa_1910_current
WHERE
  gender = "M"
ORDER BY
  number DESC'
Enter the following command to append query results to a destination table named mytable in mydataset. The dataset is in my-other-project, not your default project. The command uses the --append_table flag to append the query results to the destination table.
bq query \
--append_table \
--use_legacy_sql=false \
--destination_table my-other-project:mydataset.mytable \
'SELECT
  name,
  number
FROM
  `bigquery-public-data`.usa_names.usa_1910_current
WHERE
  gender = "M"
ORDER BY
  number DESC'
The output for each of these examples looks like the following. For readability, some output is truncated.
Waiting on bqjob_r123abc456_000001234567_1 ... (2s) Current status: DONE
+---------+--------+
|  name   | number |
+---------+--------+
| Robert  |  10021 |
| John    |   9636 |
| Robert  |   9297 |
| ...              |
+---------+--------+
API
To save query results to a permanent table, call the jobs.insert method, configure a query job, and include a value for the destinationTable property. To control the write disposition for an existing destination table, configure the writeDisposition property.

To control the processing location for the query job, specify the location property in the jobReference section of the job resource.
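As a sketch of the configuration described above, the following assembles a plain jobs.insert request body with the destinationTable, writeDisposition, and jobReference location properties. The project, dataset, and table IDs are placeholders:

```python
# Build the request body for a jobs.insert call that writes query
# results to a permanent destination table. Field names follow the
# BigQuery REST API job resource; the IDs below are placeholders.
import json

def make_query_job(project, dataset, table, sql,
                   write_disposition="WRITE_EMPTY", location="US"):
    return {
        "jobReference": {
            "projectId": project,
            "location": location,  # controls where the query job runs
        },
        "configuration": {
            "query": {
                "query": sql,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                # One of WRITE_EMPTY, WRITE_APPEND, WRITE_TRUNCATE.
                "writeDisposition": write_disposition,
            }
        },
    }

job = make_query_job("my-project", "mydataset", "mytable",
                     "SELECT 1 AS x",
                     write_disposition="WRITE_TRUNCATE")
print(json.dumps(job, indent=2))
```

WRITE_TRUNCATE corresponds to the Overwrite table option, WRITE_APPEND to Append to table, and the WRITE_EMPTY default to Write if empty.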
Go
Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.
To save query results to a permanent table, set the destination table to the desired TableId in a QueryJobConfiguration.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.
Writing large query results
Normally, queries have a maximum response size. If you plan to run a query that might return larger results, you can do one of the following:
- In standard SQL, specify a destination table for the query results.
- In legacy SQL, specify a destination table and set the allowLargeResults option.
When you specify a destination table for large query results, you are charged for storing the data.
Limitations
In legacy SQL, writing large results is subject to these limitations:
- You must specify a destination table.
- You cannot specify a top-level ORDER BY, TOP, or LIMIT clause. Doing so negates the benefit of using allowLargeResults, because the query output can no longer be computed in parallel.
- Window functions can return large query results only if used in conjunction with a PARTITION BY clause.
Writing large results using legacy SQL
To write large result sets using legacy SQL:
Console
Open the BigQuery page in the Cloud Console.
Click Compose new query.
Enter a valid SQL query in the Query editor text area. Use the #legacySQL prefix or be sure you have Use Legacy SQL checked in the query settings.
Click More and then select Query settings.
For Destination, check Set a destination table for query results.
For Project name, choose the project where the destination table will be created.
For Dataset name, choose the dataset that will store the table.
In the Table name field, enter a table name.
If you are writing a large results set to an existing table, you can use the Destination table write preference options to control the write disposition of the destination table:
- Write if empty: Writes the query results to the table only if the table is empty.
- Append to table: Appends the query results to an existing table.
- Overwrite table: Overwrites an existing table with the same name using the query results.
For Results Size, check Allow Large Results (no size limit).
(Optional) For Processing Location, click Auto-select and choose your data's location.
Click Save to update the query settings.
Click Run. This creates a query job that writes the large results set to the table you specified.
bq
Use the --allow_large_results flag with the --destination_table flag to create a destination table to hold the large results set. Because the --allow_large_results option only applies to legacy SQL, you must also specify the --use_legacy_sql=true flag. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.

(Optional) Supply the --location flag and set the value to your location.
To control the write disposition for an existing destination table, specify one of the following optional flags:
- --append_table: If the destination table exists, the query results are appended to it.
- --replace: If the destination table exists, it is overwritten with the query results.
bq --location=LOCATION query \
--destination_table PROJECT_ID:DATASET.TABLE \
--use_legacy_sql=true \
--allow_large_results \
"QUERY"
Replace the following:
- LOCATION is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- PROJECT_ID is your project ID.
- DATASET is the name of the dataset that contains the table to which you are writing the query results.
- TABLE is the name of the table to which you're writing the query results.
- QUERY is a query in legacy SQL syntax.
Examples:
Enter the following command to write large query results to a destination table named mytable in mydataset. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset. This query is used for example purposes only. The results set returned does not exceed the maximum response size.
bq query \
--destination_table mydataset.mytable \
--use_legacy_sql=true \
--allow_large_results \
"SELECT
name,
number
FROM
[bigquery-public-data:usa_names.usa_1910_current]
WHERE
gender = 'M'
ORDER BY
number DESC"
Enter the following command to use large query results to overwrite a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --replace flag to overwrite the destination table.
bq query \
--destination_table myotherproject:mydataset.mytable \
--replace \
--use_legacy_sql=true \
--allow_large_results \
"SELECT
name,
number
FROM
[bigquery-public-data:usa_names.usa_1910_current]
WHERE
gender = 'M'
ORDER BY
number DESC"
Enter the following command to append large query results to a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --append_table flag to append the query results to the destination table.
bq query \
--destination_table myotherproject:mydataset.mytable \
--append_table \
--use_legacy_sql=true \
--allow_large_results \
"SELECT
name,
number
FROM
[bigquery-public-data:usa_names.usa_1910_current]
WHERE
gender = 'M'
ORDER BY
number DESC"
API
To write large results to a destination table, call the jobs.insert method, configure a query job, and set the allowLargeResults property to true. Specify the destination table using the destinationTable property. To control the write disposition for an existing destination table, configure the writeDisposition property.

Specify your location in the location property in the jobReference section of the job resource.
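As a sketch, the request body for a legacy SQL large-results job combines the properties above; allowLargeResults requires both a destination table and legacy SQL, and the project, dataset, and table IDs here are placeholders:

```python
# Sketch of a jobs.insert body for a legacy SQL query with large
# results enabled. allowLargeResults is only honored for legacy SQL
# and requires a destination table; the IDs are placeholders.
config = {
    "jobReference": {"projectId": "my-project", "location": "US"},
    "configuration": {
        "query": {
            "query": ("SELECT name FROM "
                      "[bigquery-public-data:usa_names.usa_1910_current]"),
            "useLegacySql": True,
            "allowLargeResults": True,
            "destinationTable": {
                "projectId": "my-project",
                "datasetId": "mydataset",
                "tableId": "mytable",
            },
            # Append to the destination table if it already exists.
            "writeDisposition": "WRITE_APPEND",
        }
    },
}
print(config["configuration"]["query"]["allowLargeResults"])
```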
Go
Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.
Java
To enable large results, set allowLargeResults to true and set the destination table to the desired TableId in a QueryJobConfiguration.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.
Downloading and saving query results from the Cloud Console
After you run a SQL query by using the Cloud Console, you can save the results to another location. You can use the Cloud Console to download query results to a local file, Sheets, or Drive. Saving results to a local file, Sheets, or Drive is not supported by the bq command-line tool or the API.
Limitations
Downloading and saving query results are subject to the following limitations:
- You can download query results locally only in CSV or newline-delimited JSON format.
- You cannot download query results containing nested and repeated data in CSV format.
- You cannot save query results containing nested and repeated data to Sheets.
- To save query results to Drive using the Cloud Console, the results set must be 1 GB or less. If your results are larger than 1 GB, you can save them to a table instead.
- You can save query results to Drive only in CSV or newline-delimited JSON format.
Downloading query results to a local file
Downloading query results to a local file is not supported by the bq command-line tool or the API.
To download query results as a CSV or newline-delimited JSON file, use the Cloud Console:
Console
Open the BigQuery page in the Cloud Console.
Click Compose new query.
Enter a valid SQL query in the Query editor text area.
(Optional) To change the processing location, click More and select Query settings. Under Processing location, click Auto-select and choose your data's location.
Click Run.
When the results are returned, click Save results and select the format and location where you want to save the results.
The file is downloaded to your browser's default download location.
Saving query results to Drive
Saving query results to Drive is not supported by the bq command-line tool or the API.
To save query results to Drive, use the Cloud Console:
Console
Open the BigQuery page in the Cloud Console.
Enter a valid SQL query in the Query editor text area.
Click Run.
When the results are returned, click Save Results.
Select CSV (Google Drive) or JSON (Google Drive). When you save results to Drive, you cannot choose the location. Results are always saved to the root "My Drive" location.
It may take a few minutes to save the results to Drive. When the results are saved, you receive a pop-up message that includes the filename: bq-results-[TIMESTAMP]-[RANDOM_CHARACTERS].[CSV or JSON].
In the pop-up message, click Open to open the file, or navigate to Drive and click My Drive.
Saving query results to Sheets
Saving query results to Sheets is not supported by the bq command-line tool or the API.
To save query results to Sheets, use the Cloud Console:
Console
Open the BigQuery page in the Cloud Console.
Click Compose new query.
Enter a valid SQL query in the Query editor text area.
(Optional) To change the processing location, click More and select Query settings. Under Processing location, click Auto-select and choose your data's location.
Click Run.
When the results are returned, click Save results and select Google Sheets.
If necessary, follow the prompts to log into your Google Account and click Allow to give BigQuery permission to write the data to your My Drive folder in Drive.
After following the prompts, you should receive an email with the subject "BigQuery Client Tools connected to your Google Account". The email contains information on the permissions you granted along with steps to remove the permissions.
When the results are saved, a message similar to the following appears below the query results in the Cloud Console:
Saved to Sheets as "results-20190225-103531". Open
Click the link in the message to view your results in Sheets, or navigate to your My Drive folder and open the file manually.
When you save query results to Sheets, the filename begins with results-[DATE], where [DATE] is today's date in the format YYYYMMDD.
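As a small illustration of that naming pattern, the date portion of the filename can be reproduced like this (the time suffix that follows the date is chosen by BigQuery):

```python
# Build the "results-[DATE]" filename prefix used for Sheets exports,
# where [DATE] is today's date in YYYYMMDD format.
from datetime import date

prefix = f"results-{date.today().strftime('%Y%m%d')}"
print(prefix)
```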