Writing query results

This document describes how to write or save query results.

Temporary and permanent tables

BigQuery saves all query results to a table, which can be either permanent or temporary:

  • A temporary table is a randomly named table saved in a special dataset. Temporary tables are used to cache query results. A temporary table has a lifetime of approximately 24 hours. Temporary tables are not available for sharing, and are not visible using any of the standard list or other table manipulation methods. You are not charged for storing temporary tables.

  • A permanent table can be a new or existing table in any dataset to which you have access. If you write query results to a new table, you are charged for storing the data. When you write query results to a permanent table, the tables you're querying must be in the same location as the dataset that contains the destination table.

Writing query results to a permanent table

When you write query results to a permanent table, you can create a new table, append the results to an existing table, or overwrite an existing table. You can write query results to a permanent table by:

  • Using the GCP Console or the classic BigQuery web UI
  • Using the command-line tool's bq query command
  • Calling the jobs.insert API method and configuring a query job

Required permissions

The permissions required for writing query results to a permanent table depend on the write disposition of the data.

Permissions for writing query results to a new table

If you are writing query results to a new table, you must have WRITER access at the dataset level, or you must be assigned a project-level IAM role that includes bigquery.tables.create permissions. The following predefined, project-level IAM roles include bigquery.tables.create permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

In addition, because the bigquery.user role has bigquery.datasets.create permissions, a user assigned to the bigquery.user role can create tables in any dataset that user creates. When a user assigned to the bigquery.user role creates a dataset, that user is given OWNER access to the dataset. OWNER access to a dataset gives the user full control over it and all the tables in it.

For more information on IAM roles and permissions in BigQuery, see Access control. For more information on dataset-level roles, see Primitive roles for datasets.

Permissions for overwriting or appending data

If you are using query results to overwrite an existing table or to append data to an existing table, you must have WRITER access at the dataset level, or you must be assigned a project-level IAM role that includes bigquery.tables.updateData permissions. The following predefined, project-level IAM roles include bigquery.tables.updateData permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

In addition, because the bigquery.user role has bigquery.datasets.create permissions, a user assigned to the bigquery.user role can overwrite or append data in any table in a dataset that user creates. When a user assigned to the bigquery.user role creates a dataset, that user is given OWNER access to the dataset. OWNER access to a dataset gives the user full control over it and all the tables in it.

For more information on IAM roles and permissions in BigQuery, see Access control. For more information on dataset-level roles, see Primitive roles for datasets.

Writing query results

To write your query results to a permanent table:

Console

  1. Open the BigQuery web UI in the GCP Console.
    Go to the BigQuery web UI

  2. In the navigation panel, in the Resources section, expand your project and select a dataset.

  3. If the Query editor is hidden, click Show editor at the top right of the window.

  4. Enter a valid SQL query in the Query editor text area.

  5. Click More below the editor, then select Query settings.

    Query settings

  6. Check the box to Set a destination table for query results.

    Set destination

  7. In the Destination section, select the appropriate Project name and Dataset name where the table will be created, and choose a Table name.

  8. In the Destination table write preference section, choose one of the following:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  9. (Optional) For Processing location, click Auto-select and choose your data's location.

  10. Click Run query. This creates a query job that writes the query results to the table you specified.

Alternately, if you forget to specify a destination table before running your query, you can copy the temporary table to a permanent table by clicking the Save results button above the query results.
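The three write preferences above correspond to fixed values of the API's configuration.query.writeDisposition property. As a quick reference (a sketch for orientation, not official client code):

```python
# Mapping of the UI's write preferences to the REST API's
# configuration.query.writeDisposition values.
WRITE_PREFERENCES = {
    "Write if empty": "WRITE_EMPTY",       # default; fails if the table is non-empty
    "Append to table": "WRITE_APPEND",
    "Overwrite table": "WRITE_TRUNCATE",
}
```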

Classic UI

Option 1: Use a DDL statement

Data definition language (DDL) statements allow you to create and modify tables using standard SQL query syntax.

For more information, see the CREATE TABLE statement page and the CREATE TABLE example: Creating a new table from an existing table.

Option 2: Use the classic web UI

  1. Go to the classic BigQuery web UI.
    Go to the classic BigQuery web UI

  2. Click the Compose query button.

  3. Enter a valid SQL query in the New Query text area.

  4. Click Show Options.

  5. In the Destination Table section, click Select Table.

  6. In the Select Destination Table dialog:

    1. For Project, choose the project where the destination table will be created.

    2. For Dataset, choose the dataset that will store the table.

    3. In the Table ID field, enter a table name. The name must be unique in the destination dataset. The table name can be up to 1024 characters long and can contain only a-z, A-Z, 0-9, or _ (the underscore character).

    4. Click OK.

  7. In the Destination Table section, for Write Preference, choose one of the following:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  8. (Optional) For Processing Location, click Unspecified and choose your data's location.

  9. Click Run query. This creates a query job that writes the query results to the table you specified.

Alternately, if you forget to specify a destination table before running your query, you can copy the temporary table to a permanent table by clicking the Save as Table button in the results window.

CLI

Enter the bq query command and specify the --destination_table flag to create a permanent table based on the query results. Specify the --use_legacy_sql=false flag to use standard SQL syntax. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: [PROJECT_ID]:[DATASET].

Supply the --location flag and set the value to your location.

To control the write disposition for an existing destination table, specify one of the following optional flags:

  • --append_table — If the destination table exists, the query results are appended to it.
  • --replace — If the destination table exists, it is overwritten with the query results.

    bq --location=[LOCATION] query --destination_table [PROJECT_ID]:[DATASET].[TABLE] --use_legacy_sql=false '[QUERY]'
    

Where:

  • [LOCATION] is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location by using the .bigqueryrc file.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the dataset that contains the table to which you are writing the query results.
  • [TABLE] is the name of the table to which you're writing the query results.
  • [QUERY] is a query in standard SQL syntax.

If no write disposition flag is specified, the default behavior is to write the results to the table only if it is empty. If the table exists and it is not empty, the following error is returned: BigQuery error in query operation: Error processing job '[PROJECT_ID]:bqjob_123abc456789_00000e1234f_1': Already Exists: Table [PROJECT_ID]:[DATASET].[TABLE].

Examples:

Enter the following command to write query results to a destination table named mytable in mydataset. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset.

bq --location=US query --destination_table mydataset.mytable --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'

Enter the following command to use query results to overwrite a destination table named mytable in mydataset. The dataset is in your default project. The command uses the --replace flag to overwrite the destination table.

bq --location=US query --destination_table mydataset.mytable --replace --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'

Enter the following command to append query results to a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --append_table flag to append the query results to the destination table.

bq --location=US query --destination_table myotherproject:mydataset.mytable --append_table --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'

API

To save query results to a permanent table, call the jobs.insert method, configure a query job, and include a value for the configuration.query.destinationTable property. To control the write disposition for an existing destination table, configure the configuration.query.writeDisposition property.

Specify your location in the location property in the jobReference section of the job resource.
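For illustration, here is a minimal sketch of the job resource you might pass to jobs.insert. The field names follow the REST API; make_query_job_body and the placeholder IDs are hypothetical helpers for this example, not part of any client library:

```python
def make_query_job_body(query, project, dataset, table,
                        write_disposition="WRITE_EMPTY", location="US"):
    """Builds a jobs.insert request body that writes query results to a table."""
    return {
        "jobReference": {"projectId": project, "location": location},
        "configuration": {
            "query": {
                "query": query,
                "useLegacySql": False,
                # Results are written to this permanent table.
                "destinationTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                # WRITE_EMPTY (default), WRITE_APPEND, or WRITE_TRUNCATE.
                "writeDisposition": write_disposition,
            }
        },
    }

body = make_query_job_body("SELECT 17 AS my_col", "my-project",
                           "mydataset", "mytable",
                           write_disposition="WRITE_TRUNCATE")
```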

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client. For example:
// import (
// 	"context"
// 	"fmt"
//
// 	"cloud.google.com/go/bigquery"
// 	"google.golang.org/api/iterator"
// )
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")

q := client.Query("SELECT 17 as my_col")
q.Location = "US" // Location must match the dataset(s) referenced in query.
q.QueryConfig.Dst = client.Dataset(destDatasetID).Table(destTableID)
job, err := q.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}
it, err := job.Read(ctx)
if err != nil {
	return err
}
for {
	var row []bigquery.Value
	err := it.Next(&row)
	if err == iterator.Done {
		break
	}
	if err != nil {
		return err
	}
	fmt.Println(row)
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

To save query results to a permanent table, set the destination table to the desired TableId in a QueryJobConfiguration.

// BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
// String destinationDataset = "my_destination_dataset";
// String destinationTable = "my_destination_table";
String query = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;";
QueryJobConfiguration queryConfig =
    // Note that setUseLegacySql is set to false by default
    QueryJobConfiguration.newBuilder(query)
        // Save the results of the query to a permanent table.
        .setDestinationTable(TableId.of(destinationDataset, destinationTable))
        .build();

// Print the results.
for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  for (FieldValue val : row) {
    System.out.printf("%s,", val.toString());
  }
  System.out.printf("\n");
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

To save query results to a permanent table, create a QueryJobConfig and set the destination to the desired TableReference. Pass the job configuration to the query method.

# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'your_dataset_id'

job_config = bigquery.QueryJobConfig()
# Set the destination table
table_ref = client.dataset(dataset_id).table('your_table_id')
job_config.destination = table_ref
sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus;
"""

# Start the query, passing in the extra configuration.
query_job = client.query(
    sql,
    # Location must match that of the dataset(s) referenced in the query
    # and of the destination table.
    location='US',
    job_config=job_config)  # API request - starts the query

query_job.result()  # Waits for the query to finish
print('Query results loaded to table {}'.format(table_ref.path))

Writing large query results

Normally, queries have a maximum response size. If you plan to run a query that might return larger results, you can:

  • In standard SQL, specify a destination table for the query results.
  • In legacy SQL, specify a destination table and set the allowLargeResults option.

When you specify a destination table for large query results, you are charged for storing the data.

Limitations

In legacy SQL, writing large results is subject to these limitations:

  • You must specify a destination table.
You cannot specify a top-level ORDER BY, TOP, or LIMIT clause. Doing so negates the benefit of using allowLargeResults, because the query output can no longer be computed in parallel.
  • Window functions can return large query results only if used in conjunction with a PARTITION BY clause.

Writing large results using legacy SQL

To write large result sets using legacy SQL:

Console

  1. Open the BigQuery web UI in the GCP Console.
    Go to the BigQuery web UI

  2. Click Compose new query.

  3. Enter a valid BigQuery SQL query in the Query editor text area. Use the #legacySQL prefix or be sure you have Use Legacy SQL checked in the query settings.

  4. Click More then select Query settings.

    Query settings

  5. For Destination, check Set a destination table for query results.

    Set destination

  6. For Project name, choose the project where the destination table will be created.

  7. For Dataset name, choose the dataset that will store the table.

  8. In the Table name field, enter a table name.

  9. If you are writing a large results set to an existing table, you can use the Destination table write preference options to control the write disposition of the destination table:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.

    Overwrite table radio

  10. For Results Size, check Allow Large Results (no size limit).

    Query results size

  11. (Optional) For Processing Location, click Auto-select and choose your data's location.

    Query processing location

  12. Click Save to update the query settings.

  13. Click Run. This creates a query job that writes the large results set to the table you specified.

Classic UI

  1. Go to the BigQuery web UI.
    Go to the BigQuery web UI

  2. Click the Compose query button.

  3. Enter a valid BigQuery SQL query in the New Query text area. Use the #legacySQL prefix or be sure you have Use Legacy SQL checked in the query options.

  4. Click Show Options.

  5. For Destination Table, click Select Table.

  6. In the Select Destination Table dialog:

    1. For Project, choose the project where the destination table will be created.

    2. For Dataset, choose the dataset that will store the table.

    3. In the Table ID field, enter a table name.

    4. Click OK.

  7. If you are writing a large results set to an existing table, you can use the Write Preference option to control the write disposition of the destination table:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  8. For Results Size, check Allow Large Results.

    Allow large results option

  9. (Optional) For Processing Location, click Unspecified and choose your data's location.

  10. Click Run Query. This creates a query job that writes the large results set to the table you specified.

Command-line

Use the --allow_large_results flag with the --destination_table flag to create a destination table to hold the large results set. Because the --allow_large_results option only applies to legacy SQL, you must also specify the --use_legacy_sql=true flag. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: [PROJECT_ID]:[DATASET]. Supply the --location flag and set the value to your location.

To control the write disposition for an existing destination table, specify one of the following optional flags:

  • --append_table — If the destination table exists, the query results are appended to it.
  • --replace — If the destination table exists, it is overwritten with the query results.

    bq --location=[LOCATION] query --destination_table [PROJECT_ID]:[DATASET].[TABLE] --use_legacy_sql=true --allow_large_results "[QUERY]"
    

Where:

  • [LOCATION] is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the dataset that contains the table to which you are writing the query results.
  • [TABLE] is the name of the table to which you're writing the query results.
  • [QUERY] is a query in legacy SQL syntax.

Examples:

Enter the following command to write large query results to a destination table named mytable in mydataset. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset. This query is used for example purposes only. The results set returned does not exceed the maximum response size.

bq --location=US query --destination_table mydataset.mytable --use_legacy_sql=true --allow_large_results "SELECT name,number FROM [bigquery-public-data:usa_names.usa_1910_current] WHERE gender = 'M' ORDER BY number DESC"

Enter the following command to use large query results to overwrite a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --replace flag to overwrite the destination table.

bq --location=US query --destination_table mydataset.mytable --replace --use_legacy_sql=true --allow_large_results "SELECT name,number FROM [bigquery-public-data:usa_names.usa_1910_current] WHERE gender = 'M' ORDER BY number DESC"

Enter the following command to append large query results to a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --append_table flag to append the query results to the destination table.

bq --location=US query --destination_table myotherproject:mydataset.mytable --append_table --use_legacy_sql=true --allow_large_results "SELECT name,number FROM [bigquery-public-data:usa_names.usa_1910_current] WHERE gender = 'M' ORDER BY number DESC"

API

To write large results to a destination table, call the jobs.insert method, configure a query job, and set the configuration.query.allowLargeResults property to true. Specify the destination table using the configuration.query.destinationTable property. To control the write disposition for an existing destination table, configure the configuration.query.writeDisposition property.

Specify your location in the location property in the jobReference section of the job resource.
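For orientation, a sketch of the configuration.query section for a large-results legacy SQL job. The property names follow the REST API; the project, dataset, and table IDs are placeholders:

```python
# Sketch of configuration.query for a jobs.insert request that writes
# large legacy SQL results. allowLargeResults requires legacy SQL and
# a destination table.
large_results_query_config = {
    "query": "SELECT corpus FROM [bigquery-public-data:samples.shakespeare] GROUP BY corpus",
    "useLegacySql": True,
    "allowLargeResults": True,
    "destinationTable": {
        "projectId": "your-project-id",  # placeholder
        "datasetId": "mydataset",        # placeholder
        "tableId": "mytable",            # placeholder
    },
    # Optional; default is WRITE_EMPTY.
    "writeDisposition": "WRITE_TRUNCATE",
}
```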

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

q := client.Query(
	"SELECT corpus FROM [bigquery-public-data:samples.shakespeare] GROUP BY corpus;")
q.UseLegacySQL = true
q.AllowLargeResults = true
q.QueryConfig.Dst = client.Dataset(dstDatasetID).Table(dstTableID)
job, err := q.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}
it, err := job.Read(ctx)
if err != nil {
	return err
}
for {
	var row []bigquery.Value
	err := it.Next(&row)
	if err == iterator.Done {
		break
	}
	if err != nil {
		return err
	}
	fmt.Println(row)
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

To enable large results, set allow large results to true and set the destination table to the desired TableId in a QueryJobConfiguration.

// BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
// String destinationDataset = "my_destination_dataset";
// String destinationTable = "my_destination_table";
String query = "SELECT corpus FROM [bigquery-public-data:samples.shakespeare] GROUP BY corpus;";
QueryJobConfiguration queryConfig =
    // To use legacy SQL syntax, set useLegacySql to true.
    QueryJobConfiguration.newBuilder(query)
        .setUseLegacySql(true)
        // Save the results of the query to a permanent table.
        .setDestinationTable(TableId.of(destinationDataset, destinationTable))
        // Allow results larger than the maximum response size.
        // If true, a destination table must be set.
        .setAllowLargeResults(true)
        .build();

// Print the results.
for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  for (FieldValue val : row) {
    System.out.printf("%s,", val.toString());
  }
  System.out.printf("\n");
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'your_dataset_id'

job_config = bigquery.QueryJobConfig()
# Set use_legacy_sql to True to use legacy SQL syntax.
job_config.use_legacy_sql = True
# Set the destination table
table_ref = client.dataset(dataset_id).table("your_table_id")
job_config.destination = table_ref
job_config.allow_large_results = True
sql = """
    SELECT corpus
    FROM [bigquery-public-data:samples.shakespeare]
    GROUP BY corpus;
"""
# Start the query, passing in the extra configuration.
query_job = client.query(
    sql,
    # Location must match that of the dataset(s) referenced in the query
    # and of the destination table.
    location="US",
    job_config=job_config,
)  # API request - starts the query

query_job.result()  # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))

Downloading and saving query results

After you run a SQL query, you can download the results to a file on your local machine, save them to Google Drive or Google Sheets, or save them to a permanent table in BigQuery.

Limitations

Downloading and saving query results are subject to the following limitations:

  • You can download query results only to a local file or to Google Sheets in the classic BigQuery web UI. To download the results to Google Drive, use the GCP Console.
  • To download query results using the classic BigQuery web UI, the results set must contain fewer than 16,000 rows, and it must be 10 MB or less. If your results are larger than 10 MB or contain more than 16,000 rows, you can save them to a table instead.
  • You can download query results locally only in CSV or newline-delimited JSON format.
  • You cannot download query results containing nested and repeated data in CSV format.
  • You cannot save query results containing nested and repeated data to Google Sheets.
  • When you save query results to Google Sheets using the classic BigQuery web UI, the results set must contain fewer than 16,000 rows, and it must be 10 MB or less. If your results are larger than 10 MB or contain more than 16,000 rows, you can save them to a table instead.
  • Saving results to a local file, to Google Sheets, or to Google Drive is not supported by the command-line tool or the API.
  • To save query results to Google Drive using the GCP Console, the results set must be 1 GB or less. If your results are larger than 1 GB, you can save them to a table instead.
  • You can save query results to Google Drive only in CSV or newline-delimited JSON format.

Downloading query results to a local file

Downloading query results to a local file is not supported by the command-line tool or the API.
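If you need a local file without going through the UI, one workaround is to fetch the rows with a client library and write the CSV yourself. A minimal sketch — rows_to_csv is a hypothetical helper, and the commented-out query call assumes the google-cloud-bigquery client from the samples above:

```python
import csv

def rows_to_csv(rows, path, fieldnames):
    """Writes an iterable of row mappings to a local CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for row in rows:
            # BigQuery client rows behave like mappings, so dict() works here.
            writer.writerow(dict(row))

# With the google-cloud-bigquery client, you would feed it query rows, e.g.:
# rows = client.query("SELECT name, number FROM `mydataset.mytable`").result()
# rows_to_csv(rows, "results.csv", fieldnames=["name", "number"])
```

Note that this streams the full results through your machine, so it is only practical for modest result sets.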

To download query results as a CSV or newline-delimited JSON file by using the web UI:

Console

  1. Open the BigQuery web UI in the GCP Console.
    Go to the BigQuery web UI

  2. Click Compose new query.

  3. Enter a valid SQL query in the Query editor text area.

  4. (Optional) To change the processing location, click More and select Query settings. Under Processing location, click Auto-select and choose your data's location.

  5. Click Run.

  6. When the results are returned, click Save results, and then select the format and location where you want to save the results.

    The file is downloaded to your browser's default download location.

Classic UI

  1. Go to the BigQuery web UI.
    Go to the BigQuery web UI

  2. Click the Compose Query button.

  3. Enter a valid SQL query in the New Query text area.

  4. Click Show Options.

  5. (Optional) For Processing Location, click Unspecified and choose your data's location.

  6. Click Run Query.

  7. When the results are returned, click the Download as CSV or Download as JSON button above the query results.

    screenshot of download and save buttons

    The file is downloaded to your browser's default download location.

Saving query results to Google Drive

Saving query results to Google Drive is not supported by the command-line tool, the API, or the classic BigQuery web UI.

To save query results to Google Drive by using the GCP Console:

Console

  1. Open the BigQuery web UI in the GCP Console.

    Go to the BigQuery web UI

  2. Enter a valid SQL query in the Query editor text area.

  3. Click Run.

  4. When the results are returned, click Save Results.

    screenshot of save results button

  5. Select CSV (Google Drive) or JSON (Google Drive). When you save results to Google Drive, you cannot choose the location. Results are always saved to the root "My Drive" location.

  6. It may take a few minutes to save the results to Google Drive. When the results are saved, you receive a pop-up message that includes the file name — bq-results-[TIMESTAMP]-[RANDOM_CHARACTERS].[CSV or JSON].


  7. In the pop-up message, click Open to open the file, or navigate to Google Drive and click My Drive.

Saving query results to a table

To save query results as a table:

Console

  1. Open the BigQuery web UI in the GCP Console.
    Go to the BigQuery web UI

  2. In the navigation panel, in the Resources section, expand your project and select a dataset.

  3. If the Query editor is hidden, click Show editor at the top right of the window.

  4. Enter a valid SQL query in the Query editor text area.

  5. Click More below the editor, then select Query settings.

    Query settings

  6. Check the box to Set a destination table for query results.

    Set destination

  7. In the Destination section, select the appropriate Project name and Dataset name where the table will be created, and choose a Table name.

  8. In the Destination table write preference section, choose one of the following:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  9. (Optional) For Processing location, click Auto-select and choose your data's location.

  10. Click Run query. This creates a query job that writes the query results to the table you specified.

Alternately, if you forget to specify a destination table before running your query, you can copy the temporary table to a permanent table by clicking the Save results button above the query results.

Classic UI

Option 1: Use a DDL statement

Data definition language (DDL) statements allow you to create and modify tables using standard SQL query syntax.

For more information, see the CREATE TABLE statement page and the CREATE TABLE example: Creating a new table from an existing table.

Option 2: Use the classic web UI

  1. Go to the classic BigQuery web UI.
    Go to the classic BigQuery web UI

  2. Click the Compose query button.

  3. Enter a valid SQL query in the New Query text area.

  4. Click Show Options.

  5. In the Destination Table section, click Select Table.

  6. In the Select Destination Table dialog:

    1. For Project, choose the project where the destination table will be created.

    2. For Dataset, choose the dataset that will store the table.

    3. In the Table ID field, enter a table name. The name must be unique in the destination dataset. The table name can be up to 1024 characters long and can contain only a-z, A-Z, 0-9, or _ (the underscore character).

    4. Click OK.

  7. In the Destination Table section, for Write Preference, choose one of the following:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  8. (Optional) For Processing Location, click Unspecified and choose your data's location.

  9. Click Run query. This creates a query job that writes the query results to the table you specified.

Alternately, if you forget to specify a destination table before running your query, you can copy the temporary table to a permanent table by clicking the Save as Table button in the results window.

CLI

Enter the bq query command and specify the --destination_table flag to create a permanent table based on the query results. Specify the --use_legacy_sql=false flag to use standard SQL syntax. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: [PROJECT_ID]:[DATASET].

Supply the --location flag and set the value to your location.

To control the write disposition for an existing destination table, specify one of the following optional flags:

  • --append_table — If the destination table exists, the query results are appended to it.
  • --replace — If the destination table exists, it is overwritten with the query results.

    bq --location=[LOCATION] query --destination_table [PROJECT_ID]:[DATASET].[TABLE] --use_legacy_sql=false '[QUERY]'
    

Where:

  • [LOCATION] is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location by using the .bigqueryrc file.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the dataset that contains the table to which you are writing the query results.
  • [TABLE] is the name of the table to which you're writing the query results.
  • [QUERY] is a query in standard SQL syntax.

If no write disposition flag is specified, the default behavior is to write the results to the table only if it is empty. If the table exists and it is not empty, the following error is returned: BigQuery error in query operation: Error processing job '[PROJECT_ID]:bqjob_123abc456789_00000e1234f_1': Already Exists: Table [PROJECT_ID]:[DATASET].[TABLE].

Examples:

Enter the following command to write query results to a destination table named mytable in mydataset. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset.

bq --location=US query --destination_table mydataset.mytable --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'

Enter the following command to use query results to overwrite a destination table named mytable in mydataset. The dataset is in your default project. The command uses the --replace flag to overwrite the destination table.

bq --location=US query --destination_table mydataset.mytable --replace --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'

Enter the following command to append query results to a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --append_table flag to append the query results to the destination table.

bq --location=US query --destination_table myotherproject:mydataset.mytable --append_table --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'

API

To save query results to a permanent table, call the jobs.insert method, configure a query job, and include a value for the configuration.query.destinationTable property. To control the write disposition for an existing destination table, configure the configuration.query.writeDisposition property.

Specify your location in the location property in the jobReference section of the job resource.
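As a sketch, a jobs.insert request body that writes query results to a destination table and overwrites it if it already exists might look like the following (the project, dataset, and table names are placeholders):

```json
{
  "jobReference": {
    "projectId": "my-project",
    "location": "US"
  },
  "configuration": {
    "query": {
      "query": "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus",
      "useLegacySql": false,
      "destinationTable": {
        "projectId": "my-project",
        "datasetId": "mydataset",
        "tableId": "mytable"
      },
      "writeDisposition": "WRITE_TRUNCATE"
    }
  }
}
```

The writeDisposition property accepts WRITE_EMPTY (the default, write only if the table is empty), WRITE_APPEND, or WRITE_TRUNCATE, mirroring the write preferences available in the web UI and CLI.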

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries . For more information, see the BigQuery Go API reference documentation .

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import (
// 	"context"
// 	"fmt"
//
// 	"cloud.google.com/go/bigquery"
// 	"google.golang.org/api/iterator"
// )
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
q := client.Query("SELECT 17 as my_col")
q.Location = "US" // Location must match the dataset(s) referenced in query.
q.QueryConfig.Dst = client.Dataset(destDatasetID).Table(destTableID)
job, err := q.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}
it, err := job.Read(ctx)
if err != nil {
	return err
}
for {
	var row []bigquery.Value
	err := it.Next(&row)
	if err == iterator.Done {
		break
	}
	if err != nil {
		return err
	}
	fmt.Println(row)
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries . For more information, see the BigQuery Java API reference documentation .

To save query results to a permanent table, set the destination table to the desired TableId in a QueryJobConfiguration.

// BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
// String destinationDataset = "my_destination_dataset";
// String destinationTable = "my_destination_table";
String query = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;";
QueryJobConfiguration queryConfig =
    // Note that setUseLegacySql is set to false by default
    QueryJobConfiguration.newBuilder(query)
        // Save the results of the query to a permanent table.
        .setDestinationTable(TableId.of(destinationDataset, destinationTable))
        .build();

// Print the results.
for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  for (FieldValue val : row) {
    System.out.printf("%s,", val.toString());
  }
  System.out.printf("\n");
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries . For more information, see the BigQuery Python API reference documentation .

To save query results to a permanent table, create a QueryJobConfig and set the destination to the desired TableReference. Pass the job configuration to the query method.

# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'your_dataset_id'

job_config = bigquery.QueryJobConfig()
# Set the destination table
table_ref = client.dataset(dataset_id).table('your_table_id')
job_config.destination = table_ref
sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus;
"""

# Start the query, passing in the extra configuration.
query_job = client.query(
    sql,
    # Location must match that of the dataset(s) referenced in the query
    # and of the destination table.
    location='US',
    job_config=job_config)  # API request - starts the query

query_job.result()  # Waits for the query to finish
print('Query results loaded to table {}'.format(table_ref.path))

Saving query results to Google Sheets

Saving query results to Google Sheets is not supported by the command-line tool or the API.

To save query results to Google Sheets by using the web UI:

Console

  1. Open the BigQuery web UI in the GCP Console.
    Go to the BigQuery web UI

  2. Click Compose new query.

  3. Enter a valid SQL query in the Query editor text area.

  4. (Optional) To change the processing location, click More and select Query settings. Under Processing location, click Auto-select and choose your data's location.

  5. Click Run.

  6. When the results are returned, click Save results and select Google Sheets.

  7. If necessary, follow the prompts to log into your Google account and click Allow to give BigQuery permission to write the data to your Google Drive My Drive folder.

    After following the prompts, you should receive an email with the subject "BigQuery Client Tools connected to your Google Account". The email contains information on the permissions you granted along with steps to remove the permissions.

  8. When the results are saved, a message similar to the following appears below the query results in the GCP Console BigQuery web UI: Saved to Sheets as "results-20190225-103531". Open. Click the link in the message to view your results in Google Sheets, or navigate to your My Drive folder and open the file manually.

    When you save query results to Google Sheets, the file name begins with results-[DATE] where [DATE] is today's date in the format YYYYMMDD.

Classic UI

  1. Go to the BigQuery web UI.
    Go to the BigQuery web UI

  2. Click the Compose Query button.

  3. Enter a valid SQL query in the New Query text area.

  4. Click Show Options.

  5. (Optional) For Processing Location, click Unspecified and choose your data's location.

  6. Click Run Query.

  7. When the results are returned, click the Save to Google Sheets button above the query results.

    screenshot of download and save buttons

  8. If necessary, follow the prompts to log into your Google account and click Allow to give BigQuery permission to write the data to your Google Drive My Drive folder.

    After following the prompts, you should receive an email with the subject "BigQuery Client Tools connected to your Google Account". The email contains information on the permissions you granted along with steps to remove the permissions.

  9. When the results are saved, a message like the following appears above the query results in the classic BigQuery web UI: Results saved to Google Sheets. Click to view. Click the link in the message to view your results in Google Sheets, or navigate to your My Drive folder and open the file manually.

    When you save query results to Google Sheets, the file name begins with results-[DATE] where [DATE] is today's date in the format YYYYMMDD.
