Exporting Data

This page describes how to export data out of BigQuery.

After you've loaded your data into BigQuery, you can export the data in several formats. BigQuery can export up to 1 GB of data per file, and supports exporting to multiple files.

You can also use Google Cloud Dataflow to read data from BigQuery. For more information about using Cloud Dataflow to read from and write to BigQuery, see BigQuery I/O in the Cloud Dataflow documentation.

Before you begin

  • Ensure that you have read access to the data you want to export. You need dataset-level READER access. Alternatively, you can use any BigQuery IAM role that provides read access, such as dataViewer, dataEditor, or user.
  • Ensure that you have write access to a Google Cloud Storage bucket. You can export data only to a Cloud Storage bucket.

Exporting data stored in BigQuery

The following examples export data from a public dataset to a Cloud Storage bucket. The data is exported in CSV format.

Web UI

  1. Go to the BigQuery web UI.

  2. In the navigation, find and click the expansion icon next to Public Datasets, and then expand bigquery-public-data:samples to display its contents.

    (Screenshot: left-navigation tree for the Shakespeare sample data)

  3. Find and click the down arrow icon next to shakespeare.

  4. Select Export table to display the Export to Google Cloud Storage dialog.

  5. Leave the default settings in place for Export format and Compression (CSV and None, respectively).

  6. In the Google Cloud Storage URI textbox, enter a valid URI in the format gs://[BUCKET_NAME]/[FILENAME.CSV], where [BUCKET_NAME] is your Cloud Storage bucket name, and [FILENAME.CSV] is the name of your destination file.

  7. Click OK to export the table. While the job is running, (extracting) appears next to the name of the table in the navigation.

To check on the progress of the job, look in Job History, near the top of the navigation, for an Extract job.

Command-line


Use the bq extract command.

  • bq extract [DATASET].[TABLE_NAME] gs://[BUCKET_NAME]/[FILENAME.CSV]

For example, the following command exports the shakespeare table from the bigquery-public-data:samples dataset into a file named shakespeare.csv in a Cloud Storage bucket named example-bucket:

  • bq extract 'bigquery-public-data:samples.shakespeare' gs://example-bucket/shakespeare.csv

The default destination format is CSV. To export into JSON or Avro, use the destination_format flag:

  • bq extract --destination_format=NEWLINE_DELIMITED_JSON 'bigquery-public-data:samples.shakespeare' gs://example-bucket/shakespeare.json

API


To export data, create a job and populate the configuration.extract object.

  1. Create an extract job that points to the BigQuery source data and the Cloud Storage destination. For information about creating jobs, see Managing Jobs, Datasets, and Projects.

  2. Specify the source table by using the sourceTable configuration object, which comprises the project ID, dataset ID, and table ID.

  3. Specify the destination URI(s), which must be fully-qualified in the format gs://[BUCKET_NAME]/[FILENAME.CSV]. Each URI can contain one '*' wildcard character, and it must come after the bucket name.

  4. Specify the data format by setting the configuration.extract.destinationFormat property. For example, to export a JSON file, set this property to the value NEWLINE_DELIMITED_JSON.

  5. To check the job status, call jobs.get([JOB_ID]) with the ID of the job returned by the initial request.

    • If status.state = DONE, the job completed successfully.
    • If the status.errorResult property is present, the request failed, and that object will include information describing what went wrong.
    • If status.errorResult is absent, the job finished successfully, although there might have been some non-fatal errors. Non-fatal errors are listed in the returned job object's status.errors property.
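
For illustration, the following is a minimal sketch of steps 1 through 4 using the Google API Client Library for Python; the project ID and bucket name are placeholders, and the request body mirrors the configuration.extract properties described above.

import uuid

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
bigquery_service = discovery.build('bigquery', 'v2', credentials=credentials)

# Build the extract job: source table, destination URI(s), and format.
job_body = {
    'jobReference': {
        'projectId': '[PROJECT_ID]',
        'jobId': str(uuid.uuid4()),  # a unique ID makes the insert safe to retry
    },
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': 'bigquery-public-data',
                'datasetId': 'samples',
                'tableId': 'shakespeare',
            },
            'destinationUris': ['gs://[BUCKET_NAME]/shakespeare.json'],
            'destinationFormat': 'NEWLINE_DELIMITED_JSON',
        }
    }
}

job = bigquery_service.jobs().insert(
    projectId='[PROJECT_ID]', body=job_body).execute()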

API notes:

  • As a best practice, generate a unique ID and pass it as jobReference.jobId when calling jobs.insert() to create a job. This approach is more robust to network failure because the client can poll or retry on the known job ID.

  • Calling jobs.insert() on a given job ID is idempotent; in other words, you can retry as many times as you like on the same job ID, and at most one of those operations will succeed.
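
Continuing the sketch above, polling for completion with jobs.get might look like the following; the checks mirror step 5.

import time

job_id = job['jobReference']['jobId']
while True:
    status = bigquery_service.jobs().get(
        projectId='[PROJECT_ID]', jobId=job_id).execute()['status']
    if status['state'] == 'DONE':
        if 'errorResult' in status:
            raise RuntimeError(status['errorResult'])  # the request failed
        break  # success; any non-fatal errors are listed in status['errors']
    time.sleep(1)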

Configuring export options

You can configure two aspects of the exported data: the format, and the compression type.

Destination format

BigQuery supports CSV, JSON, and Avro formats. Nested or repeated data cannot be exported to CSV, but it can be exported to JSON or Avro format.

Web UI


Set the destination format in the Export to Google Cloud Storage dialog.

  1. Follow steps 1 through 4 in the Exporting data stored in BigQuery section to display the Export to Google Cloud Storage dialog.

  2. Use the drop-down list next to Export format to select CSV, JSON, or Avro format.

Command-line


Use the bq extract command with the destination_format flag to set the format:

  • bq extract --destination_format=[CSV | NEWLINE_DELIMITED_JSON | AVRO] [DATASET].[TABLE_NAME] gs://[BUCKET_NAME]/[FILENAME]

For example, the following command exports the shakespeare table from the bigquery-public-data:samples dataset in JSON format into a file named shakespeare.json in a Cloud Storage bucket named example-bucket:

  • bq extract --destination_format=NEWLINE_DELIMITED_JSON 'bigquery-public-data:samples.shakespeare' gs://example-bucket/shakespeare.json

API


Specify the data format by setting the configuration.extract.destinationFormat property. For example, to export a JSON file, set this property to the value NEWLINE_DELIMITED_JSON.
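
In the Python client library used in the extract configuration example below, this property appears to be exposed as a destination_format attribute on the extract job. The following is a sketch under that assumption, with placeholder dataset, table, and bucket names.

import uuid

from google.cloud import bigquery

client = bigquery.Client()
table = client.dataset('[DATASET]').table('[TABLE_NAME]')
job = client.extract_table_to_storage(
    str(uuid.uuid4()), table, 'gs://[BUCKET_NAME]/[FILENAME].avro')
job.destination_format = 'AVRO'  # assumed mapping to configuration.extract.destinationFormat
job.begin()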

Compression

BigQuery supports GZIP compression, but the default setting is no compression (NONE).

Web UI


Set the compression type in the Export to Google Cloud Storage dialog.

  1. Follow steps 1 through 4 in the Exporting data stored in BigQuery section to display the Export to Google Cloud Storage dialog.

  2. Select a compression type, either NONE or GZIP, using the selection buttons for Compression.

Command-line


Use the bq extract command with the compression flag to set the compression type:

  • bq extract --compression=[GZIP | NONE] [DATASET].[TABLE_NAME] gs://[BUCKET_NAME]/[FILENAME]

For example, the following command exports the shakespeare table from the bigquery-public-data:samples dataset into a GZIP-compressed file named shakespeare.gz in a Cloud Storage bucket named example-bucket:

  • bq extract --compression=GZIP 'bigquery-public-data:samples.shakespeare' gs://example-bucket/shakespeare.gz

API


Specify the compression type by setting the configuration.extract.compression property. For example, to use GZIP compression, set this property to the value GZIP.
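
A file exported with GZIP compression is an ordinary gzip stream. As a small illustration, assuming you have already copied the exported object to a local file named shakespeare.gz (for example, with gsutil cp), a compressed CSV export can be read like this:

import csv
import gzip

# 'rt' opens the gzip stream in text mode so csv.reader sees decoded lines.
with gzip.open('shakespeare.gz', 'rt') as gz_file:
    for row in csv.reader(gz_file):
        print(row)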

Avro format

BigQuery expresses Avro formatted data in the following ways:

  • The resulting export files are Avro container files.
  • Each BigQuery row is represented as an Avro Record. Nested data is represented by nested Record objects.
  • REQUIRED fields are represented as the corresponding Avro types. For example, a BigQuery INTEGER type maps to an Avro LONG type.
  • NULLABLE fields are represented as an Avro Union of the corresponding type and "null".
  • REPEATED fields are represented as Avro arrays.
  • TIMESTAMP data types are represented as Avro LONG types.

The Avro format can't be used in combination with GZIP compression.
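
As an illustration of these mappings (hypothetical field names, not output produced by BigQuery), a table with a REQUIRED INTEGER field, a NULLABLE STRING field, and a REPEATED STRING field might correspond to an Avro record schema like the one printed below:

import json

avro_schema = {
    'type': 'record',
    'name': 'Root',
    'fields': [
        {'name': 'word_count', 'type': 'long'},           # REQUIRED INTEGER -> long
        {'name': 'corpus', 'type': ['string', 'null']},   # NULLABLE STRING -> union with null
        {'name': 'tags', 'type': {'type': 'array',
                                  'items': 'string'}},    # REPEATED STRING -> array
    ],
}
print(json.dumps(avro_schema, indent=2))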

Extract configuration example

The following code example shows the configuration of a job that exports data to a CSV file.

Python

This sample uses the Google Cloud Client Library for Python.

import time
import uuid

from google.cloud import bigquery


def export_data_to_gcs(dataset_name, table_name, destination):
    bigquery_client = bigquery.Client()
    dataset = bigquery_client.dataset(dataset_name)
    table = dataset.table(table_name)
    job_name = str(uuid.uuid4())

    job = bigquery_client.extract_table_to_storage(
        job_name, table, destination)

    job.begin()

    wait_for_job(job)

    print('Exported {}:{} to {}'.format(
        dataset_name, table_name, destination))


def wait_for_job(job):
    while True:
        job.reload()  # refreshes the job state via the API
        if job.state == 'DONE':
            if job.error_result:
                raise RuntimeError(job.error_result)
            return
        time.sleep(1)
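
For example, the functions above could be invoked with placeholder names like the following:

export_data_to_gcs('[DATASET]', '[TABLE_NAME]', 'gs://[BUCKET_NAME]/[FILENAME].csv')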

C#

This sample uses the Google APIs Client Library for C#.

public static Job ExportTable(Options options)
{
    var bigquery = CreateAuthorizedClient();
    var jobId = Guid.NewGuid().ToString();
    var request = new JobsResource.InsertRequest(bigquery, new Job
    {
        Configuration = new JobConfiguration
        {
            Extract = new JobConfigurationExtract
            {
                SourceTable = new TableReference
                {
                    ProjectId = options.ProjectId,
                    DatasetId = options.DatasetId,
                    TableId = options.TableId,
                },
                DestinationUris = new[] { options.CloudStoragePath },
                DestinationFormat = options.ExportFormat,
                Compression = options.Compression
            }
        },
        JobReference = new JobReference
        {
            ProjectId = options.ProjectId,
            JobId = jobId,
        }
    }, options.ProjectId);
    return request.Execute();
}

PHP

This sample uses the Google Cloud Client Library for PHP.

use Google\Cloud\ServiceBuilder;
use Google\Cloud\ExponentialBackoff;

/**
 * @param string $projectId  The Google project ID.
 * @param string $datasetId  The BigQuery dataset ID.
 * @param string $tableId    The BigQuery table ID.
 * @param string $bucketName The Cloud Storage bucket Name.
 * @param string $objectName The Cloud Storage object Name.
 * @param string $format     The export format, either CSV or JSON.
 */
function export_table($projectId, $datasetId, $tableId, $bucketName, $objectName, $format = 'csv')
{
    $builder = new ServiceBuilder([
        'projectId' => $projectId,
    ]);
    $bigQuery = $builder->bigQuery();
    $dataset = $bigQuery->dataset($datasetId);
    $table = $dataset->table($tableId);
    // load the storage object
    $storage = $builder->storage();
    $destinationObject = $storage->bucket($bucketName)->object($objectName);
    // create the import job
    $options = ['jobConfig' => ['destinationFormat' => $format]];
    $job = $table->export($destinationObject, $options);
    // poll the job until it is complete
    $backoff = new ExponentialBackoff(10);
    $backoff->execute(function () use ($job) {
        print('Waiting for job to complete' . PHP_EOL);
        $job->reload();
        if (!$job->isComplete()) {
            throw new Exception('Job has not yet completed', 500);
        }
    });
    // check if the job has errors
    if (isset($job->info()['status']['errorResult'])) {
        $error = $job->info()['status']['errorResult']['message'];
        printf('Error running job: %s' . PHP_EOL, $error);
    } else {
        print('Data exported successfully' . PHP_EOL);
    }
}

Node.js

This sample uses the Google Cloud Client Library for Node.js.

var BigQuery = require('@google-cloud/bigquery');
var Storage = require('@google-cloud/storage');

function exportTableToGCS (datasetId, tableId, bucketName, fileName, callback) {
  var bigquery = BigQuery();
  var storage = Storage();

  var table = bigquery.dataset(datasetId).table(tableId);
  var file = storage.bucket(bucketName).file(fileName);

  // Export a table to Google Cloud Storage
  // See https://googlecloudplatform.github.io/google-cloud-node/#/docs/bigquery/latest/bigquery/table?method=export
  table.export(file, function (err, job, apiResponse) {
    if (err) {
      return callback(err);
    }
    console.log('Started job: %s', job.id);

    job
      .on('error', callback)
      .on('complete', function (metadata) {
        console.log('Completed job: %s', job.id);
        return callback(null, metadata, apiResponse);
      });
  });
}

The destinationUris property indicates the location(s) and file name(s) where BigQuery should export your files. Usually, you'll pass a single value so that BigQuery exports to a single file, but you can alternatively pass one or more wildcard URIs. See the following section for information on what to specify for the destinationUris property.

Exporting data into one or more files

The destinationUris property indicates the location(s) and file name(s) where BigQuery should export your files.

BigQuery supports a single wildcard operator (*) in each URI. Using the wildcard operator instructs BigQuery to create multiple sharded files based on the supplied pattern. The wildcard operator is replaced with a file number (starting at 0), left-padded to 12 digits. For example, 000000000001 for the second file.
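
For instance, this snippet mimics how the wildcard expands for the second file (shard number 1, left-padded to 12 digits):

shard = 1  # the second file; shard numbering starts at 0
print('gs://my-bucket/file-name-*.json'.replace('*', '{:012d}'.format(shard)))
# prints: gs://my-bucket/file-name-000000000001.json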

The following table describes several possible options for the destinationUris property:

`destinationUris` options

Single URI

Use a single URI if you want BigQuery to export your data to a single file. This option is the most common use case, as exported data is generally less than BigQuery's 1 GB per file maximum value.

Property definition:

['gs://[YOUR_BUCKET]/file-name.json']

Creates:

gs://my-bucket/file-name.json

Single wildcard URI

Use a single wildcard URI if you think your exported data will be larger than BigQuery's 1 GB per file maximum value. BigQuery shards your data into multiple files based on the provided pattern.

Property definition:

['gs://[YOUR_BUCKET]/file-name-*.json']

Creates:

gs://my-bucket/file-name-000000000000.json
gs://my-bucket/file-name-000000000001.json
gs://my-bucket/file-name-000000000002.json
...

Multiple wildcard URIs

Use multiple wildcard URIs if you want to partition the export output. You would use this option if you're running a parallel processing job with a service like Hadoop on Google Cloud Platform. Determine how many workers are available to process the job, and create one URI per worker. BigQuery treats each URI location as a partition, and uses parallel processing to shard your data into multiple files in each location. You can use whatever pattern you'd like in your file name, assuming there is a single wildcard operator in each URI, each URI is unique, and the number of URIs does not exceed the quota policy.

When you pass more than one wildcard URI, BigQuery creates a special file at the end of each partition that indicates the "final" file in the set. This file name indicates how many shards BigQuery created.

For example, if your wildcard URI is gs://[YOUR_BUCKET]/file-name-<worker number>-*.json, and BigQuery creates 80 sharded files, the zero record file name is gs://my-bucket/file-name-<worker number>-000000000080.json. You can use this file name to determine that BigQuery created 80 sharded files (named 000000000000-000000000079).

Note that a zero record file might contain more than 0 bytes depending on the data format, such as when exporting data in CSV format with a column header.

String pattern:

gs://[YOUR_BUCKET]/file-name-<worker number>-*.json

Property definition:

['gs://my-bucket/file-name-1-*.json',
'gs://my-bucket/file-name-2-*.json',
'gs://my-bucket/file-name-3-*.json']

Creates:

This example assumes that BigQuery creates 80 sharded data files in each partition; the file numbered 000000000080 in each partition is the zero record file.

gs://my-bucket/file-name-1-000000000000.json
gs://my-bucket/file-name-1-000000000001.json
...
gs://my-bucket/file-name-1-000000000080.json
gs://my-bucket/file-name-2-000000000000.json
gs://my-bucket/file-name-2-000000000001.json
...
gs://my-bucket/file-name-2-000000000080.json
gs://my-bucket/file-name-3-000000000000.json
gs://my-bucket/file-name-3-000000000001.json
...
gs://my-bucket/file-name-3-000000000080.json
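
As a small sketch of the partitioned-export pattern above, you might generate one wildcard URI per worker like this (bucket name and worker count are placeholders):

num_workers = 3  # one partition per worker
destination_uris = [
    'gs://my-bucket/file-name-{}-*.json'.format(worker + 1)
    for worker in range(num_workers)
]
print(destination_uris)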

Quota policy

The following limits apply to exporting data from BigQuery.

  • Daily Limit: 1,000 exports per day, up to 10 TB
  • Multiple Wildcard URI Limit: 500 URIs per export
