Loading Data into BigQuery

In some situations, you can query data without loading it (see Alternatives to loading data later in this topic). In all other situations, you must first load your data into BigQuery before you can run queries.

You can load data:

  • From Cloud Storage
  • From other Google services, such as Google AdWords and DoubleClick
  • From a readable data source (such as your local machine)
  • By inserting individual records using streaming inserts
  • Using a Google Cloud Dataflow pipeline to write data to BigQuery

Loading data into BigQuery from Google Drive is not currently supported, but you can query data in Google Drive using an external table.

You can load data into a new table or partition, you can append data to an existing table or partition, or you can overwrite a table or partition. For more information on working with partitions, see Creating and updating date-partitioned tables.

When you load data into BigQuery, you can supply the table or partition schema, or for supported data formats, you can use schema auto-detection.
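For example, here is a minimal sketch using the google-cloud-bigquery Python client that loads a local CSV file with an explicitly supplied schema and overwrites the destination table (the file, project, dataset, and table names are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Explicitly supply the destination table's schema.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        schema=[
            bigquery.SchemaField("name", "STRING"),
            bigquery.SchemaField("age", "INTEGER"),
        ],
        # WRITE_TRUNCATE overwrites the table; WRITE_APPEND appends to it.
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )

    with open("people.csv", "rb") as source_file:
        job = client.load_table_from_file(
            source_file,
            "your-project.your_dataset.people",
            job_config=job_config,
        )
    job.result()  # wait for the load job to complete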

Supported data formats

BigQuery supports loading data from Cloud Storage and readable data sources in the following formats:

  • CSV
  • JSON (newline delimited)
  • Avro
  • Cloud Datastore backups

The default source format for loading data is CSV. To load data that is stored in one of the other supported data formats, specify the format explicitly. When your data is loaded into BigQuery, it is converted into columnar format for Capacitor (BigQuery's storage format).

Choosing a data ingestion format

You can load data into BigQuery in a variety of formats.

When you are loading data, choose a data ingestion format based upon the following factors:

  • Your data's schema.

    CSV, JSON, and Avro all support flat data. JSON, Avro, and Cloud Datastore backups also support data with nested and repeated fields. Nested and repeated data is useful for expressing hierarchical data. Nested and repeated fields also reduce duplication when denormalizing the data.

  • Embedded newlines.

If your data contains embedded newlines, BigQuery can load the data much faster in JSON or Avro format. When you are loading data from JSON files, the rows must be newline delimited: BigQuery expects newline-delimited JSON files to contain a single record per line (see the sample after this list).

  • External limitations.

    Your data might come from a document store database that natively stores data in JSON format. Or, your data might come from a source that only exports in CSV format.
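For example, a newline-delimited JSON file carries exactly one complete record per line; hypothetical order records with a nested, repeated items field might look like this:

    {"order_id": 1, "customer": "alice", "items": [{"sku": "A12", "qty": 2}]}
    {"order_id": 2, "customer": "bob", "items": [{"sku": "B34", "qty": 1}, {"sku": "C56", "qty": 5}]}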

Loading encoded data

BigQuery supports UTF-8 encoding for both flat data and data with nested or repeated fields. For CSV files only, BigQuery also supports ISO-8859-1 encoding for flat data.

Character encodings

By default, the BigQuery service expects all source data to be UTF-8 encoded. If you have CSV files with data encoded in ISO-8859-1 format, explicitly specify the encoding when you import your data so that BigQuery can properly convert the data to UTF-8 during the import process. Currently, you can only import data that is ISO-8859-1 or UTF-8 encoded. Keep in mind the following when you specify the character encoding of your data:

  • If you don't specify an encoding, or explicitly specify that your data is UTF-8 but then provide a CSV file that is not UTF-8 encoded, BigQuery attempts to convert your CSV file to UTF-8.

    Generally, your data will be imported successfully but may not match byte-for-byte what you expect. To avoid this, specify the correct encoding and try your import again.

  • Delimiters must be encoded as ISO-8859-1.

    Generally, it is best practice to use a standard delimiter, such as a tab, pipe, or comma.

  • If BigQuery cannot convert a character, it is converted to the standard Unicode replacement character: �.
  • JSON files must always be encoded in UTF-8.

If you plan to load ISO-8859-1 encoded flat data using the API, specify the configuration.load.encoding property.
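For example, here is a minimal sketch using the google-cloud-bigquery Python client, which exposes configuration.load.encoding as the encoding property of the load job configuration (the bucket, file, and table names are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        encoding="ISO-8859-1",  # configuration.load.encoding
        skip_leading_rows=1,
    )

    # Assumes the destination table already exists with a matching schema.
    job = client.load_table_from_uri(
        "gs://your-bucket/latin1_data.csv",
        "your-project.your_dataset.your_table",
        job_config=job_config,
    )
    job.result()  # BigQuery converts the data to UTF-8 during the load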

Loading compressed and uncompressed data

The Avro binary format is the preferred format for loading compressed data. Avro data is faster to load because the data can be read in parallel, even when the data blocks are compressed.

For other data formats, BigQuery can load uncompressed files significantly faster than compressed files because uncompressed files can be read in parallel. However, because uncompressed files are larger, using them can lead to bandwidth limitations and higher Cloud Storage costs for data staged in Cloud Storage before it is loaded into BigQuery. Note also that line ordering is not guaranteed for compressed or uncompressed files. Weigh these tradeoffs based on your use case.

In general, if bandwidth is limited, compress your files using gzip before uploading them to Google Cloud Storage. Currently, gzip is the only supported file compression type for loading data into BigQuery. If loading speed is important to your app and you have plenty of bandwidth to load your data, leave your files uncompressed.
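As a sketch, you might gzip a file locally and stage it with the google-cloud-storage Python client before loading it (the bucket and object names are hypothetical):

    import gzip
    import shutil

    from google.cloud import storage

    # Compress the local CSV with gzip to reduce upload bandwidth.
    with open("data.csv", "rb") as src, gzip.open("data.csv.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # Stage the compressed file in Cloud Storage for a later load job.
    client = storage.Client()
    bucket = client.bucket("your-bucket")
    bucket.blob("staging/data.csv.gz").upload_from_filename("data.csv.gz")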

Loading denormalized, nested, and repeated data

Many developers are accustomed to working with relational databases and normalized data schemas. Normalization prevents duplicate data from being stored, and it provides consistency when the data is regularly updated.

BigQuery performs best when your data is denormalized. Rather than preserving a relational schema such as a star or snowflake schema, denormalize your data and take advantage of nested and repeated fields. Nested and repeated fields can maintain relationships without the performance impact of preserving a relational (normalized) schema.

The storage savings from normalized data are less of a concern in modern systems. Increases in storage costs are worth the performance gains from denormalizing data. Joins require data coordination (communication bandwidth). Denormalization localizes the data to individual slots so execution can be done in parallel.

If you need to maintain relationships while denormalizing your data, use nested and repeated fields instead of completely flattening your data. When relational data is completely flattened, network communication (shuffling) can negatively impact query performance.

For example, denormalizing an orders schema without using nested and repeated fields may require you to group by a field like order_id (when there is a one-to-many relationship). Because of the shuffling involved, grouping the data is less performant than denormalizing the data using nested and repeated fields.
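As a sketch of this approach, an orders table can keep each order's line items in a repeated nested field, so querying an order with its items requires no join and no GROUP BY on order_id (the project, dataset, and field names are hypothetical; uses the google-cloud-bigquery Python client):

    from google.cloud import bigquery

    # One row per order; line items live in a REPEATED RECORD field,
    # so the one-to-many relationship is preserved without a join.
    schema = [
        bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("customer", "STRING"),
        bigquery.SchemaField(
            "line_items",
            "RECORD",
            mode="REPEATED",
            fields=[
                bigquery.SchemaField("sku", "STRING"),
                bigquery.SchemaField("quantity", "INTEGER"),
                bigquery.SchemaField("price", "FLOAT"),
            ],
        ),
    ]

    client = bigquery.Client()
    table = bigquery.Table("your-project.your_dataset.orders", schema=schema)
    client.create_table(table)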

In some circumstances, denormalizing your data and using nested and repeated fields may not result in increased performance. Avoid denormalization in these use cases:

  • You have a star schema with frequently changing dimensions.
  • BigQuery complements an Online Transaction Processing (OLTP) system with row-level mutation, but can't replace it.

Nested and repeated fields are supported in the following data formats:

  • Avro
  • JSON (newline delimited)
  • Cloud Datastore backups

For information on specifying nested and repeated fields in your schema when you are loading data, see Specifying nested and repeated fields.

Schema auto-detection

Schema auto-detection is available when you load data into BigQuery, and when you query an external data source.

When auto-detection is enabled, BigQuery starts the inference process by selecting a random file in the data source and scanning up to 100 rows of data to use as a representative sample. BigQuery then examines each field and attempts to assign a data type to that field based on the values in the sample.

You can use schema auto-detection when you load JSON or CSV files. Schema auto-detection is not used for Avro files or Cloud Datastore backups because those formats are self-describing: BigQuery retrieves the schema directly from the source data.
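For example, here is a minimal sketch that enables auto-detection when loading a CSV file from Cloud Storage (the URI and table names are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()

    # autodetect=True tells BigQuery to sample the file and infer the schema.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
    )

    job = client.load_table_from_uri(
        "gs://your-bucket/sample.csv",
        "your-project.your_dataset.autodetected",
        job_config=job_config,
    )
    job.result()

    # Inspect the schema that auto-detection produced.
    print(client.get_table("your-project.your_dataset.autodetected").schema)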

Required permissions

When you load data from Cloud Storage, you can load data into a new table, you can append data to an existing table, or you can overwrite a table. Loading data into a new table requires bigquery.tables.create permissions for the dataset. Appending data to an existing table or overwriting an existing table requires bigquery.tables.updateData permissions. In addition, you need access to the Cloud Storage data source.

You can set bigquery.tables.create and bigquery.tables.updateData permissions at the project level by granting any of the following predefined IAM roles, each of which includes both permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin


If you prefer to grant a more restrictive role to users and groups at the project level (such as bigquery.user), you can assign access at the dataset level. Granting Can edit permissions to a dataset gives a user or group bigquery.dataEditor access only for that dataset. Granting Is owner permissions to a dataset gives a user or group bigquery.dataOwner access only for that dataset.
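As a sketch, you can grant dataset-level edit access programmatically with the google-cloud-bigquery Python client (the dataset name and email address are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("your-project.your_dataset")

    # "Can edit" on a dataset corresponds to the WRITER role, which gives
    # bigquery.dataEditor access for this dataset only.
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="WRITER",
            entity_type="userByEmail",
            entity_id="analyst@example.com",
        )
    )
    dataset.access_entries = entries
    dataset = client.update_dataset(dataset, ["access_entries"])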

Project-level access controls determine the users and groups allowed to access all datasets, tables, and views within a project. Dataset access controls determine the users (including service accounts) and groups allowed to access the tables and views in a specific dataset. Currently, you cannot assign access controls to individual tables and views.

For more information on IAM roles in BigQuery, see Access Control.

Loading data from other Google services

BigQuery Data Transfer Service

The BigQuery Data Transfer Service automates loading data into BigQuery from YouTube, Google AdWords, and DoubleClick.

After you configure a data transfer, the BigQuery Data Transfer Service automatically schedules and manages recurring data loads from the source application into BigQuery.

Google Analytics 360

To learn how to export your session and hit data from a Google Analytics 360 reporting view into BigQuery, see BigQuery Export in the Google Analytics Help Center.

For examples of querying Google Analytics data in BigQuery, see BigQuery cookbook in the Google Analytics Help.

Google Cloud Storage

BigQuery supports loading data from Cloud Storage. For more information, see Loading Data from Cloud Storage.

Google Cloud Datastore

BigQuery supports loading data from Cloud Datastore backups. For more information, see Loading Data from Cloud Datastore Backups.

Google Cloud Dataflow

Google Cloud Dataflow can load data directly into BigQuery. For more information about using Cloud Dataflow to read from, and write to, BigQuery, see BigQuery I/O in the Cloud Dataflow documentation.
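For example, here is a minimal Apache Beam (the SDK used by Cloud Dataflow) pipeline sketch in Python that writes rows directly to BigQuery (the table name and rows are hypothetical):

    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Create rows" >> beam.Create([{"name": "alice", "score": 10}])
            | "Write to BigQuery" >> beam.io.WriteToBigQuery(
                "your-project:your_dataset.your_table",
                schema="name:STRING,score:INTEGER",
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )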

Alternatives to loading data

You do not need to load data before running queries in the following situations:

Public datasets
Public datasets are datasets stored in BigQuery and shared with the public. For more information, see Public datasets.
Shared datasets
You can share datasets stored in BigQuery. If someone has shared a dataset with you, you can run queries on that dataset without loading the data.
External data sources
You can skip the data loading process by creating a table that is based on an external data source. For information about the benefits and limitations of this approach, see external data sources.
Stackdriver log files
Stackdriver Logging provides an option to export log files into BigQuery. See Exporting with the Logs Viewer for more information.

Another alternative to loading data is to stream the data one record at a time. Streaming is typically used when you need the data to be immediately available. For information about streaming, see Streaming Data.
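For example, here is a minimal sketch of streaming inserts with the google-cloud-bigquery Python client (the table name and rows are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()

    rows = [
        {"name": "alice", "score": 10},
        {"name": "bob", "score": 7},
    ]

    # Streamed rows are typically available for query within seconds.
    errors = client.insert_rows_json("your-project.your_dataset.scores", rows)
    if errors:
        print("Encountered errors while inserting rows:", errors)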

Quota policy

For information about the quota policy for loading data, see Load jobs on the Quota Policy page.

Pricing

You are not charged for loading data into BigQuery. For more information, see Pricing.
