This page provides an overview of loading data into BigQuery.
In many situations, you can query data without loading it. In all other situations, you must first load your data into BigQuery before you can run queries.
You can load data:
- From Cloud Storage
- From other Google services, such as Google Ad Manager and Google Ads
- From a readable data source (such as your local machine)
- By inserting individual records using streaming inserts
- Using DML statements to perform bulk inserts
- Using a BigQuery I/O transform in a Dataflow pipeline to write data to BigQuery
Loading data into BigQuery from Google Drive is not currently supported, but you can query data in Google Drive by using an external table.
You can load data into a new table or partition, you can append data to an existing table or partition, or you can overwrite a table or partition. For more information about working with partitions, see Managing partitioned tables.
When you load data into BigQuery, you can supply the table or partition schema, or, for supported data formats, you can use schema auto-detection.
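As a sketch of how these options fit together, the following shows a load-job configuration as it might be sent to the BigQuery REST API's `jobs.insert` method. The project, dataset, table, and bucket names are hypothetical placeholders.

```python
# Sketch of a BigQuery load-job configuration (REST jobs.insert body).
# All resource names below are hypothetical placeholders.
load_job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://example-bucket/sales/*.csv"],
            "sourceFormat": "CSV",
            "destinationTable": {
                "projectId": "example-project",
                "datasetId": "example_dataset",
                "tableId": "sales",
            },
            # An explicitly supplied schema; for supported formats you could
            # instead set "autodetect": True.
            "schema": {
                "fields": [
                    {"name": "order_id", "type": "STRING", "mode": "REQUIRED"},
                    {"name": "amount", "type": "NUMERIC", "mode": "NULLABLE"},
                ]
            },
            # WRITE_APPEND appends to an existing table or partition,
            # WRITE_TRUNCATE overwrites it, and WRITE_EMPTY (the default)
            # fails if the destination already contains data.
            "writeDisposition": "WRITE_APPEND",
        }
    }
}
```

The `writeDisposition` property is what selects between loading into a new table, appending, and overwriting.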
Loading data into BigQuery is subject to the following limitations:
- Currently, you can use a load job to load data into BigQuery only from Cloud Storage or from a readable data source (such as your local machine).
Depending on the location and format of your source data, there may be additional limitations. For more information, see:
- Loading files from a readable source
- Loading files from Cloud Storage
- Loading nested and repeated data
Supported data formats
BigQuery supports loading data from Cloud Storage and readable data sources in the following formats:
- Cloud Storage: Avro, CSV, JSON (newline delimited only), ORC, Parquet, Datastore exports, and Cloud Firestore exports
- Readable data source (such as your local machine): Avro, CSV, JSON (newline delimited only), ORC, and Parquet
The default source format for loading data is CSV. To load data that is stored in one of the other supported data formats, specify the format explicitly.
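Each supported format corresponds to a `sourceFormat` value in the load-job configuration. A sketch of that mapping, using the REST API's enum strings:

```python
# Mapping from ingestion format to the sourceFormat value accepted by the
# BigQuery load-job API. CSV is the default when sourceFormat is omitted.
SOURCE_FORMATS = {
    "CSV": "CSV",
    "JSON (newline delimited)": "NEWLINE_DELIMITED_JSON",
    "Avro": "AVRO",
    "Parquet": "PARQUET",
    "ORC": "ORC",
    "Datastore/Firestore export": "DATASTORE_BACKUP",
}
```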
Choosing a data ingestion format
You can load data into BigQuery in a variety of formats. When your data is loaded into BigQuery, it is converted into columnar format for Capacitor (BigQuery's storage format).
When you are loading data, choose a data ingestion format based upon the following factors:
Your data's schema.
Avro, CSV, JSON, ORC, and Parquet all support flat data. Avro, JSON, ORC, Parquet, Datastore exports, and Cloud Firestore exports also support data with nested and repeated fields. Nested and repeated data is useful for expressing hierarchical data. Nested and repeated fields also reduce duplication when denormalizing the data.
When you are loading data from JSON files, the rows must be newline delimited. BigQuery expects newline-delimited JSON files to contain a single record per line.
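To illustrate the newline-delimited requirement, here is a minimal sketch that serializes records one JSON object per line (no enclosing array), using only the Python standard library:

```python
import json

records = [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": "beta"},
]

# Newline-delimited JSON: one complete JSON object per line,
# with no enclosing array and no trailing commas.
ndjson = "\n".join(json.dumps(r) for r in records)
```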
Your data might come from a document store database that natively stores data in JSON format. Or, your data might come from a source that only exports in CSV format.
Loading encoded data
BigQuery supports UTF-8 encoding for both flat and nested or repeated data. For CSV files only, BigQuery also supports ISO-8859-1 encoding for flat data.
By default, the BigQuery service expects all source data to be UTF-8 encoded. Optionally, if you have CSV files with data encoded in ISO-8859-1 format, you can explicitly specify the encoding when you import your data so that BigQuery can properly convert your data to UTF-8 during the import process. Currently, it is only possible to import data that is ISO-8859-1 or UTF-8 encoded. Keep in mind the following points when you specify the character encoding of your data:
- If you don't specify an encoding, or if you explicitly specify that your data is UTF-8 but then provide a CSV file that is not UTF-8 encoded, BigQuery attempts to convert the file to UTF-8. Generally, your data is imported successfully, but it may not match byte-for-byte what you expect. To avoid this, specify the correct encoding and try your import again.
- Delimiters must be encoded as ISO-8859-1.
Generally, it's a best practice to use a standard delimiter, such as a tab, pipe, or comma.
- If BigQuery can't convert a character, it's converted to the standard Unicode replacement character: �.
- JSON files must always be encoded in UTF-8.
If you plan to load ISO-8859-1 encoded flat data using the API, specify the encoding property in the load job configuration.
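The conversion behavior above can be illustrated locally with Python's codecs, together with a sketch of how the encoding property might appear in a load configuration (bucket and file names are placeholders):

```python
# A byte sequence that is valid ISO-8859-1 but not valid UTF-8 decodes to the
# Unicode replacement character when treated as UTF-8.
raw = "café".encode("iso-8859-1")  # b'caf\xe9'
decoded = raw.decode("utf-8", errors="replace")  # 'caf\ufffd' ('caf�')

# To avoid the substitution, declare the source encoding so BigQuery can
# convert the data to UTF-8 correctly during the load.
load_config = {
    "sourceUris": ["gs://example-bucket/latin1.csv"],  # placeholder bucket
    "sourceFormat": "CSV",
    "encoding": "ISO-8859-1",
}
```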
Loading compressed and uncompressed data
The Avro binary format is the preferred format for loading both compressed and uncompressed data. Avro data is faster to load because the data can be read in parallel, even when the data blocks are compressed. Compressed Avro files are not supported, but compressed data blocks are. BigQuery supports the DEFLATE and Snappy codecs for compressed data blocks in Avro files.
Parquet binary format is also a good choice because Parquet's efficient, per-column encoding typically results in a better compression ratio and smaller files. Parquet files also leverage compression techniques that allow files to be loaded in parallel. Compressed Parquet files are not supported, but compressed data blocks are. BigQuery supports Snappy, GZip, and LZO_1X codecs for compressed data blocks in Parquet files.
The ORC binary format offers benefits similar to the benefits of the Parquet format. Data in ORC files is fast to load because data stripes can be read in parallel. The rows in each data stripe are loaded sequentially. To optimize load time, use a data stripe size of approximately 256 MB or less. Compressed ORC files are not supported, but compressed file footer and stripes are. BigQuery supports Zlib, Snappy, LZO, and LZ4 compression for ORC file footers and stripes.
For other data formats such as CSV and JSON, BigQuery can load uncompressed files significantly faster than compressed files because uncompressed files can be read in parallel. Because uncompressed files are larger, using them can lead to bandwidth limitations and higher Cloud Storage costs for data staged in Cloud Storage prior to being loaded into BigQuery. Keep in mind that line ordering isn't guaranteed for compressed or uncompressed files. It's important to weigh these tradeoffs depending on your use case.
In general, if bandwidth is limited, compress your CSV and JSON files by using gzip before uploading them to Cloud Storage. Currently, when you load data into BigQuery, gzip is the only supported file compression type for CSV and JSON files. If loading speed is important to your app and you have a lot of bandwidth to load your data, leave your files uncompressed.
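A minimal sketch of the gzip step, using the standard library (the CSV content is illustrative):

```python
import gzip

# Compress CSV bytes with gzip before staging them in Cloud Storage;
# gzip is the only compression BigQuery accepts for CSV and JSON loads.
csv_bytes = b"order_id,amount\nA-100,19.99\n"
compressed = gzip.compress(csv_bytes)

# The compressed payload round-trips losslessly.
restored = gzip.decompress(compressed)
```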
Loading denormalized, nested, and repeated data
Many developers are accustomed to working with relational databases and normalized data schemas. Normalization prevents duplicate data from being stored and provides consistency when the data is regularly updated.
BigQuery performs best when your data is denormalized. Rather than preserving a relational schema, such as a star or snowflake schema, you can improve performance by denormalizing your data and taking advantage of nested and repeated fields. Nested and repeated fields can maintain relationships without the performance impact of preserving a relational (normalized) schema.
The storage savings from using normalized data have less of an effect in modern systems. The increase in storage costs is worth the performance gains of using denormalized data. Joins require data coordination (communication bandwidth). Denormalization localizes the data to individual slots, so that execution can be done in parallel.
To maintain relationships while denormalizing your data, you can use nested and repeated fields instead of completely flattening your data. When relational data is completely flattened, network communication (shuffling) can negatively impact query performance.
For example, denormalizing an orders schema without using nested and repeated fields might require you to group the data by a field such as an order ID (when there is a one-to-many relationship). Because of the shuffling involved, grouping the data is less effective than denormalizing the data by using nested and repeated fields.
In some circumstances, denormalizing your data and using nested and repeated fields doesn't result in increased performance. Avoid denormalization in these use cases:
- You have a star schema with frequently changing dimensions.
- BigQuery complements an Online Transaction Processing (OLTP) system with row-level mutation but can't replace it.
Nested and repeated fields are supported in the following data formats:
- Avro
- JSON (newline delimited)
- ORC
- Parquet
- Datastore exports
- Cloud Firestore exports
For information about specifying nested and repeated fields in your schema when you're loading data, see Specifying nested and repeated fields.
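As a sketch, a denormalized orders schema that keeps a one-to-many relationship inline with a nested, repeated field might look like this in the REST API's schema representation (field names are illustrative):

```python
# Orders schema using a nested, repeated RECORD field instead of a separate
# joined line-items table (REST API schema representation; names illustrative).
orders_schema = {
    "fields": [
        {"name": "order_id", "type": "STRING", "mode": "REQUIRED"},
        {"name": "customer", "type": "STRING", "mode": "NULLABLE"},
        {
            # "mode": "REPEATED" makes this an array; "type": "RECORD" nests
            # sub-fields, so each order carries its own line items.
            "name": "line_items",
            "type": "RECORD",
            "mode": "REPEATED",
            "fields": [
                {"name": "sku", "type": "STRING", "mode": "NULLABLE"},
                {"name": "quantity", "type": "INTEGER", "mode": "NULLABLE"},
            ],
        },
    ]
}
```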
Schema auto-detection
When auto-detection is enabled, BigQuery starts the inference process by selecting a random file in the data source and scanning up to 100 rows of data to use as a representative sample. BigQuery then examines each field and attempts to assign a data type to that field based on the values in the sample.
You can use schema auto-detection when you load JSON or CSV files. Schema auto-detection is not available for Avro files, ORC files, Parquet files, Datastore exports, or Cloud Firestore exports because the schema information is self-describing in these formats.
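Enabling auto-detection is a single property in the load configuration; a sketch (bucket name is a placeholder):

```python
# Load configuration with schema auto-detection enabled for a CSV file.
autodetect_config = {
    "sourceUris": ["gs://example-bucket/data.csv"],  # placeholder bucket
    "sourceFormat": "CSV",
    "autodetect": True,  # BigQuery samples up to 100 rows to infer the schema
}
```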
Loading data from other Google services
BigQuery Data Transfer Service
The BigQuery Data Transfer Service automates loading data into BigQuery from these services:
Google Software as a Service (SaaS) apps:
- Campaign Manager
- Cloud Storage (beta)
- Google Ad Manager
- Google Ads
- Google Merchant Center (beta)
- Google Play (beta)
- Search Ads 360 (beta)
- YouTube Channel reports
- YouTube Content Owner reports
External cloud storage providers:
- Amazon S3 (beta)
After you configure a data transfer, the BigQuery Data Transfer Service automatically schedules and manages recurring data loads from the source app into BigQuery.
Google Analytics 360
To learn how to export your session and hit data from a Google Analytics 360 reporting view into BigQuery, see BigQuery export in the Google Analytics Help Center.
For examples of querying Google Analytics data in BigQuery, see BigQuery cookbook in the Google Analytics help.
BigQuery supports loading data from Cloud Storage. For more information, see Loading data from Cloud Storage.
BigQuery supports loading data from Datastore exports. For more information, see Loading data from Datastore exports.
BigQuery supports loading data from Cloud Firestore exports. For more information, see Loading data from Cloud Firestore exports.
Alternatives to loading data
You don't need to load data before running queries in the following situations:
- Public datasets
- Public datasets are datasets stored in BigQuery and shared with the public. For more information, see BigQuery public datasets.
- Shared datasets
- You can share datasets stored in BigQuery. If someone has shared a dataset with you, you can run queries on that dataset without loading the data.
- External data sources
- You can skip the data loading process by creating a table that is based on an external data source. For information about the benefits and limitations of this approach, see external data sources.
- Logging files
- Stackdriver Logging provides an option to export log files into BigQuery. See Exporting with the Logs Viewer for more information.
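For the external data source alternative above, a table can be defined over files in Cloud Storage so that queries read them in place, with no load job. A sketch of the REST `tables.insert` body (all names are placeholders):

```python
# External table definition: queries read the staged files directly,
# skipping the load step (resource names are hypothetical placeholders).
external_table = {
    "tableReference": {
        "projectId": "example-project",
        "datasetId": "example_dataset",
        "tableId": "events_external",
    },
    "externalDataConfiguration": {
        "sourceUris": ["gs://example-bucket/events/*.json"],
        "sourceFormat": "NEWLINE_DELIMITED_JSON",
        "autodetect": True,
    },
}
```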
Another alternative to loading data is to stream the data one record at a time. Streaming is typically used when you need the data to be immediately available. For information about streaming, see Streaming data into BigQuery.
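A streaming insert sends individual records rather than files; a sketch of the request body for the REST `tabledata.insertAll` method (row contents are placeholders):

```python
# Streaming-insert request body (tabledata.insertAll). Each row carries an
# optional insertId, which BigQuery uses for best-effort deduplication.
insert_all_request = {
    "rows": [
        {"insertId": "row-1", "json": {"order_id": "A-100", "amount": 19.99}},
        {"insertId": "row-2", "json": {"order_id": "A-101", "amount": 5.00}},
    ]
}
```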
For information about the quota policy for loading data, see Load jobs on the Quotas and limits page.
Currently, there is no charge for loading data into BigQuery. For more information, see the pricing page.
To learn how to load data from Cloud Storage into BigQuery, see the documentation for your data format.
To learn how to load data from a local file, see Loading data from a local data source.
For information about streaming data, see Streaming data into BigQuery.