Loading JSON files from Cloud Storage
When you load newline delimited JSON data from Cloud Storage, you can load the data into a new table or partition, or you can append to or overwrite an existing table or partition. When your data is loaded into BigQuery, it is converted into columnar format for Capacitor (BigQuery's storage format).
When you load data from Cloud Storage into a BigQuery table, the dataset that contains the table must be in the same regional or multi-regional location as the Cloud Storage bucket.
The newline delimited JSON format is the same format as the JSON Lines format.
For information about loading JSON data from a local file, see Loading data from local files.
Limitations
When you load JSON files into BigQuery, note the following:
- JSON data must be newline delimited. Each JSON object must be on a separate line in the file.
- If you use gzip compression, BigQuery cannot read the data in parallel. Loading compressed JSON data into BigQuery is slower than loading uncompressed data.
- You cannot include both compressed and uncompressed files in the same load job.
- The maximum size for a gzip file is 4 GB.
- BigQuery does not support maps or dictionaries in JSON, due to potential lack of schema information in a pure JSON dictionary. For example, to represent a list of products in a cart, "products": {"my_product": 40.0, "product2": 16.5} is not valid, but "products": [{"product_name": "my_product", "amount": 40.0}, {"product_name": "product2", "amount": 16.5}] is valid. If you need to keep the entire JSON object, put it into a string column, which can be queried using JSON functions.
- If you are using the BigQuery API to load an integer outside the range of [-2^53+1, 2^53-1] (in most cases, this means larger than 9,007,199,254,740,991) into an integer (INT64) column, you must pass it as a string to avoid data corruption. This issue is caused by a limitation on integer size in JSON/ECMAScript. For more information, see the Numbers section of RFC 7159.
- When you load CSV or JSON data, values in DATE columns must use the dash (-) separator, and the date must be in the following format: YYYY-MM-DD (year-month-day).
- When you load JSON or CSV data, values in TIMESTAMP columns must use a dash (-) separator for the date portion of the timestamp, and the date must be in the following format: YYYY-MM-DD (year-month-day). The hh:mm:ss (hour-minute-second) portion of the timestamp must use a colon (:) separator.
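As a quick illustration of the newline-delimited requirement and the DATE and TIMESTAMP formats above, the following Python sketch writes a small file in the expected shape; the field names and values are only examples, not part of any particular schema:

```python
import json

# Hypothetical records; field names and values are illustrative only.
rows = [
    {"name": "Alice", "signup_date": "2018-08-19", "last_seen": "2018-08-19 12:11:35 UTC"},
    {"name": "Bob", "signup_date": "2019-02-03", "last_seen": "2019-02-03T07:45:00Z"},
]

# Newline-delimited JSON: one complete JSON object per line, no enclosing array.
with open("mydata.json", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```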
Required permissions
When you load data into BigQuery, you need permissions to run a load job and permissions that let you load data into new or existing BigQuery tables and partitions. If you are loading data from Cloud Storage, you also need permissions to access the bucket that contains your data.
BigQuery permissions
At a minimum, the following permissions are required to load data into BigQuery. These permissions are required if you are loading data into a new table or partition, or if you are appending or overwriting a table or partition.
bigquery.tables.create
bigquery.tables.updateData
bigquery.jobs.create
The following predefined IAM roles include both bigquery.tables.create and bigquery.tables.updateData permissions:
bigquery.dataEditor
bigquery.dataOwner
bigquery.admin
The following predefined IAM roles include bigquery.jobs.create permissions:
bigquery.user
bigquery.jobUser
bigquery.admin
In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access lets the user create and update tables in the dataset by using a load job.
For more information on IAM roles and permissions in BigQuery, see Access control.
Cloud Storage permissions
To load data from a Cloud Storage bucket, you must be granted storage.objects.get permissions. If you are using a URI wildcard, you must also have storage.objects.list permissions.
The predefined IAM role storage.objectViewer can be granted to provide both storage.objects.get and storage.objects.list permissions.
Loading JSON data into a new table
You can load newline delimited JSON data from Cloud Storage into a new BigQuery table by using one of the following:
- The Cloud Console
- The bq command-line tool's bq load command
- The jobs.insert API method and configuring a load job
- The client libraries
To load JSON data from Cloud Storage into a new BigQuery table:
Console
In the Cloud Console, open the BigQuery page.
In the Explorer panel, expand your project and select a dataset.
In the details panel, click Create table.
On the Create table page, in the Source section:
For Create table from, select Cloud Storage.
In the source field, browse to or enter the Cloud Storage URI. Note that you cannot include multiple URIs in the Cloud Console, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're creating.
For File format, select JSON (Newline delimited).
On the Create table page, in the Destination section:
For Dataset name, choose the appropriate dataset.
Verify that Table type is set to Native table.
In the Table name field, enter the name of the table you're creating in BigQuery.
In the Schema section, for Auto detect, check Schema and input parameters to enable schema auto detection. Alternatively, you can manually enter the schema definition by:
Enabling Edit as text and entering the table schema as a JSON array.
Using Add field to manually input the schema.
(Optional) To partition the table, choose your options in the Partition and cluster settings:

- To create a partitioned table, click No partitioning, select Partition by field, and choose a DATE or TIMESTAMP column. This option is unavailable if your schema does not include a DATE or TIMESTAMP column.
- To create an ingestion-time partitioned table, click No partitioning and select Partition by ingestion time.

(Optional) For Partitioning filter, click the Require partition filter box to require users to include a WHERE clause that specifies the partitions to query. Requiring a partition filter can reduce cost and improve performance. For more information, see Querying partitioned tables. This option is unavailable if No partitioning is selected.

(Optional) To cluster the table, in the Clustering order box, enter between one and four field names.
(Optional) Click Advanced options.
- For Write preference, leave Write if empty selected. This option creates a new table and loads your data into it.
- For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job will result in an invalid message and fail.
- For Unknown values, check Ignore unknown values to ignore any values in a row that are not present in the table's schema.
- For Encryption, click Customer-managed key to use a Cloud Key Management Service key. If you leave the Google-managed key setting, BigQuery encrypts the data at rest.
Click Create table.
bq
Use the bq load command, specify NEWLINE_DELIMITED_JSON using the --source_format flag, and include a Cloud Storage URI.
You can include a single URI, a comma-separated list of URIs, or a URI
containing a wildcard.
Supply the schema inline, in a schema definition file, or use
schema auto-detect.
(Optional) Supply the --location
flag and set the value to your
location.
Other optional flags include:
- --max_bad_records: An integer that specifies the maximum number of bad records allowed before the entire job fails. The default value is 0. At most, five errors of any type are returned regardless of the --max_bad_records value.
- --ignore_unknown_values: When specified, allows and ignores extra, unrecognized values in CSV or JSON data.
- --autodetect: When specified, enables schema auto-detection for CSV and JSON data.
- --quote: The quote character to use to enclose records. The default value is ". To indicate no quote character, use an empty string.
- --time_partitioning_type: Enables time-based partitioning on a table and sets the partition type. Possible values are HOUR, DAY, MONTH, and YEAR. This flag is optional when you create a table partitioned on a DATE, DATETIME, or TIMESTAMP column. The default partition type for time-based partitioning is DAY.
- --time_partitioning_expiration: An integer that specifies (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value.
- --time_partitioning_field: The DATE or TIMESTAMP column used to create a partitioned table. If time-based partitioning is enabled without this value, an ingestion-time partitioned table is created.
- --require_partition_filter: When enabled, this option requires users to include a WHERE clause that specifies the partitions to query. Requiring a partition filter can reduce cost and improve performance. For more information, see Querying partitioned tables.
- --clustering_fields: A comma-separated list of up to four column names used to create a clustered table.
- --destination_kms_key: The Cloud KMS key for encryption of the table data.

For more information on partitioned tables, see:
For more information on clustered tables, see:
For more information on table encryption, see:
To load JSON data into BigQuery, enter the following command:
bq --location=LOCATION load \
--source_format=FORMAT \
DATASET.TABLE \
PATH_TO_SOURCE \
SCHEMA
Replace the following:
- LOCATION: your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- FORMAT: NEWLINE_DELIMITED_JSON.
- DATASET: an existing dataset.
- TABLE: the name of the table into which you're loading data.
- PATH_TO_SOURCE: a fully qualified Cloud Storage URI or a comma-separated list of URIs. Wildcards are also supported.
- SCHEMA: a valid schema. The schema can be a local JSON file, or it can be typed inline as part of the command. If you use a schema file, do not give it an extension. You can also use the --autodetect flag instead of supplying a schema definition.
Examples:
The following command loads data from gs://mybucket/mydata.json
into a
table named mytable
in mydataset
. The schema is defined in a local
schema file named myschema
.
bq load \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json \
./myschema
The following command loads data from gs://mybucket/mydata.json
into an
ingestion-time partitioned table named mytable
in mydataset
. The schema
is defined in a local schema file named myschema
.
bq load \
--source_format=NEWLINE_DELIMITED_JSON \
--time_partitioning_type=DAY \
mydataset.mytable \
gs://mybucket/mydata.json \
./myschema
The following command loads data from gs://mybucket/mydata.json
into a
partitioned table named mytable
in mydataset
. The table is partitioned
on the mytimestamp
column. The schema is defined in a local schema file
named myschema
.
bq load \
--source_format=NEWLINE_DELIMITED_JSON \
--time_partitioning_field mytimestamp \
mydataset.mytable \
gs://mybucket/mydata.json \
./myschema
The following command loads data from gs://mybucket/mydata.json
into a
table named mytable
in mydataset
. The schema is auto detected.
bq load \
--autodetect \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json
The following command loads data from gs://mybucket/mydata.json
into a
table named mytable
in mydataset
. The schema is defined inline in the
format FIELD:DATA_TYPE, FIELD:DATA_TYPE
.
bq load \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json \
qtr:STRING,sales:FLOAT,year:STRING
The following command loads data from multiple files in gs://mybucket/
into a table named mytable
in mydataset
. The Cloud Storage URI uses a
wildcard. The schema is auto detected.
bq load \
--autodetect \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata*.json
The following command loads data from multiple files in gs://mybucket/
into a table named mytable
in mydataset
. The command includes a comma-
separated list of Cloud Storage URIs with wildcards. The schema is
defined in a local schema file named myschema
.
bq load \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
"gs://mybucket/00/*.json","gs://mybucket/01/*.json" \
./myschema
API
Create a load job that points to the source data in Cloud Storage.

(Optional) Specify your location in the location property in the jobReference section of the job resource.

The source URIs property must be fully qualified, in the format gs://BUCKET/OBJECT. Each URI can contain one '*' wildcard character.

Specify the JSON data format by setting the sourceFormat property to NEWLINE_DELIMITED_JSON.

To check the job status, call jobs.get(JOB_ID*), replacing JOB_ID with the ID of the job returned by the initial request.

- If status.state = DONE, the job completed successfully.
- If the status.errorResult property is present, the request failed, and that object includes information describing what went wrong. When a request fails, no table is created and no data is loaded.
- If status.errorResult is absent, the job finished successfully, although there might have been some non-fatal errors, such as problems importing a few rows. Non-fatal errors are listed in the returned job object's status.errors property.
API notes:
Load jobs are atomic and consistent; if a load job fails, none of the data is available, and if a load job succeeds, all of the data is available.
As a best practice, generate a unique ID and pass it as jobReference.jobId when calling jobs.insert to create a load job. This approach is more robust to network failure because the client can poll or retry on the known job ID.

Calling jobs.insert on a given job ID is idempotent. You can retry as many times as you like on the same job ID, and at most, one of those operations will succeed.
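A minimal sketch of that best practice, assuming the google-cloud-bigquery Python client (which issues jobs.insert on your behalf); the job ID prefix, bucket, dataset, and table names are placeholders:

```python
import uuid
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
)

# Generate the job ID up front; retrying jobs.insert with the same ID is idempotent.
job_id = "load_json_" + str(uuid.uuid4())

load_job = client.load_table_from_uri(
    "gs://mybucket/mydata.json",  # placeholder source URI
    "mydataset.mytable",          # placeholder destination table
    job_id=job_id,
    job_config=job_config,
)
load_job.result()  # Polls the known job ID until the job is DONE.
```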
C#
Before trying this sample, follow the C# setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery C# API reference documentation.
Use the BigQueryClient.CreateLoadJob() method to start a load job from Cloud Storage. To use newline-delimited JSON, create a CreateLoadJobOptions object and set its SourceFormat property to FileFormat.NewlineDelimitedJson.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Node.js API reference documentation.
PHP
Before trying this sample, follow the PHP setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery PHP API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
Use the Client.load_table_from_uri() method to start a load job from Cloud Storage. To use newline-delimited JSON, set the LoadJobConfig.source_format property to the string NEWLINE_DELIMITED_JSON and pass the job config as the job_config argument to the load_table_from_uri() method.
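For example, a minimal sketch that supplies an explicit schema matching the inline-schema bq example above; the bucket, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    schema=[
        bigquery.SchemaField("qtr", "STRING"),
        bigquery.SchemaField("sales", "FLOAT"),
        bigquery.SchemaField("year", "STRING"),
    ],
)

load_job = client.load_table_from_uri(
    "gs://mybucket/mydata.json",  # placeholder source URI
    "mydataset.mytable",          # placeholder destination table
    job_config=job_config,
)
load_job.result()  # Waits for the load job to complete.
```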
Ruby
Before trying this sample, follow the Ruby setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Ruby API reference documentation.
Use the Dataset.load_job() method to start a load job from Cloud Storage. To use newline-delimited JSON, set the format parameter to "json".
Loading nested and repeated JSON data
BigQuery supports loading nested and repeated data from source formats that support object-based schemas, such as JSON, Avro, ORC, Parquet, Firestore, and Datastore.
One JSON object, including any nested/repeated fields, must appear on each line.
The following example shows sample nested/repeated data. This table contains information about people. It consists of the following fields:
- id
- first_name
- last_name
- dob (date of birth)
- addresses (a nested and repeated field)
- addresses.status (current or previous)
- addresses.address
- addresses.city
- addresses.state
- addresses.zip
- addresses.numberOfYears (years at the address)
The JSON data file would look like the following. Notice that the address field contains an array of values (indicated by [ ]).
{"id":"1","first_name":"John","last_name":"Doe","dob":"1968-01-22","addresses":[{"status":"current","address":"123 First Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456 Main Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]} {"id":"2","first_name":"Jane","last_name":"Doe","dob":"1980-10-16","addresses":[{"status":"current","address":"789 Any Avenue","city":"New York","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321 Main Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]}
The schema for this table would look like the following:
[ { "name": "id", "type": "STRING", "mode": "NULLABLE" }, { "name": "first_name", "type": "STRING", "mode": "NULLABLE" }, { "name": "last_name", "type": "STRING", "mode": "NULLABLE" }, { "name": "dob", "type": "DATE", "mode": "NULLABLE" }, { "name": "addresses", "type": "RECORD", "mode": "REPEATED", "fields": [ { "name": "status", "type": "STRING", "mode": "NULLABLE" }, { "name": "address", "type": "STRING", "mode": "NULLABLE" }, { "name": "city", "type": "STRING", "mode": "NULLABLE" }, { "name": "state", "type": "STRING", "mode": "NULLABLE" }, { "name": "zip", "type": "STRING", "mode": "NULLABLE" }, { "name": "numberOfYears", "type": "STRING", "mode": "NULLABLE" } ] } ]
For information on specifying a nested and repeated schema, see Specifying nested and repeated fields.
Appending to or overwriting a table with JSON data
You can load additional data into a table either from source files or by appending query results.
In the Cloud Console, use the Write preference option to specify what action to take when you load data from a source file or from a query result.
You have the following options when you load additional data into a table:
| Console option | bq tool flag | BigQuery API property | Description |
|---|---|---|---|
| Write if empty | None | WRITE_EMPTY | Writes the data only if the table is empty. |
| Append to table | --noreplace or --replace=false; if --[no]replace is unspecified, the default is append | WRITE_APPEND | (Default) Appends the data to the end of the table. |
| Overwrite table | --replace or --replace=true | WRITE_TRUNCATE | Erases all existing data in a table before writing the new data. This action also deletes the table schema and removes any Cloud KMS key. |
If you load data into an existing table, the load job can append the data or overwrite the table.
You can append or overwrite a table by using one of the following:
- The Cloud Console
- The bq command-line tool's bq load command
- The jobs.insert API method and configuring a load job
- The client libraries
Console
In the Cloud Console, open the BigQuery page.
In the Explorer panel, expand your project and select a dataset.
In the details panel, click Create table.
On the Create table page, in the Source section:
For Create table from, select Cloud Storage.
In the source field, browse to or enter the Cloud Storage URI. Note that you cannot include multiple URIs in the Cloud Console, but wildcards are supported. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're appending or overwriting.
For File format, select JSON (Newline delimited).
On the Create table page, in the Destination section:
For Dataset name, choose the appropriate dataset.
In the Table name field, enter the name of the table you're appending or overwriting in BigQuery.
Verify that Table type is set to Native table.
In the Schema section, for Auto detect, check Schema and input parameters to enable schema auto detection. Alternatively, you can manually enter the schema definition by:
Enabling Edit as text and entering the table schema as a JSON array.
Using Add field to manually input the schema.
For Partition and cluster settings, leave the default values. You cannot convert a table to a partitioned or clustered table by appending or overwriting it, and the Cloud Console does not support appending to or overwriting partitioned or clustered tables in a load job.
Click Advanced options.
- For Write preference, choose Append to table or Overwrite table.
- For Number of errors allowed, accept the default value of 0 or enter the maximum number of rows containing errors that can be ignored. If the number of rows with errors exceeds this value, the job will result in an invalid message and fail.
- For Unknown values, check Ignore unknown values to ignore any values in a row that are not present in the table's schema.
- For Encryption, click Customer-managed key to use a Cloud Key Management Service key. If you leave the Google-managed key setting, BigQuery encrypts the data at rest.
Click Create table.
bq
Use the bq load command, specify NEWLINE_DELIMITED_JSON using the --source_format flag, and include a Cloud Storage URI.
You can include a single URI, a comma-separated list of URIs, or a URI
containing a wildcard.
Supply the schema inline, in a schema definition file, or use schema auto-detect.
Specify the --replace
flag to overwrite the
table. Use the --noreplace
flag to append data to the table. If no flag is
specified, the default is to append data.
It is possible to modify the table's schema when you append or overwrite it. For more information on supported schema changes during a load operation, see Modifying table schemas.
(Optional) Supply the --location
flag and set the value to your
location.
Other optional flags include:
- --max_bad_records: An integer that specifies the maximum number of bad records allowed before the entire job fails. The default value is 0. At most, five errors of any type are returned regardless of the --max_bad_records value.
- --ignore_unknown_values: When specified, allows and ignores extra, unrecognized values in CSV or JSON data.
- --autodetect: When specified, enables schema auto-detection for CSV and JSON data.
- --destination_kms_key: The Cloud KMS key for encryption of the table data.
bq --location=LOCATION load \
--[no]replace \
--source_format=FORMAT \
DATASET.TABLE \
PATH_TO_SOURCE \
SCHEMA
Replace the following:
- LOCATION: your location. The --location flag is optional. You can set a default value for the location using the .bigqueryrc file.
- FORMAT: NEWLINE_DELIMITED_JSON.
- DATASET: an existing dataset.
- TABLE: the name of the table into which you're loading data.
- PATH_TO_SOURCE: a fully qualified Cloud Storage URI or a comma-separated list of URIs. Wildcards are also supported.
- SCHEMA: a valid schema. The schema can be a local JSON file, or it can be typed inline as part of the command. You can also use the --autodetect flag instead of supplying a schema definition.
Examples:
The following command loads data from gs://mybucket/mydata.json
and
overwrites a table named mytable
in mydataset
. The schema is defined
using schema auto-detection.
bq load \
--autodetect \
--replace \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json
The following command loads data from gs://mybucket/mydata.json
and
appends data to a table named mytable
in mydataset
. The schema is
defined in a local JSON schema file named myschema.
bq load \
--noreplace \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json \
./myschema
API
Create a load job that points to the source data in Cloud Storage.

(Optional) Specify your location in the location property in the jobReference section of the job resource.

The source URIs property must be fully qualified, in the format gs://BUCKET/OBJECT. You can include multiple URIs as a comma-separated list. Note that wildcards are also supported.

Specify the data format by setting the configuration.load.sourceFormat property to NEWLINE_DELIMITED_JSON.

Specify the write preference by setting the configuration.load.writeDisposition property to WRITE_TRUNCATE or WRITE_APPEND.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Node.js API reference documentation.
PHP
Before trying this sample, follow the PHP setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery PHP API reference documentation.
Python
To replace the rows in an existing table, set the LoadJobConfig.write_disposition property to the string WRITE_TRUNCATE.
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
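A minimal sketch of an overwrite load, assuming placeholder bucket, dataset, and table names and schema auto-detection:

```python
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    # WRITE_TRUNCATE overwrites the table; use WRITE_APPEND to append instead.
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
load_job = client.load_table_from_uri(
    "gs://mybucket/mydata.json",  # placeholder source URI
    "mydataset.mytable",          # placeholder destination table
    job_config=job_config,
)
load_job.result()  # Waits for the load job to complete.
```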
Ruby
To replace the rows in an existing table, set the write parameter of Table.load_job() to "WRITE_TRUNCATE".
Before trying this sample, follow the Ruby setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Ruby API reference documentation.
Loading hive-partitioned JSON data
BigQuery supports loading hive partitioned JSON data stored on Cloud Storage and populates the hive partitioning columns as columns in the destination BigQuery managed table. For more information, see Loading externally partitioned data.
Details of loading JSON data
This section describes how BigQuery parses various data types when loading JSON data.
Boolean. BigQuery can parse any of the following pairs for Boolean data: 1 or 0, true or false, t or f, yes or no, or y or n (all case insensitive). Schema autodetection will automatically detect any of these except 0 and 1.
Date. Columns with DATE types must be in the form YYYY-MM-DD.

Datetime. Columns with DATETIME types must be in the form YYYY-MM-DD HH:MM:SS[.SSSSSS].

Time. Columns with TIME types must be in the form HH:MM:SS[.SSSSSS].
Timestamp. BigQuery accepts a variety of timestamp formats. The timestamp must include a date portion and a time portion.
- The date portion can be formatted as YYYY-MM-DD or YYYY/MM/DD.
- The timestamp portion must be formatted as HH:MM[:SS[.SSSSSS]] (seconds and fractions of seconds are optional).
- The date and time must be separated by a space or 'T'.
- Optionally, the date and time can be followed by a UTC offset or the UTC zone designator (Z). For more information, see Time zones.
For example, any of the following are valid timestamp values:
- 2018-08-19 12:11
- 2018-08-19 12:11:35
- 2018-08-19 12:11:35.22
- 2018/08/19 12:11
- 2018-07-05 12:54:00 UTC
- 2018-08-19 07:11:35.220 -05:00
- 2018-08-19T12:11:35.220Z
If you provide a schema, BigQuery also accepts Unix Epoch time for timestamp values. However, schema autodetection will not detect this case, and will treat the value as a numeric or string type instead.
Examples of Unix Epoch timestamp values:
- 1534680695
- 1.534680695e11
JSON options
To change how BigQuery parses JSON data, specify additional
options in the Cloud Console, the bq
command-line tool, the API, or the client
libraries.
| JSON option | Console option | bq tool flag | BigQuery API property | Description |
|---|---|---|---|---|
| Number of bad records allowed | Number of errors allowed | --max_bad_records | maxBadRecords | (Optional) The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is `0`, which requires that all records are valid. |
| Unknown values | Ignore unknown values | --ignore_unknown_values | ignoreUnknownValues | (Optional) Indicates whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The `sourceFormat` property determines what BigQuery treats as an extra value: CSV: trailing columns, JSON: named values that don't match any column names. |
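These two options map directly onto the Python client's LoadJobConfig; a sketch with placeholder bucket, dataset, and table names:

```python
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    max_bad_records=10,          # corresponds to maxBadRecords / --max_bad_records
    ignore_unknown_values=True,  # corresponds to ignoreUnknownValues / --ignore_unknown_values
    autodetect=True,
)
client.load_table_from_uri(
    "gs://mybucket/mydata.json",  # placeholder source URI
    "mydataset.mytable",          # placeholder destination table
    job_config=job_config,
).result()
```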