Modifying table schemas
This document describes how to modify the schema definitions for existing BigQuery tables.
You can make most schema modifications described in this document by using SQL data definition language (DDL) statements. These statements don't incur charges.
You can modify a table schema in all the ways described on this page by exporting your table data to Cloud Storage, and then loading the data into a new table with the modified schema definition. BigQuery load and export jobs are free, but you incur costs for storing the exported data in Cloud Storage. The following sections describe other ways of performing various types of schema modifications.
Add a column
You can add columns to an existing table's schema definition by using one of the following options:
- Add a new empty column.
- Overwrite a table with a load or query job.
- Append data to a table with a load or query job.
Any column you add must adhere to BigQuery's rules for column names. For more information on creating schema components, see Specifying a schema.
Add an empty column
If you add new columns to an existing table schema, the columns must be NULLABLE or REPEATED. You cannot add a REQUIRED column to an existing table schema. Adding a REQUIRED column to an existing table schema in the API or bq command-line tool causes an error. However, you can create a nested REQUIRED column as part of a new RECORD field. REQUIRED columns can be added only when you create a table while loading data, or when you create an empty table with a schema definition.
To add empty columns to a table's schema definition:
Console
In the Google Cloud console, go to the BigQuery page.
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click the Schema tab.
Click Edit schema. You might need to scroll to see this button.
In the Current schema page, under New fields, click Add field.
When you are done adding columns, click Save.
SQL
Use the ALTER TABLE ADD COLUMN DDL statement:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER TABLE mydataset.mytable ADD COLUMN new_column STRING;
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
Issue the bq update command and provide a JSON schema file. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq update PROJECT_ID:DATASET.TABLE SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the path to the JSON schema file on your local machine.
When you specify an inline schema, you cannot specify the column description, mode, and RECORD (STRUCT) type. All column modes default to NULLABLE. As a result, if you are adding a new nested column to a RECORD, you must supply a JSON schema file.
If you attempt to add columns using an inline schema definition, you must supply the entire schema definition including the new columns. Because you cannot specify column modes using an inline schema definition, the update changes any existing REPEATED column to NULLABLE, which produces the following error: BigQuery error in update operation: Provided Schema does not match Table PROJECT_ID:dataset.table. Field field has changed mode from REPEATED to NULLABLE.
The preferred method of adding columns to an existing table using the bq command-line tool is to supply a JSON schema file.
To add empty columns to a table's schema using a JSON schema file:
First, issue the bq show command with the --schema flag and write the existing table schema to a file. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq show \
--schema \
--format=prettyjson \
PROJECT_ID:DATASET.TABLE > SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the schema definition file written to your local machine.
For example, to write the schema definition of mydataset.mytable to a file, enter the following command. mydataset.mytable is in your default project.
bq show \
--schema \
--format=prettyjson \
mydataset.mytable > /tmp/myschema.json
Open the schema file in a text editor. The schema should look like the following:
[ { "mode": "REQUIRED", "name": "column1", "type": "STRING" }, { "mode": "REQUIRED", "name": "column2", "type": "FLOAT" }, { "mode": "REPEATED", "name": "column3", "type": "STRING" } ]
Add the new columns to the end of the schema definition. If you attempt to add new columns elsewhere in the array, the following error is returned: BigQuery error in update operation: Precondition Failed.
Using a JSON file, you can specify descriptions, NULLABLE or REPEATED modes, and RECORD types for new columns. For example, using the schema definition from the previous step, your new JSON array would look like the following. In this example, a new NULLABLE column named column4 is added. column4 includes a description.
[
  {
    "mode": "REQUIRED",
    "name": "column1",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "column2",
    "type": "FLOAT"
  },
  {
    "mode": "REPEATED",
    "name": "column3",
    "type": "STRING"
  },
  {
    "description": "my new column",
    "mode": "NULLABLE",
    "name": "column4",
    "type": "STRING"
  }
]
For more information on working with JSON schema files, see Specifying a JSON schema file.
After updating your schema file, issue the following command to update the table's schema. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq update PROJECT_ID:DATASET.TABLE SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the schema definition file written to your local machine.
For example, enter the following command to update the schema definition of mydataset.mytable in your default project. The path to the schema file on your local machine is /tmp/myschema.json.
bq update mydataset.mytable /tmp/myschema.json
API
Call the tables.patch method and use the schema property to add empty columns to your schema definition. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
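The Python code sample itself isn't reproduced here. As a minimal sketch (not an official sample) using the google-cloud-bigquery client library, with a placeholder table ID and column name, adding an empty NULLABLE column might look like the following:

from google.cloud import bigquery

# The client uses Application Default Credentials.
client = bigquery.Client()

table_id = "your-project.your_dataset.your_table"  # placeholder table ID
table = client.get_table(table_id)

# Copy the existing schema and append a new NULLABLE column.
new_schema = list(table.schema)
new_schema.append(bigquery.SchemaField("new_column", "STRING", mode="NULLABLE"))

table.schema = new_schema
# update_table with ["schema"] patches only the schema property.
table = client.update_table(table, ["schema"])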
Add a nested column to a RECORD column
In addition to adding new columns to a table's schema, you can also add new nested columns to a RECORD column. The process for adding a new nested column is similar to the process for adding a new column.
Console
Adding a new nested field to an existing RECORD column is not supported by the Google Cloud console.
SQL
Adding a new nested field to an existing RECORD column by using a SQL DDL statement is not supported.
bq
Issue the bq update command and provide a JSON schema file that adds the nested field to the existing RECORD column's schema definition. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq update PROJECT_ID:DATASET.TABLE SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the path to the JSON schema file on your local machine.
When you specify an inline schema, you cannot specify the column description, mode, and RECORD (STRUCT) type. All column modes default to NULLABLE. As a result, if you are adding a new nested column to a RECORD, you must supply a JSON schema file.
To add a nested column to a RECORD using a JSON schema file:
First, issue the bq show command with the --schema flag and write the existing table schema to a file. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq show \
--schema \
--format=prettyjson \
PROJECT_ID:DATASET.TABLE > SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the schema definition file written to your local machine.
For example, to write the schema definition of mydataset.mytable to a file, enter the following command. mydataset.mytable is in your default project.
bq show \
--schema \
--format=prettyjson \
mydataset.mytable > /tmp/myschema.json
Open the schema file in a text editor. The schema should look like the following. In this example, column3 is a nested repeated column. The nested columns are nested1 and nested2. The fields array lists the fields nested within column3.
[
  {
    "mode": "REQUIRED",
    "name": "column1",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "column2",
    "type": "FLOAT"
  },
  {
    "fields": [
      {
        "mode": "NULLABLE",
        "name": "nested1",
        "type": "STRING"
      },
      {
        "mode": "NULLABLE",
        "name": "nested2",
        "type": "STRING"
      }
    ],
    "mode": "REPEATED",
    "name": "column3",
    "type": "RECORD"
  }
]
Add the new nested column to the end of the fields array. In this example, nested3 is the new nested column.
[
  {
    "mode": "REQUIRED",
    "name": "column1",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "column2",
    "type": "FLOAT"
  },
  {
    "fields": [
      {
        "mode": "NULLABLE",
        "name": "nested1",
        "type": "STRING"
      },
      {
        "mode": "NULLABLE",
        "name": "nested2",
        "type": "STRING"
      },
      {
        "mode": "NULLABLE",
        "name": "nested3",
        "type": "STRING"
      }
    ],
    "mode": "REPEATED",
    "name": "column3",
    "type": "RECORD"
  }
]
For more information on working with JSON schema files, see Specifying a JSON schema file.
After updating your schema file, issue the following command to update the table's schema. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq update PROJECT_ID:DATASET.TABLE SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the path to the JSON schema file on your local machine.
For example, enter the following command to update the schema definition of mydataset.mytable in your default project. The path to the schema file on your local machine is /tmp/myschema.json.
bq update mydataset.mytable /tmp/myschema.json
API
Call the tables.patch method and use the schema property to add the nested columns to your schema definition. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
Add columns when you overwrite or append data
You can add new columns to an existing table when you load data into it and choose to overwrite the existing table. When you overwrite an existing table, the schema of the data you're loading is used to overwrite the existing table's schema. For information on overwriting a table using a load job, see the loading documentation for your data's format.
Add columns in a load append job
You can add columns to a table when you append data to it in a load job. The new schema is determined by one of the following:
- Autodetection (for CSV and JSON files)
- A schema specified in a JSON schema file (for CSV and JSON files)
- The self-describing source data for Avro, ORC, Parquet, and Datastore export files
If you specify the schema in a JSON file, the new columns must be defined in it. If the new column definitions are missing, an error is returned when you attempt to append the data.
When you add new columns during an append operation, the values in the new columns are set to NULL for existing rows.
To add a new column when you append data to a table during a load job, use one of the following options:
bq
Use the bq load command to load your data and specify the --noreplace flag to indicate that you are appending the data to an existing table.
If the data you're appending is in CSV or newline-delimited JSON format, specify the --autodetect flag to use schema auto-detection or supply the schema in a JSON schema file. The added columns can be automatically inferred from Avro or Datastore export files.
Set the --schema_update_option flag to ALLOW_FIELD_ADDITION to indicate that the data you're appending contains new columns.
If the table you're appending is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
(Optional) Supply the --location flag and set the value to your location.
Enter the load command as follows:
bq --location=LOCATION load \
--noreplace \
--autodetect \
--schema_update_option=ALLOW_FIELD_ADDITION \
--source_format=FORMAT \
PROJECT_ID:DATASET.TABLE \
PATH_TO_SOURCE \
SCHEMA
Replace the following:
- LOCATION: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- FORMAT: the format of the schema: NEWLINE_DELIMITED_JSON, CSV, AVRO, PARQUET, ORC, or DATASTORE_BACKUP.
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table.
- TABLE: the name of the table you're appending.
- PATH_TO_SOURCE: a fully-qualified Cloud Storage URI, a comma-separated list of URIs, or the path to a data file on your local machine.
- SCHEMA: the path to a local JSON schema file. A schema file is required only for CSV and JSON files when --autodetect is unspecified. Avro and Datastore schemas are inferred from the source data.
Examples:
Enter the following command to append a local Avro data file, /tmp/mydata.avro, to mydataset.mytable using a load job. Because schemas can be automatically inferred from Avro data, you don't need to use the --autodetect flag. mydataset is in your default project.
bq load \
--noreplace \
--schema_update_option=ALLOW_FIELD_ADDITION \
--source_format=AVRO \
mydataset.mytable \
/tmp/mydata.avro
Enter the following command to append a newline-delimited JSON data file in Cloud Storage to mydataset.mytable using a load job. The --autodetect flag is used to detect the new columns. mydataset is in your default project.
bq load \
--noreplace \
--autodetect \
--schema_update_option=ALLOW_FIELD_ADDITION \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json
Enter the following command to append a newline-delimited JSON data file in Cloud Storage to mydataset.mytable using a load job. The schema containing the new columns is specified in a local JSON schema file, /tmp/myschema.json. mydataset is in myotherproject, not your default project.
bq load \
--noreplace \
--schema_update_option=ALLOW_FIELD_ADDITION \
--source_format=NEWLINE_DELIMITED_JSON \
myotherproject:mydataset.mytable \
gs://mybucket/mydata.json \
/tmp/myschema.json
API
Call the jobs.insert method. Configure a load job and set the following properties:
- Reference your data in Cloud Storage using the sourceUris property.
- Specify the data format by setting the sourceFormat property.
- Specify the schema in the schema property.
- Specify the schema update option using the schemaUpdateOptions property.
- Set the write disposition of the destination table to WRITE_APPEND using the writeDisposition property.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
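The Python code sample itself isn't reproduced here. A minimal sketch using the google-cloud-bigquery client library might look like the following; the table ID and Cloud Storage URI are placeholders, and the source data is assumed to be newline-delimited JSON:

from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.your_dataset.your_table"  # placeholder table ID
uri = "gs://your-bucket/mydata.json"  # placeholder source URI

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # let BigQuery detect the new columns
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
)

# Append the file to the table; new columns are added to the schema.
load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # wait for the load job to finish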
Add columns in a query append job
You can add columns to a table when you append query results to it.
When you add columns using an append operation in a query job, the schema of the query results is used to update the schema of the destination table. Note that you cannot query a table in one location and write the results to a table in another location.
To add a new column when you append data to a table during a query job, select one of the following options:
bq
Use the bq query command to query your data and specify the --destination_table flag to indicate which table you're appending.
To specify that you are appending query results to an existing destination table, specify the --append_table flag.
Set the --schema_update_option flag to ALLOW_FIELD_ADDITION to indicate that the query results you're appending contain new columns.
Specify the use_legacy_sql=false flag to use GoogleSQL syntax for the query.
If the table you're appending is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET. Note that the table you're querying and the destination table must be in the same location.
(Optional) Supply the --location flag and set the value to your location.
bq --location=LOCATION query \
--destination_table PROJECT_ID:DATASET.TABLE \
--append_table \
--schema_update_option=ALLOW_FIELD_ADDITION \
--use_legacy_sql=false \
'QUERY'
Replace the following:
- LOCATION: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file. Note that you cannot append query results to a table in another location.
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're appending.
- TABLE: the name of the table you're appending.
- QUERY: a query in GoogleSQL syntax.
Examples:
Enter the following command to query mydataset.mytable in your default project and to append the query results to mydataset.mytable2 (also in your default project).
bq query \
--destination_table mydataset.mytable2 \
--append_table \
--schema_update_option=ALLOW_FIELD_ADDITION \
--use_legacy_sql=false \
'SELECT
column1,column2
FROM
mydataset.mytable'
Enter the following command to query mydataset.mytable in your default project and to append the query results to mydataset.mytable2 in myotherproject.
bq query \
--destination_table myotherproject:mydataset.mytable2 \
--append_table \
--schema_update_option=ALLOW_FIELD_ADDITION \
--use_legacy_sql=false \
'SELECT
column1,column2
FROM
mydataset.mytable'
API
Call the jobs.insert method. Configure a query job and set the following properties:
- Specify the destination table using the destinationTable property.
- Set the write disposition of the destination table to WRITE_APPEND using the writeDisposition property.
- Specify the schema update option using the schemaUpdateOptions property.
- Specify the GoogleSQL query using the query property.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
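The Python code sample itself isn't reproduced here. Under the same assumptions (placeholder project, dataset, and table names), appending query results that contain new columns might look like the following sketch with the google-cloud-bigquery client library:

from google.cloud import bigquery

client = bigquery.Client()
destination = "your-project.your_dataset.your_table2"  # placeholder destination table

job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
)

query_job = client.query(
    "SELECT column1, column2 FROM your_dataset.your_table",  # placeholder query
    job_config=job_config,
)
query_job.result()  # wait for the query job to finish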
Change a column's name
To rename a column on a table, use the ALTER TABLE RENAME COLUMN DDL statement. The following example renames the column old_name to new_name on mytable:
ALTER TABLE mydataset.mytable RENAME COLUMN old_name TO new_name;
For more information about ALTER TABLE RENAME COLUMN statements, see DDL details.
Change a column's data type
Changing a column's data type isn't supported by the Google Cloud console, the bq command-line tool, or the BigQuery API. If you attempt to update a table by applying a schema that specifies a new data type for a column, an error is returned.
Change a column's data type with a DDL statement
You can use GoogleSQL to make certain changes to the data type of a column. For more information and a complete list of supported data type conversions, see the ALTER COLUMN SET DATA TYPE DDL statement.
The following example creates a table with a column of type INT64, then updates the type to NUMERIC:
CREATE TABLE mydataset.mytable(c1 INT64);
ALTER TABLE mydataset.mytable ALTER COLUMN c1 SET DATA TYPE NUMERIC;
The following example creates a table with a nested column with two fields, and then updates the type of one of the fields from INT64 to NUMERIC:
CREATE TABLE mydataset.mytable(s1 STRUCT<a INT64, b STRING>);
ALTER TABLE mydataset.mytable ALTER COLUMN s1 SET DATA TYPE STRUCT<a NUMERIC, b STRING>;
Cast a column's data type
To change a column's data type into a castable type, use a SQL query to select the table data, cast the relevant column, and overwrite the table. Casting and overwriting is not recommended for very large tables because it requires a full table scan.
The following example shows a SQL query that selects all the data from column_two and column_three in mydataset.mytable and casts column_one from DATE to STRING. The query result is used to overwrite the existing table. The overwritten table stores column_one as a STRING data type.
When using CAST, a query can fail if BigQuery is unable to perform the cast. For details on casting rules in GoogleSQL, see Casting.
Console
In the Google Cloud console, go to the BigQuery page.
In the Query editor, enter the following query to select all of the data from column_two and column_three in mydataset.mytable and to cast column_one from DATE to STRING. The query uses an alias to cast column_one with the same name. mydataset.mytable is in your default project.
SELECT
  column_two,
  column_three,
  CAST(column_one AS STRING) AS column_one
FROM
  mydataset.mytable;
Click More and select Query settings.
In the Destination section, do the following:
- Select Set a destination table for query results.
- For Project name, leave the value set to your default project. This is the project that contains mydataset.mytable.
- For Dataset, choose mydataset.
- In the Table Id field, enter mytable.
- For Destination table write preference, select Overwrite table. This option overwrites mytable using the query results.
Optionally, choose your data's location.
To update the settings, click Save.
Click Run.
When the query job completes, the data type of column_one is STRING.
bq
Enter the following bq query command to select all of the data from column_two and column_three in mydataset.mytable and to cast column_one from DATE to STRING. The query uses an alias to cast column_one with the same name. mydataset.mytable is in your default project.
The query results are written to mydataset.mytable using the --destination_table flag, and the --replace flag is used to overwrite mytable. Specify the use_legacy_sql=false flag to use GoogleSQL syntax.
Optionally, supply the --location flag and set the value to your location.
bq query \
--destination_table mydataset.mytable \
--replace \
--use_legacy_sql=false \
'SELECT
column_two,
column_three,
CAST(column_one AS STRING) AS column_one
FROM
mydataset.mytable'
API
To select all of the data from column_two and column_three in mydataset.mytable and to cast column_one from DATE to STRING, call the jobs.insert method and configure a query job. Optionally, specify your location in the location property in the jobReference section.
The SQL query used in the query job would be SELECT column_two, column_three, CAST(column_one AS STRING) AS column_one FROM mydataset.mytable. The query uses an alias to cast column_one with the same name.
To overwrite mytable with the query results, include mydataset.mytable in the configuration.query.destinationTable property, and specify WRITE_TRUNCATE in the configuration.query.writeDisposition property.
Change a column's mode
The only supported modification you can make to a column's mode is changing it from REQUIRED to NULLABLE. Changing a column's mode from REQUIRED to NULLABLE is also called column relaxation. You can also relax a column when you load data to overwrite an existing table, or when you append data to an existing table. You can't change a column's mode from NULLABLE to REQUIRED.
Make a column NULLABLE in an existing table
To change a column's mode from REQUIRED to NULLABLE, select one of the following options:
Console
Go to the BigQuery page.
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click the Schema tab.
Click Edit schema. You might need to scroll to see this button.
In the Current schema page, locate the field that you want to change.
In the Mode drop-down list for that field, select NULLABLE.
To update the settings, click Save.
SQL
Use the ALTER COLUMN DROP NOT NULL DDL statement.
The following example changes the mode of the column mycolumn from REQUIRED to NULLABLE:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER TABLE mydataset.mytable ALTER COLUMN mycolumn DROP NOT NULL;
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
First, issue the bq show command with the --schema flag and write the existing table schema to a file. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq show \
--schema \
--format=prettyjson \
PROJECT_ID:DATASET.TABLE > SCHEMA_FILE
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA_FILE: the schema definition file written to your local machine.
For example, to write the schema definition of mydataset.mytable to a file, enter the following command. mydataset.mytable is in your default project.
bq show \
--schema \
--format=prettyjson \
mydataset.mytable > /tmp/myschema.json
Open the schema file in a text editor. The schema should look like the following:
[ { "mode": "REQUIRED", "name": "column1", "type": "STRING" }, { "mode": "REQUIRED", "name": "column2", "type": "FLOAT" }, { "mode": "REPEATED", "name": "column3", "type": "STRING" } ]
Change an existing column's mode from REQUIRED to NULLABLE. In this example, the mode for column1 is relaxed.
[
  {
    "mode": "NULLABLE",
    "name": "column1",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "column2",
    "type": "FLOAT"
  },
  {
    "mode": "REPEATED",
    "name": "column3",
    "type": "STRING"
  }
]
For more information on working with JSON schema files, see Specifying a JSON schema file.
After updating your schema file, issue the following command to update the table's schema. If the table you're updating is in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
bq update PROJECT_ID:DATASET.TABLE SCHEMA
Replace the following:
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're updating.
- TABLE: the name of the table you're updating.
- SCHEMA: the path to the JSON schema file on your local machine.
For example, enter the following command to update the schema definition of mydataset.mytable in your default project. The path to the schema file on your local machine is /tmp/myschema.json.
bq update mydataset.mytable /tmp/myschema.json
API
Call tables.patch and use the schema property to change a REQUIRED column to NULLABLE in your schema definition. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
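The Python code sample itself isn't reproduced here. A minimal sketch with the google-cloud-bigquery client library, assuming a placeholder table ID, rebuilds the schema with relaxed modes and patches the table:

from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.your_dataset.your_table"  # placeholder table ID
table = client.get_table(table_id)

# Rebuild the schema, changing each REQUIRED column to NULLABLE.
relaxed_schema = [
    bigquery.SchemaField(
        field.name,
        field.field_type,
        mode="NULLABLE",
        description=field.description,
        fields=field.fields,
    )
    if field.mode == "REQUIRED"
    else field
    for field in table.schema
]

table.schema = relaxed_schema
table = client.update_table(table, ["schema"])  # patches only the schema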
Make a column NULLABLE with an appending load job
You can relax a column's mode when you append data to a table in a load job. Select one of the following based on the type of file:
- When appending data from CSV and JSON files, relax the mode for individual columns by specifying a JSON schema file.
- When appending data from Avro, ORC, or Parquet files, relax columns to NULL in your schema and let schema inference detect the relaxed columns.
To relax a column from REQUIRED to NULLABLE when you append data to a table during a load job, select one of the following options:
Console
You cannot relax a column's mode using the Google Cloud console.
bq
Use the bq load command to load your data and specify the --noreplace flag to indicate that you are appending the data to an existing table.
If the data you're appending is in CSV or newline-delimited JSON format, specify the relaxed columns in a local JSON schema file or use the --autodetect flag to use schema detection to discover relaxed columns in the source data.
Relaxed columns can be automatically inferred from Avro, ORC, and Parquet files. Column relaxation does not apply to Datastore export appends. The columns in tables created by loading Datastore export files are always NULLABLE.
Set the --schema_update_option flag to ALLOW_FIELD_RELAXATION to indicate that the data you're appending contains relaxed columns.
If the table you're appending is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
(Optional) Supply the --location flag and set the value to your location.
Enter the load command as follows:
bq --location=LOCATION load \
--noreplace \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--source_format=FORMAT \
PROJECT_ID:DATASET.TABLE \
PATH_TO_SOURCE \
SCHEMA
Replace the following:
- LOCATION: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- FORMAT: NEWLINE_DELIMITED_JSON, CSV, PARQUET, ORC, or AVRO. DATASTORE_BACKUP files don't require column relaxation. The columns in tables created from Datastore export files are always NULLABLE.
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table.
- TABLE: the name of the table you're appending.
- PATH_TO_SOURCE: a fully-qualified Cloud Storage URI, a comma-separated list of URIs, or the path to a data file on your local machine.
- SCHEMA: the path to a local JSON schema file. This option is used only for CSV and JSON files. Relaxed columns are automatically inferred from Avro files.
Examples:
Enter the following command to append a local Avro data file, /tmp/mydata.avro, to mydataset.mytable using a load job. Since relaxed columns can be automatically inferred from Avro data, you don't need to specify a schema file. mydataset is in your default project.
bq load \
--noreplace \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--source_format=AVRO \
mydataset.mytable \
/tmp/mydata.avro
Enter the following command to append data from a newline-delimited JSON file in Cloud Storage to mydataset.mytable using a load job. The schema containing the relaxed columns is in a local JSON schema file, /tmp/myschema.json. mydataset is in your default project.
bq load \
--noreplace \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--source_format=NEWLINE_DELIMITED_JSON \
mydataset.mytable \
gs://mybucket/mydata.json \
/tmp/myschema.json
Enter the following command to append data in a CSV file on your local machine to mydataset.mytable using a load job. The command uses schema auto-detection to discover relaxed columns in the source data. mydataset is in myotherproject, not your default project.
bq load \
--noreplace \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--source_format=CSV \
--autodetect \
myotherproject:mydataset.mytable \
mydata.csv
API
Call the jobs.insert method. Configure a load job and set the following properties:
- Reference your data in Cloud Storage using the sourceUris property.
- Specify the data format by setting the sourceFormat property.
- Specify the schema in the schema property.
- Specify the schema update option using the schemaUpdateOptions property.
- Set the write disposition of the destination table to WRITE_APPEND using the writeDisposition property.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
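The Python code sample itself isn't reproduced here. The sketch below differs from the earlier load-append example only in the schema update option; the table ID, URI, and schema file path are placeholders:

from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.your_dataset.your_table"  # placeholder table ID
uri = "gs://your-bucket/mydata.json"  # placeholder source URI

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    # The schema file lists the relaxed (NULLABLE) columns.
    schema=client.schema_from_json("/tmp/myschema.json"),
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_RELAXATION],
)

client.load_table_from_uri(uri, table_id, job_config=job_config).result()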
Make all columns NULLABLE with an append job
You can relax all columns in a table when you append query results to it. You can relax all required fields in the destination table by setting the --schema_update_option flag to ALLOW_FIELD_RELAXATION. You cannot relax individual columns in a destination table by using a query append. To relax individual columns with a load append job, see Make a column NULLABLE with an appending load job.
To relax all columns when you append query results to a destination table, select one of the following options:
Console
You cannot relax a column's mode using the Google Cloud console.
bq
Use the bq query command to query your data and specify the --destination_table flag to indicate which table you're appending.
To specify that you are appending query results to an existing destination table, specify the --append_table flag.
Set the --schema_update_option flag to ALLOW_FIELD_RELAXATION to indicate that all REQUIRED columns in the table you're appending should be changed to NULLABLE.
Specify the use_legacy_sql=false flag to use GoogleSQL syntax for the query.
If the table you're appending is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET.
(Optional) Supply the --location flag and set the value to your location.
bq --location=LOCATION query \
--destination_table PROJECT_ID:DATASET.TABLE \
--append_table \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--use_legacy_sql=false \
'QUERY'
Replace the following:
- LOCATION: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- PROJECT_ID: your project ID.
- DATASET: the name of the dataset that contains the table you're appending.
- TABLE: the name of the table you're appending.
- QUERY: a query in GoogleSQL syntax.
Examples:
Enter the following command to query mydataset.mytable in your default project and to append the query results to mydataset.mytable2 (also in your default project). The command changes all REQUIRED columns in the destination table to NULLABLE.
bq query \
--destination_table mydataset.mytable2 \
--append_table \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--use_legacy_sql=false \
'SELECT
column1,column2
FROM
mydataset.mytable'
Enter the following command to query mydataset.mytable in your default project and to append the query results to mydataset.mytable2 in myotherproject. The command changes all REQUIRED columns in the destination table to NULLABLE.
bq query \
--destination_table myotherproject:mydataset.mytable2 \
--append_table \
--schema_update_option=ALLOW_FIELD_RELAXATION \
--use_legacy_sql=false \
'SELECT
column1,column2
FROM
mydataset.mytable'
API
Call the jobs.insert method. Configure a query job and set the following properties:
- Specify the destination table using the destinationTable property.
- Set the write disposition of the destination table to WRITE_APPEND using the writeDisposition property.
- Specify the schema update option using the schemaUpdateOptions property.
- Specify the GoogleSQL query using the query property.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
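The Python code sample itself isn't reproduced here. It follows the same pattern as the earlier query-append sketch, with ALLOW_FIELD_RELAXATION as the schema update option; the table names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(
    destination="your-project.your_dataset.your_table2",  # placeholder
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_RELAXATION],
)

client.query(
    "SELECT column1, column2 FROM your_dataset.your_table",  # placeholder query
    job_config=job_config,
).result()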
Change a column's default value
To change the default value for a column, select one of the following options:
Console
In the Google Cloud console, go to the BigQuery page.
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click the Schema tab.
Click Edit schema. You might need to scroll to see this button.
In the Current schema page, locate the top-level field that you want to change.
Enter the default value for that field.
Click Save.
SQL
Use the ALTER COLUMN SET DEFAULT DDL statement.
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER TABLE mydataset.mytable ALTER COLUMN column_name SET DEFAULT default_expression;
Click Run.
For more information about how to run queries, see Run an interactive query.
Change a column description
To change the description for a column, select one of the following options:
Console
In the Google Cloud console, go to the BigQuery page.
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click the Schema tab.
Click Edit schema. You might need to scroll to see this button.
In the Current schema page, locate the field that you want to change.
Enter the description for that field.
Click Save.
SQL
Use the ALTER COLUMN SET OPTIONS DDL statement.
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER TABLE mydataset.mytable ALTER COLUMN column_name SET OPTIONS (description = 'This is a column description.');
Click Run.
For more information about how to run queries, see Run an interactive query.
Delete a column
You can delete a column from an existing table by using the ALTER TABLE DROP COLUMN DDL statement.
The statement does not immediately free up the storage that is associated with the dropped column. To learn more about the impact on storage when you drop a column, see ALTER TABLE DROP COLUMN statement details.
There are two options for immediately reclaiming storage:
- Overwrite a table with a SELECT * EXCEPT query, as shown in the sketch after this list.
- Export the data to Cloud Storage, delete the unwanted columns, and then load the data into a new table with the correct schema.