This document describes how to manage tables in BigQuery. You can manage your BigQuery tables in the following ways:
- Update a table's:
  - Expiration time
  - Description
  - Schema definition
  - Labels
- Rename (copy) a table
- Copy a table
- Delete a table
- Restore a deleted table
For more information about creating and using tables, including getting table information, listing tables, and controlling access to table data, see Creating and using tables.
Updating table properties
You can update the following elements of a table: its expiration time, description, schema definition, and labels.
Required permissions
At a minimum, to update a table, you must be granted bigquery.tables.update and bigquery.tables.get permissions. The following predefined IAM roles include bigquery.tables.update and bigquery.tables.get permissions:
- bigquery.dataEditor
- bigquery.dataOwner
- bigquery.admin
In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives users the ability to update table properties in datasets they create.
For more information about IAM roles and permissions in BigQuery, see Predefined roles and permissions.
Updating a table's description
You can update a table's description in the following ways:
- Using the Cloud Console.
- Using a DDL ALTER TABLE statement.
- Using the bq command-line tool's bq update command.
- Calling the tables.patch API method.
- Using the client libraries.
To update a table's description:
Console
You cannot add a description when you create a table using the Cloud Console. After the table is created, you can add a description on the Details page.
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click Details.
In the Description section, click the pencil icon to edit the description.
Enter a description in the box, and click Update to save.
SQL
Data definition language (DDL) statements allow you to create and modify tables and views using standard SQL query syntax.
For more information, see Using Data Definition Language statements.
To update a table's description by using a DDL statement in the Cloud Console:
Click Compose new query.
Type your DDL statement into the Query editor text area.
ALTER TABLE mydataset.mytable
SET OPTIONS (
  description="Description of mytable"
)
Click Run.
bq
Issue the bq update command with the --description flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.
bq update \
--description "description" \
project_id:dataset.table
Replace the following:
- description: the text describing the table in quotes
- project_id: your project ID
- dataset: the name of the dataset that contains the table you're updating
- table: the name of the table you're updating
Examples:
Enter the following command to change the description of mytable in mydataset to "Description of mytable." mydataset is in your default project.
bq update --description "Description of mytable" mydataset.mytable
Enter the following command to change the description of mytable in mydataset to "Description of mytable." mydataset is in myotherproject, not your default project.
bq update \
--description "Description of mytable" \
myotherproject:mydataset.mytable
API
Call the tables.patch method and use the description property in the table resource to update the table's description. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
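As a minimal sketch using the google-cloud-bigquery Python client (the table ID below is a placeholder), you can fetch the table, set its description, and patch only that field:

```python
from google.cloud import bigquery

# The client picks up credentials and the default project from the environment.
client = bigquery.Client()

table_id = "your-project.mydataset.mytable"  # placeholder table ID

# Fetch the table, change the description, and send a patch for just that field.
table = client.get_table(table_id)
table.description = "Description of mytable"
table = client.update_table(table, ["description"])

print(f"Updated description: {table.description}")
```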
Updating a table's expiration time
You can set a default table expiration time at the dataset level, or you can set a table's expiration time when the table is created. A table's expiration time is often referred to as "time to live" or TTL.
If you set the expiration when the table is created, the dataset's default table expiration is ignored. If you do not set a default table expiration at the dataset level, and you do not set a table expiration when the table is created, the table never expires and you must delete the table manually.
At any point after the table is created, you can update the table's expiration time in the following ways:
- Using the Cloud Console.
- Using a DDL ALTER TABLE statement.
- Using the bq command-line tool's bq update command.
- Calling the tables.patch API method.
- Using the client libraries.
To update a table's expiration time:
Console
You cannot add an expiration time when you create a table using the Cloud Console. After a table is created, you can add or update a table expiration on the Table Details page.
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click Details.
Click the pencil icon next to Table info.
For Table expiration, select Specify date. Then select the expiration date using the calendar widget.
Click Update to save. The updated expiration time appears in the Table info section.
SQL
Data definition language (DDL) statements allow you to create and modify tables and views using standard SQL query syntax.
For more information, see Using Data Definition Language statements.
To update the expiration time by using a DDL statement in the Cloud Console:
Click Compose new query.
Type your DDL statement into the Query editor text area.
ALTER TABLE mydataset.mytable
SET OPTIONS (
  -- Sets table expiration to timestamp 2025-02-03 12:34:56
  expiration_timestamp=TIMESTAMP "2025-02-03 12:34:56"
)
Click Run.
bq
Issue the bq update command with the --expiration flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.
bq update \
--expiration integer \
project_id:dataset.table
Replace the following:
- integer: the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value. If you specify 0, the table expiration is removed, and the table never expires. Tables with no expiration must be manually deleted.
- project_id: your project ID.
- dataset: the name of the dataset that contains the table you're updating.
- table: the name of the table you're updating.
Examples:
Enter the following command to update the expiration time of mytable in mydataset to 5 days (432000 seconds). mydataset is in your default project.
bq update --expiration 432000 mydataset.mytable
Enter the following command to update the expiration time of mytable in mydataset to 5 days (432000 seconds). mydataset is in myotherproject, not your default project.
bq update --expiration 432000 myotherproject:mydataset.mytable
API
Call the tables.patch method and use the expirationTime property in the table resource to update the table expiration in milliseconds. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
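As a minimal sketch using the google-cloud-bigquery Python client (the table ID below is a placeholder), you can set an absolute expiration timestamp and patch only that field:

```python
import datetime

from google.cloud import bigquery

client = bigquery.Client()

table_id = "your-project.mydataset.mytable"  # placeholder table ID

# Set the table to expire five days from now and patch only the expiration field.
table = client.get_table(table_id)
table.expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=5)
table = client.update_table(table, ["expires"])

print(f"Table expires at: {table.expires}")
```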
Updating a table's schema definition
For more information about updating a table's schema definition, see Modifying table schemas.
Renaming a table
Currently, you cannot change the name of an existing table. If you need to change the table name, follow the steps to copy the table. When you specify the destination table in the copy operation, use the new table name.
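For example, a rough sketch of this copy-then-delete approach with the google-cloud-bigquery Python client (table IDs are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()

old_table_id = "your-project.mydataset.mytable"      # placeholder: current name
new_table_id = "your-project.mydataset.mytable_new"  # placeholder: new name

# Copy the table under the new name, wait for the job, then delete the original.
copy_job = client.copy_table(old_table_id, new_table_id)
copy_job.result()
client.delete_table(old_table_id)

print(f"Copied {old_table_id} to {new_table_id} and deleted the original.")
```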
Copying a table
You can copy a table in the following ways:
- Using the Cloud Console.
- Using the bq command-line tool's bq cp command.
- Calling the jobs.insert API method and configuring a copy job.
- Using the client libraries.
Required permissions
At a minimum, to copy tables and partitions, you must be granted the following permissions.
On the source dataset:
- bigquery.tables.get
- bigquery.tables.getData
On the destination dataset:
- bigquery.tables.create to create the copy of the table or partition in the destination dataset
The following predefined IAM roles include bigquery.tables.create, bigquery.tables.get, and bigquery.tables.getData permissions:
- bigquery.dataEditor
- bigquery.dataOwner
- bigquery.admin
Also, to run the copy job, you must be granted bigquery.jobs.create permissions. The following predefined IAM roles include bigquery.jobs.create permissions:
- bigquery.user
- bigquery.jobUser
- bigquery.admin
In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to copy tables and partitions in the dataset, but access to the destination dataset is required unless the user also created the destination dataset.
For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions.
Limitations on copying tables
Table copy jobs are subject to the following limitations:
- When you copy a table, the name of the destination table must adhere to the same naming conventions as when you create a table.
- Table copies are subject to BigQuery's quota policies on copy jobs.
- When you use the Cloud Console to copy a table, you cannot overwrite an existing table in the destination dataset. The table must have a unique name in the destination dataset.
- When copying tables, the destination dataset must reside in the same location as the dataset containing the table being copied. For example, you cannot copy a table from an EU-based dataset and write it to a US-based dataset.
- Copying multiple source tables into a destination table is not supported by the Cloud Console.
- When copying multiple source tables to a destination table using the API or bq command-line tool, all source tables must have identical schemas.
- The time that BigQuery takes to copy tables might vary significantly across different runs because the underlying storage is managed dynamically.
Copying a single source table
You can copy a single table in the following ways:
- Using the Cloud Console.
- Using the bq command-line tool's bq cp command.
- Calling the jobs.insert API method, configuring a copy job, and specifying the sourceTable property.
- Using the client libraries.
The Cloud Console supports only one source table and one destination table in a copy job. To copy multiple source tables to a destination table, you must use the bq command-line tool or the API.
To copy a single source table:
Console
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click Copy table.
In the Copy table dialog, under Destination:
- For Project name, choose the project that will store the copied table.
- For Dataset name, select the dataset where you want to store the copied table. The source and destination datasets must be in the same location.
- For Table name, enter a name for the new table. The name must be unique in the destination dataset. You cannot overwrite an existing table in the destination dataset using the Cloud Console. For more information about table name requirements, see Table naming.
Click Copy to start the copy job.
bq
Issue the bq cp command. Optional flags can be used to control the write disposition of the destination table:
- -a or --append_table appends the data from the source table to an existing table in the destination dataset.
- -f or --force overwrites an existing table in the destination dataset and doesn't prompt you for confirmation.
- -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
- --destination_kms_key is the customer-managed Cloud KMS key used to encrypt the destination table. --destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.
If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.
(Optional) Supply the --location flag and set the value to your location.
bq --location=location cp \
-a -f -n \
project_id:dataset.source_table \
project_id:dataset.destination_table
Replace the following:
- location: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- project_id: your project ID.
- dataset: the name of the source or destination dataset.
- source_table: the table you're copying.
- destination_table: the name of the table in the destination dataset.
Examples:
Enter the following command to copy mydataset.mytable to mydataset2.mytable2. Both datasets are in your default project.
bq cp mydataset.mytable mydataset2.mytable2
Enter the following command to copy mydataset.mytable and to overwrite a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -f shortcut is used to overwrite the destination table without a prompt.
bq cp -f \
mydataset.mytable \
myotherproject:myotherdataset.mytable
Enter the following command to copy mydataset.mytable and to return an error if the destination dataset contains a table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -n shortcut is used to prevent overwriting a table with the same name.
bq cp -n \
mydataset.mytable \
myotherproject:myotherdataset.mytable
Enter the following command to copy mydataset.mytable and to append the data to a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -a shortcut is used to append to the destination table.
bq cp -a mydataset.mytable myotherproject:myotherdataset.mytable
API
You can copy an existing table through the API by calling the bigquery.jobs.insert method and configuring a copy job. Specify your location in the location property in the jobReference section of the job resource.
You must specify the following values in your job configuration:
"copy": { "sourceTable": { // Required "projectId": string, // Required "datasetId": string, // Required "tableId": string // Required }, "destinationTable": { // Required "projectId": string, // Required "datasetId": string, // Required "tableId": string // Required }, "createDisposition": string, // Optional "writeDisposition": string, // Optional },
Where sourceTable provides information about the table to be copied, destinationTable provides information about the new table, createDisposition specifies whether to create the table if it doesn't exist, and writeDisposition specifies whether to overwrite or append to an existing table.
C#
Before trying this sample, follow the C# setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery C# API reference documentation.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Node.js API reference documentation.
PHP
Before trying this sample, follow the PHP setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery PHP API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
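As a minimal sketch using the google-cloud-bigquery Python client (table IDs are placeholders), a single-table copy is a copy job that you start and wait on:

```python
from google.cloud import bigquery

client = bigquery.Client()

source_table_id = "your-project.mydataset.mytable"         # placeholder
destination_table_id = "your-project.mydataset2.mytable2"  # placeholder

# Start the copy job and block until it completes.
job = client.copy_table(source_table_id, destination_table_id)
job.result()

print(f"Copied {source_table_id} to {destination_table_id}.")
```

Write behavior for an existing destination table can be controlled by passing a bigquery.CopyJobConfig as job_config.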
Copying multiple source tables
You can copy multiple source tables to a destination table in the following ways:
- Using the bq command-line tool's bq cp command.
- Calling the jobs.insert method, configuring a copy job, and specifying the sourceTables property.
- Using the client libraries.
All source tables must have identical schemas, and only one destination table is allowed.
Source tables must be specified as a comma-separated list. You cannot use wildcards when you copy multiple source tables.
To copy multiple source tables:
Console
Copying multiple tables is not currently supported by the Cloud Console.
bq
Issue the bq cp command and include multiple source tables as a comma-separated list. Optional flags can be used to control the write disposition of the destination table:
- -a or --append_table appends the data from the source tables to an existing table in the destination dataset.
- -f or --force overwrites an existing destination table in the destination dataset and doesn't prompt you for confirmation.
- -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
- --destination_kms_key is the customer-managed Cloud Key Management Service key used to encrypt the destination table. --destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.
If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.
(Optional) Supply the --location flag and set the value to your location.
bq --location=location cp \
-a -f -n \
project_id:dataset.source_table,project_id:dataset.source_table \
project_id:dataset.destination_table
Replace the following:
- location: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- project_id: your project ID.
- dataset: the name of the source or destination dataset.
- source_table: the table that you're copying.
- destination_table: the name of the table in the destination dataset.
Examples:
Enter the following command to copy mydataset.mytable and mydataset.mytable2 to mydataset2.tablecopy. All datasets are in your default project.
bq cp \
mydataset.mytable,mydataset.mytable2 \
mydataset2.tablecopy
Enter the following command to copy mydataset.mytable and mydataset.mytable2 to myotherdataset.mytable and to overwrite a destination table with the same name. The destination dataset is in myotherproject, not your default project. The -f shortcut is used to overwrite the destination table without a prompt.
bq cp -f \
mydataset.mytable,mydataset.mytable2 \
myotherproject:myotherdataset.mytable
Enter the following command to copy myproject:mydataset.mytable and myproject:mydataset.mytable2 and to return an error if the destination dataset contains a table with the same name. The destination dataset is in myotherproject. The -n shortcut is used to prevent overwriting a table with the same name.
bq cp -n \
myproject:mydataset.mytable,myproject:mydataset.mytable2 \
myotherproject:myotherdataset.mytable
Enter the following command to copy mydataset.mytable and mydataset.mytable2 and to append the data to a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -a shortcut is used to append to the destination table.
bq cp -a \
mydataset.mytable,mydataset.mytable2 \
myotherproject:myotherdataset.mytable
API
To copy multiple tables using the API, call the jobs.insert method, configure a table copy job, and specify the sourceTables property. Specify your region in the location property in the jobReference section of the job resource.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Node.js API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
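As a minimal sketch using the google-cloud-bigquery Python client (table IDs are placeholders), passing a list of sources corresponds to the sourceTables property:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table IDs; all source tables must have identical schemas.
source_table_ids = [
    "your-project.mydataset.mytable",
    "your-project.mydataset.mytable2",
]
destination_table_id = "your-project.mydataset2.tablecopy"

# copy_table accepts a sequence of source tables for a multi-source copy job.
job = client.copy_table(source_table_ids, destination_table_id)
job.result()

print(f"Copied {len(source_table_ids)} tables to {destination_table_id}.")
```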
Deleting tables
You can delete a table in the following ways:
- Using the Cloud Console.
- Using the bq command-line tool's bq rm command.
- Calling the tables.delete API method.
- Using the client libraries.
Currently, you can only delete one table at a time.
When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
Required permissions
At a minimum, to delete a table, you must be granted bigquery.tables.delete and bigquery.tables.get permissions. The following predefined IAM roles include bigquery.tables.delete and bigquery.tables.get permissions:
- bigquery.dataOwner
- bigquery.dataEditor
- bigquery.admin
In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to delete tables in the dataset.
For more information about IAM roles and permissions in BigQuery, see Predefined roles and permissions.
Deleting a table
To delete a table:
Console
In the Explorer panel, expand your project and dataset, then select the table.
In the details panel, click Delete table.
Type the name of the table in the dialog, then click Delete to confirm.
bq
Use the bq rm command with the --table flag (or -t shortcut) to delete a table. When you use the bq command-line tool to remove a table, you must confirm the action. You can use the --force flag (or -f shortcut) to skip confirmation.
If the table is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.
bq rm \
-f \
-t \
project_id:dataset.table
Replace the following:
- project_id: your project ID
- dataset: the name of the dataset that contains the table
- table: the name of the table that you're deleting
Examples:
Enter the following command to delete mytable from mydataset. mydataset is in your default project.
bq rm -t mydataset.mytable
Enter the following command to delete mytable from mydataset. mydataset is in myotherproject, not your default project.
bq rm -t myotherproject:mydataset.mytable
Enter the following command to delete mytable from mydataset. mydataset is in your default project. The command uses the -f shortcut to bypass confirmation.
bq rm -f -t mydataset.mytable
API
Call the tables.delete API method and specify the table to delete using the tableId parameter.
C#
Before trying this sample, follow the C# setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery C# API reference documentation.
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Node.js API reference documentation.
PHP
Before trying this sample, follow the PHP setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery PHP API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
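As a minimal sketch using the google-cloud-bigquery Python client (the table ID is a placeholder):

```python
from google.cloud import bigquery

client = bigquery.Client()

table_id = "your-project.mydataset.mytable"  # placeholder table ID

# Delete the table; not_found_ok suppresses the error if it is already gone.
client.delete_table(table_id, not_found_ok=True)

print(f"Deleted table {table_id}.")
```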
Ruby
Before trying this sample, follow the Ruby setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Ruby API reference documentation.
Restoring deleted tables
You can undelete a table within seven days of deletion, including explicit deletions and implicit deletions due to table expiration. After seven days, it is not possible to undelete a table using any method, including opening a support ticket.
You can restore a deleted table by:
- Using the @<time> snapshot decorator in the bq command-line tool
- Using the client libraries
Console
You cannot undelete a table by using the Cloud Console.
bq
To restore a table, use a table copy operation with the @<time> snapshot decorator. First, determine a UNIX timestamp (in milliseconds) of when the table existed. Then, use the bq cp command with the snapshot decorator.
For example, enter the following command to copy mydataset.mytable at the time 1418864998000 into a new table mydataset.newtable.
bq cp mydataset.mytable@1418864998000 mydataset.newtable
(Optional) Supply the --location flag and set the value to your location.
You can also specify a relative offset. The following example copies the version of a table from one hour ago:
bq cp mydataset.mytable@-3600000 mydataset.newtable
For more information, see Restore a table from a point in time.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Java API reference documentation.
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery Quickstart Using Client Libraries.
For more information, see the
BigQuery Python API reference documentation.
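As a minimal sketch using the google-cloud-bigquery Python client (table IDs and the timestamp are placeholders), the same @<time> snapshot decorator is applied to the source table ID of a copy job:

```python
import time

from google.cloud import bigquery

client = bigquery.Client()

table_id = "your-project.mydataset.mytable"             # placeholder: deleted table
recovered_table_id = "your-project.mydataset.newtable"  # placeholder: restore target

# Pick a point in time (epoch milliseconds) at which the table still existed,
# here one hour ago, and attach it with the @<time> snapshot decorator.
snapshot_epoch_ms = int(time.time() * 1000) - 3600 * 1000
snapshot_table_id = f"{table_id}@{snapshot_epoch_ms}"

job = client.copy_table(snapshot_table_id, recovered_table_id)
job.result()

print(f"Restored {table_id} to {recovered_table_id}.")
```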
Next steps
- For more information about creating and using tables, see Creating and using tables.
- For more information about handling data, see Working with table data.
- For more information about specifying table schemas, see Specifying a schema.
- For more information about modifying table schemas, see Modifying table schemas.
- For more information about datasets, see Introduction to datasets.
- For more information about views, see Introduction to views.