Managing Tables

This document describes how to manage tables in BigQuery. You can manage your BigQuery tables in the following ways:

  • Update a table's:
    • Expiration time
    • Description
    • Schema definition
    • Labels
  • Rename (copy) a table
  • Copy a table
  • Delete a table
  • Restore a deleted table

For more information on creating and using tables, including getting table information, listing tables, and controlling access to table data, see Creating and Using Tables.

Updating table properties

You can update a table's:

  • Expiration time
  • Description
  • Schema definition
  • Labels

Required permissions

To update table properties, you must have WRITER access at the dataset level, or you must be assigned a project-level IAM role that includes bigquery.tables.update permissions. The following predefined, project-level IAM roles include bigquery.tables.update permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

In addition, because the bigquery.user role has bigquery.datasets.create permissions, a user assigned to the bigquery.user role can update any table that user creates in the dataset. When a user assigned to the bigquery.user role creates a dataset, that user is given OWNER access to the dataset. OWNER access to a dataset gives the user full control over it and all the tables and views in it.

For more information on IAM roles and permissions in BigQuery, see Access control.

Updating a table's description

To update a table's description, you can use the BigQuery web UI, the bq update CLI command, or the tables.patch API method.

To update a table's description:

Web UI

You cannot add a description when you create a table using the BigQuery web UI. After the table is created, you can add a description on the Table Details page.

  1. In the navigation pane, select your table.

  2. On the Table Details page, click Details.

  3. In the Description section, click Describe this table to open the description box.

  4. Enter a description in the box. When you click away from the box, the text is saved.


CLI

Issue the bq update command with the --description flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: [PROJECT_ID]:[DATASET].

bq update --description "[DESCRIPTION]" [PROJECT_ID]:[DATASET].[TABLE]

Where:

  • [DESCRIPTION] is the text describing the table in quotes.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the dataset that contains the table you're updating.
  • [TABLE] is the name of the table you're updating.

Examples:

Enter the following command to change the description of mytable in mydataset to "Description of mytable." mydataset is in your default project.

bq update --description "Description of mytable" mydataset.mytable

Enter the following command to change the description of mytable in mydataset to "Description of mytable." mydataset is in myotherproject, not your default project.

bq update --description "Description of mytable" myotherproject:mydataset.mytable

API

Call the tables.patch method and use the description property to update the table's description in the table resource. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
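
For illustration, here is a minimal sketch of a tables.patch call using the Google API discovery client for Python. The project, dataset, and table IDs are placeholders, and authentication setup (application default credentials) is assumed:

# Sketch only: patches the description field and leaves all other
# table properties untouched (unlike tables.update).
from googleapiclient.discovery import build

service = build('bigquery', 'v2')
service.tables().patch(
    projectId='your-project-id',  # placeholder
    datasetId='mydataset',        # placeholder
    tableId='mytable',            # placeholder
    body={'description': 'Updated description.'},
).execute()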

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
tableRef := client.Dataset(datasetID).Table(tableID)
meta, err := tableRef.Metadata(ctx)
if err != nil {
	return err
}
update := bigquery.TableMetadataToUpdate{
	Description: "Updated description.",
}
if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String datasetName = "my_dataset_name";
// String tableName = "my_table_name";
// String newDescription = "new_description";

Table beforeTable = bigquery.getTable(datasetName, tableName);
TableInfo tableInfo = beforeTable.toBuilder()
    .setDescription(newDescription)
    .build();
Table afterTable = bigquery.update(tableInfo);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

Configure the Table.description property and call Client.update_table() to send the update to the API.

# from google.cloud import bigquery
# client = bigquery.Client()
# table_ref = client.dataset('my_dataset').table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == 'Original description.'
table.description = 'Updated description.'

table = client.update_table(table, ['description'])  # API request

assert table.description == 'Updated description.'

Updating a table's expiration time

You can set a default table expiration time at the dataset level, or you can set a table's expiration time when the table is created. A table's expiration time is often referred to as "time to live" or TTL.

If you set the expiration when the table is created, the dataset's default table expiration is ignored. If you do not set a default table expiration at the dataset level, and you do not set a table expiration when the table is created, the table never expires and you must delete the table manually.

At any point after the table is created, you can update the table's expiration time using the BigQuery web UI, the CLI's bq update command, or the API's tables.patch method.

When you update a table's expiration time, you must calculate the expiration based on the table's creation time. For example, if your table was created on January 3, 2018, and today is January 5, setting the expiration time to 1 day in the BigQuery web UI produces an expiration time of January 4. That expiration time is considered "in the past," and the table is deleted immediately.
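
The arithmetic behind that example can be sketched in a few lines (dates assumed from the example above):

# Illustration only: the web UI evaluates the new expiration against
# the table's creation time, not the current time.
import datetime

created = datetime.datetime(2018, 1, 3)
today = datetime.datetime(2018, 1, 5)

expiration = created + datetime.timedelta(days=1)  # January 4, 2018
assert expiration < today  # already in the past, so the table is deleted immediately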

To update a table's expiration time:

Web UI

You cannot add an expiration time when you create a table using the web UI. After a table is created, you can add or update a table expiration on the Table Details page.

  1. In the navigation pane, select your table.

  2. On the Table Details page, click Details.

  3. For Expiration Time, click Edit.

  4. In the Update Expiration dialog, click In and enter an expiration time in days.

  5. Click OK. The updated expiration time appears on the Details page.


CLI

Issue the bq update command with the --expiration flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: [PROJECT_ID]:[DATASET].

bq update --expiration [INTEGER] [PROJECT_ID]:[DATASET].[TABLE]

Where:

  • [INTEGER] is the lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the dataset that contains the table you're updating.
  • [TABLE] is the name of the table you're updating.

Examples:

Enter the following command to update the expiration time of mytable in mydataset to 5 days (432000 seconds). mydataset is in your default project.

bq update --expiration 432000 mydataset.mytable

Enter the following command to update the expiration time of mytable in mydataset to 5 days (432000 seconds). mydataset is in myotherproject, not your default project.

bq update --expiration 432000 myotherproject:mydataset.mytable

API

Call the tables.patch method and use the expirationTime property to update the table expiration in milliseconds. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
tableRef := client.Dataset(datasetID).Table(tableID)
meta, err := tableRef.Metadata(ctx)
if err != nil {
	return err
}
update := bigquery.TableMetadataToUpdate{
	ExpirationTime: time.Now().Add(time.Duration(5*24) * time.Hour), // table expiration in 5 days.
}
if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

Table beforeTable = bigquery.getTable(datasetName, tableName);

// Set table to expire 5 days from now.
long expirationMillis = DateTime.now().plusDays(5).getMillis(); // org.joda.time.DateTime
TableInfo tableInfo = beforeTable.toBuilder()
        .setExpirationTime(expirationMillis)
        .build();
Table afterTable = bigquery.update(tableInfo);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

Configure the Table.expires property and call Client.update_table() to send the update to the API.

import datetime
import pytz

# from google.cloud import bigquery
# client = bigquery.Client()
# table_ref = client.dataset('my_dataset').table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.expires is None

# set table to expire 5 days from now
expiration = datetime.datetime.now(pytz.utc) + datetime.timedelta(days=5)
table.expires = expiration
table = client.update_table(table, ['expires'])  # API request

# expiration is stored in milliseconds
margin = datetime.timedelta(microseconds=1000)
assert expiration - margin <= table.expires <= expiration + margin

Updating a table's schema definition

For instructions on updating a table's schema definition, see Modifying Table Schemas.

Renaming a table

Currently, you cannot change the name of an existing table. If you need to change the table name, follow the steps to copy the table. When you specify the destination table in the copy operation, use the new table name.
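
As an illustration, here is a minimal sketch of the rename-by-copy pattern using the Python client library shown elsewhere on this page. The dataset and table names are placeholders, and you may want to verify the copy before deleting the original:

# Hypothetical names: "renames" mydataset.mytable to mydataset.mynewtable
# by copying the table and then deleting the original.
from google.cloud import bigquery

client = bigquery.Client()
source_ref = client.dataset('mydataset').table('mytable')
dest_ref = client.dataset('mydataset').table('mynewtable')

copy_job = client.copy_table(source_ref, dest_ref)  # API request
copy_job.result()  # Waits for the copy to complete.

client.delete_table(source_ref)  # API request: drop the old name.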

Copying a table

You can copy a table by using the BigQuery web UI, the command-line tool's bq cp command, or by calling the jobs.insert API method and configuring a copy job.

Required permissions

At the dataset level, copying a table requires READER access to the source dataset that contains the table being copied, and WRITER access to the destination dataset.

Instead of using dataset-level permissions, you can leverage a project-level IAM role that includes bigquery.tables.create and bigquery.tables.getData permissions. bigquery.tables.create permissions are required to create the copy of the table in the destination dataset. bigquery.tables.getData permissions are required to read the data in the table being copied.

The following predefined, project-level IAM roles include both bigquery.tables.create and bigquery.tables.getData permissions for every dataset in the project:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

In addition, because the bigquery.user role has bigquery.datasets.create permissions, a user assigned to the bigquery.user role can read the data in any table that user creates and can create a copy of the table in any dataset that user creates. When a user assigned to the bigquery.user role creates a dataset, that user is given OWNER access to the dataset. OWNER access to a dataset gives the user full control over it and all the tables in it.

For more information on IAM roles and permissions in BigQuery, see access control. For more information on dataset-level roles, see Primitive roles for datasets.

Limitations on copying tables

Table copy jobs are subject to the following limitations:

  • When you copy a table, the name of the destination table must adhere to the same naming conventions as when you create a table.
  • Table copies are subject to BigQuery's quota policies on copy jobs.
  • When you use the BigQuery web UI to copy a table, you cannot overwrite an existing table in the destination dataset. The table must have a unique name in the destination dataset.
  • When copying tables, the destination dataset must reside in the same location as the dataset containing the table being copied. For example, you cannot copy a table from an EU-based dataset and write it to a US-based dataset.
  • Copying multiple source tables into a destination table is not supported by the web UI.
  • When copying multiple source tables to a destination table using the CLI or API, all source tables must have identical schemas.

Copying a single source table

You can copy a single table by using the BigQuery web UI, using the command-line tool's bq cp command, or by calling the jobs.insert API method, configuring a copy job, and specifying the configuration.copy.sourceTable property.

The BigQuery web UI supports only one source table and one destination table. To copy multiple source tables to a destination table, you must use the command-line tool or the API.

To copy a single source table:

Web UI

  1. Click the down arrow icon next to the table you want to copy and click Copy Table.

  2. In the Copy Table dialog:

    • For Destination project, choose the project that will store the copied table.
    • For Destination dataset, select the dataset where you want to store the copied table. The source and destination datasets must be in the same location.
    • For Destination table, enter a name for the new table. The name must be unique in the destination dataset. The table name can be up to 1024 characters long and can contain only a-z, A-Z, 0-9, or _ (the underscore character). You cannot overwrite an existing table in the destination dataset using the BigQuery web UI.


  3. Click OK to start the copy job.

Command-line

Issue the bq cp command. Optional flags can be used to control the write disposition of the destination table:

  • -a or --append_table appends the data from the source table to an existing table in the destination dataset.
  • -f or --force overwrites an existing table in the destination dataset and doesn't prompt you for confirmation.
  • -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table '[PROJECT_ID]:[DATASET].[TABLE]' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
  • --destination_kms_key is the customer-managed Cloud KMS key used to encrypt the destination table.

--destination_kms_key is not demonstrated here. See Protecting Data with Cloud KMS Keys for more information.

If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: [PROJECT_ID]:[DATASET].

Supply the --location flag and set the value to your location.

bq --location=[LOCATION] cp -a -f -n [PROJECT_ID]:[DATASET].[SOURCE_TABLE] [PROJECT_ID]:[DATASET].[DESTINATION_TABLE]

Where:

  • [LOCATION] is the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the source or destination dataset.
  • [SOURCE_TABLE] is the table you're copying.
  • [DESTINATION_TABLE] is the name of the table in the destination dataset.

Examples:

Enter the following command to copy mydataset.mytable to mydataset2.mytable2. Both datasets are in your default project and were created in the US multi-region location.

bq --location=US cp mydataset.mytable mydataset2.mytable2

Enter the following command to copy mydataset.mytable and to overwrite a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -f shortcut is used to overwrite the destination table without a prompt. mydataset and myotherdataset were created in the US multi-region location.

bq --location=US cp -f mydataset.mytable myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and to return an error if the destination dataset contains a table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -n shortcut is used to prevent overwriting a table with the same name. Both datasets were created in the US multi-region location.

bq --location=US cp -n mydataset.mytable myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and to append the data to a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -a shortcut is used to append to the destination table. Both datasets were created in the US multi-region location.

bq --location=US cp -a mydataset.mytable myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable to mydataset2.mytable2. Both datasets are in your default project. Both datasets were created in the asia-northeast1 region.

bq --location=asia-northeast1 cp mydataset.mytable mydataset2.mytable2

API

To copy a table using the API:

You can copy an existing table through the API by calling the bigquery.jobs.insert method, and configuring a copy job. Specify your location in the location property in the jobReference section of the job resource.

You must specify the following values in your job configuration:

"copy": {
      "sourceTable": {       // Required
        "projectId": string, // Required
        "datasetId": string, // Required
        "tableId": string    // Required
      },
      "destinationTable": {  // Required
        "projectId": string, // Required
        "datasetId": string, // Required
        "tableId": string    // Required
      },
      "createDisposition": string,  // Optional
      "writeDisposition": string,   // Optional
    },

Where sourceTable provides information about the table to be copied, destinationTable provides information about the new table, createDisposition specifies whether to create the table if it doesn't exist, and writeDisposition specifies whether to overwrite or append to an existing table.
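
With the Python client library used elsewhere on this page, the same dispositions can be set through a copy job configuration. A minimal sketch with placeholder table references:

# Sketch only: demonstrates createDisposition and writeDisposition
# on a copy job. The table references are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
source_ref = client.dataset('mydataset').table('mytable')
dest_ref = client.dataset('mydataset2').table('mytable2')

job_config = bigquery.CopyJobConfig()
job_config.create_disposition = 'CREATE_IF_NEEDED'  # create the table if it doesn't exist
job_config.write_disposition = 'WRITE_TRUNCATE'     # overwrite any existing data

job = client.copy_table(source_ref, dest_ref, job_config=job_config)
job.result()  # Waits for the copy job to complete.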

C#

Before trying this sample, follow the C# setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery C# API reference documentation.

public void CopyTable(
    string datasetId, string tableIdToBeCopied, string newTableId, BigQueryClient client)
{
    var table = client.GetTable(datasetId, tableIdToBeCopied);
    string query = $"SELECT * FROM {table}";
    var destination = client.GetTableReference(datasetId, newTableId);
    BigQueryJob job = client.CreateQueryJob(query,
        parameters: null,
        options: new QueryOptions { DestinationTable = destination });
    // Wait for the job to complete.
    job.GetQueryResults();
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
dataset := client.Dataset(datasetID)
copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
copier.WriteDisposition = bigquery.WriteTruncate
job, err := copier.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

TableId destinationId = TableId.of(dataset, tableName);
JobOption options = JobOption.fields(JobField.STATUS, JobField.USER_EMAIL);
Job job = table.copy(destinationId, options);
// Wait for the job to complete.
try {
  Job completedJob =
      job.waitFor(
          RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
          RetryOption.totalTimeout(Duration.ofMinutes(3)));
  if (completedJob != null && completedJob.getStatus().getError() == null) {
    // Job completed successfully.
  } else {
    // Handle error case.
  }
} catch (InterruptedException e) {
  // Handle interrupted wait
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.

// Imports the Google Cloud client library
const BigQuery = require('@google-cloud/bigquery');
const assert = require('assert');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = "your-project-id";
// const srcDatasetId = "my_src_dataset";
// const srcTableId = "my_src_table";
// const destDatasetId = "my_dest_dataset";
// const destTableId = "my_dest_table";

// Creates a client
const bigquery = new BigQuery({
  projectId: projectId,
});

// Copies the table contents into another table
bigquery
  .dataset(srcDatasetId)
  .table(srcTableId)
  .copy(bigquery.dataset(destDatasetId).table(destTableId))
  .then(results => {
    const job = results[0];

    // copy() waits for the job to finish
    assert.equal(job.status.state, 'DONE');
    console.log(`Job ${job.id} completed.`);

    // Check the job's status for errors
    const errors = job.status.errors;
    if (errors && errors.length > 0) {
      throw errors;
    }
  })
  .catch(err => {
    console.error('ERROR:', err);
  });

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery PHP API reference documentation.

use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId   = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});
// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# from google.cloud import bigquery
# client = bigquery.Client()

source_dataset = client.dataset('samples', project='bigquery-public-data')
source_table_ref = source_dataset.table('shakespeare')

# dataset_id = 'my_dataset'
dest_table_ref = client.dataset(dataset_id).table('destination_table')

job = client.copy_table(
    source_table_ref,
    dest_table_ref,
    # Location must match that of the source and destination tables.
    location='US')  # API request

job.result()  # Waits for job to complete.

assert job.state == 'DONE'
dest_table = client.get_table(dest_table_ref)  # API request
assert dest_table.num_rows > 0

Copying multiple source tables

You can copy multiple source tables to a destination table by using the command-line tool's bq cp command or by calling the jobs.insert method, configuring a copy job, and specifying the configuration.copy.sourceTables property. All source tables must have identical schemas, and only one destination table is allowed.

Source tables must be specified as a comma-separated list. You cannot use wildcards when you copy multiple source tables.

To copy multiple source tables:

Command-line

Issue the bq cp command and include multiple source tables as a comma-separated list. Optional flags can be used to control the write disposition of the destination table:

  • -a or --append_table appends the data from the source tables to an existing table in the destination dataset.
  • -f or --force overwrites an existing destination table in the destination dataset and doesn't prompt you for confirmation.
  • -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table '[PROJECT_ID]:[DATASET].[TABLE]' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
  • --destination_kms_key is the customer-managed Cloud KMS key used to encrypt the destination table.

--destination_kms_key is not demonstrated here. See Protecting Data with Cloud KMS Keys for more information.

If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: [PROJECT_ID]:[DATASET].

Supply the --location flag and set the value to your location.

bq --location=[LOCATION] cp -a -f -n [PROJECT_ID]:[DATASET].[SOURCE_TABLE],[PROJECT_ID]:[DATASET].[SOURCE_TABLE] [PROJECT_ID]:[DATASET].[DESTINATION_TABLE]

Where:

  • [LOCATION] is the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the source or destination dataset.
  • [SOURCE_TABLE] is the table you're copying.
  • [DESTINATION_TABLE] is the name of the table in the destination dataset.

Examples:

Enter the following command to copy mydataset.mytable and mydataset.mytable2 to mydataset2.tablecopy. All datasets are in your default project and were created in the US multi-region location.

bq --location=US cp mydataset.mytable,mydataset.mytable2 mydataset2.tablecopy

Enter the following command to copy mydataset.mytable and mydataset.mytable2 to myotherdataset.mytable and to overwrite a destination table with the same name. The destination dataset is in myotherproject, not your default project. The -f shortcut is used to overwrite the destination table without a prompt. Both datasets were created in the US multi-region location.

bq --location=US cp -f mydataset.mytable,mydataset.mytable2 myotherproject:myotherdataset.mytable

Enter the following command to copy myproject:mydataset.mytable and myproject:mydataset.mytable2 and to return an error if the destination dataset contains a table with the same name. The destination dataset is in myotherproject. The -n shortcut is used to prevent overwriting a table with the same name. All datasets were created in the US multi-region location.

bq --location=US cp -n myproject:mydataset.mytable,myproject:mydataset.mytable2 myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and mydataset.mytable2 and to append the data to a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -a shortcut is used to append to the destination table. All datasets were created in the US multi-region location.

bq --location=US cp -a mydataset.mytable,mydataset.mytable2 myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and mydataset.mytable2 to mydataset2.tablecopy. All datasets are in your default project and were created in the asia-northeast1 region.

bq --location=asia-northeast1 cp mydataset.mytable,mydataset.mytable2 mydataset2.tablecopy

API

To copy multiple tables using the API, call the jobs.insert method, configure a copy job, and specify the configuration.copy.sourceTables property.

Specify your region in the location property in the jobReference section of the job resource.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
dataset := client.Dataset(datasetID)

srcTableIDs := []string{"table1", "table2"}
var tableRefs []*bigquery.Table
for _, v := range srcTableIDs {
	tableRefs = append(tableRefs, dataset.Table(v))
}
copier := dataset.Table(dstTableID).CopierFrom(tableRefs...)
copier.WriteDisposition = bigquery.WriteTruncate
job, err := copier.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

TableId destinationTable = TableId.of(datasetId, destinationTableId);
CopyJobConfiguration configuration =
    CopyJobConfiguration.newBuilder(
        destinationTable,
        Arrays.asList(
            TableId.of(datasetId, "table1"),
            TableId.of(datasetId, "table2")))
    .build();

// Copy the tables.
Job job = bigquery.create(JobInfo.of(configuration));
job = job.waitFor();

// Check the table
StandardTableDefinition table = bigquery.getTable(destinationTable).getDefinition();
System.out.println("State: " + job.getStatus().getState());
System.out.printf("Copied %d rows.\n", table.getNumRows());

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# from google.cloud import bigquery
# client = bigquery.Client()
# source_dataset_id = 'my_source_dataset'
# dest_dataset_id = 'my_destination_dataset'

table1_ref = client.dataset(source_dataset_id).table('table1')
table2_ref = client.dataset(source_dataset_id).table('table2')
dest_table_ref = client.dataset(dest_dataset_id).table('destination_table')

job = client.copy_table(
    [table1_ref, table2_ref],
    dest_table_ref,
    # Location must match that of the source and destination tables.
    location='US')  # API request
job.result()  # Waits for job to complete.

assert job.state == 'DONE'
dest_table = client.get_table(dest_table_ref)  # API request
assert dest_table.num_rows > 0

Deleting tables

You can delete a table using the BigQuery web UI, the command-line tool's bq rm command, or by calling the tables.delete API method. Currently, you can only delete one table at a time.

When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.

Required permissions

To delete a table, you must have OWNER access at the dataset level, or you must be assigned a project-level IAM role that includes bigquery.tables.delete permissions. The following predefined, project-level IAM roles include bigquery.tables.delete permissions:

  • bigquery.dataOwner
  • bigquery.dataEditor
  • bigquery.admin

Users assigned a predefined, project-level role can delete tables in any dataset in the project. Users assigned OWNER permissions at the dataset level can delete tables only in that dataset.

In addition, because the bigquery.user role has bigquery.datasets.create permissions, a user assigned to the bigquery.user role can delete tables in any dataset that user creates. When a user assigned to the bigquery.user role creates a dataset, that user is given OWNER access to the dataset. OWNER access to a dataset gives the user full control over the dataset and all the tables in it.

For more information on IAM roles and permissions in BigQuery, see Access Control. For more information on dataset-level roles, see Primitive roles for datasets.

Deleting a table

To delete a table:

Web UI

  1. Click the down arrow icon next to your table name in the navigation bar and click Delete table.

  2. When prompted, click OK to confirm.

Command-line

Use the bq rm command with the --table flag (or -t shortcut) to delete a table. When you use the CLI to remove a table, you must confirm the action. You can use the --force flag (or -f shortcut) to skip confirmation.

If the table is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: [PROJECT_ID]:[DATASET].

bq rm -f -t [PROJECT_ID]:[DATASET].[TABLE]

Where:

  • [PROJECT_ID] is your project ID.
  • [DATASET] is the name of the dataset that contains the table.
  • [TABLE] is the name of the table you're deleting.

Examples:

Enter the following command to delete mytable from mydataset. mydataset is in your default project.

bq rm -t mydataset.mytable

Enter the following command to delete mytable from mydataset. mydataset is in myotherproject, not your default project.

bq rm -t myotherproject:mydataset.mytable

Enter the following command to delete mytable from mydataset. mydataset is in your default project. The command uses the -f shortcut to bypass confirmation.

bq rm -f -t mydataset.mytable

API

Call the tables.delete API method and specify the table to delete using the tableId parameter.

C#

Before trying this sample, follow the C# setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery C# API reference documentation.

public void DeleteTable(string datasetId, string tableId, BigQueryClient client)
{
    client.DeleteTable(_projectId, datasetId, tableId);
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
table := client.Dataset(datasetID).Table(tableID)
if err := table.Delete(ctx); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

TableId tableId = TableId.of(projectId, datasetName, tableName);
boolean deleted = bigquery.delete(tableId);
if (deleted) {
  // the table was deleted
} else {
  // the table was not found
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.

// Imports the Google Cloud client library
const BigQuery = require('@google-cloud/bigquery');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = "your-project-id";
// const datasetId = "my_dataset";
// const tableId = "my_table";

// Creates a client
const bigquery = new BigQuery({
  projectId: projectId,
});

// Deletes the table
bigquery
  .dataset(datasetId)
  .table(tableId)
  .delete()
  .then(() => {
    console.log(`Table ${tableId} deleted.`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery PHP API reference documentation.

use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $tableId = 'The BigQuery table ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$table = $dataset->table($tableId);
$table->delete();
printf('Deleted table %s.%s' . PHP_EOL, $datasetId, $tableId);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'my_dataset'
# table_id = 'my_table'

table_ref = client.dataset(dataset_id).table(table_id)
client.delete_table(table_ref)  # API request

print('Table {}:{} deleted.'.format(dataset_id, table_id))

Ruby

Before trying this sample, follow the Ruby setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Ruby API reference documentation.

# project_id = "Your Google Cloud project ID"
# dataset_id = "ID of the dataset delete table from"
# table_id   = "ID of the table to delete"

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new project: project_id
dataset  = bigquery.dataset dataset_id
table    = dataset.table table_id

table.delete

puts "Deleted table: #{table_id}"

Restoring a deleted table

It's possible to restore a table within 2 days of deletion. By leveraging snapshot decorator functionality, you may be able to reference the table as it existed prior to the deletion event and then copy it. However, there are two primary caveats to creating a reference in this fashion:

  • You cannot reference a deleted table if a table bearing the same ID in the dataset was created after the deletion time.

  • You cannot reference a deleted table if the encapsulating dataset was also deleted/recreated since the table deletion event.

CLI

To undelete a table using legacy SQL, use the @<time> snapshot decorator. First, determine a UNIX timestamp of when the table existed (in milliseconds). Then, copy the table at that timestamp to a new table. The new table must have a different name than the deleted table.

For example, enter the following command to copy mydataset.mytable at the time 1418864998000 into a new table mydataset.newtable.

Supply the --location flag and set the value to your location.

bq --location=[LOCATION] cp mydataset.mytable@1418864998000 mydataset.newtable

For more information, see Table Decorators in Legacy SQL.

Go
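
Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.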

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")

ds := client.Dataset(datasetID)
if _, err := ds.Table(tableID).Metadata(ctx); err != nil {
	return err
}
// Record the current time.  We'll use this as the snapshot time
// for recovering the table.
snapTime := time.Now()

// "Accidentally" delete the table.
if err := client.Dataset(datasetID).Table(tableID).Delete(ctx); err != nil {
	return err
}

// Construct the restore-from tableID using a snapshot decorator.
snapshotTableID := fmt.Sprintf("%s@%d", tableID, snapTime.UnixNano()/1e6)
// Choose a new table ID for the recovered table data.
recoverTableID := fmt.Sprintf("%s_recovered", tableID)

// Construct and run a copy job.
copier := ds.Table(recoverTableID).CopierFrom(ds.Table(snapshotTableID))
copier.WriteDisposition = bigquery.WriteTruncate
job, err := copier.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String datasetId = "my_dataset";
String tableId = "oops_undelete_me";

// Record the current time.  We'll use this as the snapshot time
// for recovering the table.
long snapTime = Instant.now().getMillis(); // org.joda.time.Instant

// "Accidentally" delete the table.
bigquery.delete(TableId.of(datasetId, tableId));

// Construct the restore-from tableID using a snapshot decorator.
String snapshotTableId = String.format("%s@%d", tableId, snapTime);
// Choose a new table ID for the recovered table data.
String recoverTableId = String.format("%s_recovered", tableId);

// Construct and run a copy job.
CopyJobConfiguration configuration =
    CopyJobConfiguration.newBuilder(
        TableId.of(datasetId, recoverTableId),
        TableId.of(datasetId, snapshotTableId))
    .build();
Job job = bigquery.create(JobInfo.of(configuration));
job = job.waitFor();

// Check the table
StandardTableDefinition table = bigquery.getTable(
        TableId.of(datasetId, recoverTableId)).getDefinition();
System.out.println("State: " + job.getStatus().getState());
System.out.printf("Recovered %d rows.\n", table.getNumRows());

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# TODO(developer): Uncomment the lines below and replace with your values.
# import time
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'my_dataset'  # Replace with your dataset ID.
# table_id = 'my_table'      # Replace with your table ID.

table_ref = client.dataset(dataset_id).table(table_id)

# TODO(developer): Choose an appropriate snapshot point as epoch
# milliseconds. For this example, we choose the current time as we're about
# to delete the table immediately afterwards.
snapshot_epoch = int(time.time() * 1000)

# "Accidentally" delete the table.
client.delete_table(table_ref)  # API request

# Construct the restore-from table ID using a snapshot decorator.
snapshot_table_id = '{}@{}'.format(table_id, snapshot_epoch)
source_table_ref = client.dataset(dataset_id).table(snapshot_table_id)

# Choose a new table ID for the recovered table data.
recovered_table_id = '{}_recovered'.format(table_id)
dest_table_ref = client.dataset(dataset_id).table(recovered_table_id)

# Construct and run a copy job.
job = client.copy_table(
    source_table_ref,
    dest_table_ref,
    # Location must match that of the source and destination tables.
    location='US')  # API request

job.result()  # Waits for job to complete.

print('Copied data from deleted table {} to {}'.format(
    table_id, recovered_table_id))
