Managing tables

This document describes how to manage tables in BigQuery. You can manage your BigQuery tables in the following ways:

  • Update a table's:
    • Expiration time
    • Description
    • Schema definition
    • Labels
  • Rename (copy) a table
  • Copy a table
  • Delete a table
  • Restore a deleted table

For more information on creating and using tables, including getting table information, listing tables, and controlling access to table data, see Creating and using tables.

Updating table properties

You can update a table's expiration time, description, schema definition, and labels.

Required permissions

At a minimum, to update a table, you must be granted bigquery.tables.update and bigquery.tables.get permissions. The following predefined Cloud IAM roles include bigquery.tables.update and bigquery.tables.get permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives users the ability to update table properties in datasets they create.

For more information on Cloud IAM roles and permissions in BigQuery, see Access control.

Updating a table's description

You can update a table's description by:

  • Using the GCP Console or the classic BigQuery web UI
  • Using a DDL ALTER TABLE statement
  • Using the bq update CLI command
  • Calling the tables.patch API method
  • Using the client libraries

To update a table's description:

Console

You cannot add a description when you create a table using the GCP Console. After the table is created, you can add a description on the Details page.

  1. In the Resources pane, select your table.

  2. Below the Query editor, click Details.

  3. In the Description section, click the pencil icon to edit the description.

  4. Enter a description in the box, and click Update to save.

DDL

Data definition language (DDL) statements allow you to create and modify tables and views using standard SQL query syntax.

See more on Using Data Definition Language statements.

To update a table's description by using a DDL statement in the GCP Console:

  1. Click Compose new query.

  2. Type your DDL statement into the Query editor text area.

     ALTER TABLE mydataset.mytable
     SET OPTIONS (
       description="Description of mytable"
     )
     

  3. Click Run.

Classic UI

  1. In the navigation pane, select your table.

  2. On the Table Details page, click Details.

  3. In the Description section, click Describe this table to open the description box.

  4. Enter a description in the box. When you click away from the box, the text is saved.

CLI

Issue the bq update command with the --description flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

bq update \
--description "description" \
project_id:dataset.table

Where:

  • description is the text describing the table in quotes.
  • project_id is your project ID.
  • dataset is the name of the dataset that contains the table you're updating.
  • table is the name of the table you're updating.

Examples:

Enter the following command to change the description of mytable in mydataset to "Description of mytable." mydataset is in your default project.

bq update --description "Description of mytable" mydataset.mytable

Enter the following command to change the description of mytable in mydataset to "Description of mytable." mydataset is in myotherproject, not your default project.

bq update \
--description "Description of mytable" \
myotherproject:mydataset.mytable

API

Call the tables.patch method and use the description property in the table resource to update the table's description. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
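
For illustration, here is a minimal Python sketch that issues the same update through the discovery-based google-api-python-client; the project, dataset, and table IDs are placeholders, and credentials are assumed to come from Application Default Credentials.

# A sketch only: assumes google-api-python-client is installed and
# Application Default Credentials are available.
from googleapiclient.discovery import build

service = build("bigquery", "v2")

# Patch only the description; other table properties are left untouched.
service.tables().patch(
    projectId="your-project-id",
    datasetId="mydataset",
    tableId="mytable",
    body={"description": "Description of mytable"},
).execute()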

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
tableRef := client.Dataset(datasetID).Table(tableID)
meta, err := tableRef.Metadata(ctx)
if err != nil {
	return err
}
update := bigquery.TableMetadataToUpdate{
	Description: "Updated description.",
}
if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String datasetName = "my_dataset_name";
// String tableName = "my_table_name";
// String newDescription = "new_description";

Table beforeTable = bigquery.getTable(datasetName, tableName);
TableInfo tableInfo = beforeTable.toBuilder()
    .setDescription(newDescription)
    .build();
Table afterTable = bigquery.update(tableInfo);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

Configure the Table.description property and call Client.update_table() to send the update to the API.

# from google.cloud import bigquery
# client = bigquery.Client()
# table_ref = client.dataset('my_dataset').table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == "Original description."
table.description = "Updated description."

table = client.update_table(table, ["description"])  # API request

assert table.description == "Updated description."

Updating a table's expiration time

You can set a default table expiration time at the dataset level, or you can set a table's expiration time when the table is created. A table's expiration time is often referred to as "time to live" or TTL.

If you set the expiration when the table is created, the dataset's default table expiration is ignored. If you do not set a default table expiration at the dataset level, and you do not set a table expiration when the table is created, the table never expires and you must delete the table manually.
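
As a rough illustration of this precedence, the following Python sketch (using the google-cloud-bigquery client library; dataset and table names are placeholders) sets a dataset-level default and then overrides it for a single table:

import datetime

import pytz
from google.cloud import bigquery

client = bigquery.Client()

# Dataset default: new tables expire 7 days after creation unless they
# set their own expiration.
dataset = client.get_dataset(client.dataset("mydataset"))
dataset.default_table_expiration_ms = 7 * 24 * 60 * 60 * 1000
dataset = client.update_dataset(dataset, ["default_table_expiration_ms"])

# Table-level expiration: overrides the dataset default for this table.
table = client.get_table(client.dataset("mydataset").table("mytable"))
table.expires = datetime.datetime.now(pytz.utc) + datetime.timedelta(days=2)
table = client.update_table(table, ["expires"])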

At any point after the table is created, you can update the table's expiration time by:

  • Using the GCP Console or the classic BigQuery web UI
  • Using a DDL ALTER TABLE statement
  • Using the CLI's bq update command
  • Calling the tables.patch API method
  • Using the client libraries

To update a table's expiration time:

Console

You cannot add an expiration time when you create a table using the GCP Console. After a table is created, you can add or update a table expiration on the Table Details page.

  1. In the Resources pane, select your table.

  2. Below the Query editor, click Details.

  3. Click the pencil icon next to Table info.

  4. For Table expiration, select Specify date. Then select the expiration date using the calendar widget.

  5. Click Update to save. The updated expiration time appears in the Table info section.

DDL

Data definition language (DDL) statements allow you to create and modify tables and views using standard SQL query syntax.

See more on Using Data Definition Language statements.

To update the expiration time by using a DDL statement in the GCP Console:

  1. Click Compose new query.

  2. Type your DDL statement into the Query editor text area.

     ALTER TABLE mydataset.mytable
     SET OPTIONS (
       -- Sets table expiration to timestamp 2025-02-03 12:34:56
       expiration_timestamp=TIMESTAMP "2025-02-03 12:34:56"
     )
     

  3. Click Run.

Classic UI

  1. In the navigation pane, select your table.

  2. On the Table Details page, click Details.

  3. For Expiration Time, click Edit.

  4. In the Update Expiration dialog, click In and enter an expiration time in days.

  5. Click OK. The updated expiration time appears on the Details page.

CLI

Issue the bq update command with the --expiration flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

bq update \
--expiration integer \
project_id:dataset.table

Where:

  • integer is the lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value. If you specify 0, the table expiration is removed, and the table never expires. Tables with no expiration must be deleted manually.
  • project_id is your project ID.
  • dataset is the name of the dataset that contains the table you're updating.
  • table is the name of the table you're updating.

Examples:

Enter the following command to update the expiration time of mytable in mydataset to 5 days (432000 seconds). mydataset is in your default project.

bq update --expiration 432000 mydataset.mytable

Enter the following command to update the expiration time of mytable in mydataset to 5 days (432000 seconds). mydataset is in myotherproject, not your default project.

bq update --expiration 432000 myotherproject:mydataset.mytable

API

Call the tables.patch method and use the expirationTime property in the table resource to update the table expiration in milliseconds. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
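
As a hedged illustration, this Python sketch patches expirationTime through the discovery-based google-api-python-client; the IDs are placeholders. Note that the REST API expresses the value as epoch milliseconds serialized as a string.

import time

from googleapiclient.discovery import build

service = build("bigquery", "v2")

# Expire the table five days from now; expirationTime is epoch
# milliseconds, passed as a string.
expiration_ms = int((time.time() + 5 * 24 * 60 * 60) * 1000)

service.tables().patch(
    projectId="your-project-id",
    datasetId="mydataset",
    tableId="mytable",
    body={"expirationTime": str(expiration_ms)},
).execute()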

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
tableRef := client.Dataset(datasetID).Table(tableID)
meta, err := tableRef.Metadata(ctx)
if err != nil {
	return err
}
update := bigquery.TableMetadataToUpdate{
	ExpirationTime: time.Now().Add(time.Duration(5*24) * time.Hour), // table expiration in 5 days.
}
if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String datasetName = "my_dataset_name";
// String tableName = "my_table_name";

Table beforeTable = bigquery.getTable(datasetName, tableName);

// Set table to expire 5 days from now.
long expirationMillis = DateTime.now().plusDays(5).getMillis();
TableInfo tableInfo = beforeTable.toBuilder()
        .setExpirationTime(expirationMillis)
        .build();
Table afterTable = bigquery.update(tableInfo);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

Configure the Table.expires property and call Client.update_table() to send the update to the API.

import datetime
import pytz

# from google.cloud import bigquery
# client = bigquery.Client()
# table_ref = client.dataset('my_dataset').table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.expires is None

# set table to expire 5 days from now
expiration = datetime.datetime.now(pytz.utc) + datetime.timedelta(days=5)
table.expires = expiration
table = client.update_table(table, ["expires"])  # API request

# expiration is stored in milliseconds
margin = datetime.timedelta(microseconds=1000)
assert expiration - margin <= table.expires <= expiration + margin

Updating a table's schema definition

For instructions on updating a table's schema definition, see Modifying table schemas.

Renaming a table

Currently, you cannot change the name of an existing table. If you need to change the table name, follow the steps to copy the table. When you specify the destination table in the copy operation, use the new table name.
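
For example, a "rename" with the google-cloud-bigquery Python client is a copy followed by a delete; the names below are placeholders, and this sketch removes the original only after the copy job completes:

from google.cloud import bigquery

client = bigquery.Client()

source_ref = client.dataset("mydataset").table("old_name")
dest_ref = client.dataset("mydataset").table("new_name")

copy_job = client.copy_table(source_ref, dest_ref)  # API request
copy_job.result()  # Raises if the copy job failed.

# Remove the table under the old name once the copy has succeeded.
client.delete_table(source_ref)  # API request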

Copying a table

You can copy a table by:

  • Using the GCP Console or the classic BigQuery web UI
  • Using the command-line tool's bq cp command
  • Calling the jobs.insert API method and configuring a copy job
  • Using the client libraries

Required permissions

At a minimum, to copy tables and partitions, you must be granted the following permissions.

On the source dataset:

  • bigquery.tables.get
  • bigquery.tables.getData

On the destination dataset:

  • bigquery.tables.create to create the copy of the table or partition in the destination dataset

The following predefined Cloud IAM roles include bigquery.tables.create, bigquery.tables.get, and bigquery.tables.getData permissions:

  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

Also, to run the copy job, you must be granted bigquery.jobs.create permissions.

The following predefined Cloud IAM roles include bigquery.jobs.create permissions:

  • bigquery.user
  • bigquery.jobUser
  • bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to copy tables and partitions in the dataset, but access to the destination dataset is required unless the user also created the destination dataset.

For more information on Cloud IAM roles and permissions in BigQuery, see Access control.

Limitations on copying tables

Table copy jobs are subject to the following limitations:

  • When you copy a table, the name of the destination table must adhere to the same naming conventions as when you create a table.
  • Table copies are subject to BigQuery's quota policies on copy jobs.
  • When you use the GCP Console or the classic BigQuery web UI to copy a table, you cannot overwrite an existing table in the destination dataset. The table must have a unique name in the destination dataset.
  • When copying tables, the destination dataset must reside in the same location as the dataset containing the table being copied. For example, you cannot copy a table from an EU-based dataset and write it to a US-based dataset.
  • Copying multiple source tables into a destination table is not supported by the GCP Console or the classic BigQuery web UI.
  • When copying multiple source tables to a destination table using the CLI or API, all source tables must have identical schemas.

Copying a single source table

You can copy a single table by:

  • Using the GCP Console or the classic BigQuery web UI
  • Using the command-line tool's bq cp command
  • Calling the jobs.insert API method, configuring a copy job, and specifying the sourceTable property
  • Using the client libraries

The GCP Console and the classic BigQuery web UI support only one source table and one destination table in a copy job. To copy multiple source tables to a destination table, you must use the command-line tool or the API.

To copy a single source table:

Console

  1. Select the table you want to copy from the Resources pane.

  2. Below the Query editor, click Copy table.

  3. In the Copy table dialog, under Destination:

    • For Project name, choose the project that will store the copied table.
    • For Dataset name, select the dataset where you want to store the copied table. The source and destination datasets must be in the same location.
    • For Table name, enter a name for the new table. The name must be unique in the destination dataset. The table name can be up to 1024 characters long and can contain only a-z, A-Z, 0-9, or _ (the underscore character). You cannot overwrite an existing table in the destination dataset using the GCP Console.
  4. Click Copy to start the copy job.

Classic UI

  1. Click the down arrow icon next to the table you want to copy and click Copy Table.

  2. In the Copy Table dialog:

    • For Destination project, choose the project that will store the copied table.
    • For Destination dataset, select the dataset where you want to store the copied table. The source and destination datasets must be in the same location.
    • For Destination table, enter a name for the new table. The name must be unique in the destination dataset. The table name can be up to 1024 characters long and can contain only a-z, A-Z, 0-9, or _ (the underscore character). You cannot overwrite an existing table in the destination dataset using the classic BigQuery web UI.

  3. Click OK to start the copy job.

CLI

Issue the bq cp command. Optional flags can be used to control the write disposition of the destination table:

  • -a or --append_table appends the data from the source table to an existing table in the destination dataset.
  • -f or --force overwrites an existing table in the destination dataset and doesn't prompt you for confirmation.
  • -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
  • --destination_kms_key is the customer-managed Cloud KMS key used to encrypt the destination table.

--destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.

If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.

(Optional) Supply the --location flag and set the value to your location.

bq --location=location cp \
-a -f -n \
project_id:dataset.source_table \
project_id:dataset.destination_table

Where:

  • location is the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • project_id is your project ID.
  • dataset is the name of the source or destination dataset.
  • source_table is the table you're copying.
  • destination_table is the name of the table in the destination dataset.

Examples:

Enter the following command to copy mydataset.mytable to mydataset2.mytable2. Both datasets are in your default project.

bq cp mydataset.mytable mydataset2.mytable2

Enter the following command to copy mydataset.mytable and to overwrite a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -f shortcut is used to overwrite the destination table without a prompt.

bq cp -f \
mydataset.mytable \
myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and to return an error if the destination dataset contains a table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -n shortcut is used to prevent overwriting a table with the same name.

bq cp -n \
mydataset.mytable \
myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and to append the data to a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -a shortcut is used to append to the destination table.

bq cp -a mydataset.mytable myotherproject:myotherdataset.mytable

API

You can copy an existing table through the API by calling the bigquery.jobs.insert method, and configuring a copy job. Specify your location in the location property in the jobReference section of the job resource.

You must specify the following values in your job configuration:

"copy": {
      "sourceTable": {       // Required
        "projectId": string, // Required
        "datasetId": string, // Required
        "tableId": string    // Required
      },
      "destinationTable": {  // Required
        "projectId": string, // Required
        "datasetId": string, // Required
        "tableId": string    // Required
      },
      "createDisposition": string,  // Optional
      "writeDisposition": string,   // Optional
    },

Where sourceTable provides information about the table to be copied, destinationTable provides information about the new table, createDisposition specifies whether to create the table if it doesn't exist, and writeDisposition specifies whether to overwrite or append to an existing table.
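
As a minimal sketch, the following Python code submits such a configuration with the discovery-based google-api-python-client; the project, dataset, and table IDs and the location are placeholders:

import uuid

from googleapiclient.discovery import build

service = build("bigquery", "v2")

job = {
    "jobReference": {
        "projectId": "your-project-id",
        "jobId": str(uuid.uuid4()),
        "location": "US",  # Must match the source and destination datasets.
    },
    "configuration": {
        "copy": {
            "sourceTable": {
                "projectId": "your-project-id",
                "datasetId": "mydataset",
                "tableId": "mytable",
            },
            "destinationTable": {
                "projectId": "your-project-id",
                "datasetId": "mydataset2",
                "tableId": "mytable2",
            },
            "writeDisposition": "WRITE_TRUNCATE",  # Overwrite if it exists.
        }
    },
}

service.jobs().insert(projectId="your-project-id", body=job).execute()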

C#

Before trying this sample, follow the C# setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery C# API reference documentation.


using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(
            sourceTableRef, destinationTableRef)
            .PollUntilCompleted();  // Wait for the job to complete.
        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}."
        );
    }
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
dataset := client.Dataset(datasetID)
copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
copier.WriteDisposition = bigquery.WriteTruncate
job, err := copier.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String dataset = "my_dataset";
// String tableName = "my_destination_table";
// Table table = the source table, for example from bigquery.getTable(...);

TableId destinationId = TableId.of(dataset, tableName);
JobOption options = JobOption.fields(JobField.STATUS, JobField.USER_EMAIL);
Job job = table.copy(destinationId, options);
// Wait for the job to complete.
try {
  Job completedJob =
      job.waitFor(
          RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
          RetryOption.totalTimeout(Duration.ofMinutes(3)));
  if (completedJob != null && completedJob.getStatus().getError() == null) {
    // Job completed successfully.
  } else {
    // Handle error case.
  }
} catch (InterruptedException e) {
  // Handle interrupted wait
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.

// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery PHP API reference documentation.

use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId   = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});
// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# from google.cloud import bigquery
# client = bigquery.Client()

source_dataset = client.dataset("samples", project="bigquery-public-data")
source_table_ref = source_dataset.table("shakespeare")

# dataset_id = 'my_dataset'
dest_table_ref = client.dataset(dataset_id).table("destination_table")

job = client.copy_table(
    source_table_ref,
    dest_table_ref,
    # Location must match that of the source and destination tables.
    location="US",
)  # API request

job.result()  # Waits for job to complete.

assert job.state == "DONE"
dest_table = client.get_table(dest_table_ref)  # API request
assert dest_table.num_rows > 0

Copying multiple source tables

You can copy multiple source tables to a destination table by:

  • Using the command-line tool's bq cp command
  • Calling the jobs.insert method, configuring a copy job, and specifying the sourceTables property
  • Using the client libraries

All source tables must have identical schemas, and only one destination table is allowed.

Source tables must be specified as a comma-separated list. You cannot use wildcards when you copy multiple source tables.

To copy multiple source tables:

Console

Copying multiple tables is not currently supported by the GCP Console.

Classic UI

Copying multiple tables is not currently supported by the classic BigQuery web UI.

CLI

Issue the bq cp command and include multiple source tables as a comma-separated list. Optional flags can be used to control the write disposition of the destination table:

  • -a or --append_table appends the data from the source tables to an existing table in the destination dataset.
  • -f or --force overwrites an existing destination table in the destination dataset and doesn't prompt you for confirmation.
  • -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
  • --destination_kms_key is the customer-managed Cloud Key Management Service key used to encrypt the destination table.

--destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.

If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.

(Optional) Supply the --location flag and set the value to your location.

bq --location=location cp \
-a -f -n \
project_id:dataset.source_table,project_id:dataset.source_table \
project_id:dataset.destination_table

Where:

  • location is the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
  • project_id is your project ID.
  • dataset is the name of the source or destination dataset.
  • source_table is the table you're copying.
  • destination_table is the name of the table in the destination dataset.

Examples:

Enter the following command to copy mydataset.mytable and mydataset.mytable2 to mydataset2.tablecopy. All datasets are in your default project.

bq cp \
mydataset.mytable,mydataset.mytable2 \
mydataset2.tablecopy

Enter the following command to copy mydataset.mytable and mydataset.mytable2 to myotherdataset.mytable and to overwrite a destination table with the same name. The destination dataset is in myotherproject, not your default project. The -f shortcut is used to overwrite the destination table without a prompt.

bq cp -f \
mydataset.mytable,mydataset.mytable2 \
myotherproject:myotherdataset.mytable

Enter the following command to copy myproject:mydataset.mytable and myproject:mydataset.mytable2 and to return an error if the destination dataset contains a table with the same name. The destination dataset is in myotherproject. The -n shortcut is used to prevent overwriting a table with the same name.

bq cp -n \
myproject:mydataset.mytable,myproject:mydataset.mytable2 \
myotherproject:myotherdataset.mytable

Enter the following command to copy mydataset.mytable and mydataset.mytable2 and to append the data to a destination table with the same name. The source dataset is in your default project. The destination dataset is in myotherproject. The -a shortcut is used to append to the destination table.

bq cp -a \
mydataset.mytable,mydataset.mytable2 \
myotherproject:myotherdataset.mytable

API

To copy multiple tables using the API, call the jobs.insert method, configure a table copy job, and specify the sourceTables property.

Specify your region in the location property in the jobReference section of the job resource.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
dataset := client.Dataset(datasetID)

srcTableIDs := []string{"table1", "table2"}
var tableRefs []*bigquery.Table
for _, v := range srcTableIDs {
	tableRefs = append(tableRefs, dataset.Table(v))
}
copier := dataset.Table(dstTableID).CopierFrom(tableRefs...)
copier.WriteDisposition = bigquery.WriteTruncate
job, err := copier.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String datasetId = "my_dataset";
// String destinationTableId = "my_destination_table";

TableId destinationTable = TableId.of(datasetId, destinationTableId);
CopyJobConfiguration configuration =
    CopyJobConfiguration.newBuilder(
        destinationTable,
        Arrays.asList(
            TableId.of(datasetId, "table1"),
            TableId.of(datasetId, "table2")))
    .build();

// Copy the tables.
Job job = bigquery.create(JobInfo.of(configuration));
job = job.waitFor();

// Check the table
StandardTableDefinition table = bigquery.getTable(destinationTable).getDefinition();
System.out.println("State: " + job.getStatus().getState());
System.out.printf("Copied %d rows.\n", table.getNumRows());

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# from google.cloud import bigquery
# client = bigquery.Client()
# source_dataset_id = 'my_source_dataset'
# dest_dataset_id = 'my_destination_dataset'

table1_ref = client.dataset(source_dataset_id).table("table1")
table2_ref = client.dataset(source_dataset_id).table("table2")
dest_table_ref = client.dataset(dest_dataset_id).table("destination_table")

job = client.copy_table(
    [table1_ref, table2_ref],
    dest_table_ref,
    # Location must match that of the source and destination tables.
    location="US",
)  # API request
job.result()  # Waits for job to complete.

assert job.state == "DONE"
dest_table = client.get_table(dest_table_ref)  # API request
assert dest_table.num_rows > 0

Deleting tables

You can delete a table by:

  • Using the GCP Console or the classic BigQuery web UI
  • Using the command-line tool's bq rm command
  • Calling the tables.delete API method
  • Using the client libraries

Currently, you can only delete one table at a time.

When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
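
For example, a table can be created with an expiration already attached; this Python sketch uses the google-cloud-bigquery client library, with placeholder names and a one-field placeholder schema:

import datetime

import pytz
from google.cloud import bigquery

client = bigquery.Client()

table_ref = client.dataset("mydataset").table("mytable")
table = bigquery.Table(table_ref, schema=[bigquery.SchemaField("name", "STRING")])

# The table is deleted automatically five days after creation.
table.expires = datetime.datetime.now(pytz.utc) + datetime.timedelta(days=5)
table = client.create_table(table)  # API request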

Required permissions

At a minimum, to delete a table, you must be granted bigquery.tables.delete and bigquery.tables.get permissions. The following predefined Cloud IAM roles include bigquery.tables.delete and bigquery.tables.get permissions:

  • bigquery.dataOwner
  • bigquery.dataEditor
  • bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to delete tables in the dataset.

For more information on Cloud IAM roles and permissions in BigQuery, see Access control.

Deleting a table

To delete a table:

Console

  1. Select your table from the Resources pane. Below the Query editor, click Delete table.

  2. Type the name of the table in the dialog box, then click Delete to confirm.

Classic UI

  1. Click the down arrow icon next to your table name in the navigation bar and click Delete table.

  2. When prompted, click OK to confirm.

CLI

Use the bq rm command with the --table flag (or -t shortcut) to delete a table. When you use the CLI to remove a table, you must confirm the action. You can use the --force flag (or -f shortcut) to skip confirmation.

If the table is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

bq rm \
-f \
-t \
project_id:dataset.table

Where:

  • project_id is your project ID.
  • dataset is the name of the dataset that contains the table.
  • table is the name of the table you're deleting.

Examples:

Enter the following command to delete mytable from mydataset. mydataset is in your default project.

bq rm -t mydataset.mytable

Enter the following command to delete mytable from mydataset. mydataset is in myotherproject, not your default project.

bq rm -t myotherproject:mydataset.mytable

Enter the following command to delete mytable from mydataset. mydataset is in your default project. The command uses the -f shortcut to bypass confirmation.

bq rm -f -t mydataset.mytable

API

Call the tables.delete API method and specify the table to delete using the tableId parameter.
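
For illustration, a raw tables.delete call with the discovery-based google-api-python-client looks like the following sketch; the IDs are placeholders.

from googleapiclient.discovery import build

service = build("bigquery", "v2")

# A successful delete returns an empty response body.
service.tables().delete(
    projectId="your-project-id",
    datasetId="mydataset",
    tableId="mytable",
).execute()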

C#

Before trying this sample, follow the C# setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery C# API reference documentation.


using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryDeleteTable
{
    public void DeleteTable(
        string projectId = "your-project-id",
        string datasetId = "your_dataset_id",
        string tableId = "your_table_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        client.DeleteTable(datasetId, tableId);
        Console.WriteLine($"Table {tableId} deleted.");
    }
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")
table := client.Dataset(datasetID).Table(tableID)
if err := table.Delete(ctx); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String projectId = "my_project_id";
// String datasetName = "my_dataset_name";
// String tableName = "my_table_name";

TableId tableId = TableId.of(projectId, datasetName, tableName);
boolean deleted = bigquery.delete(tableId);
if (deleted) {
  // the table was deleted
} else {
  // the table was not found
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function deleteTable() {
  // Deletes "my_table" from "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";

  // Delete the table
  await bigquery
    .dataset(datasetId)
    .table(tableId)
    .delete();

  console.log(`Table ${tableId} deleted.`);
}

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery PHP API reference documentation.

use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $tableId = 'The BigQuery table ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$table = $dataset->table($tableId);
$table->delete();
printf('Deleted table %s.%s' . PHP_EOL, $datasetId, $tableId);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

from google.cloud import bigquery

# TODO(developer): Construct a BigQuery client object.
# client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to fetch.
# table_id = 'your-project.your_dataset.your_table'

# If the table does not exist, delete_table raises
# google.api_core.exceptions.NotFound unless not_found_ok is True
client.delete_table(table_id, not_found_ok=True)
print("Deleted table '{}'.".format(table_id))

Ruby

Before trying this sample, follow the Ruby setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Ruby API reference documentation.

require "google/cloud/bigquery"

def delete_table dataset_id = "my_dataset_id", table_id = "my_table_id"
  bigquery = Google::Cloud::Bigquery.new
  dataset  = bigquery.dataset dataset_id
  table    = dataset.table table_id

  table.delete

  puts "Table #{table_id} deleted."
end

Restoring deleted tables

It's possible to restore a table within 2 days of deletion. By leveraging snapshot decorator functionality, you may be able to reference a table prior to the deletion event and then copy it. Note the following:

  • You cannot reference a deleted table if you have already created a new table with the same name in the same dataset.

  • You cannot reference a deleted table if you deleted the dataset that housed the table, and you have already created a new dataset with the same name.

Restoring a deleted table

You can restore a deleted table by:

  • Using the @<time> snapshot decorator in the CLI
  • Using the client libraries

Console

You cannot undelete a table by using the GCP Console.

Classic UI

You cannot undelete a table by using the classic web UI.

CLI

To undelete a table using legacy SQL, use the @<time> snapshot decorator. First, determine a UNIX timestamp of when the table existed (in milliseconds). Then, copy the table at that timestamp to a new table. The new table must have a different name than the deleted table.

(Optional) Supply the --location flag and set the value to your location.

For example, enter the following command to copy mydataset.mytable at the time 1418864998000 into a new table mydataset.newtable.

bq cp mydataset.mytable@1418864998000 mydataset.newtable

For more information, see Table decorators in legacy SQL.

Go
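
Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.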

// To run this sample, you will need to create (or reuse) a context and
// an instance of the bigquery client.  For example:
// import "cloud.google.com/go/bigquery"
// ctx := context.Background()
// client, err := bigquery.NewClient(ctx, "your-project-id")

ds := client.Dataset(datasetID)
if _, err := ds.Table(tableID).Metadata(ctx); err != nil {
	return err
}
// Record the current time.  We'll use this as the snapshot time
// for recovering the table.
snapTime := time.Now()

// "Accidentally" delete the table.
if err := client.Dataset(datasetID).Table(tableID).Delete(ctx); err != nil {
	return err
}

// Construct the restore-from tableID using a snapshot decorator.
snapshotTableID := fmt.Sprintf("%s@%d", tableID, snapTime.UnixNano()/1e6)
// Choose a new table ID for the recovered table data.
recoverTableID := fmt.Sprintf("%s_recovered", tableID)

// Construct and run a copy job.
copier := ds.Table(recoverTableID).CopierFrom(ds.Table(snapshotTableID))
copier.WriteDisposition = bigquery.WriteTruncate
job, err := copier.Run(ctx)
if err != nil {
	return err
}
status, err := job.Wait(ctx)
if err != nil {
	return err
}
if err := status.Err(); err != nil {
	return err
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Java API reference documentation.

// String datasetId = "my_dataset";
String tableId = "oops_undelete_me";

// Record the current time.  We'll use this as the snapshot time
// for recovering the table.
long snapTime = Instant.now().getMillis();

// "Accidentally" delete the table.
bigquery.delete(TableId.of(datasetId, tableId));

// Construct the restore-from tableID using a snapshot decorator.
String snapshotTableId = String.format("%s@%d", tableId, snapTime);
// Choose a new table ID for the recovered table data.
String recoverTableId = String.format("%s_recovered", tableId);

// Construct and run a copy job.
CopyJobConfiguration configuration =
    CopyJobConfiguration.newBuilder(
        TableId.of(datasetId, recoverTableId),
        TableId.of(datasetId, snapshotTableId))
    .build();
Job job = bigquery.create(JobInfo.of(configuration));
job = job.waitFor();

// Check the table
StandardTableDefinition table = bigquery.getTable(
        TableId.of(datasetId, recoverTableId)).getDefinition();
System.out.println("State: " + job.getStatus().getState());
System.out.printf("Recovered %d rows.\n", table.getNumRows());

Python

Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

# TODO(developer): Uncomment the lines below and replace with your values.
# import time
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'my_dataset'  # Replace with your dataset ID.
# table_id = 'my_table'      # Replace with your table ID.

table_ref = client.dataset(dataset_id).table(table_id)

# TODO(developer): Choose an appropriate snapshot point as epoch
# milliseconds. For this example, we choose the current time as we're about
# to delete the table immediately afterwards.
snapshot_epoch = int(time.time() * 1000)

# "Accidentally" delete the table.
client.delete_table(table_ref)  # API request

# Construct the restore-from table ID using a snapshot decorator.
snapshot_table_id = "{}@{}".format(table_id, snapshot_epoch)
source_table_ref = client.dataset(dataset_id).table(snapshot_table_id)

# Choose a new table ID for the recovered table data.
recovered_table_id = "{}_recovered".format(table_id)
dest_table_ref = client.dataset(dataset_id).table(recovered_table_id)

# Construct and run a copy job.
job = client.copy_table(
    source_table_ref,
    dest_table_ref,
    # Location must match that of the source and destination tables.
    location="US",
)  # API request

job.result()  # Waits for job to complete.

print(
    "Copied data from deleted table {} to {}".format(table_id, recovered_table_id)
)
