
Manage tables

This document describes how to manage tables in BigQuery. You can manage your BigQuery tables in the following ways:

  • Update table properties, such as the description or expiration time.
  • Rename a table.
  • Copy a table.
  • Delete a table.

For more information about creating and using tables, including getting table information, listing tables, and controlling access to table data, see Creating and using tables.

Before you begin

Grant Identity and Access Management (IAM) roles that give users the necessary permissions to perform each task in this document. The permissions required to perform a task (if any) are listed in the "Required permissions" section of the task.

Update table properties

You can update the following elements of a table:

  • Description
  • Expiration time
  • Schema definition

Required permissions

To update a table, you need the following IAM permissions:

  • bigquery.tables.update
  • bigquery.tables.get

Each of the following predefined IAM roles includes the permissions that you need in order to update a table:

  • roles/bigquery.dataEditor
  • roles/bigquery.dataOwner
  • roles/bigquery.admin

Additionally, if you have the bigquery.datasets.create permission, you can update the properties of tables in the datasets that you create.

For more information about IAM roles and permissions in BigQuery, see Predefined roles and permissions.

Update a table's description

You can update a table's description in the following ways:

  • Using the Google Cloud console.
  • Using a data definition language (DDL) ALTER TABLE statement.
  • Using the bq command-line tool's bq update command.
  • Calling the tables.patch API method.
  • Using the client libraries.

To update a table's description:

Console

You cannot add a description when you create a table using the Google Cloud console. After the table is created, you can add a description on the Details page.

  1. In the Explorer panel, expand your project and dataset, then select the table.

  2. In the details panel, click Details.

  3. In the Description section, click the pencil icon to edit the description.

  4. Enter a description in the box, and click Update to save.

SQL

Use the ALTER TABLE SET OPTIONS statement. The following example updates the description of a table named mytable:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    ALTER TABLE mydataset.mytable
      SET OPTIONS (
        description = 'Description of mytable');
    

  3. Click Run.

For more information about how to run queries, see Running interactive queries.

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Issue the bq update command with the --description flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

    bq update \
    --description "description" \
    project_id:dataset.table
    

    Replace the following:

    • description: the text describing the table in quotes
    • project_id: your project ID
    • dataset: the name of the dataset that contains the table you're updating
    • table: the name of the table you're updating

    Examples:

    To change the description of the mytable table in the mydataset dataset to "Description of mytable", enter the following command. The mydataset dataset is in your default project.

    bq update --description "Description of mytable" mydataset.mytable
    

    To change the description of the mytable table in the mydataset dataset to "Description of mytable", enter the following command. The mydataset dataset is in the myotherproject project, not your default project.

    bq update \
    --description "Description of mytable" \
    myotherproject:mydataset.mytable
    

API

Call the tables.patch method and use the description property in the table resource to update the table's description. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
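
For example, a minimal tables.patch request that changes only the description might look like the following sketch; the project, dataset, and table IDs in the URL path are placeholders:

PATCH https://bigquery.googleapis.com/bigquery/v2/projects/project_id/datasets/dataset/tables/table

{
  "description": "Updated description."
}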

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// updateTableDescription demonstrates fetching a table's metadata and updating its description.
func updateTableDescription(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		Description: "Updated description.",
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;

public class UpdateTableDescription {

  public static void runUpdateTableDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    String newDescription = "this is the new table description";
    updateTableDescription(datasetName, tableName, newDescription);
  }

  public static void updateTableDescription(
      String datasetName, String tableName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setDescription(newDescription).build());
      System.out.println("Table description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Table description was not updated \n" + e.toString());
    }
  }
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

Configure the Table.description property and call Client.update_table() to send the update to the API.
# from google.cloud import bigquery
# client = bigquery.Client()
# project = client.project
# dataset_ref = bigquery.DatasetReference(project, dataset_id)
# table_ref = dataset_ref.table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == "Original description."
table.description = "Updated description."

table = client.update_table(table, ["description"])  # API request

assert table.description == "Updated description."

Update a table's expiration time

You can set a default table expiration time at the dataset level, or you can set a table's expiration time when the table is created. A table's expiration time is often referred to as "time to live" or TTL.

When a table expires, it is deleted along with all of the data it contains. If necessary, you can undelete the expired table within the time travel window specified for the dataset. For more information, see Restore deleted tables.

If you set the expiration when the table is created, the dataset's default table expiration is ignored. If you do not set a default table expiration at the dataset level, and you do not set a table expiration when the table is created, the table never expires and you must delete the table manually.

At any point after the table is created, you can update the table's expiration time in the following ways:

  • Using the Google Cloud console.
  • Using a data definition language (DDL) ALTER TABLE statement.
  • Using the bq command-line tool's bq update command.
  • Calling the tables.patch API method.
  • Using the client libraries.

To update a table's expiration time:

Console

You cannot add an expiration time when you create a table using the Google Cloud console. After a table is created, you can add or update a table expiration on the Table Details page.

  1. In the Explorer panel, expand your project and dataset, then select the table.

  2. In the details panel, click Details.

  3. Click the pencil icon next to Table info.

  4. For Table expiration, select Specify date. Then select the expiration date using the calendar widget.

  5. Click Update to save. The updated expiration time appears in the Table info section.

SQL

Use the ALTER TABLE SET OPTIONS statement. The following example updates the expiration time of a table named mytable:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    ALTER TABLE mydataset.mytable
      SET OPTIONS (
        -- Sets table expiration to timestamp 2025-02-03 12:34:56
        expiration_timestamp = TIMESTAMP '2025-02-03 12:34:56');
    

  3. Click Run.

For more information about how to run queries, see Running interactive queries.
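
To remove an existing expiration so that the table never expires, set the option to NULL. A minimal sketch, using the same table as above:

ALTER TABLE mydataset.mytable
  SET OPTIONS (
    -- Removes the expiration; the table must then be deleted manually.
    expiration_timestamp = NULL);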

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Issue the bq update command with the --expiration flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

    bq update \
    --expiration integer \
    project_id:dataset.table
    

    Replace the following:

    • integer: the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value. If you specify 0, the table expiration is removed, and the table never expires. Tables with no expiration must be manually deleted.
    • project_id: your project ID.
    • dataset: the name of the dataset that contains the table you're updating.
    • table: the name of the table you're updating.

    Examples:

    To update the expiration time of the mytable table in the mydataset dataset to 5 days (432000 seconds), enter the following command. The mydataset dataset is in your default project.

    bq update --expiration 432000 mydataset.mytable
    

    To update the expiration time of the mytable table in the mydataset dataset to 5 days (432000 seconds), enter the following command. The mydataset dataset is in the myotherproject project, not your default project.

    bq update --expiration 432000 myotherproject:mydataset.mytable
    

API

Call the tables.patch method and use the expirationTime property in the table resource to update the table expiration in milliseconds. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
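
For example, a hypothetical tables.patch request body that matches the earlier SQL example (2025-02-03 12:34:56 UTC expressed as epoch milliseconds) might look like the following sketch:

PATCH https://bigquery.googleapis.com/bigquery/v2/projects/project_id/datasets/dataset/tables/table

{
  "expirationTime": "1738586096000"
}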

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// updateTableExpiration demonstrates setting a table's expiration to a specific point in time
// in the future, at which time the table is deleted.
func updateTableExpiration(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		ExpirationTime: time.Now().Add(time.Duration(5*24) * time.Hour), // table expiration in 5 days.
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;
import java.util.concurrent.TimeUnit;

public class UpdateTableExpiration {

  public static void runUpdateTableExpiration() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    // Update table expiration to one day.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(1, TimeUnit.DAYS);
    updateTableExpiration(datasetName, tableName, newExpiration);
  }

  public static void updateTableExpiration(
      String datasetName, String tableName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setExpirationTime(newExpiration).build());

      System.out.println("Table expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Table expiration was not updated \n" + e.toString());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateTableExpiration() {
  // Updates a table's expiration.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset', // Existing dataset
  // const tableId = 'my_table', // Existing table
  // const expirationTime = Date.now() + 1000 * 60 * 60 * 24 * 5 // 5 days from current time in ms

  // Retrieve the current table metadata
  const table = bigquery.dataset(datasetId).table(tableId);
  const [metadata] = await table.getMetadata();

  // Set new table expiration to 5 days from current time
  metadata.expirationTime = expirationTime.toString();
  const [apiResponse] = await table.setMetadata(metadata);

  const newExpirationTime = apiResponse.expirationTime;
  console.log(`${tableId} expiration: ${newExpirationTime}`);
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

Configure the Table.expires property and call Client.update_table() to send the update to the API.
import datetime

# from google.cloud import bigquery
# client = bigquery.Client()
# project = client.project
# dataset_ref = bigquery.DatasetReference(project, dataset_id)
# table_ref = dataset_ref.table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.expires is None

# set table to expire 5 days from now
expiration = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
    days=5
)
table.expires = expiration
table = client.update_table(table, ["expires"])  # API request

# expiration is stored in milliseconds
margin = datetime.timedelta(microseconds=1000)
assert expiration - margin <= table.expires <= expiration + margin

To update the default dataset partition expiration time:

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.concurrent.TimeUnit;

// Sample to update partition expiration on a dataset.
public class UpdateDatasetPartitionExpiration {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    // Set the default partition expiration (applies to new tables only) in
    // milliseconds. This example sets the default expiration to 90 days.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(90, TimeUnit.DAYS);
    updateDatasetPartitionExpiration(datasetName, newExpiration);
  }

  public static void updateDatasetPartitionExpiration(String datasetName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDefaultPartitionExpirationMs(newExpiration).build());
      System.out.println(
          "Dataset default partition expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Dataset partition expiration was not updated \n" + e.toString());
    }
  }
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.


from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.

# Set the default partition expiration (applies to new tables only) in
# milliseconds. This example sets the default expiration to 90 days.
dataset.default_partition_expiration_ms = 90 * 24 * 60 * 60 * 1000

dataset = client.update_dataset(
    dataset, ["default_partition_expiration_ms"]
)  # Make an API request.

print(
    "Updated dataset {}.{} with new default partition expiration {}".format(
        dataset.project, dataset.dataset_id, dataset.default_partition_expiration_ms
    )
)

Update a table's schema definition

For more information about updating a table's schema definition, see Modifying table schemas.

Rename a table

You can rename a table after it has been created by using the ALTER TABLE RENAME TO statement. The following example renames mytable to mynewtable:

ALTER TABLE mydataset.mytable
RENAME TO mynewtable;

Limitations on renaming tables

  • If you want to rename a table that has data streaming into it, you must stop the streaming and wait for BigQuery to indicate that streaming is not in use.
  • A table can usually be renamed within 72 hours of the last streaming operation, but it might take longer.
  • Table ACLs and row access policies are not always preserved after a table is renamed.
  • You can't concurrently rename a table and run a DML statement on that table.
  • Renaming a table removes all Data Catalog tags on the table.
  • You can't rename external tables.

Copy a table

This section describes how to create a full copy of a table. For information about other types of table copies, see table clones and table snapshots.

You can copy a table in the following ways:

  • Use the Google Cloud console.
  • Use the bq cp command.
  • Use a data definition language (DDL) CREATE TABLE COPY statement.
  • Call the jobs.insert API method and configure a copy job.
  • Use the client libraries.

Limitations on copying tables

Table copy jobs are subject to the following limitations:

  • When you copy a table, the name of the destination table must adhere to the same naming conventions as when you create a table.
  • Table copies are subject to BigQuery limits on copy jobs.
  • The Google Cloud console supports copying only one table at a time. You cannot overwrite an existing table in the destination dataset. The table must have a unique name in the destination dataset.
  • When copying tables, the destination dataset must reside in the same location as the dataset containing the table being copied. For example, you cannot copy a table from an EU-based dataset and write it to a US-based dataset.
  • Copying multiple source tables into a destination table is not supported by the Google Cloud console.
  • When copying multiple source tables to a destination table using the API, bq command-line tool, or the client libraries, all source tables must have identical schemas, including any partitioning or clustering.
  • The time that BigQuery takes to copy tables might vary significantly across different runs because the underlying storage is managed dynamically.
  • You cannot copy and append a source table to a destination table that has more columns than the source table when the additional columns have default values. Instead, you can run INSERT destination_table SELECT * FROM source_table to copy the data, as sketched after this list.
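
For example, a sketch of that workaround with hypothetical column names col1 and col2 (the columns shared by both tables); the destination table's additional columns receive their default values:

INSERT INTO mydataset.destination_table (col1, col2)
SELECT col1, col2 FROM mydataset.source_table;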

Required permissions

To perform the tasks in this document, you need the following permissions.

Permissions to copy tables and partitions

To copy tables and partitions, you need IAM permissions on the source and destination datasets.

  • On the source dataset, you need the following:

    • bigquery.tables.get
    • bigquery.tables.getData
  • On the destination dataset, you need the following:

    • bigquery.tables.create: This permission lets you create a copy of the table or partition in the destination dataset.

Each of the following predefined IAM roles includes the permissions that you need in order to copy tables and partitions:

  • roles/bigquery.dataEditor
  • roles/bigquery.dataOwner
  • roles/bigquery.admin

Permission to run a copy job

To run a copy job, you need the bigquery.jobs.create IAM permission.

Each of the following predefined IAM roles includes the permissions that you need to run a copy job:

  • roles/bigquery.user
  • roles/bigquery.jobUser
  • roles/bigquery.admin

Additionally, if you have the bigquery.datasets.create permission, you can copy tables and partitions in the datasets that you create. You also need access to the destination dataset.

For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions.

Copy a single source table

You can copy a single table in the following ways:

  • Using the Google Cloud console.
  • Using the bq command-line tool's bq cp command.
  • Using a data definition language (DDL) CREATE TABLE COPY statement.
  • Calling the jobs.insert API method, configuring a copy job, and specifying the sourceTable property.
  • Using the client libraries.

The Google Cloud console and the CREATE TABLE COPY statement support only one source table and one destination table in a copy job. To copy multiple source tables to a destination table, you must use the bq command-line tool or the API.

To copy a single source table:

Console

  1. In the Explorer panel, expand your project and dataset, then select the table.

  2. In the details panel, click Copy table.

  3. In the Copy table dialog, under Destination:

    • For Project name, choose the project that will store the copied table.
    • For Dataset name, select the dataset where you want to store the copied table. The source and destination datasets must be in the same location.
    • For Table name, enter a name for the new table. The name must be unique in the destination dataset. You cannot overwrite an existing table in the destination dataset using the Google Cloud console. For more information about table name requirements, see Table naming.
  4. Click Copy to start the copy job.

SQL

Use the CREATE TABLE COPY statement to copy a table named table1 to a new table named table1copy:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    CREATE TABLE myproject.mydataset.table1copy
    COPY myproject.mydataset.table1;
    

  3. Click Run.

For more information about how to run queries, see Running interactive queries.

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Issue the bq cp command. Optional flags can be used to control the write disposition of the destination table:

    • -a or --append_table appends the data from the source table to an existing table in the destination dataset.
    • -f or --force overwrites an existing table in the destination dataset and doesn't prompt you for confirmation.
    • -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
    • --destination_kms_key is the customer-managed Cloud KMS key used to encrypt the destination table.

    --destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.

    If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.

    (Optional) Supply the --location flag and set the value to your location.

    bq --location=location cp \
    -a -f -n \
    project_id:dataset.source_table \
    project_id:dataset.destination_table
    

    Replace the following:

    • location: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
    • project_id: your project ID.
    • dataset: the name of the source or destination dataset.
    • source_table: the table you're copying.
    • destination_table: the name of the table in the destination dataset.

    Examples:

    To copy the mydataset.mytable table to the mydataset2.mytable2 table, enter the following command. Both datasets are in your default project.

    bq cp mydataset.mytable mydataset2.mytable2
    

    To copy the mydataset.mytable table and to overwrite a destination table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -f shortcut is used to overwrite the destination table without a prompt.

    bq cp -f \
    mydataset.mytable \
    myotherproject:myotherdataset.mytable
    

    To copy the mydataset.mytable table and to return an error if the destination dataset contains a table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -n shortcut is used to prevent overwriting a table with the same name.

    bq cp -n \
    mydataset.mytable \
    myotherproject:myotherdataset.mytable
    

    To copy the mydataset.mytable table and to append the data to a destination table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -a shortcut is used to append to the destination table.

    bq cp -a mydataset.mytable myotherproject:myotherdataset.mytable
    

API

You can copy an existing table through the API by calling the jobs.insert method and configuring a copy job. Specify your location in the location property in the jobReference section of the job resource.

You must specify the following values in your job configuration:

"copy": {
      "sourceTable": {       // Required
        "projectId": string, // Required
        "datasetId": string, // Required
        "tableId": string    // Required
      },
      "destinationTable": {  // Required
        "projectId": string, // Required
        "datasetId": string, // Required
        "tableId": string    // Required
      },
      "createDisposition": string,  // Optional
      "writeDisposition": string,   // Optional
    },

Where sourceTable provides information about the table to be copied, destinationTable provides information about the new table, createDisposition specifies whether to create the table if it doesn't exist, and writeDisposition specifies whether to overwrite or append to an existing table.
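
As a sketch of how these two dispositions surface in the Python client library (the table IDs are placeholders), CopyJobConfig accepts the same values:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Replace with your fully qualified table IDs.
source_table_id = "your-project.your_dataset.source_table"
destination_table_id = "your-project.your_dataset.destination_table"

# create_disposition and write_disposition map to createDisposition and
# writeDisposition in the copy job configuration.
job_config = bigquery.CopyJobConfig(
    create_disposition="CREATE_IF_NEEDED",
    write_disposition="WRITE_TRUNCATE",
)

job = client.copy_table(source_table_id, destination_table_id, job_config=job_config)
job.result()  # Wait for the copy job to complete.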

C#

Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation.


using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(
            sourceTableRef, destinationTableRef)
            .PollUntilCompleted() // Wait for the job to complete.
            .ThrowOnAnyError();

        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}."
        );
    }
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyTable demonstrates copying a table from a source to a destination, and
// allowing the copy to overwrite existing data by using truncation.
func copyTable(projectID, datasetID, srcID, dstID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// srcID := "sourcetable"
	// dstID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	dataset := client.Dataset(datasetID)
	copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class CopyTable {

  public static void runCopyTable() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DESTINATION_DATASET_NAME";
    String destinationTableId = "MY_DESTINATION_TABLE_NAME";
    String sourceDatasetName = "MY_SOURCE_DATASET_NAME";
    String sourceTableId = "MY_SOURCE_TABLE_NAME";

    copyTable(sourceDatasetName, sourceTableId, destinationDatasetName, destinationTableId);
  }

  public static void copyTable(
      String sourceDatasetName,
      String sourceTableId,
      String destinationDatasetName,
      String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId sourceTable = TableId.of(sourceDatasetName, sourceTableId);
      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(destinationTable, sourceTable).build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy table due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation.

use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId   = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});
// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.


from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set source_table_id to the ID of the original table.
# source_table_id = "your-project.source_dataset.source_table"

# TODO(developer): Set destination_table_id to the ID of the destination table.
# destination_table_id = "your-project.destination_dataset.destination_table"

job = client.copy_table(source_table_id, destination_table_id)
job.result()  # Wait for the job to complete.

print("A copy of the table created.")

Copy multiple source tables

You can copy multiple source tables to a destination table in the following ways:

  • Using the bq command-line tool's bq cp command.
  • Calling the jobs.insert method, configuring a copy job, and specifying the sourceTables property.
  • Using the client libraries.

All source tables must have identical schemas, and only one destination table is allowed.

Source tables must be specified as a comma-separated list. You cannot use wildcards when you copy multiple source tables.

To copy multiple source tables, select one of the following choices:

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Issue the bq cp command and include multiple source tables as a comma-separated list. Optional flags can be used to control the write disposition of the destination table:

    • -a or --append_table appends the data from the source tables to an existing table in the destination dataset.
    • -f or --force overwrites an existing destination table in the destination dataset and doesn't prompt you for confirmation.
    • -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
    • --destination_kms_key is the customer-managed Cloud Key Management Service key used to encrypt the destination table.

    --destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.

    If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.

    (Optional) Supply the --location flag and set the value to your location.

    bq --location=location cp \
    -a -f -n \
    project_id:dataset.source_table,project_id:dataset.source_table \
    project_id:dataset.destination_table
    

    Replace the following:

    • location: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
    • project_id: your project ID.
    • dataset: the name of the source or destination dataset.
    • source_table: the table that you're copying.
    • destination_table: the name of the table in the destination dataset.

    Examples:

    To copy the mydataset.mytable table and the mydataset.mytable2 table to the mydataset2.tablecopy table, enter the following command. All datasets are in your default project.

    bq cp \
    mydataset.mytable,mydataset.mytable2 \
    mydataset2.tablecopy
    

    To copy the mydataset.mytable table and the mydataset.mytable2 table to myotherdataset.mytable table and to overwrite a destination table with the same name, enter the following command. The destination dataset is in the myotherproject project, not your default project. The -f shortcut is used to overwrite the destination table without a prompt.

    bq cp -f \
    mydataset.mytable,mydataset.mytable2 \
    myotherproject:myotherdataset.mytable
    

    To copy the myproject:mydataset.mytable table and the myproject:mydataset.mytable2 table and to return an error if the destination dataset contains a table with the same name, enter the following command. The destination dataset is in the myotherproject project. The -n shortcut is used to prevent overwriting a table with the same name.

    bq cp -n \
    myproject:mydataset.mytable,myproject:mydataset.mytable2 \
    myotherproject:myotherdataset.mytable
    

    To copy the mydataset.mytable table and the mydataset.mytable2 table and to append the data to a destination table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -a shortcut is used to append to the destination table.

    bq cp -a \
    mydataset.mytable,mydataset.mytable2 \
    myotherproject:myotherdataset.mytable
    

API

To copy multiple tables using the API, call the jobs.insert method, configure a table copy job, and specify the sourceTables property.

Specify your location in the location property in the jobReference section of the job resource.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyMultiTable demonstrates using a copy job to copy multiple source tables into a single destination table.
func copyMultiTable(projectID, srcDatasetID string, srcTableIDs []string, dstDatasetID, dstTableID string) error {
	// projectID := "my-project-id"
	// srcDatasetID := "sourcedataset"
	// srcTableIDs := []string{"table1","table2"}
	// dstDatasetID := "destinationdataset"
	// dstTableID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	srcDataset := client.Dataset(srcDatasetID)
	dstDataset := client.Dataset(dstDatasetID)
	var tableRefs []*bigquery.Table
	for _, v := range srcTableIDs {
		tableRefs = append(tableRefs, srcDataset.Table(v))
	}
	copier := dstDataset.Table(dstTableID).CopierFrom(tableRefs...)
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class CopyMultipleTables {

  public static void runCopyMultipleTables() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DATASET_NAME";
    String destinationTableId = "MY_TABLE_NAME";
    copyMultipleTables(destinationDatasetName, destinationTableId);
  }

  public static void copyMultipleTables(String destinationDatasetName, String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(
                  destinationTable,
                  Arrays.asList(
                      TableId.of(destinationDatasetName, "table1"),
                      TableId.of(destinationDatasetName, "table2")))
              .build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy tables due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTableMultipleSource() {
  // Copy multiple source tables to a given destination.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const sourceTable = 'my_table';
  // const destinationTable = 'testing';

  // Create a dataset reference
  const dataset = bigquery.dataset(datasetId);

  const metadata = {
    createDisposition: 'CREATE_NEVER',
    writeDisposition: 'WRITE_TRUNCATE',
  };

  // Create table references
  const table = dataset.table(sourceTable);
  const yourTable = dataset.table(destinationTable);

  // Copy table
  const [apiResponse] = await table.copy(yourTable, metadata);
  console.log(apiResponse.configuration.copy);
}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.


from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dest_table_id to the ID of the destination table.
# dest_table_id = "your-project.your_dataset.your_table_name"

# TODO(developer): Set table_ids to the list of the IDs of the original tables.
# table_ids = ["your-project.your_dataset.your_table_name", ...]

job = client.copy_table(table_ids, dest_table_id)  # Make an API request.
job.result()  # Wait for the job to complete.

print("The tables {} have been appended to {}".format(table_ids, dest_table_id))

View current quota usage

You can view your current usage of query, load, extract, or copy jobs by running an INFORMATION_SCHEMA query to view metadata about the jobs run over a specified time period. You can compare your current usage against the quota limit to determine your quota usage for a particular type of job. The following example query uses the INFORMATION_SCHEMA.JOBS_BY_PROJECT view to list the number of query, load, extract, and copy jobs by project:

SELECT
  SUM(CASE WHEN job_type = "QUERY" THEN 1 ELSE 0 END) AS QRY_CNT,
  SUM(CASE WHEN job_type = "LOAD" THEN 1 ELSE 0 END) AS LOAD_CNT,
  SUM(CASE WHEN job_type = "EXTRACT" THEN 1 ELSE 0 END) AS EXT_CNT,
  SUM(CASE WHEN job_type = "COPY" THEN 1 ELSE 0 END) AS CPY_CNT
FROM `region-eu`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE DATE(creation_time) = CURRENT_DATE()

To view the quota limits for copy jobs, see Quotas and limits - Copy jobs.

Delete tables

You can delete a table in the following ways:

  • Using the Google Cloud console.
  • Using a data definition language (DDL) DROP TABLE statement.
  • Using the bq command-line tool's bq rm command.
  • Calling the tables.delete API method.
  • Using the client libraries.

To delete all of the tables in the dataset, delete the dataset.

When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
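
For example, a sketch of declaring an expiration at creation time with DDL (hypothetical table and column; the timestamp is arbitrary):

CREATE TABLE mydataset.mytable (
  id INT64
)
OPTIONS (
  -- The table and all of its data are deleted at this time.
  expiration_timestamp = TIMESTAMP '2025-02-03 12:34:56');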

Required permissions

To delete a table, you need the following IAM permissions:

  • bigquery.tables.delete
  • bigquery.tables.get

Each of the following predefined IAM roles includes the permissions that you need in order to delete a table:

  • roles/bigquery.dataEditor
  • roles/bigquery.dataOwner
  • roles/bigquery.admin

Additionally, if you have the bigquery.datasets.create permission, you can delete tables of the datasets that you create.

For more information about IAM roles and permissions in BigQuery, see Predefined roles and permissions.

Delete a table

To delete a table:

Console

  1. In the Explorer panel, expand your project and dataset, then select the table.

  2. In the details panel, click Delete table.

  3. Type "delete" in the dialog, then click Delete to confirm.

SQL

Use the DROP TABLE statement. The following example deletes a table named mytable:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    DROP TABLE mydataset.mytable;
    

  3. Click Run.

For more information about how to run queries, see Running interactive queries.

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Use the bq rm command with the --table flag (or -t shortcut) to delete a table. When you use the bq command-line tool to remove a table, you must confirm the action. You can use the --force flag (or -f shortcut) to skip confirmation.

    If the table is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

    bq rm \
    -f \
    -t \
    project_id:dataset.table
    

    Replace the following:

    • project_id: your project ID
    • dataset: the name of the dataset that contains the table
    • table: the name of the table that you're deleting

    Examples:

    To delete the mytable table from the mydataset dataset, enter the following command. The mydataset dataset is in your default project.

    bq rm -t mydataset.mytable
    

    To delete the mytable table from the mydataset dataset, enter the following command. The mydataset dataset is in the myotherproject project, not your default project.

    bq rm -t myotherproject:mydataset.mytable
    

    To delete the mytable table from the mydataset dataset, enter the following command. The mydataset dataset is in your default project. The command uses the -f shortcut to bypass confirmation.

    bq rm -f -t mydataset.mytable
    

API

Call the tables.delete API method and specify the table to delete using the tableId parameter.
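
For example, a hypothetical request for the mytable table in the mydataset dataset might look like the following sketch; a successful call returns an empty response body:

DELETE https://bigquery.googleapis.com/bigquery/v2/projects/project_id/datasets/mydataset/tables/mytable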

C#

Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation.


using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryDeleteTable
{
    public void DeleteTable(
        string projectId = "your-project-id",
        string datasetId = "your_dataset_id",
        string tableId = "your_table_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        client.DeleteTable(datasetId, tableId);
        Console.WriteLine($"Table {tableId} deleted.");
    }
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
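
The following is a minimal sketch of deleting a table with the Go client, following the same pattern as the earlier Go samples in this document:

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// deleteTable demonstrates deleting a BigQuery table.
func deleteTable(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	if err := client.Dataset(datasetID).Table(tableID).Delete(ctx); err != nil {
		return err
	}
	return nil
}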