Restore table snapshots

This document describes how to restore a table from a table snapshot by using the Google Cloud Console, a CREATE TABLE CLONE query, a bq cp --restore command, or the jobs.insert API. It is intended for users who are familiar with table snapshots.

Permissions and roles

This section describes the Identity and Access Management (IAM) permissions that you need to restore a table snapshot, and the predefined IAM roles that grant those permissions.

Permissions

To restore a table snapshot, you need the following permissions:

On the table snapshot that you want to restore, all of the following:

  bigquery.tables.get
  bigquery.tables.getData
  bigquery.tables.restoreSnapshot

On the dataset that contains the destination table:

  bigquery.tables.create

Roles

The predefined BigQuery roles that provide the required permissions are as follows:

On the table snapshot that you want to restore, any of the following:

  roles/bigquery.dataEditor
  roles/bigquery.dataOwner
  roles/bigquery.admin

On the dataset that contains the destination table, any of the following:

  roles/bigquery.dataEditor
  roles/bigquery.dataOwner
  roles/bigquery.admin

Restore a table snapshot

To restore a snapshot, specify the table snapshot that you want to restore from and the destination table. The destination table can be a new table, or you can overwrite an existing table with the table snapshot.

Restore to a new table

For example, restore the table snapshot library_backup.books into the new table library.books_new as follows:

Console

  1. In the Cloud Console, go to the BigQuery page.

    Go to BigQuery

  2. In the Explorer pane, expand the project and dataset nodes of the table snapshot that you want to restore from, and then select the table snapshot; in this case, library_backup.books.

  3. In the table snapshot pane that appears, click Restore.

    Restore table from snapshot

  4. In the Restore snapshot pane that appears, enter the Project, Dataset ID, and Table name for the destination table.

  5. To overwrite an existing table with the data from the table snapshot, select Overwrite table if it exists. If you don't select this option, and the destination table already exists, then you get an error.

  6. Click Restore.

SQL

Run the following query:

CREATE TABLE library.books_new
  CLONE library_backup.books

bq

In Cloud Shell, enter the following command:

Go to Cloud Shell

bq cp --restore --no_clobber library_backup.books library.books_new

The --no_clobber flag instructs the command to fail if the destination table already exists.

API

Call the jobs.insert method with the projectId parameter set to myProject and the following request body:
{
  "configuration": {
    "copy": {
      "sourceTables": [
        {
          "projectId": "myProject",
          "datasetId": "library_backup",
          "tableId": "books"
        }
      ],
      "destinationTable": {
        "projectId": "myProject",
        "datasetId": "library",
        "tableId": "books_new"
      },
      "operationType": "RESTORE",
      "writeDisposition": "WRITE_EMPTY"
    }
  }
}
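If you call jobs.insert from a script, you can assemble this request body programmatically before sending it with your preferred HTTP or client library. The following is a minimal Python sketch; the helper function name and the use of plain dictionaries are illustrative, not part of the BigQuery API:

```python
import json

def restore_request_body(project, src_dataset, src_table,
                         dst_dataset, dst_table, overwrite=False):
    """Build a jobs.insert request body that restores a table snapshot.

    With overwrite=False, writeDisposition is WRITE_EMPTY, so the job
    fails if the destination table exists; with overwrite=True it is
    WRITE_TRUNCATE, which replaces an existing destination table.
    """
    return {
        "configuration": {
            "copy": {
                "sourceTables": [{
                    "projectId": project,
                    "datasetId": src_dataset,
                    "tableId": src_table,
                }],
                "destinationTable": {
                    "projectId": project,
                    "datasetId": dst_dataset,
                    "tableId": dst_table,
                },
                "operationType": "RESTORE",
                "writeDisposition":
                    "WRITE_TRUNCATE" if overwrite else "WRITE_EMPTY",
            }
        }
    }

# Build the request body for the example on this page.
body = restore_request_body("myProject", "library_backup", "books",
                            "library", "books_new")
print(json.dumps(body, indent=2))
```

The same helper covers the overwrite case later on this page: passing overwrite=True produces the WRITE_TRUNCATE request body.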

If an expiration is not specified, then the destination table expires after the default table expiration time for the dataset that contains the destination table; in this case, the library dataset.
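To set an explicit expiration instead, you can include an OPTIONS clause in the CREATE TABLE CLONE statement. The following sketch keeps the restored table for seven days; the interval is illustrative:

```sql
CREATE TABLE library.books_new
  CLONE library_backup.books
  OPTIONS (
    expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  )
```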

Overwrite an existing table

Restore the table snapshot library_backup.books by overwriting the existing table library.books as follows:

Console

  1. In the Cloud Console, go to the BigQuery page.

    Go to BigQuery

  2. In the Explorer pane, expand the project and dataset nodes of the table snapshot that you want to restore from, and then select the table snapshot; in this case, library_backup.books.

  3. In the table snapshot pane that appears, click Restore.

    Restore table from snapshot

  4. In the Restore snapshot pane that appears, enter the Project, Dataset ID, and Table name for the table you want to overwrite.

  5. Click Restore.

SQL

Run the following query:

CREATE OR REPLACE TABLE
  library.books
  CLONE library_backup.books

bq

In Cloud Shell, enter the following command:

Go to Cloud Shell

bq cp --restore --force library_backup.books library.books

The --force flag instructs the command to overwrite the destination table, if it already exists, without prompting.

API

Call the jobs.insert method with the projectId parameter set to myProject and the following request body:
{
  "configuration": {
    "copy": {
      "sourceTables": [
        {
          "projectId": "myProject",
          "datasetId": "library_backup",
          "tableId": "books"
        }
      ],
      "destinationTable": {
        "projectId": "myProject",
        "datasetId": "library",
        "tableId": "books"
      },
      "operationType": "RESTORE",
      "writeDisposition": "WRITE_TRUNCATE"
    }
  }
}

If an expiration is not specified, then the destination table expires after the default table expiration time for the dataset that contains the destination table; in this case, the library dataset.

What's next