This page provides an overview of Cloud Bigtable backups.

Bigtable backups let you save a copy of a table's schema and data, then restore from the backup to a new table at a later time. Before you read this page, you should be familiar with the Overview of Bigtable and Managing tables.

What backups are for

Backups can help you recover from application-level data corruption or from operator errors such as accidentally deleting a table.

What backups are not for

Backups are not intended for protection against regional or zonal failures. Use replication if you need the ability to fail over to different regions or zones.

Backups are not readable, so they are not useful for offline analytics.

Key features

  • Fully integrated: Backups are handled entirely by the Bigtable service, with no need to import or export.
  • Cost savings: Using Bigtable backups lets you avoid the costs associated with exporting, storing, and importing data using other services.
  • Automatic expiration: Each backup has a user-defined expiration date that can be up to 30 days after the backup is created.
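The expiration rule above is easy to enforce in client code. The following is a minimal sketch (the function name and the 30-day constant reflect the limit described in this page, not a library API):

```python
from datetime import datetime, timedelta, timezone

# Maximum retention described above: up to 30 days after backup creation.
MAX_RETENTION = timedelta(days=30)

def backup_expire_time(created_at: datetime, retention: timedelta) -> datetime:
    """Return an expiration timestamp, rejecting values past the 30-day maximum."""
    if retention > MAX_RETENTION:
        raise ValueError(f"retention {retention} exceeds the 30-day maximum")
    return created_at + retention

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(backup_expire_time(created, timedelta(days=7)).isoformat())
# → 2024-01-08T00:00:00+00:00
```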

Working with backups

See Managing backups for step-by-step instructions on backing up and restoring a table, as well as operations such as updating and deleting backups.

To work with Bigtable backups, use a Bigtable client library that makes backup calls to the API. You can also access the API directly, but we strongly recommend doing so only if you cannot use a client library.
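As an illustration, the following sketch creates and restores a backup with the Python client library (`google-cloud-bigtable`). The project, instance, cluster, table, and backup IDs are placeholders, and you should verify the method signatures against the client library version you use:

```python
from datetime import datetime, timedelta, timezone

def backup_resource_name(project: str, instance: str, cluster: str, backup_id: str) -> str:
    """Build the fully qualified backup resource name used by the Admin API."""
    return f"projects/{project}/instances/{instance}/clusters/{cluster}/backups/{backup_id}"

def create_and_restore_backup(project_id: str, instance_id: str, cluster_id: str,
                              table_id: str, backup_id: str) -> None:
    # Imported here so the pure helper above stays usable without the library.
    from google.cloud import bigtable  # assumption: google-cloud-bigtable is installed

    client = bigtable.Client(project=project_id, admin=True)
    table = client.instance(instance_id).table(table_id)

    # Backups are cluster-level resources, so a cluster must be chosen explicitly.
    backup = table.backup(
        backup_id,
        cluster_id=cluster_id,
        expire_time=datetime.now(timezone.utc) + timedelta(days=7),
    )
    backup.create().result()          # wait for the long-running create operation
    backup.restore("restored-table")  # restore into a new table in the same instance
```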

How backups work

Storage

A table backup is a cluster-level resource. Even if a table is in an instance that has multiple clusters (meaning the instance uses replication), a backup is created and stored on only one cluster in that instance.

[Figure: backup stored on a single cluster]

A backup of a table includes all the data that was in the table at the time the backup was created, on the cluster where the backup is created. A backup is never larger than the size of the source table at the time the backup is created. You can create up to 50 backups per table per cluster.

You can delete a table that has a backup. To protect your backups, you cannot delete a cluster that contains a backup, and you cannot delete an instance that has one or more backups in any cluster.

A backup still exists after it is restored to a new table. You can delete it or let it expire when you no longer need it. Backup storage does not count toward a project's node storage limit.

Data in backups is encrypted and stored using a proprietary format.

Costs

There is no charge to create or restore a backup.

To store a backup, you are charged the standard backup storage rate for the region that the cluster containing the backup is in.

A backup is a complete logical copy of a table. Behind the scenes, Bigtable optimizes backup storage utilization: a backup shares physical storage with the original table, or with other backups of the table, whenever possible. Because of these built-in storage optimizations, the cost of a backup can be less than the cost of a full physical copy of the table.

If you restore a table in an instance that uses replication, you are charged a one-time replication cost for the data to be copied to all clusters in the instance.

Encryption

When you create a backup in an instance that is protected by a customer-managed encryption key (CMEK), the backup is pinned to the primary version of the table's CMEK key at the time the backup is taken. After the backup is created, its key and key version cannot be modified, even if the KMS key is rotated.

When you restore a table from a backup, the key version that the backup is pinned to must be enabled for the backup decryption process to succeed. The new table is protected with the latest primary version of the target instance's CMEK key.
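The restore-time check described above can be stated as a small predicate. This is an illustrative sketch, not a library API; in Cloud KMS, rotation creates a new primary version but does not disable older versions by default:

```python
def can_decrypt_backup(pinned_key_version: str, enabled_versions: set[str]) -> bool:
    """A restore succeeds only if the backup's pinned key version is still enabled."""
    return pinned_key_version in enabled_versions

# A backup pinned to version "1" remains restorable after rotation to "2",
# as long as version "1" has not been disabled or destroyed:
print(can_decrypt_backup("1", {"1", "2"}))  # → True
print(can_decrypt_backup("1", {"2"}))       # → False
```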

Replication considerations

This section describes additional concepts to understand when backing up and restoring a table in an instance that uses replication.

Backing up

When you take a backup of a table in a replicated instance, you choose the cluster where you want to create and store the backup. There's no need to stop writing to the cluster that contains the backup, but you should be aware of how replicated writes to the cluster are handled.

A backup is a copy of the table in its state on the cluster where the backup is stored, at the time the backup is created. Table data that has not yet been replicated from another cluster in the instance is not included in the backup.

Each backup has a start and end time. Writes that are sent to the cluster shortly before or during the backup operation might not be included in the backup. Two factors contribute to this uncertainty:

  • A write might be sent to a section of the table that the backup has already copied.
  • A write to another cluster might not have replicated to the cluster that contains the backup.

In other words, there's a chance that some writes with a timestamp before the time of the backup might not be included in the backup. If this is unacceptable for your business requirements, you can use a consistency token with your write requests to ensure that all replicated writes are included in a backup.
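The consistency-token workflow can be sketched as a poll loop: generate a token after your last write, then poll until the cluster reports consistency before creating the backup. The helper below is generic and testable; the client-library calls shown in the trailing comment (`generate_consistency_token`, `check_consistency`) come from the Python Bigtable client, but verify them against your client version:

```python
import time
from typing import Callable

def wait_for_consistency(check: Callable[[], bool],
                         interval_s: float = 1.0,
                         timeout_s: float = 600.0,
                         sleep: Callable[[float], None] = time.sleep) -> bool:
    """Poll `check` until it reports consistency or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False

# With the Python client (assumption: google-cloud-bigtable), the check could be:
#   token = table.generate_consistency_token()
#   consistent = wait_for_consistency(lambda: table.check_consistency(token))
# Create the backup only after `consistent` is True.
```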


Restoring

When you restore a backup to a new table, replication to and from the other clusters in the instance starts immediately after the restore operation has completed on the cluster where the backup is stored.


Performance

Backing up

Creating a backup usually takes less than a minute, although it can take up to one hour. Under normal circumstances, backup creation does not affect serving performance.

For optimal performance, do not create a backup of a single table more than once every five minutes. Creating backups more frequently can potentially lead to an observable increase in serving latency.
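The five-minute guideline above can be enforced with a simple rate check before each backup request (the function and constant names here are illustrative, not part of any API):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Recommended minimum interval between backups of the same table (see above).
MIN_BACKUP_INTERVAL = timedelta(minutes=5)

def backup_allowed(last_backup: Optional[datetime], now: datetime) -> bool:
    """Apply the once-per-five-minutes guideline for a single table."""
    return last_backup is None or now - last_backup >= MIN_BACKUP_INTERVAL

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(backup_allowed(now - timedelta(minutes=3), now))  # → False
print(backup_allowed(now - timedelta(minutes=6), now))  # → True
```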


Restoring

Restoring a backup to a table in a single-cluster instance takes a few minutes. In multi-cluster instances, restoring takes longer because the data has to be copied to all the clusters.

If you store your tables in SSD clusters, you might initially experience higher read latency, even after a restore is complete, while the table is optimized. You can check the status at any time during the restore operation to see whether optimization is still in progress.

Access control

IAM permissions control access to backup and restore operations. Backup permissions are at the instance level and apply to all backups in the instance. To create a backup of a table, you must have permission to read the table and create backups in the instance that the table is in.

Action                         Required IAM permissions
Create a backup                bigtable.tables.readRows, bigtable.backups.create
Get a backup                   bigtable.backups.get
List backups                   bigtable.backups.list
Delete a backup                bigtable.backups.delete
Update a backup                bigtable.backups.update
Restore a backup to a table    bigtable.tables.create, bigtable.backups.restore
Get an operation               bigtable.instances.get
List operations                bigtable.instances.get
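The permissions table above can double as a lookup for a simple pre-flight check, for example against the permissions returned by an IAM `testIamPermissions` call. This is an illustrative sketch; the action names are hypothetical keys, while the permission strings come from the table:

```python
# Backup-related actions mapped to the IAM permissions listed above.
REQUIRED_PERMISSIONS = {
    "create_backup": {"bigtable.tables.readRows", "bigtable.backups.create"},
    "get_backup": {"bigtable.backups.get"},
    "list_backups": {"bigtable.backups.list"},
    "delete_backup": {"bigtable.backups.delete"},
    "update_backup": {"bigtable.backups.update"},
    "restore_backup": {"bigtable.tables.create", "bigtable.backups.restore"},
}

def missing_permissions(action: str, granted: set[str]) -> set[str]:
    """Return the permissions still needed before attempting the given action."""
    return REQUIRED_PERMISSIONS[action] - granted

print(missing_permissions("create_backup", {"bigtable.tables.readRows"}))
# → {'bigtable.backups.create'}
```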

Best practices


Backing up

  • Don't back up a table more frequently than once every five minutes.
  • When you back up a table that uses replication, choose the cluster to store the backup after considering the following factors:
    • Cost. One cluster in your instance may be in a lower-cost region than the others.
    • Proximity to your application server. You might want to store the backup as close to your serving application as possible.
    • Storage utilization. You need enough storage space to keep your backup as it grows in size. Depending on your workload, you could have clusters of different sizes or with different disk usage. This may factor into which cluster you choose.
  • If you need to ensure that all replicated writes are included in a backup when you back up a table in an instance that uses replication, use a consistency token with your write requests.

Restoring backups

  • Decide in advance what you will name the new table if you ever need to restore from a backup, so that you don't have to make that decision while you're dealing with a problem.
  • If you are restoring a table for a reason other than accidental deletion, make sure all reads and writes are going to the new table before you delete the original table.

Quotas and limits

Backup and restore requests and backup storage are subject to Bigtable quotas and limits.


Limitations

The following limitations apply to Bigtable backups:

  • You cannot read directly from a backup.
  • You cannot restore a table to an existing table or to a new table in a different instance.
  • If you restore a backup to a table in an SSD cluster and then delete the newly restored table, the table deletion might take a while to complete because Bigtable waits for table optimization to finish.
  • Backups are zonal and share the same availability guarantees as the cluster where the backup is created. Backups do not protect against regional outages.
  • A backup captures a table's state on a single cluster at a specific time; it is not guaranteed to represent a consistent state across clusters. Backups of the same table created on different clusters can therefore contain different data.
  • You cannot back up more than one table in a single operation.
  • You cannot export, copy, or move a Bigtable backup to another service, such as Cloud Storage.

What's next