This page provides an overview of Cloud Bigtable backups.
Cloud Bigtable backups let you save a copy of a table's schema and data, then restore from the backup to a new table at a later time. Before you read this page, you should be familiar with the Overview of Cloud Bigtable and Managing tables.
What backups are for
Backups can help you recover from application-level data corruption or from operator errors such as accidentally deleting a table.
What backups are not for
Backups are not intended for protection against regional or zonal failures. Use replication if you need the ability to fail over to different regions or zones.
Backups are not readable, so they are not useful for offline analytics.
Benefits of backups
- Fully integrated: Backups are handled entirely by the Cloud Bigtable service, with no need to import or export.
- Cost savings: Using Cloud Bigtable backups lets you avoid the costs associated with exporting, storing, and importing data using other services.
- Storage efficiency: Backups are incremental. Storage is optimized so that a backup captures only data that has changed since the previous backup.
- Automatic expiration: Each backup has a user-defined expiration date that can be up to 30 days after the backup is created.
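Because the expiration date is user-defined but capped at 30 days, an application that creates backups may want to validate the requested expiration before calling the service. The following sketch is a local sanity check, not part of any Bigtable API; the helper name and behavior are assumptions based only on the 30-day limit stated above.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_RETENTION = timedelta(days=30)  # maximum backup retention stated above

def validate_expire_time(expire_time: datetime,
                         now: Optional[datetime] = None) -> datetime:
    """Return expire_time if it falls inside the allowed retention window.

    Raises ValueError for times in the past or more than 30 days out.
    Local sanity check only; the service enforces the real limit.
    """
    now = now or datetime.now(timezone.utc)
    if expire_time <= now:
        raise ValueError("expire_time must be in the future")
    if expire_time - now > MAX_RETENTION:
        raise ValueError("expire_time cannot be more than 30 days from now")
    return expire_time
```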
Working with backups
See Managing backups for step-by-step instructions on backing up and restoring a table, as well as operations such as updating and deleting backups.
Use the following to work with Cloud Bigtable backups:
- The Cloud Console
- Cloud Bigtable client libraries
You can also access the API directly, but we strongly recommend that you do so only if you cannot use a Cloud Bigtable client library that makes backup calls to the API.
How backups work
A table backup is a cluster-level resource. Even if a table is in an instance with multiple clusters (meaning the instance uses replication), a backup is created and stored on only one cluster in that instance.
A backup of a table includes all the data that was in the table at the time the backup was created, on the cluster where the backup is created. A backup is never larger than the size of the source table at the time the backup is created. You can create up to 50 backups per table per cluster.
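Because a backup is a cluster-level resource, its fully qualified resource name includes the cluster segment. The sketch below builds a backup resource name in the standard Bigtable admin API format and adds a client-side guard for the 50-backups-per-table-per-cluster limit; the helper names are hypothetical, and the service enforces the real limit.

```python
def backup_name(project: str, instance: str, cluster: str, backup_id: str) -> str:
    # Backups are addressed under a specific cluster, not the instance.
    return (f"projects/{project}/instances/{instance}"
            f"/clusters/{cluster}/backups/{backup_id}")

MAX_BACKUPS_PER_TABLE_PER_CLUSTER = 50  # limit stated above

def can_create_backup(existing_backups_for_table: int) -> bool:
    """Client-side guard only; the Bigtable service enforces the limit."""
    return existing_backups_for_table < MAX_BACKUPS_PER_TABLE_PER_CLUSTER
```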
You can delete a table that has a backup. To protect your backups, you cannot delete a cluster that contains a backup, and you cannot delete an instance that has one or more backups in any cluster.
A backup still exists after it is restored to a new table. You can delete it or let it expire when you no longer need it. Backup storage does not count toward a project's node storage limit.
Data in backups is encrypted and stored using a proprietary format.
Pricing
There is no charge to create or restore a backup.
To store a backup, you are charged the standard backup storage rate for the region that the cluster containing the backup is in.
Bigtable optimizes backup storage utilization. Because backups are incremental, the amount of backup storage that a backup uses depends on how much the table's data has changed since the previous backup.
Backups and replication
This section describes additional concepts to understand when backing up and restoring a table in an instance that uses replication.
When you take a backup of a table in a replicated instance, you choose the cluster where you want to create and store the backup. There's no need to stop writing to the cluster that contains the backup, but you should be aware of how replicated writes to the cluster are handled.
A backup is a copy of the table in its state on the cluster where the backup is stored, at the time the backup is created. Table data that has not yet been replicated from another cluster in the instance is not included in the backup.
Each backup has a start and end time. Writes that are sent to the cluster shortly before or during the backup operation might not be included in the backup. Two factors contribute to this uncertainty:
- A write might be sent to a section of the table that the backup has already copied.
- A write to another cluster might not have replicated to the cluster that contains the backup.
In other words, there's a chance that some writes with a timestamp before the time of the backup might not be included in the backup. If this is unacceptable for your business requirements, you can use a consistency token with your write requests to ensure that all replicated writes are included in a backup.
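One way to apply the consistency-token approach is to finish your writes, generate a token, and poll until the token is reported consistent before creating the backup. The sketch below abstracts the RPCs behind injected callables; `generate_token` and `check_token` are placeholders standing in for the Bigtable admin API's GenerateConsistencyToken and CheckConsistency calls, and the polling parameters are arbitrary.

```python
import time
from typing import Callable

def wait_for_replication(generate_token: Callable[[], str],
                         check_token: Callable[[str], bool],
                         poll_seconds: float = 5.0,
                         timeout_seconds: float = 600.0) -> str:
    """Block until writes issued before the token was generated have replicated.

    generate_token / check_token are placeholders for the Bigtable admin
    API's GenerateConsistencyToken and CheckConsistency RPCs.
    """
    token = generate_token()
    deadline = time.monotonic() + timeout_seconds
    while not check_token(token):
        if time.monotonic() >= deadline:
            raise TimeoutError("replication did not catch up in time")
        time.sleep(poll_seconds)
    return token
```

After this function returns, a backup created on any cluster includes all writes issued before the token was generated.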
When you restore a backup to a new table, replication to and from the other clusters in the instance starts immediately after the restore operation has completed on the cluster where the backup was stored.
Performance
Creating a backup usually takes less than a minute, although it can take up to one hour. Under normal circumstances, backup creation does not affect serving performance.
For optimal performance, do not create a backup of a single table more than once every five minutes. Creating backups more frequently can potentially lead to an observable increase in serving latency.
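The five-minute guideline can be enforced in application code with a simple check of the table's last backup time. This is a local sketch, not a service-side feature; the helper name is hypothetical.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

MIN_BACKUP_INTERVAL = timedelta(minutes=5)  # guideline stated above

def should_back_up(last_backup_time: Optional[datetime],
                   now: Optional[datetime] = None) -> bool:
    """Return True if enough time has passed since the table's last backup."""
    if last_backup_time is None:
        return True  # no previous backup recorded
    now = now or datetime.now(timezone.utc)
    return now - last_backup_time >= MIN_BACKUP_INTERVAL
```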
Restoring a backup to a table in a single-cluster instance takes a few minutes. In multi-cluster instances, restoration takes longer because the data has to be copied to all the clusters.
If you store your tables in SSD clusters, you might initially experience higher read latency, even after a restore is complete, while the table is optimized. You can check the status at any time during the restore operation to see whether optimization is still in progress.
Access control
IAM permissions control access to backup and restore operations. Backup permissions are at the instance level and apply to all backups in the instance. To create a backup of a table, you must have permission to read the table and create backups in the instance that the table is in.
| Action | Required IAM permission |
| --- | --- |
| Create a backup | bigtable.tables.readRows, bigtable.backups.create |
| Get a backup | bigtable.backups.get |
| Delete a backup | bigtable.backups.delete |
| Update a backup | bigtable.backups.update |
| Restore a backup to a table | bigtable.tables.create, bigtable.backups.restore |
| Get an operation | bigtable.instances.get |
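The table above can also be expressed as a lookup that an application or test harness might use to check which permissions a caller still needs before attempting an operation. The mapping comes directly from the table; the helper itself is hypothetical.

```python
# Action -> required IAM permissions, taken from the table above.
REQUIRED_PERMISSIONS = {
    "create_backup": ["bigtable.tables.readRows", "bigtable.backups.create"],
    "get_backup": ["bigtable.backups.get"],
    "delete_backup": ["bigtable.backups.delete"],
    "update_backup": ["bigtable.backups.update"],
    "restore_backup": ["bigtable.tables.create", "bigtable.backups.restore"],
    "get_operation": ["bigtable.instances.get"],
}

def missing_permissions(action: str, granted: set) -> list:
    """Return the required permissions for `action` that `granted` lacks."""
    return [p for p in REQUIRED_PERMISSIONS[action] if p not in granted]
```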
Best practices
- Don't back up a table more frequently than once every five minutes.
- When you back up a table that uses replication, choose the cluster to store the backup after considering the following factors:
- Cost. One cluster in your instance may be in a lower-cost region than the others.
- Proximity to your application server. You might want to store the backup as close to your serving application as possible.
- Storage utilization. You need enough storage space to keep your backup as it grows in size. Depending on your workload, you could have clusters of different sizes or with different disk usage. This may factor into which cluster you choose.
- If you need to ensure that all replicated writes are included in a backup when you back up a table in an instance that uses replication, use a consistency token with your write requests.
- Decide in advance what you will name the new table if you need to restore from a backup, so that you don't have to choose a name while you're dealing with a problem.
- If you are restoring a table for a reason other than accidental deletion, make sure all reads and writes are going to the new table before you delete the original table.
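For the naming best practice, one simple convention is to derive the restored table's ID from the source table ID and a timestamp. The pattern below is illustrative only, not a Bigtable requirement; adjust it to your own naming rules.

```python
from datetime import datetime, timezone
from typing import Optional

def restored_table_id(source_table_id: str,
                      when: Optional[datetime] = None) -> str:
    """Build a predictable ID for a restored table, decided ahead of time.

    Illustrative convention only, e.g. 'orders' -> 'orders-restored-20240101'.
    """
    when = when or datetime.now(timezone.utc)
    return f"{source_table_id}-restored-{when.strftime('%Y%m%d')}"
```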
Quotas and limits
Backup and restore requests and backup storage are subject to Cloud Bigtable quotas and limits.
The following limitations apply to Cloud Bigtable backups:
- You cannot read directly from a backup.
- You cannot restore a backup to an existing table or to a new table in a different instance.
- If you restore a backup to a table in an SSD cluster and then delete the newly restored table, the table deletion might take a while to complete because Cloud Bigtable waits for table optimization to finish.
- Backups are zonal and share the same availability guarantees as the cluster where the backup is created. Backups do not protect against regional outages.
- A backup is a version of a table on a single cluster at a specific time. Two backups, including backups of the same table taken on different clusters, are not guaranteed to be consistent with each other.
- You cannot back up more than one table in a single operation.
- You cannot export, copy, or move a Cloud Bigtable backup to another service, such as Cloud Storage.