Bigtable backups overview

This page provides an overview of Bigtable backups. The content presented here is intended for Bigtable administrators and developers.

Bigtable backups let you save a copy of a table's schema and data and then restore from the backup to a new table later. You can create backups manually or enable automated backup to let Bigtable create daily backups. You can also create a copy of a backup.

Before you read this page, you should be familiar with the Bigtable overview and Manage tables.

Features

  • Fully integrated: Backups are handled entirely by the Bigtable service, with no need to import or export.
  • Incremental: A backup shares physical storage with the source table and other backups of the table.
  • Cost effective: Using Bigtable backups lets you avoid the costs associated with exporting, storing, and importing data using other services.
  • Automatic expiration: Each backup has a user-defined expiration date that can be up to 90 days after the backup is created. You can store a copy of a backup for up to 30 days.
  • Flexible restore options: You can restore from a backup to a table in a different instance from where the backup was created.
  • Automated backup: Enable automated backup to let Bigtable create daily backups.

Use cases

Backups are useful for the following use cases:

  • Business continuity
  • Regulatory compliance
  • Testing and development
  • Disaster recovery

Consider the following disaster recovery scenarios:

Protect against human error: You want to always have a recent backup of your data ready in case of accidental deletion or corruption.
  • Backup strategy: Determine the backup creation schedule that's right for your business needs, such as daily. Optionally, create periodic copies of the backups and store them in a different project or region for increased isolation and protection. For even more protection, store the backup copies in a project or instance with restricted access permissions.
  • Restoration strategy: Restore to a new table from the backup or copy, and then re-route requests to the new table.

Zone unavailability: You need to make sure that in the unlikely event that a Google Cloud zone becomes unavailable, your data is still available.
  • Backup strategy: Create backups on a regular basis, such as daily. Then, periodically create a copy of the most recent backup and store it on one or more clusters in different zones (optionally in a different instance or project).
  • Restoration strategy: If the zone where your serving cluster is located becomes unavailable, restore from the remote backup copy to a new table, and then re-route requests to the new table.

Data corruption: You want to use a backup to recover some of a table's data, such as when part of the source table has become corrupted.
  • Backup strategy: Periodically create backups of your data.
  • Restoration strategy: Restore from the backup to a new table, optionally in a different instance. Then write an application using a Bigtable client library or Dataflow that reads from the new table and writes the data back to the source table. When the data has been copied to the original table, delete the new table.
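The copy-back step in the data corruption scenario can be sketched with the google-cloud-bigtable Python client library. The `copy_rows` and `repair_from_restored_table` helpers below are illustrative, not part of Bigtable, and all IDs are hypothetical; for large tables, a Dataflow pipeline is the better fit.

```python
# Hedged sketch of the copy-back step: read every row from the restored
# table and rewrite it to the original table, preserving cell timestamps.

def copy_rows(source_table, destination_table):
    """Copy all rows from source_table into destination_table."""
    copied = 0
    for row in source_table.read_rows():
        out = destination_table.direct_row(row.row_key)
        for family, columns in row.cells.items():
            for qualifier, cells in columns.items():
                for cell in cells:
                    # Keep the original timestamp so cell versions line up.
                    out.set_cell(family, qualifier, cell.value,
                                 timestamp=cell.timestamp)
        out.commit()
        copied += 1
    return copied


def repair_from_restored_table(project_id, instance_id,
                               restored_table_id, original_table_id):
    """Connect with the client library and run the copy."""
    from google.cloud import bigtable  # pip install google-cloud-bigtable

    client = bigtable.Client(project=project_id, admin=True)
    instance = client.instance(instance_id)
    return copy_rows(instance.table(restored_table_id),
                     instance.table(original_table_id))
```

Committing one row at a time is slow for tables of significant size; batching mutations, or a Dataflow job as the scenario suggests, is more appropriate in practice.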

Working with Bigtable backups

The following actions are available for Bigtable backups. In all cases, the destination project, instance, and cluster must already exist; you cannot create these resources as part of a backup operation. You also cannot create a copy of a backup copy.

Each action supports the following destinations:

Create a backup:
  • Any cluster in the same instance as the source table

Restore from a backup to a new table:
  • Any instance
  • Any Bigtable region
  • Any project

Copy a backup:
  • Any instance
  • Any Bigtable region
  • Any project

See Manage backups for step-by-step instructions on these actions as well as operations such as updating and deleting backups.

Use the following to work with Bigtable backups:

  • The Google Cloud console
  • The Google Cloud CLI
  • The Cloud Bigtable client libraries

You can also access the Cloud Bigtable Admin API directly, but we strongly recommend that you do so only if you cannot use a Cloud Bigtable client library that makes backup calls to the API.
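As an illustration of the client library path, the following sketch creates a manual backup of a table and then restores it to a new table. All IDs are hypothetical, the seven-day retention period is only an example, and the exact client method signatures should be checked against the google-cloud-bigtable reference.

```python
# Hedged sketch: create a manual backup and restore it to a new table
# with the google-cloud-bigtable Python client. IDs are hypothetical.
import datetime


def backup_expiry(days=7):
    """Expiration is set when the backup is created; 90 days is the maximum."""
    if not 0 < days <= 90:
        raise ValueError("backup retention must be between 1 and 90 days")
    return (datetime.datetime.now(datetime.timezone.utc)
            + datetime.timedelta(days=days))


def create_and_restore(project_id, instance_id, cluster_id,
                       table_id, backup_id, new_table_id):
    from google.cloud import bigtable  # pip install google-cloud-bigtable

    client = bigtable.Client(project=project_id, admin=True)
    instance = client.instance(instance_id)
    table = instance.table(table_id)

    # A manual backup is created and stored on one cluster that you choose.
    backup = table.backup(backup_id, cluster_id=cluster_id,
                          expire_time=backup_expiry(days=7))
    backup.create().result()  # wait for the long-running operation

    # Restore to a new table; the destination instance must already exist.
    table.restore(new_table_id, cluster_id=cluster_id,
                  backup_id=backup_id).result()
```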

How Bigtable backups work

To back up and restore tables effectively, you should understand how Bigtable stores and retains backups.

Backup storage

You can create a table backup manually, or you can enable automated backup to let Bigtable take daily backups. A table backup is stored on a cluster in an instance. For manual backups, the backup is stored on one cluster that you select in the instance. When automated backup is enabled, a backup is stored on each cluster in the instance.

A backup of a table includes all the data that was in the table at the time the backup was created, on the cluster where the backup is created. A backup is never larger than the size of the source table at the time that the backup is created.

Bigtable backups are incremental. The amount of storage that a backup consumes depends on the size of the table and the extent to which it can share storage of unchanged data with the original table or other backups of the same table. For that reason, a backup's size depends on the amount of data divergence since the prior backup.

You can create up to 150 backups per table per cluster.

You can delete a table that has a backup. To protect your backups, you cannot delete a cluster that contains a backup, and you cannot delete an instance that has one or more backups in any cluster.

A backup still exists after you restore from it to a new table. You can delete it or let it expire when you no longer need it. Backup storage does not count toward the node storage limit for a project.

Data in backups is encrypted.

Retention

You can specify a retention period of up to 90 days for a backup. If you create a copy of a backup, the maximum retention period for the copy is 30 days from the time the copy is created.

For tables with automated backup enabled, the default retention period is 3 days. You can modify a backup's retention period to as long as 90 days from the backup creation time.
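The retention limits described in this section can be expressed as a small validation helper. This is a minimal sketch with hypothetical function names; only the 90-day and 30-day limits come from Bigtable.

```python
# Hedged sketch of the retention rules: a backup can be retained for up to
# 90 days from creation, and a copy of a backup for up to 30 days from the
# time the copy is created.
import datetime

BACKUP_MAX_RETENTION = datetime.timedelta(days=90)
COPY_MAX_RETENTION = datetime.timedelta(days=30)


def latest_allowed_expiry(create_time, is_copy=False):
    """Return the latest expiration time allowed for a backup or a copy."""
    limit = COPY_MAX_RETENTION if is_copy else BACKUP_MAX_RETENTION
    return create_time + limit


def validate_expiry(create_time, expire_time, is_copy=False):
    """Raise ValueError if the requested expiration exceeds the limit."""
    if expire_time > latest_allowed_expiry(create_time, is_copy=is_copy):
        kind = "backup copy" if is_copy else "backup"
        raise ValueError(f"expiration exceeds the {kind} retention limit")
    return expire_time
```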

Post-restoration storage

The storage cost for a new table restored from a backup is the same as for any table.

A table restored from a backup might not consume the same amount of storage as the original table, and it might decrease in size after restoration. The size difference depends on how recently compaction has occurred on the source cluster and the destination cluster.

Because compaction occurs on a rolling basis, it's possible that compaction occurs as soon as the table is created. However, compaction can take up to a week to occur.

Costs

Standard network costs apply when working with backups. You are not charged for backup operations, including creating, copying, or restoring from a backup.

Storage costs

To store a backup or a copy of a backup, you're charged the standard backup storage rate for the region that the cluster containing the backup or backup copy is in.

A backup is a complete logical copy of a table. Behind the scenes, Bigtable optimizes backup storage utilization. This optimization means that a backup is incremental: it shares physical storage with the original table or with other backups of the table whenever possible. Because of these built-in storage optimizations, the cost to store a backup or a copy of a backup might be less than the cost of storing a full physical copy of the table.

In replicated instances where automated backup is enabled, the storage costs might be higher because backups are created on each cluster daily.

Costs when copying a backup

When you create a copy of a backup in a different region than the source backup, you are charged standard network rates for the cost of copying the data to the destination cluster. You are not charged for network traffic when you create a copy in the same region as the source backup.

Costs when restoring

When you restore a new table from a backup, you are billed for the network cost of replication. If the new table is in an instance that uses replication, you are charged a one-time replication cost for the data to be copied to all clusters in the instance.

If you restore to a different instance than where the backup was created, and the backup's instance and the destination instance don't have at least one cluster in the same region, you are charged a one-time cost for the initial data copy to the destination cluster at the standard network rates.

CMEK

When you create a backup in a cluster that is protected by a customer-managed encryption key (CMEK), the backup is pinned to the primary version of the cluster's CMEK key at the time it is taken. Once the backup is created, its key and key version cannot be modified, even if the KMS key is rotated.

When you restore from a backup, the key version that the backup is pinned to must be enabled for the backup decryption process to succeed. The new table is protected with the latest primary version of the CMEK key for each cluster in the destination instance. If you want to restore from a CMEK-protected backup to a different instance, the destination instance must be CMEK-protected as well but does not need to have the same CMEK configuration as the source instance.

Replication considerations

This section describes additional concepts to understand when backing up and restoring a table in an instance that uses replication.

Replication and backing up

When you take a backup of a table manually in a replicated instance, you choose the cluster where you want to create and store the backup. For tables with automated backup enabled, a daily backup is taken on each cluster in the instance.

There's no need to stop writing to the cluster that contains the backup, but you should understand how replicated writes to the cluster are handled.

A backup is a copy of the table in its state on the cluster where the backup is stored, at the time the backup is created. Table data that has not yet been replicated from another cluster in the instance is not included in the backup.

Each backup has a start and end time. Writes that are sent to the cluster shortly before or during the backup operation might not be included in the backup. Two factors contribute to this uncertainty:

  • A write might be sent to a section of the table that the backup has already copied.
  • A write to another cluster might not have been replicated to the cluster that contains the backup.

In other words, there's a chance that some writes with a timestamp before the time of the backup might not be included in the backup. This is also true for backups created when automated backup is enabled. An instance's backups are not exact copies of each other because backup times can vary from cluster to cluster.

If this inconsistency is unacceptable for your business requirements, you can use a consistency token with your write requests to ensure that all replicated writes are included in a backup.
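The consistency-token workflow can be sketched as follows. The `generate_consistency_token` and `check_consistency` calls are part of the Cloud Bigtable Admin API; the polling helper, its timeout values, and the function names are illustrative.

```python
# Hedged sketch: use a consistency token to confirm that all writes sent
# before a point in time have been replicated, before you create a backup.
import time


def wait_for_consistency(check, timeout_s=600.0, interval_s=5.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll check() (a callable returning bool) until it reports consistency."""
    deadline = clock() + timeout_s
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval_s)


def table_is_consistent_check(project_id, instance_id, table_id):
    """Build a check() callable backed by the Cloud Bigtable Admin API."""
    from google.cloud import bigtable_admin_v2  # pip install google-cloud-bigtable

    client = bigtable_admin_v2.BigtableTableAdminClient()
    name = client.table_path(project_id, instance_id, table_id)
    token = client.generate_consistency_token(
        request={"name": name}).consistency_token
    return lambda: client.check_consistency(
        request={"name": name, "consistency_token": token}).consistent
```

Once `wait_for_consistency` returns `True`, all writes issued before the token was generated have been replicated, so a backup taken at that point includes them.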

Replication and restoring

When you restore a backup to a new table, replication to and from the other clusters in the instance starts immediately after the restore operation has completed on the destination cluster.

Performance

The following sections describe what to expect and how to maintain optimal performance when you back up and restore tables.

Performance when backing up

Creating a backup usually takes less than a minute, although it can take up to one hour. Under normal circumstances, backup creation does not affect serving performance.

For optimal performance, don't create a backup of a single table more than once every five minutes. Creating backups more frequently can potentially lead to an observable increase in serving latency.

Performance when restoring

Restoring from a backup to a table in a single-cluster instance takes a few minutes. In replicated instances, restoration takes longer because the data has to be copied to all the clusters. Bigtable always chooses the most efficient route to copy data.

If you restore to a different instance from where the backup was created, the restore operation takes longer than if you restore to the same instance. This is especially true if the destination instance does not have a cluster in the same zone as the cluster where the backup was created.

A bigger table takes longer to restore than a smaller table.

If you have an SSD instance, you might initially experience higher read latency, even after a restore is complete, while the table is optimized. You can check the status of the restore operation at any time to see whether optimization is still in progress.

If you restore to a different instance from where the backup was created, the destination instance can use HDD or SSD storage. It does not need to use the same storage type as the source instance.

Access control

IAM permissions control access to backup and restore operations. Backup permissions are at the instance level and apply to all backups in the instance.

The account that you use to create a backup of a table must have permission to read the table and create backups in the instance that the table is in (the source instance).

The account that you use to copy a backup must have permission to read the source backup and to create a backup in the destination instance and project.

The account that you use to restore a new table from a backup must have permission to create a table in the instance that you are restoring to.

  • Create a backup: bigtable.tables.readRows, bigtable.backups.create
  • Get a backup: bigtable.backups.get
  • List backups: bigtable.backups.list
  • Delete a backup: bigtable.backups.delete
  • Update a backup: bigtable.backups.update
  • Copy a backup: bigtable.backups.read, bigtable.backups.create
  • Restore from a backup to a new table: bigtable.tables.create, bigtable.backups.restore
  • Get an operation: bigtable.instances.get
  • List operations: bigtable.instances.get
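For scripting, the permission requirements above can be captured in a lookup table. This is a hypothetical helper; only the permission strings themselves come from Bigtable's IAM model.

```python
# Hedged sketch: the per-action IAM permissions listed above, as a lookup
# that a deployment script might use to verify bindings before running.
REQUIRED_PERMISSIONS = {
    "create_backup": ["bigtable.tables.readRows", "bigtable.backups.create"],
    "get_backup": ["bigtable.backups.get"],
    "list_backups": ["bigtable.backups.list"],
    "delete_backup": ["bigtable.backups.delete"],
    "update_backup": ["bigtable.backups.update"],
    "copy_backup": ["bigtable.backups.read", "bigtable.backups.create"],
    "restore_backup": ["bigtable.tables.create", "bigtable.backups.restore"],
}


def missing_permissions(action, granted):
    """Return the permissions still needed for the given action."""
    granted = set(granted)
    return [p for p in REQUIRED_PERMISSIONS[action] if p not in granted]
```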

Best practices

Consider the following best practices when you create your backup strategy.

Creating backups

  • Don't back up a table more frequently than once every five minutes.
  • When you back up a table that uses replication, choose the cluster to store the backup after considering the following factors:
    • Cost. One cluster in your instance may be in a lower-cost region than the others.
    • Proximity to your application server. You might want to store the backup as close to your serving application as possible.
    • Storage utilization. You need enough storage space to keep your backups as they accumulate. Depending on your workload, you could have clusters of different sizes or with different disk usage. This may factor into which cluster you choose.
  • If you need to ensure that all replicated writes are included in a backup when you back up a table in an instance that uses replication, use a consistency token with your write requests.

Restoring from backups

  • Decide ahead of time what you will name the new table in case you need to restore from a backup, so that you don't have to make that decision while you're dealing with a problem.
  • If you are restoring a table for a reason other than accidental deletion, make sure all reads and writes are going to the new table before you delete the original table.
  • If you plan to restore to a different instance, create the destination instance before you initiate the backup restore operation.

Quotas and limits

Backup and restore requests and backup storage are subject to Bigtable quotas and limits.

Limitations

The following limitations apply to Bigtable backups:

General

  • You can't read directly from a backup.
  • A backup is a version of a table on a single cluster at a specific point in time; it does not represent a globally consistent state of the table. Backups of the same table on different clusters are likewise not consistent with each other.
  • You cannot back up more than one table in a single operation.
  • You cannot export, copy, or move a Bigtable backup to another service, such as Cloud Storage.
  • Bigtable backups contain only Bigtable data and are not integrated with or related to backups for other Google services.

Restoring

  • You can't restore from a backup to an existing table.
  • You can only restore to an instance that already exists. Bigtable does not create a new instance when restoring from a backup. If the destination instance specified in a restore request does not exist, the restore operation fails.
  • If you restore from a backup to a table in an SSD cluster and then delete the newly restored table, the table deletion might take a while to complete because Bigtable waits for table optimization to finish.

Copying

  • You can't create a copy of a backup that is within 24 hours of expiring.
  • You can't create a copy of a backup copy.

CMEK

  • A backup that is protected by CMEK must be restored to a new table in an instance that is CMEK-protected.
  • When you create a copy of a backup that is CMEK-protected, the destination cluster must also be CMEK-protected.
