This page describes how backups of your Cloud SQL instance work.
For step-by-step directions for scheduling backups or creating an on-demand backup, see Creating and Managing On-Demand and Automatic Backups.
For an overview of how to restore data to an instance from a backup, see Overview of restoring an instance.
What backups provide
Backups help you restore lost data to your Cloud SQL instance. You can also use a backup to restore an instance that is having problems to a previous state. Enable automated backups for any instance that contains data you need to protect from loss or damage. Enabling automated backups, along with transaction logging, is also required for some operations, such as clone and replica creation.
What backups cost
By default, Cloud SQL retains seven automated backups for each instance, in addition to any on-demand backups. You can configure how many automated backups to retain (from 1 to 365). Backup storage is charged at a lower rate than instance storage.
Cloud SQL doesn't take a backup of an instance when you stop or delete the instance. If you delete an instance, its data is preserved for four days. See the pricing page for more information.
Backups versus exports
Backups are managed by Cloud SQL according to retention policies, and are stored separately from the Cloud SQL instance. They differ from exports, which you upload to Cloud Storage and whose lifecycle you manage yourself. Backups encompass the entire database, whereas exports can be limited to specific contents.
Backup and restore operations can't be used to upgrade a database to a later version. You can only restore from a backup to an instance with the same database version. To upgrade to a later version, consider using the Database Migration Service or exporting and then importing your database to a new Cloud SQL instance.
About backup size
Cloud SQL backups are incremental. They contain only data that changed after the previous backup was taken. Your oldest backup is a similar size to your database, but the sizes of subsequent backups depend on the rate of change of your data. When the oldest backup is deleted, the size of the next oldest backup increases so that a full backup still exists.
Types of backups
Cloud SQL performs two types of backups: on-demand and automated.
On-demand backups
You can create a backup at any time. This is useful if you are about to perform a risky operation on your database, or if you need a backup and don't want to wait for the backup window. You can create on-demand backups for any instance, whether or not the instance has automated backups enabled.
On-demand backups are not automatically deleted the way automated backups are. They persist until you delete them or until their instance is deleted. Because they are not automatically deleted, on-demand backups can have a long-term effect on your billing charges.
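For example, assuming the gcloud CLI is installed and authenticated, an on-demand backup of a hypothetical instance named my-instance might be created like this (a minimal sketch, not a full command reference):

```
# Create an on-demand backup with an optional description.
gcloud sql backups create \
  --instance=my-instance \
  --description="pre-migration backup"
```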
Automated backups
Automated backups are taken daily, within a 4-hour backup window. The backup starts during the backup window. When possible, schedule backups when your instance has the least activity.
During the backup window, automated backups occur every day that your instance is running. One additional automated backup is taken after your instance is stopped, to safeguard changes made before the instance stopped. Automated backups are halted if your instance has been stopped for more than 36 hours. By default, the seven most recent automated backups are retained; you can configure how many to retain, from 1 to 365. Backup and transaction log retention values can both be changed from their defaults.
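As a sketch, assuming the gcloud CLI and a hypothetical instance named my-instance, the backup window and retention count might be configured as follows (the start time is in UTC):

```
# Set the daily backup window to start at 23:00 UTC and
# retain the 14 most recent automated backups.
gcloud sql instances patch my-instance \
  --backup-start-time=23:00 \
  --retained-backups-count=14
```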
Where backups are stored
Backup locations include:
- Default locations that Cloud SQL selects, based on the location of the original instance.
- Custom locations that you choose when you do not want to use the default location.
Default backup locations
If you do not specify a storage location, your backups are stored in the multi-region that is geographically closest to the location of your Cloud SQL instance. For example, if your Cloud SQL instance is in us-central1, your backups are stored in the us multi-region by default. However, a default location like australia-southeast1 is outside of a multi-region; in that case, backups are stored in the closest multi-region, asia.
Custom backup locations
Cloud SQL lets you select a custom location for your backup data. This is useful if your organization needs to comply with data residency regulations that require you to keep your backups within a specific geographic boundary. If your organization has this type of requirement, it probably uses a Resource Location Restriction organizational policy. With this policy, when you try to use a geographic location that does not comply with the policy, you see an alert on the Backups page. If you see this alert, you need to change the backup location to a location the policy allows.
When selecting a custom location for a backup, consider the following:
- Cost: the backup location may be in a lower-cost region than the region of your instance.
- Proximity to your application server: you might want to store the backup as close to your serving application as possible.
- Storage utilization: you need enough storage space to keep your backup as it grows in size. Depending on your workload, your instances might differ in size and disk usage, which can factor into the location you choose.
For a complete list of valid regional values, see Instance Locations. For a complete list of multi-regional values, see Multi-regional locations.
For more information about setting locations for backups and seeing the locations of backups taken for an instance, see Set a custom location for backups and View backup locations.
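For instance, assuming the gcloud CLI and a hypothetical instance named my-instance, a custom backup location might be set with a command along these lines (the asia multi-region is only an example value):

```
# Store this instance's backups in the asia multi-region.
gcloud sql instances patch my-instance \
  --backup-location=asia
```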
Automated backup and transaction log retention
Automated backups are used to restore a Cloud SQL instance. A combination of automated backups and transaction logs are used to perform a point-in-time recovery.
Automated backups can be retained for up to a year by configuring the retention period, whereas on-demand backups persist until you delete them or until your instance is deleted.
The two retention settings use different units. Automated backup retention is a count of backups and can be set from 1 to 365; transaction log retention is measured in days and can be set from 1 to 7. The default value for both is seven. Note that although transaction logs are counted in days, automated backups are not guaranteed to occur on a day boundary.
The lower bounds are useful for test instances because logs and backups are deleted sooner, and with lower transaction log retention the disk doesn't grow as much. Higher values for automated backup retention let you restore from further back in time.
Logs are purged once daily, not continuously. When the number of days of log retention is the same as the number of retained backups, you can end up with less log coverage than expected. For example, setting log retention to seven days and backup retention to seven backups means that between six and seven days of logs are retained.
To guarantee the specified number of days of log retention, we recommend setting the number of retained backups to at least one more than the number of days of log retention.
High write activity to the database can generate a large volume of transaction logs, which can consume significant disk space and lead to disk growth on instances with automatic storage increase enabled. We recommend sizing instance storage to account for transaction log retention.
See Setting automated backup retention and Setting transaction log retention.
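As a hedged example, assuming the gcloud CLI and a hypothetical instance named my-instance, both retention settings might be changed together, following the recommendation above to retain at least one more backup than days of logs:

```
# Retain 8 automated backups and 7 days of transaction logs.
gcloud sql instances patch my-instance \
  --retained-backups-count=8 \
  --retained-transaction-log-days=7
```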
Can I export a backup?
No, you can't export a backup. You can only export instance data. See Exporting data from Cloud SQL.
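Instance data, by contrast, can be exported to a Cloud Storage bucket. A minimal sketch, assuming the gcloud CLI and hypothetical names for the instance (my-instance), bucket (gs://my-bucket), and database (mydb):

```
# Export a single database as a SQL dump file to Cloud Storage.
gcloud sql export sql my-instance gs://my-bucket/mydb-export.sql.gz \
  --database=mydb
```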
About the special backup user
Cloud SQL creates a special database user, cloudsqladmin, for each instance, and generates a unique instance-specific password for it. Cloud SQL logs in as the cloudsqladmin user to perform automated backups.
How backups affect instance operations
Writes and other operations are unaffected by backup operations.
Backup rate limitations
Cloud SQL limits the rate for backup operations on the data disk. You are allowed a maximum of five backup operations every 50 minutes per instance per project. If a backup operation fails, it does not count towards this quota. If you reach the limit, the operation fails with an error message that tells you when you can retry.
Let's take a look at how Cloud SQL performs rate limiting for backups.
Cloud SQL uses tokens from a bucket to determine how many backup operations are available at any one time. Each instance has its own bucket, which holds a maximum of five tokens for backup operations. Every 10 minutes, a new token is added to the bucket; if the bucket is already full, the token overflows and is discarded.
Each time you issue a backup operation, a token is taken from the bucket. If the operation fails, the token is returned to the bucket; if it succeeds, the token is consumed.
Troubleshooting
Issue: You can't see the current operation's status.
Troubleshooting: The Google Cloud console reports only success or failure when the operation is done. It isn't designed to show warnings or other updates.
Issue: You want to find out who issued an on-demand backup operation.
Troubleshooting: The user interface doesn't show the user who started an operation. Look in the logs and filter by text to find the user. You may need to use audit logs for private information.
Issue: You can't do a backup after an instance was deleted.
Troubleshooting: The grace period for a Cloud SQL instance purge is four days, with the exception of read replicas, which are purged immediately. During this time, customer support can recreate the instance. If the instance is recreated, its backups are also recreated. After instances are purged, no data recovery is possible.
If you have done an export operation, you can create a new instance and then do an import operation to recreate the database. Exports are written to Cloud Storage and imports are read from there.
Issue: An automated backup is stuck for many hours and can't be canceled.
Troubleshooting: Backups can take a long time depending on the database size. If you really need to cancel the operation, contact customer support.
Issue: A restore operation can fail when one or more users referenced in the SQL dump file don't exist.
Troubleshooting: Before restoring a SQL dump, all the database users who own objects or were granted permissions on objects in the dumped database must exist in the target database. If they don't, the restore operation fails to recreate the objects with the original ownership or permissions. Create the database users before restoring the SQL dump.
Issue: You want to increase the number of days that you can keep automatic backups from seven to 30 days, or longer.
Troubleshooting: You can configure the number of automated backups to retain, from 1 to 365. Automated backups are pruned regularly based on the configured retention value, which means the currently visible backups are the only automated backups you can restore from.
To keep backups indefinitely, create an on-demand backup instead; on-demand backups are not deleted in the same way as automated backups. They remain until you delete them or until the instance they belong to is deleted. Because on-demand backups are not deleted automatically, they can affect your billing charges.
Issue: An automated backup failed and you didn't receive an email notification.
Troubleshooting: Notifications aren't supported for backup failures. You can find the status of a backup through either the Google Cloud console or the gcloud CLI:
gcloud sql backups list --project=PROJECT_ID --instance=INSTANCE_ID
gcloud sql backups describe BACKUP_ID --instance=INSTANCE_ID
Issue: An instance is repeatedly failing because it is cycling between the failure and backup restore states. Attempts to connect to and use the database following the restore fail.
Troubleshooting: Things to try:
Issue: You find that you are missing data when performing a backup/restore operation.
Troubleshooting: The missing tables were created as unlogged (for example, with CREATE UNLOGGED TABLE). Unlogged tables are not included in a restore from a backup; they are automatically wiped during a backup restore.
The solution is to avoid using unlogged tables if you want to restore those tables through a backup. If you're restoring from a database that already has unlogged tables, you can dump the database to a file, modify the dumped file so that those tables are created as logged tables, and then reload the data.
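One possible way to do that, sketched here under the assumption that the source database is PostgreSQL, that you can run pg_dump against it, and that mydb is a placeholder database name, is to rewrite unlogged tables as logged while dumping:

```
# Dump the database and convert unlogged tables to regular (logged) tables,
# so that the reloaded copy can be fully restored from backups.
pg_dump --no-owner mydb \
  | sed 's/CREATE UNLOGGED TABLE/CREATE TABLE/' \
  > mydb-logged.sql
```

The sed substitution assumes a plain-text dump; review the edited file before reloading the data into the target database.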