This page describes how to use point-in-time recovery (PITR) to restore your primary Cloud SQL instance.
To learn more about PITR, see Point-in-time recovery (PITR).
By default, PITR is enabled when you create a Cloud SQL Enterprise Plus edition instance, regardless of whether you create the instance by using the Google Cloud console, gcloud CLI, Terraform, or the Cloud SQL Admin API.
If you create a Cloud SQL Enterprise edition instance in the Google Cloud console, then PITR is enabled by default. Otherwise, if you create the instance by using the gcloud CLI, Terraform, or the Cloud SQL Admin API, then you must manually enable PITR.
Log storage for PITR
Cloud SQL uses write-ahead log (WAL) archiving for PITR. On January 9, 2023, we launched storing write-ahead logs for PITR in Cloud Storage. Since this launch, the following conditions apply:
- All Cloud SQL Enterprise Plus edition instances store their write-ahead logs in Cloud Storage. The only exceptions are Cloud SQL Enterprise Plus edition instances that you upgraded from Cloud SQL Enterprise edition and that had PITR enabled before January 9, 2023; these continue to store their logs on disk.
- Cloud SQL Enterprise edition instances created with PITR enabled before January 9, 2023 continue to store their logs on disk.
- If, after August 15, 2024, you upgrade a Cloud SQL Enterprise edition instance that stores transaction logs for PITR on disk to Cloud SQL Enterprise Plus edition, then the upgrade process switches the storage location of the transaction logs used for PITR to Cloud Storage for you. For more information, see Upgrade an instance to Cloud SQL Enterprise Plus edition by using in-place upgrade.
- All Cloud SQL Enterprise edition instances that you create with PITR enabled after January 9, 2023 store logs in Cloud Storage.
For instances that store write-ahead logs only on disk, you can switch the storage location of the transaction logs used for PITR from disk to Cloud Storage by using gcloud CLI or the Cloud SQL Admin API without incurring any downtime. For more information, see Switch transaction log storage to Cloud Storage.
To see whether an instance stores the logs used for PITR in Cloud Storage, see Check the storage location of transaction logs used for PITR.
Alternatively, after you use a PostgreSQL client such as psql or pgAdmin to connect to a database on the instance, run the command show archive_command. If write-ahead logs are archived in Cloud Storage, then you see -async_archive -remote_storage, as in the sketch that follows.
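A minimal sketch of this check, assuming you connect with psql; INSTANCE_IP_ADDRESS, mydb, and postgres are placeholders for your own connection values:
```
# Connect to a database on the instance and check how write-ahead logs are archived.
psql "host=INSTANCE_IP_ADDRESS dbname=mydb user=postgres" \
     -c "show archive_command;"

# If logs are archived in Cloud Storage, the output contains:
#   -async_archive -remote_storage
```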
All other existing instances that have PITR enabled continue to have their logs stored on disk.
If the logs are stored in Cloud Storage, then Cloud SQL uploads logs every five minutes or less. As a result, if a Cloud SQL instance is available, then the instance can be recovered to the latest time. However, if the instance isn't available, then the recovery point objective is typically five minutes or less. Use the gcloud CLI or Admin API to check for the latest time to which you can restore the instance, and perform the recovery to that time.
Log retention period
The write-ahead logs used with PITR are deleted automatically along with their associated automated backup, which generally happens after the value set for transactionLogRetentionDays is met. This value is the number of days of transaction logs that Cloud SQL retains for PITR. For Cloud SQL Enterprise Plus edition, you can set the number of days of retained transaction logs from 1 to 35; for Cloud SQL Enterprise edition, you can set the value from 1 to 7.
If you restore a backup on a Cloud SQL instance before enabling PITR, then you lose the write-ahead logs that PITR requires.
For customer-managed encryption key (CMEK)-enabled instances, write-ahead logs are encrypted by using the latest version of the CMEK. To perform a restore, all versions of the key that were the latest version during the number of days that you configured for the retained-transaction-log-days parameter must be available.
For instances that store write-ahead logs in Cloud Storage, the logs are stored in the same region as the primary instance. This log storage (up to 35 days for Cloud SQL Enterprise Plus edition and 7 days for Cloud SQL Enterprise edition, the maximum retention periods for PITR) doesn't incur an additional cost per instance.
Logs and disk usage
If your instance has PITR enabled and the size of your write-ahead logs on disk is causing an issue for the instance, consider the following options:
- You can switch the storage location of the logs used for PITR from disk to Cloud Storage without downtime by using the gcloud CLI or the Cloud SQL Admin API.
- You can upgrade your instance to Cloud SQL Enterprise Plus edition.
- You can increase the instance storage size, but the increase in disk usage from write-ahead logs might be temporary.
- We recommend enabling automatic storage increase to avoid unexpected storage issues (see the sketch after this list). This recommendation applies only if your instance has PITR enabled and its logs are stored on disk.
- You can deactivate PITR if you want to delete the logs and recover storage. Deleting the write-ahead logs doesn't shrink the size of the disk provisioned for the instance.
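A minimal gcloud sketch that turns on automatic storage increase; INSTANCE_NAME is a placeholder for your instance name:
```
# Let Cloud SQL grow the disk automatically before it fills up with write-ahead logs.
gcloud sql instances patch INSTANCE_NAME \
    --storage-auto-increase
```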
Logs are purged once daily, not continuously. If you set log retention to two days, then at least two days and at most three days of logs are retained. We recommend setting the number of retained backups to one more than the number of days of log retention.
For example, if you specify 7 for the value of the transactionLogRetentionDays parameter, then for the backupRetentionSettings parameter, set the number of retainedBackups to 8.
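A minimal gcloud sketch of this pairing; INSTANCE_NAME is a placeholder for your instance name:
```
# Retain 7 days of transaction logs and 8 automated backups (one more than the log retention).
gcloud sql instances patch INSTANCE_NAME \
    --retained-transaction-log-days=7 \
    --retained-backups-count=8
```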
Enable PITR
When you create a new instance in the Google Cloud console, both Automated backups and Enable point-in-time recovery are enabled automatically. The following procedure enables PITR on an existing primary instance.
Console
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance you want to enable PITR on and click Edit.
- Under Customize your instance, expand the Data Protection section.
- Select the Enable point-in-time recovery checkbox.
- In the Days of logs field, enter the number of days to retain logs, from 1-35 for Cloud SQL Enterprise Plus edition, or 1-7 for Cloud SQL Enterprise edition.
- Click Save.
gcloud
- Display the instance overview:
gcloud sql instances describe INSTANCE_NAME
- If you see enabled: false in the backupConfiguration section, enable scheduled backups:
gcloud sql instances patch INSTANCE_NAME \
--backup-start-time=HH:MM
Specify the backup-start-time parameter by using 24-hour time in the UTC±00 time zone.
- Enable PITR:
gcloud sql instances patch INSTANCE_NAME \
--enable-point-in-time-recovery
If you're enabling PITR on a primary instance, you can also configure the number of days for which you want to retain transaction logs by adding the following parameter:
--retained-transaction-log-days=RETAINED_TRANSACTION_LOG_DAYS
- Confirm your change:
gcloud sql instances describe INSTANCE_NAME
In the backupConfiguration section, you see pointInTimeRecoveryEnabled: true if the change was successful.
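For example, a single command that enables PITR and sets a 7-day transaction log retention period; this is a sketch, and INSTANCE_NAME is a placeholder:
```
gcloud sql instances patch INSTANCE_NAME \
    --enable-point-in-time-recovery \
    --retained-transaction-log-days=7
```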
Terraform
To enable PITR, use a Terraform resource.
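The following is a minimal sketch of such a resource, not a definitive configuration; the instance name, region, database version, and machine tier are placeholder assumptions:
```
resource "google_sql_database_instance" "postgres_pitr" {
  name             = "postgres-pitr-instance" # placeholder name
  region           = "us-central1"            # placeholder region
  database_version = "POSTGRES_15"            # placeholder version

  settings {
    tier = "db-custom-2-7680"                 # placeholder machine tier

    backup_configuration {
      enabled                        = true   # automated backups are required for PITR
      point_in_time_recovery_enabled = true   # enables PITR
      transaction_log_retention_days = 7      # optional: days of retained transaction logs
    }
  }
}
```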
Apply the changes
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
- Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.
mkdir DIRECTORY && cd DIRECTORY && touch main.tf
- If you are following a tutorial, you can copy the sample code in each section or step. Copy the sample code into the newly created main.tf. Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:
terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
terraform plan
Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Delete the changes
To delete your changes, do the following:
- To disable deletion protection, in your Terraform configuration file set the deletion_protection argument to false:
deletion_protection = "false"
- Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
- Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:
terraform destroy
REST v1
Before using any of the request data, make the following replacements:
- PROJECT_ID: the ID or project number of the Google Cloud project that contains the instance
- INSTANCE_NAME: the name of the instance that you're configuring for point-in-time recovery
- START_TIME: the start time for the backup window, in hours and minutes (HH:MM)
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME
Request JSON body:
{ "settings": { "backupConfiguration": { "startTime": "START_TIME", "enabled": true, "pointInTimeRecoveryEnabled": true } } }
To send your request, use an HTTP client such as curl, as in the following sketch. A successful request returns a JSON response that describes the operation.
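A minimal curl sketch, assuming the gcloud CLI supplies the access token and that PROJECT_ID, INSTANCE_NAME, and START_TIME are replaced with your own values:
```
curl -X PATCH \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "settings": {
          "backupConfiguration": {
            "startTime": "START_TIME",
            "enabled": true,
            "pointInTimeRecoveryEnabled": true
          }
        }
      }' \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME"
```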
REST v1beta4
Before using any of the request data, make the following replacements:
- PROJECT_ID: the ID or project number of the Google Cloud project that contains the instance
- INSTANCE_NAME: the name of the instance that you're configuring for point-in-time recovery
- START_TIME: the start time for the backup window, in hours and minutes (HH:MM)
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME
Request JSON body:
{ "settings": { "backupConfiguration": { "startTime": "START_TIME", "enabled": true, "pointInTimeRecoveryEnabled": true } } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the operation.
Perform PITR on an unavailable instance
Console
You might want to recover an instance that isn't available to a different zone for the following reasons:
- The zone in which the instance is configured isn't accessible. This instance has a FAILED state.
- The instance is undergoing maintenance. This instance has a MAINTENANCE state.
To recover an unavailable instance, complete the following steps:
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Find the row of the instance to clone.
- In the Actions column, click the More Actions menu.
- Click Create clone.
- On the Create a clone page, complete the following actions:
- In the Instance ID field, update the instance ID, if needed.
- Click Clone from an earlier point in time.
- In the Point in time field, select a date and time from which you want to clone data. This recovers the state of the instance from that point in time.
- Click Create clone.
While the clone initializes, you're returned to the instance listing page.
gcloud
You might want to recover an instance that isn't available to a different zone because the zone in which the instance is configured isn't accessible.
gcloud sql instances clone SOURCE_INSTANCE_NAME TARGET_INSTANCE_NAME \
--point-in-time DATE_AND_TIME_STAMP \
--preferred-zone ZONE_NAME \
--preferred-secondary-zone SECONDARY_ZONE_NAME
The user or service account that's running the gcloud sql instances clone command must have the cloudsql.instances.clone permission. For more information about required permissions to run gcloud CLI commands, see Cloud SQL permissions.
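One way to grant a role that contains this permission is sketched below; the member address and the use of the broad roles/cloudsql.admin role are assumptions, and a narrower custom role might be preferable in your environment:
```
# Grant a role that includes cloudsql.instances.clone (roles/cloudsql.admin is one such role).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:someone@example.com" \
    --role="roles/cloudsql.admin"
```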
REST v1
You might want to recover an instance that isn't available to a different zone because the zone in which the instance is configured isn't accessible.
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- SOURCE_INSTANCE_NAME: the name of the source instance.
- TARGET_INSTANCE_NAME: the name of the target (cloned) instance.
- DATE_AND_TIME_STAMP: a date-and-time stamp for the source instance in the UTC time zone and in RFC 3339 format (for example, 2012-11-15T16:19:00.094Z).
- ZONE_NAME: Optional. The name of the primary zone for the target instance. This is used to specify a different primary zone for the Cloud SQL instance that you want to clone. For a regional instance, this zone replaces the primary zone, but the secondary zone remains the same as that of the source instance.
- SECONDARY_ZONE_NAME: Optional. The name of the secondary zone for the target instance. This is used to specify a different secondary zone for the regional Cloud SQL instance that you want to clone.
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone
Request JSON body:
{ "cloneContext": { "destinationInstanceName": "TARGET_INSTANCE_NAME", "pointInTime": "DATE_AND_TIME_STAMP", "preferredZone": "ZONE_NAME", "preferredSecondaryZone": "SECONDARY_ZONE_NAME" } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the clone operation.
The user or service account that's using the instances.clone API method must have the cloudsql.instances.clone permission. For more information about required permissions to use API methods, see Cloud SQL permissions.
REST v1beta4
You might want to recover an instance that isn't available to a different zone because the zone in which the instance is configured isn't accessible.
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- SOURCE_INSTANCE_NAME: the name of the source instance.
- TARGET_INSTANCE_NAME: the name of the target (cloned) instance.
- DATE_AND_TIME_STAMP: a date-and-time stamp for the source instance in the UTC time zone and in RFC 3339 format (for example, 2012-11-15T16:19:00.094Z).
- ZONE_NAME: Optional. The name of the primary zone for the target instance. This is used to specify a different primary zone for the Cloud SQL instance that you want to clone. For a regional instance, this zone replaces the primary zone, but the secondary zone remains the same as that of the source instance.
- SECONDARY_ZONE_NAME: Optional. The name of the secondary zone for the target instance. This is used to specify a different secondary zone for the regional Cloud SQL instance that you want to clone.
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone
Request JSON body:
{ "cloneContext": { "destinationInstanceName": "TARGET_INSTANCE_NAME", "pointInTime": "DATE_AND_TIME_STAMP", "preferredZone": "ZONE_NAME", "preferredSecondaryZone": "SECONDARY_ZONE_NAME" } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the clone operation.
The user or service account that's using the instances.clone API method must have the cloudsql.instances.clone permission. For more information about required permissions to use API methods, see Cloud SQL permissions.
Get the latest recovery time
For an available instance, you can perform PITR to the latest time. If the instance is unavailable and the instance logs are stored in Cloud Storage, then you can retrieve the latest recovery time and perform the PITR to that time. In both cases, you can restore the instance to a different primary or secondary zone by providing values for the preferred zones.
gcloud
Get the latest time to which you can recover a Cloud SQL instance that's not available.
Replace INSTANCE_NAME with the name of the instance that you're querying.
gcloud sql instances get-latest-recovery-time INSTANCE_NAME
REST v1
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID
- INSTANCE_NAME: the name of the instance for which you're querying for the latest recovery time
HTTP method and URL:
GET https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime
To send your request, use an HTTP client such as curl, as in the following sketch.
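A minimal curl sketch, assuming the gcloud CLI supplies the access token and that PROJECT_ID and INSTANCE_NAME are replaced with your own values:
```
curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime"
```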
You should receive a JSON response similar to the following:
{ "kind": "sql#getLatestRecoveryTime", "latestRecoveryTime": "2023-06-20T17:23:59.648821586Z" }
REST v1beta4
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID
- INSTANCE_NAME: the name of the instance for which you're querying for the latest recovery time
HTTP method and URL:
GET https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime
To send your request, use an HTTP client such as curl.
You should receive a JSON response similar to the following:
{ "kind": "sql#getLatestRecoveryTime", "latestRecoveryTime": "2023-06-20T17:23:59.648821586Z" }
Perform PITR
Console
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance you want to recover and click Create clone.
- Optionally, on the Create a clone page, update the ID of the new clone.
- Select Clone from an earlier point in time.
- Enter a PITR time.
- Click Create clone.
gcloud
Create a clone using PITR.
Replace the following:
- SOURCE_INSTANCE_NAME: the name of the instance that you're restoring from.
- NEW_INSTANCE_NAME: the name for the clone.
- TIMESTAMP: the point in time to recover to, in the UTC time zone and in RFC 3339 format (for example, 2012-11-15T16:19:00.094Z).
gcloud sql instances clone SOURCE_INSTANCE_NAME \
NEW_INSTANCE_NAME \
--point-in-time 'TIMESTAMP'
REST v1
Before using any of the request data, make the following replacements:
- project-id: The project ID
- target-instance-id: The target instance ID
- source-instance-id: The source instance ID
- restore-timestamp: the point in time to restore up to
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone
Request JSON body:
{ "cloneContext": { "kind": "sql#cloneContext", "destinationInstanceName": "target-instance-id", "pointInTime": "restore-timestamp" } }
To send your request, use an HTTP client such as curl, as in the following sketch. A successful request returns a JSON response that describes the clone operation.
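A minimal curl sketch, assuming the gcloud CLI supplies the access token and that project-id, source-instance-id, target-instance-id, and restore-timestamp are replaced with your own values:
```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "cloneContext": {
          "kind": "sql#cloneContext",
          "destinationInstanceName": "target-instance-id",
          "pointInTime": "restore-timestamp"
        }
      }' \
  "https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone"
```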
REST v1beta4
Before using any of the request data, make the following replacements:
- project-id: The project ID
- target-instance-id: The target instance ID
- source-instance-id: The source instance ID
- restore-timestamp: the point in time to restore up to
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone
Request JSON body:
{ "cloneContext": { "kind": "sql#cloneContext", "destinationInstanceName": "target-instance-id", "pointInTime": "restore-timestamp" } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the clone operation.
Deactivate PITR
Console
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance that you want to deactivate PITR on and select Edit.
- Under Customize your instance, expand the Data Protection section.
- Clear Enable point-in-time recovery.
- Click Save.
gcloud
- Deactivate point-in-time recovery:
gcloud sql instances patch INSTANCE_NAME \
--no-enable-point-in-time-recovery
- Confirm your change:
gcloud sql instances describe INSTANCE_NAME
In the backupConfiguration section, you see pointInTimeRecoveryEnabled: false if the change was successful.
REST v1
Before using any of the request data, make the following replacements:
- project-id: The project ID
- instance-id: The instance ID
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id
Request JSON body:
{ "settings": { "backupConfiguration": { "enabled": false, "pointInTimeRecoveryEnabled": false } } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the operation.
REST v1beta4
Before using any of the request data, make the following replacements:
- project-id: The project ID
- instance-id: The instance ID
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id
Request JSON body:
{ "settings": { "backupConfiguration": { "enabled": false, "pointInTimeRecoveryEnabled": false } } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the operation.
Check the storage location of transaction logs used for PITR
You can check where your Cloud SQL instance is storing the transaction logs used for PITR.
gcloud
To determine whether your instance stores logs for PITR on disk or Cloud Storage, use the following command:
gcloud sql instances describe INSTANCE_NAME
Replace INSTANCE_NAME with the name of the instance.
You can also check the storage location of the transaction logs for multiple instances in the same project. To determine the location for multiple instances, use the following command:
gcloud sql instances list --show-transactional-log-storage-state
Example response:
NAME      DATABASE_VERSION  LOCATION      TRANSACTIONAL_LOG_STORAGE_STATE
my_01     POSTGRES_12       us-central1   DISK
my_02     POSTGRES_12       us-central1   CLOUD_STORAGE
...
In the output of the command, the transactionalLogStorageState field or the TRANSACTIONAL_LOG_STORAGE_STATE column provides information about where the transaction logs used for PITR are stored for the instance. The possible transaction log storage states are the following:
- DISK: the instance stores the transaction logs used for PITR on disk. If you upgrade a Cloud SQL Enterprise edition instance to Cloud SQL Enterprise Plus edition, then the upgrade process switches the log storage location to Cloud Storage automatically. For more information, see Upgrade an instance to Cloud SQL Enterprise Plus edition by using in-place upgrade. You can also switch the storage location by using the gcloud CLI or the Cloud SQL Admin API without upgrading the edition of your instance and without incurring any downtime. For more information, see Switch transaction log storage to Cloud Storage.
- SWITCHING_TO_CLOUD_STORAGE: the instance is switching the storage location for the PITR transaction logs to Cloud Storage.
- SWITCHED_TO_CLOUD_STORAGE: the instance has completed switching the storage location for the PITR transaction logs from disk to Cloud Storage.
- CLOUD_STORAGE: the instance stores the transaction logs used for PITR in Cloud Storage.
Switch transaction log storage to Cloud Storage
If your instance stores its transaction logs used for PITR on disk, then you can switch the storage location to Cloud Storage without incurring any downtime. The overall process of switching the storage location takes approximately the duration of the transaction log retention period (days) to complete. As soon as you start the switch, transaction logs start accruing in Cloud Storage. During the operation, you can check the status of the overall process by using the command in Check the storage location of transaction logs used for PITR.
After the overall process of switching to Cloud Storage is complete, Cloud SQL uses transaction logs from Cloud Storage for PITR.
gcloud
To switch the storage location to Cloud Storage, use the following command:
gcloud sql instances patch INSTANCE_NAME \
--switch-transaction-logs-to-cloud-storage
Replace INSTANCE_NAME with the name of the instance. The instance must be a primary instance and not a replica instance. The response is similar to the following:
The following message is used for the patch API method.
{"name": "INSTANCE_NAME", "project": "PROJECT_NAME", "switchTransactionalLogsToCloudStorageEnabled": "true"}
Patching Cloud SQL instance...done.
Updated [https://sqladmin.prod.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME].
If the command returns an error, then see Troubleshoot the switch to Cloud Storage for possible next steps.
REST v1
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- INSTANCE_ID: the instance ID. The instance must be a primary instance and not a replica instance.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "switchTransactionLogsToCloudStorageEnabled": true }
To send your request, use an HTTP client such as curl, as in the following sketch. A successful request returns a JSON response that describes the operation.
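A minimal curl sketch, assuming the gcloud CLI supplies the access token and that PROJECT_ID and INSTANCE_ID are replaced with your own values:
```
curl -X PATCH \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{ "switchTransactionLogsToCloudStorageEnabled": true }' \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID"
```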
If the request returns an error, then see Troubleshoot the switch to Cloud Storage for possible next steps.
REST v1beta4
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- INSTANCE_ID: the instance ID. The instance must be a primary instance and not a replica instance.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "switchTransactionLogsToCloudStorageEnabled": true }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the operation.
If the request returns an error, then see Troubleshoot the switch to Cloud Storage for possible next steps.
Set transaction log retention
To set the number of days to retain write-ahead logs:
Console
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance that you want to set transaction log retention for and select Edit.
- Under Customize your instance, expand the Data Protection section.
- In the Enable point-in-time recovery section, expand Advanced options.
- Enter the number of days to retain logs, from 1-35 for Cloud SQL Enterprise Plus edition or 1-7 for Cloud SQL Enterprise edition.
- Click Save.
gcloud
Edit the instance to set the number of days to retain write-ahead logs:
gcloud sql instances patch INSTANCE_NAME \
--retained-transaction-log-days=DAYS_TO_RETAIN
Replace the following:
DAYS_TO_RETAIN: the number of days of transaction logs to keep. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days. If no value is specified, then the default value is used. This setting is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
REST v1
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID
- INSTANCE_ID: the instance ID
- DAYS_TO_RETAIN: the number of days to retain transaction logs. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days. If no value is specified, then the default value is used. This setting is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "settings": { "backupConfiguration": { "transactionLogRetentionDays": "DAYS_TO_RETAIN" } } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the operation.
REST v1beta4
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID
- INSTANCE_ID: the instance ID
- DAYS_TO_RETAIN: the number of days to retain transaction logs. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days. If no value is specified, then the default value is used. This setting is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "settings": { "backupConfiguration": { "transactionLogRetentionDays": "DAYS_TO_RETAIN" } } }
To send your request, use an HTTP client such as curl. A successful request returns a JSON response that describes the operation.
Troubleshoot
Issue | Troubleshooting
---|---
 | The timestamp you provided is invalid.
 | The timestamp that you provided is for a time when backups or binlog coordinates couldn't be found.
Troubleshoot the switch to Cloud Storage
The following table lists possible errors that might be returned with the INVALID REQUEST code when you switch the storage location of the transaction logs from disk to Cloud Storage.
Issue | Troubleshooting
---|---
Switching the storage location of the transaction logs used for PITR is not supported for instances with database type %s. | Make sure that you're running the gcloud CLI command or making the API request on a Cloud SQL for MySQL or Cloud SQL for PostgreSQL instance. Switching the storage location for transaction logs by using the gcloud CLI or the Cloud SQL Admin API isn't supported for Cloud SQL for SQL Server.
PostgreSQL transactional logging is not enabled on this instance. | PostgreSQL uses write-ahead logging as the transaction logs for point-in-time recovery (PITR). To support PITR, PostgreSQL requires that you enable write-ahead logging on the instance. For more information about how to enable write-ahead logging, see Enable PITR.
This instance is already storing transaction logs used for PITR in Cloud Storage. | To verify the storage location of the transaction logs, run the command in Check the storage location of transaction logs used for PITR.
The instance is already switching transaction logs used for PITR from disk to Cloud Storage. | Wait for the switch operation to complete. To verify the status of the operation and the storage location of the transaction logs, run the command in Check the storage location of transaction logs used for PITR.
What's next
- Configure flags on your clone