Best practices

This page provides best practices for getting the best performance, durability, and availability from Cloud SQL.

If you are experiencing issues with your Cloud SQL instance, see Diagnosing issues for help with troubleshooting and Known issues for a list of known issues with Cloud SQL.

Instance configuration and administration

Read and follow the operational guidelines to ensure that your instances are covered by the Cloud SQL SLA. Although operational guidelines are not yet available for PostgreSQL instances, the same general principles apply.
Configure a maintenance window for your primary instance to control when disruptive updates can occur. See Maintenance window.
For read-heavy workloads, add read replicas to offload traffic from the primary instance. Optionally, you can use a load balancer such as HAProxy to manage traffic to the replicas; a minimal read/write-splitting sketch appears at the end of this section.
If you delete and recreate instances regularly, use a timestamp in the instance ID to increase the likelihood that new instance IDs are usable; you cannot reuse the ID of a deleted instance for a few days after deletion. A short naming sketch appears at the end of this section.
Don't start an administrative operation before the previous operation has completed.

Cloud SQL instances do not accept a new operation request until they have completed the previous operation. If you attempt to start a new operation prematurely, the operation request fails. This includes instance restarts.

Note that the instance status in the GCP Console does not reflect whether an operation is running. The green checkmark denotes only that the instance is in the RUNNABLE state. To see whether an operation is running, go to the Operations tab and check the status of the most recent operation.
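
The read-replica recommendation above can be applied at the application level by sending writes to the primary instance and reads to a replica. The following is a minimal Python sketch using psycopg2; the addresses, database name, and credentials are placeholders, and in production a connection pooler or a load balancer such as HAProxy would typically sit in front of several replicas.

    import psycopg2

    # Placeholder connection strings; substitute your primary and replica addresses.
    PRIMARY_DSN = "host=10.0.0.2 dbname=appdb user=appuser password=change-me"
    REPLICA_DSN = "host=10.0.0.3 dbname=appdb user=appuser password=change-me"

    def execute_write(sql, params=()):
        """Send writes to the primary instance."""
        conn = psycopg2.connect(PRIMARY_DSN)
        try:
            with conn, conn.cursor() as cur:  # commits on success, rolls back on error
                cur.execute(sql, params)
        finally:
            conn.close()

    def execute_read(sql, params=()):
        """Offload reads to a read replica."""
        conn = psycopg2.connect(REPLICA_DSN)
        try:
            with conn, conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        finally:
            conn.close()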
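
For the instance-naming practice above, appending a creation timestamp to a base name is usually enough. A minimal sketch, assuming a hypothetical base name; Cloud SQL instance IDs must start with a letter and may contain only lowercase letters, numbers, and hyphens.

    from datetime import datetime, timezone

    def timestamped_instance_id(base_name):
        """Return an instance ID such as reporting-20250102-154500."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
        return "{}-{}".format(base_name.lower(), stamp)

    print(timestamped_instance_id("reporting"))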

Data architecture

Shard your instances where possible. Using many smaller Cloud SQL instances is better than one large instance: managing a large, monolithic instance presents challenges that a set of smaller instances does not. A simple key-based routing sketch appears at the end of this section.
Don't use too many database tables.

Too many database tables can impact instance response time. Instances with more than 10,000 tables are not covered by the SLA. See Operational guidelines for more information.

Although operational guidelines are not yet available for PostgreSQL instances, the same general principles apply.
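
One common way to apply the sharding recommendation above is to route each tenant or key range to its own smaller instance at the application level. The sketch below is illustrative only; the shard map, addresses, and hashing scheme are hypothetical choices rather than a prescribed design.

    import hashlib

    # Hypothetical shard map: each shard is a separate, smaller Cloud SQL instance.
    SHARD_DSNS = [
        "host=10.0.1.2 dbname=appdb user=appuser password=change-me",  # shard 0
        "host=10.0.1.3 dbname=appdb user=appuser password=change-me",  # shard 1
        "host=10.0.1.4 dbname=appdb user=appuser password=change-me",  # shard 2
    ]

    def dsn_for_key(shard_key):
        """Map a shard key (for example, a customer ID) to one instance."""
        digest = hashlib.sha256(str(shard_key).encode("utf-8")).hexdigest()
        return SHARD_DSNS[int(digest, 16) % len(SHARD_DSNS)]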

Application implementation

Use good connection management practices, such as connection pooling and exponential backoff. Using these techniques improves your application's use of resources and helps you stay within Cloud SQL connection limits. For more information and code samples, see Managing database connections; a minimal pooling sketch also appears at the end of this section.
Test your application's response to maintenance updates, which can happen at any time during the maintenance window. Changing the machine type of an instance is the closest approximation of a maintenance update. The application should attempt to reconnect to the database, preferably using exponential backoff, for at least 10 minutes to ensure that it resumes operation after a maintenance event. For more information, see Managing database connections; a reconnect sketch with backoff appears at the end of this section.
Test your application's response to failovers, which can happen at any time. You can manually initiate a failover using the GCP Console, the gcloud command line tool, or the API. See Initiating failover.
Avoid very large transactions. Keep transactions small and short. If a large database update is needed, do it in several smaller transactions rather than one large transaction, as in the batching sketch at the end of this section.
If you are using the Cloud SQL Proxy, make sure you are using the most up-to-date version. See Keeping the Cloud SQL Proxy up to date.
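
As an illustration of the connection-management practice above, the following sketch configures a small connection pool with SQLAlchemy. The connection URL and pool settings are placeholders to adapt to your own connection limits.

    import sqlalchemy

    # Placeholder URL; in production, keep credentials out of source code.
    engine = sqlalchemy.create_engine(
        "postgresql+psycopg2://appuser:change-me@10.0.0.2:5432/appdb",
        pool_size=5,        # steady-state connections kept open
        max_overflow=2,     # extra connections allowed under load
        pool_timeout=30,    # seconds to wait for a free connection
        pool_recycle=1800,  # recycle connections before the server drops them
    )

    with engine.connect() as conn:
        print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())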
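
For the maintenance and failover practices above, a reconnect loop with exponential backoff might look like the following sketch, which retries for up to ten minutes before giving up. The timing values and connection string are placeholders.

    import random
    import time

    import psycopg2

    DSN = "host=10.0.0.2 dbname=appdb user=appuser password=change-me"  # placeholder

    def connect_with_backoff(max_wait_seconds=600):
        """Retry the connection with exponential backoff and jitter."""
        deadline = time.monotonic() + max_wait_seconds
        delay = 1
        while True:
            try:
                return psycopg2.connect(DSN)
            except psycopg2.OperationalError:
                if time.monotonic() >= deadline:
                    raise
                # Sleep, then double the delay up to a 60-second cap, with jitter.
                time.sleep(delay + random.uniform(0, 1))
                delay = min(delay * 2, 60)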
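
The transaction-size practice above usually means committing a large change in batches. A minimal sketch, assuming a hypothetical orders table with an indexed id column and an archived flag:

    import psycopg2

    DSN = "host=10.0.0.2 dbname=appdb user=appuser password=change-me"  # placeholder

    def archive_in_batches(batch_size=1000):
        """Apply a large update as many small transactions instead of one big one."""
        conn = psycopg2.connect(DSN)
        try:
            while True:
                with conn, conn.cursor() as cur:  # each iteration is one transaction
                    cur.execute(
                        """
                        UPDATE orders
                           SET archived = TRUE
                         WHERE id IN (SELECT id FROM orders
                                       WHERE archived = FALSE
                                       LIMIT %s)
                        """,
                        (batch_size,),
                    )
                    if cur.rowcount == 0:
                        break
        finally:
            conn.close()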

Data import and export

Speed up imports for small instances. When importing large data sets into a small instance, you can temporarily increase the instance's tier to improve performance; a sketch of scripting this change appears at the end of this section.
If you are exporting data for import into Cloud SQL, be sure to use the proper procedure. See Exporting data from an externally-managed database server.
Protect your data with the appropriate Cloud SQL functionality.

Backups and exports are two ways to provide data redundancy and protection. They each protect against different scenarios and complement each other in a robust data protection strategy.

Backups are lightweight and quick to create; they provide a way to restore the data on your instance to its state at the time you took the backup. However, backups have some limitations. If you delete the instance, its backups are also deleted. You can't back up a single database or table. And if the region where the instance is located is unavailable, you cannot restore from that backup, even to an instance in an available region.

Exports take longer to create, because an external file is created in Cloud Storage that can be used to recreate your data. Exports are unaffected if you delete the instance. In addition, you can export a single database or even a single table, depending on the export format you choose; see the export sketch at the end of this section.
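
The temporary tier increase described above can also be scripted. The sketch below uses the Cloud SQL Admin API through the google-api-python-client library with Application Default Credentials; the project, instance, and tier values are placeholders, and the GCP Console or the gcloud tool work just as well.

    from googleapiclient import discovery

    service = discovery.build("sqladmin", "v1beta4")

    # Move the instance to a larger tier before a big import...
    request = service.instances().patch(
        project="my-project",      # placeholder project ID
        instance="my-instance",    # placeholder instance ID
        body={"settings": {"tier": "db-custom-4-15360"}},  # placeholder target tier
    )
    operation = request.execute()
    print(operation["name"])
    # ...then patch the tier back to its original value after the import completes.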
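
Similarly, an export of a single database to Cloud Storage can be triggered through the same Admin API; the bucket, project, instance, and database names below are placeholders.

    from googleapiclient import discovery

    service = discovery.build("sqladmin", "v1beta4")

    request = service.instances().export(
        project="my-project",    # placeholder
        instance="my-instance",  # placeholder
        body={
            "exportContext": {
                "fileType": "SQL",                            # SQL dump format
                "uri": "gs://my-bucket/appdb-export.sql.gz",  # placeholder bucket
                "databases": ["appdb"],                       # export just one database
            }
        },
    )
    operation = request.execute()
    print(operation["name"])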
