Diagnosing issues with Cloud SQL instances

This page lists the most frequent issues you might run into when working with Cloud SQL instances and steps you can take to address them. Also review the Known issues, Troubleshooting, and Support pages.

Viewing logs

To see information about recent operations, you can view the Cloud SQL instance operation logs or the PostgreSQL error logs.

Common connection issues

Verify that your application is closing connections properly

If you see errors containing "Aborted connection nnnn to db:", it usually indicates that your application is not closing connections properly. Network issues can also cause this error. In either case, the error does not indicate a problem with your Cloud SQL instance.

For examples of best practices for connection management, see Managing connections.
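As an illustrative sketch (not from the Cloud SQL documentation), the pattern looks like this in Python, using the stdlib sqlite3 module as a stand-in for a PostgreSQL driver such as psycopg2; the with closing(...) idiom guarantees the connection is closed even when the query raises an exception:

```python
import sqlite3  # stand-in for a PostgreSQL driver such as psycopg2
from contextlib import closing

def fetch_row(db_path: str):
    # closing() guarantees conn.close() runs even if the query raises,
    # which avoids leaking the connection (and the server-side
    # "aborted connection" log entries that leaked connections produce).
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute("SELECT 1").fetchone()

print(fetch_row(":memory:"))  # (1,)
```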

Verify that your certificates have not expired

If your instance is configured to use SSL, go to the Cloud SQL Instances page in the Cloud Console and open the instance. Open its Connections page and make sure that your server certificate is valid. If it has expired, you must add a new certificate and rotate to it. Learn more.

Verify that you are authorized to connect

If your connections are failing, check that you are authorized to connect:

  • If you are having trouble connecting using an IP address (for example, connecting from your on-premises environment with the psql client), make sure that the IP address you are connecting from is authorized to connect to the Cloud SQL instance.

    Connections to a Cloud SQL instance using a private IP address are automatically authorized for RFC 1918 address ranges. This way, all private clients can access the database without going through the proxy. Non-RFC 1918 address ranges must be configured as authorized networks.

    Cloud SQL doesn't learn non-RFC 1918 subnet routes from your VPC by default. You need to update the network peering to Cloud SQL to export any non-RFC 1918 routes. For example:

    gcloud compute networks peerings update cloudsql-postgres-googleapis-com --network=NETWORK --export-subnet-routes-with-public-ip --project=PROJECT
  • Try the gcloud sql connect command to connect to your instance. This command authorizes your IP address for a short time. You can run it in any environment with the Cloud SDK and the psql client installed, or in Cloud Shell, which is available in the Google Cloud Console and has both pre-installed. Cloud Shell provides a Compute Engine instance that you can use to connect to Cloud SQL.
  • Temporarily allow all IP addresses to connect to an instance by authorizing 0.0.0.0/0 (all IPv4 addresses).

Understanding connection limits

There are no QPS limits for Cloud SQL instances. However, there are connection, size, and App Engine-specific limits in place. See Quotas and Limits.

Database connections consume resources on the server and the connecting application. Always use good connection management practices to minimize your application's footprint and reduce the likelihood of exceeding Cloud SQL connection limits. For more information, see Managing database connections.
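One widely used practice is reusing a small, fixed pool of connections rather than opening a new one per request, which keeps you well under the instance's connection limit. Below is a minimal sketch of the idea (an illustration only, again using sqlite3 as a placeholder driver; production code would normally use a driver's or framework's built-in pooling):

```python
import queue
import sqlite3  # placeholder for a real PostgreSQL driver


class ConnectionPool:
    """Tiny fixed-size pool: callers borrow a connection and must return it."""

    def __init__(self, size: int, factory):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout: float = 5.0):
        # Blocks (up to timeout) when all connections are borrowed, which
        # naturally caps concurrent connections at the pool size.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)


pool = ConnectionPool(size=2, factory=lambda: sqlite3.connect(":memory:"))
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone())  # (1,)
pool.release(conn)
```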

Show connections, processes, and threads

To see the processes that are running on your database, query the pg_stat_activity view:

SELECT * FROM pg_stat_activity;

Connections that show a client IP address are connecting over IP. Connections that show cloudsqlproxy~ are using the Cloud SQL Proxy, or else they originated from App Engine. Connections from localhost may be used by some internal Cloud SQL processes.

Connection timeouts (from Compute Engine)

Connections with a Compute Engine instance time out after 10 minutes of inactivity, which can affect long-lived unused connections between your Compute Engine instance and your Cloud SQL instance. For more information, see Networking and Firewalls in the Compute Engine documentation.

To keep long-lived unused connections alive, you can set the TCP keepalive. The following commands set the TCP keepalive value to one minute and make the configuration permanent across instance reboots.

Display the current tcp_keepalive_time value.

cat /proc/sys/net/ipv4/tcp_keepalive_time

Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.

echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf

Apply the change.

sudo /sbin/sysctl --load=/etc/sysctl.conf

Display the tcp_keepalive_time value to verify the change was applied.

cat /proc/sys/net/ipv4/tcp_keepalive_time
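The sysctl approach above changes the keepalive interval system-wide and requires root. As an application-level alternative (a sketch, not from the Cloud SQL documentation), you can enable keepalive on individual sockets instead; the TCP_KEEPIDLE option shown is Linux-specific:

```python
import socket

def keepalive_socket(idle_secs: int = 60) -> socket.socket:
    """Create a TCP socket whose keepalive probes start after idle_secs."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific socket option
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_secs)
    return s

s = keepalive_socket()
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
s.close()
```

Most PostgreSQL drivers expose the same options as connection parameters, so check your driver's documentation before dropping down to raw sockets.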

Additional connectivity troubleshooting

For other connection issues, see the Connectivity section in the troubleshooting page.

Instance issues


Backups

For the best backup performance, keep the number of tables in your instance to a reasonable number.

For other backup issues, see the Backups section in the troubleshooting page.

Import and export

Imports into and exports out of Cloud SQL using the import or export functionality (with a Cloud Storage bucket) can take a long time to complete, depending on the size of the database. This has the following impacts:

  • You cannot stop a long-running Cloud SQL instance operation.
  • You can perform only one import or export operation at a time for each instance.

You can decrease the amount of time it takes to complete each operation by using the Cloud SQL import or export functionality with smaller batches of data.

For exports, you can use serverless export to minimize the impact on database performance and allow other operations to run on your instance while an export is running.

For other import and export issues, see the Import and export section in the troubleshooting page.

Disk space

If your instance reaches the maximum storage amount allowed, writes to the database fail. If you delete data, for example by dropping a table, the space is freed but is not reflected in the reported Storage Used of the instance. You can run the VACUUM FULL command to recover unused space; note that VACUUM FULL takes an exclusive lock on each table it processes, blocking other operations on that table while it runs. Learn more.

Suspended state

There are various reasons why Cloud SQL may suspend an instance, including:

  • Billing issues

    For example, if the credit card for the project's billing account has expired, the instance may be suspended. You can check the billing information for a project by going to the Google Cloud Console billing page, selecting the project, and viewing the billing account information used for the project. After you resolve the billing issue, the instance returns to runnable status within a few hours.

  • KMS key issues

    For example, if the KMS key version used to encrypt the user data in the Cloud SQL instance is not present, or if it has been disabled or destroyed. See Using customer-managed encryption keys (CMEK).

  • Legal issues

    For example, a violation of the Google Cloud Acceptable Use Policy may cause the instance to be suspended. For more information, see "Suspensions and Removals" in the Google Cloud Terms of Service.

  • Operational issues

    For example, if an instance is stuck in a crash loop (it crashes while starting or just after starting), Cloud SQL may suspend it.

While an instance is suspended, you can continue to view information about it, and, if billing issues triggered the suspension, you can delete it.

Cloud SQL users with Platinum, Gold, or Silver support packages can contact our support team directly about suspended instances. All users can use the guidance above along with the google-cloud-sql forum.

Keep a reasonable number of database tables

Database tables consume system resources. A large number of tables can affect instance performance and availability, and can cause the instance to lose its SLA coverage. Learn more.

General performance tips

Make sure that your instance is not constrained on memory or CPU. For performance-intensive workloads, ensure your instance has at least 60 GB of memory.

For slow database inserts, updates, or deletes, check the locations of the writer and database; sending data a long distance introduces latency.

Troubleshoot query performance problems using Query Insights.

For slow database selects, consider the following:

  • Caching is important for read performance. Check the various blks_hit / (blks_hit + blks_read) ratios from the PostgreSQL Statistics Collector. Ideally, the ratio is above 99%. If not, consider increasing the size of your instance's RAM.
  • If your workload consists of CPU intensive queries (sorting, regular expressions, other complex functions), your instance might be throttled; add vCPUs.
  • Check the locations of the reader and the database; latency affects read performance even more than write performance.
  • Investigate non-Cloud SQL specific performance improvements, such as adding appropriate indexing, reducing data scanned, and avoiding extra round trips.

If you observe poor performance executing queries, use EXPLAIN to identify which tables need additional indexes. For example, make sure every field that you use as a JOIN key has an index on both tables.
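The cache-hit ratio from the first bullet is simple arithmetic over the blks_hit and blks_read counters reported by the Statistics Collector; a sketch with illustrative numbers (not taken from a real instance):

```python
def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Fraction of block reads served from PostgreSQL's buffer cache."""
    total = blks_hit + blks_read
    # With no reads yet, report a perfect ratio rather than dividing by zero.
    return blks_hit / total if total else 1.0

# Illustrative counter values, not from a real instance:
ratio = cache_hit_ratio(blks_hit=995_000, blks_read=5_000)
print(f"{ratio:.1%}")  # 99.5%, just above the ~99% target
```

A ratio consistently below roughly 99% suggests the working set no longer fits in memory and a larger instance tier may help.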

Additional Cloud SQL instance troubleshooting

For other Cloud SQL instance issues, see the Managing instances section in the troubleshooting page.

Error messages

For specific API error messages, see the Error messages reference page.

Troubleshooting customer-managed encryption keys (CMEK)

Cloud SQL administrator operations, such as create, clone, or update, might fail due to Cloud KMS errors or missing roles and permissions. Common reasons for failure include a missing Cloud KMS key version, a disabled or destroyed Cloud KMS key version, insufficient IAM permissions to access the Cloud KMS key version, or a Cloud KMS key version in a different region than the Cloud SQL instance. Use the following troubleshooting table to diagnose and resolve common problems.

Customer-managed encryption keys troubleshooting table

For each error below, the likely issue and the suggested fix:

  • Error: Per-product, per-project service account not found
    Issue: The service account name is incorrect.
    Try this: Make sure you created a service account for the correct user project.

  • Error: Cannot grant access to the service account
    Issue: The user account does not have permission to grant access to this key version.
    Try this: Add the Organization Administrator role on your user or service account.

  • Error: Cloud KMS key version is destroyed
    Issue: The key version is destroyed.
    Try this: If the key version is destroyed, you cannot use it to encrypt or decrypt data.

  • Error: Cloud KMS key version is disabled
    Issue: The key version is disabled.
    Try this: Re-enable the Cloud KMS key version.

  • Error: Insufficient permission to use the Cloud KMS key
    Issue: The cloudkms.cryptoKeyEncrypterDecrypter role is missing on the user or service account you are using to run operations on Cloud SQL instances, or the Cloud KMS key version doesn't exist.
    Try this: Add the cloudkms.cryptoKeyEncrypterDecrypter role on your user or service account. If the role is already on your account, see Creating a key to learn how to create a new key version. See note.

  • Error: Cloud KMS key is not found
    Issue: The key version does not exist.
    Try this: Create a new key version. See Creating a key. See note.

  • Error: Cloud SQL instance and Cloud KMS key version are in different regions
    Issue: The Cloud KMS key version and Cloud SQL instance must be in the same region. A key version in a global region or multi-region does not work.
    Try this: Create a key version in the same region where you want to create instances. See Creating a key. See note.