Known issues

This page lists known issues with Cloud SQL for PostgreSQL, along with ways you can avoid or recover from these issues.

If you are experiencing issues with your instance, make sure you also review the information in Diagnosing Issues.

Instance connection issues

  • Expired SSL/TLS certificates

    If your instance is configured to use SSL, go to the Cloud SQL Instances page in the Google Cloud console and open the instance. Open its Connections page, select the Security tab and make sure that your server certificate is valid. If it has expired, you must add a new certificate and rotate to it.
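
    For example, you can check the server certificate's expiration and rotate it from the command line. The following commands are a sketch that uses a placeholder instance name (my-instance); confirm them against your gcloud version:

    # List the server CA certificates and their expiration dates.
    gcloud sql ssl server-ca-certs list --instance=my-instance

    # If the active certificate has expired, add a new one, update your clients, then rotate to it.
    gcloud sql ssl server-ca-certs create --instance=my-instance
    gcloud sql ssl server-ca-certs rotate --instance=my-instance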

  • Cloud SQL Auth Proxy version

    If you are connecting using the Cloud SQL Auth Proxy, make sure you are using the most recent version. For more information, see Keeping the Cloud SQL Auth Proxy up to date.
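
    For example, you can print the installed version and compare it with the latest release. The binary name depends on whether you run version 2 or the legacy version 1 of the proxy:

    # Cloud SQL Auth Proxy v2
    ./cloud-sql-proxy --version

    # Legacy Cloud SQL Auth Proxy v1
    ./cloud_sql_proxy -version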

  • Not authorized to connect

    If you try to connect to an instance that does not exist in that project, the error message only says that you are not authorized to access that instance.
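
    In that case, it can help to first confirm that the instance name and project are correct. For example, with placeholder names my-project and my-instance:

    # Verify that the instance exists in the project you are connecting to.
    gcloud sql instances list --project=my-project
    gcloud sql instances describe my-instance --project=my-project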

  • Can't create a Cloud SQL instance

    If you see the error message Failed to create subnetwork. Router status is temporarily unavailable. Please try again later. Help Token: [token-ID], try to create the Cloud SQL instance again.
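
    The error is transient, so retrying the same create request is usually sufficient. A minimal sketch, with placeholder values for the instance name, version, region, and tier:

    # Retry the original create request with your own values.
    gcloud sql instances create my-instance \
        --database-version=POSTGRES_16 \
        --region=us-central1 \
        --tier=db-custom-2-8192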

  • The gcloud sql connect --user command works only with the default user ('postgres')

    If you try to connect using this command with any other user, the error message says FATAL: database 'user' does not exist. The workaround is to connect using the default user ('postgres'), then use the "\c" psql command to reconnect as the different user.
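
    For example, with placeholder values my-instance, mydb, and myuser:

    # Connect as the default user first.
    gcloud sql connect my-instance --user=postgres

    # Then, at the psql prompt, reconnect to the target database as the other user.
    postgres=> \c mydb myuser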

  • PostgreSQL connections hang when IAM database authentication is enabled

    If the Cloud SQL Auth Proxy is started using TCP sockets with the -enable_iam_login flag, a PostgreSQL client hangs during the TCP connection. One workaround is to use sslmode=disable in the PostgreSQL connection string. For example:

    psql "host=127.0.0.1 dbname=postgres user=me@google.com sslmode=disable"
    

    Another workaround is to start the Cloud SQL Auth Proxy using Unix sockets. This turns off PostgreSQL SSL encryption and lets the Cloud SQL Auth Proxy do the SSL encryption instead.
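
    For example, using the legacy (v1) proxy flag style shown above (-enable_iam_login) with placeholder project, region, and instance values; adjust the flags if you run v2 of the proxy:

    # Start the proxy on a Unix socket directory instead of a TCP port.
    ./cloud_sql_proxy -dir=/cloudsql \
        -instances=my-project:us-central1:my-instance \
        -enable_iam_login

    # Connect through the Unix socket; no sslmode setting is needed.
    psql "host=/cloudsql/my-project:us-central1:my-instance dbname=postgres user=me@google.com"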

Administrative issues

  • Only one long-running Cloud SQL import or export operation can run at a time on an instance. Before you start an operation, make sure you don't need to perform other operations on the instance. You can also cancel an operation after you start it.

    PostgreSQL imports data in a single transaction. Therefore, if you cancel the import operation, then Cloud SQL doesn't persist data from the import.
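
    For example, you can check for a running operation and cancel it with gcloud (my-instance and OPERATION_ID are placeholders):

    # List recent operations and note the ID of the one with a RUNNING status.
    gcloud sql operations list --instance=my-instance

    # Cancel it; because the import runs in a single transaction, no imported data is kept.
    gcloud sql operations cancel OPERATION_ID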

Issues with importing and exporting data

  • If your Cloud SQL instance uses PostgreSQL 17 but your databases use PostgreSQL 16 or earlier, then you can't use Cloud SQL to import those databases into your instance. To import them, use Database Migration Service instead.

  • If you use Database Migration Service to import a PostgreSQL 17 database into Cloud SQL, then it's imported as a PostgreSQL 16 database.

  • For PostgreSQL versions 15 and later, if the target database is created from template0, then importing data might fail and you might see a permission denied for schema public error message. To resolve this issue, grant public schema privileges to the cloudsqlsuperuser user by running the GRANT ALL ON SCHEMA public TO cloudsqlsuperuser SQL command.
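
    For example, while connected to the target database as a sufficiently privileged user (the connection values are placeholders):

    psql "host=127.0.0.1 dbname=mydb user=postgres" \
        -c "GRANT ALL ON SCHEMA public TO cloudsqlsuperuser;"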

  • Exporting many large objects causes the instance to become unresponsive

    If your database contains many large objects (blobs), exporting the database can consume so much memory that the instance becomes unresponsive. This can happen even if the blobs are empty.

  • Cloud SQL doesn't support customized tablespaces, but it does support migrating data from customized tablespaces to the default tablespace, pg_default, in the destination instance. For example, if you own a tablespace named dbspace located at /home/data, then after the migration, all of the data inside dbspace is migrated to pg_default. However, Cloud SQL doesn't create a tablespace named "dbspace" on its disk.

  • If you're trying to import and export data from a large database (for example, a database that has 500 GB of data or more), then the import and export operations might take a long time to complete. In addition, other operations (for example, the backup operation) aren't available for you to perform while the import or export is occurring. A potential option to improve the performance of the import and export process is to restore a previous backup by using gcloud or the API.
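
    For example, restoring a previous backup with gcloud, using placeholder instance names and backup ID:

    # List the available backups for the source instance.
    gcloud sql backups list --instance=source-instance

    # Restore one of them onto the target instance.
    gcloud sql backups restore BACKUP_ID \
        --restore-instance=target-instance \
        --backup-instance=source-instance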

  • Cloud Storage supports a maximum single-object size of up to five terabytes. If you have databases larger than 5 TB, then the export operation to Cloud Storage fails. In this case, you need to break down your export files into smaller segments.
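
    One way to keep each exported object under the limit is to export databases separately rather than in a single file. For example, with placeholder instance, bucket, and database names:

    # Export each database to its own compressed object so no single file exceeds 5 TB.
    gcloud sql export sql my-instance gs://my-bucket/db1.sql.gz --database=db1
    gcloud sql export sql my-instance gs://my-bucket/db2.sql.gz --database=db2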

Transaction logs and disk growth

Logs are purged once daily, not continuously. When the number of days of log retention is configured to be the same as the number of backups, a day of logging might be lost, depending on when the backup occurs. For example, setting log retention to seven days and backup retention to seven backups means that between six and seven days of logs will be retained.

We recommend setting the number of backups to at least one more than the days of log retention, so that at least the specified number of days of logs is retained.
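
For example, to keep seven days of transaction logs and eight automated backups (one more than the log-retention days), you might configure the instance as follows; the instance name is a placeholder, and you should confirm the flag names against your gcloud version:

    gcloud sql instances patch my-instance \
        --retained-transaction-log-days=7 \
        --retained-backups-count=8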

Instances with the following region names are displayed incorrectly in certain contexts:

  • us-central1 is displayed as us-central
  • europe-west1 is displayed as europe
  • asia-east1 is displayed as asia

This issue occurs in the following contexts:

  • Alerting in Cloud Monitoring
  • Metrics Explorer
  • Cloud Logging

You can mitigate the issue for Alerting in Cloud Monitoring and for Metrics Explorer by using Resource metadata labels. Use the system metadata label region instead of the cloudsql_database monitored resource label region.
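
As an illustration of this workaround, a Monitoring filter can reference the system metadata label rather than the monitored resource label. The metric and label syntax below is an assumption for illustration only; verify it against the Cloud Monitoring filter documentation:

    # Filter on the system metadata label "region" instead of resource.labels.region.
    metric.type="cloudsql.googleapis.com/database/cpu/utilization"
    AND metadata.system_labels."region"="us-central1"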