Upgrade the database major version in-place

This page describes how to upgrade the database major version by upgrading your Cloud SQL instance in-place rather than by migrating data.

Introduction

Database software providers periodically release new major versions that contain new features, performance improvements, and security enhancements. Cloud SQL adds support for new versions after they're released. After Cloud SQL offers support for a new major version, you can upgrade your instances to keep your database updated.

You can upgrade the database version of an instance in-place or by migrating data. In-place upgrades are a simpler way to upgrade your instance's major version. You don't need to migrate data or change application connection strings. With in-place upgrades, you can retain the name, IP address, and other settings of your current instance after the upgrade. In-place upgrades don't require you to move data files and can be completed faster. In some cases, the downtime is shorter than what migrating your data entails.

The Cloud SQL for PostgreSQL in-place upgrade operation uses the pg_upgrade utility.

Plan a major version upgrade

  1. Choose a target major version.

    gcloud

    For information about installing and getting started with the gcloud CLI, see Install the gcloud CLI. For information about starting Cloud Shell, see Use Cloud Shell.

    To check the database versions that you can target for an in-place upgrade on your instance, do the following:

    1. Run the following command:

      gcloud sql instances describe INSTANCE_NAME
         

      Replace INSTANCE_NAME with the name of the instance.

    2. In the output of the command, locate the section that is labeled upgradableDatabaseVersions.
    3. Each subsection returns a database version that is available for upgrade. In each subsection, review the following fields.
      • majorVersion: the major version that you can target for the in-place upgrade.
      • name: the database version string that includes the major version.
      • displayName: the display name for the database version.

    REST v1

    To check which target database versions are available for a major version in-place upgrade, use the instances.get method of the Cloud SQL Admin API.

    Before using any of the request data, make the following replacements:

    • PROJECT_ID: The project ID.
    • INSTANCE_NAME: The instance name.

    HTTP method and URL:

    GET https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME


    You should receive a JSON response similar to the following:

    
    upgradableDatabaseVersions:
    {
      major_version: "POSTGRES_15_0"
      name: "POSTGRES_15_0"
      display_name: "PostgreSQL 15.0"
    }
    
    

    REST v1beta4

    To check which target database versions are available for a major version in-place upgrade of an instance, use the instances.get method of the Cloud SQL Admin API.

    Before using any of the request data, make the following replacements:

    • PROJECT_ID: The project ID.
    • INSTANCE_NAME: The instance name.

    HTTP method and URL:

    GET https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME


    You should receive a JSON response similar to the following:

    
    upgradableDatabaseVersions:
    {
      major_version: "POSTGRES_15_0"
      name: "POSTGRES_15_0"
      display_name: "PostgreSQL 15.0"
    }
    
    

    For the complete list of the database versions that Cloud SQL supports, see Database versions and version policies.

  2. Consider the features offered in each database major version and address incompatibilities.

    New major versions introduce incompatible changes that might require you to modify the application code, the schema, or the database settings. Before you can upgrade your database instance, review the release notes of your target major version to determine the incompatibilities that you must address.

  3. Test the upgrade with a dry run.

    Perform a dry run of the end-to-end upgrade process in a test environment before you upgrade the production database. You can clone your instance to create an identical copy of the data on which to test the upgrade process.

    In addition to validating that the upgrade completes successfully, run tests to ensure that the application behaves as expected on the upgraded database.

  4. Decide on a time to upgrade.

    Upgrading requires the instance to become unavailable for a period of time. Plan to upgrade during a time period when database activity is low.

Prepare for a major version upgrade

Before you upgrade, complete the following steps.

  1. Check the LC_COLLATE value for the template and postgres databases. The character set for each database must be en_US.UTF8.

    If the LC_COLLATE value for either the template or postgres database isn't en_US.UTF8, then the major version upgrade fails. To fix this, change the LC_COLLATE value to en_US.UTF8 before you perform the upgrade.
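
    To check the current values, a query like the following may help (a sketch that assumes the template database is template1; the datcollate column of pg_database stores the LC_COLLATE setting):

      SELECT datname,
             pg_encoding_to_char(encoding) AS encoding,
             datcollate
      FROM pg_database
      WHERE datname IN ('template1', 'postgres');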

    To change the encoding of a database:

    1. Dump your database.
    2. Drop your database.
    3. Create a new database with the different encoding (for this example, en_US.UTF8); see the sketch after these steps.
    4. Reload your data.
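
    For step 3, a minimal sketch of the CREATE DATABASE statement (new_db is a placeholder name):

      -- template0 is required when the new collation differs from the
      -- one used by template1.
      CREATE DATABASE new_db
        ENCODING 'UTF8'
        LC_COLLATE 'en_US.UTF8'
        LC_CTYPE 'en_US.UTF8'
        TEMPLATE template0;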

    Another option is to rename the database:

    1. Close all connections to the database.
    2. Rename the database.
    3. Update your application configurations to use the new database name.
    4. Create a new, empty database with the default encoding.

    We recommend that you perform these steps on a cloned instance before applying them to a production instance.

  2. Manage your read replicas.

    Cloud SQL for PostgreSQL does not support cross-version replication, which means that you cannot upgrade the primary instance while the instance is replicating to the read replicas. Before upgrading, either disable replication for each read replica or delete the read replicas.

  3. If Cloud SQL is the logical replication source, disable pglogical extension replication as follows. You can enable it again after the upgrade. If Cloud SQL is the logical replication target, these steps are not required.
    1. Disable the subscription and disconnect the replica from the provider by using the following command:
      SELECT * FROM pglogical.alter_subscription_disable(subscription_name name, immediate bool);
      

      Replace name with the name of the existing subscription.

      Set the value of the immediate parameter to true if the subscription needs to be disabled immediately. By default, the value is false and the subscription is disabled only after the current transaction ends.

      For example:

      postgres=> SELECT * FROM pglogical.alter_subscription_disable('test_sub', true);
       alter_subscription_disable
      ----------------------------
       t
      (1 row)
      
    2. Drop the replication slot by connecting to the publisher or the Cloud SQL primary instance and running the following command:
      SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots
        WHERE slot_name IN (SELECT slot_name FROM pg_replication_slots);

      For example:

      postgres=> SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots
      postgres->    WHERE slot_name IN (SELECT slot_name FROM pg_replication_slots);
      -[ RECORD 1 ]------------+-
      pg_drop_replication_slot |
      
      postgres=>
      
  4. Manage your remaining PostgreSQL extensions.

    Most extensions work on the upgraded database major version. Drop any extensions that are no longer supported in your target version. For example, drop the chkpass extension if you're upgrading to PostgreSQL 11 or later versions.

    You can upgrade PostGIS and its related extensions to their latest supported versions manually.
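
    For example, a sketch of updating the extension in one database (the exact target version depends on what your instance supports):

      -- Update the core PostGIS extension to the default version available
      -- on the instance.
      ALTER EXTENSION postgis UPDATE;

      -- PostGIS 2.5 and later also provide a helper that upgrades postgis
      -- and its related extensions together.
      SELECT postgis_extensions_upgrade();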

    Sometimes, upgrading from PostGIS versions 2.x can create a situation where there are leftover database objects that aren't associated with the PostGIS extension. This can block the upgrade operation. For information about resolving this issue, see Fixing a broken postgis raster install.

    Sometimes, upgrading to PostGIS version 3.1.7 or later can't complete due to objects using deprecated functions. This can block the upgrade operation. To check the upgrade status, run SELECT PostGIS_full_version();. If there are warnings present, then drop any objects using the deprecated functions and update the PostGIS extension to any intermediate or higher version. After you complete these actions, run the SELECT PostGIS_full_version(); command again. Verify that no warnings appear. Then, proceed with the upgrade operation.

    To learn more about upgrading your PostGIS extensions, see Upgrading PostGIS. For issues associated with upgrading PostGIS, see Check the version of your PostgreSQL instance.
  5. Manage your custom database flags. Check the names of any custom database flags that you configured for your PostgreSQL instance. For issues associated with these flags, see Check the custom flags for your PostgreSQL instance.
  6. When performing an upgrade from one major version to another, attempt to connect to each database to check for compatibility issues. Ensure that connections to each of your databases are allowed by checking the datallowconn field for each database. A t value means that connections are allowed, and an f value indicates that connections can't be established. You can review this field with the query that follows this list.
  7. If you use Datadog and you're upgrading your Cloud SQL instance to PostgreSQL 10 or later versions, then before you perform the upgrade, drop the pg_stat_activity() function.
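
To review the datallowconn field described in step 6 of the preceding list, a query like the following may help (a sketch; run it in any database on the instance):

  SELECT datname, datallowconn
  FROM pg_database
  ORDER BY datname;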

Known limitations

The following limitations affect in-place major version upgrades for Cloud SQL for PostgreSQL:

  • You can't perform an in-place major version upgrade on an external replica.
  • Upgrading instances that have more than 1,000 databases from one version to another might take a long time and time out.
  • Use the select * from pg_largeobject_metadata; statement to query for the number of large objects in each PostgreSQL database of your Cloud SQL instance. If the result from all of your databases is more than 10 million large objects, then the upgrade fails. Cloud SQL rolls back to the previous version of your database. A counting variant of this query is sketched after this list.
  • Before you perform an in-place major version upgrade to PostgreSQL 16 and later, upgrade the PostGIS extension for all of your databases to version 3.4.0.
  • If you're using PostgreSQL versions 9.6, 10, 11, or 12, then version 3.4.0 of the PostGIS extension isn't supported. Therefore, to perform an in-place major version upgrade to PostgreSQL 16 and later, you must first upgrade to an intermediate version of PostgreSQL (versions 13, 14, or 15).
  • If you install the pgRouting and pg_squeeze extensions for your instance, then you can't perform a major version upgrade. To fix this, uninstall these extensions and then perform the upgrade. For more information about the extensions, see Configure PostgreSQL extensions.

  • If you enable the vacuum_defer_cleanup_age and force_parallel_mode flags, then you can't perform a major version upgrade. To fix this, delete these flags and then perform the upgrade. For more information about the flags, including how to delete them, see Configure database flags.
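
To count large objects (see the limitation on large objects in the preceding list), a query like the following may help (a sketch; run it in each database of the instance and add up the results):

  SELECT count(*) AS large_object_count
  FROM pg_largeobject_metadata;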

Upgrade the database major version in-place

When you initiate an upgrade operation, Cloud SQL first checks the configuration of your instance to ensure that it's compatible for an upgrade. After verifying your configuration, Cloud SQL makes your instance unavailable, makes a pre-upgrade backup, performs the upgrade, makes your instance available, and makes a post-upgrade backup.

Console

  1. In the Google Cloud console, go to the Cloud SQL Instances page.

    Go to Cloud SQL Instances

  2. To open the Overview page of an instance, click the instance name.
  3. Click Edit.
  4. In the Instance info section, click the Upgrade button and confirm that you want to go to the upgrade page.
  5. On the Choose a database version page, click the Database version for upgrade list and select one of the available database major versions.
  6. Click Continue.
  7. In the Instance ID box, enter the name of the instance and then click the Start upgrade button.
The operation takes several minutes to complete.

Verify that the upgraded database major version appears below the instance name on the instance Overview page.

gcloud

  1. Start the upgrade.

    Use the gcloud sql instances patch command with the --database-version flag.

    Before running the command, replace the following:

    • INSTANCE_NAME: The name of the instance.
    • DATABASE_VERSION: The enum for the database major version, which must be later than the current version. Specify a database version for a major version that is available as an upgrade target for the instance. You can obtain this enum as the first step of Plan a major version upgrade. If you need a complete list of database version enums, then see SqlDatabaseVersion.
    gcloud sql instances patch INSTANCE_NAME \
    --database-version=DATABASE_VERSION
    

    Major version upgrades take several minutes to complete. You might see a message indicating that the operation is taking longer than expected. You can either ignore this message or run the gcloud sql operations wait command to dismiss the message.

  2. Get the upgrade operation name.

    Use the gcloud sql operations list command with the --instance flag.

    Before running the command, replace the INSTANCE_NAME variable with the name of the instance.

    gcloud sql operations list --instance=INSTANCE_NAME
    
  3. Monitor the status of the upgrade.

    Use the gcloud sql operations describe command.

    Before running the command, replace the OPERATION variable with the upgrade operation name retrieved in the previous step.

    gcloud sql operations describe OPERATION
    

REST v1

  1. Start the in-place upgrade.

    Use a PATCH request with the instances:patch method.

    Before using any of the request data, replace these variables:

    • PROJECT_ID: The ID of the project.
    • INSTANCE_NAME: The name of the instance.

    HTTP method and URL:

    PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME
    

    Request JSON body:

    {
      "databaseVersion": "DATABASE_VERSION"
    }
    

    Replace DATABASE_VERSION with the enum for the database major version, which must be later than the current version. Specify a database version for a major version that is available as an upgrade target for the instance. You can obtain this enum as the first step of Plan a major version upgrade. If you need a full list of database version enums, then see SqlDatabaseVersion.

  2. Get the upgrade operation name.

    Use a GET request with the operations.list method after replacing PROJECT_ID with the ID of the project.

    HTTP method and URL:

    GET https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations
    
  3. Monitor the status of the upgrade.

    Use a GET request with the operations.get method after replacing the following variables:

    • PROJECT_ID: The ID of the project.
    • OPERATION_NAME: The upgrade operation name retrieved in the previous step.

    HTTP method and URL:

    GET https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_NAME
    

Terraform

To update the version of the database, use a Terraform resource and the Terraform provider for Google Cloud, version 4.34.0 or later.

resource "google_sql_database_instance" "instance" {
  name             = "postgres-instance"
  region           = "us-central1"
  database_version = "POSTGRES_14"
  settings {
    tier = "db-custom-2-7680"
  }
  # Setting `deletion_protection` to true prevents accidental deletion of this instance
  # through Terraform, whereas the `deletion_protection_enabled` flag protects the
  # instance at the Google Cloud level.
  deletion_protection = false
}

Apply the changes

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.

Prepare Cloud Shell

  1. Launch Cloud Shell.
  2. Set the default Google Cloud project where you want to apply your Terraform configurations.

    You only need to run this command once per project, and you can run it in any directory.

    export GOOGLE_CLOUD_PROJECT=PROJECT_ID

    Environment variables are overridden if you set explicit values in the Terraform configuration file.

Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module).

  1. In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension—for example main.tf. In this tutorial, the file is referred to as main.tf.
    mkdir DIRECTORY && cd DIRECTORY && touch main.tf
  2. If you are following a tutorial, you can copy the sample code in each section or step.

    Copy the sample code into the newly created main.tf.

    Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.

  3. Review and modify the sample parameters to apply to your environment.
  4. Save your changes.
  5. Initialize Terraform. You only need to do this once per directory.
    terraform init

    Optionally, to use the latest Google provider version, include the -upgrade option:

    terraform init -upgrade

Apply the changes

  1. Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
    terraform plan

    Make corrections to the configuration as necessary.

  2. Apply the Terraform configuration by running the following command and entering yes at the prompt:
    terraform apply

    Wait until Terraform displays the "Apply complete!" message.

  3. Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.

Delete the changes

To delete your changes, do the following:

  1. To disable deletion protection, set the deletion_protection argument to false in your Terraform configuration file.
    deletion_protection = false
  2. Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
    terraform apply
  3. Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:

    terraform destroy

When you place an in-place upgrade request, Cloud SQL first performs a pre-upgrade check. If Cloud SQL determines that your instance isn't ready for an upgrade, then your upgrade request fails with a message suggesting how you can address the issue. See also Troubleshoot a major version upgrade.

Automatic upgrade backups

When you perform a major version upgrade, Cloud SQL automatically makes two on-demand backups, called upgrade backups:

  • The first upgrade backup is the pre-upgrade backup, which is made immediately before starting the upgrade. You can use this backup to restore your database instance to its state on the previous version.
  • The second upgrade backup is the post-upgrade backup, which is made immediately after new writes are allowed to the upgraded database instance.

When you view your list of backups, the upgrade backups are listed with type On-demand. Upgrade backups are labeled so that you can identify them quickly. For example, if you're upgrading from PostgreSQL 14 to PostgreSQL 15, your pre-upgrade backup is labeled Pre-upgrade backup, POSTGRES_14 to POSTGRES_15, and your post-upgrade backup is labeled Post-upgrade backup, POSTGRES_14 to POSTGRES_15.

As with other on-demand backups, upgrade backups persist until you delete them or delete the instance. If you have point-in-time recovery (PITR) enabled, you can't delete your upgrade backups while they're in your retention window. If you need to delete your upgrade backups, you must disable PITR or wait until your upgrade backups are no longer in your retention window.

Complete the major version upgrade

Once you've finished upgrading your primary instance, perform the following steps to complete your upgrade:

  1. Enable pglogical replication if your instance used it before the upgrade. Doing this automatically creates the necessary replication slot.
    1. Drop the pglogical subscription on the destination replica by using the following command:
      select pglogical.drop_subscription(subscription_name name);
      

      Replace name with the name of the existing subscription.

      For example:

      postgres=> select pglogical.drop_subscription(subscription_name := 'test_sub');
      -[ RECORD 1 ]-----+--
      drop_subscription | 1
      
    2. Recreate the pglogical subscription on the destination (replica) by providing the connection details of the Cloud SQL primary instance, as follows:
      SELECT pglogical.create_subscription(
          subscription_name := 'test_sub',
          provider_dsn := 'host=primary-ip port=5432 dbname=postgres user=replication_user password=replicapassword'
      ); 

      For example:

      postgres=> SELECT pglogical.create_subscription(
      postgres(>     subscription_name := 'test_sub',
      postgres(>     provider_dsn := 'host=10.58.64.90 port=5432 dbname=postgres user=postgres password=postgres'
      postgres(> );
      -[ RECORD 1 ]-------+-----------
      create_subscription | 2769129391
      
    3. Check the status of the subscription by using the following command:
      SELECT * FROM pglogical.show_subscription_status('test_sub');
      
    4. Test the replication by performing write transactions and verifying that the changes are visible on the destination.
  2. Upgrade the read replicas.

    If you stopped replication to read replicas, upgrade them one by one. You can use any of the methods used to upgrade the primary instance. When you upgrade a replica, Cloud SQL recreates it preserving the IP addresses, refreshes it with the latest data from the primary, and restarts the replica.

    If you deleted your read replicas prior to upgrading your primary, you can create new read replicas, which are automatically provisioned on the upgraded database version.

  3. Refresh the database statistics.

    Run ANALYZE on your primary instance to update the system statistics after the upgrade. Accurate statistics help the PostgreSQL query planner process queries optimally. Missing statistics can lead to bad query plans, which in turn might degrade performance and consume excessive memory. A minimal sketch follows this list.

  4. Perform acceptance tests.

    You should run tests to ensure that the upgraded system performs as expected.
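
To refresh statistics (step 3 in the preceding list), a minimal sketch, run in each database on the upgraded primary instance:

  -- Recompute planner statistics for every table in the current database.
  -- VERBOSE prints per-table progress messages.
  ANALYZE VERBOSE;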

Troubleshoot a major version upgrade

Cloud SQL returns an error message if you attempt an invalid upgrade command, for example, if your instance contains invalid database flags for the new version.

If your upgrade request fails, check the syntax of your upgrade request. If the request has a valid structure, try looking into the following suggestions.

View pre-upgrade check failures

Pre-upgrade check failures are issues or errors that Cloud SQL detects during the pre-upgrade verification or validation process. These failures occur before the actual upgrade process begins and are meant to identify potential problems or incompatibilities that can affect the success of the upgrade.

Pre-upgrade check failures are displayed for the following categories:

  • Incompatible extensions: detect PostgreSQL extensions that aren't compatible with the destination version of the instance.
  • Unsupported dependencies: identify dependencies that aren't supported anymore or need to be updated.
  • Data format incompatibilities: verify data inconsistencies that arise from various factors, including differences in version-specific data structures, alterations in encoding and collation, modifications to data types, and adjustments to the system catalog.

The following table lists pre-upgrade check failures and their error messages:

Pre-upgrade check failure | Error message
Cloud SQL detects an unknown data type. | Please remove the following usages of 'Unknown' data types before attempting an upgrade: (database: db_name, relation: rel_name, attribute: attr_name)
When upgrading to PostgreSQL 12 or later versions, Cloud SQL detects the 'sql_identifier' data type. | Please remove the following usages of 'sql_identifier' data types before attempting an upgrade: (database: db_name, relation: rel_name, attribute: attr_name)
Cloud SQL detects the reg* data type. | Please remove the following usages of 'reg*' data types before attempting an upgrade: (database: db_name, relation: rel_name, attribute: attr_name)
Cloud SQL detects that the LC_COLLATE value for the postgres database is a character set other than en_US.UTF8. | Please change the 'LC_COLLATE' value of the postgres database to 'en_US.UTF8' before attempting an upgrade
Cloud SQL detects tables that have object identifiers (OIDs). | Please remove the following usages of tables with OIDs before attempting an upgrade: (database: db_name, relation: rel_name)
Cloud SQL detects composite data types. | Please remove the following usages of 'composite' data types before attempting an upgrade: (database: db_name, relation: rel_name, attribute: attr_name)
Cloud SQL detects user-defined postfix operators. | Please remove the following usages of 'postfix operators' before attempting an upgrade: (database: db_name, operation id: op_id, operation namespace: op_namespace, operation name: op_name, type namespace: type_namespace, type name: type_name)
Cloud SQL detects incompatible polymorphic functions. | Please remove the following usages of 'incompatible polymorphic' functions before attempting an upgrade: (database: db_name, object kind: obj_kind, object name: obj_name)
Cloud SQL detects user-defined encoding conversions. | Please remove the following usages of user-defined encoding conversions before attempting an upgrade: (database: db_name, namespace name: namespace_name, encoding conversions name: encod_name)
Cloud SQL detects an empty search path for the ll_to_earth function. | Please update the search path of the 'll_to_earth' function
Cloud SQL detects the presence of unpacked PostGIS raster files. | PostGIS version upgrade has not been completed, unpackaged raster files present. Follow the steps at https://postgis.net/documentation/tips/tip-removing-raster-from-2-3/ to fix before major version upgrade.

Fix the empty search path issue

This happens because the earthdistance extension uses the earth and cube types without specifying the search path of the ll_to_earth function. You must set this search path explicitly because the upgrade process requires it.

To resolve this issue, fix the search path for the ll_to_earth function by running the following statement:

ALTER FUNCTION ll_to_earth SET search_path = public;

View upgrade logs

If any issues occur with a valid upgrade request, Cloud SQL publishes error logs to projects/PROJECT_ID/logs/cloudsql.googleapis.com%2Fpostgres-upgrade.log. Each log entry contains a label with the instance identifier to help you identify the instance with the upgrade error. Look for such upgrade errors and resolve them.

To view error logs, follow these steps:

  1. In the Google Cloud console, go to the Cloud SQL Instances page.

    Go to Cloud SQL Instances

  2. To open the Overview page of an instance, click the instance name.
  3. In the Operations and logs pane of the instance Overview page, click the View PostgreSQL error logs link.

    The Logs Explorer page opens.

  4. View logs as follows:

    • To list all error logs in a project, select the log name in the Log name log filter.

    For more information on query filters, see Advanced queries.

    • To filter the upgrade error logs for a single instance, enter the following query in the Search all fields box, after replacing PROJECT_ID with the project ID and DATABASE_ID with the project ID followed by the instance name in this format: project_id:instance_name.

    resource.type="cloudsql_database"
    resource.labels.database_id="DATABASE_ID"
    logName : "projects/PROJECT_ID/logs/cloudsql.googleapis.com%2Fpostgres-upgrade.log"
    

    For example, to filter the upgrade error logs by an instance named shopping-db running in the project buylots, use the following query filter:

     resource.type="cloudsql_database"
     resource.labels.database_id="buylots:shopping-db"
     logName : "projects/buylots/logs/cloudsql.googleapis.com%2Fpostgres-upgrade.log"
    

Log entries with the pg_upgrade_dump prefix indicate that an upgrade error occurred. For example:

pg_upgrade_dump: error: query failed: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.

Additionally, log entries labeled with a .txt secondary filename might list other errors to resolve before attempting the upgrade again.

All filenames are found in the postgres-upgrade.log file. To locate a file name, look at the labels.FILE_NAME field.

Filenames that might contain errors to resolve include:

  • tables_with_oids.txt: This file contains tables that are listed with object identifiers (OIDs). Either delete the tables or modify them so that they don't use OIDs.
  • tables_using_composite.txt: This file contains tables that are listed using system-defined composite types. Either delete the tables or modify them so that they don't use these composite types.
  • tables_using_unknown.txt: This file contains tables that are listed using the UNKNOWN data type. Either delete the tables or modify them so that they don't use this data type.
  • tables_using_sql_identifier.txt: This file contains tables that are listed using the SQL_IDENTIFIER data type. Either delete the tables or modify them so that they don't use this data type.
  • tables_using_reg.txt: This file contains tables that are listed using the REG* data type (for example, REGCOLLATION or REGNAMESPACE). Either delete the tables or modify them so that they don't use this data type.
  • postfix_ops.txt: This file contains tables that are listed using postfix (right-unary) operators. Either delete the tables or modify them so that they don't use these operators.

Check the memory

If the instance has insufficient shared memory, you might see this error message: ERROR: out of shared memory. This error is more likely to occur if you have in excess of 10,000 tables.

Before you attempt an upgrade, set the value of the max_locks_per_transaction flag to approximately twice the number of tables in the instance. The instance is restarted when you change the value of this flag.
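
To size the flag, you can count the tables on the instance with a query like the following (a sketch; run it in each database and add up the results):

  -- Count ordinary and partitioned tables in the current database.
  SELECT count(*) AS table_count
  FROM pg_class
  WHERE relkind IN ('r', 'p');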

Check the connections capacity

If your instance has insufficient connection capacity, you might see this error message: ERROR: Insufficient connections.

Cloud SQL recommends that you increase the max_connections flag value by the number of databases in your instance. The instance is restarted when you change the value of this flag.
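
To see the current connection limit and the number of databases on the instance, statements like the following may help (a sketch):

  -- Current value of max_connections.
  SHOW max_connections;

  -- Number of databases that accept connections.
  SELECT count(*) AS database_count
  FROM pg_database
  WHERE datallowconn;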

Check for an ambiguous column reference

If you have an ambiguous column reference in your views, then you might see this error message: ERROR: column reference "<column_name>" is ambiguous. This issue occurs when a new PostgreSQL version introduces changes to the structure of system catalog views such as pg_stat_activity and pg_stat_replication. This can disrupt custom views that rely on the column order.

To check for this error message, enter the following query in the Logs Explorer query editor:

textPayload =~ "ERROR: column reference .+ is ambiguous at character \d+"

You can resolve this issue in the following ways:

  1. Adapt your custom views.

    Update the column references in your custom views to align with the new definitions of the system catalog views (such as pg_stat_activity and pg_stat_replication) in the target PostgreSQL version.

  2. Recreate your views.

    Drop your custom views before performing a major version upgrade. Recreate the views after the upgrade is complete, ensuring that they're compatible with the new structure.

For a more detailed example of the problem and further insights, see this Stack Overflow discussion.
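
To find custom views that reference the affected catalog views, a query like the following may help (a sketch that uses the standard pg_views catalog view):

  SELECT schemaname, viewname
  FROM pg_views
  WHERE definition ILIKE '%pg_stat_activity%'
     OR definition ILIKE '%pg_stat_replication%';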

Check for SRFs in CASE statements

If you are upgrading your instance from version 9.6 and using set-returning functions (SRFs) in your CASE statements, then you might see this error message: ERROR: set-returning functions are not allowed in CASE. This issue occurs because, from version 10 onward, using set-returning functions in CASE statements is disallowed.

To resolve this issue and upgrade your instance successfully, modify any CASE statements that use set-returning functions to avoid them before retrying the upgrade; a rewrite sketch follows this list. Some commonly used SRFs include the following:

  • unnest()
  • generate_series()
  • array_agg()
  • regexp_split_to_table()
  • jsonb_array_elements()
  • json_array_elements()
  • jsonb_each()
  • json_each()
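
As a hypothetical illustration (assuming a table t with a boolean column flag), a CASE expression that wraps a set-returning function can often be rewritten with a LATERAL subquery:

  -- Disallowed from PostgreSQL 10 onward:
  --   SELECT CASE WHEN flag THEN generate_series(1, 3) ELSE 0 END FROM t;

  -- One possible rewrite that moves the SRF out of the CASE expression:
  SELECT s.val
  FROM t
  CROSS JOIN LATERAL (
    SELECT generate_series(1, 3) AS val WHERE t.flag
    UNION ALL
    SELECT 0 WHERE NOT t.flag
  ) AS s;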

Check event trigger ownership

If an event trigger is owned by a user who lacks the cloudsqlsuperuser role, then you might get an error message such as ERROR: permission denied to change owner of event trigger "<trigger_name>". This issue is caused by permission problems when Cloud SQL tries to recreate these triggers during the upgrade. To check for this error message, add the following query in the query editor:

textPayload =~ "ERROR: permission denied to change owner of event trigger .+ "

To address this, verify the ownership of all event triggers within your instance. Ensure that the owner of each trigger is a user with the cloudsqlsuperuser role. If any triggers are owned by other users, update their ownership before attempting the upgrade again. You can use the following statement to change ownership:

ALTER EVENT TRIGGER <trigger_name> OWNER TO <cloudsqlsuperuser>;

You can use the following query to get a list of event triggers and their owners:

SELECT evtname AS trigger_name, evtowner::regrole AS owner FROM pg_event_trigger;

Check generated columns from unlogged tables

If you have an unlogged table that has generated columns, then you might see the error message ERROR: unexpected request for new relfilenumber in binary upgrade mode. This issue occurs due to discrepancies in the persistence characteristics between tables and their sequences for generated columns.

To address this issue, do the following:

  1. Drop unlogged tables: if possible, drop any unlogged tables that are linked to generated columns. Ensure that data loss can be safely mitigated before proceeding.
  2. Convert to permanent tables: temporarily convert unlogged tables to permanent tables by using the following steps:
    1. Convert the table to a logged table: ALTER TABLE table_name SET LOGGED;
    2. Perform the major version upgrade.
    3. Convert the table back to an unlogged table: ALTER TABLE table_name SET UNLOGGED;

You can identify all such tables by using the following query:

SELECT
  relnamespace::regnamespace,
  c.relname AS table_name,
  a.attname AS column_name,
  a.attidentity AS identity_type
FROM
  pg_catalog.pg_class c
  JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
WHERE
  a.attidentity IN ('a', 'd') AND c.relkind = 'r' AND c.relpersistence = 'u'
ORDER BY c.relname, a.attname;

Check the custom flags for your PostgreSQL instance

If you're upgrading your instance to PostgreSQL version 14 or later, then check the names of any custom database flags that you configured for the instance. This is because PostgreSQL placed additional restrictions on the allowed names of custom parameters.

The first character of a custom database flag must be alphabetic (A-Z or a-z). All subsequent characters can be alphanumeric, the underscore (_) special character, or the dollar sign ($) special character.

Remove extensions

If you're upgrading your Cloud SQL instance, then you might see this error message: pg_restore: error: could not execute query: ERROR: role "16447" does not exist.

To resolve this issue, follow these steps; a SQL sketch of steps 1 and 3 follows the list:

  1. Remove the pg_stat_statements and pgstattuple extensions.
  2. Perform the upgrade.
  3. Reinstall the extensions.
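
A minimal sketch of steps 1 and 3, run in each database where the extensions are installed:

  -- Step 1: remove the extensions before the upgrade.
  DROP EXTENSION IF EXISTS pg_stat_statements;
  DROP EXTENSION IF EXISTS pgstattuple;

  -- Step 3: reinstall them after the upgrade completes.
  CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
  CREATE EXTENSION IF NOT EXISTS pgstattuple;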

Restore to the previous major version

If your upgraded database system doesn't perform as expected, you might need to restore your instance to the previous version. You do so by restoring your pre-upgrade backup to a Cloud SQL recovery instance, which is a new instance running the pre-upgrade version.

To restore to the previous version, perform the following steps:

  1. Identify your pre-upgrade backup.

    See Automatic upgrade backups.

  2. Create a recovery instance.

    Create a new Cloud SQL instance using the major version that Cloud SQL was running when the pre-upgrade backup was made. Set the same flags and instance settings that the original instance uses.

  3. Restore your pre-upgrade backup.

    Restore your pre-upgrade backup to the recovery instance. This might take several minutes to complete.

  4. Add your read replicas.

    If you were using read replicas, add them individually.

  5. Connect your application.

    Having recovered your database system, update your application with details about the recovery instance and its read replicas. You can resume serving traffic on the pre-upgrade version of your database.

FAQs

The following questions might come up when upgrading the database major version.

Is my instance unavailable during an upgrade?
Yes. Your instance remains unavailable for a period of time while Cloud SQL performs the upgrade.
How long does an upgrade take?

Upgrading a single instance typically takes less than 10 minutes. If your instance configuration uses a small number of vCPUs or a small amount of memory, then your upgrade might take more time.

If your instance hosts a large number of databases or tables, or if your databases are very large, then the upgrade might take hours or even time out, because the upgrade time corresponds to the number of objects in your databases. If you have multiple instances that need to be upgraded, then your total upgrade time increases proportionately.

Can I monitor each step in my upgrade process?
While Cloud SQL allows you to monitor whether an upgrade operation is still in progress, you are unable to track the individual steps in each upgrade.
Can I cancel my upgrade after I've started it?
No, you can't cancel an upgrade once it has started. If your upgrade fails, Cloud SQL automatically recovers your instance on the previous version.
What happens to my settings during an upgrade?

When you perform an in-place major version upgrade, Cloud SQL retains your database settings, including your instance name, IP address, explicitly configured flag values, and user data. However, the default value of the system variables might change. For example, the default value of the password_encryption flag in PostgreSQL 13 and earlier is md5. When you upgrade to PostgreSQL 14, the default value of this flag changes to scram-sha-256.

To learn more, see Configure database flags. If a certain flag or value is no longer supported in your target version, then Cloud SQL automatically removes the flag during the upgrade.

What's next