Changes to table structures (DDL) aren't replicated through standard DDL commands, but only through commands executed with the `pglogical` extension used for replication. `pglogical` provides a function, `pglogical.replicate_ddl_command`, that runs DDL on both the source database and the replica at a consistent point. The user running this command on the source must already exist on the replica.
To replicate data for new tables, use the `pglogical.replication_set_add_table` function to add the new tables to existing replication sets.
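As a sketch of how these two steps fit together (the table name `public.orders` and the replication set name `default` are hypothetical examples, not values from this page):

```sql
-- Run on the source: create the table and replicate the DDL to the
-- replica at a consistent point.
SELECT pglogical.replicate_ddl_command(
  'CREATE TABLE public.orders (id serial PRIMARY KEY, total numeric)',
  ARRAY['default']
);

-- Add the new table to an existing replication set so its data is
-- replicated; synchronize_data := true also copies existing rows.
SELECT pglogical.replication_set_add_table(
  set_name         := 'default',
  relation         := 'public.orders',
  synchronize_data := true
);
```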
To learn more about DDL replication while the migration is in progress, see the section on migration fidelity.
Database Migration Service migrates only tables with primary keys. For tables on the source PostgreSQL database without primary key constraints, Database Migration Service migrates only the table schema, not the data.
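To identify the tables that will have only their schema migrated, you can list tables that lack a primary key on the source before starting the job; one possible query:

```sql
-- List user tables without a PRIMARY KEY constraint.
SELECT tab.table_schema, tab.table_name
FROM information_schema.tables tab
LEFT JOIN information_schema.table_constraints tco
       ON  tco.table_schema    = tab.table_schema
       AND tco.table_name      = tab.table_name
       AND tco.constraint_type = 'PRIMARY KEY'
WHERE tab.table_type = 'BASE TABLE'
  AND tab.table_schema NOT IN ('pg_catalog', 'information_schema')
  AND tco.constraint_type IS NULL
ORDER BY tab.table_schema, tab.table_name;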
Database Migration Service doesn't migrate data from materialized views, only the view schema. To populate a materialized view after migration, run the following command:

```sql
REFRESH MATERIALIZED VIEW view_name;
```
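If the destination has many materialized views, you can refresh them all in one pass by iterating over the `pg_matviews` catalog view; a sketch:

```sql
-- Refresh every materialized view in the database.
DO $$
DECLARE
  mv record;
BEGIN
  FOR mv IN SELECT schemaname, matviewname FROM pg_matviews LOOP
    EXECUTE format('REFRESH MATERIALIZED VIEW %I.%I',
                   mv.schemaname, mv.matviewname);
  END LOOP;
END $$;
```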
`SEQUENCE` states (for example, `last_value`) on the new Cloud SQL destination might differ from the sequence states on the source.
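If an application depends on sequence values, one common remedy after promotion is to realign each sequence with the maximum value of the column it feeds. The table and column names below are hypothetical:

```sql
-- Advance the sequence behind orders.id so the next value
-- is greater than any existing row. COALESCE handles empty tables.
SELECT setval(pg_get_serial_sequence('orders', 'id'),
              COALESCE(MAX(id), 1))
FROM orders;
```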
`TEMPORARY` tables aren't, and can't be, replicated.
The Large Object data type isn't supported. For more details, see the section on migration fidelity.
Only extensions and procedural languages that Cloud SQL supports for PostgreSQL can be migrated.
Database Migration Service doesn't support migrating from read replicas that are in recovery mode.
Database Migration Service doesn't support AWS RDS sources where the AWS SCT extension pack is applied.
Database Migration Service doesn't support the creation of instances with customer-managed encryption keys (CMEK) enabled.
User-defined functions written in C can't be migrated, except for functions that are installed in the PostgreSQL database when you're installing extensions that are supported by Cloud SQL.
If other extensions or procedural languages exist in the source database, or if their versions aren't supported, then the migration job fails when you test or start it.
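Before testing the migration job, you can inventory what's installed on the source and compare it against what Cloud SQL for PostgreSQL supports; a sketch:

```sql
-- Extensions installed on the source, with versions.
SELECT extname, extversion FROM pg_extension ORDER BY extname;

-- Procedural languages installed on the source.
SELECT lanname FROM pg_language WHERE lanispl ORDER BY lanname;
```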
Databases that are added after the migration job has started aren't migrated.
All schemas and all tables from the source database are migrated as part of the database migration, except for the following schemas:
- The information schema (`information_schema`).
- Any schemas beginning with `pg` (for example, `pg_catalog`, which contains the system tables and all built-in data types, functions, and operators, and exists automatically in every database). As a result, information about user roles isn't migrated.
Encrypted databases can't be migrated.
The destination Cloud SQL database is writable during the migration so that DDL changes can be applied if needed. Take care not to make changes to the database configuration or table structures that might break the migration process or impact data integrity.
Trigger behavior depends on how the triggers were configured. By default, triggers don't fire on the replica. However, if a trigger was configured with an `ALTER EVENT TRIGGER` or `ALTER TABLE` statement that sets its state to replica or always, then it fires on the replica during replication.
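For reference, the statements that change a trigger's firing state look like the following (table and trigger names are hypothetical):

```sql
-- Make a table trigger fire on the replica during replication.
ALTER TABLE my_table ENABLE REPLICA TRIGGER my_trigger;

-- Make an event trigger fire in all modes, including replication.
ALTER EVENT TRIGGER my_event_trigger ENABLE ALWAYS;
```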
Functions with `SECURITY DEFINER` are created by `cloudsqlexternalsync` on the Cloud SQL replica. When any user executes such a function, it runs with the privileges of the `cloudsqlreplica` role. It's better to restrict use of a security definer function to only some users. To do that, revoke the default `PUBLIC` privileges on the function, then grant the execute privilege selectively.
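The revoke-then-grant pattern looks like the following (the function and role names are hypothetical):

```sql
-- Remove the default execute privilege from all users.
REVOKE EXECUTE ON FUNCTION my_definer_func() FROM PUBLIC;

-- Grant execute back only to the roles that need it.
GRANT EXECUTE ON FUNCTION my_definer_func() TO app_user;
```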
- Up to 2,000 connection profiles and 1,000 migration jobs can exist at any given time. To create space for more, migration jobs (including completed ones) and connection profiles can be deleted.