Known limitations for using a PostgreSQL database as a source include:
The pglogical extension doesn't support the replication of generated columns for PostgreSQL 12+.
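To see whether this limitation affects a given source, you can list its generated columns before migrating. This is a sketch that assumes PostgreSQL 12 or later, where information_schema.columns exposes the is_generated column:

```sql
-- List generated columns, which pglogical can't replicate (PostgreSQL 12+).
SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE is_generated = 'ALWAYS'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
```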
Changes to table structures (DDL) aren't replicated through standard DDL commands; they're replicated only when they're run with commands provided by the pglogical extension used for replication.
For example, pglogical provides a function
pglogical.replicate_ddl_command that allows DDL to be run on
both the source database and replica at a consistent point. The user
running this command on the source must already exist on the replica.
To replicate data for new tables, you must use the pglogical.replication_set_add_table command to add the new tables to existing replication sets.
To learn more about DDL replication while the migration is in progress, see the pglogical extension's documentation.
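As a sketch, the two pglogical calls described above might look like the following; the table, column, and replication set names are placeholders:

```sql
-- Run a DDL change on both the source and the replica at a consistent point.
SELECT pglogical.replicate_ddl_command(
  'ALTER TABLE public.orders ADD COLUMN note text'
);

-- Add a newly created table to an existing replication set so that its
-- data is replicated.
SELECT pglogical.replication_set_add_table(
  set_name := 'default',
  relation := 'public.new_table',
  synchronize_data := true
);
```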
For tables that don't have primary keys, Database Migration Service supports migration of the initial snapshot and INSERT statements during the change data capture (CDC) phase. You should migrate UPDATE and DELETE statements manually.
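To find the tables this limitation applies to, you can query the system catalogs on the source. This is a sketch using standard pg_catalog views:

```sql
-- List user tables that lack a primary key; UPDATE and DELETE changes to
-- these tables aren't replicated during CDC and must be migrated manually.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint p
    WHERE p.conrelid = c.oid AND p.contype = 'p'
  );
```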
Database Migration Service doesn't migrate data from materialized views, just the view schema. To populate the views, run the following command: REFRESH MATERIALIZED VIEW view_name.
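If the destination has many materialized views, one way to populate them all is to generate the REFRESH statements from the pg_matviews catalog view and then run the generated statements:

```sql
-- Generate a REFRESH statement for every materialized view in the
-- destination database; run the generated statements to populate them.
SELECT format('REFRESH MATERIALIZED VIEW %I.%I;', schemaname, matviewname)
FROM pg_matviews;
```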
The SEQUENCE states (for example, last_value) on the new Cloud SQL destination might vary from the source SEQUENCE states.
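If an out-of-date sequence would cause duplicate-key errors after promotion, you can realign it on the destination with setval. This is a sketch; the sequence, table, and column names are placeholders:

```sql
-- Realign one sequence on the destination after migration so that the
-- next generated value is greater than the current maximum key.
SELECT setval('public.orders_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM public.orders));
```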
UNLOGGED and TEMPORARY tables aren't, and can't be, migrated.
The Large Object data type isn't supported. For more details, see the section on large objects.
Database Migration Service doesn't support migrating from read replicas that are in recovery mode.
Database Migration Service doesn't support AWS RDS sources where the AWS SCT extension pack is applied.
User-defined functions written in C can't be migrated, except for functions that are installed in the PostgreSQL database when you're installing extensions that are supported by Cloud SQL.
If unsupported extensions or procedural languages exist in the source database, or if their versions aren't supported, then the migration job fails when you test or start it.
Databases that are added after the migration job has started aren't migrated.
All schemas and all tables from the source database are migrated as part of the database migration, except for the following schemas:
The information schema (information_schema)
Any schemas beginning with pg (for example, pg_catalog, which contains the system tables and all built-in data types, functions, and operators, and exists in all databases automatically). As a result, information about users and user roles isn't migrated. See the full list of schemas beginning with pg.
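To see which schemas a given database would exclude, you can list them from pg_namespace. This is a sketch; the backslash escapes the underscore so that it matches literally:

```sql
-- List the schemas beginning with pg_ that are excluded from migration.
SELECT nspname
FROM pg_namespace
WHERE nspname LIKE 'pg\_%';
```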
If encrypted databases require customer-managed encryption keys to decrypt the databases, and if Database Migration Service doesn't have access to the keys, then the databases can't be migrated.
However, if customer data is encrypted by the pgcrypto extension, then the data can be migrated with Database Migration Service (because Cloud SQL supports the extension).
The destination Cloud SQL database is writable during the migration so that DDL changes can be applied if needed. Take care not to make changes to the database configuration or table structures that might break the migration process or impact data integrity.
Trigger behavior depends on how the triggers were configured. By default, triggers don't fire on the replica. However, if they were configured using the ALTER EVENT TRIGGER or ALTER TABLE statement and the trigger state is set to either replica or always, then they fire on the replica during replication.
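As a sketch, the trigger states mentioned above are set like this; the trigger, table, and event trigger names are placeholders:

```sql
-- Make a table trigger fire on the replica during replication.
ALTER TABLE public.orders ENABLE REPLICA TRIGGER audit_trigger;

-- Or make it fire regardless of the replication role.
ALTER TABLE public.orders ENABLE ALWAYS TRIGGER audit_trigger;

-- The same states exist for event triggers.
ALTER EVENT TRIGGER ddl_audit ENABLE REPLICA;
```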
Functions with SECURITY DEFINER are created by the cloudsqlexternalsync user in the Cloud SQL replica. When any user executes such a function, it runs with the privileges of cloudsqlexternalsync, which has the cloudsqlsuperuser and cloudsqlreplica roles. It's better to restrict use of a security definer function to only some users. To do that, revoke the default PUBLIC privileges and then grant the execute privilege selectively.
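The revoke-then-grant pattern described above can be sketched as follows; the function and user names are placeholders:

```sql
-- Restrict a SECURITY DEFINER function: remove the default PUBLIC grant,
-- then allow only specific users to execute it.
REVOKE EXECUTE ON FUNCTION sensitive_fn() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION sensitive_fn() TO app_user;
```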
Up to 2,000 connection profiles and 1,000 migration jobs can exist at any given time. To create space for more, you can delete migration jobs (including completed ones) and connection profiles.