Migration fidelity

Overview

When you're migrating your schema, data, and metadata from a source database to a destination database, you want to ensure that all of this information is migrated accurately. Database Migration Service provides a high-fidelity way to migrate database objects (including the schema, data, and metadata) from one database to another.

All of the following data, schema, and metadata components are migrated as part of the database migration:

Data

  • All schemas and all tables from the selected database, excluding the following schemas:

    • The information schema information_schema
    • Any schemas beginning with pg (for example, pg_catalog)

    For more information about these schemas, see Known limitations.

Schema

  • Naming

  • Primary key

  • Data type

  • Ordinal position

  • Default value

  • Nullability

  • Auto-increment attributes

  • Secondary indexes

Metadata

  • Stored procedures

  • Functions

  • Triggers

  • Views

  • Foreign key constraints

Continuous migration

Only data manipulation language (DML) changes are updated automatically during continuous migrations. Managing data definition language (DDL) changes so that the source and destination databases remain compatible is the responsibility of the user, and can be achieved in two ways:

  1. Stopping writes to the source and running the DDL commands on both the source and the destination. Before running DDL commands on the destination, grant the cloudsqlexternalsync role to the Cloud SQL user applying the DDL changes. To enable querying or changing the data, grant the cloudsqlexternalsync role to the relevant Cloud SQL users.
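
    For example, here's a minimal sketch of this approach; the user name migration_user and the table myschema.mytable are hypothetical placeholders. After stopping writes on the source, grant the role and run the DDL on the destination, then run the same ALTER TABLE statement on the source:

    GRANT cloudsqlexternalsync TO migration_user;
    ALTER TABLE myschema.mytable ADD COLUMN surname varchar(20);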

  2. Using the pglogical.replicate_ddl_command function to run DDL on the source and the destination at a consistent point. The user running this command must have the same username on both the source and the destination, and should be the superuser or the owner of the artifact being migrated (for example, the table, sequence, view, or database).

    Here are a few examples of using the pglogical.replicate_ddl_command function.

    To add a column to a database table, run the following command:

    select pglogical.replicate_ddl_command('ALTER TABLE [schema].[table] add column surname varchar(20)', '{default}');

    To change the name of a database table, run the following command:

    select pglogical.replicate_ddl_command('ALTER TABLE [schema].[table] RENAME TO [table_name]','{default}');

    To create a database table, run the following commands:

    1. select pglogical.replicate_ddl_command(command := 'CREATE TABLE [schema].[table] (id INTEGER PRIMARY KEY, name VARCHAR);', replication_sets := ARRAY['default']);

    2. select pglogical.replication_set_add_table('default', '[schema].[table]');

What isn't migrated

  • Users aren't migrated automatically. To add users to a Cloud SQL destination instance, navigate to the instance and add them from the Users tab, or add them from a PostgreSQL client (see the example after this list). Learn more about creating and managing PostgreSQL users.
  • Large objects can't be replicated, because PostgreSQL's logical decoding facility doesn't support decoding changes to large objects. For tables that have columns of type oid referencing large objects, the rows are synced and new rows are replicated. However, trying to access a large object on the destination database (reading it with lo_get, exporting it with lo_export, or checking the catalog pg_largeobject for the given oid) fails with a message saying that the large object doesn't exist.
  • Database Migration Service migrates only tables with primary keys. Any tables on the source PostgreSQL database without primary key constraints won't be migrated; for these tables, Database Migration Service migrates only the table schema. To identify such tables on the source before you migrate, see the example query after this list.
  • Database Migration Service doesn't migrate data from materialized views, just the view schema. To populate the views, run the following command: REFRESH MATERIALIZED VIEW view_name.
  • The SEQUENCE states (for example, last_value) on the new Cloud SQL destination might differ from the source SEQUENCE states. For one way to realign a sequence after the migration, see the example after this list.
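
Here's a minimal sketch of adding a user from a PostgreSQL client, as mentioned in the first item in the list above. The user name app_user and its password are hypothetical placeholders; you can also manage users from the instance's Users tab.

    CREATE USER app_user WITH PASSWORD 'change-me';
    -- To let this user query or change the data while the migration is in progress,
    -- also grant the cloudsqlexternalsync role (see Continuous migration above).
    GRANT cloudsqlexternalsync TO app_user;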
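
To find source tables that lack a primary key before you migrate, you can query the standard information_schema catalogs. This is only a sketch; the schema filter mirrors the schemas that Database Migration Service excludes, so adjust it for your database:

    SELECT tab.table_schema, tab.table_name
    FROM information_schema.tables tab
    LEFT JOIN information_schema.table_constraints tco
      ON tco.table_schema = tab.table_schema
      AND tco.table_name = tab.table_name
      AND tco.constraint_type = 'PRIMARY KEY'
    WHERE tab.table_type = 'BASE TABLE'
      AND tab.table_schema <> 'information_schema'
      AND tab.table_schema NOT LIKE 'pg%'
      AND tco.constraint_name IS NULL;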
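
If the destination sequences need to line up with the migrated data after the migration finishes, one common approach is to reset each sequence from the maximum value of the column it feeds. The names myschema.mytable, id, and myschema.mytable_id_seq are hypothetical placeholders:

    SELECT setval('myschema.mytable_id_seq',
                  (SELECT COALESCE(MAX(id), 1) FROM myschema.mytable));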