select tab.table_schema,
tab.table_name
from information_schema.tables tab
left join information_schema.table_constraints tco
on tab.table_schema = tco.table_schema
and tab.table_name = tco.table_name
and tco.constraint_type = 'PRIMARY KEY'
where tab.table_type = 'BASE TABLE'
and tab.table_schema not in ('pg_catalog', 'information_schema', 'pglogical')
and tco.constraint_name is null
order by table_schema,
table_name;
SELECT schemaname,
relname,
n_tup_ins,
n_tup_upd,
n_tup_del
FROM pg_stat_user_tables
WHERE schemaname NOT IN
('pglogical', 'pg_catalog', 'information_schema');
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-08-18 UTC。"],[[["\u003cp\u003eThis page provides debugging scripts and guidance for using PostgreSQL within the Database Migration Service.\u003c/p\u003e\n"],["\u003cp\u003eDatabase Migration Service supports the migration of initial snapshots and \u003ccode\u003eINSERT\u003c/code\u003e statements for tables without primary keys, but \u003ccode\u003eUPDATE\u003c/code\u003e and \u003ccode\u003eDELETE\u003c/code\u003e operations may need manual intervention.\u003c/p\u003e\n"],["\u003cp\u003eYou can identify tables lacking primary keys and check for any updates or deletes on the source during migration using the provided SQL queries against the \u003ccode\u003einformation_schema\u003c/code\u003e and \u003ccode\u003epg_stat_user_tables\u003c/code\u003e tables.\u003c/p\u003e\n"],["\u003cp\u003eIf data discrepancies are found between the source and the Cloud SQL destination instance, you can manually resolve them by comparing and updating data, or by utilizing \u003ccode\u003epg_dump\u003c/code\u003e/\u003ccode\u003epg_restore\u003c/code\u003e or the \u003ccode\u003eCOPY\u003c/code\u003e command for the affected tables.\u003c/p\u003e\n"],["\u003cp\u003eIt is important to potentially clean the replica before using \u003ccode\u003epg_restore\u003c/code\u003e or \u003ccode\u003eCOPY\u003c/code\u003e commands if there is data that was previously migrated.\u003c/p\u003e\n"]]],[],null,["# Debugging and other tools\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n[MySQL](/database-migration/docs/mysql/debugging-tools \"View this page for the MySQL version of Database Migration Service.\") \\| PostgreSQL \\| [PostgreSQL to AlloyDB](/database-migration/docs/postgresql-to-alloydb/debugging-tools \"View this page for the PostgreSQL to AlloyDB version of Database Migration Service.\")\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nOverview\n--------\n\nThis page includes scripts that help with debugging and using PostgreSQL.\n\nMigrate UPDATE and DELETE operations for non-primary key tables\n---------------------------------------------------------------\n\nFor tables that don't have primary keys, Database Migration Service supports migration of the initial snapshot and `INSERT` statements during the change data capture (CDC) phase.\n\nTo update the missing `UPDATE` and `DELETE` processes, refer to the later sections of this document.\n\n### Detect missing data between the source and Cloud SQL destination instances\n\n1. Identify which tables don't have primary keys: \n\n ```\n select tab.table_schema,\n tab.table_name\n from information_schema.tables tab\n left join information_schema.table_constraints tco \n on tab.table_schema = tco.table_schema\n and tab.table_name = tco.table_name \n and tco.constraint_type = 'PRIMARY KEY'\n where tab.table_type = 'BASE TABLE'\n and tab.table_schema not in ('pg_catalog', 'information_schema', 'pglogical')\n and tco.constraint_name is null\n order by table_schema,\n table_name;\n ```\n2. Before starting the migration, for all tables without primary keys, check if there are any updates or deletes using this query: \n\n ```\n SELECT schemaname,\n relname,\n n_tup_ins,\n n_tup_upd,\n n_tup_del\n FROM pg_stat_user_tables\n WHERE schemaname NOT IN\n ('pglogical', 'pg_catalog', 'information_schema');\n ```\n Where:\n - `n_tup_ins`: number of rows inserted\n - `n_tup_upd`: number of rows updated (includes HOT updated rows)\n - `n_tup_del`: number of rows deleted\n3. Save these results either in a separate table or in a file.\n4. After the migration setup is complete, run the query again.\n5. Compare the results with the results from step 3.\n\nIf there are differences in values of `n_tup_upd` or `n_tup_del` on the source during migration, there might be some updates or deletes on the source.\n| **Note:** You can manually reset `pg_stat_user_tables` using the `pg_stat_reset()` function. If this function is run on the source, the statistics might not reflect accurate information.\n\n### Manually migrate data from the source to the Cloud SQL destination instances\n\nIf you detect there are some discrepancies between the source and Cloud SQL destination instance, you can migrate the data using one of the options:\n\n- **Option 1**: Manually compare the data between the source and Cloud SQL destination and execute appropriate SQL queries to update only the data that is different between the source and the replica.\n\n- **Option 2** : Run `pg_dump` and `pg_restore` for the tables without primary keys. See [Export data from an on-premises PostgreSQL server using pg_dump](/sql/docs/postgres/import-export/import-export-dmp#external-server) for more information.\n\n- **Option 3** : Use Postgres `COPY` command to migrate your data. See [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-copy.html) for more information. Note that this step requires the replica VM to be able to connect to the source VM.\n\n| For **Option 2** and **Option 3** , if there is any data that is already migrated on to the replica, those records might require cleanup before running the `pg_restore` or `COPY` commands. Please make changes accordingly."]]