Export and import using pg_dump and pg_restore

This page describes how to export data from and import data into Cloud SQL instances using pg_dump and pg_restore.

Before you begin

Exports use database resources, but they do not interfere with normal database operations unless the instance is under-provisioned.

For best practices, see Best Practices for Importing and Exporting Data.

After completing an import operation, verify the results.

For help with pg_dump, see the pg_dump reference.

Export

Export data from Cloud SQL

When you use Cloud SQL to perform an export, whether from the Cloud console, the gcloud CLI, or the API, Cloud SQL uses the pg_dump utility with the options required to ensure that the resulting export file is valid for import back into Cloud SQL.
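For example, an export started from the gcloud CLI looks like the following sketch. The instance name, Cloud Storage bucket, and database name are placeholders; replace them with your own values.

gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/sqldumpfile.gz \
--database=DATABASE_NAME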

Export data from an on-premises PostgreSQL server using pg_dump

To export a database that is not managed by Cloud SQL, for later import into Cloud SQL, use the pg_dump utility with the following flags:

  • --no-owner

    Omits commands that set object ownership; ownership change commands must not be included in the dump file.

  • --format

    The custom and directory formats are allowed if the dump file is intended for use with pg_restore.

    If you use the plain-text format, the output is a SQL dump file. This format is not compatible with pg_restore; import it using the Cloud console import command or the psql client instead (a psql sketch follows this list).

  • --no-acl

    This flag is required if your dump would otherwise contain statements to grant or revoke membership in a SUPERUSER role.
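
The following is a minimal psql sketch for importing a plain-text dump file, assuming the dump was written to DATABASE_NAME.sql and the target database already exists:

psql \
-U USERNAME \
-d DATABASE_NAME \
-f DATABASE_NAME.sql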

In addition, you must remove all of the following (a sketch for commenting these statements out appears after this list):

  • Extension-related statements, if Cloud SQL does not support that extension. See PostgreSQL Extensions for the list of supported extensions.
  • CREATE EXTENSION or DROP EXTENSION statements referencing plpgsql. This extension comes pre-installed on Cloud SQL for PostgreSQL instances.
  • COMMENT ON EXTENSION statements.
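
For a plain-text SQL dump, one way to comment these statements out is a sed pass like the following sketch. It assumes GNU sed and that each statement starts at the beginning of a line; review the edited file before importing.

sed -E -i 's/^(CREATE EXTENSION|DROP EXTENSION|COMMENT ON EXTENSION)/-- \1/' DATABASE_NAME.sql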

Confirm that the default encoding, as determined by the database settings, is correct for your data. If needed, you can override the default with the --encoding flag.
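
For example, to force UTF-8 output regardless of the database's default encoding, add --encoding to the command described in the next section. This is a sketch; the flag value is an assumption about your data.

pg_dump \
-U USERNAME \
--format=custom \
--no-owner \
--no-acl \
--encoding=UTF8 \
DATABASE_NAME > DATABASE_NAME.dmp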

Export data using the custom format

To use the custom format, from a command line, run pg_dump:

pg_dump \
-U USERNAME \
--format=custom \
--no-owner \
--no-acl \
DATABASE_NAME > DATABASE_NAME.dmp

Export data in parallel

Parallel export is supported only for the directory output format.

To export in parallel, use the -j NUM_CORES flag. NUM_CORES is the number of cores on the source instance.
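
For example, a parallel directory-format export might look like the following sketch; the output directory name and core count are placeholders:

pg_dump \
-U USERNAME \
--format=directory \
--no-owner \
--no-acl \
-j NUM_CORES \
-f DATABASE_NAME_DIR \
DATABASE_NAME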

Import

Use the pg_restore utility to import an archive into a Cloud SQL database. pg_restore works only with archives created by pg_dump in either the custom or directory format. Learn more about pg_restore.

Import from a dump file created with the custom format

If the dump file was created with custom format, run the following command:

pg_restore \
--list DATABASE_NAME.dmp | sed -E 's/(.* EXTENSION )/; \1/g' > DATABASE_NAME.toc

The sed post-processing comments out all extension statements in the table of contents so that pg_restore skips them during the import.

When you import with pg_restore, specify the processed table of contents with the --use-list=DATABASE_NAME.toc command-line argument, as shown in the sketch below.
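
Putting the pieces together, a hedged sketch of the full import command follows. The host name, user name, and database name are placeholders, and the target database is assumed to already exist on the Cloud SQL instance.

pg_restore \
-h HOST_NAME \
-U USERNAME \
--format=custom \
--dbname=DATABASE_NAME \
--use-list=DATABASE_NAME.toc \
DATABASE_NAME.dmp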

Import data in parallel

Parallel import is only supported for archives created using the directory and custom output formats.

To import in parallel, use the -j NUM_CORES flag. NUM_CORES is the number of cores on the destination instance.
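
For example, a parallel restore of the same custom-format archive might look like the following sketch, with the same placeholders as above:

pg_restore \
-h HOST_NAME \
-U USERNAME \
--dbname=DATABASE_NAME \
--use-list=DATABASE_NAME.toc \
-j NUM_CORES \
DATABASE_NAME.dmp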

Import performance

What's next