This page describes how to export data from and import data into Cloud SQL instances using `pg_dump`, `pg_dumpall`, and `pg_restore`.
Before you begin
Exports use database resources, but they do not interfere with normal database operations unless the instance is under-provisioned.
For best practices, see Best Practices for Importing and Exporting Data.
After completing an import operation, verify the results.
Learn more about the `pg_dump`, `pg_dumpall`, and `pg_restore` utilities.
Export data from Cloud SQL for PostgreSQL
You can use Cloud SQL to perform an export from the Google Cloud console, the gcloud CLI, or the API.
- To export a single PostgreSQL database, use the `pg_dump` utility.
- To export all PostgreSQL databases of a cluster, use the `pg_dumpall` utility.
When using either utility, make sure that you also use the required options to ensure that the resulting export file is valid for import back into Cloud SQL.
Export data from an on-premises PostgreSQL server using pg_dump
To export a database that is not managed by Cloud SQL, for later import into Cloud SQL, use the `pg_dump` utility with the following flags:
- `--no-owner`: Ownership change commands must not be included in the dump file.
- `--format`: The `custom` and `directory` formats are allowed if the dump file is intended for use with `pg_restore`. For plain-text format, export to a SQL dump file instead. This format is not compatible with `pg_restore` and must be imported using the Google Cloud console import command or the `psql` client.
- `--no-acl`: This flag is required if your dump would otherwise contain statements to grant or revoke membership in a `SUPERUSER` role.
- `--clean`: This optional flag lets you include the `DROP <object>` SQL statements that are required to drop (clean) database objects before you import them.
- `--if-exists`: This optional flag lets you include an `IF EXISTS` clause with each `DROP` statement produced by the `--clean` flag.
In addition, you must remove all of the following:

- Extension-related statements, if Cloud SQL does not support that extension. See PostgreSQL Extensions for the list of supported extensions.
- `CREATE EXTENSION` or `DROP EXTENSION` statements referencing `plpgsql`. This extension comes pre-installed on Cloud SQL for PostgreSQL instances.
- `COMMENT ON EXTENSION` statements.
Confirm that the default encoding, as determined by the database settings, is correct for your data. If needed, you can override the default with the `--encoding` flag.
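As a sketch, an export with an explicit encoding might look like the following. `USERNAME` and `DATABASE_NAME` are placeholders, and `UTF8` is only an example value; pick the encoding that matches your data.

```shell
# Sketch: export with an explicit encoding override.
# USERNAME and DATABASE_NAME are placeholders; UTF8 is an example encoding.
pg_dump \
  -U USERNAME \
  --format=custom \
  --no-owner \
  --no-acl \
  --encoding=UTF8 \
  DATABASE_NAME > DATABASE_NAME.dmp
```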
Export data using the `custom` format from Cloud SQL for PostgreSQL
To use the custom format, from a command line, run `pg_dump`:

```shell
pg_dump \
  -U USERNAME \
  --format=custom \
  --no-owner \
  --no-acl \
  DATABASE_NAME > DATABASE_NAME.dmp
```
Export data from multiple files in parallel from Cloud SQL for PostgreSQL
You can only use the `directory` output format to export data from multiple files in parallel.

To export in parallel, use the `-j NUM_CORES` flag.
NUM_CORES is the number of cores on the source instance.
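A parallel export might look like the following sketch. `USERNAME`, `DATABASE_NAME`, and `OUTPUT_DIRECTORY` are placeholders, and `4` stands in for the number of cores; the `-j` flag requires the `directory` format together with an output directory given by `-f`.

```shell
# Sketch: parallel export using the directory output format.
# USERNAME, DATABASE_NAME, and OUTPUT_DIRECTORY are placeholders;
# replace 4 with the number of cores on the source instance.
pg_dump \
  -U USERNAME \
  --format=directory \
  --no-owner \
  --no-acl \
  -j 4 \
  -f OUTPUT_DIRECTORY \
  DATABASE_NAME
```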
Export all databases
`pg_dumpall` is a utility that allows you to extract all PostgreSQL databases of a cluster into a single script file. This file has SQL commands that you can use to restore the databases.
To export all PostgreSQL databases in a Cloud SQL instance, use the `pg_dumpall` utility with the following mandatory flags:

- `--exclude-database=cloudsqladmin`
- `--exclude-database=template*`
The `pg_dumpall` utility doesn't have access to the `cloudsqladmin` or `template` databases.
To export all PostgreSQL databases, run the following command:

```shell
pg_dumpall \
  -h HOST_NAME \
  -l DATABASE_NAME \
  --exclude-database=cloudsqladmin \
  --exclude-database=template* > pg_dumpall.sql
```
To view role passwords when dumping roles with `pg_dumpall`, set the `cloudsql.pg_authid_select_role` flag to a PostgreSQL role name. If the role exists, then it has read-only (`SELECT`) access to the `pg_authid` table. This table contains role passwords.
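A sketch of setting this flag with the gcloud CLI follows; `INSTANCE_NAME` and `ROLE_NAME` are placeholders. Note that `--database-flags` replaces the full set of flags already configured on the instance, so include any existing flags in the same command.

```shell
# Sketch: set the cloudsql.pg_authid_select_role database flag.
# INSTANCE_NAME and ROLE_NAME are placeholders. --database-flags
# overwrites all previously set flags, so list every flag you need.
gcloud sql instances patch INSTANCE_NAME \
  --database-flags=cloudsql.pg_authid_select_role=ROLE_NAME
```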
Import
Use the `pg_restore` utility to import an archive into a Cloud SQL database. `pg_restore` works only with archives created by `pg_dump` in either the `custom` or `directory` formats.
Learn more about `pg_restore`.
Import from a dump file created with the `custom` format to Cloud SQL for PostgreSQL
If the dump file was created with custom format, run the following command:
```shell
pg_restore --list DATABASE_NAME.dmp | sed -E 's/(.* EXTENSION )/; \1/g' > DATABASE_NAME.toc
```
Post-processing with `sed` comments out all extension statements in the archive's table of contents.
When importing using `pg_restore`, specify the processed table of contents with the command-line argument `--use-list=DATABASE_NAME.toc`.
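To see what the `sed` filter does, the following runnable sketch applies it to a hypothetical sample of `pg_restore --list` output; the entry format below is illustrative, not captured from a real archive.

```shell
# Hypothetical sample of `pg_restore --list` output (illustrative entries only).
cat > sample.toc <<'EOF'
2; 3079 16385 EXTENSION - pgcrypto
214; 1259 16400 TABLE public users postgres
EOF

# The same filter as above: prefix a semicolon (the table-of-contents
# comment character) to every EXTENSION entry, leaving other entries intact.
sed -E 's/(.* EXTENSION )/; \1/g' sample.toc > sample_processed.toc
cat sample_processed.toc
```

The `EXTENSION` entry gains a leading `; ` and is skipped during restore, while the `TABLE` entry is left unchanged. You would then pass the processed file to `pg_restore` with `--use-list=sample_processed.toc`.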
Import data from multiple files in parallel to Cloud SQL for PostgreSQL
You can import data from multiple files in parallel only for archives created using the `directory` and `custom` output formats.
To import in parallel, use the `-j NUM_CORES` flag.
NUM_CORES is the number of cores on the destination instance.
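A parallel import might look like the following sketch; `USERNAME`, `DATABASE_NAME`, and `ARCHIVE_DIRECTORY` are placeholders, and `4` stands in for the destination instance's core count.

```shell
# Sketch: parallel restore from a directory-format archive.
# USERNAME, DATABASE_NAME, and ARCHIVE_DIRECTORY are placeholders;
# replace 4 with the number of cores on the destination instance.
pg_restore \
  -U USERNAME \
  -d DATABASE_NAME \
  --no-owner \
  -j 4 \
  ARCHIVE_DIRECTORY
```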
Import performance on Cloud SQL for PostgreSQL
What's next
- Learn how to check the status of import and export operations.
- Learn more about best practices for importing and exporting data.
- Learn more about the PostgreSQL pg_dump utility.
- Known issues for imports and exports.