Exporting data

This page describes how to export data from Cloud SQL instances or from a database server not managed by Cloud SQL.

Exports use database resources, but they do not interfere with normal database operations unless the instance is under-provisioned.

For best practices for exporting data, see Best Practices for Importing and Exporting Data.

You can export a CSV file. You can also export a SQL dump file, for example, if you want to move the data to another SQL database.

Standard export versus serverless export

For a standard export from Cloud SQL, the export runs while the database is online. When the databases being exported are small, the impact is likely to be minimal. However, when there are large databases, or large objects, such as BLOBs, in the database, the export might degrade database performance, increasing the time it takes to perform queries and operations against the database. After you start an export, you cannot stop it, even if your database starts to respond slowly.

To prevent slow responses during an export, you can use serverless export. With serverless export, Cloud SQL creates a separate, temporary instance to offload the export operation. Offloading the export operation allows databases on the primary instance to continue to serve queries and perform operations at the usual performance rate. When the data export is complete, the temporary instance is automatically deleted.

A serverless export takes longer to perform than a standard export, because it takes time to create the temporary instance. At a minimum, it takes longer than five minutes, and for larger databases, it might take longer still. Consider the impact on time and performance before deciding which type of export to use.

You can use serverless export on a primary instance or a read replica.

Exporting data from Cloud SQL to a SQL dump file

When you use Cloud SQL to perform an export, whether from the Cloud Console, the gcloud command-line tool, or the API, you are using the pg_dump utility, with the options required to ensure that the resulting export file is valid for import back into Cloud SQL.

You can also run the pg_dump utility manually from a command line, for example, if you are exporting to a database that is not managed by Cloud SQL.

If you plan to import your data into Cloud SQL, you must follow the instructions provided in Exporting data from an external database server so that your SQL dump file is formatted correctly for Cloud SQL.

Before you begin

This procedure requires you to export a file to Cloud Storage. To export data into Cloud Storage, the instance's service account or the user must have the Cloud SQL Client role and at least the roles/storage.legacyBucketWriter IAM role. If the service account or user is also performing import operations, you can grant the account the storage.objectAdmin IAM role on the project. For help with IAM roles, see Cloud Identity and Access Management for Cloud Storage.

You can find the instance's service account name in the Google Cloud Console on your instance's Overview page. You can verify the roles for your Cloud Storage bucket by using the gsutil tool to inspect the bucket:

gsutil iam get gs://[BUCKET_NAME]

Learn more about using IAM with buckets.
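
If the account is missing a role, one way to grant it, assuming placeholder values for the service account and bucket, is with gsutil iam ch:

    gsutil iam ch serviceAccount:[SERVICE_ACCOUNT_EMAIL]:roles/storage.legacyBucketWriter \
        gs://[BUCKET_NAME]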

Exporting data to a SQL dump file in Cloud Storage

To export data from a database on a Cloud SQL instance to a SQL dump file in a Cloud Storage bucket:

Console

  1. Go to the Cloud SQL Instances page in the Google Cloud Console.

    Go to the Cloud SQL Instances page

  2. Click the instance you want to export data from to open its Overview page.
  3. Click EXPORT in the button bar.
  4. Under File format, click SQL to create a SQL dump file.
  5. Under Data to export, use the drop-down menu to select the database you want to export from.
  6. Under Destination, select Browse to search for a Cloud Storage bucket or folder for your export.
  7. Click Export to begin the export.

gcloud

  1. Create a Cloud Storage bucket, if you haven't already.

    For help with creating a bucket, see Creating Storage Buckets.

  2. Describe the instance you are exporting from:
      gcloud sql instances describe [INSTANCE_NAME]
      
  3. Copy the serviceAccountEmailAddress field.
  4. Use gsutil iam to grant the storage.objectAdmin IAM role to the service account for the bucket, as shown in the sketch after these steps. For help with setting IAM permissions, see Using IAM permissions.
  5. Export the database:
      gcloud sql export sql [INSTANCE_NAME] gs://[BUCKET_NAME]/sqldumpfile.gz \
                                  --database=[DATABASE_NAME] --offload
      

    The file created by the export sql command does not include triggers or stored procedures, but does include views. To export triggers or stored procedures, use the pg_dump tool.

    For more information about using the export sql command, see the sql export sql command reference page.

  6. If you do not need to retain the IAM role you set previously, revoke it now.
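
Steps 2 through 4 can be scripted. The following is a minimal sketch, using placeholder names, that reads the service account from the instance description and grants it the bucket-level role:

    # Extract the instance's service account email address.
    SA=$(gcloud sql instances describe [INSTANCE_NAME] \
        --format="value(serviceAccountEmailAddress)")

    # Grant the storage.objectAdmin role on the export bucket.
    gsutil iam ch "serviceAccount:${SA}:roles/storage.objectAdmin" gs://[BUCKET_NAME]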

REST v1beta4

  1. Create a bucket for the export:
    gsutil mb -p [PROJECT_NAME] -l [LOCATION_NAME] gs://[BUCKET_NAME]
    

    This step is not required, but it is strongly recommended so that you do not open up access to any other data.

  2. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Using IAM permissions.
  3. Export your database:

    Before using any of the request data below, make the following replacements:

    • project-id: The project ID
    • instance-id: The instance ID
    • bucket_name: The Cloud Storage bucket name
    • path_to_dump_file: The path to the SQL dump file
    • database_name: The name of a database inside the Cloud SQL instance
    • offload: Enables serverless export. Set to true to use serverless export.

    HTTP method and URL:

    POST https://www.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id/export

    Request JSON body:

    {
     "exportContext":
       {
          "fileType": "SQL",
          "uri": "gs://bucket_name/path_to_dump_file",
          "databases": ["database_name"]
          "offload": true | false
        }
    }
    

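    One way to send the request is with curl. The following sketch assumes the request body above is saved in a file named request.json and that you authenticate with the gcloud tool:

    curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json; charset=utf-8" \
        -d @request.json \
        "https://www.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id/export"

    A successful request returns an operation resource in the JSON response, which you can poll to track the export.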

  4. If you do not need to retain the IAM role you set previously, revoke it now.
For the complete list of parameters for this request, see the instances:export page.

Exporting data from Cloud SQL to a CSV file

You can export your data in CSV format, which is usable by other tools and environments. Exports happen at the database level. During a CSV export, you can specify the schemas to export. All schemas under the database level are eligible for export.

Before you begin

This procedure requires you to export a file to Cloud Storage. To export data into Cloud Storage, the instance's service account or the user must have the Cloud SQL Client role and at least the roles/storage.legacyBucketWriter IAM role. If the service account or user is also performing import operations, you can grant the account the storage.objectAdmin IAM role on the project. For help with IAM roles, see Cloud Identity and Access Management for Cloud Storage.

You can find the instance's service account name in the Google Cloud Console on your instance's Overview page. You can verify the roles for your Cloud Storage bucket by using the gsutil tool to inspect the bucket:

gsutil iam get gs://[BUCKET_NAME]

Learn more about using IAM with buckets.

Exporting data to a CSV file in Cloud Storage

To export data from a database on a Cloud SQL instance to a CSV file in a Cloud Storage bucket:

Console

  1. Go to the Cloud SQL Instances page in the Google Cloud Console.

    Go to the Cloud SQL Instances page

  2. Click the instance to open its Overview page.
  3. Click EXPORT.
  4. Under Cloud Storage export location, add the name of the bucket, folder, and the file that you want to export, or click Browse to find or create a bucket, folder, or file.

    If you click Browse:

    1. Under Location, select a Cloud Storage bucket or folder for your export.
    2. In the Name text box, add a name for the CSV file, or if you created the file previously, select it from the list under Location.

      You can use a file extension of .gz (the complete extension would be .csv.gz) to compress your export file.

    3. Click Select.
  5. Under Format, click CSV.
  6. Under Database for export, select the name of the database from the drop-down menu.
  7. Under SQL query, enter a SQL query to specify the table to export data from.

    For example, to export the entire contents of the entries table in the guestbook database, you enter:

    SELECT * FROM guestbook.entries;

    Your query must specify a table in the specified database; you cannot export an entire database in CSV format.

  8. Click Export to start the export.
  9. The Export database? dialog box opens with a message that the export process can take an hour or more for large databases. During the export, the only operation you can perform on the instance is viewing information, and you can't stop the export once it starts. If this is a good time to start an export, click EXPORT. Otherwise, click CANCEL.

gcloud

  1. Create a Cloud Storage bucket, if you haven't already.

    For help with creating a bucket, see Creating Storage Buckets.

  2. Describe the instance you are exporting from:
    gcloud sql instances describe [INSTANCE_NAME]
    
  3. Use gsutil iam to grant the storage.objectAdmin IAM role to the service account for the bucket. For help with setting IAM permissions, see Using IAM permissions.
  4. Export the database:
    gcloud sql export csv [INSTANCE_NAME] gs://[BUCKET_NAME]/[FILE_NAME] \
                                --database=[DATABASE_NAME] \
                                --offload \
                                --query=[SELECT_QUERY]
    

    For information about using the export csv command, see the sql export csv command reference page.

  5. If you do not need to retain the IAM role you set previously, revoke it now.

REST v1beta4

  1. Create a bucket for the export:
    gsutil mb -p [PROJECT_NAME] -l [LOCATION_NAME] gs://[BUCKET_NAME]
    

    This step is not required, but it is strongly recommended so that you do not open up access to any other data.

  2. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Using IAM permissions.
  3. Export your database:

    Before using any of the request data below, make the following replacements:

    • project-id: The project ID
    • instance-id: The instance ID
    • bucket_name: The Cloud Storage bucket name
    • path_to_csv_file: The path to the CSV file
    • database_name: The name of a database inside the Cloud SQL instance
    • offload: Enables serverless export. Set to true to use serverless export.
    • select_query: SQL query for export

    HTTP method and URL:

    POST https://www.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id/export

    Request JSON body:

    {
     "exportContext":
       {
          "fileType": "CSV",
          "uri": "gs://bucket_name/path_to_csv_file",
          "databases": ["database_name"],
          "offload": true | false
          "csvExportOptions":
           {
               "selectQuery":"select_query"
           }
       }
    }
    

    To send the request, you can use curl as shown in the SQL dump export section earlier on this page.

    You must specify exactly one database with the databases property, and if the select query specifies a database, it must be the same.

  4. If you do not need to retain the IAM role you set previously, revoke it now.
For the complete list of parameters for this request, see the instances:export page.

CSV export creates standard CSV output. If you need a non-standard CSV format, you can use the following statement in a psql client:

  \copy [table_name] TO '[csv_file_name].csv' WITH
      (FORMAT csv, ESCAPE '[escape_character]', QUOTE '[quote_character]',
      DELIMITER '[delimiter_character]', ENCODING 'UTF8', NULL '[null_marker_string]');
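
For example, a hypothetical pipe-delimited export of a table named entries might look like this:

  \copy entries TO 'entries.csv' WITH
      (FORMAT csv, ESCAPE '"', QUOTE '"',
      DELIMITER '|', ENCODING 'UTF8', NULL 'NULL');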

Exporting data from an on-premises PostgreSQL server using pg_dump

To export a database that is not managed by Cloud SQL, for later import into Cloud SQL, use the pg_dump utility with the following flags:

  • --no-owner

    Ownership change commands must not be included in the SQL dump file.

  • --format=plain

    Only plain SQL format is supported by Cloud SQL.

  • --no-acl

    This flag is required if your dump would otherwise contain statements to grant or revoke membership in a SUPERUSER role.

In addition, you must remove all of the following:

  • Extension-related statements, if Cloud SQL does not support that extension. See PostgreSQL Extensions for the list of supported extensions.
  • CREATE EXTENSION or DROP EXTENSION statements referencing plpgsql. This extension comes pre-installed on Cloud SQL Postgres instances.
  • COMMENT ON EXTENSION statements.

From a command line, run pg_dump:

pg_dump -U [USERNAME] --format=plain --no-owner --no-acl [DATABASE_NAME] \
    | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > [SQL_FILE].sql

The sed post-processing comments out all extension statements in the SQL dump file.

Confirm that the default encoding, as determined by the database settings, is correct for your data. If needed, you can override the default with the --encoding flag.

To export in parallel, use the -j NUM_CORES flag. NUM_CORES is the number of cores on the source instance. Use the same flag with pg_restore to import in parallel.
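
Note that pg_dump writes parallel dumps only in the directory format, not in plain SQL, so a parallel export must be restored with pg_restore over a direct connection rather than imported as a dump file. A minimal sketch, with placeholder names:

  # Parallel dump requires the directory output format (-Fd).
  pg_dump -U [USERNAME] -j [NUM_CORES] -Fd -f [DUMP_DIRECTORY] [DATABASE_NAME]

  # Restore in parallel from the same directory.
  pg_restore -U [USERNAME] -j [NUM_CORES] -d [DATABASE_NAME] [DUMP_DIRECTORY]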

For help with pg_dump, see the pg_dump reference.

Troubleshooting

Click the links in the table for details:

For this problem... The issue might be... Try this...
Can't see the operation status. The user interface only shows success or failure. Check the database logs or recent operations to find out more.
408 Error (Timeout) during export. SQL export can take a long time depending on database size and export content. Use multiple CSV exports to reduce the size of each operation.
CSV export worked but SQL export failed. SQL export is more likely to encounter compatibility issues with Cloud SQL. Use CSV exports to export only what you need.
Export is taking too long. Cloud SQL does not support concurrent synchronous operations. Use export offloading. Learn more.
Import is taking too long. Too many active connections can interfere with import operations. Close unused connections, or restart the Cloud SQL instance before beginning an import operation.
Create Extension error. The dump file contains references to an unsupported extension. Edit the dump file to remove the references.
Error using pg_dumpall. The tool requires superuser role. The superuser role is not supported.
Export operation times out before exporting anything. Query must produce data within first seven minutes. Try a manual export using the pg_dump tool.
Import fails. Exported file may contain database users who do not yet exist. Create the database users before doing the import.
Connection closed during the export operation. Query must produce data within first seven minutes. Test the query manually. Learn more.
Unknown error during export. Possible bandwidth issue. Ensure that both the instance and the Cloud Storage bucket are in the same region.
You want to automate exports. Cloud SQL does not provide a way to automate exports. Build your own pipeline to perform this functionality. Learn more.
ERROR_RDBMS: system error occurred. Cloud Storage permissions or a non-existent table. Check permissions, or ensure the table exists.

Can't see the operation status

You can't see the status of an ongoing operation.

The issue might be

The Google Cloud Console reports only success or failure when done, and is not designed to return warnings.

Things to try

Connect to the database and check the database logs. You can also use the gcloud command-line tool to list recent operations on the instance, as in the sketch below.
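
A minimal example, with a placeholder instance name:

  gcloud sql operations list --instance=[INSTANCE_NAME] --limit=10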


408 Error (Timeout) during export

You see the error message 408 Error (Timeout) while performing an export job in Cloud SQL.

The issue might be

CSV and SQL formats export differently. The SQL format exports the entire database, and likely takes longer to complete. The CSV format lets you define which elements of the database to include in the export.

Things to try

Use the CSV format, and run multiple, smaller export jobs to reduce the size and length of each operation.


CSV export worked but SQL export failed

CSV export worked but SQL export failed.

The issue might be

CSV and SQL formats export differently. The SQL format exports the entire database, and likely takes longer to complete. The CSV format lets you define which elements of the database to include in the export.

Things to try

Use CSV exports to export only what you need.


Export is taking too long

Export is taking too long, blocking other operations.

The issue might be

Cloud SQL does not support concurrent synchronous operations.

Things to try

Use serverless export to offload the export operation, or try exporting smaller datasets at a time.


Import is taking too long

Import is taking too long, blocking other operations.

The issue might be

Too many active connections can interfere with import operations. Connections consume CPU and memory, limiting the resources available.

Things to try

Close unused connections. Check CPU and memory usage to make sure there are plenty of resources available. The best way to ensure maximum resources for the import operation is to restart the instance before beginning the operation. A restart:

  • Closes all connections.
  • Ends any tasks that may be consuming resources.


Create Extension error

You see the error message SET SET SET SET SET SET CREATE EXTENSION ERROR: must be owner of extension plpgsql

The issue might be

The dump file contains CREATE EXTENSION or COMMENT ON EXTENSION statements that reference plpgsql, an extension that comes pre-installed on Cloud SQL instances.

Things to try

Edit the dump file and comment out all lines relating to plpgsql.


Error using pg_dumpall

You get an error when trying to use the external pg_dumpall command-line tool.

The issue might be

This tool requires the superuser role.

Things to try

Cloud SQL is a managed service and does not give users the superuser role or its permissions.


Connection reset by peer

The export operation times out before anything is exported. You see the error message Could not receive data from client: Connection reset by peer.

The issue might be

If Cloud Storage does not receive any data within a certain time frame, the connection resets.

Things to try

Do a manual export using the pg_dump tool.


Import fails

Import fails when one or more users referenced in the exported SQL dump file do not exist.

The issue might be

Before importing a SQL dump, all the database users who own objects or were granted permissions on objects in the dumped database must exist. If they do not, the restore fails to recreate the objects with the original ownership and/or permissions.

Things to try

Create the database users before importing the SQL dump.
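
For example, a missing user could be recreated with the gcloud tool before re-running the import; the user name and password here are placeholders:

  gcloud sql users create [USER_NAME] --instance=[INSTANCE_NAME] --password=[PASSWORD]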


Connection closed during the export operation

Connection closed during the export operation.

The issue might be

The connection to Cloud Storage may be timing out because the query running in the export is not producing any data within the first seven minutes after the export is initiated.

Things to try

Test the query manually by connecting from any client and sending the output of your query to STDOUT with the following command:

COPY (INSERT_YOUR_QUERY_HERE) TO STDOUT WITH (FORMAT csv, DELIMITER ',', ENCODING 'UTF8', QUOTE '"', ESCAPE '"');
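
One way to run this check from a shell, with placeholder connection values, and confirm that rows start arriving promptly:

  psql "host=[INSTANCE_IP] user=[USERNAME] dbname=[DATABASE_NAME]" \
      -c "COPY (SELECT * FROM [TABLE_NAME]) TO STDOUT WITH (FORMAT csv)" \
      | head -n 5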

This is expected behavior: when the export is initiated, the client is expected to start sending data right away. Keeping the connection open without sending any data eventually breaks the connection, causing the export to fail and leaving the operation in an uncertain state. This is also what the error message from gcloud indicates:

operation is taking longer than expected.


Unknown error during export

You see the error message Unknown error while trying to export a database to a Cloud Storage bucket.

The issue might be

The transfer might be failing due to a bandwidth issue.

Things to try

The Cloud SQL instance may be located in a different region from the Cloud Storage bucket. Reading and writing data from one continent to another involves a lot of network usage, and can cause intermittent issues like this. Check the regions of your instance and bucket.


Want to automate exports

You want to automate exports.

The issue might be

Cloud SQL does not provide a way to automate exports.

Things to try

You could build your own automated export system using Google Cloud products such as Cloud Scheduler, Pub/Sub, and Cloud Functions.
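
A rough sketch of the scheduling half of such a pipeline, with hypothetical names; a Cloud Function subscribed to the topic would call the instances:export API shown earlier on this page:

  # Create a Pub/Sub topic that triggers the export function.
  gcloud pubsub topics create export-trigger

  # Publish to the topic every night at 2 AM.
  gcloud scheduler jobs create pubsub nightly-export \
      --schedule="0 2 * * *" \
      --topic=export-trigger \
      --message-body="start-export"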


ERROR_RDBMS system error occurred

You see the error message [ERROR_RDBMS] system error occurred.

The issue might be

  • The user might not have all the Cloud Storage permissions it needs.
  • The database table might not exist.

Things to try

  1. Check that you have at least WRITER permissions on the bucket and READER permissions on the export file. For more information on configuring access control in Cloud Storage, see Create and Manage Access Control Lists.
  2. Ensure the table exists. If the table exists, confirm that you have the correct permissions on the bucket.

What's next