This page describes how to export data from AlloyDB for PostgreSQL clusters into SQL dump files.
For information about migrating an entire database from a supported database server to a new AlloyDB instance, see Migrate a database to AlloyDB by using Database Migration Service. If you're exporting because you want to create a new instance from the exported file, consider restoring a cluster from a stored backup.
You can cancel the export of data from AlloyDB for PostgreSQL clusters. For more information, see Cancel the export of data.
Before you begin
- Before you begin an export operation, keep in mind that export operations use database resources, but they don't interfere with standard database operations unless the instance is underprovisioned.
- Inter-region data transfer charges apply when the target bucket is in a different region than the source cluster. For more information, see AlloyDB for PostgreSQL pricing.
- Compression is enabled if the object name ends with the `.gz` extension. The object is then exported in `.gz` format to Cloud Storage (see the example after this list).
- Multiple export operations can run in parallel.
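For example, naming the export object with a `.gz` suffix, as in this hypothetical destination URI, produces a compressed dump file:

gs://my-alloydb-exports/sales/dump.sql.gz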
Required roles and permissions for exporting from AlloyDB
To export data from AlloyDB into Cloud Storage, the user initiating the export must have one of the following Identity and Access Management (IAM) roles:
- The `Cloud AlloyDB Admin` (`roles/alloydb.admin`) role
- A custom role, including the following permissions:
  - `alloydb.clusters.get`
  - `alloydb.clusters.export`
Additionally, the service account for the AlloyDB cluster must have one of the following roles:
- The `storage.objectAdmin` IAM role
- A custom role, including the `storage.objects.create` permission
For help with IAM roles, see Identity and Access Management.
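If you take the custom-role route, a minimal sketch of creating a role with the required cluster permissions might look like the following; the role ID `alloydbExporter` and its title are hypothetical names:

gcloud iam roles create alloydbExporter \
    --project=PROJECT_ID \
    --title="AlloyDB Exporter" \
    --permissions=alloydb.clusters.get,alloydb.clusters.export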
Export AlloyDB data to a SQL dump file
When you use AlloyDB to perform an export, whether from the gcloud CLI or the API, you are using the `pg_dump` utility with the options required to ensure that the resulting export file is valid for import back into AlloyDB.
To export data from a database on an AlloyDB cluster to a SQL dump file in a Cloud Storage bucket, follow these steps:
gcloud
- Create a Cloud Storage bucket.
- Use the provided format to identify the service account for the project you're exporting from. The format for the service account is as follows:
service-PROJECT_NUMBER@gcp-sa-alloydb.iam.gserviceaccount.com
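PROJECT_NUMBER is the project's numeric ID. If you only know the project ID, you can look up the number with a command like the following:

gcloud projects describe PROJECT_ID --format="value(projectNumber)"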
- Give the service account permissions to the Cloud Storage bucket for the export operation. Use `gcloud storage buckets add-iam-policy-binding` to grant the `storage.objectAdmin` IAM role to the service account. For help with setting IAM permissions, see Using IAM permissions. (A sketch of the grant and revoke commands follows these steps.)
- Export the database to your Cloud Storage bucket. The following options are available for exporting data in SQL dump format:
  - `--async` (Optional): return immediately, without waiting for the operation in progress to complete.
  - `--tables` (Optional): tables to export from.
  - `--schema-only` (Optional): if set, export only the schema.
  - `--clean-target-objects` (Optional): if set, output commands to `DROP` all the dumped database objects prior to outputting the commands for creating them.
  - `--if-exist-target-objects` (Optional): if set, use `DROP ... IF EXISTS` commands to check for the object's existence before dropping it in `--clean-target-objects` mode.
To use these features, include these options in the `gcloud` command. To export only object definitions (schema) and no data, use the `--schema-only` flag. To specify which tables to export, use the `--tables=TABLE_NAMES` flag; you can specify multiple tables as a comma-separated list of table names or wildcard patterns. Otherwise, remove these parameters from the following command:
gcloud alloydb clusters export CLUSTER_NAME --region=REGION --database=DATABASE_NAME --gcs-uri="gs://BUCKET_NAME/OBJECT_NAME" --tables=TABLE_NAMES --schema-only --clean-target-objects --if-exist-target-objects --sql
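For example, a schema-only export of two tables might look like the following; the cluster, region, database, bucket, and table names are all hypothetical:

gcloud alloydb clusters export my-cluster \
    --region=us-central1 \
    --database=sales \
    --gcs-uri="gs://my-alloydb-exports/sales/schema.sql.gz" \
    --tables=orders,customers \
    --schema-only \
    --sql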
The dump file that the `alloydb clusters export` command produces doesn't contain triggers or stored procedures, but it does contain views. To export triggers or stored procedures, use the `pg_dump` utility.

For more information about using the `alloydb clusters export` command, see the `alloydb clusters export` command reference page.

If you don't need to retain the IAM role that you set previously, revoke the role now; see the sketch that follows.
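As a sketch of the grant and revoke steps referenced earlier (the bucket name and project number are illustrative; the service account address follows the format shown previously):

gcloud storage buckets add-iam-policy-binding gs://my-alloydb-exports \
    --member=serviceAccount:service-123456789012@gcp-sa-alloydb.iam.gserviceaccount.com \
    --role=roles/storage.objectAdmin

gcloud storage buckets remove-iam-policy-binding gs://my-alloydb-exports \
    --member=serviceAccount:service-123456789012@gcp-sa-alloydb.iam.gserviceaccount.com \
    --role=roles/storage.objectAdmin

The first command grants the role before the export; the second revokes it afterward.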
REST v1
Create a bucket for the export:
gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
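For example, with hypothetical project, bucket, and location values:

gcloud storage buckets create gs://my-alloydb-exports --project=my-project --location=us-central1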
Use the service account format to identify the service account for the project that you're exporting from.
The format for the service account is as follows:
service-PROJECT_NUMBER@gcp-sa-alloydb.iam.gserviceaccount.com
Give the service account permissions to the Cloud Storage bucket for the export operation. Use `gcloud storage buckets add-iam-policy-binding` to grant the `storage.objectAdmin` IAM role to the service account. For help with setting IAM permissions, see Using IAM permissions.

Export your database.
Use the following HTTP method and URL:
POST https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID:export
Before you use any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- REGION: the region in which the AlloyDB cluster is deployed.
- CLUSTER_ID: the cluster ID.
- BUCKET_NAME: the Cloud Storage bucket name.
- PATH_TO_SQL_FILE: the path to the SQL dump file.
- DATABASE_NAME: the name of a database inside the AlloyDB instance.
- TABLE_NAMES: the tables to export from.
- SCHEMA_ONLY: if `true`, export only the schema.
- CLEAN_TARGET_OBJECTS: if `true`, output commands to `DROP` all the dumped database objects prior to outputting the commands for creating them.
- IF_EXIST_TARGET_OBJECTS: if `true`, use `DROP ... IF EXISTS` commands to check for the object's existence before dropping it in `clean_target_objects` mode.
To use these features, set the values of these parameters to `true`. Otherwise, set their values to `false`. To export only object definitions (schema) and no data, use the `schema_only` flag. To specify which tables to export, use the `tables` field; you can select multiple tables by providing a comma-separated list of table names or wildcard patterns.

Request JSON body:
{ "exportClusterRequest": { "gcs_destination": { "uri": "gs://BUCKET_NAME/PATH_TO_SQL_FILE" }, "database": "DATABASE_NAME", "sql_export_options": { "schema_only": [true|false], "tables": "TABLE_NAMES", "clean_target_objects": [true|false], "if_exist_target_objects": [true|false], } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named `request.json` and execute the following command:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID:export"
PowerShell (Windows)
Save the request body in a file named `request.json` and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID:export" | Select-Object -Expand Content
You receive a JSON response similar to the following:

{
  "name": "projects/PROJECT_ID/locations/REGION/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.alloydb.v1.OperationMetadata",
    "createTime": "2024-09-17T06:05:31.244428646Z",
    "target": "projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID",
    "verb": "export",
    "requestedCancellation": false,
    "apiVersion": "v1"
  },
  "done": false
}
If you don't need to retain the IAM role you set previously, remove it now.
For the complete list of parameters for the request, see `clusters:export`.
What's next
- Learn how to export a CSV file.
- Cancel a data export.