This page describes how to import data from a SQL file stored in a Cloud Storage bucket into an AlloyDB for PostgreSQL cluster. SQL files are plain text files with a sequence of SQL commands.
You can cancel the import of data into AlloyDB clusters. For more information, see Cancel an import operation.
Before you begin
Before you begin an import operation:
- Ensure that your database has adequate free space; a quick size check is sketched after this list.
- Import operations use database resources, but they don't interfere with normal database operations unless the cluster is under-provisioned.
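For example, you can check the current size of the target database with psql (a minimal sketch; the connection flags depend on your environment, and pg_size_pretty and pg_database_size are standard PostgreSQL functions):
psql -h CLUSTER_IP_ADDRESS -U postgres -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
Compare the result against the storage available to your cluster before you start a large import.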
Required roles and permissions for importing to AlloyDB
To import data from Cloud Storage into AlloyDB, the user initiating the import must have one of the following roles:
- The AlloyDB Admin role
- A custom role that includes the following permissions:
  - alloydb.clusters.get
  - alloydb.clusters.import
Additionally, the service account for the AlloyDB cluster must have one of the following roles:
- The storage.objectViewer IAM role
- A custom role that includes the storage.objects.get permission
For help with IAM roles, see Identity and Access Management.
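For example, a minimal sketch that grants the AlloyDB Admin role to a user (PROJECT_ID and USER_EMAIL are placeholders for your values):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=user:USER_EMAIL \
  --role=roles/alloydb.admin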
Import a SQL file to AlloyDB clusters
To import data to an AlloyDB cluster using a SQL file, follow these steps:
Console
Go to the Clusters page.
Click a cluster name to open the Overview page for that cluster.
Click Import.
In Choose the file you'd like to import data from, enter the path to the bucket and the SQL file to use for the import, or browse to an existing file.
Select the database you want the data to be imported into.
This causes AlloyDB to run the USE DATABASE statement before the import. If your SQL dump file includes a USE DATABASE statement, it overrides the database that you set in the Google Cloud console.
Optional: Specify a user for the import operation. If your import file contains statements that must be performed by a specific user, use this field to specify that user.
To start the import operation, click Import.
gcloud
- Create a Cloud Storage bucket.
- Upload the SQL file to your bucket. For help with uploading files to buckets, see Uploading objects.
- Use gcloud storage buckets add-iam-policy-binding to grant the storage.objectViewer IAM role to the AlloyDB cluster service account for the bucket. The service account has the following format:
  service-PROJECT_NUMBER@gcp-sa-alloydb.iam.gserviceaccount.com
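For example, a sketch of the binding command (BUCKET_NAME and PROJECT_NUMBER are placeholders for your values):
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-alloydb.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer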
Make the following replacements:
- CLUSTER_NAME: the name of the cluster.
- REGION: the region in which the AlloyDB cluster is deployed.
- PROJECT_NUMBER: the number of the project that contains the cluster.
- BUCKET_NAME: the Cloud Storage bucket name.
- PATH_TO_SQL_FILE: the path to the SQL file in the bucket.
- DATABASE_NAME: the name of a database inside the cluster.
- USERNAME: the user for the import operation.
For help with setting IAM permissions, see Using IAM permissions.
Import the file:
gcloud alloydb clusters import CLUSTER_NAME --region=REGION --database=DATABASE_NAME --gcs-uri='gs://BUCKET_NAME/PATH_TO_SQL_FILE' --user=USERNAME --sql
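For example, the following hypothetical invocation imports gs://my-bucket/dump.sql into the my-db database of the my-cluster cluster as the postgres user (all names are illustrative):
gcloud alloydb clusters import my-cluster --region=us-central1 --database=my-db --gcs-uri='gs://my-bucket/dump.sql' --user=postgres --sql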
If the command returns an error like PERMISSION_DENIED, review the permissions.
For information about using the import command, see the alloydb import command reference page.
If you don't need to keep the IAM permissions that you set previously, remove them using gcloud storage buckets remove-iam-policy-binding.
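For example, a sketch that removes the binding added earlier (same placeholders as above):
gcloud storage buckets remove-iam-policy-binding gs://BUCKET_NAME \
  --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-alloydb.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer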
REST v1
- Create a Cloud Storage bucket.
- Upload the SQL file to your bucket. For help with uploading files to buckets, see Uploading objects.
Give the service account permissions to the Cloud Storage bucket for the import operation. Use the service account format to identify the service account for the project you're importing into. The format for the service account is as follows:
service-PROJECT_NUMBER@gcp-sa-alloydb.iam.gserviceaccount.com
Use gcloud storage buckets add-iam-policy-binding to grant the storage.objectViewer IAM role to the AlloyDB cluster service account for the bucket. For help with setting IAM permissions, see Using IAM permissions.
Import your SQL file.
Use the following HTTP method and URL:
POST https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/clusters/CLUSTER_ID:import
Before you use any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- LOCATION_ID: the region in which the AlloyDB cluster is deployed.
- CLUSTER_ID: the cluster ID.
- BUCKET_NAME: the Cloud Storage bucket name.
- PATH_TO_SQL_FILE: the path to the SQL file.
- USER: the user to use for the import.
- DATABASE_NAME: the name of a database inside the AlloyDB cluster.
Request JSON body:
{ "gcsUri": "gs://BUCKET_NAME/PATH_TO_SQL_FILE", "database": "DATABASE_NAME", "user": "USER", "sqlImportOptions": {} }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Save the request body in a file named request.json and execute the following command:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/clusters/CLUSTER_ID:import"
PowerShell (Windows)
Save the request body in a file named request.json and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/clusters/CLUSTER_ID:import" | Select-Object -Expand Content
You receive a JSON response similar to the following:
Response
{ "name": "projects/project-id/locations/location-id/operations/operation-id", "metadata": { "@type": "type.googleapis.com/google.cloud.alloydb.v1.OperationMetadata", "createTime": "2025-01-04T13:12:32.363393723Z", "target": "projects/project-id/locations/location-id/clusters/cluster-id", "verb": "import", "requestedCancellation": false, "apiVersion": "v1" }, "done": false }
To use a different user for the import, specify the user property.
If you don't need to retain the IAM permissions that you set previously, remove them now.
For the complete list of parameters for the request, see clusters:import.
Check the status of an import operation
To check the status of an import operation, follow these steps:
gcloud
Run the gcloud alloydb operations describe command:
gcloud alloydb operations describe OPERATION_ID --region=REGION
You can also list details of a specific operation or cancel a specific
operation. For more information about this command, see the
gcloud alloydb operations
command reference page.
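For example, sketches of the list and cancel invocations (flag names are assumed from the gcloud alloydb operations command group; CLUSTER_NAME, REGION, and OPERATION_ID are placeholders):
gcloud alloydb operations list --cluster=CLUSTER_NAME --region=REGION
gcloud alloydb operations cancel OPERATION_ID --region=REGION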
REST v1
Use the GET method and the following URL:
GET https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/operations/OPERATION_ID
For more information, see get.
Before you use any of the request data, make the following replacements:
- REGION: the region in which the AlloyDB cluster is deployed.
- PROJECT_ID: the project ID.
- OPERATION_ID: the ID of the import operation, which appears in the name field of the response that is returned when you start the import.
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Execute the following command:
curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/operations/OPERATION_ID
PowerShell (Windows)
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method GET `
  -Headers $headers `
  -Uri "https://alloydb.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/operations/OPERATION_ID" | Select-Object -Expand Content
If the request is successful, the response body contains an instance of Operation.
What's next
- Learn how to import a CSV file.
- Learn how to cancel an import operation.
- Check the status of an import operation.