The Database Service import service account must have access to the dump file. The service account is named `postgresql-import-DATABASE_CLUSTER_NAME` or `oracle-import-DATABASE_CLUSTER_NAME`, depending on the type of database you are importing.

Replace `DATABASE_CLUSTER_NAME` with the name of the database cluster where you are importing the data.
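The dump file itself is addressed later on this page by an `s3://` URI. As a minimal sketch, assuming the storage bucket exposes an S3-compatible endpoint and you have credentials configured for it, the file could be staged with any S3-compatible client; the endpoint URL, profile, and object name below are placeholders, not values documented on this page:

    # Upload the local dump file to the import bucket (placeholder endpoint,
    # profile, bucket, and object names; adjust to your environment).
    aws s3 cp ./sample.dmp s3://BUCKET_NAME/sample.dmp \
        --endpoint-url https://objectstorage.example.com \
        --profile gdc-import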
You can import a dump file into a database cluster using either the GDC console or the Distributed Cloud CLI:
### Console
1. Open the **Database cluster overview** page in the GDC console to see the cluster that contains the database you are importing.

2. Click **Import**. The **Import data to accounts** panel opens.

3. In the **Source** section of the **Import data to accounts** panel, specify the location of the SQL data dump file you uploaded previously.

4. In the **Destination** field, specify an existing destination database for the import.

   **Note:** You can leave this field empty if the dump file already specifies a destination. If the dump file specifies a destination and you also fill in the **Destination** field, the **Destination** field overrides the destination specified in the dump file.

5. Click **Import**. A banner in the GDC console shows the status of the import.
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-09-04 (UTC)"],[],[],null,["# Import from a dump file\n\nBefore importing data, you must:\n\n1. [Create a database cluster](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/db-service#create)\n to import the data to.\n\n2. Upload the dump file to a storage bucket. See\n [Upload objects to storage buckets](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/upload-download-storage-objects#upload_objects_to_storage_buckets)\n for instructions.\n\n The Database Service import service account must have access to the dump file.\n The service account is named\n `postgresql-import-`\u003cvar translate=\"no\"\u003eDATABASE_CLUSTER_NAME\u003c/var\u003e or\n `oracle-import-`\u003cvar translate=\"no\"\u003eDATABASE_CLUSTER_NAME\u003c/var\u003e, depending on\n the type of database you are importing.\n\n Replace \u003cvar translate=\"no\"\u003eDATABASE_CLUSTER_NAME\u003c/var\u003e with the name of the\n database cluster where you are importing data.\n\nYou can import a dump file into a database cluster using either the\nGDC console or the Distributed Cloud CLI: \n\n### Console\n\n1. Open the **Database cluster overview** page in the GDC console to see\n the cluster that contains the database you are importing.\n\n2. Click **Import** . The **Import data to accounts** panel opens.\n\n3. In the **Source** section of the **Import data to accounts** panel, specify\n the location of the SQL data dump file you uploaded previously.\n\n4. In the **Destination** field, specify an existing destination database for the import.\n\n | **Note:** You can leave this field empty if the dump file already specifies a destination. If the dump file specifies a destination and you fill the **Destination** field, the **Destination** field overrides the destination specified in the dump file.\n5. Click **Import**. A banner on the GDC console shows the status of\n the import.\n\n### gdcloud CLI\n\n1. Before using Distributed Cloud CLI,\n [install and initialize](/distributed-cloud/hosted/docs/latest/gdch/resources/gdcloud-install) it.\n Then, [authenticate](/distributed-cloud/hosted/docs/latest/gdch/resources/gdcloud-auth) with your\n organization.\n\n2. 
Run the following command to import a dump file into a database:\n\n gdcloud database import sql \u003cvar translate=\"no\"\u003eDATABASE_CLUSTER\u003c/var\u003e s3://\u003cvar translate=\"no\"\u003eBUCKET_NAME/sample.dmp\u003c/var\u003e \\\n --project=\u003cvar translate=\"no\"\u003ePROJECT_NAME\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eDATABASE_CLUSTER\u003c/var\u003e with the name of the database cluster to import data into.\n - \u003cvar translate=\"no\"\u003eBUCKET_NAME/SAMPLE.dmp\u003c/var\u003e with the location of the dump file.\n - \u003cvar translate=\"no\"\u003ePROJECT_NAME\u003c/var\u003e with the name of the project that the database cluster is in.\n\n### API\n\n apiVersion: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eDBENGINE_NAME\u003c/span\u003e\u003c/var\u003e.dbadmin.gdc.goog/v1\n kind: Import\n metadata:\n name: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eIMPORT_NAME\u003c/span\u003e\u003c/var\u003e\n namespace: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eUSER_PROJECT\u003c/span\u003e\u003c/var\u003e\n spec:\n dbclusterRef: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eDBCLUSTER_NAME\u003c/span\u003e\u003c/var\u003e\n dumpStorage:\n s3Options:\n bucket: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eBUCKET_NAME\u003c/span\u003e\u003c/var\u003e\n key: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eDUMP_FILE_PATH\u003c/span\u003e\u003c/var\u003e\n type: S3\n\nReplace the following variables:\n\n- \u003cvar translate=\"no\"\u003eDBENGINE_NAME\u003cvar translate=\"no\"\u003e\u003c/var\u003e\u003c/var\u003e: the name of the database engine. This is one of `alloydbomni`, `postgresql` or `oracle`.\n- \u003cvar translate=\"no\"\u003eIMPORT_NAME\u003cvar translate=\"no\"\u003e\u003c/var\u003e\u003c/var\u003e: the name of the import operation.\n- \u003cvar translate=\"no\"\u003eUSER_PROJECT\u003cvar translate=\"no\"\u003e\u003c/var\u003e\u003c/var\u003e: the name of the user project where the database cluster to import is created.\n- \u003cvar translate=\"no\"\u003eDBCLUSTER_NAME\u003cvar translate=\"no\"\u003e\u003c/var\u003e\u003c/var\u003e: the name of the database cluster.\n- \u003cvar translate=\"no\"\u003eBUCKET_NAME\u003cvar translate=\"no\"\u003e\u003c/var\u003e\u003c/var\u003e: the name of the object storage bucket that stores the import files.\n- \u003cvar translate=\"no\"\u003eDUMP_FILE_PATH\u003cvar translate=\"no\"\u003e\u003c/var\u003e\u003c/var\u003e: the name of the object storage path to the stored files."]]
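The `Import` manifest above follows the Kubernetes custom resource pattern, so one way to submit it is with `kubectl`; the sketch below assumes a kubeconfig that can reach the cluster serving the `DBENGINE_NAME.dbadmin.gdc.goog` API. The file name `import.yaml` and the `imports` resource plural are assumptions for illustration, not values documented on this page:

    # Save the manifest above as import.yaml (hypothetical file name), then
    # apply it. The manifest already sets metadata.namespace, so no -n flag
    # is required for the apply step.
    kubectl apply -f import.yaml

    # List Import objects in the user project namespace; "imports" is the
    # assumed resource plural for the Import kind.
    kubectl get imports -n USER_PROJECT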