To move data from a graph database or a non-relational database, persist the data from the source database to files, upload the files to Cloud Storage, and then import the files using Dataflow. Recommended file formats include Avro and CSV. For more information, see Recommended formats for bulk migration.
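As a minimal sketch of the upload step, the following Python snippet copies exported files into a Cloud Storage bucket so that a Dataflow import job can read them later. It assumes the source data has already been exported to local CSV files; the bucket name and file paths are placeholders for this example.

```python
from google.cloud import storage

# Placeholder bucket and export file names; the local paths are assumed to
# mirror the destination object names.
BUCKET_NAME = "my-graph-migration-bucket"
EXPORT_FILES = ["nodes/person.csv", "edges/person_knows_person.csv"]

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Upload each exported file so a Dataflow import job can read it from
# Cloud Storage.
for path in EXPORT_FILES:
    blob = bucket.blob(path)
    blob.upload_from_filename(path)
    print(f"Uploaded {path} to gs://{BUCKET_NAME}/{path}")
```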
Handle constraints
If your schema defines constraints on the input tables, make sure that those constraints aren't violated during the data import. Constraints include the following:
Foreign keys: A foreign key constraint might be defined for an edge's reference to a node.
Interleaving: An edge input table might be interleaved in a node input table. This interleaving defines a parent-child relationship, with the implicit constraint that the parent must exist before the child is created.
The parent in an interleaved layout and the referenced entity in a foreign key constraint must be loaded first. This means that you must load the graph's nodes before you load its edges. If you load edges before the nodes that they connect to, the load process can fail with errors indicating that certain keys don't exist.
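As an illustration only, the following sketch creates a hypothetical node input table and an edge input table that carry both kinds of constraints; with this schema, a Person row must exist before any PersonKnowsPerson row that references it can be inserted. The instance, database, table, and column names are made up for this example.

```python
from google.cloud import spanner

# Placeholder instance and database names.
client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# The edge input table is interleaved in the source node table and carries a
# foreign key to the destination node, so nodes must be loaded before edges.
operation = database.update_ddl([
    """CREATE TABLE Person (
         id   INT64 NOT NULL,
         name STRING(MAX),
       ) PRIMARY KEY (id)""",
    """CREATE TABLE PersonKnowsPerson (
         id    INT64 NOT NULL,
         to_id INT64 NOT NULL,
         CONSTRAINT FK_ToPerson FOREIGN KEY (to_id) REFERENCES Person (id),
       ) PRIMARY KEY (id, to_id),
       INTERLEAVE IN PARENT Person ON DELETE CASCADE""",
])
operation.result()  # Wait for the schema change to complete.
```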
To import in the correct order, use the Google-provided templates to define a separate Dataflow job for each stage, and then run the jobs in sequence. For example, you can run one Dataflow job to import the nodes, and then run another Dataflow job to import the edges.
Alternatively, you can write a custom Dataflow job that manages the import sequence.
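The following is a rough sketch of the template-based approach: it launches a node import job and waits for it to finish before launching the edge import job. It assumes the Cloud Storage Text to Spanner template; the project, instance, database, and manifest paths are placeholders, and the template parameters shown should be checked against the template's documentation.

```python
import time
from googleapiclient.discovery import build

PROJECT = "my-project"  # placeholder
TEMPLATE = "gs://dataflow-templates/latest/GCS_Text_to_Cloud_Spanner"

dataflow = build("dataflow", "v1b3")

def run_import(job_name, manifest_path):
    """Launch a classic template job and block until it reaches a terminal state."""
    response = dataflow.projects().templates().launch(
        projectId=PROJECT,
        gcsPath=TEMPLATE,
        body={
            "jobName": job_name,
            "parameters": {
                "instanceId": "my-instance",      # placeholder
                "databaseId": "my-database",      # placeholder
                "importManifest": manifest_path,
            },
        },
    ).execute()
    job_id = response["job"]["id"]

    # Poll the job until it finishes, fails, or is cancelled.
    while True:
        job = dataflow.projects().jobs().get(
            projectId=PROJECT, jobId=job_id).execute()
        state = job.get("currentState", "")
        if state in ("JOB_STATE_DONE", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"):
            return state
        time.sleep(30)

# Import nodes first, then edges, to satisfy the constraints described above.
assert run_import("import-nodes", "gs://my-bucket/nodes-manifest.json") == "JOB_STATE_DONE"
assert run_import("import-edges", "gs://my-bucket/edges-manifest.json") == "JOB_STATE_DONE"
```

Whether you sequence separate template jobs this way or orchestrate the steps inside one custom pipeline, the requirement is the same: every node import must complete before any edge import that depends on it starts.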
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-05(UTC)"],[],[],null,["# Migrate your data to Spanner Graph\n\n| **Note:** This feature is available with the Spanner Enterprise edition and Enterprise Plus edition. For more information, see the [Spanner editions overview](/spanner/docs/editions-overview).\n\n\u003cbr /\u003e\n\nThis document describes the process of migrating your data and application to\nSpanner Graph. We describe the migration stages and recommended tools\nfor each stage, depending on your source database and other factors.\n\nMigrating your graph to Spanner Graph involves the following core stages:\n\n1. Gather your application requirements.\n2. Design your Spanner Graph schema.\n3. Migrate your application to Spanner Graph.\n4. Test and tune Spanner Graph.\n5. Migrate your data to Spanner Graph.\n6. Validate your data migration.\n7. Configure your cutover and failover mechanism.\n\nTo optimize your schema and application for performance, you might need to\niteratively design your schema, build your application, and test and tune\nSpanner Graph.\n\nGather your application requirements\n------------------------------------\n\nTo design a schema that meets your application needs, gather the\nfollowing requirements:\n\n- Data modeling\n- Common query patterns\n- Latency and throughput requirements\n\nDesign your Spanner Graph schema\n--------------------------------\n\nTo learn how to design a Spanner Graph schema, see\n[Spanner Graph schema overview](/spanner/docs/graph/schema-overview) for\nbasic concepts and see\n[Create, update, or drop a Spanner Graph schema](/spanner/docs/graph/create-update-drop-schema)\nfor more examples. To optimize your schema for common query patterns, see\n[Best practices for designing a Spanner Graph schema](/spanner/docs/graph/best-practices-designing-schema).\n\nMigrate your application to Spanner Graph\n-----------------------------------------\n\nFirst read the general Spanner guidance about [migrating your\napplication](/spanner/docs/migration-overview#migrate_your_application)\nand then read the guidance in this section to learn the Spanner Graph\napplication migration guidance.\n\n### Connect to Spanner Graph\n\nTo learn how to connect programmatically with Spanner Graph, see\n[Create, update, or drop a Spanner Graph schema](/spanner/docs/graph/create-update-drop-schema)\nand the [Spanner Graph queries overview](/spanner/docs/graph/queries-overview).\n\n### Migrate queries\n\nThe Spanner Graph query interface is compatible with\n[ISO GQL](https://www.iso.org/standard/76120.html), and it includes additional\nopenCypher syntax support. For more information, see\n[Spanner Graph reference for openCypher users](/spanner/docs/graph/opencypher-reference).\n\n### Migrate mutations\n\nTo migrate your application's mutation logic, you can use Spanner table\nmutation mechanisms. 
For more information, see [Insert, update, or delete Spanner Graph\ndata](/spanner/docs/graph/insert-update-delete-data).\n\nTest and tune Spanner Graph\n---------------------------\n\nThe Spanner guidance about how to [test and tune schema and application performance](/spanner/docs/migration-overview#test_and_tune_your_schema_and_application_performance)\napplies to Spanner Graph. To learn\nSpanner Graph performance optimization best practices, see\n[Best practices for designing a Spanner Graph schema](/spanner/docs/graph/best-practices-designing-schema)\nand\n[Best practices for tuning Spanner Graph queries](/spanner/docs/graph/best-practices-tuning-queries).\n\nMigrate your data to Spanner Graph\n----------------------------------\n\nTo move your data from a relational database, see [Migrate your\ndata](/spanner/docs/migration-overview#migrate-your-data).\n\nTo move data from a graph database or a non-relational database, you can persist\ndata from the source database into files, upload the files to Cloud Storage,\nand then import the files using Dataflow. Recommended file formats\ninclude AVRO and CSV. For more information, see [Recommended formats for bulk\nmigration](/spanner/docs/migration-overview#recommended_formats_for_bulk_migration).\n\n### Handle Constraints\n\nIf your schema has constraints defined on input tables, make sure those\nconstraints aren't violated during data import. Constraints include the\nfollowing:\n\n- **Foreign Keys**: A foreign key constraint might be defined for an edge's reference to a node.\n- **Interleaving**: An edge input table might be interleaved in a node input table. This interleaving defines a parent-child relationship, with the implicit constraint that the parent must exist before the child is created.\n\nThe parent in an interleaved organization and the referenced entity in the\nforeign key constraint must be loaded first. This means that you must first load\nnodes in the graph and then load the edges. When you load edges before you load\nthe nodes that the edges connect to, you might encounter errors during the\nloading process that indicate certain keys don't exist.\n\nTo achieve the correct import order, use Google-provided templates to define\nseparate Dataflow jobs for each stage and then run the jobs in\nsequence. For example, you might run one Dataflow job to import\nnodes, and then run another Dataflow job to import edges.\nAlternatively, you might write a [custom Dataflow\njob](/dataflow/docs/guides/use-beam) that manages the import sequence.\n\nFor more information about Google-provided templates, see the following:\n\n- [Cloud Storage Text to Spanner template](/dataflow/docs/guides/templates/provided/cloud-storage-to-cloud-spanner).\n- [Cloud Storage Avro to Spanner template](/dataflow/docs/guides/templates/provided/avro-to-cloud-spanner).\n\nIf you import in the wrong order, the job might fail, or only part of your data\nmight be migrated. If only part of your data is migrated, perform the migration\nagain.\n\n### Improve data loading efficiency\n\nTo improve data loading efficiency, create secondary indexes and define foreign\nkeys after you import your data to Spanner. This approach is only\npossible for initial bulk loading or during migration with downtime.\n\nValidate your data migration\n----------------------------\n\nAfter you migrate your data, perform basic queries to verify data\ncorrectness. 
Run the following queries on both source and destination databases\nto verify that the results match:\n\n- Count the number of nodes and edges.\n- Count the number of nodes and edges per label.\n- Compute stats (count, sum, avg, min, max) on each node and edge property.\n\nConfigure cutover and failover mechanism\n----------------------------------------\n\n[Configure your cutover and failover mechanisms](/spanner/docs/migration-overview#validate_your_data_migration).\n\nWhat's next\n-----------\n\n- [Compare Spanner Graph and openCypher](/spanner/docs/graph/opencypher-reference).\n- [Troubleshoot Spanner Graph](/spanner/docs/graph/troubleshoot)."]]