[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-05 UTC。"],[],[],null,["# Migrate your data\n\nAfter optimizing your Spanner schema and migrating your\napplication, you can move your data into an empty production-sized\nSpanner database, and then switch over your application to use\nthe Spanner database.\n\nDepending on your use case, you might be able to perform a live data\nmigration with minimal downtime, or you might require prolonged downtime to\nperform the data migration.\n\nIf your application can't afford a lot of\ndowntime, consider performing a\n[live data migration](#live-data-migration).\nIf your application can handle downtime, you can consider\n[migrating with downtime](#downtime-data-migration).\n\nIn a live data migration, you need to configure the network\ninfrastructure required for data to flow between your source database, the\ntarget Spanner database, and the tools you're using to perform\nthe data migration. You need to decide on either private or public network\nconnectivity depending on your organization's compliance requirements. You\nmight need your organization's network administrator to set up the\ninfrastructure.\n\nLive data migration\n-------------------\n\nA live data migration consists of two components:\n\n- Migrating the data in a consistent snapshot of your source database.\n- Migrating the stream of changes (inserts, updates and deletes) since that snapshot, referred to as change data capture (CDC).\n\nWhile live data migrations help protect your data, the process involves\nchallenges, including the following:\n\n- Storing CDC data while the snapshot is migrating.\n- Writing the CDC data to Spanner while capturing the incoming CDC stream.\n- Ensuring that the migration of CDC data to Spanner is faster than the incoming CDC stream.\n\nMigration with downtime\n-----------------------\n\nIf your source database can export to CSV or Avro, then you can migrate to\nSpanner with downtime. For more information, see\n[Spanner import and export overview](/spanner/docs/import-export-overview).\n\nMigrations with downtime can be used for test environments or\napplications that can handle a few hours of downtime. On a live database, a\nmigration with downtime might result in data loss.\n\nTo perform a downtime migration, consider the following high-level approach:\n\n1. Stop your application and generate a dump file of the data from the source database.\n2. Upload the dump file to Cloud Storage in a MySQL, PostgreSQL, Avro, or CSV dump format.\n3. 
Generating multiple small dump files makes it quicker to write to Spanner, because Spanner can read multiple dump files in parallel (see the sketch at the end of this page).

When you generate a dump file from the source database, keep the following in mind to produce a consistent snapshot of the data:

- Before you perform the dump, apply a read lock on the source database to prevent the data from changing while the dump file is generated.
- Alternatively, generate the dump file using a read replica of the source database with replication stopped.

Source-specific guides
----------------------

- MySQL: [MySQL live data migration](/spanner/docs/mysql-live-data-migration)
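Returning to the parallel-read point earlier on this page: one way to produce multiple smaller dump files from a MySQL source is to dump each table to its own file, as sketched below. The table and bucket names are hypothetical. Because the application is stopped before the dump (step 1 of the downtime approach), dumping tables in separate invocations still captures a consistent snapshot.

```bash
# Dump each table to its own file so that Spanner can read the
# files in parallel during the import (hypothetical table names).
for TABLE in customers orders order_items; do
  mysqldump --single-transaction my_app_db "${TABLE}" > "${TABLE}.sql"
done

# -m copies the files to Cloud Storage in parallel.
gsutil -m cp ./*.sql gs://my-migration-bucket/dumps/
```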