Write data to the Firestore database
====================================

Last updated 2025-09-04 UTC.

This page describes the second stage of the
[migration process](/firestore/mongodb-compatibility/docs/migrate-data), in which
you set up a Dataflow pipeline and begin a concurrent data move
from the Cloud Storage bucket into your destination
Firestore with MongoDB compatibility database. This operation
runs concurrently with the Datastream stream.

Start the Dataflow pipeline
---------------------------

The following command starts a new, uniquely named Dataflow
pipeline.

**Note:** The start timestamp of the job is captured in the `DATAFLOW_START_TIME` environment variable. Make a note of this timestamp: it appears as part of the job name in the Dataflow console.
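The launch command below references several environment variables that are assumed to have been set during the earlier migration steps. As a reference, a sketch of those variables follows; every value shown is an illustrative placeholder, not a default, so substitute your own project's resources:

```shell
# Illustrative placeholders -- the variable names match what the launch
# command expects, but every value below is an assumption for this sketch.
export LOCATION="us-central1"                            # Dataflow region
export NUM_WORKERS=4                                     # initial worker count
export MAX_WORKERS=20                                    # autoscaling ceiling
export WORKER_TYPE="n2-standard-4"                       # worker machine type
export TEMP_OUTPUT_LOCATION="gs://MY_BUCKET/df-temp"     # temp files
export STAGING_LOCATION="gs://MY_BUCKET/df-staging"      # staging files
export INPUT_FILE_LOCATION="gs://MY_BUCKET/datastream"   # Datastream output path
export DLQ_LOCATION="gs://MY_BUCKET/df-dlq"              # dead-letter queue path
export FIRESTORE_CONNECTION_URI="mongodb://example"      # connection string
export FIRESTORE_DATABASE_NAME="my-database"             # destination database
export DATASTREAM_NAME="my-stream"                       # stream from stage one
```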
    DATAFLOW_START_TIME="$(date +'%Y%m%d%H%M%S')"

    gcloud dataflow flex-template run "dataflow-mongodb-to-firestore-$DATAFLOW_START_TIME" \
      --template-file-gcs-location gs://dataflow-templates-us-central1/latest/flex/Cloud_Datastream_MongoDB_to_Firestore \
      --region $LOCATION \
      --num-workers $NUM_WORKERS \
      --temp-location $TEMP_OUTPUT_LOCATION \
      --additional-user-labels "" \
      --parameters inputFilePattern=$INPUT_FILE_LOCATION,\
    inputFileFormat=avro,\
    fileReadConcurrency=10,\
    connectionUri=$FIRESTORE_CONNECTION_URI,\
    databaseName=$FIRESTORE_DATABASE_NAME,\
    shadowCollectionPrefix=shadow_,\
    batchSize=500,\
    deadLetterQueueDirectory=$DLQ_LOCATION,\
    dlqRetryMinutes=10,\
    dlqMaxRetryCount=500,\
    processBackfillFirst=false,\
    useShadowTablesForBackfill=true,\
    runMode=regular,\
    directoryWatchDurationInMinutes=20,\
    streamName=$DATASTREAM_NAME,\
    stagingLocation=$STAGING_LOCATION,\
    autoscalingAlgorithm=THROUGHPUT_BASED,\
    maxNumWorkers=$MAX_WORKERS,\
    workerMachineType=$WORKER_TYPE

For more information about monitoring the Dataflow pipeline, see
[Troubleshooting](/firestore/mongodb-compatibility/docs/migrate-troubleshooting).

What's next
-----------

Proceed to
[Migrate traffic to Firestore](/firestore/mongodb-compatibility/docs/migrate-traffic).
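As a quick sanity check on the launch step, the job's unique name can be reconstructed from the captured start timestamp. A minimal sketch, assuming `DATAFLOW_START_TIME` is still set from the launch shell (the timestamp value here is illustrative):

```shell
# Illustrative timestamp; in practice this is the value captured at launch.
DATAFLOW_START_TIME="20250904120000"

# Reconstruct the job name exactly as the launch command built it.
JOB_NAME="dataflow-mongodb-to-firestore-$DATAFLOW_START_TIME"
echo "$JOB_NAME"   # prints dataflow-mongodb-to-firestore-20250904120000
```

The resulting name can then be passed to `gcloud dataflow jobs list --region "$LOCATION"` and matched against the listed jobs to confirm the pipeline is running; this check is a suggestion, not part of the documented migration steps.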