Database Migration Service uses migration jobs to migrate data from your source database instance to the destination database instance. Creating a migration job for an existing destination instance includes the following steps:
- Defining settings for the migration job
- Selecting the source database connection profile
- Selecting the existing destination database instance
- Demoting the existing instance to convert it into a read replica
- Setting up connectivity between the source and destination database instances
- Testing the migration job to ensure that the connection information you provided for the job is valid
There are certain limitations that you should consider when you want to migrate to a destination instance created outside of Database Migration Service. For example, your Cloud SQL destination instance must be empty or contain only system configuration data. For more information, see Known limitations.
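For example, a quick hedged check is to list the databases on the destination instance. Apart from the MySQL system schemas (such as mysql, information_schema, performance_schema, and sys), the list should be empty:

# Hedged check: DESTINATION_INSTANCE_ID is your Cloud SQL destination instance.
gcloud sql databases list \
  --instance=DESTINATION_INSTANCE_ID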
Define settings for the migration job
- In the Google Cloud console, go to the Migration jobs page.
- Click Create migration job.
The migration job configuration wizard page opens. This wizard contains multiple panels that walk you through each configuration step.
You can pause the creation of a migration job at any point by clicking SAVE & EXIT. All of the data that you enter up to that point is saved in a draft migration job. You can finish your draft migration job later.
- On the Get started page, enter the following information:
- Migration job name
This is a human-readable name for your migration job. This value is displayed in the Google Cloud console.
- Migration job ID
This is a machine-readable identifier for your migration job. You use this value to work with migration jobs by using Database Migration Service Google Cloud CLI commands or API.
- From the Source database engine list, select MySQL.
The Destination database engine field is populated automatically and can't be changed.
- Select the region where you want to save the migration job.
Database Migration Service is a fully regional product, meaning all entities related to your migration (source and destination connection profiles, migration jobs, destination databases) must be saved in a single region. Select the region based on the location of the services that need your data, such as Compute Engine instances, App Engine apps, and other services. After you choose the destination region, this selection can't be changed.
- Click Save and continue.
Specify information about the source connection profile
On the Define a source page, perform the following steps:
- From the Source connection profile drop-down menu, select the connection profile for your source database.
- In the Customize full dump configuration section, click Edit configuration.
- In the Edit full dump configuration panel, from the Full dump method drop-down menu, select one of the following:
- Physical based: Select this option if you want to use the Percona XtraBackup utility to provide your own backup file. This approach requires additional preparation steps. For the full guide on using physical backup files generated by Percona XtraBackup, see Migrate your databases by using a Percona XtraBackup physical file.
- Logical based: Select this option if you want to use a logical backup file created by the mysqldump utility. Database Migration Service can auto-generate this backup file for you, or you can provide your own copy.
- Edit the rest of the dump settings. Perform one of the following:
- If you use a physical backup file, in the Provide your folder section, click Browse, and then select the folder where you uploaded your full dump file. Make sure you select the dedicated folder that contains the full backup file, not the storage bucket itself.
- If you use a logical backup file, configure the data dump parallelism or dump flags as described in the following steps.
In the Choose how to generate your dump file section, use one of the following options:
Auto-generated (recommended)
This option is recommended because Database Migration Service always generates an initial database dump file after the migration job is created and started.
Database Migration Service uses this file to reproduce the original object definitions and table data of your source database so that this information can be migrated into a destination Cloud SQL database instance.
If you use the auto-generated dump, select the type of operation Database Migration Service should perform in the Configure data dump operation section:
- Data dump parallelism: Use a high-performance parallelism option, available when migrating to MySQL versions 5.7 or 8.
The speed of data parallelism is related to the amount of load induced on your source database:
- Optimal (recommended): Balanced performance with optimal load on the source database.
- Maximum: Provides the highest dump speeds, but might cause increased load on the source database.
- Minimum: Takes the lowest amount of compute resources on the source database, but might have slower dump throughput.
- Dump flags: This option is mutually exclusive with Data dump parallelism. Use this setting to directly configure flags for the mysqldump utility that's used to create the dump file. To add a flag:
  - Click ADD FLAG.
  - Select one of the following flags:
    - add-locks: This flag surrounds each table that's contained in the dump file with LOCK TABLES and UNLOCK TABLES statements. This results in faster inserts when the dump file is loaded into the destination instance.
    - ignore-error: Use this flag to enter a list of comma-separated error numbers. These numbers represent the errors that the mysqldump utility will ignore.
    - max-allowed-packet: Use this flag to set the maximum size of the buffer for communication between the MySQL client and the source MySQL database. The default size of the buffer is 24 MB; the maximum size is 1 GB.
  - Click DONE.
  - Repeat these steps for each flag that you want to add.

  To remove a flag, click the trashcan icon to the right of the row that contains the flag. For an illustration of how these flags map to a mysqldump invocation, see the example after these steps.
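For orientation, the following hedged example shows what these flags look like in a plain mysqldump invocation. Database Migration Service applies your chosen flags to its own dump run, so you don't execute this command yourself; the host, user, database, and error numbers are placeholders.

# Illustration only; all values are placeholders.
mysqldump --host=SOURCE_HOST --user=SOURCE_USER --password \
  --add-locks \
  --ignore-error=1062,1146 \
  --max-allowed-packet=512M \
  --databases SOURCE_DB > dump.sql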
Provide your own
This option is not recommended because, by default, Database Migration Service performs an initial dump as part of the migration job run.
If you want to use your own dump file, select Provide your own, click BROWSE, select your file (or the whole Cloud Storage folder if you use multiple files), and then click SELECT.
Make sure the dump was generated within the last 24 hours and adheres to the dump requirements.
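If you provide your own file, the following hedged sketch shows one way to generate a logical dump and upload it to a dedicated Cloud Storage folder. The flag set, bucket, folder, and database names are placeholders; the authoritative flag list is in the dump requirements.

# Hedged sketch; consult the dump requirements for the exact mysqldump flags.
mysqldump --host=SOURCE_HOST --user=SOURCE_USER --password \
  --databases SOURCE_DB --single-transaction > dump.sql
gzip dump.sql

# Upload to a dedicated folder, then confirm its contents.
gcloud storage cp dump.sql.gz gs://BUCKET_NAME/FOLDER_NAME/
gcloud storage ls gs://BUCKET_NAME/FOLDER_NAME/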
- Click Save and continue.
Select the destination Cloud SQL instance
- From the Type of destination instance menu, select Existing instance.
- In the Select destination instance section, select your destination instance.
- Review the information in the Instance details section, and click Select and continue.
- To migrate to an existing destination database, Database Migration Service demotes the destination instance and converts it to a read replica. To confirm that the demotion can be safely performed, in the confirmation window, enter the destination instance identifier.
- Click Confirm and continue.
Set up connectivity between the source and destination database instances
From the Connectivity method drop-down menu, select a network connectivity method. This method defines how the Cloud SQL destination instance connects to the source database. Current network connectivity methods include IP allowlist, reverse SSH tunnel, and VPC peering.
If you want to use... | Then... |
---|---|
The IP allowlist network connectivity method, | You need to specify the outgoing IP address of your destination instance (see the example after this table). If your Cloud SQL destination instance is a high availability instance, include the outgoing IP addresses for both the primary and the secondary instance. |
The reverse SSH tunnel network connectivity method, | You need to select the Compute Engine VM instance that will host the tunnel. After you specify the instance, Google provides a script that performs the steps to set up the tunnel between the source and destination databases. You need to run the script in the Google Cloud CLI, from a machine that has connectivity to both the source database and to Google Cloud. |
The VPC peering network connectivity method, | You need to select the VPC network where the source database resides. The Cloud SQL instance will be updated to connect to this network. |
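For the IP allowlist method, one hedged way to find the outgoing IP address of your destination instance is through the Cloud SQL describe command. In the output, the entry whose type is OUTGOING is the address to allowlist on your source; repeat for the secondary instance if you use high availability.

# Hedged sketch: the format expression only filters the describe output.
gcloud sql instances describe DESTINATION_INSTANCE_ID \
  --format="json(ipAddresses)"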
After you select and configure network connectivity, click Configure and continue.
Test, create, and run the migration job
In this final step, review the summary of the migration job settings, source, destination, and connectivity method, and then test the validity of the migration job setup. If any issues are encountered, you can modify the migration job's settings. Note that not all settings are editable.
- On the Test and create migration job page, click Test job.
If the test fails, you can address the problem in the appropriate part of the flow and return to re-test. For information about troubleshooting a failing migration job test, see Diagnose issues for MySQL.
- When the migration job test finishes, click Create and start job to create the migration job and start it immediately, or click Create job to create the migration job without immediately starting it.
If the job isn't started at the time that it's created, then it can be started from the Migration jobs page by clicking START. Regardless of when the migration job starts, your organization is charged for the existence of the destination instance.
Your migration is now in progress. When you start the migration job, Database Migration Service begins the full dump, briefly locking the source database. If your source is in Amazon RDS or Amazon Aurora, Database Migration Service additionally requires a short write downtime (typically under a minute) at the start of the migration. For more information, see Known limitations.
- Proceed to Review the migration job.
Create a migration job by using Google Cloud CLI
When you migrate to an existing instance by using the Google Cloud CLI, you must manually create the connection profile for the destination instance. This isn't required when you use the Google Cloud console, because Database Migration Service takes care of creating and removing the destination connection profile for you.
Before you begin
Before you use the gcloud CLI to create a migration job for an existing destination database instance, make sure you:
- Create your destination database instance.
- Prepare your source database instance. See:
- Configure your source
- Create the source connection profile (the source connection profile identifier is required to create a migration job).
- Configure connectivity
Create destination connection profile
Create the destination connection profile for your existing destination instance by running the gcloud database-migration connection-profiles create command:
This sample uses the optional --no-async flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the --no-async flag to run commands asynchronously. If you do, you need to use the gcloud database-migration operations describe command to verify if your operation is successful.
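For example, if you run a command asynchronously, you can check the returned operation later. This is a hedged example; OPERATION_ID comes from the command output, and REGION is the same region you used for the command:

gcloud database-migration operations describe OPERATION_ID \
  --region=REGION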
Before using any of the command data below, make the following replacements:
- CONNECTION_PROFILE_ID with a machine-readable identifier for your connection profile.
- REGION with the identifier of the region where you want to save the connection profile.
- DESTINATION_INSTANCE_ID with the instance identifier of your destination instance.
- (Optional) CONNECTION_PROFILE_NAME with a human-readable name for your connection profile. This value is displayed in the Google Cloud console.
Execute the following command:
Linux, macOS, or Cloud Shell
gcloud database-migration connection-profiles \
  create mysql CONNECTION_PROFILE_ID \
  --no-async \
  --cloudsql-instance=DESTINATION_INSTANCE_ID \
  --region=REGION \
  --display-name=CONNECTION_PROFILE_NAME
Windows (PowerShell)
gcloud database-migration connection-profiles `
  create mysql CONNECTION_PROFILE_ID `
  --no-async `
  --cloudsql-instance=DESTINATION_INSTANCE_ID `
  --region=REGION `
  --display-name=CONNECTION_PROFILE_NAME
Windows (cmd.exe)
gcloud database-migration connection-profiles ^
  create mysql CONNECTION_PROFILE_ID ^
  --no-async ^
  --cloudsql-instance=DESTINATION_INSTANCE_ID ^
  --region=REGION ^
  --display-name=CONNECTION_PROFILE_NAME
You should receive a response similar to the following:
Waiting for connection profile [CONNECTION_PROFILE_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created connection profile CONNECTION_PROFILE_ID [OPERATION_ID]
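Optionally, you can confirm the connection profile exists and inspect its settings (a hedged example):

gcloud database-migration connection-profiles describe CONNECTION_PROFILE_ID \
  --region=REGION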
Create the migration job
This sample uses the optional --no-async flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the --no-async flag to run commands asynchronously. If you do, you need to use the gcloud database-migration operations describe command to verify if your operation is successful.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with a machine-readable identifier for your migration job. You use this value to work with migration jobs by using Database Migration Service Google Cloud CLI commands or API.
- REGION with the region identifier where you want to save the migration job.
- MIGRATION_JOB_NAME with a human-readable name for your migration job. This value is displayed in Database Migration Service in the Google Cloud console.
- SOURCE_CONNECTION_PROFILE_ID with a machine-readable identifier of the source connection profile.
- DESTINATION_CONNECTION_PROFILE_ID with a machine-readable identifier of the destination connection profile.
- MIGRATION_JOB_TYPE with the type of your migration job. Two values are allowed: ONE_TIME or CONTINUOUS. For more information, see Types of migration.
Execute the following command:
Linux, macOS, or Cloud Shell
gcloud database-migration migration-jobs \
  create MIGRATION_JOB_ID \
  --no-async \
  --region=REGION \
  --display-name=MIGRATION_JOB_NAME \
  --source=SOURCE_CONNECTION_PROFILE_ID \
  --destination=DESTINATION_CONNECTION_PROFILE_ID \
  --type=MIGRATION_JOB_TYPE
Windows (PowerShell)
gcloud database-migration migration-jobs `
  create MIGRATION_JOB_ID `
  --no-async `
  --region=REGION `
  --display-name=MIGRATION_JOB_NAME `
  --source=SOURCE_CONNECTION_PROFILE_ID `
  --destination=DESTINATION_CONNECTION_PROFILE_ID `
  --type=MIGRATION_JOB_TYPE
Windows (cmd.exe)
gcloud database-migration migration-jobs ^
  create MIGRATION_JOB_ID ^
  --no-async ^
  --region=REGION ^
  --display-name=MIGRATION_JOB_NAME ^
  --source=SOURCE_CONNECTION_PROFILE_ID ^
  --destination=DESTINATION_CONNECTION_PROFILE_ID ^
  --type=MIGRATION_JOB_TYPE
You should receive a response similar to the following:
Waiting for migration job [MIGRATION_JOB_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created migration job MIGRATION_JOB_ID [OPERATION_ID]
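Optionally, you can confirm the migration job was created and review its configuration and state (a hedged example):

gcloud database-migration migration-jobs describe MIGRATION_JOB_ID \
  --region=REGION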
Demote the destination database
Database Migration Service requires that the destination database instance operate as a read replica for the duration of the migration. Before you start the migration job, run the gcloud database-migration migration-jobs demote-destination command to demote the destination database instance.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers, as shown in the example after this list.
- REGION with the identifier of the region where your connection profile is saved.
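For example, a hedged sketch of listing the migration jobs in a region to find the identifier:

gcloud database-migration migration-jobs list \
  --region=REGION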
Execute the following command:
Linux, macOS, or Cloud Shell
gcloud database-migration migration-jobs \
  demote-destination MIGRATION_JOB_ID \
  --region=REGION
Windows (PowerShell)
gcloud database-migration migration-jobs `
  demote-destination MIGRATION_JOB_ID `
  --region=REGION
Windows (cmd.exe)
gcloud database-migration migration-jobs ^
  demote-destination MIGRATION_JOB_ID ^
  --region=REGION
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: demote-destination
name: OPERATION_ID
To see if your operation is successful, you can query the returned operation object, or check the status of the migration job:
- Use the gcloud database-migration migration-jobs describe command to view the status of the migration job.
- Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.
Manage migration jobs
At this point, your migration job is configured and connected to your destination database instance. You can manage it by using the verify, start, stop, restart, and resume operations.
Verify the migration job
We recommend that you first verify your migration job by running the gcloud database-migration migration-jobs verify command.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
- REGION with the identifier of the region where your connection profile is saved.
Execute the following command:
Linux, macOS, or Cloud Shell
gcloud database-migration migration-jobs \
  verify MIGRATION_JOB_ID \
  --region=REGION
Windows (PowerShell)
gcloud database-migration migration-jobs `
  verify MIGRATION_JOB_ID `
  --region=REGION
Windows (cmd.exe)
gcloud database-migration migration-jobs ^
  verify MIGRATION_JOB_ID ^
  --region=REGION
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: verify
name: OPERATION_ID
To see if your operation is successful, you can query the returned operation object, or check the status of the migration job:
- Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
- Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.
Start the migration job
Start the migration job by running the gcloud database-migration migration-jobs start command.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
- REGION with the identifier of the region where your connection profile is saved.
Execute the following command:
Linux, macOS, or Cloud Shell
gcloud database-migration migration-jobs \
  start MIGRATION_JOB_ID \
  --region=REGION
Windows (PowerShell)
gcloud database-migration migration-jobs `
  start MIGRATION_JOB_ID `
  --region=REGION
Windows (cmd.exe)
gcloud database-migration migration-jobs ^
  start MIGRATION_JOB_ID ^
  --region=REGION
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: start
name: OPERATION_ID
To see if your operation is successful, you can query the returned operation object, or check the status of the migration job:
- Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
- Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.
Promote the migration job
After the migration reaches the Change Data Capture (CDC) phase, you can promote the destination database instance from a read replica to a standalone instance.
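Before you promote, you can check that the job has reached the CDC phase. This is a hedged sketch that assumes the migration job's phase field appears in the describe output:

gcloud database-migration migration-jobs describe MIGRATION_JOB_ID \
  --region=REGION \
  --format="value(phase)"
# Expect CDC before you run the promote command.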
Run the gcloud database-migration migration-jobs promote command:
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the gcloud database-migration migration-jobs list command to list all migration jobs in a given region and view their identifiers.
- REGION with the identifier of the region where your connection profile is saved.
Execute the following command:
Linux, macOS, or Cloud Shell
gcloud database-migration migration-jobs \
  promote MIGRATION_JOB_ID \
  --region=REGION
Windows (PowerShell)
gcloud database-migration migration-jobs `
  promote MIGRATION_JOB_ID `
  --region=REGION
Windows (cmd.exe)
gcloud database-migration migration-jobs ^
  promote MIGRATION_JOB_ID ^
  --region=REGION
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: promote
name: OPERATION_ID
To see if your operation is successful, you can query the returned operation object, or check the status of the migration job:
- Use the gcloud database-migration migration-jobs describe command with the MIGRATION_JOB_ID to view the status of the migration job.
- Use the gcloud database-migration operations describe command with the OPERATION_ID to see the status of the operation itself.