You can migrate your MySQL databases to Cloud SQL by using physical database backup files created with the Percona XtraBackup for MySQL utility. Migrating with physical backup files offers faster data restoration than migrations that use logical backup files, which makes physical backups a good choice for migrating large databases that contain multiple terabytes of data.
This migration flow involves the following tasks:
Backing up your source MySQL instance and preparing the physical backup files by using the Percona XtraBackup for MySQL utility.
Uploading your backup files to a Cloud Storage bucket.
Creating and running the migration job in Database Migration Service.
Promoting the migration job after the data is fully migrated.
Limitations
This section lists limitations with the XtraBackup migration process:
- You must use Percona XtraBackup to back up your data to the Cloud Storage bucket. Other backup utilities are not supported.
- Migration is not supported to earlier database major or minor versions. For example, you can't migrate from MySQL 8.0 to 5.7 or from MySQL 8.0.36 to 8.0.16.
- Database migration from an XtraBackup physical file is only supported for on-premises MySQL databases. Migration from Amazon Aurora or MySQL on Amazon RDS databases is not supported.
- You can only migrate from a full backup. Other backup types, such as incremental or partial backups, are not supported.
- Database migration does not include database users or privileges.
- You must set the binary log format to `ROW`. If you configure the binary log to any other format, such as `STATEMENT` or `MIXED`, then replication might fail.
- Any database with a table larger than 5 TB is not supported.
- Cloud Storage limits the size of a file that you can upload to a bucket to 5 TB. If your XtraBackup physical file exceeds 5 TB, then you must split the backup file into smaller files.
- Make sure you upload the backup files to a dedicated Cloud Storage folder that doesn't contain any other files.
- You must configure the `innodb_data_file_path` parameter with only one data file that uses the default data file name `ibdata1`. If your database is configured with two data files or has a data file with a different name, then you can't migrate the database by using an XtraBackup physical file. For example, a database configured with `innodb_data_file_path=ibdata01:50M:autoextend` is not supported for the migration.
- The `innodb_page_size` parameter on your source instance must be configured with the default value `16384`.
- You can't migrate any plugins from your external database.
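Several of these limitations can be checked up front on the source instance. The following is a minimal sketch (the function name and input format are illustrative, not part of any official tooling): it validates `binlog_format`, `innodb_data_file_path`, and `innodb_page_size` values captured, for example, with `mysql -N -B -e "SHOW VARIABLES WHERE Variable_name IN ('binlog_format','innodb_data_file_path','innodb_page_size')"`.

```shell
# Sketch: validate source variables against the limitations above.
# Reads "variable_name<TAB>value" pairs on stdin (the format that
# `mysql -N -B` produces) and reports any unsupported setting.
check_source_vars() {
  local ok=0
  while IFS=$'\t' read -r name value; do
    case "$name" in
      binlog_format)
        [ "$value" = "ROW" ] \
          || { echo "binlog_format must be ROW, got $value"; ok=1; } ;;
      innodb_data_file_path)
        case "$value" in
          *';'*) echo "multiple data files not supported: $value"; ok=1 ;;
          ibdata1:*) ;;                 # single data file with default name
          *) echo "unsupported innodb_data_file_path: $value"; ok=1 ;;
        esac ;;
      innodb_page_size)
        [ "$value" = "16384" ] \
          || { echo "innodb_page_size must be 16384, got $value"; ok=1; } ;;
    esac
  done
  return $ok
}
```

For example, a source configured with `innodb_data_file_path=ibdata01:50M:autoextend` fails the check, matching the limitation above.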
Costs
For homogeneous migrations to Cloud SQL, Database Migration Service is offered at no additional charge. However, Cloud SQL and Cloud Storage pricing applies to network charges, as well as to the Cloud SQL and Cloud Storage entities created for migration purposes.
In this document, you use the following billable components of Google Cloud:
- Cloud Storage
- Cloud SQL
To generate a cost estimate based on your projected usage,
use the pricing calculator.
Before you begin
- Consider in which region you want to create the destination database. Database Migration Service is a fully regional product, meaning that all entities related to your migration (source and destination connection profiles, migration jobs, destination databases, and storage buckets) must be saved in a single region.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Database Migration Service, Compute Engine, and Cloud SQL Admin APIs.
Required roles
To get the permissions that you need to perform homogeneous MySQL migrations by using physical backup files, ask your administrator to grant you the following IAM roles on your project:
- User account that performs the migration:
  - Database Migration Admin (`roles/datamigration.admin`)
  - Storage Object Viewer (`roles/storage.objectViewer`)
  - Cloud SQL Editor (`roles/cloudsql.editor`)
For more information about granting roles, see Manage access.
These predefined roles contain the permissions required to perform homogeneous MySQL migrations by using physical backup files. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to perform homogeneous MySQL migrations by using physical backup files:
- User account that performs the migration:
  - `datamigration.*`
  - `resourcemanager.projects.get`
  - `resourcemanager.projects.list`
  - `cloudsql.instances.create`
  - `cloudsql.instances.get`
  - `cloudsql.instances.list`
  - `compute.machineTypes.list`
  - `compute.machineTypes.get`
  - `compute.projects.get`
  - `storage.buckets.create`
  - `storage.buckets.list`

You might also be able to get these permissions with custom roles or other predefined roles.
Prepare your source data
To prepare your data for migration, perform the following steps:
- Install the correct version of the XtraBackup utility on your
source instance. You must use a version of XtraBackup that is equal
to or later than your source instance version.
For more information, see
Server version and backup version comparison in the Percona
XtraBackup documentation.
- For MySQL 5.7, install Percona XtraBackup 2.4.
- For MySQL 8.0, install Percona XtraBackup 8.0.
- Export and prepare the physical backup file of your source instance by
using Percona XtraBackup. For complete information on using
Percona XtraBackup, refer to the tool's
documentation. You can also expand the following section for an example
of recommended steps.
Sample recommended steps for creating and preparing physical backup files by using Percona XtraBackup
Before using any of the command data below, make the following replacements:
- TARGET_DIR with the path where you want to save the output backup file.
- USERNAME with a user that has the `BACKUP_ADMIN` privilege on the source instance.
- PASSWORD with the password for the USERNAME account.
- Perform a full physical backup of your source instance. Run the following
command:
```shell
xtrabackup --backup \
  --target-dir=TARGET_DIR \
  --user=USERNAME \
  --password=PASSWORD
```
- When the backup file is ready, use the `--prepare` command to ensure file consistency. Run the following command:

```shell
xtrabackup --prepare --target-dir=TARGET_DIR
```
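As an additional sanity check, you can confirm that the prepare step completed by inspecting the `xtrabackup_checkpoints` file that XtraBackup writes into the target directory; after a successful `--prepare`, its `backup_type` field reads `full-prepared`. The following is a sketch (the function name is illustrative):

```shell
# Sketch: verify that an XtraBackup target directory holds a fully
# prepared backup by checking the xtrabackup_checkpoints metadata file.
verify_prepared_backup() {
  local target_dir=$1
  local checkpoints="$target_dir/xtrabackup_checkpoints"
  [ -f "$checkpoints" ] || { echo "missing $checkpoints"; return 1; }
  grep -q '^backup_type = full-prepared' "$checkpoints" \
    || { echo "backup is not fully prepared:"; cat "$checkpoints"; return 1; }
  echo "backup in $target_dir is prepared"
}
```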
- Create your bucket to store
the backup files. Make sure you use the same region as the one where you
intend to create your destination Cloud SQL for MySQL instance.
Database Migration Service is a fully regional product, meaning that all entities related to your migration (source and destination connection profiles, migration jobs, destination databases, and storage buckets for backup files) must be saved in a single region.
- Upload the backup files to your Cloud Storage bucket. Make sure you upload the backup files to a dedicated Cloud Storage folder that doesn't contain any other files. See Upload objects from a file system in the Cloud Storage documentation.
- Configure connectivity for your source database instance.
- Create a source connection profile for your source database instance.
This sample uses the optional `--no-async` flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the `--no-async` flag to run commands asynchronously. If you do, you need to use the `gcloud database-migration operations describe` command to verify if your operation is successful.

Before using any of the command data below, make the following replacements:
- CONNECTION_PROFILE_ID with a machine-readable identifier for your connection profile.
- REGION with the identifier of the region where you want to save the connection profile.
- HOST_IP_ADDRESS with the IP address where Database Migration Service can reach your source database instance. This value can vary depending on which connectivity method you use for your migration.
- PORT_NUMBER with the port number where your source database accepts incoming connections.
- USERNAME with the name of the database user account that you want Database Migration Service to use to connect to your source database instance.
- PASSWORD with the password for the database user account.
- (Optional) CONNECTION_PROFILE_NAME with a human-readable name for your connection profile. This value is displayed in the Google Cloud console.
Execute the following command:
Linux, macOS, or Cloud Shell
```shell
gcloud database-migration connection-profiles \
  create mysql CONNECTION_PROFILE_ID \
  --no-async \
  --region=REGION \
  --host=HOST_IP_ADDRESS \
  --port=PORT_NUMBER \
  --username=USERNAME \
  --password=PASSWORD \
  --display-name=CONNECTION_PROFILE_NAME
```
Windows (PowerShell)
```powershell
gcloud database-migration connection-profiles `
  create mysql CONNECTION_PROFILE_ID `
  --no-async `
  --region=REGION `
  --host=HOST_IP_ADDRESS `
  --port=PORT_NUMBER `
  --username=USERNAME `
  --password=PASSWORD `
  --display-name=CONNECTION_PROFILE_NAME
```
Windows (cmd.exe)
```bat
gcloud database-migration connection-profiles ^
  create mysql CONNECTION_PROFILE_ID ^
  --no-async ^
  --region=REGION ^
  --host=HOST_IP_ADDRESS ^
  --port=PORT_NUMBER ^
  --username=USERNAME ^
  --password=PASSWORD ^
  --display-name=CONNECTION_PROFILE_NAME
```
You should receive a response similar to the following:
```
Waiting for connection profile [CONNECTION_PROFILE_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created connection profile CONNECTION_PROFILE_ID [OPERATION_ID]
```
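The bucket-creation and upload steps earlier in this section can be sketched as a shell helper. This is a sketch only: the function name, bucket, folder, and file paths are placeholders, and it assumes the `gcloud storage` commands and GNU `split` are available. It splits the backup into chunks below the 5 TB Cloud Storage object limit only when needed, and uploads to a dedicated folder that should contain no other files:

```shell
# Sketch: create a dedicated bucket in the migration region, then upload
# the prepared backup file(s) to an otherwise-empty folder in it.
upload_backup() {
  local backup_file=$1    # e.g. /backups/full-backup.xbstream
  local bucket=$2         # e.g. gs://my-migration-backups
  local region=$3         # must match the destination Cloud SQL region
  local folder="$bucket/mysql-physical-backup"

  gcloud storage buckets create "$bucket" --location="$region"

  # Cloud Storage caps a single object at 5 TB; split larger files first.
  local limit=$((5 * 1024 ** 4))
  if [ "$(wc -c < "$backup_file")" -gt "$limit" ]; then
    split -b 4T "$backup_file" "${backup_file}.part-"   # GNU split
    gcloud storage cp "${backup_file}.part-"* "$folder/"
  else
    gcloud storage cp "$backup_file" "$folder/"
  fi
}
```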
Prepare your destination instance
To configure your destination Cloud SQL instance, perform the following steps:
Create your Cloud SQL for MySQL destination instance. Make sure you use enough compute and memory resources to cover your migration needs. See Create an instance in Cloud SQL documentation.
Grant the Cloud Storage permissions to the service account associated with your destination instance. This account is created after you create the destination instance.
Find the service account email address for your Cloud SQL instance on the Cloud SQL instance detail page. This address uses the following format: `<project-identifier>@gcp-sa-cloud-sql.iam.gserviceaccount.com`. See View instance information in the Cloud SQL documentation.

Add the Storage Object Viewer (`roles/storage.objectViewer`) IAM role to the service account. For information on how to manage access with Identity and Access Management, see Manage access to projects, folders, and organizations in the IAM documentation.
Create a destination connection profile for your Cloud SQL instance.
This sample uses the optional `--no-async` flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the `--no-async` flag to run commands asynchronously. If you do, you need to use the `gcloud database-migration operations describe` command to verify if your operation is successful.

Before using any of the command data below, make the following replacements:
- CONNECTION_PROFILE_ID with a machine-readable identifier for your connection profile.
- REGION with the identifier of the region where you want to save the connection profile.
- DESTINATION_INSTANCE_ID with the instance identifier of your destination instance.
- (Optional) CONNECTION_PROFILE_NAME with a human-readable name for your connection profile. This value is displayed in the Google Cloud console.
Execute the following command:
Linux, macOS, or Cloud Shell
```shell
gcloud database-migration connection-profiles \
  create mysql CONNECTION_PROFILE_ID \
  --no-async \
  --cloudsql-instance=DESTINATION_INSTANCE_ID \
  --region=REGION \
  --display-name=CONNECTION_PROFILE_NAME
```
Windows (PowerShell)
```powershell
gcloud database-migration connection-profiles `
  create mysql CONNECTION_PROFILE_ID `
  --no-async `
  --cloudsql-instance=DESTINATION_INSTANCE_ID `
  --region=REGION `
  --display-name=CONNECTION_PROFILE_NAME
```
Windows (cmd.exe)
```bat
gcloud database-migration connection-profiles ^
  create mysql CONNECTION_PROFILE_ID ^
  --no-async ^
  --cloudsql-instance=DESTINATION_INSTANCE_ID ^
  --region=REGION ^
  --display-name=CONNECTION_PROFILE_NAME
```
You should receive a response similar to the following:
```
Waiting for connection profile [CONNECTION_PROFILE_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created connection profile CONNECTION_PROFILE_ID [OPERATION_ID]
```
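The IAM grant from the permissions step above can also be applied at the bucket level with the gcloud CLI. The following is a sketch (the function name, bucket, and service-account address are placeholders; the address format is the one shown on the instance detail page):

```shell
# Sketch: give the Cloud SQL service account read access to the bucket
# that holds the backup files.
grant_bucket_access() {
  local bucket=$1           # e.g. gs://my-migration-backups
  local service_account=$2  # e.g. p123456789@gcp-sa-cloud-sql.iam.gserviceaccount.com
  gcloud storage buckets add-iam-policy-binding "$bucket" \
    --member="serviceAccount:${service_account}" \
    --role="roles/storage.objectViewer"
}
```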
Create and run the migration job
To configure and run your migration, perform the following steps:
Create the migration job. If you use VPC peering or reverse-SSH tunnel connectivity, make sure to add the required flags, such as `--peer-vpc`, or `--vm`, `--vm-ip`, `--vm-port`, and `--vpc`. For more information, see Configure connectivity and Google Cloud CLI examples.

This sample uses the optional `--no-async` flag so that all operations are performed synchronously. This means that some commands might take a while to complete. You can skip the `--no-async` flag to run commands asynchronously. If you do, you need to use the `gcloud database-migration operations describe` command to verify if your operation is successful.

Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with a machine-readable identifier for your migration job. You use this value to work with migration jobs by using Database Migration Service Google Cloud CLI commands or API.
- REGION with the region identifier where you want to save the migration job.
- MIGRATION_JOB_NAME with a human-readable name for your migration job. This value is displayed in Database Migration Service in the Google Cloud console.
- SOURCE_CONNECTION_PROFILE_ID with a machine-readable identifier of the source connection profile.
- DESTINATION_CONNECTION_PROFILE_ID with a machine-readable identifier of the destination connection profile.
- MIGRATION_JOB_TYPE with the type of your migration job. Two values are allowed: `ONE_TIME` or `CONTINUOUS`. For more information, see Types of migration.
- PHYSICAL_BACKUP_FILE_PATH_IN_STORAGE_BUCKET with the path to your physical backup file stored in a Cloud Storage bucket. Use the following format: `gs://<bucket_name>/<path_to_backup_file>`. If you use multiple backup files, use the path to the folder where you store them.
Execute the following command:
Linux, macOS, or Cloud Shell
```shell
gcloud database-migration migration-jobs \
  create MIGRATION_JOB_ID \
  --no-async \
  --region=REGION \
  --display-name=MIGRATION_JOB_NAME \
  --source=SOURCE_CONNECTION_PROFILE_ID \
  --destination=DESTINATION_CONNECTION_PROFILE_ID \
  --type=MIGRATION_JOB_TYPE \
  --dump-type=PHYSICAL \
  --dump-path=PHYSICAL_BACKUP_FILE_PATH_IN_STORAGE_BUCKET
```
Windows (PowerShell)
```powershell
gcloud database-migration migration-jobs `
  create MIGRATION_JOB_ID `
  --no-async `
  --region=REGION `
  --display-name=MIGRATION_JOB_NAME `
  --source=SOURCE_CONNECTION_PROFILE_ID `
  --destination=DESTINATION_CONNECTION_PROFILE_ID `
  --type=MIGRATION_JOB_TYPE `
  --dump-type=PHYSICAL `
  --dump-path=PHYSICAL_BACKUP_FILE_PATH_IN_STORAGE_BUCKET
```
Windows (cmd.exe)
```bat
gcloud database-migration migration-jobs ^
  create MIGRATION_JOB_ID ^
  --no-async ^
  --region=REGION ^
  --display-name=MIGRATION_JOB_NAME ^
  --source=SOURCE_CONNECTION_PROFILE_ID ^
  --destination=DESTINATION_CONNECTION_PROFILE_ID ^
  --type=MIGRATION_JOB_TYPE ^
  --dump-type=PHYSICAL ^
  --dump-path=PHYSICAL_BACKUP_FILE_PATH_IN_STORAGE_BUCKET
```
You should receive a response similar to the following:
```
Waiting for migration job [MIGRATION_JOB_ID] to be created with [OPERATION_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created migration job MIGRATION_JOB_ID [OPERATION_ID]
```
Demote your Cloud SQL destination instance.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the `gcloud database-migration migration-jobs list` command to list all migration jobs in a given region and view their identifiers.
- REGION with the identifier of the region where your connection profile is saved.
Execute the following command:
Linux, macOS, or Cloud Shell
```shell
gcloud database-migration migration-jobs \
  demote-destination MIGRATION_JOB_ID \
  --region=REGION
```
Windows (PowerShell)
```powershell
gcloud database-migration migration-jobs `
  demote-destination MIGRATION_JOB_ID `
  --region=REGION
```
Windows (cmd.exe)
```bat
gcloud database-migration migration-jobs ^
  demote-destination MIGRATION_JOB_ID ^
  --region=REGION
```
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
```
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: demote-destination
name: OPERATION_ID
```
- Use the `gcloud database-migration migration-jobs describe` command to view the status of the migration job.
- Use the `gcloud database-migration operations describe` command with the OPERATION_ID to see the status of the operation itself.
(Optional) Perform a migration job test to check if Database Migration Service can reach all the necessary source and destination entities.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the `gcloud database-migration migration-jobs list` command to list all migration jobs in a given region and view their identifiers.
- REGION with the identifier of the region where your connection profile is saved.
Execute the following command:
Linux, macOS, or Cloud Shell
```shell
gcloud database-migration migration-jobs \
  verify MIGRATION_JOB_ID \
  --region=REGION
```
Windows (PowerShell)
```powershell
gcloud database-migration migration-jobs `
  verify MIGRATION_JOB_ID `
  --region=REGION
```
Windows (cmd.exe)
```bat
gcloud database-migration migration-jobs ^
  verify MIGRATION_JOB_ID ^
  --region=REGION
```
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
```
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: verify
name: OPERATION_ID
```
- Use the `gcloud database-migration migration-jobs describe` command to view the status of the migration job.
- Use the `gcloud database-migration operations describe` command with the OPERATION_ID to see the status of the operation itself.
Start the migration job.
Before using any of the command data below, make the following replacements:
- MIGRATION_JOB_ID with your migration job identifier. If you don't know the identifier, you can use the `gcloud database-migration migration-jobs list` command to list all migration jobs in a given region and view their identifiers.
- REGION with the identifier of the region where your connection profile is saved.
Execute the following command:
Linux, macOS, or Cloud Shell
```shell
gcloud database-migration migration-jobs \
  start MIGRATION_JOB_ID \
  --region=REGION
```
Windows (PowerShell)
```powershell
gcloud database-migration migration-jobs `
  start MIGRATION_JOB_ID `
  --region=REGION
```
Windows (cmd.exe)
```bat
gcloud database-migration migration-jobs ^
  start MIGRATION_JOB_ID ^
  --region=REGION
```
Result
The action is performed in an asynchronous manner. As such, this command returns an Operation entity that represents a long-running operation:
```
done: false
metadata:
  '@type': type.googleapis.com/google.cloud.clouddms.v1.OperationMetadata
  apiVersion: v1
  createTime: '2024-02-20T12:20:24.493106418Z'
  requestedCancellation: false
  target: MIGRATION_JOB_ID
  verb: start
name: OPERATION_ID
```
- Use the `gcloud database-migration migration-jobs describe` command to view the status of the migration job.
- Use the `gcloud database-migration operations describe` command with the OPERATION_ID to see the status of the operation itself.

When you start the migration job, your destination Cloud SQL instance is put into a read-only mode where it is fully managed by Database Migration Service. You can promote it to a standalone instance after your data is fully migrated.
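If you run the preceding commands without `--no-async`, each one returns a long-running operation that you can poll until it finishes. The following is a sketch (the function name is illustrative; it assumes the operation resource exposes a boolean `done` field, as shown in the sample output above):

```shell
# Sketch: poll a Database Migration Service operation until it completes.
wait_for_operation() {
  local operation_id=$1 region=$2
  until gcloud database-migration operations describe "$operation_id" \
          --region="$region" --format='value(done)' | grep -qx 'True'; do
    echo "operation $operation_id still running..."
    sleep 10
  done
  echo "operation $operation_id done"
}
```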
(Optional) Stop the migration
You can stop and delete your migration job at any point if you want to cancel the data migration process. You can manage the migration job in the Google Cloud console or with Google Cloud CLI.
For information on managing migration jobs in the Google Cloud console, see Manage migration jobs.
For information on managing migration jobs with Google Cloud CLI, see the `gcloud database-migration migration-jobs` reference.
Finalize the migration
When the migration job completes successfully, finalize the migration job by performing one of the following steps:
For one-time migrations: the migration job status changes to Complete. No further action is required; you can clean up the migration job and connection profile resources.
For continuous migrations: Promote the migration job to switch your application to the new database instance.
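For continuous migrations, the promotion step can also be performed with the gcloud CLI. The following is a sketch (the function name is illustrative; MIGRATION_JOB_ID and REGION are the same placeholders used throughout this page):

```shell
# Sketch: promote the destination so it becomes a standalone Cloud SQL
# instance that accepts writes, ending the migration job.
promote_migration_job() {
  local migration_job_id=$1 region=$2
  gcloud database-migration migration-jobs promote "$migration_job_id" \
    --region="$region"
}
```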