Overview
Database Migration Service uses migration jobs to migrate data from your source database instance to the destination AlloyDB database instance. Creating a migration job includes:
- Defining settings for the migration job
- Specifying information about the connection profile that you created for your source database (source connection profile)
- Defining settings for the destination AlloyDB database instance and creating the instance
- Setting up connectivity between the source and destination database instances
- Testing the migration job to ensure that the connection information you provided for the job is valid
Define settings for the migration job
- Go to the Migration jobs page in the Google Cloud Console.
- Click Create migration job at the top of the page.
- Provide a name for the migration job. Choose a friendly name that helps you identify the migration job. Don't include sensitive or personally identifiable information in the job name.
- Keep the auto-generated Migration job ID.
- Select the source database engine.
- Select AlloyDB for PostgreSQL as the destination engine.
- Select the destination region for your migration. This is where the Database Migration Service instance is created, and should be selected based on the location of the services that need your data, such as Compute Engine instances, App Engine apps, and other services. After you choose the destination region, this selection can't be changed.
Specify information about the source connection profile
- If you have created a connection profile, then select it from the list of existing connection profiles.
- If you haven't created a connection profile, then create one by clicking Create a connection profile at the bottom of the drop-down list, and then perform the same steps as in Create a source connection profile. We recommend creating a dedicated connection profile for your AlloyDB migration (a gcloud sketch for creating one appears after these steps).
- In the Customize data dump configurations section, click Show data dump configurations.
The speed of data dump parallelism is related to the amount of load on your source database. You can use the following settings:
- Optimal (recommended): Balanced performance with optimal load on the source database.
- Maximum: Provides the highest dump speeds, but might cause increased load on the source database.
- Minimum: Takes the lowest amount of compute resources on the source database, but might have slower dump throughput.
If you want to use adjusted data dump parallelism settings, make sure to increase the max_replication_slots, max_wal_senders, and max_worker_processes parameters on your source database (the sketch after these steps shows how to check them). You can verify your configuration by running the migration job test at the end of migration job creation.
- Click Save and continue.
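If you prefer to create the source connection profile with the Google Cloud CLI instead of the drop-down list, the following is a minimal sketch for a PostgreSQL source. The profile ID, region, host, and credentials are placeholders, and the flag names are assumptions to confirm with the command's --help output.
```
# Hypothetical example: create a PostgreSQL source connection profile.
# Profile ID, region, host, and credentials are placeholders; confirm the
# available flags with:
#   gcloud database-migration connection-profiles create postgresql --help
gcloud database-migration connection-profiles create postgresql my-source-profile \
    --region=us-central1 \
    --host=203.0.113.10 \
    --port=5432 \
    --username=migration_user \
    --password=MY_PASSWORD
```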
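To confirm that the source database can support adjusted data dump parallelism, you can check and, if needed, raise the three parameters named above before running the migration job test. The following is a minimal sketch using psql; the host and user are placeholders, ALTER SYSTEM requires sufficient privileges, and the new values take effect only after the source server restarts.
```
# Check the current values on the source PostgreSQL instance
# (host and user are placeholders).
psql "host=203.0.113.10 user=postgres dbname=postgres" <<'SQL'
SHOW max_replication_slots;
SHOW max_wal_senders;
SHOW max_worker_processes;
-- Raise the limits if they are too low; the values below are only examples.
-- These parameters require a server restart to take effect.
ALTER SYSTEM SET max_replication_slots = 10;
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET max_worker_processes = 10;
SQL
```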
Define and create the destination AlloyDB instance
You will now create a new destination AlloyDB cluster for your migration job. Clusters are the top-level resources in AlloyDB. They contain a single primary instance for read/write access to the database you create during the migration process.
To create a destination cluster:
- Choose a cluster type. Database Migration Service currently supports Highly available AlloyDB clusters, which can serve data from more than one zone in a region and are created without read pools.
- Click CONTINUE.
- Configure your cluster:
- In the Cluster ID field, enter an ID for your cluster.
- In the Password field, enter a password for the default postgres user. You will need the password to log in to your database.
- In the Network field:
- Select a network path to define which resources are available when setting the migration connectivity. Clusters can only be configured with a private IP network path. If you plan to connect to the source database via VPC peering, select the VPC where it resides.
- If your network isn't configured for private services access, click Set up connection and follow the connection configuration wizard (a gcloud sketch for this configuration appears after these steps).
- If a managed service network was never configured for the selected VPC, you can choose to either select an IP range and click Connect or use an automatically selected IP range and click Allocate & Connect.
- Optional: In the Encryption section, specify whether you want to manage the encryption of the data that's migrated from the source to the destination. By default, your data is encrypted with a key that's managed by Google Cloud. If you want to manage your encryption, you can use a customer-managed encryption key (CMEK). The key must be in the same location as your AlloyDB cluster. For example, clusters located in us-west1 can use only keys in us-west1.
- Select the Use a customer-managed encryption key (CMEK) radio button.
- From the Select a customer-managed key menu, select your CMEK.
If you don't see your key, then click ENTER KEY RESOURCE NAME to provide the resource name of the key that you want to use. For example, you can enter projects/my-project-name/locations/my-location/keyRings/my-keyring/cryptoKeys/my-key in the Key resource name field, and then click SAVE. A gcloud sketch for looking up a key's resource name appears after these steps.
- Click CONTINUE.
- In the Instance ID field, enter an ID for your primary instance.
- Select a machine type.
- Optional: Set flags for your instance. You can use flags to customize your instance. For information about supported flags, see the AlloyDB documentation. For each flag:
- Click ADD FLAG.
- Select a flag from the New database flag list.
- Provide a value for the flag.
- Click DONE.
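For reference, the Set up connection wizard in the Network step configures private services access for the selected VPC; a minimal gcloud sketch of the same configuration follows. The network name, range name, and prefix length are placeholders.
```
# Reserve an IP range for Google-managed services in the selected VPC
# (network and range names are placeholders).
gcloud compute addresses create google-managed-services-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=my-vpc

# Create the private services access connection that uses the reserved range.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-range \
    --network=my-vpc
```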
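Similarly, if you need the full resource name of a key for the Key resource name field in the Encryption step, you can look it up in Cloud KMS. The key, key ring, and location names below are placeholders.
```
# Print the full resource name of an existing Cloud KMS key
# (key, key ring, and location are placeholders).
gcloud kms keys describe my-key \
    --keyring=my-keyring \
    --location=us-west1 \
    --format="value(name)"
```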
Set up connectivity between the source and destination database instances
From the Connectivity method drop-down menu, select a network connectivity method. This method defines how the newly created AlloyDB instance will connect to the source database. Current network connectivity methods include VPC peering, reverse SSH tunnel, and TCP proxy through a cloud-hosted VM.
- If you select the reverse SSH tunnel network connectivity method, then select the Compute Engine VM instance that will host the tunnel. After you specify the instance, the Google Cloud console provides a script that performs the steps to set up the tunnel between the source and destination databases. You'll need to run the script in the Google Cloud CLI from a machine that has connectivity to both the source database and to Google Cloud.
- If you select the VPC peering network connectivity method, then select the VPC network where the source database resides. The destination AlloyDB instance will be updated to connect to this network.
- If you select the TCP Proxy via cloud-hosted VM connectivity method, then enter the required details for the new Compute Engine VM instance that will host the TCP proxy. After you specify the details, the Google Cloud console provides a script that performs the steps to set up the proxy between the source and destination databases. You'll need to run the script on a machine with an up-to-date Google Cloud CLI. After the script runs, it outputs the newly created VM's private IP address. Enter the IP address and click Configure & continue (a gcloud sketch for retrieving the address appears at the end of this section).
- Learn more about how to Configure connectivity.
After selecting the network connectivity method and providing any additional information for the method, click CONFIGURE & CONTINUE.
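If you need to retrieve the proxy VM's private IP address again after the generated script has run, one way is to query the instance with gcloud. The instance name and zone below are placeholders.
```
# Print the primary internal IP address of the proxy VM
# (instance name and zone are placeholders).
gcloud compute instances describe tcp-proxy-vm \
    --zone=us-central1-a \
    --format="get(networkInterfaces[0].networkIP)"
```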
Test and create the migration job
In this final step, review the summary of the migration job settings, source, destination, and connectivity method, and then test the validity of the migration job setup. If any issues are encountered, then you can modify the migration job's settings. Not all settings are editable.
Click TEST JOB to verify that:
- The source database has been configured correctly, based on the prerequisites.
- The source and destination instances can communicate with each other.
- Any required updates to private IP addresses on the destination are done.
- The migration job is valid, and the source and destination versions are compatible.
If the test fails, then you can address the problem in the appropriate part of the flow, and return to re-test.
The migration job can be created even if the test fails, but after the job is started, it may fail at some point during the run.
Click CREATE & START JOB to create the migration job and start it immediately, or click CREATE JOB to create the migration job without immediately starting it.
If the job isn't started at the time that it's created, then it can be started from the Migration jobs page by clicking START.
Regardless of when the migration job starts, your organization is charged for the existence of the destination instance.
The migration job is added to the migration jobs list and can be viewed directly.
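You can also test and start a migration job from the Google Cloud CLI. The following is a minimal sketch; the job ID and region are placeholders, and the subcommand and flag names are assumptions to confirm with the gcloud database-migration migration-jobs --help output.
```
# Validate the migration job configuration (the counterpart of TEST JOB in the console).
gcloud database-migration migration-jobs verify my-migration-job \
    --region=us-central1

# Start the migration job once verification succeeds.
gcloud database-migration migration-jobs start my-migration-job \
    --region=us-central1
```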