This quickstart shows you how to use Database Migration Service to migrate Oracle workloads into AlloyDB for PostgreSQL. The resources created in this quickstart typically cost less than one dollar (USD), assuming you complete the steps, including the cleanup, in a timely manner.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the Database Migration Service API.
- Make sure that you have the Database Migration Admin role assigned to your user account.
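If you prefer the command line, you can complete the last two steps with the gcloud CLI. The following is a minimal sketch; the project ID and user email are placeholders, and roles/datamigration.admin is the identifier for the Database Migration Admin role.
# Enable the Database Migration Service API in the current project.
gcloud services enable datamigration.googleapis.com
# Grant the Database Migration Admin role to your user account (placeholders: PROJECT_ID, USER_EMAIL).
gcloud projects add-iam-policy-binding PROJECT_ID --member="user:USER_EMAIL" --role="roles/datamigration.admin"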
Requirements
Database Migration Service offers a variety of source database options, destination database options, and connectivity methods. Different sources and destinations work better with some connectivity methods than with others.
In this quickstart, we assume that you're using a standalone Oracle database in an environment where you can configure your network to add an inbound firewall rule. The source database can be on-premises or in a cloud provider. Because the specifics of your environment vary, we can't provide detailed steps for your networking configuration.
We also assume that you want to migrate your Oracle workloads into a destination AlloyDB for PostgreSQL database.
Only the Private IP connectivity method is supported for Oracle to AlloyDB for PostgreSQL migrations. This type of connectivity requires that you create a private connectivity configuration for your AlloyDB for PostgreSQL destination database.
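For example, if your source Oracle host runs Linux with firewalld, the inbound rule for the Oracle listener might look like the following sketch. The IP range shown is a placeholder; allow the Database Migration Service addresses that the console displays when you choose the IP allowlist method.
# Allow inbound TCP connections to the Oracle listener (default port 1521) from a placeholder IP range.
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" port port="1521" protocol="tcp" accept'
# Reload the firewall so that the new rule takes effect.
sudo firewall-cmd --reload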
Create connection profiles
By creating connection profiles, you're creating records that contain information about the source and destination databases. Database Migration Service uses the information in the connection profiles to migrate data from your source Oracle database into the destination AlloyDB for PostgreSQL database instance.
In this section, you learn how to create connection profiles to:
- A source Oracle database
- A destination AlloyDB for PostgreSQL database
Create an Oracle connection profile
Go to the Database Migration Service Connection profiles page in the Google Cloud console.
Click CREATE PROFILE.
On the Create a connection profile page, from the Profile role list, select Source.
From the Database engine list select Oracle.
Supply the following information:
In the Connection profile name field, enter a name for the connection profile for your source Oracle database, such as My Oracle Connection Profile. Keep the auto-generated Connection profile ID.
Select the Region where the connection profile is stored.
Enter the Hostname or IP address (domain or IP) and Port to access the source Oracle database. (The default Oracle port is 1521.)
Enter a Username and Password to authenticate to your source database.
In the Service name field, enter the Oracle service name that identifies the database instance you want to migrate. For Oracle databases, the service name is typically ORCL.
Click CONTINUE.
From the Connectivity method list, select a network connectivity method. This method defines how Database Migration Service will connect to the source Oracle database.
For this quickstart, select IP allowlist as the networking method.
Click RUN TEST to verify that Database Migration Service can communicate with the source.
If the test fails, the result indicates which part of the configuration caused the issue. Make the necessary changes on the Create a connection profile page, and then run the test again.
Click CREATE.
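Independently of the built-in test, you can sanity-check the same values from any host that can reach the source by using SQL*Plus EZConnect syntax. This is a minimal sketch; the host, port, service name, and credentials are placeholders.
# Connect as host:port/service_name with the values you entered in the connection profile (placeholders shown).
sqlplus system/pswd@//10.10.10.11:1521/ORCL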
Create a destination connection profile
Go to the Database Migration Service Connection profiles page in the Google Cloud console.
Click CREATE PROFILE.
On the Create a connection profile page, from the Profile role list, select Destination.
From the Database engine list, select AlloyDB for PostgreSQL.
Supply the following information:
In the Connection profile name field, enter a name for the connection profile for your destination AlloyDB for PostgreSQL database, such as My AlloyDB for PostgreSQL Connection Profile. Keep the auto-generated Connection profile ID.
Select the Region where the connection profile is stored.
Select the AlloyDB for PostgreSQL cluster that you want to use as your migration destination database.
Enter the Hostname or IP address (domain or IP) and Port to access the destination database. (The default AlloyDB for PostgreSQL port is 5432.)
Enter a Username and Password to authenticate to your destination database.
Click CONTINUE.
Optional. If you plan to transfer sensitive information over a public network (by using IP allowlists), then we recommend using SSL/TLS encryption for the connection between the source and destination databases. Otherwise, keep the default value of None in the Encryption type drop-down menu.
Click CONTINUE.
In the Connectivity method menu, Database Migration Service automatically selects the Private IP connectivity method.
The connectivity method defines how Database Migration Service connects to the destination AlloyDB for PostgreSQL database. Private IP requires that you first create a private connectivity configuration.
Click RUN TEST to verify that Database Migration Service can communicate with the destination database.
If the test fails, the result indicates which part of the configuration caused the issue. Make the necessary changes on the Create a connection profile page, and then run the test again.
Click CREATE.
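You can also verify the destination values independently with psql from a host that has network access to the AlloyDB for PostgreSQL instance, for example a VM in the same VPC. This is a sketch with placeholder connection values.
# Connect to the destination AlloyDB for PostgreSQL instance (placeholder host, user, and database).
psql "host=10.10.10.12 port=5432 user=postgres dbname=postgres"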
Create a conversion workspace
Conversion workspaces help you convert the schema and objects from your source database into a format that's compatible with your destination database. This conversion allows Database Migration Service to migrate your data between the source and destination databases.
Create a conversion workspace by doing the following:
- Define settings for the conversion workspace.
- Connect to the source database and pull your schema information into Database Migration Service.
- Perform automatic conversion, with optional adjustments, by using:
- The conversion workspace editor, a live editor space where you can modify the SQL used for conversion
- Ora2Pg configuration files, which provide additional conversion mappings (see the sketch after this list)
- Apply the converted schema to your destination database: Database Migration Service uses the generated SQL to create all the required entities in your destination database so that the migrated data can be loaded correctly.
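As an illustration only, a mapping file that you supply for additional conversions might contain Ora2Pg directives like the ones below. The table and column names are hypothetical, and Database Migration Service supports only a subset of Ora2Pg directives, so check the conversion workspace documentation for what applies to your migration.
# Write a hypothetical Ora2Pg mapping file (assumed name: custom-mappings.conf).
cat > custom-mappings.conf <<'EOF'
# Map Oracle NUMBER columns with no scale to bigint instead of the default numeric.
DATA_TYPE    NUMBER(*\,0):bigint
# Override the type of one specific column (format: TABLE:COLUMN:type).
MODIFY_TYPE  ORDERS:ORDER_TOTAL:numeric(12,2)
EOF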
Define settings for the conversion workspace
Go to the Database Migration Service Conversion workspaces page in the Google Cloud Console.
Click CREATE WORKSPACE.
Supply the following information:
In the Conversion workspace name field, enter a name for the conversion workspace, such as My Conversion Workspace. Keep the auto-generated Conversion workspace ID.
Select the Region where the conversion workspace is stored.
The Source database engine drop-down list is automatically populated with Oracle.
From the Destination database engine drop-down list, select AlloyDB for PostgreSQL.
Review the required prerequisites that are generated automatically to reflect how the environment must be prepared for a conversion workspace. These prerequisites include how to:
- Create a connection profile to the source Oracle database
- Configure your source Oracle database so that the conversion workspace can retrieve data from it
- (Optional) Use the Ora2Pg migration tool to create additional mappings for the conversion workspace editor
Click CREATE WORKSPACE AND CONTINUE.
Connect to the source and convert objects
Open the Source connection profile drop-down list and select the connection profile that you created.
Click RUN TEST to verify that Database Migration Service can communicate with the source.
If a test fails, the result indicates which part of the configuration caused the issue. Make the necessary changes, and then run the test again.
Click Pull schema and continue.
Database Migration Service begins connecting to your source database to download schema information. This operation can take a couple of minutes, depending on factors such as network connectivity or database size.
After Database Migration Service finishes pulling schema information, the Select and convert objects section opens.
Use the schema tree view to select all the entities that you want Database Migration Service to convert into a schema that's compatible with your destination database's SQL engine.
Click Convert and continue.
Database Migration Service creates your conversion workspace and performs the schema conversion. You can now preview the auto-generated SQL in the conversion workspace editor.
Apply to destination
When the schema you want to use in the destination database is converted, use the Apply to destination option to run the generated SQL statements on your destination database:
Click Apply to destination and select one of the following options:
- Test (recommended): This operation performs a test run to verify whether your schema can be successfully created in the destination database.
- Apply: This operation attempts to create your converted schema in the destination database.
In the Define destination section, select the connection profile that points to your destination database.
Click Define and continue.
In the Review objects and apply conversion to destination section, select the schemas of the database entities you want to create in your destination database.
Click Apply to destination.
Create a migration job
Database Migration Service uses migration jobs to migrate data from your source database instance to the destination AlloyDB for PostgreSQL database instance. Creating a migration job includes:
- Defining settings for the job
- Specifying information about the connection profile that you created for your source database (source connection profile)
- Specifying information about the connection profile that you created for your destination database (destination connection profile)
- Configuring the objects that you want to migrate from the source database
- Testing the migration job to ensure that the connection information you provided for the job is valid
Define settings for the migration job
Go to the Database Migration Service Migration jobs page in the Google Cloud console.
Click CREATE MIGRATION JOB.
In the Migration job name field, enter a name for the migration job, such as My Migration Job. Keep the auto-generated Migration job ID.
From the Source database engine menu, select Oracle.
From the Destination database engine menu, select AlloyDB for PostgreSQL.
Select the Region where the destination instance is to be created.
Review the required prerequisites that are generated automatically to reflect how the environment must be prepared for a migration job. These prerequisites can include how to configure the source database and how to connect it to the destination AlloyDB for PostgreSQL database instance. It's best to complete these prerequisites at this step, but you can complete them at any time before you test the migration job or start it. For more information about these prerequisites, see Configure your source Oracle database.
Click SAVE & CONTINUE.
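The source prerequisites generally involve preparing the Oracle database for change data capture. The following is a minimal, hedged sketch of the kind of checks and changes involved, run as a privileged user on the source; the exact privileges and logging settings that Database Migration Service requires are listed in Configure your source Oracle database.
# Run on the source Oracle host as a privileged user; this is a sketch only.
sqlplus / as sysdba <<'SQL'
-- Check whether the database is in ARCHIVELOG mode (needed for change data capture).
SELECT log_mode FROM v$database;
-- Enable supplemental logging so that change records carry enough column data.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
SQL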
Define source settings
On the Define your source page, perform the following:
- From the Source connection profile drop-down menu, select the source connection profile for your Oracle instance.
- Click Save and continue.
- (Optional) In the Test connection profile section, click Run test to check if Database Migration Service can establish a network connection to your source instance.
You can create the migration job even if the connection test fails, but you should fix any connectivity issues before you run the migration job.
- In the Customize source configuration section, configure the following settings:
- In the Full dump configuration section, select Automatic.
- In the Source read settings, use the suggested concurrency defaults.
- Click Save and continue.
Define destination settings
On the Define your destination page, perform the following:
- From the Destination connection profile drop-down menu, select the destination connection profile.
- Click Save and continue.
- (Optional) In the Test connection profile section, click Run test to check if Database Migration Service can establish a network connection to your destination.
You can create the migration job even if the connection test fails, but you should fix any connectivity issues before you run the migration job.
- In the Customize destination configuration section, configure the following settings:
- In the Maximum concurrent destination connections section, use the suggested concurrency defaults.
- In the Transaction timeout section, use the suggested timeout defaults.
- Click Save and continue.
Select objects to migrate
Select your conversion workspace from the Conversion workspace drop-down list.
After you select a conversion workspace, the Select objects to migrate section of the page lists all objects (schemas and tables) from the Oracle source database that can be migrated into the destination.
Select the database objects from the list that you want Database Migration Service to migrate.
Click SAVE & CONTINUE.
Test and create the migration job
Review the settings you chose for the migration job.
Click TEST JOB to verify that:
- The source has been configured based on the prerequisites.
- Database Migration Service can connect to the source.
- Database Migration Service can connect to the destination.
- The migration job is valid, and the source and destination versions are compatible.
If the test fails, address the problem in the appropriate part of the flow, and then run the test again.
Click CREATE & START JOB to create the migration job and start it immediately.
Click CREATE & START in the subsequent dialog box.
In the Migration jobs page, verify that your migration job has a status of "Starting". After a few minutes, confirm that the status changes to "Running".
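If you want to check the status outside the console, the gcloud CLI also exposes migration jobs. This is a minimal sketch; the job name and region are placeholders for the values you used in this quickstart.
# Describe the migration job to see its current state (placeholder name and region).
gcloud database-migration migration-jobs describe my-migration-job --region=us-central1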
Verify the migration job
In this section, you confirm that Database Migration Service used the migration job to migrate data from your source database instance to the destination AlloyDB for PostgreSQL database instance.
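As a quick spot check before you run the Data Validation Tool, you can connect to the destination with psql and count rows in one of the migrated tables. The connection values, schema, and table name in this sketch are placeholders.
# Count rows in a migrated table on the destination (placeholder values throughout).
psql "host=10.10.10.12 port=5432 user=postgres dbname=postgres" -c "SELECT count(*) FROM my_schema.my_table;"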
Verify using the Data Validation Tool
You can use the open-source Data Validation Tool to verify more thoroughly that the data matches between the source and the destination.
The following steps show a minimal example:
Deploy or use a virtual machine with access to both the source and the destination.
In the virtual machine, create a folder in which to install the Data Validation Tool.
Navigate to this folder.
Use pip to install the Data Validation Tool.
pip install google-pso-data-validator
Create connections to the source Oracle database and the destination AlloyDB for PostgreSQL database.
data-validation connections add -c source Oracle --host 'ip-address' --port port --user username --password pswd --database database-name
data-validation connections add -c target Postgres --host 'ip-address' --port port --user username --password pswd --database database-name
For example:
data-validation connections add -c source Oracle --host '10.10.10.11' --port 1521 --user system --password pswd --database XE
data-validation connections add -c target Postgres --host '10.10.10.12' --port 5432 --user postgres --password pswd --database postgres
Create or generate a list of tables to compare data between the source and destination databases.
export TABLES_LIST=$(data-validation find-tables --source-conn target --target-conn target --allowed-schemas schema-name)
For example:
export TABLES_LIST=$(data-validation find-tables --source-conn target --target-conn target --allowed-schemas public)
Run full validation against all tables.
data-validation validate column --source-conn source --target-conn target --tables-list "${TABLES_LIST}"
We suggest that you run this validation during replication to ensure relative consistency. Queries against large tables might take too long to run during a short promotion window. In such cases, use the Data Validation Tool to add filters that reduce runtime, or prepare a table list that includes only a subset of tables for the final validation.
This confirms that Database Migration Service migrated the data.
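For example, a filter can limit a long-running validation to recent rows. In this sketch, the column name and date are placeholders, and you should confirm the filter flag's exact behavior in the Data Validation Tool documentation.
# Validate only a filtered subset of rows to keep runtime short (placeholder filter expression).
data-validation validate column --source-conn source --target-conn target --tables-list "${TABLES_LIST}" --filters "updated_at > '2024-01-01'"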
Finalize the migration
For continuous migrations, finalize the migration job when you want to start using your destination cluster for your application.
You can use the Promote button on the migration job details page to have Database Migration Service clean up all the temporary migration data and promote your destination.
Return to the Migration jobs page.
Click the migration job that you created in this quickstart. This represents the migration that you want to finalize. The Migration job details page appears.
Use the Data Validation Tool to track the replication delay by checking row counts.
Wait for the replication delay to trend down significantly, ideally on the order of minutes or seconds. The replication delay is available for review on the Migration jobs page.
After the replication delay is at a minimum, initiate the cutover. To avoid data loss, make sure to:
- Stop all writes, running scripts, and client connections to the source database. The downtime period begins here.
- Wait until the replication delay is at zero, which means that the migration job has processed all outstanding changes.
You can finalize a migration even if the replication delay isn't at zero. This can reduce the database downtime, but might affect the accuracy of the data in the destination.
- On the migration job details page, click Promote, and then confirm the action in the Promote migration job? window.
The migration job stops reading from your source database. Database Migration Service cleans up all the temporary migration data and promotes your AlloyDB for PostgreSQL destination. This process can take several minutes.
When the promotion process is complete, the migration job status changes to Completed.
You can now connect your application to the destination AlloyDB for PostgreSQL instance and safely delete the migration job.
Your AlloyDB for PostgreSQL database instance is ready to use.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
- Use the Google Cloud console to delete your migration job, conversion workspace, connection profiles, destination AlloyDB for PostgreSQL instance, and project if you don't need them.
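If you prefer the command line, you can delete the Database Migration Service resources with the gcloud CLI. The names and region in this sketch are placeholders for the resources you created in this quickstart; delete the AlloyDB for PostgreSQL cluster and the project separately if you no longer need them.
# Delete the migration job, conversion workspace, and connection profiles (placeholder names and region).
gcloud database-migration migration-jobs delete my-migration-job --region=us-central1
gcloud database-migration conversion-workspaces delete my-conversion-workspace --region=us-central1
gcloud database-migration connection-profiles delete my-oracle-connection-profile --region=us-central1
gcloud database-migration connection-profiles delete my-alloydb-connection-profile --region=us-central1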
What's next
- Read more about how to manage connection profiles.
- Read more about how to manage conversion workspaces.
- Read more about migration job statuses.