This document explains how to use different conversion workspaces in Database Migration Service.
Use conversion workspaces with the built-in editor
A conversion workspace helps you convert the schema and objects from your source database into a compatible format for your destination database. The workspace lets you review, modify, and apply the overall conversion structure, but it doesn't alter any data on the source database.
Convert source schema and code
When you set up a workspace, Database Migration Service performs the initial pull of the source database schema and code objects, and then you select which ones you want to convert (you can add more objects later). The selected objects are displayed in the Oracle tab. The AlloyDB for PostgreSQL draft panel displays the resulting PostgreSQL schema and code generated by Database Migration Service.
At this point Database Migration Service displays the anticipated post-conversion objects, not the actual objects on the destination instance.
In the Google Cloud console, go to Conversion workspaces.
From the list of available conversion workspaces, select the conversion workspace that you want to work with.
The conversion workspace editor opens.
Optional: Upload an Ora2Pg configuration file to provide additional mappings for the generated SQL:
Click Edit configuration > Customize mapping with a configuration file.
In the Ora2Pg configuration file box, click Browse and use the system file picker to select your configuration file.
Click Add file > Generate.
Optional: If you already have a converted schema and code, but want to manually adjust generated SQL before running the conversion again, perform the following steps:
In the Oracle panel, select the object for which you want to adjust the generated SQL.
Navigate to the SQL tab, and inspect the statements.
Adjust the scripts as needed for your scenario and click Save.
Once you finish all adjustments, click Convert source.
In the confirmation dialog box, click Convert.
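If you prefer to run the conversion from the command line, recent gcloud CLI versions also provide a convert command for conversion workspaces. The following is a minimal sketch under that assumption; confirm the command and its flags with gcloud database-migration conversion-workspaces convert --help before relying on it:
# Convert the source objects currently selected in the workspace
gcloud database-migration conversion-workspaces convert \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID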
Remove objects from the source schema
Remove objects from the source schema to exclude them from conversion. You can later add removed objects if needed.
In the Google Cloud console, go to Conversion workspaces.
From the list of available conversion workspaces, select the conversion workspace that you want to work with.
The conversion workspace editor opens.
In the Oracle tab, next to the object that you want to remove, click More > Remove.
In the Remove object confirmation dialog, click Remove.
Convert the source again to reflect the updates in the AlloyDB for PostgreSQL draft tab.
Add objects to the source schema
You can add back objects that were previously removed from the source schema to include them in the conversion.
In the Google Cloud console, go to Conversion workspaces.
From the list of available conversion workspaces, select the conversion workspace that you want to work with.
The conversion workspace editor opens.
In the Oracle tab, click Add objects.
In the Add objects section, use the source list to select all the entities that you want to add to the source schema.
You can use the Filter field to reduce the number of displayed objects. See Filter objects in the source schema view.
Click Add objects.
Convert the source again to reflect the updates in the AlloyDB for PostgreSQL draft tab.
Pull source schema and code snapshot
You can pull schema and code from your source database into the conversion workspace at any time. Pulling the source lets you add new objects to the conversion or update existing ones.
Pulling updated database schema and code doesn't reset any additional mappings that exist in the conversion workspace from the Ora2Pg configuration files unless you explicitly choose to remove custom mappings.
Additionally, pulling the source doesn't overwrite SQL changes that you made to your code objects. You can reset these changes directly at the object level.
In the Google Cloud console, go to Conversion workspaces.
Click Edit configuration > Pull source schema snapshot again.
In the dialog box:
- (Optional) Select the Reset custom mapping option to remove existing custom mappings and DDL modifications.
- Click Pull schema snapshot.
Database Migration Service pulls the new snapshot from your source database. When the operation completes, convert the source again.
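Depending on your gcloud CLI version, you might also be able to trigger the pull from the command line with the seed command. The following sketch assumes that the command and a --source-connection-profile flag are available in your version, and that SOURCE_CONNECTION_PROFILE_ID is a placeholder for the connection profile that points to your source database; verify with gcloud database-migration conversion-workspaces seed --help:
# Pull (seed) the schema and code snapshot from the source connection profile
gcloud database-migration conversion-workspaces seed \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID \
--source-connection-profile=SOURCE_CONNECTION_PROFILE_ID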
Review conversion results
After you perform the source conversion, you can review the conversion results and possible issues for every individual converted object in the workspace editor area. You can also use the Google Cloud CLI to save all results and issues in bulk to a text file.
Console
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
The conversion workspace editor opens.
Select the Oracle tab, and locate the object for which you want to review conversion results in the tree view table.
Select the object. Use the SQL and Conversion issues tabs to review the conversion.
gcloud
With the Google Cloud CLI, you can print all conversion results or issues to the terminal. Redirect the output to a file for more convenient bulk object reviews.
Get a list of conversion results
The gcloud CLI displays conversion results in the terminal in the form of Data Definition Language (DDL) statements. To save conversion results to a file, execute the following command:
gcloud database-migration conversion-workspaces describe-ddls \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID \
> OUTPUT_FILE_PATH
Replace:
- CONVERSION_WORKSPACE_ID with the conversion workspace identifier. For information on how to retrieve conversion workspace identifiers, see View conversion workspace details.
- REGION_ID with the name of the region where the conversion workspace is located.
- OUTPUT_FILE_PATH with the path to the text file where you want to save the output.
Example:
gcloud database-migration conversion-workspaces describe-ddls \
my-conversion-workspace \
--region=us-central1 \
> ./my-conversion-ddls.txt
Result:
Your schema conversion results are saved in a text format where the first line says DDLs and subsequent lines contain SQL statements:
DDLs
CREATE SCHEMA IF NOT EXISTS "SCHEMA1";
ALTER TABLE "SCHEMA1"."EMPLOYEES" ADD CONSTRAINT PK_ID PRIMARY KEY ("ID");
CREATE OR REPLACE FUNCTION mockschema.func_test_datatype(str1 VARCHAR(65000)) RETURNS DECIMAL
LANGUAGE plpgsql
AS $$
DECLARE
  str2 VARCHAR(100);
BEGIN
  SELECT employees.first_name INTO STRICT STR2
  FROM mockschema.employees
  WHERE employees.employee_id = CAST(FUNC_TEST_DATATYPE.str1 as DECIMAL);
  RAISE NOTICE '%', concat('Input : ', FUNC_TEST_DATATYPE.str1, ' Output : ', str2);
  RETURN 0;
END;
$$;
CREATE OR REPLACE PROCEDURE greetings AS BEGIN dbms_output.put_line('Hello World!'); END;
CREATE SYNONYM TABLE "SCHEMA1"."SYNONYM1" ON "SCHEMA1"."EMPLOYEES";
CREATE OR REPLACE VIEW "SCHEMA1"."VIEW1" AS SELECT * FROM JOBS;
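Because the saved output is plain text and each statement starts on its own line, you can slice it with standard shell tools. For example, using the my-conversion-ddls.txt file from the previous example:
# List only the objects that the conversion creates
grep '^CREATE' ./my-conversion-ddls.txt
# Count lines by their leading uppercase keyword (CREATE, ALTER, and so on)
grep -Eo '^[A-Z]+' ./my-conversion-ddls.txt | sort | uniq -c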
Get a list of conversion issues
To save conversion issues to a file, execute the following command:
gcloud database-migration conversion-workspaces describe-issues \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID \
> OUTPUT_FILE_PATH
Replace:
- CONVERSION_WORKSPACE_ID with the conversion workspace identifier. For information on how to retrieve conversion workspace identifiers, see View conversion workspace details.
- REGION_ID with the name of the region where the conversion workspace is located.
- OUTPUT_FILE_PATH with the path to the text file where you want to save the output.
Example:
gcloud database-migration conversion-workspaces describe-issues \
my-conversion-workspace \
--region=us-central1 \
> ./my-conversion-issues.txt
Result:
All the conversion issues contained in your workspace are saved in a text format where the first line contains column headers and each subsequent line contains a separate conversion issue:
PARENT NAME ENTITY_TYPE ISSUE_TYPE ISSUE_SEVERITY ISSUE_CODE ISSUE_MESSAGE
SCHEMA1 EMPLOYEES TABLE DDL ERROR 500 unable to parse DDL.
SCHEMA1 EMPLOYEES TABLE CONVERT WARNING 206 no conversion done.
SCHEMA1 STORED_PROCEDURE1 STORED_PROCEDURE DDL ERROR 500 invalid DDL.
SCHEMA1 SYNONYM1 SYNONYM CONVERT WARNING 206 synonym warning message.
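Because the file uses whitespace-separated columns, as shown in the previous example, you can filter it with standard shell tools. For example, to keep the header row plus only the issues whose ISSUE_SEVERITY is ERROR:
# Print the header (line 1) and rows where column 5 (ISSUE_SEVERITY) is ERROR
awk 'NR==1 || $5=="ERROR"' ./my-conversion-issues.txt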
Fix conversion issues
Database Migration Service might not be able to automatically convert your entire source.
For most Oracle objects, you can use the conversion editor directly in Database Migration Service to adjust the generated SQL. For others, you might need to change the object directly in your source database and then pull the source snapshot again.
For a complete list of objects that Database Migration Service supports for editing directly in the conversion workspace, see Editable Oracle objects.
To fix conversion issues encountered with objects that support live editing:
Review the conversion results and identify possible issues.
You can use the Google Cloud console to review individual objects, or the gcloud CLI to review all objects in bulk.
In the AlloyDB for PostgreSQL draft tab, use the SQL editor to update the generated code.
Perform a test application to verify that Database Migration Service can correctly apply your code to the destination database.
To fix conversion issues encountered with objects that aren't supported in the workspace editor, perform one of the following:
Update your source:
- Update the schema and code directly in your source database.
- Pull the updated sources to Database Migration Service.
- Convert the sources again and check if the issues persist.
Provide additional mappings by using the Ora2Pg configuration tool (a sample configuration snippet follows this list):
- Configure Ora2Pg and create a mapping file for the faulty object.
- Add the configuration file to the workspace. See Modify conversion workspaces.
Try removing the object from the source schema view and converting the schema again.
Modify the code directly on the destination instance. Make sure that you don't update objects that can affect your mappings.
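For reference, a custom mapping in an Ora2Pg configuration file is a set of plain-text directives. The following minimal snippet uses the standard DATA_TYPE and MODIFY_TYPE directives, but the table, column, and type names are placeholders for illustration only; check the Ora2Pg documentation for the exact directive syntax before adding the file to your workspace:
# Remap Oracle data types globally, as Oracle:PostgreSQL pairs
DATA_TYPE NUMBER(*\,0):bigint,DATE:timestamp
# Override the type of a single column, as TABLE:COLUMN:TYPE
MODIFY_TYPE EMPLOYEES:SALARY:numeric(12\,2)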
Test the application to your destination
Before you apply the schema to the destination database, you can perform a test application that doesn't impact your destination AlloyDB for PostgreSQL instance. Testing schema application can help you proactively check for possible issues.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
The conversion workspace editor opens.
Click Apply to destination > Test (recommended).
The wizard for applying the schema to the destination database appears.
In the Define destination section, select the connection profile that points to your destination database.
Click Define and continue.
In the Select objects and test application section, select the schemas of database entities you want to test for your destination database.
You can use the Filter field to reduce the number of displayed objects. See Filter objects in the source schema view.
Click Test application.
You can review the application status in the AlloyDB for PostgreSQL draft tab.
Apply schema to destination
When the schema that you want to use in the destination database is converted according to your requirements and mappings, you can apply the results to the destination database.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
The conversion workspace editor opens.
Click Apply to destination > Apply.
The wizard for applying the schema to the destination database appears.
In the Define destination section, select the connection profile that points to your destination database.
Click Define and continue.
In the Review objects and apply conversion to destination section, select the schemas of database entities you want to create in your destination database.
You can use the Filter field to reduce the number of displayed objects. See Filter objects in the source schema view.
Click Apply to destination.
You can review the application status in the AlloyDB for PostgreSQL draft tab.
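If you script your deployments, recent gcloud CLI versions also include an apply command for conversion workspaces. The following is a sketch under that assumption; the --destination-connection-profile flag name shown here is an assumption and can differ by version, so confirm with gcloud database-migration conversion-workspaces apply --help. DESTINATION_CONNECTION_PROFILE_ID is a placeholder for the connection profile that points to your destination database:
# Apply the converted schema to the destination defined by the connection profile
gcloud database-migration conversion-workspaces apply \
CONVERSION_WORKSPACE_ID \
--region=REGION_ID \
--destination-connection-profile=DESTINATION_CONNECTION_PROFILE_ID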
Create a migration job from the conversion editor
You can create a migration job that uses your conversion workspace directly from the conversion editor interface.
In the Google Cloud console, go to Conversion workspaces.
Click the display name of the conversion workspace that you want to work with.
The conversion workspace editor opens.
Click Create migration job.
Proceed with the standard migration job steps, as outlined in Create a migration job.
Filter objects in the source schema view
Database schemas often contain thousands of objects, making it challenging to partition conversion work. When you add objects from the schema snapshot to the source schema view, you can use filters to limit the number of displayed objects. Filters let you add objects in a more granular fashion and focus on converting a select subset of your schema.
Use the filtered view when you add objects to the source schema view:
In the Filter field, use one of the following filtering methods:
- From the Properties list, select one of the following auto-suggested options:
  - Object name to filter the objects by a name, for example, ADMIN.
  - Object type to filter the objects by several types, such as Function or Table.
  - Conversion status to filter the objects by the conversion status, such as Action required or No issues.
  You can combine filter properties with logical operators.
- Enter your query. For example: type=table. For more information on the filtering syntax, see Supported filtering syntax.
Select the objects that you want to add to the source schema view.
Supported filtering syntax
You can filter objects by name with basic free text search, or use a dedicated type property. Both approaches support the Google API formal specification for filtering, meaning you can use literals with wildcards, as well as logical and comparison operators.
- Free text filtering
  Use free text to filter the objects by name. This approach is case-sensitive and supports wildcards.
  Example: The *JOB* query uses wildcards to search for entities that contain the JOB substring. The filtered view returns some tables and one stored procedure.
- Filter by using the type property
  You can filter objects by all standard types supported in Database Migration Service. The type property supports the following literals with the equality (=) and inequality (!=) operators: database, schema, table, column, index, sequence, stored_procedure, function, view, synonym, materialized_view, udt, constraint, database_package, trigger, and event_trigger.
  Example: The type=table filter returns only tables present in your schema.
Combine filtering conditions
You can specify multiple conditions by combining them with logical operators.
For example, to search exclusively for tables whose names contain the JOB or EMPLOYEE substrings, use this query:
type=table AND (*JOB* OR *EMPLOYEE*)
As a result, the filter displays all matching tables.
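The following queries are additional illustrations of the same syntax; the name patterns are placeholders, so adapt them to your own schema:
type!=table
type=function AND *EMP*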
Use legacy conversion workspaces
Legacy conversion workspaces are read-only workspaces that serve to pair the Ora2Pg configuration files with a connection profile.
Convert source schema
For legacy workspaces, you convert the schema by using the Ora2Pg migration tool directly on your source database. Schema conversion is managed outside Database Migration Service. See Prepare for schema conversion.
Database Migration Service runs the source schema conversion when you create the legacy conversion workspace. If you want to update your mappings, re-create the legacy workspace with the updated configuration file.
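For example, a minimal Ora2Pg invocation to export table definitions as PostgreSQL DDL might look like the following. The configuration path and output file name are placeholders, and the available flags depend on your Ora2Pg installation:
# Export Oracle table definitions as PostgreSQL DDL using the settings in ora2pg.conf
ora2pg -c ./ora2pg.conf -t TABLE -o tables.sql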
Apply to destination
After you create the Ora2Pg configuration file and create the workspace, you must apply the generated code yourself, directly on the destination database. If you need to change the conversion, adjust the Ora2Pg configuration file and re-create the legacy workspace with the new settings.
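For example, assuming Ora2Pg wrote its output to tables.sql, you could load it into the destination with psql; the connection values are placeholders for your own instance:
# Apply the generated schema file to the destination AlloyDB for PostgreSQL instance
psql "host=DESTINATION_HOST dbname=DESTINATION_DB user=DESTINATION_USER" -f tables.sql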