Overview
Database Migration Service uses migration jobs to migrate data from your source database instance to the destination Cloud SQL database instance.
Creating a migration job includes:
Defining settings for the migration job
Specifying information about the connection profile that you created for your source database (source connection profile)
Defining settings for the destination Cloud SQL database instance and creating the instance
Setting up connectivity between the source and destination database instances
Testing the migration job to ensure that the connection information you provided for the job is valid
Database Migration Service comes equipped with a wizard to help you create a migration job. This wizard consists of five panels: Get started, Define a source, Create a destination, Define connectivity method, and Test and create migration job. Information on how to populate each panel is provided in the various sections of this page.
You can pause the creation of a migration job by clicking Save and exit. All of the data that you entered up to that point is saved in a draft migration job, which you can access on the DRAFTS tab.
To finish creating a migration job, go to the DRAFTS tab, and then click the job. The creation flow resumes from where you left off. The job remains a draft until you click CREATE or CREATE & START.
Define settings for the migration job
Go to the Migration jobs page in the
Google Cloud Console.
Click Create migration job at the top of the page.
Provide a name for the migration job. Choose a friendly name that helps
you identify the migration job. Don't include sensitive or personally
identifiable information in the job name.
Keep the auto-generated Migration job ID .
Select the source database engine.
Select the destination database engine.
Select the destination region for your migration. This is where the Database Migration Service instance is created, and it should be selected based on the location of the services that need your data, such as Compute Engine instances, App Engine apps, and other services. After you choose the destination region, this selection can't be changed.
Note: The Mexico, Montreal, and Osaka regions have three zones within one or two physical data centers. These regions are in the process of expanding to at least three physical data centers. For more information, see Cloud locations and Google Cloud Platform SLAs. To help improve the reliability of your workloads, consider a multi-regional deployment.
Important: If you plan to use the Cloud SQL for PostgreSQL Enterprise Plus edition, make sure your region is supported for that edition. See Cloud SQL for PostgreSQL Enterprise Plus edition region support.
Specify the migration job type: One-time (snapshot only) or Continuous (snapshot + ongoing changes).
Review the required prerequisites that are generated automatically to reflect how the environment must be prepared for a migration job. These prerequisites can include how to configure the source database and how to connect it to the destination Cloud SQL database instance. It's best to complete these prerequisites at this step, but you can complete them at any time before you test the migration job or start it. For more information about these prerequisites, see Configure your source.
Click Save and continue.
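The same job definition can also be expressed with the Google Cloud CLI. The following sketch only assembles and prints the equivalent `gcloud database-migration migration-jobs create` command; the job ID, region, and connection profile IDs are placeholders, so verify the flags against the gcloud reference for your version before running the printed command.

```shell
#!/usr/bin/env bash
# Sketch: assemble the gcloud command for a continuous PostgreSQL
# migration job. All values below are placeholders for illustration.
JOB_ID="my-pg-migration"            # hypothetical migration job ID
REGION="us-central1"                # destination region selected above
SOURCE_PROFILE="my-source-profile"  # source connection profile ID
DEST_PROFILE="my-dest-profile"      # destination connection profile ID

CMD="gcloud database-migration migration-jobs create ${JOB_ID} \
  --region=${REGION} \
  --type=CONTINUOUS \
  --source=${SOURCE_PROFILE} \
  --destination=${DEST_PROFILE}"

# Printed rather than executed, so the sketch is safe to run anywhere.
echo "${CMD}"
```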
Specify information about the source connection profile
If you have created a connection profile, then select it from the list of
existing connection profiles.
If you haven't created a connection profile, then create one by clicking
Create a connection profile at the bottom of
the drop-down list, and then perform the same steps as in
Create a source connection profile .
In the Customize data dump configurations section,
click Show data dump configurations .
Higher data dump parallelism increases dump speed but also increases the load on your source database. You can use the following settings:
Optimal (recommended) : Balanced performance with optimal load on the source database.
Maximum : Provides the highest dump speeds, but might cause increased load on the source database.
Minimum : Takes the lowest amount of compute resources on the source database, but might have slower dump throughput.
If you want to use adjusted data dump parallelism settings, make sure to increase the max_replication_slots, max_wal_senders, and max_worker_processes parameters on your source database. You can verify your configuration by running the migration job test at the end of migration job creation.
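One way to raise these parameters is in the source database's postgresql.conf; the values below are illustrative only, and the right sizes depend on the parallelism level you choose and any other replication consumers on the source:

```
# postgresql.conf on the source database (illustrative values only).
# All three parameters require a restart of the source database
# to take effect.
max_replication_slots = 10
max_wal_senders = 10
max_worker_processes = 10
```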
Click Save and continue .
Define and create the destination Cloud SQL instance
Note: For Cloud SQL sources: If you are migrating from a Cloud SQL instance that uses a Private IP connection to a Cloud SQL instance that uses a non-RFC 1918 IP address range, add the non-RFC 1918 range to the network configuration of your source Cloud SQL instance. See Configure authorized networks in the Cloud SQL documentation.
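As a quick sanity check when deciding whether this note applies: RFC 1918 reserves 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The following small shell helper (an illustrative sketch, not part of any Google tooling) classifies an IPv4 address against those ranges:

```shell
#!/usr/bin/env bash
# Prints "private" if the IPv4 address is in an RFC 1918 range,
# otherwise "non-rfc1918". Illustrative helper only.
classify_ip() {
  local ip="$1"
  local o1 o2 o3 o4
  IFS=. read -r o1 o2 o3 o4 <<< "$ip"
  if [ "$o1" -eq 10 ]; then
    echo private                                   # 10.0.0.0/8
  elif [ "$o1" -eq 172 ] && [ "$o2" -ge 16 ] && [ "$o2" -le 31 ]; then
    echo private                                   # 172.16.0.0/12
  elif [ "$o1" -eq 192 ] && [ "$o2" -eq 168 ]; then
    echo private                                   # 192.168.0.0/16
  else
    echo non-rfc1918
  fi
}

classify_ip 10.1.2.3      # private
classify_ip 172.20.0.5    # private
classify_ip 100.64.0.1    # non-rfc1918 (carrier-grade NAT range)
```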
From the Type of destination instance drop-down menu, select New instance. You can also migrate to an existing instance; see Migration job for an existing instance.
Provide an ID for the Cloud SQL instance or use the auto-generated ID.
Don't include sensitive or personally identifiable information in the
ID; it's externally visible. There's no need to include the
project ID in the instance name. This is done automatically where appropriate
(for example, in the log files).
Provide an alphanumeric password for the destination Cloud SQL instance. This is the password for the postgres administrator account in the instance.
You can either enter the password manually or click Generate to have Database Migration Service create one for you automatically.
Choose the database version for the destination instance from the list of
supported Database Migration Service versions for the specified database engine.
Learn more
about cross-version migration support.
Select the Cloud SQL for PostgreSQL edition for your destination instance. There are two
options available: Cloud SQL for PostgreSQL Enterprise edition and Cloud SQL for PostgreSQL Enterprise Plus edition .
Cloud SQL for PostgreSQL editions come with different sets of features,
available machine types, and pricing. Make sure you consult
the Cloud SQL documentation to choose the edition that is appropriate for your needs.
For more information, see Introduction to Cloud SQL for PostgreSQL editions .
The instance is created in the region that you selected when you
defined settings for the migration job . Select a zone
within that region or leave the zone set to Any for Google to select
one automatically.
If you are configuring your instance for high availability , select Multiple zones (Highly available) . You can select both the primary and the secondary zone.
The following conditions apply when the secondary zone is used during instance creation:
The zones default to Any for the primary zone and Any (different from primary) for the secondary zone.
If both the primary and secondary zones are specified, they must be different zones.
Choose whether to connect to this instance via private
or public IP address.
If you're planning to connect via VPC peering or a reverse SSH tunnel, then select the Private IP checkbox. To enable private IP, do the following:
Select the associated VPC network to peer. If you plan on connecting to
the migration source via VPC peering, then choose the VPC where the instance
resides.
If a managed service network was never configured for the selected VPC, you
can choose to either select an IP range and click Connect or use an
automatically selected IP range and click Allocate & Connect .
If you're planning to connect via IP allowlisting, then select the
Public IP checkbox.
Optionally, click the Authorized networks field, and either authorize a network or a proxy to connect to the Cloud SQL instance. Only the addresses that you provide are authorized to connect. Learn more
about configuring public access to the instance.
Select the machine type for the Cloud SQL instance. The disk size must be equal to or greater than the source database size.
Note: Machines available in the Machine type selection depend
on the Cloud SQL edition you use.
Learn more about
PostgreSQL
machine types.
For Cloud SQL for PostgreSQL Enterprise Plus edition : Select the Enable data cache checkbox
if you want to use the data cache feature in your destination database.
Data cache is an optional feature available for Cloud SQL for PostgreSQL Enterprise Plus edition instances
that adds a high-speed local solid state drive to your destination database.
This feature can add costs to your Cloud SQL instance.
For more information on data cache, see
Data cache overview in
Cloud SQL documentation.
Specify the storage type for the Cloud SQL instance. You can choose either a solid-state drive (SSD) or a hard disk drive (HDD).
Specify the storage capacity (in GBytes) for the Cloud SQL instance.
Important: Make sure the instance has enough storage capacity to handle the
data from your source database. You can increase this capacity at any time,
but you can't decrease it.
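To size the destination's storage, you can measure the source first. One way to do this on PostgreSQL, using standard catalog functions (run against the source instance):

```sql
-- Total on-disk size of all databases on the source instance,
-- formatted for humans. Compare against the storage capacity
-- you allocate for the destination.
SELECT pg_size_pretty(SUM(pg_database_size(datname))) AS total_size
FROM pg_database;
```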
Optionally, click SHOW OPTIONAL CONFIGURATIONS , and then:
Specify whether you want to manage the encryption of the data that's migrated from the source to the destination. By default, your data is encrypted with a key that's managed by Google Cloud. If you want to manage your encryption, then you can use a customer-managed encryption key (CMEK). To do so:
Select the Use a customer-managed encryption key (CMEK) check box.
From the Select a customer-managed key menu, select your CMEK.
If you don't see your key, then click ENTER KEY RESOURCE NAME to provide the resource name of the key that you want to use. For example, you can enter projects/my-project-name/locations/my-location/keyRings/my-keyring/cryptoKeys/my-key in the Key resource name field, and then click SAVE.
As part of creating the migration job, Database Migration Service verifies that the CMEK exists and that Database Migration Service has permissions to use the key.
If Database Migration Service doesn't have these permissions, then a message appears indicating that the Database Migration Service service account can't use the CMEK. Click GRANT to give Database Migration Service permissions to use the key.
For more information about creating a CMEK, see Using customer-managed encryption keys (CMEK) .
Add any necessary flags that will be applied to the database server. If possible, make sure that the database flags on the created destination Cloud SQL instance are the same as those on the source database.
Learn more about supported database flags for PostgreSQL .
Add any labels
that are specific to the Cloud SQL instance.
Labels help organize your instances. For example, you can organize labels
by cost center or environment. Labels are also included in your bill so you
can see the distribution of costs across your labels.
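The CMEK key resource name described above follows a fixed pattern. The following sketch assembles it from its components; every value is a placeholder for illustration:

```shell
#!/usr/bin/env bash
# Assemble a CMEK key resource name from its components.
# All values are placeholders for illustration.
PROJECT="my-project-name"
LOCATION="us-central1"
KEYRING="my-keyring"
KEY="my-key"

KEY_RESOURCE="projects/${PROJECT}/locations/${LOCATION}/keyRings/${KEYRING}/cryptoKeys/${KEY}"
echo "${KEY_RESOURCE}"
```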
Click CREATE & CONTINUE .
In the Create destination database window, click CREATE DESTINATION & CONTINUE
to create the new instance. This may take up to several minutes.
This creates a Cloud SQL instance for which you're charged according to the specified configuration. If you exit the migration job creation flow before completion, then use the CANCEL button to clean up the Cloud SQL instance. If you close the browser or tab, click Save and exit, or navigate away from the screen, then the Cloud SQL instance isn't cleaned up, and you must delete it manually from the Cloud SQL Instances page.
After the destination instance is created, the
Cloud SQL instance settings can no longer be edited in the Database Migration Service
console. You can still access the instance directly in the Cloud SQL
console to modify it.
Wait for the creation of the destination instance to finish.
Set up connectivity between the source and destination database instances
If you're reusing a source connection profile that was previously used for heterogeneous migrations (for example, from Oracle to Cloud SQL for PostgreSQL), then in this step you must choose a connectivity method to use for the homogeneous migration from one Cloud SQL for PostgreSQL database instance to another. You must select a connectivity method because the heterogeneous connectivity method isn't relevant for this migration job.
From the Connectivity method drop-down menu, select a network connectivity method. This method defines how the newly created Cloud SQL instance will connect to the source database. Current network connectivity methods include IP allowlist, reverse SSH tunnel, and VPC peering .
If you select the IP allowlist network connectivity method, you need to specify the outgoing IP address of your destination instance. If the Cloud SQL instance you created is a high availability instance, include the outgoing IP addresses for both the primary and the secondary instance.
If you select the reverse SSH tunnel network connectivity method, then select the Compute Engine VM instance that will host the tunnel.
After you specify the instance, Google provides a script that performs the steps to set up the tunnel between the source and destination databases. Run the script in the Google Cloud CLI from a machine that has connectivity to both the source database and Google Cloud.
If you select the VPC peering network connectivity method, then select the VPC network where the source database resides. The Cloud SQL instance will be updated to connect to this network.
Learn more about how to Configure connectivity .
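For orientation, a reverse SSH tunnel of the kind described above has roughly the following shape: the Compute Engine VM forwards a port back to the source database. The sketch below only prints the command with placeholder values; use the script that Google generates for your migration job rather than this illustration.

```shell
#!/usr/bin/env bash
# Conceptual shape of a reverse SSH tunnel. Placeholders only;
# the generated script handles the real setup.
VM_USER="tunnel-user"          # hypothetical account on the tunnel VM
VM_HOST="203.0.113.10"         # hypothetical VM address (TEST-NET range)
SOURCE_DB_HOST="192.168.0.20"  # source database reachable from this machine
SOURCE_DB_PORT=5432
TUNNEL_PORT=5432               # port the VM exposes toward Cloud SQL

CMD="ssh -N -R ${TUNNEL_PORT}:${SOURCE_DB_HOST}:${SOURCE_DB_PORT} ${VM_USER}@${VM_HOST}"
echo "${CMD}"   # printed, not executed, so the sketch is safe to run
```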
After selecting the network connectivity method and providing any additional information for the method, click CONFIGURE & CONTINUE .
Test and create the migration job
In this final step, review the summary of the migration job settings, source, destination, and connectivity method, and then test the validity of the migration job setup. If any issues are encountered, then you can modify the migration job's settings. Not all settings are editable.
Click TEST JOB to verify that Database Migration Service can connect to the source database using the connection information that you provided.
If the test fails, then you can address the problem in the appropriate
part of the flow, and return to re-test.
For more information about reasons why the test may fail and how to troubleshoot any issues associated with the test failing, see Diagnose issues for PostgreSQL .
You can create the migration job even if the test fails, but after the job is started, it might fail at some point during the run.
Click CREATE & START JOB to create the migration job and
start it immediately, or click CREATE JOB to create the migration job without immediately
starting it.
If the job isn't started at the time that it's created, then it can be started from the
Migration jobs page by clicking START .
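A job that was created without being started can also be started from the CLI. As before, this sketch only assembles and prints the `gcloud database-migration migration-jobs start` command; the job ID and region are placeholders, so confirm the syntax against the gcloud reference for your version.

```shell
#!/usr/bin/env bash
# Sketch: assemble the gcloud command that starts an existing
# migration job. Values are placeholders for illustration.
JOB_ID="my-pg-migration"   # hypothetical migration job ID
REGION="us-central1"       # region selected when the job was defined

CMD="gcloud database-migration migration-jobs start ${JOB_ID} --region=${REGION}"
echo "${CMD}"   # printed, not executed
```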
Regardless of when the migration job starts, your organization is
charged for the existence of the destination instance.
Key Point: When you start the migration job, Database Migration Service begins the full dump, briefly locking the source database. If your source is in Amazon RDS or Amazon Aurora, Database Migration Service additionally requires a short (typically under a minute) write downtime at the start of the migration. For more information, see Known limitations.
The migration job is added to the migration jobs list and can be viewed
directly.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License , and code samples are licensed under the Apache 2.0 License . For details, see the Google Developers Site Policies . Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-11-26 UTC.