This page describes how to replicate data from an external server to a Cloud SQL replica. You can replicate data with a managed dump or by uploading a dump file. We'll show you how to do both with an external server that is reachable through its IP address or DNS address.
An alternative to the procedures described on this page is the Database Migration Service, which offers continuous replication or one-time database migration from a source database server to Cloud SQL.
Before you begin
Terminology
Source database server. The MySQL server that you manage. Also known as the external primary, external server, or external database server. If you are using Amazon Relational Database Service (RDS), this is your RDS instance.
Source representation instance. A Cloud SQL instance that represents the source database server to the Cloud SQL replica. It is visible in the Cloud Console and appears the same as a regular Cloud SQL instance, but it contains no data, requires no configuration or maintenance, and does not affect billing. Contains the source database server configuration.
Cloud SQL replica. The Cloud SQL instance that replicates from the source database server. Also known as the external primary read replica.
Replication user account. The MySQL user account on the source database server with sufficient permissions to allow replication between the source database server and Google Cloud server.
Managed dump. The process of copying data directly from the source database server to the Cloud SQL replica. In this situation, Cloud SQL connects to the source database server using the replication user account and runs `mysqldump` directly on the source database server to import data into the Cloud SQL replica.
Source database server checklist
All
Ensure that MySQL 5.5, 5.6, 5.7 or 8.0 is installed on your source database server. MySQL Community Edition, Cloud SQL MySQL, and MySQL on Amazon RDS are supported.
- If MySQL 8.0 is installed on the source database server, you can use Cloud SQL MySQL 8.0 with your Cloud SQL replica.
- If MySQL 5.5 is installed on the source database server, GTID is not supported.
Ensure that binary logs are enabled and that:
- Row-based binary logging is used.
- Binary logs are retained for a long enough period to support the database migration. Generally a week is sufficient.
Take note of whether your source database server has GTID enabled. This will affect the steps you need to take to set up replication.
Take note of whether your source database server has global read-lock permissions. This will affect the steps you need to take to set up replication.
If the Cloud SQL replica is going to use a private IP, the source database server must be configured for access from the Cloud SQL replica's VPC. Set up this configuration before you complete the steps in this guide.
If your source database contains `DEFINER` clauses (views, events, triggers, or stored procedures), replication might fail depending on the order in which these statements are executed. Learn more about `DEFINER` usage and potential workarounds in Cloud SQL.
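A quick way to check several of these prerequisites at once is to query the relevant server variables on the source database server. This is only a sketch; the host and user shown in the comment are hypothetical placeholders:

```shell
# Standard MySQL server variables that cover the checklist above:
# log_bin (binary logging), binlog_format (should be ROW),
# gtid_mode (GTID on/off), expire_logs_days (binlog retention).
CHECK_QUERY="SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'gtid_mode', 'expire_logs_days')"

# Run against your source database server (hypothetical host and user):
#   mysql --host=203.0.113.5 --user=admin -p -e "$CHECK_QUERY"
echo "$CHECK_QUERY"
```

Expect `log_bin` to be `ON` and `binlog_format` to be `ROW`; note the `gtid_mode` value, since it affects the setup steps later on this page.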
Amazon RDS
Ensure that MySQL for Amazon RDS is installed on your source database server.
Take note that a source database server using Amazon RDS does not support GTID in most situations, but does support GTID for MySQL 5.7.
Take note that a source database server using Amazon RDS does not support global read-lock permissions.
Set up replication
You can initiate replication from a Cloud Storage file or from a managed dump.
When you set up your replication settings, you can also decide whether the Cloud SQL replica should stay in sync with the source database server after the initial import is complete. A replica that stays in sync is online. A replica that is updated only once is offline.
Online + File
To set up this type of replication, your source database server must have at least one of the following: GTID enabled or global read-lock permissions.
- Set up a project in Google Cloud.
- Install the Google Cloud SDK for the source database server.
- Create the replication user on the source database server.
- Export data from the source database server to a Google Cloud bucket.
- Set up the source representation instance.
- Set up the Cloud SQL replica.
- Confirm that instances were added correctly to Cloud SQL.
- Add users to the Cloud SQL replica.
- Update permissions for the replication user on the source database server.
- Allow incoming connections on the source database server.
- Verify your replication settings.
- Review your replication logs (optional).
- Replicate your data.
- Confirm your replication status.
- Clean up your storage.
Online + Managed Dump
- Set up a project in Google Cloud.
- Install the Google Cloud SDK for the source database server.
- Create the replication user on the source database server.
- Set up the source representation instance.
- Set up the Cloud SQL replica.
- Confirm that instances were added correctly to Cloud SQL.
- Add users to the Cloud SQL replica.
- Update permissions for the replication user on the source database server.
- Allow incoming connections on the source database server.
- Verify your replication settings.
- Review your replication logs (optional).
- Replicate your data.
- Confirm your replication status.
Offline + File
- Set up a project in Google Cloud.
- Install the Google Cloud SDK for the source database server.
- Create the replication user on the source database server.
- Export data from the source database server to a Google Cloud bucket.
- Set up the source representation instance.
- Set up the Cloud SQL replica.
- Confirm that instances were added correctly to Cloud SQL.
- Add users to the Cloud SQL replica.
- Update permissions for the replication user on the source database server.
- Allow incoming connections on the source database server.
- Replicate your data.
- Confirm your replication status.
- Clean up your storage.
Offline + Managed Dump
- Set up a project in Google Cloud.
- Install the Google Cloud SDK for the source database server.
- Create the replication user on the source database server.
- Set up the source representation instance.
- Set up the Cloud SQL replica.
- Confirm that instances were added correctly to Cloud SQL.
- Add users to the Cloud SQL replica.
- Update permissions for the replication user on the source database server.
- Allow incoming connections on the source database server.
- Verify your replication settings.
- Review your replication logs (optional).
- Replicate your data.
Configuration procedures
Follow the steps below that match your replication choices.
Set up a Google Cloud project
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
- Enable the Cloud SQL Admin API.
- Make sure you have the Cloud SQL Admin, Storage Admin, and Compute Viewer roles on your user account.
Install the Google Cloud SDK
To configure replication, install the Google Cloud SDK. If it isn't already installed elsewhere, you may want to install it on your source database server.
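A minimal install-and-initialize sketch, assuming the documented interactive installer; check the SDK documentation for your platform's preferred method:

```shell
# Download and run Google's interactive installer, then initialize.
# (Commented out here; run these on the machine that will manage replication.)
#   curl https://sdk.cloud.google.com | bash
#   exec -l "$SHELL"    # reload the shell so gcloud is on PATH
#   gcloud init         # sign in and choose your Google Cloud project

# Afterward, confirm the CLI is reachable:
GCLOUD_PATH="$(command -v gcloud || echo missing)"
echo "gcloud: $GCLOUD_PATH"
```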
Create a replication user account
A MySQL user account with the `REPLICATION SLAVE` privilege must exist on your source database server. This user account should be used only to manage replication. To add a replication user account, open a terminal on the source database server and enter these MySQL commands:
MySQL
-- Online migration with Cloud Storage file:
CREATE USER '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT REPLICATION SLAVE, EXECUTE ON *.* TO '[USERNAME]'@'[HOST]';
FLUSH PRIVILEGES;

-- Online migration with managed dump and GTID enabled:
CREATE USER '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT REPLICATION SLAVE, EXECUTE, SELECT, SHOW VIEW ON *.* TO '[USERNAME]'@'[HOST]';
FLUSH PRIVILEGES;

-- Online migration with managed dump and GTID disabled:
CREATE USER '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT REPLICATION SLAVE, EXECUTE, SELECT, SHOW VIEW, REPLICATION CLIENT,
RELOAD ON *.* TO '[USERNAME]'@'[HOST]';
FLUSH PRIVILEGES;

-- Offline migration with managed dump:
CREATE USER '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT SELECT, SHOW VIEW ON *.* TO '[USERNAME]'@'[HOST]';
FLUSH PRIVILEGES;
example
CREATE USER 'bluewing'@'%' IDENTIFIED BY '8*&fwwd';
GRANT REPLICATION SLAVE ON *.* TO 'bluewing'@'%';
FLUSH PRIVILEGES;
Property | Value |
---|---|
[USERNAME] | The replication user account on the source database server. |
[PASSWORD] | The password for the replication user account. |
[HOST] | Set this to `%` to accept connections from any host. In a later step, this is changed to accept connections only from the Cloud SQL replica. |
Export data to a Google Cloud bucket
You can populate a Cloud SQL replica with data from a Google Cloud bucket. These conditions apply:
- You must use the `mysqldump` utility bundled with MySQL.
- While `mysqldump` is running, do not perform any DDL operations on the source database server. Doing so could cause inconsistencies in the export file.
To upload your data to a bucket, follow these steps:
- In Google Cloud, create a bucket.
- Open a terminal on your source database server and run the command below.
mysqldump
mysqldump \
--host=[SOURCE_HOST] \
--port=[SOURCE_PORT] \
--user=[USERNAME] \
--password \
[DATABASE_LIST] \
--hex-blob \
[SOURCE_DATA] \
--no-autocommit \
--default-character-set=utf8mb4 \
--single-transaction \
[GTID_PURGED] \
[ADD_DROP_TABLE] \
[ROUTINES] \
[COMPRESS] \
[GZIP] \
| gsutil cp - gs://[BUCKET]/[DUMP_FILENAME]
example
mysqldump \
--host=192.0.2.1 \
--port=3306 \
--user=replicationUser \
--password \
--databases guestbook journal \
--hex-blob \
--master-data=1 \
--no-autocommit \
--default-character-set=utf8mb4 \
--single-transaction \
--compress \
| gzip \
| gsutil cp - gs://replica-bucket/source-database.sql.gz
Property | Value |
---|---|
[SOURCE_HOST] | The IPv4 or DNS address of the source database server. |
[SOURCE_PORT] | The port for the source database server. If the source database server is hosted on Cloud SQL, this is `3306`. |
[USERNAME] | The replication user account, or a user account on the source database server that has database read permissions. |
[DATABASE_LIST] | Space-separated list of all databases on the source database server, except for the system databases (`sys`, `mysql`, `performance_schema`, and `information_schema`). Use the `SHOW DATABASES` MySQL command to list your databases. |
[SOURCE_DATA] | If your source database server does not support GTID, and you have permission to access the global read lock on it, use `--master-data=1`; otherwise, omit this property. |
[GTID_PURGED] | If your source database server supports GTID, use `--set-gtid-purged=on`; otherwise, omit this property. |
[ADD_DROP_TABLE] | To add a `DROP TABLE` statement before each `CREATE TABLE` statement, include `--add-drop-table`. |
[ROUTINES] | To include stored routines, such as procedures and functions, in the output for the dumped databases, include `--routines`. |
[COMPRESS] | To compress all information sent between the Cloud SQL replica and the source database server, use `--compress`. |
[GZIP] | To compress the dump file further, pipe the output through `gzip`. If your database contains data that does not compress well, such as binary incompressible data or JPG images, omit this. |
[BUCKET] | The name of the bucket, created in Step 1, that will contain the dump file. |
[DUMP_FILENAME] | A file with this name is created in your bucket; it contains the contents of the databases on your source database server. |
Set up a source representation instance
The source representation instance references the source database server. It only contains the request data from the source database server. In this step, you'll create the request data and use it in a cURL command that creates the source representation instance in Cloud SQL.
Create the request data
The request data contains basic information about your source database server in JSON format. The request data can be configured for a Cloud SQL replica on a public or private network and should contain this information:
source.json
{
"name": "[SOURCE_NAME]",
"region": "[REGION]",
"databaseVersion": "[MYSQL_VERSION]",
"onPremisesConfiguration": {
"hostPort": "[SOURCE_HOST]",
"username": "[USERNAME]",
"password": "[PASSWORD]",
"dumpFilePath": "gs://[BUCKET]/[DUMP_FILE]",
"caCertificate": "[SOURCE_CERT]",
"clientCertificate": "[CLIENT_CERT]",
"clientKey": "[CLIENT_KEY]"
}
}
managed dump example
// example of source.json for source database server that
// - initiates replication from a Cloud SQL managed dump
// - does not use SSL/TLS
{
"name": "cloudsql-source-instance",
"region": "us-central1",
"databaseVersion": "MYSQL_5_7",
"onPremisesConfiguration": {
"hostPort": "192.0.2.0:3306",
"username": "replicationUser",
"password": "486#@%*@"
}
}
dump file example
// example of source.json for source database server that
// - initiates replication from a file
// - uses SSL/TLS
{
"name": "cloudsql-source-instance",
"region": "us-central1",
"databaseVersion": "MYSQL_5_7",
"onPremisesConfiguration": {
"hostPort": "192.0.2.0:3306",
"username": "replicationUser",
"password": "486#@%*@",
"dumpFilePath": "gs://replica-bucket/source-database.sql.gz",
"caCertificate": "content of your certificate",
"clientCertificate": "content of your certificate",
"clientKey": "content of your client key"
}
}
Optional Fields | Notes |
---|---|
dumpFilePath | Include only if you are setting up replication with a dump file that exists in a Cloud Storage bucket. |
caCertificate | Include only if SSL/TLS is used on the source database server. |
clientCertificate | Include only if SSL/TLS is used on the source database server. |
clientKey | Include only if SSL/TLS is used on the source database server. |
Property | Value |
---|---|
[SOURCE_NAME] | The name of the source representation instance to create. |
[REGION] | The region where you want the source representation instance to reside. |
[MYSQL_VERSION] | The MySQL version running on your source database server. The choices are `MYSQL_5_5`, `MYSQL_5_6`, `MYSQL_5_7`, or `MYSQL_8_0`. |
[SOURCE_HOST] | The IPv4 address and port for the source database server, or the DNS address for the source database server. If you use a DNS address, it can contain up to 60 characters. If the source database server is hosted on Cloud SQL, the port is `3306`. |
[USERNAME] | The replication user account on the source database server. |
[PASSWORD] | The password for the replication user account. |
[BUCKET] | The name of the bucket that contains the dump file. |
[DUMP_FILE] | A file in the bucket that contains the data from the source database server. |
[SOURCE_CERT] | The CA certificate on the source database server. |
[CLIENT_CERT] | The client certificate on the source database server. Required only for server-client authentication. |
[CLIENT_KEY] | The private key file for the client certificate on the source database server. Required only for server-client authentication. |
Create a source representation instance
Before you complete this step, you should create a JSON file that contains your source request data. Then, open a terminal and use these commands to create the source representation instance in Cloud SQL:
cURL
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data @[JSON_PATH] \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/[PROJECT_ID]/instances
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data @./source.json \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances
Property | Value |
---|---|
[PROJECT_ID] | The ID of your project in Google Cloud. |
[JSON_PATH] | The path to the JSON file that contains the request data for the source database server. |
Set up a Cloud SQL replica
The Cloud SQL replica will eventually contain the data from the source database server. In this step, you'll create the request data and use it in a cURL command that creates the Cloud SQL replica in Cloud SQL.
Create the request data
The request data contains basic information about your source database server and Cloud SQL replica in JSON format. The request data can be configured for a Cloud SQL replica on a public or private network and should contain this information:
replica.json
{
"settings": {
"tier": "[TIER]",
"dataDiskSizeGb": "[DISK_SIZE]",
"ipConfiguration": {
"privateNetwork": "projects/[PROJECT_ID]/global/networks/[NETWORK_NAME]"
}
},
"masterInstanceName": "[SOURCE_NAME]",
"region": "[SOURCE_REGION]",
"databaseVersion": "[MYSQL_VERSION]",
"name": "[REPLICA_NAME]"
}
example
{
"settings": {
"tier": "db-custom-4-15360",
"dataDiskSizeGb": "100"
},
"masterInstanceName": "source-instance",
"region": "us-central1",
"databaseVersion": "MYSQL_5_7",
"name": "replica-instance"
}
Optional Fields | Notes |
---|---|
ipConfiguration | Include only if the Cloud SQL replica is on a private network. |
Property | Value |
---|---|
[TIER] | The type of machine to host your replica instance. If you don't know which machine type to use, start with `db-custom-2-7680`. You can change its size and other supported values later if needed. |
[DISK_SIZE] | The storage size for the Cloud SQL replica, in GB. |
[SOURCE_NAME] | The name of the source representation instance. |
[SOURCE_REGION] | The region assigned to the source representation instance. |
[MYSQL_VERSION] | The MySQL version to use with the Cloud SQL replica. The choices are `MYSQL_5_6`, `MYSQL_5_7`, or `MYSQL_8_0`. This can match the MySQL version running on your source database server, or it can be at most one version higher. For example, if you are using MySQL 5.5 on your source database server, you can use MySQL 5.6 with the Cloud SQL replica. |
[REPLICA_NAME] | The name of the Cloud SQL replica to create. |
[PROJECT_ID] | The ID of your project in Google Cloud. |
[NETWORK_NAME] | The name of the private network to use with the Cloud SQL replica. |
Create the Cloud SQL replica
Before you complete this step, you should create a JSON file that contains your replica request data. Then, open a terminal and use these commands to create a Cloud SQL replica in Cloud SQL:
cURL
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data @[JSON_PATH] \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/[PROJECT_ID]/instances
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data @./replica.json \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances
Property | Value |
---|---|
[PROJECT_ID] | The ID of your project in Google Cloud. |
[JSON_PATH] | The path to the JSON file that contains the request data for the Cloud SQL replica. |
What you should see when you finish
To ensure your instances were set up correctly, go to the Cloud SQL Instances page. You should see your source representation instance and Cloud SQL replica. They will look similar to this:
Instance ID | Type | Public IP |
---|---|---|
source-representation-instance | MySQL external primary | 10.68.48.3:3306 |
replica-instance | MySQL read replica | 34.66.48.59 |
Update permissions for the replication user
The replication user on the source database server is configured to accept connections from any host (`%`). You should update this user account so that it can be used only with the Cloud SQL replica.
Open a terminal on the source database server and enter these commands:
MySQL
UPDATE mysql.user
SET Host='[NEW_HOST]' WHERE Host='[OLD_HOST]' AND User='[USERNAME]';
FLUSH PRIVILEGES;
example
UPDATE mysql.user
SET Host='192.0.2.0' WHERE Host='%' AND User='replicationUser';
FLUSH PRIVILEGES;
Property | Value |
---|---|
[USERNAME] | The replication user account on the source database server. |
[NEW_HOST] | The outgoing IP address of the Cloud SQL replica. |
[OLD_HOST] | The current value assigned to `Host` that you want to change. |
Allow incoming connections on the source database server
The Cloud SQL replica needs to connect to the source database server for replication to succeed. You must configure the network firewall for your source database server to accept connections from the Cloud SQL replica's outgoing IP address if all of the following apply:
- The source database server is behind a firewall or some other network restriction.
- Your Cloud SQL replica is using a public IP.
For instructions, see Adding an authorized address or address range. To connect to the Cloud SQL replica, you use the replica's primary IP address. This IP address is displayed in the Cloud Console.
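If your source database server is itself a Cloud SQL instance, one way to authorize the replica's outgoing address is `gcloud sql instances patch`; the instance name and IP address below are hypothetical placeholders:

```shell
SOURCE_INSTANCE="source-instance"     # hypothetical source instance name
REPLICA_OUTGOING_IP="34.66.48.59"     # the replica's outgoing IP address

# Commented out; this call replaces the instance's authorized networks.
#   gcloud sql instances patch "$SOURCE_INSTANCE" \
#     --authorized-networks="$REPLICA_OUTGOING_IP"
echo "authorizing $REPLICA_OUTGOING_IP on $SOURCE_INSTANCE"
```

For a self-managed server, the equivalent step is a firewall rule allowing inbound MySQL traffic (typically port 3306) from that address.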
Verify your replication settings
After your setup is complete, ensure that the Cloud SQL replica can replicate from the source database server.
First, ensure that your external sync settings are correct. To do this, use the commands below to verify:
- Connectivity between the Cloud SQL replica and source database server.
- Replication user privileges.
- Version compatibility.
If you are performing an online sync, the commands below will also check that:
- The Cloud SQL replica is not already replicating.
- Binlogs are enabled on the source database server.
- GTID is enabled if you are trying to do an external sync from an RDS source database server and are using a Google Cloud bucket.
Open a terminal and enter these commands to verify external sync settings are correct:
cURL
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "[SYNC_MODE]",
"skipVerification": "[SKIP_VERIFICATION]"
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/[PROJECT_ID]/instances/[REPLICA_INSTANCE]/verifyExternalSyncSettings
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "online",
"skipVerification": false
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances/replica-instance/verifyExternalSyncSettings
Property | Value |
---|---|
[SYNC_MODE] | This can be `online` or `offline`. If `online` is used, `verifyExternalSyncSettings` verifies that you can keep the Cloud SQL replica and the source database server in sync after replication is set up. |
[SKIP_VERIFICATION] | Whether to skip the built-in verification step before syncing your data. Recommended only if you have already verified your replication settings. |
[PROJECT_ID] | The ID of your project in Google Cloud. |
[REPLICA_INSTANCE] | The ID of your Cloud SQL replica. |
Review your replication logs
When you verify your replication settings, logs are produced. You can view these logs by following these steps:
- Go to the Logs Viewer in the Google Cloud Console.
- Select the Cloud SQL replica from the Instance dropdown.
- Select the `replication-setup.log` log file.

If the Cloud SQL replica is unable to connect to the source database server, confirm the following:
- Any firewall on the source database server is configured to allow connections from the Cloud SQL replica's outgoing IP address.
- Your SSL/TLS configuration is correct.
- Your replication user, host, and password are correct.
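You can also pull recent log entries for the replica from the command line with `gcloud logging read`. A sketch, assuming a hypothetical project and replica name; the exact filter fields may need adjusting for your environment:

```shell
PROJECT_ID="MyProject"
REPLICA_INSTANCE="replica-instance"

# Commented out; reads the replica's recent log entries.
#   gcloud logging read \
#     "resource.type=cloudsql_database AND \
#      resource.labels.database_id=${PROJECT_ID}:${REPLICA_INSTANCE}" \
#     --project="$PROJECT_ID" --limit=20
echo "reading logs for ${PROJECT_ID}:${REPLICA_INSTANCE}"
```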
Replicate the source database server
After you have verified that you can replicate from the source database server, you are ready to perform the replication. Expect the replica to import about 25-50 GB per hour.
During the initial import process, do not perform any DDL operations on the source database server. Doing so could cause inconsistencies during the import. After the import process completes, the replica will use the binary logs on the source database server to catch up to the current state of the source database server.
If you do not have permission to access the global read lock on the source database server and GTID is off, as may be the case with Amazon RDS, see Steps with logs; otherwise, see Steps.
Steps
Open a terminal and enter the commands below to replicate from the source database server.
Steps with logs
- Go to the Logs Viewer, and select your Cloud SQL replica from the resource list. You should see a list of the most recent logs for your Cloud SQL replica. Ignore them for now.
- Open a terminal and enter the commands below these steps to replicate from the source database server.
Return to the Logs Viewer. When you see the log below, halt writing to the database on your source database server. In most cases, this is only required for a few seconds.
DUMP_IMPORT(START): Start importing data, please pause any write to the external primary database.
When you see the log below in your Logs Viewer, enable writing to the database on your source database server again.
DUMP_IMPORT(SYNC): Consistent state on primary and replica. Writes to the external primary may resume.
Here are the commands to replicate from the source database server:
cURL
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "[SYNC_MODE]",
"skipVerification": "[SKIP_VERIFICATION]"
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/[PROJECT_ID]/instances/[REPLICA_INSTANCE]/startExternalSync
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "online",
"skipVerification": false
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances/replica-instance/startExternalSync
Property | Value |
---|---|
[SYNC_MODE] | This can be `online` or `offline`. If `online`, future changes on the source database server are reflected on the Cloud SQL replica. If `offline`, future changes are not reflected on the Cloud SQL replica. |
[SKIP_VERIFICATION] | Whether to skip the built-in verification step before syncing your data. Recommended only if you have already verified your replication settings. |
[PROJECT_ID] | The ID of your project in Google Cloud. |
[REPLICA_INSTANCE] | The ID of your Cloud SQL replica. |
Confirm replication status
When the Cloud SQL replica finishes replicating, it connects to the source database server and applies all the updates that were made after the export was taken. Follow these steps to confirm your replication status.
It's important to check the replication status before promoting the replica to a standalone instance. If the replication process is not complete and successful, a promoted replica doesn't have all the changes from your source database server.
If replication delay is not trending toward 0, take steps to address it.
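One way to watch the replica catch up is to poll its replication status from a MySQL client; `Seconds_Behind_Master` should trend toward 0 and both replication threads should report `Yes`. A sketch with a hypothetical replica IP and user:

```shell
# Standard MySQL replication-status statement.
STATUS_QUERY="SHOW SLAVE STATUS"

# Run against the Cloud SQL replica (hypothetical IP and user):
#   mysql --host=34.66.48.59 --user=root -p -e "$STATUS_QUERY\G" \
#     | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
echo "$STATUS_QUERY"
```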
If GTID is turned off, point-in-time recovery (PITR) and binary logs are disabled on the promoted replica. To enable GTID, enable PITR. This requires restarting the instance.
Clean up your storage
If you replicated from a file in a bucket, you can remove this file and bucket. See the Cloud Storage documentation for Deleting Objects and Deleting Buckets.
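With `gsutil`, the cleanup is two commands; the bucket and file names below reuse the earlier example and are hypothetical:

```shell
BUCKET="replica-bucket"
DUMP_FILENAME="source-database.sql.gz"
DUMP_URI="gs://${BUCKET}/${DUMP_FILENAME}"

# Commented out; these permanently delete the object and the bucket.
#   gsutil rm "$DUMP_URI"        # delete the dump file
#   gsutil rb "gs://${BUCKET}"   # remove the now-empty bucket
echo "$DUMP_URI"
```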
Supplemental procedures
These procedures may be helpful in setting up replication.
Configure binary log retention
You should configure your source database server to retain binary logs for at least 24 hours.
MySQL Community
If your source database server is using MySQL Community Edition, you can use these MySQL commands to update your binary log retention settings.
MySQL 5.5-5.7: `expire_logs_days`
MySQL 8.0: `expire_logs_days`, `binlog_expire_logs_seconds`
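For example, a sketch of configuring roughly one week of retention; run the relevant statement in a MySQL client on the source database server (the user shown is a placeholder):

```shell
# MySQL 5.5-5.7: retention is expressed in days.
RETENTION_57="SET GLOBAL expire_logs_days = 7"

# MySQL 8.0: binlog_expire_logs_seconds takes precedence; 7 days in seconds:
RETENTION_SECONDS=$((7 * 24 * 60 * 60))
RETENTION_80="SET GLOBAL binlog_expire_logs_seconds = ${RETENTION_SECONDS}"

#   mysql --user=admin -p -e "$RETENTION_57"   # or "$RETENTION_80" on 8.0
echo "$RETENTION_80"
```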
Cloud MySQL
If your source database server is using Cloud SQL MySQL, your binary logs are retained for seven days. This setting can't be changed.
Amazon RDS
If your source database server is an Amazon RDS instance, you can use this command to update your binary log retention settings:
call mysql.rds_set_configuration('binlog retention hours', 24);
Add users to the Cloud SQL replica
MySQL user accounts cannot be imported from the source database server, but you can create them on a Cloud SQL replica. Do this before you replicate from the source database server.
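One way to create such an account is `gcloud sql users create`; the user name, password, and instance name below are hypothetical placeholders:

```shell
REPLICA_NAME="replica-instance"   # hypothetical Cloud SQL replica name

# Commented out; creates a MySQL user on the replica.
#   gcloud sql users create appuser \
#     --instance="$REPLICA_NAME" \
#     --password="CHANGE_ME" \
#     --host=%
echo "creating appuser on $REPLICA_NAME"
```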
Get the Cloud SQL replica's outgoing IP address
You can use the outgoing IP address of the Cloud SQL replica to create a secure connection between the source database server and the Cloud SQL replica. There are a few ways to get this outgoing IP address:
gcloud
gcloud sql instances describe [REPLICA_NAME] --format="default(ipAddresses)"
Cloud SQL
- Open the Cloud SQL Instances page.
- Next to the Cloud SQL replica's public IP address, hover over the More info tooltip and retrieve the outgoing IP address. Note that the outgoing IP address is not the IP address displayed in the main listing for the replica in the Cloud Console.
Update the source representation or Cloud SQL replica
After you set up a source representation instance or Cloud SQL replica, you might need to update its configuration. For example, the following scenarios require an update:
- The host, port, or IP of the source database server changes.
- You want to use a different MySQL replication user.
- The password of the MySQL replication user changes.
- The SSL certificates used to securely connect to the source database server change.
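Updates of this kind go to the same Sql Admin API, using PATCH on the existing instance rather than POST; a sketch with hypothetical project and instance names, sending only the field that changed (here, a new replication password):

```shell
PROJECT_ID="MyProject"
SOURCE_NAME="cloudsql-source-instance"

# Minimal PATCH body: include only the fields being changed.
cat > patch.json <<'EOF'
{
  "onPremisesConfiguration": {
    "username": "replicationUser",
    "password": "newPassword"
  }
}
EOF

# Commented out; applies the update.
#   ACCESS_TOKEN="$(gcloud auth print-access-token)"
#   curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
#     --header 'Content-Type: application/json' \
#     --data @patch.json \
#     -X PATCH \
#     "https://sqladmin.googleapis.com/sql/v1beta4/projects/${PROJECT_ID}/instances/${SOURCE_NAME}"
echo "PATCH projects/${PROJECT_ID}/instances/${SOURCE_NAME}"
```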
Troubleshooting
Click the links in the table for details:
For this problem... | The issue might be... | Try this... |
---|---|---|
`Specified key was too long; max key length is 767 bytes`. | The external primary instance may have the `innodb_large_prefix` variable set. | Set the `innodb_large_prefix` flag to `ON` when creating the replica, or update the existing replica with the flag. |
`Table definition has changed`. | There were data definition language (DDL) changes during the dump process. | Avoid DDL changes during the dump process. |
Error message: `Access denied; you need (at least one of) the SUPER privilege(s) for this operation`. | There could be a view, function, or procedure in the source database referencing `DEFINER` in a way that's not supported by Cloud SQL. | See more information about `DEFINER` usage in Cloud SQL. |
Error message: `ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost'`. | There is a `DEFINER` in the source database that doesn't exist in the replica. | See more information about `DEFINER` usage in Cloud SQL. |
`Lost connection to MySQL server during query when dumping table`. | The source may have become unavailable, or the dump contained packets that were too large. | Make sure the external primary is available to connect, or use `mysqldump` with the `max_allowed_packet` option. |
`Got packet bigger than 'max_allowed_packet' bytes when dumping table`. | The packet was larger than allowed by settings. | Use `mysqldump` with the `max_allowed_packet` option. |
Initial data migration was successful, but no data is being replicated. | There may be conflicting replication flags. | Check these flag settings. |
Initial data migration was successful, but data replication stops working after a while. | There could be many causes. | Try these suggestions. |
`mysqld check failed: data disk is full`. | The data disk of the replica instance is full. | Increase the disk size of the replica instance. You can manually increase the disk size or enable automatic storage increase. |
Specified key was too long; max key length is 767 bytes
You see the error Specified key was too long; max key length is 767 bytes.
The issue might be
The external primary instance may have the variable innodb_large_prefix set, which allows index key prefixes longer than 767 bytes. The default value is OFF for MySQL 5.6.
Things to try
Set the innodb_large_prefix flag to ON when creating the replica, or update the existing replica with the flag.
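For example, you could set the flag on an existing replica with gcloud. The instance name my-replica is a placeholder; also note that --database-flags replaces any flags already set on the instance, so include every flag you need in the same command:

```shell
# Set innodb_large_prefix on an existing Cloud SQL replica.
# "my-replica" is a placeholder instance name.
# Caution: --database-flags overwrites all previously set flags,
# so list any other flags you rely on in the same invocation.
gcloud sql instances patch my-replica \
    --database-flags=innodb_large_prefix=on
```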
Table definition has changed
You see the error Table definition has changed.
The issue might be
There were DDL changes during the dump process.
Things to try
Don't modify tables or perform any other DDL changes during the dump process.
Access denied; you need (at least one of) the SUPER privilege(s) for this operation
You see the error Access denied; you need (at least one of) the SUPER privilege(s) for this operation.
The issue might be
A view, function, or procedure in the source database may have a DEFINER that refers to a superuser at localhost (such as root@localhost). This is not supported by Cloud SQL.
Things to try
Update the definers in the external databases, for example from root@localhost to root@% or to a non-superuser. See Stored Object Access Control for more information.
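One way to locate the affected objects before changing their definers is to query information_schema on the source server. This is a sketch that assumes a root@localhost definer; the host and user name are placeholders:

```shell
# List views, routines, and triggers whose DEFINER is root@localhost.
# SOURCE_IP and the admin user are placeholders for your environment.
mysql --host=SOURCE_IP --user=admin -p --execute="
  SELECT table_schema, table_name, definer
    FROM information_schema.views WHERE definer = 'root@localhost';
  SELECT routine_schema, routine_name, definer
    FROM information_schema.routines WHERE definer = 'root@localhost';
  SELECT trigger_schema, trigger_name, definer
    FROM information_schema.triggers WHERE definer = 'root@localhost';"
```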
ERROR 1045 (28000) at line xxx: Access denied for user 'cloudsqlimport'@'localhost'
You see the error ERROR 1045 (28000) at line xxx: Access denied for user 'cloudsqlimport'@'localhost'.
The issue might be
A user referenced in a DEFINER clause in the source database does not exist on the replica database, and that user is cross-referenced in the object definitions in the source database.
Things to try
Users are not migrated with the data. Create the source database users in the replica database before beginning replication.
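As a sketch, creating a missing user on the replica might look like the following; the user name, password, host, and database are all placeholders for your own values:

```shell
# Create the user referenced by the DEFINER clause on the replica
# before importing. All names and credentials below are placeholders.
mysql --host=REPLICA_IP --user=root -p --execute="
  CREATE USER 'appuser'@'%' IDENTIFIED BY 'a-strong-password';
  GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'%';"
```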
Lost connection to MySQL server during query when dumping table
You see the error Lost connection to MySQL server during query when dumping table.
The issue might be
The source may have become unreachable from the replica, or the source database may have tables with large blobs or long strings that require a larger max_allowed_packet setting on the source database.
Things to try
Restart the source database server if necessary, and make sure the external primary is available to connect. You can also use mysqldump with the max_allowed_packet option to dump the data and migrate with the dump file.
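A dump command along these lines raises the client-side packet limit; the host, user, and database names are placeholders:

```shell
# Dump with a 1 GiB client packet limit to handle rows with large blobs.
# SOURCE_IP, repl_user, and mydb are placeholders.
mysqldump --host=SOURCE_IP --user=repl_user -p \
    --max_allowed_packet=1G \
    --single-transaction \
    --databases mydb > mydb-dump.sql
```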
Got packet bigger than 'max_allowed_packet' bytes when dumping table
You see the error Got packet bigger than 'max_allowed_packet' bytes when dumping table.
The issue might be
The packet was larger than allowed by settings.
Things to try
Use mysqldump with the max_allowed_packet option to dump the data and migrate with the dump file.
No data is being replicated
Initial data migration was successful, but no data is being replicated.
The issue might be
Replication flags defined on your source database may be filtering out some or all database changes, so they are not replicated over.
Things to try
Make sure that replication flags such as binlog-do-db, binlog-ignore-db, replicate-do-db, or replicate-ignore-db are not set in a conflicting way. Run the command SHOW MASTER STATUS on the primary instance to see the current settings.
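For example, on the source server the Binlog_Do_DB and Binlog_Ignore_DB columns in the SHOW MASTER STATUS output show any binary log filters in effect; the connection details below are placeholders:

```shell
# Inspect binary log filters on the external primary.
# SOURCE_IP and repl_user are placeholders.
mysql --host=SOURCE_IP --user=repl_user -p \
    --execute="SHOW MASTER STATUS\G"
```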
Initial data migration was successful but data replication stops working after a while
The initial data migration was successful, but data replication stops working after a while.
The issue might be
There can be many root causes for this issue.
Things to try
Check the replication metrics for your replica instance in the Cloud Monitoring UI.
The errors from the MySQL IO thread or SQL thread can be found in Cloud Logging in the mysql.err log files.
The error can also be found by connecting to the replica instance. Run the command SHOW SLAVE STATUS, and check for these fields in the output:
- Slave_IO_Running
- Slave_SQL_Running
- Last_IO_Error
- Last_SQL_Error
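For example, you can check just those fields from the command line; the replica address and user are placeholders:

```shell
# Show only the replication health fields from SHOW SLAVE STATUS.
# REPLICA_IP is a placeholder for your replica's address.
mysql --host=REPLICA_IP --user=root -p \
    --execute="SHOW SLAVE STATUS\G" \
  | grep -E 'Slave_(IO|SQL)_Running|Last_(IO|SQL)_Error'
```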
mysqld check failed: data disk is full
You see the error mysqld check failed: data disk is full.
The issue might be
If you see this error during a DRAIN operation, the data disk of the replica instance is full.
Things to try
Increase the disk size of the replica instance.
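For example, with gcloud you might grow the disk manually or enable automatic storage increase; the instance name my-replica and the size value are placeholders:

```shell
# Manually grow the replica's disk (size is in GB and can only increase).
gcloud sql instances patch my-replica --storage-size=100

# Or let Cloud SQL grow the disk automatically as it nears capacity.
gcloud sql instances patch my-replica --storage-auto-increase
```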
What's next
- Learn about updating an instance.
- Learn about managing replicas.
- Learn about promoting your Cloud SQL replica to a standalone instance, which stops replication from the source database server.