This page describes how to troubleshoot and fix replication lag for Cloud SQL read replicas.
Overview
Cloud SQL read replicas use MySQL row-based replication with global transaction identifiers (GTIDs). Changes are written to the binary log of the primary instance and sent to the replica, where they are received and then applied to the database. Replication lag can happen in a few scenarios, such as:
- The primary instance can't send the changes fast enough to the replica.
- The replica can't receive the changes quickly enough.
- The replica can't apply the changes quickly enough.
Use the network_lag metric to monitor the first two scenarios, in which the primary instance can't send changes fast enough or the replica can't receive changes quickly enough.
The total lag is observed with the replica_lag metric. The difference between replica_lag and network_lag can indicate the third scenario, in which the replica can't apply replication changes fast enough. These metrics are described in the Monitor replication lag section below.
Faster replica configuration
There are two ways to make a MySQL replica apply changes faster. You can configure your replicas with the following options:
- Parallel replication
- High performance flushing
Parallel replication
Parallel replication can reduce replication lag by configuring the replica to use multiple threads that apply changes in parallel. For information about using parallel replication, see Configuring parallel replication.
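To check whether parallel replication is active, you can inspect the related variables from a client session on the replica. A minimal sketch, assuming the pre-8.0.26 variable names (newer MySQL versions also expose them under replica_parallel_* aliases):

-- Show the parallel replication settings on the replica.
-- slave_parallel_workers > 0 means multiple applier threads are in use;
-- slave_parallel_type controls how transactions are grouped for parallel apply.
SHOW VARIABLES LIKE 'slave_parallel%';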
High performance flushing
By default, Cloud SQL for MySQL flushes the redo logs to disk after each transaction. High performance flushing reduces the frequency with which the redo logs are flushed to the disk to once per second, which improves write performance.
Set the innodb_flush_log_at_trx_commit flag on the read replica to 2. You must also set the sync_binlog flag to a higher value for the innodb_flush_log_at_trx_commit flag to be effective. See Tips for working with flags for more information about this flag.
When the innodb_flush_log_at_trx_commit flag is set on the read replica and Cloud SQL detects that a crash might have occurred, Cloud SQL automatically recreates the replica.
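The flags themselves are set through the Cloud SQL flags interface (see Tips for working with flags), but you can verify the values the replica is actually running with from a client session. A minimal sketch:

-- Verify the flush-related settings on the read replica.
-- innodb_flush_log_at_trx_commit = 2 flushes the redo logs roughly once per second;
-- sync_binlog controls how often the binary log is synchronized to disk.
SHOW VARIABLES WHERE Variable_name IN
  ('innodb_flush_log_at_trx_commit', 'sync_binlog');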
Optimize queries and schema
This section suggests some common query and schema optimizations you can make to improve replication performance.
Query isolation level in the read replica
The REPEATABLE READ and SERIALIZABLE transaction isolation levels acquire locks that might block the replication changes. Consider reducing the isolation level for your queries in the replica; the READ COMMITTED transaction isolation level might perform better.
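For example, to lower the isolation level for a single session on the replica (a sketch; the orders table is hypothetical):

-- Queries in this session now hold fewer locks that could
-- block the replication SQL thread.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT COUNT(*) FROM orders WHERE status = 'shipped';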
Long-running transactions in the primary database
If a large number of rows are updated in a single transaction, it causes a sudden spike in the number of changes that must be applied on the primary instance and then sent to the replica. This also applies to single-statement updates or deletes that affect many rows at once. Changes are sent to the replica after they are committed. If the query load on the replica is also high, applying this spike of changes increases the possibility of lock contention in the replica, leading to replication lag.
Consider breaking large transactions into multiple smaller transactions.
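For example, a large cleanup can run as a series of bounded statements instead of one huge transaction. A sketch, assuming a hypothetical events table with an indexed created_at column; rerun the statement until it affects zero rows:

-- Instead of: DELETE FROM events WHERE created_at < '2023-01-01';
-- delete in batches so each commit ships a small set of row changes
-- to the replica.
DELETE FROM events
WHERE created_at < '2023-01-01'
LIMIT 10000;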
Missing primary keys
Cloud SQL read replicas use row-based replication, which performs poorly if the MySQL tables that are replicated don't have primary keys. We recommend that all of the replicated tables have primary keys.
For MySQL 8 or later, we recommend that you set the sql_require_primary_key flag to ON to require tables in your database to have primary keys.
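You can find tables that are missing a primary key by querying information_schema. A minimal sketch:

-- List user tables that have no PRIMARY KEY constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.table_name IS NULL
  AND t.table_schema NOT IN
      ('mysql', 'information_schema', 'performance_schema', 'sys');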
Exclusive locks due to DDL
Data definition language (DDL) commands, such as ALTER TABLE and CREATE INDEX, can cause replication lag in the replica due to exclusive locks. To avoid lock contention, consider scheduling DDL execution during times when the query load on the replicas is lower.
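Where the operation supports it, you can also request MySQL's online DDL behavior explicitly, so the statement fails with an error instead of silently taking a blocking lock. A sketch with a hypothetical table and index name:

-- Request an in-place, non-locking index build; MySQL returns an error
-- if the operation cannot run without an exclusive lock.
CREATE INDEX idx_orders_customer
    ON orders (customer_id)
    ALGORITHM=INPLACE LOCK=NONE;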
Overloaded replica
If a read replica is receiving too many queries, replication could be blocked. Consider splitting the reads among multiple replicas to reduce the load on each one.
To avoid query spikes, consider throttling replica read queries in your application logic or in a proxy layer if you use one.
If there are spikes of activity on the primary instance, consider spreading out updates.
Monolithic primary database
Consider sharding the primary database vertically (or horizontally) to prevent one or more lagging tables from holding back all the other tables.
Monitor replication lag
You can use the replica_lag and network_lag metrics to monitor replication lag and identify whether the cause of the lag is in the primary database, the network, or the replica.
Metric | Description |
---|---|
Replication lag (cloudsql.googleapis.com) | The number of seconds that the replica's state is lagging behind the state of the primary instance. This is the difference between the current time and the original timestamp at which the primary database committed the transaction that is currently being applied on the replica. In particular, writes might be counted as lagging even if they have been received by the replica, if the replica hasn't yet applied the write to the database. This metric reports the value of Seconds_Behind_Master. |
Last I/O thread error number (cloudsql.googleapis.com) | Indicates the last error that caused the I/O thread to fail. If this is non-zero, replication is broken. This is rare, but it might happen. Check the MySQL documentation to understand what the error code indicates. For example, binlog files in the primary instance might have been deleted before the replica received them. Cloud SQL usually recreates the replica automatically if replication is broken. This metric reports the value of Last_IO_Errno. |
Last SQL thread error number (cloudsql.googleapis.com) | Indicates the last error that caused the SQL thread to fail. If this is non-zero, replication is broken. This is rare, but it might happen. Check the MySQL documentation to understand what the error code indicates. Cloud SQL usually recreates the replica automatically if replication is broken. This metric reports the value of Last_SQL_Errno. |
Network lag (cloudsql.googleapis.com) | The length of time, in seconds, that it takes from writing the binlog in the primary database to reaching the I/O thread in the replica. If network_lag is zero or negligible but replica_lag is high, it indicates that the SQL thread is not able to apply replication changes fast enough. |
Verify replication
To verify that replication is working, run the following statement against the replica:
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: xx.xxx.xxx.xxx
Master_User: cloudsqlreplica
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.199927
Read_Master_Log_Pos: 83711956
Relay_Log_File: relay-log.000025
Relay_Log_Pos: 24214376
Relay_Master_Log_File: mysql-bin.199898
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 24214163
Relay_Log_Space: 3128686571
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: Yes
Master_SSL_CA_File: master_server_ca.pem
Master_SSL_CA_Path: /mysql/datadir
Master_SSL_Cert: replica_cert.pem
Master_SSL_Cipher:
Master_SSL_Key: replica_pkey.pem
Seconds_Behind_Master: 2627
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 321071839
Master_UUID: 437d04e9-8456-11e8-b13d-42010a80027b
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: System lock
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 437d04e9-8456-11e8-b13d-42010a80027b:52111095710-52120776390
Executed_Gtid_Set: 437d04e9-8456-11e8-b13d-42010a80027b:1-52113039508
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
If replication is happening, the Slave_IO_State field (the first field in the output) shows Waiting for master to send event or a similar message, and the Last_IO_Error field is empty.
If replication is not happening, the Slave_IO_State field shows the status Connecting to master, and the Last_IO_Error field shows the status error connecting to master cloudsqlreplica@x.x.x.x:3306.
According to the MySQL documentation, a few other interesting fields that relate to replication lag are:
Field | Description |
---|---|
Master_Log_File | The name of the source binary log file that the I/O thread is currently reading from. |
Read_Master_Log_Pos | The position in the current source binary log file that the I/O thread has read up to. |
Relay_Log_File | The name of the relay log file from which the SQL thread is currently reading and executing. |
Relay_Log_Pos | The position in the current relay log file up to which the SQL thread has read and executed. |
Relay_Master_Log_File | The name of the source binary log file containing the most recent event executed by the SQL thread. |
In the above example, Relay_Master_Log_File has the value mysql-bin.199898 and Master_Log_File has the value mysql-bin.199927. The numeric suffix 199898 is less than 199927, which means that even though the replica has received the newer mysql-bin.199927 log file, it's still applying the older mysql-bin.199898. In this case, the SQL thread is lagging in the replica.
You can also connect to the primary database and execute:
SHOW MASTER STATUS;
This command shows you which binlog file is currently being written in the primary database. If the primary database's binary log file is newer than the Master_Log_File in the replica, the I/O thread is lagging: the replica is still reading an older binary log file from the primary database.
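The output looks similar to the following; the values here are hypothetical, continuing the example above:

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
File: mysql-bin.199930
Position: 107635902
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 437d04e9-8456-11e8-b13d-42010a80027b:1-52120776390
1 row in set (0.00 sec)

In this hypothetical output, the primary is writing mysql-bin.199930 while the replica's Master_Log_File is mysql-bin.199927, so the I/O thread is slightly behind.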
When the I/O thread is lagging, the network_lag metric is also high. When the SQL thread is lagging but the I/O thread is not, the network_lag metric isn't as high, but replica_lag is high.
The previous commands let you observe lag details while the lag is happening, but the network_lag and replica_lag metrics provide a way to look into past occurrences of the lag.