Replication lag

This page describes how to troubleshoot and fix replication lag for Cloud SQL read replicas.

Overview

Cloud SQL read replicas use PostgreSQL streaming replication. Changes are written to the write-ahead log (WAL) on the primary instance. The WAL sender sends the WAL records to the WAL receiver on the replica, where they are applied.

Replication lag can happen in a few scenarios, such as:

  • The primary instance can't send the changes fast enough to the replica.
  • The replica can't receive the changes quickly enough.
  • The replica can't apply the changes quickly enough.

The first two causes can be monitored with the network_lag metric; the third is observed via the replica_lag metric. A high replica_lag means that the replica can't apply replication changes fast enough. The total lag can be observed via the replica_byte_lag metric, whose replica_lag_type label breaks the lag down by replication stage. These metrics are described in the Monitor replication lag section below.

Optimize queries and schema

This section suggests some common query and schema optimizations you can make to improve replication performance.

Long-running queries in the read replica

Long-running queries on the replica might block replication for Cloud SQL. You might want to have separate replicas for online transaction processing (OLTP) and online analytical processing (OLAP) purposes, and send long-running analytical queries only to the OLAP replica.
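
To find such queries, a quick check against pg_stat_activity on the replica shows which sessions have been running longest. This is a sketch using standard PostgreSQL system views; the 80-character query preview is an arbitrary choice:

```sql
-- Run on the replica: list active sessions ordered by how long their
-- current query has been running, longest first. Long-running queries
-- here can delay WAL replay.
SELECT pid,
       now() - query_start AS duration,
       state,
       left(query, 80) AS query_preview
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY duration DESC NULLS LAST;
```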

Consider adjusting the max_standby_archive_delay and max_standby_streaming_delay flags for your replica.

If you suspect VACUUM on the primary is the culprit, and query cancellation on the replica is not acceptable, consider enabling the hot_standby_feedback flag on the replica.

Review PostgreSQL Hot Standby documentation for more information.

Exclusive locks due to DDL

Data definition language (DDL) commands, such as ALTER TABLE and CREATE INDEX, can cause replication lag in the replica due to exclusive locks. To avoid lock contention, consider scheduling DDL execution during times when the query load is lower on the replicas.
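
To see whether replayed DDL is actually canceling queries on the replica, you can check PostgreSQL's per-database recovery conflict counters (a standard system view on hot standbys):

```sql
-- Run on the replica: cumulative counts of queries canceled due to
-- recovery conflicts. confl_lock counts cancellations caused by lock
-- conflicts, such as exclusive locks taken by replayed DDL.
SELECT datname, confl_lock, confl_snapshot, confl_bufferpin, confl_deadlock
FROM pg_stat_database_conflicts;
```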

Review PostgreSQL Hot Standby documentation for more information.

Overloaded replica

If a read replica is receiving too many queries, replication could be blocked. Consider splitting the reads among multiple replicas to reduce the load on each one.

To avoid query spikes, consider throttling replica read queries in your application logic or in a proxy layer if you use one.

If there are spikes of write activity on the primary instance, consider spreading updates out over time, for example by breaking large bulk writes into smaller batches.

Monolithic primary database

Consider sharding the primary database vertically (or horizontally) to prevent one or more lagging tables from holding back all the other tables.

Monitor replication lag

You can use the replica_lag and network_lag metrics to monitor replication lag and identify whether the cause of the lag is in the primary database, the network, or the replica.

Replication lag
(cloudsql.googleapis.com/database/replication/replica_lag)

The number of seconds that the replica's state is lagging behind the state of the primary instance. This is the difference between the current time and the original timestamp at which the primary database committed the transaction that is currently being applied on the replica. In particular, a write can count as lagging even after the replica has received it, if the replica hasn't yet applied it to the database.

This metric is calculated as now() - pg_last_xact_replay_timestamp() on the replica, and is an approximation. If replication is broken, the replica doesn't know how far ahead the primary database is, and this metric doesn't reflect the total lag.
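
You can run the same calculation manually on the replica (it returns NULL if the replica hasn't replayed any transaction yet):

```sql
-- Run on the replica: approximate replication lag, matching the
-- calculation used by the replica_lag metric.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```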

Lag bytes
(cloudsql.googleapis.com/database/postgres/replication/replica_byte_lag)

The number of bytes by which the replica's state is lagging behind the state of the primary database. replica_byte_lag exports four time series, and the replica_lag_type label can indicate any of the following:

  • sent_location: Indicates how many bytes of WAL have been generated, but haven't yet been sent to the replica.
  • write_location: Write minus sent lag shows the WAL bytes in flight over the network: sent out but not yet written on the replica.
  • flush_location: Flush minus write lag shows WAL bytes written in the replica but not yet flushed in the replica.
  • replay_location: Shows total lag in bytes. Replay minus flush lag indicates replay delay.
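
The same per-stage breakdown can be inspected directly on the primary instance with pg_stat_replication. The following sketch assumes PostgreSQL 10 or later (earlier versions use pg_xlog_location_diff and *_location column names instead):

```sql
-- Run on the primary: byte lag between the primary's current WAL
-- position and each replica's sent/write/flush/replay positions.
SELECT client_addr,
       pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn)   AS sent_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), write_lsn)  AS write_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn)  AS flush_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```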

Network lag
(cloudsql.googleapis.com/database/replication/network_lag)

The amount of time, in seconds, that it takes for a commit in the primary database to reach the WAL receiver in the replica.

If network_lag is zero or negligible but replica_lag is high, the replica is receiving WAL quickly but can't apply the changes fast enough.

Verify replication

To verify that replication is working, run the following statement against the replica:

select status, last_msg_receipt_time from pg_stat_wal_receiver;

If replication is happening, you see the status streaming and a recent last_msg_receipt_time:

postgres=> select status, last_msg_receipt_time from pg_stat_wal_receiver;
  status   |     last_msg_receipt_time
-----------+-------------------------------
 streaming | 2020-01-21 20:19:51.461535+00
(1 row)

If replication is not happening, an empty result is returned:

postgres=> select status, last_msg_receipt_time from pg_stat_wal_receiver;
 status | last_msg_receipt_time
--------+-----------------------
(0 rows)