This section describes the behavior of MySQL sources when you replicate data using Datastream. When you ingest data from MySQL databases, you can use binlog-based replication or global transaction identifier (GTID)-based replication. You select your CDC method when you create a stream.
Binlog-based replication
Datastream can use binary log files to keep a record of data changes in MySQL databases. The information contained in these log files is then replicated to the destination to reproduce the changes made on the source.
The key characteristics of binlog-based replication in Datastream are:

- You can select all databases or specific databases from a given MySQL source, as well as all tables or specific tables from those databases.
- All historical data is replicated.
- All data manipulation language (DML) changes, such as inserts, updates, and deletes from the specified databases and tables, are replicated.
- Only committed changes are replicated.
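The last characteristic can be sketched with a toy event filter. This is a minimal illustration only, not Datastream's actual internals: the event dictionaries and their `type` field are invented for the example.

```python
# Toy model of committed-only replication from a binlog-style event stream.
# DML events are buffered per transaction and emitted only on COMMIT;
# a ROLLBACK discards the buffered events, so uncommitted changes never
# reach the destination.

def emit_committed_events(binlog_events):
    """Buffer DML events of the open transaction; emit them only on COMMIT."""
    pending = []   # DML events of the transaction currently open
    emitted = []   # events that would be replicated to the destination
    for event in binlog_events:
        kind = event["type"]
        if kind in ("insert", "update", "delete"):
            pending.append(event)
        elif kind == "commit":
            emitted.extend(pending)   # transaction is durable: replicate it
            pending = []
        elif kind == "rollback":
            pending = []              # uncommitted changes are never replicated
    return emitted

events = [
    {"type": "insert", "table": "t1", "row": 1},
    {"type": "commit"},
    {"type": "update", "table": "t1", "row": 1},
    {"type": "rollback"},
]
print(emit_committed_events(events))
# [{'type': 'insert', 'table': 't1', 'row': 1}]
```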
Global transaction identifier (GTID)-based replication
Datastream also supports global transaction identifier (GTID)-based replication.

A global transaction identifier (GTID) is a unique identifier created by the system and associated with each transaction committed on a MySQL source. The identifier is unique not only on the source where it originated, but also across all servers in a given replication topology. This differs from binlog-based replication, where each node in the database cluster maintains its own binary log files with its own numbering. Maintaining separate binlog files and numbering can become a problem in the event of a failure or planned downtime, because binlog continuity is broken and binlog-based replication fails.
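Why this matters for failover can be illustrated with a small sketch of GTID-set arithmetic. This is a simplified model with invented helper names and short placeholder identifiers (real GTID sets use full server UUIDs): because every committed transaction has a globally unique name, a server can determine which transactions it is missing purely by comparing GTID sets, with no binlog file names or offsets involved.

```python
# Simplified GTID-set comparison. A GTID set such as "a1:1-5:7,b2:1-3"
# names every committed transaction per originating server, so replication
# position is independent of any particular binlog file.

def parse_gtid_set(gtid_set):
    """Parse 'uuid:1-5:7,uuid2:1-3' into {uuid: set of transaction numbers}."""
    executed = {}
    for part in gtid_set.split(","):
        uuid, *ranges = part.strip().split(":")
        txns = executed.setdefault(uuid, set())
        for r in ranges:
            lo, _, hi = r.partition("-")
            txns.update(range(int(lo), int(hi or lo) + 1))
    return executed

def is_subset(needed, executed):
    """True if every transaction in `needed` is already in `executed`."""
    done = parse_gtid_set(executed)
    return all(txns <= done.get(uuid, set())
               for uuid, txns in parse_gtid_set(needed).items())

# After a failover, the new primary only needs the GTIDs the stream has
# not yet seen; binlog file names and positions never enter the comparison.
print(is_subset("a1:1-5", "a1:1-7,b2:1-3"))  # True
print(is_subset("a1:1-9", "a1:1-7"))         # False
```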
GTID-based replication supports failovers and self-managed database clusters, and continues to work irrespective of changes in the database cluster.

The key characteristics of GTID-based replication in Datastream are:

- You can select all databases or specific databases from a given MySQL source, as well as all tables or specific tables from those databases.
- All historical data is replicated.
- All data manipulation language (DML) changes, such as inserts, updates, and deletes from the specified databases and tables, are replicated.
- Only committed changes are replicated.
- Failovers are supported seamlessly.

Switch from binlog-based to GTID-based replication

If you want to update your stream and switch from binlog-based to GTID-based replication without performing a backfill, follow these steps. Note that the steps require database downtime. Similar steps might also be useful when you want to upgrade the major version of your MySQL source.

1. Make sure that all requirements for GTID-based replication are satisfied. For more information, see Configure a source MySQL database.
2. Optionally, create and run a test GTID-based stream.
3. Create a GTID-based stream. Don't start it yet.
4. Stop application traffic to the source database.
5. Pause the existing binlog-based stream.
6. Wait a few minutes to make sure that Datastream has caught up with the database. You can verify this using the metrics in the Monitoring tab on the Stream details page for your stream: the values for Data freshness and Throughput must be 0.
7. Start the GTID-based stream.
8. Resume traffic to the source database.

If performing a backfill isn't an issue, you can instead truncate your tables in BigQuery, delete the old stream, and start a new one with backfill enabled.

Versions

Datastream supports the following versions of MySQL database:

- MySQL 5.6
- MySQL 5.7
- MySQL 8.0
- MySQL 8.4 (supported only for GTID-based replication)

GTID-based replication is only supported for MySQL versions 5.7 and later.

Datastream supports the following types of MySQL database:

- Self-hosted MySQL
- Cloud SQL for MySQL (Cloud SQL for MySQL Enterprise Plus sources are supported when using GTID-based replication)
- Amazon RDS for MySQL
- Amazon Aurora MySQL
- MariaDB
- Alibaba Cloud PolarDB
- Percona Server for MySQL

Known limitations

Known limitations for using MySQL database as a source include:

- Streams are limited to 10,000 tables.
- Tables that have a primary key defined as INVISIBLE can't be backfilled.
- A table that has more than 500 million rows can't be backfilled unless all of the following conditions are met:
  1. The table has a unique index.
  2. None of the columns of the index are nullable.
  3. The index isn't descending.
  4. All columns of the index are included in the stream.
- Datastream periodically fetches the latest schema from the source as events are processed. If a schema changes, Datastream detects the change and triggers a schema fetch. However, some events might be processed incorrectly or dropped between schema fetches, which can cause data discrepancies.
- Not all changes to the source schema can be detected automatically, in which case data corruption might occur. The following schema changes can cause data corruption or failure to process events downstream:
  - Dropping columns
  - Adding columns to the middle of a table
  - Changing the data type of a column
  - Reordering columns
  - Dropping tables (relevant if the same table is then recreated with new data added)
  - Truncating tables
- Datastream doesn't support replicating views.
- Datastream doesn't support columns of spatial data types. Values in these columns are replaced with NULL values.
- Datastream doesn't support the zero value (0000-00-00 00:00:00) in columns of the DATETIME, DATE, or TIMESTAMP data types. The zero value is replaced with NULL.
- Datastream doesn't support replicating rows that include any of the following values in JSON columns: DECIMAL, NEWDECIMAL, TIME, TIME2, DATETIME, DATETIME2, DATE, TIMESTAMP, or TIMESTAMP2. Events containing such values are discarded.
- Datastream doesn't support binary log transaction compression.
- Datastream doesn't support SSL certificate chains in source MySQL connection profiles. Only single, x509 PEM-encoded certificates are supported.
- Datastream doesn't support cascading deletes. Such events aren't written to the binary log and, as a result, aren't propagated to the destination.
- Datastream doesn't support DROP PARTITION operations. These are metadata-only operations and aren't replicated. Other events aren't affected, and the stream continues to run successfully.
- Because Datastream doesn't support failover to replicas when using binlog-based replication, we recommend GTID-based replication for Cloud SQL for MySQL Enterprise Plus sources. Cloud SQL Enterprise Plus instances are subject to near-zero downtime maintenance and fail over to a replica during maintenance.
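Per the documentation, unsupported values are rewritten deterministically: values in spatial-type columns and zero date/time values arrive as NULL at the destination. The following toy sanitizer mimics that documented mapping on a dict-shaped row; the column names and the function itself are invented for the example and are not Datastream code.

```python
# Toy sketch of the documented NULL substitution: spatial-type columns and
# zero DATETIME/DATE/TIMESTAMP values are replaced with NULL (None here).

ZERO_VALUES = {"0000-00-00", "0000-00-00 00:00:00"}

def sanitize_row(row, spatial_columns=()):
    """Return a copy of `row` with unsupported values replaced by None."""
    clean = {}
    for column, value in row.items():
        if column in spatial_columns:
            clean[column] = None   # spatial types -> NULL
        elif value in ZERO_VALUES:
            clean[column] = None   # zero DATETIME/DATE/TIMESTAMP -> NULL
        else:
            clean[column] = value
    return clean

row = {"id": 1, "created": "0000-00-00 00:00:00", "loc": "POINT(1 2)"}
print(sanitize_row(row, spatial_columns=("loc",)))
# {'id': 1, 'created': None, 'loc': None}
```

Rows containing the unsupported JSON column values listed above are handled differently: those events are discarded entirely rather than rewritten.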
Additional limitations for GTID-based replication

- Recovering streams that use GTID-based replication is only possible through the Datastream API.
- Creating tables from other tables using CREATE TABLE ... SELECT statements isn't supported.
- Datastream doesn't support tagged GTIDs.
- For MySQL restrictions that apply to GTID-based replication, see the MySQL documentation.

What's next

- Learn how to configure a MySQL source for use with Datastream.

Last updated: 2025-09-04 (UTC).