# Use Datastream with an existing BigQuery table
This page describes best practices for use cases where:

- Users have an existing table in BigQuery and need to replicate their data using change data capture (CDC) into the same BigQuery table.
- Users need to copy data into an existing BigQuery table without using the Datastream backfill capability, either because of the time it would take or because of product limitations.
## Problem
A BigQuery table that's populated using the [BigQuery Storage Write API](/bigquery/docs/change-data-capture) doesn't allow regular data manipulation language (DML) operations. This means that once a CDC stream starts writing to a BigQuery table, there's no way to add historical data that wasn't already pre-populated in the table.
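To illustrate the limitation, a statement like the following would be rejected on a table that an active CDC stream writes to through the Storage Write API. The dataset, table, and column names here are placeholders, not part of any real setup:

```sql
-- Hypothetical example: mydataset.mytable is populated by an active CDC
-- stream through the Storage Write API. BigQuery rejects regular DML
-- statements against such a table, so this backfill attempt fails:
UPDATE mydataset.mytable
SET status = 'archived'
WHERE id = 42;
```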
Consider the following scenario:
1. **TIMESTAMP 1**: the table copy operation is initiated.
2. **TIMESTAMP 2**: while the table is being copied, DML operations at the source result in changes to the data (rows are added, updated, or removed).
3. **TIMESTAMP 3**: CDC is started. Changes that happened at **TIMESTAMP 2** aren't captured, resulting in a data discrepancy.
## Solution
To ensure data integrity, the CDC process must capture every change at the source that occurred after the last update that was copied into the BigQuery table.
The solution that follows lets you ensure that the CDC process captures all the changes from **TIMESTAMP 2**, without blocking the copy operation from writing data into the BigQuery table.
### Prerequisites
- The target table in BigQuery must have the exact same schema and configuration as if the table was created by Datastream. You can use the [Datastream BigQuery Migration Toolkit](/datastream/docs/best-practices-migration-toolkit) to accomplish this.
- For MySQL and Oracle sources, the user must be able to identify the log position at the time when the copy operation is initiated.
- The database must have sufficient storage and a log retention policy that allows the table copy process to complete (for MySQL, a quick retention check is shown after this list).
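For example, on a MySQL 8.0 or later source you can check how long binary log files are retained before you start a long-running copy. A minimal check, assuming the default retention variable:

```sql
-- MySQL 8.0+: number of seconds binary log files are retained before
-- automatic purging (0 means logs are never purged automatically).
-- The copy operation must finish within this window so that the
-- recorded log position is still available when CDC starts.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```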
### MySQL and Oracle sources
1. Create, but don't start, the stream that you intend to use for the ongoing CDC replication. The stream needs to be in the **CREATED** state.
2. When you're ready to start the table copy operation, identify the database's current log position (see the example queries after these steps):
    - For MySQL, see the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-howto-masterstatus.html) to learn how to obtain the replication binary log coordinates. Once you've identified the log position, close the session to release any locks on the database.
    - For Oracle, run the following query: `SELECT current_scn FROM V$DATABASE`
3. Copy the table from the source database into BigQuery.
4. Once the copy operation is completed, follow the steps described on the [Manage streams](/datastream/docs/manage-streams#startastreamfromspecific) page to start the stream from the log position that you identified earlier.
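For reference, the queries from step 2 look like the following. `SHOW MASTER STATUS` is the statement that the linked MySQL documentation describes for obtaining binary log coordinates (MySQL 8.4 and later rename it `SHOW BINARY LOG STATUS`):

```sql
-- MySQL: capture the current binary log coordinates. Note the File and
-- Position columns, then close the session to release any locks.
SHOW MASTER STATUS;

-- Oracle: capture the current system change number (SCN).
SELECT current_scn FROM V$DATABASE;
```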
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-03 UTC."],[[["\u003cp\u003eThis guide provides solutions for replicating data from existing tables into BigQuery using change data capture (CDC) without using Datastream's backfill feature.\u003c/p\u003e\n"],["\u003cp\u003eThe primary challenge addressed is ensuring data integrity when using BigQuery's Storage Write API for CDC, which does not allow adding historical data after the CDC stream begins, leading to potential data discrepancies.\u003c/p\u003e\n"],["\u003cp\u003eTo prevent data loss during table copy operations, the CDC process must capture all changes made to the source table from the moment after the last copied update.\u003c/p\u003e\n"],["\u003cp\u003eThe solution involves identifying the log position in MySQL and Oracle sources at the start of the copy operation, and using this position to start the CDC stream after the copy is completed, or using a replication slot for PostgreSQL sources.\u003c/p\u003e\n"],["\u003cp\u003eTarget tables must be using the correct schema, and the database must have the correct storage and log retention policy.\u003c/p\u003e\n"]]],[],null,["# Use Datastream with an existing BigQuery table\n\nThis page describes best practices for use cases where:\n\n- Users have an existing table in BigQuery and need to replicate their data using change data capture (CDC) into the same BigQuery table.\n- Users need to copy data into an existing BigQuery table without using the Datastream backfill capability, either because of the time it would take or because of product limitations.\n\nProblem\n-------\n\nA BigQuery table that's populated using the [BigQuery Storage\nWrite API](/bigquery/docs/change-data-capture) doesn't allow regular data\nmanipulation language (DML) operations. This means that once a CDC stream\nstarts writing to a BigQuery table, there's no way to add historical data\nthat wasn't already pre-populated in the table.\n\nConsider the following scenario:\n\n1. **TIMESTAMP 1**: the table copy operation is initiated.\n2. **TIMESTAMP 2**: while the table is being copied, DML operations at the source result in changes to the data (rows are added, updated or removed).\n3. **TIMESTAMP 3** : CDC is started, changes that happened in **TIMESTAMP 2** aren't captured, resulting in data discrepancy.\n\nSolution\n--------\n\nTo ensure data integrity, the CDC process must capture all the changes in the\nsource that occurred from the moment immediately following the last update made\nthat was copied into the BigQuery table.\n\nThe solution that follows lets you ensure that the CDC process captures all the\nchanges from **TIMESTAMP 2**, without blocking the copy operation from writing data\ninto the BigQuery table.\n\n### Prerequisites\n\n- The target table in BigQuery must have the exact same schema and configuration as if the table was created by Datastream. 
You can use the [Datastream BigQuery Migration Toolkit](/datastream/docs/best-practices-migration-toolkit) to accomplish this.\n- For MySQL and Oracle sources, the user must be able to identify the log position at the time when the copy operation is initiated.\n- The database must have sufficient storage and log retention policy to allow the table copy process to complete.\n\n### MySQL and Oracle sources\n\n1. Create, but don't start the stream that you intend to use for the ongoing CDC replication. The stream needs to be in the **CREATED** state.\n2. When you're ready to start the table copy operation, identify the database current log position:\n - For MySQL, see the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-howto-masterstatus.html) to learn how to obtain the replication binary log coordinates. Once you've identified the log position, close the session to release any locks on the database.\n - For Oracle, run the following query: `SELECT current_scn FROM V$DATABASE`\n3. Copy the table from the source database into BigQuery.\n4. Once the copy operation is completed, follow the steps described in the [Manage streams](/datastream/docs/manage-streams#startastreamfromspecific) page to start the stream from the log position that you identified earlier.\n\n### PostgreSQL sources\n\n1. When you're ready to start copying the table, create the replication slot. For more information, see [Configure a source PostgreSQL database](/datastream/docs/configure-your-source-postgresql-database).\n2. Copy the table from the source database into BigQuery.\n3. Once the copy operation is completed, create and start the stream."]]
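A minimal sketch of step 1, assuming the `pgoutput` logical decoding plugin described in Datastream's PostgreSQL configuration guide; the slot name `datastream_slot` is a placeholder for whatever name you configure on the stream:

```sql
-- Create the logical replication slot the stream will read from.
-- 'datastream_slot' is a placeholder name; 'pgoutput' is the logical
-- decoding plugin. Create the slot before starting the table copy so
-- that changes made during the copy are retained for the stream.
SELECT PG_CREATE_LOGICAL_REPLICATION_SLOT('datastream_slot', 'pgoutput');
```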