During change data capture, Datastream reads Oracle redo log files to monitor your source databases for changes and replicate them to the destination instance. Each Oracle database has a set of online redo log files, and all transaction records on the database are recorded in these files. When the current redo log file is rotated (or switched), the archive process copies the file to archive storage, and the database promotes another file to serve as the current file.
The Datastream Oracle connector extracts change data capture (CDC) events from archived Oracle redo log files.
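Because Datastream works with archived redo log files, the source database must run in ARCHIVELOG mode. As a quick sketch of how to verify this (run as a user with access to V$DATABASE), you can query the database:

SELECT LOG_MODE FROM V$DATABASE;

If this returns NOARCHIVELOG, enable archiving on the database before setting up Datastream.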
Access redo log files
Datastream can use the Oracle LogMiner API or the binary reader method to access the redo log files:
Oracle LogMiner: an out-of-the-box utility included in Oracle databases. If you configure Datastream to use the Oracle LogMiner API, Datastream can only work with archived redo log files; online redo log files aren't supported. The LogMiner API method is single-threaded and is subject to higher latency and lower throughput on source databases with high transaction volumes. LogMiner supports most data types and Oracle database features.
Binary reader (Preview): a specialized, high-performance utility that works with both online and archived redo log files. Binary reader can access the log files using Automatic Storage Management (ASM) or read the files directly using database directory objects. Binary reader is multithreaded and supports low-latency CDC. It also has minimal impact on the source database because redo logs are parsed outside of database operations. The binary reader CDC method has limited support for certain data types and features. For more information, see Known limitations.
Set configuration parameters for Oracle redo log files
This design has significant implications for Datastream's replication latency. If Oracle's redo log files are switched frequently or kept small (for example, under 256 MB), Datastream can replicate changes faster.
There are configuration parameters that you can set to control the log file rotation frequency:
Size: Online redo log files have a minimum size of 4 MB, and the default size depends on your operating system. You can change the size of the log files by creating new online log files and dropping the old ones.
To find the size of the online redo log files, run the following query:
SELECT GROUP#, STATUS, BYTES/1024/1024 MB FROM V$LOG
Time: The ARCHIVE_LAG_TARGET parameter provides an upper limit of how long (in seconds) the current log of the primary database can span. This isn't the exact log switch time, because it takes into account how long it takes to archive the log. The default value is 0 (no upper bound), and a value of 1800 (30 minutes) or less is suggested.

You can use the following commands to set the ARCHIVE_LAG_TARGET parameter, either during initialization or while the database is up:

SHOW PARAMETER ARCHIVE_LAG_TARGET;

This command displays the current value of the parameter, in seconds.

ALTER SYSTEM SET ARCHIVE_LAG_TARGET = number-of-seconds;

Use this command to change the upper limit. For example, to set the upper limit to 10 minutes (600 seconds), enter:

ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 600;
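The resize procedure mentioned under Size can be sketched with statements like the following. The group numbers and the 256 MB size are illustrative assumptions; adjust them to your environment, and only drop a group after V$LOG shows its STATUS as INACTIVE:

-- Add new log groups at the desired size (group numbers are assumptions).
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 256M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 256M;
-- Force log switches so the old groups become inactive.
ALTER SYSTEM SWITCH LOGFILE;
-- Drop the old groups once they are INACTIVE.
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;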
What's next
- Learn more about Oracle as a source.
- Learn more about configuring a source Oracle database.