Source Oracle database


This section contains information about:

  • How Datastream handles data pulled from a source Oracle database
  • The versions of Oracle database that Datastream supports
  • An overview of how to set up a source Oracle database so that data can be streamed from it to a destination
  • Known limitations for using Oracle database as a source

Behavior

The source Oracle database relies on the Oracle LogMiner feature to expose changes to the data.

  • You can select all schemas of a given database or specific schemas, as well as all tables of those schemas or specific tables.
  • All historical data is replicated.
  • All data manipulation language (DML) changes, such as inserts, updates, and deletes from the specified databases and tables, are replicated.
  • Datastream replicates both committed and, in some cases, uncommitted changes into the destination, because it reads changes before they're committed. If a transaction is rolled back, the output records also include the opposite operation. For example, a rolled-back INSERT operation is followed by a corresponding DELETE record. In this case, the event appears as a DELETE event that contains only the ROWID.
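The rollback behavior can be illustrated with a short sketch. The table and column names below are placeholders for illustration only:

```sql
-- Session on the source database: insert a row, then roll it back.
INSERT INTO hr.employees (id, name) VALUES (42, 'alice');
ROLLBACK;

-- Because Datastream reads uncommitted changes, the stream contains both
-- the INSERT event and, after the rollback, a compensating DELETE event
-- that carries only the row's ROWID (no column data).
```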

Versions

Datastream supports the following versions of Oracle database:

  • Oracle 11g, Version 11.2.0.4
  • Oracle 12c, Version 12.1.0.2
  • Oracle 12c, Version 12.2.0.1
  • Oracle 18c
  • Oracle 19c

Setup

To set up a source Oracle database so that data from it can be streamed into a destination, you must configure the database to grant access, set up logging, define a retention policy, and so on.

See Configure your source Oracle database to learn how to configure this database so that Datastream can pull data from it into a destination.
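The linked configuration page is authoritative; as an orientation only, the Oracle-side steps typically involve statements along these lines (the user name, table, and retention window here are placeholders, not prescribed values):

```sql
-- Enable database-level supplemental logging (required by LogMiner).
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Log all columns for a table that will be streamed.
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Grant a dedicated user the privileges needed to read changes.
GRANT EXECUTE_CATALOG_ROLE TO datastream_user;
GRANT SELECT ANY TRANSACTION TO datastream_user;
GRANT LOGMINING TO datastream_user;   -- Oracle 12c and later

-- In RMAN: retain archived logs long enough for changes to be read.
-- CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
```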

Known limitations

Known limitations for using Oracle database as a source include:

  • Any tables that have more than 500 million rows can't be backfilled unless the following conditions are met:
    1. The table has a unique index defined on it.
    2. The index is a B-tree index, which is the default index type. The index can be composite.
    3. The index isn't a reverse key index.
    4. The index doesn't contain a function-based column.
    5. All columns of the index are allowed.
    6. None of the columns of the index are nullable.
    7. The index doesn't contain a DATE-type column whose values include negative dates.
  • Streams are limited to 10,000 tables.
  • Oracle multi-tenant architecture (CDB/PDB) isn't supported.
  • Oracle Autonomous Database isn't supported.
  • Secure Sockets Layer (SSL) authentication isn't supported.
  • Events from tables that don't have a primary key won't contain the information required to perform a merge operation on the consumer side.
  • Events have a size limitation of 3 MB.
  • Index-organized tables (IOTs) aren't supported.
  • Temporary tables aren't supported.
  • For columns of type BFILE, only the path to the file is replicated; the contents of the file aren't.
  • Columns of the ANYDATA, BLOB, CLOB, LONG/LONG RAW, NCLOB, UDT, UROWID, and XMLTYPE data types aren't supported, and are replaced with NULL values.
  • For Oracle 11g, tables that have columns of data types ANYDATA or UDT aren't supported, and the entire table won't be replicated.
  • Oracle Label Security (OLS) isn't replicated.
  • Datastream periodically fetches the latest schema from the source as events are processed. If a schema changes, then some events from the new schema may be read while the old schema is still applied. In this case, Datastream detects the schema change, triggers a schema fetch, and reprocesses the failed events.
  • Not all changes to the source schema can be detected automatically, in which case data corruption may occur. The following schema changes may cause data corruption or failure to process the events downstream:
    • Dropping columns
    • Adding columns to the middle of a table
    • Changing the data type of a column
    • Reordering columns
    • Dropping tables (relevant if the same table is then recreated with new data added)
    • Truncating tables
  • Datastream doesn't support replicating views.
  • Datastream supports materialized views. However, new views created while the stream is running aren't backfilled automatically.
  • SAVEPOINT statements aren't supported and can cause data discrepancy in case of a rollback.
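A query along the following lines can help check whether a large table has an index that satisfies the backfill conditions listed above. This is a sketch using standard Oracle data dictionary views; the schema and table names are placeholders:

```sql
-- List unique B-tree indexes on a table, with the nullability of each
-- index column. INDEX_TYPE = 'NORMAL' excludes reverse key indexes
-- ('NORMAL/REV') and function-based indexes ('FUNCTION-BASED NORMAL').
SELECT i.index_name,
       ic.column_name,
       tc.nullable
FROM   dba_indexes      i
JOIN   dba_ind_columns  ic ON ic.index_owner = i.owner
                          AND ic.index_name  = i.index_name
JOIN   dba_tab_columns  tc ON tc.owner       = i.table_owner
                          AND tc.table_name  = i.table_name
                          AND tc.column_name = ic.column_name
WHERE  i.table_owner = 'HR'          -- placeholder schema
AND    i.table_name  = 'EMPLOYEES'   -- placeholder table
AND    i.uniqueness  = 'UNIQUE'
AND    i.index_type  = 'NORMAL'
ORDER  BY i.index_name, ic.column_position;
```

An index qualifies only if every row returned for it shows `NULLABLE = 'N'`.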