Configure a pluggable Oracle database

Datastream supports Oracle multi-tenant architecture, where a single container database (CDB) contains one or more pluggable databases (PDBs). Each pluggable database is a self-contained database with a unique ID and name, and can be managed independently.

You can configure your pluggable Oracle database for use with the following CDC methods:

  • Binary log reader
  • LogMiner

For more information about the binary log reader and the LogMiner API, see Work with Oracle database redo log files.

For detailed instructions about how to configure your source Oracle database for each of these methods, see the sections that follow.

Configure your pluggable Oracle database for binary log reader

To configure your pluggable Oracle database for use with the binary log reader CDC method, perform the following steps:

  1. Select one of the following methods to access your log files:

    • Automatic Storage Management (ASM): If you select this option, create a dedicated ASM connection and provide its details when you create your connection profile.
    • Database directories: If you select this option, create database directory objects that point to the online redo log files and the archived log files, and grant READ access on the directories to your database user:
    CREATE DIRECTORY DIRECTORY_NAME_1 as 'ONLINE_REDO_LOGS_PATH';
    CREATE DIRECTORY DIRECTORY_NAME_2 as 'ARCHIVED_REDO_LOGS_PATH';
    GRANT READ ON DIRECTORY DIRECTORY_NAME_1 to USER_NAME;
    GRANT READ ON DIRECTORY DIRECTORY_NAME_2 to USER_NAME;

    Replace the following:

    • DIRECTORY_NAME_1: the name of the directory for the online redo log files.
    • DIRECTORY_NAME_2: the name of the directory for the archived redo log files.
    • ONLINE_REDO_LOGS_PATH: the path to the directory where online redo log files are to be stored.
    • ARCHIVED_REDO_LOGS_PATH: the path to the directory where archived log files are to be stored.
    • USER_NAME: the name of the database user to whom you want to grant READ access.

    If you select the binary log reader method and use database directories, you need to provide the redo log and archived log directory names when you create your stream.
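
    If you're not sure where the log files currently reside, you can look up their locations before you create the directory objects. The following queries use standard Oracle dynamic performance views; note that V$LOGFILE returns full file paths, and each directory object must point to the containing directory, not to an individual file:

    SELECT MEMBER FROM V$LOGFILE;
    SELECT DESTINATION FROM V$ARCHIVE_DEST WHERE STATUS = 'VALID';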

  2. Verify that your database is running in ARCHIVELOG mode.

    To do so, sign in to your Oracle database and run the following command at the SQL prompt:

    SELECT LOG_MODE FROM V$DATABASE;

    1. If the result is ARCHIVELOG, then move on to the next step.
    2. If the result is NOARCHIVELOG, then enable ARCHIVELOG mode for your database by running the following commands while logged in as SYSDBA:

      SHUTDOWN IMMEDIATE;
      STARTUP MOUNT;
      ALTER DATABASE ARCHIVELOG;
      ALTER DATABASE OPEN;
    3. Archived log files consume disk space, so configure the DB_RECOVERY_FILE_DEST_SIZE parameter for your database. Use this parameter to specify (in bytes) the hard limit on the total space to be used by target database recovery files. Setting this parameter lets you balance the risk of the database running out of disk space against the risk of the stream failing because of log position loss.
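
    For example, to set a hard limit of 100 GB (the value here is illustrative, not a recommendation; size the limit for your own workload):

    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 107374182400 SCOPE=BOTH;

    The SCOPE=BOTH clause assumes your database uses a server parameter file (spfile), so the change applies to the running instance and also persists across restarts.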

  3. Define a data retention policy for your database by running these Oracle Recovery Manager (RMAN) commands:

    CONNECT TARGET /
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;

    We recommend that you retain backups and archive logs for a minimum of 4 days; 7 days is the recommended value.

  4. Return to the SQL prompt of the database tool that you're using and configure the Oracle log file rotation policy. We recommend setting a maximum log file size of no more than 512 MB.
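
    To see whether your current online redo log files already satisfy that recommendation, you can check their sizes in V$LOG:

    SELECT GROUP#, BYTES/1024/1024 AS SIZE_MB FROM V$LOG;

    Oracle doesn't resize existing log files in place; if the files are too large, the usual approach is to add new log file groups at the desired size and then drop the old ones.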

  5. Enable supplemental log data. To do so, first enable minimal database-level supplemental logging by running the following command:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    Next, choose whether to turn on logging for specific tables or the entire database.

    To log changes only for specific tables, run the following command for each table that you want to replicate:

    ALTER TABLE SCHEMA.TABLE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    Replace the following:

    • SCHEMA: the name of the schema that contains the table.
    • TABLE: the name of the table for which you want to log changes.

    To replicate most or all tables in your database, consider turning logging on for the entire database. Run the following command to enable supplemental log data for the entire database:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
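
    To confirm that supplemental logging is active, you can query V$DATABASE. SUPPLEMENTAL_LOG_DATA_MIN should return YES after minimal logging is enabled, and SUPPLEMENTAL_LOG_DATA_ALL should return YES if you enabled logging for the entire database:

    SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;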
  6. Create a common user. A common user has the same identity in the CDB$ROOT container and in the pluggable databases. A common user can connect to and perform operations within the CDB$ROOT container, and within any pluggable database in which it has privileges. The common username must start with C## or c##.
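
    For illustration, a common user might be created from the CDB$ROOT container as follows. The name C##MYUSER and the password are placeholders; replace them with your own values:

    CREATE USER C##MYUSER IDENTIFIED BY a_strong_password CONTAINER = ALL;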

  7. Grant the appropriate privileges to the user account that will be used to connect to your database. To do so, run the following commands:

    GRANT CONNECT TO USER_NAME;
    GRANT CREATE SESSION TO USER_NAME;
    GRANT SELECT ANY TRANSACTION TO USER_NAME;
    GRANT SELECT ANY TABLE TO USER_NAME;
    GRANT SELECT ANY DICTIONARY TO USER_NAME;
    GRANT SELECT ON DBA_EXTENTS TO USER_NAME;
    GRANT EXECUTE_CATALOG_ROLE TO USER_NAME;
    GRANT SELECT ON SYS.V_$DATABASE TO USER_NAME;
    GRANT SELECT ON SYS.V_$ARCHIVED_LOG TO USER_NAME;

    Replace USER_NAME with the name of the user account that you intend to use to connect to your database.

Configure your pluggable Oracle database for LogMiner

To configure a self-hosted Oracle pluggable database so that you can use it with the LogMiner API, perform the following steps:

  1. Verify that your database is running in ARCHIVELOG mode. To do so, run the following command from the CDB$ROOT container:

    SELECT LOG_MODE FROM V$DATABASE;

    1. If the result is ARCHIVELOG, then move on to step 2.
    2. If the result is NOARCHIVELOG, then enable ARCHIVELOG mode for your database by running the following commands while logged in as SYSDBA:

      SHUTDOWN IMMEDIATE;
      STARTUP MOUNT;
      ALTER DATABASE ARCHIVELOG;
      ALTER DATABASE OPEN;

    3. Archived log files consume disk space, so configure the DB_RECOVERY_FILE_DEST_SIZE parameter for your database. Use this parameter to specify (in bytes) the hard limit on the total space to be used by target database recovery files. Setting this parameter lets you balance the risk of the database running out of disk space against the risk of the stream failing because of log position loss.

  2. Define a data retention policy for your database by running the following Oracle Recovery Manager (RMAN) command from the CDB$ROOT container:

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
    

    The command defines the data retention policy for all pluggable databases in your container database.

    We recommend that you retain backups and archive logs for a minimum of 4 days; 7 days is the recommended value.

  3. Return to the SQL prompt of the database tool that you're using and configure the Oracle log file rotation policy. We recommend setting a maximum log file size of no more than 512 MB.

  4. Enable supplemental log data. To do so, first enable supplemental logging on the database at the CDB$ROOT container level by running the following command:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    

    Next, choose whether to turn on logging for specific tables or the entire pluggable database.

    To log changes only for specific tables, connect to the pluggable database container and run the following command for each table that you want to replicate:

    ALTER TABLE SCHEMA.TABLE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    

    Replace the following:

    • SCHEMA: the name of the schema that contains the table.
    • TABLE: the name of the table for which you want to log changes.

    To replicate most or all tables in your database, consider turning logging on for the entire database. Run the following command to enable supplemental log data for the entire database:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    
  5. Create a common user. A common user has the same identity in the CDB$ROOT container and in the pluggable databases. A common user can connect to and perform operations within the CDB$ROOT container, and within any pluggable database in which it has privileges. The common username must start with C## or c##.

  6. Grant the appropriate privileges to the common user that will be used to connect to your database. Different permissions are required at the CDB$ROOT container and pluggable database levels.

    • Connect to the CDB$ROOT container and run the following commands:
    GRANT CREATE SESSION TO USER_NAME;
    GRANT SET CONTAINER TO USER_NAME;
    GRANT SELECT ON SYS.V_$DATABASE TO USER_NAME;
    GRANT SELECT ON SYS.V_$LOGMNR_CONTENTS TO USER_NAME;
    GRANT EXECUTE ON DBMS_LOGMNR TO USER_NAME;
    GRANT EXECUTE ON DBMS_LOGMNR_D TO USER_NAME;
    GRANT LOGMINING TO USER_NAME;
    GRANT EXECUTE_CATALOG_ROLE TO USER_NAME;
    
    • Connect to the pluggable database and run the following commands:
    GRANT CREATE SESSION TO USER_NAME;
    GRANT SET CONTAINER TO USER_NAME;
    GRANT SELECT ANY TABLE TO USER_NAME;
    GRANT SELECT ON SYS.V_$DATABASE TO USER_NAME;
    GRANT SELECT ON SYS.V_$ARCHIVED_LOG TO USER_NAME;
    GRANT SELECT ON DBA_SUPPLEMENTAL_LOGGING TO USER_NAME;
    
  7. Grant the common user SELECT access to the DBA_EXTENTS view in your database. This privilege lets you use ROWID-based backfill for your Oracle source:

    GRANT SELECT ON DBA_EXTENTS TO USER_NAME;
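
    To verify the grant, you can connect as the common user and run a quick probe query; if the privilege is missing, Oracle raises ORA-00942 (table or view does not exist):

    SELECT COUNT(*) FROM DBA_EXTENTS WHERE ROWNUM <= 1;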

What's next