Master Note for Streams Recommended Configuration


Transcript of Master Note for Streams Recommended Configuration

Page 1: Master Note for Streams Recommended Configuration

Master Note for Streams Recommended Configuration [ID 418755.1]

  Modified 13-DEC-2010     Type BULLETIN     Status PUBLISHED

 

In this Document
  Purpose
  Scope and Application
  Master Note for Streams Recommended Configuration
    Configuration
    1.0 Software Version
    2.0 Database Parameters
      Database Version 9iR2
      Database Version 10gR2
    3.0 Database Storage
      3.1. Tablespace for Streams Administrator queues
      3.2. Separate queues for capture and apply
    4.0 Privileges
    5.0 Source Site Configuration
      5.1. Streams and Flash Recovery Area (FRA)
      5.2. Archive Logging must be enabled
      5.3. Supplemental logging
      5.4. Implement a Heartbeat Table
      5.5. Flow Control
      5.6. Perform periodic maintenance
        Database Version 9iR2 and 10gR1
        Database Version 10gR2 and above
      5.7. Capture Process Configuration
      5.8. Propagation Configuration
      5.9. Additional Configuration for RAC Environments for a Source Database
    6.0 Target Site Configuration
      6.1. Privileges
      6.2. Instantiation
      6.3. Conflict Resolution
      6.4. Apply Process Configuration
      6.5. Additional Configuration for RAC Environments for an Apply Database
    OPERATION
      Global Name
      Apply Error Management
      Backup Considerations
      Batch Processing
      Source Queue Growth
      Streams Cleanup/Removal
      Automatic Optimizer Statistics Collection
    MONITORING

      Dynamic Streams views
      Static Streams Views
      Streams Views
        Capture Views
        Propagation & Queue Views
        Apply Views
      Monitoring Utility STRMMON
      Alert Log
      Streams Healthcheck Scripts
  References

Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.8 to 11.2.0.1 - Release: 9.2 to 11.2
Information in this document applies to any platform.

Purpose

Oracle Streams enables the sharing of data and events in a data stream either within a database or from one database to another. This Note describes best practices for Oracle Streams configurations for both downstream capture and upstream (local) capture in version 9.2 and above.

Scope and Application

The information contained in this note targets Replication administrators implementing Streams replication in Oracle 9.2 and higher. This note contains key recommendations for successful implementation of Streams in Oracle database release 9.2 and above.

Master Note for Streams Recommended Configuration

Configuration

To ensure a successful Streams implementation, use the following recommendations when setting up a Streams environment:

Software Version Database Settings: Parameters, Storage, and Privileges Source Site Configuration Target Site Configuration

1.0 Software Version

Oracle recommends running Streams with the latest available patch set, together with the recommended patches listed in Document 437838.1, Streams Specific Patches.

Assess whether any recommended patch conflicts with patches already installed on your system.


Streams is supported in both Database Control and Grid Control. Use Grid Control to manage multiple databases in a Streams environment.

2.0 Database Parameters

For best results in a Streams environment, set the following initialization parameters, as necessary, at each participating instance: global_names, _job_queue_interval, sga_target, streams_pool_size.

Database Version 9iR2

Parameter Name & Recommendation Description Considerations

_job_queue_interval = 1 Scan rate interval (seconds) of job queue. Default is 5

This improves the scan rate for propagation jobs to every second, rather than every 5 seconds.

COMPATIBLE = 9.2.0.0 This parameter specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate.

GLOBAL_NAMES =true Specifies whether a database link is required to have the same name as the database to which it connects. Default is FALSE

This parameter should be set to TRUE at each database that is participating in your Streams environment to eliminate errors resulting from incorrect database connections. This parameter setting is honored by database links.

JOB_QUEUE_PROCESSES > 2 Specifies the number of Jn job queue processes for the database instance

AQ_TM_PROCESSES >= 1 Specifies the number of queue monitor processes for the database instance

LOGMNR_MAX_PERSISTENT_SESSIONS >= Number of capture processes

Specifies the maximum number of persistent LOGMINER mining sessions. Streams Capture Process uses LOGMINER to mine the  redo logs.

If there is a need to run multiple Streams capture processes on a single database, then this parameter needs to be set equal to or higher than the number of planned capture processes.

LOG_PARALLELISM = 1 Specifies the level of concurrency for redo allocation within the database instance.

PARALLEL_MAX_SERVERS >= 2 Default: derived from the values of CPU_COUNT, PARALLEL_ADAPTIVE_MULTI_USER, and PARALLEL_AUTOMATIC_TUNING. Range: 0 to 3599. Modifiable: yes.

Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle will increase the number of processes from the number created at instance startup up to this value.

In a Streams environment, each capture process and apply process may use multiple parallel execution servers. Set this initialization parameter to an appropriate value to ensure that there are enough parallel execution servers. For each defined Streams process (capture or apply), increase this parameter by 2+parallelism, where parallelism is the value of the capture or apply parallelism parameter.

SHARED_POOL_SIZE Each capture process needs 10 MB of shared pool space; by default, Streams is limited to using a maximum of 10% of the shared pool. The 10% of shared_pool_size refers to the size of the buffer queue before spillover occurs. Shared_pool_size must be significantly larger if Streams capture is implemented, especially if there is a large workload. The typical recommendation is to double the existing shared_pool_size and set the _first_spare_parameter to 50.

OPEN_LINKS >= 4 Specifies the maximum number of concurrent open connections to remote databases in one session.

PROCESSES Specifies the maximum number of operating system user processes that can simultaneously connect to the database.

Make sure the value of this parameter allows for all background processes, such as locks, job queue processes, and parallel execution processes. In Streams, capture processes and apply processes use background processes and parallel execution processes, and propagation jobs use job queue processes.

SESSIONS Specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system.

If you plan to run one or more capture processes or apply processes in a database, then you may need to increase the size of this parameter. Each background process in a database requires a session.

SGA_MAX_SIZE Specifies the maximum size of SGA for the lifetime of the instance.

If you plan to run multiple capture processes on a single database, then you may need to increase the size of this parameter. Note: _SGA_SIZE should only be increased if a logminer error is returned indicating a need for more memory. Any memory allocated to logminer is used solely by logminer; it is not returned to the shared pool after it has been allocated by capture until the capture process is restarted.

TIMED_STATISTICS Specifies whether or not statistics related to time are collected.

If you want to collect elapsed time statistics in the data dictionary views related to Streams, then set this parameter to TRUE. The views that include elapsed time statistics include: V$STREAMS_CAPTURE, V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER.
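Taken together, the dynamic 9iR2 recommendations above might be applied as follows. This is a sketch only; the literal values (4 job queue processes, 8 parallel servers) are illustrative assumptions, not mandates, and assume the instance was started from an spfile:

```sql
-- Illustrative settings from the 9iR2 parameter table above
ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH;
ALTER SYSTEM SET job_queue_processes = 4 SCOPE = BOTH;       -- table says > 2
ALTER SYSTEM SET aq_tm_processes = 1 SCOPE = BOTH;           -- table says >= 1
ALTER SYSTEM SET parallel_max_servers = 8 SCOPE = BOTH;      -- table says >= 2
ALTER SYSTEM SET open_links = 4 SCOPE = SPFILE;              -- static; needs restart
ALTER SYSTEM SET "_job_queue_interval" = 1 SCOPE = SPFILE;   -- hidden; needs restart
```

Static parameters (OPEN_LINKS, the hidden _JOB_QUEUE_INTERVAL) take effect only after an instance restart.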

Database Version 10gR2

Parameter Name & Recommendation

Description Considerations

_job_queue_interval = 1 Scan rate interval (seconds) of job queue. Default is 5

This improves the scan rate for propagation jobs to every second, rather than every 5 seconds.


COMPATIBLE = 10.2.0.0 This parameter specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate.

To use the new Streams features introduced in Oracle Database 10g Release 2, this parameter must be set to 10.2.0.0 or higher.

To use 10.2 downstream capture on logs from a 10.1 source , this parameter must be set to 10.1.0.0 at the source database and 10.2.0.0 at the downstream capture database.

GLOBAL_NAMES =true

Specifies whether a database link is required to have the same name as the database to which it connects. Default is FALSE

This parameter should be set to TRUE at each database that is participating in your Streams environment to eliminate errors resulting from incorrect database connections. This parameter setting is honored by database links.

JOB_QUEUE_PROCESSES > 4 + number of propagations defined

Specifies the number of Jn job queue processes for each instance (J000 ... J999). Job queue processes handle requests created by DBMS_JOB.

This parameter controls the maximum number of jobs that can run concurrently within the instance and should be set to a value greater than the number of propagations configured for the database. Be sure to increase this parameter if there are any automated jobs configured for the database.

LOG_ARCHIVE_DEST_n

Defines up to ten log archive destinations, where n is 1, 2, 3, ... 10.

A specific archive log destination should be specified if this database is the source for a Streams capture process. Specify a specific destination other than the flash recovery area (FRA) for storing archived logs if a local capture process is enabled.

To use downstream capture and copy the redo log files to the downstream database using log transport services, at least one log archive destination must be to the site running the downstream capture process. Avoid copying log files to a remote flash recovery area for downstream capture processing.


See Also: Oracle Data Guard Concepts and Administration

LOG_ARCHIVE_DEST_STATE_n Specifies the availability state of the corresponding destination. The parameter suffix (1 through 10) specifies one of the ten corresponding LOG_ARCHIVE_DEST_n destination parameters.

Enable archive logging to the specified destination for both local and downstream capture. To use downstream capture and copy the redo log files to the downstream database using log transport services, make sure the destination that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream database is set to enable.

PARALLEL_MAX_SERVERS Default: derived from the values of CPU_COUNT, PARALLEL_ADAPTIVE_MULTI_USER, and PARALLEL_AUTOMATIC_TUNING. Range: 0 to 3599. Modifiable: yes.

Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle will increase the number of processes from the number created at instance startup up to this value.

In a Streams environment, each capture process and apply process may use multiple parallel execution servers. Set this initialization parameter to an appropriate value to ensure that there are enough parallel execution servers. For each defined Streams process (capture or apply), increase this parameter by 2+parallelism, where parallelism is the value of the capture or apply parallelism parameter.

REMOTE_ARCHIVE_ENABLE

Enables or disables the sending of redo archival to remote destinations and the receipt of remotely archived redo.

To use downstream capture and copy the redo log files to the downstream database using log transport services, this parameter must be set to true at both the source database and the downstream database. This parameter is not required for local capture configuration.

SGA_MAX_SIZE Specifies the maximum size of SGA for the lifetime of a database instance.

To run multiple Streams processes on a single database, you may need to increase the size of this parameter.

SGA_TARGET =0 Specifies the total size of all System Global Area (SGA) components.

If this parameter is set to a nonzero value, then the size of the Streams pool is managed by Automatic Shared Memory Management.

For best results, size the shared_pool and streams_pool explicitly.

STREAMS_POOL_SIZE (tune to the workload) Specifies (in bytes) the size of the Streams pool. The Streams pool contains buffered queue messages. In addition, the Streams pool is used for internal communications during parallel capture and apply. Refer to V$STREAMS_POOL_ADVICE to determine the correct size and avoid excessive spills.

This parameter is modifiable. If this parameter is reduced to zero when an instance is running, then Streams processes and jobs will not run.

The size of the Streams pool is affected by each of the following factors:

- Capture process parallelism. Increase the Streams pool size by 10 MB for each capture process. In addition, if the capture parameter PARALLELISM is set greater than 1, increase the Streams pool size by 10 MB * parallelism. For example, if parallelism is set to 3 for a capture process, then increase the Streams pool by 30 MB.

- Apply process parallelism. Increase the Streams pool size by 1 MB for each apply process. In addition, if the apply parameter PARALLELISM is set greater than 1, increase the Streams pool size by 1 MB * parallelism. For example, if parallelism is set to 5 for an apply process, then increase the Streams pool by 5 MB.

- Logical Change Records (LCRs) stored in the buffered queue. Increase the size of the Streams pool to handle the volume of replicated data managed at both the source and target databases.

Minimally set the Streams pool size to 256 MB on low-activity databases or 500 MB on more active OLTP configurations. Adjust the Streams pool size to an appropriate value using the V$STREAMS_POOL_ADVICE view to avoid excessive spill from the buffered queue to disk.
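The sizing check described above can be sketched with a query against the advice view (view and column names are as documented for 10gR2; look for the smallest pool size at which estimated spills stop growing):

```sql
-- Estimated spill activity at each candidate Streams pool size
SELECT streams_pool_size_for_estimate AS size_mb,
       estd_spill_count,
       estd_spill_time
  FROM v$streams_pool_advice
 ORDER BY streams_pool_size_for_estimate;
```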

3.0 Database Storage

3.1. Tablespace for Streams Administrator queues

Create a separate tablespace for the streams administrator schema (STRMADMIN) at each participating Streams database. This tablespace will be used for any objects created in the streams administrator schema, including any spillover of messages from the in-memory queue.

For example:

CREATE TABLESPACE &streams_tbs_name DATAFILE '&db_file_directory/&db_file_name' SIZE 25 M REUSE AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;

ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name QUOTA UNLIMITED ON &streams_tbs_name;

3.2. Separate queues for capture and apply

Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when configuring bi-directional replication between multiple databases. For example, consider the situation where database db1.net replicates its changes to database db2.net, and db2.net replicates to db1.net. Each database will maintain two queues: one for capturing the changes made locally and another for receiving changes from the other database.

Similarly, for three databases (db1.net, db2.net, db3.net) replicating their local changes directly to each other, there will be three queues at each database. For example, at db1.net: queue1 for the capture process, and queue2 and queue3 for receiving changes from each of the other databases. The two apply processes on db1.net (apply_from_db2, apply_from_db3) apply the changes, each associated with a specific queue (queue2 or queue3).

Queue names should not exceed 24 characters in length. Queue table names should not exceed 24 characters in length. To pre-create a queue for Streams, use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. If you use the MAINTAIN_TABLES, MAINTAIN_SCHEMAS, or MAINTAIN_GLOBAL procedures to configure Streams and do not identify specific queue names, individual queues will be created automatically.

Example: To configure a site (SITEA) that is capturing changes for distribution to another site, as well as receiving changes from that other site (SITEB), configure each queue at SITEA with a separate queue_table as follows:


dbms_streams_adm.set_up_queue(queue_table_name => 'QT_CAP_SITEA', queue_name => 'CAP_SITEA');

dbms_streams_adm.set_up_queue(queue_table_name => 'QT_APP_FROM_SITEB', queue_name => 'APP_FROM_SITEB');

If desired, the above set_up_queue procedure calls can include a storage_clause parameter to configure separate tablespace and storage specifications for each queue table. Typically, Logical Change Records (LCRs) are queued to an in-memory buffer and processed from memory. However, they can be spilled to disk if they remain in memory too long due to an unavailable destination or on memory pressure (Streams_Pool memory is too low). The storage clause parameter can be used to preallocate space for the queue table or specify an alternative tablespace for the queue table without changing the default tablespace for the Streams Administrator.
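For instance, a queue table could be given its own tablespace via the storage_clause parameter. This is a sketch: the tablespace name streams_qt_tbs is an illustrative assumption and must exist beforehand.

```sql
BEGIN
  dbms_streams_adm.set_up_queue(
    queue_table_name => 'QT_CAP_SITEA',
    queue_name       => 'CAP_SITEA',
    -- hypothetical tablespace dedicated to this queue table
    storage_clause   => 'TABLESPACE streams_qt_tbs');
END;
/
```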

4.0 Privileges

The Streams administrator (strmadmin) must be granted the following on each participating Streams database:

GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/

In order to create capture and apply processes, the Streams Administrator must have DBA privilege. This privilege must be explicitly granted to the Streams Administrator.

GRANT DBA to STRMADMIN;

In addition, other required privileges must be granted to the Streams administrator schema (strmadmin) on each participating Streams database. In Oracle 10g and above, all of the above grants (except DBA) can be made in a single call to the GRANT_ADMIN_PRIVILEGE procedure:

DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');


5.0 Source Site Configuration

The following recommendations apply to source databases, i.e., databases in which a Streams capture process is configured.

5.1. Streams and Flash Recovery Area (FRA)

In Oracle 10g and above, configure a separate log archive destination independent of the Flash Recovery Area for the Streams capture process for the database. Archive logs in the FRA can be removed automatically on space pressure, even if the Streams capture process still requires them. Do not allow the archive logs for Streams capture to reside solely in the FRA.

5.2. Archive Logging must be enabled

Verify that each source database is running in ARCHIVELOG mode. For downstream capture sites (i.e., databases in which the Streams capture process is configured for another database), the database at which the source redo logs are created must have archive logging enabled.

5.3. Supplemental logging

Confirm supplemental logging is enabled at each source site. In 9iR2, Streams apply requires unconditional logging of unique index and foreign key constraint columns, even if those columns are not modified. This is due to Bug 4198593 ("Apply incorrectly requires unconditional logging of Unique and FK constraints"), fixed in 9.2.0.8.

If you set the parallelism apply process parameter to a value greater than 1, then you must specify a conditional supplemental log group at the source database for all of the unique and foreign key columns in the tables for which an apply process applies changes. Supplemental logging may be required for other columns in these tables as well, depending on your configuration.

Any columns specified in rule-based transformations or used within DML Handlers at target site must be unconditionally logged at the source site.

Supplemental logging can be specified at the source either at the database level or for the individual replicated table.

In 10gR2, supplemental logging is automatically configured for tables on which primary, unique, or foreign keys are defined when the database object is prepared for Streams capture. The procedures for maintaining Streams and adding rules in the DBMS_STREAMS_ADM package automatically prepare objects for a local Streams capture. For downstream capture sites (i.e., databases in which the Streams capture process is configured for another database), the database at which the source redo logs are created must have supplemental logging enabled for the database objects of interest to the downstream capture process.

All target site indexed columns, including the primary key, unique index, and foreign key columns of a replicated table or database, must be logged at the source site. Primary key columns must be unconditionally logged; unique index and foreign key columns can be conditionally logged. This supplemental logging is enabled automatically when the source table is prepared for capture with DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION.

Any columns specified in rule-based transformations or used within DML handlers at the target site must be unconditionally logged at the source site. Supplemental logging for these columns must be configured explicitly by the database administrator, using the table SQL syntax: ALTER TABLE... ADD SUPPLEMENTAL LOG... .
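As a sketch of that syntax (the table and log group names borrow from the HR sample schema queried later in this section; whether these particular columns need logging depends on your rules and handlers):

```sql
-- Unconditional (ALWAYS) logging of a column referenced by a
-- rule-based transformation or DML handler at the target site
ALTER TABLE hr.departments
  ADD SUPPLEMENTAL LOG GROUP lg_dept_name (department_name) ALWAYS;
```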

To verify that supplemental logging has been specified at the source, either at the database level or for individual replicated tables:

Database level logging:

SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;

Table level logging:

SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_tables UNION

SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_schemas UNION

SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_database;

Check supplemental log groups:

SELECT log_group_name, table_name,
       DECODE(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always
  FROM dba_log_groups;

Check columns in supplemental log groups:

SELECT log_group_name, column_name, position
  FROM dba_log_group_columns
 WHERE table_name = 'DEPARTMENTS' AND owner = 'HR';

Refer to Document 782541.1 Streams Replication Supplemental Logging Requirements

5.4. Implement a Heartbeat Table

To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heartbeat" table. A heartbeat table is especially useful for databases that have a low activity rate. The Streams capture process requests a checkpoint after every 10 MB of generated redo. During the checkpoint, the metadata for Streams is maintained if there are active transactions. Implementing a heartbeat table ensures that there are open transactions occurring regularly within the source database, enabling additional opportunities for the metadata to be updated frequently. Additionally, the heartbeat table provides quick feedback to the database administrator as to the health of the Streams replication.

To implement a heartbeat table:

1. Create a table at the source site that includes a date or timestamp column and the global name of the database.
2. Add a rule to capture changes to this table and propagate the changes to each target destination.
3. Make sure that the target destination will apply changes to this table as well.
4. Set up an automated job to update this table at the source site periodically, for example every minute.
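Steps 1 and 4 above might be sketched as follows. The table, column, and job names are illustrative assumptions; the rule-creation and propagation steps are covered in sections 5.7 and 5.8, and on 9iR2 the job would be created with DBMS_JOB rather than DBMS_SCHEDULER.

```sql
-- 1. Heartbeat table at the source site (names are hypothetical)
CREATE TABLE strmadmin.heartbeat (
  src_db_name VARCHAR2(128),
  beat_time   TIMESTAMP
);

INSERT INTO strmadmin.heartbeat
  SELECT global_name, SYSTIMESTAMP FROM global_name;
COMMIT;

-- 4. Update the row every minute (10g and above)
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'STRMADMIN.HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'UPDATE strmadmin.heartbeat
                           SET beat_time = SYSTIMESTAMP;
                        COMMIT;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
END;
/
```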

Refer to Document 461278.1 Example of a Streams Heartbeat Table


5.5. Flow Control

In Oracle 9iR2, when the threshold for memory of the buffer queue is exceeded, Streams will write the messages to disk. This is sometimes referred to as "spillover". When spillover occurs, Streams can no longer take advantage of the in-memory queue optimization. One technique to minimize this spillover is to implement a form of flow control. See the following note for the scripts and prerequisites:

Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk (Doc ID 259609.1)

In Oracle 10g and above flow control is automatically handled by the database so there is no need to implement it manually.

5.6. Perform periodic maintenance

Database Version 9iR2 and 10gR1

Periodically force capture to checkpoint. This checkpoint is not the same as a database checkpoint. To force capture to checkpoint, use the capture parameter _CHECKPOINT_FORCE and set the value to YES. Forcing a checkpoint ensures that the DBA_CAPTURE view columns CAPTURED_SCN and APPLIED_SCN are maintained.

Database Version 10gR2 and above

A. Confirm checkpoint retention. In Oracle 10gR2 and above, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter checkpoint_retention_time controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible SCN available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved, the Streams metadata tables prior to this SCN can be purged, and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
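For example, retention could be shortened as follows (the capture name capture_ex and the 7-day value are illustrative assumptions, not recommendations):

```sql
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'capture_ex',  -- hypothetical capture name
    checkpoint_retention_time => 7);            -- days of checkpoint data to keep
END;
/
```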

B. Dump fresh copy of Dictionary to redo. Issue a DBMS_CAPTURE_ADM.BUILD command to dump a current copy of the data dictionary to the redo logs. Doing this will reduce the amount of logs to be processed in case of additional capture process creation or process rebuild.

C. Prepare database objects for instantiation. Issue DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION, where * indicates the level (TABLE, SCHEMA, GLOBAL), for the database objects captured by Streams. This is used in conjunction with the BUILD in step B above for new capture creation or rebuild purposes.
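Steps B and C above might be sketched as follows, run as the Streams administrator (the HR schema is an illustrative assumption; choose the PREPARE_* level matching your configuration):

```sql
DECLARE
  build_scn NUMBER;
BEGIN
  -- B. Dump a fresh copy of the data dictionary to the redo stream
  DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build at SCN ' || build_scn);

  -- C. Re-prepare the replicated objects (schema level shown)
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'HR');
END;
/
```

Record the returned SCN; it identifies the dictionary build usable for a future capture creation or rebuild.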

5.7. Capture Process Configuration

A. Configuring Capture

Use the DBMS_STREAMS_ADM.ADD_*_RULES procedures (ADD_TABLE_RULES and ADD_SCHEMA_RULES for DML and DDL, ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. Note that it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

CAPTURE requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.


A single Streams capture can process rules for multiple tables or schemas. For best performance, rules should be simple.  Rules that include NOT or LIKE clauses are not simple and will impact the performance of Streams.

Minimize the number of rules added into the process rule set.  A good rule of thumb is to keep the number of rules in the rule set to less than 100. If more objects need to be included in the ruleset, consider constructing rules using the IN clause. For example, a rule for the 6 TB_M21* tables in the MYACCT schema would look like the following:

(:dml.get_object_owner() = 'MYACCT' and :dml.is_null_tag() = 'Y' and :dml.get_object_name() IN ('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','TB_M211B010'))

In version 10.2 and above, use the DBMS_STREAMS_ADM.MAINTAIN_* procedures (MAINTAIN_TABLES, MAINTAIN_SCHEMAS, MAINTAIN_GLOBAL, MAINTAIN_TTS) to configure Streams. These procedures automate the entire configuration of the Streams processes between databases, following the Streams best practices. For local capture, the default behavior of these procedures is to implement a separate queue for capture and apply. If you are configuring a downstream capture and applying the changes within the same database, override this behavior by specifying the same queue for both the capture_queue_name and apply_queue_name.

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES for DML and DDL, ADD_SUBSET_RULES for DML only, and ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. It is also possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

The Streams capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure can be used to capture DML changes for the entire database, as long as a negative rule set is created for the capture process that includes rules for objects with unsupported datatypes. ADD_GLOBAL_RULES can also be used to capture all DDL changes for the database.

A single Streams capture can process changes for multiple tables or schemas. For best performance, rules for these multiple tables or schemas should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.

To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate ALL changes, tagged or not, for the table or object.
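Tying the above together, a schema-level positive rule for capture might be sketched as follows (the HR schema, the capture name capture_ex, and the queue name CAP_SITEA are illustrative assumptions consistent with the queue examples earlier in this note):

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'HR',
    streams_type   => 'capture',
    streams_name   => 'capture_ex',           -- hypothetical capture name
    queue_name     => 'strmadmin.CAP_SITEA',  -- queue created in section 3.2
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => TRUE);                  -- add to the positive rule set
END;
/
```

Setting inclusion_rule => FALSE instead would place the generated rules in the negative rule set, the mechanism described above for excluding objects.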

B. Capture Parameters

Set the following parameters after a capture process is created:

Parameter &

Recommendation

Values Comment

PARALLELISM = 1 Default: 1. Number of parallel execution servers used to configure one or more preparer processes that prefilter changes for the capture process. Recommended value is 1.

_CHECKPOINT_FREQUENCY = 500 Default: 10 prior to 10.2.0.4; 1000 in 10.2.0.4 and above.

Modifies the frequency of logminer checkpoints, which matters especially in a database with significant LOB or DDL activity. Larger values decrease the frequency of logminer checkpoints; smaller values increase it. Logminer checkpoints are not the same as database checkpoints. Availability of logminer checkpoints impacts the time required to recover/restart the capture after a database restart. In a low-activity database (i.e., small amounts of data, or data to be captured that changes infrequently), use a lower value, such as 100.

A logminer checkpoint is requested by default every 10 MB of redo mined. If the value is set to 500, a logminer checkpoint is requested after every 500 MB of redo mined. Increasing the value of this parameter is recommended for active databases with significant redo generated per hour.

It should not be necessary to configure _CHECKPOINT_FREQUENCY in 10.2.0.4 or higher.

_SGA_SIZE (default: 10)
Amount of memory, in megabytes, available from the Streams pool for logminer processing. The default amount of streams_pool memory allocated to logminer is 10Mb. Increase this value especially in environments where large LOBs are processed. This parameter should not be increased unless the logminer error ORA-1341 is encountered. Streams pool memory allocated to logminer is unavailable for other uses.

Capture parameters can be set using the SET_PARAMETER procedure from the DBMS_CAPTURE_ADM package. For example, to set the checkpoint frequency of the streams capture process named CAPTURE_EX, use the following syntax while logged in as the Streams Administrator to request a logminer checkpoint after processing every Gigabyte (1000Mb) of redo:

exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');

5.8. Propagation Configuration

A. Configuring Propagation

If the maintain_* procedures are not suitable for your environment (Oracle 9iR2 and 10gR1), please use the ADD_*_PROPAGATION_RULES procedures (ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, and ADD_GLOBAL_PROPAGATION_RULES for both DML and DDL; ADD_SUBSET_PROPAGATION_RULES for DML only). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

The rules in the rule set for propagation can differ from the rules specified for the capture process. For example, to configure that all captured changes be propagated to a target site, a single ADD_GLOBAL_PROPAGATION_RULES procedure can be specified for the propagation even though multiple ADD_TABLE_RULES might have been configured for the capture process.

B. Propagation mode

For new propagation processes configured in 10.2 and above, set the queue_to_queue propagation parameter to TRUE. If the database is RAC enabled, an additional service is created, typically named in the format sys$schema.queue_name.global_name, when the Streams subscribers are initially created. A Streams subscriber is a defined propagation between two Streams queues, or an apply process with the apply_captured parameter set to TRUE. This service automatically follows the ownership of the queue on queue ownership switches (ie, instance startup, shutdown, etc). The service name can be found in the NETWORK_NAME column of the DBA_SERVICES view.

If the maintain_* (TABLE,SCHEMA,GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue propagation must use a TNS servicename (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor. See section 6 on Additional Considerations for RAC below.

Propagation processes configured prior to 10.2 continue to use the dblink mode of propagation. In this situation, if the database link no longer connects to the owning instance of the queue, propagation will not succeed. You can continue to use the 10.1 best practices for this propagation, or recreate the propagation during a maintenance window. Make sure that the queue is empty, with no unapplied spilled messages, before you drop the propagation. Then, recreate the propagation with the queue_to_queue parameter set to TRUE.

Queues created prior to 10.2 on RAC instances should be dropped and recreated in order to take advantage of the automatic service generation and queue_to_queue propagation. Be sure to perform this activity when the queue is empty and no new LCRs are being enqueued into the queue.

C. Propagation Parameters

Parameter, Recommended Value, and Comments:

latency=5 (default: 60)
Maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. The default value is 60. Caution: if latency is not specified for this call, then any existing latency value will be overwritten with the default value (60).

For example, if the latency is 60 seconds and there are no messages to be propagated during the propagation window, then messages from that queue for the destination will not be propagated for at least 60 more seconds; it will be at least 60 seconds before the queue is checked again for messages to be propagated to the specified destination. If the latency is 600, then the queue will not be checked for 10 minutes. If the latency is 0, then a job queue process will wait for messages to be enqueued for the destination, and as soon as a message is enqueued it will be propagated.

Propagation parameters can be set using the ALTER_PROPAGATION_SCHEDULE procedure from the DBMS_AQADM package. For example, to set the latency parameter of the Streams propagation from the STREAMS_QUEUE owned by STRMADMIN to the target database whose global_name is DEST_DB for the queue Q1, use the following syntax while logged in as the Streams Administrator:

exec dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue','DEST_DB',destination_queue=>'Q1',latency=>5);

D. Network Connectivity

When using Streams propagation across a Wide Area Network (WAN), increase the session data unit (SDU) to improve the propagation performance. The maximum value for SDU is 32K (32767). The SDU value for network transmission is negotiated between the sender and receiver sides of the connection: the minimum SDU value of the two endpoints is used for any individual connection. In order to take advantage of an increased SDU for Streams propagation, the receiving side sqlnet.ora file must include the default_sdu_size parameter. The receiving side listener.ora must indicate the SDU change for the SID. The sending side tnsnames.ora connect string must also include the SDU modification for the particular service.
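As an illustrative sketch only (the service, SID, and host names are placeholders), the three files might carry the SDU setting as follows:

# receiving side sqlnet.ora
DEFAULT_SDU_SIZE=32767

# receiving side listener.ora
SID_LIST_LISTENER=
  (SID_LIST=(SID_DESC=(SDU=32767)(SID_NAME=db1)))

# sending side tnsnames.ora
DEST_DB=(DESCRIPTION=(SDU=32767)
  (ADDRESS=(PROTOCOL=tcp)(HOST=remote-host)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=dest.example.com)))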

Tuning the TCP/IP networking parameters can significantly improve performance across the WAN. Here are some example tuning parameters for Linux. These parameters can be set in the /etc/sysctl.conf file and applied by running sysctl -p. When using RAC, be sure to configure this on each node.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE sqlnet.ora parameters can increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages. These parameters should only be increased after careful analysis of their overall impact on system performance.

For further information, please review the Oracle Net Services Guide.

5.9. Additional Configuration for RAC Environments for a Source Database

Archive Logs

The archive log threads from all instances must be available to any instance running a capture process. This is true for both local and downstream capture.

Queue Ownership

When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:

a capture process runs at the owning instance of the source queue;
a propagation job must run at the owning instance of the queue;
a propagation job must connect to the owning instance of the target queue.

Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If primary_instance is set to a specific instance (ie, not 0), queue ownership will return to the specified instance whenever the instance is up.
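For example, to pin queue ownership to instance 1 with instance 2 as the failover owner (the queue table name is illustrative):

exec DBMS_AQADM.ALTER_QUEUE_TABLE(queue_table => 'strmadmin.streams_queue_table', primary_instance => 1, secondary_instance => 2);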

Capture will automatically follow the ownership of the queue. If the ownership changes while capture is running, capture will stop on the current instance and restart at the new owner instance.

For queues created with Oracle Database 10g Release 2, a service will be created with the service name schema.queue and the network name SYS$schema.queue.global_name for that queue. If the global_name of the database does not match the db_name.db_domain of the database, be sure to include the global_name as a service name in the init.ora.

For propagations created with the Oracle Database 10g Release 2 code with the queue_to_queue parameter set to TRUE, the propagation job will deliver only to the specific queue identified. Also, the source dblink for the target database connect descriptor must specify the correct service (the global name of the target database) to connect to the target database. For example, the tnsnames.ora entry for the target database should include the CONNECT_DATA clause in the connect descriptor for the target database. This clause should specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')). Do NOT include a specific INSTANCE in the CONNECT_DATA clause.

For example, consider the tnsnames.ora file for a database with the global name db.mycompany.com. Assume that the alias name for the first instance is db1 and that the alias for the second instance is db2. The tnsnames.ora file for this database might include the following entries:

db.mycompany.com=(description=(load_balance=on)(address=(protocol=tcp)(host=node1-vip)(port=1521))(address=(protocol=tcp)(host=node2-vip)(port=1521))(connect_data=(service_name=db.mycompany.com)))

db1.mycompany.com=(description=(address=(protocol=tcp)(host=node1-vip)(port=1521))(connect_data=(service_name=db.mycompany.com)(instance_name=db1)))

db2.mycompany.com=(description=(address=(protocol=tcp)(host=node2-vip)(port=1521))(connect_data=(service_name=db.mycompany.com)(instance_name=db2)))

Use the first (load-balanced) tnsnames.ora alias, db.mycompany.com, in the target database link USING clause.


DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies all active services for the database. In non-RAC configurations, the service name will typically be the global_name. However, it is possible for users to manually create alternative services and use them in the TNS connect_data specification. For RAC configurations, the service will appear in these views as SYS$schema.queue.global_name.
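For example, to check the service names and see which are currently active:

SELECT name, network_name FROM dba_services;
SELECT inst_id, name, network_name FROM gv$active_services;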

Propagation Restart

Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation schedule. These procedures automatically handle queue_to_queue propagation.

Example:

exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');

or

exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation',force=>true);

exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');

6.0 Target Site Configuration

The following recommendations apply to target databases, ie, databases in which Streams apply is configured.

6.1. Privileges

Grant explicit privileges to the APPLY_USER for the user tables.

Examples:

Privileges for table-level DML: INSERT, UPDATE, DELETE.

Privileges for table-level DDL: CREATE (ANY) TABLE, CREATE (ANY) INDEX, CREATE (ANY) PROCEDURE.

6.2. Instantiation

Set instantiation SCNs manually if not using export/import. If manually configuring the instantiation SCN for each table within a schema, use the RECURSIVE=>TRUE option on the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN procedure.

For DDL, set the instantiation SCN at the next higher level (ie, SCHEMA or GLOBAL level).
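A sketch of a manual schema-level instantiation, assuming a database link from the destination named for the source global name (all names here are placeholders):

DECLARE
  iscn NUMBER;
BEGIN
  -- obtain the current SCN at the source over the database link
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SRC.EXAMPLE.COM;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'HR',
    source_database_name => 'SRC.EXAMPLE.COM',
    instantiation_scn    => iscn,
    recursive            => TRUE);
END;
/

With recursive=>TRUE, the procedure also sets the instantiation SCN for each table in the schema, using the database link to query the source data dictionary.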

6.3. Conflict Resolution

If updates will be performed in multiple databases for the same shared object, be sure to configure conflict resolution. See the Streams Replication Administrator's Guide, Chapter 3 Streams Conflict Resolution, for more detail.

To simplify conflict resolution on tables with LOB columns, create an error handler to handle errors for the table. When registering the handler using the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the ASSEMBLE_LOBS parameter as TRUE.
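For example, registering a hypothetical error handler procedure for UPDATE errors on a table with LOB columns (the object, procedure, and schema names are placeholders):

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.emp_docs',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => TRUE,
    user_procedure => 'strmadmin.emp_docs_error_handler',
    assemble_lobs  => TRUE);
END;
/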


Refer to Document   779801.1 Streams Conflict Resolution

6.4. Apply Process Configuration

A. Rules

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES, and ADD_GLOBAL_RULES for DML and DDL; ADD_SUBSET_RULES for DML only).

APPLY can be configured with or without a rule set. ADD_GLOBAL_RULES can be used to apply all changes in the queue for the database. If no rule set is specified for the apply process, all changes in the queue are processed by the apply process.

A single Streams apply process can process rules for multiple tables or schemas located in a single queue that are received from a single source database. For best performance, rules should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.

To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate all changes, tagged or not, for the table or object.

B. Parameters

Parameter, Recommended Value, and Comments:

DISABLE_ON_ERROR=N (default: Y)
If Y, then the apply process is disabled on the first unresolved error, even if the error is not fatal. If N, then the apply process continues regardless of unresolved errors.

PARALLELISM=4 (default: 1)
Parallelism configures the number of apply servers available to the apply process for performing user transactions from the source database. Choose a value of 4, 8, 12, or 16 based on the concurrent replicated workload generated at the source AND the number of CPUs available at the target.

TXN_LCR_SPILL_THRESHOLD (default: 10,000)
New in 10.2. Leave this parameter at the default initially. It enables you to specify that an apply process begins to spill messages for a transaction from memory to disk when the number of messages in memory for a particular transaction exceeds the specified number. Setting this parameter to a value higher than the default, to try to stage everything in memory, must be done carefully so that queue spilling is not increased. Setting TXN_LCR_SPILL_THRESHOLD to 'infinite' is not recommended because this reverts Streams to the old pre-10.2 behaviour.

The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you to monitor the number of transactions and messages spilled by an apply process.

Refer to Document 365648.1 Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams

Apply parameters can be set using the SET_PARAMETER procedure from the DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter of the Streams apply process named APPLY_EX, use the following syntax while logged in as the Streams Administrator:

exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');

In some cases, performance can be improved by setting the following hidden parameters. These parameters should be set when the major workload is UPDATEs and the updates are performed on just a few columns of a many-column table.

Parameter, Recommended Value, and Comments:

_DYNAMIC_STMTS=Y (default: N)
If Y, then for UPDATE statements the apply process optimizes the generation of SQL statements based on required columns.

_HASH_TABLE_SIZE=1000000 (default: 80*parallelism)
Sets the size of the hash table used to calculate transaction dependencies to 1 million.

6.5. Additional Configuration for RAC Environments for an Apply Database

Queue Ownership

When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:

the database link specified in the propagation must connect to the owning instance of the target queue;
the apply process runs at the owning instance of the target queue.

Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If primary_instance is set to a specific instance (ie, not 0), queue ownership will return to the specified instance whenever the instance is up.

Apply will automatically follow the ownership of the queue. If the ownership changes while apply is running, apply will stop on the current instance and restart at the new owner instance.


Changing the GLOBAL_NAME of the Database

See the OPERATION section on Global_name below. The following are some additional considerations when running in a RAC environment. If the GLOBAL_NAME of the database is changed, ensure that the queue is empty before changing the name and that the apply process is dropped and recreated with the apply_captured parameter = TRUE. In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database parameter initialization file.

OPERATION

A Streams process will automatically restart after a database startup, assuming that the process was in a running state before the database shut down. No special startup or shutdown procedures are required in the normal case.

Global Name

Streams uses the GLOBAL_NAME of the database to identify changes from or to a particular database. Do not modify the GLOBAL_NAME of a Streams database after capture has been created. Changes captured by the Streams capture process automatically include the current global name of the source database. This means that if the global name is modified after a capture process has been configured, the capture process must be dropped and recreated following the GLOBAL_NAME modification. In addition, the system-generated rules for capture, propagation, and apply typically specify the global name of the source database. These rules will need to be modified or recreated to adjust the source_database_name. Finally, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database initialization parameter file.

If the global name must be modified on the database, do it at a time when NO user changes are possible on the database and the Streams queues are empty, with no outstanding changes to be applied, so that the Streams configuration can be recreated. Keep in mind that all subscribers (propagations to target databases and the target apply processes) must also be recreated if the source database global_name is changed. Follow the directions in the Streams Replication Administrator's Guide for changing the DBID or GLOBAL_NAME of a source database.

It is also strongly recommended that the database init.ora parameter global_names be set to TRUE to guarantee that database link names match the global name of the target database.

Apply Error Management

The view DBA_APPLY_ERROR includes the message_number within the transaction on which the reported error occurred. Use this message number in conjunction with the procedures from the documentation manual Streams Concepts and Administration (Chapter 22 Monitoring Streams Apply Processes, "Displaying Detailed Information About Apply Errors") to print out the column values of each logical change record within the failed transaction.

Backup Considerations

1. Ensure that any manual backup procedures that include any of the following statements include a non-null Streams tag:

ALTER TABLESPACE ... BEGIN BACKUP


ALTER TABLESPACE ... END BACKUP

The tag should be chosen such that these DDL commands will be ignored by the capture rule set.

To set a streams tag, use the DBMS_STREAMS.SET_TAG procedure. A non-null tag should be specified to avoid capturing these commands.

Backups performed using RMAN do not need to set a Streams tag.
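For example, a manual hot-backup script might bracket the backup commands with a session tag so the capture rules skip them; the tag value and tablespace name shown are arbitrary:

exec DBMS_STREAMS.SET_TAG(HEXTORAW('11'));
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the datafiles ...
ALTER TABLESPACE users END BACKUP;
exec DBMS_STREAMS.SET_TAG(NULL);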

2. Do not allow any automated backup of the archived logs to remove necessary archive logs. It is especially important in a Streams environment that all necessary archived logs remain available online and in the expected location until the capture process has finished processing them. If a log required by the capture process is unavailable, the capture process will abort. Force a checkpoint (capture/logminer) before beginning the manual backup procedures. To force a checkpoint, explicitly reset the hidden capture parameter _CHECKPOINT_FORCE to 'Y'. The REQUIRED_CHECKPOINT_SCN column of the DBA_CAPTURE view specifies the lowest required SCN to restart capture. A procedure to determine the minimum archive log necessary for successful capture restart is available in the Streams health check script.
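As an illustrative query (not the healthcheck script procedure itself), the archive log containing the required checkpoint SCN can be located by joining DBA_CAPTURE to V$ARCHIVED_LOG:

SELECT c.capture_name, c.required_checkpoint_scn, l.name
FROM dba_capture c, v$archived_log l
WHERE c.required_checkpoint_scn BETWEEN l.first_change# AND l.next_change# - 1;

This log and all later logs must remain available for the capture process to restart.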

3. Ensure that all archive logs (from all threads) are available. Database recovery depends on the availability of these logs, and a missing log will result in incomplete recovery.

4. Ensure that the APPLY process parameter, COMMIT_SERIALIZATION, is set to the default value, FULL.

5. Implement a "heartbeat" table. To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heartbeat" table. Implementing a heartbeat table ensures that the metadata is updated frequently. Additionally, the heartbeat table provides quick feedback as to the health of Streams replication. Refer to the Source Site Configuration section: Implement a Heartbeat Table for more details.

6. In situations that result in incomplete recovery (point-in-time recovery) at the source site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide:

Performing Point-in-Time Recovery on the Source in a Single-Source Environment
Performing Point-in-Time Recovery in a Multiple-Source Environment

7. In situations that result in incomplete recovery at the destination site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide:

Performing Point-in-Time Recovery on a Destination Database

Batch Processing

For best performance, the commit point for batch processing should be kept low. It is preferable that excessively large batch processing be run independently at each site. If this technique is utilized, be sure to implement DBMS_STREAMS.SET_TAG to skip the capture of the batch processing session. Setting this tag is valid only in the connected session issuing the set_tag command and will not impact the capture of changes from any other database sessions.

DDL Replication


When replicating DDL, keep in mind the effect the DDL statement will have on the replicated sites. In particular, do not allow system generated naming for constraints or indexes, as modifications to these will most likely fail at the replicated site. Also, storage clauses may cause some issues if the target sites are not identical.

If you decide NOT to replicate DDL in your Streams environment, any table structure change must be performed manually.

Refer to Document 313478.1 Performing Manual DDL in a Streams Environment

Propagation

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. The typical solution is to disable the propagation and then re-enable it.

exec dbms_propagation_adm.stop_propagation('propagation_name');
exec dbms_propagation_adm.start_propagation('propagation_name');

If the above does not fix the problem, perform a stop of propagation with the force parameter and then start propagation again.

exec dbms_propagation_adm.stop_propagation('propagation_name',force=>true);
exec dbms_propagation_adm.start_propagation('propagation_name');

An additional side-effect of stopping the propagation with the force parameter is that the statistics for the propagation are cleared.

The above is documented in the Streams Replication Administrator's Guide: Restart Broken Propagations

Source Queue Growth

The source queue may grow if one of the target sites is down for an extended period, or if propagation is unable to deliver the messages to a particular target site (subscriber) due to network problems for an extended period.

Automatic flow control minimizes the impact of this queue growth. Queued messages (LCRs) for unavailable target sites will spill to disk storage while messages for available sites are processed normally.

Propagation is implemented using the DBMS_JOB subsystem. If a job is unable to execute 16 successive times, the job will be marked as "broken" and become disabled. Be sure to periodically check that the job is running successfully to minimize source queue growth due to this problem.
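One way to check for this condition is to query the propagation schedule views, for example:

SELECT schema, qname, destination, schedule_disabled, failures, last_error_msg
FROM dba_queue_schedules;

A SCHEDULE_DISABLED value of 'Y' combined with a non-zero FAILURES count indicates a schedule that needs to be re-enabled.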

Streams Cleanup/Removal

Removing the Streams administrator schema with DROP USER ... CASCADE can be used to remove the entire Streams configuration.

Automatic Optimizer Statistics Collection

Oracle Database 10g has the Automatic Optimizer Statistics Collection feature, which runs every night and gathers optimizer statistics for tables whose statistics have become stale. The problem with volatile tables, such as the Streams queue tables, is that when the statistics collection job runs, these tables may well not contain data that is representative of their full-load period. For this reason, we recommend that for volatile tables, customers run the dbms_stats gather job manually when the tables are at their fullest, and then immediately lock the statistics using the PL/SQL APIs provided (dbms_stats.lock_table_stats, for example). This ensures that when the nightly Automatic Optimizer Statistics Collection job runs, these volatile tables are skipped and hence not analyzed.
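For example, to gather and then lock statistics on a hypothetical Streams queue table while it is near its fullest (the owner and table names are placeholders):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'STRMADMIN', tabname => 'STREAMS_QUEUE_TABLE');
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'STRMADMIN', tabname => 'STREAMS_QUEUE_TABLE');
END;
/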

These volatile AQ/Streams tables are created through a call to dbms_aqadm.create_queue_table (qtable_name, etc.) or dbms_streams_adm.setup_queue() command with a user defined queue table (qtable_name). In addition to the queue table, the call internally creates the following tables which also tend to be volatile:

aq$_{qtable_name}_i
aq$_{qtable_name}_h
aq$_{qtable_name}_t
aq$_{qtable_name}_p
aq$_{qtable_name}_d
aq$_{qtable_name}_c

Oracle has the ability to restore old statistics on tables, including data dictionary tables, using the dbms_stats.restore_* APIs. This feature can be used for short-term resolution, but the real solution is the first one, where you lock the optimizer statistics of volatile tables.

MONITORING

All Streams processing is done at the "owning instance" of the queue. To determine the owning instance, use the query below:

SELECT q.owner, q.name, t.queue_table, t.owner_instance
FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
WHERE t.object_type = 'SYS.ANYDATA'
AND q.queue_table = t.queue_table
AND q.owner = t.owner;

To display the monitoring view information, either query the monitoring views from the owning instance or use the GV$ views for dynamic streams views.

Dynamic Streams Views

The views listed below are the most commonly monitored runtime views in Streams. They are described in the Oracle Database 10g Release 2 Database Reference manual.

Streams View Name Streams View Name from any RAC instance

V$STREAMS_CAPTURE GV$STREAMS_CAPTURE

V$STREAMS_APPLY_COORDINATOR GV$STREAMS_APPLY_COORDINATOR

V$STREAMS_APPLY_READER GV$STREAMS_APPLY_READER

V$STREAMS_APPLY_SERVER GV$STREAMS_APPLY_SERVER


V$STREAMS_POOL_ADVICE GV$STREAMS_POOL_ADVICE

V$STREAMS_TRANSACTION GV$STREAMS_TRANSACTION

V$BUFFERED_PUBLISHERS GV$BUFFERED_PUBLISHERS

V$BUFFERED_QUEUES GV$BUFFERED_QUEUES

V$BUFFERED_SUBSCRIBERS GV$BUFFERED_SUBSCRIBERS

V$PROPAGATION_RECEIVER GV$PROPAGATION_RECEIVER

V$RULE GV$RULE

V$RULE_SET GV$RULE_SET

V$RULE_SET_AGGREGATE_STATS GV$RULE_SET_AGGREGATE_STATS

Static Streams Views

The views listed below are the most commonly monitored configuration views in Streams. They are described in the Oracle Database 10g Release 2 Database Reference manual.

Streams Views:
DBA_REGISTERED_ARCHIVED_LOG
DBA_RECOVERABLE_SCRIPT
DBA_RECOVERABLE_SCRIPT_BLOCKS
DBA_RECOVERABLE_SCRIPT_ERRORS
DBA_RECOVERABLE_SCRIPT_PARAMS
DBA_STREAMS_ADD_COLUMN
DBA_STREAMS_ADMINISTRATOR
DBA_STREAMS_DELETE_COLUMN
DBA_STREAMS_GLOBAL_RULES
DBA_STREAMS_MESSAGE_CONSUMERS
DBA_STREAMS_MESSAGE_RULES
DBA_STREAMS_NEWLY_SUPPORTED
DBA_STREAMS_RENAME_COLUMN
DBA_STREAMS_RENAME_SCHEMA
DBA_STREAMS_RENAME_TABLE
DBA_STREAMS_RULES
DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_TABLE_RULES


DBA_STREAMS_TRANSFORM_FUNCTION
DBA_STREAMS_TRANSFORMATIONS
DBA_STREAMS_UNSUPPORTED
DBA_RULE_SET_RULES
DBA_RULE_SETS
DBA_RULES
DBA_HIST_BUFFERED_QUEUES
DBA_HIST_BUFFERED_SUBSCRIBERS
DBA_HIST_RULE_SET
DBA_HIST_STREAMS_APPLY_SUM
DBA_HIST_STREAMS_CAPTURE
DBA_HIST_STREAMS_POOL_ADVICE

Capture Views:
DBA_CAPTURE
DBA_CAPTURE_EXTRA_ATTRIBUTES
DBA_CAPTURE_PARAMETERS
DBA_CAPTURE_PREPARED_DATABASE
DBA_CAPTURE_PREPARED_SCHEMAS
DBA_CAPTURE_PREPARED_TABLES

Propagation & Queue Views:
DBA_PROPAGATION
DBA_QUEUE_SCHEDULES
DBA_QUEUE_SUBSCRIBERS
DBA_QUEUE_TABLES
DBA_QUEUES

Apply Views:
DBA_APPLY
DBA_APPLY_CONFLICT_COLUMNS
DBA_APPLY_DML_HANDLERS
DBA_APPLY_ENQUEUE
DBA_APPLY_ERROR
DBA_APPLY_EXECUTE
DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_INSTANTIATED_OBJECTS
DBA_APPLY_INSTANTIATED_SCHEMAS
DBA_APPLY_KEY_COLUMNS
DBA_APPLY_OBJECT_DEPENDENCIES
DBA_APPLY_PARAMETERS
DBA_APPLY_PROGRESS
DBA_APPLY_SPILL_TXN
DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_VALUE_DEPENDENCIES

Monitoring Utility STRMMON

STRMMON is a monitoring tool focused on Oracle Streams. Using this tool, database administrators get a quick overview of the Streams activity occurring within a database. strmmon reports information in a single-line display. The reporting interval and number of iterations to display are configurable. STRMMON is available in the rdbms/demo directory in $ORACLE_HOME. The most recent version of the tool is available from Document 290605.1 Oracle Streams STRMMON Monitoring Utility.

Alert Log

Streams capture and apply processes report long-running and large transactions in the alert log.

Long-running transactions are open transactions with no activity (ie, no new change records, rollback, or commit) for an extended period (20 minutes). Large transactions are open transactions with a large number of change records. The alert log will report that a long-running or large transaction has been seen every 20 minutes. Not all such transactions will be reported - only one per 10-minute period. When the commit or rollback is received, this fact is also reported in the alert log.

Streams Healthcheck Scripts

The Streams healthcheck script is a collection of queries to determine the configuration of the Streams environment. This script should be run at each participating database in a Streams configuration. In addition to configuration information, it analyzes the rules specified for Streams to enable quicker diagnosis of problems, and a guide to interpreting the output is provided. The healthcheck script is an invaluable tool for solving customer issues. It is available from Document 273674.1 Streams Configuration Report and Health Check Script.

To browse the complete list of published Streams articles, go to Knowledge > Browse, then select Oracle Technology -> Database -> Information Integration -> Streams.

To learn about Oracle University offerings related to Oracle Streams, refer to Document 762188.1 Oracle University Offerings Related to Oracle Streams.

References

NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:789445.1 - Master Note for Streams Setup Scripts
NOTE:1264598.1 - Master Note for Streams Downstream Capture - 10g and 11g [Video]
NOTE:313279.1 - Master Note for Troubleshooting Streams capture 'WAITING For REDO' or INITIALIZING
NOTE:779801.1 - Streams Conflict Resolution
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:437838.1 - Streams Specific Patches
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:259609.1 - Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk


NOTE:365648.1 - Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams
NOTE:782541.1 - Streams Replication Supplemental Logging Requirements
NOTE:313478.1 - Performing Manual DDL in a Streams Environment
NOTE:461278.1 - Example of a Streams Heartbeat Table

How to Create STRMADMIN User and Grant Privileges [ID 786528.1]

  Modified 02-SEP-2010     Type SCRIPT     Status PUBLISHED

 

In this Document  Purpose  Software Requirements/Prerequisites  Configuring the Script  Running the Script  Caution  Script  Script Output

Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6 - Release: 10.1 to 11.1
Information in this document applies to any platform.

Purpose

The following script is intended to be used by the DBA to create an administrator user for Streams.

Software Requirements/Prerequisites

This code is applicable to versions 10.x and above. 

Configuring the Script

Please run this script logged in as a  user who has SYSDBA privileges.

Running the Script

To run this script, set your environment so the values below match yours, or replace them in the script with values appropriate to your environment:

STRM1.NET = Global Database name of the Source (capture) Site
STRM2.NET = Global Database name of the Target (apply) Site

STRMADMIN = Streams Administrator with password strmadmin

Caution

This script is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected.

Script

connect <DBA user>/<password>@STRM1.NET as SYSDBA

create user STRMADMIN identified by STRMADMIN;

ALTER USER STRMADMIN DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

How to setup Streams Schema level replication using MAINTAIN_SCHEMAS procedure [ID 878638.1]

  Modified 10-SEP-2009     Type HOWTO     Status PUBLISHED

 

In this Document  Goal  Solution  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2
Information in this document applies to any platform.

Goal

This article provides the steps needed to set up a schema-level Streams environment using the set of procedures provided in the DBMS_STREAMS_ADM package.

This procedure configures a Streams environment that replicates changes to the specified schemas between two databases. It can either configure the environment directly, or generate a script that can be edited and used to configure the environment later.

Note that this procedure should be run at the capture database. The capture database is the database that captures changes made to the source database.

This procedure is overloaded: the schema_names parameter can be provided either as a VARCHAR2 or as a DBMS_UTILITY.UNCL_ARRAY. These overloads enable you to enter the list of schemas in different ways and are mutually exclusive.
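As a sketch of the two overloads (the schema names used here are hypothetical):

```sql
-- Overload 1: schema_names as a single comma-separated VARCHAR2, e.g.
--   schema_names => 'hr,scott'
-- Overload 2: schema_names as a DBMS_UTILITY.UNCL_ARRAY:
DECLARE
  schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
  schemas(1) := 'hr';
  schemas(2) := 'scott';
  -- ... pass schemas as the schema_names parameter of
  -- DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS along with the other arguments
  NULL;
END;
/
```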

For more information about the maintain_* procedures, please review the following article:

Article-ID: Note 864973.1
Title: How to setup Streams replication using DBMS_STREAMS_ADM.MAINTAIN_* set of procedures

Solution

The description of DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS. Syntax:

DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
  schema_names                 IN VARCHAR2,
  source_directory_object      IN VARCHAR2,
  destination_directory_object IN VARCHAR2,
  source_database              IN VARCHAR2,
  destination_database         IN VARCHAR2,
  perform_actions              IN BOOLEAN  DEFAULT TRUE,
  script_name                  IN VARCHAR2 DEFAULT NULL,
  script_directory_object      IN VARCHAR2 DEFAULT NULL,
  dump_file_name               IN VARCHAR2 DEFAULT NULL,
  capture_name                 IN VARCHAR2 DEFAULT NULL,
  capture_queue_table          IN VARCHAR2 DEFAULT NULL,
  capture_queue_name           IN VARCHAR2 DEFAULT NULL,
  capture_queue_user           IN VARCHAR2 DEFAULT NULL,
  propagation_name             IN VARCHAR2 DEFAULT NULL,
  apply_name                   IN VARCHAR2 DEFAULT NULL,
  apply_queue_table            IN VARCHAR2 DEFAULT NULL,
  apply_queue_name             IN VARCHAR2 DEFAULT NULL,
  apply_queue_user             IN VARCHAR2 DEFAULT NULL,
  log_file                     IN VARCHAR2 DEFAULT NULL,
  bi_directional               IN BOOLEAN  DEFAULT FALSE,
  include_ddl                  IN BOOLEAN  DEFAULT FALSE,
  instantiation                IN INTEGER  DEFAULT DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);

Prerequisites:

To use the MAINTAIN_SCHEMAS procedure, the following prerequisites must be met:

1. Set all the required database parameters related to Streams as in Note 418755.1.

2. Create the Streams administrator user account and grant all the mandatory permissions as in Note 786528.1. To use DBMS_STREAMS_ADM.MAINTAIN_*, the DBA privilege is required for the Streams administrator account.

3. Create the directory objects needed for the Data Pump export/import. If you also need to store the generated script for later use, you need a directory object for it; this can be the same one or a different one.

4. Create the required database links.

An example of configuring schema-level replication using the MAINTAIN_SCHEMAS procedure:

Two 10g databases, ORCL102A and ORCL102B, are involved.

conn / as sysdba
set echo on termout on
define source=ORCL102A
define SourceGlobal_name=ORCL102A.EG.ORACLE.COM
define target=ORCL102B
define TargetGlobal_name=ORCL102B.EG.ORACLE.COM

prompt /* Create streams admin at the source db: &&source */
conn sys/oracle@&&source as sysdba

CREATE USER strmadmin IDENTIFIED BY strmadmin
  DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs
/
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA TO strmadmin
/
BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => true);
END;
/

prompt /* Create streams admin at the target db: &&target */
conn sys/oracle@&&target as sysdba

CREATE USER strmadmin IDENTIFIED BY strmadmin
  DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs
/
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA TO strmadmin
/
BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => true);
END;
/

 

2. Create the directory objects needed for the Data Pump export and import as part of the setup operation. In this example we use the directory only to store the output script, which can be saved, edited, and used again:

CREATE DIRECTORY db_files_directory AS '/home/oracle/db_files';

3).Create the required database links:

conn strmadmin/strmadmin@&&source
create database link &&TargetGlobal_name
  connect to strmadmin identified by strmadmin
  using '&&target'
/

conn strmadmin/strmadmin@&&target
create database link &&SourceGlobal_name
  connect to strmadmin identified by strmadmin
  using '&&source'
/

 

4. The example in this section uses the procedure to configure a one-way DDL/DML Streams replication environment that maintains the SCOTT schema. The source database is ORCL102A, and the destination database is ORCL102B.

conn strmadmin/strmadmin@&&source define schema_name=scott begin dbms_streams_adm.maintain_schemas( schema_names=> '&&schema_name', source_directory_object=> null, destination_directory_object=> null, source_database=> '&&source', destination_database => '&&target', perform_actions => true, script_name =>'Schema_maintain_streams.sql', script_directory_object=>'db_files_directory', bi_directional=> false, include_ddl => true , instantiation=>dbms_streams_adm.instantiation_schema_network); end;

 

Note:

This procedure automatically excludes database objects in the schemas that are not supported by Streams from the replication environment by adding rules to the negative rule set of each capture and apply process. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which database objects are not supported by Streams. If unsupported database objects are not excluded, then capture errors will result.
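For example, a quick check of which objects would be excluded (the schema name here is hypothetical):

```sql
-- List objects in the SCOTT schema that Streams cannot capture;
-- the REASON column explains why each object is unsupported.
SELECT owner, table_name, reason
  FROM dba_streams_unsupported
 WHERE owner = 'SCOTT';
```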

If the bi_directional parameter is set to TRUE, then do not allow data manipulation language (DML) or data definition language (DDL) changes to the shared database objects at the destination database while the MAINTAIN_SCHEMAS procedure, or the script generated by the procedure, is running. This restriction does not apply to the source database.

References

NOTE:418755.1 - 10gR2 Streams Recommended Configuration
NOTE:786528.1 - How to Create STRMADMIN User and Grant Privileges
NOTE:864973.1 - How to setup Streams replication using DBMS_STREAMS_ADM.MAINTAIN_* set of procedures

How to setup Streams replication using DBMS_STREAMS_ADM.MAINTAIN_* set of procedures [ID 864973.1]

  Modified 06-DEC-2010     Type HOWTO     Status PUBLISHED

 

In this Document  Goal  Solution  References

Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2 - Release: 10.1 to 11.2
Information in this document applies to any platform.
Checked for broken links; none found. 18-Jan-2010

Goal

This article provides a very fast method to set up a Streams environment using the set of procedures provided in the DBMS_STREAMS_ADM package. These procedures support many levels of Streams setup: table level, schema level, and so on.

The following procedures are available:

dbms_streams_adm.MAINTAIN_SCHEMAS
dbms_streams_adm.MAINTAIN_SIMPLE_TABLESPACE
dbms_streams_adm.MAINTAIN_SIMPLE_TTS
dbms_streams_adm.MAINTAIN_TABLES
dbms_streams_adm.MAINTAIN_TABLESPACES
dbms_streams_adm.MAINTAIN_TTS
dbms_streams_adm.MAINTAIN_GLOBAL

Each of these procedures creates a different level of Streams setup. This article uses dbms_streams_adm.MAINTAIN_TABLES as an example, but all the procedures use almost the same information and the same guidelines.


Solution

The description of dbms_streams_adm.MAINTAIN_TABLES:

DBMS_STREAMS_ADM.MAINTAIN_TABLES(
  table_names                  IN VARCHAR2,
  source_directory_object      IN VARCHAR2,
  destination_directory_object IN VARCHAR2,
  source_database              IN VARCHAR2,
  destination_database         IN VARCHAR2,
  perform_actions              IN BOOLEAN  DEFAULT TRUE,
  script_name                  IN VARCHAR2 DEFAULT NULL,
  script_directory_object      IN VARCHAR2 DEFAULT NULL,
  dump_file_name               IN VARCHAR2 DEFAULT NULL,
  capture_name                 IN VARCHAR2 DEFAULT NULL,
  capture_queue_table          IN VARCHAR2 DEFAULT NULL,
  capture_queue_name           IN VARCHAR2 DEFAULT NULL,
  capture_queue_user           IN VARCHAR2 DEFAULT NULL,
  propagation_name             IN VARCHAR2 DEFAULT NULL,
  apply_name                   IN VARCHAR2 DEFAULT NULL,
  apply_queue_table            IN VARCHAR2 DEFAULT NULL,
  apply_queue_name             IN VARCHAR2 DEFAULT NULL,
  apply_queue_user             IN VARCHAR2 DEFAULT NULL,
  log_file                     IN VARCHAR2 DEFAULT NULL,
  bi_directional               IN BOOLEAN  DEFAULT FALSE,
  include_ddl                  IN BOOLEAN  DEFAULT FALSE,
  instantiation                IN INTEGER  DEFAULT DBMS_STREAMS_ADM.INSTANTIATION_TABLE);

Most of the procedure parameters have default values, except the first five:

Parameter Description

table_names The tables to be configured for replication and maintained by Streams after configuration.

source_directory_object The directory object for the directory on the computer system running the source database into which the generated Data Pump export dump file is placed. This file remains in the directory after the procedure completes. Can be NULL if network instantiation will be used.

destination_directory_object The directory object for the directory on the computer system running the destination database into which the generated Data Pump export dump file is transferred. Can be NULL if network instantiation will be used.

source_database The global name of the source database. If the value given for the source_database parameter does not match the global name of the database the procedure is run on, then the procedure configures downstream capture (and hence needs a third database). Check that you are using the global names of the databases you are trying to configure in the source_database and destination_database parameters.

destination_database The global name of the destination database.

In addition to the above parameters, there is another set of parameters that needs extra attention if used:

capture_queue_table IN VARCHAR2 DEFAULT NULL
capture_queue_name  IN VARCHAR2 DEFAULT NULL
apply_queue_table   IN VARCHAR2 DEFAULT NULL
apply_queue_name    IN VARCHAR2 DEFAULT NULL

When using dbms_streams_adm.maintain_*, the queue and queue table names cannot be more than 24 characters each. This has always been the restriction on queue names: for a queue name supplied to dbms_streams_adm.maintain_*, Oracle adds six extra characters (AQ$_ before the assigned name and _E after it), and since the maximum length of a queue name is 30 characters, you are left with only 24.
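A quick way to sanity-check candidate names before passing them to maintain_* (the names below are hypothetical):

```sql
-- Flag queue/queue-table names longer than the 24-character limit.
SELECT name,
       LENGTH(name) AS len,
       CASE WHEN LENGTH(name) <= 24 THEN 'OK' ELSE 'TOO LONG' END AS verdict
  FROM (SELECT 'MY_CAPTURE_QUEUE' AS name FROM dual
        UNION ALL
        SELECT 'A_VERY_LONG_CAPTURE_QUEUE_NAME' FROM dual);
```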

 

Can the source_directory_object directory point to an ASM diskgroup?

Answer:

No, it cannot point to an ASM diskgroup. The only way to make this work is to have the maintain_* procedure produce the script and then edit the script for ASM diskgroup access.

For more information about the other parameters, please check the following document:

Oracle Database PL/SQL Packages and Types Reference, 10g Release 2 (10.2), Part Number B14258-02
http://www.oracle.com/pls/db102/to_toc?pathname=appdev.102%2Fb14258%2Ftoc.htm&remark=portal+%28Information+Integration%29

DBMS_STREAMS_ADM should be executed on the capture database. If the bidirectional option has been chosen, then no DML should be run on the target database (this does not apply to the source). Streams has a function (dbms_streams.compatible_<version>) that can be used to check the compatibility of tables; it can be added to any positive rule (table, schema, or global level):

DBMS_STREAMS.COMPATIBLE_11_1
DBMS_STREAMS.COMPATIBLE_10_2
DBMS_STREAMS.COMPATIBLE_10_1
DBMS_STREAMS.COMPATIBLE_9_2

When using the MAINTAIN_* procedures to create the Streams environment, this type of rule is generated automatically.

To use the maintain_tables procedure, the following prerequisites must be met:

1. Set all the required database parameters related to Streams as in Note 418755.1.

2. Create the Streams administrator user account and grant all the mandatory permissions as in Note 786528.1. To use DBMS_STREAMS_ADM.MAINTAIN_*, the DBA privilege is required for the Streams administrator account.

3. Create the directory objects needed for the Data Pump export and import.

4. Create the required database links.

Unidirectional Streams Replication Example

Two 10g databases, ORC1 and ORC2, are involved.

1. Create the Streams administrator.

ORC1:

connect <DBA user>/<password>@ORC1 as SYSDBA

create user STRMADMIN identified by STRMADMIN; 

ALTER USER STRMADMIN DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN; 

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN'); 

ORC2:

connect <DBA user>/<password>@ORC2 as SYSDBA

create user STRMADMIN identified by STRMADMIN; 

ALTER USER STRMADMIN DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS; 

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN; 

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

2. Create the directory objects needed for the Data Pump export and import as part of the setup operation. In this example we use the directory only to store the output script, which can be saved, edited, and used again:

CREATE DIRECTORY db_files_directory AS '/usr/db_files';

3. Create the required database links.

On the source database, ORC1 in this example:

SQL> create database link orc2 connect to strmadmin identified by strmadmin using 'ORC2';

On the target database, ORC2: 

SQL> create database link orc1 connect to strmadmin identified by strmadmin using 'ORC1';

4. Set up replication for the following tables in the SCOTT schema: DEPT, EMP, BONUS, SALGRADE.

On the source:

CONNECT strmadmin/strmadmin

DECLARE
  tables DBMS_UTILITY.UNCL_ARRAY;
BEGIN
  tables(1) := 'scott.dept';
  tables(2) := 'scott.emp';
  tables(3) := 'scott.bonus';
  tables(4) := 'scott.salgrade';
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => tables,
    source_directory_object      => NULL,
    destination_directory_object => NULL,
    source_database              => 'orc1',
    destination_database         => 'orc2',
    perform_actions              => true,
    script_name                  => 'configure_rep.sql',
    script_directory_object      => 'db_files_directory',
    bi_directional               => false,
    include_ddl                  => false,
    instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK);
END;
/

The above execution assumes the following:
- The script will be saved to db_files_directory.
- The script will be executed immediately.
- The network will be automatically used to instantiate the tables (achieved by instantiation => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK).

On source:

Select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

On Target:

Select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

On source :

SQL> insert into scott.dept values (99,'IT','Cairo');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        99 IT             Cairo
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

On the target :

SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        99 IT             Cairo

The above shows that the newly inserted record has been transferred successfully to the target database.

How to setup replication using dbms_streams_adm from one source to two separate destinations?

Run the dbms_streams_adm script twice, once for each destination.

How to use the API to set up downstream replication from more than one source?

The best method is to generate the script only, and run it after making the required modifications for each source. Note that providing a source database name to the script that differs from the global_name of the local database instructs the script to create a downstream setup.

Use the MAINTAIN_* procedures with default settings as much as possible. For a downstream capture configuration where the capture and apply are colocated at the downstream database, be sure to specify the same queue name for both capture_queue_name and apply_queue_name. This eliminates the extraneous propagation in downstream capture.
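A sketch of that colocated downstream configuration (all database and queue names below are hypothetical):

```sql
-- Run at the downstream database, which is also the destination.
-- source_database differs from the local global name, so downstream
-- capture is configured; capture and apply share one queue, avoiding
-- the extraneous propagation.
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'scott',
    source_directory_object      => NULL,
    destination_directory_object => NULL,
    source_database              => 'SRC.EXAMPLE.COM',
    destination_database         => 'DST.EXAMPLE.COM',
    capture_queue_name           => 'streams_queue',
    apply_queue_name             => 'streams_queue');
END;
/
```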

If the procedure execution completed successfully (see the output example above), then the replication is up and ready. Otherwise, the following views can be used to troubleshoot:

DBA_RECOVERABLE_SCRIPT - Details about recoverable operations: shows the currently running script, which block is being executed, and the total number of blocks.

DBA_RECOVERABLE_SCRIPT_PARAMS - Details about the recoverable operation parameters used to run the script.

DBA_RECOVERABLE_SCRIPT_BLOCKS - Details about the recoverable script blocks: shows more detail about each block and exactly which tasks are performed by running it.

DBA_RECOVERABLE_SCRIPT_ERRORS - Details about errors during script execution; check it for more information about any error.

After checking the above views and detecting and fixing the error, you can continue running the script by using the following procedure (you can also use it to roll back the script):

DBMS_STREAMS_ADM.RECOVER_OPERATION(
  script_id      IN RAW,
  operation_mode IN VARCHAR2 DEFAULT 'FORWARD');

script_id

The operation id of the procedure invocation that is being rolled forward, rolled back, or purged. Query the SCRIPT_ID column of the DBA_RECOVERABLE_SCRIPT data dictionary view to determine the operation id.

operation_mode:

If FORWARD, then the procedure rolls forward the operation. Specify FORWARD to try to complete the operation.

If ROLLBACK, then the procedure rolls back all of the actions performed in the operation. If the rollback is successful, then the procedure purges all of the metadata about the operation.
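Putting this together, a sketch of resuming a failed run (the column list follows the view descriptions above; adjust to your release, and substitute the script id your query returns):

```sql
-- 1) Identify the failed operation.
SELECT script_id, invoking_procedure, done_block_num, total_blocks
  FROM dba_recoverable_script;

-- 2) Roll it forward after fixing the underlying error.
BEGIN
  DBMS_STREAMS_ADM.RECOVER_OPERATION(
    script_id      => '<script_id from the query above>',
    operation_mode => 'FORWARD');
END;
/
```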


If PURGE, then the procedure purges all of the metadata about the operation without rolling the operation back.

How to clear dbms_streams_adm.maintain_schemas recovery views after failure?

To perform the clean up you will have to remove the metadata directly from the Source database using:

DELETE FROM SYS.RECO_SCRIPT_BLOCK$ WHERE OID = '<script_id>';
DELETE FROM SYS.RECO_SCRIPT$ WHERE OID = '<script_id>';
DELETE FROM SYS.RECO_SCRIPT_ERROR$ WHERE OID = '<script_id>';
DELETE FROM SYS.RECO_SCRIPT_PARAMS$ WHERE OID = '<script_id>';
COMMIT;

If you have any doubts, please consult Oracle Support before taking such a step.

For more information, please review the complete details in the Oracle documentation:

Streams Replication Administrator's Guide 10g:
http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14228%2Ftoc.htm&remark=portal+%28Information+Integration%29

Streams Replication Administrator's Guide 11g:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/toc.htm

NOTE: Recommended Admin Interface For Streams

10.2.0.5 Grid Control offers an excellent set of manageability and monitoring features for Streams, and it should be used as the admin interface for Streams. You may refer to the following Note in this context:

Note 784021.1 Managing Streams from Oracle Enterprise Manager 10g Release 5 Grid Control

Also you may find the same information in the following document:

http://www.oracle.com/technology/products/dataint/pdf/gc10_2_0_5_streams_ext_with_notes.pdf

Setup Streams Replication Between Different Source and Target Schemas with Different Table Structures [ID 784899.1]

  Modified 12-SEP-2010     Type SAMPLE CODE     Status PUBLISHED

 

In this Document  Purpose  Software Requirements/Prerequisites  Configuring the Sample Code  Running the Sample Code  Caution  Sample Code  Sample Code Output


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6 - Release: 10.2 to 11.1
Information in this document applies to any platform.

Purpose

Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another. This article is intended to provide the steps for a DBA to set up Streams replication from one schema to another schema within the same database.

The provided script demonstrates setting up Streams Replication when the schemas have different table structures (different number of columns).

The sample code can be used by Oracle Support analysts and DBAs who need to set up Streams replication within the same database in Oracle 10.2 or higher.

Software Requirements/Prerequisites

The scripts provided can be used on any database version from Oracle Enterprise Edition 10.2.0.1 through 11.1.0.7.

The scripts need to be run in SQL*Plus.

Configuring the Sample Code

It is assumed that the database runs in ARCHIVELOG mode. If this is not the case, then you need to enable the ARCHIVELOG mode for the database before you run the scripts.

Running the Sample Code

The scripts need to be saved as "setup_streams_single_src.sql" and "streams_cleanup.sql" in an appropriate directory. The script "setup_streams_single_src.sql" sets up the Streams replication, and "streams_cleanup.sql" cleans up the changes made by the setup script. Read the instructions and warnings displayed during script execution and proceed accordingly.

The setup script can be run from SQL*Plus as follows:

SQL> connect / as sysdba
SQL> @setup_streams_single_src.sql

The cleanup script needs to be run from SQL*Plus as follows:

SQL> connect / as sysdba
SQL> @streams_cleanup.sql

Caution

This sample code is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this sample code before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample code may not be in an executable state when you first receive it. Check over the sample code to ensure that errors of this type are corrected.

Sample Code

--------------------------- setup_streams_single_src.sql --------------------------

SPOOL streams_single_src.log
SET SQLPROMPT ''
SET ECHO ON
/*
** Warning **
*************
The following script will remove any existing Streams configurations in your database.
The script will drop the STRMADMIN user if it exists and will create a new STRMADMIN user.
It also drops any existing users SHIP and OE.

You should not proceed with the script execution if there is an existing Streams setup
in the database; instead you may modify the script for your environment and re-execute.
*/
SET ECHO OFF
PROMPT Press ENTER to Continue or Press CTRL+C and type EXIT to abort
PAUSE
SET ECHO ON

/* 1. Remove the streams configuration from the database: */
CONNECT / as SYSDBA
EXECUTE DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
DROP USER STRMADMIN CASCADE;
DROP USER SHIP CASCADE;
DROP USER OE CASCADE;

/* 2. Setup STRMADMIN User: */
CONNECT / as sysdba
CREATE USER strmadmin IDENTIFIED BY strmadmin;
GRANT dba, connect, resource, aq_administrator_role TO strmadmin;
EXECUTE DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
ALTER SYSTEM SET aq_tm_processes=1;

/* 3. Setup Queue: */


CONNECT strmadmin/strmadmin
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_queue_table',
    queue_name  => 'streams_queue',
    queue_user  => 'strmadmin');
END;
/

/* 4. Create Source Table SHIP.ORDERS: */
CONNECT / as sysdba
CREATE USER ship IDENTIFIED BY ship;
GRANT connect, resource TO ship;
CONNECT ship/ship
CREATE TABLE SHIP.ORDERS(
  order_id   number(8) PRIMARY KEY,
  order_item varchar2(30),
  ship_no    number(8))
/

/* 5. Create Target Table OE.ORDERS in the same db with 3 extra columns: */
CONNECT / as sysdba
CREATE USER oe IDENTIFIED BY oe;
GRANT connect, resource TO oe;
CONNECT oe/oe
CREATE TABLE OE.ORDERS(
  order_id           number(8) PRIMARY KEY,
  order_item         varchar2(30),
  ship_no            number(8),
  ship_date          date,
  ship_zone          varchar2(10),
  ship_reach_by_date date)
/

/* 6. Add the apply rules: */
CONNECT strmadmin/strmadmin
SET SERVEROUTPUT ON
DECLARE
  v_dml_rule VARCHAR2(80);
  v_ddl_rule VARCHAR2(80);
  v_src_db   VARCHAR2(120);
BEGIN
  SELECT global_name INTO v_src_db FROM global_name;
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'OE.ORDERS',
    streams_type       => 'apply',
    streams_name       => 'streams_apply',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    source_database    => v_src_db,
    dml_rule_name      => v_dml_rule,
    ddl_rule_name      => v_ddl_rule,
    inclusion_rule     => true,
    and_condition      => NULL);
  DBMS_OUTPUT.PUT_LINE('Database GLOBAL_NAME => ' || v_src_db);
  DBMS_OUTPUT.PUT_LINE('Apply DML Rule for OE.ORDERS => ' || v_dml_rule);
END;
/
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'streams_apply',
    apply_user => 'strmadmin');
END;
/

/* 7. Add the capture rules, then prepare tables for instantiation: */
CONNECT strmadmin/strmadmin
SET SERVEROUTPUT ON
DECLARE
  v_dml_rule VARCHAR2(80);
  v_ddl_rule VARCHAR2(80);
  v_src_db   VARCHAR2(120);
BEGIN
  SELECT global_name INTO v_src_db FROM global_name;
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'SHIP.ORDERS',
    streams_type       => 'capture',
    streams_name       => 'streams_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    source_database    => v_src_db,
    dml_rule_name      => v_dml_rule,
    ddl_rule_name      => v_ddl_rule,
    inclusion_rule     => true,
    and_condition      => NULL);
  DBMS_OUTPUT.PUT_LINE('Database GLOBAL_NAME => ' || v_src_db);
  DBMS_OUTPUT.PUT_LINE('Capture DML Rule for SHIP.ORDERS => ' || v_dml_rule);
  DBMS_STREAMS_ADM.RENAME_SCHEMA(v_dml_rule, 'SHIP', 'OE');
END;
/
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'SHIP.ORDERS',
    supplemental_logging => 'keys');
END;
/

/* 8. Set the instantiation scn for SHIP.ORDERS: */
CONNECT strmadmin/strmadmin
SET SERVEROUTPUT ON
DECLARE
  iSCN     NUMBER;
  v_src_db VARCHAR2(120);
BEGIN
  iSCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  SELECT global_name INTO v_src_db FROM global_name;
  DBMS_OUTPUT.PUT_LINE('Instantiation SCN is: ' || iSCN);
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'SHIP.ORDERS',
    source_database_name => v_src_db,
    instantiation_scn    => iSCN);
  COMMIT;
END;
/

/* 9. Add the DML Handlers: */
CONNECT strmadmin/strmadmin
CREATE OR REPLACE PROCEDURE dml_handler(in_any IN sys.anydata)
IS
  lcr          SYS.LCR$_ROW_RECORD;
  rc           PLS_INTEGER;
  object_owner VARCHAR2(30);
BEGIN
  rc           := in_any.GETOBJECT(lcr);
  object_owner := lcr.GET_OBJECT_OWNER();
  IF object_owner = 'OE' THEN
    lcr.add_column('new', 'SHIP_DATE', sys.anydata.convertdate(SYSDATE));
    lcr.add_column('new', 'SHIP_ZONE', sys.anydata.convertvarchar2('NORTH'));
    lcr.add_column('new', 'SHIP_REACH_BY_DATE', sys.anydata.convertdate(SYSDATE+10));
    lcr.execute(TRUE);
  END IF;
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'OE.ORDERS',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => FALSE,
    user_procedure => 'STRMADMIN.DML_HANDLER');
END;
/

/* 10. Start the apply: */
CONNECT strmadmin/strmadmin
BEGIN
  DBMS_APPLY_ADM.START_APPLY('streams_apply');
END;
/

/* 11. Start the capture: */
CONNECT strmadmin/strmadmin
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE('streams_capture');
END;
/
SPOOL OFF
--------------------------- setup_streams_single_src.sql --------------------------

--------------------------- streams_cleanup.sql -----------------------------------

SPOOL streams_cleanup.log
SET SQLPROMPT ''
SET ECHO ON
/*
** Warning **
*************
The following script will remove any existing Streams configuration in your
database. The script drops the STRMADMIN user if it exists, and also drops the
users SHIP and OE if they exist. Do not proceed with the script execution if
there is an existing Streams setup in the database; instead, modify the script
for your environment and then re-execute it.
*/
SET ECHO OFF
PROMPT Press ENTER to Continue or Press CTRL+C and type EXIT to abort
PAUSE
CONNECT / AS SYSDBA
SET ECHO ON
EXECUTE DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
DROP USER STRMADMIN CASCADE;
DROP USER SHIP CASCADE;
DROP USER OE CASCADE;
SPOOL OFF
----------------------------------- streams_cleanup.sql -----------------------------------

Sample Code Output

SQL> CONNECT ship/ship
Connected.
SQL> INSERT INTO ship.orders VALUES(23450,'Printers',98456);

1 row created.

SQL> INSERT INTO ship.orders VALUES(23451,'Scanners',98457);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT * FROM ship.orders;

  ORDER_ID ORDER_ITEM      SHIP_NO
---------- ------------ ----------
     23450 Printers          98456
     23451 Scanners          98457

SQL> CONNECT oe/oe
Connected.
SQL> SET LINESIZE 200
SQL> SELECT * FROM oe.orders;

ORDER_ID ORDER_ITEM      SHIP_NO SHIP_DATE SHIP_ZONE  SHIP_REAC
-------- ------------ ---------- --------- ---------- ---------
   23450 Printers          98456 24-FEB-09 NORTH      06-MAR-09
   23451 Scanners          98457 24-FEB-09 NORTH      06-MAR-09

SQL>

Streams Bi-Directional Setup [ID 471845.1]

  Modified 12-JAN-2011     Type HOWTO     Status PUBLISHED

 

In this Document  Goal  Solution

Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.1.0.7 - Release: 9.2 to 11.1
Information in this document applies to any platform.

Goal

This note describes how to configure a bidirectional Streams setup.

Solution

Steps required for implementing a bidirectional setup:

Bidirectional Streams Setup

1. Assume you have a schema named "hr" in two different databases, src and dest.

2. Create the strmadmin user (a Streams administrator to manage the Streams setup) in both databases.

-- Create the streams tablespace and set the logmnr to use it.

CREATE TABLESPACE streams_tbs DATAFILE 'streams_tbs_01.dbf'
  SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

-- The following step is optional; in 10.2, LogMiner uses SYSAUX as its default tablespace.


exec DBMS_LOGMNR_D.SET_TABLESPACE ('streams_tbs');

-- Create the streams administrator.

Do the following at both (source) and (target).

CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;

GRANT DBA TO strmadmin;

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => true);
END;
/

-- Check that the Streams administrator has been created:

SELECT * FROM dba_streams_administrator;

3. Set the initialization parameters of both databases as per the following notes.

For 9i:- Note 297273.1  9i Streams Recommended Configuration

10g:- Note 418755.1  10.2.0.x.x Streams Recommendations

11g:-

http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_mprep.htm#i1010370
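The notes above give the full parameter recommendations per release. As a minimal, hedged illustration only (the values below are placeholders, not recommendations), the Streams-related parameters are typically set along these lines on both databases:

```sql
-- Illustrative values only; follow the recommendation note for your release.
ALTER SYSTEM SET global_names = TRUE SCOPE=BOTH;       -- database link names must match global names
ALTER SYSTEM SET streams_pool_size = 200M SCOPE=BOTH;  -- 10g and above; memory for capture/apply
ALTER SYSTEM SET job_queue_processes = 4 SCOPE=BOTH;   -- required for propagation jobs
```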

4. Create supplemental logging for the objects in the hr schema in both databases.
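Step 4 does not show the commands; a hedged sketch (the table names are examples from the sample HR schema) of adding primary-key supplemental logging is:

```sql
-- Log primary-key columns for each replicated table (run in both databases).
ALTER TABLE hr.employees   ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```

Note that adding capture rules with DBMS_STREAMS_ADM also prepares the schema objects for instantiation.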

5. Create database links under the user strmadmin in both src and dest databases.

At SRC :

create database link dest connect to strmadmin identified by strmadmin using 'dest';

At Dest :

create database link src connect to strmadmin identified by strmadmin using 'src';

6. Set up two queues, for capture and apply, in the SRC database as shown below:


conn strmadmin/strmadmin@src

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'apply_srctab',
    queue_name  => 'apply_src',
    queue_user  => 'strmadmin');
end;
/

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'capture_srctab',
    queue_name  => 'capture_src',
    queue_user  => 'strmadmin');
end;
/

7. Set up two queues, for capture and apply, in the DEST database as shown below:

conn strmadmin/strmadmin@dest

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'apply_desttab',
    queue_name  => 'apply_dest',
    queue_user  => 'strmadmin');
end;
/

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'capture_desttab',
    queue_name  => 'capture_dest',
    queue_user  => 'strmadmin');
end;
/

8. Configure capture process on SRC database.

conn strmadmin/strmadmin@src

begin
  dbms_streams_adm.add_schema_rules (
    schema_name    => 'hr',
    streams_type   => 'capture',
    streams_name   => 'captures_src',
    queue_name     => 'capture_src',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
end;
/

9. Configure apply process on SRC database


conn strmadmin/strmadmin@src

begin
  dbms_streams_adm.add_schema_rules (
    schema_name     => 'hr',
    streams_type    => 'apply',
    streams_name    => 'applys_src',
    queue_name      => 'apply_src',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'dest');
end;
/

10. If needed, set up conflict handlers for objects in hr@SRC. Refer to the following link in the Streams documentation:

http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14228/conflict.htm

11. Configure propagation process on SRC Database:

conn strmadmin/strmadmin@src

begin
  dbms_streams_adm.add_schema_propagation_rules (
    schema_name            => 'hr',
    streams_name           => 'prop_src_to_dest',
    source_queue_name      => 'capture_src',
    destination_queue_name => 'apply_dest@dest',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'src');
end;
/

12. Configure the capture process on the DEST database:

conn strmadmin/strmadmin@dest

begin
  dbms_streams_adm.add_schema_rules (
    schema_name  => 'hr',
    streams_type => 'capture',
    streams_name => 'captures_dest',
    queue_name   => 'capture_dest',
    include_dml  => true,
    include_ddl  => true);
end;
/

13. Set the schema instantiation SCN on SRC using the SCN of the DEST database:

connect strmadmin/strmadmin@dest


declare
  v_scn number;
begin
  v_scn := dbms_flashback.get_system_change_number();
  dbms_apply_adm.set_schema_instantiation_scn@src(
    source_schema_name   => 'hr',
    source_database_name => 'dest',
    instantiation_scn    => v_scn,
    recursive            => true);
end;
/

14. Configure the apply process on DEST:

connect strmadmin/strmadmin@dest

begin
  dbms_streams_adm.add_schema_rules (
    schema_name     => 'hr',
    streams_type    => 'apply',
    streams_name    => 'applys_dest',
    queue_name      => 'apply_dest',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'src');
end;
/

15. Configure the propagation process on DEST:

connect strmadmin/strmadmin@dest

begin
  dbms_streams_adm.add_schema_propagation_rules (
    schema_name            => 'hr',
    streams_name           => 'prop_dest_to_src',
    source_queue_name      => 'capture_dest',
    destination_queue_name => 'apply_src@src',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'dest');
end;
/

16. Set the schema instantiation SCN on the DEST database:

There are several ways to instantiate the hr schema on the DEST database.

If the objects do not already exist in the DEST database, instantiation can be done using export/import.

If the objects already exist, instantiation can be done using dbms_apply_adm.set_schema_instantiation_scn.

Assuming the objects already exist in hr@dest:

conn strmadmin/strmadmin@src


declare
  v_scn number;
begin
  v_scn := dbms_flashback.get_system_change_number();
  dbms_apply_adm.set_schema_instantiation_scn@dest(
    source_schema_name   => 'hr',
    source_database_name => 'src',
    instantiation_scn    => v_scn,
    recursive            => true);
end;
/

Ensure that supplemental logging is present for the objects in both the SRC and DEST databases.

17. If needed, configure conflict resolution in hr@DEST. Refer to the following link in the Streams documentation:

http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14228/conflict.htm

18. Start the capture and apply processes on DEST:

Start the apply process:

Set the parameter disable_on_error to 'N' so that the apply process continues processing row LCRs even when it encounters errors:

begin
  dbms_apply_adm.set_parameter (
    apply_name => 'applys_dest',
    parameter  => 'disable_on_error',
    value      => 'N');
end;
/

exec dbms_apply_adm.start_apply (apply_name => 'applys_dest');

Start the capture process on DEST:

exec dbms_capture_adm.start_capture (capture_name=>'captures_dest');

19. Start the capture and apply processes on SRC:

begin
  dbms_apply_adm.set_parameter (
    apply_name => 'applys_src',
    parameter  => 'disable_on_error',
    value      => 'N');
end;
/

exec dbms_apply_adm.start_apply (apply_name=> 'applys_src');

Start the capture process on SRC:


exec dbms_capture_adm.start_capture (capture_name=>'captures_src');

20. Test the bidirectional Streams setup with DML and DDL statements between the hr@SRC and hr@dest schemas.
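A hedged smoke test, using the sample HR table hr.regions as an example (any replicated table would do), could be:

```sql
-- On SRC: make a change and verify it arrives on DEST.
connect hr/hr@src
INSERT INTO hr.regions VALUES (99, 'Test Region');
COMMIT;

-- Allow time for capture/propagation/apply, then on DEST:
connect hr/hr@dest
SELECT * FROM hr.regions WHERE region_id = 99;

-- And in the reverse direction:
UPDATE hr.regions SET region_name = 'Renamed Region' WHERE region_id = 99;
COMMIT;
```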

References:

10g:- http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/repmultdemo.htm#STREP004

11g:- http://download.oracle.com/docs/cd/B28359_01/server.111/b28322/config_flex.htm#insertedID2

9i:- http://download.oracle.com/docs/cd/B10501_01/server.920/a96571/repmultdemo.htm#54726

Note 335516.1 Streams Performance Recommendations
Note 437838.1 Streams Specific Patches
Note 273674.1 Streams Configuration Report and Health Check Script
Note 290605.1 Oracle Streams STRMMON Monitoring Utility
Note 238455.1 Streams Supported and Unsupported Datatypes

How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]

  Modified 25-OCT-2010     Type SAMPLE CODE     Status PUBLISHED

 

In this Document  Purpose  Software Requirements/Prerequisites  Configuring the Sample Code  Running the Sample Code  Caution  Sample Code  Sample Code Output  References

Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
Information in this document applies to any platform.


Purpose

Starting from release 9.2, Oracle has introduced a more flexible and efficient way of implementing replication using Streams. Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another.

In a nutshell, replication using Oracle Streams is implemented in the following way:

1. A background capture process is configured to capture changes made to tables, schemas, or the entire database. The capture process captures changes from the redo log and formats each captured change into a logical change record (LCR). The capture process uses LogMiner to mine the redo/archive logs to format LCRs.
2. The capture process enqueues LCR events into a specified queue.
3. This queue is scheduled to propagate events from one queue to another in a different database.
4. A background apply process dequeues the events and applies them at the destination database.
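Each stage of this pipeline can be observed from the data dictionary. A hedged set of queries (view names as of 10g/11g) roughly corresponding to the four steps above:

```sql
-- 1. Capture process state
SELECT capture_name, state FROM v$streams_capture;
-- 2. Queues owned by the Streams administrator
SELECT name, queue_table FROM dba_queues WHERE owner = 'STRMADMIN';
-- 3. Propagation status
SELECT propagation_name, status FROM dba_propagation;
-- 4. Apply process status
SELECT apply_name, status FROM dba_apply;
```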

The steps below are intended to assist Replication DBAs in setting up and configuring Streams Replication. The sample code outlines the steps to set up one-way streams replication at Schema level.

Software Requirements/Prerequisites

Applicable from release 10.1.0.2 to 11.1.0.6.

Configuring the Sample Code

As a prerequisite, ensure the streams parameters are configured in the source and target instances as detailed in the relevant notes for your release:

Note 298877.1 10gR1 Streams Recommended Configuration
Note 418755.1 10gR2 Streams Recommended Configuration

It is highly recommended to run Oracle Streams with the latest available patchset for your OS/release combination. Also, take a look at Note 437838.1 Streams Specific Patches.

Running the Sample Code

To run this script, either set your environment so the values below are the same as yours or replace them in the script with values appropriate to your environment:

STRM1.NET = Global Database name of the Source (capture) Site
STRM2.NET = Global Database name of the Target (apply) Site

STRMADMIN = Streams Administrator with password strmadmin

HR = Source schema to be replicated - This schema is already installed on the source site

The sample code replicates both DML and DDL.


The Streams Administrator (STRMADMIN) has been created as per Note 786528.1 How to create STRMADMIN user and grant privileges.

Caution

This sample code is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this sample code before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample code may not be in an executable state when you first receive it. Check over the sample code to ensure that errors of this type are corrected.

Sample Code

 

Note:If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT" line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the databases in the environment.

 

/************************* BEGINNING OF SCRIPT ******************************
Run SET ECHO ON and specify the spool file for the script. Check the spool
file for errors after you run this script.
*/

SET ECHO ON
SPOOL stream_oneway.out

/* STEP 1.- Create the streams queue and the database links that will be used for propagation. */

connect STRMADMIN/[email protected]

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => 'STREAMS_QUEUE_TABLE',
     queue_name  => 'STREAMS_QUEUE',
     queue_user  => 'STRMADMIN');
END;
/

conn sys/&[email protected] as sysdba

create public database link STRM2.NET using 'strm2.net';

conn strmadmin/[email protected]

create database link STRM2.NET connect to strmadmin identified by strmadmin;


/* STEP 2.- Connect as the Streams Administrator in the target site strm2.net and create the streams queue */

connect STRMADMIN/[email protected] BEGIN    DBMS_STREAMS_ADM.SET_UP_QUEUE(      queue_table => 'STREAMS_QUEUE_TABLE',      queue_name  => 'STREAMS_QUEUE',      queue_user  => 'STRMADMIN'); END; /

/* STEP 3.- Add apply rules for the Schema at the destination database  */

BEGIN
   DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
     schema_name     => 'HR',
     streams_type    => 'APPLY',
     streams_name    => 'STREAM_APPLY',
     queue_name      => 'STRMADMIN.STREAMS_QUEUE',
     include_dml     => true,
     include_ddl     => true,
     source_database => 'STRM1.NET');
END;
/

/* STEP 4.- Add capture rules for the schema HR at the source database */

CONN STRMADMIN/[email protected]  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(    schema_name     => 'HR',    streams_type    => 'CAPTURE',    streams_name    => 'STREAM_CAPTURE',    queue_name      => 'STRMADMIN.STREAMS_QUEUE',    include_dml     => true,    include_ddl     => true,    source_database => 'STRM1.NET');END;/

/* STEP 5.- Add propagation rules for the schema HR at the source database. This step will also create a propagation job to the destination database */

BEGIN
   DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
     schema_name            => 'HR',
     streams_name           => 'STREAM_PROPAGATE',
     source_queue_name      => 'STRMADMIN.STREAMS_QUEUE',
     destination_queue_name => '[email protected]',
     include_dml            => true,
     include_ddl            => true,
     source_database        => 'STRM1.NET');
END;
/


/* STEP 6.- Export, import, and instantiation of tables from the source to the
destination database. If the objects are not present in the destination
database, perform an export of the objects from the source database and import
them into the destination database.

Export from the Source Database: specify the OBJECT_CONSISTENT=Y clause on the
export command. By doing this, an export is performed that is consistent for
each individual object at a particular system change number (SCN). */

HOST exp USERID=SYSTEM/&[email protected] OWNER=HR FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE

/* Import into the Destination Database: specify the STREAMS_INSTANTIATION=Y
clause on the import command. By doing this, the Streams metadata is updated
with the appropriate information in the destination database corresponding to
the SCN that is recorded in the export file. */

HOST imp USERID=SYSTEM/&[email protected] FULL=Y CONSTRAINTS=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y

/* If the objects are already present in the destination database, there are
two ways of instantiating the objects at the destination site:

1. By means of a metadata-only export/import: specify ROWS=N during export and
IGNORE=Y during import, along with the import parameters above.

2. By manually instantiating the objects.

Get the instantiation SCN at the source database:

connect STRMADMIN/[email protected]
set serveroutput on

DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/

Instantiate the objects at the destination database with this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are to be applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, then the apply process discards the LCR. Else, the apply process applies the LCR.

connect STRMADMIN/[email protected] BEGIN    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(      SOURCE_SCHEMA_NAME   => 'HR',      SOURCE_DATABASE_NAME => 'STRM1.NET',      RECURSIVE            => TRUE,

Page 58: Master Note for Streams Recommended Configuration

     INSTANTIATION_SCN    => &iscn ); END;

Enter value for iscn: <Provide the value of SCN that you got from the source database above>

In 10g, the recursive => true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN is used for instantiation. If you use recursive => true with SET_SCHEMA_INSTANTIATION_SCN, then you need a database link on the destination database to the source database, with the same name as the source database.

Refer to the following documentation: Oracle Database PL/SQL Packages and Types Reference 10g Release 2 (10.2) B14258-01

page 15-46 - SET_SCHEMA_INSTANTIATION_SCN Procedure

Otherwise the apply process aborts with an error such as:

ORA-26687: no instantiation SCN provided for "HR"."DEPARTMENTS" in source database "STRM1.NET"  */

/* STEP 7.- Specify an 'APPLY USER' at the destination database. This is the user who will apply all DML and DDL statements. The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects. */

conn strmadmin/[email protected]  DBMS_APPLY_ADM.ALTER_APPLY(    apply_name => 'STREAM_APPLY',    apply_user => 'HR');END;/

/* STEP 8.- Set disable_on_error to 'n' so the apply process does not abort on every error; then, start the apply process on the destination */

conn strmadmin/[email protected] BEGIN   DBMS_APPLY_ADM.SET_PARAMETER(    apply_name => 'STREAM_APPLY',    parameter  => 'disable_on_error',    value      => 'n');END;/

DECLARE
   v_started number;
BEGIN
   SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
   FROM DBA_APPLY WHERE APPLY_NAME = 'STREAM_APPLY';
   if (v_started = 0) then
      DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAM_APPLY');
   end if;
END;
/

Page 59: Master Note for Streams Recommended Configuration

/* STEP 9.- Set up capture to retain 7 days worth of logminer checkpoint information, then start the Capture process on the source */

conn strmadmin/[email protected] BEGIN  DBMS_CAPTURE_ADM.ALTER_CAPTURE(    capture_name              => 'STREAM_CAPTURE',    checkpoint_retention_time => 7);END;/

begin   DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE'); end; /

/* Check the Spool Results: check the stream_oneway.out spool file to ensure that all actions finished successfully after this script is completed. */

SET ECHO OFF
SPOOL OFF

/*************************** END OF SCRIPT ******************************/

Sample Code Output

/* Perform changes in tables belonging to HR on the source site and check that these are applied on the destination */

conn HR/[email protected]

insert into HR.DEPARTMENTS values (99,'OTHER',205,1700);
commit;

alter table HR.EMPLOYEES add (NEWCOL VARCHAR2(10)); 

/* Confirm that the insert has been applied to HR.DEPARTMENTS at the destination and that HR.EMPLOYEES now has a new column */

conn HR/[email protected]

select * from HR.DEPARTMENTS where department_id=99;

desc HR.EMPLOYEES;

References


NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:298877.1 - 10gR1 Streams Recommended Configuration
NOTE:300223.1 - Comparative Study Between Oracle Streams and Oracle Data Guard
NOTE:418755.1 - 10gR2 Streams Recommended Configuration
NOTE:437838.1 - Streams Specific Patches
NOTE:786528.1 - How to Create STRMADMIN User and Grant Privileges
Streams Replication Administrator's Guide

How to add a New Table to an Existing Streams Setup? [ID 833624.1]

  Modified 14-JAN-2011     Type HOWTO     Status MODERATED

 

In this Document  Goal  Solution


This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.

Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.1.0.7 - Release: 9.2 to 11.1
Information in this document applies to any platform.

Goal

How do you add a new table to an existing Streams setup?

Solution

Various scenarios for adding to an existing Streams environment are discussed in the documentation. Which sections are appropriate for you depends on your current setup and on exactly what you are doing.

The following demonstration requires two databases, hora10r24 and hora10r242 in this example, with uni-directional replication. The source database (hora10r24) needs to be running in archive log mode.

set echo on
set serveroutput on
spool setup.out

connect sys/oracle@hora10r24 as sysdba

exec dbms_propagation_adm.stop_propagation('STREAMS_PROPAGATION')


exec dbms_streams_adm.remove_streams_configuration;

drop user strmadmin cascade;

create user strmadmin identified by streams;

grant DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE to strmadmin;
grant CREATE DATABASE LINK to strmadmin;
grant CREATE ANY DIRECTORY to strmadmin;

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => true);
END;
/

ALTER USER strmadmin DEFAULT TABLESPACE USERS QUOTA UNLIMITED ON USERS;

drop user test cascade;

create user test identified by test;
grant connect, resource to test;
alter user test default tablespace users;

connect test/test@hora10r24

CREATE TABLE TESTA ( COL1A VARCHAR(4) PRIMARY KEY);

grant select, update, delete, insert on test.testA to strmadmin;

connect sys/oracle@hora10r242 as sysdba;

exec dbms_streams_adm.remove_streams_configuration;

drop user strmadmin cascade;

create user strmadmin identified by streams;

grant DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE to strmadmin;
grant CREATE DATABASE LINK to strmadmin;
grant CREATE ANY DIRECTORY to strmadmin;

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => true);
END;
/

ALTER USER strmadmin DEFAULT TABLESPACE USERS QUOTA UNLIMITED ON USERS;

drop user test cascade;


create user test identified by test;
grant connect, resource to test;
alter user test default tablespace users;

connect test/test@hora10r242

CREATE TABLE TESTA ( COL1A VARCHAR(4) PRIMARY KEY);

grant select, update, delete, insert on test.testa to strmadmin;

connect STRMADMIN/streams@hora10r24;

CREATE DATABASE LINK hora10r242 connect to strmadmin identified by streams using 'hora10r242';

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_capture_qt',
    queue_name  => 'streams_capture_q',
    queue_user  => 'strmadmin');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'test.testa',
    streams_name           => 'STREAMS_PROPAGATION',
    source_queue_name      => 'STRMADMIN.STREAMS_CAPTURE_Q',
    destination_queue_name => '[email protected]',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'hora10r24.uk.oracle.com',
    inclusion_rule         => true,
    queue_to_queue         => true);
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'test.testa',
    streams_type       => 'capture',
    streams_name       => 'STREAMS_CAPTURE',
    queue_name         => 'STRMADMIN.STREAMS_CAPTURE_Q',
    include_dml        => true,
    include_ddl        => true,
    source_database    => 'hora10r24.uk.oracle.com',
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/

connect STRMADMIN/streams@hora10r242;


CREATE DATABASE LINK hora10r24 connect to STRMADMIN identified by streams using 'hora10r24';

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_APPLY_QT',
    queue_name  => 'STREAMS_APPLY_Q',
    queue_user  => 'STRMADMIN');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'test.testa',
    streams_type       => 'apply',
    streams_name       => 'STREAMS_APPLY',
    queue_name         => 'STRMADMIN.STREAMS_APPLY_Q',
    include_dml        => true,
    include_ddl        => false,
    source_database    => 'hora10r24.uk.oracle.com',
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/

connect STRMADMIN/streams@hora10r24;

DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@hora10r242.uk.oracle.com(
    source_object_name   => 'test.testa',
    source_database_name => 'hora10r24.uk.oracle.com',
    instantiation_scn    => iscn);
END;
/

connect STRMADMIN/streams@hora10r242;

begin
  dbms_apply_adm.start_apply('STREAMS_APPLY');
end;
/

connect STRMADMIN/streams@hora10r24;

begin
  dbms_capture_adm.start_capture('STREAMS_CAPTURE');
end;
/

connect test/test@hora10r24

INSERT INTO testa VALUES('A');
commit;

connect sys/oracle@hora10r24 as sysdba


exec dbms_lock.sleep(60)

connect test/test@hora10r242

select * from testa;

spool off

Add the table to the environment.

set echo on
set serveroutput on
spool add.out

connect test/test@hora10r24

CREATE TABLE TESTB ( COL1B VARCHAR(4) PRIMARY KEY);

grant select, update, delete, insert on test.testb to strmadmin;

INSERT INTO testb VALUES('B');
commit;

connect STRMADMIN/streams@hora10r242;

begin
  dbms_apply_adm.stop_apply('STREAMS_APPLY');
end;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'test.testb',
    streams_type       => 'apply',
    streams_name       => 'STREAMS_APPLY',
    queue_name         => 'STRMADMIN.STREAMS_APPLY_Q',
    include_dml        => true,
    include_ddl        => false,
    source_database    => 'hora10r24.uk.oracle.com',
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/

connect STRMADMIN/streams@hora10r24;

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'test.testb',
    streams_name           => 'STREAMS_PROPAGATION',
    source_queue_name      => 'STRMADMIN.STREAMS_CAPTURE_Q',
    destination_queue_name => '[email protected]',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'hora10r24.uk.oracle.com',
    inclusion_rule         => true,
    queue_to_queue         => true);
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'test.testb',
    streams_type       => 'capture',
    streams_name       => 'STREAMS_CAPTURE',
    queue_name         => 'STRMADMIN.STREAMS_CAPTURE_Q',
    include_dml        => true,
    include_ddl        => true,
    source_database    => 'hora10r24.uk.oracle.com',
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/

-- Instantiate the table at the apply side.
-- This step could be performed by Data Pump.
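As an alternative to the manual CREATE TABLE and INSERT that follow, the instantiation could be done with Data Pump. A hedged sketch only (the directory object is a placeholder and the <scn> value must be supplied for your environment); a Data Pump import records the instantiation SCN for the imported table:

```sql
-- On the source: export test.testb consistent to a chosen SCN.
HOST expdp strmadmin/streams@hora10r24 TABLES=test.testb DIRECTORY=DATA_PUMP_DIR DUMPFILE=testb.dmp FLASHBACK_SCN=<scn>

-- On the destination: import the dump file.
HOST impdp strmadmin/streams@hora10r242 DIRECTORY=DATA_PUMP_DIR DUMPFILE=testb.dmp
```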

connect test/test@hora10r242

CREATE TABLE TESTB ( COL1B VARCHAR(4) PRIMARY KEY);

grant select, update, delete, insert on test.testb to strmadmin;

INSERT INTO testb VALUES('B');
commit;

connect STRMADMIN/streams@hora10r24;

DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@hora10r242.uk.oracle.com(
    source_object_name   => 'test.testb',
    source_database_name => 'hora10r24.uk.oracle.com',
    instantiation_scn    => iscn);
END;
/

connect STRMADMIN/streams@hora10r242;

begin
  dbms_apply_adm.start_apply('STREAMS_APPLY');
end;
/

connect test/test@hora10r24

INSERT INTO testb VALUES('B1');
commit;

connect sys/oracle@hora10r24 as sysdba

exec dbms_lock.sleep(30)

connect test/test@hora10r242

select * from testb;


spool off

Streams Table Level Replication Setup Script [ID 789500.1]

  Modified 05-JAN-2011     Type SCRIPT     Status PUBLISHED

 

In this Document  Purpose  Software Requirements/Prerequisites  Configuring the Script  Running the Script  Caution  Script  Script Output  References

Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6 - Release: 10.2 to 11.1
Information in this document applies to any platform.

Purpose

The following is a complete code sample that configures unidirectional streams replication at table level.

Software Requirements/Prerequisites

The script is applicable to 10.2.x and 11.x. Please ensure the following are set up as prerequisites.

1. Ensure the Streams parameters are configured in the source and target instances as detailed in Note 298877.1 10G Streams Recommended Configuration.

2. Create the Streams administrator user STRMADMIN as per Note 786528.1 'How to create STRMADMIN user and grant privileges'.

3. For additional supplemental logging requirements, check Note 782541 'Streams Replication Supplemental Logging Requirements' and create the necessary supplemental logging on the source.

Configuring the Script


To run this script, either set your environment so the values below are the same as yours, or replace them in the script with values appropriate to your environment:

STRM1.NET    = Global Database name of the Source (capture) Site
STRM2.NET    = Global Database name of the Target (apply) Site
STRMADMIN    = Streams Administrator with password strmadmin
HR.EMPLOYEES = table to be replicated to the target database

Running the Script

The script assumes that:

-- The sample HR schema is installed on the source site, STRM1.NET
-- A user HR_DEMO exists on the destination site, STRM2.NET
-- The target site table is empty.

Please cut and paste the script into a file, make the necessary changes, and run it from SQL*Plus.

Caution

This script is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected.

Script

/* Step 1 - Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at STRM1.NET. */

conn strmadmin/strmadmin@STRM1.NET

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_user  => 'STRMADMIN');
END;
/

conn sys/<password>@STRM1.NET as sysdba

create public database link STRM2.NET using 'strm2.net';

conn strmadmin/strmadmin@STRM1.NET

create database link STRM2.NET connect to strmadmin identified by strmadmin;
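Before moving on, it can help to verify that the private link (and the public link it relies on) actually resolves. This check is not part of the original script; it simply queries the target's global name over the new link:

```sql
-- Should return the target's global database name (STRM2.NET) if the link works
SELECT * FROM global_name@STRM2.NET;
```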


/* Step 2 - Connect as the Streams Administrator in the target site STRM2.NET and create the streams queue */

conn strmadmin/strmadmin@STRM2.NET

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_user  => 'STRMADMIN');
END;
/

/* Step 3 - Connected to STRM1.NET, create CAPTURE and PROPAGATION rules for HR.EMPLOYEES */

conn strmadmin/strmadmin@STRM1.NET

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'HR.EMPLOYEES',
    streams_name           => 'STRMADMIN_PROP',
    source_queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'STRM1.NET');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'HR.EMPLOYEES',
    streams_type    => 'CAPTURE',
    streams_name    => 'STRMADMIN_CAPTURE',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/

 

/*Step 4 - Connected as STRMADMIN at STRM2.NET, create APPLY rules for HR.EMPLOYEES */

conn STRMADMIN/strmadmin@STRM2.NET

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'HR.EMPLOYEES',
    streams_type    => 'APPLY',
    streams_name    => 'STRMADMIN_APPLY',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'STRM1.NET');
END;
/

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'HR');
END;
/


BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/
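With disable_on_error set to 'n', the apply process keeps running when an error is raised and moves the failed transaction to the error queue instead of aborting. A sketch (not part of the original script) of checking for such errors on the target:

```sql
-- Errored transactions retained in the apply error queue (run at STRM2.NET)
SELECT apply_name, local_transaction_id, error_message
  FROM dba_apply_error;
```

Errored transactions can then be re-executed or deleted with DBMS_APPLY_ADM.EXECUTE_ERROR / DELETE_ERROR once the cause is fixed.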

/*Step 7 - Take an export of the table at STRM1.NET */

exp USERID=SYSTEM/<password>@strm1.net TABLES=HR.EMPLOYEES FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE

/*Step 8 - Transfer the export dump file to STRM2.NET and import */

imp USERID=SYSTEM/<password>@strm2.net CONSTRAINTS=Y FULL=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y

/*Step 9 - Start Apply and capture */

conn strmadmin/strmadmin@STRM2.NET

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'STRMADMIN_APPLY');
END;
/

conn strmadmin/strmadmin@STRM1.NET

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STRMADMIN_CAPTURE');
END;
/

 

For a bidirectional Streams setup, run steps 1 through 9 again after interchanging DB1 and DB2. Caution should be exercised while setting the instantiation SCN this time, as one may not want to export and import the data again. The export option ROWS=N can be used for the instantiation of objects from DB2 --> DB1.
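As a sketch of the ROWS=N approach mentioned above (the dump file name is illustrative), a metadata-only export/import can set the instantiation SCN for the DB2 --> DB1 direction without moving any rows:

```
exp USERID=SYSTEM/<password>@strm2.net TABLES=HR.EMPLOYEES FILE=hr_meta.dmp ROWS=N OBJECT_CONSISTENT=Y
imp USERID=SYSTEM/<password>@strm1.net FULL=Y FILE=hr_meta.dmp IGNORE=Y STREAMS_INSTANTIATION=Y
```

Alternatively, DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN can be called directly, as shown with the heartbeat table example earlier in this note.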

 

 

Script Output

/* Perform changes on HR.EMPLOYEES and confirm that these are applied to tables on the destination */


conn hr/<password>@STRM1.NET

insert into hr.employees values
  (99999,'TEST','TEST','TEST@oracle','1234567',sysdate,'ST_MAN',null,null,null,null);
commit;

conn hr/<password>@STRM2.NET

select * From employees where employee_id=99999; 

How to Create STRMADMIN User and Grant Privileges [ID 786528.1]

  Modified 02-SEP-2010     Type SCRIPT     Status PUBLISHED

 

In this Document  Purpose  Software Requirements/Prerequisites  Configuring the Script  Running the Script  Caution  Script  Script Output

Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6 - Release: 10.1 to 11.1
Information in this document applies to any platform.

Purpose

The following script is intended to be used by the DBA to create an administrator user for Streams.

Software Requirements/Prerequisites

This code is applicable to versions 10.x and above. 

Configuring the Script

Please run this script logged in as a user who has SYSDBA privileges.

Running the Script


To run this script, set your environment so the values below are the same as yours, or replace them in the script with values appropriate to your environment:

STRM1.NET = Global Database name of the Source (capture) Site
STRM2.NET = Global Database name of the Target (apply) Site

STRMADMIN = Streams Administrator with password strmadmin

Caution

This script is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected.

Script

connect <DBA user>/<password>@STRM1.NET as SYSDBA

create user STRMADMIN identified by STRMADMIN;

ALTER USER STRMADMIN DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
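To confirm the grant succeeded, the DBA_STREAMS_ADMINISTRATOR view (populated by GRANT_ADMIN_PRIVILEGE in 10g and above) can be queried; this check is a suggestion, not part of the original script:

```sql
-- Lists users granted Streams administrator privileges
SELECT username, local_privileges, access_from_remote
  FROM dba_streams_administrator;
```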