Data Synchronization Issue [message #345518]
Wed, 03 September 2008 22:52
n_joy
Junior Member | Messages: 2 | Registered: September 2008
Hi All,
Need ideas from you. Let me describe the scenario. Suppose we have a main database on one side (say A) and an external system on the other (say C). In between, we have a staging database (say B).
Say we have one record with bank information in both database A and database B, and the following activities are performed:
1) The "Bank Name" column in database B is updated by the external system C. When this Bank Name is updated in database B by external system C, we need to update the value of this field in database A.
2) A couple of days later, the bank phone number of the same bank record is updated in the main database A, and the same update needs to be reflected in staging database B.
How can we take care of both of these data-synchronization activities? What are the different approaches we can take? FYI, we are on Oracle 10g Release 2 and Windows.
Cheers!
Jay
Re: Data Synchronization Issue [message #345519 is a reply to message #345518]
Wed, 03 September 2008 23:09
rajatratewal
Senior Member | Messages: 507 | Registered: March 2008 | Location: INDIA
Oracle Streams can be used for replication. Here is an outline of mirroring a schema through Streams.
Streams processing is divided into three main processes (capture, staging and apply):
- The capture process is an optional background process that mines DDL and DML changes from the redo logs and wraps them up as Logical Change Records (LCRs). In addition to the default capture mechanism, user-defined events can be enqueued directly. Currently, the capture process cannot capture changes to index-organized tables (only regular tables).
- Staging involves storing the LCRs in queues of datatype SYS.ANYDATA. LCRs can be propagated between a source and a destination staging area in different databases if necessary. Propagation is scheduled using job queues.
- The apply process is an optional background process that dequeues LCRs and either applies them directly or, for user-defined messages, passes them as parameters to user-defined packages.
Prerequisites:
1) AQ_TM_PROCESSES: Establishes the queue monitor processes (at least 1). These processes are responsible for managing time-based operations on messages such as delay and expiration, cleaning up retained messages after the specified retention time, and cleaning up consumed messages if the retention time is 0.
2) GLOBAL_NAMES: Must be TRUE, so that database link names match the global names of the databases used in replication.
3) JOB_QUEUE_PROCESSES: Must be set to at least 2.
4) COMPATIBLE: Must be 9.2.0 or higher.
5) LOG_PARALLELISM: Must be set to 1.
6) Any database that is producing changes must be in ARCHIVELOG mode (in our case, the production database). A sketch of setting these parameters follows this list.
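As a minimal sketch of these prerequisites, assuming the instance uses an spfile (the values shown are illustrative minimums, not tuned recommendations):

-- Streams-related initialization parameters:
ALTER SYSTEM SET AQ_TM_PROCESSES=1;
ALTER SYSTEM SET GLOBAL_NAMES=TRUE;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=2;
ALTER SYSTEM SET COMPATIBLE='10.2.0' SCOPE=SPFILE;  -- static parameter; takes effect after restart

-- Put the source database in ARCHIVELOG mode (requires a restart):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;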
Steps for implementing:
1) Set an alternate tablespace for the LogMiner tables. By default, the LogMiner tables are in the SYSTEM tablespace:
EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('&tspace_name');
2) Create the Streams administrator and grant this user the necessary privileges. These privileges enable the user to manage queues, execute subprograms in packages related to Streams, create rule sets, create rules, and monitor the Streams environment by querying data dictionary views and queue tables.
GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin IDENTIFIED BY strmadminpw;

ACCEPT streams_tbs PROMPT 'Enter Streams administrator tablespace on mult1.net: '
ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs QUOTA UNLIMITED ON &streams_tbs;

GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/
3) Create the Streams queue by executing this procedure:
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
Running this procedure does the following:
1) Creates a queue table named STREAMS_QUEUE_TABLE.
2) Creates a queue named STREAMS_QUEUE.
3) Starts the queue.
4) Create database links from the current database to the other databases in the environment:
CREATE DATABASE LINK &dblinkname CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING '&targetdatabase';
5) Prepare the tables for latest-time conflict resolution (may not be needed in our case); see the sketch below.
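For completeness, a minimal sketch of such a latest-time conflict handler, assuming a hypothetical HR.BANKS table carrying a TIME_STAMP column (both names are illustrative, not from the original setup):

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  -- Columns covered by the handler; time_stamp decides which row version wins.
  cols(1) := 'bank_name';
  cols(2) := 'time_stamp';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.banks',   -- hypothetical table
    method_name       => 'MAXIMUM',    -- keep the row with the latest time_stamp
    resolution_column => 'time_stamp',
    column_list       => cols);
END;
/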
6) Enable supplemental logging at the database level and the table level. This records the additional information the capture process needs to propagate changes; see the sketch below.
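A minimal sketch, run on the source database (HR.BANKS is again a placeholder table):

-- Database-level supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Table-level logging of the primary-key columns:
ALTER TABLE hr.banks ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;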
7) Create the capture process:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'hr',
    streams_type => 'capture',
    streams_name => 'capture_hr',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => true);
END;
/
8) Create the apply process (configured at the target database, because in our case changes will be propagated from the production database):
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'hr',
    streams_type    => 'apply',
    streams_name    => 'apply_from_mult2',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'mult2.net');
END;
/
9) Configure propagation of changes:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => '<StreamName>',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@<TargetDatabase>',
    include_dml            => true,
    include_ddl            => true,
    source_database        => '<sourceDatabase>');
END;
/
10) Configure the instantiation SCN. The instantiation SCN of the source table must be recorded at the destination before the apply process will work. If the destination table is already present, this can be accomplished using a metadata-only export/import, as sketched below:
- Export the schema from the source.
- Import it at the destination with STREAMS_INSTANTIATION=y.
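A sketch of both usual options; the connect strings and file name are placeholders, and the database link is the one created in step 4:

-- Option 1: metadata-only export/import with the original exp/imp tools
-- (run from the OS shell):
exp userid=strmadmin@mult2 owner=hr rows=n file=hr_meta.dmp
imp userid=strmadmin@mult1 fromuser=hr touser=hr file=hr_meta.dmp STREAMS_INSTANTIATION=y

-- Option 2: set the SCN explicitly. Run at the source database, pushing the
-- current SCN to the destination over the database link:
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@&dblinkname(
    source_schema_name   => 'hr',
    source_database_name => 'mult2.net',
    instantiation_scn    => iscn);
END;
/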
11) Start the apply process at the target database:
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => '<apply_name>',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => '<apply_name>');
END;
/
12) Start the capture process at the source database:
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_hr');
END;
/
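Once both processes are started, a quick sanity check (not part of the original steps) is to query the data dictionary; both should report ENABLED:

SELECT capture_name, status FROM DBA_CAPTURE;
SELECT apply_name, status FROM DBA_APPLY;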
Regards,
Rajat
Attachment: Diagram.jpg
[Updated on: Thu, 04 September 2008 01:43]