IMPORTANT NOTE:
This covers only the NEW features of LogMiner; it is NOT an exhaustive study of how LogMiner works, so you need to already know how LogMiner works, as the beginning aspects are not covered here. NOTE: the majority of this info comes from Metalink doc id 148616.1, dated Oct 11, 2001.
<SHAMELESS PLUG ON>
If you want the total ins and outs of LogMiner, come and see me at IOUG-A. If all goes well, I'll be doing a 2 hr mini-lesson on LogMiner, from beginning (just what is a redo log?) through advanced (can I track DDL changes, and how does it affect me when looking at old data from before the table had a column added?), and we'll contrast/compare the command line versus the GUI LogMiner Viewer that comes with OEM.
<SHAMELESS PLUG OFF>
and now back to the show:
LogMiner has been enhanced quite a bit in 9i.
Here are the features that will be discussed in this segment:
New dictionary options (redo log files as dictionary, online data dictionary)
DDL tracking
Skipping past redo log corruption
Skipping uncommitted transactions
Chained/migrated rows
Clustered tables
Direct path insert logging
Supplemental logging
MINE_VALUE function
COLUMN_PRESENT function
GUI
---------------------------------------------------------------------
New Dictionary options:
Back in the 8i days, there was only the flat file dictionary option. This has been expanded to include:
Redo log files as dictionary: This is where the current dictionary is written to the redo logs. There are restrictions:
- DBMS_LOGMNR_D.BUILD must be run on an Oracle9i database
- The database must be in archivelog mode
- The COMPATIBLE parameter value must be set to 9.0.X
- The dictionary must belong to the same database as the redo logs to be analyzed
- No DDL is allowed during the dictionary extraction

SQL> execute DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_redo_logs);
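And for completeness, the BUILD step that writes the dictionary into the redo stream looks like this (a minimal sketch; STORE_IN_REDO_LOGS is the option that does it, and you run it before you mine):

SQL> execute DBMS_LOGMNR_D.BUILD(options => dbms_logmnr_d.store_in_redo_logs);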
Online data dictionary: This means you will be using the current data dictionary as it exists right now in the database. To instruct LogMiner to use the database data dictionary, simply provide this option to the DBMS_LOGMNR.START_LOGMNR procedure after adding the logs to be analyzed. No dictionary build is done.

SQL> execute DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_online_catalog);
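For context, a complete minimal session with the online catalog might look like this (the archived log name here is hypothetical, patterned after the paths used later in this note):

SQL> execute DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/database/9i/arch/1_684.dbf', options => dbms_logmnr.new);
SQL> execute DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_online_catalog);
SQL> select sql_redo from v$logmnr_contents;
SQL> execute DBMS_LOGMNR.END_LOGMNR;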
---------------------------------------------------------------------
DDL Tracking:
A. LogMiner automatically records the SQL statement used for a DDL operation as such, so that operations like a DROP/ALTER/CREATE table can be easily tracked. In Oracle8i, only the internal operations against the data dictionary are recorded, and it is difficult to track these operations (a DROP TABLE results in several DML statements against the data dictionary).

B. By specifying the DBMS_LOGMNR.DDL_DICT_TRACKING option when starting LogMiner, the LogMiner internal dictionary is updated if a DDL event is found in the redo log files. This allows the SQL_REDO in V$LOGMNR_CONTENTS to accurately display information for objects that are modified by user DML statements after the LogMiner dictionary is built. This option is not valid with the DICT_FROM_ONLINE_CATALOG option.
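Once LogMiner is started this way (with the DDL-containing logs added), the DDL itself shows up in V$LOGMNR_CONTENTS; assuming the OPERATION column tags those records as 'DDL' (which is how 9i marks them, as far as I know), you can pull just the DDL like this:

SQL> select sql_redo from v$logmnr_contents where operation = 'DDL';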
1. Build the dictionary:

SQL> execute DBMS_LOGMNR_D.BUILD('dictionary.ora', '/database/9i/logminer');

2. Alter the table to add a column:

SQL> alter table test add(c4 number);

3. Add the log which contains the ALTER statement:

SQL> execute DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/database/9i/arch/1_683.dbf', options => dbms_logmnr.new);

4. Start the log analysis:

=> Without the DDL_DICT_TRACKING option:

SQL> execute DBMS_LOGMNR.START_LOGMNR(dictfilename => '/database/9i/logminer/dictionary.ora');

V$LOGMNR_CONTENTS.SQL_REDO contains:

insert into "SCOTT"."TEST"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('c102'), HEXTORAW('c103'), HEXTORAW('c104'), HEXTORAW('c105'));

=> With the DDL_DICT_TRACKING option:

SQL> execute DBMS_LOGMNR.START_LOGMNR(dictfilename => '/database/9i/logminer/dictionary.ora', options => dbms_logmnr.ddl_dict_tracking);

V$LOGMNR_CONTENTS.SQL_REDO contains:

insert into "SCOTT"."TEST"("C1","C2","C3","C4") values ('1','2','3','4');

Note: You must be sure that you have included the log which contains the DDL statement in the logs to be analyzed with DBMS_LOGMNR.ADD_LOGFILE.
---------------------------------------------------------------------
Skipping past redo log corruption:
There is an option that can be passed when executing the DBMS_LOGMNR.START_LOGMNR procedure that will let you skip over log corruption. So when would this be useful? OK, you're in a recovery situation and you've hit a corrupted redo log. In most cases you're out of luck: you can't get past it, Oracle won't let you. But you need the data that was committed to the database since then (hopefully it's not a week's worth of redo). Call the procedure like this:

SQL> execute DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.skip_corruption);

This will work, with a caveat: as long as the corruption is NOT in the header of the redo log, you will be fine. If the corruption is in the header, you're still out of luck.
---------------------------------------------------------------------
Skip uncommitted transactions:
You can now tell LogMiner "I only want to see committed transactions". They will be returned in the v$logmnr_contents view in commit order. Call it like this:

SQL> execute DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.committed_data_only);
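By the way, the options are bit flags, so you can add them together; a sketch of my own (not from the Metalink note) combining this with the corruption skip from above:

SQL> execute DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.committed_data_only + dbms_logmnr.skip_corruption);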
---------------------------------------------------------------------
Chained/Migrated rows:
DML that is performed on chained/migrated rows will now be shown correctly in the v$logmnr_contents view. In the past in 8i, it showed up only as "internal operation".
---------------------------------------------------------------------
Clustered Tables:
Same here as for chained/migrated rows: LogMiner now supports clustered tables.
---------------------------------------------------------------------
Direct Path Insert Logging:
Direct path inserts are now logged as insert statements, and the OPERATION column is set to "DIRECT INSERT".
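To fish those out of v$logmnr_contents, something like this should do it (my own sketch, using the OPERATION value quoted above):

SQL> select sql_redo from v$logmnr_contents where operation = 'DIRECT INSERT';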
---------------------------------------------------------------------
Supplemental Logging:
Oracle9i has the ability to log columns in the redo which are not actually changed as part of the DML statements. This is useful for maintaining copies of tables on other databases. Prior to 9i, LogMiner only returned the columns which were changed and identified the row with a WHERE clause on the ROWID. But ROWIDs are not portable to other databases, so it was not possible to extract SQL using LogMiner which could be used on other databases.

There are two types of supplemental logging: database and table.

*** Database supplemental logging ***

Database supplemental logging allows you to specify logging of primary keys, unique indexes, or both. With this enabled, whenever a DML is performed, the columns involved in the primary key or unique index are always logged, even if they were not involved in the DML.

To turn on database-wide supplemental logging for both primary keys and unique indexes, execute the following:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

This only takes effect for statements which have not yet been parsed. It also invalidates all DML cursors in the cursor cache and therefore has an effect on performance until the cache is repopulated.
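If you want to check what's currently turned on, you should be able to query V$DATABASE (my assumption: the SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI columns that showed up in 9i for this):

SQL> select supplemental_log_data_pk, supplemental_log_data_ui from v$database;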
1. The EMP table has a primary key defined on the EMPID column. If supplemental logging is turned on for primary key columns, then any update to EMP logs the EMPID column.

SQL> select * from emp;

     EMPID        SAL
---------- ----------
        10     100000

SQL> update emp set sal=150000;

1 row updated.

Without supplemental logging, V$LOGMNR_CONTENTS.sql_redo contains:

update "SCOTT"."EMP" set "SAL" = '150000' where "SAL" = '100000' and ROWID = 'AAABOaAABAAANZ/AAA';

But with the supplemental logging as done above, V$LOGMNR_CONTENTS.sql_redo contains:

update "SCOTT"."EMP" set "SAL" = '150000' where "EMPID" = '10' and "SAL" = '100000' and ROWID = 'AAABOaAABAAANZ/AAA';

2. To turn off the supplemental logging, execute the following:

SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

*** Table-level supplemental logging ***
Table-level supplemental logging allows users to define log groups and specify which columns are always logged in the redo stream. It is done on a table-by-table basis. The ALWAYS keyword is used to indicate that if a row is updated, all columns in the group are logged. If ALWAYS is not used, the columns in the log group are logged only when at least one of the columns in the group is updated. You can define a log group using either the CREATE TABLE or the ALTER TABLE statement.

1. Create a table with a log group:

SQL> CREATE TABLE test_log(c1 number, c2 number, c3 number,
     SUPPLEMENTAL LOG GROUP test_always(c1,c2) ALWAYS);

SQL> select * from test_log;

        C1         C2         C3
---------- ---------- ----------
         1          2          3

2. Update a column that does not belong to the log group:

SQL> update test_log set c3=99;

1 row updated.

Without supplemental logging, V$LOGMNR_CONTENTS.sql_redo contains:

update "SCOTT"."TEST_LOG" set "C3" = '99' where "C3" = '3' and ROWID = 'AAABOZAABAAANZ6AAA';

With the log group test_always defined above, V$LOGMNR_CONTENTS.sql_redo contains:

update "SCOTT"."TEST_LOG" set "C3" = '99' where "C1" = '1' and "C2" = '2' and "C3" = '3' and ROWID = 'AAABOZAABAAANZ6AAA';

3. Define another log group:

SQL> alter table test_log add supplemental log group test_sometimes(c1,c2);

4.1 Update a column that belongs to the log group:

SQL> update test_log set c2=10;

1 row updated.

V$LOGMNR_CONTENTS.sql_redo contains:

update "SCOTT"."TEST_LOG" set "C2" = '10' where "C1" = '1' and "C2" = '2' and ROWID = 'AAABLtAABAAANYgAAA';

4.2 Update a column that does not belong to the log group:
=> no information is logged.

5. There are 2 sets of views for log groups:
ALL_/USER_/DBA_LOG_GROUPS - log group definitions on tables
ALL_/USER_/DBA_LOG_GROUP_COLUMNS - columns that are specified in a log group
6. To drop a log group, issue an ALTER TABLE:

SQL> ALTER TABLE test_log DROP SUPPLEMENTAL LOG GROUP test_always;
---------------------------------------------------------------------
MINE_VALUE function / COLUMN_PRESENT function:
A. DBMS_LOGMNR.MINE_VALUE returns the value of the column named in the second parameter, taken from the record image given in the first parameter (either REDO_VALUE or UNDO_VALUE).

Example: To return update statements executed against SCOTT.EMP which changed SAL to more than twice its original value, the following SQL could be used:

SQL> SELECT sql_redo
     FROM v$logmnr_contents
     WHERE dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') >
           2*dbms_logmnr.mine_value(undo_value,'SCOTT.EMP.SAL')
     AND operation='UPDATE';

B. DBMS_LOGMNR.COLUMN_PRESENT returns 1 if the specified column is contained in a redo record and 0 if it is not.

Note: COLUMN_PRESENT returns a 1 not only if the column was involved in a DML, but also if the column is logged as a result of being part of an identification key or a log group.

C. MINE_VALUE can return a NULL in two scenarios:
1. The redo record contains the column and its value is actually NULL.
2. The redo record does not contain the column.

So, COLUMN_PRESENT can be used to supplement the MINE_VALUE function, to distinguish NULLs which are the result of a DML that changed the column to NULL from NULLs indicating the column was not present in the redo log.

Example: To select redo values for the SAL column and filter out the case where SAL is not present in the redo, the following SQL could be used:

SQL> SELECT dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL')
     FROM v$logmnr_contents
     WHERE dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') is NOT NULL
     OR (dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') is NULL
     AND dbms_logmnr.column_present(redo_value,'SCOTT.EMP.SAL')=1);
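Another handy pattern (my own sketch, not from the Metalink note) is pulling the before and after images side by side; SEG_OWNER and SEG_NAME are regular V$LOGMNR_CONTENTS columns:

SQL> SELECT dbms_logmnr.mine_value(undo_value,'SCOTT.EMP.SAL') old_sal,
            dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') new_sal
     FROM v$logmnr_contents
     WHERE operation='UPDATE' AND seg_owner='SCOTT' AND seg_name='EMP';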
These functions do not support LONG, LOB, ADT, or COLLECTION data types.
---------------------------------------------------------------------
GUI:
Just briefly, as this has been a long one to write (and read, for you all):

The LogMiner Viewer is part of Oracle Enterprise Manager (OEM). It gives you GUI-type people (you know who you are) the capability to do the pretty boxes, etc. You will need to log in as SYSDBA for LogMiner Viewer to work. The box pops up and shows 3 tabs:

Query Criteria: (graphical and textual modes) This is where you choose which columns you want to query on and the value you're searching for, the date range of when you want to mine, and options to save the query to reuse at a later time. Feel free to leave them blank to see everything in the logs.

Redo Log Files: This is where you choose which logs you want to mine (archived and online redo logs). This is also the place where you decide what kind of dictionary to use; the default is the online catalog.

Display Options: Here you choose which columns you want to display while mining.

Once you have all the options set, hit the "Find Now" button and LogMiner is off and running.
This covers all that I'll talk about on 9i features for LogMiner. For any further info, feel free to ask me offline at 9i_at_oracle-dba.com
Joe
Received on Tue Oct 30 2001 - 12:28:33 CST