LogMiner puzzle - CLOB datatype
From: Michael Rosenblum <mrosenblum_at_dulcian.com>
Date: Mon, 6 Feb 2012 15:27:36 -0500
Message-ID: <2673A0170447634DA7F7ABCA51E89FA9016CFB941876_at_MAIL2.dulcian.local>
Hello, everybody!
I need a bit of advice on whether the following behavior is wrong and requires an SR to be initiated - or I am just missing something.
Setting:
- Oracle 11.2.0.3 Enterprise Edition 64-bit on Win 2008.
- Database is running in ARCHIVELOG mode with supplemental logging enabled
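For anyone setting up a similar test, supplemental logging can be enabled at the database level along these lines (a minimal sketch; your setup may use more granular, table-level options):

```sql
-- Minimal database-level supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Optionally also log primary-key columns so redo rows can be tied back to keys
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```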
Test #1. Initial discovery of a problem
- Setup:
- I created a table MISHA_TEST that contains a CLOB column: create table misha_test (a number primary key, b_cl CLOB)
- I ran an anonymous block that inserts into this table WITHOUT referencing the CLOB column: begin insert into misha_test (a) values (1); commit; end;
- I looked at the generated logs via LogMiner and found the following entries in V$LOGMNR_CONTENTS:
SQL_REDO set transaction read write;
insert into "AFRISSR"."MISHA_TEST"("A","B_CL") values ('1',EMPTY_CLOB());
set transaction read write;
commit;
update "AFRISSR"."MISHA_TEST" set "B_CL" = NULL where "A" = '1' and ROWID = 'AAAj90AAKAACfqnAAA';
commit;
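For anyone reproducing this, a minimal LogMiner session along these lines should surface the same entries (a sketch; the archived log file name is hypothetical and must be replaced with a real one):

```sql
-- Start a LogMiner session on one archived log (file name is hypothetical)
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => 'C:\oracle\arch\ARC0001_0001.LOG',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
-- Inspect the redo generated for the test table
SELECT sql_redo
FROM   v$logmnr_contents
WHERE  seg_name = 'MISHA_TEST';
-- End the session when done
EXEC DBMS_LOGMNR.END_LOGMNR;
```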
Puzzle:
- Why do we have two operations for a single insert - first writing EMPTY_CLOB() into B_CL and then updating it to NULL? I didn't even touch the column B_CL! Seems very strange - why can't NULL be written to B_CL from the very beginning?
Test #2. Quantification
Question:
- Having a LOB column in a table seems to cause overhead by generating more redo. But can it be quantified?
Assumption:
- My understanding is that CLOBs defined with ENABLE STORAGE IN ROW (the default) behave like VARCHAR2(4000) up to ~4 KB, and only when the size goes above that do the real LOB mechanisms kick in.
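Inline storage is the default, but it can be stated explicitly in the DDL; a sketch (the table name here is made up for illustration):

```sql
-- Explicitly request inline (in-row) storage for the CLOB; this is the default
CREATE TABLE misha_test_inline (
  a    NUMBER PRIMARY KEY,
  b_cl CLOB
) LOB (b_cl) STORE AS (ENABLE STORAGE IN ROW);
```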
Basic test:
- Two tables:
- With CLOB: create table misha_test_clob2 (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl CLOB)
- With VARCHAR2: create table misha_test_clob (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl VARCHAR2(4000))
- Switch logfile, insert 1000 rows populating only A_NR, then switch logfile again:
insert into misha_test_clob (a_nr)
select level
from dual
connect by level < 1001
- Check sizes of generated logs:
- With CLOB - 689,664 bytes
- With Varchar2 - 509,440 bytes (or about 26% reduction)
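As an aside, redo volume can also be quantified without log switches by sampling the session statistic 'redo size' before and after the insert; a sketch (not the method used above):

```sql
-- Current session's cumulative redo in bytes; run before and after the
-- INSERT and subtract to get the redo generated by that statement
SELECT n.name, s.value AS redo_bytes
FROM   v$mystat   s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';
```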
Summary:
- The overhead is real: a table with a VARCHAR2 column is cheaper to maintain.
- Having LOB columns in a table that takes tons of INSERT operations is expensive.
So, does anybody care? Comments/suggestions are very welcome!
Thanks a lot!
Michael Rosenblum
Dulcian Inc
-- http://www.freelists.org/webpage/oracle-l
Received on Mon Feb 06 2012 - 14:27:36 CST