Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Putting in a counter, picking a reasonable batch size, and committing and resetting the counter when you hit the limit is usually useful.

Even if a single commit for the whole table works for him now, there is a good chance it will blow up later as the tables grow. Committing each row is probably excessive, but committing one monolithic transaction is a recipe for future problems and needlessly drives UNDO out of the cache. I recommend avoiding monolithic commits unless there is a hard requirement for reversibility (rollback); avoiding a program architecture that drives a need for monolithic commits is up there with the golden mean as far as I'm concerned.
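[A minimal sketch of the counter pattern described above; "some_cursor" and the batch size of 10000 are illustrative placeholders, not from the original post:]

```sql
declare
  v_count pls_integer := 0;
begin
  for r in some_cursor loop
    -- ... do the row's work here ...
    v_count := v_count + 1;
    if v_count >= 10000 then   -- a "reasonable size"; tune to your UNDO
      commit;                  -- commit the batch...
      v_count := 0;            -- ...and reset the counter
    end if;
  end loop;
  commit;                      -- pick up the final partial batch
end;
/
```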
mwf
-----Original Message-----
From: Anthony Molinaro [mailto:amolinaro_at_wgen.net]
Sent: Wednesday, November 03, 2004 3:05 PM
To: mwf_at_rsiz.com; oracle-l_at_freelists.org
Subject: RE: Update of Clobs *Performance*
In regard to:
> Even better, just commit once at the end...
-----Original Message-----
From: Mark W. Farnham [mailto:mwf_at_rsiz.com]
Sent: Wednesday, November 03, 2004 3:00 PM
To: oracle-l_at_freelists.org
Subject: RE: Update of Clobs *Performance*
create index why_full_scan_all_my_clobs_for_each_one_row_update on tableB(tabB_num);

change your where clause to: where tabB_num = to_number(v_id)

Think about a commit counter within the loop, committing in batches smaller than the entire table. Maybe every 1000 or 10000 rows?
Regards,
mwf
<snip>
Procedure:

declare
  v_clob  varchar2(32500);
  v_id    varchar2(10);
  cursor cont_rep_clob is
    select tabA_char, tabA_clob
      from Table_A;
begin
  open cont_rep_clob;
  loop
    fetch cont_rep_clob into v_id, v_clob;
    exit when cont_rep_clob%NOTFOUND;
    update Table_B
       set tabB_clob = v_clob
     where to_char(tabB_num) = v_id;
    commit;
  end loop;
  close cont_rep_clob;
end;
/
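[Combining the two suggestions in Mark's reply, namely a plain index on tabB_num with the conversion moved to the variable side of the predicate, plus a batched commit, the procedure might be rewritten as below. The counter size of 1000 is just an example, not a value from the original thread:]

```sql
declare
  v_clob  varchar2(32500);
  v_id    varchar2(10);
  v_count pls_integer := 0;
  cursor cont_rep_clob is
    select tabA_char, tabA_clob
      from Table_A;
begin
  open cont_rep_clob;
  loop
    fetch cont_rep_clob into v_id, v_clob;
    exit when cont_rep_clob%NOTFOUND;
    update Table_B
       set tabB_clob = v_clob
     where tabB_num = to_number(v_id);  -- index on tabB_num is now usable
    v_count := v_count + 1;
    if v_count >= 1000 then
      commit;                           -- commit the batch
      v_count := 0;                     -- reset the counter
    end if;
  end loop;
  close cont_rep_clob;
  commit;                               -- final partial batch
end;
/
```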
--
http://www.freelists.org/webpage/oracle-l

Received on Wed Nov 03 2004 - 14:13:37 CST