Re: Deleting Large Numbers of Rows w/o Filling up my redo logs

From: Mladen Gogala <mgogala_at_yahoo.com>
Date: Sun, 21 Dec 2008 12:50:22 +0000 (UTC)
Message-ID: <gile2e$rao$3@ss408.t-com.hr>


On Sat, 20 Dec 2008 21:36:06 +0000, Palooka wrote:

> I'd look at weekly partitions.
>
> Palooka

Partitioning is largely a DW technology and will introduce a myriad of new problems, like the invalidation of all global indexes during partition maintenance. If I remember correctly, the primary key, an item well advised for all OLTP tables, can only be implemented as a global index unless the partitioning key is part of the key. In other words, one would have to transition the table in question to a state without a primary key and open the door wide to logical corruption (does the phrase "duplicate records" ring a bell?).
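As an illustration (a hedged sketch; the TXN_LOG table, its columns and the weekly range partitions are invented for the example, not taken from this thread): a unique index that does not include the partitioning key cannot be local, so the primary key ends up on a global index, and dropping old partitions then marks that index unusable unless UPDATE GLOBAL INDEXES is specified.

CREATE TABLE txn_log (
   txn_id   NUMBER,
   created  DATE,
   payload  VARCHAR2(4000)
)
PARTITION BY RANGE (created) (
   PARTITION p_2008_w51 VALUES LESS THAN (TO_DATE('2008-12-22','YYYY-MM-DD')),
   PARTITION p_2008_w52 VALUES LESS THAN (TO_DATE('2008-12-29','YYYY-MM-DD'))
);

-- Fails with ORA-14039: the partitioning column (CREATED) is not part of
-- the unique key, so the index enforcing it cannot be LOCAL.
CREATE UNIQUE INDEX txn_log_pk ON txn_log (txn_id) LOCAL;

-- The primary key therefore has to be backed by a global index ...
ALTER TABLE txn_log ADD CONSTRAINT txn_log_pk PRIMARY KEY (txn_id);

-- ... and purging by dropping a partition marks that global index
-- UNUSABLE unless UPDATE GLOBAL INDEXES is added, at extra cost.
ALTER TABLE txn_log DROP PARTITION p_2008_w51;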

In my opinion, deleting a large number of records in a single statement is an ill-advised tactic. I usually do things like that by scheduling a job through DBMS_SCHEDULER or DBMS_JOB, depending on the version, to run frequently and destroy at most N records in each run, using a ROWNUM <= N predicate. In other words, instead of 1 run deleting 10 million records, I will have 1,000 runs deleting 10,000 records each time.
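A minimal sketch of that approach, assuming Oracle 10g or later (so DBMS_SCHEDULER is available) and a purge rule on the same made-up TXN_LOG table and CREATED column as above; the table, column, retention and run interval are illustrative only:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'PURGE_TXN_LOG',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN
                            DELETE FROM txn_log
                             WHERE created < SYSDATE - 90
                               AND ROWNUM <= 10000;
                            COMMIT;
                          END;',
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',
      enabled         => TRUE);
END;
/

Each run generates only a bounded amount of undo and redo, and the job naturally becomes a no-op once the WHERE clause finds nothing left to delete. On older versions the same anonymous block can be scheduled through DBMS_JOB.SUBMIT instead.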

-- 
http://mgogala.freehostia.com
Received on Sun Dec 21 2008 - 06:50:22 CST
