Oracle FAQ | Your Portal to the Oracle Knowledge Grid
I will say this one more time, speaking from experience: I have worked with large databases in the multi-terabyte range.
The other ideas in this thread are wrong; do not follow them.
We had a batch process that ran nightly last year. It would insert, update, and delete. We didn't tune it, because we had no way of knowing how many of each operation we would get. Out of the blue, we got 16 million deletes. It ran for 2 days before someone killed it, and then it had to roll back. We then did a create table as select (CTAS) with nologging.
The CTAS approach requires minimal downtime, and you can script it. The only time you will be down is when you drop the old table and do the rename.
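A rough sketch of that CTAS-and-rename sequence might look like the following; the table name, the delete condition, and the index step are placeholders, not the poster's actual script:

```sql
-- Build a new table containing only the rows to keep.
-- NOLOGGING minimizes redo generation during the copy.
create table big_table_new nologging as
  select *
  from big_table
  where not ( /* your delete condition */ 1 = 0 );

-- Recreate indexes, constraints, grants, and triggers
-- on big_table_new before the switch.

-- The only downtime: drop the old table and rename the new one.
drop table big_table;
rename big_table_new to big_table;
```

Note that NOLOGGING copies are not recoverable from the redo log, so a backup after the switch is prudent.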
If you don't have the space, get it. Disk space is cheap. It will cost more in labor to wait for this delete to run. For some reason managers have a hard time understanding that it is often cheaper to buy us the disk than to let us waste time, since time is money.
> -----Original Message-----
> From: Kristian Myllymäki [mailto:kristian_at_kmja.com]
> Sent: Tuesday, February 08, 2005 11:49 AM
> To: oracle-l_at_freelists.org
> Subject: RE: Deletion Of 160 Million Rows.
>
>
> I surely would prefer the CTAS option described in the thread, but if
> going with the other option, why use a temp table at all?
>
> Why not instead loop through some batches of deletes until no more rows
> are affected by your delete condition?
>
> begin
>   loop
>     delete from [table]
>     where [your delete condition]
>     and rownum < 10000; -- or whatever batch size you prefer
>     exit when sql%rowcount = 0;
>     commit;
>   end loop;
> end;
> /
--
http://www.freelists.org/webpage/oracle-l

Received on Tue Feb 08 2005 - 16:38:15 CST