Re: Massive deletes & performance
tomscott_at_nospam.abac.com wrote in article
<360ceec4.673470774_at_news1.abac.com>...
> We have a large table (up to 50 million rows) with a very simple
> structure:
> Person_Id Number(10),
> Group_Id Number(10),
> Cell_Id Number(10)
> Primary Key(Person_Id, Group_id, Cell_id)
>
> We are "cleaning" this table using packaged procedures. Part of the
> cleaning involves removing rows that don't conform to certain business
> rules. This may mean removing up to 25% of them, or around 10 to 15
> million rows at a time. To clean the entire table, we have to run
> through hundreds of procedures with hundreds of select statements.
>
> In my preliminary testing, the more deletes I do, the worse the
> performance gets. I am committing from within the package every
> 500 records.
>
> Am I having a problem with the primary key index getting so out of
> balance that it can't function correctly?
>
> Would I be better off having a 4th field that operates as a
> delete_flag and just changing the value of that field to indicate
> whether or not we want to keep the row?
>
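For reference, here is a minimal sketch of the batched delete the poster
describes. The table name person_group_cell and the valid_groups rule are
invented for illustration; the real cleaning rules live in the poster's
packaged procedures.

    -- Delete non-conforming rows in batches of 500, committing per batch.
    -- person_group_cell and valid_groups are hypothetical names.
    DECLARE
      rows_gone PLS_INTEGER;
    BEGIN
      LOOP
        DELETE FROM person_group_cell
         WHERE group_id NOT IN (SELECT group_id FROM valid_groups)
           AND ROWNUM <= 500;        -- batch size from the post
        rows_gone := SQL%ROWCOUNT;   -- capture before COMMIT
        COMMIT;                      -- commit every 500 rows
        EXIT WHEN rows_gone = 0;     -- stop when nothing left to delete
      END LOOP;
    END;
    /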
When dealing with such large tables, what I have done at times is write a
SQL query that selects only the rows I want to keep, spool the result to a
flat file, truncate the table, and then reload it by running SQL*Loader
(SQLLDR) in direct path mode.
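A minimal sketch of that approach, again using the hypothetical
person_group_cell table and a placeholder keep-rule. First spool the
surviving rows from SQL*Plus:

    -- keep_rows.sql: spool the rows that pass the business rules
    SET HEADING OFF
    SET FEEDBACK OFF
    SET PAGESIZE 0
    SET TERMOUT OFF
    SPOOL keep_rows.dat
    SELECT person_id || ',' || group_id || ',' || cell_id
      FROM person_group_cell
     WHERE group_id IN (SELECT group_id FROM valid_groups);
    SPOOL OFF

Then truncate the table and reload it with a control file along these
lines:

    -- reload.ctl: SQL*Loader control file for the direct path load
    -- invoked as: sqlldr userid=... control=reload.ctl direct=true
    LOAD DATA
    INFILE 'keep_rows.dat'
    INTO TABLE person_group_cell
    FIELDS TERMINATED BY ','
    (person_id, group_id, cell_id)

The direct path load bypasses the SQL engine and buffer cache, and the
primary key index is merged at the end of the load rather than maintained
row by row, which should sidestep the index degradation the poster is
worried about.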
S.Holder

Received on Sun Sep 27 1998 - 01:43:48 CDT