Re: Better Delete method
Date: Sat, 10 Jul 2021 22:21:43 +0200
Message-ID: <MW3PR20MB3513123A18BACA470F2CF15CCC179_at_MW3PR20MB3513.namprd20.prod.outlook.com>
Hello,
In a similar situation I faced before, I used what you describe as Method
1. I created a procedure with a loop over the records to be deleted and
scheduled it as a job. It took about 3 days in my case, but the impact was
minimal: no problems with a full FRA, backups, recreating objects,
permissions, invalid objects, etc.

Regards,
Juliano Ribeiro
http://linkedin.com/in/supernoi

On Sat, 10 Jul 2021 at 21:47, Lok P <loknath.73_at_gmail.com> wrote:

> Hello, this database version is 11.2.0.4 of Oracle Exadata. A table (say
> TAB1) holds ~900 million rows, is ~222GB in size, and is not partitioned.
> It has two indexes: a three-column composite index of ~98GB, and the
> primary key on a single column with an index of ~23GB.
> As part of the requirement we need to delete/purge 50% of the data from
> this table. No referential constraints exist here. So I wanted to
> understand: of the two methods below, which is the better one to opt for,
> or is there any other, better option?
>
> I understand method-1 is achievable ONLINE but is slower, while method-2
> will be faster. So if we can afford ~1-2 hrs of downtime, is it good to go
> with method-2 as the delete approach? Even with method-1, deleting 50% of
> the rows may still need a table move + index rebuild (which will again
> need downtime on 11.2) to lower the high water mark and make the indexes
> compact and back to normal. Please advise.
>
>
> Method-1:-
>
> Step-1:
> In a cursor, pick the ~450 million rowids of the rows to be deleted
> based on the filter criteria.
> Step-2:
> Delete based on those ROWIDs in bulk-collect fashion, with a LIMIT of
> 50K row ids per batch, committing within the loop (sketched below).
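>
> As a sketch, step-1 and step-2 together would look something like this
> (the WHERE clause with purge_flag is only a placeholder for the real
> filter criteria):
>
> DECLARE
>   CURSOR c_purge IS
>     SELECT rowid FROM tab1
>      WHERE purge_flag = 'Y';           -- placeholder filter criteria
>   TYPE t_rids IS TABLE OF ROWID INDEX BY PLS_INTEGER;
>   l_rids t_rids;
> BEGIN
>   OPEN c_purge;
>   LOOP
>     FETCH c_purge BULK COLLECT INTO l_rids LIMIT 50000;
>     EXIT WHEN l_rids.COUNT = 0;
>     FORALL i IN 1 .. l_rids.COUNT      -- delete one 50K batch
>       DELETE FROM tab1 WHERE rowid = l_rids(i);
>     COMMIT;                            -- commit within the loop
>   END LOOP;
>   CLOSE c_purge;
> END;
> /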
>
> Method-2:-
>
> Step-1:
> Create a new table TAB1_BKP using CTAS: CREATE TABLE TAB1_BKP AS SELECT
> * FROM TAB1 WHERE (filter criteria that picks the ~450 million rows to be
> kept).
> Step-2:
> Create the composite index in parallel, to make it as fast as possible.
> Create the unique index on the same column as the PK.
> Create the primary key constraint with NOVALIDATE (creating it with
> VALIDATE may take a lot of time validating the existing data) using the
> above unique index. This operation should take only seconds, as the index
> was already created in the step above.
> Rename TAB1_BKP to TAB1 and TAB1 to TAB1_BKP (the old table can serve as
> a backup for a few days and be dropped later). A rough SQL version of the
> whole method follows.
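>
> In SQL, method-2 would be roughly the following (column, index, and
> constraint names are made up for illustration, and the old table is
> renamed to TAB1_OLD first to free up the TAB1 name):
>
> CREATE TABLE tab1_bkp AS
>   SELECT * FROM tab1
>    WHERE keep_flag = 'Y';  -- placeholder: the ~450M rows to keep
>
> CREATE INDEX tab1_bkp_ix ON tab1_bkp (c1, c2, c3) PARALLEL 8;
> CREATE UNIQUE INDEX tab1_bkp_pk_ix ON tab1_bkp (id) PARALLEL 8;
>
> ALTER TABLE tab1_bkp ADD CONSTRAINT tab1_bkp_pk PRIMARY KEY (id)
>   USING INDEX tab1_bkp_pk_ix ENABLE NOVALIDATE;  -- seconds: index exists
>
> ALTER TABLE tab1 RENAME TO tab1_old;  -- keep as backup, drop later
> ALTER TABLE tab1_bkp RENAME TO tab1;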
--
http://www.freelists.org/webpage/oracle-l

Received on Sat Jul 10 2021 - 22:21:43 CEST