Re: Better Delete method

From: Andrew Kerber <andrew.kerber_at_gmail.com>
Date: Sat, 10 Jul 2021 18:48:47 -0500
Message-ID: <CAJvnOJZZ44ry+AvdiLQZuBWs1sb3waihBjivyPQiVS5yLBZXtg_at_mail.gmail.com>



If you can figure out a partition definition that isolates the rows you need to delete, you can use dbms_redefinition to partition the table and then drop the partition(s). dbms_redefinition allows you to do it without downtime.
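
A rough sketch of that approach (schema, table, and partition-key names below are hypothetical; on 11.2 you pre-create the interim table with the target partitioning):

```sql
-- Interim table carries the desired partition layout; the rows to be
-- purged land in their own partition.
CREATE TABLE tab1_interim
PARTITION BY RANGE (created_dt) (
  PARTITION p_purge VALUES LESS THAN (DATE '2020-01-01'),
  PARTITION p_keep  VALUES LESS THAN (MAXVALUE)
) AS SELECT * FROM tab1 WHERE 1 = 0;

BEGIN
  DBMS_REDEFINITION.can_redef_table('SCOTT', 'TAB1');  -- raises an error if redefinition is not possible
  DBMS_REDEFINITION.start_redef_table('SCOTT', 'TAB1', 'TAB1_INTERIM');
  DBMS_REDEFINITION.finish_redef_table('SCOTT', 'TAB1', 'TAB1_INTERIM');
END;
/

-- TAB1 is now partitioned, so the purge becomes cheap DDL:
ALTER TABLE tab1 DROP PARTITION p_purge UPDATE GLOBAL INDEXES;
```

In practice you would also call DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS between the start and finish steps to carry the indexes and constraints across to the interim table.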

On Sat, Jul 10, 2021 at 6:39 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:

> Would you like the result to be partitioned so that future purging is
> cheap and quick?
>
>
>
> IF so, arrange the disjoint selections by desired partition to separate
> tables, create the empty partitioned table and partition exchange the
> pieces in.
>
>
>
> All your disjoint selects can be done in append mode, and it is likely
> useful to order the selected results for insert to match your composite
> index (you are on too old a version to use attribute clustering).
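>
> A minimal sketch of that staged-select-plus-exchange approach (table
> names, partition key, and index columns below are illustrative, not
> from this thread):
>
> ```sql
> -- Stage each disjoint slice of rows to keep into its own heap table.
> -- CTAS is direct-path (append-style) by default; ordering the rows to
> -- match the composite index keeps that index well clustered.
> CREATE TABLE tab1_keep_2020 AS
>   SELECT * FROM tab1
>   WHERE  created_dt >= DATE '2020-01-01'   -- hypothetical keep-predicate
>   ORDER BY c1, c2, c3;                     -- composite index columns
>
> -- Empty partitioned shell with the target layout.
> CREATE TABLE tab1_new
> PARTITION BY RANGE (created_dt) (
>   PARTITION p_2020 VALUES LESS THAN (DATE '2021-01-01'),
>   PARTITION p_max  VALUES LESS THAN (MAXVALUE)
> ) AS SELECT * FROM tab1 WHERE 1 = 0;
>
> -- Swap the staged segment in; WITHOUT VALIDATION skips the row check.
> ALTER TABLE tab1_new
>   EXCHANGE PARTITION p_2020 WITH TABLE tab1_keep_2020
>   WITHOUT VALIDATION;
> ```
>
> Repeat the exchange for each staged piece, build the indexes, and then
> rename tab1_new into place.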
>
>
>
> Good luck.
>
>
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:
> oracle-l-bounce_at_freelists.org] *On Behalf Of *Lok P
> *Sent:* Saturday, July 10, 2021 3:48 PM
> *To:* Oracle L
> *Subject:* Better Delete method
>
>
>
> Hello, this database is Oracle 11.2.0.4 on Exadata. A table (say TAB1)
> holds ~900 million rows, is ~222GB in size, and is not partitioned. It
> has two indexes: a three-column composite index of ~98GB, and the
> primary key on one column of ~23GB. As part of a requirement we need to
> delete/purge 50% of the data from this table. No referential
> constraints exist here. So I wanted to understand: of the two methods
> below, which is the better one to opt for, or is there any other,
> better option?
>
>
>
> I understand method-1 is achievable ONLINE but is slower, while
> method-2 will be faster. So if we can afford ~1-2 hours of downtime, is
> it good to go with method-2 for the delete? Deleting 50% of the rows
> with method-1 may in any case require a table move plus index rebuild
> (which again needs downtime on 11.2) to lower the high water mark and
> make the indexes compact again. Please advise.
>
>
>
>
>
> Method-1:-
>
>
>
> Step-1:-
>
> In a cursor, pick the ~450 million ROWIDs of the rows to be deleted
> based on the filter criteria;
>
> Step-2:-
>
> Delete based on the ROWIDs in bulk-collect fashion with a LIMIT of
> 50K ROWIDs per batch, committing within the loop.
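>
> The two steps above might look roughly like this in PL/SQL (the filter
> predicate and all names are placeholders):
>
> ```sql
> DECLARE
>   CURSOR c_doomed IS
>     SELECT rowid FROM tab1 WHERE purge_flag = 'Y';  -- hypothetical filter
>   TYPE t_rid IS TABLE OF ROWID;
>   l_rids t_rid;
> BEGIN
>   OPEN c_doomed;
>   LOOP
>     FETCH c_doomed BULK COLLECT INTO l_rids LIMIT 50000;
>     EXIT WHEN l_rids.COUNT = 0;
>     FORALL i IN 1 .. l_rids.COUNT
>       DELETE FROM tab1 WHERE rowid = l_rids(i);
>     COMMIT;                       -- commit each 50K batch
>   END LOOP;
>   CLOSE c_doomed;
> END;
> /
> ```
>
> Note that committing inside the loop while the cursor is still open
> risks ORA-01555 (snapshot too old) on a long-running purge, so undo
> retention needs to cover the full run.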
>
>
>
> Method-2:-
>
>
>
> Step-1:-
>
> Create a new table with CTAS: CREATE TABLE TAB1_BKP AS SELECT * FROM
> TAB1 WHERE (required filter criteria, which will pick the ~450 million
> required rows);
>
> Step-2:-
>
> Create the composite index in parallel to make it as fast as
> possible.
>
> Create the unique index on the same column as in the PK.
>
> Create the primary key constraint with NOVALIDATE (creating it with
> VALIDATE may take a long time to check the existing data) using the
> above unique index. This operation should complete in seconds, as the
> index is already created in the above step.
>
> Rename TAB1_BKP to TAB1 and TAB1 to TAB1_BKP (the latter can serve as
> a backup for a few days and later be dropped).
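>
> Roughly, method-2 might be scripted like this (filter, column, and
> index names are placeholders; the parallel degree is illustrative):
>
> ```sql
> CREATE TABLE tab1_bkp PARALLEL 8 NOLOGGING AS
>   SELECT * FROM tab1 WHERE keep_flag = 'Y';     -- the ~450M rows to keep
>
> CREATE INDEX tab1_bkp_ix1 ON tab1_bkp (c1, c2, c3) PARALLEL 8 NOLOGGING;
> CREATE UNIQUE INDEX tab1_bkp_pk ON tab1_bkp (id) PARALLEL 8 NOLOGGING;
>
> -- NOVALIDATE skips checking existing rows; the unique index already
> -- guarantees uniqueness, so this completes in seconds.
> ALTER TABLE tab1_bkp
>   ADD CONSTRAINT tab1_bkp_pk PRIMARY KEY (id)
>   USING INDEX tab1_bkp_pk ENABLE NOVALIDATE;
>
> -- Reset the parallel degree left over from the builds.
> ALTER TABLE tab1_bkp NOPARALLEL;
> ALTER INDEX tab1_bkp_ix1 NOPARALLEL;
> ALTER INDEX tab1_bkp_pk NOPARALLEL;
>
> -- Swap names (move the old table aside first to avoid a collision).
> RENAME tab1 TO tab1_old;
> RENAME tab1_bkp TO tab1;
> ```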
>
>
>
>
>
>
>

-- 
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Jul 11 2021 - 01:48:47 CEST
