RE: Better Delete method
Date: Sat, 10 Jul 2021 22:11:58 +0200 (CEST)
Message-ID: <1625947918263.2845246.940115ae04d7b7dc6042d26dcc7d3c2a1a2bd24e_at_spica.telekom.de>
Hi,
the attached file contains a method to delete data from a big table using
partition exchange (you will have to enhance the method to handle the indexes...).
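For illustration, a minimal sketch of the general partition-exchange purge
pattern for a non-partitioned table; all table, column and partition names
below are hypothetical, and the attached pkg_utils.sql may implement the
method quite differently.

-- 1. Build an interim table holding only the rows to KEEP
--    (keep_flag stands in for the real filter criteria).
CREATE TABLE tab1_keep NOLOGGING PARALLEL 8 AS
  SELECT /*+ PARALLEL(t 8) */ * FROM tab1 t WHERE t.keep_flag = 'Y';

-- 2. Create an empty, single-partition table with TAB1's structure
--    (id is assumed to be the PK column).
CREATE TABLE tab1_part
  PARTITION BY RANGE (id) (PARTITION p_all VALUES LESS THAN (MAXVALUE))
  AS SELECT * FROM tab1 WHERE 1 = 0;

-- 3. Swap the kept rows into the partition; this is a data-dictionary
--    operation, no rows are physically moved.
ALTER TABLE tab1_part
  EXCHANGE PARTITION p_all WITH TABLE tab1_keep
  WITHOUT VALIDATION;

-- 4. Swap the table names; indexes, constraints, grants and statistics
--    on the new table still have to be created separately.
ALTER TABLE tab1      RENAME TO tab1_old;
ALTER TABLE tab1_part RENAME TO tab1;

A side effect of this variant is that TAB1 ends up partitioned, so a future
purge of this size could become a partition operation instead of a row-by-row
delete.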
Best regards
-----Original Message-----
Hello, the database version is Oracle 11.2.0.4 on Exadata. A table (say
TAB1) holds ~900 million rows, ~222 GB in size, and is not partitioned. It
has two indexes: a three-column composite index of ~98 GB, and the primary
key index on a single column of ~23 GB. As part of the requirement we need to
delete/purge 50% of the data from this table. No referential constraints
exist here. So I wanted to understand which of the two methods below is the
better one to opt for, or whether there is any other, better option.
I understand Method-1 can be done ONLINE but is slower, while Method-2 will
be faster. So if we can afford ~1-2 hours of downtime, is it good to go with
Method-2 as the delete approach? Deleting 50% of the rows with Method-1 would
likely also need a table move + index rebuild afterwards (which again needs
downtime on 11.2) to lower the high-water mark and bring the indexes back to
a compact, normal state. Please advise.
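For reference, the table move + index rebuild mentioned above would look
roughly like the following on 11.2; the index names are hypothetical. A plain
MOVE is an offline operation at this version (ONLINE moves of heap tables
only arrive in 12.2), and it leaves the indexes UNUSABLE, hence the rebuilds.

-- Reclaim space below the high-water mark after the large delete.
ALTER TABLE tab1 MOVE PARALLEL 8;

-- The move invalidates the indexes; rebuild them (hypothetical names).
ALTER INDEX tab1_pk      REBUILD PARALLEL 8 NOLOGGING;
ALTER INDEX tab1_comp_ix REBUILD PARALLEL 8 NOLOGGING;

-- Reset parallelism afterwards if it is not wanted permanently.
ALTER TABLE tab1 NOPARALLEL;
ALTER INDEX tab1_pk      NOPARALLEL;
ALTER INDEX tab1_comp_ix NOPARALLEL;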
Method-1:-
Step-1:-
Delete based on the ROWIDs in a BULK COLLECT fashion with a LIMIT of 50K
row IDs per batch, committing within the loop.
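A minimal PL/SQL sketch of this batched delete, assuming a hypothetical
PURGE_FLAG column stands in for the real filter criteria:

DECLARE
  -- Cursor over the ROWIDs of the rows to be purged (filter is hypothetical).
  CURSOR c_del IS
    SELECT rowid FROM tab1 WHERE purge_flag = 'Y';
  TYPE t_rid_tab IS TABLE OF ROWID INDEX BY PLS_INTEGER;
  l_rids t_rid_tab;
BEGIN
  OPEN c_del;
  LOOP
    FETCH c_del BULK COLLECT INTO l_rids LIMIT 50000;
    EXIT WHEN l_rids.COUNT = 0;

    -- Delete one batch of up to 50K rows by ROWID.
    FORALL i IN 1 .. l_rids.COUNT
      DELETE FROM tab1 WHERE rowid = l_rids(i);

    COMMIT;  -- commit per batch, as described in the step above
  END LOOP;
  CLOSE c_del;
END;
/

Note that committing across fetches of a long-running cursor on the same
table carries a risk of ORA-01555 (snapshot too old), so generous undo
retention is needed for a ~450 million row delete.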
Method-2:-
Step-1:-
Create a new table TAB1_BKP using CTAS AS SELECT * FROM TAB1 WHERE
(the filter criteria that keep the required ~450 million rows);
Step-2:-
Rename TAB1_BKP to TAB1 and the original TAB1 to TAB1_BKP (which can serve
as a backup for a few days and be dropped later).
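A minimal sketch of these two steps; the filter column is hypothetical, and a
temporary name is needed because the two tables cannot swap names in a single
statement:

-- Step-1: copy only the rows to keep (direct-path, parallel CTAS).
CREATE TABLE tab1_bkp NOLOGGING PARALLEL 8 AS
  SELECT /*+ PARALLEL(t 8) */ * FROM tab1 t WHERE t.keep_flag = 'Y';

-- Step-2: swap the names via a temporary name.
ALTER TABLE tab1          RENAME TO tab1_orig_tmp;
ALTER TABLE tab1_bkp      RENAME TO tab1;
ALTER TABLE tab1_orig_tmp RENAME TO tab1_bkp;  -- keep as backup, drop later

-- The CTAS copy has no indexes, constraints, triggers or grants; they have
-- to be recreated on the new TAB1 (see the index/constraint steps below).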
Ahmed
Subject: Better Delete method
Date: 2021-07-10T21:47:55+0200
From: "Lok P" <loknath.73_at_gmail.com>
To: "Oracle L" <oracle-l_at_freelists.org>
Step-1:-
In a cursor, pick the ~450 million ROWIDs of the rows to be deleted based
on the filter criteria;
Step-2:-
Create the unique index on the same column as the existing PK.
Create the primary key constraint with NOVALIDATE (creating it with VALIDATE
could take a long time to validate the existing data) using the above unique
index; this operation should take only seconds, as the index was already
created in the step above.
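A minimal sketch of the index and constraint re-creation, assuming a
hypothetical PK column named ID; the composite index is included for
completeness with hypothetical column names:

-- Build the unique index first (parallel, direct-path).
CREATE UNIQUE INDEX tab1_pk_ix ON tab1 (id)
  PARALLEL 8 NOLOGGING;

-- Attach the primary key constraint to the existing index without
-- re-validating the data; a quick dictionary-only operation.
ALTER TABLE tab1
  ADD CONSTRAINT tab1_pk PRIMARY KEY (id)
  USING INDEX tab1_pk_ix
  ENABLE NOVALIDATE;

-- The three-column composite index mentioned in the question has to be
-- recreated as well.
CREATE INDEX tab1_comp_ix ON tab1 (col1, col2, col3)
  PARALLEL 8 NOLOGGING;

-- Reset parallelism on the indexes once the builds are done.
ALTER INDEX tab1_pk_ix   NOPARALLEL;
ALTER INDEX tab1_comp_ix NOPARALLEL;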
--
http://www.freelists.org/webpage/oracle-l
Attachment: pkg_utils.sql (application/octet-stream)