That’s right.
But what you probably *should* do is partition the table by your delete key. And if you can change the insert into a direct-path (append) insert, then you just partition-exchange out and drop the unwanted rows with nary a delete seen. That also gives you a cheap way to archive the swapped-out partition as a stand-alone table covering that period of time, which you can back up on its own.
Good luck. IF Lothar’s suggestion of delayed block cleanout is correct, either of these things should fix it. IF the problem is that reuse of the deleted space dramatically increases the number of blocks you need to read via an indexed access path by fubaring the actual clustering factor, the partitioning fixes that.
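For illustration only, a minimal sketch of that exchange-and-drop pattern, assuming the table were interval-partitioned by day on CRE_TS (the DDL, column types, and the TAB1_P/TAB1_ARCH names are assumptions, not the poster's actual objects; the existing composite primary key would have to stay a global index, or gain CRE_TS to become local):

-- Hypothetical daily interval partitioning on the purge key (a sketch, not the real table).
CREATE TABLE tab1_p (
  taid   NUMBER,
  tid    NUMBER,
  aid    NUMBER,
  app_nm VARCHAR2(30),
  cre_ts DATE
)
PARTITION BY RANGE (cre_ts)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p_initial VALUES LESS THAN (DATE '2021-07-01'));

-- Empty archive table with the same shape, used as the exchange target.
CREATE TABLE tab1_arch AS SELECT * FROM tab1_p WHERE 1 = 0;

-- Purging a whole day becomes a metadata operation: swap the partition out, then drop it.
-- No DELETE runs, so there are no deleted rows for a later insert to tidy up.
ALTER TABLE tab1_p EXCHANGE PARTITION FOR (DATE '2021-07-26') WITH TABLE tab1_arch;
ALTER TABLE tab1_p DROP PARTITION FOR (DATE '2021-07-26');
-- tab1_arch now holds that day's rows as a stand-alone table that can be backed up or archived.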
From: Pap [mailto:oracle.developer35_at_gmail.com]
Sent: Thursday, July 29, 2021 3:49 PM
To: Mark W. Farnham
Cc: Lothar Flatz; Oracle L
Subject: Re: Insert Query issue
Thanks Mark. If I understand correctly, you are suggesting a method to scan just those blocks whose rows were deleted so that they get revisited in the buffer cache. But considering we only hold ~3 days' worth of data at any point in time, I was wondering whether it is easier to simply scan the segment, or gather stats with auto sample size after the delete, to get the blocks cleaned up. But before that, is it the same effect, i.e. delayed block cleanout, that we are suffering from here? I thought Lothar might be pointing to some other, similar effect, but not exactly delayed block cleanout. Please correct me if I'm wrong.
On Fri, Jul 30, 2021 at 12:41 AM Mark W. Farnham <mwf_at_rsiz.com> wrote:
If you stash into a table (probably insert append, then truncate it after the subsequent read) the rowid of all the rows you are about to delete (using the same predicate), then do the delete, then commit, that will give you the list of candidate blocks. If you then read from the table you deleted from where rowid in (select rowid from the new table), from a different node on your Exadata, you’re probably going to get that in the SGA. IF not, you can loop through them in PL/SQL one block at a time.
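In SQL terms, a rough sketch of that idea (purge_rowids is a hypothetical staging table; the CRE_TS predicate is borrowed from the purge described elsewhere in the thread):

-- Hypothetical staging table for the rowids of rows about to be purged.
CREATE TABLE purge_rowids (rid ROWID);

-- 1. Stash the rowids using the same predicate the delete will use.
INSERT /*+ APPEND */ INTO purge_rowids
SELECT rowid FROM tab1 WHERE cre_ts < SYSDATE - 3;
COMMIT;

-- 2. Run the real delete and commit as usual.
DELETE FROM tab1 WHERE cre_ts < SYSDATE - 3;
COMMIT;

-- 3. From another instance, revisit exactly those blocks. The count comes back 0
--    (the rows are gone), but each candidate block is read and cleaned out there,
--    instead of in the path of the next insert.
SELECT COUNT(*) FROM tab1 WHERE rowid IN (SELECT rid FROM purge_rowids);

-- 4. Empty the staging table for the next cycle.
TRUNCATE TABLE purge_rowids;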
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Pap
Sent: Thursday, July 29, 2021 2:34 PM
To: Lothar Flatz
Cc: Oracle L
Subject: Re: Insert Query issue
Thank you Lothar.
It appears to be something like the delayed block cleanout effect, in which, after a large DML, we sometimes have to perform a segment scan so that the commit SCN can be updated in all the blocks, and for that we sometimes have to use a manual stats gather or a forced full segment scan to pass all the data blocks through the buffer cache. But in that case we used to see the statistic 'transaction table consistent read undo records applied' on the subsequent SELECT query executions.
So is this post-delete tidying-up of blocks something similar to the above, and can we verify and confirm it from session statistics in v$sesstat while the insert is running? Also, I am wondering what can be done to tidy up the blocks after the delete, and whether a forced full segment scan or a stats gather after the deletion would help, so that the insert performance is not impacted.
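For what it's worth, a couple of checks along those lines might look like this (the :insert_sid bind is a placeholder for the session running the insert; the statistic names are standard, but which of them actually move is exactly the thing to verify):

-- Cleanout-related statistics for the session doing the insert.
SELECT sn.name, ss.value
FROM   v$sesstat  ss
JOIN   v$statname sn ON sn.statistic# = ss.statistic#
WHERE  ss.sid = :insert_sid
AND    ( sn.name LIKE '%cleanout%'
      OR sn.name = 'transaction table consistent read undo records applied' )
ORDER  BY sn.name;

-- A forced full scan after the purge, intended to pass the blocks through the buffer cache.
-- Caveat: on Exadata a serial full scan may switch to direct path / smart scan and bypass
-- the buffer cache, in which case the cleanout is not written back to the blocks.
SELECT /*+ FULL(t) */ COUNT(*) FROM tab1 t;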
Subject: Re: Insert Query issue
Date: Thu, 29 Jul 2021 19:59:25 +0200
From: Lothar Flatz <l.flatz_at_bluewin.ch>
To: oracle-l_at_freelists.org
Hi,
The reason should be that after a delete, rows are marked as technically deleted, but the block is not yet tidied up. (I suppose that would mean, e.g., that other rows are not yet shifted to make the free space contiguous, etc.)
The tidying is done on the next insert, which will therefore be substantially slower that one time.
I heard this explanation long ago, but I suppose it still holds.
My whole understanding of this is a bit vague.
I guess Jonathan could fill in some details here.
Regards
Lothar
On 29.07.2021 at 18:10, Pap wrote:
This legacy database is on Oracle version 11.2.0.4, running on Exadata.
We introduced the conventional insert query below; it is a bulk insert from Informatica with an array size of ~352. The response time was ~3 millisec/execution, i.e. for one chunk of ~352 rows. But we noticed the response time suddenly roughly tripled (~8-9 millisec/execution) for the same set of rows/array size. No changes were made to the underlying table structure.
While digging into the AWR views we found that the increase in response time for this insert query started after we introduced a delete script to purge data from this table. The delete also happens in bulk fashion, with an array size of 100K, through a PL/SQL procedure, and it deletes data with CRE_TS < sysdate-3. It's clear from dba_hist_sqlstat that it is CPU_TIME_DELTA and IOWAIT_DELTA that increased, making ELAPSED_TIME_DELTA for the INSERT query go up by three times. No such increase in BUFFER_GETS_DELTA was observed. But one thing we notice is that the DELETE does not run at exactly the same time as the insert/data-load script. So how come this delete is impacting the INSERT query? Is it that, because the delete empties some blocks, the INSERT now has to search through them all to find the empty blocks before inserting, and thus spends more time?
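For reference, the per-snapshot deltas can be pulled per execution with a query of this shape (the sql_id is a placeholder):

-- Per-execution time breakdown for one SQL_ID across AWR snapshots.
SELECT s.begin_interval_time,
       st.executions_delta,
       ROUND(st.elapsed_time_delta / NULLIF(st.executions_delta, 0) / 1000, 2) AS elapsed_ms_per_exec,
       ROUND(st.cpu_time_delta     / NULLIF(st.executions_delta, 0) / 1000, 2) AS cpu_ms_per_exec,
       ROUND(st.iowait_delta       / NULLIF(st.executions_delta, 0) / 1000, 2) AS io_wait_ms_per_exec,
       ROUND(st.buffer_gets_delta  / NULLIF(st.executions_delta, 0), 1)        AS gets_per_exec
FROM   dba_hist_sqlstat  st
JOIN   dba_hist_snapshot s
       ON  s.snap_id         = st.snap_id
       AND s.dbid            = st.dbid
       AND s.instance_number = st.instance_number
WHERE  st.sql_id = '&insert_sql_id'
ORDER  BY s.begin_interval_time;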
The table is ~215GB in size, is not partitioned, and holds ~1.7 billion rows. There is only a composite primary key index on two NUMBER data type columns, with BLEVEL 3, an index size of ~65GB, and ~7.3 million leaf blocks.
Below is the AWR plan for the INSERT and DELETE queries.
INSERT INTO TAB1 (TAID, TID, AID, APP_NM, CRE_TS) VALUES (:1, :2, :3, :4, :5)