performance of delete/insert deteriorate over time [message #377718]
Wed, 24 December 2008 09:59
Orna
Messages: 62 Registered: November 2008
Member
Hi
I have a table that goes through a complete delete (every row) and then a full insert every night.
The elapsed time is creeping up slowly
(overall timing started at 13 minutes a month ago, and now it's 18 minutes). The main offender is the delete statements, less so the inserts.
From ASH reports it looks like the problem is with the indexes on these tables: the db file sequential read event is creeping up. All indexes are in the same tablespace.
Can you explain why this is happening?
The number of rows in each delete/insert cycle is about the same (about 4 million rows each night).
Thanks
Orna
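One way to confirm that the indexes themselves are growing night over night is to watch their segment sizes. A sketch of such a check - the tablespace name here is a placeholder, not from the original post:

```sql
-- Hypothetical monitoring query: run once per night and compare.
-- 'MY_INDEX_TS' is a placeholder for the shared index tablespace.
SELECT segment_name,
       blocks,
       bytes / 1024 / 1024 AS size_mb
FROM   dba_segments
WHERE  tablespace_name = 'MY_INDEX_TS'
AND    segment_type = 'INDEX'
ORDER  BY segment_name;
```

If the block counts climb while the nightly row volume stays flat, the indexes are getting sparser over time.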
Re: performance of delete/insert deteriorate over time [message #377749 is a reply to message #377747]
Wed, 24 December 2008 16:55
Orna
Thanks.
But as I mentioned above, I cannot truncate the table as part of the job (because of the implicit commit).
If I have to do it, it will mean application downtime.
I am just curious why the HWM has anything to do with increased waits on the index data files and with db file sequential reads on those files. I would understand if it were scattered reads on the table itself - but why the indexes? How are they affected by the HWM - especially since I am deleting the entire table every night?
Thanks again
Orna
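For context on the implicit-commit constraint: TRUNCATE is DDL, so it commits the enclosing transaction and cannot be rolled back, whereas DELETE is DML. A minimal sketch (table name is a placeholder):

```sql
-- DELETE is DML: recoverable, but the table's high-water mark stays.
DELETE FROM my_table;       -- 'my_table' is a placeholder name
ROLLBACK;                   -- the rows come back

-- TRUNCATE is DDL: it issues an implicit commit, resets the HWM,
-- and cannot be rolled back - which is why it breaks this job's
-- transactional requirements.
TRUNCATE TABLE my_table;
```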
Re: performance of delete/insert deteriorate over time [message #377752 is a reply to message #377750]
Wed, 24 December 2008 18:25
Orna
I apologize.
I posted this question in another thread; that is where I explained the truncate problem:
http://forums.oracle.com/forums/thread.jspa?forumID=61&threadID=841005
In any case, I am very familiar with the HWM and the problems with continuing delete/inserts. However, the way it presents itself in my case still puzzles me.
Why would indexes be affected if they are completely 'rebuilt' every night (isn't a delete * and then a re-insert = an index rebuild? note - NOT a table rebuild, only an index rebuild)? Indexes should not be affected by the HWM - they only have entries for actual rows - so even if the table consists of 30% empty blocks, the indexes should still be the same size if the number of rows remains unchanged.
Thanks anyway
Orna
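That claim can be tested directly: validating the index structure populates the INDEX_STATS view with deleted-leaf-row and space-usage figures. A sketch, with a placeholder index name:

```sql
-- Populates INDEX_STATS for the current session.
-- Note: VALIDATE STRUCTURE locks the underlying table while it runs.
ANALYZE INDEX my_index VALIDATE STRUCTURE;   -- placeholder index name

SELECT name, height, lf_blks, del_lf_rows, pct_used
FROM   index_stats;
```

A low PCT_USED across many leaf blocks would show the index occupying far more blocks than its row count requires, even though every row was deleted and re-inserted.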
Re: performance of delete/insert deteriorate over time [message #377949 is a reply to message #377948]
Fri, 26 December 2008 18:59
Orna
exec dbms_stats.gather_table_stats(ownname => 'AQR51', tabname => 'MATCHPATTERNS_SDB', cascade => true);
I would not expect new stats to change anything.
The command is 'delete from <table_name>'.
It was a full table scan and it will remain a full table scan.
The statistics did not change that much - as I said, the number of rows always remains roughly the same.
Re: performance of delete/insert deteriorate over time [message #377951 is a reply to message #377950]
Fri, 26 December 2008 19:11
Orna
I am NOT rebuilding the indexes after each delete/insert.
I suspect that rebuilding the indexes would do the trick here,
but this is exactly what puzzles me: why?
Isn't a delete * and then inserting all the rows again exactly equivalent to a "rebuild"?
Why are the indexes 'deteriorating' during these delete/insert cycles if the delete is complete?
Re: performance of delete/insert deteriorate over time [message #377952 is a reply to message #377951]
Fri, 26 December 2008 19:26
Mahesh Rajendran
Messages: 10708 Registered: March 2002 Location: oracleDocoVille
Senior Member Account Moderator
You previously stated
>>Why would indexes be affected if they are completely 'rebuilt' every night
followed by
>>NOT a table rebuild, only an index rebuild
and now you are stating
>>I am NOT rebuilding the indexes after each delete/insert
Which one is true?
>>Isn't a delete * and then inserting all the rows again exactly equivalent to a "rebuild"?
No. Yes. Maybe.
It depends on how "rebuilding" is defined in our dictionary.
In Oracle terminology, rebuilding an index means dropping and creating the index again (online or offline).
In your case, it would be better to:
drop the index,
truncate the table (preferably; else just delete the rows),
insert into the table,
recreate the index,
recollect stats.
>>why are the indexes 'deteriorating' during these delete/insert cycles - if the delete is complete?
It will. Over a period of time.
Consider a textbook.
If you completely change the contents of the book and just use the same index entries to go to a particular page, what happens?
Also, does your table have any LOBs?
[Updated on: Fri, 26 December 2008 19:30]
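The suggested procedure might be sketched as follows. All object names are placeholders, and the TRUNCATE step still requires the job to tolerate the implicit commit (or an outage window):

```sql
-- Placeholder object names throughout.
DROP INDEX my_table_idx;

TRUNCATE TABLE my_table;            -- resets the HWM (DELETE does not)

INSERT /*+ APPEND */ INTO my_table  -- direct-path load of the nightly
SELECT * FROM staging_table;        -- ~4 million rows; source is a guess
COMMIT;

CREATE INDEX my_table_idx ON my_table (key_col);

EXEC dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_TABLE', cascade => true);
```

Loading with no index in place and recreating it afterward builds the index densely in one pass, avoiding the sparse-block growth entirely.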
Re: performance of delete/insert deteriorate over time [message #377967 is a reply to message #377952]
Sat, 27 December 2008 02:36
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
Senior Member
When an index block is allocated, it is allocated for a particular range of values. If the number of rows with values in that range increases then the block can split (thereby DECREASING the range of values), but when the volumes decrease (rows are deleted), the blocks will never re-merge.
When you delete every row and then insert again, then these index blocks SHOULD fill back up - providing you insert the same values that you deleted.
However, if you delete rows with values that never get replaced, then some blocks will become sparse or even empty. A date index is a good example: if old values tend to "drop off" then the index blocks that contained the older dates will never re-fill.
Empty blocks should not present a problem, because it doesn't cost anything to ignore an empty block.
SPARSELY filled blocks are a different story, though. Say in the first cycle you had 1000 full blocks. Then in the second cycle you had a 10% change of data - such as a date or an ever-increasing sequence number. The 10% that disappears might be evenly spread over (say) 200 blocks, leaving them 50% full; the 10% that is newly created is densely packed in 100 new blocks.
Now you have 1100 blocks to delete from in the third cycle. Remember that Oracle does not read and write ROWS, it reads and writes BLOCKS, so that extra 100 blocks (with no increase in rows) is an extra 10% of work that needs to be done on the delete.
In the third cycle, the bottom 10% of values is deleted again, this time spread over the bottom 200 sparse blocks, plus the next 200 densely packed blocks. After the third cycle, you have 1200 blocks, 400 of them sparsely packed.
After many cycles, the entire index will be strung out over twice as many (or more!) blocks as it needs.
This should not happen with indexes on values that are randomly distributed. It should only occur when the values change predictably over time. You can test the theory by gathering statistics after each cycle and checking the LEAF_BLOCKS column - it should be increasing.
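The LEAF_BLOCKS check described above might look like this (index name is a placeholder):

```sql
-- After each nightly cycle, refresh the index stats...
EXEC dbms_stats.gather_index_stats(ownname => user, indname => 'MY_INDEX');

-- ...then compare leaf_blocks against a steady num_rows.
SELECT index_name, leaf_blocks, num_rows, last_analyzed
FROM   user_indexes
WHERE  index_name = 'MY_INDEX';
-- If leaf_blocks climbs cycle after cycle while num_rows stays flat,
-- the index is spreading the same rows over ever more blocks.
```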
You could mitigate this by creating a REVERSE KEY index. This will turn the predictably changing values into randomly distributed values by doing a byte-wise reversal.
WARNING: If you use a Reverse Key index, it can only be used for EQUI-predicates (equals and IN), not for >, <, LIKE, or BETWEEN.
Ross Leishman
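A reverse key index as suggested could be created like this (hypothetical names), subject to the caveat above that range predicates can no longer use it:

```sql
-- Byte-wise key reversal scatters predictably increasing values
-- across the whole index instead of packing them at the right edge.
CREATE INDEX my_table_dt_idx ON my_table (created_date) REVERSE;

-- Can use the index:    WHERE created_date = :d
--                       WHERE created_date IN (:d1, :d2)
-- Cannot use the index: >, <, BETWEEN, LIKE range scans
```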