Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Home -> Community -> Mailing Lists -> Oracle-L -> Re: Bulk (array) inserts - performances
Zoran,
Besides the name of the constraint, which has to be fetched from the data
dictionary as you mention, my own understanding is that a uniqueness
violation can only be detected when inserting into the index. What do
indexes store? Rowids. How do you get the rowid? Well, you only know it
after having inserted the row into the table. So I believe the sequence is:
  1. insert the row into the table (which yields the rowid),
  2. attempt to insert the (key, rowid) entry into the unique index,
  3. detect the duplicate key,
  4. roll back the table insert,
which of course means a lot of work, a lot of writes to the database, and a lot of undo generation.
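The sequence above can be sketched with a toy simulation (plain Python, no Oracle involved; the list standing in for the table, the dict standing in for the unique index, and the redo/undo lists are all illustrative assumptions, not Oracle internals):

```python
# Toy model: a list stands in for the table, a dict for the unique
# index, and two lists for the redo/undo streams. It only illustrates
# the ordering argument: the rowid exists only after the table insert,
# so a duplicate is caught at index-insert time and the table insert
# must then be undone.

def insert_row(table, index, key, redo, undo):
    """Attempt one insert; return True on success, False on duplicate key."""
    # 1. Insert into the table first: this is real work (redo + undo).
    table.append(key)
    rowid = len(table) - 1          # the rowid is only known now
    redo.append(("table insert", rowid))
    undo.append(("delete row", rowid))

    # 2. Try to add (key -> rowid) to the unique index.
    if key in index:
        # 3. Uniqueness violation: undo the table insert,
        #    which itself generates more redo.
        table.pop()
        redo.append(("rollback table insert", rowid))
        return False
    index[key] = rowid
    redo.append(("index insert", rowid))
    return True

table, index, redo, undo = [], {}, [], []
results = [insert_row(table, index, k, redo, undo) for k in (1, 2, 2, 3)]
print(results)      # [True, True, False, True]
print(len(table))   # 3 rows survive
print(len(redo))    # 8: the rejected row still produced 2 redo entries
```

The point of the toy is that the rejected row does not cost "nothing": it costs a table insert plus its rollback, both of which are logged.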
Regards,
Stephane Faroult
RoughSea Ltd
http://www.roughsea.com
On Fri, 19 Nov 2004 01:23, Martic Zoran <zoran_martic_at_yahoo.com> sent:
Hi all,
I have a few questions about bulk (array) inserts. The question is very simple and is based on a real scenario:
  1. a bulk (array) insert in which no rows fail;
  2. a bulk (array) insert in which the rows fail with unique constraint violations.
Both 1. and 2. are done on the same connection, one after the other.
The first run took less than a second.
The second took around 6 seconds.
The Oracle database CPU statistics and
other stats are easy to see below.
If the difference were not that big and the customers
had not complained, I would never have realized there
was such a big discrepancy. This does not usually happen,
but when it does you do not want the system slowing down
that much.
The questions are:
1) Why are failing bulk inserts that much slower than
bulk inserts without errors? Do failing bulk inserts
tend to have similar characteristics to individual
failing inserts?
2) Why is the redo generated in the second run much
bigger? Is that the main reason for this slowness (I
know there are a few others, like sending the constraint name
and error message, ...)?
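One way to get a feel for question 2 is a back-of-the-envelope model: if every rejected row costs a table insert plus its rollback, both of which are logged, a fully failing batch can write more redo than a fully successful one while storing nothing. The per-operation "redo units" below are invented for illustration (real redo volume depends on row size, index count, Oracle version, and more):

```python
# Invented per-operation redo costs, in arbitrary units, for illustration.
REDO_TABLE_INSERT = 1.0   # redo for inserting the row into the table
REDO_INDEX_INSERT = 0.5   # redo for the unique index entry
REDO_ROLLBACK = 1.0       # undoing the table insert is itself logged

def redo_for_batch(n_rows, all_fail):
    """Rough redo volume for an array insert of n_rows rows."""
    if all_fail:
        # Each row is inserted into the table, rejected by the unique
        # index, then rolled back: nothing is stored, yet more redo is
        # written than for the fully successful batch.
        return n_rows * (REDO_TABLE_INSERT + REDO_ROLLBACK)
    return n_rows * (REDO_TABLE_INSERT + REDO_INDEX_INSERT)

print(redo_for_batch(1000, all_fail=False))  # 1500.0
print(redo_for_batch(1000, all_fail=True))   # 2000.0
```

Under these made-up numbers the failing batch generates about a third more redo; the real ratio will differ, but the direction matches what the statistics above show.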
[snip]
Regards,
Zoran Martic
--
http://www.freelists.org/webpage/oracle-l

Received on Fri Nov 19 2004 - 03:47:09 CST