Performance Issue [message #524125]
Wed, 21 September 2011 23:35
a_oracle
Messages: 98 Registered: November 2010
Member
Hi All,
I need your expertise in choosing the optimal method for an insert. The scenario: I have a cursor that fetches nearly half a million records, and the current code inserts them with something like this:
OPEN cur;
LOOP
    FETCH cur BULK COLLECT INTO vartab LIMIT 200;
    EXIT WHEN vartab.COUNT = 0;  -- exit once the final batch has been fetched
    FORALL i IN vartab.FIRST .. vartab.LAST SAVE EXCEPTIONS
        INSERT INTO table1 VALUES vartab(i);  -- (ORA-24381 handler not shown)
    COMMIT;
END LOOP;
CLOSE cur;
In the above logic the code inserts and commits every 200 records. One possible improvement is to raise the LIMIT to, say, 2000. But how do I decide that 2000 is an optimal value? And how do I check the memory constraints, so I can tell whether 2000 is good enough or whether I can go still further?
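For illustration, this is roughly how I picture testing a candidate value: run the loop with the new LIMIT, time it, and read the session's peak PGA afterwards. Only a sketch - the cursor/table names are placeholders and it assumes the session can query v$mystat and v$statname:
DECLARE
    CURSOR cur IS SELECT * FROM table2;            -- placeholder source table
    TYPE t_tab IS TABLE OF table2%ROWTYPE;
    vartab    t_tab;
    v_start   NUMBER;
    v_pga_max NUMBER;
BEGIN
    v_start := DBMS_UTILITY.GET_TIME;              -- hundredths of a second
    OPEN cur;
    LOOP
        FETCH cur BULK COLLECT INTO vartab LIMIT 2000;   -- candidate LIMIT under test
        EXIT WHEN vartab.COUNT = 0;
        FORALL i IN vartab.FIRST .. vartab.LAST
            INSERT INTO table1 VALUES vartab(i);
    END LOOP;
    CLOSE cur;
    COMMIT;
    -- peak PGA this session has used, to see what the larger array costs in memory
    SELECT s.value INTO v_pga_max
      FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name = 'session pga memory max';
    DBMS_OUTPUT.PUT_LINE('Elapsed (s): ' || (DBMS_UTILITY.GET_TIME - v_start) / 100);
    DBMS_OUTPUT.PUT_LINE('Peak PGA (MB): ' || ROUND(v_pga_max / 1024 / 1024));
END;
/
The idea would be to repeat this for 200, 2000, 5000 and compare elapsed time against peak PGA.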
I cannot write INSERT INTO table1 SELECT col1.....FROM table2, as this is an application where multiple transactions run in parallel, and inserting such a huge number of records with a single statement may cause issues.
Forgot to mention: fetching the 500,000 records takes nearly 30 seconds, which seems fine.
Appreciate your help.
Re: Performance Issue [message #524162 is a reply to message #524126]
Thu, 22 September 2011 01:46
a_oracle
Messages: 98 Registered: November 2010
Member
Hi Blackswan,
Thanks for the advice, but the design flaw, multiple transactions running in parallel, has existed since the start of the application, so a total redesign of the system is beyond our scope. This issue is about customizing existing code (I hate supporting and maintaining code I was never part of developing).
Right now this insert process, with the logic above, takes nearly 40 minutes to process roughly 500,000 records. We have been asked to reduce that to around 20 minutes on average.
So the only solution I see is to increase the LIMIT clause in the bulk collect, since a simple insert statement has been rejected: it had already caused a serious performance issue before.
The dilemma: the LIMIT clause is currently 200 and I want to raise it to maybe 2000-5000. Before making that change I need to be sure it will not hurt performance again. Can you suggest what we should be looking into before altering the LIMIT clause?
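For instance, is this the kind of thing to watch before and after a trial run with the bigger LIMIT? Just a sketch, assuming select access on v$session and v$process:
-- current and peak PGA for this session
SELECT p.pga_used_mem / 1024 / 1024 AS pga_used_mb,
       p.pga_max_mem  / 1024 / 1024 AS pga_max_mb
  FROM v$process p
  JOIN v$session s ON s.paddr = p.addr
 WHERE s.sid = SYS_CONTEXT('USERENV', 'SID');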
As for simply using INSERT INTO table1 (SELECT col1...coln FROM table2 <where clause>): I still feel this is the best way, but I don't know whether it should be adopted. Although the query takes merely 30 seconds to fetch the data, inserting that much data in one go in a multi-transaction application might cause issues.
Can you suggest a good method?
Re: Performance Issue [message #524172 is a reply to message #524162]
Thu, 22 September 2011 02:15
John Watson
Messages: 8965 Registered: January 2010 Location: Global Village
Senior Member
Hi - how long does it actually take when you run the single
INSERT INTO....SELECT...FROM...
statement? 500,000 rows is not that many, and if the current PL/SQL implementation takes 40 minutes, I would expect a single SQL statement to take well under twenty minutes. I cannot see why other concurrent transactions would be a problem. You say: "a simple insert statement has been rejected for it had already caused serious performance issue before." What was this issue?
Re: Performance Issue [message #524191 is a reply to message #524178]
Thu, 22 September 2011 03:24
John Watson
Messages: 8965 Registered: January 2010 Location: Global Village
Senior Member
"memory getting blown away" is an Oracle error message I haven't seen before. I would show the client a few examples
like this:
orcl> set timing on
orcl> insert into t1 select * from epistore.grav_open_wave where rownum <=500000;
500000 rows created.
Elapsed: 00:04:50.17
orcl> select
2 round(used_ublk * (select value from v$parameter where name='db_block_size')/1024/1024) MB_undo
3 from v$transaction;
MB_UNDO
----------
32
Elapsed: 00:00:00.01
orcl>
That should convince them. Good luck.
[update: this was for a table with average row length 1372 bytes]
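(If anyone wants to compare against their own data, the row-length figure comes from the optimizer statistics; something like this, assuming statistics are reasonably fresh:)
select avg_row_len
from   all_tables
where  owner = 'EPISTORE' and table_name = 'GRAV_OPEN_WAVE';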
Re: Performance Issue [message #524257 is a reply to message #524253]
Thu, 22 September 2011 07:24
a_oracle
Messages: 98 Registered: November 2010
Member
Hi Cookiemonster,
If you read my earlier post: I asked for suggestions on increasing the LIMIT clause in the existing bulk collect code to a higher value, but I am not sure how to determine what that value should be.