can collections help [message #289706]
Tue, 25 December 2007 13:39 |
himakar.vucha
Messages: 4 Registered: December 2007
Junior Member
Hello,
I am writing a procedure that does the following: take a record from a driver cursor, read from some 12 other tables to gather all the details, insert into some tracking tables along the way, and write the output to a flat file with UTL_FILE.
For every record in the driver cursor (say, a customer) I write 10 records (customer details) to the file, and there are some complex transformations performed for every record.
All the tables are properly indexed and the queries are tuned as well as they can be.
But I am not satisfied with the speed: it is processing only about 1,500 driver-cursor records per minute, and the data in these 13 tables will be around 2 million rows.
Will collections help in this regard? If so, how?
BULK COLLECT and bulk update/insert will only save time on fetching and inserting, but it is the navigation through all these 13 tables that is taking the time.
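Roughly, the shape of the current procedure is like this (the table, column and directory names here are made up just to show the structure):

DECLARE
    CURSOR cust_cur IS
        SELECT cust_num, cust_name FROM customers ORDER BY cust_num;
    l_file    UTL_FILE.FILE_TYPE;
    l_detail  VARCHAR2(400);
BEGIN
    l_file := UTL_FILE.FOPEN('EXTRACT_DIR', 'cust_extract.dat', 'w');
    FOR c IN cust_cur LOOP
        -- in the real procedure there are around 12 lookups like this,
        -- plus the complex transformations, for every driver record
        SELECT MAX(svc_desc)
        INTO   l_detail
        FROM   cust_services
        WHERE  cust_num = c.cust_num;

        -- tracking inserts happen in between
        INSERT INTO extract_tracking (cust_num, run_date)
        VALUES (c.cust_num, SYSDATE);

        -- around 10 detail lines per customer go to the file
        UTL_FILE.PUT_LINE(l_file, c.cust_num || '|' || l_detail);
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
END;
/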
Re: can collections help [message #290315 is a reply to message #290170]
Fri, 28 December 2007 06:12 |
himakar.vucha
Messages: 4 Registered: December 2007
Junior Member
Hi, thanks a lot for the article.
I need your help to understand the code. The basic intention of this approach is to cache some customer service details and customer billing instructions in collections:
-- Load up the services for the customer
custserv_tab.DELETE;
WHILE custserv_cur%FOUND
AND   curr_custserv_rec.cust_num <= curr_cust_rec.cust_num LOOP
    -- keep only the rows belonging to the current customer
    IF curr_custserv_rec.cust_num = curr_cust_rec.cust_num THEN
        custserv_tab.EXTEND;
        custserv_tab(custserv_tab.LAST) := curr_custserv_rec;
    END IF;
    FETCH custserv_cur INTO curr_custserv_rec;
END LOOP;
1) This is done for every customer coming from cust_cur, so where are we gaining time, given that the attempt is to cache a set of customer service records (for more than one customer)?
2) Why do we use curr_custserv_rec.cust_num <= when we are collecting the service details of a single customer?
Re: can collections help [message #290421 is a reply to message #290315]
Fri, 28 December 2007 16:21 |
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
Senior Member
The idea of concurrent cursors is not necessarily to cache data, but to cut down the number of SQL executions - which is the far more costly part of high-volume processing.
Instead of having one driving cursor that you loop through, performing 13 other SQLs or cursors inside the loop, you have 14 SEPARATE cursors which you OPEN all together and then fetch in sequence.
Take another look at the example: it executes only 3 SQL statements, rather than 1 cursor plus 2 statements inside the loop that may be executed thousands of times.
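In outline the pattern looks something like this - only one child cursor is shown instead of 13, and the table and column names are invented for illustration:

DECLARE
    CURSOR cust_cur IS
        SELECT cust_num, cust_name FROM customers ORDER BY cust_num;
    CURSOR custserv_cur IS
        SELECT cust_num, svc_desc FROM cust_services ORDER BY cust_num;
    TYPE custserv_tab_t IS TABLE OF custserv_cur%ROWTYPE;
    custserv_tab       custserv_tab_t := custserv_tab_t();
    curr_cust_rec      cust_cur%ROWTYPE;
    curr_custserv_rec  custserv_cur%ROWTYPE;
BEGIN
    OPEN cust_cur;
    OPEN custserv_cur;
    FETCH custserv_cur INTO curr_custserv_rec;   -- prime the child cursor
    LOOP
        FETCH cust_cur INTO curr_cust_rec;
        EXIT WHEN cust_cur%NOTFOUND;
        -- advance the child cursor past this customer, caching its rows
        custserv_tab.DELETE;
        WHILE custserv_cur%FOUND
        AND   curr_custserv_rec.cust_num <= curr_cust_rec.cust_num LOOP
            IF curr_custserv_rec.cust_num = curr_cust_rec.cust_num THEN
                custserv_tab.EXTEND;
                custserv_tab(custserv_tab.LAST) := curr_custserv_rec;
            END IF;
            FETCH custserv_cur INTO curr_custserv_rec;
        END LOOP;
        -- transform custserv_tab and write this customer's output here
    END LOOP;
    CLOSE custserv_cur;
    CLOSE cust_cur;
END;
/

Because every cursor is ordered by the join key, each one is opened and read exactly once from top to bottom, no matter how many driver rows there are.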
Read the whole article through from scratch. If you still don't get it, workshop it with a colleague. I put several hours into that article, and I'm afraid I cannot explain it better than that.
Once you get the concurrent cursors working, you can apply BULK COLLECT and FORALL statements to improve performance further.
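For example - again with invented names - the driver fetch and the tracking-table insert could be batched along these lines:

DECLARE
    TYPE num_tab_t IS TABLE OF customers.cust_num%TYPE;
    l_cust_nums  num_tab_t;
    CURSOR cust_cur IS
        SELECT cust_num FROM customers ORDER BY cust_num;
BEGIN
    OPEN cust_cur;
    LOOP
        -- fetch the driver keys in batches instead of one row at a time
        FETCH cust_cur BULK COLLECT INTO l_cust_nums LIMIT 1000;
        EXIT WHEN l_cust_nums.COUNT = 0;

        -- one INSERT execution covers the whole batch
        FORALL i IN 1 .. l_cust_nums.COUNT
            INSERT INTO extract_tracking (cust_num, run_date)
            VALUES (l_cust_nums(i), SYSDATE);
    END LOOP;
    CLOSE cust_cur;
END;
/

FORALL turns thousands of individual INSERT executions into a handful of bulk executions, which is where the extra saving comes from.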
Ross Leishman