Re: PL/SQL insert performance (comp.databases.oracle.server)
A copy of this was sent to brendan_o'brien_at_wrightexpress.com
(if that email address didn't require changing)
On Thu, 07 May 1998 15:44:08 GMT, you wrote:
use tkprof to analyze what is happening and see where the bottleneck is. Perhaps you are accidentally accessing the table (or some other growing table) without an index and doing a full scan. Growing from 5 million to 27 million rows would easily take you from 1 hour to 8 hours if you are full scanning the table by accident somewhere inside your program.
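For instance, a quick way to check whether a suspect statement is full scanning is to run explain plan and query plan_table (the table and column names below are made up for illustration; substitute your own):

```sql
-- Hypothetical example: explain a suspect statement and look at its plan.
-- txn_table and txn_id are invented names.
delete from plan_table where statement_id = 'chk';

explain plan set statement_id = 'chk' for
  select * from txn_table where txn_id = :b1;

select lpad(' ', 2 * level) || operation || ' ' ||
       options || ' ' || object_name plan
  from plan_table
 where statement_id = 'chk'
 start with id = 0
connect by prior id = parent_id;
```

A line reading TABLE ACCESS FULL on a statement that runs once per input row is the kind of thing that turns 1 hour into 8 as the table grows.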
To use tkprof:

1- set timed_statistics = true in your init.ora file and bounce the database. Also make sure max_dump_file_size is set large, like 99999.
2- grant alter session to the owner of the procedure.
3- put a dbms_session.set_sql_trace(true) at the beginning of your procedure.
4- rebuild the procedure.
5- run the procedure against a small but representative set of input data.

In your user_dump_dest directory (svrmgrl: show parameter user_dump_dest) you'll find a trace file created after the procedure is all done. Use tkprof against this file:
$ tkprof TRACEFILE run.txt explain=USER/PASSWORD
then edit run.txt. It'll have a detailed breakdown of all the SQL you executed, the query plan for each statement, the amount of time spent on each query, and so on.
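Put together, the grant-and-trace steps look something like this (scott and my_proc are placeholders for your actual owner and procedure):

```sql
-- Step 2: as a privileged user, let the procedure's owner alter its session.
grant alter session to scott;

-- Steps 3 and 4: rebuild the procedure with tracing turned on at the top.
-- my_proc is a placeholder; only the set_sql_trace call is the addition.
create or replace procedure my_proc as
begin
  dbms_session.set_sql_trace(true);
  -- ... existing procedure body ...
end;
/

-- Step 5: run it against a small, representative input set.
execute my_proc;
```

After the run completes, feed the resulting trace file to tkprof as shown above.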
>Here's a puzzler for the more experienced developers/dba's among you:
>
>SCENARIO: We have a large transaction table, about 30 cols, about 27 million
>records. This is a non-volatile table with virtually no updates or deletes
>against it. Records are inserted by a very straightforward stored procedure
>once daily. The procedure grabs all records from a scratch table (approx.
>30,000), does some minor data scrubbing, then inserts into the production
>table. Commit occurs every 2500 records.
>
>The production table has an 11 column PK, and 2 additional non-unique indexes
>of 10 and 3 columns respectively. The indexes are each spread across multiple
>extents (between 3 and 19 extents per index), while the data is spread across
>33 extents. Extent size = 125M.
>
>PROBLEM: As the table grows in size, the daily insert procedure is taking a
>rapidly increasing amount of time to complete. Execution time has gone from 1
>hour a couple of months ago (approx 5 million existing rows back then) to over
>8 hours today.
>
>POSSIBLE CULPRITS: As the table is non-volatile, I doubt strongly that
>dropping and re-creating indexes will improve performance as virtually no
>updates or deletes are ever done, therefore leaving the indexes virtually free
>of any 'dead' space (sorry if my terminology is off here... I'm a developer
>not a DBA). The DBA's swear the storage is optimal and not the problem, and I
>tend to believe them (the reading I've done on storage management suggests
>that reorganizing data for contiguous storage will have little effect on
>performance, especially on inserts). I have read something that suggested the
>culprit may be the non-unique indexes, and the solution lying in the
>optimization of something called 'AIP's to better handle the row-sizes for my
>tables. The procedure that does the inserts has not changed and is about as
>simple as it gets.
>
>That's all I can think of to mention. I've got developers and dba's pointing
>at one another. Any help is greatly appreciated. If you would, please email
>any replies to:
>brendan_o'brien_at_wrightexpress.com
>
>Peace.
>-B.
>
>-----== Posted via Deja News, The Leader in Internet Discussion ==-----
>http://www.dejanews.com/ Now offering spam-free web-based newsreading
Thomas Kyte
tkyte_at_us.oracle.com
Oracle Government
Herndon VA
http://govt.us.oracle.com/ -- downloadable utilities
Anti-Anti Spam Msg: if you want an answer emailed to you, you have to make it easy to get email to you. Any bounced email will be treated the same way i treat SPAM-- I delete it.

Received on Thu May 07 1998 - 11:28:57 CDT