Max time for all sql's in my process being consumed by db file sequential read [message #288935]
Wed, 19 December 2007 01:38 |
mithun.karthikeyan
Hi,
I am facing a performance issue with one of my processes. The process takes about 4 minutes to complete, when ideally it shouldn't take more than a minute.
So I generated a SQL trace and ran tkprof on it, and I found that for almost all the SQLs the largest share of time is consumed by the db file sequential read wait event. Of the 4 minutes the process takes, almost 3 minutes are spent on db file sequential read, and for almost every SQL in the trace it accounts for 95% of the elapsed time.
Another noticeable thing is that db file sequential read is higher when data is being fetched from three tables in particular. These are very bulky tables with more than 5-6 million rows each. I understand that the db file sequential read waits could be due to the I/O needed to read through the indexes of these huge tables.
However, I would like help/suggestions on how to reduce this db file sequential read time. Is there anything I can do to improve the performance?
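For reference, this is roughly how the trace and report were produced (the trace file name below is illustrative; level 8 is what records the wait events that expose db file sequential read):

-- In the session running the process: enable extended SQL trace with waits
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- ... run the process ...

-- Stop tracing
ALTER SESSION SET EVENTS '10046 trace name context off';

Then, at the OS prompt, format the resulting trace file with tkprof, sorted by elapsed time:

tkprof orcl_ora_12345.trc process_trace.txt sys=no sort=exeela,fchela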
Re: Max time for all sql's in my process being consumed by db file sequential read [message #290274 is a reply to message #288935]
Fri, 28 December 2007 04:36 |
mithun.karthikeyan
Hi,
Thanks for the reply.
I ran tkprof for my process, and from its output I found that db file sequential read was the wait event causing the issue.
The most time-consuming SQL is:
SELECT 'Y'
  FROM TIME_ATTRIBUTES
 WHERE TIME_ATTRIBUTE_ID IN (SELECT TIME_ATTRIBUTE_ID
                               FROM TIME_ATTRIBUTE_USAGES
                              WHERE TIME_BUILDING_BLOCK_ID = :B3
                                AND TIME_BUILDING_BLOCK_OVN = :B2)
   AND ATTRIBUTE_CATEGORY = 'REASON'
   AND ATTRIBUTE7 = DECODE(:B1, NULL, ATTRIBUTE7, :B1)
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   5420      3.31       3.37          0          0          0           0
Fetch     5420      4.71      79.54       7053     124992          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10841      8.02      82.92       7053     124992          0           0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      7053        0.25         75.78
  latch: cache buffers chains                     6        0.00          0.00
********************************************************************************
I am also attaching the SQLTXPLAIN report for this SQL statement.
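One thing worth noting from the numbers above: 75.78 seconds across 7053 waits works out to roughly 10.7 ms per single-block read, which is ordinary disk latency, so the time is going into the sheer number of blocks visited rather than into slow I/O. Since ATTRIBUTE_CATEGORY and ATTRIBUTE7 are filtered after the rows are located, one idea to test (purely a sketch; the index name is made up, and the existing indexes on TIME_ATTRIBUTES would need to be checked first) is a composite index that lets those filters be answered from the index without visiting the table blocks:

-- Hypothetical covering index for the query above
CREATE INDEX time_attr_reason_test_idx
    ON TIME_ATTRIBUTES (TIME_ATTRIBUTE_ID, ATTRIBUTE_CATEGORY, ATTRIBUTE7);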
Re: Max time for all sql's in my process being consumed by db file sequential read [message #290278 is a reply to message #288935]
Fri, 28 December 2007 04:42 |
mithun.karthikeyan
However, this is just one statement in my process experiencing this high percentage of db file sequential read time; almost all of the SQLs show the same pattern.
For example, the next most costly statement has a db file sequential read time of 61 seconds out of a total elapsed time of just 71 seconds, and the third most expensive statement has a total elapsed time of 22 seconds, of which db file sequential read accounts for 18 seconds.
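A query along these lines (assuming access to the V$ views; TIME_WAITED is reported in centiseconds, hence the * 10 to get milliseconds) shows whether single-block reads are slow system-wide:

SELECT event,
       total_waits,
       time_waited / 100              AS seconds_waited,
       time_waited * 10 / total_waits AS avg_wait_ms
  FROM v$system_event
 WHERE event = 'db file sequential read';

If the average wait is in the single-digit milliseconds, the storage is healthy and the fix lies in reducing the number of reads rather than speeding them up.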
Re: Max time for all sql's in my process being consumed by db file sequential read [message #290434 is a reply to message #290278]
Fri, 28 December 2007 19:44 |
halfydba
Start by looking through the SQLTXPLAIN report; there is a lot of useful information in there. You can see where parts of your query are taking the most time, along with a brief overview of your data, which gives you a very good place to start. Try to understand what each section means, and do some reading.
Also, you have quite a number of rows there, and the indexes you are using have quite high cardinality. Stats have not been gathered for a long time, and your data has increased by roughly 5% for the two tables you are trying to join, so a fresh gather is worth doing (see the sketch below).
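Something like this would refresh the stats (the schema name is a placeholder; use the actual table owner, and repeat for TIME_ATTRIBUTE_USAGES):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APPS',  -- placeholder: substitute the real owner
    tabname          => 'TIME_ATTRIBUTES',
    cascade          => TRUE,    -- gather stats on the indexes as well
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/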
P.S. I know you are not doing a sort operation in this query, but your sort area is very low. PGA_AGGREGATE_TARGET is quite useful for large sorts; try letting Oracle manage this. It will consume a lot more memory, but your sorts will be a lot faster, especially with that much data.
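If you want to test that, something along these lines would do it (the target value is only an example; size it to your workload and available memory):

-- Let Oracle size sort/hash work areas automatically from a shared target
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 500M;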