Help!!!!!!! Tuning using SQL_TRACE [message #116722]
Thu, 21 April 2005 11:46
ankurgodambe
Hi Experts,
I have an insert statement which when run from sqlplus returns in 1-2 secs. But when i run it through Application server its takes minutes(app uses JDBC)... When I trace the two sessions the major difference between the two traces is the CPU used for parse phase. Can any one tell me why is there a differnce in parse phase cpu utlization when i run the same statement through app and sqlplus.Also how can I tune it to run fine from App server??? I am pasting the outputs of the two trace.....
Help needed urgemtly....
Thanks...
APP SERVER TRACE:
=================
call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse        1    141.96     142.53          0      92928          0          0
Execute      1      0.35       0.34          0       1967          9        100
Fetch        0      0.00       0.00          0          0          0          0
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total        2    142.31     142.87          0      94895          9        100

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 33

Rows     Row Source Operation
-------  ---------------------------------------------------
    101  COUNT STOPKEY
    100   NESTED LOOPS
    100    TABLE ACCESS BY INDEX ROWID ITEM_TL
    100     DOMAIN INDEX
    100    TABLE ACCESS BY INDEX ROWID ITEM
    199     INDEX RANGE SCAN (object id 21722)
********************************************************************************
SQLPLUS:
=========
call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.08       0.11          3        138          3          0
Execute      1      0.23       0.25          1       3023         13        100
Fetch        0      0.00       0.00          0          0          0          0
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total        2      0.31       0.36          4       3161         16        100

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 33

Rows     Row Source Operation
-------  ---------------------------------------------------
    101  COUNT STOPKEY
    100   NESTED LOOPS
    100    TABLE ACCESS BY INDEX ROWID ITEM_TL
    100     DOMAIN INDEX
    100    TABLE ACCESS BY INDEX ROWID ITEM
    199     INDEX RANGE SCAN (object id 21722)
********************************************************************************
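For reference, a comparison like the one above can be captured with SQL_TRACE and tkprof. A sketch, assuming Oracle 8i-era tools; the SID/SERIAL# values are placeholders, not taken from this system:

```sql
-- Trace your own SQL*Plus session:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the INSERT here ...
ALTER SESSION SET sql_trace = FALSE;

-- Trace the JDBC session from another session, using its SID and SERIAL#
-- looked up in v$session (12 and 345 are placeholders):
EXECUTE dbms_system.set_sql_trace_in_session(12, 345, TRUE);

-- Then format the raw trace on the server, e.g.:
--   tkprof <tracefile>.trc out.txt sort=prsela
```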
Re: Help!!!!!!! Tuning using SQL_TRACE [message #116820 is a reply to message #116724]
Fri, 22 April 2005 07:07
ankurgodambe
No, that's not the case.
The SQL is executed only once, not in a loop. The app takes a value and uses it for the search. This is an intermediate step where some values are inserted into a temporary table. We just wanted to check whether the app is running fine, and we have hit this issue.
Re: Help!!!!!!! Tuning using SQL_TRACE [message #116849 is a reply to message #116722]
Fri, 22 April 2005 09:07
smartin
Why are you using a temporary table for intermediate values? Oracle is not SQL Server; you rarely need temp tables for that sort of thing. Just do your work in a single query.
Re: Help!!!!!!! Tuning using SQL_TRACE [message #116877 is a reply to message #116849]
Fri, 22 April 2005 12:12
pscjhe
You have a very interesting situation here. You are using a DOMAIN INDEX for your insert! The "query" column for "parse" is hugely different between the two; "query" is logical I/O. It looks like parsing this SQL in your app does a lot of LIO.
Try repeating your app run several times and see whether the tkprof result stays the same.
This could be a learning experience for many of us (at least me) regarding DOMAIN INDEXes. Can you show us your SQL? How is this DOMAIN INDEX defined? What are the statistics on this index?
If all else is exhausted, I like to use the 10053 event to trace CBO parsing.
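The 10053 trace mentioned here can be switched on per session; a sketch (the trace file appears in user_dump_dest, and the statement must be hard-parsed, e.g. by altering its text slightly, for the optimizer trace to be written):

```sql
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- hard-parse the statement of interest here
ALTER SESSION SET EVENTS '10053 trace name context off';
```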
Re: Help!!!!!!! Tuning using SQL_TRACE [message #116892 is a reply to message #116877]
Fri, 22 April 2005 14:05
ankurgodambe
Hi Experts,
This problem is resolved. When I checked the v$session_event view, I found the "latch free" event going very high. After searching through Metalink, I came across a document which says that a large shared pool may cause this event in 8i. When I reduced the shared pool, the query ran excellently. What still baffles me is: why wasn't the "latch free" event high for the session run through SQL*Plus?
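For reference, the v$session_event check amounts to something like this sketch (:sid is the session's SID, looked up in v$session):

```sql
-- Wait events accumulated by one session, worst first
SELECT event, total_waits, time_waited
  FROM v$session_event
 WHERE sid = :sid
 ORDER BY time_waited DESC;
```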
Anyways....thanks for all your help!!!!!
Cheers.....
Ankur
Re: Help!!!!!!! Tuning using SQL_TRACE [message #116895 is a reply to message #116892]
Fri, 22 April 2005 14:40
pscjhe
I don't think you have gotten to the root cause of the issue yet.
If it were "latch free", it would be a scalability issue, meaning that another process holds a shared pool latch that your session is waiting for. That would show up as a wait event. The big number for "query" in your first tkprof's parse is logical I/O, which is I/O, not a wait event.
Please dig deeper.
Re: Help!!!!!!! Tuning using SQL_TRACE [message #116909 is a reply to message #116906]
Sat, 23 April 2005 01:29
pscjhe
Your snapshot period seems to be 640 min / (7073 - 7051), i.e. about 30 minutes per snapshot. I would like to see a 30-minute report for a peak period instead of the whole day.
Here are my impressions from this statspack report:
1. The waits seem to come from the library cache, i.e. the shared pool. The SQL Area's reloads/invalidations are high.
2. The DOMAIN INDEX is probably from interMedia. It is recommended to increase shared_pool_reserved_size, which defaults to 5% of the shared pool. You probably want to increase it to close to 50% of the shared pool, which is the maximum. This should help the above situation.
3. I would find the COMPLETE SQL for this statement in v$sqlarea:
INSERT INTO search_results SELECT /*+ ALL_ROWS DOMAIN_INDEX_SORT USE_NL(item_
4. If you can set event 10046, level 8 before the above SQL executes and get the raw trace file (NOT the tkprof result), we can probably see where the bottleneck is.
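Setting 10046 at level 8 is another per-session event; a sketch (level 8 adds wait events to the raw trace, which again lands in user_dump_dest):

```sql
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- run the INSERT here
ALTER SESSION SET EVENTS '10046 trace name context off';
```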
Re: Help!!!!!!! Tuning using SQL_TRACE [message #117101 is a reply to message #116909]
Mon, 25 April 2005 02:20
ankurgodambe
Hi pscjhe,
The reloads/invalidations are high, but taking care of that requires a code change, and I cannot do that since this is a third-party application. Besides, it's working fine in production.
shared_pool_reserved_size was already set to 50% of the shared pool.
I am attaching the statspack report and the trace generated by setting event 10046, level 8. Frankly, it's difficult for me to conclude anything from the trace. If possible, please have a look.
Thanks for all the help extended by you.
Regards,
Ankur
Re: Help!!!!!!! Tuning using SQL_TRACE [message #117245 is a reply to message #117101]
Tue, 26 April 2005 00:22
pscjhe
I analyzed your 10046 trace file. The execution of the INSERT is efficient, even though there is a relatively large "query" count at parse.
Look at the total non-recursive waits, 1128, vs. the total elapsed, 2823. The waits are real idle time doing "select count(*) from language". This basically means that your original question about the INSERT is NOT, or NO LONGER, the performance issue.
However, your statspack shows clearly that "library cache pin" has 623 sec of wait. It seems to be some type of lock contention. So watch v$session_event, v$session_wait, and v$lock to see which session hits the event and what the lock is, and reduce contention on that if possible. See Metalink Note 34579.1.
INSERT INTO search_results SELECT /*+ ALL_ROWS DOMAIN_INDEX_SORT
USE_NL(item_tl item)*/ score(1), item_tl.item_itemid FROM item_tl, item
WHERE item_tl.lang ='US' AND item_tl.item_itemid=item.itemid AND
contains(item_tl.ctx_desc, '((({@@rvq@101708@graybar_kennametal@},
{@@rvq@101708@jl_kennametal@}, {@@rvq@101708@newpig_kennametal@},
{@@rvq@101708@scottelectric_kennametal@},
{@@rvq@101708@shannonsafety_kennametal@},
{@@rvq@101708@shuttlechem_kennametal@}, {@@rvq@101708@stauffer_kennametal@},
{@@rvq@101708@grainger_kennametal@}, {@@rvq@101708@xpedx_kennametal@})))
*10 & (${itemname${countval}}%)', 1) > 0 AND rownum <= 100 AND
item.A101764= 'production' ORDER BY score(1) DESC
call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse        2      0.83       0.91          0       1588          0          0
Execute      2      0.01       0.00          0          0          2          0
Fetch        0      0.00       0.00          0          0          0          0
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total        4      0.84       0.91          0       1588          2          0

Misses in library cache during parse: 2
Optimizer mode: CHOOSE
Parsing user id: 33

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  COUNT STOPKEY
      0   NESTED LOOPS
      0    TABLE ACCESS BY INDEX ROWID ITEM_TL
      0     DOMAIN INDEX
      0    TABLE ACCESS BY INDEX ROWID ITEM
      0     INDEX RANGE SCAN (object id 21722)

Elapsed times include waiting on following events:
  Event waited on                         Times Waited  Max. Wait  Total Waited
  --------------------------------------  ------------  ---------  ------------
  single-task message                                2       0.54          1.00
  SQL*Net message from client                        8       0.03          0.07
  SQL*Net break/reset to client                      4       0.00          0.00
  SQL*Net message to client                          6       0.00          0.00

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse      113      5.19       5.33          0       9152          0          0
Execute    113      7.25       7.29          0      51134         16          0
Fetch       96      0.09       0.09          0        476          8        140
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total      322     12.53      12.71          0      60762         24        140

Misses in library cache during parse: 27

Elapsed times include waiting on following events:
  Event waited on                         Times Waited  Max. Wait  Total Waited
  --------------------------------------  ------------  ---------  ------------
  SQL*Net message from client                    17804      20.03       1147.68
  SQL*Net message to client                        628       0.00          0.00
  SQL*Net more data from client                      8       0.00          0.00
  log file sync                                      8       0.01          0.02
  single-task message                            17056       1.10          7.15
  SQL*Net break/reset to client                      4       0.00          0.00
  library cache lock                                 2       0.02          0.03
  latch free                                         1       0.00          0.00

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse     3769      0.64       0.59          0          0          0          0
Execute   3996      0.15       0.16          0         48         24         16
Fetch    20819      3.09       2.85          0      54940          0     233221
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total    28584      3.88       3.60          0      54988         24     233237

Misses in library cache during parse: 7

Elapsed times include waiting on following events:
  Event waited on                         Times Waited  Max. Wait  Total Waited
  --------------------------------------  ------------  ---------  ------------
  SQL*Net message from client                    16937       0.01          0.34
  SQL*Net message to client                      17057       0.01          0.04
  SQL*Net more data to client                     1836       0.01          0.15
149 user SQL statements in session.
3733 internal SQL statements in session.
3882 SQL statements in session.
********************************************************************************
Trace file: mmprod_ora_21884.trc
Trace file compatibility: 8.00.04
Sort options: default
1 session in tracefile.
149 user SQL statements in trace file.
3733 internal SQL statements in trace file.
3882 SQL statements in trace file.
37 unique SQL statements in trace file.
134683 lines in trace file.
2823 elapsed seconds in trace file.
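For reference, the v$session_wait / v$lock check suggested above might look like this sketch (not tied to this system's data):

```sql
-- Which sessions are waiting on "library cache pin" right now?
SELECT sid, event, p1, p2, p3
  FROM v$session_wait
 WHERE event = 'library cache pin';

-- Which sessions are blocked waiting for a lock?
SELECT sid, type, id1, id2, lmode, request
  FROM v$lock
 WHERE request > 0;
```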
Re: Help!!!!!!! Tuning using SQL_TRACE [message #117335 is a reply to message #117245]
Tue, 26 April 2005 10:08
pscjhe
I kept thinking about the logic in this application:
insert into search_result ...
truncate table search_result
As you pointed out earlier, why "truncate"? If it truncates immediately after the insert, search_result is apparently transient, and its contents will not be useful at all. Does every user have a search result table? If they do, of course you will have latch contention, because truncate is always serialized.
If search_result is a normal table AND one table is shared by all users AND it is as transient as I think, I would try to change it to a global temporary table with "on commit preserve rows" (Metalink note 68098.1). This will make this contention go away without changing the application logic.
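A minimal sketch of that change (the column names here are illustrative, not the application's actual definition):

```sql
CREATE GLOBAL TEMPORARY TABLE search_results (
  score   NUMBER,
  itemid  VARCHAR2(30)
) ON COMMIT PRESERVE ROWS;
-- Each session sees only its own rows, so there is no cross-session
-- TRUNCATE serialization; the rows disappear when the session ends.
```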