Oracle-L: Temp extent sizes and Strange performance problem
I posted earlier about a SQL statement in our nightly data warehouse load that went overnight from under 2 minutes to about 25 minutes. We used RMAN to restore an exact copy of the database, taken from before the process started running long, to a different Unix box. After running sql_trace and tkprof on the SQL statement in question (see below) against both the current and the pre-problem database, the execution times were similar and the explain plans were identical except for minor differences in the number of rows returned.
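For anyone who wants to repeat the comparison, the tracing steps were roughly as follows (a sketch; the trace file name is illustrative, and the actual file lands in user_dump_dest):

-- in the session that will run the statement:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statement in question ...

ALTER SESSION SET sql_trace = FALSE;

-- then format the trace file from the shell:
--   tkprof ora_<spid>.trc problem_sql.prf sys=no sort=prsela,exeela,fchela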
I then looked at all of the initialization parameters and they look the same, except that we created the RMAN copy with a smaller shared pool (due to resource constraints on the box we moved the copy to).
One thing that I noticed is that the extent sizes for the TEMP tablespace are different. The day we started having this problem, we had a disk failure. The TEMP tablespace was on the failed disk. Another DBA dropped the TEMP tablespace and recreated it on a different disk (apparently with a larger extent size). The current next_extent size is 4194304 bytes (4 MB); the next_extent size on the pre-problem TEMP tablespace was 40960 bytes (40 KB).
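For reference, the sizing shows up in the data dictionary (assuming the tablespace is literally named TEMP):

SELECT tablespace_name, initial_extent, next_extent, pct_increase
FROM   dba_tablespaces
WHERE  tablespace_name = 'TEMP';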
Is it possible that this difference in extent size in the TEMP tablespace could cause a ten-fold degradation in performance?
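For what it's worth, whether the statement is spilling sorts to disk at all shows up in the cumulative instance-wide counters; if it never touches TEMP, the extent size presumably can't matter:

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');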
Cherie
"Richard Ji" <rji_at_exenet.co To: Multiple recipients of list ORACLE-L <ORACLE-L_at_fatcity.com> m> cc: Sent by: Subject: Re: Strange performance problem root_at_fatcity.c om 09/14/01 02:46 PM Please respond to ORACLE-L
Did you check whether anything else is running on the server that might be taking resources away from Oracle? It has happened to me before that the SA was running something he shouldn't have been, and it was using a lot of system resources.
HTH
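From the Oracle side, a query along these lines shows which sessions have used the most CPU (the counter is in centiseconds); for non-Oracle load you'd need OS tools like top or sar:

SELECT s.sid, s.username, st.value AS cpu_centiseconds
FROM   v$session s, v$sesstat st, v$statname n
WHERE  st.sid = s.sid
AND    st.statistic# = n.statistic#
AND    n.name = 'CPU used by this session'
ORDER  BY st.value DESC;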
>>> Cherie_Machler_at_gelco.com 09/14/01 03:05PM >>>
I have a nightly load job that our developers have been tracking. According to their nightly logs (which go back months), one query had run with sub-second response time for as far back as they have records.
Then on a particular date (Aug. 23rd), the query started taking more than 20 minutes to complete. It has taken that long to complete ever since.
I looked at the explain plan and it looks OK. Indexes are being used and there are no suspicious full table scans. The init.ora file has not changed since then.
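For anyone reproducing this, the plan came via the usual pre-9i PLAN_TABLE route (a sketch; the statement_id is arbitrary, and @?/rdbms/admin/utlxpls.sql gives similar output):

EXPLAIN PLAN SET statement_id = 'PROB1' FOR
SELECT ...;   -- the statement in question goes here

-- walk the plan tree, indenting children under parents:
SELECT LPAD(' ', 2 * (level - 1)) || operation
       || ' ' || options || ' ' || object_name AS plan_line
FROM   plan_table
START WITH id = 0 AND statement_id = 'PROB1'
CONNECT BY PRIOR id = parent_id AND statement_id = 'PROB1';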
We restored a full copy of the database to an alternate host using RMAN. It should be an exact copy as of Aug. 16th. I ran the query on the copy and on the current production database, and the resulting explain plans were identical except for the number of rows returned. Total execution time and CPU times were similar.
I looked through our change documentation and I do not see any record of data structure changes or any data changes at all in the database in question.
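For what it's worth, recent DDL can also be checked straight from the dictionary (a sketch; the owner name below is a placeholder):

SELECT object_name, object_type, last_ddl_time
FROM   dba_objects
WHERE  owner = 'EAS_OWNER'   -- hypothetical schema name
AND    last_ddl_time >= TO_DATE('23-AUG-2001', 'DD-MON-YYYY')
ORDER  BY last_ddl_time;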
I am somewhat at a loss for what to try next. What sort of change could cause such an extreme performance degradation?
This is an 8.1.7 database on Sun Solaris 2.8. The optimization is rule-based. No partitioning. The database is about 80 GB in size. Following is the explain plan, if anyone is interested:
SELECT ACTV_EAS_PERS_RPT_PROF_VIEW.LOGIN_ID,
       ACTV_EAS_PERS_RPT_PROF_VIEW.ACCT_GRP_ID,
       ACTV_EAS_PERS_RPT_PROF_VIEW.RPT_PROF_ID,
       ACTV_EAS_PERS_RPT_PROF_VIEW.INS_DT_TM,
       ACTV_EAS_PERS_RPT_PROF_VIEW.UPD_DT_TM
Rows Row Source Operation
------- ---------------------------------------------------
1897 FILTER
2041 NESTED LOOPS
2422 HASH JOIN
2341 NESTED LOOPS
2342 NESTED LOOPS
2338 NESTED LOOPS
2338 NESTED LOOPS
2346 NESTED LOOPS
2510 NESTED LOOPS
2510 NESTED LOOPS
2510 INDEX FAST FULL SCAN (object id 17279)
5018 INDEX UNIQUE SCAN (object id 17278)
5018 TABLE ACCESS BY INDEX ROWID EAS_ACCT_GRP
5018 INDEX UNIQUE SCAN (object id 17266)
4854 INDEX RANGE SCAN (object id 17270)
4682 TABLE ACCESS BY INDEX ROWID EAS_PERSON_RPT_PROF_ASSGN
4682 INDEX RANGE SCAN (object id 17283)
4674 VIEW ACTIVE_EAS_RPT_PROF_VIEW
100491 SORT UNIQUE
43 UNION-ALL
10 TABLE ACCESS FULL EAS_RPT_PROF
33 FILTER
34 NESTED LOOPS
734 NESTED LOOPS
207976 NESTED LOOPS
207976 MERGE JOIN CARTESIAN
706 INDEX FAST FULL SCAN (object id 17270)
208680 SORT JOIN
295 TABLE ACCESS FULL EAS_CLNT_GRP_STS_LOG
415950 TABLE ACCESS BY INDEX ROWID EAS_ACCT_GRP
415950 INDEX UNIQUE SCAN (object id 17266)
208708 INDEX UNIQUE SCAN (object id 17275)
766 TABLE ACCESS FULL EAS_RPT_PROF
4678 TABLE ACCESS FULL USER_SIGNON
2341 INDEX UNIQUE SCAN (object id 17275)
295 TABLE ACCESS FULL EAS_CLNT_GRP_STS_LOG
4461 VIEW ACTIVE_EAS_PERSON_VIEW
2675205 SORT UNIQUE
1105 UNION-ALL
128 NESTED LOOPS
1107 INDEX RANGE SCAN (object id 17284)
128 TABLE ACCESS BY INDEX ROWID EAS_PERSON
2212 INDEX UNIQUE SCAN (object id 17277)
977 FILTER
1008 NESTED LOOPS
288511 NESTED LOOPS
326271 MERGE JOIN CARTESIAN
1107 INDEX RANGE SCAN (object id 17284)
327376 SORT JOIN
295 TABLE ACCESS FULL EAS_CLNT_GRP_STS_LOG
614780 TABLE ACCESS BY INDEX ROWID EAS_PERSON
652540 INDEX UNIQUE SCAN (object id 17277)
289517 INDEX UNIQUE SCAN (object id 17275)
540 SORT AGGREGATE
287 TABLE ACCESS BY INDEX ROWID EAS_CLNT_GRP_STS_LOG
557 INDEX RANGE SCAN (object id 17276)
1346 SORT AGGREGATE
737 TABLE ACCESS BY INDEX ROWID EAS_ACCT_GRP_STS_LOG
1412 INDEX RANGE SCAN (object id 17270)
3938 SORT AGGREGATE
4035 INDEX RANGE SCAN (object id 17279)
680 SORT AGGREGATE
355 TABLE ACCESS BY INDEX ROWID EAS_CLNT_GRP_STS_LOG
696 INDEX RANGE SCAN (object id 17276)
2614 SORT AGGREGATE
1578 TABLE ACCESS FULL EAS_PERSON_STS_LOG
2614 SORT AGGREGATE
1578 TABLE ACCESS FULL EAS_PERSON_STS_LOG
14 SORT AGGREGATE
7 TABLE ACCESS BY INDEX ROWID EAS_CLNT_GRP_STS_LOG
14 INDEX RANGE SCAN (object id 17276)
66 SORT AGGREGATE
33 TABLE ACCESS BY INDEX ROWID EAS_ACCT_GRP_STS_LOG
66 INDEX RANGE SCAN (object id 17270)
I thought for sure that when we restored this database it would reveal clues to what happened, but nothing that I can see has changed. I'd appreciate any clues anyone can give me about where to look and what to check.
Thanks,
Cherie