RE: RE: Time to read 6000 (block size 2k) blocks
Do you possibly have multiple rows in profileduser for each profileduserid?
Or possibly multiple rows in extendedattributes for each profileduserid
(the index should not be called a PK if so)?
If both answers are no, I'm confused about why the DISTINCT keyword is there.
If this profileduser table is potentially a long list of transactions for each user, then the problem is the design. You'd want a table with all the non-transactional attributes exactly once, and then the transactional table would carry just the profileduserid and the transaction details.
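As a rough sketch of that split (profileduserid and the search_* columns come from your query; the transaction table's name, columns, and datatypes are invented here purely for illustration):

-- Non-transactional attributes stored exactly once per user.
CREATE TABLE profileduser (
    profileduserid       NUMBER        PRIMARY KEY,
    search_company_name  VARCHAR2(80),
    search_last_name     VARCHAR2(40),
    search_first_name    VARCHAR2(40)
);

-- Hypothetical transactional table: just the key plus the transaction details.
CREATE TABLE profileduser_txn (
    profileduserid  NUMBER NOT NULL REFERENCES profileduser (profileduserid),
    txn_date        DATE,
    txn_details     VARCHAR2(200)
);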
Do you have 4000+ search companies with names like 'ACME%'? (I'm not familiar with the ESCAPE '/' syntax; I guess I need to search a manual and learn what that is.)
Maybe it would help if you added the table aliases to the relevant columns and gave us a list of all the indexes currently available on the two tables (including uniqueness), plus the nullability of the relevant columns.
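Something along these lines would pull that together (just a sketch against the standard dictionary views; in a multi-schema database you'd also filter and join on the owner columns):

-- Indexes on the two tables, their uniqueness, and their column lists.
SELECT i.table_name, i.index_name, i.uniqueness, c.column_position, c.column_name
  FROM dba_indexes i, dba_ind_columns c
 WHERE i.index_name = c.index_name
   AND i.table_name = c.table_name
   AND i.table_name IN ('PROFILEDUSER', 'EXTENDEDATTRIBUTES')
 ORDER BY i.table_name, i.index_name, c.column_position;

-- Nullability of the columns on those tables.
SELECT table_name, column_name, nullable
  FROM dba_tab_columns
 WHERE table_name IN ('PROFILEDUSER', 'EXTENDEDATTRIBUTES')
 ORDER BY table_name, column_id;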
Given what we can see, if the speed of this query matters more than the associated extra index maintenance cost, you could create an index on profileduser starting with search_company_name and profileduserid and including all the other columns from that table that you reference in the query. Then you'd reduce the number of blocks required by hitting only the index (unless the index ends up with so few row references per leaf block that you read just as many blocks, but I doubt that).
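A sketch of the kind of index meant (the index name is invented; the column list is just the profileduser columns visible in the query):

-- Hypothetical covering index: the leading column supports the LIKE 'ACME%' range scan,
-- and the trailing columns let the query be answered from the index without visiting the table.
CREATE INDEX profileduser_cov_ix ON profileduser
    (search_company_name, profileduserid, search_last_name, search_first_name);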
hmm. your mileage may vary.
I'm also not sure of the purpose of your ROWNUM limit; if it is just there to return the first matches quickly, a FIRST_ROWS hint might be in order.
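If the ROWNUM limit really does just mean "give me the first ~150 matches quickly," the hint would look something like this (a sketch only; on 8i the hint takes no row-count argument):

SELECT /*+ FIRST_ROWS */ DISTINCT
       p.profileduserid profileduserid, search_last_name, search_first_name
  FROM profileduser p, extendedattributes e
 WHERE p.profileduserid = e.profileduserid
   AND p.search_company_name LIKE 'ACME%' ESCAPE '/'
   AND e.customerid = 'ABCDEFGH'
   AND hsbc_user_category IN ('VAL1','VAL2')
   AND ROWNUM < 150;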
good luck!
mwf
-----Original Message-----
From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of
jaysingh1_at_optonline.net
Sent: Friday, August 06, 2004 10:52 AM
To: oracle-l_at_freelists.org
Subject: Re: RE: Time to read 6000 (block size 2k) blocks
Cary,
Thanks for your response. I have enclosed all the details here. I generated an extended trace with level 12.
SELECT DISTINCT P.PROFILEDUSERID PROFILEDUSERID, SEARCH_LAST_NAME, SEARCH_FIRST_NAME
  FROM PROFILEDUSER P, EXTENDEDATTRIBUTES E
 WHERE P.PROFILEDUSERID = E.PROFILEDUSERID
   AND P.SEARCH_COMPANY_NAME LIKE 'ACME%' ESCAPE '/'
   AND E.CUSTOMERID = 'ABCDEFGH'
   AND HSBC_USER_CATEGORY IN ('VAL1','VAL2')
   AND ROWNUM < 150
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 180
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT UNIQUE
      1   COUNT STOPKEY
      1    NESTED LOOPS
   4766     TABLE ACCESS BY INDEX ROWID PROFILEDUSER        (it has 450,000 rows)
   4767      INDEX RANGE SCAN (PROFILEDUSER_IX03)
      1     TABLE ACCESS BY INDEX ROWID EXTENDEDATTRIBUTES  (it has 15,000 rows)
   9530      INDEX UNIQUE SCAN (ATTRIBUTES_PK)
SQL> select column_name from dba_ind_columns where index_name='PROFILEDUSER_IX03';
COLUMN_NAME
> Yes, this is actually 0.003568s/read, which is pretty good.
>
> A better question, though, is, "Does the application really need to make
> 18,805 visits to the database buffer cache to return just one row?"
>
> Unless your query uses some kind of aggregation function to return the
> single row (count, sum, etc.), then you should be able to make this SQL do
> its job with 10-20 LIOs instead of 18,805. If you can do that, you should
> be able to reduce response time from 41.99s to about 0.04s.
>
> I can't see your SQL here, but because there were 2 fetch calls, I'll bet
> that you're not aggregating the result, and that you should be able to get
> to the 0.04s response time target. It might be as simple as a missing
> index, or SQL that debilitates the use of an index.
>
>
> Cary Millsap
> Hotsos Enterprises, Ltd.
> http://www.hotsos.com
> * Nullius in verba *
>
> Upcoming events:
> - Performance Diagnosis 101: 8/10 Boston, 9/14 San Francisco, 10/5 Charlotte
> - SQL Optimization 101: 7/26 Washington DC, 8/16 Minneapolis, 9/20 Hartford
> - Hotsos Symposium 2005: March 6-10 Dallas
> - Visit www.hotsos.com for schedule details...
>
>
> -----Original Message-----
> From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org]
> On Behalf Of Khedr, Waleed
> Sent: Friday, August 06, 2004 9:19 AM
> To: oracle-l_at_freelists.org
> Subject: RE: Time to read 6000 (block size 2k) blocks
>
> Five milliseconds is not bad for a single-block sequential read.
>
> Waleed
>
> -----Original Message-----
> From: jaysingh1_at_optonline.net [jaysingh1_at_optonline.net]
> Sent: Friday, August 06, 2004 9:49 AM
> To: oracle-l_at_freelists.org
> Subject: Time to read 6000 (block size 2k) blocks
>
>
> Hi All,
>
> The question may be wispy.
> We have a 14 CPU Sun box, 8i, 2-node OPS. Not under heavy load.
>
> In our case it is taking 21.86 sec for 6126 blocks (from disk):
>
> db file sequential read        6126       0.29      21.86
>
> Approximately how long should it take to read 6000 blocks?
>
>
> call     count       cpu    elapsed       disk      query    current        rows
> ------- ------  -------- ---------- ---------- ---------- ----------  ----------
> Parse        1      0.01       0.01          0          0          0           0
> Execute      1      0.00       0.00          0          0          0           0
> Fetch        2     11.39      41.98       6126      18805          0           1
> ------- ------  -------- ---------- ---------- ---------- ----------  ----------
> total        4     11.40      41.99       6126      18805          0           1
>
>
> db file sequential read        6126       0.29      21.86
>
> Thanks
> Sami
----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to: oracle-l-request_at_freelists.org
put 'unsubscribe' in the subject line.
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
Received on Fri Aug 06 2004 - 10:45:04 CDT