A rough histogram of what I've seen in the past ten years is this:
40% - inefficient SQL that hits the buffer cache ridiculously often for the same blocks over and over
40% - application 3GL code that parses inside of loops, processes one row at a time, etc.
5% - slow or too-frequent disk I/O
15% - miscellaneous other (includes configuration errors, enqueue problems, buffer busy wait (bbw) problems, etc.)
Many instances of the first two problem types listed here are invisible to system-wide monitoring tools, which, I think, is why we see them so much.
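The second category above (3GL code that parses inside loops and processes one row at a time) can be sketched in a few lines. This is a hypothetical illustration using Python's sqlite3 module standing in for an Oracle client library; the table and data are made up, but the contrast is the same one Cary describes: a unique SQL text per row forces a parse on every iteration, while a single parameterized statement is parsed once and executed many times with bind values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
rows = [(i, "name%d" % i) for i in range(1000)]

# Anti-pattern: building a distinct SQL string per row means the
# database sees 1000 different statements and must parse each one
# (a "hard parse" per row, in Oracle terms).
for emp_id, name in rows:
    conn.execute("INSERT INTO emp VALUES (%d, '%s')" % (emp_id, name))

# Better: one statement with bind placeholders, parsed once; the
# row-at-a-time loop collapses into a single array-style call.
conn.execute("DELETE FROM emp")
conn.executemany("INSERT INTO emp VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(count)  # 1000
```

Because per-row parsing is pure CPU work inside the database, it is exactly the kind of load that looks like "the server is busy" to system-wide monitoring while the real fix is in the application code.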
Cary Millsap
Hotsos Enterprises, Ltd.
Nullius in verba
Hotsos Symposium 2007 / March 4-8 / Dallas
Visit www.hotsos.com for curriculum and schedule details...
From: Steve Perry [mailto:sperry_at_sprynet.com]
Sent: Tuesday, June 27, 2006 9:47 PM
To: Cary Millsap
Cc: Mladen Gogala; oracle-l_at_freelists.org
Subject: Re: full-scan vs index for "small" tables
On Jun 27, 2006, at 09:52 AM, Cary Millsap wrote:
"Most people guess "It's got to be I/O." But "it" is I/O in fewer than 5% of cases I've witnessed since about 1995"
What have been the majority of problems you've run into?
For me, I/O used to be the problem (after 2 GB drives went away and before SANs showed up), mainly because the sysadmins treated database file systems like "file servers": they'd put everything on a few spindles and fill them up. Then management would say the servers were underutilized if average utilization was below 60%, so they would load up the servers with applications to max out the memory.
After I changed companies and started using SANs, most problems seemed to be CPU-bound on the server. I've seen a few "really good" nested-loops queries on small tables that would take hours to complete.
I'm curious what others have run into.
--
http://www.freelists.org/webpage/oracle-l

Received on Tue Jun 27 2006 - 23:24:22 CDT