Oracle-L -> RE: queries too slow
Steve,
Comments embedded.
Gaja.
OK, here is a shot at remote application tuning:
Cut and paste from original posting
call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute    157    475.75     508.74    1441954    1453090        793        157
Fetch        0      0.00       0.00          0          0          0          0
------- ------  --------  ---------  ---------  ---------  ---------  ---------
total      158    475.75     508.74    1441954    1453090        793        157
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 31 (SDOSTLO20) (recursive depth: 1)

Rows  Execution Plan
----  --------------------------------------------------------------------
   0  UPDATE STATEMENT GOAL: CHOOSE
   0    UPDATE OF 'EC_PRDITM'
   0      INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'EC_PRDITM_SKU_UK' (UNIQUE)
*******************************************************************
SELECT PRDITMID FROM EC_PRDITM WHERE SKU = :b1 AND DBSTS = 'A';
call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute    157      0.01       0.01          0          0          0        157
Fetch      157    373.98     393.26    1441790    1453035        471        157
------- ------  --------  ---------  ---------  ---------  ---------  ---------
total      315    373.99     393.27    1441790    1453035        471        314
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 31 (SDOSTLO20) (recursive depth: 1)
Rows  Execution Plan
----  --------------------------------------------------------------------
   0  SELECT STATEMENT GOAL: CHOOSE
   0    TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'EC_PRDITM'
   0      INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'EC_PRDITM_SKU_UK' (UNIQUE)
OK, so we have an UPDATE and a SELECT. It looks like they are being called from some kind of iterative block of code; the UPDATE and the SELECT are each executed 157 times.
The per-execution statistics for the UPDATE are:
1) CPU - 3.03 secs
2) Elapsed - 3.24 secs
3) Physical I/O - 9184 blocks
4) Logical I/O - 9260 blocks
5) Rows updated - 1
The per-execution statistics for the SELECT are:
1) CPU - 2.38 secs
2) Elapsed - 2.50 secs
3) Physical I/O - 9183 blocks
4) Logical I/O - 9258 blocks
5) Rows selected - 1
Total number of blocks in Table (per previous posting) - 9255
Need I say more...;-)
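The point can be checked directly: the per-execution logical I/O (about 9260 blocks) is roughly the size of the whole table, so each execution is effectively reading the entire table despite the unique-scan plan. A sketch of the check, not from the original posting (it assumes the table lives in the parsing user's schema and has been analyzed so the dictionary block count is current):

```sql
-- Sketch: compare the table's block count (recorded by ANALYZE/statistics
-- gathering) with the per-execution logical I/O from the tkprof output.
-- If the two numbers are roughly equal, each execution reads the whole table.
SELECT table_name, blocks, num_rows
FROM   user_tables
WHERE  table_name = 'EC_PRDITM';
```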
Some things to try out:
> I do agree with you. I have seen plenty of queries that were
> doing massive index scans, and were returning most of the rows.
> They had a great hit-rate, but would have run much faster if
> they had done a full table scan.
>
Thank you; as you mentioned, the full table scan has its place.
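One way to test that on this particular query is to force the full-scan path and compare timings against the index-driven plan. A sketch, not from the thread (the FULL hint is standard Oracle syntax; the alias 'p' is introduced here only so the hint can name the table):

```sql
-- Sketch: force a full table scan with a hint, then compare elapsed
-- time and I/O against the index-driven plan from the trace above.
SELECT /*+ FULL(p) */ prditmid
FROM   ec_prditm p
WHERE  sku = :b1
AND    dbsts = 'A';
```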
> Anyway, I'm sure the problem with the query is something like
> it is using the wrong index, or there isn't a good index to use,
> or maybe even that it should be doing a full table scan.
> Experiment, and find what works the best.
>
Bingo...hope the concatenated index helps.
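For reference, the concatenated index under discussion would look something like the following. This is a sketch, not from the thread: the index name is made up, and whether it helps depends on the optimizer being able to filter on DBSTS within the index rather than at the table.

```sql
-- Sketch: a concatenated (composite) index covering both predicate
-- columns, so WHERE SKU = :b1 AND DBSTS = 'A' can reject non-matching
-- rows in the index without visiting the table. Index name is hypothetical.
CREATE INDEX ec_prditm_sku_dbsts_i ON ec_prditm (sku, dbsts);
```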
Cheers,
Gaja.
> --- Gaja Krishna Vaidyanatha <gajav_at_yahoo.com> wrote:
> > Friends,
> >
> > Here we go again, the false correlation between db cache hit
> > ratio and its effect on performance. We just got done with a
> > similar thread a couple of weeks ago. Even without seeing
> > anything in Prakash's environment, I would probably guess that
> > the current size of the database buffer cache may be OK. A 79%
> > cache hit ratio is by no means "WAY too low". WAY too low
> > compared to what - 99%? Is there any guarantee that at a 99%
> > cache hit ratio the queries will be screaming? I strongly
> > doubt it.
> >
> > There are way too many folks who pin all of their hopes for a
> > performance increase on throwing memory at the Oracle SGA,
> > specifically at the db buffer cache.
> >
> > Myth: Quite possibly one of the biggest performance myths out
> > there is, "If I do not run my Oracle database with a cache hit
> > ratio of 90% or above, I have bad performance".
> >
> > Performance needs to be measured by "system throughput" - how
> > much work the system has done for you. We have even dealt with
> > systems with 65-70% cache hit ratios where the performance of
> > the transactions and the queries was stellar. The incremental
> > increase in cache hit ratio that one achieves by configuring
> > the db buffer cache with more memory does not do much for
> > system performance.
> >
> > Hence the memory should probably be utilized for something else
> > (e.g. sort_area_size, hash_area_size etc.). Like anything in
> > life, the law of diminishing returns will come into play sooner
> > or later. Further, the reference to a 99% cache hit ratio and
> > the assumption that performance is great as a result is not
> > correct. Not by a long shot.
> >
> > While a reasonable cache hit ratio is required to avoid an
> > inordinate amount of physical I/O, the bigger issue that needs
> > to be tackled is "What is causing such a high amount of
> > physical I/O?". 9.9 out of 10 times, it is an application
> > problem. Maybe the application needs to perform a "full table
> > scan" instead of an "index scan", or vice versa. Maybe the
> > index used is not optimal - it does not have all the "required
> > columns". A bad cache hit ratio is a "symptom", while a bad
> > query is "the disease".
> >
> > Another point to keep in mind is that on systems that are
> > configured with mass storage devices (which have
> > controller/disk level caches), a physical I/O from Oracle's
> > perspective still does not mean "physical I/O from disk". It
> > could mean "logical I/O from the storage array's cache". There
> > is more to performance than the cache hit ratio of your db
> > buffer cache.
> >
> > Fact: Within reason and hardware limitations, if the db buffer
> > cache is configured optimally, the cache hit ratio is only a
> > very small indicator/symptom of the bigger problem/disease.
> >
> > Remember the 80-20 rule. 80% or more of our problems with
> > Oracle are with bad SQL, and no matter how much memory you
> > throw at Oracle, if the disease is not cured, the symptoms will
> > still exist. A 99% cache hit ratio may not mean much when
> > measuring system performance and system throughput. I
> > personally prefer a system which has a 99% throughput factor to
> > one with a 99% db buffer cache hit ratio.
> >
> > Don't get me wrong, I am all for "logical I/O" instead of
> > "physical I/O", and I am fully aware of the 3 orders of
> > magnitude of performance difference between the two. I am all
> > for giving Oracle "enough" memory - but only up to a certain
> > point. I'd rather spend my time fixing, re-writing and tuning
> > my queries. The performance difference that I get from tuning a
> > query is many times higher than 3 orders of magnitude. I
> > believe in curing diseases, not symptoms. Let's go tune those
> > awful queries.
> >
> > Best Regards,
> >
> > Gaja.
> >
> > --- Jack Silvey <JSilvey_at_XOL.com> wrote:
> > > Prakash,
> > >
> > > 79% is WAY too low. For an OLTP system (small number of rows
> > > returned) you should see it in the >90% range, and want to
> > > see the >95% range. >99% is ideal. Get after that DBA! It
> > > will help greatly across the board.
> > >
> > > Now, about the number of blocks and rows visited for the
> > > number of rows returned:
> > >
> > > The question we must ask ourselves is why does Oracle think
> > > it necessary to run through 1.4 million rows to get to the
> > > 156 it returns? Especially with a unique index?
> > >
> > > Gut instinct tells me that the optimizer is making incorrect
> > > choices based on the presence of the :b7 variable. While
> > > using bind variables will reduce parsing, it can also cause
> > > incorrect optimizer choices.
> > >
> > > I would bet that the optimizer is choosing all the rows where
> > > DBSTS = 'A' and then sorting through those rows to find out
> > > how many have the sku = :b7 clause.
> > >
> > > How many rows in the table have DBSTS = 'A'? If it is 1.4
> > > million, we have our answer.
> > >
> > > Jack
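Jack's closing question can be answered with a quick selectivity check along these lines. This is a sketch, not part of the original thread; the table and column names are taken from the trace above.

```sql
-- Sketch: measure the selectivity of the DBSTS = 'A' predicate.
-- If the count comes back close to the table's ~1.4 million rows, the
-- predicate filters almost nothing, as Jack suspects.
SELECT COUNT(*) AS active_rows
FROM   ec_prditm
WHERE  dbsts = 'A';
```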