RE: how many LIOs is too many
Date: Tue, 21 Jan 2014 21:32:55 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D901DD78ED_at_exmbx05.thus.corp>
Raman,
My point was that 1,800 per hour would be equivalent to one CPU; I wasn't trying to say that 1,800 was the actual number I had worked out. (You had said "thousands per hour", so I wasn't assuming that the figures shown were necessarily the worst figures you had seen.)
It's not always safe to assume that the figures you get from v$sql are complete, by the way: the child cursor can be flushed or invalidated at some point and reloaded with the counts set back to zero.
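As a rough sanity check (a sketch only; the columns are standard V$SQL columns, and '&sql_id' is a substitution placeholder for the SQL_ID you are investigating), comparing LOADS and INVALIDATIONS against EXECUTIONS can hint at whether the counters have been reset since FIRST_LOAD_TIME:

```sql
-- Sketch: check whether a cursor's counters may have been reset.
-- LOADS > 1 or INVALIDATIONS > 0 suggests the child cursor has been
-- reloaded, so EXECUTIONS may under-count the full history.
select sql_id, child_number, executions, loads, invalidations,
       first_load_time, last_load_time
from   v$sql
where  sql_id = '&sql_id';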
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle
From: Ram Raman [veeeraman_at_gmail.com]
Sent: 21 January 2014 21:11
To: Jonathan Lewis
Cc: ORACLE-L
Subject: Re: how many LIOs is too many
Thanks to everyone who responded. Jonathan, how did you arrive at 1,800 exec/hr? I did 789388 / ((sysdate - first_load_time) * 24); with about 33 days, it came to about 1,000 exec/hr.
I have some more information about the SQL: https://drive.google.com/file/d/0B9YC82qZ8_3eNGxFeHdQdlJiN1k/edit?usp=sharing
In this case the number of LIOs per execution is probably not the important bit - the important bit is that the query seems to take a little over 2 CPU seconds per execution. At 1,800 executions per hour (rather than the "few thousand" you suggest), this would be equivalent to eliminating one of your CPUs. Unless you've got at least 8 (real) CPUs, you don't want to be running this query; if you've got a small number of real CPUs which are using threads to fake themselves up to look like lots of CPUs, you really don't want to be running this query.
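The arithmetic behind both figures can be checked in one line (a sketch using the numbers quoted in this thread: 789,388 executions over roughly 33 days, and 2 CPU seconds per execution at 1,800 executions per hour):

```sql
-- Sketch: ~997 executions/hour from the v$sql counts, and
-- 2 CPU sec/exec * 1800 exec/hr = 3600 CPU sec/hr = one CPU saturated.
select 789388 / (33 * 24) as approx_exec_per_hour,
       2 * 1800           as cpu_seconds_per_hour
from   dual;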
To answer your question
1) Yes - and the bizarre thing is that the code fetched a couple of hundred rows in order, processed and updated the first one (which took it off the list), then re-ran the query to fetch a couple of hundred again. If you can't see the code, try tracing it (and read the trace file) to see what the process does next after fetching the 500.
2) There is no formula that can be generally applied - especially if you're interested in precision. Rick's suggestion is a generous over-estimate and talks about "final set" - but doesn't make clear that "final set" could have to allow for an aggregate: your query might be required to aggregate 20,000 rows down to 500, and any estimate should be about the 20,000, not the 500. In your case (as Mark indicates) you may be acquiring and sorting a very large volume of data and then discarding all but 500 rows, and it's the volume acquired that matters, not the volume returned.
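A hypothetical illustration of the aggregation point (the table and column names here are invented, not taken from your system): a query that returns only 500 rows may still have to acquire and sort every matching row first.

```sql
-- Hypothetical example (names invented): the GROUP BY / ORDER BY
-- forces Oracle to acquire and sort all matching rows before the
-- top 500 can be identified, so the work done scales with the rows
-- scanned, not the 500 rows returned.
select *
from   (select account_id, count(*) as pending
        from   work_queue
        where  status = 'PENDING'
        group  by account_id
        order  by pending desc)
where  rownum <= 500;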
Bottom line - don't worry about the details at present, that query (with its current plan) is clearly too CPU-intensive to be run thousands of times per hour.
a) find out what it's doing and why - you may be able to reduce the execution count or fetched volume
b) review the current execution plan to see if there is a better path possible
c) review the SQL to see if it can be re-written
--
http://www.freelists.org/webpage/oracle-l
Received on Tue Jan 21 2014 - 22:32:55 CET