Re: Does Oracle keep a history of which blocks are/were loaded into the buffer cache?

From: Cary Millsap <cary.millsap_at_method-r.com>
Date: Wed, 15 Dec 2010 22:45:55 -0600
Message-ID: <AANLkTikXq8KJRNoY_aXJTzg3x7BB3cLP6ZcrNGqknUUx_at_mail.gmail.com>



Charles,

If you had a directory full of trace files and wanted to see what times your blocks were read in, you could *grep* for the 'db.*read' events and then *sort* by the tim value on each line (assuming you're using Oracle release 10.2+).
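As a concrete illustration of that pipeline, here is a minimal sketch. The trace lines below are fabricated, but follow the 10.2+ WAIT-line shape, with tim= as the final key=value pair on each line; the directory and file names are made up:

```shell
# Fake trace file in the 10.2+ format (tim= is in microseconds).
mkdir -p /tmp/trace_demo
cat > /tmp/trace_demo/ora_1234.trc <<'EOF'
WAIT #3: nam='db file sequential read' ela= 421 file#=5 block#=118 blocks=1 obj#=52 tim=1292474000000200
WAIT #3: nam='db file scattered read' ela= 903 file#=5 block#=120 blocks=8 obj#=52 tim=1292474000000100
WAIT #1: nam='SQL*Net message from client' ela= 15 driver id=1650815232 tim=1292474000000300
EOF

# Keep only the read waits, then sort numerically on the tim= value.
# -t= splits fields on '='; the tim value is the 8th such field for
# lines of this exact shape (nam, ela, file#, block#, blocks, obj#, tim).
grep -h "db.*read" /tmp/trace_demo/*.trc | sort -t= -k8 -n
```

Note the field index assumes every matched line has the same set of key=value pairs; if your trace lines vary, extracting the tim= value with awk before sorting is more robust.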

We have two software tools that you might be interested in knowing about. If you passed the "grep ... | sort" data that I mentioned above to our *mrnl* tool (http://method-r.com/component/content/article/116), it would show you the wall time at which each read call was made. If you wanted to filter or aggregate your trace data (using *group-by* and *where*-type features), that's what our *mrskew* tool does. You can see some examples at http://method-r.com/component/content/article/117.
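Downthread, Mark suggests taking periodic snapshots of V$SEGMENT_STATISTICS to see whether the newer partitions account for most of the physical reads. The delta arithmetic behind that idea can be sketched as follows, assuming you had spooled the 'physical reads' statistic per partition to CSV at two points in time; the file names, partition names, and values here are all made up:

```shell
# Two hypothetical snapshots of V$SEGMENT_STATISTICS, spooled as
# "subobject_name,value" rows for statistic_name = 'physical reads'.
cat > /tmp/snap_t0.csv <<'EOF'
ORDERS_P2010Q3,1500
ORDERS_P2010Q4,9000
EOF
cat > /tmp/snap_t1.csv <<'EOF'
ORDERS_P2010Q3,1520
ORDERS_P2010Q4,14000
EOF

# Per-partition delta of physical reads between the two snapshots.
# NR==FNR is true only while reading the first file, so t0[] holds the
# baseline; the second pass prints each partition's increase.
awk -F, 'NR==FNR {t0[$1]=$2; next} {print $1, $2 - t0[$1]}' \
  /tmp/snap_t0.csv /tmp/snap_t1.csv
```

If the newest partition shows a far larger delta than the older ones, that is evidence the old data is cold, which is the kind of justification being sought for partitioning.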

Cary Millsap
Method R Corporation
http://method-r.com

On Tue, Dec 14, 2010 at 7:31 PM, Charles Schultz <sacrophyte_at_gmail.com> wrote:

> Mark,
>
> Yeah, I realized my goof with logminer as soon as I sent it. =) Oh well.
>
> Fortunately, I am not overly worried about the overhead associated with
> SQL_TRACE; that is an interesting idea. I am not so sure I could sell that
> on a Production database, though. v$segment_stats sounds like the best way
> to at least get some idea. As I mentioned to another lister, partitioning is
> an option on the table but I really want to justify it first before I
> blindly offer it as a solution. *grin* The table is not currently
> partitioned, hence my analysis.
>
> Thanks for the ideas.
>
> On Tue, Dec 14, 2010 at 17:31, Bobak, Mark <Mark.Bobak_at_proquest.com> wrote:
>
>> Charles,
>>
>>
>>
>> I'm pretty sure you're out of luck. Consider that blocks are loaded from
>> disk to satisfy queries, as well as DML, and logminer will have nothing to
>> say about that. You can enable SQL_TRACE, and it will capture file#/block#
>> data, but at great overhead and expense.
>>
>>
>>
>> Oracle just doesn't track to that level of detail, as far as I know. You
>> can look at V$SEGMENT_STATISTICS, to get an idea of which segments have more
>> physical I/O happening on them, but no detail about which set of blocks in a
>> table were most recently loaded from disk. If the table is partitioned by
>> date, and the partitions closely align w/ the level of granularity that you
>> require for your analysis, that may allow you to use V$SEGMENT_STATISTICS
>> and periodic snapshots to determine if newer data is read from disk more
>> recently than old data.
>>
>>
>>
>> Just some thoughts... hope that helps,
>>
>>
>>
>> -Mark
>>
>>
>>
>> *From:* oracle-l-bounce_at_freelists.org [mailto:
>> oracle-l-bounce_at_freelists.org] *On Behalf Of *Charles Schultz
>> *Sent:* Tuesday, December 14, 2010 5:47 PM
>> *To:* ORACLE-L
>> *Subject:* Does Oracle keep a history of which blocks are/were loaded
>> into the buffer cache?
>>
>>
>>
>> Good day,
>>
>>
>>
>> I am trying to analyze a given table in an effort to determine how often
>> the older records are used compared to the newer records. I know I can go
>> against the AWR and determine the age of some statements by searching for a
>> date column in the table and grabbing the associated bind variable, but this
>> obviously misses any queries that limit data via a join (i.e., no bind
>> variable). Is there a way to determine when a particular range of datafile
>> blocks have been last loaded from disk, or how often? I did some digging in
>> x$bh but that has no history (that I know of). In the meantime, I'll start
>> up a LogMiner session and scrape all interested ROWIDs, but this is not a
>> pretty solution. Maybe the only solution?
>>
>> --
>> Charles Schultz
>>
>
>
>
> --
> Charles Schultz
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Dec 15 2010 - 22:45:55 CST
