Re: Does Oracle keep a history of which blocks are/were loaded into the buffer cache?
Date: Thu, 16 Dec 2010 11:26:42 -0600
Message-ID: <4D0A4BD2.5050906_at_ardentperf.com>
One idea would be to take a few snapshots of x$bh yourself and compare them over time. I've actually done this myself, with very informative results. Of course you need to be careful about the impact; I don't know the exact cost, so I just assume that querying v$bh is somewhat expensive in terms of latching. That said, I sometimes find that the value of the information outweighs the temporary overhead.
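For example, a rough sketch of the kind of snapshot I mean (untested; the snapshot table name and the owner/table filters are just placeholders, so adjust and test before running it against anything that matters):

  create table bh_snap (
    snap_time  date,
    file#      number,
    block#     number,
    status     varchar2(10)
  );

  -- capture which blocks of one table are currently in the buffer cache;
  -- repeat periodically and diff the runs to see what got loaded in between
  insert into bh_snap
  select sysdate, bh.file#, bh.block#, bh.status
    from v$bh bh
    join dba_objects o
      on o.data_object_id = bh.objd
   where o.owner = 'SCOTT'              -- placeholder owner
     and o.object_name = 'BIG_TABLE'    -- placeholder table
     and bh.status != 'free';
  commit;

Comparing two runs shows which file#/block# ranges appeared in the cache between snapshots; mapping those block ranges back to old vs. new rows is the harder part.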
One relatively cheap tool I like is Lab128; it has a useful "buffer explorer" that lets you visualize the buffer cache nicely. One thing I've done (not especially elegant) is simply take a handful of screenshots of that window at different points in time and compare them... http://www.lab128.com/lab128_screens3.html
-Jeremy
Charles Schultz wrote:
> Mark,
>
> Yeah, I realized my goof with logminer as soon as I sent it. =) Oh well.
>
> Fortunately, I am not overly worried about the overhead associated
> with SQL_TRACE; that is an interesting idea, though I am not so sure I
> could sell it on a production database. V$SEGMENT_STATISTICS sounds
> like the best way to at least get some idea. As I mentioned to another
> lister, partitioning is an option for the table, but I really want to
> justify it before I blindly offer it as a solution. *grin* The
> table is not currently partitioned, hence my analysis.
>
> Thanks for the ideas.
>
> On Tue, Dec 14, 2010 at 17:31, Bobak, Mark <Mark.Bobak_at_proquest.com
> <mailto:Mark.Bobak_at_proquest.com>> wrote:
>
> Charles,
>
>
>
> I’m pretty sure you’re out of luck. Consider that blocks are
> loaded from disk to satisfy queries as well as DML, and LogMiner
> will have nothing to say about the query-driven loads. You can
> enable SQL_TRACE, and it will capture file#/block# data, but at
> great overhead.
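For illustration, a minimal sketch of the tracing approach Mark describes (assuming 10g or later so DBMS_MONITOR is available; the sid and serial# below are placeholders). With waits enabled, each 'db file sequential read' / 'db file scattered read' line in the trace carries the file# and block# as p1/p2:

  begin
    -- waits => true writes the wait events (and their file#/block#
    -- parameters) into the 10046 trace file
    dbms_monitor.session_trace_enable(
      session_id => 123,       -- placeholder SID
      serial_num => 456,       -- placeholder SERIAL#
      waits      => true,
      binds      => false);
  end;
  /

  -- ...and later, turn it back off
  begin
    dbms_monitor.session_trace_disable(
      session_id => 123,
      serial_num => 456);
  end;
  /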
>
>
>
> Oracle just doesn’t track things at that level of detail, as far as I
> know. You can look at V$SEGMENT_STATISTICS to get an idea of
> which segments have more physical I/O happening on them, but there is
> no detail about which set of blocks in a table was most recently
> loaded from disk. If the table is partitioned by date, and the
> partitions closely align with the level of granularity that you
> require for your analysis, you may be able to use
> V$SEGMENT_STATISTICS and periodic snapshots to determine whether
> newer data is read from disk more often than older data.
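As a rough sketch of that idea (the owner and table name below are placeholders): snapshot the cumulative counters periodically, store them with a timestamp, and diff successive runs to see which partitions are still pulling blocks from disk.

  select object_name,
         subobject_name,            -- partition name, if partitioned
         value as physical_reads    -- cumulative since instance startup
    from v$segment_statistics
   where owner          = 'SCOTT'       -- placeholder
     and object_name    = 'BIG_TABLE'   -- placeholder
     and statistic_name = 'physical reads'
   order by subobject_name;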
>
>
>
> Just some thoughts….hope that helps,
>
>
>
> -Mark
>
>
>
> *From:* oracle-l-bounce_at_freelists.org
> <mailto:oracle-l-bounce_at_freelists.org>
> [mailto:oracle-l-bounce_at_freelists.org
> <mailto:oracle-l-bounce_at_freelists.org>] *On Behalf Of *Charles Schultz
> *Sent:* Tuesday, December 14, 2010 5:47 PM
> *To:* ORACLE-L
> *Subject:* Does Oracle keep a history of which blocks are/were
> loaded into the buffer cache?
>
>
>
> Good day,
>
>
>
> I am trying to analyze a given table to determine how often the older
> records are used compared to the newer records. I know I can go
> against the AWR and determine the age of some statements by searching
> for a date column in the table and grabbing the associated bind
> variable, but this obviously misses any queries that limit the data
> via a join (i.e., no bind variable). Is there a way to determine when
> a particular range of datafile blocks was last loaded from disk, or
> how often? I did some digging in x$bh, but it has no history (that I
> know of). In the meantime, I'll start up a LogMiner session and scrape
> all the ROWIDs of interest, but this is not a pretty solution. Maybe
> it is the only one?
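For what it's worth, a rough sketch of the AWR bind-grabbing approach mentioned above (the table name and bind name are placeholders, and the DBA_HIST_ views require the Diagnostics Pack):

  select sb.sql_id,
         sb.name,
         sb.value_string,
         sb.last_captured
    from dba_hist_sqlbind sb
    join dba_hist_sqltext st
      on st.sql_id = sb.sql_id
     and st.dbid   = sb.dbid
   where st.sql_text like '%BIG_TABLE%'   -- placeholder table name
     and sb.name = ':B1'                  -- placeholder bind name
   order by sb.last_captured;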
>
--
http://www.ardentperf.com
+1 312-725-9249
Jeremy Schneider
Chicago
--
http://www.freelists.org/webpage/oracle-l

Received on Thu Dec 16 2010 - 11:26:42 CST