RE: Keep buffer cache question

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Fri, 15 Mar 2019 07:26:46 -0400
Message-ID: <033c01d4db22$115a5250$340ef6f0$_at_rsiz.com>


A mundane reason could be a degradation in the clustering factor of one or both indexes (insufficient to change the plan, but causing an increase in physical reads).
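A quick way to check this (just a sketch; swap in your own table and index names) is to compare the clustering factor against the table's block and row counts:

    select i.index_name, i.clustering_factor, t.blocks, t.num_rows
    from   user_indexes i
    join   user_tables  t on t.table_name = i.table_name
    where  i.table_name = 'MY_TABLE';   -- placeholder name

A clustering factor near BLOCKS means the table order suits the index; a value drifting towards NUM_ROWS means nearly every row visited comes from a different block.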

The sledge hammer approach to rule this out is to physically re-order the table in the order of the index being used for range scans.
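One hedged sketch of that sledge hammer (all names are placeholders, and you would have to recreate indexes, constraints, grants etc. afterwards, or use dbms_redefinition to do the same thing online):

    create table my_table_reordered as
    select *
    from   my_table
    order  by range_scan_col1, range_scan_col2;   -- the order of the range-scan index

    -- then swap the new table in for the old one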

Another mundane reason would be if some column(s) have recently been expanded in length sufficiently to give you a lot of row relocation (migration) and/or actual multi-block (chained) storage for a single row. The sledge hammer approach will mitigate relocation but be less effective for multi-block rows.
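If you want to confirm the relocation is happening before swinging the hammer, a rough sketch (the statistic and the ANALYZE syntax are standard; the table name is a placeholder):

    select name, value
    from   v$sysstat
    where  name = 'table fetch continued row';

    analyze table my_table list chained rows into chained_rows;
    select count(*) from chained_rows;

(The CHAINED_ROWS table is created by utlchain.sql.)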

Please notice the words sledge hammer: If someone leaps from here to a periodic table rebuild strategy they get what they deserve. Depending on the size of your table, ruling this out (or in) may be more or less expensive than further analysis.

Good luck.

As JL pointed out, trying to beat the LRU buffer cache for tables that are expanding (and especially involving indexed access) is not a likely win. That strategy is well matched to pretty small, slowly changing lookup tables. But even those can be kept in the regular cache with a "heart-beat": a periodic triple full scan via an index that also references a non-indexed column, or, if the table is small enough, by FTS.
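A sketch of what such a heart-beat might look like (purely illustrative names; referencing a non-indexed column forces the table blocks, not just the index blocks, to be touched):

    select /*+ index(t lookup_pk) */ max(t.non_indexed_col)
    from   small_lookup t;

Run it two or three times back to back from a scheduler job every few minutes so the blocks stay warm enough not to age out of the default cache.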

These are not center case solutions, but rather are exceptional methods for specific edge cases that are not really rare, but are not that frequent either.

And they are WAY down the list from better plans, but you've documented that your plans did not change.

mwf

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Thursday, March 14, 2019 6:23 PM
To: oracle-l_at_freelists.org
Subject: Re: Keep buffer cache question

Did any of the relevant segments appear in "Segments by physical reads"? You need to find out whether it's the indexes or the tables. As a basic guideline, you almost certainly WON'T beat the LRU algorithm by setting up the keep cache.
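If the AWR section doesn't make it obvious, one (hedged) way to see which segments are taking the physical reads is something like:

    select owner, object_name, object_type, value
    from   v$segment_statistics
    where  statistic_name = 'physical reads'
    and    owner = 'MY_SCHEMA'            -- placeholder
    order  by value desc;

The values are cumulative since instance startup, so take two samples and diff them rather than reading the raw numbers.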

A change like this can happen simply because objects (and particularly indexes) get bigger over time as the data sizes grow. You can get into the position (especially when the number of queries grows) that a query reads a leaf block into memory but causes another leaf block from the same index to be flushed, and a few seconds later some other query wants the leaf block that was flushed.

Consider an index on (customer_id, order_date) - when the data is small, index entries for "the most recent order for customer X" may find two or three customers in the same leaf block, so one query benefits from the caching caused by another. As the data gets larger you get to a position where every customer has several leaf blocks and every query for "the most recent order for customer X" has to read a different leaf block, and queries don't get any benefit from each other. At this point your only solution is to increase the buffer cache to ensure that one block for each customer can stay in memory long enough for its next usage.
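To make that concrete, a sketch (not the OP's actual schema):

    create index orders_cust_date_idx on orders (customer_id, order_date);

    -- "the most recent order for customer X"
    select max(order_date)
    from   orders
    where  customer_id = :x;

While several customers' entries share a leaf block, one query's physical read caches the block the next query needs; once each customer's entries span their own leaf blocks, every execution needs a block nobody else has touched recently.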

If you do try implementing a KEEP pool, don't forget to check for the effects of read-consistency. Depending on how CR blocks are created you may find them as copies created in the default cache, or the recycle cache (if you have one), and some (because of operation "copy current to new buffer") will be in the keep cache. Sizing the keep cache to keep the blocks AND the CR blocks can be problematic. You'll have to check what actually happens because the behaviour changes with the version of Oracle and I haven't checked it recently.
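For reference, the mechanics are roughly as follows (a sketch; the names and the size are placeholders, and the size needs real analysis rather than guesswork):

    alter system set db_keep_cache_size = 512M;

    alter table small_lookup storage (buffer_pool keep);
    alter index lookup_pk storage (buffer_pool keep);

    -- then see how many current vs CR copies of those objects are buffered
    select o.object_name, bh.status, count(*)
    from   v$bh bh
    join   dba_objects o on o.data_object_id = bh.objd
    where  o.object_name in ('SMALL_LOOKUP', 'LOOKUP_PK')
    group  by o.object_name, bh.status;

As far as I recall v$bh doesn't report which pool a buffer sits in, so cross-check against v$buffer_pool_statistics as well.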

Regards
Jonathan Lewis



From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on behalf of dmarc-noreply_at_freelists.org <dmarc-noreply_at_freelists.org>
Sent: 14 March 2019 21:22
To: oracle-l_at_freelists.org
Subject: Keep buffer cache question

In doing an AWR report comparison for comparable times, one major difference I saw was that 2 frequently run queries were suddenly doing a lot of physical I/O. For a comparable 2-hour period they went from 1.5 million to 1.8 million executions, but physical reads increased from 0 to 1.2 million. I sampled a few other random times and this was consistent. The queries are both doing index access: one is an index range scan and the other a unique scan against the primary key.
