Oracle FAQ Your Portal to the Oracle Knowledge Grid
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Keep Pool Question

Re: Keep Pool Question

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Wed, 9 Mar 2005 08:54:20 +0000 (UTC)
Message-ID: <d0mdjs$mg4$1@titan.btinternet.com>

Brian,

Simple rule - if the resources are there, then you might as well find a way to take advantage of them.

If you are confident that you have 2-3 GB of free memory all the time, then the most useful place for that memory is probably somewhere within Oracle's control.

There are always a couple of caveats, of course:

a) Is the free memory really free, or is it being used as a filesystem buffer? I believe some flavours of Unix take advantage of "free" memory until it is required, by temporarily using it for the f/s cache.

    If this is the case, then that memory may be protecting you from something that would otherwise be a problem. (e.g. if you are running on a file system, it reduces the problem caused by the 'small-table' bug, because those small tables may be buffered in the filesystem, so you won't necessarily get much benefit from creating a KEEP pool for them.)

b) If you do large sorts from time to time, are you going to have an unlucky coincidence some time when several large sorts happen concurrently and put pressure on the memory? (One quick check - how close have you been to hitting your pga_aggregate_target? How much memory from the 2-3GB should you 'reserve' in case you ever do hit the target?)
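The quick check in (b) can be run directly against the PGA statistics view. A hedged sketch - the statistic names are as found in v$pgastat on 9i/10g, but worth verifying on your own version:

```sql
-- Hedged sketch: how close has the instance come to its PGA target?
-- Compare 'maximum PGA allocated' against the target parameter, and
-- check whether 'over allocation count' has ever moved off zero.
select name, value, unit
from   v$pgastat
where  name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated',
                'over allocation count');
```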

c) The 'small table threshold' defaults to 2% of the buffer cache. If you increase the buffer cache, you could get unlucky and find that some quite large tables now count as small tables, with a surprising side-effect on the cache (hence the I/O rate) due to occasional tablescans.
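The threshold in (c) is held in a hidden parameter. A hedged sketch of how to read it (run as SYS; the x$ksppi/x$ksppcv join is the usual way to see hidden parameters, and the value is expressed in blocks):

```sql
-- Hedged sketch (SYS only): read the hidden parameter behind the
-- 'small table threshold'.  The 2% figure quoted above is the default.
select p.ksppinm  parameter,
       v.ksppstvl value
from   x$ksppi  p,
       x$ksppcv v
where  p.indx = v.indx
and    p.ksppinm = '_small_table_threshold';
```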

d) If you make the cache too big, you may find that some blocks that are subject to high modification rates start leaving long chains of CR clones - which means that the cache buffers chains latch protecting such a block can come under pressure, because sessions holding it spend increasing amounts of time scanning the long chains for the block they want. (A typical chain is supposed to average "half" a buffer, but I have seen chains ending up with several hundred copies of the same block.) This problem shows up most if your code does 'select for update'. You can detect the issue by querying x$bh and aggregating on file number and block number where the data object id is less than 4 million.
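The x$bh check described in (d) might look like the following sketch (run as SYS). The column names - file#, dbablk for the block address, obj for the data object id - and the cut-off of 10 copies are assumptions to be adjusted for your version and workload:

```sql
-- Hedged sketch of the x$bh check above: count buffer copies per block
-- to spot long chains of CR clones.
select file#, dbablk, count(*) copies
from   x$bh
where  obj < 4000000
group by file#, dbablk
having count(*) > 10          -- arbitrary cut-off for a "long" chain
order by copies desc;
```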

There are probably a few other potential threats that you can come up with.

Nuno makes a very important point - it is a bit too easy to think "put the whole of something", or "put all of several things", into the KEEP pool.

But it would be perfectly valid to cater for 'the really important bit only, plus some spare'. And, of course, the point about the working cycle is critical - for example, if you have one table that is used very heavily only in the day, and another that is used very heavily only at night (okay, it's a silly example to make the point), then the KEEP pool only has to be large enough for the larger of the two, not for the sum of the two.
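Sizing for the larger of the two working sets starts with the block counts of the candidate objects. A hedged sketch - the table names are hypothetical, and dba_segments reports allocated blocks rather than blocks actually touched:

```sql
-- Hedged sketch: block counts for candidate KEEP objects, so the pool
-- can be sized for the larger of the two rather than their sum.
-- Segment names are hypothetical placeholders.
select owner, segment_name, blocks
from   dba_segments
where  segment_name in ('DAYTIME_TAB', 'NIGHTTIME_TAB');
```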

Ideally, of course, for a fixed amount of memory, you don't want to define any special pools because doing so reduces Oracle's ability to respond optimally to varying loads. In practice, however, there will always be cases where you can beat the internal algorithms because you know the data and the business use of data.

One final thought - apart from the "small table" bug, I don't usually find the KEEP pool to be useful - but I have occasionally found a small RECYCLE pool to be useful as a way of stopping very large objects with a very scattered access pattern from reducing the benefit of the main cache (polluting the cache as James Morle calls it).
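Setting up a small RECYCLE pool of the kind described is a two-step exercise; a hedged sketch, with size and table name purely illustrative:

```sql
-- Hedged sketch: carve out a small RECYCLE pool and direct one large
-- object with a scattered access pattern into it, so its blocks age
-- out quickly instead of polluting the main cache.
alter system set db_recycle_cache_size = 64M;

alter table big_history_table storage (buffer_pool recycle);
```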

-- 
Regards

Jonathan Lewis

http://www.jlcomp.demon.co.uk/faq/ind_faq.html
The Co-operative Oracle Users' FAQ

http://www.jlcomp.demon.co.uk/seminar.html
Public Appearances - schedule updated Jan 21st 2005

"Brian" <brian_at_databaseknowledge.com> wrote in message 
news:1110237242.136283.111680_at_z14g2000cwz.googlegroups.com...

> Thanks for your reply. Tuning the application would be a great help.
> At the moment the developers are swamped. In the meantime, I try to
> tune the db to compensate. It will take years to remodel the data and
> rewrite millions of lines of code that does not use bind variables.
>
> You bring up a good point: 3.3G is large. The thing is, one db writer
> with asynchronous io works fine. OS does not swap, load average is <2
> most of the time, statspack has never reported 'buffer busy waits',
> checkpointing is problem-free, the BCHR (for what it's worth) is
> decent. So whatever extra work is being placed on the system by having
> large LRU queues is not an issue (as far as I can tell).
>
> I recently added 600M to buffer cache because of disk i/o issues.
> Before this statspack was reporting very high i/o wait times (> 2000
> buffer waits in the tablespace io stats section). iostat was
> reporting high awaits. Tablespaces were moved off hot disks, tuned sql
> and I did the standard DBA stuff. To make a long story short, I
> figured out that there is a controller problem because it only is able
> to sustain a max i/o rate of 10M/sec when it should be 80M/sec.
>
> So as a last resort I increased the buffer cache by 600M. Then
> statspack reported great io times with buffer waits < 10. I checked
> and rechecked. The extra 600M took a huge load off the disks. Buffer
> busy waits remained fine.
>
> Adding another gig is probably overkill. As I mentioned no one is
> having any issues. I kind of wanted to push the cache until I saw some
> kind of performance problem and then back it down. If the box has a
> total of 8Gig ram, what would the problem be if I went to a 4.3G cache and
> a keep pool? The pool will have its own LRU queue, the box won't
> swap, and if buffer busy waits become an issue then I'll add another db
> writer. Am I missing something? There is 130G of data sitting in the
> datafiles. The nodes always have at least 2G sometime 3G of free
> memory.
>
Received on Wed Mar 09 2005 - 02:54:20 CST

