
Re: Memory limitations in Oracle?

From: Howard J. Rogers <howardjr_at_www.com>
Date: Wed, 28 Mar 2001 18:59:39 +1000
Message-ID: <3ac1a80e@news.iprimus.com.au>

"Vikas Agnihotri" <onlyforposting_at_yahoo.com> wrote in message news:js02ctc9s54jgep76efb267aut4b7q15ih_at_4ax.com...
> On Tue, 27 Mar 2001 20:14:54 +0200, in comp.databases.oracle.server
> you wrote:
>
> I have read the docs. My question is generic in nature. Not specific
> to any particular platform. Just curiosity.
>
> 1. Is SGA sizing an exact science or more of an art? Start off with 1/3
> of RAM, monitor the hit ratio, increase, and repeat? Or is there a
> more methodical approach?
>

It's a scientific art. And it's iterative... you never arrive at a 'right' answer.

And DAMn! I See tOSs All!

Yes it's methodical. First tune your Design, then your Application code, then your Memory (which is where you're jumping in), then your I/O, then your Contention (locks and latches etc), and then your O/S.

I'm sorry my mnemonic is so poor, but it's one I've grown to love (?!). C=Contention=See. Otherwise DAM I C O/S comes pretty close.

> 2. What is the rationale behind the "1/3 of RAM" rule of thumb?
>

The rationale is that any more than that and you are liable to induce paging out to disk, and you also have to allow room for the various Server processes that are going to be doing work on behalf of Users.
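
Purely by way of illustration (the numbers here are invented, not a recommendation): on a box with 4Gb of RAM, the 1/3 rule would put the SGA somewhere around 1.3Gb, which in an 8i init.ora might look roughly like this:

    # Hypothetical init.ora fragment: an SGA of roughly 1.3Gb on a 4Gb box
    db_block_size    = 8192        # 8K blocks
    db_block_buffers = 140000      # ~1.1Gb of buffer cache (140000 x 8K)
    shared_pool_size = 209715200   # 200Mb shared pool
    log_buffer       = 1048576     # 1Mb redo log buffer

The remaining two-thirds or so is left for the O/S, the server processes and whatever else has to run on the machine.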

> 3. You say "Oracle will not use 200GB". Is this a documented
> limitation in Oracle?

I doubt it. He's saying that throwing vast quantities of memory at Oracle is a waste of resources, and that Oracle is unlikely to need those sorts of quantities of RAM to have an effective cache. There are hit ratios you should be aiming for, different ratios for different parts of the SGA. Once you've hit them, adding extra memory gives you steadily diminishing returns (if you get my drift).

The law of diminishing marginal utility is where it is all at... you eat your first Mars Bar, it tastes wonderful, you give it a score of 10. You eat your fiftieth Mars Bar, and it gets a score of -100, whilst you vomit. Something like that, anyway. You can push a buffer cache hit ratio from 40% to 80% with ease. Getting it past 95% is really hard: you end up adding 1Gb more RAM to get an extra 0.1% improvement in the ratio.
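
For what it's worth, the classic way of checking the buffer cache hit ratio is a query against v$sysstat, something like:

    -- Buffer cache hit ratio: 1 - (physical reads / logical reads)
    SELECT 1 - (phy.value / (cur.value + con.value)) AS "Cache Hit Ratio"
    FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
    WHERE  cur.name = 'db block gets'
    AND    con.name = 'consistent gets'
    AND    phy.name = 'physical reads';

Bear in mind those statistics are cumulative since instance startup, so take the difference over a representative interval rather than trusting a single snapshot.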

> I realize that throwing memory at the problem
> won't remedy poor application design. Again, my questions are more out
> of curiosity than anything else.
>

DAM I C O/S???? What he's really saying is that by the time DBAs get involved, it's already too late, and you are fighting a losing battle. Yes, you can make something that takes 3 hours to run do its stuff in 20 minutes... but if it had been coded right, or (better) designed right in the first place, then the thing should have been expected to run in 2 minutes from the word go, and your job as DBA would have been to get that down to 30 seconds.

Hence the importance of doing things methodically... and realising that DBAs are last out of the blocks.

> 4. How much memory do the shadow/background processes need? Again, is
> there a methodical approach to determine this?

Depends on your configuration really. Each server process has a PGA attached, the size of which is largely determined by the sort_area_size parameter. How big that should be depends very much on how many concurrent sorts are taking place in your application (which is, I guess, a design and coding thing), and how big those sorts actually are. The tuning goal is to get sorts happening in memory a very high percentage of the time. The sort_area_size needs to be as big as makes that goal come true. And then multiply that by the number of concurrent users, and that's roughly the amount of memory all your server processes will chew up.
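
You can check how well you're meeting that goal from v$sysstat too, with something along these lines:

    -- Percentage of sorts done in memory rather than spilling to disk
    SELECT mem.value / (mem.value + dsk.value) * 100 AS "In-memory sort %"
    FROM   v$sysstat mem, v$sysstat dsk
    WHERE  mem.name = 'sorts (memory)'
    AND    dsk.name = 'sorts (disk)';

And the back-of-the-envelope arithmetic is simple enough: a 1Mb sort_area_size and 200 concurrent users is of the order of 200Mb for the server processes, before you count their code and stack overheads.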

In a multi-threaded server (MTS) configuration, it's all handled rather differently... what used to live in the PGA (the session's UGA) now becomes part of the Shared Pool (or the Large Pool, if you've got one). The overall memory requirement should not be much different, but where it's allocated will be very different.
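
Purely as a sketch (the values are invented), an 8i MTS configuration might look something like:

    # Hypothetical MTS fragment for init.ora
    mts_dispatchers = "(protocol=tcp)(dispatchers=3)"
    mts_servers     = 5           # shared servers started at instance startup
    mts_max_servers = 20          # ceiling Oracle can grow to under load
    large_pool_size = 52428800    # 50Mb: session (UGA) memory lives here

The point being that the per-session memory which would have been private PGA under dedicated servers now has to be budgeted for in the Large Pool (or Shared Pool) instead.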

Regards
HJR
>
> Thanks
>
>
> >> Understood. What I meant was that assuming the OS supports it, is
> >> there any *Oracle* limitation on using any amount of memory? f.i., if
> >> the machine has 200GB of RAM, can Oracle use it all?
> >>
> >> >Oracle will refuse to start unless it can allocate all memory without
> >> >paging to disk. However, allocating all your RAM to Oracle's SGA is
> >> >probably an unwise decision, what are the Oracle shadow processes going
> >> >to use, let alone any other process you might want to run?
> >>
> >> Hm. How do you size the memory used by the shadow processes?
> >>
> >> In general, how is the SGA sized? Bumping it up upon discovering a
> >> poor hit ratio is a reactive approach. What is the best-practice
> >> approach for up-front design? Rules of thumb?
> >>
> >> Thanks...
> >>
> >
> >If you would only first read the documentation from Oracle pertinent to your
> >platform (which you of course don't mention), then you would know most of
> >the answers.
> >BTW rule of thumb is not to use more than 1/3 of physical RAM and I am sure
> >Oracle won't use 200G.
> >Also those monstrous amounts of RAM won't remedy bad application design.
>
Received on Wed Mar 28 2001 - 02:59:39 CST
