Re: Memory limitations in Oracle?
On 27 Mar 2001 16:45:13 +0100 (BST), Andrew Mobbs <andrewm_at_chiark.greenend.org.uk> wrote:
>Vikas Agnihotri <vikasa_at_despammed.com> wrote:
>>Are there any memory limitations in Oracle?
>>
>>In other words, if I have a machine with 20GB of physical memory
>>(hypothetically), can I assign it all to Oracle's SGA via the relevant
>>instance parameters (db_block_buffers, shared_pool_size, log_buffers,
>>etc).
>
>I'm currently running an instance with a 20GB SGA. Whether you'll be able
>to or not depends on your OS and the version of Oracle. Things claiming
>"64 bit" are usually a good bet.
>
>Even if your OS and Oracle version support large SGAs, you may need to
>ensure that your OS is configured to do so. Things like maximum size
>of shared memory segments, and maximum number of shared memory
>segments are important.
Understood. What I meant was: assuming the OS supports it, is there any *Oracle* limitation on how much memory can be used? For instance, if the machine has 200GB of RAM, can Oracle use it all?
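For concreteness, this is the back-of-the-envelope arithmetic I have in mind, as a minimal SQL*Plus sketch (V$SGA and V$PARAMETER are the standard views; the breakdown in the comments is only approximate):

-- Total SGA as the instance reports it
SELECT name, value FROM v$sga;

-- The parameters that drive it
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_block_buffers', 'db_block_size',
                'shared_pool_size', 'log_buffer');

-- Roughly: db_block_buffers * db_block_size  ~ Database Buffers
--        + shared_pool_size                  ~ most of Variable Size
--        + log_buffer                        ~ Redo Buffers
--        + fixed size                        = total SGA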
>Oracle will refuse to start unless it can allocate all memory without
>paging to disk. However, allocating all your RAM to Oracle's SGA is
>probably an unwise decision: what are the Oracle shadow processes going
>to use, let alone any other process you might want to run?
Hm. How do you size the memory used by the shadow processes?
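To make the question concrete, the only thing I've found so far is a rough per-session check against the standard V$SESSTAT/V$STATNAME views (the statistic names below are the usual ones, but treat this as a sketch rather than a tuned script):

-- Per-session memory used by the dedicated server (shadow) processes
SELECT s.sid, n.name, st.value
  FROM v$session s, v$sesstat st, v$statname n
 WHERE st.sid = s.sid
   AND st.statistic# = n.statistic#
   AND n.name IN ('session pga memory', 'session uga memory')
 ORDER BY st.value DESC;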
In general, how is the SGA sized? Bumping it up upon discovering a poor hit ratio is a reactive approach. What is the best-practice approach for up-front design? Rules of thumb?
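(By "reactive" I mean the usual hit-ratio check, something like this sketch against V$SYSSTAT; the statistic names and formula are the standard ones:)

-- Buffer cache hit ratio: 1 - physical reads / (db block gets + consistent gets)
SELECT 1 - (phy.value / (cur.value + con.value)) AS cache_hit_ratio
  FROM v$sysstat phy, v$sysstat cur, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND cur.name = 'db block gets'
   AND con.name = 'consistent gets';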
Thanks...

Received on Tue Mar 27 2001 - 10:39:26 CST