Re: More Memory the better ... Why not
"Franklin" <member29243_at_dbforums.com> wrote in message
news:3283899.1061860214_at_dbforums.com...
>
> Well, how about using a 10 gig db_cache_size?
>
> Is there such a thing as too much data buffer RAM?
>
> Actually, all data buffer RAM is hashed, so size is not an issue
> there, BUT you can degrade performance with a super-sized
> db_cache_size when:
>
> 1 - You have heavy use of temp tables, truncates or data purging -
> Oracle must sweep all blocks in the data buffer to clean out
> invalid blocks
There is a partial checkpoint done when a segment is dropped; is that the case with temp tables as well? Can you point me to the source of this information?
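One rough way to check this on a scratch instance (just a sketch -- it assumes SELECT on v$sysstat and a throwaway table, and the exact statistic names vary a bit between versions):

  -- snapshot DBWR / checkpoint activity before the operation
  select name, value
    from v$sysstat
   where name in ('DBWR checkpoints',
                  'DBWR checkpoint buffers written',
                  'physical writes');

  -- run the operation in question against a scratch table
  truncate table scratch_tab;

  -- re-run the snapshot query and compare the counters

A jump in the checkpoint counters during the truncate would support the "sweep" claim; a flat line would not.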
> 2 - High updates - The DBWR must scan the whole data buffer looking for
> dirty blocks, causing high work for the DBWR process.
No it doesn't. There is a parameter _db_block_max_scan_pct which sets the percentage of buffers to inspect when looking for free buffers (it used to be _db_block_max_scan_cnt, IIRC).
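For anyone who wants to see that parameter and its description, the usual sketch is to query the x$ksppi / x$ksppcv fixed tables as SYS on a test system (underscore parameters are nothing to change without Oracle Support's blessing):

  -- list hidden parameters matching the buffer-scan limit
  select i.ksppinm   name,
         v.ksppstvl  value,
         i.ksppdesc  description
    from x$ksppi  i,
         x$ksppcv  v
   where i.indx = v.indx
     and i.ksppinm like '%db_block_max_scan%';

Depending on the release this shows either _db_block_max_scan_pct or the older _db_block_max_scan_cnt.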
Tanel.
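PS: for the original question of whether a 10 gig db_cache_size actually buys anything, a less hand-wavy check on 9i and up is v$db_cache_advice (it needs the db_cache_advice parameter set to ON so the instance collects the numbers). It estimates physical reads for a range of cache sizes:

  select size_for_estimate          cache_mb,
         buffers_for_estimate       buffers,
         estd_physical_read_factor,
         estd_physical_reads
    from v$db_cache_advice
   where name = 'DEFAULT'
     and block_size = 8192          -- use your db_block_size here
   order by size_for_estimate;

If the estimated physical reads flatten out well below the current cache size, the extra gigabytes are not doing much.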
Received on Tue Aug 26 2003 - 01:49:45 CDT