Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Home -> Community -> Usenet -> c.d.o.server -> Re: Best size for datafiles
Half-terabyte data warehouses tend to grow much bigger, much faster than expected.
Consider going with 2GB files, and by 2GB I don't mean 2000m. Do the
math to work out the exact file size from the number of planned extents
plus the checkpoint block, which Oracle does not add on top of the
requested size. For example, 1 file holding 20 100MB extents =
2048016k including overhead if the tablespace is dictionary-managed (DMT);
if it is locally managed (LMT), go with 2048128k to
allow for the extent bitmap.
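The sizing arithmetic above can be sketched as follows. The 16k (DMT checkpoint/header overhead) and 128k (LMT extent bitmap) figures are the ones quoted in this post; verify them against your own Oracle release before relying on them:

```python
# Datafile sizing sketch: requested size must cover the planned extents
# plus the overhead Oracle does NOT add on top of the request for you.
KB_PER_MB = 1024

def datafile_size_kb(extents, extent_mb, overhead_kb):
    """Exact datafile size in KB for `extents` extents of `extent_mb` MB
    each, plus a per-file overhead (16k quoted for DMT, 128k for LMT)."""
    return extents * extent_mb * KB_PER_MB + overhead_kb

dmt = datafile_size_kb(20, 100, overhead_kb=16)   # dictionary-managed
lmt = datafile_size_kb(20, 100, overhead_kb=128)  # locally managed

print(f"DMT: {dmt}k, LMT: {lmt}k")  # DMT: 2048016k, LMT: 2048128k
```

The point of the exact figure is that a round "2048m" request leaves no room for the overhead block, so the last planned extent no longer fits.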
I work on a 10TB+ DW, and these calculations have served us well for performance, backup and recovery, and overall storage management.
The next thing to consider is nearline storage. Once a data warehouse
passes a TB, you typically have 500 - 800GB of rarely touched data. Save
yourself a few hundred thousand dollars by flipping that data read-only and
adding nearline storage to your environment.
http://www.otg.com/Products/DXDB/
willjamu_at_mindspring.com (James A. Williams) wrote in message news:<3bcd26c9.9000852_at_news.mindspring.com>...
> I am on a Solaris Platform running 8.1.7. I am creating a .5 tb
> datawarehouse.
>
>
> I am going to have a lot of datafiles.
>
> A smaller datafile spread out generally performs better.
>
>
> I like 500 mb but will go with 1000 mb files.
>
>
> What does anyone think? Veritas options are delaylog,
> mincache=direct,convosync=direct and mincache=dsync and
> convosync=direct.
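For reference, the quoted VxFS options would sit on a Solaris mount line roughly like this (the disk group, volume, and mount point are hypothetical):

```shell
# Mount a VxFS filesystem for Oracle datafiles with direct I/O,
# bypassing the filesystem buffer cache (the options quoted above).
mount -F vxfs -o delaylog,mincache=direct,convosync=direct \
    /dev/vx/dsk/oradg/datavol /u02/oradata

# Alternative quoted in the post: mincache=dsync instead of
# mincache=direct, still with convosync=direct.
```

mincache=direct/convosync=direct avoids double-buffering between the Oracle buffer cache and the filesystem cache, which is usually what you want for datafiles; test both variants on your own workload.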
Received on Fri Oct 19 2001 - 14:26:37 CDT