Re: Best size for datafiles
"Howard J. Rogers" <howardjr_at_www.com> wrote in message news:<3bce7882_at_news.iprimus.com.au>...
> Remembering that the data file is the smallest unit of backup (and restore
> until 9i), then 5Gb files sound like a dreadful idea, though with the right
> hardware I can imagine it working.
>
> The balance is: the more files, the more opportunity for I/O distribution
> around different devices, and the more flexibility and ease you have in a
> backup strategy. But also, the more files, the longer checkpoints take.
>
> Given that I take the risk of Instance failure to be low-ish, I tend to
> recommend huge log files that never switch, thus minimising the number of
> checkpoints. And given few checkpoints, I can't see what possible
> nightmares there can be with multiple smaller data files, instead of fewer
> humungous ones.
>
> Go on: enlighten me!
Maybe I made one puzzling statement. When I said 5GB files, I meant files of up to 5GB, and I did make a remark about careful layout planning; it all depends on how the data are stored in the database. My idea is not to divide 0.5TB by 5GB and then use a hundred 5GB files; those numbers are just averages, or maybe upper bounds. These big files should be used only for big objects, and a data warehouse usually has only a few extremely big tables. Those giant tables should be partitioned across a number of huge files. However, every warehouse also contains a large number of smaller tables carrying business logic etc., and these should be treated as common tables in a typical Oracle environment. There is no need to put them in 5GB data files.
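To make the layout idea concrete, here is a minimal sketch in Oracle DDL, assuming a range-partitioned fact table; every name in it (the ts_big_* and ts_small tablespaces, the fact_sales table, the /u0x paths) is hypothetical, and the real sizes and partition bounds would come out of the layout planning I mentioned:

```sql
-- Big tablespaces, each backed by one large (up to ~5GB) datafile,
-- reserved for the few extremely big warehouse tables.
CREATE TABLESPACE ts_big_01
  DATAFILE '/u01/oradata/dwh/ts_big_01_01.dbf' SIZE 4000M;
CREATE TABLESPACE ts_big_02
  DATAFILE '/u02/oradata/dwh/ts_big_02_01.dbf' SIZE 4000M;

-- One giant fact table, partitioned across the big tablespaces.
CREATE TABLE fact_sales (
  sale_date  DATE,
  product_id NUMBER,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_1999 VALUES LESS THAN (TO_DATE('2000-01-01','YYYY-MM-DD'))
    TABLESPACE ts_big_01,
  PARTITION p_2000 VALUES LESS THAN (TO_DATE('2001-01-01','YYYY-MM-DD'))
    TABLESPACE ts_big_02
);

-- The many smaller business-logic tables live in an ordinary
-- tablespace with a modestly sized datafile, as in any Oracle shop.
CREATE TABLESPACE ts_small
  DATAFILE '/u03/oradata/dwh/ts_small_01.dbf' SIZE 500M;
```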
Your remark about huge log files suggests you may not be considering that we have been talking about a DSS system, where maybe 99% of the operations on these tables are selects, so redo logs are not the biggest problem we have.
The problems with thousands of data files lie mainly in management and layout planning. Managing a large data warehouse is always a serious task, and with thousands of data files you will have even more to do. Just keeping those files in order would be a problem.
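Even taking a simple inventory illustrates the point; a query like this one against the standard DBA_DATA_FILES dictionary view (nothing here beyond stock Oracle) is trivial to act on with a hundred files and tedious with thousands:

```sql
-- Count the datafiles per tablespace and sum their sizes;
-- every row is something the DBA must place, back up and monitor.
SELECT   tablespace_name,
         COUNT(*)                     AS file_count,
         ROUND(SUM(bytes)/1024/1024)  AS total_mb
FROM     dba_data_files
GROUP BY tablespace_name
ORDER BY file_count DESC;
```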
You wrote about "I/O distribution around different devices", but you will never have so many devices that spreading a hundred data files among them becomes a problem. Why would you need thousands of files for distribution across different devices, when you typically have only about a hundred such devices in your box and external storage units? It would be a very hard task to buy a drive under 9GB for a big UNIX box now, so a hundred of these drives already gives nearly twice the expected storage (100 x 9GB = 900GB, against the 0.5TB we need to hold). Using more of them would be overkill. Of course I'm talking about effective sizes, not counting mirroring.
That's all for now, my lunch time is coming ...
--
_________________________________________
Dusan Bolek, Ing.
Oracle team leader

Note: pagesflames_at_usa.net has been cancelled due to changes (maybe we can call it an overture to bankruptcy) on that server. I'm still using this email to prevent SPAM. Maybe one day I will change it and have a proper mail even for news, but right now I can be reached by this email.

Received on Thu Oct 18 2001 - 05:33:44 CDT