Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Home -> Community -> Usenet -> c.d.o.server -> Re: A quick question on File size in Unix
When I was having a problem imp'ing a large file in 8i, someone at Oracle
support mentioned there were still rumors of unproven bugs at 2G
limits. My problem turned out to be a problem with NFS (actually,
the biods) when cp'ing files over about 10G. So, sorry for possible
myth-mongering, but there you go. I've also spent a couple of years
with hp-ux and Windows writing ~30G exp files to the same NFS devices
without trouble (I just assume exp generation is slower than cp, so it
didn't trigger the problem).
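For what it's worth, the kind of check I ran boils down to this sketch (paths are made up; point DEST at the NFS mount you suspect). It builds a sparse file just past the 2 GiB mark rather than a full 10G one, cp's it, and compares byte counts; a broken transport shows up as a short or failed copy:

```shell
# Hypothetical paths -- for a real test, DEST should sit on the NFS mount.
SRC=/tmp/bigtest.dat
DEST=/tmp/bigtest_copy.dat

# One 1 MiB block written after a 2048 MiB hole: the file's size is just
# over the 2 GiB boundary without actually writing 2G of data.
dd if=/dev/zero of="$SRC" bs=1048576 count=1 seek=2048 2>/dev/null

cp "$SRC" "$DEST"

src_size=$(wc -c < "$SRC")
dst_size=$(wc -c < "$DEST")
echo "source=$src_size bytes, copy=$dst_size bytes"
if [ "$src_size" -eq "$dst_size" ]; then
    echo "sizes match"
else
    echo "SIZE MISMATCH"
fi

rm -f "$SRC" "$DEST"
```

Scale seek= up toward your real file sizes (10G, 30G) once the small case passes; the sparse trick keeps the disk cost low either way.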
Sybrand's comment about waking Murphy is worth listening to. Perhaps it should be bedding Murphy :-)
Another thing to consider: when I changed my datafiles from one big file plus a bunch of smaller ones to a uniform set of 2G files, RMAN was better able to parallelize the backup (5 hours down to 4).
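The parallelization I mean is just RMAN channels; a minimal sketch, using the 9i-style CONFIGURE syntax (the degree of 4 is an arbitrary pick for illustration, not what I actually ran):

```
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
RMAN> BACKUP DATABASE;
```

With datafiles of roughly equal size, each channel ends up with a similar amount of work, which is why the uniform 2G layout backed up faster than one big file plus stragglers.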
The CREATE DATABASE defaults are different from those in the link you listed if you generate the SQL scripts with the wizard.
Note 119507.1 on metalink explains MAXDATAFILES in detail, including that one way to increase it is to recreate the controlfile. The Database Administrator's Guide mentions you can add datafiles up to the value of the init parameter db_files, and that will expand the controlfile beyond MAXDATAFILES. But of course, you need to bounce the instance to change db_files, so that doesn't get you much over CREATE CONTROLFILE NORESETLOGS.
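A rough sketch of the two knobs involved (database name and the 400 are hypothetical numbers, spfile-style syntax):

```
-- the soft limit: takes effect only after a bounce
ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;

-- the hard limit, per the note: dump the controlfile to trace,
-- edit the script, and recreate with a bigger MAXDATAFILES
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
-- then, in the edited trace script:
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ...
    MAXDATAFILES 400
    ...
```

The elided clauses come from your own trace output; don't build a CREATE CONTROLFILE statement from scratch.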
jg
-- @home.com is bogus. "This workaround should only be used following due consideration to how it may effect the ability to recover and as such should not be made public even after 10G goes production." - From a public metalink problem doc.
Received on Thu Feb 10 2005 - 19:06:46 CST