Re: Large Directory Problem
Well, like all system-level problems, the solution changes with the scale of the problem. The original post didn't really state either of the two important criteria: the file system type (FAT or NTFS) or the expected number of files to be stored.
As Scott said, if Randall is using FAT that would explain the abysmal performance -- he should certainly switch to NTFS, which will solve nearly all of the indexing issues even with very large numbers of files. He should also look into disabling 8.3 name creation, since short-name generation can slow down operations on very large NTFS directories.
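For reference, and assuming he's on NT 4.0, the in-place conversion and the short-name switch look roughly like this (the registry path is from memory, so verify it before editing, and note the change only affects files created after the next reboot):

    C:\> convert D: /fs:ntfs

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
        NtfsDisable8dot3NameCreation  (REG_DWORD) = 1

Existing files keep whatever short names they already have, so for a directory that is already huge it may be worth recreating it after the change.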
--
Tim Hill
Windows NT MVP

Scott L. Holmes wrote in message <#Q0sVm979GA.233_at_uppssnewspub05.moswest.msn.net>...
>Michael D. Long wrote in message <01bdefb2$23d942a0$020aa8c0_at_hammer>...
>>Perhaps you should perform a cost benefit analysis of storing the
>>BLOB data on RAID or DASD vs. some type of optical storage.
>>The cost of primary storage is prohibitive when the database grows
>>above 100Gb.
>
>Oh. well yeah. I see your point. Never worked with anything over a few
>hundred meg.
>
>Cheerio.
>--
>Scott L. Holmes
>
>Neutrino Mass, it's what keeps me from becoming completely unstable
>
>