Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Re: storing a million of small files Oracle (or other db) vs. File System
"Connor McDonald" <connor_mcdonald_at_yahoo.com> wrote in message
news:9hb0r0$73p$1_at_news.chatlink.com...
> I would say that as a rough guide, file systems are great at looking
> after a small number of large files, and databases are great at looking
> after a large number of small files...
>
> hth
> --
Not necessarily true. Databases are great at looking after _records_, but over the years databases have had major performance problems looking after binary data.
I developed a large system a few years ago that maintained many hundreds of thousands of files on a UNIX fs, and the performance was very good! We assigned each document a number, and then had a function that turned the number into a path like /docs/a/b/f/e/d/e/s/201110.pdf -- that is, an n-level tree with each level having 26 children. This way you get past the linear directory search problem, and everything works fine.
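A minimal sketch of such a number-to-path function, assuming a base-26 encoding of the document id (the poster's actual function isn't shown, so the name `doc_path` and the digit-to-letter scheme here are illustrative assumptions):

```python
import os

def doc_path(doc_id, root="/docs", levels=7):
    """Map a numeric document id to a nested directory path.

    Each level encodes one base-26 digit of the id as a letter
    'a'..'z', so no directory ever holds more than 26 subdirectories
    and linear directory scans stay cheap.
    """
    n = doc_id
    parts = []
    for _ in range(levels):
        n, digit = divmod(n, 26)
        parts.append(chr(ord("a") + digit))
    return os.path.join(root, *parts, "%d.pdf" % doc_id)

# Example: doc_path(201110) yields a 7-level path ending in 201110.pdf
```

The same fan-out limit could be achieved with any stable hash of the id; base-26 digits are just the simplest scheme that matches the single-letter directory names in the example path.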
YMMV, but this worked great for me...
Cheers
James

Received on Sat Jul 21 2001 - 19:12:24 CDT