
Re: Choosing data file size for a multi TB database?

From: Ranko Mosic <ranko.mosic_at_gmail.com>
Date: Fri, 2 Sep 2005 20:58:36 -0400
Message-ID: <367369f105090217586502d9a1@mail.gmail.com>


We have multi-GB files here, no problem with them. How did they come up with the 1 TB and 10 TB figures?

 On 9/2/05, Kevin Closson <kevinc_at_polyserve.com> wrote:
>
> You'd have to look far and wide for a filesystem that still
> imposes that lock on writes when direct I/O is used.
> VxFS, PolyServe, Sol UFS (forcedirectio mount),
> and most other legacy Unix derivations pulled that
> lock for direct I/O use cases long, long ago, in the mid-90's.
> Heck, I remember measuring the goodness of that
> fix on Sequent Dynix with Oracle 6.0.27 back in
> 1990 :-)
>
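A minimal sketch of what that direct I/O path looks like from user space, assuming Linux and Python; the filename and block size are purely illustrative. O_DIRECT is the flag under which these filesystems bypass the page cache, and with it the per-file single-writer lock:

    # Illustrative only: open a file for direct I/O on Linux.
    # O_DIRECT bypasses the page cache, which is the mode under which
    # VxFS, UFS (forcedirectio), and friends drop the single-writer lock.
    import mmap
    import os

    BLOCK = 4096  # O_DIRECT needs block-aligned buffers, offsets, lengths

    fd = os.open("datafile.dbf", os.O_RDWR | os.O_CREAT | os.O_DIRECT, 0o640)
    buf = mmap.mmap(-1, BLOCK)        # anonymous mmap is page-aligned
    buf.write(b"\x00" * BLOCK)
    os.pwrite(fd, buf, 0)             # aligned write, no page cache involved
    buf.close()
    os.close(fd)

Concurrent writers to such a file then serialize only at whatever granularity the filesystem's direct I/O path imposes, not on a single per-file inode lock.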
> ------------------------------
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Hameed, Amir
> Sent: Friday, September 02, 2005 4:48 PM
> To: oracle-l_at_freelists.org
> Subject: RE: Choosing data file size for a multi TB database?
>
> On very large data files on a buffered filesystem, wouldn't the
> single-writer lock cause foreground processes (that are trying to read
> data) to wait while DBWR is checkpointing?
> ------------------------------
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Branimir Petrovic
> Sent: Friday, September 02, 2005 6:54 PM
> To: oracle-l_at_freelists.org
> Subject: RE: Choosing data file size for a multi TB database?
>
> What about checkpointing against tens of thousands of data files; surely
> the more-the-merrier rule does not hold there? For that reason (or due to
> a fear factor) I was under the, maybe false, impression that a smaller
> number (in the hundreds) of relatively larger data files (20 GB or so)
> might be the better choice.
> The other very real problem with a 10 TB database that I can easily
> foresee, but for which I do not know the proper solution, is how one would
> go about the business of regularly verifying taped backup sets. Have
> another humongous set of hardware just for that purpose? Fully trust the
> rust? (i.e. examine backup logs and never try restoring, or...) What do
> people do to ensure multi-TB monster databases are surely and truly safe
> and restorable/rebuildable?
> Branimir
>
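One common answer to the verification question, sketched below under stated assumptions: rather than test-restoring the whole 10 TB, record per-piece checksums at backup time and periodically re-read the media to compare. The manifest format, file names, and paths here are hypothetical; this illustrates the generic technique, not any particular backup tool's mechanism:

    # Hypothetical sketch: verify backup pieces against checksums recorded
    # at backup time. Each 'manifest.txt' line looks like: "<sha256>  <path>".
    import hashlib
    import sys

    def sha256_of(path, chunk=1024 * 1024):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    bad = 0
    with open("manifest.txt") as manifest:
        for line in manifest:
            want, path = line.split(maxsplit=1)
            path = path.strip()
            if sha256_of(path) != want:
                print(f"MISMATCH: {path}")
                bad += 1
    sys.exit(1 if bad else 0)

This proves the bits on tape are still the bits that were written; only a periodic test restore proves the database is actually rebuildable from them.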
> -----Original Message-----
> From: Tim Gorman [mailto:tim_at_evdbt.com]
> Sent: Friday, September 02, 2005 5:59 PM
> To: oracle-l_at_freelists.org
> Subject: Re: Choosing data file size for a multi TB database?
>
> Datafile sizing has the greatest regular impact on backups and restores.
> Given a large multi-processor server with 16 tape drives available, which
> would do a full backup or full restore fastest?
>
>
> - a 10-Tbyte database comprised of two 5-Tbyte datafiles
> - a 10-Tbyte database comprised of ten 1-Tbyte datafiles
> - a 10-Tbyte database comprised of two-hundred 50-Gbyte datafiles
> - a 10-Tbyte database comprised of two-thousand 5-Gbyte datafiles
>
>
> Be sure to consider what type of backup media you are using, how much
> concurrency you will be using, and the throughput of each device.
>
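A back-of-envelope sketch of that trade-off, assuming (purely for illustration) 16 drives at roughly 100 MB/s each, with any one datafile streamed by a single drive at a time:

    # Illustrative arithmetic: wall-clock time to stream 10 TB of datafiles
    # through 16 tape drives, where one drive handles one file at a time.
    import math

    DRIVES = 16
    RATE_MB_S = 100  # assumed per-drive throughput, for illustration only

    options = {
        "2 x 5 TB":    (2,    5 * 1024 * 1024),
        "10 x 1 TB":   (10,   1024 * 1024),
        "200 x 50 GB": (200,  50 * 1024),
        "2000 x 5 GB": (2000, 5 * 1024),
    }

    for name, (count, size_mb) in options.items():
        streams = min(count, DRIVES)
        # rounds of files per drive, times the time to stream one file
        hours = math.ceil(count / streams) * size_mb / RATE_MB_S / 3600
        print(f"{name}: ~{hours:.1f} h using {streams} drive(s)")

Under those assumptions, the two-file layout can keep at most 2 of the 16 drives busy (roughly 15 hours), while 200 or 2,000 smaller files keep all 16 streaming (under 2 hours); once the file count comfortably exceeds the drive count, further splitting buys little.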
> There is nothing "unmanageable" about hundreds or thousands of datafiles;
> I don't know why that's cited as a concern. Oracle 8.0 and above has a
> limit of 65,533 datafiles per database, but otherwise large numbers
> of files are not something to be concerned about. Heck, the average
> distribution of a Java-based application is comprised of 42 million
> directories and files and nobody ever worries about it...
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Sep 02 2005 - 20:00:39 CDT
