Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Large log files

From: Yaroslav Perventsev <p_yaroslav_at_cnt.ru>
Date: Thu, 15 Mar 2001 15:01:46 +0300
Message-ID: <98qahg$em3$1@news247.cnt.ru>

"Howard J. Rogers" <howardjr_at_www.com> wrote the following in the newsgroup: news:3ab086e0_at_news.iprimus.com.au...
>
> "Yaroslav Perventsev" <p_yaroslav_at_cnt.ru> wrote in message
> news:98pt9i$dra$1_at_news247.cnt.ru...
> > Hello!
> >
> > 1. I think large log files are a very bad idea, because if the database
> > fails you may lose a large amount of data.
>

> Untrue. The size of the logs is irrelevant - provided they are mirrored,
> such that loss of the current redo log is insured against. No data will
> ever be lost in Oracle, provided all redo is available. Mirroring helps
> prevent the loss of redo.
>
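[The mirroring (multiplexing) Howard describes is configured per redo log group. A minimal sketch - the group number, size, and /u01 and /u02 paths are only examples:]

```sql
-- Each redo log group gets one member on each disk; losing one disk
-- leaves the other member intact, so no committed redo is lost.
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/PROD/redo04a.log',
   '/u02/oradata/PROD/redo04b.log') SIZE 100M;
```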

What if a disk crash happens?

> The problem with a large redo log is (as the original poster stated) simply
> that INSTANCE recovery takes longer than it otherwise would do. The time
> taken to perform media recovery is unaffected.
>
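[Instance recovery time with large logs can be bounded by checkpointing more often, so that crash recovery only ever has a limited amount of redo to replay. An illustrative init.ora fragment - the values here are examples, not recommendations:]

```
# Checkpoint often enough that instance recovery replays a bounded
# amount of redo, regardless of how big the online logs are.
LOG_CHECKPOINT_TIMEOUT  = 1800       # max seconds between checkpoints
LOG_CHECKPOINT_INTERVAL = 100000     # OS blocks of redo between checkpoints
```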
> >Also archive process will consume huge resource
> > during archiving.
>
> Potentially. But that's what having up to 32 redo log groups is designed
> for. It's also why 8.1+ allows you to create up to 10 archiver processes.
> In any event, ARCH will "consume" no more "resources" doing 10 copies of 1Mb
> redo logs than it would copying a single 10Mb file (minimal issues about
> disk head movement excluded).
>
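[The multiple archiver processes Howard mentions are enabled with a single init.ora parameter from 8.1 onwards. A sketch - the count and destination path are only examples:]

```
LOG_ARCHIVE_MAX_PROCESSES = 4             # up to 10 ARCn processes on 8i
LOG_ARCHIVE_START         = true          # start ARCn automatically
LOG_ARCHIVE_DEST_1        = 'location=/arch1'
```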
> Regards
> HJR
>
>
> > 2. Maybe you can reduce activity on the tables by specifying NOLOGGING,
> > etc. If you load a large amount of data with SQL*Loader, use the
> > UNRECOVERABLE option. In one word, try to minimise the load on the redo
> > logs during batch jobs, and back up the database more frequently.
> >
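[As a sketch of the UNRECOVERABLE suggestion above (table, column, and file names are hypothetical), a direct-path SQL*Loader run skips redo generation for the loaded data when the control file starts with UNRECOVERABLE:]

```
-- load.ctl: direct-path load that generates no redo for the data
UNRECOVERABLE
LOAD DATA
INFILE 'batch.dat'
APPEND INTO TABLE sales
FIELDS TERMINATED BY ','
(order_id, amount)

-- invoked with:  sqlldr userid=scott/tiger control=load.ctl direct=true
-- NOTE: back up the affected datafiles afterwards - unrecoverable data
-- cannot be rebuilt by media recovery.
```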
> > Best regards.
> > Yaroslav
> >
> > "Andrew Mobbs" <andrewm_at_chiark.greenend.org.uk> wrote the following in
> > the newsgroup: news:yCA*rl-Qo_at_news.chiark.greenend.org.uk...
> > > I work on the performance of an application, and don't have much
> > > experience of production DBA activities.
> > >
> > > However, I'd like to know what the management implications are of very
> > > large redo log files. The application on a large system generates redo
> > > at rates of well over 10MB/s.
> > >
> > > Specifically:
> > >
> > > At these rates, does the 30 minute between log switches rule of thumb
> > > still hold? i.e. should I be recommending redo log files of the order
> > > of several tens of gigabytes?
> > >
> > > How would a DBA go about archiving log files that can be generated at
> > > rates greater than can be easily streamed onto a single tape?
> > >
> > > I assume that recovery would be a problem (as in take geological time)
> > > with large redo logs. Is there any way of improving this, or
> > > minimising it (other than avoiding unplanned outages)?
> > >
> > > In general, what other problems would be posed by such large log files?
> > >
> > > Thanks,
> > >
> > > --
> > > Andrew Mobbs - http://www.chiark.greenend.org.uk/~andrewm/
> >
> >
>
>
Received on Thu Mar 15 2001 - 06:01:46 CST

