Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Re: Large log files
"Howard J. Rogers" <howardjr_at_www.com> wrote in message
news:3ab086e0_at_news.iprimus.com.au...
>
> "Yaroslav Perventsev" <p_yaroslav_at_cnt.ru> wrote in message
> news:98pt9i$dra$1_at_news247.cnt.ru...
> > Hello!
> >
> > 1. I think large log files are a very bad idea, because if the database
> > fails you may lose a large amount of data.
>
> Untrue. The size of the logs is irrelevant, provided they are mirrored,
> such that loss of the current redo log is insured against. No data will
> ever be lost in Oracle, provided all redo is available. Mirroring helps
> prevent the loss of redo.
>
> The problem with a large redo log is (as the original poster stated)
> simply that INSTANCE recovery takes longer than it otherwise would do.
> The time taken to perform media recovery is unaffected.
>
> > Also, the archive process will consume huge resources
> > during archiving.
>
> Potentially. But that's what having up to 32 redo log groups is designed
> for. It's also why 8.1+ allows you to create up to 10 archiver processes.
> In any event, ARCH will "consume" no more "resources" doing 10 copies of
> 1Mb redo logs than it would copying a single 10Mb file (minimal issues
> about disk head movement excluded).
For example:
1. You have 10Mb redo logs. When a switch occurs, the archiver does its work
for the appropriate time and then sleeps while awaiting the next switch.
2. If you have 10Gb redo logs, then when a switch occurs the archiver works
for a LONG time, and then sleeps for a long time too while awaiting the next
switch!
In the first case you have more or less uniform loading. In the second case,
by contrast, your disk loading will spike from time to time.
That's my assumption.
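That burstiness argument can be sketched numerically. This is a toy model, not actual Oracle behaviour: the archiver copy rate and total redo volume below are assumed round numbers, and I/O contention, multiple ARCH processes, and log group cycling are all ignored.

```python
# Toy model: the same total redo archived with small vs. large redo logs.
# ARCHIVER_RATE_MB_S and TOTAL_REDO_MB are assumptions for illustration.

ARCHIVER_RATE_MB_S = 10.0        # assumed sustained copy throughput
TOTAL_REDO_MB = 10_000.0         # same total redo in both scenarios

def burst_profile(log_size_mb):
    """Return (number_of_bursts, seconds_per_burst) for one log size."""
    n_switches = TOTAL_REDO_MB / log_size_mb
    burst_s = log_size_mb / ARCHIVER_RATE_MB_S
    return n_switches, burst_s

small_n, small_burst = burst_profile(10.0)       # many short bursts
large_n, large_burst = burst_profile(10_000.0)   # one very long burst

# Total archiver busy time is identical; only the burstiness differs,
# which is Howard's point about "resources" and Yaroslav's about spikes.
assert small_n * small_burst == large_n * large_burst
```

Under these assumptions, 10Mb logs give a thousand one-second bursts, while a single 10Gb log gives one thousand-second burst: the same work, distributed very differently over time.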
>
> Regards
> HJR
>
>
> > 2. Maybe you can reduce the redo activity on tables by specifying
> > NOLOGGING etc. If you load a large amount of data with sqlldr, use the
> > unrecoverable option.
> > In short, try to minimise the load on the redo logs during batch jobs,
> > and back up the database more frequently.
> >
> > Best regards.
> > Yaroslav
> >
> > "Andrew Mobbs" <andrewm_at_chiark.greenend.org.uk> wrote in message
> > news:yCA*rl-Qo_at_news.chiark.greenend.org.uk...
> > > I work on the performance of an application, and don't have much
> > > experience of production DBA activities.
> > >
> > > However, I'd like to know what the management implications are of very
> > > large redo log files. The application on a large system generates redo
> > > at rates of well over 10MB/s.
> > >
> > > Specifically:
> > >
> > > At these rates, does the 30-minutes-between-log-switches rule of thumb
> > > still hold? I.e., should I be recommending redo log files of the order
> > > of several tens of gigabytes?
> > >
> > > How would a DBA go about archiving log files that can be generated at
> > > rates greater than can be easily streamed onto a single tape?
> > >
> > > I assume that recovery would be a problem (as in take geological time)
> > > with large redo logs. Is there any way of improving this, or
> > > minimising it (other than avoiding unplanned outages)?
> > >
> > > In general, what other problems would be posed by such large log
> > > files?
> > >
> > > Thanks,
> > >
> > > --
> > > Andrew Mobbs - http://www.chiark.greenend.org.uk/~andrewm/
> >
> >
>
>
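Andrew's sizing question answers itself with quick arithmetic: at a sustained redo rate, the desired switch interval fixes the log file size. The figures below are the ones from his post; the redo rate is assumed to be sustained rather than peak.

```python
# Back-of-envelope redo log sizing for a switch-interval rule of thumb.
# Numbers taken from the post: 10 MB/s redo, 30 minutes between switches.

redo_rate_mb_per_s = 10
switch_interval_s = 30 * 60              # 30 minutes

log_size_mb = redo_rate_mb_per_s * switch_interval_s
print(log_size_mb / 1024)                # ~17.6 GB: "tens of gigabytes"
```

So at 10MB/s the 30-minute rule does indeed imply logs in the tens-of-gigabytes range, which is exactly what makes the instance-recovery and archiving questions above pressing.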
Received on Thu Mar 15 2001 - 05:59:40 CST