Re: Large log files
Andrew Mobbs wrote:
>
> Yaroslav Perventsev <p_yaroslav_at_cnt.ru> wrote:
> >
> >"Howard J. Rogers" <howardjr_at_www.com> wrote:
> >>
> >> "Yaroslav Perventsev" <p_yaroslav_at_cnt.ru> wrote in message
> >> news:98pt9i$dra$1_at_news247.cnt.ru...
> >>
> >> >Also archive process will consume huge resource
> >> > during archiving.
> >>
> >> Potentially. But that's what having up to 32 redo log groups is designed
> >> for. It's also why 8.1+ allows you to create up to 10 archiver processes.
> >> In any event, ARCH will "consume" no more "resources" doing 10 copies of
> >> 1Mb redo logs than it would copying a single 10Mb file (minimal issues
> >> about disk head movement excluded).
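> >>
> >> For instance, something along these lines (paths invented, adjust to suit):
> >>
> >>   -- 8.1+: run up to 10 archiver processes
> >>   ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;
> >>
> >>   -- add another small group so ARCH always has a free log to work on
> >>   ALTER DATABASE ADD LOGFILE GROUP 5
> >>     ('/u01/oradata/prod/redo05a.log',
> >>      '/u02/oradata/prod/redo05b.log') SIZE 10M;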
> >
> >For example:
> >1. You have 10Mb redo logs. When switch occurs archiver do it work during
> >appropriate time and after that sleep awaiting next!
> >2. If you have 10Gb redo logs.When switch occurs archiver do it work for a
> >LONG time after that sleep awaiting next LOng time too!
> >In first case you have more or less uniform loading.
> >Opposite, In second case your disk loading will jump from time to time.
> >
> >It's my assumtion.
>
> You're underestimating the workload that I'm trying to understand how to
> manage. 10MB redo log files would be filled in less than one second. That
> is completely impractical. Using 10GB files, the "long" time between log
> switches you refer to could be around 15 minutes.
>
> Getting the archiver to stream 10+ MB/s isn't a problem, especially as
> these days many OSes allow you to bypass buffer cache. I agree that a
> predictable workload is desirable, but this just means that ARCH should
> be tuned to keep pace with the average rate of the LGWR, and there should
> be enough log groups to absorb the peaks.
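>
> To gauge the average rate, something like this against v$log_history does
> the job (off the top of my head):
>
>   -- log switches per hour; size groups to absorb the busiest hours
>   SELECT TRUNC(first_time, 'HH24') hr, COUNT(*) switches
>   FROM   v$log_history
>   GROUP  BY TRUNC(first_time, 'HH24')
>   ORDER  BY 1;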
>
> Disk failure is quite easily covered by having one or more mirrors to
> the redo logs.
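>
> That is, multiplex each group across disks, e.g. (path invented):
>
>   ALTER DATABASE ADD LOGFILE MEMBER
>     '/u03/oradata/prod/redo01b.log' TO GROUP 1;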
>
> >>
> >> > 2. Maybe you can reduce activity on the tables by specifying nologging
> >> > etc. If you load a large amount of data with sqlldr, use the
> >> > unrecoverable option. In one word, try to minimise the load on the redo
> >> > logs during batch jobs, and back the database up more frequently.
>
> Nologging is an interesting idea; however, it would be difficult to implement,
> and impossible in certain situations. I haven't done detailed estimates,
> but I think that even with frequent backups, and in a situation where
> the data are still available for re-processing, it would unacceptably
> decrease mean system availability.
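>
> For reference, the sort of thing being suggested is (table names made up):
>
>   -- direct-path load into a NOLOGGING table generates minimal redo,
>   -- but the data is unrecoverable until the next backup
>   ALTER TABLE batch_stage NOLOGGING;
>   INSERT /*+ APPEND */ INTO batch_stage SELECT * FROM staging_feed;
>
>   -- sqlldr equivalent: direct path plus UNRECOVERABLE in the control file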
>
> --
> Andrew Mobbs - http://www.chiark.greenend.org.uk/~andrewm/
The only drawback I've ever seen with large log files is related to standby databases, namely that redo only gets shipped across at a log switch, so very big logs can leave the standby a long way behind.
Other than that, I'm a fan of 'bigger is better' - fewer checkpoints especially.
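
If the standby lag ever becomes a problem, you can cap it by forcing a switch on a timer (cron, dbms_job), e.g.:

   -- archive (and hence ship) the current log regardless of how full it is
   ALTER SYSTEM ARCHIVE LOG CURRENT;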
hth
connor
--
===========================================
Connor McDonald
http://www.oracledba.co.uk
(mirrored at http://www.oradba.freeserve.co.uk)
"Some days you're the pigeon, some days you're the statue"

Received on Fri Mar 16 2001 - 05:00:07 CST