Re: What is the size of arc files
Hans Forbrich <news.hans_at_telus.net> wrote in message news:<FLw8d.27310$N%.13971_at_edtnps84>...
> Howard J. Rogers wrote:
>
> > Hans Forbrich wrote:
> >
> >> U C wrote:
> >>
> >>> Hi All,
> >>> I would like to know a few things about different log files:
> >>>
> >>> 1) What is the size of an arc file? When does it create new files, and
> >>> what is the maximum size that it can have?
> >>
> >> Assuming you mean an archived log file, it's simply a copy of a log file
> >> set aside by the archiver process to be available for database recovery
> >> if needed. When the database is running with archiving enabled and
> >> active, each log switch should not be considered complete until the log
> >> file is archived.
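A minimal sketch, assuming you can query the v$ views: one quick way to check
that archiving really is enabled and the archiver is running:

    SQL> archive log list
    SQL> select log_mode from v$database;    -- ARCHIVELOG or NOARCHIVELOG
    SQL> select archiver from v$instance;    -- STARTED or STOPPED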
> >>
> >> Depending on your needs (and assuming you are using archiving), you might
> >> decide to keep all the archived log files online or near line.
> >>
> >> If archiving is not used, there are no archive files. Otherwise each is
> >> the size of the log file it archives, and you have as many online and
> >> nearline as are needed for your recovery strategy.
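A rough sketch of checking that, again assuming access to the v$ views:
compare the online log size (bytes) with the archived copies (blocks times
block size):

    SQL> select group#, bytes from v$log;
    SQL> select sequence#, blocks * block_size bytes, completion_time
         from v$archived_log
         where completion_time > sysdate - 1
         order by sequence#;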
> >>
> >> One way to look at log sizes: the size of the log file is the unit of
> >> unrecoverable transactions - if your log file is sized for a switch at 10
> >> minutes, the worst case scenario is that you will lose 10 minutes of
> >> transactions (assuming all log files die or become corrupt & you are
> >> actively archiving).
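To see how a given size works out in practice, a sketch (assuming
v$log_history is available): count the log switches per hour, and on 9i and
up you can also cap the gap between switches with archive_lag_target:

    SQL> select to_char(first_time, 'YYYY-MM-DD HH24') hr, count(*) switches
         from v$log_history
         group by to_char(first_time, 'YYYY-MM-DD HH24')
         order by 1;
    SQL> alter system set archive_lag_target = 600;  -- switch at least every 10 minutes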
> >
> > That is not actually true. It will mean, for sure, that only the last 10
> > minutes of dirtied buffers will have to be re-constructed, since the dirty
> > buffers extant at the time of the last log switch will have been
> > checkpointed to the data files and can thus be read whole and entire from
> > there. But that's not the same as saying you will only lose 10 minutes of
> > transactions. If I start a transaction at 9.00am, and it only gets around
> > to offering me the chance to commit at 11.00am, then a loss of the current
> > redo log at 10.59am would cause me to lose the entire 1 hour 59 minutes
> > worth of transaction, even though the last log switch took place at
> > 10.58am. "Recovery" in that scenario will consist of applying peanut-sized
> > quantities of redo, and then performing massive quantities of rollback.
> >
> > We need to be careful to make the distinction between 'applying redo'
> > (which frequent checkpointing and hence frequent log switches will
> > certainly limit the need for) and 'recovering transactions', which is a
> > different matter entirely.
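A sketch of watching for that kind of exposure, assuming you can see
v$transaction and v$session: every row here is an open, uncommitted
transaction whose work is lost outright if something dies before the commit:

    SQL> select s.sid, s.username, t.start_time, t.used_urec
         from v$transaction t, v$session s
         where s.taddr = t.addr
         order by t.used_urec desc;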
> >
>
> True.
>
> As an observation ... in the world I've experienced, the longer transactions
> tend to be repeatable/restartable batch or background runs, whereas the
> OLTP sessions requiring paper to be set aside tend to be very rapid screen
> entries (or machine scans) with a commit after each. Total loss tends to
> occur for the data entry, not the batch.
My world is full of OLTP transactions that should be long but are broken up
into shorter ones, which are then a PITA to fix even when it's just a client
dying, let alone db recovery. The World of Applications That Can Run On
Multiple Databases And Have Screwy Ideas Of What Transactions Are. So yes,
the loss tends to be data entry, but manually reintegrating transactions is
still necessary.
>
> I will modify my future discussions to include your extremely valid points.
>
> /Hans
jg
--
@home.com is bogus.  H1-B?  I hear'd you was dead!
http://www.signonsandiego.com/uniontrib/20041005/news_1b5h1b.html