Huge generation of archivelog: how to tweak that ?

From: Xavier Maillard <xma_at_gnu.org>
Date: Tue, 04 Aug 2009 07:11:54 +0200
Message-ID: <873a88f9id.wl%xma_at_gnu.org>



Hi,

we are creating physical standby databases for several databases (9i). All in all it works perfectly, except for one thing: for one of them, archivelog generation is totally out of control.

We have multiplexed archivelog destinations sized at 8 GB (which, based on our estimates, was unlikely to be reached in less than one full production day). Today, these 8 GB are filled in just *one* hour; to be more precise, this happens whenever at least one program runs: a purge.

Today, this is a no-go for our whole Data Guard platform, since every hour we must manually delete archive logs so the purge can finish correctly (thus breaking our standby database).

What I am trying to figure out is this:

  • why do we hit such a rate of archivelog production?
  • what exactly is stored in an archived redo log?
  • how can we significantly reduce this archive log generation?
  • what could be done so as not to break our standby database?
  • is there a "best practice" our developers should follow when coding their purge system (number of commits, commit frequency, DDL to avoid using, ...)?
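(For the first question, a query along these lines may help quantify the problem; this is only a sketch against the standard `v$archived_log` view, which exists in 9i — column names and the retention window are assumptions to adjust for your environment:)

```sql
-- Sketch: archive log generation per hour over the last day.
-- SUM(blocks * block_size) approximates the archived redo volume.
SELECT TRUNC(completion_time, 'HH24')                      AS hour,
       COUNT(*)                                            AS logs,
       ROUND(SUM(blocks * block_size) / 1024 / 1024)       AS mb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
GROUP  BY TRUNC(completion_time, 'HH24')
ORDER  BY 1;
```

Running this before, during, and after the purge window should show whether the purge alone accounts for the 8 GB spike.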

I googled hard but found nothing. Any help would be greatly appreciated here!

Thank you in advance.

Xavier

Received on Tue Aug 04 2009 - 00:11:54 CDT