Re: Best practice for Dataguard in 10g?

From: Nuno Souto <dbvision_at_iinet.net.au>
Date: Mon, 07 Sep 2009 18:17:41 +1000
Message-ID: <4AA4C1A5.3000308_at_iinet.net.au>



Vishal Gupta wrote,on my timestamp of 6/09/2009 10:45 PM:

> LGWR for redo log shipping. If at any point in time there is more redo
> generation than your network can cope with, log shipping will switch
> from LGWR to ARCH automatically. And it will revert to LGWR
> automatically once it catches up with the latest redo generation.

That's excellent news indeed. I wasn't aware that ARCH would kick in automatically.
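For anyone following along, a minimal sketch of the kind of 10g transport setting being discussed (LGWR ASYNC, with ARCH resolving any gaps when the network falls behind) might look like the following. The service and DB_UNIQUE_NAME values are made up for illustration:

```sql
-- On the primary: ship redo with LGWR asynchronously. If the network
-- cannot keep up, gap resolution via ARCH catches the standby up,
-- after which LGWR transport resumes.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_dw LGWR ASYNC NOAFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=standby_dw' SCOPE=BOTH;

ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;
```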

> But if you are generating 500GB/day, or 355MB/min on average, then
> depending on your redo log size and log switching frequency, using ARCH
> will also not harm you. It depends on how much data you are willing to
> lose in case of a primary DW failure, and on when you can regenerate
> missed/lost data from your OLTP.

We generate 500GB/day but it's peaky, not constant. I.e.: there are peaks of redo generation where we switch 1GB redo logs in less than a minute, then it slows down to a much more sedate rate of one switch every 15 minutes or so, with even longer idle periods.
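As a quick sanity check on those numbers (figures are the ones quoted above, nothing more):

```python
# Average vs. peak redo rates for the workload described above.
MB_PER_GB = 1024

# 500GB/day average
avg_mb_per_min = 500 * MB_PER_GB / (24 * 60)   # ~355.6 MB/min
avg_mb_per_sec = avg_mb_per_min / 60           # ~5.9 MB/s

# Peak: one 1GB redo log switch per minute
peak_mb_per_sec = 1 * MB_PER_GB / 60           # ~17.1 MB/s

print(round(avg_mb_per_min, 1),
      round(avg_mb_per_sec, 1),
      round(peak_mb_per_sec, 1))
```

So the peaks run roughly three times the daily average, which is exactly why an ASYNC transport that can fall back to ARCH matters here.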

> There will be *some* data loss even when using LGWR unless you use the
> SYNC and AFFIRM options, which you would not want to use: they can hang
> the primary database during a network failure that prevents redo log
> transfer.

Exactly. We can easily afford some data loss. The ETL bursts come from external sources, with a staging area for the transferred data. And all ETL operations are atomic and logged/serialized by the application, so we can always reconstruct from a given interruption point, and identifying that point from the application log is a piece of cake.
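For contrast, the zero-data-loss variant being ruled out here would look roughly like this (again with illustrative names); the AFFIRM keyword makes LGWR wait for the standby's disk-write acknowledgement, which is precisely what can stall the primary when the network breaks:

```sql
-- Zero-data-loss style transport (SYNC + AFFIRM): commits on the
-- primary wait for the standby to confirm the redo is on disk, so a
-- slow or failed network holds up the primary database.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_dw LGWR SYNC AFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=standby_dw' SCOPE=BOTH;
```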

Thanks heaps for the feedback, Vishal.
Much appreciated.

-- 
Cheers
Nuno Souto
in sunny Sydney, Australia
dbvision_at_iinet.net.au
--
http://www.freelists.org/webpage/oracle-l
Received on Mon Sep 07 2009 - 03:17:41 CDT
