Oracle-L -> Re: Terabytes of archived redo logs
Lou,
Online backups and no downtime are not the issue. The first question is why there is so much redo in the first place. You could examine the SQL and see whether something could be changed. Also, you say the volume is spread over five databases, which makes things a little more reasonable.
As for the backup, does your disk hardware offer something like a snapshot capability, where you could take a snapshot of the data, mount the snapshot on a backup server, and perform an RMAN backup there?
Dennis Williams
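The "near-continuous shell cron" idea Lou floats below might look something like this minimal sketch. The paths, the `.arc` suffix, and the five-minute settle threshold are illustrative assumptions, not anything from Lou's actual configuration; the point is only to compress settled logs and move them to a staging area that a storage server can pull from (or that rsync can push over the dedicated link).

```shell
#!/bin/sh
# Sketch of a cron-driven archived-log shipper. All paths and thresholds
# here are hypothetical assumptions for illustration.

ship_archived_logs() {
    arch_dir=$1       # assumed archive destination directory
    stage_dir=$2      # assumed staging area (e.g. a storage-server mount)
    min_age=${3:-5}   # minutes; skip logs Oracle may still be writing

    # Compress each settled log (redo usually compresses well, easing the
    # tape/network load), then move it to the staging area.
    find "$arch_dir" -name '*.arc' -mmin +"$min_age" | while read -r f; do
        gzip "$f" && mv "$f.gz" "$stage_dir"/
    done
}
```

Run from cron every few minutes, e.g. `*/5 * * * * /usr/local/bin/ship_logs.sh /u01/arch /stage` (paths hypothetical). A real version would also need to verify the copy before anything deletes the staged logs, since they are the only recovery path under online backups.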
On 5/3/07, Lou Avrami <avramil_at_concentric.net> wrote:
>
> Hi folks,
>
> I'm about to inherit an interesting project - a group of five 9.2.0.6 databases that produce approximately 2 terabytes (!) of archived redo log
> files per day.
>
> Apparently the vendor configured the HP ServiceGuard clusters in such a
> way that it takes over an hour to shut down all of the packages in order to
> shut down the database. This amount of downtime supposedly can't be
> supported, so they decided to go with online backups and no downtime.
>
> Does anyone out there have any suggestions on handling 400 gig of archived
> redo log files per day? I was thinking of either a near-continuous RMAN
> job or shell cron that would write the logs to either tape or a storage
> server. Actually, I think that our tape library might be overwhelmed also
> by the constant write activity. My thinking right now is a storage server
> and utilizing a dedicated fast network connection to push the logs
> over. Storage though might be an issue.
>
> If anyone has any thoughts or suggestions, they would be appreciated.
> BTW, I already had the bright idea of NOARCHIVELOG mode and cold
> backups. :)
>
> Thanks,
> Lou Avrami
> --
> http://www.freelists.org/webpage/oracle-l
Received on Thu May 03 2007 - 07:08:33 CDT
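As a back-of-envelope check on the storage-server idea, the 2 terabytes per day from Lou's note can be turned into a retention estimate. The 4:1 compression ratio and the 10 TB staging capacity below are assumptions for illustration only:

```shell
#!/bin/sh
# Rough retention sizing. DAILY_GB comes from the thread (~2 TB/day);
# RATIO and STORE_GB are hypothetical assumptions.
DAILY_GB=2048      # ~2 TB of archived redo per day, across the 5 databases
RATIO=4            # assumed gzip compression ratio for redo
STORE_GB=10240     # hypothetical 10 TB staging server

echo "$(( STORE_GB / (DAILY_GB / RATIO) )) days of retention"
# prints "20 days of retention"
```

Under those assumptions the staging server buys about three weeks of on-disk logs before tape has to keep up, which is the real constraint Lou raises about the tape library.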