Re: backing up a big DB

From: Orlando L <oralrnr_at_gmail.com>
Date: Tue, 29 Mar 2022 13:09:46 -0500
Message-ID: <CAL8Ae76VroP3p3Jz98tLmjiCcc8XySoTcUULm_fumemoLqOccw_at_mail.gmail.com>



So it looks like you need to restore the backup to validate the backup.

I am reading the manual
<https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/validating-database-files-backups.html#GUID-18A4B00E-0D9B-410D-8ABB-2AC78DB99AA4>:
"You can run RESTORE ... VALIDATE to test whether RMAN can restore a
specific file or set of files from a backup. RMAN chooses which backups to
use. The database must be mounted or open for this command." So I guess I
can restore the control file on the test server (the test server can see
the same NFS) and run a restore validate.

This line confirms it: "When validating files on disk or tape, RMAN *reads
all blocks in the backup piece or image copy*." So after the incrementally
updated backups, we can run RESTORE CONTROLFILE and then RESTORE VALIDATE
once a month on the test server to check the backups. I think I have my
strategy.
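
Something like this, I think, run against a throwaway instance on the test
server (the DBID and NFS paths below are made up):

  RMAN> CONNECT TARGET /
  RMAN> STARTUP NOMOUNT;
  RMAN> SET DBID 1234567890;   # DBID of the production DB (placeholder)
  RMAN> RESTORE CONTROLFILE FROM '/nfs/backup/cf/ctl_autobackup.bkp';
  RMAN> ALTER DATABASE MOUNT;
  RMAN> CATALOG START WITH '/nfs/backup/';  # register copies if paths differ
  RMAN> RESTORE DATABASE VALIDATE;  # reads every backup block, writes nothing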

OL.

On Mon, Mar 28, 2022 at 1:36 PM Ilmar Kerm <ilmar.kerm_at_gmail.com> wrote:

> <Shameless plug of my old code, that I'm trying to not use anymore>
>
> We have done a similar thing for many years now: incrementally updating an
> image copy on a dedicated NFS server, while letting the NAS system keep the
> long-term history in snapshots. But since "the state of any backup is
> unknown until a restore is attempted", automating the restores is the key
> here. We restore-test all databases every day, and once a month we also run
> a full VALIDATE DATABASE on the restored copy, to catch those potentially
> rotten bits.
> All code is here:
> https://ilmarkerm.eu/blog/oracle-imagecopy-backup/
>
> </Shameless plug of my old code, that I'm trying to not use anymore>
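>
> The core of it is just the standard RMAN incrementally-updated-backup
> pattern (the tag below is a placeholder; the real scripts are in the repo
> above):
>
>   RUN {
>     # roll the previous level 1 into the datafile copies
>     RECOVER COPY OF DATABASE WITH TAG 'nfs_copy';
>     # take today's level 1 (the first run creates the level 0 copy instead)
>     BACKUP INCREMENTAL LEVEL 1
>       FOR RECOVER OF COPY WITH TAG 'nfs_copy' DATABASE;
>   }
>
> with the disk channel FORMAT pointed at the NFS mount, so both the copies
> and the incrementals land there.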
>
> As for timings - it all depends on the storage. Our all-flash NAS systems
> and all the pipes in between are pretty fast, so for smaller databases like
> 23 TB we do not really notice the VALIDATE DATABASE command at all; it
> probably runs a few hours. For a 300 TB+ database it is already noticeable,
> and it usually runs a few days.
>
> I'm trying to move away from this kind of backup system lately, since
> taking a backup is quite expensive and block change tracking keeps hitting
> nasty bugs occasionally (on 19c), up to freezing the entire database. It
> seems easier to just create another Data Guard standby that stores the
> database on separate storage with a long retention time (i.e., the backup)
> and have the NAS snapshot it internally on a few-hour schedule. I'm writing
> restore tests for it at the moment.
>
>
> On Mon, Mar 28, 2022 at 6:06 PM Orlando L <oralrnr_at_gmail.com> wrote:
>
>> Hi
>>
>> We have a 23 TB Oracle database, and the full backup times are a problem.
>> We are currently backing it up to an NFS share on weekends. I am trying to
>> find options for cutting down the time. I am looking into incrementally
>> updated backups, which I think may cut the backup time drastically. I am
>> concerned about the long run, though. Since this copies over only the
>> changed data, I am wondering what will happen if some infrequently
>> accessed block in the backup goes corrupt. I am thinking that it may
>> become a problem when it is time to do a restore. Is this concern
>> warranted? I am also wondering about the VALIDATE command used on a backup
>> of a DB this size. Does anyone use VALIDATE on such big backups? How long
>> does it take? All ideas welcome. 19c.
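>>
>> To be clear, I mean validating the backups themselves rather than the
>> live datafiles, i.e. roughly the second of these:
>>
>>   RMAN> VALIDATE DATABASE;          # checks the live datafiles for corruption
>>   RMAN> RESTORE DATABASE VALIDATE;  # reads all blocks of the backups instead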
>>
>> PS. No money for BCV or a parallel Data Guard server to offload backups.
>>
>> Orlando.
>>
>>
>
> --
> Ilmar Kerm
>

--
http://www.freelists.org/webpage/oracle-l