Re: backing up a big DB
Date: Mon, 28 Mar 2022 20:36:36 +0200
Message-ID: <CAKnHwtfgQG4UfF3xVRukeFdVpjzd2uqMVcZu1eXRje2b-hESmg_at_mail.gmail.com>
<Shameless plug of my old code, that I'm trying to not use anymore>
We have done a similar thing for many, many years now: incrementally updating
an image copy on a dedicated NFS server, while letting the NAS system keep the
long-term history in snapshots. But since "the state of any backup is unknown
until a restore is attempted" - automating the restores is the key here.
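
For reference, the incrementally updated image copy idea is the standard RMAN
incremental-merge recipe - this is just a minimal sketch of it, not my actual
code, and the tag name 'nfs_copy' is made up:

```
RUN {
  # apply the previous run's level 1 incremental to the image copy on NFS
  RECOVER COPY OF DATABASE WITH TAG 'nfs_copy';
  # take a new level 1 incremental to be merged on the next run
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY
    WITH TAG 'nfs_copy' DATABASE;
}
```

On the first run there is no copy yet, so the same BACKUP command creates the
initial level 0 image copy; after that, each run only reads and merges changed
blocks.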
We restore-test all databases every day, and once a month we also run a full
validate database on the restored copy, to catch any potentially rotten
bits.
</Shameless plug of my old code, that I'm trying to not use anymore>
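
A rough sketch of what such a daily restore test does on a scratch host -
assuming the instance is already started and mounted from a restored
controlfile, and picking an arbitrary recovery point (the UNTIL TIME here is
just an illustration):

```
RUN {
  SET UNTIL TIME "SYSDATE - 1/24";   # assumption: recover to ~1 hour ago
  RESTORE DATABASE;                  # read the image copy / backups end to end
  RECOVER DATABASE;                  # apply incrementals and archived logs
}
ALTER DATABASE OPEN RESETLOGS;       # prove the restored copy actually opens
```

The point is that this exercises the full read path of the backup every day,
which a plain backup-side check never does.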
As for timings - it all depends on the storage. Our all-flash NAS systems
and all the pipes in between are pretty fast, so for small databases like 23TB
we do not really notice the validate database command at all; it probably runs
a few hours. For a 300TB+ database it is already noticeable, and it usually
runs a few days.
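
For the monthly check, the validate itself is a one-liner; on big databases it
is worth parallelising it across channels and splitting large files into
sections (the parallelism degree and section size below are arbitrary
examples, tune to your storage):

```
CONFIGURE DEVICE TYPE DISK PARALLELISM 8;          # assumption: 8 channels
VALIDATE CHECK LOGICAL DATABASE SECTION SIZE 64G;  # also catch logical corruption
```

CHECK LOGICAL makes RMAN look inside the blocks for logical corruption, not
just checksum mismatches, which is exactly the "rotten bits" case.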
I'm trying to move away from this kind of backup system lately, since
taking a backup is quite expensive and block change tracking keeps hitting
nasty bugs occasionally (19c), up to freezing the entire database. It seems
easier to just create another Data Guard standby that stores the database on
separate storage with a long retention time (aka backup) and have the NAS
snapshot it internally on a few-hour schedule. Writing restore tests for it
ATM.
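
If you do go the block change tracking route, it is worth monitoring it; the
usual workaround when the tracking file gets into a bad state is to disable
and re-enable it (note the next level 1 backup then has to scan the whole
database again):

```
SQL> SELECT status, filename FROM v$block_change_tracking;

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
```

Without a destination clause the tracking file goes to db_create_file_dest,
so make sure that parameter is set first.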
All code is here:
https://ilmarkerm.eu/blog/oracle-imagecopy-backup/

On Mon, Mar 28, 2022 at 6:06 PM Orlando L <oralrnr_at_gmail.com> wrote:
> Hi
>
> We have a 23TB Oracle database and the full backup times are a problem. We
> are currently backing it up to an NFS share on weekends. I am trying to find
> options for cutting down the time. I am looking into incrementally updated
> backups, which I think may cut down the backup time drastically. I am
> concerned about the long run, though. Since it copies over only the changed
> data, I am wondering what will happen if some infrequently accessed block
> in the backup goes corrupt. I am thinking that it may be a problem when it
> is time to do a restore. Am I warranted in this kind of thinking? I am also
> wondering about the VALIDATE command used on a backup of a big DB of this
> size. Does anyone use VALIDATE on such big backups? How long does it take?
> All ideas welcome. 19c.
>
> PS. No money for BCV or a parallel dataguard server to offload backups.
>
> Orlando.
>
>
--
Ilmar Kerm

--
http://www.freelists.org/webpage/oracle-l

Received on Mon Mar 28 2022 - 20:36:36 CEST