Re: backing up a big DB
Date: Mon, 28 Mar 2022 12:07:15 -0500
Message-ID: <CAL8Ae77GiLkMd-jDgQAsGEq6Chu+nfoATNT+eVPZeERCP2EsEQ_at_mail.gmail.com>
Excellent point, Mark. I should have mentioned that up front.
1) Purpose: recover the whole database if the production DB goes 'bad'.
2) Not that urgent: users may be willing to wait a couple of days (or three).
3) If I understand your question correctly, we need to be able to recover the data at least to the previous backup; this is not an online system, data goes in via nightly/weekly loads.
Thanks,
OL
On Mon, Mar 28, 2022 at 12:00 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:
> The first thing we need to know to make sensible recommendations is the
> purpose of the full backup.
>
> The second thing we need to know is your recovery urgency. This may vary
> by application, which in turn may mean that the database should be
> separated into a small number of databases if the data required for the
> most urgent recoverable database is small. In theory the pieces can be PDBs
> if you don’t mind recovering at urgency into a prepared container for the
> urgent PDBs. That would mean some practice. I recommend frequent failovers
> be built into your procedures so that you avoid the situation of being
> rusty at your recovery procedure should you ever have to do it for real.
>
> The third thing we need to know is the order of coherent sets from which
> you intend to do recovery.
>
> But since you are currently asking about something being backed up to NFS
> and you protest that you can’t do anything like coherently split plexes
> (aka mirrors), I suppose the first diagnostic is to pick some individual
> tablespace and compare its backup speed to NFS versus to some spare
> direct-attached disk on your server.
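>
> For example, something along these lines (the tablespace name and both
> paths are placeholders for your environment; RMAN prints the elapsed
> time for each backup, which gives you the comparison):
>
>   rman target / <<EOF
>   # same tablespace, image copy, two destinations: compare elapsed times
>   BACKUP AS COPY TABLESPACE users FORMAT '/nfs/backup/%U';
>   BACKUP AS COPY TABLESPACE users FORMAT '/u01/scratch/%U';
>   EOF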
>
> That will tell you whether latency and bandwidth to your NFS destination
> are the problem (or not).
>
> You could also spin up a test database on the NFS storage and run Kevin
> Closson’s SLOB against it.
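>
> (A rough sketch from memory of the SLOB 2.x kit; the script names and
> arguments may differ in your version, so check the kit’s README. After
> creating a tablespace, e.g. IOPS, on the NFS-backed database:)
>
>   ./setup.sh IOPS 64    # load 64 SLOB schemas into tablespace IOPS
>   ./runit.sh 64         # drive 64 sessions of random I/O against them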
>
> Good luck,
>
> mwf
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
> *On Behalf Of* Orlando L
> *Sent:* Monday, March 28, 2022 12:06 PM
> *To:* oracle-l_at_freelists.org
> *Subject:* backing up a big DB
>
> Hi
>
> We have a 23TB Oracle database and the full backup times are a problem. We
> are currently backing it up to an NFS share on weekends. I am trying to
> find options for cutting down the time. I am looking into incrementally
> updated backups, which I think may cut down the backup time drastically. I
> am concerned about the long run, though. Since it copies over only the
> changed data, I am wondering what will happen if some not-frequently-accessed
> block in the backup goes corrupt. I am thinking that it may be a problem
> when it is time to do a restore. Am I warranted in this kind of thinking? I
> am also wondering about the VALIDATE command used on a backup of a DB this
> size. Does anyone use VALIDATE on such big backups? How long does it take?
> All ideas welcome. This is 19c.
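>
> (For concreteness, the incremental-merge pattern I am looking at is the
> standard one from the RMAN documentation, roughly as below; the tag name
> is arbitrary. For the corruption worry, a periodic RESTORE ... VALIDATE
> pass, which reads every backup block RMAN would use in a restore without
> actually restoring anything, is what I had in mind:)
>
>   rman target / <<EOF
>   RUN {
>     # roll the previous level 1 changes into the image copy
>     RECOVER COPY OF DATABASE WITH TAG 'incr_merge';
>     # take a new level 1; the first run creates the level 0 copy
>     BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY
>       WITH TAG 'incr_merge' DATABASE;
>   }
>   # verify the backups are readable and free of corruption
>   RESTORE DATABASE VALIDATE CHECK LOGICAL;
>   EOF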
>
> PS. No money for BCVs or a parallel Data Guard server to offload backups.
>
> Orlando.
>
-- http://www.freelists.org/webpage/oracle-l