RE: backing up a big DB

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Mon, 28 Mar 2022 12:59:53 -0400
Message-ID: <025301d842c5$3f708980$be519c80$_at_rsiz.com>



The first thing we need to know to make sensible recommendations is the purpose of the full backup.  

The second thing we need to know is your recovery urgency. This may vary by application, which in turn may mean the database should be split into a small number of databases if the data required for the most urgently recoverable one is small. In theory the pieces can be PDBs, if you don't mind recovering under time pressure into a prepared container for the urgent PDBs. That would take some practice. I recommend building frequent failover drills into your procedures so that you are never rusty at your recovery procedure should you ever have to do it for real.
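If you do go the PDB route, the in-place mechanics are just a PDB-level restore and recover; restoring into a *different*, pre-built container takes more setup (for example RMAN DUPLICATE ... PLUGGABLE DATABASE), which is part of why practice matters. A minimal sketch, with a hypothetical PDB name:

  # urgent_pdb is a placeholder; run against a CDB that can see the backups
  RUN {
    RESTORE PLUGGABLE DATABASE urgent_pdb;
    RECOVER PLUGGABLE DATABASE urgent_pdb;
  }
  ALTER PLUGGABLE DATABASE urgent_pdb OPEN;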

The third thing we need to know is the order of coherent sets from which you intend to do recovery.  

But since you are currently asking about something being backed up to NFS, and you protest that you can't do anything like coherently split plexes (aka mirrors), I suppose the first diagnostic is to pick some individual tablespace and compare the backup speed to NFS against a spare direct-attached disk on your server.
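A concrete way to run that comparison, if it helps: back the same tablespace up as an image copy twice with different FORMAT destinations and compare the elapsed times RMAN reports. The tablespace name and both paths below are placeholders for whatever you have:

  # sketch only; USERS and the two destinations are placeholders
  BACKUP AS COPY TABLESPACE users FORMAT '/nfs/backup/ts_%U';
  BACKUP AS COPY TABLESPACE users FORMAT '/u01/local_spare/ts_%U';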

That will tell you whether latency and bandwidth to your NFS destination are the problem (or not).

You could also spin up a database on the NFS storage and run Kevin Closson's SLOB against it.

Good luck,  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Orlando L
Sent: Monday, March 28, 2022 12:06 PM
To: oracle-l_at_freelists.org
Subject: backing up a big DB  

Hi  

We have a 23TB Oracle database and the full backup times are a problem. We are currently backing it up to an NFS share on weekends, and I am looking at options for cutting down the time. I am looking into incrementally updated backups, which I think may cut the backup time drastically, but I am concerned about the long run. Since it copies over only the changed data, I am wondering what happens if some infrequently accessed block in the backup copy goes corrupt: I am thinking that could be a problem when it is time to restore. Is that concern warranted? I am also wondering about the VALIDATE command on a backup of this size. Does anyone use VALIDATE on such big backups? How long does it take? All ideas welcome. 19c.
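For reference, the incrementally updated (incremental-merge) pattern described here is the documented RECOVER COPY / BACKUP ... FOR RECOVER OF COPY pairing, and silent corruption in a rarely touched image copy is exactly what RMAN's validation reads are meant to catch. A minimal sketch, with the tag name as a placeholder:

  # roll the existing image copy forward, then take a fresh level 1
  RECOVER COPY OF DATABASE WITH TAG 'incr_merge';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_merge' DATABASE;

  # read the copies/backups a restore would use and check them for corruption,
  # without writing anything to the datafiles
  RESTORE DATABASE VALIDATE CHECK LOGICAL;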

PS: No money for BCVs or a Data Guard standby to offload backups.

Orlando.  

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Mar 28 2022 - 18:59:53 CEST
