Re: backing up a big DB

From: Mladen Gogala <gogala.mladen_at_gmail.com>
Date: Thu, 31 Mar 2022 16:45:43 -0400
Message-ID: <4f38fb9f-ee58-38a8-4924-777741133449_at_gmail.com>


On 3/28/22 12:06, Orlando L wrote:
Hi

We have a 23TB Oracle database, and full backup times are a problem. We currently back it up to an NFS share on weekends, and I am looking at options for cutting down the time. I am considering incrementally updated backups, which I think may reduce the backup window drastically. I am concerned about the long run, though: since only changed data is copied over, what happens if an infrequently touched block in the backup copy goes corrupt? I am thinking it may only surface when it is time to do a restore. Is that concern warranted? I am also wondering about running the VALIDATE command against a backup of a database this size. Does anyone use VALIDATE on backups this big? How long does it take? All ideas welcome. 19c.

PS. No money for BCV or a parallel dataguard server to offload backups. 

Orlando.
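
[For context, the incrementally updated backup approach Orlando describes is usually scripted in RMAN roughly as follows. This is a sketch, not the poster's actual script; the tag name 'incr_update' is illustrative:

    RUN {
      # Roll the most recent level-1 incremental into the datafile image copies.
      # On the first run there is nothing to recover yet, so this is a no-op.
      RECOVER COPY OF DATABASE WITH TAG 'incr_update';
      # Take a new level-1 incremental to be merged on the next run.
      # On the first run, with no level-0 copy present, this creates one.
      BACKUP INCREMENTAL LEVEL 1
        FOR RECOVER OF COPY WITH TAG 'incr_update'
        DATABASE;
    }

After the first full cycle, each weekend run only reads changed blocks and merges them into the standing image copy, which is what makes the window so much shorter than a full backup.]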

Where is the database located? Is it on some kind of SAN, or on JBOD? If it is a SAN, as it most probably is, what kind of SAN? Most modern SAN devices support storage snapshots, so that may be a solution. Second, what kind of NAS device are you backing up to? What kind of network connection do you have between the NAS and the database server? How fast is your NAS, and what write speed can you achieve? What version of NFS are you using? In my experience, NFS 4 is much faster than NFS 3.
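
[On the VALIDATE question above: RMAN can check backups without restoring them, and the runtime on a large backup set is usually driven by how many channels you allocate. A sketch, with an illustrative channel count:

    RUN {
      # More channels means more backup pieces read in parallel;
      # 4 is an arbitrary example, size it to your I/O capacity.
      ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c4 DEVICE TYPE DISK;
      # Reads the backup pieces needed for a restore and checks them,
      # without actually writing any datafiles.
      RESTORE DATABASE VALIDATE CHECK LOGICAL;
    }

VALIDATE DATABASE, by contrast, checks the live datafiles rather than the backup, so for Orlando's "will my old backup blocks still restore" concern, RESTORE ... VALIDATE is the relevant form.]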

Regards

-- 
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
--
http://www.freelists.org/webpage/oracle-l
Received on Thu Mar 31 2022 - 22:45:43 CEST