Re: Question on RMAN restore from tape
From: Keith Moore <keithmooredba_at_gmail.com>
Date: Wed, 18 Dec 2019 13:31:09 -0600
Message-Id: <68E6BA11-D77C-4BD3-A065-25396FEADDEB_at_gmail.com>
Yes, I am aware of that. What I am hoping to find out is how other vendors' backup solutions handle this.

Keith
> On Dec 18, 2019, at 1:06 PM, Leng <lkaing_at_gmail.com> wrote:
>
> Hi Keith,
>
> You’ll need to play with filesperset or maxpiecesize or maxsetsize to get a size that will work for you. Most often the default will not be useful if you only want to restore a single small file from a large backup piece.
>
> Cheers,
> Leng
>
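(For anyone finding this thread in the archives: a minimal RMAN sketch of the knobs Leng mentions. The SBT library path and the sizes below are illustrative placeholders, not the actual Cloud at Customer settings.)

   # Cap the size of each backup piece written through the SBT channel.
   # The SBT_LIBRARY path here is a placeholder, not the real module location.
   CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
     PARMS 'SBT_LIBRARY=/path/to/libopc.so' MAXPIECESIZE 32G;

   # Cap the total size of any one backup set.
   CONFIGURE MAXSETSIZE TO 64G;

   # Or control it per run: put each archived log in its own backup set.
   BACKUP ARCHIVELOG ALL FILESPERSET 1;
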
>> On 19 Dec 2019, at 5:01 am, Keith Moore <keithmooredba_at_gmail.com> wrote:
>>
>> I am working for a client that has an Exadata Cloud at Customer. We just migrated a large database and I am setting up backups. The backups go to the Object Storage that is part of the Cloud at Customer environment, and backups and restores are done through a tape interface.
>>
>> As part of the testing, I tried to restore a single 5 GB archivelog and eventually killed it after around 12 hours.
>>
>> After tracing and much back and forth with Oracle support, it was found that the issue is related to filesperset. The archivelog was part of a backup set containing 45 archive logs, around 500 GB in total. To restore the one archive log, the entire 500 GB has to be downloaded, throwing away what is not needed.
>>
>> The obvious solution is to reduce filesperset to a low number. [See the sketch after this message.]
>>
>> But my question for people with knowledge of other backup systems (hello Mladen) is whether this is normal. It is horribly inefficient for situations like this. Since object storage is “dumb”, maybe there is no other option, but it seems like this should be filtered on the storage end rather than transferring everything over what is already a slow interface.
>>
>> Keith
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
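(Following up on the filesperset point quoted above: with each archived log in its own backup set, a single-log restore only has to pull that log's own piece back from object storage instead of the whole 500 GB set. A sketch, with a made-up sequence number:)

   # Restore one archived log by sequence number; with FILESPERSET 1,
   # only the backup piece containing this log is fetched.
   RESTORE ARCHIVELOG SEQUENCE 12345 THREAD 1;
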
--
http://www.freelists.org/webpage/oracle-l

Received on Wed Dec 18 2019 - 20:31:09 CET