RE: RMAN catalog question

From: Michael Schmitt <mschmitt_at_uchicago.edu>
Date: Fri, 20 Jul 2012 14:06:00 +0000
Message-ID: <1184E7EFAB1D1C47A5038D06F64BE92601FB352F_at_XM-MBX-02-PROD.ad.uchicago.edu>



We are doing something similar to David; the main difference is that we are creating a new catalog for each of the databases. For example:
  • Single RMAN catalog instance running 10.2.0.5
  • Catalog A - created using rman executable 10.2.0.3, for database A running 10.2.0.3
  • Catalog B - created using rman executable 11.2.0.1, for database B running 11.2.0.1
  • Catalog C - created using rman executable 11.1.0.7, for database C running 11.1.0.7
  • Catalog D - created using rman executable 11.1.0.7, for database D running 11.1.0.7
  • Catalog E - created using rman executable 11.2.0.3, for database E running 11.2.0.3
  • Catalog F - created using rman executable 11.2.0.1, for database F running 11.2.0.1
  • .....

I think you get the idea of how that works across another 100+ databases; a rough sketch of how each catalog gets created is below.
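Something like this per target database (user name, password, tablespace and connect strings below are placeholders, not the real ones):

   -- in the 10.2.0.5 catalog instance, one schema per target database
   SQL> create user rcat_a identified by <password>
          default tablespace rcat_data quota unlimited on rcat_data;
   SQL> grant recovery_catalog_owner to rcat_a;

   -- then, using the rman executable that matches target database A (10.2.0.3)
   $ $ORACLE_HOME/bin/rman catalog rcat_a/<password>@rcatdb
   RMAN> create catalog;
   RMAN> connect target sys/<password>@db_a
   RMAN> register database;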

While it is likely just personal preference, I find some benefits to designing our RMAN environment in this way that I do not see when using a single catalog:

  • The catalog schema is at the same version as its target database, so I don't have to worry about needing to upgrade the catalog (this is what I am trying to confirm)
  • If I upgrade a database in my environment, I just connect to rman using the new rman client version and run "upgrade catalog" (see the sketch after this list). This only impacts the database I just upgraded, since it has its own catalog
  • If I remove a database from the environment, removing the catalog is as easy as dropping the user that stores that database's catalog
  • If I need to move the catalog to a different server, the move only affects that single system
  • If there is some form of corruption in my catalog (which I experienced with a Data Guard configuration once), I can drop the catalog and rebuild it while only impacting that one environment
  • I don't have to worry about upgrading the instance I use to store all the catalogs every time I deploy a new version (this is also what I am trying to confirm)
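Concretely, the upgrade and removal cases above look roughly like this (paths and names are placeholders):

   # after upgrading a database, run "upgrade catalog" with the new rman client
   $ /u01/app/oracle/product/11.2.0/db_1/bin/rman catalog rcat_e/<password>@rcatdb
   RMAN> upgrade catalog;
   RMAN> upgrade catalog;

(RMAN asks for the command a second time to confirm the upgrade.)

   # retiring a database: its catalog is just one schema, so drop that user
   SQL> drop user rcat_f cascade;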

I am mainly trying to confirm that the setup I outlined still falls within the certified configuration. I believe it does, since it seems the catalog schema version is determined by the version of the RMAN client used in the "create catalog" command, and is not tied to the version of the instance that stores the catalog.
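One way to sanity-check that is to query the rcver table in each catalog owner's schema, which reports the catalog schema version; for catalog B, for example, I would expect something like this (schema name and connect string are placeholders):

   SQL> connect rcat_b/<password>@rcatdb
   SQL> select * from rcver;

   VERSION
   ------------
   11.02.00.01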

-----Original Message-----
From: David Robillard [mailto:david.robillard_at_gmail.com]
Sent: Friday, July 20, 2012 8:44 AM
To: Michael Dinh
Cc: Michael Schmitt; oracle-l mailing list
Subject: Re: RMAN catalog question

Hello Michael,

> Separate catalog for production and non-production.

That's even better! If you combine the examples I wrote and the one you just sent us, then I think your Oracle RDBMS backup and recovery setup would be just about ideal.

Unless we can improve it even further somehow?

DA+

On Fri, Jul 20, 2012 at 9:24 AM, Michael Dinh <mdinh235_at_gmail.com> wrote:
> Hello David,
>
> Great examples. At one company, I had set up 2 RMAN catalogs: connect
> to catalog 1 for backups and, when completed, connect to catalog 2 and resync it.
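> Something along these lines (connection details and command-file names are placeholders):
>
>   # nightly backups run against catalog 1
>   rman target / catalog rcat1/<password>@catdb1 cmdfile=backup_full.rcv
>
>   # afterwards, refresh catalog 2 from the target's control file
>   rman target / catalog rcat2/<password>@catdb2 cmdfile=resync_only.rcv
>
> where resync_only.rcv just contains "resync catalog;".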
>
> Test and patch/upgrade RMAN catalog 1 first; when everything is okay,
> repeat for catalog 2.
>
> Separate catalog for production and non-production.
>
> -Michael.
>
> mdinh.wordpress.com
>
>
> On Fri, Jul 20, 2012 at 4:20 AM, David Robillard
> <david.robillard_at_gmail.com>
> wrote:
>>
>> Hello Michael,
>>
>> > RMAN Compatibility Matrix [ID 73431.1] provides a little more detail.
>> > RMAN executable <= target database (when backing up a 9i database,
>> > use 9i RMAN)
>> > Catalog schema >= RMAN executable
>> > Catalog database >= 10.2.0.3 (note 1)
>> >
>> > Based on the information, I would create the RMAN database and
>> > catalog using the latest and greatest release to ensure compatibility.
>>
>> I agree.
>>
>> As a concrete example, I recently consolidated several database
>> backups into a single RMAN catalog. The RMAN catalog was installed on
>> a dedicated machine running 11.2. This machine was only used for RMAN.
>> The target databases were a mix of 9.2, 10.2 and 11.2 RAC on various
>> operating systems (RedHat 3, 4 and 5, Solaris 8, 9 and 10 plus
>> Windows Server 2003 R2). Yes, RedHat 3 and Solaris 8 were still in
>> production
>> (!)
>>
>> To back up the 11.2 RAC clusters, I used a cron job on the RMAN
>> 11.2 machine that would run a shell script which connected to a
>> service name (and not an SID) on the RAC clusters. This script used
>> the 11.2 rman binary from the RMAN 11.2 machine to connect to the
>> 11.2 RAC instances. I then used global RMAN stored scripts < execute
>> script 'script_name' using 'db_unique_name' > to backup the clusters.
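>> A minimal sketch of that kind of cron wrapper (schedule, service name,
>> password, paths and script name are made up):
>>
>> # crontab entry on the RMAN 11.2 machine
>> 00 2 * * * /home/oracle/scripts/backup_rac.sh PRODRAC_SRV
>>
>> #!/bin/sh
>> # backup_rac.sh - connects through the service name, not a SID
>> SERVICE=$1
>> $ORACLE_HOME/bin/rman <<EOF
>> connect target sys/<password>@${SERVICE}
>> connect catalog rcat/<password>@rcatdb
>> run { execute global script backup_full; }
>> exit;
>> EOF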
>>
>> To back up the 9.2 databases, I used a cron job on the 9.2 machines
>> that called the local 9.2 rman executable. The 9.2 rman executable
>> would then connect to the 11.2 RMAN catalog and use a different
>> stored script from the one used with 11.2. The reason for that is
>> that in 9.2 the concept of global scripts doesn't exist yet. So you
>> have to create a stored script for each and every one of your 9.2
>> databases that are going to use your 11.2 RMAN catalog.
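>> A minimal sketch of what that looks like for one 9.2 database (script and
>> connection names are made up):
>>
>> # run once to store a per-database script in the 11.2 catalog
>> rman target / catalog rcat/<password>@rcatdb <<EOF
>> create script backup_full_db92a {
>>   backup database;
>>   backup archivelog all delete input;
>> }
>> EOF
>>
>> # nightly cron job on the 9.2 host, using the local 9.2 rman executable
>> rman target / catalog rcat/<password>@rcatdb <<EOF
>> run { execute script backup_full_db92a; }
>> EOF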
>>
>> Finally, to back up the 10.2 databases, I again used a cron job (or
>> scheduled task in Windows) that executed a shell script (or a batch
>> file) which used the local 10.2 rman executable to connect to the
>> 11.2 RMAN catalog. This script then executed a global script stored
>> in the catalog. In 10.2, we now have access to global scripts. Of
>> course, you need a script for UNIX/Linux machines and a different one
>> for the Windows target because the paths aren't the same.
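>> For instance, the two global scripts can differ only in the backup
>> destination, something like (paths are made up):
>>
>> RMAN> create global script backup_disk_unix
>>       { backup database format '/backup/rman/%d_%U'; }
>> RMAN> create global script backup_disk_win
>>       { backup database format 'E:\backup\rman\%d_%U'; }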
>>
>> To back up the RMAN 11.2 catalog (let's not forget to do this :) I
>> used yet another cron job that used the local RMAN 11.2 binary and
>> connected to the catalog using < nocatalog >.
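>> That job boils down to something like this (sketch only):
>>
>> # cron on the RMAN machine itself, backing up the catalog database to disk
>> $ORACLE_HOME/bin/rman target / nocatalog <<EOF
>> backup database plus archivelog;
>> backup current controlfile;
>> EOF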
>>
>> Backup data for all of these databases (including the RMAN catalog
>> backup data) was sent to a central NFS directory (or a Samba share
>> from that same NFS server) which was subsequently sent to tape. Those
>> tapes were then encrypted and sent offsite.
>>
>> As you can see, backup of the 11.2 targets works the other way around
>> from the 9.2 and 10.2 targets. I mean, in 11.2, I used the RMAN
>> server's binary to connect to the targets, while for the 9.2 and
>> 10.2 targets, I used the target's binary to connect to the catalog. The
>> reason is simple: if you use the RMAN machine's 11.2 rman
>> executable to connect to a 9.2 or 10.2 database, you get an error.
>>
>> If you'd like to see the cron scripts and the RMAN stored scripts,
>> let me know.
>>
>> HTH,
>>
>> David
>> --
>> David Robillard
>> http://www.linkedin.com/in/davidrobillard
>> http://itdavid.blogspot.ca/
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Jul 20 2012 - 09:06:00 CDT
