Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Manual update / replication on a live database?

Re: Manual update / replication on a live database?

From: Frank van Bortel <frank.van.bortel_at_gmail.com>
Date: Thu, 04 Jan 2007 08:05:32 +0100
Message-ID: <engujs$88i$1@news6.zwoll1.ov.home.nl>


tcp100 wrote:

> John, that REALLY helps. Thanks for all the details.
> 
> As for your query:
> 
> "Your requirements for "continuous" availability of the read-only
> copy are surprising.  Is no downtime (not even a minute or so) allowed
> at all during the three "data freshening events" that occur each week?
> That seems surprising, given that the same requirements also seem to
> tolerate three very discontinuous "jumps" in the data content seen by
> the users of the "read only" data. "
> 
> I agree with you; that sounds ridiculous - but where I stand now,
> Management has set down an edict for this project saying "no downtime".
>  I think that means that there should be no loss of service as far as
> users are concerned; discontinuity of data is not as much of an issue.
> 
> I agree that it makes no sense, but until I am able to get the "brass"
> to clarify this and understand it from a technical point of view, I
> need to approach this "as-is".
> 
> Again, and also in reaction to the previous poster -- this is not my
> utopian scenario, and I know it's a bit ridiculous, but unfortunately
> it's not my decision to make.  The isolation of this server is set in
> stone - it's a legal restriction, and there is no budging on that,
> period.  The "no downtime" restriction may be a bit of managementese
> that we can work through - but until I can solicit a viable solution
> with alternatives and timeframes, that's the word I unfortunately have
> to go by...
> 
> And so goes the world of work... :)
> 
> 
> John K. Hinsdale wrote:

>> Oracle user with replication issues, tcp100 wrote:
>>
>>> We have two databases - one is our "active" database that takes in
>>> transactions, and the other is a read-only "copy" at a remote location.
>>> We'd like to update this "copy" database three times a week, but
>>> since this remote database is not WAN connected, it'd have to be via CD
>>> or other manual file transfer.
>>>
>>> The single requirement, however, is that we can't take either database
>>> down to extract the data (on the "active" side) or to upload the data
>>> (on the "read only copy" side). Both databases need to be able to be
>>> accessed uninterrupted during any such maintenance procedures.
>> Chris, a lot of how this might be approached will depend on size, both
>> the raw size of your database in general as well as the incremental
>> size (and nature) of the transactions applied between updates of the
>> "copy" database. E.g., is it mostly INSERTs, or is there a lot of
>> UPDATE and DELETE activity?
>>
>> Knowing nothing else, though, below is one option I'd consider if it
>> were me. It involves copying the ENTIRE database and not an
>> incremental approach as you seem to be hoping for. I'm not sure an
>> "offline" incremental solution is possible, but I don't know of one.
>>
>> Anyway, one way:
>>
>> (1) Run export ("exp") on LIVEDB, using CONSISTENT=Y to get a
>> read-consistent snapshot of the database as of a point in time. Take
>> this "exp" copy at a time of day where modification traffic
>> (INSERT/UPDATE/DELETE) is at its lowest, if there is such a time of
>> day. This will help avoid running out of rollback (undo) space when
>> you do a "consistent exp". Compress the dump file -- if you have a
>> multiprocessor or multi-core machine, check out the awesome
>> "pbzip2" utility; it will save you a factor of 8 or however many
>> CPUs/cores you have. Copy this compressed export file to your CD/DVD
>> burner, Commodore 64 cassette tape drive, or whatever totable storage
>> media you plan to use. Don't forget to buckle your seat belt on the
>> drive over to the non-WAN connected site ;)
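Step (1) might be scripted roughly like this. It is a dry run: DRYRUN=echo only prints the commands instead of executing them, and the credentials, FULL=Y scope, and file names are placeholders rather than anything from the post.

```shell
#!/bin/sh
# Dry-run sketch of the consistent export + parallel compression step.
# Clear DRYRUN (DRYRUN=) on a real system to actually run the commands.
DRYRUN=echo
STAMP=$(date +%Y%m%d)
DUMP="livedb_$STAMP.dmp"

# CONSISTENT=Y makes exp read the whole database as of one point in time
$DRYRUN exp system/password FULL=Y CONSISTENT=Y FILE="$DUMP" LOG="exp_$STAMP.log"

# pbzip2 splits compression across all CPUs/cores for a near-linear speedup
$DRYRUN pbzip2 "$DUMP"
```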
>>
>> (2) On the "readonly" site, maintain either two separate schemas
>> (SCHEMA1 and SCHEMA2) on a single instance, or two separate Oracle
>> instances (INST1 and INST2). At any given time, call one of these
>> schemas/instances the "active" read-only copy, and the other the
>> "next-to-load" copy. Point client applications at the "active" copy.
>>
>> To freshen the "readonly" site's data, wipe clean all the objects in
>> the "next-to-load" instance (or schema): you can DROP USER ... CASCADE
>> and then just recreate the user. Uncompress and import ("imp") the
>> file created in (1) to create a new version of the database with the
>> fresher data.
>>
>> You'll need to keep track, perhaps in a config file somewhere, which
>> of the two read-only instances/servers is the "active" one, and which
>> is the "next-to-load" one, and to swap the two after each freshening.
>> If it's a thrice-weekly thing it'd all be automated with scripts
>> of course ;)
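The "which copy is active" bookkeeping can be a few lines of shell around a state file; the instance names follow the post, but the file path and the sequencing are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: track the active read-only instance in a state file and swap
# it after each successful freshen. INST1/INST2 as in the post; the
# state-file location is an assumption.
STATE="readonly_active.cfg"
[ -f "$STATE" ] || echo INST1 > "$STATE"    # first run: INST1 is active

ACTIVE=$(cat "$STATE")
if [ "$ACTIVE" = "INST1" ]; then NEXT=INST2; else NEXT=INST1; fi
echo "clients stay on $ACTIVE; loading fresh dump into $NEXT"

# ... DROP USER ... CASCADE, recreate the user, imp into $NEXT,
# ... sanity-check the freshly loaded copy, then:

echo "$NEXT" > "$STATE"    # cutover: the freshly loaded copy becomes active
```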
>>
>> (3) "Cut over" at the readonly site to access the new data. This can
>> be done at one of two software "levels":
>>
>> (a) If you have two separate instances, you can update your TNS or
>> Oracle names resolving info to "point" to the new instance. For
>> example, if the TNS alias clients use is "READONLY" and it resolves to
>>
>> host=somehost, port=1521, SID=INST2
>>
>> you can then update it so that "READONLY" resolves to
>>
>> host=somehost, port=1521, SID=INST1
>>
>> This will cause all clients that connect to the Oracle service
>> "READONLY" to now use the newer data on INST1. You now do your next
>> re-load into server INST2. Note that you'll need unchanging TNS
>> aliases for INST1 and INST2 to use for the freshening programs
>> (incl. "imp") themselves!
>>
>> This is doing the cutover at the "Oracle service resolution"
>> level.
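Concretely, the cutover in (a) is a one-line edit to the READONLY entry in tnsnames.ora (host and port here are illustrative, matching the example above):

```
READONLY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = somehost)(PORT = 1521))
    (CONNECT_DATA = (SID = INST1))   # was SID = INST2 before the cutover
  )
```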
>>
>> (b) If you have two separate schemas, you'd need to reconfigure ALL
>> the client apps which connect to use, say SCHEMA2 now instead of
>> SCHEMA1. This is probably much harder to do and definitely harder to
>> maintain. This is doing the cutover at the "client application"
>> level.
>>
>> (c) Another approach is to drop and re-create synonyms in some third
>> schema whose name remains constant. But this will produce a momentary
>> "hiccup" in availability.
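For (c), a small generator script keeps the synonym repoint mechanical; the constant-name schema (APP_OWNER) and the table list are purely illustrative names, not from the post.

```shell
#!/bin/sh
# Sketch for approach (c): emit CREATE OR REPLACE SYNONYM statements
# that repoint a constant-name schema at whichever copy was just
# loaded. APP_OWNER and the table names are hypothetical.
ACTIVE=SCHEMA2
SQL=$(for t in ORDERS CUSTOMERS INVOICES; do
  echo "CREATE OR REPLACE SYNONYM APP_OWNER.$t FOR $ACTIVE.$t;"
done)
echo "$SQL"
# On a real system, pipe the output into sqlplus as a suitably
# privileged user, e.g.:  sh repoint_synonyms.sh | sqlplus -s system/password
```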
>>
>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>>
>> Your requirements for "continuous" availability of the read-only
>> copy are surprising. Is no downtime (not even a minute or so) allowed
>> at all during the three "data freshening events" that occur each week?
>> That seems surprising, given that the same requirements also seem to
>> tolerate three very discontinuous "jumps" in the data content seen by
>> the users of the "read only" data.
>>
>> Even so, approaches (3a) and (3b) above should ensure that no attempt
>> to access the read-only copy is ever denied.
>>
>> Again, the nature of this approach is to dump and load the entire
>> database. This will not work well if the database is so big it cannot
>> be dumped (or re-loaded) in some sub-day amount of time (say, four to
>> six hours). Then again, if the database is that huge you will also
>> begin to run into mundane issues of storing it on removable media for
>> the transfer. You might consider DLT tape. And definitely look into
>> "pbzip2". Where I work we have a 64-processor Linux box and that
>> factor of 64 speedup in compression is really nice ;)
>>
>> Hope that helps,
>>
>> John Hinsdale

>
Well, in that case, consider telling us the Oracle version, and look into transportable tablespaces.

If the source db cannot even have the tablespaces made read-only, consider taking an inconsistent (or "hot") backup using RMAN, and use the RMAN duplicate command to resurrect the source at a different location. Make the tablespace(s) involved read-only, and export these. Compress to your C64 cassette player, and off to the remote site.
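Frank's route might be sketched as another dry run; every name here (AUXDB, the USERS tablespace, the credentials, the script files) is a placeholder, and DRYRUN=echo only prints the commands.

```shell
#!/bin/sh
# Dry-run sketch of the RMAN-duplicate + transportable-tablespace route.
DRYRUN=echo
TS=USERS
META="ts_${TS}_meta.dmp"

# 1. Clone the live db to a side instance, leaving the source untouched.
#    dup.rcv would contain:  DUPLICATE TARGET DATABASE TO AUXDB;
$DRYRUN rman TARGET sys/pw@LIVEDB AUXILIARY sys/pw@AUXDB CMDFILE dup.rcv

# 2. On the clone, freeze the tablespace, then export its metadata.
#    make_readonly.sql would contain:  ALTER TABLESPACE users READ ONLY;
$DRYRUN sqlplus -s "sys/pw@AUXDB as sysdba" @make_readonly.sql
$DRYRUN exp "sys/pw@AUXDB as sysdba" TRANSPORT_TABLESPACE=Y TABLESPACES=$TS FILE="$META"

# 3. Ship $META plus the tablespace's datafiles (pbzip2'd) to the
#    remote site and plug them in with imp TRANSPORT_TABLESPACE=Y.
```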

Then use a similar approach to the one described above (an aged, live db and an up-to-date db coming online) at the remote site.

-- 
Regards,
Frank van Bortel

Top-posting is one way to shut me up...
Received on Thu Jan 04 2007 - 01:05:32 CST

