the best approach to migrate a database to a new server
From: <ahmed.fikri_at_t-online.de>
Date: Fri, 31 Jan 2020 15:22:01 +0100 (CET)
Message-ID: <1580480521140.9749333.9cd7754cbf44139091ba49d1e7f93876550c49d8_at_spica.telekom.de>
Hi all,
we are planning to migrate a 16 TB database from 11g on an AIX machine to 12c on Linux. Data Guard is used on the target database. The DB has about 50 schemas; the biggest one is about 11 TB, the second is 2.6 TB, then there are four of roughly 1 TB each, and the rest are each less than 1 TB. Unfortunately, all schemas share the same tablespaces.
Which approach could we use with the least downtime?
My first idea was to move the schemas to separate tablespaces and then migrate them using transportable tablespaces, roughly as sketched below.
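Roughly what I have in mind for the transportable tablespace route is the following; the tablespace name, directory object and paths are just placeholders, and since AIX is big-endian while Linux x86-64 is little-endian, I assume the data files also need an RMAN convert:

  -- on the 11g source (AIX): make the tablespace read-only for the copy
  ALTER TABLESPACE ts_big1 READ ONLY;

  # export only the tablespace metadata (ts_big1 / dp_dir are placeholders)
  expdp system DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_exp.log \
        TRANSPORT_TABLESPACES=ts_big1

  # convert for the endian change and stage the converted copies
  RMAN> CONVERT TABLESPACE ts_big1
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/stage/%U';

  # copy /stage/* and tts_meta.dmp to the Linux host; the schema users
  # must already exist on the 12c target, then:
  impdp system DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_imp.log \
        TRANSPORT_DATAFILES='/u02/oradata/TGT/ts_big1_01.dbf'

  -- finally, back to read-write on the target
  ALTER TABLESPACE ts_big1 READ WRITE;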
A second idea is to somehow copy only the metadata to a new instance, so that the new database uses the old data files, and then move the data files over one by one; a rough sketch of how I picture that follows.
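The closest real feature I know of for this metadata idea is the full transportable export/import (it should be possible because the target is 12c, provided the source is at least 11.2.0.3, which I still have to verify); again, all names and paths are placeholders, and the data files would still need the endian conversion while being copied:

  # on the 11g source: metadata-only dump of the whole database
  # (the user tablespaces have to be READ ONLY while this runs)
  expdp system FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 \
        DIRECTORY=dp_dir DUMPFILE=full_tts.dmp LOGFILE=full_tts_exp.log

  # copy (and RMAN-convert) the data files to the Linux server one by one,
  # then on the 12c target plug them all in:
  impdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=full_tts.dmp LOGFILE=full_tts_imp.log \
        TRANSPORT_DATAFILES='/u02/oradata/TGT/big01.dbf','/u02/oradata/TGT/users01.dbf'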
A third idea is to copy the schemas separately using Data Pump over a database link, also sketched below.
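For the database link variant I picture something like the following per batch of schemas; the link name, TNS alias, credentials and schema names are placeholders, and no dump files are written because NETWORK_LINK pulls the rows directly from the source:

  -- on the 12c target: a link back to the 11g source
  CREATE DATABASE LINK src_11g
    CONNECT TO system IDENTIFIED BY "xxxx"   -- placeholder credentials
    USING 'SRC11G_TNS_ALIAS';

  # then import a few schemas at a time straight over the link
  impdp system NETWORK_LINK=src_11g SCHEMAS=schema_a,schema_b \
        PARALLEL=4 EXCLUDE=STATISTICS LOGFILE=net_imp_batch1.log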
Any ideas, please?
Regards
Ahmed Fikri
Sent with the Telekom Mail App
<https://kommunikationsdienste.t-online.de/redirects/email_app_android_sendmail_footer>
--
http://www.freelists.org/webpage/oracle-l
Received on Fri Jan 31 2020 - 15:22:01 CET