Re: Fwd: RE: convert big endian to medium endian
Date: Sun, 9 Feb 2020 15:46:26 +0100
Message-ID: <CANkb5P3FGqD7L2__hcrUeoooWdKeOsjFB-_M0+H48wLmC98b1g_at_mail.gmail.com>
:-)
At the beginning I was not convinced that the export is slow because of a
bug; I assumed it was slow simply because of the large number of partitions.
(That's why I ruled out that an upgrade of the source would help, and
completely forgot about this option.)
Thanks for this simple idea. I will see whether we can realize it.
On Sun, 9 Feb 2020 at 15:04, Mark J. Bobak <mark_at_bobak.net> wrote:
> Getting into this late, but you're going from 11g on AIX to 12c on Linux,
> correct? So, to avoid the 11g bug, couldn't you upgrade the source database
> to 12c first, then do the export? Presuming that bug is fixed in 12c?
>
> -Mark
>
> On Sun, Feb 9, 2020, 06:57 Ahmed Fikri <gherrami_at_gmail.com> wrote:
>
>> I think what would be problematic for us is the import of the metadata (I
>> have no idea how long it will take, but given the export time, I expect
>> that it will also take a long time).
>>
>> But I think the idea is also a good option for us. I think we will find
>> a way to synchronize both DBs after the migration.
>>
>> I will report on which option we chose and how the migration went (if we
>> do it at all).
>>
>> Thanks and regards
>> Ahmed
>>
>> On Sat, 8 Feb 2020 at 23:32, Tanel Poder <tanel_at_tanelpoder.com> wrote:
>>
>>> If the metadata export takes most of the time, you can do the export in
>>> advance, ensure that no schema (and other metadata) changes happen after
>>> you've started your metadata export and later on load the data separately.
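>>>
>>> A metadata-only export can be done roughly like this via the DBMS_DATAPUMP
>>> API (directory, dump file and job names below are just placeholders; the
>>> expdp equivalent is CONTENT=METADATA_ONLY):
>>>
>>> DECLARE
>>>   h  NUMBER;
>>>   st VARCHAR2(30);
>>> BEGIN
>>>   -- full export job, but filter out all rows so only metadata is written
>>>   h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'FULL',
>>>                           job_name  => 'META_ONLY_EXP');
>>>   DBMS_DATAPUMP.ADD_FILE(h, 'meta_only.dmp', 'DP_DIR');
>>>   DBMS_DATAPUMP.DATA_FILTER(h, name => 'INCLUDE_ROWS', value => 0);
>>>   DBMS_DATAPUMP.START_JOB(h);
>>>   DBMS_DATAPUMP.WAIT_FOR_JOB(h, st);
>>> END;
>>> /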
>>>
>>> You'd need to worry about metadata that changes implicitly with your
>>> application workload - like sequence numbers increasing. The sequences can
>>> be altered just before switchover to continue from where they left off in
>>> the source.
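>>>
>>> For example, one common way is to check the source value and then bump the
>>> target sequence over the gap with a temporary increment (owner, sequence
>>> name and the gap below are just placeholders):
>>>
>>> -- on the source: see where the sequence currently is
>>> SELECT last_number FROM dba_sequences
>>>  WHERE sequence_owner = 'APP' AND sequence_name = 'ORDER_SEQ';
>>>
>>> -- on the target: jump over the gap, take one NEXTVAL, restore the increment
>>> ALTER SEQUENCE app.order_seq INCREMENT BY 100000;
>>> SELECT app.order_seq.NEXTVAL FROM dual;
>>> ALTER SEQUENCE app.order_seq INCREMENT BY 1;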
>>>
>>> You can even copy historical partitions (that don't change) over way
>>> before the final migration to cut down the time further. But if XTTS speed
>>> is enough for you for bulk data transfer, then no need to complicate the
>>> process.
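>>>
>>> If you do go that route, a sketch (assuming the historical partitions sit
>>> in their own tablespaces; the tablespace name is a placeholder):
>>>
>>> -- on the source: stop any changes to the historical data
>>> ALTER TABLESPACE HIST_2018 READ ONLY;
>>> -- then copy/convert its datafiles and plug the tablespace into the target
>>> -- with a transportable-tablespace export of just that tablespace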
>>>
>>> Years (and decades) ago, when there was no XTTS and even no RMAN, this
>>> is how you performed large migrations. Copy everything you can over in
>>> advance and only move/refresh what you absolutely need to during the
>>> downtime.
>>>
>>> --
>>> Tanel Poder
>>> https://tanelpoder.com/seminar
>>>
>>>
>>> On Sat, Feb 8, 2020 at 10:38 AM Ahmed Fikri <gherrami_at_gmail.com> wrote:
>>>
>>>> Hello Jonathan,
>>>>
>>>> sorry for the confusion.
>>>>
>>>> The production DB is about 16 TB (11.2.0.4 on AIX) and has about 4.5
>>>> million partitions, with about 1000 new partitions added every day. The
>>>> metadata export takes 3 days and 4 hours. The DBA, who I think is very
>>>> experienced, had already found out that the export is slow because of a
>>>> known bug in 11.2.0.4 (he sent me the MOS ID and mentioned that the
>>>> problem is related to an x$k... view - on Monday I will send the exact
>>>> information). Unfortunately I didn't show much interest in this
>>>> information (big mistake), because I thought the problem was caused by
>>>> our application design (and that is only my opinion as a developer), and
>>>> I also thought it should somehow be possible to convert the whole DB
>>>> without needing to use Data Pump for the metadata (in theory I think it
>>>> is possible - but as I have now realized, in practice it is tough).
>>>>
>>>> And to check my assumption that we can convert all the DB's datafiles
>>>> (the datafiles for the metadata too) using C or C++, I am using a 10 GB
>>>> test DB.
>>>>
>>>> Thanks and regards
>>>> Ahmed
>>>>
-- http://www.freelists.org/webpage/oracle-l
Received on Sun Feb 09 2020 - 15:46:26 CET