RE: Multiple imports simultaneous?
Date: Mon, 4 Jan 2016 13:46:19 -0500
Message-ID: <019401d14720$34917ad0$9db47070$_at_rsiz.com>
For things that prove to behave badly in the non-Data Pump imp/exp, or in impdp/expdp, you might experiment with the SQL*Plus COPY command. If LONGs are involved, make sure to set your LONG SQL*Plus session variable to at least one byte longer than your longest LONG value, lest you be subject to the documented functionality of silent truncation.
When last measured, SQL*Plus COPY reached a performance peak plateau somewhere between 512 and 4096 rows per commit, and you want your ARRAYSIZE to be something reasonable (at least in the tens). The number of rows per commit is the product of ARRAYSIZE and COPYCOMMIT, so ARRAYSIZE 128 and COPYCOMMIT 4 would put you at 512.
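A minimal sketch of those settings in a SQL*Plus session (the connect string, table name, and LONG value are placeholders, not from this thread):

    SET LONG 100000     -- at least one byte longer than your longest LONG value
    SET ARRAYSIZE 128   -- rows fetched per round trip
    SET COPYCOMMIT 4    -- commit every 4 array batches: 128 x 4 = 512 rows
    COPY FROM scott/tiger@src9i -
      INSERT big_table -
      USING SELECT * FROM big_table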
Beware attempting overdone parallelism lest you simply clog your network interface. When last checked, several datatypes were omitted from its functionality (COPY handles only CHAR, DATE, LONG, NUMBER, and VARCHAR2), so verify your column types first.
Now, before you respond that SQL*Plus COPY is deprecated or otherwise less good than either imp/exp or impdp/expdp, please re-read the caveat in the first sentence.
Regards,
mwf
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Storey, Robert (DCSO)
Sent: Monday, January 04, 2016 1:18 PM
To: Seth Miller
Cc: Kenny Payton; Oracle L
Subject: RE: Multiple imports simultaneous?
I’m currently running down something I read in one of the 9i docs but that is missing from the Metalink docs.
In the 9i docs it states that on an import, if your table contains a DATE column (among other types), then commit=y causes inserts to be committed at each row rather than per import array of rows. This would seem to negate the effects of the BUFFER option.
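For reference, a minimal imp command line showing the two parameters in play (the connect string, file names, and schema are placeholders, not from this thread):

    # BUFFER is in bytes; with COMMIT=y, tables that trigger the
    # single-row behavior commit per row no matter how large BUFFER is
    imp system/manager@target file=exp9i.dmp log=imp_b.log \
        fromuser=B touser=B buffer=10485760 commit=y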
From: Seth Miller [mailto:sethmiller.sm_at_gmail.com]
Sent: Monday, January 04, 2016 12:15 PM
To: Storey, Robert (DCSO)
Cc: Kenny Payton; Oracle L
Subject: Re: Multiple imports simultaneous?
Yes, I missed that the data was coming out of 9i.
Seth Miller
On Mon, Jan 4, 2016 at 12:10 PM, Storey, Robert (DCSO) <RStorey_at_dcso.nashville.org> wrote:
Wish I could make your day.
But, since I am migrating from a 9i system, I can only use the native imp/exp
From: Seth Miller [mailto:sethmiller.sm_at_gmail.com]
Sent: Monday, January 04, 2016 11:07 AM
To: Storey, Robert (DCSO)
Cc: Kenny Payton; Oracle L
Subject: Re: Multiple imports simultaneous?
Robert,
Please make my day and tell me you are using Data Pump, not imp.
Seth Miller
On Mon, Jan 4, 2016 at 7:21 AM, Storey, Robert (DCSO) <RStorey_at_dcso.nashville.org> wrote:
Yea, I’m exploring options of just not doing the import now, and then doing it later by loading the data into a temp table and breaking it into smaller chunks, like you suggested…
From: Kenny Payton [mailto:k3nnyp_at_gmail.com]
Sent: Wednesday, December 30, 2015 3:21 AM
To: Storey, Robert (DCSO)
Cc: Oracle L
Subject: Re: Multiple imports simultaneous?
I would concentrate on separating out and improving the speed of the single table with the 12 hour import time. I have run into similar issues with tables with out-of-line CLOB data. The way I resolved it was to break the single table export and import into several smaller jobs that can be run concurrently. If you have a column that lends itself easily to breaking the table fairly evenly into many chunks, use that column. If it's a number field you can use the MOD function. If you don't have a good key you can use the DBMS_ROWID package and MOD on something like block id. I was able to minimize complexity a bit by taking this one table and converting the move into inserts across a database link, but you should also be able to use the query option on expdp. I'm assuming this table is not partitioned; otherwise, using the partition key would probably work nicely as well.
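A minimal sketch of that chunking approach, assuming a numeric key column ID and a database link SRC back to the source (all names here are hypothetical); each concurrent SQL*Plus session runs the same statement with a different remainder:

    -- one session per remainder value, e.g. &chunk = 0..3 for four chunks;
    -- conventional-path insert on purpose: a direct-path /*+ APPEND */ would
    -- serialize the concurrent sessions on the table lock.
    -- Verify first that the column types involved are supported across the link.
    INSERT INTO big_table
    SELECT * FROM big_table@src
    WHERE MOD(id, 4) = &chunk;
    COMMIT;
    -- lacking a usable key, MOD on DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
    -- against the source rows can chunk by block instead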
In my experience the only way I could obtain dramatic import speed improvements was to break the single table into many smaller concurrently running jobs.
Kenny
On Dec 29, 2015 4:29 PM, "Storey, Robert (DCSO)" <RStorey_at_dcso.nashville.org> wrote:
Just curious about import.
I’m importing an older 9i system into an 11g system. I am precreating the tablespaces, etc.
Ultimately, the import only cares about 3 schemas: one that has only my PL/SQL, and the other two have the data. So, Schema A, B, C. Schema B has about 360 tables and about 190 million rows total. Schema C has about 45 tables and about 35 million rows, of which 27 million are in one table that has an XMLTYPE column. Importing just that one table takes about 12 hours. I’m working on methods to trim that time.
But is it possible to do multiple imports at once if using different inbound schemas?
- Export the database to create my dump file.
- On the target server, make 3 copies of the import file.
- Do an import of Schema A, rows=y, indexes=n, constraints=n
- Do an import of Schema B, rows=y, indexes=n, constraints=n
- Do an import of Schema C, rows=y, indexes=n, constraints=n
- Do an import for each schema with rows=n, indexes=y, constraints=y (a sketch of both passes follows).
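A sketch of what that could look like, assuming a Unix shell and the native imp; dump file names, log names, and passwords are placeholders:

    # data-only passes, one per schema and dump file copy, run concurrently
    imp system/pw file=full_a.dmp log=imp_a.log fromuser=A touser=A rows=y indexes=n constraints=n &
    imp system/pw file=full_b.dmp log=imp_b.log fromuser=B touser=B rows=y indexes=n constraints=n &
    imp system/pw file=full_c.dmp log=imp_c.log fromuser=C touser=C rows=y indexes=n constraints=n &
    wait
    # second, sequential pass for indexes and constraints only
    imp system/pw file=full_a.dmp log=ddl_a.log fromuser=A touser=A rows=n indexes=y constraints=y
    imp system/pw file=full_b.dmp log=ddl_b.log fromuser=B touser=B rows=n indexes=y constraints=y
    imp system/pw file=full_c.dmp log=ddl_c.log fromuser=C touser=C rows=n indexes=y constraints=y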
Theoretically, these should not interfere with each other. I can set the database to NOARCHIVELOG and increase the redo logs so that waits should be reduced.
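For the archiving piece, a sketch of the usual sequence (run as SYSDBA; the redo log path and size are placeholders):

    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;
    -- larger and/or additional redo groups reduce log-switch waits during the load
    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/db/redo04.log') SIZE 512M;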
Thoughts?