Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: sqlldr performance
Nan <nandagopalj_at_hotmail.com> wrote:
>I actually have, say, 1000 data files with control information and
>data. Each file has about 100,000 rows in it. Further assume all the
>data files are for a particular table. If I run multiple copies of
>sqlldr, one per data file, I hope to get better performance. Since all
>the sqlldr processes load data into the same table, will there be
>contention issues?
(Some of this might be wrong for direct path; I'm only familiar with large parallel sqlldr loads into a cluster, which requires conventional path.)
There will be the same contention issues as with any parallel insert.
The biggest problem will probably be freelist contention, so make sure you have plenty of freelists on the table, and keep the number prime. In 8.1.6+ this can be changed dynamically. You almost certainly want to drop and recreate any indexes on the table if possible, and disable any insert triggers.
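The freelist adjustment mentioned above might look like the following. This is a hedged sketch, not from the original post: the table name is hypothetical, and the prime count of 7 is just an example; `ALTER TABLE ... STORAGE (FREELISTS n)` is the dynamic mechanism available from 8.1.6 on.

```sql
-- Hypothetical example: raise the number of freelists on the target
-- table to a prime value before the parallel load (8.1.6+).
ALTER TABLE load_target STORAGE (FREELISTS 7);
```

Freelists only matter here for manual segment space management; tablespaces using automatic segment space management handle this differently.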
-- Andrew Mobbs - http://www.chiark.greenend.org.uk/~andrewm/
Received on Thu Oct 18 2001 - 12:06:05 CDT