sqlldr [message #687743]
Mon, 22 May 2023 08:29
deepakdot
Messages: 91 Registered: July 2015
Member
Hi,
I am loading a CSV file into a table that has a CLOB column, and the load takes a very long time.
The control file looks like this:
load data
infile '/sqlldrtest/1_TEST_TABLE.csv'
append into table TEST_TABLE
fields terminated by "|"
trailing nullcols
(COL1,
COL2,
COL3 DATE 'YYYY-MM-DD',
COL4 TIMESTAMP 'YYYY-MM-DD HH24.MI.SS.FF',
COL5 CHAR(20000) NULLIF COL5 = 'DUMMY_CLOB'
)
Command used to run:
sqlldr TEST/TEST@//hostname:1521/DBTEST 1-TEST_TABLE.ctl parallel=true rows=500000 bindsize=200000000
Here COL5 is a CLOB column, and the CSV file contains the CLOB data inline. Also, even though I specify rows=500000, a commit still happens approximately every 10,000 rows. Would the load be faster if the commit happened only every 500,000 rows?
Please suggest how I can make this table load faster. Are there any other parameters I should add?
Regards,
Deepak Samal
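For context on the ~10,000-row commit interval: in a conventional-path load, the bind array is capped by BINDSIZE, so the effective commit interval is roughly min(ROWS, BINDSIZE / maximum row length). Assuming the CHAR(20000) buffer declared for COL5 dominates the row length, a back-of-the-envelope check gives:

    rows per commit ~ min(500000, 200000000 / 20000) = 10000

which matches the behaviour described above. Raising BINDSIZE, or shrinking the CHAR() buffer to the real maximum field length, should raise the commit interval accordingly.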
Re: sqlldr [message #687744 is a reply to message #687743]
Mon, 22 May 2023 08:46 |
John Watson
Messages: 8964 Registered: January 2010 Location: Global Village
Senior Member
I don't see that commit frequency would have any significance. However, using a direct path load might help a lot.
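For example, a direct path version of the command above might look like this (a sketch only, carrying over the values from the original command; not tested against this schema):

    sqlldr TEST/TEST@//hostname:1521/DBTEST control=1-TEST_TABLE.ctl direct=true rows=500000

In direct path mode, ROWS sets the data-save interval rather than a commit/bind-array size, BINDSIZE is not used, and PARALLEL=TRUE is intended for direct path loads anyway.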