Problem improving conventional path load, direct=false, with number of commits and bindsize/readsize [message #500788]
Wed, 23 March 2011 09:20
ilharco
Messages: 8  Registered: March 2011  Location: Portugal
Junior Member
Hi guys,
I'm currently doing an academic benchmark, TPC-H, and I have some big tables that I want to load with both direct path (which worked great and was very fast) and conventional path.
For that, I used this batch script:
rem table4_records.txt holds one "table_name row_count" pair per line
for /F "tokens=1,2" %%A in (table4_records.txt) do (
  sqlldr userid='tpch/tpch' control=%%A.ctl rows=%%B bindsize=? readsize=? SILENT=HEADER log=bulkload_logs\sf4\bulk_%%A%1.log
)
The problem is that, no matter what values I give the bindsize and/or readsize options, it always commits every 65,534 rows. I already pass %%B, which is the exact number of rows in each table.
With direct path load I just used rows= and the commit really happened only after the whole table was loaded.
I want the same behaviour with conventional path load - I know it is not as fast, but that's the point.
Could you please tell me how to set the parameters so that I can:
1- load as much data at a time as possible;
2- commit less frequently, preferably only at the end of each table's load.
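For what it's worth, my current reading of the SQL*Loader documentation (an assumption, not verified on this system) is that a conventional path bind array can hold at most 65,534 rows, so the effective rows-per-commit is capped at that value no matter how large rows= or bindsize= are. A minimal sketch of that arithmetic, with an illustrative row length:

```python
# Sketch of how SQL*Loader conventional path seems to pick rows per commit.
# Assumption: the bind array holds at most 65,534 rows (the documented
# conventional-path ceiling) and at most BINDSIZE bytes; 200 bytes is a
# made-up maximum row length for illustration only.

CONVENTIONAL_PATH_MAX_ROWS = 65534  # hard cap, regardless of rows=

def rows_per_commit(requested_rows, bindsize_bytes, max_row_len_bytes):
    """Effective rows held in one bind array (= one commit). Illustrative only."""
    fit_in_bindsize = bindsize_bytes // max_row_len_bytes
    return min(requested_rows, fit_in_bindsize, CONVENTIONAL_PATH_MAX_ROWS)

# Asking for all 23,996,604 lineitem rows with a huge 512 MB bindsize
# still lands on the 65,534-row ceiling.
print(rows_per_commit(23_996_604, 512 * 1024 * 1024, 200))
```

If that cap is real, no combination of rows/bindsize/readsize can produce a single commit per table in conventional path, which would explain what I'm seeing.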
Here are the tables' names and number of rows:
lineitem 23996604 -> the biggest, approx. 3 GB on disk
orders 6000000
partsupp 3200000
part 800000
customer 600000
supplier 40000
nation 25
region 5