import with blob column [message #163385]
Thu, 16 March 2006 10:12
kapilcool (Junior Member)
Hi,
I am trying to import a dump file. The table has a BLOB column and about 9 million rows, and the import is running very slowly: roughly 1 million rows in 16 hours.
The following is my import par file:
USERID=oracle/oracle
BUFFER=67108864
FILE=( /export/apps/oracle/admin/db1/exp/DB_tables_dat01.dmp,
/export/apps/oracle/admin/db1/exp/DB_tables_dat02.dmp,
/export/apps/oracle/admin/db1/exp/DB_tables_dat03.dmp)
FILESIZE=2000m
FULL=N
FROMUSER=oracle
TOUSER=oracle
INDEXES=N
CONSTRAINTS=N
GRANTS=N
IGNORE=Y
ROWS=Y
COMMIT=Y
COMPILE=N
STATISTICS=NONE
ANALYZE=Y
FEEDBACK=250000
LOG=imp_db_tables.log
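For reference, I invoke the import along these lines (the parfile name is just whatever I saved the above as, not anything official):

imp parfile=imp_db_tables.par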
--------------------------------------
It looks like the redo logs may be a problem: the alert log shows repeated log switches during the import, with each log switch checkpoint taking close to 30 minutes to complete:
Thu Mar 16 07:05:55 2006
Beginning log switch checkpoint up to RBA [0x37.2.10], SCN: 0x0000.00c27857
Thread 1 advanced to log sequence 55
Current log# 1 seq# 55 mem# 0: /u1/oradata/DBDB02/redo_g1m1.log
Current log# 1 seq# 55 mem# 1: /u2/oradata/DBDB02/redo_g1m2.log
Thu Mar 16 07:34:55 2006
Completed checkpoint up to RBA [0x37.2.10], SCN: 0x0000.00c27857
Thu Mar 16 09:01:09 2006
Beginning log switch checkpoint up to RBA [0x38.2.10], SCN: 0x0000.00c706e8
Thread 1 advanced to log sequence 56
Current log# 2 seq# 56 mem# 0: /u1/oradata/DBDB02/redo_g2m1.log
Current log# 2 seq# 56 mem# 1: /u2/oradata/DBDB02/redo_g2m2.log
Thu Mar 16 09:29:28 2006
Completed checkpoint up to RBA [0x38.2.10], SCN: 0x0000.00c706e8
Thu Mar 16 10:55:34 2006
Beginning log switch checkpoint up to RBA [0x39.2.10], SCN: 0x0000.00cb958c
Thread 1 advanced to log sequence 57
Current log# 3 seq# 57 mem# 0: /u1/oradata/DBDB02/redo_g3m1.log
Current log# 3 seq# 57 mem# 1: /u2/oradata/DBDB02/redo_g3m2.log
---------------------------------------------------
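In case the logs really are undersized, this is the query I was planning to run to check the redo log group sizes, plus the kind of statement I would use to add bigger groups (the 512M size and the group-4 file names are just my guesses, modelled on the existing naming):

-- Check current redo log group sizes and status
SELECT group#, thread#, bytes/1024/1024 AS size_mb, status
FROM   v$log
ORDER  BY group#;

-- If they turn out to be small, add larger groups
-- (512M and the file names below are guesses)
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u1/oradata/DBDB02/redo_g4m1.log',
   '/u2/oradata/DBDB02/redo_g4m2.log') SIZE 512M;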
Is there anything I can do to improve the data load speed? Is it possible to avoid redo generation altogether?
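One thing I was considering, though I'm not sure how much it helps a conventional-path import (as far as I know, NOLOGGING only affects direct-path and LOB operations, so imp would still generate redo for the row data), is switching the LOB segment to NOLOGGING for the duration of the load. The table and column names below are placeholders for my real ones:

-- Placeholders: my_table / blob_col stand in for the real table and column.
-- This should cut redo for the out-of-line LOB data only; conventional
-- inserts into the table itself still generate redo.
ALTER TABLE my_table MODIFY LOB (blob_col) (NOCACHE NOLOGGING);

-- Revert once the import finishes so the LOB data is logged again
ALTER TABLE my_table MODIFY LOB (blob_col) (NOCACHE LOGGING);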
Thanks
Kapil