Oracle-L Mailing List: RE: Informatica Bulk Mode behavior
Waleed,

Indeed you were on the right track. We got the following msg from
Informatica Support. "Oracle design limitation"??????
Problem:

In the 6.x version, running a session targeting Oracle in bulk mode
creates new extents at every commit. This sometimes leads to the
following error: "ORA-01658: unable to create INITIAL extent for
segment in tablespace."

Solution:

This is a known issue (CR 53791) that occurs when using Oracle bulk
loading. To resolve this issue, increase the commit interval to a very
high value.

SUPPORTING INFORMATION: This issue is due to the following design
limitation in Oracle: Direct Path loading (which uses the same code
path as bulk loading) uses only blocks above the high water mark.
Direct Path loading also does not use partial blocks, so no reads are
needed to find them and fewer writes are performed. In a conventional
data load, the bind array buffer is filled and passed to Oracle with a
SQL INSERT statement, which engages the space management of SQL command
processing. Therefore bulk loading will allocate more extents.
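The behavior Informatica describes can be reproduced outside their tool. A minimal sketch (table and segment names are hypothetical, not from this thread): each committed direct-path insert loads above the high water mark, so space below the HWM is never reused, and you can watch the extent count climb after every commit.

```sql
-- Hypothetical illustration of direct-path loading allocating new
-- extents per commit. The APPEND hint requests a direct-path insert,
-- the same mechanism Informatica's bulk mode appears to use.
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_table;
COMMIT;

-- After each commit, check how much space the segment now occupies;
-- repeated direct-path loads grow this even if rows were deleted.
SELECT segment_name,
       COUNT(*)              AS extent_count,
       SUM(bytes)/1024/1024  AS size_mb
FROM   user_extents
WHERE  segment_name = 'TARGET_TABLE'
GROUP  BY segment_name;
```

Raising the commit interval reduces the number of direct-path load passes, which is why Informatica's workaround helps: fewer commits means fewer rounds of above-HWM allocation.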
-----Original Message-----
From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Khedr, Waleed
Sent: Thursday, April 22, 2004 11:34 AM
To: oracle-l_at_freelists.org
Subject: RE: Informatica Bulk Mode behavior
I would ask the developers how they configured the commit interval (if
it's set to a very low value).
Waleed
-----Original Message-----
From: Thomas Jeff [mailto:jeff.thomas_at_thomson.net]
Sent: Thursday, April 22, 2004 10:33 AM
To: oracle-l_at_freelists.org
Subject: Informatica Bulk Mode behavior
List,
Is anyone familiar with how Informatica's bulk mode works?

We have a situation where I created a 4GB tablespace for a new and
simple truncate/load operation from Informatica, around 7 million rows
estimated to take up 1.5GB.

Using bulk mode, which appears to be a direct load (in the SQL cache,
the INSERT statement has a hint that I've never seen before:
SYS_DL_CURSOR, which I assume stands for Direct Load), they run out of
space in the tablespace after about 200K rows have been inserted.

If I then manually rebuild the table, the 200K rows get compressed back
down to one extent.
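The manual rebuild mentioned above is typically done with a MOVE, which recreates the segment compactly and resets the high water mark. A sketch, with table and index names assumed for illustration:

```sql
-- Rebuild the segment; rows are rewritten into fresh, packed extents
-- and the high water mark drops to just above the actual data.
ALTER TABLE target_table MOVE;

-- A MOVE changes rowids, which marks the table's indexes UNUSABLE,
-- so each one must be rebuilt afterwards.
ALTER INDEX target_table_pk REBUILD;
```

Note that TRUNCATE also resets the high water mark, which is why truncating before the rerun was a reasonable experiment; it just doesn't prevent the new extents allocated per commit during the load itself.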
So there's a lot of either empty or preallocated space. Thinking
somehow the high-water mark was the culprit, I manually truncated the
table before they reran their job.

Still the same problem.

If the job runs in 'normal' mode, which is row-by-row processing, it
runs fine, although of course, performance is quite poor.
Any ideas/experience as to how this bulk mode operates?
Thanks.
Email: jeff.thomas_at_thomson.net
Indy DBA Master Documentation available at: http://gkmqp.tce.com/tis_dba
-----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
-----------------------------------------------------------------
To unsubscribe send email to: oracle-l-request_at_freelists.org
and put 'unsubscribe' in the subject line.
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html