[Fwd: Re: 2 Gb file limit problem]
I would have fed the spooled output (flat file) into a named pipe and
run its output through split.
This is similar to what I do with exports for tables that are larger
than 2GB.
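
A minimal sketch of that named-pipe approach, assuming a hypothetical
comma-delimited spool of a BIG_TABLE (all paths, names, and piece sizes
here are illustrative, not from the thread). Splitting with -l on whole
lines keeps each record intact for sqlldr:

    # create a named pipe and let split read from it in the background,
    # cutting the stream into pieces on line boundaries
    mkfifo /tmp/spool.pipe
    split -l 8400000 /tmp/spool.pipe /data/big_table_ &

    # spool into the pipe instead of a regular file
    sqlplus -s scott/tiger <<'EOF'
    set heading off feedback off pagesize 0 linesize 32767 trimspool on
    spool /tmp/spool.pipe
    select id||','||col1||','||col2 from big_table;
    spool off
    exit
    EOF

    wait                 # let split finish draining the pipe
    rm /tmp/spool.pipe

Because the spool never lands on disk as a single file, no one output
file ever crosses the 2 GB limit.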
-------- Original Message --------
Subject: Re: 2 Gb file limit problem
Date: Mon, 30 Jul 2001 12:00:32 -0800
From: "Satish Iyer" <Satish.Iyer_at_ci.seattle.wa.us>
Reply-To: ORACLE-L_at_fatcity.com
Organization: Fat City Network Services, San Diego, California
To: Multiple recipients of list ORACLE-L <ORACLE-L_at_fatcity.com>
Thanks Joe. Yes, that was what I did as a work-around yesterday, and I
had to be around for a long time on a weekend. I have most of these
processes automated, and it works fine with 95% of the tables; I am
doing this for about 600 tables, so this would mean going back to
change the code again. I was wondering if there were any
straightforward options that I was missing.

Satish
>>> "JOE TESTA" <JTESTA_at_longaberger.com> 07/30/01 11:44AM >>>how
about this: (avg_row_size
+ delimiters) * number_of_rows = total bytes. total
bytes / 1900000000 = number of pieces. number_of_rows
/ number_of_pieces = number of rows per piece select
number of rows needed multiple times, spooling each one individually. then
sqlldr all the pieces. joe
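
To make the arithmetic concrete with made-up numbers: at roughly 200
bytes per delimited row and 50,000,000 rows, total_bytes = 200 *
50,000,000 = 10,000,000,000; 10,000,000,000 / 1,900,000,000 rounds up
to 6 pieces; and 50,000,000 / 6 is about 8,400,000 rows per piece. A
minimal sketch of spooling one such piece (table, columns, and key
range are hypothetical; filtering on a key range is used here because
ROWNUM order is not guaranteed stable between runs):

    # spool piece 1 of 6 from a hypothetical BIG_TABLE keyed on ID
    sqlplus -s scott/tiger <<'EOF'
    set heading off feedback off pagesize 0 linesize 32767 trimspool on
    spool /data/big_table_1.dat
    select id||','||col1||','||col2 from big_table
    where id between 1 and 8400000;
    spool off
    exit
    EOF

Repeat with the next key range for each remaining piece; each spool
file then stays well under the 2 GB mark.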
>>> Satish.Iyer_at_ci.seattle.wa.us 07/30/01 02:20PM >>>
Hi List,

I need to transport a few tables from one instance to another and, of
course, found the sqlldr method much faster than exp/imp. But the
problem is with large tables: when I spool such tables to a flat file,
it stops spooling after about 2 Gb. Are there any possible solutions to
get around this? I am on AIX 4.3.3 / Oracle 8.1.5.

My ulimits on AIX are:

time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          unlimited
stack(kbytes)         unlimited
memory(kbytes)        unlimited
coredump(blocks)      unlimited
nofiles(descriptors)  2000

Thanks,
Satish
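
For completeness, a minimal sketch of the "then sqlldr all the pieces"
step suggested upthread (the control file, table, and column names are
hypothetical):

    # a simple comma-delimited control file; the datafile is supplied
    # per piece on the command line via data=
    cat > big_table.ctl <<'EOF'
    LOAD DATA
    APPEND INTO TABLE big_table
    FIELDS TERMINATED BY ','
    (id, col1, col2)
    EOF

    # load every spooled piece in turn
    for piece in /data/big_table_*.dat; do
        sqlldr scott/tiger control=big_table.ctl data=$piece log=$piece.log
    done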
Received on Mon Jul 30 2001 - 15:43:57 CDT