
Re: What is fastest way to extract large table to flat text file

From: Billy Verreynne <vslabs_at_onwe.co.za>
Date: Wed, 18 Jul 2001 07:59:45 +0200
Message-ID: <9j38ld$buv$1@ctb-nnrp2.saix.net>

"sdfsd" <sdfes_at_dsf.com> wrote in

> 1 - use sqlplus do a spool. Concerns are: network traffic sending
> it to Sqlplus,

Not an issue if you run SQL*Plus on the server.

> cost of having it processed for display even if we don't have it
> displayed,

Set echo and termout off and redirect the standard output and standard error to /dev/null.
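
Something like this works well - a minimal sketch only (the script name, table, columns, delimiter and spool path are all made up, and the exact SET options depend on the output format you need):

  -- extract.sql (hypothetical)
  set echo off
  set termout off
  set feedback off
  set heading off
  set pagesize 0
  set linesize 2000
  set trimspool on
  set arraysize 1000
  spool /tmp/big_table.dat
  select col1 || '|' || col2 || '|' || col3 from big_table;
  spool off
  exit

Then run it with the silent flag and redirect stdout/stderr:

  sqlplus -s scott/tiger @extract.sql > /dev/null 2>&1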

> concerns about limits of how big a file it can handle.

I have spooled over 10GB's worth of data like that (20+ hour runtime)... across a network (pulling production data across to a data warehouse server). There are no limits imposed by SQL*Plus wrt file sizes - that is up to the free space available and the maximum file size supported by the file system.

Not sure if SQL*Plus is the fastest at getting the data out wrt issues such as array fetches... but even if you can get the data out of Oracle slightly faster than SQL*Plus manages, the bottleneck will still be I/O. You have a single process doing the writes to the output spool file. So from that perspective, I doubt that selecting another method of extracting the data (e.g. the UTL_FILE package) will be any faster than SQL*Plus.

Another option could be to run the extract/spool in parallel - implement something akin to PQ, which uses rowid ranges to have several query slaves working in parallel on the same table. Using a similar method you can perform a parallel spool to separate files and then concatenate these files into a single file after the parallel spools have completed.
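
A rough sketch of that rowid-range trick (all names here are made up - it assumes a plain non-partitioned table and access to the DBA views, and in practice you would generate one spool script per range):

  -- generate one rowid range per extent of the table
  select dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                                 e.block_id, 0)                    as lo_rid,
         dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                                 e.block_id + e.blocks - 1, 32767) as hi_rid
  from   dba_extents e, dba_objects o
  where  o.owner        = 'SCOTT'
  and    o.object_name  = 'MY_TABLE'
  and    o.object_type  = 'TABLE'
  and    e.owner        = o.owner
  and    e.segment_name = o.object_name
  and    e.segment_type = 'TABLE';

  -- each slave then spools its own slice to its own file
  spool /tmp/my_table_01.dat
  select col1 || '|' || col2
  from   scott.my_table
  where  rowid between chartorowid('&lo_rid') and chartorowid('&hi_rid');
  spool off

  -- and afterwards: cat /tmp/my_table_*.dat > /tmp/my_table.dat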

--
Billy
Received on Wed Jul 18 2001 - 00:59:45 CDT
