
Re: Best way to export data from multiple tables to delimited text file

From: Anton Buijs <aammbuijs_at_xs4all.nl>
Date: Thu, 26 Sep 2002 21:57:06 +0200
Message-ID: <amvoq3$kn9$1@news1.xs4all.nl>


The default behaviour of Oracle gives you statement-level consistent read (also called the read committed isolation level). So each query, or each fetch from a cursor, sees the data as it was at the start of the query or at the open of the cursor.
When you issue the command:
  set transaction read only;
you get a transaction-level consistent read: all queries and fetches see the data as it was at the beginning of the transaction. It is only valid for the transaction it starts, so you must issue it before the first select, and it lasts until a commit or rollback. You cannot do insert, update, delete or select for update (each results in ORA-1456).
This is the command export uses when making a CONSISTENT=Y export (see the Utilities Guide).
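
For example, a minimal sketch of a read-only transaction (the table names are made up):

    set transaction read only;

    -- both queries see the data as of the start of the transaction,
    -- even if other sessions commit changes in between
    select count(*) from orders;
    select count(*) from order_lines;

    -- ends the read-only transaction; the next one sees current data again
    commit;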

Another way is to issue the command:

    alter session set isolation_level=serializable;

This gives you a transaction-level consistent read again, but:
- updates are possible, with the risk of getting ORA-8177;
- you don't need to repeat the command after each commit or rollback; it stands for the entire session: all queries and fetches see the data as it was at the beginning of the transaction (note it says: for each transaction in the session, not the entire session).

Since you only want to do queries, I don't think you need this.
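For completeness, a minimal sketch of the serializable variant (the table and column names are made up):

    alter session set isolation_level=serializable;

    -- every transaction in this session now gets a consistent view
    -- as of its own start
    select count(*) from orders;
    update orders set status = 'X'          -- allowed in serializable mode,
     where order_id = 1;                    -- but can fail with ORA-8177
    commit;                                 -- the next transaction gets a
                                            -- fresh consistent view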

You could use this to achieve your goal: you have to run a number of queries on different tables and want to see all the data in a consistent state, as it was at the beginning of your series of queries, i.e. at the start of your transaction. Remember that after a commit or rollback a new transaction starts, so you will then see the data as of that moment (inside Oracle: everything committed with a lower System Change Number than your transaction's SCN). See the Concepts Guide and Metalink (search for the strings "read only" or "serializable") to learn more about this. With this you can avoid the time-consuming creation of a copy of the tables with exp/imp, or holding locks on the tables for a long time.

I prefer to use sqlplus to spool to a csv file, rather than PL/SQL and UTL_FILE. It can also be easier to work with, because UTL_FILE can only write on the database server, while sqlplus can be run from any client. I think it will also be faster, provided "set termout off" is used. To keep the file as small as possible, use "set trimspool on". Set linesize to at least the length of the longest line that can be queried. Together with "set pagesize 0 echo off feedback off verify off doc off", a real csv file without headers etc. will be written.
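
As an illustration, a sqlplus script along these lines would produce one consistent csv file per table (table, column and file names are made up):

    -- run as a script, e.g. sqlplus user/pass @export.sql
    -- (set termout off only takes effect when run from a script file)
    set termout off trimspool on
    set pagesize 0 echo off feedback off verify off doc off
    set linesize 1000                  -- at least the longest possible line

    -- one consistent view for all the spools below
    set transaction read only;

    spool orders.csv
    select order_id || ',' || customer_id || ',' ||
           to_char(order_date, 'YYYY-MM-DD')
      from orders;
    spool off

    spool order_lines.csv
    select order_id || ',' || line_no || ',' || quantity
      from order_lines;
    spool off

    commit;
    exit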

Paul <f1fteen_at_hotmail.com> wrote in news message 7b7286ec.0209260459.5f07ce24_at_posting.google.com...
| Hi all,
|
| I have several large tables (holding between 2 and 5 million records
| each) in an oracle (8i) database on solaris 8.
|
| These tables are being updated frequently and are related to each
| other through foreign keys.
|
| I want to export the data from all of these tables to a text file (one
| text file per table) on the file system. I also need the data to be in
| a consistent state (in the same way that the CONSISTENT=Y parameter
| works in EXP). This is important because as I said above, the tables
| are being updated regularly.
|
| I will be doing this export on a monthly basis, and I will be
| scheduling it to run late in the evening.
|
| What is the best way to go here?
|
| Options I have considered ...
|
| 1. Create a DMP file using EXP with CONSISTENT=Y and then IMP into a
| temporary schema. I will then write some PL/SQL using UTL_FILE to
| extract the data from the tables in the temporary schema (which will
| be in a consistent state) into text files on the file system.
|
| 2. Write some PL/SQL which will initially "lock" the tables against
| update, then use UTL_FILE to extract the data (which will be in a
| consistent state) into text files on the file system.
|
| 3. Create "snapshots" of the tables into temporary schema first, then
| write some PL/SQL using UTL_FILE to extract the data from the tables
| in the temporary schema (which will be in a consistent state) into
| text files on the file system.
|
| If option 2., what would be the best method of locking the tables
| against update at the start of the PL/SQL script?
|
| The reason for doing this is to load the data into a SQL Server
| database (I do not have a direct link between the databases), and I
| believe that I cannot achieve this using the DMP file directly - is
| this correct?
|
| I would greatly appreciate help from anyone who has experience in this
| area.
|
| Thanks in advance.
|
| P.
Received on Thu Sep 26 2002 - 14:57:06 CDT
