
Re: Using a pipe across network with export/import

From: Tanel Poder <change_to_my_first_name_at_integrid.info>
Date: Thu, 28 Aug 2003 09:39:05 +0300
Message-ID: <3f4da38a$1_1@news.estpak.ee>


You could use exp -> pipe -> rsh to the second server -> pipe -> imp. That way you use rsh to transfer your data between pipes over the network and there's no need to store the results in a file. (You can also use ssh with the -C option to compress the traffic if you have enough CPU power.)
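Roughly like this, assuming ssh connectivity between the hosts (the credentials, host name and pipe paths below are only placeholders):

# on the target server: create a pipe and start imp reading from it
mkfifo /tmp/imp.pipe
imp system/manager full=y ignore=y file=/tmp/imp.pipe log=imp.log &

# on the source server: export into a local pipe and push it through ssh -C
mkfifo /tmp/exp.pipe
exp system/manager full=y file=/tmp/exp.pipe log=exp.log &
cat /tmp/exp.pipe | ssh -C oracle@targethost 'cat > /tmp/imp.pipe'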

Or use exp/imp for transporting the schema structure only and transfer all data over a database link. This would be my preferred method, at least. You could even transfer the schema structure first, during uptime, provided that it doesn't change in the meantime (you can check that using last_ddl_time in dba_objects). Disable constraints and set indexes unusable. Then kick the users out of your source database, transfer the increased sequence values, and start using inserts over a dblink to transfer the data.
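For the structure-only pass, plain exp/imp with rows=n is enough, something along these lines (the owner name and passwords are just placeholders):

# on the source: export the structure only, no table data
exp system/manager owner=SCOTT rows=n constraints=y indexes=y triggers=y file=scott_ddl.dmp

# on the target: import the structure, the data comes later over the dblink
imp system/manager fromuser=SCOTT touser=SCOTT rows=n ignore=y file=scott_ddl.dmp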

I have a simple script for generating insert commands over dblink:

set linesize 300
set pagesize 0
set feedback off
set trimspool on

spool /tmp/apps_insert.sql

prompt alter session enable parallel dml;

select 'insert /*+ APPEND PARALLEL(' || table_name || ',4) NOLOGGING */ into '
   || owner || '.' || table_name
   || ' select * from ' || owner || '.' || table_name || '@dblink;'
from dba_tables
where owner in ('AP', 'APPLSYS', 'APPS', 'AR', 'GL', 'JE')
order by owner;

spool off
exit

The parallel hint might not be feasible in your environment, but depending on your network you could still run several insert scripts in parallel, for example:
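You could split the spooled file per schema and start one sqlplus session for each piece (the credentials and file names below are made up):

for o in AP APPLSYS APPS AR GL JE
do
  ( echo "alter session enable parallel dml;"
    # a commit after every insert: a direct-path insert over a dblink normally
    # has to be committed before the same transaction can use the dblink again
    grep "into ${o}\." /tmp/apps_insert.sql | awk '{ print; print "commit;" }'
    echo "exit" ) > /tmp/ins_${o}.sql
  nohup sqlplus -s system/manager @/tmp/ins_${o}.sql > /tmp/ins_${o}.log 2>&1 &
done
wait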

Tanel.

"Billy Verreynne" <vslabs_at_onwe.co.za> wrote in message news:1a75df45.0308272122.7cb8b9f1_at_posting.google.com...
> Rick Denoire <100.17706_at_germanynet.de> wrote:
>
> > I am considering how to transfer a DB to a new platform while minimizing
> > downtime. I would start a full export of the source DB (8.1.7, Solaris
> > 7) into a named pipe located on an NFS-mounted LUN which belongs to the
> > target DB (Oracle 9i, Linux/Intel). On the target DB side, the import
> > job would read from the named pipe, so export and import would run in
> > parallel.
> >
> > Is this feasible?
>
> Yes. But I would go a tad further and also compress the data.
> Especially seeing as this goes over a network.
>
> Something like this.
>
> In Unix:
>
> On Machine 2 (destination + NFS share):
> # mkfifo /nfsshare/pipe2
> # mkfifo /nfsshare/pipe3
> # uncompress </nfsshare/pipe2 >/nfsshare/pipe3&
> # imp parfile=x.par file=/nfsshare/pipe3
> (these processes will now wait for data to be written to the pipes)
>
> On Machine 1 (source + NFS mount):
> # mkfifo /tmp/pipe1
> # compress </tmp/pipe1 >/nfsshare/pipe2&
> # exp parfile=y.par file=/tmp/pipe1
>
>
> In English:
> Machine1's export writes data to pipe1 on its local file system.
> There it is compressed and written into pipe2 on the NFS mount (this
> pipe resides on machine2 and thus the compressed data goes across the
> network).
>
> Machine2 reads the compressed data from pipe2 (a local file) and
> uncompresses it into pipe3 (also a local file). Import grabs its data
> from pipe3 and pumps it into the database.
>
>
> Hmm.. I hope this works like it should across NFS. I'm not so sure
> about how pipes are treated on an NFS mount... (after all, that pipe
> resides on and is managed by a specific kernel, and we're talking
> about two machines here).
>
> Instead of NFS, you can also use FTP - the exact same principle.
> Instead of using NFS, you use FTP to read data from a pipe and write
> it to a pipe on the other machine - this I know works, and I have
> done it numerous times.
>
> --
> Billy
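For the FTP variant, a minimal sketch could look like this (reusing the parfile names from above; the login, pipe names and whether the FTP server lets you write into /tmp are all assumptions):

On machine2 (destination):
mkfifo /tmp/imp.pipe
imp parfile=x.par file=/tmp/imp.pipe &

On machine1 (source):
mkfifo /tmp/exp.pipe
exp parfile=y.par file=/tmp/exp.pipe &
ftp -n machine2 <<EOF
user oracle secret
binary
cd /tmp
put /tmp/exp.pipe imp.pipe
bye
EOF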
