Re: Using a pipe across network with export/import
Rick Denoire <100.17706_at_germanynet.de> wrote:
> I am considering how to transfer a DB to a new platform minimizing
> downtime. I would start a full export of the source DB (8.1.7, Solaris
> 7) into a named pipe located on a NFS mounted LUN which belongs to the
> target DB (Oracle 9i, Linux/Intel). On the target DB side, the import
> job would read from the named pipe, so export and import will run in
> parallel.
>
> Is this feasible?
Yes. But I would go a tad further and also compress the data, especially seeing as this goes over a network.
Something like this.
In Unix:
On Machine 2 (destination + NFS share):
# mkfifo /nfsshare/pipe2
# mkfifo /nfsshare/pipe3
# uncompress </nfsshare/pipe2 >/nfsshare/pipe3 &
# imp parfile=x.par file=/nfsshare/pipe3
(these processes will now wait for data to be written to the pipes)
On Machine 1 (source + NFS mount):
# mkfifo /tmp/pipe1
# compress </tmp/pipe1 >/nfsshare/pipe2 &
# exp parfile=y.par file=/tmp/pipe1
In English:
Machine1's export writes data to pipe1 on its local file system. There it is compressed and written into pipe2 on the NFS mount (this pipe resides on machine2, so the compressed data goes across the network).
Machine2 reads the compressed data from pipe2 (a local file) and uncompresses it into pipe3 (also a local file). Import grabs its data from pipe3 and pumps it into the database.
Hmm... I hope this works like it should across NFS. I'm not so sure how named pipes are treated on an NFS mount (after all, a pipe resides in and is managed by a specific kernel, and we're talking about two machines here).
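A quick sanity check before committing to this approach (same /nfsshare mount as above; testpipe is just a made-up name): if each kernel handles the FIFO locally, data written on one machine will never arrive on the other.
On Machine 2:
# mkfifo /nfsshare/testpipe
# cat /nfsshare/testpipe
On Machine 1:
# echo hello >/nfsshare/testpipe
If "hello" shows up in the cat on Machine 2, FIFOs do carry data across your NFS setup. If both sides just block, they are local to each kernel and the FTP variant below is the safer bet.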
Instead of NFS, you can also use FTP - the exact same principle. FTP reads data from a pipe on one machine and writes it to a pipe on the other machine. This I know works; I have done it numerous times.
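For what it's worth, here is a minimal sketch of the FTP variant. The host name machine2, the scott/tiger login, and the pipe names are all made up - substitute your own. The ftp client's "put" reads the local FIFO like a regular file, and ftpd on the far side writes into the waiting FIFO.
On Machine 2:
# mkfifo /tmp/pipe_z /tmp/pipe_imp
# uncompress </tmp/pipe_z >/tmp/pipe_imp &
# imp parfile=x.par file=/tmp/pipe_imp
On Machine 1:
# mkfifo /tmp/pipe1 /tmp/pipe_z
# compress </tmp/pipe1 >/tmp/pipe_z &
# exp parfile=y.par file=/tmp/pipe1 &
(exp must run in the background so the ftp transfer can drain the pipes at the same time)
# ftp -n machine2 <<EOF
user scott tiger
binary
put /tmp/pipe_z /tmp/pipe_z
bye
EOF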
--
Billy