Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Re: Migrate 200 Million of rows
vslabs_at_onwe.co.za (Billy Verreynne) wrote in message news:<1a75df45.0307242338.3173ad9a_at_posting.google.com>...
> nuno.tavares_at_ams.com (Nuno) wrote in
>
> > I would like to migrate 200 million records. Furthermore, my new
> > table will have more fields than the old one. The new table will be
> > in a different database, so a database link will be used as well.
> <snipped>
>
> A slightly different take (and exactly what I'm doing at the moment
> - also moving several 50+ million row partitions from one db to another).
>
> The criteria:
> I want to move compressed (zipped) data across the network. Network
> lag is a big problem (it can take 24 hours to move 10 GB across).
> Worse, the other db is not even on the same segment. And there's
> _nothing_ I can do about the network issues (besides bitching which I
> do often ;-).
>
> Problem:
> I cannot unload on the source platform. Insufficient file system
> space for an export (or a custom CSV unload) - even a compressed one.
>
> Solution:
> NFS mount the destination platform file system on the source system.
> On the source system do a:
> # mkfifo xpipe
> # compress <xpipe >/dest_system_nfs_mount/part1.dmp.Z&
> # exp file=xpipe parfile=part1.par
>
> The trick is that instead of pushing the exported data across in
> uncompressed format, the compressor runs locally on the source system.
> It then writes the compressed export data to the NFS mount. Sure,
> there is some overhead in using NFS, but that is insignificant next
> to the fact that I'm moving compressed data across the (very slow)
> network from the source platform to the destination platform.
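The named-pipe trick above can be reproduced with generic stand-ins - gzip in place of compress and cat in place of exp (both assumptions; a real run needs the Oracle export utility and the NFS mount from the post):

```shell
# Minimal sketch of the mkfifo + background compressor technique.
# Assumed stand-ins: gzip for compress, cat for exp.
cd "$(mktemp -d)"
printf 'row1\nrow2\n' > table.dat   # stand-in for the table being exported
mkfifo xpipe                        # named pipe, as in the post
gzip < xpipe > part1.dmp.gz &       # compressor reads the pipe in the background
cat table.dat > xpipe               # the "exporter" writes into the pipe
wait                                # let the compressor drain and finish
gunzip -c part1.dmp.gz              # the original rows come back out
```

The writer never touches the disk with uncompressed data: everything it emits flows through the fifo straight into the compressor, which is the whole point when local file system space is the constraint.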
Personally, I've had very bad luck with NFS, and would suggest it not be used for moving quantities of data around. But then again, I'm on a fiber network, so maybe the NFS device just can't handle it.
>
> If NFS is not available, FTP can also be used instead. Have the
> compressor write into pipe ypipe. Start the export, also in the
> background. Now start up FTP, connect to the destination system, and
> FTP the contents of the compressed pipe:
> # ftp dest
> .. enter userid & password
> cd /pub/uploads
> type image
> put ypipe part1.dmp.Z
> ...
> bye
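The two-pipe FTP variant chains the stages through a second fifo. A sketch with generic stand-ins (printf for exp, gzip for compress, cat for ftp's `put` - all assumptions, since the real commands need an Oracle client and an FTP server):

```shell
# Two named pipes: exporter -> xpipe -> compressor -> ypipe -> transfer tool.
cd "$(mktemp -d)"
mkfifo xpipe ypipe
gzip < xpipe > ypipe &              # compressor bridges the two pipes
cat ypipe > part1.dmp.gz &          # stand-in for: ftp> put ypipe part1.dmp.Z
printf 'row1\nrow2\n' > xpipe       # stand-in for exp writing into xpipe
wait                                # both background stages drain and exit
gunzip -c part1.dmp.gz              # original rows recovered on "arrival"
```

Each stage blocks until its neighbour opens the shared fifo, so the three processes self-synchronise without any temp files on the source system.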
One problem I saw: manually FTPing from the NFS device to a K-class, the biod's just go nuts, and the file (an uncompressed export) got mangled. Going from an N-class to the NFS device always seemed to work fine (even exporting directly there)... until I discovered, too late, that it had partially truncated 2 db files when I cp'd a dozen files - a cold backup in the middle of a complicated applications upgrade. SO glad the upgrade succeeded and I didn't have to restore from that (the two files were LMT TEMP, which I know I don't have to back up, and an unused USERS tablespace - but how can I trust the rest?)!
>
>
> The only problem with using FTP is that you increase the risk of a
> process failing (2 pipes + 3 processes versus 1 pipe + 2 processes).
> If a process fails, a pipe will break and that will take the whole
> thing down. Kind of a NOLOGGING/UNRECOVERABLE op.
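That failure mode - one process dying breaks the pipe and the whole run - is at least detectable: the surviving writer takes a SIGPIPE and exits non-zero. A sketch with generic stand-ins (head for a transfer tool that dies early, dd for the exporter/compressor - both assumptions):

```shell
# Demonstrate that a broken pipe surfaces as a non-zero writer status.
cd "$(mktemp -d)"
mkfifo xpipe
head -c 10 xpipe > /dev/null &      # consumer that quits after 10 bytes
dd if=/dev/zero bs=64k count=64 > xpipe 2>/dev/null  # writer pushes 4 MB
status=$?                           # non-zero (often 141 = 128+SIGPIPE)
wait
echo "writer exit status: $status"
```

Checking each stage's exit status after the run is the only way to know the transfer is complete and must not simply be trusted - hence the NOLOGGING/UNRECOVERABLE comparison above: if anything breaks, you redo the whole thing.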
jg
-- @home.com is bogus. http://www.signonsandiego.com/news/uniontrib/tue/business/news_1b29wcom.html

Received on Tue Jul 29 2003 - 17:14:50 CDT