Oracle FAQ Your Portal to the Oracle Knowledge Grid
Newsgroups: comp.databases.oracle.server

Re: Import extent size problems

From: stephen O'D <stephen.odonnell_at_gmail.com>
Date: Fri, 24 Aug 2007 03:49:11 -0700
Message-ID: <1187952551.675592.240240@x40g2000prg.googlegroups.com>


On Aug 24, 10:30 am, hjr.pyth..._at_gmail.com wrote:
> On Aug 24, 7:01 pm, stephen O'D <stephen.odonn..._at_gmail.com> wrote:
>
>
>
> > > It doesn't "ignore" the request for 67MB. It seeks to honour it,
> > > although it won't do it in the precise way the storage clause is
> > > demanding it.
>
> > > That is, if you've got INITIAL 67M, your table will be created with 67
> > > 1MB extents (or 134 512KB extents etc). You will thus get your 67MB,
> > > but not quite in the way you were expecting it.
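For example (a sketch; the table name MYTAB and the 1MB uniform tablespace are assumptions, not from the thread), you can see how the INITIAL request was honoured by counting the extents actually allocated:

```sql
-- Sketch: assumes a table MYTAB created with STORAGE (INITIAL 67M) in a
-- locally managed tablespace using 1MB uniform extents.
SELECT segment_name,
       COUNT(*)             AS extent_count,
       SUM(bytes)/1024/1024 AS total_mb
FROM   user_extents
WHERE  segment_name = 'MYTAB'
GROUP  BY segment_name;
-- Expect roughly 67 extents of 1MB each, not a single 67MB extent.
```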
>
> > > Locally managed tablespaces have always tried to honour the
> > > intention of the INITIAL clause; it's the NEXT they completely ignore!
>
> > That's where my understanding fell down, as I thought it just
> > ignored the entire storage clause on the table, but it makes sense
> > that it doesn't.
>
> > > If you really and truly want to muck about with manipulating storage
> > > clauses (not a sensible option, but if you insist...) then do a Google
> > > Search for DataBee. Allows you to override storage clauses in a dump
> > > file without having to modify them in the source database.
>
> > It may not be sensible for production, but I am attempting to recreate
> > a DEV copy of a terabyte+ database that contains only the lookup
> > table data, plus all objects and stored procedures, so I need to get
> > the storage parameters down to a manageable size (over 600 tables in
> > the schema at 50MB each = too big!). I will check out DataBee and see
> > what it can do for me!
>
> > Thanks,
>
> > Stephen.
>
> DataBee is here: http://www.databee.com/
>
> Their free DDL extraction wizard is probably what you want... but that
> requires a rows=n export dump file. It's very good, though, and I've
> used it with a 10g dump file without incident.
>
> Someone else has already mentioned the INDEXFILE=<filename> ROWS=N
> options of import: that will get you a text file you can hack to
> pieces, which can act as a table & index creation script.
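A sketch of that invocation (the credentials and file names are placeholders):

```
# Classic imp: write the CREATE statements to a text file instead of
# importing anything. expdat.dmp and ddl.sql are placeholder names.
imp userid=system/manager file=expdat.dmp full=y indexfile=ddl.sql

# ddl.sql will contain the table DDL (as REM-commented lines) and the
# index DDL, ready to be edited to shrink the storage clauses.
```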
>
> You could also use dbms_metadata.get_ddl to achieve the same sort of
> thing, but it's going to be on a table-by-table basis. Something like
> select dbms_metadata.get_ddl('TABLE', table_name) from user_tables
> would be in order, though, to generate the DDL for all the objects you
> want in one hit - and with a bit of spooling and editing, the storage
> clauses can be sorted.
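One way to sketch that in SQL*Plus (the spool file name is a placeholder; set_transform_param is the documented way to have dbms_metadata omit storage clauses altogether):

```sql
-- SQL*Plus sketch: spool CREATE TABLE statements for the whole schema.
SET LONG 1000000 PAGESIZE 0 HEADING OFF FEEDBACK OFF
-- Ask dbms_metadata to leave storage clauses out of the generated DDL.
EXEC dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', FALSE);
SPOOL create_tables.sql
SELECT dbms_metadata.get_ddl('TABLE', table_name) FROM user_tables;
SPOOL OFF
```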
>
> The basic principle in either case is to pre-create your objects using
> new storage parameters and then run import with ignore=y just to
> populate the already-created objects.
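In command form, that two-step principle looks something like this (a sketch; credentials and file names are placeholders):

```
# 1. Pre-create the objects from your edited DDL script, with the new
#    (smaller) storage parameters.
# 2. Import the data on top, ignoring "object already exists" errors:
imp userid=scott/tiger file=expdat.dmp full=y ignore=y
```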
>
> Not sure if you mentioned an Oracle version, but if it's 10g or higher
> then you would probably do better with the Data Pump version of export/
> import, which has a TRANSFORM parameter that lets you override
> storage clauses (amongst other things) as you are doing the import.
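For a Data Pump dump, the sketch would be (segment_attributes:n is the documented transform that drops storage/tablespace attributes; the other names are placeholders):

```
# impdp can strip segment/storage attributes at import time.
# DPUMP_DIR must be an existing Oracle directory object.
impdp userid=scott/tiger dumpfile=expdat.dmp directory=DPUMP_DIR \
      transform=segment_attributes:n
```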

I found a tool called DDL Wizard, made by the same people who make DataBee I believe, and it allowed me to get all the objects created without storage clauses. Quite a useful tool.

I am importing into Oracle 10g, but exporting from 9i, so I am stuck with imp/exp for the time being.

Thanks for all the help on this thread - it's got me what I needed, finally!

Cheers,

Stephen.

Received on Fri Aug 24 2007 - 05:49:11 CDT
