Re: shrinking .dbf files?
ilaletin_at_usa.net (Igor Laletin) wrote in message news:<f9226414.0406030124.132576ba_at_posting.google.com>...
> "Howard J. Rogers" <hjr_at_dizwell.com> wrote in message news:<40be6e91$0$1585$afc38c87_at_news.optusnet.com.au>...
> > "OtisUsenet" <otis_usenet_at_yahoo.com> wrote in message
> > news:5606b639.0406021601.23e35af1_at_posting.google.com...
> > > Hello,
> > >
> > > Problem:
> > > While importing a DB dump I run out of space on the partition that
> > > holds various Oracle data files (*.dbf). It seems that some of those
> > > .dbf files, like the one that holds temporary tablespace, grew really
> > > big during import.
> > >
> > > My disk (a partition with Oracle files, really) is now 100% full, and
> > > I am hoping to find a way to shrink some of the big .dbf files.
> > >
> > > For example, here are some large files:
> > >
> > > 3.5G /u01/app/oracle/oradata/ogdev/temp01.dbf
> > > 1.6G /u01/app/oracle/oradata/ogdev/users01.dbf
> > > 441M /u01/app/oracle/oradata/ogdev/system01.dbf
> > > 376M /u01/app/oracle/oradata/ogdev/undotbs01.dbf
> > > 139M /u01/app/oracle/oradata/ogdev/example01.dbf
> > >
> > > Is there a way for me to shrink these?
> > >
> > > I can't really drop tablespaces that use these files, since without
> > > them my DB would be useless, but I would like to at least shrink these
> > > files a bit for now, so my partition is not 100% full.
The Tablespace Map option in Enterprise Manager is useful for figuring out which objects sit at the top of each file, so you can move those objects to a datafile on another filesystem and then shrink the files in this tablespace.
You might also try a couple of shutdown/startups and see whether that shrinks the temp tablespace (depending on how it is defined).
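If it comes to that, the moves look something like this (MY_BIG_TABLE, MY_BIG_IDX and USERS2 are made-up names; use whatever the map actually shows, and a tablespace whose datafiles live on a filesystem with room):

  -- Relocate the segment that is pinning the end of users01.dbf
  ALTER TABLE my_big_table MOVE TABLESPACE users2;
  -- Moving a table leaves its indexes unusable, so rebuild them,
  -- optionally into the other tablespace as well
  ALTER INDEX my_big_idx REBUILD TABLESPACE users2;
  -- Once the tail of the file is empty it can be resized down
  -- (500M is only an example; pick a size that clears the highest used block)
  ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/ogdev/users01.dbf' RESIZE 500M;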
> > >
> > > Thanks!
> > > P.S.
> > > I'm using Oracle 9.2.0.1 under Linux (RedHat 9.0)
> >
> >
> > Alter database datafile '/full/path/and/filename' resize 300M;
> >
> > Or whatever size you deem appropriate.
> >
> > It may not work, though, because if it bumps into data on the way down, it
> > can't just lop that data out of the database (obviously!). So you can only
> > resize downwards if the end of the data file is full of fresh air. If it
> > isn't, there's not a lot you can do quickly or easily to fix the problem.
> >
> > Regards
> > HJR
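To see how much fresh air there actually is before trying a resize, a rough query like this helps (it assumes the default 8K block size, so check db_block_size, and USERS is just an example tablespace name):

  -- Highest allocated block in each file of the tablespace; a file
  -- cannot be resized below this point.
  SELECT file_id,
         MAX(block_id + blocks - 1) * 8 / 1024 AS approx_min_mb
    FROM dba_extents
   WHERE tablespace_name = 'USERS'
   GROUP BY file_id;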
>
> ... and in any case shrink your temporary tablespace. It's by far the
> largest one. I'll take a (safe) guess that it's a default temporary
> tablespace with tempfiles. You need to create another (small) temp ts,
> make it a default one, drop the original one and re-create it with a
> smaller size.
>
> TEMP grew because your tempfile is set to autoextend. Well, now you
> know your temp space requirements :) I'm not sure why it grew during
> the import. Sure, you may build indexes as part of the import, but the
> other tablespaces don't look that big. They're unlikely to contain an
> index which requires 3.5G to be built, unless the list of big
> tablespaces is incomplete.
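If it is indeed a default temporary tablespace built on tempfiles, the swap Igor describes goes roughly like this on 9i (the temp2 name, paths and sizes are only placeholders):

  CREATE TEMPORARY TABLESPACE temp2
    TEMPFILE '/u01/app/oracle/oradata/ogdev/temp02.dbf' SIZE 200M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
  ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
  -- The old tablespace and its 3.5G tempfile can now go
  -- (sessions still using it will have to finish first)
  DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
  -- Re-create TEMP at a sensible size and switch back if you want the old name
  CREATE TEMPORARY TABLESPACE temp
    TEMPFILE '/u01/app/oracle/oradata/ogdev/temp01.dbf' SIZE 500M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
  ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
  DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;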
OtisUsenet: Did you use commit=N on the import? Did you use compress=Y on the export? Try importing without indexes.
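If the indexes do turn out to be the problem, a rough outline (the username, password, dump file name and log names below are made up):

  # Load the data only; commit=y commits after each buffer so undo
  # doesn't have to hold a whole table's worth of inserts
  imp otis/secret file=ogdev.dmp full=y ignore=y commit=y indexes=n log=imp_data.log
  # A second pass with indexfile= imports nothing; it just writes the
  # CREATE INDEX statements to a script you can edit and run afterwards
  imp otis/secret file=ogdev.dmp full=y indexfile=create_indexes.sql log=imp_ddl.log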
>
> Anyway, what's wrong with 100% full? Or do you use a filesystem which
> needs some free space to be resized?
In older unixes, 100% was a soft limit which the filesystem could occasionally exceed by dipping into space not normally handed out. My personal record was 164% (as reported by bdf) on an early SunOS. I would be interested to know whether this still applies in the Linux world.
jg
--
@home.com is bogus.
http://www.signonsandiego.com/uniontrib/20040602/news_1b2adelphia.html