Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: Use TOAD rebuild or perform table truncate & import?
"Glen A. Sromquist" <stromqgl_at_alpac.ca> wrote in message
news:lNxn6.1427$go.76039_at_news0.telusplanet.net...
> I have 4 table(s) in a database that are showing a high number of chained
> rows, should I :
>
You mean that chain_cnt in dba_tables is high? That might be migration or chaining. If it's chaining, then it suggests your block size was too small (so the cure is to recreate your entire database ;-) ). If it's migration, then the cure is to somehow persuade Oracle to re-insert those records, since migration only ever occurs as a result of updates to records, not inserts.
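A quick way to check is along these lines (the table name MY_TABLE is a placeholder; note that it is ANALYZE, not DBMS_STATS, that populates CHAIN_CNT):

```sql
-- Refresh chaining statistics, then inspect CHAIN_CNT
ANALYZE TABLE my_table COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt,
       ROUND(100 * chain_cnt / NULLIF(num_rows, 0), 1) AS pct_chained
FROM   user_tables
WHERE  table_name = 'MY_TABLE';
```

CHAIN_CNT lumps chained and migrated rows together, which is why you then have to reason about which of the two you are actually looking at.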
You don't say what version of Oracle you are using, but in 8i, you could simply issue an 'alter table blah move tablespace whatever' command, and that causes the table to be re-created (and hence records to be re-inserted). All indexes are invalidated, which is a bit of a bummer, so they'll need rebuilding. The destination tablespace, incidentally, can be the same as the source.
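As a sketch (table, tablespace, and index names here are placeholders):

```sql
-- 8i onwards: re-create the table in place; the destination tablespace
-- can be the same as the source
ALTER TABLE my_table MOVE TABLESPACE users;

-- The move changes every rowid, so all indexes on the table go UNUSABLE
SELECT index_name FROM user_indexes
WHERE  table_name = 'MY_TABLE' AND status = 'UNUSABLE';

-- Rebuild each one
ALTER INDEX my_table_pk REBUILD;
```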
But a move command has a serious flaw: the entire table is re-created, even though not all records need re-inserting. A more subtle approach is to locate the rowids of the migrated rows, select those records out into a temporary storage area via their rowids, and then re-insert them by selecting back from the temporary storage area. Oracle ships the machinery for just that (the utlchain.sql script plus ANALYZE ... LIST CHAINED ROWS). I suspect TOAD is doing very much the same thing, if not actually tapping into the supplied functionality.
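The classic recipe looks roughly like this (my_table and my_table_migrated are placeholder names; the CHAINED_ROWS table is normally created by running $ORACLE_HOME/rdbms/admin/utlchain.sql first):

```sql
-- List the rowids of chained/migrated rows into CHAINED_ROWS
ANALYZE TABLE my_table LIST CHAINED ROWS INTO chained_rows;

-- Copy just those rows to a holding table
CREATE TABLE my_table_migrated AS
  SELECT * FROM my_table
  WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'MY_TABLE');

-- Delete the originals, then re-insert them afresh
DELETE FROM my_table
  WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'MY_TABLE');

INSERT INTO my_table SELECT * FROM my_table_migrated;
COMMIT;
```

Only the migrated rows are touched, which is the whole point: the rest of the table is left alone.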
As such, there are no real pitfalls, except that deleting a record and freshly inserting it will cause the corresponding leaf entries in the index to be marked for deletion, and then potentially cause block splits at re-insertion time. So your indexes may be less efficient afterwards than they were before, and you should consider rebuilding them too.
There are various formulae out there designed to work out PCTFREE. All of them are somewhat dodgy, so I ignore them and look at my table definitions instead. If there are few NOT NULLs, PCTFREE should be higher. If there are lots of VARCHAR2 and NUMBER fields, PCTFREE should be higher. If a significant proportion of the record is prone to updates, PCTFREE should be higher. Higher than what? The default of 10%.

A table subject to updates, with few NOT NULLs, and comprised of a lot of VARCHAR2 and NUMBER fields, would be a candidate for 25% or so. Particularly if there are few full table scans on such a table, I'd err on the side of higher PCTFREE and not worry about the waste of space and low data density in the blocks: if access is largely via indexes, then it is more important to ensure that the indexes point to the right place first time. Tables subject to lots of full table scans need to be handled with more care: where the indexes point to is less critical (since they're not being used!), and you want to retrieve as much data in as few block reads as possible.
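Setting it is simple enough (placeholder names again; one caveat worth remembering is that a plain ALTER only affects blocks formatted from now on, so existing data keeps its old setting until the table is rebuilt):

```sql
-- Future inserts only: existing blocks keep their old PCTFREE
ALTER TABLE my_table PCTFREE 25;

-- To apply the new PCTFREE to the existing data as well,
-- rebuild the table (indexes will then need rebuilding too)
ALTER TABLE my_table MOVE PCTFREE 25;
```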
Regards
HJR
> -open the database in mount mode, export the table(s), truncate the table &
> import the data back in
> or
> -open the database in mount mode, use the TOAD rebuild table utility to
> build the script, and run the script?
>
> The latter seems like a lot simpler/faster process, but are there any
> potential pitfalls?
>
> What is the best method to determine the optimal PCTFREE for these tables
> to prevent this in the future?
>
>
> thanks in advance
>
>
Received on Fri Mar 02 2001 - 17:18:34 CST