
Re: sqllder hangs at 99-100% CPU on select of fet$

From: Howard J. Rogers <howardjr2000_at_yahoo.com.au>
Date: Wed, 10 Sep 2003 11:04:08 +1000
Message-ID: <3f5e78f5$0$14559$afc38c87@news.optusnet.com.au>

"Don Seiler" <don_at_seiler.us> wrote in message news:pan.2003.09.10.00.49.05.905639_at_seiler.us...
> In-line responses throughout:
>
> On Wed, 10 Sep 2003 09:20:45 +1000, Howard J. Rogers wrote:
> > First, try not to get too wound-up about Sybrand's ad hominem attacks on
> > you.
>
> Thanks. Like I said to him, I'm not a certified anything and would
> welcome some formal Oracle training. I can be unnecessarily harsh
> sometimes as well. :p
>
>
> > Second, uet$ and fet$ are two tables in the data dictionary that are
> > used to search for Used ExTents and Free ExTents. When you do a mass
> > load like this, your table runs out of space, and has to extend.
> > Therefore, it has to run off to fet$ to find some free space that it
> > can use to extend into.
>
> I realize that this is just a small bandage, but would adding a datafile
> allow things to continue? Again, I know the problem is serious and needs
> to be addressed ASAP, but some operations need to run ASAP, and if I can
> add a block of contiguous free space in the form of new datafiles then I
> would like to do that now and start planning for the remedies you suggest
> below.

It won't help, probably (though I suppose you could suck it and see). The issue is not whether a piece of free space can be found (your temporary fix would certainly take care of that), but how long it takes to search for one. That's determined by the number of rows in the fet$ table it has to wade through, and adding a new datafile just means more rows; it doesn't get rid of any existing ones.
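
For what it's worth, here's a rough way to gauge how bad that search is going to be. This is only a sketch (and assumes you can query the DBA views): DBA_FREE_SPACE sits over fet$, so the number of rows per tablespace there is roughly what the space search has to wade through.

REM Sketch only: count free-space chunks per tablespace. A tablespace
REM with thousands of small chunks is exactly the fet$ problem I mean.
SELECT tablespace_name,
       COUNT(*)                AS free_chunks,
       SUM(bytes)/1024/1024    AS free_mb,
       MAX(bytes)/1024/1024    AS biggest_chunk_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY 2 DESC;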

>
> > Now: if this were 8i or 9i, the use of locally managed tablespace would
> > eliminate the visits to fet$ at a stroke (their main claim to fame). But
> > you're on 7, so that's not an option.
>
> Like I said, Oracle 9i is our chosen path for Q4. Obviously I'd like to
> also know of any fixes I should try and do to the data while doing the
> upgrade, so I don't just bring old baggage along for the ride. Or will 9i
> automagically fix a lot of these issues?

No, it won't fix things automagically! If you migrate your existing database to 9i, all the baggage simply gets converted into 9i baggage. There *is* a procedure to convert dictionary-managed tablespaces into locally managed ones (which you'd have to invoke yourself, manually), but it doesn't do a spectacularly good job of it, and at the end all you have is the current mess, only now it happens to be locally managed. That will certainly mean no more extended lookups in fet$, but it still means you'll have a mess.
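
(For the record, and only as a sketch: the procedure I mean is DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL in 8i/9i, run manually after the upgrade, something like the below. The tablespace name is just an example.)

REM 8i/9i only -- not available on 7. Run as a suitably privileged
REM user after the upgrade. It converts the named dictionary-managed
REM tablespace to locally managed; it does NOT tidy up the existing
REM extent sizes, which is the point I'm making above.
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');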

>

[snip]
>
> Again I wonder if I can provide a small crutch in the form of throwing
> more datafiles at the tablespace.

See above. Try it and see by all means, but if fet$ is really the issue, that isn't going to help much, I think.

>

[snip]
>
> Yes, the main problem no matter what is finding actual disk space to hold
> another copy of the data in that tablespace. But your solution is
> definitely the route to go with re-setting the initial and next extents.
> Is there a way to calculate the optimal extent size?

No such thing, really. The way I would do it is: start with the assumption that on a 2K block system, 121 extents is ideal. On a 4K block database, 249 extents. On an 8K block system, 505 is the magic number (there are technical reasons for those numbers being good, though it won't kill you to exceed them moderately).

Now measure the complete existing size of your table by looking in dba_extents where segment_name='BLAH', and adding up the BYTES for all the extents listed.

Divide the total size of the table (not forgetting to add in some extra to allow for a reasonable amount of future growth) by the ideal number of extents, and you have your ideal extent size. Sort of.
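
As a sketch (BLAH and the 20% growth allowance are just placeholders, and 505 is the 8K-block figure from above), the measuring and the dividing can be done in one statement:

REM Sketch only: total the table's current extents and suggest an
REM extent size that would keep it to roughly 505 extents, with 20%
REM headroom for growth. Add an OWNER predicate if the segment name
REM isn't unique.
SELECT SUM(bytes)                                        AS current_bytes,
       COUNT(*)                                          AS current_extents,
       ROUND((SUM(bytes) * 1.2) / 505 / 1024 / 1024, 1)  AS suggested_extent_mb
FROM   dba_extents
WHERE  segment_name = 'BLAH';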

On an 8K block system, with a 500MB table, that would mean (worst case) extent sizes of about 1M each. I'd probably go 8M myself and have done with it. If the table was 16GB, worst case would be 32MB extents... I'd probably go 64MB.

It wouldn't hurt you to stick to the extent sizes that Oracle 8i and 9i use when 'autoallocating' locally managed tablespaces: 64K, 1M, 8M, 64M and 256M. Just pick from that list the size that keeps you under the 'ideal' extent counts I mentioned above. If everything inside a dictionary-managed tablespace comes in one or other of those five sizes, you won't totally eliminate the possibility of fragmentation, but you will certainly have minimised it.
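
And just to be concrete about applying whichever size you pick (a sketch with made-up names, assuming 8M was the choice): when you rebuild the table, set INITIAL and NEXT to that size and PCTINCREASE to 0 so every extent comes out identical.

REM Sketch: rebuild with uniform 8M extents (the table and tablespace
REM names are placeholders). PCTINCREASE 0 keeps every extent the
REM same size, which is what keeps fragmentation down in a
REM dictionary-managed tablespace.
CREATE TABLE blah_new
  TABLESPACE data_ts
  STORAGE (INITIAL 8M NEXT 8M PCTINCREASE 0)
  AS SELECT * FROM blah;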

Regards
HJR