Re: Index compression vs. table compression
Jonathan Lewis wrote:
> Notes inline
>
> Regards
>
> Jonathan Lewis
>
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html
> The Co-operative Oracle Users' FAQ
>
> http://www.jlcomp.demon.co.uk/seminar.html
> Public Appearances - schedule updated Dec 23rd 2004
>
> "Howard J. Rogers" <hjr_at_dizwell.com> wrote in message
> news:41d49207$0$3805$afc38c87_at_news.optusnet.com.au...
>
>
>> Therefore, we want a mechanism that will say "if you are read by a FTS, stay at the cold end of the LRU list, even though you are actually the most recently used block"... and that is precisely what the *NOCACHE* clause does. But there is then a further problem: how is the optimiser likely to read small, useful, lookup tables?.. er, via a FTS, probably, if they are genuinely small.
Why?
A small table is always likely to be read via a FTS under the CBO, even for a single-key lookup...
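For anyone who wants to test the "small table, single-key lookup" point for themselves, a minimal SQL*Plus sketch follows. SMALL_LOOKUP is a made-up table, and the plan the CBO actually picks will depend on version, block size and statistics:

  -- A hypothetical lookup table: a handful of rows, well under one multiblock read
  CREATE TABLE small_lookup (
    id    NUMBER PRIMARY KEY,
    descr VARCHAR2(30)
  );

  INSERT INTO small_lookup
    SELECT ROWNUM, 'Code ' || ROWNUM FROM dual CONNECT BY LEVEL <= 20;
  COMMIT;

  EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SMALL_LOOKUP')

  -- Single-key lookup: see whether the CBO reports TABLE ACCESS FULL
  -- or an index access on the primary key
  EXPLAIN PLAN FOR
    SELECT descr FROM small_lookup WHERE id = 7;

  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);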
>
>> And therefore a further tool is needed: a mechanism which will distinguish between nasty, huge FTSes of bulky tables, and small, OK, FTSes of useful lookup tables.
Forget the "genuinely small". Deal with the actual issue being discussed here. How do you distinguish between benign and bad FTSes?
>> And that is what the *CACHE* clause does: if you specify it as an attribute of a small lookup table, its blocks will indeed be read into the hot half of the LRU list, *even though they were read by a FTS*.
Whatever. Does this change the conclusions to be drawn from anything I've written? If not, say so. If yes, explain why.
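For reference, the clause under discussion is just a table attribute; a minimal sketch, with made-up table names:

  -- Ask for the blocks of a small, busy lookup table to be treated as if
  -- they were at the hot end of the LRU, even when they arrive via a FTS
  ALTER TABLE small_lookup CACHE;

  -- Default behaviour for a big table: blocks read by a FTS stay at the
  -- cold end and age out quickly
  ALTER TABLE big_history NOCACHE;

The same keyword can also be given in the CREATE TABLE statement.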
HJR

Received on Fri Dec 31 2004 - 10:11:59 CST