Oracle FAQ | Your Portal to the Oracle Knowledge Grid
c.d.o.server: Re: Join Two Extremely Large Tables
Note in-line
--
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk

The educated person is not the person who can answer the questions, but the person who can question the answers -- T. Schick Jr

One-day tutorials: http://www.jlcomp.demon.co.uk/tutorial.html
    UK               April 22nd
    USA (FL)         May 2nd
    Denmark          May 21-23rd
    Sweden           June
    Finland          September
    Norway           September

Three-day seminar: see http://www.jlcomp.demon.co.uk/seminar.html
    UK (Manchester)  May x 2
    Estonia          June (provisional)
    Australia        June (provisional)
    USA (CA, TX)     August

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html

"Bass Chorng" <bchorng_at_yahoo.com> wrote in message news:bd9a9a76.0304171100.202a75ee_at_posting.google.com...

Received on Thu Apr 17 2003 - 14:19:21 CDT
> >
> > In real life, your index is more likely to be
> > height 4.
>
> I validated the index in QA a few weeks back,
> the height was 7.
That must have taken a bit of time! Roughly how big is your key size? Does your process delete most rows after reporting them and then add more rows through an increasing sequence?
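For readers following along: the height figure quoted above would typically come from validating the index. A minimal sketch of that check, using a hypothetical index name (`big_table_ix` is not from the thread):

```sql
-- VALIDATE STRUCTURE locks the underlying table while it runs,
-- so do this on a QA copy, as the original poster did.
ANALYZE INDEX big_table_ix VALIDATE STRUCTURE;

-- INDEX_STATS holds a single row describing the index validated
-- most recently in this session. HEIGHT here is BLEVEL + 1.
SELECT height, lf_rows, del_lf_rows, used_space, btree_space
FROM   index_stats;
```

A high ratio of DEL_LF_ROWS to LF_ROWS would be consistent with the delete-heavy, never-rebuilt pattern described below.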
> The data was refreshed at least
> 6 months ago. We use 8K block size. One problem
> is this table gets deleted and the index has
> never been rebuilt. We are running 8.1.7. Can't
> do online rebuild.
You could - but presumably you don't want to hit the catastrophic bugs on an index of this size. Is the index the type that can benefit from a COALESCE? (It's too late now for this to reduce the height, but it could stop it from getting worse.)
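The coalesce option is a one-liner (again with a hypothetical index name):

```sql
-- COALESCE merges adjacent, sparsely populated leaf blocks in place.
-- Unlike a rebuild it needs no second copy of the index, but it
-- cannot reduce the height of the B-tree - only slow its growth.
ALTER INDEX big_table_ix COALESCE;
```

Whether it helps depends on the delete pattern: it pays off when deletes leave many part-empty leaf blocks adjacent to one another, which is why the question about deletes through an increasing sequence matters.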
>
> We do need the 15 B rows processed in one shot.
> I figure sorting the output file would probably
> be more costly. So my only option is nested loop
> then ? (Thats pretty much what we are doing now,
> except it is artificial. We do not join, we use
> foreach A, do B.. ).
>
Sounds like the nested loop is it. You should get a noticeable improvement by doing the join (hinted very carefully to do exactly what you want) and then using array fetches of a few dozen to a few hundred rows at a time. It's messier to code at the boundary conditions than one row at a time - but significantly more efficient.
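A minimal PL/SQL sketch of the array-fetch approach, with hypothetical table, column, and hint names (nothing below is from the thread; the hints in particular would need to be worked out against the real plan). It uses `BULK COLLECT ... LIMIT` into scalar collections, which is available on 8.1.7:

```sql
DECLARE
  TYPE t_ids  IS TABLE OF NUMBER;
  TYPE t_vals IS TABLE OF VARCHAR2(100);
  l_ids  t_ids;
  l_vals t_vals;

  -- Hinted join standing in for the hand-rolled "foreach A, do B" loop.
  CURSOR c IS
    SELECT /*+ ordered use_nl(b) */ a.id, b.val
    FROM   big_a a, big_b b
    WHERE  b.id = a.id;
BEGIN
  OPEN c;
  LOOP
    -- Array fetch: a couple of hundred rows per fetch instead of one.
    FETCH c BULK COLLECT INTO l_ids, l_vals LIMIT 200;

    FOR i IN 1 .. l_ids.COUNT LOOP
      NULL;  -- process row i of the current batch here
    END LOOP;

    -- Boundary condition: test %NOTFOUND only after processing the
    -- batch, so the final (possibly partial) batch is not dropped.
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END;
/
```

The EXIT placement is exactly the boundary-condition messiness mentioned above: `%NOTFOUND` goes true on the fetch that returns fewer than LIMIT rows, so testing it before processing would silently discard the last batch.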