Join Two Extremely Large Tables (comp.databases.oracle.server)
I have 2 tables A and B. Table A contains 86 million rows, about 4.5 GB
in size. Table B contains 1.2 billion rows, about 25 GB in size.
For each row in table A, I need to query table B, which is indexed, returning on average 15 rows, and process them.
If I use a cursor FOR loop over table A, the result will probably be at least 3-4 billion I/Os. Since in reality my index depth is probably far more than 10 levels, I would expect total I/Os to be near 20 billion.
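The row-by-row approach I'm describing would look roughly like this (a sketch only; `table_a`, `table_b`, `key_col`, and the other column names are invented for illustration):

```sql
-- Sketch of the cursor FOR loop approach (all names hypothetical).
-- Each iteration of the outer loop drives an index range scan on B,
-- fetching ~15 matching rows -- hence the enormous total I/O count.
BEGIN
  FOR a_rec IN (SELECT key_col FROM table_a) LOOP
    FOR b_rec IN (SELECT b.col1, b.col2
                    FROM table_b b
                   WHERE b.key_col = a_rec.key_col) LOOP
      NULL;  -- per-row processing would go here
    END LOOP;
  END LOOP;
END;
/
```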
Basically, this is an enormous index range scan.
The number of I/Os is not the problem, but the time they take is: this process would run for 6-7 hours.
Would it be feasible for me to hash join these two giant tables and not use the index at all? Would it be faster that way? And what about my temp segment? I would either have to give the session a hash_area_size of 10 GB (not even sure Oracle accepts a value that large) or prepare a temp segment of that size, or my process would fail.
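The set-based alternative I have in mind would be a single statement like the following (a sketch; table and column names are hypothetical, and the memory figure is just a placeholder — hash_area_size is set in bytes and platform limits may not allow values anywhere near 10 GB, in which case the hash join spills its partitions to TEMP):

```sql
-- Sketch: one hash join instead of ~86 million index probes
-- (names hypothetical; hints shown only to force the plan under test).
ALTER SESSION SET hash_area_size = 1073741824;  -- 1 GB work area; overflow goes to TEMP

SELECT /*+ FULL(b) USE_HASH(b) */
       a.key_col, b.col1, b.col2
  FROM table_a a
  JOIN table_b b
    ON b.key_col = a.key_col;
```

On 9i, letting automatic PGA management size the work area via pga_aggregate_target (with workarea_size_policy = AUTO) may be an alternative to setting hash_area_size manually, though whether it helps here would need testing.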
Your advice is appreciated. Received on Wed Apr 16 2003 - 22:46:44 CDT