Query Tuning (Oracle 9.2) [message #311579]
Fri, 04 April 2008 19:39
chintu00 (Member) | Messages: 91 | Registered: February 2007 | Location: NJ, US
I am trying to tune the following query.
item_loc and od_item_loc_db2 are both hash partitioned on loc.
If someone could suggest something to make it better, I would really appreciate it.
SELECT /*+ parallel(l, 4) */
l.item,
l.loc,
l.status,
primary_supp,
replen_type_cd,
replen_sub_type_cd,
l.last_update_id,
l.last_update_datetime,
qty_on_hand,
qty_on_ord
FROM item_loc l,
item_master m,
od_item_loc_db2 db2
WHERE l.item = m.item
AND db2.sku(+) = l.item
and db2.loc(+) = l.loc
--AND l.loc = v_store
and l.status in ('A', 'C')
and m.status in ('A', 'C');
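For reference, one way to capture a formatted plan for this statement is sketched below (a minimal sketch only: it assumes a PLAN_TABLE created from utlxplan.sql, and DBMS_XPLAN, which ships with 9.2; the commented-out v_store filter is omitted):

EXPLAIN PLAN FOR
SELECT /*+ parallel(l, 4) */
       l.item, l.loc, l.status, primary_supp, replen_type_cd,
       replen_sub_type_cd, l.last_update_id, l.last_update_datetime,
       qty_on_hand, qty_on_ord
  FROM item_loc l, item_master m, od_item_loc_db2 db2
 WHERE l.item = m.item
   AND db2.sku(+) = l.item
   AND db2.loc(+) = l.loc
   AND l.status IN ('A', 'C')
   AND m.status IN ('A', 'C');

-- Display the formatted plan from PLAN_TABLE:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);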

Re: Query Tuning [message #311588 is a reply to message #311580]
Fri, 04 April 2008 21:18
rleishman (Senior Member) | Messages: 3728 | Registered: October 2005 | Location: Melbourne, Australia
And when you are pasting the Explain Plan back here, you can also tell us:
- How many rows are in each table?
- How many rows match l.status in ('A', 'C')?
- How many rows match m.status in ('A', 'C')?
- How many rows does the query return?
(A sketch of one cheap way to gather these numbers follows below.)
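On tables this size a COUNT(*) is itself expensive, so one option, sketched here under the assumption that statistics are reasonably fresh and ALL_TABLES is visible to the user, is to take the total row counts from the dictionary and only run real counts for the filtered questions:

-- Optimizer's idea of total rows (cheap, but only as fresh as the statistics):
SELECT table_name, num_rows, last_analyzed
  FROM all_tables
 WHERE table_name IN ('ITEM_LOC', 'ITEM_MASTER', 'OD_ITEM_LOC_DB2');

-- Actual counts for the filter predicates (expensive on 100M+ row tables):
SELECT COUNT(*) FROM item_loc    WHERE status IN ('A', 'C');
SELECT COUNT(*) FROM item_master WHERE status IN ('A', 'C');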
Ross Leishman

Re: Query Tuning [message #311598 is a reply to message #311588]
Sat, 05 April 2008 00:08
chintu00 (Member) | Messages: 91 | Registered: February 2007 | Location: NJ, US
I have not seen such a bad explain plan before; every operation is a full table scan.
Plan:
SELECT STATEMENT CHOOSE  Cost: 8,280  Bytes: 9,486,815  Cardinality: 87,035
  5 HASH JOIN OUTER  Cost: 8,280  Bytes: 9,486,815  Cardinality: 87,035
    3 HASH JOIN  Cost: 8,073  Bytes: 4,003,610  Cardinality: 87,035
      1 TABLE ACCESS FULL RMS10.ITEM_LOC  Cost: 1,595  Bytes: 3,133,260  Cardinality: 87,035  Partition #: 3  Partitions accessed #1
      2 TABLE ACCESS FULL RMS10.ITEM_MASTER  Cost: 5,832  Bytes: 5,350,760  Cardinality: 535,076
    4 TABLE ACCESS FULL RMS10.OD_ITEM_LOC_DB2  Cost: 2  Bytes: 63  Cardinality: 1  Partition #: 5  Partitions accessed #1
[formatted by moderator]

Re: Query Tuning [message #311646 is a reply to message #311598]
Sat, 05 April 2008 07:09
rleishman (Senior Member) | Messages: 3728 | Registered: October 2005 | Location: Melbourne, Australia
I cannot read this plan as you have not formatted it. It also contains line-breaks in the wrong places. Could you please post the plan properly indented and enclosed in [code] .. [/code] tags.
Have you given any thought to the other questions asked above that will help diagnose your problem?
Ross Leishman

Re: Query Tuning [message #311665 is a reply to message #311646]
Sat, 05 April 2008 10:48
chintu00 (Member) | Messages: 91 | Registered: February 2007 | Location: NJ, US
Please find the explain plan as an attachment.
I will get you the other information too.
Thanks, I appreciate your help.

Re: Query Tuning [message #311699 is a reply to message #311588]
Sat, 05 April 2008 17:39
chintu00 (Member) | Messages: 91 | Registered: February 2007 | Location: NJ, US
Hey Ross,
Please find all the information you were looking for:
select count(*) from item_loc;                                 -- 139537131
select count(*) from item_master;                              -- 1099625
select count(*) from od_item_loc_db2;                          -- 131283868
select count(*) from item_loc    where status in ('A', 'C');   -- 139536709
select count(*) from item_master where status in ('A', 'C');   -- 1092082
I am attaching a screen print of the explain plan, as the formatting gets messed up when I paste it into this message window.
Thanks

Re: Query Tuning [message #311711 is a reply to message #311702]
Sat, 05 April 2008 20:27
rleishman (Senior Member) | Messages: 3728 | Registered: October 2005 | Location: Melbourne, Australia
OK. So we have:
- One big table: item_master - at 1M rows
- Two really big tables: item_loc & od_item_loc_db2 - at 140M rows
- Two filter predicates that filter only a tiny percentage of the rows - most rows are accepted for the join.
When you join very large tables like this, you cannot avoid using vast quantities of TEMP space. A HASH join is the best join method, but the hash table cannot fit in memory, so it spills to disk. This extra I/O makes the query take much, much longer.
The only way to avoid the hash join spilling to disk (other than increasing your memory allocation, which is not a scalable approach) is to HASH PARTITION the tables on the join key. This gives smaller join sets that the SQL engine can work through one partition at a time.
If each table was hash partitioned (or sub-partitioned) on l.item (or m.item, or db2.sku) such that each table contained the SAME number of hash partitions (or sub-partitions) and no partition exceeded your memory capabilities (you would have to experiment, but 1M rows might be a good starting point) then Oracle could do the join one partition at a time. You would almost certainly have to remove the PARALLEL hint - parallelism will use extra memory.
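A minimal sketch of what equi-partitioning on the join key could look like follows; the _new table names, the partition count of 128, and the abbreviated column lists are illustrative assumptions, not the actual RMS definitions:

-- Both tables hash partitioned on the join key with the SAME number of
-- partitions, so Oracle can use a full partition-wise join.
CREATE TABLE item_loc_new (
  item   VARCHAR2(25) NOT NULL,
  loc    NUMBER(10)   NOT NULL,
  status VARCHAR2(1)
  -- remaining ITEM_LOC columns omitted for brevity
)
PARTITION BY HASH (item) PARTITIONS 128;

CREATE TABLE od_item_loc_db2_new (
  sku VARCHAR2(25) NOT NULL,
  loc NUMBER(10)   NOT NULL
  -- remaining OD_ITEM_LOC_DB2 columns omitted for brevity
)
PARTITION BY HASH (sku) PARTITIONS 128;

With matching hash partition counts on both sides, each pair of partitions can be joined independently, so no single hash table has to cover the whole 140M rows.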
Search the documentation for partition-wise joins and read this article
If you are not prepared to do that, you already have the most efficient plan. You can try to increase your PGA, but I don't think it will give you very good results.
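If you do want to experiment with the PGA first, something along these lines shows whether the automatic workarea target is being over-allocated and whether workareas are spilling; the 2048M figure is purely an example:

-- How much automatic PGA is configured and whether Oracle has had to exceed it
SELECT name, value
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'over allocation count');

-- One-pass / multi-pass executions indicate workareas spilling to TEMP
SELECT low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
  FROM v$sql_workarea_histogram
 WHERE total_executions > 0;

ALTER SYSTEM SET pga_aggregate_target = 2048M;   -- size is an example only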
Ross Leishman

Re: Query Tuning [message #311760 is a reply to message #311579]
Sun, 06 April 2008 11:35
coleing (Senior Member) | Messages: 213 | Registered: February 2008
Ross is right, although you can mitigate the effect of the disk overspill by rewriting this query.
1. Create a temporary table from the join between ITEM_MASTER and ITEM_LOC, since you have these as a standard (inner) join. This will reduce the result set to at most the number of rows in ITEM_MASTER (~1M).
2. Analyze this table, then outer join it to OD_ITEM_LOC_DB2 to get your final result.
3. Get a noparallel timing, then start adding parallel hints to see whether you get any performance increase.
A rough sketch of this staged approach is shown below.
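This is a sketch only: the tmp_item_loc name is invented, the unqualified columns from the original query are assumed to come from item_loc, and qty_on_hand/qty_on_ord are assumed to come from od_item_loc_db2; adjust to the real definitions.

-- Step 1: materialize the inner join (NOLOGGING choice is illustrative)
CREATE TABLE tmp_item_loc NOLOGGING AS
SELECT l.item, l.loc, l.status,
       l.primary_supp, l.replen_type_cd, l.replen_sub_type_cd,
       l.last_update_id, l.last_update_datetime
  FROM item_loc l, item_master m
 WHERE l.item   = m.item
   AND l.status IN ('A', 'C')
   AND m.status IN ('A', 'C');

-- Step 2: gather statistics so the second join is costed on real numbers
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TMP_ITEM_LOC');

-- Step 3: outer join the materialized result to the DB2 snapshot
SELECT t.item, t.loc, t.status,
       t.primary_supp, t.replen_type_cd, t.replen_sub_type_cd,
       t.last_update_id, t.last_update_datetime,
       db2.qty_on_hand, db2.qty_on_ord
  FROM tmp_item_loc t, od_item_loc_db2 db2
 WHERE db2.sku(+) = t.item
   AND db2.loc(+) = t.loc;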

Re: Query Tuning [message #311800 is a reply to message #311760]
Sun, 06 April 2008 22:08
rleishman (Senior Member) | Messages: 3728 | Registered: October 2005 | Location: Melbourne, Australia
coleing wrote on Mon, 07 April 2008 02:35:
> This will reduce the result set to at most the number of rows in ITEM_MASTER (~1M).
Only if you join on a unique key in ITEM_LOC. If you are joining on a unique key in ITEM_MASTER and a non-unique key in ITEM_LOC, the result set could be up to 140M rows.
If you join on non-unique keys in both tables, the sky's the limit (up to 140,000,000,000,000).
Unless I'm missing something fundamental, materialising an intermediate result will only INCREASE I/O; unlike the hash-join in the PGA, it generates UNDO and REDO (and REDO on the UNDO), which could effectively triple the I/O.
Ross Leishman

Re: Query Tuning [message #311834 is a reply to message #311579]
Mon, 07 April 2008 01:38
coleing (Senior Member) | Messages: 213 | Registered: February 2008
Sorry, yes, a non-unique join could produce 140M rows.
I have found materializing intermediate results to be an excellent way of managing large queries. Also, when the second join fails because of too much overspill (running out of TEMP or tablespace), you only have to restart the part of the query that is currently running.
Sure, if the result set fits in the PGA it's not a problem. But we are assuming that it won't, by a long way.
If you materialize the first part of the query, the second part (which is probably going to be the biggest) can be managed more effectively.
The temp table can be partitioned, the query can be split into streams, and so on; a sketch of running the second join in partition-sized streams is shown below.
Yes, you are increasing I/O, but this is sometimes faster than having a 100M-row result set in memory. I know it shouldn't be, but I've seen it speed up massive queries so many times that we always consider it.
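A sketch only of what splitting the second join into streams could look like: it assumes tmp_item_loc was created hash partitioned on item, that p01 is one of its partition names (these would be explicit or system-generated in practice), and that an item_loc_result table with matching columns already exists, so a failed stream can be re-run on its own.

-- One stream; repeat (or run concurrently) for each partition of tmp_item_loc.
INSERT /*+ append */ INTO item_loc_result
SELECT t.item, t.loc, t.status,
       db2.qty_on_hand, db2.qty_on_ord
  FROM tmp_item_loc PARTITION (p01) t,
       od_item_loc_db2 db2
 WHERE db2.sku(+) = t.item
   AND db2.loc(+) = t.loc;
COMMIT;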

Re: Query Tuning [message #311861 is a reply to message #311834]
Mon, 07 April 2008 02:57
rleishman (Senior Member) | Messages: 3728 | Registered: October 2005 | Location: Melbourne, Australia
Hmmm. I'll remain skeptical, if you don't mind.
I usually find that the theoretically optimal solution is also the actual optimal solution. When this is not the case, either there is a missing factor in the theory, or the theory has been implemented improperly in practice.
There needs to be some tangible advantage that offsets the extra I/O.
I take your point that materialising an intermediate result can avoid failure due to an inadequately sized PGA. Sort-merge joins are another way of achieving this, although whether they are faster than materialisation is debatable, given the less efficient join technique.
Ross Leishman