Problem with SQL Plan [message #347360]
Thu, 11 September 2008 06:21 |
majstoru
Messages: 68 Registered: October 2006 Location: Serbia
Member
Hi,
I have this SQL statement:
SELECT
Z.ID,
Z.BROJ,
Z.DATUM,
Z.LOG01 AS USLUZNO,
SUM(S.KOLICINAX) AS KOMADA,
SUM(S.KOLICINA) AS TEZINA
FROM ROBDOKUMENT Z
JOIN ROBDOKUMENTS S ON S.ROBDOKUMENT = Z.ID
WHERE
Z.TIP = 'K01'
GROUP BY
Z.ID, Z.BROJ, Z.DATUM, Z.LOG01
and this is the plan for that SQL statement:
Plan
SELECT STATEMENT ALL_ROWS Cost: 9,055 Bytes: 3,462,368 Cardinality: 119,392
4 HASH GROUP BY Cost: 9,055 Bytes: 3,462,368 Cardinality: 119,392
3 HASH JOIN Cost: 8,089 Bytes: 3,462,368 Cardinality: 119,392
1 TABLE ACCESS FULL TABLE YUHOR.ROBDOKUMENT Cost: 189 Bytes: 39,292 Cardinality: 2,068
2 TABLE ACCESS FULL TABLE YUHOR.ROBDOKUMENTS Cost: 7,884 Bytes: 8,502,660 Cardinality: 850,266
Here is the structure: ROBDOKUMENT is the master table with field ID as its primary key, and ROBDOKUMENTS is the detail table with field ROBDOKUMENT as a foreign key back to ROBDOKUMENT. I have indexes on ID and TIP of ROBDOKUMENT, and on ID and ROBDOKUMENT of ROBDOKUMENTS. With this scenario Oracle makes the plan shown above, and I don't know how to resolve the last two items in the plan:
1 TABLE ACCESS FULL TABLE YUHOR.ROBDOKUMENT Cost: 189 Bytes: 39,292 Cardinality: 2,068
2 TABLE ACCESS FULL TABLE YUHOR.ROBDOKUMENTS Cost: 7,884 Bytes: 8,502,660 Cardinality: 850,266
Please help!!!
Re: Problem with SQL Plan [message #347499 is a reply to message #347360]
Thu, 11 September 2008 22:05 |
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
Senior Member
How many rows in ROBDOKUMENT?
How many rows in ROBDOKUMENTS?
How many rows have Z.TIP = 'K01' ?
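For reference, those numbers can be checked with simple counts (a quick sketch using the table and column names from the query above):
SELECT COUNT(*) FROM ROBDOKUMENT;
SELECT COUNT(*) FROM ROBDOKUMENTS;
SELECT COUNT(*) FROM ROBDOKUMENT WHERE TIP = 'K01';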
Generally speaking, you should always index join keys and filter predicates:
- ROBDOKUMENTS(ROBDOKUMENT)
- ROBDOKUMENT(ID)
- ROBDOKUMENT(TIP)
But depending on the answers to the above questions, this may or may not help.
Don't forget to gather statistics with DBMS_STATS.GATHER_TABLE_STATS().
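To make that concrete, here is a minimal sketch of the suggested indexes and the statistics call (the index names are illustrative, and ROBDOKUMENT(ID) is normally covered already by the primary key index):
-- index the foreign key on the detail table (supports the join)
CREATE INDEX robdokuments_fk_idx ON ROBDOKUMENTS (ROBDOKUMENT);
-- index the filter column on the master table
CREATE INDEX robdokument_tip_idx ON ROBDOKUMENT (TIP);
-- refresh optimizer statistics on both tables (and their indexes)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'YUHOR', tabname => 'ROBDOKUMENT',  cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'YUHOR', tabname => 'ROBDOKUMENTS', cascade => TRUE);
END;
/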
Ross Leishman
Re: Problem with SQL Plan [message #347541 is a reply to message #347360]
Fri, 12 September 2008 01:12 |
harrysmall3
Messages: 109 Registered: April 2008 Location: Massachusetts
Senior Member
Hi Majstoru,
Ross raised the right questions for you to look into, especially ensuring fresh statistics are gathered.
Concerning:
Quote:
With this scenario Oracle makes the plan shown above, and I don't know how to resolve the last two items in the plan:
1 TABLE ACCESS FULL TABLE YUHOR.ROBDOKUMENT Cost: 189 Bytes: 39,292 Cardinality: 2,068
2 TABLE ACCESS FULL TABLE YUHOR.ROBDOKUMENTS Cost: 7,884 Bytes: 8,502,660 Cardinality: 850,266
Let me back up further and ask: what is being defined as the problem? Oracle could very well be making the correct decision, and the full table scan shouldn't be targeted as an issue by itself.
It really comes down to the number of records you expect to retrieve for the TIP value of 'K01' and the size of the tables. The hash join can be very efficient with its in-memory hash table.
If this query will be re-used for other values, also consider that the hash join will give a more or less consistent response time, while the nested loop can tend to show an exponential relationship between processing time and data volume.
With complete statistics gathered, Oracle can make some pretty good choices based on the skew of the data, switching between the hash join and a nested-loop index approach based on the expected record count.
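If you want to see how the two join methods actually compare on this data, one way is to run the same statement once with each join method forced by a hint and compare the timings and plans (a sketch only; the hints are for testing, not a recommendation):
-- force nested loops, driving from the filtered master table
SELECT /*+ LEADING(Z S) USE_NL(S) */
       Z.ID, Z.BROJ, Z.DATUM, Z.LOG01,
       SUM(S.KOLICINAX), SUM(S.KOLICINA)
FROM   ROBDOKUMENT Z
JOIN   ROBDOKUMENTS S ON S.ROBDOKUMENT = Z.ID
WHERE  Z.TIP = 'K01'
GROUP BY Z.ID, Z.BROJ, Z.DATUM, Z.LOG01;
-- force the hash join the optimizer is currently choosing
SELECT /*+ LEADING(Z S) USE_HASH(S) */
       Z.ID, Z.BROJ, Z.DATUM, Z.LOG01,
       SUM(S.KOLICINAX), SUM(S.KOLICINA)
FROM   ROBDOKUMENT Z
JOIN   ROBDOKUMENTS S ON S.ROBDOKUMENT = Z.ID
WHERE  Z.TIP = 'K01'
GROUP BY Z.ID, Z.BROJ, Z.DATUM, Z.LOG01;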
Best Regards,
Harry
(hash-join fan)
Re: Problem with SQL Plan [message #347571 is a reply to message #347541]
Fri, 12 September 2008 03:43 |
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
Senior Member
Harry,
I think I have established my credentials as a fellow hash join fan in the past, but even I reckon you're giving NL a raw deal.
Clearly you work in a high volume batch-style environment. But there are as many people - if not more - working in OLTP environments where FTS means death and indexed-nested-loops is the ONLY applicable join method. I'd hate to see you leading any of these people down the garden path.
NL is slow, agreed; but it does scale linearly, not exponentially. It just seems that way because HJ scales linearly as well but with a shallower pitch.
Ross Leishman
Re: Problem with SQL Plan [message #347620 is a reply to message #347571]
Fri, 12 September 2008 06:58 |
harrysmall3
Messages: 109 Registered: April 2008 Location: Massachusetts
Senior Member
Ross you are correct, I am guilty of losing sight of the OLTP aspect.
I re-read my post, and while my intent was to suggest treating the FTS as neutral until the table size and expected row count were ascertained, my points as stated were blatantly biased.
No raw deals intended, sorry Majstoru!
I have been stuck in my back-end labyrinth for quite some time.
Here's a literal "brain dump" of what went through my mind on seeing the query:
for method_to_madness in (Select * from harrys_brain) LOOP:
...<my eyes jump to>
"Ok, its not on Z.ID, we're processing for a category".
<eyes>
YUHOR.ROBDOKUMENT Bytes: 39,292 Cardinality: 2,068
YUHOR.ROBDOKUMENTS Bytes: 8,502,660 Cardinality: 850,266
"detail rec 100k? 850k rows, master tbl 2068 rows...of what?"
<jump to explain plan reference for cardinality>
"ok estimate of rows. Uniqueness is for Z.ID"
"So what about indexing tip?...use the index to limit the records
being joined"
"If the document was our claim master tbl, and documents the detail, and TIP was a diagnosis code, did that ever work out?"
"Only when we joined to the diag table. the unique scan. not worth mentioning, Ross suggested that index"
"I wonder if Z.TIP criteria were replaced with Z.ID if,"
"for one row? yes it would take the index...do it for Z.ID 100 rows"
"forget that. drop the optimizer_index_cost_adj until the index takes..."
"try it at 100 first."
"no fast full scan." "So what? its not available"
"it could be forced"
..WHERE EXISTS (SELECT ID FROM DOCUMENT D2 WHERE D1.ROWNUM = D2.ROWNUM)
"pointless, that has never improved anything"
"What about TIP?"
"<static>"
"So what if there are only a few rows for K01?"
"No way, you're lazy, you'd have selected the Z.ID's from document and written the query against documents...there's a lot of values"
"How many?"
"find out what it means... +K01 +Document +ID +Datum -> google"
"these are some sort of web parms. tags. K01 seems common
id="+K08+"&li=3&gr="+K01+((K06.length>0)?"&af="+K06:"")+" target=_new>"
"forget it. write the post"
Hopefully a humorous start to a good Friday for all!
Best regards
Harry
Re: Problem with SQL Plan [message #348257 is a reply to message #347360]
Tue, 16 September 2008 04:01 |
majstoru
Messages: 68 Registered: October 2006 Location: Serbia
Member
Hi to all,
Thanks for your replies. I have read a lot of performance tips on various Oracle support sites, and if I understand correctly I should index every column in a table that is used as a primary key, foreign key, search condition or join key. Does that mean that a primary key is not indexed (or indexed by itself), and that I must create an index on that column in Oracle myself?
Thanks again
Re: Problem with SQL Plan [message #348263 is a reply to message #348257]
Tue, 16 September 2008 04:09 |
JRowbottom
Messages: 5933 Registered: June 2006 Location: Sunny North Yorkshire, ho...
Senior Member
I think you'll find that all your primary keys are already indexed - Oracle creates an index for them by default.
Foreign keys are definitely worth indexing in general, particularly if you ever plan on doing deletes from the master table.
Columns that are joined on are often worth indexing.
Other columns that are searched on can be worth indexing, or can sometimes be worth including in other indexes.
There is no hard and fast set of rules - if there were, it would have been automated by now.
You need to look at the queries that run against your tables, and create the smallest set of indexes that you can that will give you an acceptable level of performance.
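Before adding anything, it is worth checking what is already there; a minimal sketch against the standard data dictionary views:
-- existing indexes and their columns on the two tables
SELECT table_name, index_name, column_name, column_position
FROM   user_ind_columns
WHERE  table_name IN ('ROBDOKUMENT', 'ROBDOKUMENTS')
ORDER  BY table_name, index_name, column_position;
-- primary and foreign key constraints (and the index backing the primary key)
SELECT table_name, constraint_name, constraint_type, index_name
FROM   user_constraints
WHERE  table_name IN ('ROBDOKUMENT', 'ROBDOKUMENTS')
AND    constraint_type IN ('P', 'R');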