Oracle hash join (Oracle 9i)
Oracle hash join [message #513030] Thu, 23 June 2011 06:11
dushan_st
Messages: 6
Registered: June 2011
Location: London
Junior Member
Hi Guys,

I would really appreciate your help with the SQL query issue below.

I'm joining two tables, event_types and tmp_acc.

event_types contains 2 billion records.
tmp_acc contains 20,000 records.

The result is about 300,000 rows.

In the event_types table, end_t and account_obj_id0 (the join columns) are indexed.

There are no indexes on tmp_acc.

When I run the query below with a nested loop it takes 6 hours to complete, but with a hash join it was still running even after 4 days. Can you please let me know what is wrong with the hash join here, and why it takes so long?

I'm joining only 20,000 rows, so I think there should be a way to get the result rows quickly. I'd really appreciate your help on this.

show parameters hash_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hash_area_size                       integer     2097152

explain plan for
select --+ parallel(e,6)

from EVENT_TYPES e, tmp_acc t where e.poid_type in
(
'/event/billing/adjustment/account',
'/event/billing/debit',
'/event/customer/status',
'/event/contract',
'/event/billing/writeoff/account',
'/event/billing/payment/freeserve_dd'
)
and e.poid_id0 < 1383062484100000000
and e.end_t >= 1262304000
and e.account_obj_id0 = t.created_t
/
set linesize 500

Explained.

SQL> SQL> set pagesize 0
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);

-----------------------------------------------------------------------------
| Id | Operation                    | Name        | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT             |             | 859  | 1170K | 13   |
|* 1 | HASH JOIN                    |             | 859  | 1170K | 13   |
| 2  | TABLE ACCESS FULL            | ACCOUNTS    | 41   | 533   | 2    |
|* 3 | TABLE ACCESS BY INDEX ROWID  | EVENT_TYPES | 859  | 1159K | 10   |
|* 4 | INDEX RANGE SCAN             | I_EVENT__ID | 7546 |       | 2    |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("E"."ACCOUNT_OBJ_ID0"="T"."CREATED_T")
3 - filter(("E"."POID_TYPE"='/event/billing/adjustment/account' OR
"E"."POID_TYPE"='/event/billing/debit' OR
"E"."POID_TYPE"='/event/billing/payment/freeserve_dd' OR
"E"."POID_TYPE"='/event/billing/writeoff/account' OR
"E"."POID_TYPE"='/event/contract' OR "E"."POID_TYPE"='/event/customer/status'
) AND "E"."END_T">=1262304000)
4 - access("E"."POID_ID0"<1383062484100000000)

Note: cpu costing is off

24 rows selected.
Re: Oracle hash join [message #513035 is a reply to message #513030] Thu, 23 June 2011 06:42
Michel Cadot
Messages: 68716
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Your statistics are not up to date.

Please read OraFAQ Forum Guide and How to use [code] tags and make your code easier to read.
Make sure that lines of code do not exceed 80 characters when you format.
Indent the code, use code tags and align the columns in result.
Use the "Preview Message" button to verify.
Also always post your Oracle version, with 4 decimals.

Regards
Michel

Re: Oracle hash join [message #513082 is a reply to message #513035] Thu, 23 June 2011 08:41
dushan_st
Messages: 6
Registered: June 2011
Location: London
Junior Member
Michel, I'm quite new to this forum. I have done my best.

Oracle version : Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production

explain plan for
select --+ parallel(e,6)

from EVENT_TYPES e, tmp_acc t where e.poid_type in
(
'/event/billing/adjustment/account',
'/event/billing/debit',
'/event/customer/status',
'/event/contract',
'/event/billing/writeoff/account',
'/event/billing/payment/freeserve_dd'
)
and e.poid_id0 < 1383062484100000000
and e.end_t >= 1262304000
and e.account_obj_id0 = t.created_t
/
set linesize 500

Explained.

SQL> SQL> set pagesize 0
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);


-----------------------------------------------------------------------------
| Id | Operation                    | Name        | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT             |             | 859  | 1170K | 13 |
|* 1 | HASH JOIN                    |             | 859  | 1170K | 13 |
| 2  | TABLE ACCESS FULL            | ACCOUNTS    | 41   | 533   | 2  |
|* 3 | TABLE ACCESS BY INDEX ROWID  | EVENT_TYPES | 859  | 1159K | 10 |
|* 4 | INDEX RANGE SCAN             | I_EVENT__ID | 7546 |       | 2  |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("E"."ACCOUNT_OBJ_ID0"="T"."CREATED_T")
3 - filter(("E"."POID_TYPE"='/event/billing/adjustment/account' OR
"E"."POID_TYPE"='/event/billing/debit' OR
"E"."POID_TYPE"='/event/billing/payment/freeserve_dd' OR
"E"."POID_TYPE"='/event/billing/writeoff/account' OR
"E"."POID_TYPE"='/event/contract' OR "E"."POID_TYPE"='/event/customer/status'
) AND "E"."END_T">=1262304000)
4 - access("E"."POID_ID0"<1383062484100000000)

Note: cpu costing is off

24 rows selected. 




CM: fixed end tag. Should be [/code] not [code/]
In addition explain plans should be inside the tags.

[Updated on: Thu, 23 June 2011 08:44] by Moderator

Re: Oracle hash join [message #513083 is a reply to message #513082] Thu, 23 June 2011 08:46
cookiemonster
Messages: 13958
Registered: September 2008
Location: Rainy Manchester
Senior Member
As Michel says, your statistics are not up to date. Or you're taking the explain plan from a different DB to the one you're having the problem in.
That explain indicates that there is very little data in either table.
Re: Oracle hash join [message #513096 is a reply to message #513083] Thu, 23 June 2011 11:30
dushan_st
Messages: 6
Registered: June 2011
Location: London
Junior Member
Thanks for the response, cookiemonster. But I can confirm the explain plan was generated on my pre-production database, which is the database that contains 2 billion records in the EVENT_TYPES table. I double-checked and it gives the same plan. I don't know where the problem is. Any ideas?
Re: Oracle hash join [message #513097 is a reply to message #513096] Thu, 23 June 2011 11:37
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Post the SQL and results for the queries below:

select count(*)
from EVENT_TYPES e, tmp_acc t where e.poid_type in
(
'/event/billing/adjustment/account',
'/event/billing/debit',
'/event/customer/status',
'/event/contract',
'/event/billing/writeoff/account',
'/event/billing/payment/freeserve_dd'
)
and e.poid_id0 < 1383062484100000000
and e.end_t >= 1262304000
and e.account_obj_id0 = t.created_t
/
select count(*)
from EVENT_TYPES e
/
select count(*)
from tmp_acc t
/
Re: Oracle hash join [message #513098 is a reply to message #513096] Thu, 23 June 2011 11:42
cookiemonster
Messages: 13958
Registered: September 2008
Location: Rainy Manchester
Senior Member
Unless I'm completely misreading it, that plan thinks it's going to find 859 rows in EVENT_TYPES and 41 in ACCOUNTS.
I'd triple check that your DB stats are up to date.
What does:
SELECT table_name, last_analyzed, num_rows
FROM user_tables
WHERE table_name IN ('ACCOUNTS', 'EVENT_TYPES');
give on the pre-production database?
Re: Oracle hash join [message #513099 is a reply to message #513098] Thu, 23 June 2011 11:49
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>from EVENT_TYPES e, tmp_acc t
How/why does the plan display the ACCOUNTS table name?
Re: Oracle hash join [message #513166 is a reply to message #513098] Fri, 24 June 2011 03:03
dushan_st
Messages: 6
Registered: June 2011
Location: London
Junior Member
Both columns are null.

I wonder if this is why the explain plan gives the wrong number of rows. Do you know how to update statistics? I searched on the internet and ran "analyze table tmp_acc validate structure", but it didn't update the stats.

SELECT table_name , last_analyzed , num_rows
FROM user_tables
WHERE table_name IN ('TMP_ACC', 'EVENT_T');

TABLE_NAME   LAST_ANALYZED   NUM_ROWS
EVENT_T
TMP_ACC




CM: fixed code tags

[Updated on: Fri, 24 June 2011 03:45] by Moderator

Re: Oracle hash join [message #513169 is a reply to message #513097] Fri, 24 June 2011 03:08
dushan_st
Messages: 6
Registered: June 2011
Location: London
Junior Member
@BlackSwan

Number of rows in resulting query : 312342

Number of records in event_types : very close to 2 billion records

tmp_acc(accounts) : 20,000

Sorry, tmp_acc is the same as the accounts table. I accidentally renamed the table (tmp_acc to accounts) in the execution plan. Apologies for the confusion.

[Updated on: Fri, 24 June 2011 03:12]

Re: Oracle hash join [message #513173 is a reply to message #513169] Fri, 24 June 2011 03:44
cookiemonster
Messages: 13958
Registered: September 2008
Location: Rainy Manchester
Senior Member
End code tag is [/code] not [code/]. I've fixed it for you.

If last_analyzed and num_rows are null then the tables have never been analyzed. However, that should mean the rule-based optimiser would be used instead of the cost-based optimiser, which leads to a completely different explain plan (RBO plans don't have a cost column). So it looks like your explain was run against different tables.

The analyze command you used doesn't gather statistics. The correct analyze command, as detailed in the docs (they've got a search facility, use it) is:
analyze table tmp_acc compute statistics


However, since you're on 9i you should probably use the dbms_stats package instead.
Actually, you should upgrade to a supported version; 10g and later gather statistics for you automatically.
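For reference, a dbms_stats call along these lines would replace the analyze command on 9i (the table name is from this thread, but the sample percentage and cascade choice are only illustrative, not a recommendation):

begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'TMP_ACC',
    estimate_percent => 10,     -- illustrative sample size
    cascade          => true);  -- gather index statistics as well
end;
/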

And don't edit explain plans.
Re: Oracle hash join [message #513227 is a reply to message #513173] Fri, 24 June 2011 09:06
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>sorry tmp_acc same as accounts table..
>I have accidently renamed the table name (tmp_acc to accounts) in the execution plan.. Appologies for the confusion.
I won't waste more time since I can't rely upon what is posted.
Re: Oracle hash join [message #513237 is a reply to message #513173] Fri, 24 June 2011 15:28
John Watson
Messages: 8960
Registered: January 2010
Location: Global Village
Senior Member
Quote:
If last_analyzed and num_rows are null then the tables have never been analyzed. However that should mean that the rule based optimiser would be used instead of the cost based optimiser.

@CM, and @anyone else interested
The above statement is not necessarily true. If it were true, things would be a lot better. In fact, if even one table involved in a query has statistics, then the cost based optimizer is used. The costs are calculated using default values for all tables that do not have statistics, and the defaults are terrible. The detail is buried in metalink, but it is something ridiculous like 200 blocks in the table. This is why if you have analyzed anything, you MUST analyze everything. 9i attempts to fix this problem by using dynamic sampling, but the 9i default for the optimizer_dynamic_sampling parameter is 1, which is too low. The 10g/11g default is 2. I usually like to set it to 4.
In this particular case, neither table has statistics, but dynamic sampling (even as low as 1) should have given the optimizer some idea of the size of the tables. I have never known it to be so wrong.
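For anyone who wants to experiment, both the parameter and the per-query hint are available on 9iR2; the level of 4 is just the value suggested above, and the query is only a placeholder:

alter session set optimizer_dynamic_sampling = 4;

-- or per statement, on the table alias
select /*+ dynamic_sampling(e 4) */ count(*)
from   event_types e
where  e.end_t >= 1262304000;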
Re: Oracle hash join [message #513239 is a reply to message #513237] Fri, 24 June 2011 19:24
cookiemonster
Messages: 13958
Registered: September 2008
Location: Rainy Manchester
Senior Member
I deliberately glossed over that as I figured it'd just confuse things further. Personally I suspect schema/DB confusion rather than the CBO kicking in on unanalyzed tables.

Worth noting as well that querying an Index Organized Table will cause the CBO to kick in, as the RBO doesn't know what to do with them. I believe that also happens with some other specialist objects; I can't remember the list offhand.
Re: Oracle hash join [message #513379 is a reply to message #513169] Mon, 27 June 2011 03:06
hkchital
Messages: 128
Registered: September 2008
Location: Singapore
Senior Member
How many rows do you get for:
select count(*) from event_types e
where e.poid_id0 < 1383062484100000000
and e.end_t >= 1262304000


What indexes do you have on EVENT_TYPES ?

When you have
select --+ parallel(e,6)
in the query, are you planning to use ParallelQuery against
EVENT_TYPES ? (That doesn't seem to be happening right now).


Hemant K Chitale
Re: Oracle hash join [message #513492 is a reply to message #513379] Mon, 27 June 2011 16:58
dushan_st
Messages: 6
Registered: June 2011
Location: London
Junior Member
@hkchital: I tried to run the query you asked for (below), but it is still running after 14 hours. I'm not sure how long it will take.
[code]
select --+ parallel (e,8)
count(*) from event_types e
where
e.poid_id0 < 1383062484100000000
and e.end_t >= 1262304000
[code/]

The indexes on that table are:

Index_Name               Column
I_EVENT__ACCTOBJ_END_T   ACCOUNT_OBJ_ID0
I_EVENT__ACCTOBJ_END_T   END_T
I_EVENT__ID              POID_ID0

@cookiemonster: I tried to analyze it, but it failed after 38 hours.
analyze table event_types compute statistics
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 4 with name "_SYSSMU4$"
too small
Elapsed: 38:50:34.55

It is not easy to handle this table.

Can't we find a better execution plan than the nested loop join at all, please? I thought there should be a way to get the output in less than 6 hours, since I'm only joining 20,000 records against an indexed column of the event_types table.
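(For what it's worth, the nested-loops approach that does finish can be pinned with hints along the lines below; count(*) stands in for the elided select list, the index name is the one listed above, and the poid_type list is shortened for the sketch.)

select /*+ ordered use_nl(e) index(e i_event__acctobj_end_t) */ count(*)
from   tmp_acc t, event_types e           -- drive from the small table
where  e.account_obj_id0 = t.created_t    -- probe the (account_obj_id0, end_t) index
and    e.end_t    >= 1262304000
and    e.poid_id0 <  1383062484100000000
and    e.poid_type in ('/event/billing/debit', '/event/customer/status');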
Re: Oracle hash join [message #513494 is a reply to message #513492] Mon, 27 June 2011 17:12
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>ORA-01555: snapshot too old: rollback segment number 4 with name "_SYSSMU4$"
This results when some other session is changing the table and committing.
The session reporting ORA-01555 is the victim, not the culprit, unless it is itself doing a COMMIT inside a loop.
Re: Oracle hash join [message #513497 is a reply to message #513494] Mon, 27 June 2011 18:14
John Watson
Messages: 8960
Registered: January 2010
Location: Global Village
Senior Member
If you really can't analyze the table without hitting an ORA-1555, you can set the statistics manually. You say you know roughly how many rows there are, so use the dbms_stats.set_table_stats procedure to give the optimizer the best information you can.
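A minimal sketch of that, assuming the row counts quoted earlier in the thread; the block counts are pure guesses and would need realistic values (for example from dba_segments):

begin
  -- numrows values come from the figures posted above; numblks values are placeholders
  dbms_stats.set_table_stats(ownname => user, tabname => 'EVENT_TYPES',
                             numrows => 2000000000, numblks => 30000000);
  dbms_stats.set_table_stats(ownname => user, tabname => 'TMP_ACC',
                             numrows => 20000, numblks => 200);
end;
/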

Re: Oracle hash join [message #513504 is a reply to message #513169] Mon, 27 June 2011 22:13
hkchital
Messages: 128
Registered: September 2008
Location: Singapore
Senior Member
How about testing the query for 1 row from the ACC or TMP_ACC table? Add a filter predicate that returns 1 row only.

then trace the query with
exec DBMS_SESSION.SESSION_TRACE_ENABLE(waits=>FALSE,binds=>FALSE)
and review the trace file -- see how much I/O it does on the EVENT_TYPES table and indexes.
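A sketch of that test follows; the single-account predicate and its literal are placeholders, and since this database is 9.2 (where DBMS_SESSION.SESSION_TRACE_ENABLE is not available) the trace can be switched on with the 10046 event or DBMS_SESSION.SET_SQL_TRACE instead:

alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 1';  -- level 1: no waits, no binds

select e.poid_id0, e.poid_type, e.end_t          -- stand-in for the real select list
from   event_types e, tmp_acc t
where  e.account_obj_id0 = t.created_t
and    t.created_t = 12345                       -- hypothetical single-account filter
and    e.end_t >= 1262304000;

alter session set events '10046 trace name context off';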


(Surely there are multiple flaws in the design. A table called "EVENT_TYPES" holds transactions rather than types. The join seems to be on a CREATED_T column -- what is this ? Why not an ACCOUNT_ID column ?)


Hemant K Chitale
Re: Oracle hash join [message #513558 is a reply to message #513492] Tue, 28 June 2011 04:05
cookiemonster
Messages: 13958
Registered: September 2008
Location: Rainy Manchester
Senior Member
dushan_st wrote on Mon, 27 June 2011 22:58
@cookiemonster: I tried to analyze it, but it failed after 38 hours.
analyze table event_types compute statistics
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 4 with name "_SYSSMU4$"
too small
Elapsed: 38:50:34.55


38 hrs?!?

You need to find out why it was taking so long. Re-run it and trace what it's doing.
If you can't speed it up then you need to do as John advised and set the stats manually.

Look, with invalid or missing stats, any and every query against those tables runs a serious risk of coming up with a bad plan, especially since Oracle currently thinks there is very little data in the table. And when Oracle thinks there is little data, it usually favours full table scans over index access.

You need proper stats.
Re: Oracle hash join [message #513559 is a reply to message #513558] Tue, 28 June 2011 04:06
cookiemonster
Messages: 13958
Registered: September 2008
Location: Rainy Manchester
Senior Member
@dushan_st- How many times do I have to tell you:

cookiemonster wrote on Fri, 24 June 2011 09:44
End code tag is [/code] not [code/].

There is a preview function as well. Use it.


[Updated on: Tue, 28 June 2011 04:07]

Re: Oracle hash join [message #514161 is a reply to message #513559] Sat, 02 July 2011 00:26
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
If you can't collect stats on all rows, try using a small percentage and work your way up if it is not sufficient. You don't need perfect stats, just good enough to get what you want. Here is an example of collecting stats on one percent of the rows in a table:

exec dbms_stats.gather_table_stats('KEVIN','LOGTABLE',estimate_percent=>1)

Also, if you really want us to look at this, try doing something intelligent, like:

1) posting the create table statement
2) posting the real query with the missing columns
3) giving us create index statements

Also, provide your partitioning scheme. Is this a partitioned table? If not why not?

Lastly, a little info about what the columns mean would help. For example, is end_t a timestamp of some kind? created_t sure looks like one, yet it is in an equi-join with something that looks like an account ID. Are some of these columns multi-purpose, and are you using some kind of generic database design?

Currently things seem so confused in your presentation of the data that I am inclined not to believe a lot of it. As for the time it takes, do you have any idea how long it should take? Personally I think you should be able to find and join 300,000 rows to 20,000 rows in under a minute, so there is no excuse for 38 hours. This could depend a lot on how the data is distributed, but you can manage that with a partitioning scheme, for example on poid_type, though that is getting ahead of ourselves.
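To make that concrete, a list-partitioning sketch might look like the block below; the column names are taken from the query, while the table name, column types and partition layout are purely illustrative.

create table event_types_part (
  poid_id0         number,
  poid_type        varchar2(255),
  account_obj_id0  number,
  end_t            number
)
partition by list (poid_type) (
  partition p_billing_debit values ('/event/billing/debit'),
  partition p_cust_status   values ('/event/customer/status'),
  partition p_other         values (default)
);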

Please step back from the problem and collect data and present it in a way that is easily readable and understandable.

Thanks, Kevin