Re: Yes, this is probably a stupid question but...

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Sun, 13 Apr 2008 22:52:30 +0100
Message-ID: <luudnRsIfZkEHZ_VnZ2dnUVZ8hmdnZ2d@bt.com>

"Pat" <pat.casey_at_service-now.com> wrote in message news:b1ed3a55-a7f3-4ba1-a6fd-53c0f0548bd5_at_b64g2000hsa.googlegroups.com...
> On Apr 13, 11:20 am, hpuxrac <johnbhur..._at_sbcglobal.net> wrote:
>> On Apr 12, 5:55 pm, Pat <pat.ca..._at_service-now.com> wrote:
>>
>>
>> Another option to consider is using/creating a materialized
>> view to support the count query.
>>
>> Going down the posts in the other part of this thread, if you force
>> the count query to use a hash join, this might help the queries that
>> return a high count (200k) but could (possibly severely) impact
>> the queries that (running now with a nested loop) return a small
>> count.
>
> I'm going to show my ignorance here, but would a materialized view
> help if there's a lot of transactional volume on the table e.g. if
> there are, say, 500 or so updates per hour across that table? Wouldn't
> I need to refresh the view every few seconds to keep it current? Or is
> this the part I'm missing?

There is an option to "refresh on commit".

It's not a good idea to use it on a very busy table as the overheads are dramatic (a single row update resulted in about 45 statement executions on the commit the last time I tested it). However, at one update every 7 seconds, you may find that the overheads are acceptable.
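As a rough sketch of what that looks like (the table name "orders" and column "status" are hypothetical, and a fast-refreshable aggregate view has further restrictions beyond what is shown here): you need a materialized view log that records the grouping column and includes new values, plus a COUNT(*) in the view itself.

```sql
-- Materialized view log: required for FAST refresh of the view below.
CREATE MATERIALIZED VIEW LOG ON orders
  WITH ROWID (status)
  INCLUDING NEW VALUES;

-- Aggregate view maintained automatically at each commit.
CREATE MATERIALIZED VIEW orders_status_mv
  REFRESH FAST ON COMMIT
AS
SELECT status, COUNT(*) AS cnt
FROM   orders
GROUP  BY status;
```

The count query can then (subject to QUERY_REWRITE being enabled, or by querying the view directly) be answered from orders_status_mv instead of scanning the base table.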

There are a couple of oddities with read-consistency, though.

Check what happens if you update the table and then (from the same session) run the query BEFORE committing. Your version of the query has to go to the base tables to see your uncommitted update rather than using the materialized view -
and other users may then start sharing the cursor that visited the base tables instead of the view. Unless you have very good control over how the updates and queries synchronise, you could be caught in a trap of randomly changing performance.

A completely different thought - this may be a case where you want to get literal values into your query and build histograms to support the query. You had 150,000 out of 180,000 rows with status 'active'. If you create a histogram on the column, and use literals in your SQL you may find that the optimizer uses the 'damage-limiting' hash join on the worst case, and the nested loop on the other cases. The cost of the extra optimisation stages may be insignificant compared to the benefit of doing the right thing every time.
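A minimal sketch of that approach (again with the hypothetical table "orders" and column "status"): gather a histogram on the skewed column, then use literals so the optimizer can see which value you are asking for.

```sql
-- Build a histogram on the skewed column (up to 254 buckets).
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'ORDERS',
    method_opt => 'FOR COLUMNS status SIZE 254'
  );
END;
/

-- Use a literal, not a bind variable, so the optimizer can pick
-- a hash join for the popular value and a nested loop otherwise.
SELECT COUNT(*) FROM orders WHERE status = 'active';
```

With a bind variable the optimizer would have to pick one plan for all values; with literals and a histogram it can cost each value separately.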

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html
Received on Sun Apr 13 2008 - 16:52:30 CDT
