
Re: optimize this!

From: Daniel Morgan <damorgan_at_x.washington.edu>
Date: Mon, 04 Oct 2004 17:49:25 -0700
Message-ID: <1096937447.367090@yasure>


FC wrote:

> "Daniel Morgan" <damorgan_at_x.washington.edu> wrote in message
> news:1096865277.974633_at_yasure...
>

>>FC wrote:
>>
>>
>>>Hi,
>>>I am trying to optimize the following sql statement (oracle 10g) but I can't
>>>get anything better than an index range scan.
>>>
>>>select distinct to_char(shopping_date,'MONTH/YYYY') d,
>>>trunc(shopping_date,'MM') r
>>>from bills
>>>where owner_id = 25
>>>order by 2 desc
>>>
>>>     (null) SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=39 Bytes=390) (null)
>>>      1 0 SORT (ORDER BY) (Cost=3 Card=39 Bytes=390) (null)
>>>      2 1 SORT (UNIQUE) (Cost=2 Card=39 Bytes=390) (null)
>>>      3 2 INDEX (RANGE SCAN) OF BILLS_IDX2 (INDEX) (Cost=1 Card=45 Bytes=450) (null)
>>>
>>>
>>>Table bills comes with the following indexes (table and indexes have been
>>>analyzed beforehand)
>>>
>>>1) a primary key index on column id which is not part of this query.
>>>2) a btree index on columns owner_id and id (used by other queries)
>>>3) a btree index on columns owner_id and shopping_date (this one is actually
>>>picked by the optimizer)
>>>4) a function index on owner_id, to_char(shopping_date,'MONTH/YYYY'),
>>>trunc(shopping_date,'MM') which is ignored due to low selectivity I presume.
>>>So, my question is, given the fact that the query is expected to select
>>>around 50 rows from a table containing some 10.000 rows, is the index range
>>>scan on index number 3 the best I can get?
>>>
>>>Does it make any sense to define function indexes with functions like trunc
>>>that "flatten" values and therefore reduce selectivity?
>>>
>>>Of course in my case it does not make sense to keep this function index as
>>>it is not going to be used ever.
>>>
>>>Thanks,
>>>Flavio
>>
>>You want better than a cost of 3 ... why? Don't you have a job to do?
>>
>>Personally I'd expect that a full table scan wouldn't hurt with such a
>>small table.
>>
>>-- 
>>Daniel A. Morgan
>>University of Washington
>>damorgan_at_x.washington.edu
>>(replace 'x' with 'u' to respond)
>>

>
>
> Daniel,
> I am not concerned about the current table's size, of course, but this
> table could easily grow to 1.000.000 or 10.000.000 rows, so I am just
> making sure my application is scalable. Moreover, this query is needed to
> populate a drop-down list, so the user should wait as little as possible
> for the output, as it is not the only SQL query that is run to build up
> the page.
>
> Flavio
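
For reference, the function index Flavio lists as number 4 would be
created roughly as follows (the index name here is hypothetical; the
column list is taken from his description):

  create index bills_fbi on bills
    (owner_id, to_char(shopping_date,'MONTH/YYYY'), trunc(shopping_date,'MM'));

The expression columns of such an index can only be matched when a query
spells out the identical expressions, as this query's select list happens
to do; whether the optimizer then prefers it to the plain (owner_id,
shopping_date) index is purely a cost decision.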

You cannot do what you are trying to do. As the table increases from 10K
rows to 10M rows the plan will change too. Load the table with 10M rows
with a realistic cardinality or forget the exercise.
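
A sketch of that exercise, assuming a scratch copy of BILLS with the
columns mentioned above (id, owner_id, shopping_date); the generator
sizes and dbms_random ranges are illustrative only, aiming at roughly
50 rows per owner:

  insert /*+ append */ into bills (id, owner_id, shopping_date)
  select rownum,                                      -- synthetic key
         trunc(dbms_random.value(1, 200000)),         -- ~200,000 owners
         date '2000-01-01' + trunc(dbms_random.value(0, 1800))
  -- two row generators cross-joined: 10,000 x 1,000 = 10,000,000 rows
  from   (select level from dual connect by level <= 10000) a,
         (select level from dual connect by level <= 1000) b;
  commit;

  exec dbms_stats.gather_table_stats(user, 'BILLS', cascade => true)

  explain plan for
  select distinct to_char(shopping_date,'MONTH/YYYY') d,
         trunc(shopping_date,'MM') r
  from bills
  where owner_id = 25
  order by 2 desc;

  select * from table(dbms_xplan.display);

Only a plan produced against data of that size and distribution says
anything about how the query will behave at 10,000,000 rows.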

-- 
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu
(replace 'x' with 'u' to respond)
Received on Mon Oct 04 2004 - 19:49:25 CDT
