Re: Query Performing slow after upgrade to 11g
Date: Mon, 4 Aug 2014 11:51:46 +1200
Message-ID: <CAKE9HuMstBaFMtozTZaG9qXqbvqMYO-wn_K_6jeN0ZUcfDvUZw_at_mail.gmail.com>
Hi All,
Thank you JL, Mark, and Carlos for the suggestions.
We managed to get this resolved; we found two ways to avoid the Cartesian merge join:
- Managed to get an earlier plan from one of the test databases, which was still rule-based, and reproduced it with the "Ordered" hint (a sketch follows below).
- We added another table to the join condition, which started driving off the proper index.
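
For anyone hitting the same thing, a minimal sketch of the hint approach (DRIVING_TABLE and the join columns are made-up names for illustration; the real query is in the attached scripts):

SELECT /*+ ORDERED */
       oi.status_dt, oi.last_upd
FROM   driving_table dt,   -- listed first, so it drives the join order
       order_item    oi    -- reached second, through its proper index
WHERE  oi.parent_id = dt.row_id
AND    oi.status_dt BETWEEN to_date('20140729000000','YYYYMMDDHH24MISS')
                        AND to_date('20140730000000','YYYYMMDDHH24MISS')
AND    oi.last_upd  BETWEEN to_date('20140729000000','YYYYMMDDHH24MISS')
                        AND to_date('20140730000000','YYYYMMDDHH24MISS');

The ORDERED hint makes the optimizer join the tables in the order they appear in the FROM clause, so arranging the FROM clause to match the old rule-based join order brings back the old plan.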
On Fri, Aug 1, 2014 at 9:52 PM, Carlos Sierra <carlos.sierra.usa_at_gmail.com> wrote:
> I can take a closer look. Please execute the SQL again and send me the
> output of the attached scripts. The SQL does not have to complete.
>
>
> Carlos Sierra
> carlos.sierra.usa_at_gmail.com
> Life is Good!
>
>
>
>
>
> On Aug 1, 2014, at 2:33, Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
> wrote:
>
>
> As Carlos suggests, getting suitable stats is important as a starting
> point; however this query MAY be an example of a classic problem that leads
> to inappropriate Cartesian merge joins.
>
> Note the predicates:
>
> AND order_item.STATUS_DT between
>        to_date('20140729000000','YYYYMMDDHH24MISS') and
>        to_date('20140730000000','YYYYMMDDHH24MISS')
> AND order_item.LAST_UPD between
>        to_date('20140729000000','YYYYMMDDHH24MISS') and
>        to_date('20140730000000','YYYYMMDDHH24MISS')
>
> both request a single day's worth of data using a range-based predicate.
>
> The column names suggest (to me) that one day of last_upd will identify a
> very small fraction of the data, and that status_dt will do likewise. The
> optimizer assumes all columns are independent and will therefore estimate
> that you want "a very small fraction of a very small fraction" of the
> data - which could result in a cardinality estimate of 1, which is then
> likely to trigger a Cartesian merge join.
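>
> For example (illustrative numbers only): with 1,000,000 rows and roughly
> 1,000 distinct days in each column, each one-day range is costed at about
> 1/1,000 of the data, so the combined estimate is
> 1,000,000 x 1/1,000 x 1/1,000 = 1 row - and a join to an estimated
> one-row source is exactly where a Cartesian merge join looks cheap.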
>
>
> A complicating factor (if that scenario isn't the trigger) is that your
> query may be asking for data which is out of range - i.e. dates above the
> high value recorded for the column. If the columns actually hold date and
> time values, the number of rows for any given value is very small - and
> when a range-based query asks for data that is out of range, the
> optimizer uses an estimate based on "column = constant". I've seen the
> optimizer change its estimate from 10,000 to 3 for the query "how many
> rows in the last 15 minutes" when "the last 15 minutes" went just past the
> high value on the relevant column. In cases like this you need to fake the
> stats to ensure that the high value is always a reasonable distance (e.g.
> 24 hours) into the future. There is some sample code outlining the method
> in the comments on this blog post:
> http://jonathanlewis.wordpress.com/2006/11/29/low_value-high_value
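>
> A minimal sketch of the idea (illustrative only - the table and column
> names are taken from your query, the low value is a placeholder, and the
> blog comments have the worked version):
>
> declare
>   srec     dbms_stats.statrec;
>   datevals dbms_stats.datearray;
>   distcnt  number; density number; nullcnt number; avgclen number;
> begin
>   -- read the existing stats so only the endpoints get replaced
>   dbms_stats.get_column_stats(
>     ownname => user, tabname => 'ORDER_ITEM', colname => 'LAST_UPD',
>     distcnt => distcnt, density => density, nullcnt => nullcnt,
>     srec    => srec,    avgclen => avgclen);
>
>   -- keep a historic low value, push the high value 24 hours out
>   datevals := dbms_stats.datearray(
>     to_date('20140101','YYYYMMDD'),   -- placeholder: your real low value
>     sysdate + 1);
>   srec.epc    := 2;     -- two endpoints: low and high
>   srec.bkvals := null;  -- no histogram
>   dbms_stats.prepare_column_values(srec, datevals);
>
>   dbms_stats.set_column_stats(
>     ownname => user, tabname => 'ORDER_ITEM', colname => 'LAST_UPD',
>     distcnt => distcnt, density => density, nullcnt => nullcnt,
>     srec    => srec,    avgclen => avgclen);
> end;
> /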
>
>
> Regards
> Jonathan Lewis
> http://jonathanlewis.wordpress.com
> _at_jloracle
> ------------------------------
> *From:* oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on
> behalf of Veerabasaiah C [veeracb_at_gmail.com]
> *Sent:* 01 August 2014 00:38
> *To:* oracle-l_at_freelists.org
> *Subject:* Query Performing slow after upgrade to 11g
>
> Hi All,
>
> We recently upgraded our database from Oracle 9i to Oracle 11g.
>
> In Oracle 9i we were using the rule-based optimizer, as the application
> required it.
>
> The attached query used to finish in about 30 minutes on the old system,
> but after the upgrade it seems to hang almost forever.
>
> From the plan it is doing a "Merge Cartesian" join, which I think is
> causing the slowness. Need some help to fix this.
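>
> (For anyone reproducing this, the plan for the attached query can be
> displayed along these lines:)
>
> explain plan for
>   -- paste the attached query here in place of this placeholder
>   select ... ;
>
> select * from table(dbms_xplan.display);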
>
>
> --
> Veerabasaiah C B
>
>
>
>
--
Veerabasaiah C B
"Only put off until tomorrow what you are willing to die having left undone." - Picasso

Received on Mon Aug 04 2014 - 01:51:46 CEST
-- http://www.freelists.org/webpage/oracle-l
- application/octet-stream attachment: Query_Worked2.sql
- application/octet-stream attachment: Query_worked1.sql