Re: Improve Query sqlm_bshtdhgmsvqad

From: Sayan Malakshinov <xt.and.r_at_gmail.com>
Date: Sat, 2 Nov 2024 11:23:53 +0000
Message-ID: <CAOVevU7injviSY=avmy4wxbdG_RD+J6+v8hkVvFQ3T+f6YhLXQ_at_mail.gmail.com>



Amit,

Just to clarify:
1. My first suggestion (avoid OR) was about the hash join in line #47 (please check the filter predicates of that step). It would be more effective if LAST_UPDATE_DATE were indexed in both tables.
2. My second suggestion (avoid to_char(ID)) was about oe_order_lines_all at operation #59 + line #28.
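As an illustration of the first suggestion: when a filter predicate ORs together date columns from both sides of a join, the optimizer often cannot use an index on either column and falls back to full scans feeding a hash join. A common rewrite is OR-expansion into UNION ALL. This is only a hypothetical sketch with invented table and column names (t1, t2, :dt), not the actual query:

   -- Original shape: the OR across two tables disables index access.
   --   WHERE t1.last_update_date > :dt OR t2.last_update_date > :dt
   --
   -- UNION ALL rewrite: each branch has a single-table predicate,
   -- so an index on each LAST_UPDATE_DATE column becomes usable.
   SELECT t1.id, t2.val
     FROM t1 JOIN t2 ON t2.t1_id = t1.id
    WHERE t1.last_update_date > :dt
   UNION ALL
   SELECT t1.id, t2.val
     FROM t1 JOIN t2 ON t2.t1_id = t1.id
    WHERE t2.last_update_date > :dt
      AND LNNVL(t1.last_update_date > :dt)  -- guard so rows matching both
                                            -- branches are not duplicated

The LNNVL guard in the second branch reproduces what the CBO's own OR-expansion transformation does to keep the result set identical to the OR form.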
If you check the code, it is from this part:

   LEFT OUTER JOIN (
      APPS.OE_ORDER_LINES_ALL OE_ORDER_LINES_ALL
      INNER JOIN APPS.OE_ORDER_HEADERS_ALL OE_ORDER_HEADERS_ALL
         ON OE_ORDER_LINES_ALL.HEADER_ID = OE_ORDER_HEADERS_ALL.HEADER_ID
   )
   ON RA_CUSTOMER_TRX_LINES_ALL.INTERFACE_LINE_ATTRIBUTE6
      = TO_CHAR(OE_ORDER_LINES_ALL.LINE_ID)

I.e., at first you are joining oe_order_lines_all and oe_order_headers_all by header_id, resulting in that big hash join of 62M rows, and then you are doing the left join (#28) of those 62M rows by TO_CHAR(LINE_ID), which prevents index access. If you rewrite it without to_char (or create a function-based index on to_char(line_id)), the CBO would be able to choose index access with nested loops.
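A sketch of the two options mentioned above. Assumptions worth verifying before use: LINE_ID is a NUMBER, INTERFACE_LINE_ATTRIBUTE6 is a VARCHAR2 (as in standard EBS), and the index name below is invented for illustration:

   -- Option 1: a function-based index matching the join expression,
   -- so TO_CHAR(line_id) itself becomes indexable:
   CREATE INDEX xx_oe_lines_tochar_line_id
       ON oe_order_lines_all (TO_CHAR(line_id));

   -- Option 2: move the conversion to the other side of the join,
   -- leaving line_id bare so an existing index on it can be used:
   --   ON TO_NUMBER(ra_customer_trx_lines_all.interface_line_attribute6)
   --      = oe_order_lines_all.line_id
   -- Caution: this is safe only if interface_line_attribute6 always
   -- holds a clean numeric string for the rows being joined; otherwise
   -- TO_NUMBER raises ORA-01722 (or use DEFAULT NULL ON CONVERSION ERROR
   -- on 12.2+).

Either change gives the CBO the option of a nested-loops join with index access into oe_order_lines_all instead of building the 62M-row hash join first.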

--
Kind regards,
Sayan


On Sat, Nov 2, 2024 at 10:15 AM Jonathan Lewis <jlewisoracle_at_gmail.com>
wrote:

>
> Half your time is spent on single block reads of oe_order_lines_all at
> operation 59 - as before these are (almost certainly) the single block
> reads as Oracle follows chained and/or migrated rows. If not they are reads
> of the undo tablespace. Did you ever pursue the session activity stats to
> find out which? Almost all of those rows are discarded in the hash join at
> operation 28.
>
> The build table for operation 28 has a descendant that accounts for most of
> the rest of the time: the hash join at operation 47, which spills massively
> to disc when it builds 15M rows and probes with the result of a 230M
> row tablescan. The sad thing about this join is that Oracle's estimates are
> in the right ballpark and it STILL wants to do it. This suggests a need to
> manipulate the code to find a different path (possibly following
> suggestions from Sayan) that eliminates excessive data more efficiently.
> Again almost all of the data disappears on that join (output is 230K rows
> from 230M input).
>
> Regards
> Jonathan Lewis
>
>
> On Fri, 1 Nov 2024 at 15:40, Amit Saroha <eramitsaroha_at_gmail.com> wrote:
>
>> Hi All,
>>
>> I have another query which takes hours to run hence I am requesting that
>> you check the included SQL monitoring data and let me know if there is
>> anything we can do to optimize the run time.
>>
>> Thank you for your assistance and advice in advance.
>>
>>
>> Best Regards,
>> AMIT
>>
>

-- 
Best regards,
Sayan Malakshinov
Oracle performance tuning engineer
Oracle ACE
http://orasql.org

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Nov 02 2024 - 12:23:53 CET
