Re: Improve Query sqlm_bshtdhgmsvqad

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Sat, 2 Nov 2024 16:33:29 +0000
Message-ID: <CAGtsp8=u39r3hqfszwC25mU+9AwV4z5xTfMGCkUJsEp=y=YLJg_at_mail.gmail.com>



Lothar,

First suggestion.
Tactically I think you're right about increasing the PGA significantly - the build size is (reported as) 747MB, with max memory at 922MB. Looking at the cascade of hash joins above this one, many of which may need their build tables in place at the same time, a figure like 1.2GB might be sufficient to allow Oracle to keep this build table in memory and eliminate all the TEMP I/O. In the short term that could be a good enough fix for that part of the problem.
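As a rough sketch of the tactical fix (the 4G/8G figures below are placeholders, not recommendations - they'd need to be sized against the real workload, and remember a single serial workarea only ever gets a fraction of the target):

```sql
-- Placeholder values: size against the actual concurrent workload.
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = BOTH;

-- 12c and later: an optional hard cap on total PGA consumption.
ALTER SYSTEM SET pga_aggregate_limit  = 8G SCOPE = BOTH;
```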

Strategically, there's a predicate COMPLETE = 'Y' involved which suggests that the volume of data can only increase (more things are completed as time passes) and depending on the rate of increase it may be necessary to keep increasing the PGA.

Second suggestion:
Yes, I think you're right - though it might require a "push predicate" hint as well, and the whole to_char() thing is a little worrying since the result of the comparison could depend on the NLS settings of the session running the query. (As Sayan pointed out, "to_number(lhs), with exception handling, = rhs" is much better than "lhs = to_char(rhs)" without an explicit conversion format.) This could reduce 2.5M I/O requests to 230K (or 460K if every target row had to follow a chain).
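For what it's worth, a sketch of the safe-conversion version of the join predicate, assuming 12.2 or later (where TO_NUMBER accepts an ON CONVERSION ERROR clause, so non-numeric values of the attribute become NULL instead of raising ORA-01722); the column list is omitted:

```sql
SELECT ...
FROM       ra_customer_trx_lines_all ctl
LEFT JOIN  oe_order_lines_all        ol
       ON  ol.line_id = TO_NUMBER(ctl.interface_line_attribute6
                                  DEFAULT NULL ON CONVERSION ERROR)
```

Joining on the numeric side also leaves any index on line_id usable, which the to_char() form does not.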

Regards
Jonathan Lewis


On Sat, 2 Nov 2024 at 10:40, Lothar Flatz <l.flatz_at_bluewin.ch> wrote:

> Hi Jonathan,
>
> the hash join at operation 47 seems to spill to disk after using 1GB of
> PGA. Total temp usage of 33 GB.
> Apart from the certainly correct point that the plan needs to be fixed, it
> seems to me that it could not hurt to increase PGA substantially.
>
> Hash Join in Line 40 is a left outer join, the join criterion being:
> "RA_CUSTOMER_TRX_LINES_ALL"."INTERFACE_LINE_ATTRIBUTE6"=TO_CHAR("OE_ORDER_LINES_ALL"."LINE_ID")
>
> Probably we do not have an index on TO_CHAR("OE_ORDER_LINES_ALL"."LINE_ID"),
> thus resulting in a hash join.
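> A minimal sketch of such a function-based index (the index name here is
> hypothetical, and whether the optimizer would use it depends on the rest
> of the plan):
>
> ```sql
> CREATE INDEX oe_order_lines_line_id_fbi
>     ON oe_order_lines_all ( TO_CHAR(line_id) );
> ```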
>
> Let me know what you think.
>
> Regards
>
> Lothar
>
> Am 02.11.2024 um 11:14 schrieb Jonathan Lewis:
>
>
> Half your time is spent on single block reads of oe_order_lines_all at
> operation 59 - as before these are (almost certainly) the single block
> reads as Oracle follows chained and/or migrated rows. If not they are reads
> of the undo tablespace. Did you ever pursue the session activity stats to
> find out which? Almost all of those rows are discarded in the hash join at
> operation 28.
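> One way to check is to compare the chained-row statistic against ordinary
> rowid fetches for the session (:sid is a placeholder for the session id):
>
> ```sql
> SELECT sn.name, st.value
> FROM   v$sesstat  st
> JOIN   v$statname sn ON sn.statistic# = st.statistic#
> WHERE  st.sid = :sid
> AND    sn.name IN ('table fetch continued row', 'table fetch by rowid');
> ```
>
> A high 'table fetch continued row' count relative to 'table fetch by
> rowid' would point at chained/migrated rows rather than undo reads.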
>
> The build table for operation 28 has a descendant that accounts for most of
> the rest of the time - the hash join at operation 47, which spills massively
> to disc when it builds 15M rows, and probes with the result of a 230M
> row tablescan. The sad thing about this join is that Oracle's estimates are
> in the right ballpark and it STILL wants to do it. This suggests a need to
> manipulate the code to find a different path (possibly following
> suggestions from Sayan) that eliminates excessive data more efficiently.
> Again almost all of the data disappears on that join (output is 230K rows
> from 230M input).
>
> Regards
> Jonathan Lewis
>
>
>
> On Fri, 1 Nov 2024 at 15:40, Amit Saroha <eramitsaroha_at_gmail.com> wrote:
>
>> Hi All,
>>
>> I have another query which takes hours to run hence I am requesting that
>> you check the included SQL monitoring data and let me know if there is
>> anything we can do to optimize the run time.
>>
>> Thank you for your assistance and advice in advance.
>>
>>
>> Best Regards,
>> AMIT
>>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Nov 02 2024 - 17:33:29 CET
