Re: ASH reporting on SQL spending most of time on CPU

From: Goti <aryan.goti_at_gmail.com>
Date: Thu, 26 Aug 2021 08:55:42 +0530
Message-ID: <CAOzfMupm+z5mKWKKML70sfh3Z0R2uHcQyFToaS1hedHJdNUAYQ_at_mail.gmail.com>



Thanks Andy, Laurentiu, and Lothar for your responses.

I will definitely consider the recommendations and keep this thread updated on the progress.

Andy, to answer your question about the query I used in my first email, please find it below. Please let me know if I need to change this query.

select
       o.object_name obj,
       o.object_type otype,
       o.object_id oid,
       o.object_name oname,
       ash.sql_id,
       w.class,
       trim(round(ratio_to_report(count(*)) over () * 100, 1)) || '%' pct
from   gv$active_session_history ash,
       ( select rownum class#, class from gv$waitstat ) w,
       all_objects o
where  ash.sql_id = '&sqlid'
and    w.class# (+) = ash.p3
and    o.object_id (+) = ash.current_obj#
group by o.object_name, o.object_type, o.object_id, o.object_name,
         ash.sql_id, w.class
order by 5 desc;
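
For reference, here is a minimal sketch of the gv$sql_plan variant Andy suggests below, attributing the samples to plan lines instead of current_obj# (it assumes the cursor is still in the shared pool; the DISTINCT guards against several children sharing the same plan):

select p.id plan_line, p.operation, p.options, p.object_name,
       trim(round(ratio_to_report(count(*)) over () * 100, 1)) || '%' pct
from   gv$active_session_history ash,
       ( select distinct inst_id, sql_id, plan_hash_value, id,
                operation, options, object_name
         from   gv$sql_plan ) p
where  ash.sql_id = '&sqlid'
and    p.inst_id (+) = ash.inst_id
and    p.sql_id (+) = ash.sql_id
and    p.plan_hash_value (+) = ash.sql_plan_hash_value
and    p.id (+) = ash.sql_plan_line_id
group by p.id, p.operation, p.options, p.object_name
order by count(*) desc;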

Thanks,
Goti

On Thu, Aug 26, 2021 at 1:19 AM Laurentiu Oprea <laurentiu.oprea06_at_gmail.com> wrote:

> Can you try a sql patch with hints:
>
> LEADING(@"SEL$45F1E8F4" "MSM"@"SEL$7" "T"@"SEL$6" "CMP"@"SEL$9"
> "TP"@"SEL$5" "CT"@"SEL$5" "C"@"SEL$8" "OLD1"@"SEL$1"
> "VW_SQ_2"@"SEL$65D2478C") INDEX_RS_ASC(@"SEL$087C8491" "C"@"SEL$8"
> ("CUSTOMER"."CUST_ID")) LEADING(@"SEL$58798622" "T"@"SEL$4" "TP"@"SEL$3" "C"@"SEL$2"
> "MSM"@"SEL$2") LEADING(@"SEL$087C8491" "T"@"SEL$14" "TP"@"SEL$13" "C"@"SEL$12"
> "MSM"@"SEL$12")
>
>
>
> On Wed, 25 Aug 2021 at 21:25, Andy Sayer <andysayer_at_gmail.com> wrote:
>
>> Hi Goti,
>>
>> What query did you use to return the results you shared in the first
>> email? The reason I ask is that it looks like you are returning the
>> current_obj# as the object, but this is not valid when the event is null.
>> You should be joining to the v$sql_plan row and using that. It's easy to
>> go down the wrong rabbit hole based on misleading information.
>>
>> The join conditions between customer and market_segment_mapping are
>> quite interesting:
>>
>> INNER JOIN terr.market_segment_mapping MSM
>>   ON (    ( C.customer_group = MSM.customer_group
>>             AND C.industry_code = MSM.industry_code
>>             AND C.industry_code_5 = MSM.industry_code_5 )
>>        OR ( C.customer_group = MSM.customer_group
>>             AND C.industry_code = MSM.industry_code
>>             AND MSM.industry_code_5 IS NULL )
>>        OR ( C.customer_group = MSM.customer_group
>>             AND MSM.industry_code IS NULL
>>             AND MSM.industry_code_5 IS NULL ) )
>>   AND MSM.eff_date <= :B1
>>   AND MSM.end_date > :B1
>> (taken from the NOT IN subquery)
>>
>> The plan says you're joining to market_segment_mapping in the NOT IN
>> subquery using an index on MARKET_SEGMENT_ID: this, on average, finds 60
>> rows per loop, which escalates to 4 million rowids in the 5-minute sample.
>> There is an extra join condition that is repeated in all of the OR
>> branches - C.customer_group = MSM.customer_group. If you had an index on
>> market_segment_mapping which covered both of these conditions, you should
>> find this effort is reduced quite a lot (you might need to factorize out
>> this condition for the optimizer - a sketch follows below).
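>>
>> (A minimal sketch of that factoring, using the columns from the join
>> above - the common customer_group predicate moves outside the OR:)
>>
>> INNER JOIN terr.market_segment_mapping MSM
>>   ON C.customer_group = MSM.customer_group
>>   AND (    ( C.industry_code = MSM.industry_code
>>              AND C.industry_code_5 = MSM.industry_code_5 )
>>        OR ( C.industry_code = MSM.industry_code
>>             AND MSM.industry_code_5 IS NULL )
>>        OR ( MSM.industry_code IS NULL
>>             AND MSM.industry_code_5 IS NULL ) )
>>   AND MSM.eff_date <= :B1
>>   AND MSM.end_date > :B1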
>>
>> In the main subquery, you're spending a lot of CPU time doing the hash
>> join for this condition too. You can see that it is using COMPANY_ID and COUNTRY
>> to create the hash buckets; the CUSTOMER_GROUP condition is evaluated as
>> a filter rather than an access predicate, meaning you are comparing more
>> rows than necessary to each other and probably using more memory to do it.
>> Here, all you need to do is factorize out the condition. If that is not
>> enough, then you may want to put effort into expanding out the other
>> conditions you are OR'ing between (sketched below).
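>>
>> (A schematic sketch of that manual expansion - the select list is
>> illustrative, not taken from your query. The three OR branches are
>> mutually exclusive, since an equality can never be true when the MSM
>> column is NULL, so UNION ALL does not duplicate rows:)
>>
>> SELECT C.cust_id, MSM.market_segment_id
>> FROM customer C
>> INNER JOIN terr.market_segment_mapping MSM
>>   ON C.customer_group = MSM.customer_group
>>   AND C.industry_code = MSM.industry_code
>>   AND C.industry_code_5 = MSM.industry_code_5
>> WHERE MSM.eff_date <= :B1 AND MSM.end_date > :B1
>> UNION ALL
>> SELECT C.cust_id, MSM.market_segment_id
>> FROM customer C
>> INNER JOIN terr.market_segment_mapping MSM
>>   ON C.customer_group = MSM.customer_group
>>   AND C.industry_code = MSM.industry_code
>>   AND MSM.industry_code_5 IS NULL
>> WHERE MSM.eff_date <= :B1 AND MSM.end_date > :B1
>> UNION ALL
>> SELECT C.cust_id, MSM.market_segment_id
>> FROM customer C
>> INNER JOIN terr.market_segment_mapping MSM
>>   ON C.customer_group = MSM.customer_group
>>   AND MSM.industry_code IS NULL
>>   AND MSM.industry_code_5 IS NULL
>> WHERE MSM.eff_date <= :B1 AND MSM.end_date > :B1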
>>
>> There is a worse index used on line 46 of the plan, though: here each
>> row driving this loop causes 8 index range scans on average (due to the
>> number of partitions it needs to read from), and then the filter leads to
>> about 1000 rows per partition - in the 5-minute sample you obtain 447
>> million rowids that all need to be accessed in TERR_POSTAL. Eventually
>> this gets filtered down to 60898 rows (the number of rows from the driving
>> row source). Judging by the table name, the postal_code join filter
>> probably provides extremely good selectivity, so an index on that will
>> probably give you better performance. If you need to include additional
>> columns in the index to provide this selectivity (maybe terr_id and
>> country?) then do so. Also consider whether it's worth making it a
>> global index - if postal_code is supposed to be unique in terr_postal, then
>> it would be sensible to make it a global unique index.
>>
>> tl;dr
>>
>> - Change both joins between customer and market_segment_mapping so
>> customer_group is outside the parentheses.
>> - Create an index on market_segment_mapping
>> (market_segment_id, customer_group).
>> - Create an index on TERR_POSTAL (terr_id, postal_code) - sketch DDL
>> for both indexes below.
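>>
>> (Sketch DDL, with index names of my own invention; LOCAL for TERR_POSTAL
>> assumes it is partitioned as the plan suggests:)
>>
>> CREATE INDEX msm_segment_custgrp_ix
>>   ON terr.market_segment_mapping (market_segment_id, customer_group);
>>
>> CREATE INDEX terr_postal_terr_pc_ix
>>   ON terr_postal (terr_id, postal_code) LOCAL;
>>
>> -- or, if postal_code really is unique in terr_postal, a global unique index:
>> CREATE UNIQUE INDEX terr_postal_pc_ux
>>   ON terr_postal (postal_code);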
>>
>>
>> Hope this helps,
>> Andrew
>>
>>
>> On Wed, 25 Aug 2021 at 14:39, Lothar Flatz <l.flatz_at_bluewin.ch> wrote:
>>
>>> OK, the short answer is that there should be a condition applied in step
>>> 26 on Terr_Postal which is estimated as too selective.
>>> It is possible that this is the source of your trouble. Find out why the
>>> estimate is wrong and correct the stats if possible.
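>>>
>>> (For instance, a quick cross-check and re-gather - the owner name 'TERR'
>>> here is an assumption:
>>>
>>> select table_name, num_rows, last_analyzed
>>> from   dba_tables
>>> where  table_name = 'TERR_POSTAL';
>>>
>>> begin
>>>   dbms_stats.gather_table_stats(
>>>     ownname    => 'TERR',          -- assumed owner
>>>     tabname    => 'TERR_POSTAL',
>>>     method_opt => 'for all columns size auto',
>>>     cascade    => true);
>>> end;
>>> /
>>> )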
>>> Sorry that I am a bit short, but I am under some time pressure.
>>>
>>> Regards
>>>
>>> Lothar
>>>
>>> Am 25.08.2021 um 13:12 schrieb Goti:
>>>
>>> Thanks, Laurentiu, for responding. Please find below a link to the SQL
>>> Monitor report. It was taken 5 minutes after the SQL started. The SQL ran
>>> for more than 5 hours, so we had to cancel it. I don't see the SQL in the
>>> RTSM history either, so I am not sure how useful this will be.
>>>
>>>
>>> https://gist.githubusercontent.com/aryangoti/eae0776d88d46361c2c60d0d6b083680/raw/ec1ac6a6359ca3f99c6f7216850dfa09103b6d78/gistfile1.txt
>>>
>>> Thanks,
>>> Goti
>>>
>>>
>>> On Wed, Aug 25, 2021 at 4:07 PM Laurentiu Oprea <
>>> laurentiu.oprea06_at_gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> Can you attach a sql monitor report?
>>>>
>>>>
>>>> On Wed, 25 Aug 2021 at 12:58, Goti <aryan.goti_at_gmail.com> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> We have a SQL performance issue, and as per ASH the SQL is spending
>>>>> most of its time on CPU. I am not sure which operation is taking the most
>>>>> time. Can someone guide me on how to troubleshoot what is causing the SQL
>>>>> to spend its time on CPU?
>>>>>
>>>>> SQL_ID         EVENT   SESSION  MODULE            PCT
>>>>> -------------  ------  -------  ----------------  -------
>>>>> fyfnh1pnxjg2d  ON CPU           PL/SQL Developer  91.9%   <<<<<<<<<<<<
>>>>>
>>>>>
>>>>> OBJ                        OTYPE            OID     ONAME                      SQL_ID         CLASS       PCT
>>>>> -------------------------  ---------------  ------  -------------------------  -------------  ----------  ------
>>>>> XIE1CUST_TERR              INDEX PARTITION  106842  XIE1CUST_TERR              fyfnh1pnxjg2d  data block  0%
>>>>> XFK1PARTITION_TERR_POSTAL  INDEX PARTITION  107430  XFK1PARTITION_TERR_POSTAL  fyfnh1pnxjg2d  data block  .2%
>>>>> XFK1PARTITION_TERR_POSTAL  INDEX PARTITION  107435  XFK1PARTITION_TERR_POSTAL  fyfnh1pnxjg2d              95.5%
>>>>> XIE1CUST_TERR              INDEX PARTITION  106843  XIE1CUST_TERR              fyfnh1pnxjg2d  data block  1.6%
>>>>> XIE1CUST_TERR              INDEX PARTITION  106841  XIE1CUST_TERR              fyfnh1pnxjg2d  data block  .1%
>>>>> XIE4CUST_TERR              INDEX PARTITION  106911  XIE4CUST_TERR              fyfnh1pnxjg2d  free list   0%
>>>>>                                                                                fyfnh1pnxjg2d              0%
>>>>> XIE1CUST_TERR              INDEX PARTITION  106839  XIE1CUST_TERR              fyfnh1pnxjg2d  data block  2.3%
>>>>> XIE1CUST_TERR              INDEX PARTITION  106838  XIE1CUST_TERR              fyfnh1pnxjg2d  data block  .3%
>>>>>
>>>>> Execution Plan Details:
>>>>>
>>>>> https://gist.githubusercontent.com/aryangoti/7f0bc85cbe6df372e488deecbfdf30ef/raw/40598552b0a5c47c6857c86b06eabbca2d5daf8a/gistfile1.txt
>>>>>
>>>>> Thanks,
>>>>> Goti
>>>>>
>>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Aug 26 2021 - 05:25:42 CEST
