Re: Higher Read response in 19C
Date: Sat, 17 Sep 2022 21:30:49 +0530
Message-ID: <CAKna9VaZbNTVgmt1oUJaukAmqtC1s1QQGJm5bVZ9G8Mf8xNOXA_at_mail.gmail.com>
Thank you Pap, Mladen. I believe we are using ASMM, not AMM; I will
double-check.
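
One way to confirm: AMM is in effect only when memory_target is non-zero;
with ASMM, memory_target is 0 and sga_target is set. A minimal check (a
sketch):

SELECT name, value
  FROM v$parameter
 WHERE name IN ('memory_target', 'memory_max_target',
                'sga_target', 'pga_aggregate_target',
                'workarea_size_policy');
-- AMM:  memory_target > 0
-- ASMM: memory_target = 0, sga_target > 0
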
We want to find the root cause, as it must be impacting many SQLs. But yes,
as regards helping that one odd query, I agree a manual work area with a 2GB
hash area size for that specific session may help. But I wanted to understand
whether this can be done through a hint that we can push via a profile,
without needing a code change. I know such a thing is not possible in 11.2,
but is it possible in 19c to push a hint that sets the hash area size for
that session/query?
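
For reference, the mechanism 19c offers for attaching a hint without a code
change is a SQL patch (DBMS_SQLDIAG.CREATE_SQL_PATCH has been a public API
since 12.2). Whether OPT_PARAM will actually honor a work area parameter
such as hash_area_size is exactly the open question and would need testing;
a sketch, with a hypothetical sql_id:

DECLARE
  l_patch VARCHAR2(128);
BEGIN
  -- The sql_id and the OPT_PARAM argument below are hypothetical and
  -- untested; verify on a test system that the hint is honored at all.
  l_patch := DBMS_SQLDIAG.CREATE_SQL_PATCH(
               sql_id    => 'abcd1234efgh5',
               hint_text => q'[OPT_PARAM('hash_area_size' 2147483647)]',
               name      => 'hash_area_size_patch');
END;
/
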
On Sat, 17 Sep 2022, 9:22 pm Pap, <oracle.developer35_at_gmail.com> wrote:
> If it's just one odd query whose response time is significantly impacted,
> you may try setting the work area size to manual, with a hash area size for
> that specific session, to minimize the temp spill.
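>
> A minimal sketch of that (values in bytes; hash_area_size tops out just
> under 2GB):
>
> ALTER SESSION SET workarea_size_policy = MANUAL;
> ALTER SESSION SET hash_area_size = 2147483647;  -- maximum allowed value
> -- run the problem query here, then revert:
> ALTER SESSION SET workarea_size_policy = AUTO;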
>
> On Sat, 17 Sep 2022, 7:57 pm Lok P, <loknath.73_at_gmail.com> wrote:
>
>> I see from the IO stats summary below that, for the same 4-hour AWR
>> window, the tempfile small read response is ~1.14ms on 11g vs 26.84ms on
>> 19c. So is there some different handling of temp reads in 11.2 vs 19c? Or
>> is it just that the small reads are being served from hard disk rather
>> than flash, for some other reason or because of the way 19c deals with the
>> same underlying X8M storage cells?
>>
>> *On 11g:*
>>
>> IOStat by Filetype summary
>> 'Data' columns suffixed with M,G,T,P are in multiples of 1024; other
>> columns suffixed with K,M,G,T,P are in multiples of 1000.
>> Small Read and Large Read are average service times.
>> Ordered by (Data Read + Write) desc
>>
>> Filetype Name  Reads: Data  Reqs/s    Data/s    Writes: Data  Reqs/s   Data/s    Small Read  Large Read
>> Data File      11.6T        4532.3    1.1G      800G          1773.51  75.922M   346.18us    897.65ms
>> Temp File      1.3T         1007.67   122.029M  276.4G        195.14   26.234M   1.14ms      43.33ms
>> Log File       603.3G       57.35     57.253M   901.5G        609.87   85.558M   2.04ms      45.14ms
>> Archive Log    0M           0         0M        463.5G        43.99    43.985M
>> Control File   18.4G        94.96     1.745M    1.1G          6.58     .103M     306.61us    3.38ms
>> Other          31M          0.78      .003M     0M            0        0M        1.88ms
>> TOTAL:         13.5T        5693.05   1.3G      2.4T          2629.09  231.803M  526.09us    839.93ms
>>
>>
>> *On 19C:*
>> IOStat by Filetype summary
>> 'Data' columns suffixed with M,G,T,P are in multiples of 1024; other
>> columns suffixed with K,M,G,T,P are in multiples of 1000.
>> Small Read and Large Read are average service times.
>> Ordered by (Data Read + Write) desc
>>
>> Filetype Name  Reads: Data  Reqs/s    Data/s    Writes: Data  Reqs/s   Data/s    Small Read  Large Read
>> Data File      107.4T       15779.97  10.2G     1009.6G       3532.94  95.607M   718.98us    347.66ms
>> Log File       499.5G       47.61     47.301M   996.5G        261.48   94.365M   2.78ms      59.84ms
>> Archive Log    0M           0         0M        493.9G        46.77    46.771M
>> Temp File      190.3G       37.4      18.017M   294.1G        56.65    27.853M   26.84ms     39.44ms
>> Control File   20.8G        102.93    1.972M    475M          2.81     .044M    619.77us    6.33ms
>> TOTAL:         108.1T       15967.91  10.2G     2.7T          3900.65  264.641M  763.90us    345.57ms
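>>
>> To trend this across snapshots rather than eyeballing two reports,
>> something like the following against AWR should work (a sketch, assuming
>> the standard DBA_HIST_IOSTAT_FILETYPE columns; the view's counters are
>> cumulative since instance startup, hence the LAG deltas):
>>
>> SELECT begin_interval_time,
>>        ROUND((svc - LAG(svc) OVER (ORDER BY snap_id))
>>            / NULLIF(reqs - LAG(reqs) OVER (ORDER BY snap_id), 0),
>>              2) avg_small_read_ms
>>   FROM (SELECT s.snap_id, s.begin_interval_time,
>>                i.small_read_servicetime svc,  -- documented as milliseconds
>>                i.small_read_reqs        reqs
>>           FROM dba_hist_iostat_filetype i
>>           JOIN dba_hist_snapshot s
>>             ON s.snap_id = i.snap_id
>>            AND s.dbid = i.dbid
>>            AND s.instance_number = i.instance_number
>>          WHERE i.filetype_name = 'Temp File')
>>  ORDER BY begin_interval_time;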
>>
>>
>>
>>
>>
>> On Sat, Sep 17, 2022 at 3:31 PM Lok P <loknath.73_at_gmail.com> wrote:
>>
>>> Hello Listers, post-19c migration we are seeing the response time of
>>> 'direct path read temp' jump from ~1.6ms to ~36ms, so many queries that
>>> spill to temp while performing hash joins are running slowly. We have
>>> also observed that the amount of temp spill for those queries remains the
>>> same as before. We had a 40GB PGA on 11g; on 19c we made it 60GB. We want
>>> to understand the cause of this high temp read response time and how to
>>> fix it.
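>>>
>>> A quick sanity check that the 60GB PGA is actually sufficient on 19c (a
>>> sketch against v$pgastat; a non-zero 'over allocation count' would point
>>> at an undersized target):
>>>
>>> SELECT name, value, unit
>>>   FROM v$pgastat
>>>  WHERE name IN ('aggregate PGA target parameter',
>>>                 'total PGA allocated',
>>>                 'over allocation count',
>>>                 'cache hit percentage');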
>>>
>>> Attached are the Key Instance Activity, Top Foreground Wait Events and
>>> Instance Efficiency sections from AWR for a similar load, 11g vs 19c.
>>> It's an X8M half-rack Exadata machine with the 19.3.7 image, and no
>>> storage cell level changes were made in this migration; it was only the
>>> RDBMS upgrade from 11.2.0.4 to 19.15. Looking at the AWR for a similar
>>> load period on 11g vs 19c, I am seeing the below.
>>>
>>> 1) The "Instance Efficiency Percentages" section shows the "Flash Cache
>>> Hit %" for the similar load period was 25% on 11g vs ~47% on 19c.
>>>
>>> 2) The "Top 10 Foreground Events by Total Wait Time" section shows ~21
>>> million "cell single block physical read" waits with a ~621us average
>>> wait time on 11g, and ~3.5 million "direct path read temp" waits with a
>>> ~1.68ms average wait time.
>>>
>>> But the 19c AWR shows four different statistics for "cell single block
>>> physical read":
>>>
>>> "cell single block physical read: flash cache" - ~2.5 million waits,
>>> ~6.1ms average wait time.
>>>
>>> "cell single block physical read: pmem cache" - ~10 million waits,
>>> ~1.16ms average wait time.
>>>
>>> "cell single block physical read" - ~104K waits, ~47.37ms average wait
>>> time.
>>>
>>> "cell single block physical read: RDMA" - ~33 million waits, ~69.56us
>>> average wait time.
>>>
>>> "direct path read temp" - ~373K waits, ~36.04ms average wait time.
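>>>
>>> The same breakdown can be pulled live rather than from AWR (a sketch
>>> against v$system_event; the counters are cumulative since instance
>>> startup):
>>>
>>> SELECT event, total_waits,
>>>        ROUND(time_waited_micro / NULLIF(total_waits, 0)) avg_wait_us
>>>   FROM v$system_event
>>>  WHERE event LIKE 'cell single block physical read%'
>>>     OR event = 'direct path read temp'
>>>  ORDER BY time_waited_micro DESC;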
>>>
>>> 3)"IOStat by Filetype summary" on 11g was showing ~1.3TB data read with
>>> "small read" response ~1.13ms and "large read" 43.33ms. But on 19c the read
>>> is ~190.3GB with "small read" response ~26.84ms and "large read" 39.44ms.
>>> It does point to the slow temp file read.
>>>
>>> 4) We do have the cell storage flash cache in "writeback" mode, but I am
>>> seeing "PMEMCACHE - writethrough", so I am not sure whether that is
>>> adding to this issue.
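>>>
>>> The cache modes can also be confirmed from the database side (a sketch
>>> against v$cell_config, assuming the usual XML layout of the CELL
>>> conftype):
>>>
>>> SELECT cellname,
>>>        EXTRACTVALUE(XMLTYPE(confval),
>>>                     '/cli-output/cell/flashCacheMode') flash_mode,
>>>        EXTRACTVALUE(XMLTYPE(confval),
>>>                     '/cli-output/cell/pmemCacheMode')  pmem_mode
>>>   FROM v$cell_config
>>>  WHERE conftype = 'CELL';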
>>>
>>> 5) In the "cache sizes" section I am seeing an additional NUMA Pool
>>> Size. Not sure if it's related.
>>>
>>> Thanks and Regards
>>>
>>> Lok
>>>
>>>
>>>
--
http://www.freelists.org/webpage/oracle-l

Received on Sat Sep 17 2022 - 18:00:49 CEST