Re: Higher Read response in 19C

From: Lok P <loknath.73_at_gmail.com>
Date: Thu, 22 Sep 2022 01:03:26 +0530
Message-ID: <CAKna9VYzsEeK4UV2h+RCBf7tgGMWKuV1WFZ-fTFr4pAYaoJf2A_at_mail.gmail.com>



Yes Mark, the issue still persists. What we tried was to run the query in parallel so as to influence Oracle to do more large reads. As per the SQL monitor report in the git link below, this path is reading ~328GB with ~338K read requests, i.e. ~1MB per request, which I believe counts as large reads. But in past runs it was reading ~780GB with ~2 million read requests, i.e. ~408KB per request, which was mostly small reads.

But yes, we are still not able to understand why the small-read response (mainly for the temp files) has become slower post upgrade.

https://gist.github.com/databasetech0073/3828c8c09bf6a80910e9373deabcf015
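As a quick sanity check, the per-request read sizes above, and a waits-weighted view of the four 19c single-block latencies quoted later in this thread, can be derived directly from the AWR numbers. A minimal sketch in Python; all inputs are the approximate figures from the emails:

```python
# Sanity-check arithmetic on the I/O figures quoted in this thread.
# All inputs are approximate values taken from the emails.

def avg_read_kb(total_gb, requests):
    """Average I/O request size in KB, given total GB read and request count."""
    return total_gb * 1024 * 1024 / requests

# Current run: ~328 GB over ~338K requests -> close to the 1 MB max I/O size
print(f"current run: {avg_read_kb(328, 338_000):.0f} KB/request")   # ~1018 KB
# Past runs: ~780 GB over ~2 million requests -> mostly smaller reads
print(f"past runs:   {avg_read_kb(780, 2_000_000):.0f} KB/request") # ~409 KB

# Waits-weighted 19c single-block read latency across the four AWR lines
# (waits, avg ms), to compare with the single ~0.621 ms figure on 11g.
waits_ms = [
    (2_500_000, 6.10),     # cell single block physical read: flash cache
    (10_000_000, 1.16),    # cell single block physical read: pmem cache
    (104_000, 47.37),      # cell single block physical read
    (33_000_000, 0.06956), # cell single block physical read: RDMA
]
total_waits = sum(w for w, _ in waits_ms)
weighted_ms = sum(w * ms for w, ms in waits_ms) / total_waits
print(f"19c weighted single-block latency: {weighted_ms:.3f} ms")
```

The weighted single-block figure comes out around 0.75ms, in the same ballpark as the ~0.62ms seen on 11g, which is consistent with the regression being concentrated in the temp reads rather than in single-block I/O overall.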

On Wed, Sep 21, 2022 at 9:24 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:

> Has this problem “gone away?”
>
>
>
> IF it has, then I **suspect** it may have to do with some need to do
> small reads one time in the SGA to clear this, that, or the-other-thing
> from the block.
>
>
>
> IF you get a new case, my suggestion is to read each table involved
> individually, forcing the reads through the SGA (probably with appropriate
> scheduling). Obviously don’t make this a case of Compulsive Tuning Disorder
> (CTD). (Thanks, Gaja Krishna Vaidyanatha, for naming this syndrome.)
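>
> The "force the reads through the SGA" step could be sketched roughly as
> below. This is a hypothetical example: the table name is a placeholder,
> and "_serial_direct_read" is an underscore parameter, so treat it with the
> usual care (and check with support before setting it):
>
> ```sql
> -- Hypothetical sketch: discourage serial direct path reads so a full
> -- scan pulls the blocks through the buffer cache (SGA) instead of PGA.
> ALTER SESSION SET "_serial_direct_read" = NEVER;
>
> -- Placeholder table name; a serial full scan touching every block.
> SELECT /*+ FULL(t) NO_PARALLEL(t) */ COUNT(*) FROM some_table t;
> ```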
>
>
>
> If once through the SGA is not enough, then my suspicion is wrong. But
> hauling blocks in through the PGA might repeatedly “fix” the block and/or
> require a single block read to get the correct version of the block in
> memory; and then, since it is PGA, not SGA, it cannot permanently “fix”
> the block or even share the current version of the block with other
> concurrent sessions for which it would be a useful version. I hope that
> all makes sense.
>
>
>
> It is quite important that this sort of activity does not become a
> treadmill myth. The difference between things done “once” or “after a
> major change” producing a big performance improvement, and putting that
> load on the system routinely after an impressive one-time-only
> improvement, is difficult to overstate.
>
>
>
> Good luck,
>
>
>
> mwf
>
>
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:
> oracle-l-bounce_at_freelists.org] *On Behalf Of *Lok P
> *Sent:* Monday, September 19, 2022 5:48 AM
> *To:* Willy Klotz
> *Cc:* Oracle L
> *Subject:* Re: Higher Read response in 19C
>
>
>
> Thank you so much.
>
> Actually, in our case, as I see from the historical trends in
> dba_hist_pgastat, 'total PGA inuse' never comes anywhere near the
> pga_aggregate_target we have set, which is 60GB. So I am wondering how
> that can cause the slow temp file read issue here?
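>
> The trend I am checking comes from something like the below sketch
> (against dba_hist_pgastat; the snapshot-range filter is omitted):
>
> ```sql
> -- Sketch: historical PGA usage from AWR, to compare against
> -- pga_aggregate_target (60GB in our case).
> SELECT snap_id,
>        ROUND(value / 1024 / 1024 / 1024, 2) AS pga_inuse_gb
> FROM   dba_hist_pgastat
> WHERE  name = 'total PGA inuse'
> ORDER  BY snap_id;
> ```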
>
>
>
> Also, on 11.2 we had PGA set to 40GB, and we bumped it up to 60GB during
> the 19c migration, knowing that there are additional processes that will
> consume more memory in 19c.
>
>
>
> On Mon, 19 Sep 2022, 2:13 pm Willy Klotz, <willyk_at_kbi-gmbh.de> wrote:
>
> Hi,
>
> we had similar problems with “direct path write temp” and “direct path
> read temp”, and also with “log file sync”, which has also grown by a
> factor of 10 according to your Excel sheet.
>
>
> In our environment (no Exadata), we saw this as we got more and more
> processing of XML data, and also when going from database 19.10 to 19.12
> to 19.14. Our solution was to increase pga_aggregate_target and
> pga_aggregate_limit, which solved the problem and brought the values back
> to normal. In our opinion it had to do with some type of sorting; however,
> we did not investigate further.
>
> Just my 2 cents, maybe it can help.
>
> Regards
> Willy
>
>
>
>
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
> On Behalf Of Lok P
> Sent: Saturday, 17 September 2022 12:01
> To: Oracle L <oracle-l_at_freelists.org>
> Subject: Higher Read response in 19C
> Betreff: Higher Read response in 19C
>
> Hello Listers, post-19c migration we are seeing that the response time of
> 'direct path read temp' has jumped from ~1.6ms to ~36ms, so many queries
> that read temp while performing hash joins are running slowly. We have
> also observed that the amount of temp spill remains the same as it was
> before for those queries. We had a PGA size of 40GB on 11g; on 19c we made
> it 60GB. I wanted to understand the cause of this high temp read response
> and how to fix it.
> Attached are the Key Instance Activity, Top Foreground Wait Events and
> Instance Efficiency sections from AWR for a similar load on 11g vs 19c.
> It's an X8M half-rack Exadata machine with the 19.3.7 image, and in this
> migration no storage cell level changes were made; it was only the RDBMS
> migration from version 11.2.0.4 to 19.15. Looking at the AWR for a similar
> load period on 11g vs 19c, I am seeing the below:
> 1) In the "Instance Efficiency Percentages" section, the "Flash Cache
> Hit %" for the similar load period was 25% on 11g vs ~47% on 19c.
> 2) The "Top 10 Foreground Events by Total Wait Time" section shows there
> were ~21 million "cell single block physical read" waits with an average
> wait time of ~621 microseconds on 11g, and for "direct path read temp"
> total waits were ~3.5 million with a ~1.68ms average wait time.
> But the 19c AWR shows four different stats for "cell single block
> physical read", i.e.:
> "cell single block physical read: flash cache" showing ~2.5 million waits
> and ~6.1ms average wait time;
> "cell single block physical read: pmem cache" showing ~10 million waits
> and ~1.16ms average wait time;
> "cell single block physical read" showing ~104K waits and ~47.37ms
> average wait time;
> "cell single block physical read: RDMA" showing ~33 million waits and
> ~69.56 microseconds average wait time;
> "direct path read temp" showing ~373K waits and ~36.04ms average wait
> time.
> 3) The "IOStat by Filetype summary" on 11g showed ~1.3TB of data read
> with a "small read" response of ~1.13ms and "large read" of ~43.33ms. But
> on 19c the read is ~190.3GB with a "small read" response of ~26.84ms and
> "large read" of ~39.44ms. This does point to the slow temp file read.
> 4) We do have the cell storage flash cache in "write back" mode, but I am
> seeing "PMEMCACHE - writethrough", so I am not sure if that is adding to
> this issue.
> 5) In the "Cache Sizes" section I am seeing an additional NUMA Pool Size.
> Not sure if it's related.
> Thanks and Regards
> Lok
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Sep 21 2022 - 21:33:26 CEST
