Re: Huge DBF sequential reads by KTSJ while updating securefile LOBS.

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Thu, 31 Mar 2022 17:57:35 +0100
Message-ID: <CAGtsp8kBCMMuBEEv2x0i4uzmZuAcGs1qtENn61R44Z1AN-bg2w_at_mail.gmail.com>



I don't have a convenient instance of 11.2.0.4 available for testing, but if you check these lines:

2060 @1, (W003) , STAT, non-idle wait time , 382, 75.2, , , , , 95.5 per execution
2060 @1, (W003) , STAT, segment prealloc time (ms) , 3772, 742.58, , , , , 943 per execution
2060 @1, (W003) , WAIT, db file sequential read , 3877402, 763.33ms, 76.3%, [WWWWWWWW ], 7314, 1.44k, 530.13us average wait

It looks as if you're seeing W003 doing what it's supposed to do when space is at a premium - working to pre-allocate space for the segment which is going to be growing as more rows are inserted. Quite possibly this slows the inserts down when the machine is under load, but possibly it removes the threat of even more contention that would appear if sessions had to allocate their own space as they were doing the inserts.

You might find that you got better performance if you defined the LOBs to be cached but assigned them to a relatively small recycle cache so that they didn't flood the main buffer cache. I don't know why the Wnnn slaves seem to be reading blocks before allocating the space, but (a) maybe they're reading some other data dictionary blocks, or (b) you've got some other feature enabled that has a side effect (e.g. flashback logging). Since you've got ASH data on the waits you could check the p1/p2/p3 (or current_obj#, current_file#, current_block# and current_row#) values for the db file sequential reads to see if they give you any clues.
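
As a rough sketch only (the table, column and cache size below are placeholders, so check the 11.2 SQL Reference before running anything), moving the LOB into a small recycle cache would look something like:

  alter system set db_recycle_cache_size = 256M;
  alter table owner.table_name modify lob (lob_col)
    (cache storage (buffer_pool recycle));

And for the ASH check, something like the following (assuming the slaves show up with "(W0nn)" in the program column, and that dba_extents is quick enough to query on your system) would map the single-block reads back to the segments being visited:

  select e.owner, e.segment_name, e.segment_type, count(*)
  from   v$active_session_history ash, dba_extents e
  where  ash.event   = 'db file sequential read'
  and    ash.program like '%(W0%'
  and    e.file_id   = ash.p1
  and    ash.p2 between e.block_id and e.block_id + e.blocks - 1
  group  by e.owner, e.segment_name, e.segment_type
  order  by count(*) desc;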

Regards
Jonathan Lewis

On Thu, 31 Mar 2022 at 12:35, Goti <aryan.goti_at_gmail.com> wrote:

> Hi All,
>
> Envt :11.2.0.4 on RHEL 6.5
>
> We are observing slowness in OGG replication on the target side. The
> replication itself is updating a table which has a secure file LOB. There
> are a lot of updates performed during this time (Close to 25K in an hour).
> The FG session waits for "direct path write" as it was created with the
> NOCACHE attribute. However, the ASH shows that this is internally
> triggering some space management slaves to do many single block reads.
> There are also no trace files associated with the W00* processes.
>
> Please find the TKPROF report for the OGG replicat OS process.
> https://gist.github.com/aryangoti/f49660a4bbb23c58a9f403ac9270eb7a
>
> Snapper for W0003 process.
> https://gist.github.com/aryangoti/f2cf46ebcec3920d79f4fb719c01f309
>
> ASH details (5 minutes) when the replication was slow.
> https://gist.github.com/aryangoti/6674691d7770b6eb667718589633aec5
>
> Please let me know how to troubleshoot this further.
>
> Thanks,
>
> Goti
>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Mar 31 2022 - 18:57:35 CEST
