Re: Performance issue - Library cache lock
Date: Mon, 15 Feb 2021 17:40:42 +0530
Message-ID: <CAEjw_fjBpM-QNdaRjQc3RP3RQpfZ4vZH21Y_98XJzXww56=4yg_at_mail.gmail.com>
Thank you so much.
We have the AUDIT_TRAIL parameter set to OS; I checked multiple databases and the value is OS in all of them. Is there any downside to setting it to DB? When querying DBA_STMT_AUDIT_OPTS for the application USER_NAME, I only see "CREATE SESSION" as the audited operation, nothing else. There are a few other entries in that view, but USER_NAME is NULL for those.
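For reference, this is the kind of query we ran to list the statement audit options (a minimal sketch against the standard DBA_STMT_AUDIT_OPTS view; 'APP_USER' is only a placeholder for the application schema):

  SELECT user_name, audit_option, success, failure
  FROM   dba_stmt_audit_opts
  WHERE  user_name = 'APP_USER' OR user_name IS NULL
  ORDER  BY user_name, audit_option;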
During the issue period the audit destination was not full and there was plenty of space left. To mitigate the issue the DBA team moved the audit destination, which was initially on a shared mount, to a local filesystem; for this, both nodes were shut down along with all running applications. We are not sure whether the shared mount was hanging for some other reason, which would explain why writes to it were stalling even though enough space was available.
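For completeness, the audit destination and mode on both nodes can be confirmed with something along these lines (standard GV$ view, run as a privileged user; purely illustrative):

  SELECT inst_id, name, value
  FROM   gv$parameter
  WHERE  name IN ('audit_trail', 'audit_file_dest', 'audit_sys_operations');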
Regards
Pap
On Sat, Feb 13, 2021 at 10:25 PM Mohamed Houri <mohamed.houri_at_gmail.com> wrote:
>
> Hello,
>
> *"Seems to be many of the SYS/Background processes are getting stuck with
> wait "Disk file operations I/O" now. Below is one of the sql is also stuck
> on "Disk file operations I/O" now. Wondering what the reason is? The
> P1,p2,p3 in v$session for the sql suffering from wait "Disk file operations
> I/O" showing 8,0,8"*
>
> Here is a link to an article I wrote to summarize a Disk file operations
> I/O case where the triplet (p1,p2,p3) equals (8,0,8):
> https://hourim.wordpress.com/2020/12/21/disk-file-operations-i-o/
> In my case, it turned out to be an excessive amount of actions audited by
> the database, with the audit destination set at the OS level
> (audit_trail = os). You can also, as suggested by Stefan Koehler
> <http://www.soocs.de/> in the comment section of that blog post, use
> Tanel Poder's psn with *syscall* and *filename* as input parameters to
> get the exact type of filename and operation your Disk file operations
> I/O is referring to.
>
> [oracle_at_host1 ~]$ sudo psn -G syscall,filename
>
> Linux Process Snapper v0.18 by Tanel Poder [https://0x.tools]
>
> Sampling /proc/syscall, stat for 5 seconds... finished.
>
>
>
> === Active Threads
> ==============================================================
>
> samples | avg_threads | comm | state | syscall | filename
>
> ---------------------------------------------------------------------------
>
> 13 | 1.00 | (oracle_*_cd) | Running (ON CPU) | [running] |
>
> 11 | 0.85 | (rcu_sched) | Running (ON CPU) | [running] |
>
> 1 | 0.08 | (ohasd.bin) | Running (ON CPU) | [running] |
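>
> As a side note, the meaning of those three wait parameters can be read
> straight from v$event_name (a standard view; the query below is only
> illustrative):
>
> SELECT name, parameter1, parameter2, parameter3
> FROM   v$event_name
> WHERE  name = 'Disk file operations I/O';
>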
> Best regards
>
> Mohamed Houri
>
> On Fri, Feb 12, 2021 at 16:37, Pap <oracle.developer35_at_gmail.com>
> wrote:
>
>> It seems many of the SYS/background processes are now getting stuck on
>> the wait "Disk file operations I/O". Below is one of the SQL statements
>> that is also stuck on "Disk file operations I/O". Wondering what the
>> reason is? The P1, P2, P3 values in v$session for the SQL suffering from
>> the "Disk file operations I/O" wait show 8, 0, 8 respectively, which
>> seems to indicate miscellaneous I/O. Has anybody seen such an issue?
>>
>> SELECT LAST_ARCHIVE_TIMESTAMP FROM SYS.DAM_LAST_ARCH_TS$ WHERE
>> RAC_INSTANCE# = 1 AND AUDIT_TRAIL_TYPE# = 4
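>>
>> (That statement appears to read the DBMS_AUDIT_MGMT bookkeeping table
>> behind DBA_AUDIT_MGMT_LAST_ARCH_TS; if so, the configured last-archive
>> timestamps can be checked with something like the following, purely as
>> an illustration:)
>>
>> SELECT audit_trail, rac_instance, last_archive_ts
>> FROM   dba_audit_mgmt_last_arch_ts;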
>>
>> On Fri, Feb 12, 2021 at 7:31 PM Shane Borden <sborden76_at_yahoo.com> wrote:
>>
>>> What version is this? Lots of bugs with datapump in 12.1 and 12.2.
>>> Look at MOS for any patches.
>>>
>>> Shane Borden
>>> sborden76_at_yahoo.com
>>> Sent from my iPhone
>>>
>>> On Feb 12, 2021, at 7:31 AM, Pap <oracle.developer35_at_gmail.com> wrote:
>>>
>>>
>>>
>>> We have streams_pool_size set to ~256MB.
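>>>
>>> For reference, the configured streams pool size versus what is actually
>>> allocated in the SGA can be compared with something like this (standard
>>> views; purely illustrative):
>>>
>>> SELECT name, value FROM v$parameter WHERE name = 'streams_pool_size';
>>>
>>> SELECT pool, ROUND(SUM(bytes)/1024/1024) AS mb
>>> FROM   v$sgastat
>>> WHERE  pool = 'streams pool'
>>> GROUP  BY pool;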
>>>
>>> We killed the specific session, and now one of the other sessions,
>>> which was showing "enq: TQ - DDL contention" just before, has entered
>>> the "reliable message" wait while executing the SQL below. What we are
>>> wondering about is that the export used to run daily without any issue,
>>> so why is it experiencing such an odd wait today and flooding the
>>> database with concurrency waits?
>>>
>>> BEGIN sys.kupc$que_int.create_queues(:1, :2, :3, :4, :5); END;
>>>
>>> On Fri, Feb 12, 2021 at 5:25 PM Shane Borden <sborden76_at_yahoo.com>
>>> wrote:
>>>
>>>> This is some sort of a datapump job going on. Do you have enough
>>>> streams_pool configured?
>>>>
>>>> Shane Borden
>>>> sborden76_at_yahoo.com
>>>> Sent from my iPhone
>>>>
>>>> On Feb 12, 2021, at 5:00 AM, Pap <oracle.developer35_at_gmail.com> wrote:
>>>>
>>>>
>>>>
>>>> Hello All,
>>>>
>>>> We are seeing one of our database nodes flooded with "concurrency"
>>>> waits, i.e. "library cache lock", and when going back to the start of
>>>> the wait in ASH we see that the blocking session actually seems to be
>>>> one of the export sessions, and it is from SYS too. The two SQLs
>>>> captured in ASH from that session are below. The second one is the one
>>>> spending time on the "reliable message" event, with current_obj#
>>>> showing '-1', and it is now blocking others. At the start of the
>>>> issue, that session was showing current_obj# as the index
>>>> "AQ$_PROPAGATION_STATUS_PRIMARY" for quite a long time. Are there any
>>>> known bugs associated with such an issue? This object seems to be
>>>> Streams-related, but we don't use Streams in this database, so we are
>>>> not sure where it is coming from.
>>>>
>>>> BEGIN
>>>>
>>>> SYS.KUPM$MCP.MAIN('SYS_EXPORT_TABLE_02', 'SYS', 0, 0);
>>>>
>>>> END;
>>>>
>>>> BEGIN sys.kupc$que_int.delete_queues(:1, :2); END;
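>>>>
>>>> For anyone chasing a similar pattern, the blocking chain can usually
>>>> be pulled straight from ASH with a query along these lines (standard
>>>> columns; the event filter and time window are illustrative):
>>>>
>>>> SELECT sample_time, inst_id, session_id, sql_id, event,
>>>>        blocking_inst_id, blocking_session, current_obj#
>>>> FROM   gv$active_session_history
>>>> WHERE  event IN ('library cache lock', 'reliable message')
>>>> AND    sample_time > SYSDATE - 1/24
>>>> ORDER  BY sample_time;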
>>>>
>>>>
>>>>
>
> --
>
> Houri Mohamed
>
> Oracle DBA-Developer-Performance & Tuning
>
> Visit My Blog - <http://www.hourim.wordpress.com/>
>
> Let's Connect - *Linkedin Profile
> <http://fr.linkedin.com/pub/mohamed-houri/11/329/857/>*
>
> My Twitter - MohamedHouri <https://twitter.com/MohamedHouri>
>
>
--
http://www.freelists.org/webpage/oracle-l

Received on Mon Feb 15 2021 - 13:10:42 CET