Re: Event : latch: ges resource hash list

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Mon, 4 Oct 2021 10:48:20 +0100
Message-ID: <CAGtsp8mFyu50Z5sySdqv69+mCuQxYhuqS3D=85opxK6to3RYdw_at_mail.gmail.com>



 GES is the global enqueue service (which isn't about buffer cache), so it looks as if you are doing something that requires coordination of some locking event. (And the code path is followed regardless of how many instances are up.)

I would take a couple of snapshots of v$enqueue_stat over a short period of time to see if any specific enqueue is being acquired very frequently; but some global enqueue gets don't get recorded in that view - so it may show nothing interesting. And I would do the same (snapshots) of v$rowcache to see if any of the dictionary cache objects are subject to a high rate of access. Either of these might give you some clue about what's going on.
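A minimal sketch of that snapshot approach (run from SQL*Plus as a suitably privileged user; the 60-second interval is arbitrary, and dbms_lock.sleep needs an explicit grant on some systems - you'd then diff the two sets of figures by hand or with a scratch table):

-- First snapshot of global enqueue activity
select eq_type, total_req#, total_wait#, cum_wait_time
from   v$enqueue_stat
order by total_req# desc;

-- Wait a short interval, then repeat the query above and compare
exec dbms_lock.sleep(60)

-- Same idea for the dictionary cache: look for rows where "gets"
-- climbs rapidly between snapshots
select parameter, gets, getmisses
from   v$rowcache
order by gets desc;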

Historic issues:

sequences being accessed very frequently and declared with NOCACHE (or very small CACHE) or with ORDER.

Some bugs relating to tablespace handling, undo handling, and VPD that result in massive overload on dc_tablespaces, dc_users, dc_objects, dc_rollback_segments (though I can't remember if any of them were still around in 12.2).
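As a quick check for the sequence issue above, something like the following against dba_sequences would list the likely suspects (the cache-size threshold of 20 is just an illustrative cut-off, not a hard rule):

-- Sequences declared NOCACHE (cache_size = 0), with a very small
-- cache, or with ORDER - all candidates for heavy enqueue traffic
select sequence_owner, sequence_name, cache_size, order_flag
from   dba_sequences
where  cache_size < 20
or     order_flag = 'Y';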

Regards
Jonathan Lewis

On Mon, 4 Oct 2021 at 10:23, Krishnaprasad Yadav <chrishna0007_at_gmail.com> wrote:

> Hi Experts ,
>
> There is a situation which is causing the event "latch: ges
> resource hash list" in the database. CRS/RDBMS is 12.2 on Solaris.
>
> The DB is a 2-node RAC, but due to application compatibility node 2
> always remains down. However, on node 1 we see a lot of queries
> waiting for "latch: ges resource hash list" (no specific query - all
> of them).
>
> On node 2 the complete CRS stack is down, so I am not sure why this
> event is popping up on node 1.
>
> In parallel, CPU on node 1 also remains high - more than 80% most of
> the time.
>
> Any light about this event will be helpful .
>
>
>
> Regards,
> Krishna
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Oct 04 2021 - 11:48:20 CEST