Re: "library cache: bucket mutex X"

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Thu, 24 Jun 2021 22:58:39 +0100
Message-ID: <CAGtsp8kk_yGsEc9bzRNkAvN-H9tdBNOP5Lf0Z-bMfdNB4-FBEw_at_mail.gmail.com>



On the other hand, editing your previous statement (referencing v$sqlarea) to join to v$sqlstats, or even a simple

select sta.sql_id, sta.executions from V$SQLSTATS sta where sta.sql_id in (select ses.sql_id from V$session ses)

I get a plan which ends up with a nested loop by indexed access into the relevant x$.

However, if I use hints to set the cardinality of the X$ under v$session to a few thousand (which, given your hardware, you may have), I end up with a "large-scale" plan that does a hash semi-join with a full tablescan of v$sqlstats as the build table. So you may need some heavy hinting to get efficient plans.
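
For example, something along these lines (the specific hints are an
illustrative sketch, not a tested recommendation):

select /*+ leading(ses) use_nl(sta) */
       sta.sql_id, sta.executions
from   v$session ses, v$sqlstats sta
where  sta.sql_id = ses.sql_id;

i.e. fix v$session as the driving row source and force a nested loop into
v$sqlstats so that the fixed-index access path (as in the single sql_id
example) can be used.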

Regards
Jonathan Lewis

On Thu, 24 Jun 2021 at 22:22, kyle Hailey <kylelf_at_gmail.com> wrote:

>
> Making some progress - working remotely so going slow.
>
> Looks like using "SQL_ID in ( )" on V$SQLSTATS causes a full
> table scan, so this takes over 3 minutes:
>
> select sql_id, executions from V$SQLSTATS where sql_id in (
> '5w8u1cvrsc038', '4tppmg7r9s0bj');
>
> and this takes 0.01 seconds
>
> select sql_id, executions from V$SQLSTATS where sql_id = '5w8u1cvrsc038';
>
> The latter uses an index
>
> ---------------------------------------------------------
> | Id | Operation | Name |
> ---------------------------------------------------------
> | 0 | SELECT STATEMENT | |
> |* 1 | FIXED TABLE FIXED INDEX| X$KKSSQLSTAT (ind:1) |
>
> and the former does a full table scan
>
> ------------------------------------------
> | Id | Operation |Name |
> ------------------------------------------
> | 0 | SELECT STATEMENT | |
> |* 1 | FIXED TABLE FULL| X$KKSSQLSTAT |
>
> ---------------------------------------------------------------------
>
> Wondering if there is a way to force the index usage.
>
> Putting the SQL_ID values in a table and joining on the table does a
> nested loop with an index lookup.
>
> select v.sql_id, executions from V$SQLSTATS v, sqlids i where
> v.sql_id=i.sql_id;
> ---------------------------------------------------------
> | Id | Operation | Name |
> ---------------------------------------------------------
> | 0 | SELECT STATEMENT | |
> | 1 | NESTED LOOPS | |
> | 2 | TABLE ACCESS FULL | SQLIDS |
> |* 3 | FIXED TABLE FIXED INDEX| X$KKSSQLSTAT (ind:1) |
> ---------------------------------------------------------
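>
> For reference, a minimal sketch of how the sqlids table could have been
> populated (column name and length are assumed, not the real DDL):
>
> create table sqlids (sql_id varchar2(13));   -- sql_id is 13 characters; length assumed
> insert into sqlids values ('5w8u1cvrsc038');
> insert into sqlids values ('4tppmg7r9s0bj');
> commit;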
>
>
>
> Kyle
>
>
>
> On Thu, Jun 24, 2021 at 3:43 AM Jonathan Lewis <jlewisoracle_at_gmail.com>
> wrote:
>
>>
>> Life used to be so simple when the library cache was protected by latches
>> and the maximum number of buckets was 131,072.
>>
>> Extrapolating from the good old days, I'd guess that your session is
>> waiting to acquire a shared lock on a library cache hash bucket, but
>> another session is holding the mutex in exclusive mode because it wants to
>> add or remove something in the bucket (on the chain). Your session needs to
>> hold shared because the join on hash value (or sql_id if you modified the
>> query) has to find the hash bucket/chain and then walk through it to find
>> the matching item and it doesn't want the chain to make or break while it's
>> walking it. (I would be a little surprised if your session was trying to
>> acquire Exclusive, but I could be wrong).
>>
>> There are three factors that might make the waits more likely:
>> a) if the shared pool / library cache (x$kglob) is very large and the
>> bucket count is still only 131,072 then there may be a lot of very long
>> chains that need to be searched
>> b) if the execution plan does a "full tablescan" of x$kglob instead of an
>> index by the hash value / sql_id (can't remember the relevant x$ column
>> names off the top of my head) then Oracle may be walking x$kglob by object
>> chain (not one bucket at a time), which means it will hit a bucket mutex
>> (possibly twice) for every single item in x$kglob. (This is what used to
>> happen with the latches in the old days) - and the pure volume of activity
>> might be sufficient to cause all these waits.
>> c) If the memory target is that big it does suggest a big, busy system
>> with lots of users - so maybe there's a lot of other user activity
>> constantly disrupting the library cache, which is where the competition for
>> the mutex comes from.
>>
>> Obviously you need to be able to see some report from v$mutex_sleep and
>> v$mutex_sleep_history to get a better insight into what might be happening;
>> and I would be interested to know if your very large memory target has
>> automatically resulted in a far larger number of library cache hash
>> buckets. I'm not sure whether it's possible to find the latter without
>> doing a library cache dump or querying x$kglob directly.
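>>
>> As a starting point, something like this against v$mutex_sleep might show
>> where the sleeps are concentrated (the 'Library Cache' filter is an
>> assumption about how the bucket mutexes are reported there):
>>
>> select mutex_type, location, sum(sleeps), sum(wait_time)
>> from   v$mutex_sleep
>> where  mutex_type like 'Library Cache%'
>> group by mutex_type, location
>> order by sum(sleeps) desc;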
>>
>> Regards
>> Jonathan Lewis
>>
>>
>>
>> On Wed, 23 Jun 2021 at 20:49, kyle Hailey <kylelf_at_gmail.com> wrote:
>>
>>> Anyone seen issues with "library cache: bucket mutex X"?
>>> Running Oracle 19c with 128 CPUs and 3.9TB memory.
>>> Database running OK, but queries on v$sqlarea and v$sqlstats wait on "library
>>> cache: bucket mutex X", for example:
>>>
>>> SELECT sess.sid,
>>> sess.username,
>>> sqla.optimizer_mode,
>>> sqla.hash_value,
>>> sqla.address,
>>> sqla.cpu_time,
>>> sqla.elapsed_time,
>>> sqla.sql_text
>>> FROM v$sqlarea sqla, v$session sess
>>> WHERE sess.sql_hash_value = sqla.hash_value
>>> AND sess.sql_address = sqla.address;
>>>
>>>
>>> No other queries run into this wait, only queries on v$sqlarea and
>>> v$sqlstats (haven't tried v$sql but would expect the same issue there).
>>> I don't have access to the database so can't really poke around, thus
>>> wondering if anyone else has experiences with "library cache: bucket
>>> mutex X".
>>> CPU and memory look fine on the host (of course, with 128 CPUs and 3.8TB
>>> memory). Also wondering if the large memory could be involved. Don't have
>>> the shared pool size. Do know the DB is using memory_target at about 3TB.
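>>>
>>> For whenever someone does get access, a quick sketch of how the shared
>>> pool size could be pulled (a guess at the query, nothing site-specific):
>>>
>>> select pool, round(sum(bytes)/1024/1024/1024, 1) gb
>>> from   v$sgastat
>>> group by pool
>>> order by sum(bytes) desc;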
>>>
>>>
>>> Kyle
>>>
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Jun 24 2021 - 23:58:39 CEST
