RE: Global Cache and Enqueue Services statistics
Date: Wed, 1 Aug 2012 08:30:05 -0400
Message-ID: <304F58144267C5439E733532ABC9A3A1159957F6_at_USA0300MS02.na.xerox.net>
Mark,
Most of the waits are coming from a few concurrent manager jobs. Currently, the concurrent managers are using the load-balanced TNS string and therefore, multiple processes from the same concurrent manager are running on different RAC nodes. One of the things that I am looking into is to tie managers to specific nodes so that all processes from a certain manager would run on the same node. This should help in reducing the interconnect traffic to some extent. I understand that tables in EBS are shared between modules and therefore, achieving a high degree of affinity may not be possible.
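For reference, a rough sketch of how the manager sessions' current spread across instances could be checked (the username and grouping below are assumptions, not from this thread, and would need adjusting for the environment):

    -- Sketch: count sessions per RAC instance for the APPS account,
    -- broken down by module, to see how the concurrent manager
    -- processes are spread across nodes. Filters are illustrative only.
    SELECT inst_id,
           module,
           COUNT(*) AS sessions
    FROM   gv$session
    WHERE  username = 'APPS'
    GROUP  BY inst_id, module
    ORDER  BY inst_id, module;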
Thanks,
Amir
-----Original Message-----
From: Mark Burgess [mailto:mark_at_burgess-consulting.com.au]
Sent: Wednesday, August 01, 2012 1:55 AM
To: Hameed, Amir
Cc: gajav_at_yahoo.com; Oracle List List; kaygopal_at_gmail.com
Subject: Re: Global Cache and Enqueue Services statistics
Amir,
Is there a particular EBS process that you are seeing this problem with?
Regards,
Mark
On 01/08/2012, at 2:53 PM, K Gopalakrishnan <kaygopal_at_gmail.com> wrote:
> Amir,
>
> The problem appears to be with buffer busy waits at the GC layer. The
> 40 ms range for gc buffer busy acquire is way too high; would you care
> to send the AWR to me?
>
> Without going into additional details, do you think you can test the
> same workload with _buffer_busy_wait_timeout=2? Just use ALTER
> SYSTEM.
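A sketch of what that command might look like (not part of the original message; underscore parameters must be double-quoted, and this assumes the parameter is dynamically modifiable, as "Just use ALTER SYSTEM" implies):

    -- Sketch only: set the hidden parameter for the duration of the
    -- test across all RAC instances. Underscore parameters must be
    -- enclosed in double quotes. Assumes the parameter can be changed
    -- dynamically; hidden parameters are normally changed only under
    -- Oracle Support guidance.
    ALTER SYSTEM SET "_buffer_busy_wait_timeout" = 2 SCOPE=MEMORY SID='*';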
>
> -Gopal
>
>
> On Mon, Jul 30, 2012 at 3:47 PM, Hameed, Amir <Amir.Hameed_at_xerox.com>
> wrote:
>> Hi Gaja,
>> Below are the top-5 wait events, which are the same on all nodes:
>>
>> Top 5 Timed Foreground Events
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>                                                            Avg
>>                                                           wait   % DB
>> Event                                 Waits     Time(s)   (ms)   time Wait Class
>> ------------------------------ ------------ ----------- ------ ------ ----------
>> db file sequential read          54,365,842      34,374      1   32.3 User I/O
>> gc buffer busy acquire              461,651      18,904     41   17.8 Cluster
>> enq: TX - row lock contention        11,506      15,269   1327   14.4 Applicatio
>> DB CPU                                           11,476          10.8
>> gc current block busy               255,945      10,747     42   10.1 Cluster
>>
>> I am also investigating whether the test was run the way it should
>> have been, given the 'enq: TX - row lock contention' event. I have also
>> identified the statements that were suffering from the 'gc' waits shown
>> above. The underlying segments of those statements have 'freelist groups'
>> defined as '1'. This is an EBS system which has been around for a long
>> time. It was upgraded from 11.0.3 to 11i several years ago, which is most
>> likely why 'freelist groups' for most of the standard segments is '1'.
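A sketch of how the freelist settings on the hot segments could be confirmed (the segment names below are placeholders, not from the thread; note that FREELISTS/FREELIST GROUPS only apply under manual segment space management, so it is also worth checking whether the tablespaces are ASSM):

    -- Sketch: check freelist and freelist group settings for the
    -- segments behind the statements showing the gc waits. The segment
    -- names are placeholders; substitute the objects identified from
    -- the workload.
    SELECT owner,
           segment_name,
           segment_type,
           tablespace_name,
           freelists,
           freelist_groups
    FROM   dba_segments
    WHERE  segment_name IN ('SOME_HOT_TABLE', 'SOME_HOT_INDEX')
    ORDER  BY owner, segment_name;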
> --
> http://www.freelists.org/webpage/oracle-l
>
>
--
http://www.freelists.org/webpage/oracle-l