
Re: resolving buffer busy waits

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Mon, 15 Sep 2003 10:20:08 +0100
Message-ID: <bk4072$ihg$1$8302bc10@news.demon.co.uk>

Most of your BBWs are 120s, with some 130s.

120 is 'I want the block in current mode (CUR), but someone else is reading it from the file system'

130 is 'I want a consistent read (CR) copy of the block, but someone else is reading it from the file system'

In other words, waiting for someone else to complete a read request, not waiting for someone who is updating the same block.
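
You can confirm this while it is happening, because v$session_wait shows the detail directly - on 8i the p1/p2/p3 of 'buffer busy waits' are file#, block# and reason code. A minimal sketch (untested here, run from a DBA account):

    select sid, p1 file#, p2 block#, p3 reason_code
    from   v$session_wait
    where  event = 'buffer busy waits';

    -- map a file#/block# pair back to its segment
    -- (dba_extents can be slow to scan on a big database)
    select owner, segment_name, segment_type
    from   dba_extents
    where  file_id = &file#
    and    &block# between block_id and block_id + blocks - 1;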

There are a few oddities in the statspack you've printed here.

  1. You have 942,923 db file sequential reads (192M centiseconds of wait time), but the tablespaces you think have the BBW problem are responsible for only about 230,000 of them.
  2. Your average read times for these tablespaces appear to be massive - normally I assume that this is a bug where Oracle is counting time in an unsuitable way - but maybe you really do have a peculiar I/O problem (see the sketch after this list).
  3. Your numbers and times on enqueues are huge - which may be end-user code and UL (user-defined) locks, but might be an issue with distributed transactions.
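
For point 2, you can check per-file read times straight from v$filestat rather than waiting for the next statspack report. A rough sketch (readtim is in centiseconds, so multiply by 10 for milliseconds; the figures need timed_statistics = true):

    select df.name,
           fs.phyrds,
           fs.readtim,
           round(fs.readtim * 10 / greatest(fs.phyrds, 1), 1) avg_read_ms
    from   v$filestat fs, v$datafile df
    where  fs.file# = df.file#
    order  by avg_read_ms desc;

These are since-startup averages, so take two snapshots a few minutes apart and diff them to get the interval figure.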

In your case, I would tend to assume that the pure I/O load had to be addressed first, as fixing it might fix the BBW as a side-effect. I would also investigate why your enqueues are so expensive, because that might be a totally separate problem that also needs to be addressed. I would not, initially, spend much time trying any of the 'hints and tips' fixes for buffer busy waits.
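
For the enqueues, the p1 of the 'enqueue' wait encodes the lock name and mode, so you can see which enqueue types are involved. The usual decode is something like this (a sketch of the standard published formula, from memory - check it against your documentation):

    select sid,
           chr(to_char(bitand(p1, -16777216)) / 16777215) ||
           chr(to_char(bitand(p1,  16711680)) / 65535)    lock_type,
           to_char(bitand(p1, 65535))                     lock_mode
    from   v$session_wait
    where  event = 'enqueue';

A steady stream of TX waits points back at the application code; DX waits would support the distributed transaction possibility.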

My first step would be to produce some snapshots against V$session_event - statspack is a global summary, and can easily mislead by merging two or three distinct problems into one set of numbers that give an inappropriate message.
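
The crudest version is just two snapshots and a diff - a sketch only (the snapshot table is my own scratch table, and bear in mind that sids get re-used between connections):

    create table se_snap as
    select sysdate snap_time, sid, event, total_waits, time_waited
    from   v$session_event;

    -- ... let the workload run for a few minutes, then:
    select e.sid, e.event,
           e.total_waits - s.total_waits  waits,
           e.time_waited - s.time_waited  time_cs
    from   v$session_event e, se_snap s
    where  e.sid   = s.sid
    and    e.event = s.event
    and    e.time_waited > s.time_waited
    order  by time_cs desc;

That tells you which sessions are suffering which waits, instead of one server-wide total.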

--
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk

  The educated person is not the person
  who can answer the questions, but the
  person who can question the answers -- T. Schick Jr


One-day tutorials:
http://www.jlcomp.demon.co.uk/tutorial.html

____Finland__September 22nd - 24th
____Norway___September 25th - 26th
____UK_______December (UKOUG conference)

Three-day seminar:
see http://www.jlcomp.demon.co.uk/seminar.html
____USA__October
____UK___November


The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


"Casey" <cdyke_at_corp.home.nl> wrote in message
news:8bc6b8d7.0309141409.2e440f6a_at_posting.google.com...

> folks,
>
> some numbers below from a 2hr statspack interval. got an app that
> isn't getting as much out of oracle as it probably should. app itself
> has lots of problems, but cannot be changed ...
>
> some facts:
> - solaris 2.8, 64bit
> - oracle 8.1.7.3 64bit
> - 200000 8k block buffers (1.6 gig sga)
> - sun v880, 6 cpus, 20gig ram
> - app consists of lots of inserts, updates, deletes on a few tables
> - rowcounts for 2 "main" tables 15million rows
> - other main tables 1-2million rows
> - UFS file systems for datafiles, 6 disk stripe
> - redo spread over 4 separate disks, again UFS (nothing else on them
> though)
> - lots of cpu idle time
> - hot tables have freelists based on itc numbers from block dumps of
> "hot" blocks
> - pctfree/pctused not too close on said tables
> - only default buffer pool (nothing kept)
> - majority of waits in both hot tablespaces are code 120
> (v$session_wait.p3)
> - but there are a good chunk of 130's too ...
> - no appropriate "test" env to test ideas ... bugger ...
>
> options as i see them, in some shaky order:
> - main tablespace w/bbw issues has only 1 table in it. create keep
> buffer pool and keep that table. means a big jump in the buffer
> cache. success w/that can be the spur for a better keep/recycle
> strategy. of course, expanding the buffer cache will likely introduce
> other "pressures" ...
> - recreate tables w/pctused/pctfree to minimize rows per block
> - rebuild server entirely using raw or veritas w/directio
> - rebuild database entirely w/smaller block size
> - ... or something like that ...
>
> so w/out a "real" test env i need/want to tread carefully. i expect
> resolving the bbw issue should, hopefully, cascade up and reduce the
> two above it. am curious to hear any suggestions from anyone who has
> seen and conquered a similar situation.
>
> TIA,
>
> casey ...
>
> ==============
> statspack info
> ==============
> Top 5 Wait Events
> ~~~~~~~~~~~~~~~~~
>                                                        Wait   % Total
> Event                                 Waits       Time (cs)   Wt Time
> ------------------------------ ------------ --------------- ---------
> db file sequential read             942,923     192,201,526     68.51
> enqueue                             164,245      48,919,715     17.44
> buffer busy waits                   365,078      34,640,465     12.35
> log file sync                       313,459       2,913,889      1.04
> db file parallel write                5,479         715,340       .25
> -------------------------------------------------------------
>
>                                                Total Wait   Avg   Waits
> Event                        Waits   Timeouts   Time (cs)  (ms)    /txn
> ------------------------- --------- --------- ----------- ------ ------
> db file sequential read     942,923         0 ###########   2038    3.1
> enqueue                     164,245   157,876  48,919,715   2978    0.5
> buffer busy waits           365,078   334,822  34,640,465    949    1.2
> log file sync               313,459         0   2,913,889     93    1.0
> db file parallel write        5,479         0     715,340   1306    0.0
> log file parallel write     184,341         0     321,256     17    0.6
> db file scattered read        2,142         0      43,208    202    0.0
> log buffer space              3,586         0      41,637    116    0.0
> latch free                   27,725    25,288      20,704      7    0.1
> =====
>
> Tablespace IO Stats
>                     Av      Av       Av                     Buffer  Av Buf
>       Reads    Reads/s  Rd(ms)  Blks/Rd   Writes  Writes/s   Waits  Wt(ms)
> ----------- ---------- ------- -------- -------- --------- ------- -------
> {BAD_TS_1}
>     191,633         27 9,353.1      1.0  276,708        38 316,191  ######
> {BAD_TS_2}
>      44,413          6 2,675.2      1.7  240,166        33  31,486   683.0