Oracle-L -> Re: buffer busy wait for insert session
Hi, Gopal/Alex,
We have the table and its indexes at FREELISTS 23, which is pretty high.
I did some analysis today, and found the contention is mainly on the table
itself, not the indexes.
I grepped the trace file gall1_ora_10686.trc for the buffer busy wait lines and loaded them into a table temp_bbw with sqlldr: column p1 holds the wait's P1 (the file id), p2 holds P2 (the block id), and p3 holds P3 (the reason code).
Then I ran this SQL to find out which segments suffer most from buffer busy waits:
select segment_name, segment_type,
       decode(p3, 120, 'READ', 220, 'WRITE') reasoncode,
       count(*)
  from temp_bbw a, dba_extents b
 where a.p1 = b.file_id
   and a.p2 between b.block_id and b.block_id + b.blocks - 1
 group by segment_name, segment_type, decode(p3, 120, 'READ', 220, 'WRITE');
SEGMENT_NAME                   SEGMENT_TYPE       REASONCODE   COUNT(*)
------------------------------ ------------------ ---------- ----------
GALLERY                        TABLE              READ            27537
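For reference, the grep-and-sqlldr step could have used a control file roughly along these lines (the input file name and whitespace layout are assumptions, not from the post):

```
LOAD DATA
INFILE 'gall1_bbw.dat'
INTO TABLE temp_bbw
FIELDS TERMINATED BY WHITESPACE
(p1, p2, p3)
```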
On 5/21/05, K Gopalakrishnan <kaygopal_at_gmail.com> wrote:
> Zhu:
>
> How many freelists have you defined for the segment? Most of the waits
> are coming from freelist merge operations and get/record free space
> operations. If I am right, you have defined the default number of
> freelists, or too few, for that segment.
>
> You can counter this in three ways:
> 1. Increase PCTFREE for those segments and lower PCTUSED, so that
> blocks rarely come back onto the freelist.
> 2. Increase FREELISTS/FREELIST GROUPS, so that the additional freelist
> blocks reduce the header contention.
> 3. Convert the segment to ASSM, where the bitmap blocks (BMBs) manage
> space instead of freelist link/unlink operations (there is no such
> thing as a freelist in ASSM, btw), thus reducing the contention for
> header blocks.
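For concreteness, the three remedies could be sketched in DDL roughly like this (segment name taken from the GALLERY output above; the numeric values and the tablespace name are illustrative, not recommendations from the thread):

```sql
-- 1. Leave more free space per block, and make blocks return to the
--    freelist less eagerly (values illustrative):
ALTER TABLE gallery PCTFREE 30 PCTUSED 40;

-- 2. Add process freelists (changing FREELIST GROUPS normally needs
--    a segment rebuild):
ALTER TABLE gallery STORAGE (FREELISTS 32);

-- 3. Move the segment to an ASSM tablespace; the indexes go UNUSABLE
--    and must be rebuilt after the move (tablespace name hypothetical):
ALTER TABLE gallery MOVE TABLESPACE assm_data;
```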
>
> Let me know offline how that goes.
>
> Regards,
> Gopal
>
>
> On 5/20/05, zhu chao <zhuchao_at_gmail.com> wrote:
> > Hi, Gopal,
> > The result of your SQL looks like this:
> > module                  ADDR             INDX INST_ID       WHY0    WHY1  WHY2 OTHER WAIT
> > ----------------------- ---------------- ---- ------- ---------- ------- ----- ----------
> > ktswh85: ktsfblink      0000000FDC2482D0   93       1     844661       0     0        303
> > kduwh02: kdusru         0000000FDC249C90  299       1    7403209       0     0        351
> > kduwh01: kdusru         0000000FDC249C70  298       1  160094785       0     0        369
> > ktswh14: ktsnbk         0000000FDC247B30   32       1     846449       0     0        441
> > kdiwh127: kdislink      0000000FDC24A7D0  389       1    6512919       0     0        584
> > kdiwh133: kdisdelete    0000000FDC24A890  395       1    3919793       0     0        649
> > kdiwh07: kdifbk         0000000FDC249EF0  318       1 1.2173E+11       0     0        798
> > kdowh00: kdoiur         0000000FDC249A70  282       1    8076703       0     0       1228
> > ktswh37: ktsufl         0000000FDC247D70   50       1      15550       0     0       1233
> > kdiwh23: kdiins         0000000FDC24A0F0  334       1    4915161       0     0       2772
> > kdswh02: kdsgrp         0000000FDC249AD0  285       1 3.9589E+10       0     0       2939
> > ktuwh03: ktugnb         0000000FDC248AF0  158       1  251757436       0     0       3749
> > ktbwh00: ktbgtl         0000000FDC247970   18       1     671746       0     0       4984
> > kdiwh126: kdisparent    0000000FDC24A7B0  388       1          0 6689768 29850       5985
> > kcbwh1: kcbchg1         0000000FDC247770    2       1  596397933       0     0      11275
> > kdiwh161: kdifind       0000000FDC24AA30  408       1    6214341       0     0      21099
> > ktswh34: ktsgsp         0000000FDC247D30   48       1      70742       0     0      59984
> > kdiwh130: kdisle        0000000FDC24A830  392       1    6689771       0     0      72246
> > kdtwh01: kdtgrs         0000000FDC249B90  291       1  545508036       0     0      99979
> > ktswh72: ktsbget        0000000FDC248130   80       1  281638372       0     0     121225
> > ktuwh01: ktugus         0000000FDC248AB0  156       1  848098452       0     0     191407
> > ktuwh02: ktugus         0000000FDC248AD0  157       1  652447066       0     0     587564
> > ktswh39: ktsrsp         0000000FDC247DB0   52       1  216480406       0     0     923447
> > kdiwh22: kdifind        0000000FDC24A0D0  333       1 1.0040E+10       0     0    1525588
> > kdswh05: kdsgrp         0000000FDC249B30  288       1 2.0280E+10       0     0   16670027
> > ktswh28: ktsgsp         0000000FDC247C70   42       1  281634373       0     0   20941585
> > ktswh07: ktsmfl         0000000FDC247A70   26       1  128646167       0     0   64707802
> > ktswh06: ktsmfl         0000000FDC247A50   25       1  128488810       0     0   70708363
> > ktswh05: ktsmfl         0000000FDC247A30   24       1  637034019       0     0  467257374
> >
> > 458 rows selected.
> >
> > Elapsed: 00:00:00.28
> > 18:48:40 SQL> l
> >   1  select wh.kcbwhdes "module",
> >   2         sw.*
> >   3    from x$kcbwh wh,
> >   4         x$kcbsw sw
> >   5   where wh.indx = sw.indx
> >   6  --and sw."OTHER WAITS" > 0
> >   7* order by 8
> >
> > The result is difficult to understand. Can you interpret it?
> > Thanks
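As an aside, one way to make a 458-row x$kcbsw dump easier to scan is to rank modules by waits relative to their total call counts; a sketch against the same x$ tables (the ratio expression is illustrative, not from the thread):

```sql
select wh.kcbwhdes "module",
       sw.why0 + sw.why1 + sw.why2 "calls",
       sw.other_wait "waits",
       round(100 * sw.other_wait /
             nullif(sw.why0 + sw.why1 + sw.why2, 0), 4) "wait pct"
  from x$kcbwh wh, x$kcbsw sw
 where wh.indx = sw.indx
   and sw.other_wait > 0
 order by 4 desc;
```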
> >
> > On 5/20/05, K Gopalakrishnan <kaygopal_at_yahoo.com> wrote:
> > > Zhu:
> > >
> > > Is that table partitioned? I would go for HASH partitioning, as you
> > > need to spread the incoming rows across multiple blocks, and this
> > > will reduce the contention for 'concurrent write' BBWs.
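A hash-partitioned copy of the table could be created roughly like this (the partition key column and partition count are hypothetical, chosen only to illustrate spreading inserts across segments):

```sql
CREATE TABLE gallery_hashed
PARTITION BY HASH (gallery_id)  -- hypothetical partition key
PARTITIONS 16
AS SELECT * FROM gallery;
```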
> > >
> > > Can you run the following query and identify which functions cause the
> > > buffer busy waits?
> > >
> > > select wh.kcbwhdes "module",
> > >        sw.why0 "calls",
> > >        sw.why2 "waits",
> > >        sw.other_wait "caused waits"
> > >   from x$kcbwh wh,
> > >        x$kcbsw sw
> > >  where wh.indx = sw.indx
> > >    and sw.other_wait > 0
> > >  order by sw.other_wait;
> > >
> > > Have a nice day !!
> > > ------------------------------------------------------------
>
>
> Best Regards,
> K Gopalakrishnan
> Co-Author: Oracle Wait Interface, Oracle Press 2004
> http://www.amazon.com/exec/obidos/tg/detail/-/007222729X/
>
--
Regards
Zhu Chao
www.cnoug.org

--
http://www.freelists.org/webpage/oracle-l

Received on Sat May 21 2005 - 00:13:56 CDT