Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: db block size, too big wasting buffer?

Re: db block size, too big wasting buffer?

From: Howard J. Rogers <dba_at_hjrdba.com>
Date: Tue, 25 Jun 2002 18:37:37 +1000
Message-ID: <af9a61$ori$1@lust.ihug.co.nz>


Sometimes I give up. You try to kill myths, and then someone with alleged expertise propagates one yet again (not you, but your source who recommended 32K blocks).

There's physics at work here (let's leave raw devices out of it for a moment, shall we?). There is a file system buffer. Its size is known. You need to match that buffer size, or pay the penalty. For AIX and Linux, the buffer size is 4K. For most other Unixes, the buffer size is 8K.

And that's all there is to it. Anything else is witch-doctoring, ignorance, or both.

(Like I said, raw is a different matter).
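As a rough sanity check of the file system buffer claim, the preferred I/O size a file system reports can be queried via statvfs. A minimal sketch (the mount point is an assumption; point it at the volume holding your data files):

```python
# Query the file system's reported preferred I/O transfer size -- the
# figure the post says db_block_size should match (or be a multiple of).
import os

st = os.statvfs("/")  # assumption: substitute your datafile mount point
print("preferred I/O size (f_bsize):", st.f_bsize)
print("fragment size (f_frsize):", st.f_frsize)
```

statvfs is POSIX-only; on Linux, ext filesystems typically report 4096 here, which lines up with the 4K figure above.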

Regards
HJR

"sg" <s4v4g3_at_europe.com> wrote in message news:3D178B57.3010408_at_europe.com...
> Hi
>
> I read this from
>
>

http://www.dbasupport.com/forums/showthread.php?threadid=23841&pagenumber=2
>
> there was an argument between some people: some said big is good and
> some said not. One said this
>
> ================================
> Posted by Sweetie
>
> in OLTP a small number of rows is fetched; if, let's say, a query has to
> return 10 rows from 10 blocks, then instead of 80K Oracle has to read
> 320K into memory
>
> Has anyone benchmarked this? Or is this just theory you peeps are talking
> about?
>
> This is not correct.
> Take an example
>
> 8K BLOCK SIZE:
> A table has 200 rows. Its block size is 8k. Assume that 20 rows can fit
> into one block. So, the total number of blocks needed is 10 blocks.
> The total memory needed to bring the data is 10*8 = 80K.
>
> 32K BLOCK SIZE:
> With 32K block size, the same table needs only 3 blocks, because each
> block stores 80 rows.
> The total memory needed to bring the data is 3*32 = 96K.
>
> I agree there is a small wastage in memory if the block size is set
> to 32K; this wastage can be offset by the real benefits reaped in
> disk space management. However, the Oracle SGA is designed to keep
> blocks in the buffer cache as long as possible, until they age out
> based on the LRU algorithm.
>
> One can argue: what if the 10 rows come from 10 different blocks of
> 32K size? Then there is more memory wastage than there would be with
> an 8K size.
>
> It is very difficult to answer that type of question.
>
> In my experience, I have not seen any true OLTP system in the world.
> Even if you say OLTP, many real-time reports are running against the
> OLTP system too. OLTP performance takes a dive when the real-time
> reports are running. So it is always a hybrid system.
>
> That is why I recommend 32K block size.
> =============================================
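The arithmetic in the quoted argument, and the worst case its critic raises next, can be checked in a few lines. A sketch, keeping the post's simplification that rows per block scale linearly with block size:

```python
import math

def full_scan_kb(total_rows, rows_per_8k_block, block_kb):
    # Simplification from the post: rows per block scale linearly with
    # block size (per-block header overhead is ignored).
    rows_per_block = rows_per_8k_block * block_kb // 8
    blocks = math.ceil(total_rows / rows_per_block)
    return blocks * block_kb

def scattered_fetch_kb(rows_touched, block_kb):
    # OLTP worst case: each fetched row lives in a different block,
    # and Oracle always reads whole blocks.
    return rows_touched * block_kb

print(full_scan_kb(200, 20, 8))    # 80  (10 blocks x 8K)
print(full_scan_kb(200, 20, 32))   # 96  (3 blocks x 32K)
print(scattered_fetch_kb(10, 8))   # 80
print(scattered_fetch_kb(10, 32))  # 320
```

The full-scan case costs 16K more at 32K blocks; the scattered single-row case costs four times as much, which is the "10 rows, 320K" figure quoted at the top of the thread.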
>
> the other said
>
> =============================================
> 8K BLOCK SIZE:
> A table has 200 rows. Its block size is 8k. Assume that 20 rows can fit
> into one block. So, the total number of blocks needed is 10 blocks.
> The total memory needed to bring the data is 10*8 = 80K.
>
> That would be a perfect world if you assume your query is going to fetch
> rows from a single block, which is almost never true; so this is not an
> argument, it is just a fact. Plus, in OLTP, since almost all reads are
> indexed, we are reading even more blocks into memory!
>
> It is correct that many OLTP systems are hybrid, but that cannot justify
> a big block size being good; big enough is enough.
>
> Plus, even if we make the block big and say that it minimizes the I/O,
> we are only telling ourselves that I/O is minimized at the Oracle level.
> Is this true at the OS level? Are we minimizing at the OS level? If not,
> what is the use of minimizing at the Oracle level? At the end of the day
> Oracle runs on an OS and is limited by the OS's limitations.
>
> Assuming most UNIX/NT I/Os are 64K, I do agree that at some point I/O is
> minimized, because we make Oracle do less work by reading just 2 blocks
> if we have a block size of 32K versus reading 8 blocks if the block size
> were 8K. But can anyone tell me what the difference is between the system
> calls of 2 Oracle data block reads and 8 Oracle data block reads? I would
> try it, but I am sorry, I don't have any 32K-block-size databases. Since
> the moderators are recommending this size, they must have vast experience
> with these 32K-block-size databases, and they can probably give it a
> shot, try it, and post some benchmark results.
>
> I think we should run strace at the OS level to benchmark the system
> calls with different block sizes, and sar for the system I/O. Don't you
> think so?
>
> ===========================================
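The strace suggestion can be previewed without a 32K database: read the same file in 8K and in 32K requests and count the read() calls, which is essentially what strace -c would tally. A sketch using an ordinary scratch file as a stand-in for a data file:

```python
import os
import tempfile

def count_reads(path, chunk_bytes):
    # One os.read() is one read(2) system call; count calls until EOF.
    calls = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while os.read(fd, chunk_bytes):
            calls += 1
    finally:
        os.close(fd)
    return calls

# A 256K scratch file stands in for a small table segment.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 256 * 1024)
    path = f.name

print(count_reads(path, 8 * 1024))   # 32 calls at an 8K "block size"
print(count_reads(path, 32 * 1024))  # 8 calls at a 32K "block size"
os.unlink(path)
```

This only counts call overhead on a cold, sequential read; it says nothing about buffer-cache wastage, which is the other half of the thread's argument.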
>
>
> So who's right?
>
>
> Sybrand Bakker wrote:
>
> > "sg" <s4v4g3_at_europe.com> wrote in message
> > news:3D175E5F.6090202_at_europe.com...
> >
> >>Hi
> >>
> >>I am about to create an OLTP database with a 16KB block size on 8.1.7.
> >>I have always read not to make the db block size too big or we would be
> >>wasting the data buffer. Is this true?
> >>Some say that db block size is another Oracle myth and it does not
> >>really matter; is this a correct statement?
> >>
> >>
> >
> > Fact: in an OLTP system you mostly read individual records.
> > Fact: Oracle will always read a complete block.
> >
> > Question: do you think a 16k block size will fit your OLTP scenario?
> >
> > Also: please define 'some'. Hearsay? Knowledge?
> >
> > Regards
> >
> >
> > --
> > Sybrand Bakker
> > Senior Oracle DBA
> >
> > to reply remove '-verwijderdit' from my e-mail address
> >
Received on Tue Jun 25 2002 - 03:37:37 CDT

