Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: db_block_size and performance

From: Tommy <tcusan_at_yahoo.com>
Date: Mon, 19 Oct 1998 10:38:29 -0700
Message-ID: <362B8724.66BF1325@yahoo.com>


Hi,

You have one chance to set this parameter: before you create the database. You cannot change it
on an existing database. Oracle delivers its software with a default block size of 2K. In most
cases, a 2K block size is not optimal for a large database; your choice is between 4K and 8K.

Sizing guidelines:

  - Use 8K as the block size for Oracle Applications, SAP, and other large commercial
    application packages.

  - Use 8K blocks if you optimize your database for batch activities or large reports where
    large amounts of data are frequently read in by full table scans.

  - For databases stored in a UNIX filesystem, use the same size for Oracle blocks as the
    filesystem blocks. For example, a default filesystem with an 8K block size should have 8K
    blocks for the database. If you choose a smaller size for the database, every database
    write has to read the larger filesystem block first and then write it back to disk
    (read before write).

  - If your database has tables with a row length of more than 4K, set the db_block_size
    parameter to 8K.

  - For applications which use direct reads/writes with rows of less than 4K, set this
    parameter to 4K.

  - If you are running on a UNIX server, you can determine the system page size by issuing
    the command "pagesize" at your UNIX command line.
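If you prefer to query it programmatically rather than from the shell, a short Python sketch can read the same value the pagesize command prints (the use of os.sysconf here is my illustration, not anything from the original post):

```python
import os

# Query the kernel's memory page size -- the same value the UNIX
# `pagesize` command prints at the command line.
page_size = os.sysconf("SC_PAGE_SIZE")

print(f"System page size: {page_size} bytes ({page_size // 1024}K)")
```

On most UNIX systems of that era (and today) this prints 4096 or 8192; Solaris on UltraSPARC, the poster's platform, used 8K pages.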

The reason you want your db_block_size equal to the system page size is to optimize the use
of virtual memory. The following two examples should illustrate why this setting is optimal:

1 - You set your db_block_size to (pagesize - N). Now more than one db_block fits in one page
    of memory. Should your computer need to swap out a page, it will write more than one
    db_block (i.e., all of one and part of another). If your machine then needs to access the
    data block which was partially written to disk, it may have to read back in the part that
    was written out, even if that part was very small. This causes extra disk I/O, which
    results in an overall slowdown of your system.

2 - You set your db_block_size to (pagesize + N). Now, when the system writes out a page, it
    does not write out an entire db_block. This means you can have a db_block which spans
    multiple system pages. To read a db_block, the database may need to force the OS to read
    back the whole db_block, and if it spans multiple pages, that is multiple pages which must
    be read in from disk and written out to disk.

If your db_block_size equals the system page size, then accessing a db_block costs at most
one round of I/O: one page read from disk and one page write out.
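The arithmetic behind both cases can be sketched in a few lines of Python (the helper name pages_touched is my illustration, not anything Oracle provides):

```python
def pages_touched(block_size, page_size, offset=0):
    """Number of memory pages a db_block starting at the given byte offset spans."""
    first_page = offset // page_size
    last_page = (offset + block_size - 1) // page_size
    return last_page - first_page + 1

page = 8192  # e.g., an 8K system page

# Case 1: db_block < pagesize. A block that starts mid-page can straddle
# a page boundary, so swapping one page out splits the block across disk.
print(pages_touched(6144, page, offset=6144))   # 2 pages

# Aligned case: db_block == pagesize. One block maps to exactly one page.
print(pages_touched(8192, page, offset=0))      # 1 page

# Case 2: db_block > pagesize. Every block spans multiple pages, so a
# single block read forces multiple page reads.
print(pages_touched(16384, page, offset=0))     # 2 pages
```

When the block size matches the page size and blocks are page-aligned, every db_block access maps to exactly one page of I/O, which is the alignment the post recommends.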

Peter Shankey wrote:

> I read that changing the db_block_size could see a increase in speed of
> up to 40%. What are your experinces with changing the db_block_size? I
> realize I will have to re-create the WHOLE db and I am wondering if it
> is really worth the effort. My db_block_size it set to the default of
> 2048. The db is running on solaris 2.5.1
Received on Mon Oct 19 1998 - 12:38:29 CDT
