Re: DB_FILE_MULTIBLOCK_READ_CNT (Problem restated)
In article <3352EAD5.294D_at_sprintmail.com>,
jimnash_at_sprintmail.com wrote:
>
> My apologies for a poorly written original post. Thanks to those
> who replied, but allow me to restate the problem.
>
> The most critical aspect of this database is I/O to a single, large
> (3GB) table. Most queries to this table are SELECT statements involving
> perhaps 25% of the rows. The goal is to increase throughput to this
> table. For the time being, OPS and PQO are not options, but disk
> striping is.
>
> The table will be striped at the file system level (Veritas), assume
> across three disks, with a stripe width of 64K. This implies that a
> 192K-byte read request from Oracle will be split by the file system
> into 3 separate 64K-byte read requests, one directed to each of the
> disks in the logical volume.
>
> Existing parameters of note are as follows:
>
> * OS max I/O size = 64K bytes
> * DB_BLOCK_SIZE = 4096
> * DB_FILE_MULTIBLOCK_READ_CNT = 16
>
> The question is how DB_FILE_MULTIBLOCK_READ_CNT should be changed, if at
> all, to make best use of striping.
>
> The reason I question the use of the "standard" configuration of these
> values ((DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_CNT) = max I/O size
> for the O/S) is that the file system understands that the logical
> volume on which the table (tablespace file) resides is not a single
> disk, but three disks. Is it reasonable, therefore, to ask Oracle to
> read three times the "max I/O size for the O/S", so as to keep those
> babies humming?
>
> The question may also relate to Oracle readahead processing. As the
> DBMS understands that a large, contiguous part of the table needs to
> be scanned, it could simultaneously post multiple nowait reads to the
> table, hoping to "move the queue" to the disk. If so, that would argue
> for leaving DB_FILE_MULTIBLOCK_READ_CNT as it is, letting Oracle "tune
> the disks" itself.
>
> - Jim
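To make the arithmetic behind the question explicit, here is a sketch using only the figures quoted above (the variable names are illustrative, not init.ora syntax):

    db_block_size = 4096
    db_file_multiblock_read_cnt = 16
    stripe_width = 64 * 1024                # Veritas stripe unit, one per disk

    # Current "standard" configuration: one multiblock read = one OS-sized I/O.
    current_read = db_block_size * db_file_multiblock_read_cnt   # 65536 = 64K, one stripe

    # Proposed read of one stripe from each of the three disks at once:
    proposed_read = 3 * stripe_width                              # 196608 = 192K
    proposed_cnt = proposed_read // db_block_size                 # 48 blocks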
Hi Jim,
I have one important question: are you using filesystem files or raw devices? It sounds as if you are using filesystem files. If that's the case, you will not get any I/O operation larger than the blocksize of your filesystem (typically 8K). If you're not using filesystem files, you will not get any I/O operation larger than 64K, in which case your current settings are appropriate.
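If it is unclear which case applies, a minimal check (a Python sketch, assuming a POSIX system; the paths shown are hypothetical) is to look at the file type of each datafile -- raw devices appear as character or block special files, filesystem datafiles as regular files:

    import os
    import stat

    def datafile_kind(path):
        # Raw devices are character/block special files; filesystem
        # datafiles are ordinary regular files.
        mode = os.stat(path).st_mode
        if stat.S_ISCHR(mode) or stat.S_ISBLK(mode):
            return "raw device"
        if stat.S_ISREG(mode):
            return "filesystem file"
        return "other"

    # Hypothetical examples:
    # print(datafile_kind("/dev/rdsk/c0t1d0s4"))
    # print(datafile_kind("/u01/oradata/bigtab01.dbf"))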
I would think that the overhead involved in caching a 192K request and splitting it up would outweigh any gain from issuing one I/O request instead of three.
One thing that's important to realize is that Oracle will not parallelize your query unless you're using PQO. It sounds as if that's what you really want and need.
Regardless, I wouldn't try to perform I/O from Oracle any larger than the maximum single I/O operation for your system: 64K if raw, 8K otherwise (or the block size of your filesystem, if not 8K). Certainly, if you are trying to do 192K I/O on a filesystem whose block size is 8K, you are caching excessively.
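In round numbers (a sketch of the arithmetic only, in Python; 8K is the assumed filesystem block size from above, 4096 the DB_BLOCK_SIZE from the question):

    db_block_size = 4096

    def read_cnt_for(max_single_io_bytes):
        # Keep one multiblock read no larger than one physical I/O.
        return max_single_io_bytes // db_block_size

    print(read_cnt_for(64 * 1024))   # raw devices: 16, i.e. the current setting
    print(read_cnt_for(8 * 1024))    # 8K filesystem blocks: 2

    # A 192K (48-block) read would be broken back down into 3 or 24 physical
    # I/Os respectively, which is where the caching/splitting overhead comes in.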
Brenda