Re: Raid 50
since I'm not a BAARF member yet, I can still discuss this. Mogens, you could fix that :)
are you sure that they aren't using RAID 5 sets with 5 or 9 members?
assumptions:
1. the database block size is a power of 2
(e.g. 2048, 4096, 8192, 16384 bytes).
2. the operating system will request an IO size that is a power of 2 (e.g. 64, 128, 256, 512, 1024 kilobytes (KB))
3. a user process within the database instance will
request a read of size db_block_size *
db_file_multiblock_read_count during full table scans,
fast full scans, etc.
It would then follow that for a database block size of 8 KB, with a dbfmbrc of 32, that 8192 bytes * 32 = 256 KB would be requested. If the stripe size = the OS IO request size, only one read by the RAID controller should be required to satisfy that request.
(on tests that I ran on w2k adv svr, values of dbfmbrc of up to 128 were honored, meaning that if a read of 1 MB was requested, it was answered with exactly that size by the operating system. No discounting was necessary for tables where reading the segment header was effectively one read out of 100).
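The read-size arithmetic above can be sketched in a few lines of Python (the variable names mirror the Oracle init parameters, but this is just my illustration, not anything Oracle ships):

```python
# Multiblock read size, using the example values from above.
db_block_size = 8192                    # bytes (assumption 1: a power of 2)
db_file_multiblock_read_count = 32

read_size_bytes = db_block_size * db_file_multiblock_read_count
print(read_size_bytes // 1024, "KB")    # 256 KB, matching a 256 KB stripe
```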
with a RAID 5 volume of 4 disks using a 256 KB stripe size, how are you going to store those 8 KB blocks?
fragmented across physical devices.
4 disks - 1 parity disk = 3
256 KB     1 block
------  *  ------- = 10.66 blocks / disk / stripe
3 disks     8 KB
2 blocks out of every 32 in a stripe are going to be split across physical disks. that is just plain ugly.
what a pain when you only have single block IO being
performed (index range scan) and you have to access
multiple physical drives to obtain a single block, one
sixteenth of the time.
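To see where that split rate comes from, here is a throwaway Python helper (my own, purely illustrative) that counts how many blocks in one stripe straddle a chunk boundary when the stripe is divided evenly across the data disks:

```python
def split_blocks_per_stripe(stripe_kb, data_disks, block_kb=8):
    # Per-disk chunk; with 3 data disks and a 256 KB stripe this is
    # fractional (85.33 KB), so chunk boundaries cut through blocks.
    chunk_bytes = stripe_kb * 1024 / data_disks
    blocks = stripe_kb // block_kb            # blocks laid across the stripe
    splits = 0
    for boundary in range(1, data_disks):     # internal chunk boundaries
        if (boundary * chunk_bytes) % (block_kb * 1024) != 0:
            splits += 1
    return splits, blocks

# 4-disk RAID 5 (3 data disks), 256 KB stripe:
print(split_blocks_per_stripe(256, 3))   # (2, 32) -> 2 of 32 blocks split
# 5-disk RAID 5 (4 data disks), 256 KB stripe:
print(split_blocks_per_stripe(256, 4))   # (0, 32) -> clean alignment
```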
(insert claim of gigantic expensive RAID cache here).
RAID 5 is supposed to be good at high concurrency IO,
lots of single block requests, as each drive can be
controlled individually. If you lose that by splitting
blocks across disks - I think that you've lost any
potential advantage for RAID 5 (aside from the giant
expensive RAID cache).
with a RAID 5 volume of 5 disks for a 256 KB stripe, how are you going to store those 8 KB blocks?
256 KB     1 block
------  *  ------- = 8 blocks / disk / stripe
4 disks     8 KB
8 blocks per disk per stripe, of course, with the parity being stored on the 5th member, falling on whatever disk's turn it is to store parity.
with a RAID 5 volume of 9 disks for a 1024 KB stripe, how are you going to store those 8 KB blocks?
1024 KB     1 block
-------  *  ------- = 16 blocks / disk / stripe
8 disks      8 KB
16 blocks per disk per stripe, of course, with the parity being stored on the 9th member, falling on whatever disk's turn it is to store parity.
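A quick sanity check of all three layouts (again just my sketch; `members` counts total disks, with one member's worth of capacity per stripe going to rotating parity):

```python
def blocks_per_disk(stripe_kb, members, block_kb=8):
    # One member's worth of each stripe holds parity, so data is
    # spread over (members - 1) disks.
    data_disks = members - 1
    return stripe_kb / data_disks / block_kb

print(blocks_per_disk(256, 5))    # 8.0  -> clean fit
print(blocks_per_disk(1024, 9))   # 16.0 -> clean fit
print(blocks_per_disk(256, 4))    # 10.666... -> blocks split across disks
```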
even numbers of disks for RAID 10 (4,8), odd numbers for RAID 5 (5,9).
3 drive RAID 5 volumes indicate a lack of understanding by the system designer or a total disregard for performance by the person signing the check.
just my opinion.
Paul
btw - IO waits at a large site running on a Dell PE 6650 with external direct attached storage average under 3 milliseconds for both scattered and sequential reads. (lots of them)
then again, if you have very little physical IO, the average waits tend to increase. funny how averages work sometimes :)
willing to provide summary statspack data.
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
Received on Wed Jul 07 2004 - 18:41:20 CDT