Re: Disk/tspace config. for Oracle on AIX
I've just taken delivery of two new servers, one F50 with one adapter, and
one S70 with 6 adapters - all SSA. The model number is '4N - PCI SSA
Multi-Initiator/RAID EL Adapter'. This is the latest SSA adapter I know of
from IBM. The servers will be dedicated Oracle servers (7 and 8).
The SSA manual only mentions RAID5. The 'smit devices' screen now also shows 'SSA RAID arrays' as an option, once you have the adapter driver software installed. Once you select an adapter to set up a RAID array, the *only* option you are offered is RAID5.
You *cannot* mirror RAID5 arrays (not that you'd want to) - it says so in the manual. You cannot set up RAID1 or RAID0.
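For anyone following along at home, the RAID5 array you build through smit just turns up as another hdisk, which you then treat like any ordinary disk. Rough sketch only - device and VG names below are made up, yours will differ:

    lsdev -Cc pdisk     # the raw SSA spindles
    lsdev -Cc disk      # hdisks, including the RAID5 array once it's built

    # From there it's plain LVM - e.g. put the array into a volume group:
    mkvg -y datavg hdisk4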
You *can* use LVM to set up mirrored LVs, just like you always could, RAID adapter or not. This is effectively RAID1. The problem is, you can't set up RAID0 (stripe) and then mirror it. And you can't stripe *and* mirror via LVM - and that's really what you want - striping, but with the option to mirror, to save you having to go through a lengthy datafile(s) rebuild if you lose a disk in a stripe set.
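To be concrete, this is the sort of thing I mean - commands from memory, LV/VG/disk names made up, so check the man pages on your level of AIX:

    # Mirrored LV (effectively RAID1) - two copies on two different disks.
    mklv -y lv_fact01 -c 2 datavg 256 hdisk2 hdisk3

    # Striped LV (effectively RAID0) - 64K stripe unit across three disks.
    mklv -y lv_fact02 -S 64K datavg 256 hdisk4 hdisk5 hdisk6

    # But not both at once: on 4.2/4.3, mklv and mklvcopy refuse to put
    # copies on a striped LV - which is exactly the RAID0+1 gap I'm on about.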
The latest versions of LVM under AIX 4.2 and 4.3 offer a 'mirror vg' option. Whether this means you could have a vg mirrored that contained striped LVs I don't know. I've posted a message on the comp.sys.aix newsgroup about this. If you could, that would effectively give you RAID0+1. The only problem is, it seems to be a very 'software' based solution, so how much worse its performance would be over a purely 'hardware' based RAID0+1 adapter is anyone's guess...
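The command I'm thinking of, if anyone fancies trying it (syntax from memory, VG and disk names invented):

    # Add some spare disks to the vg, mirror every LV in it, then sync.
    extendvg datavg hdisk7 hdisk8 hdisk9
    mirrorvg datavg hdisk7 hdisk8 hdisk9
    syncvg -v datavg

Whether it will take a vg that contains striped LVs is the open question - I wouldn't be surprised if it just rejects them.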
Hassle your reseller for more info, because the SSA manuals make no mention of RAID0 or RAID1 or how you can do this. Keep me posted with what you find.
Cheers,
MotoX.
Scott Cote wrote in message <35AD78D7.10EEE538_at_contractor.net>...
>MotoX,
>
>I was talking to an IBM reseller and they told me that I could install my
>SSA system as raid 10 (raid 1 + raid 5) for the Oracle parallel server. Do you
>think they are full of beans?
>
>SCott
>
>MotoX wrote:
>
>> Mark,
>>
>> One thing I missed mentioning yesterday was the Gorman and Dodge book -
>> Oracle8 Datawarehousing. Don't be misled by the title, it covers Oracle7
>> as well. It's *by far* the best Oracle dw book out there, and shows the
>> Oracle Press datawarehousing book up for the overpriced piece of crap it
>> is - IMHO.
>>
>> As regards the other stuff, see below:
>>
>> Mark Baker wrote in message
>> <900458729.6262.0.nnrp-04.c1ed77a9_at_news.demon.co.uk>...
>> >Could you tell me if your testing revealed any performance differences
>> >with separated month partitions set up as the following (or similar)
>> >scenarios?
>> >
>> >We have allocated 6 disks in particular to housing partitions for our
>> >main fact tables (summary tables, 36 months of data). We can either
>> >place them thus:
>> >
>> >1. Without LV striping, by specific tablespace storage definition on
>> >partition creation:
>> >
>> >disk 1 : month 1, month 7, month 13, ...
>> >disk 2 : month 2, month 8, month 14, ...
>> >...
>> >disk 6 : month 6, month 12, month 18, ...
>> >
>>
>> I would never do it this way.
>>
>> >... or we can
>> >
>> >2. stripe with the LVM across all disks on 1 (or several) logical
>> >volume(s):
>> >
>> >Disk 1 -6 : Months 1-36 thinly sliced in 64k chunks.
>>
>> I've found in testing that this gives the best read and write rate, as
>> long as the rest of your system can keep up. The performance I've seen
>> isn't linear - 6 disks will push/pull around 4 times that of a single
>> disk, not 6 times. You also need to hit the drives in parallel (PQO),
>> else the striping is mostly wasted.
>>
>> >
>> >or maybe even
>> >
>> >3. stripe with the LVM, having 2 striped logical volumes
>> >
>> >Disk 1-3 : Even months, (+indexes for odd months)
>> >Disk 4-6 : Odd months, (+indexes for even months)
>> >
>>
>> Could be. Easier to restore when a disk goes BOOM!
>>
>> >and many more, there are so many combinations!
>> >
>> >With parallel query and option 1 isn't there a certainty of there being
>> >disk contention between parallel query fetches all attempting to read
>> >from all disks? Or maybe this doesn't add up to much of an overhead
>> >anyway?
>>
>> I'd go for taking multiple 1 Gig sized LVs (and thus tablespaces) spread
>> over multiple drives - sorta like your option 2, but only if the table is
>> big enough to warrant it (see below about disk loss, and you might want
>> to use option 3 instead) and the access patterns are uniform. Sum them up
>> to give the space you require. You'd have to look closely at the access
>> patterns of your data to get the most balanced distribution. Another
>> important point about the partitioning scheme you adopt is maintenance
>> and recovery, so always factor this in as much as raw speed.
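Something like this is what I have in mind - sketch only, file names and sizes are invented, and each /uNN here is a filesystem sitting on a different disk or LV:

    # One tablespace built from several ~1 Gig datafiles spread over drives.
    svrmgrl <<'EOF'
    connect internal
    CREATE TABLESPACE fact_96
      DATAFILE '/u02/oradata/dw/fact_96_01.dbf' SIZE 1000M,
               '/u03/oradata/dw/fact_96_02.dbf' SIZE 1000M,
               '/u04/oradata/dw/fact_96_03.dbf' SIZE 1000M;
    exit
    EOF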
>>
>> Like I said in my previous post, I'm not a big fan of Oracle-level
>> striping, it's too coarse and too much hassle. Hardware or an LVM is much
>> better. The downside of a very large table striped over many drives
>> *without protection - RAID5 or RAID1 (or LVM mirror)* is that if you lose
>> a drive you have to recover the entire stripe set - you can't just
>> recover the dead disk. For this reason, on a very big table I'd look at
>> multiple stripe sets. Multiple stripe sets also work better when you have
>> some big joins and/or big summary table creates - it helps to pull them
>> away from the same disks/stripes as the central fact table(s).
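In LVM terms that's just two (or more) independent stripe sets on disjoint disks, so the summary builds and big joins don't fight the fact table - again, LV and disk names are made up:

    mklv -y lv_fact -S 64K datavg 512 hdisk2 hdisk3 hdisk4
    mklv -y lv_summ -S 64K datavg 512 hdisk5 hdisk6 hdisk7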
>>
>> Another thing to look at is merging historic data into single partitions
>> and pulling it off to less and/or slower storage, as it's less likely to
>> be accessed. Again, look closely at your own system.
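Under Oracle8 the 'pulling it off' bit can be as simple as shuffling a cold partition onto a tablespace built on the slower disks - sketch only, all the names here are invented, and the merging itself is a separate exercise:

    svrmgrl <<'EOF'
    connect internal
    ALTER TABLE dwh.sales_fact MOVE PARTITION m9601 TABLESPACE fact_hist;
    ALTER INDEX dwh.sales_fact_idx REBUILD PARTITION m9601
      TABLESPACE fact_hist_idx;
    exit
    EOF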
>>
>> >
>> >With option 2 I'm assuming many LVs rather than just 1 will help with
>> >the optimizer to break up the query in parallel with each LV treated as
>> >separate physical disks. (?)
>> >
>> >Did you rule out mirroring, which as well as the extra data security
>> >(which we don't need for summary tables) can improve the speed of
>> >'reads' by AIX making use of both copies, because it still lagged behind
>> >striping the data with LVM?
>>
>> Mirroring is great. But...
>>
>> As far as I know, IBM don't offer RAID0+1 on their SSA adapters, and you
>> can't *both* stripe and mirror through LVM on the same LVs at the same
>> time. SSA adapters only support RAID5 at present. Not sure about IBM
>> SCSI, but you'd still have the same problem in LVM. It's a pain.
>>
>> If you find anything different to the above, I'm all ears. You could also
>> consider systems like EMC Symmetrix for more flexibility, but the cost is
>> 4-5 times higher. Obviously there will be other 3rd-party products that
>> sit somewhere in between.
>>
>> >
>> >Once again thank you for your help.
>> >
>> >Best regards
>> >
>> >Mark
>>
>> You're welcome. Keep me posted with how your testing goes.
>>
>> MotoX.
>
>
>
Received on Thu Jul 16 1998 - 02:46:07 CDT