Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: Disk/tspace config. for Oracle on AIX (c.d.o.server)
Whoops - change the bit below that reads "you can just recover the dead
disk" to "you can't just recover the dead disk".
MotoX
MotoX wrote in message
<900487676.26361.0.nnrp-08.c2de712e_at_news.demon.co.uk>...
>Mark,
>
>One thing I missed mentioning yesterday was the Gorman and Dodge book -
>Oracle8 Data Warehousing. Don't be misled by the title; it covers Oracle7 as
>well. It's *by far* the best Oracle DW book out there, and shows the Oracle
>Press data warehousing book up for the overpriced piece of crap it is - IMHO.
>
>As regards the other stuff, see below:
>
>Mark Baker wrote in message
><900458729.6262.0.nnrp-04.c1ed77a9_at_news.demon.co.uk>...
>>Could you tell me if your testing revealed any performance differences with
>>separated month partitions set up as the following (or similar) scenarios?
>>
>>We have allocated 6 disks in particular to housing partitions for our main
>>fact tables (summary tables, 36 months of data). We can either place them
>>thus:
>>
>>1. Without LV striping, by specific tablespace storage definition on
>>partition creation:
>>
>>disk 1 : month 1, month 7, month 13, ...
>>disk 2 : month 2, month 8, month 14, ...
>>...
>>disk 6 : month 6, month 12, month 18, ...
>>
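For illustration, option 1 above maps onto Oracle8 range-partitioned DDL roughly as follows. This is a sketch only; the table, tablespace, datafile paths, and dates are all hypothetical, and each tablespace's datafile would live on the disk shown in the comment:

```sql
-- Sketch: round-robin month partitions over 6 disks (option 1).
-- All names (sales_fact, ts_m01, /disk1/...) are made up for illustration.
CREATE TABLESPACE ts_m01 DATAFILE '/disk1/oradata/ts_m01.dbf' SIZE 1000M;
CREATE TABLESPACE ts_m02 DATAFILE '/disk2/oradata/ts_m02.dbf' SIZE 1000M;
-- ... one tablespace per month, cycling through disks 1-6 ...

CREATE TABLE sales_fact (
  sale_date  DATE,
  product_id NUMBER,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION m01 VALUES LESS THAN (TO_DATE('1995-08-01','YYYY-MM-DD'))
    TABLESPACE ts_m01,  -- disk 1
  PARTITION m02 VALUES LESS THAN (TO_DATE('1995-09-01','YYYY-MM-DD'))
    TABLESPACE ts_m02,  -- disk 2
  -- ... m03-m35 continue the cycle: month 7 back on disk 1, etc. ...
  PARTITION m36 VALUES LESS THAN (MAXVALUE) TABLESPACE ts_m36  -- disk 6
);
```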
>
>
>I would never do it this way.
>
>>... or we can
>>
>>2. stripe with the LVM across all disks on 1 (or several) logical volume(s):
>>
>>Disk 1-6 : Months 1-36 thinly sliced in 64k chunks.
>
>
>I've found in testing that this gives the best read and write rate, as long
>as the rest of your system can keep up. The performance I've seen isn't
>linear - 6 disks will push/pull around 4 times that of a single disk, not 6
>times. You also need to hit the drives in parallel (PQO), else the striping
>is mostly wasted.
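To actually hit the drives in parallel as described, the scans have to run under PQO. A hedged sketch, with a hypothetical table name and the degree simply matched to the 6-disk stripe:

```sql
-- Sketch: force a parallel full scan so all 6 striped disks are read
-- at once. Table name and degree of 6 are illustrative.
ALTER TABLE sales_fact PARALLEL (DEGREE 6);

SELECT /*+ FULL(s) PARALLEL(s, 6) */ SUM(amount)
FROM   sales_fact s
WHERE  sale_date >= TO_DATE('1996-01-01','YYYY-MM-DD');
```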
>
>>
>>or maybe even
>>
>>3. stripe with the LVM having striped 2 logical volumes
>>
>>Disk 1-3 : Even months, (+indexes for odd months)
>>Disk 4-6 : Odd months, (+indexes for even months)
>>
>
>
>Could be. Easier to restore when a disk goes BOOM!
>
>>and many more, there are so many combinations!
>>
>>With parallel query and option 1 isn't there a certainty of there being disk
>>contention between parallel query fetches all attempting to read from all
>>disks? Or maybe this doesn't add up to much of an overhead anyway?
>
>
>I'd go for taking multiple 1 Gig sized LV's (and thus tablespaces) spread
>over multiple drives - sorta like your 2, but only if the table is big
>enough to warrant it (see below about disk loss, and you might want to use 3
>instead) and the access patterns are uniform. Sum them up to give the space
>you require. You'd have to look closely at the access patterns of your data
>to get the most balanced distribution. Another important point about the
>partitioning scheme you adopt is maintenance and recovery, so always factor
>this in as much as raw speed.
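The multiple-1-Gig-LV idea reads, in Oracle terms, something like the sketch below: several roughly 1 Gig tablespaces, each with its datafile on its own LV/drive, with partitions then mapped across them. Paths and names are hypothetical:

```sql
-- Sketch: several ~1 Gig tablespaces, each datafile on a separate
-- LV/drive. Paths and names are made up for illustration.
CREATE TABLESPACE fact_01 DATAFILE '/lv01/oradata/fact_01.dbf' SIZE 1000M;
CREATE TABLESPACE fact_02 DATAFILE '/lv02/oradata/fact_02.dbf' SIZE 1000M;
CREATE TABLESPACE fact_03 DATAFILE '/lv03/oradata/fact_03.dbf' SIZE 1000M;
-- Partitions are then assigned to these tablespaces according to the
-- observed access pattern, so hot months don't all land on one drive.
```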
>
>Like I said in my previous post, I'm not a big fan of Oracle level striping,
>it's too coarse and too much hassle. Hardware or an LVM is much better. The
>downside of a very large table striped over many drives *without
>protection - RAID5 or RAID1 (or LVM mirror)* is that if you lose a drive you
>have to recover the entire stripe set - you can just recover the dead disk.
>For this reason, on a very big table I'd look at multiple stripe sets.
>Multiple stripe sets also work better when you have some big joins and/or
>big summary table creates - it helps to pull them away from the same
>disks/stripes as the central fact table(s).
>
>Another thing to look at is merging historic data into single partitions and
>pulling it off to less and/or slower storage, as it's less likely to be
>accessed. Again, look closely at your own system.
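Rolling an old month off to a tablespace on cheaper disks can be done with Oracle8's MOVE PARTITION, sketched below with hypothetical names. (Merging several months into one partition is more work on Oracle8 itself - a rebuild or export/import - since ALTER TABLE ... MERGE PARTITIONS only appeared in later releases.)

```sql
-- Sketch: relocate an old month's partition to a tablespace on slower
-- storage. Names are hypothetical; affected index partitions must be
-- rebuilt afterwards.
ALTER TABLE sales_fact MOVE PARTITION m01 TABLESPACE ts_archive;
ALTER INDEX sales_fact_ix REBUILD PARTITION m01 TABLESPACE ts_archive_ix;
```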
>
>>
>>With option 2 I'm assuming many LVs rather than just 1 will help with the
>>optimizer to break up the query in parallel with each LV treated as separate
>>physical disks. (?)
>>
>>Did you rule out mirroring, which as well as the extra data security (which
>>we don't need for summary tables) can improve the speed of 'reads' by AIX
>>making use of both copies, because it still lagged behind striping the data
>>with LVM?
>
>
>Mirroring is great. But...
>
>As far as I know, IBM don't offer RAID0+1 on their SSA adapters, and you
>can't *both* stripe and mirror through LVM on the same LVs at the same time.
>SSA adapters only support RAID5 at present. Not sure about IBM SCSI, but
>you'd still have the same problem in LVM. It's a pain.
>
>If you find anything different to the above, I'm all ears. You could also
>consider systems like EMC Symmetrix for more flexibility, but the cost is 4-5
>times higher. Obviously there will be other 3rd-party products that sit
>somewhere in between.
>
>>
>>Once again thank you for your help.
>>
>>Best regards
>>
>>Mark
>
>
>You're welcome. Keep me posted with how your testing goes.
>
>MotoX.
>
>
>
Received on Wed Jul 15 1998 - 02:33:05 CDT