Re: Multiple DBs on One RAC - Adding New Nodes and Different Storage
Date: Thu, 26 Dec 2013 14:18:13 -0600
Message-ID: <CAFH+ifcCPg=9UGN6xUbPbSyxt2C6DGSFKzCX-L4Lc40V3qUSAg_at_mail.gmail.com>
I was planning on using services to segregate the instances, and as Tim has pointed out, the voting disks and OCR are on the EMC. Everything is in the +GRID diskgroup, however:
$ crsctl query css votedisk
##  STATE    File Universal Id                File Name                Disk group
--  -----    -----------------                ---------                ----------
 1. ONLINE   9926c093ccad4f6abf673a8047935f65 (/dev/mapper/grid000p1) [GRID]
 2. ONLINE   28bc607c79f04fe0bf5826cd2df7b3b5 (/dev/mapper/grid001p1) [GRID]
 3. ONLINE   4f78ee0112964fc5bfd0302f8fc71b7f (/dev/mapper/grid002p1) [GRID]
 4. ONLINE   a3398fe4bb5d4f26bf1bb785ead58c8a (/dev/mapper/grid003p1) [GRID]
 5. ONLINE   0531278bdfd14fc9bfb371ee9ec33028 (/dev/mapper/grid004p1) [GRID]
Located 5 voting disk(s).
oracle_at_rchr1p01(33287) +ASM1 /home/oracle $ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4608
         Available space (kbytes) :     257512
         ID                       :  470198443
         Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user
Can I just share the +GRID diskgroup to all the nodes and have only the
Compellent zoned to Node 3 and the new nodes? I'm actually going to add one
node to our 2-node test/dev cluster first, so I'll have some 'play' time,
but I'm a little worried about the effect on HBA traffic if all SANs are
connected to all nodes. Then again, there won't be any traffic for the new
app on the first two nodes, nor any traffic for the present app on the two
new nodes. The only problem might be with Node 3, either HBA contention or
an overall slowdown in cache fusion traffic if Node 3 can't keep up.
Hmmmmmmmm ... Just sort of thinking out loud here.
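Once the zoning is done, a quick sanity check on each node is probably
worthwhile (a minimal sketch, assuming the /dev/mapper/grid* multipath
aliases shown above):

# multipath -ll | grep grid      # as root: are the GRID LUNs visible at the OS level?
$ asmcmd lsdsk -G GRID           # as the ASM owner: can ASM see the disks?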
On Thu, Dec 26, 2013 at 1:56 PM, Paresh Yadav <yparesh_at_gmail.com> wrote:
> Ah, node vs. node. Ignore my previous message, which makes no sense. I read
> it as node = instance.
>
> Thanks
> Paresh
> 416-688-1003
>
>
>
> On Thu, Dec 26, 2013 at 2:52 PM, D'Hooge Freek <Freek.DHooge_at_uptime.be> wrote:
>
>> David,
>>
>> You will at least need to make the ASM diskgroup holding the voting / OCR
>> files visible on the new nodes as well.
>> I have never tested what happens if not all ASM diskgroups are visible on
>> all nodes, but I would expect to see some errors when creating / mounting
>> new disk groups.
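>> A quick way to check per-instance visibility (a minimal sketch, run as
>> the ASM owner from any node):
>>
>> $ sqlplus -s / as sysasm <<'EOF'
>> select inst_id, name, state from gv$asm_diskgroup order by inst_id, name;
>> EOF
>>
>> A diskgroup whose LUNs a node cannot see will show up as DISMOUNTED for
>> that inst_id, or not at all.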
>>
>> What would be the reason not to have all involved LUNs visible on all RAC
>> nodes?
>> In my opinion, it would be easier and more flexible to have all ASM LUNs
>> visible on all nodes.
>>
>> I don't understand what you mean by "master node".
>> If you want to dedicate one (or more) nodes to an application or
>> database, you can use db services. In the service configuration you can
>> mark some instances as preferred and others as available.
>> There is also no requirement to create an instance on each of the nodes.
>> You could for example have the following setup:
>>
>> Database A, used by application X, has instances on nodes 1, 2 and 3.
>> The db service used by application X has the instances on nodes 1 and 2
>> as preferred and the instance on node 3 as available (standby, so to speak).
>> Database B, used by applications Y and Z, has instances on nodes 3, 4 and 5.
>> The db service used by application Y has the instances on nodes 3 and 4
>> as preferred and the one on node 5 as available, while the db service used
>> by application Z has nodes 4 and 5 as preferred and node 3 as available.
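>>
>> In srvctl terms that setup would look roughly like this (a sketch; the
>> database and instance names are invented for illustration):
>>
>> $ srvctl add service -d dba -s app_x -r dba1,dba2 -a dba3
>> $ srvctl add service -d dbb -s app_y -r dbb1,dbb2 -a dbb3
>> $ srvctl add service -d dbb -s app_z -r dbb2,dbb3 -a dbb1
>>
>> with dba1-dba3 running on nodes 1-3 and dbb1-dbb3 on nodes 3-5, so that
>> -r (preferred) and -a (available) match the placement described above.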
>>
>> As you can see, you can play and puzzle with this as much as you want.
>>
>> Kind regards,
>>
>> --
>> Freek D'Hooge
>> Uptime
>> Oracle Database Administrator
>> email: freek.dhooge_at_uptime.be
>> tel +32(03) 451 23 82
>> http://www.uptime.be
>>
>>
>>
>>
>> On Thu, 2013-12-26 at 13:22 -0600, David Barbour wrote:
>>
>> Merry Christmas (to those of you who celebrate), Happy Holidays (to those
>> who may not but have something going on - including New Year's), and Good
>> Afternoon:
>>
>>
>> We are currently running a 3-node RAC on RHEL 6.3 (kernel
>> 2.6.32-279.22.1.el6.x86_64), Oracle 11.2.0.3.0 on ASM. Storage is
>> fibre-connected EMC VMAX. Cluster cache communication is handled via
>> InfiniBand. We are adding a new application and two nodes. The question
>> arises with the storage. We're putting the new application on a Dell
>> Compellent SAN, also fibre-connected. The plan is to make the 2 new nodes
>> the primary nodes for this application, and make node 3 of the current
>> cluster the fail-over.
>>
>>
>> Can anyone who has used multiple different SANs before provide any
>> hints/tips/tricks/pitfalls?
>>
>>
>> Will we have to serve the current EMC storage to the new nodes? I
>> don't see why, as we're not expanding the number of instances of the
>> currently running application. But we will have to zone the new storage to
>> connect to the third node of the current cluster. Any thoughts on how to
>> install this new application? I don't really want instances of it on Nodes
>> 1 & 2 at all. If I make Node 3 the master node and install from there, I
>> should be able to pick Nodes 4 & 5 on which to create the other new
>> instances.
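>>
>> I'm guessing that once the software is on Nodes 4 & 5 I could place the
>> instances explicitly with srvctl, something like this (a sketch with
>> invented database, instance and node names):
>>
>> $ srvctl add instance -d newdb -i newdb2 -n node4
>> $ srvctl add instance -d newdb -i newdb3 -n node5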
>>
>>
>> Any thoughts?
>>
>>
>
--
http://www.freelists.org/webpage/oracle-l

Received on Thu Dec 26 2013 - 21:18:13 CET