Re: Anyone tried kill ASM in 11gR2 RAC?
From: LS Cheng <exriscer_at_gmail.com>
Date: Thu, 21 Jan 2010 18:53:11 +0100
Message-ID: <6e9345581001210953g4113e73che931b0a299c58e7c_at_mail.gmail.com>
That sounds OK, but is that written anywhere? Is there any documentation which states that Clusterware doesn't really need the ASM instance to be up to access the disks, and only needs it to get the disk information...
Thanks!
--
LSC

On Thu, Jan 21, 2010 at 2:36 PM, Bobak, Mark <Mark.Bobak_at_proquest.com> wrote:
> Yep, makes sense, I think.
>
>
>
> Clusterware starts, ASM serves up the OCR and voting disk geometry as it
> relates to the raw devices that make up your OCRDATA diskgroup. Clusterware
> caches that info and no longer needs to talk to ASM for it.
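
A quick way to see what Clusterware has recorded about the OCR and voting disk locations, assuming an 11gR2 Grid Infrastructure home on the PATH and the OCRDATA diskgroup naming used in this thread, might be:

    # Voting disk locations as registered with CSS
    crsctl query css votedisk

    # OCR location and integrity (run as root for the full check)
    ocrcheck

    # On Linux, the OCR location read at clusterware startup is also kept here
    cat /etc/oracle/ocr.loc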
>
>
>
> You do the damage, including changing ownership of the devices that make up
> the OCRDATA diskgroup to root:root. But clusterware processes run as root, so
> they can still read/write those raw devices.
>
>
>
> What happens if you chown the devices to root:root, then also chmod 000 all
> those devices?
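
A rough sketch of that follow-up test, assuming the OCRDATA members are the hypothetical devices /dev/mapper/ocrdisk1-3 (substitute your own paths), could look like:

    # Hypothetical device names - replace with the real OCRDATA members
    ls -l /dev/mapper/ocrdisk*

    # Strip ownership and all permission bits
    chown root:root /dev/mapper/ocrdisk*
    chmod 000 /dev/mapper/ocrdisk*

    # chmod does not affect file descriptors that are already open, so check
    # whether the running stack notices anything
    crsctl check crs
    crsctl query css votedisk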
>
>
>
> -Mark
>
>
>
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of LS Cheng
> Sent: Thursday, January 21, 2010 7:44 AM
> To: K Gopalakrishnan
> Cc: Oracle Mailinglist
> Subject: Re: Anyone tried kill ASM in 11gR2 RAC?
>
>
>
> Hi
>
> So even if the OCRDATA Disk Group is not mounted and the physical disks have
> root.root ownership instead of grid.oinstall, Clusterware will stay up and
> running? So basically you mean Clusterware does not need ASM to be up to
> access the OCRDATA disks?
>
> My test was (see the command sketch after this list):
>
> - kill ASM
> - change the ASM disks (OCRDATA) from grid.oinstall to root.root
> - check the clusterware status, which was still up and running
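
For reference, a minimal sketch of those three steps, assuming the local ASM instance is +ASM1 and the OCRDATA members are the hypothetical /dev/mapper/ocrdisk* devices:

    # 1. Kill the ASM instance on this node
    ps -ef | grep asm_pmon_+ASM1      # note the pmon PID
    kill -9 <pmon_pid>

    # 2. Change ownership of the OCRDATA member disks
    chown root:root /dev/mapper/ocrdisk*

    # 3. Check the clusterware stack
    crsctl check crs
    crsctl stat res -t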
>
>
>
>
>
> Thanks
>
> On Thu, Jan 21, 2010 at 1:38 PM, K Gopalakrishnan <kaygopal_at_gmail.com>
> wrote:
>
> Clusterware failure will happen _only_ when it cannot access the
> physical devices (disk timeout in CSS), and shutting down ASM does not
> revoke access to the disks. In your case clusterware _knows_ the
> location of the OCR/voting information on the ASM disks and can continue
> reading/writing even when the ASM instance is down.
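
The CSS disk timeout mentioned above can be checked directly, and one way to confirm that CSS still sees the voting disks while the ASM instance is down might be (the node name below is a placeholder):

    # Current CSS timeouts (disktimeout defaults to 200 seconds in 11g)
    crsctl get css disktimeout
    crsctl get css misscount

    # Confirm ASM is down on this node, then confirm CSS still reports the votedisks
    srvctl status asm -n <node_name>
    crsctl query css votedisk
    crsctl check css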
>
> -Gopal
>
>
>
>
>
> On Thu, Jan 21, 2010 at 2:51 AM, LS Cheng <exriscer_at_gmail.com> wrote:
> > Hi
> >
> > I was doing some cluster destructive tests on RAC 11gR2 a few days ago.
> >
> > One of the tests was to kill ASM and see how that affects Clusterware
> > operation, since the OCR and Voting Disks are located in ASM (OCRDATA Disk
> > Group). After killing ASM nothing happened, as it was quickly started up
> > again. So far so good. The next test was the same, but changing the ASM
> > disks' ownership so that when ASM is restarted the OCR Disk Group cannot be
> > accessed. Surprisingly, ASM was started up, the Database Disk Group was
> > mounted, the OCR Disk Group obviously did not get mounted, but the Cluster
> > kept working without any problems.
> >
> > So how is this happening? Doesn't Clusterware need to write to and read
> > from the Voting Disk every second? I was expecting a Clusterware failure
> > on the node, but everything worked just as if everything were OK.
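
One way to double-check that state (ASM up, OCRDATA not mounted, clusterware healthy) would be something along these lines, connecting as SYSASM to the local ASM instance:

    # Diskgroup mount states as seen by ASM (v$asm_diskgroup)
    echo "select name, state from v\$asm_diskgroup;" | sqlplus -s / as sysasm

    # Or, from the command line
    asmcmd lsdg

    # Overall clusterware health on the node
    crsctl check crs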
> >
> > Thanks!
> >
> > --
> > LSC
> >
> >
>
>
>
--
http://www.freelists.org/webpage/oracle-l