Re: Oracle RAC nodes eviction question

From: Justin Mungal <justin_at_n0de.ws>
Date: Wed, 20 Aug 2014 20:09:28 -0500
Message-ID: <CAO9=aUxQBRWhEuN5Yyj0LgLjwr5FFofPkgF4T4ZE336+49Vuvw_at_mail.gmail.com>



Same here, Jeremy. All database homes and GI homes are on local storage. Besides isolating the binaries from SAN issues, it also increases availability: we can patch the homes on one node while the others stay up, whereas a shared database home would require shutting down every database instance running from it before patching. See the quick check sketched below.
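
For anyone wanting to verify this on their own cluster, a minimal sanity check is to confirm which filesystem each home actually lives on. The paths below are hypothetical; substitute your own GI and database homes:

    # Show the filesystem backing each home; an NFS-backed home will
    # show a remote (host:/export) device rather than a local one.
    for home in /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/dbhome_1; do
        echo "== $home =="
        df -P "$home" | tail -1
    done

On Solaris, "df -n" also prints the filesystem type per mount point, which makes an NFS-backed home easy to spot.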

On Tue, Aug 19, 2014 at 7:38 PM, Jeremy Schneider < jeremy.schneider_at_ardentperf.com> wrote:

> Old thread, I know. :) Just wanted to add a quick comment in response to
> this message - for this exact reason, I've become a proponent of always
> having GI on local storage (even in large environments).
>
> I've been in situations where we had SAN problems and it was much more
> complicated to build good timelines because none of the GI logs were
> available.
>
> -Jeremy
>
> --
> http://about.me/jeremy_schneider
> Sent from my iPhone
>
> On Aug 13, 2014, at 5:38 PM, "Hameed, Amir" <Amir.Hameed_at_xerox.com> wrote:
>
> Thanks Seth.
>
> Since the log files are located inside the GI_HOME, and are therefore also
> on the NAS, there were no entries in them when the NAS head failed over,
> which is expected.
>
> *From:* Seth Miller [mailto:sethmiller.sm_at_gmail.com]
> *Sent:* Wednesday, August 13, 2014 5:32 PM
> *To:* Hameed, Amir
> *Cc:* oracle-l_at_freelists.org
> *Subject:* Re: Oracle RAC nodes eviction question
>
>
>
> Amir,
>
> The first question is: why was the node evicted? The answer should be
> pretty clear in the clusterware alert log. If the binaries go away for
> any length of time, cssdagent or cssdmonitor will likely see that as a hang
> and initiate a reboot.
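>
> As a rough sketch of where to look (assuming a standard 11.2 GI layout;
> substitute your own GRID_HOME and hostname):
>
>     # Clusterware alert log -- reboot/eviction reasons are normally here
>     egrep -i "evict|reboot" $GRID_HOME/log/$(hostname)/alert$(hostname).log
>
>     # ocssd.log carries the CSS-level detail (missed network/disk heartbeats)
>     tail -100 $GRID_HOME/log/$(hostname)/cssd/ocssd.log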
>
> Seth Miller
>
>
>
> On Wed, Aug 13, 2014 at 2:57 PM, Hameed, Amir <Amir.Hameed_at_xerox.com>
> wrote:
>
> Folks,
>
> I am trying to understand the behavior of an Oracle RAC cluster when the
> Grid and RAC binary homes become unavailable while the cluster and Oracle
> RAC are running. The Grid version is 11.2.0.3 and the platform is Solaris
> 10. The Oracle Grid and Oracle RAC environments are on NAS, with the
> database configured with dNFS. The storage for the Grid and RAC binaries
> comes from one NAS head, whereas the OCR and voting disks (three of each)
> are spread over three NAS heads, so that if one NAS head becomes
> unavailable, the cluster can still access two voting disks. The
> recommendation for this configuration came from the storage vendor and
> Oracle.
>
> What we observed last weekend was that when the NAS head that the Grid and
> RAC binaries were mounted from went down for a few minutes, all RAC nodes
> were rebooted even though two voting disks were still accessible. In my
> destructive testing about a year ago, one of the tests was to pull all
> cables of the NICs used for kernel NFS on one of the RAC nodes, but the
> cluster did not evict that node. Any feedback will be appreciated.
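>
> For reference, a quick way to confirm how the voting files are spread and
> which CSS timeouts are in play (standard 11.2 crsctl commands, run from
> the GI home as root or the grid owner):
>
>     crsctl query css votedisk    # lists each voting file and its location
>     crsctl get css misscount     # network heartbeat timeout (default 30 s)
>     crsctl get css disktimeout   # voting disk I/O timeout (default 200 s)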
>
>
>
> Thanks,
>
> Amir
>

--
http://www.freelists.org/webpage/oracle-l