Re: IPMP testing on RAC cluster
Date: Fri, 1 May 2009 07:38:42 -0500
Message-ID: <289232290905010538p1c1714c1yf9318f2b9a9a50da_at_mail.gmail.com>
Thanks to all who responded, both on the list and privately.
We did pretty much what you describe and used the "pull the cable" method. In addition, we ran snoop on the interfaces to verify where the traffic was flowing. The host IP and VIP failed over smoothly to the redundant NIC when the cable for the active public interface was pulled. The same held for the private IP when the cable for the active private interface was pulled.
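For anyone repeating this, a minimal way to watch the failover from the OS side (a sketch assuming a public IPMP group spanning bge0 and bge2, as in our setup below):

  # before the pull: the host IP and VIP are plumbed on bge0 (bge0:1, ...)
  ifconfig -a

  # after the pull: IPMP flags bge0 as FAILED and the addresses reappear
  # as logical interfaces on bge2
  ifconfig -a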
For the private IP testing, I also monitored the GV$CLUSTER_INTERCONNECTS view, and it showed the interface failing over. For example, the view initially showed this --
select * from gv$cluster_interconnects;

   INST_ID NAME                      IP_ADDRESS       IS_ SOURCE
---------- ------------------------- ---------------- --- -------------------------------
         1 bge1                      10.10.10.72      NO  cluster_interconnects parameter
         2 bge1                      10.10.10.73      NO  cluster_interconnects parameter

2 rows selected.
and changed to this after the pull --
select * from gv$cluster_interconnects;

   INST_ID NAME                      IP_ADDRESS       IS_ SOURCE
---------- ------------------------- ---------------- --- -------------------------------
         1 bge3:1                    10.10.10.72      NO  cluster_interconnects parameter
         2 bge1                      10.10.10.73      NO  cluster_interconnects parameter

2 rows selected.
Once the cable for bge1 was pulled on node1, the interconnect traffic flowed between bge3 on node1 and bge1 on node2 (via the switches). The snoop output confirmed the same.
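For reference, the checks on node1 looked roughly like this (interface names are from our sandbox; we pulled cables, but Solaris' if_mpadm should exercise the same IPMP failover path in software):

  # watch interconnect traffic to node2 move onto the standby NIC
  snoop -d bge3 10.10.10.73

  # IPMP marks the failed NIC with the FAILED flag
  ifconfig bge1

  # software equivalent of pulling and replugging the cable (not what we did)
  if_mpadm -d bge1
  if_mpadm -r bge1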
- Ravi
On Fri, May 1, 2009 at 4:59 AM, LS Cheng <exriscer_at_gmail.com> wrote:
> Hi
>
> I do this for the private interconnect test
>
>
> - open ssh session in node 1
> - ping node 2
> - open ssh session in node 2
> - ping node 1
> - open ssh session in node 1
> - tail -f /var/adm/messages
> - open ssh session in node 2
> - tail -f /var/adm/messages
> - pull node 1's cable
> - check for lost packets in the ping sessions
> - check messages
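>
> In command form, something like this (host names are examples):
>
>   # node1 and node2, one session each: continuous ping of the other
>   # node's private address (Solaris ping -s prints one line per packet)
>   ping -s node2-priv
>   ping -s node1-priv
>   # on both nodes, watch for IPMP failure/repair messages
>   tail -f /var/adm/messages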
>
>
> For the VIP do the same and measure the failover time. Bear in mind that VIP
> failback is not the default in 10.2.0.4, so don't expect the VIP to fail back
> if you have pulled all the public network cables and plugged them back in :-)
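>
> To see which node is hosting the VIP after the pull, something like this
> should work in 10.2 (node name is a placeholder):
>
>   srvctl status nodeapps -n node1
>   crs_stat -t | grep vip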
>
>
> Regards
>
> --
> LSC
>
> On Tue, Apr 28, 2009 at 5:03 PM, Ravi Gaur <ravigaur1_at_gmail.com> wrote:
>
>> We are trying to run some tests to verify that our IPMP configuration in
>> our RAC sandbox cluster works as desired. We have two NICs for public and
>> two for private, and IPMP is configured for both. The plan is to physically
>> unplug them (one at a time) and then plug them back in, and ensure that
>> everything works normally (i.e., no noticeable hiccups, etc.).
>> Has anyone done this testing before and, if so, would you be willing to
>> share the test plan? Any queries/logs, etc., that might help.
>> Here are some additional details if that helps --
>> 2 node RAC on Solaris 10
>> Oracle 10.2.0.4
>> Interfaces -- Public - bge0 and bge2; Private - bge1 and bge3
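>>
>> For illustration, a minimal link-based IPMP setup for the public pair
>> could look like the following (group name is made up; our actual config
>> may differ, e.g. if probe-based IPMP with test addresses is used):
>>
>>   # /etc/hostname.bge0 -- data address, member of the public IPMP group
>>   node1 netmask + broadcast + group ipmp_pub up
>>
>>   # /etc/hostname.bge2 -- standby member of the same group, no data address
>>   group ipmp_pub up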
>>
>> TIA,
>>
>> - Ravi Gaur
>>
>
>
--
http://www.freelists.org/webpage/oracle-l

Received on Fri May 01 2009 - 07:38:42 CDT