Re: Clufvy and private interconnects
From: Dan Norris <dannorris_at_dannorris.com>
Date: Wed, 12 Mar 2008 07:16:34 -0500
Message-ID: <47D7C9A2.8050900@dannorris.com>
I'm not sure why it is failing. In previous versions, when you used RFC 1918 reserved networks, the tool couldn't find a network suitable for VIPs since it assumed that all reserved networks were for private interconnects. It looks like you have roughly the opposite problem. I'm not sure of the reason for this and there are no other notes on ML indicating why, so I'd file an SR on it. I expect that VIPCA should still run fine anyway.
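One thing worth trying (I'm going from memory on the 10.2/11.1 syntax, so check "cluvfy comp nodecon -help" first) is to point cluvfy at just the interfaces you intend to use for the interconnect, for example the pair on 192.168.2.0 if that is meant to be your private network:

cluvfy comp nodecon -n oracnp01,oracnp02 -i e1000g3 -verbose

Once the clusterware is installed you can also check and set how the networks are classified with oifcfg, along the lines of:

oifcfg getif
oifcfg setif -global e1000g3/192.168.2.0:cluster_interconnect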
I noticed that you have 2 NICs configured with separate IPs on each of the two networks. If you want to build some redundancy for the NICs on your servers, you need to investigate interface bonding. Simply putting two NICs on the same subnet with different IP addresses isn't sufficient to create a redundant NIC configuration. Instead, you'll need to bond the two physical interfaces together (with the bonding software driver) and then use a single IP address on the bonded pseudo-interface (typically called bond0, bond1, etc.). Search MetaLink for "linux ethernet bonding" and you'll find a few helpful notes.
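Since your boxes are Solaris 10, the native equivalent there is IPMP (or link aggregation) rather than the Linux bonding driver, but for reference a minimal Linux-style bonding setup looks roughly like the sketch below; the interface names (eth2/eth3) and the address are just placeholders, so adjust for your environment:

# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=1 miimon=100    # active-backup failover, link check every 100 ms

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the single IP lives on the bonded pseudo-interface
DEVICE=bond0
IPADDR=192.168.2.100
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2 -- physical NIC enslaved to bond0, no IP of its own
DEVICE=eth2
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# ifcfg-eth3 is identical apart from DEVICE=eth3

The point is that the operating system, not the Oracle stack, handles the NIC failover, and the cluster only ever sees the single bond0 address on each network.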
Dan
Jeffery Thomas wrote:
Solaris 10
We are in the process of prepping two boxes for a 10g RAC cluster. I downloaded the 11g cluvfy (as recommended by Oracle) and our config passes every check, but for some reason it cannot find suitable interconnects.
My question would be: what exactly is cluvfy looking for when it is scanning for the interconnects? User equivalence checks out, we are using switches, and so on.
cluvfy comp nodecon -n oracnp01,oracnp02 -verbose
<public IP stuff found>
....
Interfaces found on subnet "192.168.1.0" that are likely candidates for VIP:
oracnp01 e1000g2:192.168.1.101
oracnp02 e1000g2:192.168.1.103
Interfaces found on subnet "192.168.1.0" that are likely candidates for VIP:
oracnp01 e1000g5:192.168.1.100
oracnp02 e1000g5:192.168.1.102
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP:
oracnp01 e1000g3:192.168.2.101
oracnp02 e1000g3:192.168.2.103
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP:
oracnp01 e1000g4:192.168.2.100
oracnp02 e1000g4:192.168.2.102
WARNING: Could not find a suitable set of interfaces for the private interconnect.
Result: Node connectivity check passed.
Thanks,
Jeff
--
http://www.freelists.org/webpage/oracle-l
Received on Wed Mar 12 2008 - 07:16:34 CDT