root.sh failed on second node [message #497573]
Sun, 06 March 2011 04:18
suresh.wst
Hi,
I am trying to install RAC on RHEL4 using VMware for learning purposes. root.sh on node1 was successful, but on node2 the following error occurred:
The "/home/oracle/oracle/product/10.2.0/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
The "/home/oracle/oracle/product/10.2.0/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
The commands in that script are the following (a quick pre-check sketch is given after the list):
/home/oracle/oracle/product/10.2.0/bin/racgons add_config rac1.tsb.com:6200 rac2.tsb.com:6200
/home/oracle/oracle/product/10.2.0/bin/oifcfg setif -global eth0/192.168.100.0:public eth1/200.200.100.0:cluster_interconnect
/home/oracle/oracle/product/10.2.0/bin/cluvfy stage -post crsinst -n rac1,rac2
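For reference, before re-running these commands it may help to confirm that the clusterware stack is actually up on rac2 and that passwordless ssh works in both directions, since racgons and oifcfg generally need a running stack to reach the OCR. A minimal pre-check sketch, assuming /home/oracle/oracle/product/10.2.0 is the clusterware home (as the paths above suggest), run as oracle on rac2:
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/crsctl check crs     # CSS, CRS and EVM should all report healthy
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/olsnodes -n          # both rac1 and rac2 should be listed
[oracle@rac2 ~]$ ssh rac1 date && ssh rac2 date                              # must return without any prompt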
Please help me find a solution to this. Thanks in advance.
Suresh
Re: root.sh failed on second node [message #497582 is a reply to message #497578]
Sun, 06 March 2011 05:00
suresh.wst
I tried to run the individual commands; the output is below (a short daemon status check is sketched after it):
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/racgons add_config rac1.tsb.com:6200 rac2.tsb.com:6200
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/oifcfg setif -global eth0/192.168.100.0:public eth1/200.200.100.0:cluster_interconnect
PRIF-12: failed to initialize cluster support services
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/cluvfy stage -post crsinst -n rac1,rac2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac2".
Checking user equivalence...
User equivalence check failed for user "oracle".
Check failed on nodes:
rac2
WARNING:
User equivalence is not set for nodes:
rac2
Verification will proceed with nodes:
rac1
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check failed for "CSS daemon".
Check failed on nodes:
rac1
Cluster manager integrity check failed.
Checking cluster integrity...
Cluster integrity check failed. This check did not run on the following nodes(s):
rac1
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
ERROR:
Unable to obtain OCR integrity details from any of the nodes.
OCR integrity check failed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check failed for "CRS daemon".
Check failed on nodes:
rac1
Checking daemon liveness...
Liveness check failed for "CSS daemon".
Check failed on nodes:
rac1
Checking daemon liveness...
Liveness check failed for "EVM daemon".
Check failed on nodes:
rac1
CRS integrity check failed.
Post-check for cluster services setup was unsuccessful on all the nodes.
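The failed liveness checks for the CRS, CSS and EVM daemons on rac1, together with the PRIF-12 error from oifcfg, may simply mean the clusterware stack is not running (or not reachable) on those nodes, so it is worth verifying directly. A quick status-check sketch (again assuming the 10.2.0 home above is the clusterware home), run as root on each node; if the stack is down, on 10.2 Linux installs it can normally be started with /etc/init.d/init.crs start:
[root@rac1 ~]# /home/oracle/oracle/product/10.2.0/bin/crsctl check crs
[root@rac1 ~]# ps -ef | grep -E 'crsd\.bin|ocssd\.bin|evmd\.bin' | grep -v grep   # clusterware daemon processes
[root@rac2 ~]# /home/oracle/oracle/product/10.2.0/bin/crsctl check crs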
In the above output I see that there is a problem with user equivalence, but I am able to ssh to both nodes without a password as the oracle user (some additional equivalence checks are sketched after the output below).
on node1:
--------------
[oracle@rac1 ~]$ id oracle
uid=500(oracle) gid=2000(oinstall) groups=2000(oinstall),1000(dba)
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh rac2 date
Sun Mar 6 16:28:04 IST 2011
[oracle@rac1 ~]$
on node2:
-----------------
[oracle@rac2 ~]$ id oracle
uid=500(oracle) gid=2000(oinstall) groups=2000(oinstall),1000(dba)
[oracle@rac2 ~]$
[oracle@rac2 ~]$ ssh rac1 date
Sun Mar 6 16:28:40 IST 2011
[oracle@rac2 ~]$
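The equivalence check can still fail even when plain node-to-node ssh works: the usual recommendation is to verify that each node can also ssh to itself and to the fully qualified host names without any prompt (including the one-time host key question). A minimal sketch of those extra checks, using the rac1.tsb.com / rac2.tsb.com names from the commands above:
[oracle@rac2 ~]$ ssh rac2 date               # node to itself, must not prompt
[oracle@rac2 ~]$ ssh rac2.tsb.com date
[oracle@rac2 ~]$ ssh rac1.tsb.com date
[oracle@rac1 ~]$ ssh rac1 date
[oracle@rac1 ~]$ ssh rac1.tsb.com date
[oracle@rac1 ~]$ ssh rac2.tsb.com date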
Suresh