System environment:
OS: RedHat EL5
Cluster: Oracle CRS 10.2.0.1.0
Oracle: Oracle 10.2.0.1.0
RAC system architecture, as shown below:
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np81bhMmCXAAGvUOVml00019.jpg]
2. CRS Installation
Cluster Ready Services (CRS) is the Oracle software responsible for cluster resource management in RAC; it must be installed before anything else when building a RAC system.
The installation is done through the graphical installer, as the oracle user (on node1).
Note: first edit the installer configuration file to add redhat-5 to the supported versions.
[oracle@node1 install]$ pwd
/home/oracle/cluster/install
[oracle@node1 install]$ ls
addLangs.sh images oneclick.properties oraparamsilent.ini response
addNode.sh lsnodes oraparam.ini resource unzip
[oracle@node1 install]$ vi oraparam.ini
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,redhat-5,UnitedLinux-1.0,asianux-1,asianux-2
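If you prefer to script that change rather than edit the file by hand, a sed one-liner reproduces it (a sketch; the path assumes the unpacked installer staging directory shown above):
cd /home/oracle/cluster/install
cp oraparam.ini oraparam.ini.bak                     # keep a backup first
sed -i 's/redhat-4/redhat-4,redhat-5/' oraparam.ini  # insert redhat-5 after redhat-4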
[oracle@node1 cluster]$ ./runInstaller
[Screenshot: http://s3.51cto.com/wyfs02/M02/26/27/wKiom1Np9D_gta0TAAOU3ODbpYo235.jpg]
Welcome screen
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np9Fqw7dMxAAQ877duc5k567.jpg]
Note: the CRS home must not coincide with the Oracle software home; it has to live in its own separate directory:
[oracle@node1 ~]$ ls -l /u01
total 24
drwxr-xr-x 3 oracle oinstall 4096 May 5 17:04 app
drwxr-xr-x 36 oracle oinstall 4096 May 7 11:08 crs_1
drwx------ 2 oracle oinstall 16384 May 4 15:59 lost+found
[oracle@node1 ~]$
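For reference, a minimal sketch of the oracle user's environment for this layout (the paths come from the listing above; ORA_CRS_HOME is the conventional variable name, adjust to your setup):
# Sketch of ~/.bash_profile entries for the oracle user
export ORACLE_BASE=/u01/app/oracle
export ORA_CRS_HOME=/u01/crs_1
export PATH=$ORA_CRS_HOME/bin:$PATH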
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np9N3i4N9IAAS5W-8Dhxo331.jpg]
Add the cluster nodes (if user equivalence between the hosts is misconfigured, node2 cannot be discovered here; see the quick check below)
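A quick way to confirm user equivalence before this step (both commands should print the remote date without any password prompt; a prompt means node2 will not be discovered):
[oracle@node1 ~]$ ssh node2 date        # public hostname
[oracle@node1 ~]$ ssh node2-priv date   # private hostname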
[Screenshot: http://s3.51cto.com/wyfs02/M02/26/27/wKioL1Np9Umw2PB1AAPy6AzmTDQ846.jpg]
Set the public NIC's interface type (the public NIC is used for communication with clients)
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKioL1Np9YCz2b3-AAUJHGqPOUk691.jpg]
The OCR must be placed on a raw device (External Redundancy needs only one raw device; a mirror can be added after installation)
[Screenshot: http://s3.51cto.com/wyfs02/M01/26/27/wKioL1Np9eXiMcc_AAVxqZm2wrM209.jpg]
The voting disk must also be on a raw device (External Redundancy needs only one raw device; more raw devices can be added after installation for redundancy, as sketched below)
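For illustration, the post-install commands to add that redundancy might look like this (a sketch only; the raw device names are hypothetical, and in 10.2 voting disks are added with the CRS stack stopped and the -force flag):
# Add an OCR mirror (as root, CRS running); /dev/raw/raw4 is a placeholder device
/u01/crs_1/bin/ocrconfig -replace ocrmirror /dev/raw/raw4
# Add an extra voting disk (as root, CRS stopped); /dev/raw/raw5 is a placeholder device
/u01/crs_1/bin/crsctl add css votedisk /dev/raw/raw5 -force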
[Screenshot: http://s3.51cto.com/wyfs02/M02/26/27/wKiom1Np9nmj6NqtAARaCrLEH4c513.jpg]
Begin the installation (the installer also copies the software to node2)
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np9pvSgFDUAARmCJbea54508.jpg]
At the end, the installer prompts you to run the scripts on the two nodes, in order:
node1:
[root@node1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
node2:
[root@node2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
[root@node1 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        node1
CSS is inactive on these nodes.
        node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
root.sh completed successfully on node1.
[root@node2 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
This error occurs because LD_ASSUME_KERNEL=2.4.19 is not supported by the glibc shipped with RHEL5; the fix is to unset it in the vipca and srvctl scripts:
[root@node2 bin]# vi vipca
Linux) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    (add this line)
#End workaround
[root@node2 bin]# vi srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    (add this line)
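Both edits can also be scripted; a hedged sketch that appends the unset immediately after each export, which is functionally equivalent to the manual edits above:
cd /u01/crs_1/bin
cp vipca vipca.bak && cp srvctl srvctl.bak   # keep backups
# Append "unset LD_ASSUME_KERNEL" after every "export LD_ASSUME_KERNEL"
sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca srvctl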
Re-run root.sh on node2.
Note: root.sh may only be run once; to run it again, you must first execute rootdelete.sh.
[root@node2 bin]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@node2 bin]# cd ../install
[root@node2 install]# ls
cluster.ini install.incl rootaddnode.sbs rootdelete.sh templocal
cmdllroot.sh make.log rootconfig rootinstall
envVars.properties paramfile.crs rootdeinstall.sh rootlocaladd
install.excl preupdate.sh rootdeletenode.sh rootupgrade
[root@node2 install]# ./rootdelete.sh
CRS-0210: Could not find resource 'ora.node2.LISTENER_NODE2.lsnr'.
CRS-0210: Could not find resource 'ora.node2.ons'.
CRS-0210: Could not find resource 'ora.node2.vip'.
CRS-0210: Could not find resource 'ora.node2.gsd'.
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
[root@node2 install]#
Running root.sh on node2 again produces another error:
[root@node2 install]# /u01/crs_1/root.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Fix: configure the cluster's network interfaces with oifcfg.
[root@node2 bin]# ./oifcfg iflist
eth0 192.168.8.0
eth1 10.10.10.0
[root@node2 bin]# ./oifcfg getif
[root@node2 bin]# ./oifcfg setif -global eth0/192.168.8.0:public
[root@node2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@node2 bin]# ./oifcfg getif
eth0 192.168.8.0 global public
eth1 10.10.10.0 global cluster_interconnect
Then run VIPCA on node2:
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/28/wKioL1Np-f7jjJnSAAKJogr5OU8992.jpg]
Run vipca as root (from /u01/crs_1/bin)
[Screenshot: http://s3.51cto.com/wyfs02/M01/26/28/wKioL1Np-jGz7zABAAM2hLcunZg293.jpg]
The configuration entered here must match the /etc/hosts file
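For reference, a typical /etc/hosts layout for this two-node cluster (a sketch; the host IP addresses are illustrative, though the subnets match the oifcfg output shown earlier):
# Public interfaces
192.168.8.11    node1
192.168.8.12    node2
# Private interconnect
10.10.10.11     node1-priv
10.10.10.12     node2-priv
# Virtual IPs (unused addresses on the public subnet)
192.168.8.21    node1-vip
192.168.8.22    node2-vip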
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np-pvheqYiAAPZFXo4_C0630.jpg]
Start the configuration
[Screenshot: http://s3.51cto.com/wyfs02/M01/26/28/wKioL1Np-pqhEWP-AAGGYpYYzMI076.jpg]
Once vipca completes successfully, the CRS services work normally
[Screenshot: http://s3.51cto.com/wyfs02/M02/26/28/wKioL1Np-vfiQeraAAKdYARXLR8570.jpg]
Installation complete!
Verify CRS:
[root@node2 bin]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
[root@node1 ~]# crs_stat -t
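Besides crs_stat, a couple of quick 10.2 commands can confirm the health of the stack (run from either node):
[root@node1 ~]# /u01/crs_1/bin/crsctl check crs   # CSS/CRS/EVM daemon health
[root@node1 ~]# /u01/crs_1/bin/olsnodes -n        # cluster membership with node numbers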
Appendix: an error case
If the following error appears while running root.sh:
[Screenshot: http://s3.51cto.com/wyfs02/M00/26/28/wKioL1Np_SDQMHUEAAIfZxOAV4s961.jpg]
Run vipca (as root) on the failing node to resolve it.
At this point the CRS installation is complete!
This article is reposted from the 51CTO blog of 客居天涯; original link: http://blog.51cto.com/tiany/1408023. Please contact the original author before reprinting.