System environment:
OS: RedHat EL5
Cluster: Oracle CRS 10.2.0.1.0
Oracle: Oracle 10.2.0.1.0
RAC system architecture, as shown below:
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np81bhMmCXAAGvUOVml00019.jpg)
II. CRS Installation
Cluster Ready Services (CRS) is the Oracle software that manages cluster resources in a RAC configuration, and it must be installed before anything else when building RAC.
The installation is done through the GUI, as the oracle user (on node1).
Note: first edit the installer configuration file to add redhat-5 to the supported versions:
[oracle@node1 install]$ pwd
/home/oracle/cluster/install
[oracle@node1 install]$ ls
addLangs.sh images oneclick.properties oraparamsilent.ini response
addNode.sh lsnodes oraparam.ini resource unzip
[oracle@node1 install]$ vi oraparam.ini
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,redhat-5,UnitedLinux-1.0,asianux-1,asianux-2
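The same edit can be scripted instead of done in vi. A minimal sketch, assuming the staging path shown above; back up the file first:

cd /home/oracle/cluster/install
cp oraparam.ini oraparam.ini.bak
# Append redhat-5 to the certified Linux versions only if it is not already listed
grep -q 'redhat-5' oraparam.ini || sed -i '/^Linux=/s/$/,redhat-5/' oraparam.ini
grep '^Linux=' oraparam.ini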
[oracle@node1 cluster]$ ./runInstaller
(Screenshot: http://s3.51cto.com/wyfs02/M02/26/27/wKiom1Np9D_gta0TAAOU3ODbpYo235.jpg)
The welcome screen:
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np9Fqw7dMxAAQ877duc5k567.jpg)
Note: the CRS home must not be the same directory as the Oracle software home; it needs a separate directory of its own.
[oracle@node1 ~]$ ls -l /u01
total 24
drwxr-xr-x 3 oracle oinstall 4096 May 5 17:04 app
drwxr-xr-x 36 oracle oinstall 4096 May 7 11:08 crs_1
drwx------ 2 oracle oinstall 16384 May 4 15:59 lost+found
[oracle@node1 ~]$
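If the CRS home does not exist yet, it can be prepared as root beforehand. A minimal sketch, matching the paths in the listing above:

mkdir -p /u01/crs_1
chown -R oracle:oinstall /u01/crs_1   # hand the directory to the install owner
chmod -R 775 /u01/crs_1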
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np9N3i4N9IAAS5W-8Dhxo331.jpg)
Add the second node (if inter-node trust is misconfigured, node2 cannot be discovered here):
(Screenshot: http://s3.51cto.com/wyfs02/M02/26/27/wKioL1Np9Umw2PB1AAPy6AzmTDQ846.jpg)
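Before this step it is worth confirming user equivalence by hand. A minimal sketch, assuming SSH equivalence was configured for the oracle user; each command must print the remote date without a password prompt:

[oracle@node1 ~]$ ssh node1 date
[oracle@node1 ~]$ ssh node2 date
[oracle@node1 ~]$ ssh node1-priv date
[oracle@node1 ~]$ ssh node2-priv date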
Edit the properties of the public interface (the public NIC is used for client communication):
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKioL1Np9YCz2b3-AAUJHGqPOUk691.jpg)
The OCR must be placed on a raw device (with External Redundancy only one raw device is needed; a mirror can be added after installation):
(Screenshot: http://s3.51cto.com/wyfs02/M01/26/27/wKioL1Np9eXiMcc_AAVxqZm2wrM209.jpg)
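For reference, an OCR mirror can be added after the installation with ocrconfig, run as root. A sketch; /dev/raw/raw3 is a hypothetical spare raw device:

[root@node1 ~]# /u01/crs_1/bin/ocrconfig -replace ocrmirror /dev/raw/raw3
[root@node1 ~]# /u01/crs_1/bin/ocrcheck   # verify both OCR locations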
The voting disk must also be placed on a raw device (with External Redundancy only one raw device is needed; more raw devices can be added after installation for redundancy):
(Screenshot: http://s3.51cto.com/wyfs02/M02/26/27/wKiom1Np9nmj6NqtAARaCrLEH4c513.jpg)
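Likewise, extra voting disks can be added after the installation. A sketch; in 10.2 this is done as root with the CRS stack stopped on all nodes, and /dev/raw/raw4 and /dev/raw/raw5 are hypothetical spare raw devices:

[root@node1 ~]# /u01/crs_1/bin/crsctl add css votedisk /dev/raw/raw4 -force
[root@node1 ~]# /u01/crs_1/bin/crsctl add css votedisk /dev/raw/raw5 -force
[root@node1 ~]# /u01/crs_1/bin/crsctl query css votedisk   # list all voting disks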
Start the installation (the software is also pushed to node2):
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np9pvSgFDUAARmCJbea54508.jpg)
When prompted by the installer, run the scripts on the two nodes in order.
node1:
[root@node1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
node2:
[root@node2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
[root@node1 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 node1
CSS is inactive on these nodes.
 node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
root.sh completed successfully on node1!
[root@node2 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
The error above is resolved as follows:
[root@node2 bin]# vi vipca
Linux) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    (add this line)
#End workaround
[root@node2 bin]# vi srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    (add this line)
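The two edits can also be applied non-interactively. A minimal sketch using GNU sed; it inserts the unset right after each export so the scripts stop forcing the 2.4 kernel ABI:

[root@node2 bin]# cd /u01/crs_1/bin
[root@node2 bin]# cp vipca vipca.bak; cp srvctl srvctl.bak
[root@node2 bin]# sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca srvctl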
Re-run root.sh on node2.
Note: root.sh can only be run once; to run it again, rootdelete.sh must be executed first.
[root@node2 bin]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@node2 bin]# cd ../install
[root@node2 install]# ls
cluster.ini         install.incl    rootaddnode.sbs    rootdelete.sh    templocal
cmdllroot.sh        make.log        rootconfig         rootinstall
envVars.properties  paramfile.crs   rootdeinstall.sh   rootlocaladd
install.excl        preupdate.sh    rootdeletenode.sh  rootupgrade
[root@node2 install]# ./rootdelete.sh
CRS-0210: Could not find resource 'ora.node2.LISTENER_NODE2.lsnr'.
CRS-0210: Could not find resource 'ora.node2.ons'.
CRS-0210: Could not find resource 'ora.node2.vip'.
CRS-0210: Could not find resource 'ora.node2.gsd'.
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
[root@node2 install]#
root.sh then fails again on node2:
[root@node2 install]# /u01/crs_1/root.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Solution: configure the cluster network interfaces:
[root@node2 bin]# ./oifcfg iflist
eth0 192.168.8.0
eth1 10.10.10.0
[root@node2 bin]# ./oifcfg getif
[root@node2 bin]# ./oifcfg setif -global eth0/192.168.8.0:public
[root@node2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@node2 bin]# ./oifcfg getif
eth0 192.168.8.0 global public
eth1 10.10.10.0 global cluster_interconnect
Then run VIPCA on node2:
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/28/wKioL1Np-f7jjJnSAAKJogr5OU8992.jpg)
Run vipca as root (located in /u01/crs_1/bin).
(Screenshot: http://s3.51cto.com/wyfs02/M01/26/28/wKioL1Np-jGz7zABAAM2hLcunZg293.jpg)
The configuration details should match the /etc/hosts file.
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/27/wKiom1Np-pvheqYiAAPZFXo4_C0630.jpg)
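For reference, a typical /etc/hosts layout for this cluster might look as follows. A sketch only; the addresses are hypothetical but consistent with the public (192.168.8.0) and private (10.10.10.0) subnets shown in the oifcfg output above:

# Public
192.168.8.11   node1
192.168.8.12   node2
# Virtual IPs configured by VIPCA
192.168.8.21   node1-vip
192.168.8.22   node2-vip
# Private interconnect
10.10.10.11    node1-priv
10.10.10.12    node2-priv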
Start the configuration.
(Screenshot: http://s3.51cto.com/wyfs02/M01/26/28/wKioL1Np-pqhEWP-AAGGYpYYzMI076.jpg)
Once vipca completes successfully, the CRS services run normally.
(Screenshot: http://s3.51cto.com/wyfs02/M02/26/28/wKioL1Np-vfiQeraAAKdYARXLR8570.jpg)
Installation complete!
Verify CRS:
[root@node2 bin]# crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application  ONLINE    ONLINE    node1
ora.node1.ons  application  ONLINE    ONLINE    node1
ora.node1.vip  application  ONLINE    ONLINE    node1
ora.node2.gsd  application  ONLINE    ONLINE    node2
ora.node2.ons  application  ONLINE    ONLINE    node2
ora.node2.vip  application  ONLINE    ONLINE    node2
The same check can be run from node1:
[root@node1 ~]# crs_stat -t
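A few more health checks available in 10.2 can round out the verification. A sketch, run from the CRS home:

[root@node1 ~]# /u01/crs_1/bin/crsctl check crs      # CSS/CRS/EVM daemon health
[root@node1 ~]# /u01/crs_1/bin/olsnodes -n           # cluster members and node numbers
[root@node1 ~]# /u01/crs_1/bin/srvctl status nodeapps -n node1
[root@node1 ~]# /u01/crs_1/bin/srvctl status nodeapps -n node2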
Appendix: error case
If the following error appears while running root.sh:
(Screenshot: http://s3.51cto.com/wyfs02/M00/26/28/wKioL1Np_SDQMHUEAAIfZxOAV4s961.jpg)
Run vipca (as root) on the node where the error occurred to resolve it.
At this point, CRS is installed successfully!
This article is reposted from the 51CTO blog of 客居天涯. Original link: http://blog.51cto.com/tiany/1408023. Please contact the original author for reprint permission.