
Oracle RAC Study Notes

Install oracleasm

Configure the ASM module as the root user:

/etc/init.d/oracleasm configure -i

At the prompts, enter the default user grid and the default group asmdba, then answer y to start the driver on boot and y to scan for disks on boot.

Check whether a disk is an ASM disk. The -p option shows the device path; the -d option shows the device number.

[[email protected] node1]# oracleasm querydisk -v -p VOL1
Disk "VOL1" is a valid ASM disk
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm" 
[[email protected] node1]# oracleasm querydisk -v -p /dev/sdb1
Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
[[email protected] node1]# oracleasm querydisk -v -d -p VOL1
Disk "VOL1" is a valid ASM disk on device /dev/sdb1[8,17]
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm" 
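The device path can be pulled out of the querydisk output with a little shell. A minimal sketch against the sample output above; the parsing pattern is an illustration, not part of oracleasm itself:

```shell
# Hypothetical parser: extract the device path from `oracleasm querydisk -v -p` output.
# The sample output is hard-coded here; in practice you would pipe the command in.
querydisk_output='Disk "VOL1" is a valid ASM disk
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm"'

# The device line has the form "/dev/...: LABEL=...", so take the field before the colon.
device=$(printf '%s\n' "$querydisk_output" | awk -F: '/^\/dev\//{print $1}')
echo "$device"
```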

           

View the cluster node IP information

Log in to sqlplus as the grid user:

SQL> set linesize 400
SQL> select name,ip_address from v$cluster_interconnects;

NAME					      IP_ADDRESS
--------------------------------------------- ------------------------------------------------
eth1					      192.168.100.111

           

This matches the eth1 NIC address shown by ifconfig.

[[email protected] ~]$ oifcfg iflist
eth0  192.168.78.0
eth1  192.168.100.0
[[email protected] ~]$ oifcfg getif
eth0  192.168.78.0  global  public
eth1  192.168.100.0  global  cluster_interconnect
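For scripting, the private interconnect interface can be picked out of the oifcfg getif output. A minimal sketch with the sample output hard-coded (the parsing is an assumption for illustration):

```shell
# Hypothetical helper: find the private interconnect interface in `oifcfg getif` output.
# Sample output hard-coded; in practice pipe `oifcfg getif` in.
getif_output='eth0  192.168.78.0  global  public
eth1  192.168.100.0  global  cluster_interconnect'

# Fourth field is the interface role; match the cluster_interconnect entry.
interconnect_if=$(printf '%s\n' "$getif_output" | awk '$4=="cluster_interconnect"{print $1}')
echo "$interconnect_if"
```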

           

You can also look this up in the gv$cluster_interconnects view.

SQL> select * from gv$cluster_interconnects;

   INST_ID NAME  IP_ADDRESS        IS_PUBLIC  SOURCE
---------- ----- ----------------- ---------- --------------------------
         1 eth1  192.168.100.111   NO         Oracle Cluster Repository
         2 eth1  192.168.100.222   NO         Oracle Cluster Repository

           

Install the ASM packages

yum install oracleasm-support oracleasmlib oracleasm-`uname -r`
           

Check the database instance

srvctl status database -d devdb

To start or stop it, replace status with start|stop.

The same applies to the listener (status, start, stop):

srvctl status|stop|start listener -l listenername
           

Query the nodes

[[email protected] disks]# su - grid
[[email protected] ~]$ olsnodes -s
node1	Inactive
node2	Active
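A small sketch of how inactive nodes could be filtered out of the olsnodes -s output, with the sample output above hard-coded (the helper is hypothetical):

```shell
# Hypothetical check: list inactive nodes from `olsnodes -s` output.
# Sample output hard-coded; in practice pipe `olsnodes -s` in.
olsnodes_output='node1 Inactive
node2 Active'

# Second field is the node state; print the names of inactive nodes.
inactive=$(printf '%s\n' "$olsnodes_output" | awk '$2=="Inactive"{print $1}')
echo "$inactive"
```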
           

View the cluster name

[[email protected] ~]$ cemutlo -n
scan-cluster
           

Check the cluster status

crsctl check cluster // the -all flag can be added to query all nodes
[[email protected] ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[[email protected] ~]$ crsctl check cluster -all
Still waiting for replies from: node1
Still waiting for replies from: node1
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
CRS-4404: The following nodes did not reply within the allotted time:
node1
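A quick health check could count the three "is online" daemon lines (CRS, CSS, EVM) in the crsctl check cluster output. A minimal sketch with sample output hard-coded; the check itself is illustrative:

```shell
# Hypothetical health check: verify CRS, CSS and EVM all report online
# in `crsctl check cluster` output. Sample output hard-coded.
check_output='CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

# Count lines ending in "is online"; a healthy node reports all three daemons.
online_count=$(printf '%s\n' "$check_output" | grep -c 'is online$')
if [ "$online_count" -eq 3 ]; then echo "cluster healthy"; else echo "cluster degraded"; fi
```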

           

Node application status

[[email protected] ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is not running
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is not running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is not running on node: node1
ONS daemon is running on node: node2
eONS is enabled
eONS daemon is not running on node: node1
eONS daemon is running on node: node2
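When scanning this output by hand gets tedious, the "not running" lines can be counted with grep. A small sketch using a few of the sample lines above (the filter is illustrative):

```shell
# Hypothetical filter: count node applications reported as "not running"
# in `srvctl status nodeapps` output. A few sample lines hard-coded.
nodeapps_output='VIP node1-vip is not running
Network is running on node: node2
GSD is not running on node: node1'

# grep -c counts matching lines; anything over zero means a resource is down.
not_running=$(printf '%s\n' "$nodeapps_output" | grep -c 'not running')
echo "$not_running"
```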

           

Check the ASM status

srvctl status asm  // the -a flag can also be added
[[email protected] ~]$ srvctl status asm
ASM is running on node2

           

View the ASM configuration

[[email protected] ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener was not found
PRCA-1032 : ASM listener LISTENER does not exist
ASM is enabled.

           

View the listener configuration

[[email protected] ~]$ srvctl config listener -a
PRCN-2044 : No listener exists
           

Check the SCAN listener status

srvctl status scan_listener
srvctl status scan
           

Check the VIP status on a node

[[email protected] ~]$ srvctl  status vip -n node2
VIP node2-vip is enabled
VIP node2-vip is running on node: node2

           

Query the database status

srvctl status database -d devdb 
           

View the database configuration

srvctl config database -d devdb -a

RAC cluster management commands

crs_start -all   // start all resources
crs_stat -t      // view resource status
crs_stop -all    // stop all resources
           

Stopping cluster services with crsctl

crsctl stop cluster -all  // stop cluster services on all nodes
crsctl stop cluster       // stop cluster services on the local node
           

Remove an existing database from the OCR

srvctl remove database -d devdb
           

Add a database instance to the OCR

srvctl add instance -d <dbname> -i <instname> -n <nodename>
           

Manage a node's RAC resources with srvctl

srvctl start|stop|status  nodeapps -n rac_node
           

Use sqlplus to view the instance name and status

select instance_name, status from v$instance;
           

Restarting a RAC database under 12c

Shutdown order:

close the PDB -> shut down the database -> stop the cluster services

Startup order:

cluster services start at boot -> verify the cluster services are healthy -> open the database -> open the PDB
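The two sequences can be sketched as a dry run that only prints the commands; the names orcl and GNNT_PDB and the grid home path are taken from the examples in these notes and may differ in your environment:

```shell
# Dry-run sketch of the 12c RAC restart order; nothing is executed, the
# command lists are only printed. Database/PDB names and paths are examples.
steps_down='alter pluggable database GNNT_PDB close;
srvctl stop database -d orcl
/u01/app/12.2.0/grid/bin/crsctl stop cluster -all'

steps_up='/u01/app/12.2.0/grid/bin/crsctl start crs
crsctl check cluster -all
srvctl start database -d orcl
alter pluggable database GNNT_PDB open;'

printf 'shutdown order:\n%s\n\nstartup order:\n%s\n' "$steps_down" "$steps_up"
```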

View the PDB status

SQL> select name,open_mode from v$pdbs ;
NAME OPEN_MODE
---------- --------------------
PDB$SEED READ ONLY
GNNT_PDB READ WRITE // the PDB is open
           

Close the PDB

SQL> alter pluggable database GNNT_PDB close;  // to open it, replace close with open
Pluggable database altered.
SQL> select name,open_mode from v$pdbs ;
NAME OPEN_MODE
---------- --------------------
GNNT_PDB MOUNTED // the PDB is now closed
           

You can also switch into the PDB and operate there:

SQL> alter session set container=gnnt_pdb;
Session altered.
SQL> shutdown;
Pluggable Database closed.
SQL> startup;
Pluggable Database opened.
           

Shut down the database

[[email protected] ~]$ srvctl stop database -d orcl
[[email protected] ~]$ srvctl status database -d orcl
Instance gnnt1 is not running on node swnode1
Instance gnnt2 is not running on node swnode2
           

Shut down the RAC cluster

[[email protected] ~]# /u01/app/12.2.0/grid/bin/crsctl stop cluster -all
           

Normally, RAC starts automatically with the operating system; the startup process is /etc/rc.d/ohasd.

Run as the root user on all RAC nodes:

# /u01/app/12.1.0/grid/bin/crsctl start crs
           

List the RAC instances

select instance_name,host_name,archiver,thread#,status from gv$instance;
           

Verify this by shutting down an instance:

srvctl stop instance -d devdb -i devdb1
           
