
Highly available web service with corosync + pacemaker + SAN

I: Lab environment

Node    OS       IP              SAN_IP       VIP
node1   rhel6.5  192.168.10.11   172.16.1.1   192.168.10.100
node2   rhel6.5  192.168.10.12   172.16.1.2   -
san     rhel6.5  -               172.16.1.3   -

1. The concepts behind corosync and pacemaker are not covered here; there is plenty of material about them online. Note:

2. The IP addresses of the two nodes have already been configured as shown in the table above.

3. The SAN has already been connected (the LUN is mapped locally as /dev/sdb).

4. Mutual SSH trust has already been configured between the two nodes and their clocks have been synchronized (a minimal sketch of these prerequisites follows below).
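
For reference, a minimal sketch of these prerequisites, assuming root access on both nodes; ntp.example.com is a placeholder NTP server, /dev/sdb1 is the single partition used later, and on RHEL 6 the xfsprogs package may need to be installed separately:

[root@node1 ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa        # repeat on node2
[root@node1 ~]# ssh-copy-id root@node2                          # and node2 -> node1
[root@node1 ~]# for i in 1 2; do ssh node$i ntpdate ntp.example.com; done
[root@node1 ~]# fdisk /dev/sdb                                  # create one partition, /dev/sdb1
[root@node1 ~]# yum -y install xfsprogs                         # xfs userspace tools
[root@node1 ~]# mkfs.xfs /dev/sdb1                              # format once, on one node only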

II: Install the required software (on both node1 and node2)

1. Install corosync and pacemaker

[root@node1 ~]# for i in 1 2; do ssh node$i yum -y install corosync* pacemaker* ; done

Note: for the RHEL 5 series, corosync and pacemaker packages can be downloaded from:

http://clusterlabs.org/    (pick the packages that match your distribution release)

2. Install crmsh

Download crmsh, pssh, and python-pssh from the following locations:

http://crmsh.github.io/

http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/

The versions downloaded for this setup are:

crmsh-2.1-1.6.x86_64.rpm

pssh-2.3.1-4.1.x86_64.rpm

python-pssh-2.3.1-4.1.x86_64.rpm

Install them:

[root@node1 ~]# for i in 1 2; do ssh node$i yum -y --nogpgcheck localinstall /root/*.rpm; done
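
Optionally, confirm that the packages landed on both nodes (a plain package query, nothing cluster-specific):

[root@node1 ~]# for i in 1 2; do ssh node$i rpm -q crmsh pssh python-pssh; done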

3. Install Apache

[root@node1 ~]# for i in 1 2; do ssh node$i yum -y install httpd; done

[root@node1 ~]# for i in 1 2; do ssh node$i chkconfig httpd off; done
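
Because the cluster will manage httpd, it must not start on its own at boot. If you want to double-check that, a quick look at the runlevel settings:

[root@node1 ~]# for i in 1 2; do ssh node$i chkconfig --list httpd; done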

III: Configure corosync

1. [root@node1 ~]# cd /etc/corosync/

2. [root@node1 corosync]# cp corosync.conf.example corosync.conf

3. The finished configuration file looks like this:

[root@node1 corosync]# cat corosync.conf

# Please read the corosync.conf.5 manual page

compatibility: whitetank

totem {

       version: 2

       secauth: off

       threads: 0

       interface {

                ringnumber: 0

                bindnetaddr: 192.168.10.0     // the network segment used for multicast; change to match your environment

                mcastaddr: 226.94.1.1

                mcastport: 5405

                ttl: 1

       }

}

logging {

       fileline: off

       to_stderr: no

       to_logfile: yes

       to_syslog: no

       logfile: /var/log/cluster/corosync.log   // log file location

       debug: off

       timestamp: on

       logger_subsys {

                subsys: AMF

                debug: off

       }

}

amf {

       mode: disabled

}

#

# the sections below were added

service {

       ver: 0

       name: pacemaker          // start pacemaker together with corosync

}

aisexec {

       user: root

       group: root

}
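
Since secauth is off here, no authentication key is needed. If you did enable secauth, an authkey would have to be generated and copied to node2 before starting corosync; a hedged sketch:

[root@node1 corosync]# corosync-keygen
[root@node1 corosync]# scp /etc/corosync/authkey node2:/etc/corosync/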

4. Copy the configuration file to node2

[root@node1 corosync]# scp corosync.conf node2:/etc/corosync/

5. Start the corosync service

[root@node1 ~]# /etc/init.d/corosync start

Starting Corosync Cluster Engine (corosync):               [  OK  ]

[root@node1 ~]# ssh node2 "/etc/init.d/corosync start"

Starting Corosync Cluster Engine (corosync): [  OK  ]

6. Enable corosync to start at boot

[root@node1 ~]# for i in 1 2; do ssh node$i chkconfig corosync on; done
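
Before going further it is worth checking that the ring came up cleanly; the commands below are standard corosync/pacemaker tools, not specific to this setup:

[root@node1 ~]# corosync-cfgtool -s                               # ring status should report no faults
[root@node1 ~]# grep "Corosync Cluster Engine" /var/log/cluster/corosync.log
[root@node1 ~]# crm_mon -1                                        # both nodes should show as online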

IV: Cluster service configuration

1. Check the current cluster status

[root@node1 ~]# crm status

Last updated: Tue Jun 23 15:28:58 2015

Last change: Tue Jun 23 15:23:58 2015 via crmd on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

0 Resources configured

Online: [ node1 node2 ]

As shown above, both node1 and node2 are online and no resources have been configured yet.

2. Set cluster properties

[root@node1 ~]# crm configure

crm(live)configure# property stonith-enabled=false   // disable STONITH devices
crm(live)configure# property no-quorum-policy=ignore // when quorum cannot be reached, ignore it (two-node cluster)

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# show

node node1

node node2

property cib-bootstrap-options: \

       dc-version=1.1.10-14.el6-368c726 \

       cluster-infrastructure="classic openais (with plugin)" \

       expected-quorum-votes=2 \

       stonith-enabled=false \

       no-quorum-policy=ignore

3. Add the filesystem (Filesystem) resource

crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params \

  > device=/dev/sdb1 directory=/var/www/html fstype=xfs \

  > op start timeout=60 \

  > op stop timeout=60

crm(live)configure# verify

Do not commit yet; first, set the resource webstore to prefer running on node1:

crm(live)configure# location webstore_prefer_node1 webstore 50: node1

crm(live)configure# verify

Now commit:

crm(live)configure# commit

Go back up one level and check the current cluster status:

crm(live)configure# cd

crm(live)# status

Last updated: Tue Jun 23 15:55:03 2015

Last change: Tue Jun 23 15:54:14 2015 via cibadmin on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

1 Resources configured

Online: [ node1 node2 ]

webstore       (ocf::heartbeat:Filesystem):    Started node1

As shown above, webstore is currently running on node1.
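
On node1 the shared filesystem should now be mounted under the web root; a quick sanity check:

[root@node1 ~]# mount | grep /var/www/html
[root@node1 ~]# df -h /var/www/html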

4. Add the httpd service resource; require that httpd run on the same node as webstore and that webstore start before httpd (an equivalent group-based sketch follows the status output below)

crm(live)configure# primitive httpd lsb:httpd

crm(live)configure# colocation httpd_with_webstore inf: httpd webstore

crm(live)configure# order webstore_before_httpd Mandatory: webstore:start httpd

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Tue Jun 23 15:58:53 2015

Last change: Tue Jun 23 15:58:46 2015 via cibadmin on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

2 Resources configured

Online: [ node1 node2 ]

 webstore      (ocf::heartbeat:Filesystem):   Started node1

 httpd (lsb:httpd):    Started node1
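
As an aside, the same colocation and ordering can be expressed more compactly with a resource group; a hedged alternative sketch (webservice is just an illustrative name, not used elsewhere in this article):

crm(live)configure# group webservice webstore httpd

Members of a group are colocated and started in the listed order, so the two explicit constraints above would then not be needed.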

5. Add the virtual IP resource; require that the virtual IP run on the same node as httpd and start only after httpd has started

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params \

  > ip=192.168.10.100 nic=eth0

crm(live)configure# colocation webip_with_httpd inf: webip httpd

crm(live)configure# order httpd_before_webip Mandatory: httpd webip

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Tue Jun 23 16:02:03 2015

Last change: Tue Jun 23 16:01:54 2015 via cibadmin on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

3 Resources configured

Online: [ node1 node2 ]

 webstore      (ocf::heartbeat:Filesystem):   Started node1

 httpd (lsb:httpd):    Started node1

 webip (ocf::heartbeat:IPaddr):       Started node1
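
Before testing failover, it helps to confirm that the site actually answers on the VIP. A minimal sketch (the index.html content is only a placeholder):

[root@node1 ~]# echo "test page on shared storage" > /var/www/html/index.html
[root@node1 ~]# curl http://192.168.10.100

The curl call, from either node or from a client on the 192.168.10.0 network, should return the placeholder page.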

V: High-availability test

1. Take node1 offline (standby) and check the cluster status

[root@node1 ~]# crm node standby

[root@node1 ~]# crm status

Last updated: Tue Jun 23 16:05:40 2015

Last change: Tue Jun 23 16:05:37 2015 via crm_attribute on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

3 Resources configured

Node node1: standby

Online: [ node2 ]

 webstore      (ocf::heartbeat:Filesystem):   Started node2

 httpd (lsb:httpd):    Started node2

 webip (ocf::heartbeat:IPaddr):       Started node2

As shown above, the resources have failed over to node2.
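
To confirm the failover from the client side, one can check that the VIP now lives on node2 and that the page created earlier is still served:

[root@node1 ~]# ssh node2 "ip addr show eth0 | grep 192.168.10.100"
[root@node1 ~]# curl http://192.168.10.100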

2. Bring node1 back online

[root@node1 ~]# crm node online

[root@node1 ~]# crm status

Last updated: Tue Jun 23 16:06:43 2015

Last change: Tue Jun 23 16:06:40 2015 via crm_attribute on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

3 Resources configured

Online: [ node1 node2 ]

 webstore      (ocf::heartbeat:Filesystem):   Started node1

 httpd (lsb:httpd):    Started node1

 webip (ocf::heartbeat:IPaddr):       Started node1

As shown above, the resources have moved back to node1, which matches the location preference for node1 that we configured.

This completes a simple highly available web service setup.
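
One hedged follow-up, not part of the configuration above: with a location score of 50 and no resource stickiness set, the resources always fail back to node1 as soon as it returns, which causes a second brief interruption. If automatic failback is not wanted, a default stickiness higher than the location score can be set, for example:

crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# commit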