Linux learning: building a highly available distributed file system with MFS (MooseFS)

Layout: vm1 and vm2 form the HA master pair; vm3 and vm4 are the storage (chunk) nodes; the physical machine is the client.

192.168.2.199   vm1.example.com

192.168.2.202   vm2.example.com

192.168.2.205   vm3.example.com

192.168.2.175   vm4.example.com

192.168.2.199 mfsmaster

VIP: 192.168.2.213

1. Master configuration and startup

lftp i:~> get pub/docs/mfs/mfs-1.6.27-1.tar.gz 

[root@vm1 ~]# mv mfs-1.6.27-1.tar.gz mfs-1.6.27.tar.gz    # rename so rpmbuild -tb accepts the tarball
[root@vm1 ~]# yum install -y fuse-devel
[root@vm1 ~]# rpmbuild -tb mfs-1.6.27.tar.gz
[root@vm1 ~]# cd rpmbuild/RPMS/x86_64/
[root@vm1 x86_64]# rpm -ivh mfs-cgi-1.6.27-2.x86_64.rpm mfs-cgiserv-1.6.27-2.x86_64.rpm mfs-master-1.6.27-2.x86_64.rpm

[root@vm1 x86_64]# cd /etc/mfs/
[root@vm1 mfs]# cp mfsmaster.cfg.dist mfsmaster.cfg
[root@vm1 mfs]# cp mfsexports.cfg.dist mfsexports.cfg
[root@vm1 mfs]# cp mfstopology.cfg.dist mfstopology.cfg
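The copied defaults are enough for this lab. An mfsexports.cfg entry has the form <hosts> <path> <options>; the shipped file contains roughly the following (a sketch of the defaults, adjust the host pattern as needed):

*       .       rw                      # access to the meta (trash) filesystem
*       /       rw,alldirs,maproot=0    # any host may mount any subtree read-write as root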

[root@vm1 mfs]# cd /var/lib/mfs/
[root@vm1 mfs]# cp metadata.mfs.empty metadata.mfs
[root@vm1 mfs]# chown -R nobody .    # the daemon runs as nobody
[root@vm1 mfs]# vim /etc/hosts
192.168.2.199   mfsmaster
[root@vm1 mfs]# mfsmaster    # start mfsmaster

Start mfscgiserv:

[root@vm1 mfs]# cd /usr/share/mfscgi/
[root@vm1 mfscgi]# chmod +x *.cgi
[root@vm1 mfscgi]# mfscgiserv

From the physical machine, browse to 192.168.2.199:9425 to reach the CGI monitor.

2. Configuring the storage (chunk) nodes

[root@vm1 ~]# scp rpmbuild/RPMS/x86_64/mfs-chunkserver-1.6.27-2.x86_64.rpm vm3.example.com:
[root@vm1 ~]# scp rpmbuild/RPMS/x86_64/mfs-chunkserver-1.6.27-2.x86_64.rpm vm4.example.com:

Apply the same configuration on both nodes (vm3 shown):

[root@vm3 ~]# rpm -ivh mfs-chunkserver-1.6.27-2.x86_64.rpm

[root@vm3 ~]# mkdir /mnt/chunk1
[root@vm3 ~]# mkdir /var/lib/mfs
[root@vm3 ~]# chown nobody /mnt/chunk1/ /var/lib/mfs/
[root@vm3 ~]# cd /etc/mfs/
[root@vm3 mfs]# cp mfschunkserver.cfg.dist mfschunkserver.cfg
[root@vm3 mfs]# cp mfshdd.cfg.dist mfshdd.cfg
[root@vm3 mfs]# vim mfshdd.cfg
/mnt/chunk1
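Here /mnt/chunk1 is just a directory on the root filesystem, which is fine for a lab; in practice each mfshdd.cfg entry would normally be a dedicated disk or partition, e.g. (assuming a spare /dev/vdb1 on the chunk node):

[root@vm3 ~]# mkfs.ext4 /dev/vdb1
[root@vm3 ~]# mount /dev/vdb1 /mnt/chunk1
[root@vm3 ~]# chown nobody /mnt/chunk1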

[root@vm3 mfs]# vim /etc/hosts
192.168.2.199   mfsmaster
[root@vm3 mfs]# mfschunkserver

Then refresh 192.168.2.199:9425 in the browser to see the chunk servers registered.

3. Client configuration (on the physical machine, 192.168.2.168; shown below as client)

[root@vm1 x86_64]# scp mfs-client-1.6.27-2.x86_64.rpm 192.168.2.168:
[root@client ~]# rpm -ivh mfs-client-1.6.27-2.x86_64.rpm
[root@client ~]# cd /etc/mfs/
[root@client mfs]# cp mfsmount.cfg.dist mfsmount.cfg
[root@client mfs]# vim mfsmount.cfg
/mnt/mfs
[root@client mfs]# vim /etc/hosts
192.168.2.199   mfsmaster
[root@client mfs]# mkdir /mnt/mfs
[root@client mfs]# mfsmount    # mounts the file system at /mnt/mfs
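With no arguments mfsmount takes the mount point from mfsmount.cfg and resolves the master via the mfsmaster hosts entry; the explicit equivalent, plus a quick check, is:

[root@client mfs]# mfsmount /mnt/mfs -H mfsmaster
[root@client mfs]# df -h /mnt/mfs    # an mfsmaster:9421 filesystem should appear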

Testing (create dir1 and dir2 under /mnt/mfs first):

[root@client mfs]# mfssetgoal -r 2 dir2/    # keep two copies of every file under dir2

[root@client mfs]# mfsgetgoal dir1/
dir1/: 1
[root@client mfs]# mfsgetgoal dir2/
dir2/: 2
[root@client mfs]# cp /etc/passwd dir1/
[root@client mfs]# cp /etc/fstab dir2/

[root@client mfs]# mfsfileinfo dir1/passwd
dir1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        copy 1: 192.168.2.175:9422
[root@client mfs]# mfsfileinfo dir2/fstab
dir2/fstab:
    chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
        copy 1: 192.168.2.175:9422
        copy 2: 192.168.2.205:9422

[root@vm3 ~]# mfschunkserver stop    # stop the chunk server on vm3 (192.168.2.205)
[root@client mfs]# mfsfileinfo dir2/fstab
dir2/fstab:
    chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
        copy 1: 192.168.2.175:9422

Start it again and the two copies reappear; this replication is what avoids a single point of failure.

[root@vm4 mfs]# mfschunkserver stop    # stop again, this time with both chunk servers down
[root@client mfs]# mfsfileinfo dir1/passwd
dir1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        no valid copies !!!    # the file is still listed, but its data is unreachable
[root@client mfs]# mfsfileinfo dir2/fstab
dir2/fstab:
    chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
        no valid copies !!!

Recovering an accidentally deleted file

[root@client ~]# mkdir /mnt/meta
[root@client ~]# mfsmount -m /mnt/meta/ -H mfsmaster    # mount the MFSMETA (trash) filesystem
[root@client ~]# cd /mnt/meta/trash/
[root@client trash]# mv 0000093F\|etc\|xdg\|autostart\|pulseaudio.desktop undel/    # moving a trash entry into undel/ restores it
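How long deleted entries stay in the trash is controlled by the trash time, managed with the matching tools from the same client package (86400 s = one day, an arbitrary example value):

[root@client mfs]# mfsgettrashtime dir2/
[root@client mfs]# mfssettrashtime 86400 dir2/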

Master recovery: if the master was not shut down cleanly, replay the changelogs into the metadata file, then start it:

[root@vm1 ~]# mfsmetarestore -a
[root@vm1 ~]# mfsmaster
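The -a flag locates the data directory and changelogs automatically; the manual form of the same tool (file names assumed to be the defaults under /var/lib/mfs) is roughly:

[root@vm1 mfs]# mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog.*.mfs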

4. Making the master highly available

Stop MFS everywhere, client first:

[root@client ~]# umount /mnt/mfs/
[root@vm3 chunk1]# mfschunkserver stop
[root@vm4 chunk1]# mfschunkserver stop
[root@vm1 ~]# mfsmaster stop

Create an init script so pacemaker can manage the master as an LSB resource:

[root@vm1 init.d]# vim mfs

#!/bin/bash
#
# Init file for the MooseFS master service
#
# chkconfig: - 92 84
#
# description: MooseFS master
#
# processname: mfsmaster

# Source function library.
. /etc/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "${NETWORKING}" == "no" ] && exit 0
[ -x "/usr/sbin/mfsmaster" ] || exit 1
[ -r "/etc/mfs/mfsmaster.cfg" ] || exit 1
[ -r "/etc/mfs/mfsexports.cfg" ] || exit 1

RETVAL=0
prog="mfsmaster"
datadir="/var/lib/mfs"
mfsbin="/usr/sbin/mfsmaster"
mfsrestore="/usr/sbin/mfsmetarestore"

start () {
    echo -n $"Starting $prog: "
    $mfsbin start >/dev/null 2>&1
    # If a clean start fails (e.g. after a crash), replay the
    # changelogs with mfsmetarestore and try once more.
    if [ $? -ne 0 ]; then
        $mfsrestore -a >/dev/null 2>&1 && $mfsbin start >/dev/null 2>&1
    fi
    RETVAL=$?
    echo
    return $RETVAL
}

stop () {
    echo -n $"Stopping $prog: "
    # Try a clean stop; fall back to SIGKILL.
    $mfsbin -s >/dev/null 2>&1 || killall -9 $prog #>/dev/null 2>&1
    RETVAL=$?
    echo
    return $RETVAL
}

restart () {
    stop
    start
}

reload () {
    echo -n $"reload $prog: "
    $mfsbin reload >/dev/null 2>&1
    RETVAL=$?
    echo
    return $RETVAL
}

restore () {
    echo -n $"restore $prog: "
    $mfsrestore -a >/dev/null 2>&1
    RETVAL=$?
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    reload)
        reload
        ;;
    restore)
        restore
        ;;
    status)
        status $prog
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|reload|restore|status}"
        RETVAL=1
esac

exit $RETVAL

[root@vm1 init.d]# chmod +x mfs
[root@vm1 init.d]# /etc/init.d/mfs start    # test the script
[root@vm1 ~]# ps aux | grep mfsmaster
[root@vm1 init.d]# /etc/init.d/mfs stop
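Because the script carries a chkconfig header it can also be registered as a service; keep it disabled at boot, though, since pacemaker will start it through the lsb:mfs resource below (an optional aside, not part of the original steps):

[root@vm1 init.d]# chkconfig --add mfs
[root@vm1 init.d]# chkconfig mfs off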

[root@vm1 init.d]# scp mfs vm2.example.com:/etc/init.d/
[root@vm1 x86_64]# scp mfs-master-1.6.27-2.x86_64.rpm mfs-cgi-1.6.27-2.x86_64.rpm mfs-cgiserv-1.6.27-2.x86_64.rpm vm2.example.com:
[root@vm2 ~]# rpm -ivh mfs-master-1.6.27-2.x86_64.rpm mfs-cgi-1.6.27-2.x86_64.rpm mfs-cgiserv-1.6.27-2.x86_64.rpm
[root@vm2 mfs]# cp mfsmaster.cfg.dist mfsmaster.cfg
[root@vm2 mfs]# cp mfsexports.cfg.dist mfsexports.cfg
[root@vm2 mfs]# cp mfstopology.cfg.dist mfstopology.cfg
[root@vm2 mfs]# cd /var/lib/mfs/
[root@vm2 mfs]# cp metadata.mfs.empty metadata.mfs
[root@vm2 mfs]# chown -R nobody .
[root@vm2 mfs]# cd /usr/share/mfscgi/
[root@vm2 mfscgi]# chmod +x *.cgi

[root@vm1 ~]# vim /etc/hosts    # on every node, resolve mfsmaster to the virtual IP

192.168.2.213   mfsmaster

Resetting the pacemaker configuration (remove the resources left over from the earlier lab):

[root@vm1 ~]# /etc/init.d/corosync start    # start corosync on both nodes first

crm(live)resource# stop vip 

crm(live)configure# delete vip

crm(live)configure# delete webdata

crm(live)configure# delete website

crm(live)configure# show 

node vm1.example.com
node vm2.example.com
primitive vmfence stonith:fence_xvm \
    params pcmk_host_map="vm1.example.com:vm1;vm2.example.com:vm2" \
    op monitor interval="60s" \
    meta target-role="Started"
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="true" \
    no-quorum-policy="ignore"

crm(live)configure# commit 

[root@vm1 ~]# /etc/init.d/corosync stop    # stop it again on both nodes

5. Installing DRBD to hold the master's data files

lftp i:~> get pub/docs/drbd/rhel6/drbd-8.4.3.tar.gz 

[root@vm1 ~]# tar zxf drbd-8.4.3.tar.gz
[root@vm1 ~]# cd drbd-8.4.3
[root@vm1 drbd-8.4.3]# yum install -y flex kernel-devel
[root@vm1 drbd-8.4.3]# ./configure --enable-spec --with-km
[root@vm1 drbd-8.4.3]# cp ../drbd-8.4.3.tar.gz /root/rpmbuild/SOURCES/
[root@vm1 drbd-8.4.3]# rpmbuild -bb drbd.spec       # userland tools
[root@vm1 drbd-8.4.3]# rpmbuild -bb drbd-km.spec    # kernel module
[root@vm1 ~]# cd rpmbuild/RPMS/x86_64/
[root@vm1 x86_64]# rpm -ivh drbd-*
[root@vm1 x86_64]# scp drbd-* vm2.example.com:
[root@vm2 ~]# rpm -ivh drbd-*

Then add a 2 GB virtual disk to both vm1 and vm2, and define the DRBD resource:

[root@vm1 ~]# vim /etc/drbd.d/mfsdata.res

resource mfsdata {
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    on vm1.example.com {
        disk /dev/vdb1;
        address 192.168.2.199:7789;
    }
    on vm2.example.com {
        disk /dev/vdb1;
        address 192.168.2.202:7789;
    }
}

[root@vm1 ~]# scp /etc/drbd.d/mfsdata.res vm2.example.com:/etc/drbd.d/

Partition the new disk to create /dev/vdb1; do this on both nodes, as sketched below:

[root@vm1 ~]# fdisk -cu /dev/vdb
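Inside fdisk a single primary partition covering the whole disk is enough; the interactive keystrokes are roughly:

n          # new partition
p          # primary
1          # partition number 1
<Enter>    # accept the default first sector
<Enter>    # accept the default last sector (use the whole 2 GB disk)
w          # write the partition table and quit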

[root@vm1 ~]# drbdadm create-md mfsdata
[root@vm1 ~]# /etc/init.d/drbd start    # on both nodes
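Both sides should now report Secondary/Secondary with Inconsistent data until one node is promoted; a standard way to watch the state (not in the original notes) is:

[root@vm1 ~]# cat /proc/drbd    # look for ro:Secondary/Secondary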

Promote node 1 to primary and create the filesystem:

[root@vm1 ~]# drbdsetup primary /dev/drbd1 --force
[root@vm1 ~]# mkfs.ext4 /dev/drbd1

Copy the MFS data files onto the DRBD device:

[root@vm1 ~]# mount /dev/drbd1 /mnt/
[root@vm1 ~]# cd /var/lib/mfs/
[root@vm1 mfs]# mv * /mnt/
[root@vm1 mfs]# cd /mnt/
[root@vm1 mnt]# chown nobody .
[root@vm1 ~]# umount /mnt/
[root@vm1 ~]# drbdadm secondary mfsdata

Verify from the other node:

[root@vm2 ~]# drbdadm primary mfsdata
[root@vm2 ~]# mount /dev/drbd1 /var/lib/mfs/
[root@vm2 ~]# cd /var/lib/mfs/
[root@vm2 mfs]# ls
changelog.2.mfs  changelog.6.mfs  metadata.mfs         metadata.mfs.empty  stats.mfs
changelog.3.mfs  lost+found       metadata.mfs.back.1  sessions.mfs
[root@vm2 ~]# umount /var/lib/mfs/

Remove the leftover iSCSI initiator setup from the earlier lab (on both vm1 and vm2):

[root@vm1 ~]# iscsiadm -m node -u
[root@vm1 ~]# iscsiadm -m node -o delete
[root@vm1 ~]# /etc/init.d/iscsi stop
[root@vm1 ~]# chkconfig iscsi off
[root@vm1 ~]# chkconfig iscsid off

6. Adding the resources to corosync/pacemaker

crm(live)configure# primitive MFSDATA ocf:linbit:drbd params drbd_resource=mfsdata    # DRBD resource backing mfsmaster
crm(live)configure# primitive MFSfs ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/var/lib/mfs fstype=ext4    # filesystem resource
crm(live)configure# ms mfsdataclone MFSDATA meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true    # master/slave definition
crm(live)configure# primitive mfsmaster lsb:mfs op monitor interval=30s    # mfsmaster via the init script
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.2.213 cidr_netmask=32 op monitor interval=30s    # recreate the VIP deleted earlier (parameters as in the final configuration below)
crm(live)configure# group mfsgrp vip MFSfs mfsmaster
crm(live)configure# colocation mfs-with-drbd inf: mfsgrp mfsdataclone:Master
crm(live)configure# order mfs-after-drbd inf: mfsdataclone:promote mfsgrp:start

crm(live)configure# commit

crm_mon now shows:

Online: [ vm1.example.com vm2.example.com ]

vmfence (stonith:fence_xvm):    Started vm1.example.com

 Master/Slave Set: mfsdataclone [MFSDATA]

     Masters: [ vm1.example.com ]

     Slaves: [ vm2.example.com ]

 Resource Group: mfsgrp

     vip        (ocf::heartbeat:IPaddr2): Started vm1.example.com

     MFSfs (ocf::heartbeat:Filesystem):    Started vm1.example.com

     mfsmaster  (lsb:mfs): Started vm1.example.com

The resulting configuration:

node vm1.example.com
node vm2.example.com
primitive MFSDATA ocf:linbit:drbd \
    params drbd_resource="mfsdata"
primitive MFSfs ocf:heartbeat:Filesystem \
    params device="/dev/drbd1" directory="/var/lib/mfs" fstype="ext4"
primitive mfsmaster lsb:mfs \
    op monitor interval="30s"
primitive vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.2.213" cidr_netmask="32" \
    op monitor interval="30s"
primitive vmfence stonith:fence_xvm \
    params pcmk_host_map="vm1.example.com:vm1;vm2.example.com:vm2" \
    op monitor interval="60s" \
    meta target-role="Started"
group mfsgrp vip MFSfs mfsmaster
ms mfsdataclone MFSDATA \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mfs-with-drbd inf: mfsgrp mfsdataclone:Master
order mfs-after-drbd inf: mfsdataclone:promote mfsgrp:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="true" \
    no-quorum-policy="ignore"

Testing failover:

[root@vm1 ~]# /etc/init.d/corosync start
[root@vm2 ~]# /etc/init.d/corosync start
[root@vm3 ~]# mfschunkserver
[root@vm4 ~]# mfschunkserver
[root@client ~]# mfsmount

Stop corosync on vm1: vm2 takes over all the resources, and the status shows OFFLINE: [ vm1.example.com ], with Masters: [ vm2.example.com ] and Stopped: [ vm1.example.com ]. Then start corosync on vm1 again, and stop and restart corosync on vm2 so the resources come back up on vm1.

On the client, run dd if=/dev/zero of=bigfile bs=1M count=300 inside dir2 and, while it is writing, kill the mfs service on vm1: fencing reboots the node and the resources move to the other one (in my run vm3 also crashed, probably because its snapshot space filled up; if the client's file has two copies, one of them can be lost). Stopping corosync instead migrates the resources as well; in neither case are the client's files damaged.

Then test by reading: keep running cat on fstab from the client while stopping corosync on the master node; there is only a brief delay during the switchover.

Fence test: take down eth0 on vm1 and the node is rebooted by fencing; while it reboots, vm2 has to bring up drbd. After vm1 comes back, start corosync on it (drbd starts at boot here, and corosync can be made to as well).

Summary: redoing this later on my own machine, I ran into problems that forced a reinstall of the master: uninstall with rpm -e mfs-master, reinstall, reformat the DRBD device, and fix the permissions again.

Here corosync and drbd work hand in hand: drbd carries the master's data directory /var/lib/mfs, while the master itself acts as the scheduler (metadata server) of the cluster.

vm3 and vm4 are still the chunk nodes; the chunk data lives under /mnt/chunk1:

[root@vm3 ~]# ls /mnt/chunk1/
00  0D  1A  27  34  41  4E  5B  68  75  82  8F  9C  A9  B6  C3  D0  DD  EA  F7

Finally, remember to enable drbd and corosync at boot!
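On RHEL 6 that is one chkconfig call per service, run on both vm1 and vm2:

[root@vm1 ~]# chkconfig drbd on
[root@vm1 ~]# chkconfig corosync on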

7. HA with heartbeat + mfsmaster

This part was done on my own machine, where heartbeat had been set up before, so the hostnames differed from those above (the same vm names are kept below for readability).

Shut everything down first, in the right order: the client unmounts first.

[root@client ~]# umount /mnt/mfs/
[root@vm3 ~]# mfschunkserver stop
[root@vm4 ~]# mfschunkserver stop
[root@vm1 ~]# /etc/init.d/corosync stop
[root@vm2 ~]# /etc/init.d/corosync stop

[root@vm1 ~]# vim /etc/ha.d/haresources
[root@vm2 ~]# vim /etc/ha.d/haresources    # same change on both nodes
vm1.example.com IPaddr::192.168.2.213/24/eth0:0 drbddisk::mfsdata Filesystem::/dev/drbd1::/var/lib/mfs::ext4 mfs
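Reading the haresources line left to right (standard heartbeat v1 semantics: resources are started in order on the preferred node and stopped in reverse):

vm1.example.com                               # preferred node
IPaddr::192.168.2.213/24/eth0:0               # bring up the VIP on eth0:0
drbddisk::mfsdata                             # promote the DRBD resource to primary
Filesystem::/dev/drbd1::/var/lib/mfs::ext4    # mount the metadata filesystem
mfs                                           # finally start /etc/init.d/mfs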

[root@vm1 ~]# /etc/init.d/drbd start    # start drbd first; this matters
[root@vm2 ~]# /etc/init.d/drbd start    # wait until both sides show Secondary
[root@vm1 ~]# /etc/init.d/heartbeat start    # start heartbeat; watch the logs
[root@vm2 ~]# /etc/init.d/heartbeat start
[root@vm3 ~]# mfschunkserver    # start the storage nodes
[root@vm4 ~]# mfschunkserver
[root@client ~]# mfsmount    # mount, then check with df
[root@client ~]# cat /mnt/mfs/1/fstab    # read a file to confirm access

Failover test:

[root@vm1 ~]# /etc/init.d/heartbeat stop    # the resources jump to the other node without interrupting reads; starting heartbeat again moves them back
[root@vm1 ~]# /etc/init.d/mfs start    # after stopping MFS itself, reads hang until it is started again: heartbeat v1 monitors only node liveness, not the service
