
JEESZ – ZooKeeper Cluster Installation

1. Create a zookeeper directory under the filesystem root (on service1, service2, and service3):

[root@localhost /]# mkdir zookeeper

Upload the installer to the service1 server via Xshell: put zookeeper-3.4.6.tar.gz in the /software directory.

2. Copy /software/zookeeper-3.4.6.tar.gz from service1 to service2 and service3 with scp:

[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz root@192.168.2.212:/software/

[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz root@192.168.2.213:/software/
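The two copies above can be driven by one loop. A minimal sketch, shown as a dry run that only prints each command so it can be inspected first; run the printed commands (or remove the printf indirection) to perform the real copy, which requires root ssh access to both nodes:

```shell
# Dry-run sketch: build the scp command for each remote node and print it.
# The host list matches this tutorial's service2 and service3 addresses.
copy_cmds=$(for host in 192.168.2.212 192.168.2.213; do
  printf 'scp -r /software/zookeeper-3.4.6.tar.gz root@%s:/software/\n' "$host"
done)
printf '%s\n' "$copy_cmds"
```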

3. Copy /software/zookeeper-3.4.6.tar.gz to the /zookeeper/ directory (run on service1, service2, and service3):

[root@localhost software]# cp /software/zookeeper-3.4.6.tar.gz /zookeeper/

4. Extract zookeeper-3.4.6.tar.gz (run on service1, service2, and service3):

[root@localhost /]# cd /zookeeper/

[root@localhost zookeeper]# tar -zxvf zookeeper-3.4.6.tar.gz

5. Create two directories under /zookeeper, zkdata and zkdatalog (on service1, service2, and service3):

[root@localhost zookeeper]# mkdir zkdata

[root@localhost zookeeper]# mkdir zkdatalog

6. Go to the /zookeeper/zookeeper-3.4.6/conf/ directory:

[root@localhost zookeeper]# cd /zookeeper/zookeeper-3.4.6/conf/

[root@localhost conf]# ls

configuration.xsl log4j.properties zoo.cfg zoo_sample.cfg

Edit the zoo.cfg file (the stock distribution ships only zoo_sample.cfg; if zoo.cfg does not exist yet, copy zoo_sample.cfg to zoo.cfg first):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/zkdata
dataLogDir=/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
server.1=192.168.2.211:12888:13888
server.2=192.168.2.212:12888:13888
server.3=192.168.2.213:12888:13888
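To keep the three nodes consistent, the file can be regenerated from a script instead of edited by hand on each machine. A minimal sketch (run from a scratch directory, then copy the result into /zookeeper/zookeeper-3.4.6/conf/ on every node):

```shell
# Sketch: write the zoo.cfg used in this tutorial into the current directory.
# On each node, copy it to /zookeeper/zookeeper-3.4.6/conf/zoo.cfg.
cat > zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/zkdata
dataLogDir=/zookeeper/zkdatalog
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=192.168.2.211:12888:13888
server.2=192.168.2.212:12888:13888
server.3=192.168.2.213:12888:13888
EOF
grep -c '^server\.' zoo.cfg   # prints 3: one line per ensemble member
```

Note how the timing values combine: with tickTime=2000, initLimit=10 gives followers 10 × 2000 ms = 20 s to connect and sync with the leader, and syncLimit=5 allows up to 10 s between a request and its acknowledgement.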

Make the same zoo.cfg changes on service2 and service3.

Write the myid file (go to the /zookeeper/zkdata directory):

[root@localhost /]# cd /zookeeper/zkdata

[root@localhost /]# echo 1 > myid

Write the myid files on service2 and service3 (each value must match that node's server.N line in zoo.cfg):

echo 2 > myid

echo 3 > myid
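The three echo commands follow one rule: each node writes its own server.N number into myid. A minimal sketch that makes the rule explicit; SERVER_ID and the relative DATA_DIR default are demonstration assumptions — on the real nodes SERVER_ID is 1, 2, or 3 and DATA_DIR is /zookeeper/zkdata:

```shell
# Sketch: write this node's id into myid. SERVER_ID must match the N in this
# node's server.N line in zoo.cfg. DATA_DIR defaults to a local directory here
# for demonstration; on the real nodes it is /zookeeper/zkdata.
SERVER_ID="${SERVER_ID:-1}"
DATA_DIR="${DATA_DIR:-./zkdata}"
mkdir -p "$DATA_DIR"
echo "$SERVER_ID" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"   # prints the id, e.g. 1 on service1
```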

View the ZooKeeper commands:

[root@localhost ~]# cd /zookeeper/zookeeper-3.4.6/bin/

[root@localhost bin]# ls

README.txt zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh zookeeper.out

Run zkServer.sh to see the detailed usage:

[root@localhost bin]# ./zkServer.sh

JMX enabled by default

Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg

Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}

Start the zk service on service1, service2, and service3:

[root@localhost bin]# ./zkServer.sh start

Check the zk process with jps:

[root@localhost bin]# jps

31483 QuorumPeerMain

31664 Jps

Check the zk status on service1, service2, and service3 (you should see the leader and follower nodes):

[root@localhost bin]# ./zkServer.sh status

Mode: follower

Mode: leader

The leader and follower nodes are visible, so the cluster has been installed successfully.
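A healthy three-node ensemble reports exactly one leader and two followers. A small sketch of that sanity check, using canned status output for illustration — in practice, collect the "Mode:" line printed by ./zkServer.sh status on each node:

```shell
# Sketch: verify the ensemble shape from collected "zkServer.sh status" output.
# The three lines below are canned for illustration; gather the real "Mode:"
# line from each of the three nodes.
status_lines="Mode: follower
Mode: leader
Mode: follower"
leaders=$(printf '%s\n' "$status_lines" | grep -c '^Mode: leader$')
followers=$(printf '%s\n' "$status_lines" | grep -c '^Mode: follower$')
echo "leaders=$leaders followers=$followers"   # prints leaders=1 followers=2
```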
