1. Implementation with ZooKeeper
This is an effective high-availability solution for ActiveMQ. The principle: all ActiveMQ brokers register with a ZooKeeper ensemble. Only one broker, the master, serves clients; the other brokers stand by as slaves. If the master fails, ZooKeeper's internal election mechanism promotes one of the slaves to master, and it continues serving clients.
Official documentation: http://activemq.apache.org/replicated-leveldb-store.html
Reference: https://www.ibm.com/developerworks/cn/data/library/bd-zookeeper/ and others
2. Cluster roles
A ZooKeeper cluster has two main roles: leader and follower.
The leader initiates and decides votes and updates the system state.
Learners include followers and observers.
A follower accepts client requests, returns results to clients, and votes during leader election.
An observer accepts client connections and forwards write requests to the leader, but does not take part in voting; it only synchronizes the leader's state. Observers exist to scale the cluster out and improve read throughput.
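For reference, an observer is added in zoo.cfg roughly as follows (a sketch with a hypothetical fourth node at 192.168.0.200, which is not part of this deployment):
# in the observer's own zoo.cfg:
peerType=observer
# in every node's zoo.cfg, tag the observer's server line:
server.4=192.168.0.200:2888:3888:observer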
3. How many ZooKeeper nodes?
How many nodes should a ZooKeeper cluster run?
You can run a single node, but that is not a cluster. For a real ensemble, deploy 3, 5, or 7 nodes; this walkthrough uses 3.
The more nodes you deploy, the more reliable the service. An odd number is recommended; an even number works, but the ensemble only becomes unavailable once more than half of its nodes are down, so odd counts give the best fault tolerance per machine: 3 nodes tolerate 1 failure, 4 nodes still tolerate only 1, and 5 nodes tolerate 2.
Give each ZooKeeper server about 1 GB of memory and, if possible, a dedicated disk; a dedicated disk is the best guarantee of consistently fast ZooKeeper performance. On a heavily loaded cluster, do not run ZooKeeper on the same machines as RegionServers, just as you would keep it off DataNodes and TaskTrackers (advice inherited from the Hadoop/HBase world).
4. Environment preparation
ZooKeeper environment
Host IP | Client port | Quorum/election ports | Install path (under /usr/local) |
---|---|---|---|
192.168.0.85 | 2181 | 2888:3888 | zookeeper-3.4.10 |
192.168.0.171 | 2181 | 2888:3888 | zookeeper-3.4.10 |
192.168.0.181 | 2181 | 2888:3888 | zookeeper-3.4.10 |
5. Install and configure ZooKeeper
[root@node /root]# tar xvf zookeeper-3.4.10.tar.gz -C /usr/local/    # unpack
[root@node /usr/local/zookeeper-3.4.10/conf]# cp zoo_sample.cfg zoo.cfg    # create zoo.cfg from the sample
# Export the ZooKeeper environment variables
Append the following to /etc/profile:
export ZK_HOME=/usr/local/zookeeper-3.4.10
export PATH=$PATH:$ZK_HOME/bin
# Reload the profile so the variables take effect
[root@node /root]# source /etc/profile
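A quick sanity check that the variables are active (a convenience, not in the original write-up):
echo $ZK_HOME       # should print /usr/local/zookeeper-3.4.10
which zkServer.sh   # should resolve to $ZK_HOME/bin/zkServer.sh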
6. Edit the main configuration file zoo.cfg
[root@node /usr/local/zookeeper-3.4.10/conf]# cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataLogDir=/var/log
server.1=192.168.0.171:2888:3888
server.2=192.168.0.181:2888:3888
server.3=192.168.0.85:2888:3888
Parameter notes #######################################
# dataDir: data directory
# dataLogDir: transaction log directory
# clientPort: the port clients connect on
# tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
# initLimit: the maximum number of tick intervals the leader allows a follower for its initial connection and sync. If nothing has been heard back after that many heartbeats, the connection is considered failed; here that is 5 * 2000 ms = 10 s.
# syncLimit: the maximum number of tick intervals allowed between a request and its acknowledgement when the leader and a follower exchange messages; here 2 * 2000 ms = 4 s.
# server.A=B:C:D: A is the server id; B is the server's IP address; C is the port this server uses to exchange data with the cluster leader; D is the port used to run a fresh election should the leader die. In a pseudo-cluster (all instances on one host) every B is the same, so each instance must be given distinct C and D ports.
Each ZooKeeper instance needs its own data and log directories, so the directory that dataDir points to must be created by hand before starting.
7. Create the server ID (myid)
Besides editing zoo.cfg, cluster mode also requires a myid file, placed in the dataDir directory.
The file holds a single value: the A from the server.A=B:C:D lines in zoo.cfg. Create it inside the dataDir path configured in zoo.cfg.
On 192.168.0.171, create the myid file with the value 1 to match server.1 in zoo.cfg:
echo 1 > /tmp/zookeeper/myid
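For completeness, a sketch of this step on every node, with each id matching its server.N line in zoo.cfg (run each pair on the host named in the comment; /tmp is used only because zoo.cfg above keeps the sample dataDir):
mkdir -p /tmp/zookeeper         # create the dataDir by hand, as noted above
echo 1 > /tmp/zookeeper/myid    # on 192.168.0.171 (server.1)
echo 2 > /tmp/zookeeper/myid    # on 192.168.0.181 (server.2)
echo 3 > /tmp/zookeeper/myid    # on 192.168.0.85  (server.3)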
8. Repeat the same installation and configuration on all three machines (only the myid value differs).
9. Start ZooKeeper and check the cluster status
# On 192.168.0.85
root@192.168.0.85:/root# zkServer.sh start    # start the service
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@192.168.0.85:/root# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader    # this node's role
root@192.168.0.85:/root# zkServer.sh    # show usage
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Usage: /usr/local/zookeeper-3.4.10/bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
# On 192.168.0.171
[root@192.168.0.171 /usr/local/zookeeper-3.4.10/conf]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@192.168.0.171 /usr/local/zookeeper-3.4.10/conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower    # this node's role
# On 192.168.0.181
root@192.168.0.181:/root# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@192.168.0.181:/root# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower    # this node's role
10. Start ZooKeeper at boot
touch /etc/init.d/zookeeper    # create the init script
chmod +x /etc/init.d/zookeeper    # make it executable
# Script contents:
#!/bin/bash
# chkconfig: 2345 20 90
# description: zookeeper
# processname: zookeeper
case $1 in
start) /usr/local/zookeeper-3.4.10/bin/zkServer.sh start;;
stop) /usr/local/zookeeper-3.4.10/bin/zkServer.sh stop;;
status) /usr/local/zookeeper-3.4.10/bin/zkServer.sh status;;
restart) /usr/local/zookeeper-3.4.10/bin/zkServer.sh restart;;
*) echo "require start|stop|status|restart";;
esac
chkconfig --add zookeeper    # register the service
chkconfig --level 35 zookeeper on
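To verify the registration (a quick check, not in the original article):
chkconfig --list zookeeper    # levels 3 and 5 should show "on"
service zookeeper status      # drives zkServer.sh status through the init script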
11. Using the ZooKeeper client
root@agent2:/usr/local/zookeeper-3.4.10/bin# zkCli.sh -timeout 5000 -server 192.168.0.85:2181
Connecting to 192.168.0.85:2181
2017-06-21 10:01:12,672 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-06-21 10:01:12,685 [myid:] - INFO [main:Environment@100] - Client environment:host.name=agent2
2017-06-21 10:01:12,685 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_79
2017-06-21 10:01:12,694 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-06-21 10:01:12,697 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/local/jdk1.7.0_79/jre
2017-06-21 10:01:12,697 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper-3.4.10/bin/../build/classes:/usr/local/zookeeper-3.4.10/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/usr/local/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/usr/local/zookeeper-3.4.10/bin/../conf:
2017-06-21 10:01:12,700 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-06-21 10:01:12,700 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-06-21 10:01:12,700 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2017-06-21 10:01:12,702 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2017-06-21 10:01:12,702 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2017-06-21 10:01:12,702 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2017-06-21 10:01:12,703 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2017-06-21 10:01:12,704 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2017-06-21 10:01:12,704 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/usr/local/zookeeper-3.4.10/bin
2017-06-21 10:01:12,713 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=192.168.0.85:2181 sessionTimeout=5000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher
Welcome to ZooKeeper!
2017-06-21 10:01:12,877 [myid:] - INFO [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 192.168.0.85/192.168.0.85:2181. Will not attempt to authenticate using SASL (unknown error)
2017-06-21 10:01:12,928 [myid:] - INFO [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@876] - Socket connection established to 192.168.0.85/192.168.0.85:2181, initiating session
JLine support is enabled
2017-06-21 10:01:13,013 [myid:] - INFO [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 192.168.0.85/192.168.0.85:2181, sessionid = 0x35cc85763500000, negotiated timeout = 5000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.0.85:2181(CONNECTED) 0] ls
[zk: 192.168.0.85:2181(CONNECTED) 1] ls /
[activemq, zookeeper]
[zk: 192.168.0.85:2181(CONNECTED) 2]
Typing ls / shows the znodes zookeeper and activemq.
activemq is not there by default; it appears because the ActiveMQ cluster described below has already been set up.
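You can also drill into the election path the brokers use, configured as zkPath=/activemq/leveldb-stores below (a sketch; the child entries are ephemeral nodes created at runtime, one per live broker):
[zk: 192.168.0.85:2181(CONNECTED) 2] ls /activemq
[leveldb-stores]
[zk: 192.168.0.85:2181(CONNECTED) 3] ls /activemq/leveldb-stores
# one entry per live broker; ZooKeeper's election decides which one is master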
Test
Stop the leader and watch the logs: the remaining nodes elect a new leader.
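A minimal failover drill, assuming 192.168.0.85 is the current leader as in the status output above:
root@192.168.0.85:/root# zkServer.sh stop    # stop the current leader
# on each surviving node: exactly one should now report "Mode: leader"
zkServer.sh status
# restart the stopped node; it rejoins the ensemble as a follower
zkServer.sh start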
12. Deploy ActiveMQ
Host | Replication port | Messaging port | Console port | Install path (under /usr/local) |
---|---|---|---|---|
192.168.0.85 | 61619 | 61616 | 8161 | apache-activemq-5.14.5 |
192.168.0.171 | 61619 | 61616 | 8161 | apache-activemq-5.14.5 |
192.168.0.181 | 61619 | 61616 | 8161 | apache-activemq-5.14.5 |
13. Install ActiveMQ
Download
wget https://archive.apache.org/dist/activemq/5.14.5/apache-activemq-5.14.5-bin.tar.gz    # closer.cgi returns an HTML mirror page, so fetch the archive directly
Unpack
tar xvf apache-activemq-5.14.5-bin.tar.gz -C /usr/local/
Configure start at boot
root@node:/usr/local/apache-activemq-5.14.5/bin# cp activemq /etc/init.d/activemq
Then add these two header lines near the top of /etc/init.d/activemq so that chkconfig can manage it:
# chkconfig: 345 63 37
# description: Auto start ActiveMQ
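The article stops at the header lines; to actually register the service, something like the following should also be needed (an assumption, mirroring the zookeeper service in step 10):
chkconfig --add activemq
chkconfig --level 345 activemq on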
14. Start ActiveMQ
root@node:/usr/local/apache-activemq-5.14.5/bin# ./activemq start
Check the listening ports (the replication port 61619 only appears after the cluster configuration below is in place and the node is elected master):
root@node:/usr/local/apache-activemq-5.14.5/bin# netstat -antlp |grep "8161\|61616\|616*"
tcp 0 64 192.168.0.85:22 192.168.0.61:52967 ESTABLISHED 6702/sshd
tcp 0 0 :::61613 :::* LISTEN 7481/java
tcp 0 0 :::61614 :::* LISTEN 7481/java
tcp 0 0 :::61616 :::* LISTEN 7481/java
tcp 0 0 :::8161 :::* LISTEN 7481/java
15. ActiveMQ cluster configuration
root@node:/usr/local/apache-activemq-5.14.5/conf# cat activemq.xml
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<!-- In conf/activemq.xml under each ActiveMQ install directory, set the broker's brokerName; it must be identical on all three nodes, e.g. brokerName="activemq-cluster" -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<!-- comment out the default kahaDB adapter -->
<!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
<!-- add the replicated LevelDB store instead; with replicas="3", a quorum of at least 2 nodes must be online before the broker accepts clients -->
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="192.168.0.85:2181,192.168.0.171:2181,192.168.0.181:2181"
hostname="192.168.0.85"
zkPath="/activemq/leveldb-stores"
/>
<!-- hostname: each of the three machines fills in its own IP here -->
</persistenceAdapter>
<!-- enable simple authentication -->
<plugins>
<simpleAuthenticationPlugin>
<users>
<authenticationUser username="${activemq.username}" password="${activemq.password}" groups="admins,everyone"/>
<authenticationUser username="mcollective" password="musingtec" groups="mcollective,admins,everyone"/>
</users>
</simpleAuthenticationPlugin>
</plugins>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
<!-- END SNIPPET: example -->
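Once this file is on all three nodes (each with its own hostname value) and the brokers are restarted, the election can be watched in the broker log. A rough check, assuming the default data/activemq.log location; the exact wording of the election messages varies by version:
./activemq restart
tail -f /usr/local/apache-activemq-5.14.5/data/activemq.log | grep -iE "master|slave"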
Parameter reference
Replicated LevelDB Store Properties
All broker nodes that are part of the same replication set should have matching brokerName XML attributes. The following configuration properties should be the same on all broker nodes in the replication set:
property name | default value | Comments |
---|---|---|
replicas | 3 | The number of nodes that will exist in the cluster. At least (replicas/2)+1 nodes must be online to avoid service outage. |
securityToken | | A security token which must match on all replication nodes for them to accept each other's replication requests. |
zkAddress | 127.0.0.1:2181 | A comma separated list of ZooKeeper servers. |
zkPassword | | The password to use when connecting to the ZooKeeper server. |
zkPath | /default | The path to the ZooKeeper directory where Master/Slave election information will be exchanged. |
zkSessionTimeout | 2s | How quickly a node failure will be detected by ZooKeeper. (prior to 5.11 - this had a typo zkSessionTmeout) |
sync | quorum_mem | Controls where updates reside before being considered complete. A comma separated list of the options local_mem, local_disk, remote_mem, remote_disk, quorum_mem, quorum_disk. If you combine two settings for a target, the stronger guarantee is used. For example, configuring local_mem, local_disk is the same as just using local_disk; quorum_mem is the same as local_mem, remote_mem; and quorum_disk is the same as local_disk, remote_disk. |
Different replication sets can share the same zkPath as long as they have different brokerName values.
The following configuration properties can be unique per node:
property name | default value | Comments |
---|---|---|
bind | tcp://0.0.0.0:61619 | When this node becomes a master, it will bind the configured address and port to service the replication protocol. Using dynamic ports is also supported: just configure with tcp://0.0.0.0:0. |
hostname | | The host name used to advertise the replication service when this node becomes the master. If not set it will be automatically determined. |
weight | 1 | The replication node that has the latest update with the highest weight will become the master. Used to give preference to some nodes towards becoming master. |
The store also supports the same configuration properties as a standard LevelDB store, but it does not support the pluggable storage lockers:
Standard LevelDB Store Properties
property name | default value | Comments |
---|---|---|
directory | LevelDB | The directory which the store will use to hold its data files. The store will create the directory if it does not already exist. |
readThreads | 10 | The number of concurrent IO read threads allowed. |
logSize | 104857600 (100 MB) | The max size (in bytes) of each data log file before log file rotation occurs. |
verifyChecksums | false | Set to true to force checksum verification of all data that is read from the file system. |
paranoidChecks | false | Make the store error out as soon as possible if it detects internal corruption. |
indexFactory | org.fusesource.leveldbjni.JniDBFactory, org.iq80.leveldb.impl.Iq80DBFactory | The factory classes to use when creating the LevelDB indexes. |
indexMaxOpenFiles | 1000 | Number of open files that can be used by the index. |
indexBlockRestartInterval | 16 | Number of keys between restart points for delta encoding of keys. |
indexWriteBufferSize | 6291456 (6 MB) | Amount of index data to build up in memory before converting to a sorted on-disk file. |
indexBlockSize | 4096 (4 K) | The size of index data packed per block. |
indexCacheSize | 268435456 (256 MB) | The maximum amount of off-heap memory to use to cache index blocks. |
indexCompression | snappy | The type of compression to apply to the index blocks. Can be snappy or none. |
logCompression | none | The type of compression to apply to the log records. Can be snappy or none. |
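As an illustration of the tables above, a node wanting stronger durability and larger journal files could extend the replicatedLevelDB element like this (a sketch only; the extra sync and logSize attributes come from the property tables, not from the original deployment):
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="192.168.0.85:2181,192.168.0.171:2181,192.168.0.181:2181"
hostname="192.168.0.85"
zkPath="/activemq/leveldb-stores"
sync="quorum_disk"
logSize="209715200"
/>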
16. The cluster is ready
After starting, check the logs on all three machines.
The logs show that 192.168.0.171 is the master and the other nodes are slaves.
Test
Use the ZooInspector.zip debugging tool to see which machine the active ActiveMQ is currently on.
Check the message-queue web console:
The MQ service can only be administered on the master; the other machines do not serve requests, so only one of the three consoles below is reachable at any moment. If the active ActiveMQ node goes down, another broker takes over immediately and sessions are not interrupted.
http://192.168.0.85:8161/admin/queues.jsp
http://192.168.0.171:8161/admin/queues.jsp
http://192.168.0.181:8161/admin/queues.jsp
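Clients should not target a single broker URL; with the failover transport they reconnect automatically to whichever broker is currently master. A quick smoke test with the producer/consumer tools bundled with ActiveMQ 5.x (a sketch; the flags and the mcollective credentials from the simpleAuthenticationPlugin above are assumptions carried over from this setup):
./activemq producer --brokerUrl "failover:(tcp://192.168.0.85:61616,tcp://192.168.0.171:61616,tcp://192.168.0.181:61616)" --user mcollective --password musingtec --destination queue://test --messageCount 100
./activemq consumer --brokerUrl "failover:(tcp://192.168.0.85:61616,tcp://192.168.0.171:61616,tcp://192.168.0.181:61616)" --user mcollective --password musingtec --destination queue://test --messageCount 100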
To be completed...
Copyright notice: this is an original article by CSDN blogger "weixin_34341229", released under the CC 4.0 BY-SA license; reposts must include the original source link and this notice.
Original article: https://blog.csdn.net/weixin_34341229/article/details/91493911