An ActiveMQ ZooKeeper Cluster Based on the LevelDB Store

1. Implementation with ZooKeeper

This is an effective way to make ActiveMQ highly available. The principle: every ActiveMQ broker registers with a ZooKeeper ensemble, but only one of them, the master, serves clients; the rest stay on standby as slaves. If the master fails, ZooKeeper's leader-election mechanism promotes one of the slaves to master, and service continues.

Official documentation: http://activemq.apache.org/replicated-leveldb-store.html

Reference: https://www.ibm.com/developerworks/cn/data/library/bd-zookeeper/ and others

2. Cluster roles

A ZooKeeper ensemble has two main roles: leader and follower.

Leader: initiates and resolves votes and updates the system state.

Learner: a category that covers followers and observers.

A follower accepts client requests, returns results to the client, and takes part in voting during leader election.

An observer accepts client connections and forwards write requests to the leader, but it does not vote; it only mirrors the leader's state. Observers exist to scale the ensemble out and speed up reads. A configuration sketch follows.
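A minimal zoo.cfg sketch for adding an observer (the fourth node and its IP are hypothetical; peerType and the :observer suffix are the standard ZooKeeper mechanism):

# zoo.cfg on the observer machine only
peerType=observer

# server list in zoo.cfg on every machine; the observer entry carries the :observer suffix
server.4=192.168.0.200:2888:3888:observer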

3. How many ZooKeeper nodes?

How many nodes should a ZooKeeper ensemble run?

You can run a single node, but that is not a cluster. For a real ensemble, deploy 3, 5, or 7 nodes; this walkthrough uses 3.

The more nodes you deploy, the more reliable the service, and an odd count is strongly preferred. An even count is not forbidden, but the ensemble only stays up while a majority of nodes are alive, so even counts buy nothing extra: 3 nodes tolerate 1 failure, 5 tolerate 2, while 4 nodes still tolerate only 1.

Give each ZooKeeper node roughly 1 GB of memory and, ideally, its own disk; a dedicated disk is what keeps ZooKeeper fast. If the cluster is heavily loaded, do not run ZooKeeper on the same machines as I/O-hungry services such as HBase RegionServers, DataNodes, or TaskTrackers.

4. Environment preparation

ZooKeeper environment:

Host IP         Client port   Quorum/election ports   Install path (under /usr/local)
192.168.0.85    2181          2888:3888               zookeeper-3.4.10
192.168.0.171   2181          2888:3888               zookeeper-3.4.10
192.168.0.181   2181          2888:3888               zookeeper-3.4.10

(The quorum/election ports match the server.N entries in zoo.cfg below.)

5. Install and configure ZooKeeper

[root@host:/root]#tar xvf zookeeper-3.4.10.tar.gz -C /usr/local/   # unpack
[root@host:/usr/local/zookeeper-3.4.10/conf]#cp zoo_sample.cfg zoo.cfg   # create zoo.cfg from the shipped sample
# export the ZooKeeper environment variables: append the following to /etc/profile
export ZK_HOME=/usr/local/zookeeper-3.4.10
export PATH=$PATH:$ZK_HOME/bin
# reload the profile so the variables take effect
[root@host:/root]#source /etc/profile
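A quick check that the variables are visible in the current shell:

[root@host:/root]#echo $ZK_HOME       # should print /usr/local/zookeeper-3.4.10
[root@host:/root]#which zkServer.sh   # should resolve inside $ZK_HOME/bin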
           

6. Edit the main ZooKeeper configuration file zoo.cfg

[root@host:/usr/local/zookeeper-3.4.10/conf]#cat zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataLogDir=/var/log
server.1=192.168.0.171:2888:3888
server.2=192.168.0.181:2888:3888
server.3=192.168.0.85:2888:3888

Parameter notes #######################################
#  dataDir: data (snapshot) directory
#  dataLogDir: transaction log directory
#  clientPort: the port clients connect to
#  tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers or between a client and a server; one heartbeat is sent every tickTime.
#  initLimit: the number of tick intervals the leader waits for a follower to connect and finish its initial sync. If nothing is heard after 5 ticks (5*2000 = 10 seconds), the connection attempt is treated as failed.
#  syncLimit: the maximum number of tick intervals between a request and its acknowledgement when the leader and a follower exchange messages; here 2*2000 = 4 seconds.
#  server.A=B:C:D: A is a number identifying the server (its ID); B is the server's IP address; C is the port used to exchange data with the ensemble leader; D is the port used to elect a new leader if the current one fails. In a pseudo-cluster, where every B is the same host, each instance must be given distinct C and D ports.
           

Every ZooKeeper instance needs its own data and transaction-log directories, so the directory that dataDir points to must be created by hand before the first start; a sketch for this walkthrough's paths follows. Note that /tmp is acceptable for an experiment but may be wiped on reboot, so production deployments should use a persistent location.
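Run on every node:

mkdir -p /tmp/zookeeper   # dataDir from zoo.cfg above
mkdir -p /var/log         # dataLogDir (usually already exists)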

7. Create the server ID file (myid)

Besides editing zoo.cfg, cluster mode needs a myid file placed in the dataDir directory.

The file contains a single number: the value A from the server.A=B:C:D entries in zoo.cfg. Create it inside the dataDir path configured in zoo.cfg.

On 192.168.0.171, create myid with the value 1 to match the server.1 entry:

echo "1" > /tmp/zookeeper/myid
           

8. Repeat the same ZooKeeper installation and configuration on the other two machines; the only per-node difference is the myid value, as shown below.
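Following the server.N mapping in zoo.cfg:

# on 192.168.0.181 (server.2)
echo "2" > /tmp/zookeeper/myid
# on 192.168.0.85 (server.3)
echo "3" > /tmp/zookeeper/myid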

9. Start ZooKeeper and check the cluster status

# on 192.168.0.85
root@192.168.0.85:/root#zkServer.sh start   # start the server
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@192.168.0.85:/root#zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader                           # role
root@192.168.0.85:/root#zkServer.sh   # show usage help
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Usage: /usr/local/zookeeper-3.4.10/bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
           
# on 192.168.0.171
[root@192.168.0.171:/usr/local/zookeeper-3.4.10/conf]#zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@192.168.0.171:/usr/local/zookeeper-3.4.10/conf]#zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower   # role
           
# on 192.168.0.181
root@192.168.0.181:/root#zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@192.168.0.181:/root#zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower   # role
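The four-letter admin commands give the same information over the wire (assuming nc/netcat is installed; stat and srvr are built into ZooKeeper 3.4):

echo srvr | nc 192.168.0.85 2181    # prints version, latency stats and Mode: leader
echo stat | nc 192.168.0.171 2181   # same, plus the list of connected clients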
           

10. Configure ZooKeeper to start at boot

touch /etc/init.d/zookeeper   # create the init script
chmod +x /etc/init.d/zookeeper   # make it executable

# script contents:
#!/bin/bash
# chkconfig: 2345 20 90
# description: zookeeper
# processname: zookeeper
case $1 in
          start) /usr/local/zookeeper-3.4.10/bin/zkServer.sh start;;
          stop) /usr/local/zookeeper-3.4.10/bin/zkServer.sh stop;;
          status) /usr/local/zookeeper-3.4.10/bin/zkServer.sh status;;
          restart) /usr/local/zookeeper-3.4.10/bin/zkServer.sh restart;;
          *)  echo "require start|stop|status|restart";;
esac


chkconfig --add zookeeper   # register the service
chkconfig --level 35 zookeeper on   # enable it for runlevels 3 and 5
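Once registered, the script can also be driven through service(8):

service zookeeper status
service zookeeper restart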
           

11. Using the ZooKeeper client

root@agent2:/usr/local/zookeeper-3.4.10/bin#zkCli.sh -timeout 5000 -server 192.168.0.85:2181
Connecting to 192.168.0.85:2181
2017-06-21 10:01:12,672 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-06-21 10:01:12,685 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=agent2
2017-06-21 10:01:12,685 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.7.0_79
2017-06-21 10:01:12,694 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-06-21 10:01:12,697 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/jdk1.7.0_79/jre
2017-06-21 10:01:12,697 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper-3.4.10/bin/../build/classes:/usr/local/zookeeper-3.4.10/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/usr/local/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/usr/local/zookeeper-3.4.10/bin/../conf:
2017-06-21 10:01:12,700 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-06-21 10:01:12,700 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-06-21 10:01:12,700 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2017-06-21 10:01:12,702 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2017-06-21 10:01:12,702 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2017-06-21 10:01:12,702 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2017-06-21 10:01:12,703 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2017-06-21 10:01:12,704 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2017-06-21 10:01:12,704 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/usr/local/zookeeper-3.4.10/bin
2017-06-21 10:01:12,713 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=192.168.0.85:2181 sessionTimeout=5000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@...
Welcome to ZooKeeper!
2017-06-21 10:01:12,877 [myid:] - INFO  [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 192.168.0.85/192.168.0.85:2181. Will not attempt to authenticate using SASL (unknown error)
2017-06-21 10:01:12,928 [myid:] - INFO  [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@876] - Socket connection established to 192.168.0.85/192.168.0.85:2181, initiating session
JLine support is enabled
2017-06-21 10:01:13,013 [myid:] - INFO  [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 192.168.0.85/192.168.0.85:2181, sessionid = 0x35cc85763500000, negotiated timeout = 5000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.0.85:2181(CONNECTED) 0] ls
[zk: 192.168.0.85:2181(CONNECTED) 1] ls /  
[activemq, zookeeper]
[zk: 192.168.0.85:2181(CONNECTED) 2] 
           

Type ls / at the prompt.

The listing shows zookeeper and activemq. A fresh ensemble would only contain zookeeper; activemq is present because the ActiveMQ cluster described below had already been set up when this session was captured.
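With the cluster running, the election state can be inspected under the configured zkPath (the ephemeral node names below are illustrative; the actual sequence numbers will differ):

[zk: 192.168.0.85:2181(CONNECTED) 2] ls /activemq
[leveldb-stores]
[zk: 192.168.0.85:2181(CONNECTED) 3] ls /activemq/leveldb-stores
[00000000000, 00000000001, 00000000002]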

Test

Stop the current leader and watch the logs: the surviving nodes elect a new leader, as sketched below.
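Using the nodes from this walkthrough (0.85 was the leader above; the status output is illustrative):

root@192.168.0.85:/root#zkServer.sh stop      # take down the current leader
root@192.168.0.171:/root#zkServer.sh status   # re-check a survivor
Mode: leader                                  # one of the followers has been promoted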

12. Deploy ActiveMQ

Host            Replication port   Messaging port   Console port   Install path (under /usr/local)
192.168.0.85    61619              61616            8161           apache-activemq-5.14.5
192.168.0.171   61619              61616            8161           apache-activemq-5.14.5
192.168.0.181   61619              61616            8161           apache-activemq-5.14.5

13. Install ActiveMQ

# download
wget http://www.apache.org/dyn/closer.cgi?filename=/activemq/5.14.5/apache-activemq-5.14.5-bin.tar.gz
# unpack
tar xvf apache-activemq-5.14.5-bin.tar.gz -C /usr/local/
# enable start at boot
root@192.168.0.85:/usr/local/apache-activemq-5.14.5/bin#cp activemq /etc/init.d/activemq
# make sure the init header of /etc/init.d/activemq contains:
# chkconfig: 345 63 37  
# description: Auto start ActiveMQ 
           

14. Start ActiveMQ

root@192.168.0.85:/usr/local/apache-activemq-5.14.5/bin#./activemq start
# check the listening ports
root@192.168.0.85:/usr/local/apache-activemq-5.14.5/bin#netstat -antlp |grep "8161\|61616\|616*"
tcp        0     64 192.168.0.85:22             192.168.0.61:52967          ESTABLISHED 6702/sshd           
tcp        0      0 :::61613                    :::*                        LISTEN      7481/java           
tcp        0      0 :::61614                    :::*                        LISTEN      7481/java           
tcp        0      0 :::61616                    :::*                        LISTEN      7481/java           
tcp        0      0 :::8161                     :::*                        LISTEN      7481/java           
           

15. ActiveMQ cluster configuration

root@192.168.0.85:/usr/local/apache-activemq-5.14.5/conf#cat activemq.xml 
<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

   <!-- Allows accessing the server log -->
    <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
          lazy-init="false" scope="singleton"
          init-method="start" destroy-method="stop">
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <!--
        Edit conf/activemq.xml under each node's install path and set the broker's
        brokerName; the name must be identical on all three nodes:
        brokerName="activemq-cluster"
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <persistenceAdapter>
            <!-- the default kahaDB store is commented out... -->
            <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
            <!-- ...and replaced with the replicated LevelDB store below;
                 hostname must be set to each node's own IP -->
		    <replicatedLevelDB
		            directory="${activemq.data}/leveldb"
		            replicas="3"
		            bind="tcp://0.0.0.0:61619"
		            zkAddress="192.168.0.85:2181,192.168.0.171:2181,192.168.0.181:2181"
		            hostname="192.168.0.85"
		            zkPath="/activemq/leveldb-stores"
		     />
        </persistenceAdapter>
         <!-- enable simple authentication -->
         <plugins>
                <simpleAuthenticationPlugin>
                <users>
                <authenticationUser username="${activemq.username}" password="${activemq.password}" groups="admins,everyone"/>
                <authenticationUser username="mcollective" password="musingtec" groups="mcollective,admins,everyone"/>
                </users>
                </simpleAuthenticationPlugin>
          </plugins>
          <!--
            The systemUsage controls the maximum amount of space the broker will
            use before disabling caching and/or slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
          -->
          <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos
        The web consoles requires by default login, you can disable this in the jetty.xml file

        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->
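Since hostname is the only per-node difference in this file, one hedged way to distribute it from 0.85 to the other nodes (paths as above; adapt as needed):

scp activemq.xml 192.168.0.171:/usr/local/apache-activemq-5.14.5/conf/
ssh 192.168.0.171 'sed -i "s/hostname=\"192.168.0.85\"/hostname=\"192.168.0.171\"/" /usr/local/apache-activemq-5.14.5/conf/activemq.xml'
scp activemq.xml 192.168.0.181:/usr/local/apache-activemq-5.14.5/conf/
ssh 192.168.0.181 'sed -i "s/hostname=\"192.168.0.85\"/hostname=\"192.168.0.181\"/" /usr/local/apache-activemq-5.14.5/conf/activemq.xml'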
           

Parameter reference

Replicated LevelDB Store properties

All broker nodes in the same replication set should have matching brokerName XML attributes, and the following properties must be identical on every node in the set:

property name      default value      comments
replicas           3                  The number of nodes in the cluster. At least (replicas/2)+1 nodes must be online to avoid a service outage.
securityToken      (none)             A security token which must match on all replication nodes for them to accept each other's replication requests.
zkAddress          127.0.0.1:2181     A comma-separated list of ZooKeeper servers.
zkPassword         (none)             The password to use when connecting to the ZooKeeper server.
zkPath             /default           The ZooKeeper path where master/slave election information is exchanged.
zkSessionTimeout   2s                 How quickly a node failure is detected by ZooKeeper. (Prior to 5.11 the property name carried a typo: zkSessionTmeout.)
sync               quorum_mem         Controls where updates must reside before being considered complete. A comma-separated list drawn from: local_mem, local_disk, remote_mem, remote_disk, quorum_mem, quorum_disk. When two options for the same target are combined, the stronger guarantee wins: local_mem,local_disk is equivalent to local_disk; quorum_mem is equivalent to local_mem,remote_mem; quorum_disk is equivalent to local_disk,remote_disk.

Different replication sets can share the same zkPath as long as they have different brokerName values.

The following properties may be unique per node:

property name   default value         comments
bind            tcp://0.0.0.0:61619   The address and port this node binds to serve the replication protocol when it becomes master. Dynamic ports are supported: configure tcp://0.0.0.0:0.
hostname        (auto-detected)       The host name used to advertise the replication service when this node becomes master. Determined automatically if not set.
weight          1                     Among the replicas holding the latest update, the one with the highest weight becomes master; use it to steer mastership toward preferred nodes.

The store also supports the configuration properties of a standard LevelDB store, but it does not support pluggable storage lockers:

Standard LevelDB Store properties

property name              default value        comments
directory                  LevelDB              The directory the store uses to hold its data files; created if it does not already exist.
readThreads                10                   The number of concurrent IO read threads allowed.
logSize                    104857600 (100 MB)   The maximum size (in bytes) of each data log file before log rotation occurs.
verifyChecksums            false                Set to true to force checksum verification of all data read from the file system.
paranoidChecks             false                Make the store error out as soon as possible if it detects internal corruption.
indexFactory               org.fusesource.leveldbjni.JniDBFactory, org.iq80.leveldb.impl.Iq80DBFactory   The factory classes to use when creating the LevelDB indexes.
indexMaxOpenFiles          1000                 The number of open files that can be used by the index.
indexBlockRestartInterval  16                   The number of keys between restart points for delta encoding of keys.
indexWriteBufferSize       6291456 (6 MB)       The amount of index data to build up in memory before converting to a sorted on-disk file.
indexBlockSize             4096 (4 KB)          The size of index data packed per block.
indexCacheSize             268435456 (256 MB)   The maximum amount of off-heap memory used to cache index blocks.
indexCompression           snappy               The compression applied to index blocks: snappy or none.
logCompression             none                 The compression applied to log records: snappy or none.

16. Cluster setup complete

After starting all three brokers, check their logs:

[Screenshots: startup logs from the three brokers]

The logs show that 192.168.0.171 is the master and the other two nodes are slaves.

Test

Use the ZooInspector debugging tool (ZooInspector.zip) to see which machine currently holds the ActiveMQ master role:

[Screenshot: ZooInspector showing the ActiveMQ election znodes]

Check the message-queue web console:

[Screenshot: ActiveMQ web console]

Only the master serves MQ traffic and its web console; the slaves do not respond, so of the three URLs below only one is reachable at any moment. If the active broker goes down, another broker takes over almost immediately, and clients connecting through the failover transport reconnect without their sessions being torn down.

http://192.168.0.85:8161/admin/queues.jsp

http://192.168.0.171:8161/admin/queues.jsp

http://192.168.0.181:8161/admin/queues.jsp
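A hedged command-line way to exercise failover, using the producer/consumer helper commands shipped with ActiveMQ 5.x (the queue name test is arbitrary; the credentials are the mcollective user defined in the simpleAuthenticationPlugin above):

./activemq producer --brokerUrl "failover:(tcp://192.168.0.85:61616,tcp://192.168.0.171:61616,tcp://192.168.0.181:61616)" --user mcollective --password musingtec --destination queue://test --messageCount 100
# stop the current master while the consumer below is running; it should keep receiving after the switchover
./activemq consumer --brokerUrl "failover:(tcp://192.168.0.85:61616,tcp://192.168.0.171:61616,tcp://192.168.0.181:61616)" --user mcollective --password musingtec --destination queue://test --messageCount 100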

To be continued...

Copyright notice: this is an original article by CSDN blogger "weixin_34341229", distributed under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.

Original link: https://blog.csdn.net/weixin_34341229/article/details/91493911