Contents
1. Introduction to redis
2. Implementing redis master-slave replication
3. High availability for redis with the sentinel cluster-management tool
4. redis cluster
4.1 Setting up a redis cluster
4.2 Adding a node to a redis cluster
4.3 Removing a node from a redis cluster
4.4 Manual master-slave switchover in a redis cluster
5. Summary
Redis is an open-source, networked key-value database written in ANSI C, operating in memory with optional persistence, and offering APIs for many languages. From March 15, 2010 its development was led by VMware, and since May 2013 it has been sponsored by Pivotal. ---from Baidu Baike
According to the introduction on the redis website, redis is an open-source (BSD-licensed) in-memory data-structure store, usable as a database, cache and message broker. Unlike memcached, whose data model is quite plain, redis supports a rich set of data structures: strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. redis offers replication, Lua scripting, LRU-based eviction and transactions, supports several levels of on-disk persistence, and provides high availability and automatic partitioning.
redis client libraries exist for many languages, as shown in the figure below:
Image from: http://redis.io/clients
The current stable version of redis is 3.0.4, which fixes a number of bugs from the previous release and adds some new features; for the details see https://raw.githubusercontent.com/antirez/redis/3.0/00-RELEASENOTES.
I will not demonstrate a standalone redis node separately, since it comes up anyway in the demonstrations that follow; this section focuses on setting up redis master-slave replication. Replication is very easy to configure, and it lets slaves hold an exact copy of the master's data, which is the defining property of any master-slave architecture; without it there would be nothing to call "master-slave". The main characteristics of redis replication are:
a. Replication is asynchronous; since redis 2.8, a slave periodically acknowledges to the master how much of the replication stream it has processed;
b. One master can have multiple slaves;
c. A slave can in turn act as the master of other slaves, much like mysql;
d. Replication is non-blocking on the master side: while slaves are synchronizing, the master can continue to serve queries;
e. Replication is also largely non-blocking on the slave side: while a slave is resynchronizing, redis can either answer queries with the old dataset or return an error to clients, selectable in the redis configuration file;
f. To balance performance against durability, the master can be configured without persistence (saving the cost of writing to disk) while persistence is enabled only on the slaves; all of this is set in the configuration files.
In summary, redis replication is quite similar to mysql's. Regarding the last point, in production you should not turn off persistence on the master: if the master fails and restarts automatically, its in-memory data is gone; without persistence the master comes back empty, and the slaves will then wipe their own copies to match it.
As for the internals of the replication mechanism, as operators we do not need to dig any deeper here.
Making one node the slave of a master is very simple: add "slaveof MASTER_IP PORT" to the slave's configuration file. By default a slave is read-only, controlled by "slave-read-only yes".
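For example, the only replication-related lines needed in the slave's redis.conf are these (a minimal sketch, using the master address from the lab environment below):
slaveof 192.168.207.128 6379
slave-read-only yes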
Now consider another question: can just any node become a slave of the master? Of course not; redis has an authentication mechanism for this. It is disabled by default; to enable it, set "requirepass PASSWORD" in the master's redis.conf. On the node that is to become the slave, connect with redis-cli and run "config set masterauth PASSWORD" to authenticate against the master. Such a change only lasts until restart; to make it permanent, edit the configuration file.
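Put together, the authentication pieces look like this (a hedged sketch; "s3cret" is a placeholder password):
# in the master's redis.conf
requirepass s3cret
# on the slave, at runtime:
redis-cli -p 6379 config set masterauth s3cret
# or permanently, in the slave's redis.conf:
masterauth s3cret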
Back to the task at hand: let's build the replication environment. First, the lab setup:
OS | IP | hostname | role
Debian 8 x64 | 192.168.207.128 | master | master
Debian 8 x64 | 192.168.207.130 | slave01 | slave
Master node setup:
root@master:~/tools# pwd
/root/tools
root@master:~/tools# cat /etc/issue
Debian GNU/Linux 8 \n \l
root@master:~/tools# uname -r
3.16.0-4-amd64
root@master:~/tools# hostname
master
root@master:~/tools# pwd
/root/tools
root@master:~/tools# ls
redis-3.0.4.tar.gz
root@master:~/tools# tar xf redis-3.0.4.tar.gz -C /usr/local/
root@master:~/tools# cd /usr/local/redis-3.0.4/
root@master:/usr/local/redis-3.0.4# ls
00-RELEASENOTES  CONTRIBUTING  deps     Makefile   README      runtest          runtest-sentinel  src    utils
BUGS             COPYING       INSTALL  MANIFESTO  redis.conf  runtest-cluster  sentinel.conf     tests
#see the README file, which describes the installation procedure in detail
root@master:/usr/local/redis-3.0.4# ls src/redis* #list the files under src starting with redis; the server binary and client tool do not exist yet
redisassert.h redis.c redis-check-dump.c redis.h
redis-benchmark.c redis-check-aof.c redis-cli.c redis-trib.rb
root@master:/usr/local/redis-3.0.4# make #just run make
which ends with:
Hint: It's a good idea to run 'make test' ;)
make[1]: Leaving directory '/usr/local/redis-3.0.4/src'
This means the build succeeded; the hint suggests running make test to verify it:
root@master:/usr/local/redis-3.0.4# make test
cd src && make test
make[1]: Entering directory '/usr/local/redis-3.0.4/src'
You need tcl 8.5 or newer in order to run the Redis test
Makefile:211: recipe for target 'test' failed
make[1]: *** [test] Error 1
make[1]: Leaving directory '/usr/local/redis-3.0.4/src'
Makefile:6: recipe for target 'test' failed
make: *** [test] Error 2
#make test failed because the tcl dependency is missing; if you really want to run it, install tcl first and try again. I skip make test here.
root@master:/usr/local/redis-3.0.4# ls src/redis* #compared with before make, several new files have appeared
redisassert.h redis.c redis-check-dump redis-cli.c redis-sentinel
redis-benchmark redis-check-aof redis-check-dump.c redis-cli.o redis-server
redis-benchmark.c redis-check-aof.c redis-check-dump.o redis.h redis-trib.rb
redis-benchmark.o redis-check-aof.o redis-cli redis.o
redis is now compiled and installed. Start it to check that it runs correctly:
root@master:/usr/local/redis-3.0.4# src/redis-server redis.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.0.4 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 5078
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
5078:M 30 Sep 11:59:04.708 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
5078:M 30 Sep 11:59:04.708 # Server started, Redis version 3.0.4
5078:M 30 Sep 11:59:04.708 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
5078:M 30 Sep 11:59:04.709 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
5078:M 30 Sep 11:59:04.709 * The server is now ready to accept connections on port 6379
redis is running now, but by default it stays in the foreground. Note the three "WARNING" messages in the startup output above; do not ignore them, or in production you will be digging a hole for yourself or your colleagues.
The first WARNING, "The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128", says that somaxconn (128) is too small. As the path /proc/sys/net/core/somaxconn suggests, this kernel parameter caps the length of a listening socket's accept queue. The "tcp-backlog 511" setting in redis.conf is the backlog redis requests to cope with high-concurrency bursts, but the effective value is limited by the kernel's somaxconn and tcp_max_syn_backlog, so both parameters should be raised. The fix:
root@master:~# vim /etc/sysctl.conf #append the following at the end
#maximum accept-queue length, to absorb bursts of concurrent connection requests
net.core.somaxconn = 65535
#half-open (SYN) queue length, bounded in practice by available memory
net.ipv4.tcp_max_syn_backlog = 20480
root@master:~# sysctl -p #apply the changes
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 20480
After these changes, restart redis and the first WARNING is gone. Next, the second one: "WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect". By default overcommit_memory is 0, and on a host with little memory redis background saves may fail; the message itself tells you the fix. Append "vm.overcommit_memory = 1" to /etc/sysctl.conf and restart redis:
root@master:~# vim /etc/sysctl.conf
#maximum accept-queue length, to absorb bursts of concurrent connection requests
net.core.somaxconn = 65535
#half-open (SYN) queue length, bounded in practice by available memory
net.ipv4.tcp_max_syn_backlog = 20480
#memory allocation (overcommit) policy
vm.overcommit_memory = 1
Note: overcommit_memory takes one of three values: 0, 1 or 2.
0: heuristic overcommit; the kernel estimates whether enough memory is available before granting an allocation, and if not, the allocation fails and an error is returned to the process.
1: always overcommit; the kernel grants every allocation regardless of the current memory state. This is the mode redis wants, because a background save fork()s a child whose address space mirrors the parent's, even though copy-on-write means little extra memory is actually touched.
2: never overcommit; the kernel refuses allocations that would push total commitments beyond swap plus overcommit_ratio percent of physical RAM. On Debian 8 the default value of /proc/sys/vm/overcommit_ratio is 50, so the ceiling works out to "swap + physical memory * 50%".
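As a side note, under mode 2 you can inspect the resulting ceiling directly (a quick check, output omitted here):
root@master:~# grep -i commit /proc/meminfo #CommitLimit = swap + RAM * overcommit_ratio/100; Committed_AS = memory currently committed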
root@master:~# sysctl -p #apply the changes
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 20480
vm.overcommit_memory = 1
Kill the redis process, restart it, and the overcommit_memory warning is gone too. Finally, the "Transparent Huge Pages (THP)" warning. This concerns transparent huge pages. Briefly: the smallest unit of memory the kernel manages is the page, normally 4kb, so 1M of memory is 256 pages, and the CPU's built-in memory management unit tracks them through page tables. Huge Pages are pages larger than 4kb, typically 2M up to 1G, introduced to manage very large memories (terabytes, as I understand it). THP is an abstraction layer that manages Huge Pages automatically, and according to several sources it can cause memory-lock latency that hurts performance, so the usual advice is to disable it.
"/sys/kernel/mm/transparent_hugepage/enabled" takes three values, as follows:
root@master:/usr/local/redis-3.0.4# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
####
# always: use THP aggressively; scan memory and collapse any 512 contiguous 4k pages into one 2M page
# never: disable THP entirely
# madvise: use THP only for memory regions that explicitly request it via madvise()
That's enough on THP. Now apply the fix from the warning message, restart redis and check that the warning is gone:
root@master:/usr/local/redis-3.0.4# echo never > /sys/kernel/mm/transparent_hugepage/enabled
root@master:/usr/local/redis-3.0.4# vim /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" #加入开机启动,写在exit 0之前
exit 0
root@master:/usr/local/redis-3.0.4# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
Stop redis once more and start it again:
root@master:/usr/local/redis-3.0.4# src/redis-server redis.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.0.4 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 5666
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
5666:M 30 Sep 16:53:06.570 # Server started, Redis version 3.0.4
5666:M 30 Sep 16:53:06.570 * DB loaded from disk: 0.000 seconds
5666:M 30 Sep 16:53:06.570 * The server is now ready to accept connections on port 6379
#redis now starts without any warnings
So far redis still runs in the foreground. Open another window, connect with the redis client tool and run a quick test:
root@master:/usr/local/redis-3.0.4/src# pwd
/usr/local/redis-3.0.4/src
root@master:/usr/local/redis-3.0.4/src# ./redis-cli -p 6379
127.0.0.1:6379> get foo
(nil)
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> get foo
"bar"
At this point a standalone redis node works properly. Now let's deploy the real master-slave setup: compile and install redis on both nodes exactly as above; only the configuration files differ. To make debugging easier I set "daemonize yes" (run redis in the background) and "logfile "/var/log/redis.log"" (write a log file) in the configuration.
Start the master first:
root@master:/usr/local/redis-3.0.4# src/redis-server redis.conf
root@master:/usr/local/redis-3.0.4# ss -tnl | grep 6379
LISTEN 0 511 *:6379 *:*
LISTEN 0 511 :::6379 :::*
Then configure the slave's redis.conf, making sure "slaveof 192.168.207.128 6379" is enabled, and start the slave. Watching the master's log file, you will see output like this:
root@master:/usr/local/redis-3.0.4# tailf /var/log/redis.log
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
869:M 01 Oct 10:47:12.566 # Server started, Redis version 3.0.4
869:M 01 Oct 10:47:12.567 * DB loaded from disk: 0.000 seconds
869:M 01 Oct 10:47:12.567 * The server is now ready to accept connections on port 6379
869:M 01 Oct 10:47:44.213 * Slave 192.168.207.130:6379 asks for synchronization #a slave has connected and asks to sync
869:M 01 Oct 10:47:44.214 * Full resync requested by slave 192.168.207.130:6379
869:M 01 Oct 10:47:44.215 * Starting BGSAVE for SYNC with target: disk
869:M 01 Oct 10:47:44.215 * Background saving started by pid 880 #this is the slave's first sync, so the master forks a background save to dump its dataset
880:C 01 Oct 10:47:44.229 * DB saved on disk #dataset saved to disk
880:C 01 Oct 10:47:44.230 * RDB: 0 MB of memory used by copy-on-write
869:M 01 Oct 10:47:44.284 * Background saving terminated with success
869:M 01 Oct 10:47:44.284 * Synchronization with slave 192.168.207.130:6379 succeeded #data synchronized
The slave's log contains output along these lines (all self-explanatory, so I won't annotate it line by line):
1172:S 01 Oct 10:47:43.285 # Server started, Redis version 3.0.4
1172:S 01 Oct 10:47:43.295 * DB loaded from disk: 0.010 seconds
1172:S 01 Oct 10:47:43.295 * The server is now ready to accept connections on port 6379
1172:S 01 Oct 10:47:44.284 * Connecting to MASTER 192.168.207.128:6379
1172:S 01 Oct 10:47:44.285 * MASTER <-> SLAVE sync started
1172:S 01 Oct 10:47:44.285 * Non blocking connect for SYNC fired the event.
1172:S 01 Oct 10:47:44.285 * Master replied to PING, replication can continue...
1172:S 01 Oct 10:47:44.286 * Partial resynchronization not possible (no cached master)
1172:S 01 Oct 10:47:44.288 * Full resync from master: 127f65ef8fa67fd0fd33093ee44a88fe06bd4e6a:1
1172:S 01 Oct 10:47:44.357 * MASTER <-> SLAVE sync: receiving 29 bytes from master
1172:S 01 Oct 10:47:44.358 * MASTER <-> SLAVE sync: Flushing old data
1172:S 01 Oct 10:47:44.358 * MASTER <-> SLAVE sync: Loading DB in memory
Next, verify that replication really works. On the master, connect with the client tool and do the following:
root@master:/usr/local/redis-3.0.4/src# ./redis-cli -p 6379
127.0.0.1:6379> set key01 value01
OK
127.0.0.1:6379> get key01
"value01"
Then check on the slave whether the value of "key01" can be read:
root@slave01:/usr/local/redis-3.0.4/src# ./redis-cli -p 6379
127.0.0.1:6379> get key01
"value01"
127.0.0.1:6379> set key02 value02
(error) READONLY You can't write against a read only slave.
The test proves our master-slave setup works, and that write attempts on the slave are rejected, much like mysql's behavior; presumably any master-slave replication scheme should keep its slaves read-only.
Setting up replication is simple, and as with mysql replication, when the master dies and cannot be recovered quickly we can switch to a slave by hand to restore service as fast as possible. Still, with plain replication there is always a window during a failure when redis is down. Is there a better way, where the redis nodes themselves detect a dead master and promote a slave automatically, without human intervention? sentinel is exactly such a tool.
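For reference, the manual fallback is a single command on the slave you choose to promote (a sketch; run it on the surviving slave):
root@slave01:/usr/local/redis-3.0.4/src# ./redis-cli -p 6379 slaveof no one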
The sentinel management tool provides the following capabilities:
a. Monitoring: sentinel periodically checks that the master and slave redis instances are working as expected;
b. Notification: sentinel can notify an administrator, through an API, when a monitored redis instance has a problem;
c. Automatic failover: when sentinel detects that the master is not working, it elects one of the slaves as the new master and points the remaining slaves at it, all without human intervention;
d. Configuration provider: in a sentinel-managed setup, clients connect to the sentinel instances; after a failover promotes a new master, sentinel can tell clients the new master's address (see the query right after this list).
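For example, a client (or you, from redis-cli) can ask any sentinel where the current master is (a quick illustration, using the sentinel port and master name configured below):
redis@master:~/redis6379$ bin/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "192.168.207.128"
2) "6379"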
sentinel is a distributed system, designed as multiple sentinel processes cooperating; as I understand it, alongside each redis process you also run an associated sentinel process. What are the advantages of this design?
a. A master is declared unavailable only when several sentinels agree (and I would guess that electing the new master is likewise a joint decision), which avoids false positives;
b. The system keeps working even if some sentinel instances fail, so sentinel itself is not a single point of failure.
Now let's build a highly available setup with sentinel. A few notes before we start: first, you want at least 3 redis nodes; second, my lab has only two Debian VMs, so the 3 nodes run on two hosts, with 192.168.207.130 carrying 2 redis instances; third, this time I deploy as an ordinary user, which is closer to a real production environment. The plan:
Host | redis & sentinel directory | redis port | sentinel port
192.168.207.128 (master) | /home/redis/redis6379 | 6379 | 26379
192.168.207.130 (slave) | /home/redis/redis6380 | 6380 | 26380
192.168.207.130 (slave) | /home/redis/redis6381 | 6381 | 26381
Now lay redis out according to the table. Recall the compile at the beginning: the binaries we need end up in the src directory of the source tree, and the configuration files sit in the tree's root; those files are all it takes to run redis. If you no longer remember which files they are, go back to the source directory and run "make PREFIX=/tmp/redis install" to copy the compiled executables into /tmp/redis; this time I copy them from there into the directories planned above. Before continuing, remember to kill the redis processes started earlier.
On host 192.168.207.128:
redis@master:~$ whoami
redis
redis@master:~$ pwd
/home/redis
redis@master:~$ mkdir redis6379
redis@master:~$ cd redis6379/
redis@master:~/redis6379$ cp -r /tmp/redis/bin ./
redis@master:~/redis6379$ cp /usr/local/redis-3.0.4/redis.conf /usr/local/redis-3.0.4/sentinel.conf ./
redis@master:~/redis6379$ ls
bin redis.conf sentinel.conf
redis@master:~/redis6379$ mkdir log #a separate directory for the redis logs; the configuration file must be adjusted to match (omitted)
redis@master:~/redis6379$ ls bin/
redis-benchmark redis-check-aof redis-check-dump redis-cli redis-sentinel redis-server
redis@master:~/redis6379$ bin/redis-server redis.conf #start redis
Notice the "sentinel.conf" file copied above: it is the configuration a sentinel instance loads at startup. Its most common settings:
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
The first line tells sentinel to monitor a master named mymaster at 127.0.0.1:6379; the trailing "2" is the quorum: marking this master as failed requires the agreement of 2 sentinel processes, and with fewer than 2 in agreement no failover happens. In a distributed HA environment this is usually set to more than half the number of sentinels;
The second line, down-after-milliseconds, is how long (in milliseconds) the master must be unreachable before a sentinel considers it down;
The third line, failover-timeout, is the failover timeout; the comments in the configuration file describe it in detail, but roughly it bounds several phases of a failover, for example how long to wait before retrying after a failed attempt;
The fourth line, parallel-syncs, is how many slaves may be reconfigured to sync from the new master simultaneously after an election. While a slave resynchronizes with the new master its old dataset is flushed and it cannot serve queries, so keeping the default of 1 is recommended.
This file contains no slave-related settings; only the master is specified. Once the sentinels are started, they discover the slaves and write them into this file themselves; in other words, sentinel maintains its own configuration file to a certain extent. Enough talk, let's try it:
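You can verify this self-maintenance once everything below is running (a hedged illustration; sentinel appends known-slave/known-sentinel lines similar to these, each sentinel line ending in that instance's runid):
redis@master:~/redis6379$ grep known sentinel.conf
sentinel known-slave mymaster 192.168.207.130 6380
sentinel known-slave mymaster 192.168.207.130 6381
sentinel known-sentinel mymaster 192.168.207.130 26380 0b6789bc75faf4e6b4f18114ea51442c2ea79dda
sentinel known-sentinel mymaster 192.168.207.130 26381 f0111604f01d66e12e7e9ddbc86d1f92eeb99587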
The sentinel configuration on node 192.168.207.128:
redis@master:~/redis6379$ grep -v "^#" sentinel.conf | grep -v "^$"
port 26379
dir /tmp
sentinel monitor mymaster 192.168.207.128 6379 2 #I only replaced the default ip with the real interface address; everything else is default
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
With that in place, start sentinel:
redis@master:~/redis6379$ bin/redis-sentinel sentinel.conf
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.0.4 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in sentinel mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 26379
| `-._ `._ / _.-' | PID: 1573
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
1573:X 01 Oct 15:24:49.865 # Sentinel runid is a99b3228db674545eec6dcf4be2934520e469a3a
1573:X 01 Oct 15:24:49.865 # +monitor master mymaster 192.168.207.128 6379 quorum 2
#sentinel also runs in the foreground when started this way; while debugging you can redirect its stdout to a file, like a log, which makes the startup sequence easy to review. I leave it in the foreground here.
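If you prefer it in the background, a redirect is all it takes (a sketch; the log file name is just an example):
redis@master:~/redis6379$ nohup bin/redis-sentinel sentinel.conf >> log/sentinel26379.log 2>&1 &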
On node 192.168.207.130 the redis and sentinel configurations differ only in the ports, so I won't repeat them. One point worth stressing: with multiple redis instances on one host, each needs its own pid file (two instances must not share one); put it in the instance's own directory with a name that makes it obvious which redis process it belongs to. And don't forget "slaveof 192.168.207.128 6379" in both instances' redis.conf on 192.168.207.130, since they are slaves of the master; the configuration steps themselves are omitted.
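Concretely, the 6380 instance's redis.conf differs only in lines like these (a sketch matching the directory layout shown below):
port 6380
pidfile /home/redis/redis6380/redis6380.pid
logfile "/home/redis/redis6380/log/redis6380.log"
slaveof 192.168.207.128 6379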
The directory layout on node 192.168.207.130 looks like this:
redis@slave01:~$ ls
redis6380 redis6381
redis@slave01:~$ tree
.
├── redis6380
│ ├── bin
│ │ ├── redis-benchmark
│ │ ├── redis-check-aof
│ │ ├── redis-check-dump
│ │ ├── redis-cli
│ │ ├── redis-sentinel -> redis-server
│ │ └── redis-server
│ ├── dump.rdb
│ ├── log
│ │ └── redis6380.log
│ ├── redis6380.pid
│ ├── redis.conf
│ └── sentinel.conf
└── redis6381
├── bin
│ ├── redis-benchmark
│ ├── redis-check-aof
│ ├── redis-check-dump
│ ├── redis-cli
│ ├── redis-sentinel -> redis-server
│ └── redis-server
├── log
│ └── redis6381.log
├── redis6381.pid
├── redis.conf
└── sentinel.conf
Note: be sure to edit redis.conf and sentinel.conf for each instance carefully, following the table above.
With redis and sentinel started on every node, each sentinel prints startup log lines in its window; here they are for all three instances:
redis6379:
1644:X 01 Oct 15:40:12.692 # Sentinel runid is e3bc2b0fc1cba2a12b9c9234d2983ab58a2df1e0
1644:X 01 Oct 15:40:12.693 # +monitor master mymaster 192.168.207.128 6379 quorum 2
1644:X 01 Oct 15:40:27.889 * +sentinel sentinel 192.168.207.130:26380 192.168.207.130 26380 @ mymaster 192.168.207.128 6379
1644:X 01 Oct 15:40:37.396 * +sentinel sentinel 192.168.207.130:26381 192.168.207.130 26381 @ mymaster 192.168.207.128 6379
redis6380:
1931:X 01 Oct 15:40:25.833 # Sentinel runid is 0b6789bc75faf4e6b4f18114ea51442c2ea79dda
1931:X 01 Oct 15:40:25.833 # +monitor master mymaster 192.168.207.128 6379 quorum 2
1931:X 01 Oct 15:40:27.288 * +sentinel sentinel 192.168.207.128:26379 192.168.207.128 26379 @ mymaster 192.168.207.128 6379
1931:X 01 Oct 15:40:37.469 * +sentinel sentinel 192.168.207.130:26381 192.168.207.130 26381 @ mymaster 192.168.207.128 6379
redis6381:
1943:X 01 Oct 15:40:35.409 # Sentinel runid is f0111604f01d66e12e7e9ddbc86d1f92eeb99587
1943:X 01 Oct 15:40:35.409 # +monitor master mymaster 192.168.207.128 6379 quorum 2
1943:X 01 Oct 15:40:37.641 * +sentinel sentinel 192.168.207.128:26379 192.168.207.128 26379 @ mymaster 192.168.207.128 6379
1943:X 01 Oct 15:40:38.271 * +sentinel sentinel 192.168.207.130:26380 192.168.207.130 26380 @ mymaster 192.168.207.128 6379
The log output above needs little explanation; it is clear what sentinel is doing.
Now the tests, in two parts: first, that replication across the three redis instances works; second, that sentinel failover works.
Replication test:
redis@master:~/redis6379$ bin/redis-cli -p 6379
127.0.0.1:6379> get foo #initially the key foo has no value, nor does it on the two slaves below
(nil)
127.0.0.1:6379> set foo bar #give foo the value bar
OK
127.0.0.1:6379> get foo #once foo has a value on the master, the two slaves below have it too
"bar"
127.0.0.1:6379>
redis@slave01:~/redis6380$ bin/redis-cli -p 6380
127.0.0.1:6380> get foo
(nil)
127.0.0.1:6380> get foo
"bar"
redis@slave01:~/redis6381$ bin/redis-cli -p 6381
127.0.0.1:6381> get foo
(nil)
127.0.0.1:6381> get foo
"bar"
127.0.0.1:6381>
sentinel failover test:
redis@master:~/redis6379$ bin/redis-cli -p 26379
127.0.0.1:26379> sentinel master mymaster #prints status information about the master; "mymaster" is the master name configured in sentinel.conf
1) "name"
2) "mymaster"
3) "ip"
4) "192.168.207.128"
5) "port"
6) "6379"
7) "runid"
8) "01d41d73fcf35e1c85f6d2305c54e368fdfc3578"
9) "flags"
10) "master"
11) "pending-commands"
12) "0"
13) "last-ping-sent"
14) "0"
15) "last-ok-ping-reply"
16) "518"
17) "last-ping-reply"
18) "518"
19) "down-after-milliseconds"
20) "30000"
21) "info-refresh"
22) "3765"
23) "role-reported"
24) "master"
25) "role-reported-time"
26) "336155"
27) "config-epoch"
28) "0"
29) "num-slaves"
30) "2"
31) "num-other-sentinels"
32) "2"
33) "quorum"
34) "2"
35) "failover-timeout"
36) "180000"
37) "parallel-syncs"
38) "1"
Print the status of the slaves in the mymaster group; the output below shows two slaves:
127.0.0.1:26379> sentinel slaves mymaster
1) 1) "name"
2) "192.168.207.130:6381"
3) "ip"
4) "192.168.207.130"
5) "port"
6) "6381"
7) "runid"
8) "84ad88633faffd65496d232fdd5f8cd3b57836eb"
9) "flags"
10) "slave"
11) "pending-commands"
12) "0"
13) "last-ping-sent"
14) "0"
15) "last-ok-ping-reply"
16) "445"
17) "last-ping-reply"
18) "445"
19) "down-after-milliseconds"
20) "30000"
21) "info-refresh"
22) "9710"
23) "role-reported"
24) "slave"
25) "role-reported-time"
26) "984624"
27) "master-link-down-time"
28) "0"
29) "master-link-status"
30) "ok"
31) "master-host"
32) "192.168.207.128"
33) "master-port"
34) "6379"
35) "slave-priority"
36) "100"
37) "slave-repl-offset"
38) "314641"
2) 1) "name"
2) "192.168.207.130:6380"
3) "ip"
4) "192.168.207.130"
5) "port"
6) "6380"
7) "runid"
8) "b69a95dded62594ad295ced4a1f1ccefa942e901"
9) "flags"
10) "slave"
11) "pending-commands"
12) "0"
13) "last-ping-sent"
14) "0"
15) "last-ok-ping-reply"
16) "445"
17) "last-ping-reply"
18) "445"
19) "down-after-milliseconds"
20) "30000"
21) "info-refresh"
22) "9710"
23) "role-reported"
24) "slave"
25) "role-reported-time"
26) "984624"
27) "master-link-down-time"
28) "0"
29) "master-link-status"
30) "ok"
31) "master-host"
32) "192.168.207.128"
33) "master-port"
34) "6379"
35) "slave-priority"
36) "100"
37) "slave-repl-offset"
38) "314641"
Next, kill the redis instance listening on port 6379 and watch whether the sentinels elect a new master; following each instance's log shows exactly what happens after the master dies:
redis@master:~/redis6379$ ps aux | grep :6379
redis 882 0.1 0.3 38976 3920 ? Ssl 14:45 0:05 bin/redis-server *:6379
redis 1214 0.0 0.1 12948 1980 pts/1 S+ 15:34 0:00 grep :6379
redis@master:~/redis6379$ kill 882
sentinel 26379 log output:
1206:X 05 Oct 15:35:37.225 # +sdown master mymaster 192.168.207.128 6379
1206:X 05 Oct 15:35:37.302 # +new-epoch 1
1206:X 05 Oct 15:35:37.304 # +vote-for-leader 3123628221014c13a37183b9e054de2fad79f4f3 1
1206:X 05 Oct 15:35:38.317 # +odown master mymaster 192.168.207.128 6379 #quorum 3/2
1206:X 05 Oct 15:35:38.317 # Next failover delay: I will not start a failover before Mon Oct 5 15:41:38 2015
1206:X 05 Oct 15:35:38.394 # +config-update-from sentinel 192.168.207.130:26381 192.168.207.130 26381 @ mymaster 192.168.207.128 6379
1206:X 05 Oct 15:35:38.394 # +switch-master mymaster 192.168.207.128 6379 192.168.207.130 6381
1206:X 05 Oct 15:35:38.395 * +slave slave 192.168.207.130:6380 192.168.207.130 6380 @ mymaster 192.168.207.130 6381
1206:X 05 Oct 15:35:38.395 * +slave slave 192.168.207.128:6379 192.168.207.128 6379 @ mymaster 192.168.207.130 6381
1206:X 05 Oct 15:36:08.405 # +sdown slave 192.168.207.128:6379 192.168.207.128 6379 @ mymaster 192.168.207.130 6381
sentinel 26380 log output:
1273:X 05 Oct 15:35:37.290 # +sdown master mymaster 192.168.207.128 6379
1273:X 05 Oct 15:35:37.295 # +new-epoch 1
1273:X 05 Oct 15:35:37.298 # +vote-for-leader 3123628221014c13a37183b9e054de2fad79f4f3 1
1273:X 05 Oct 15:35:37.390 # +odown master mymaster 192.168.207.128 6379 #quorum 3/2
1273:X 05 Oct 15:35:37.390 # Next failover delay: I will not start a failover before Mon Oct 5 15:41:38 2015
1273:X 05 Oct 15:35:38.388 # +config-update-from sentinel 192.168.207.130:26381 192.168.207.130 26381 @ mymaster 192.168.207.128 6379
1273:X 05 Oct 15:35:38.388 # +switch-master mymaster 192.168.207.128 6379 192.168.207.130 6381
1273:X 05 Oct 15:35:38.389 * +slave slave 192.168.207.130:6380 192.168.207.130 6380 @ mymaster 192.168.207.130 6381
1273:X 05 Oct 15:35:38.389 * +slave slave 192.168.207.128:6379 192.168.207.128 6379 @ mymaster 192.168.207.130 6381
1273:X 05 Oct 15:36:08.390 # +sdown slave 192.168.207.128:6379 192.168.207.128 6379 @ mymaster 192.168.207.130 6381
sentinel 26381 log output:
1278:X 05 Oct 15:35:37.230 # +sdown master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:37.291 # +odown master mymaster 192.168.207.128 6379 #quorum 2/2
1278:X 05 Oct 15:35:37.292 # +new-epoch 1
1278:X 05 Oct 15:35:37.292 # +try-failover master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:37.293 # +vote-for-leader 3123628221014c13a37183b9e054de2fad79f4f3 1
1278:X 05 Oct 15:35:37.298 # 192.168.207.130:26380 voted for 3123628221014c13a37183b9e054de2fad79f4f3 1
1278:X 05 Oct 15:35:37.299 # 192.168.207.128:26379 voted for 3123628221014c13a37183b9e054de2fad79f4f3 1
1278:X 05 Oct 15:35:37.366 # +elected-leader master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:37.367 # +failover-state-select-slave master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:37.451 # +selected-slave slave 192.168.207.130:6381 192.168.207.130 6381 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:37.452 * +failover-state-send-slaveof-noone slave 192.168.207.130:6381 192.168.207.130 6381 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:37.528 * +failover-state-wait-promotion slave 192.168.207.130:6381 192.168.207.130 6381 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:38.321 # +promoted-slave slave 192.168.207.130:6381 192.168.207.130 6381 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:38.321 # +failover-state-reconf-slaves master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:38.385 * +slave-reconf-sent slave 192.168.207.130:6380 192.168.207.130 6380 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:39.325 * +slave-reconf-inprog slave 192.168.207.130:6380 192.168.207.130 6380 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:39.427 # -odown master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:40.342 * +slave-reconf-done slave 192.168.207.130:6380 192.168.207.130 6380 @ mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:40.426 # +failover-end master mymaster 192.168.207.128 6379
1278:X 05 Oct 15:35:40.426 # +switch-master mymaster 192.168.207.128 6379 192.168.207.130 6381
1278:X 05 Oct 15:35:40.426 * +slave slave 192.168.207.130:6380 192.168.207.130 6380 @ mymaster 192.168.207.130 6381
1278:X 05 Oct 15:35:40.426 * +slave slave 192.168.207.128:6379 192.168.207.128 6379 @ mymaster 192.168.207.130 6381
1278:X 05 Oct 15:36:10.476 # +sdown slave 192.168.207.128:6379 192.168.207.128 6379 @ mymaster 192.168.207.130 6381
Reading the logs carefully: each sentinel notices that the redis process on 6379 is gone and marks the master sdown (subjectively down); once the sentinels have exchanged views and a majority agrees the old master is dead, it is marked odown (objectively down). Failover then begins: one of the slaves is elected as the new master, and the other slaves have their master updated to the newly promoted node (sentinel.conf is rewritten accordingly). When a slave switches to the new master it resynchronizes its data, which can be followed in detail in the redis instances' log files.
redis high availability is now in place; the principle is simple and the implementation genuinely easy. Note that in a real production environment, clients do not connect to the redis instances directly but to the set of sentinel instances.
Using the sentinel tool for high availability counts as a clustering solution of sorts, but it only strengthens availability; it does not scale redis's performance, since at any moment only the master serves traffic. Replication plus sentinel copes with ordinary load, but not with extremely high concurrency. For such scenarios, redis has supported cluster since version 3.0: a distributed system in the true sense, which scales redis out, lets multiple redis nodes accept client operations concurrently, spreads data across nodes with automatic sharding, and greatly raises throughput. Being a distributed system, redis cluster is subject to the CAP theorem; on the consistency side, data between nodes is replicated asynchronously.
What kind of cluster system is redis cluster? It has two key properties:
a. Data is sharded across the nodes automatically;
b. The system stays available when a minority of nodes (fewer than half) fails.
redis cluster needs two TCP ports per node to work. One is the normal redis port, 6379 by default, which clients connect to. The other is the redis port plus 10000; this socket is the cluster bus: if a redis instance's port is 6379, its cluster bus port is 16379. Over the bus the nodes talk node-to-node using a binary protocol, handling failure detection, configuration updates, failover and so on. Client tools should never connect to the bus port, but it must be open between all the nodes; if a firewall sits in between, be sure to open it, or the cluster will not work properly.
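With iptables, for example, both ports must be allowed between the nodes (a hedged sketch for the default ports; adjust to your instances' ports):
iptables -A INPUT -p tcp --dport 6379 -j ACCEPT #client port
iptables -A INPUT -p tcp --dport 16379 -j ACCEPT #cluster bus = client port + 10000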
How is data placed in such a distributed cluster? Recall how memcached handles distribution: with a consistent hashing algorithm (see my earlier posts if you are not familiar with it). redis cluster does not use consistent hashing; it uses a mechanism called "hash slots". A cluster has 16384 hash slots in total. To place a key-value pair, redis computes crc16 of the key and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383; redis maps the slots roughly evenly onto the nodes, each node owning a subset of the 16384 slots. Hash slots make it very easy to add or remove nodes: to add a node, move part of the slots from the existing nodes onto the new one; to remove a node, move its slots onto the remaining nodes. Neither operation requires the cluster to stop serving.
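You can ask any cluster node which slot a key maps to (a quick check once the cluster below is up; 4757 matches the redirect we will see later for testkey):
192.168.207.128:7000> cluster keyslot testkey
(integer) 4757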
Now the master-slave model inside the cluster. Suppose three redis instances form a cluster, each assigned a share of the hash slots; if one node dies, can the cluster still work? I'll leave the answer for the end, but clearly the slots the dead node owned become unavailable, and any data stored there can no longer be served to clients. To keep the cluster highly available when a node dies, redis offers this scheme: give each serving master a slave; when the master dies, the slave is promoted immediately, the cluster remains complete, and it keeps working. Since this again involves replication, master and slave cannot be strongly consistent, so there is some probability of losing writes; we need to be clear about that. If a master and its slave die together, the cluster stops working: all 16384 slots must exist in the cluster, and if any node's slot subset is missing, the cluster will not operate.
4.1 Setting up a redis cluster
Now let's build a redis cluster and experience this remarkable open-source project. The plan:
Instance | redis port
redis7000 | 7000
redis7001 | 7001
redis7002 | 7002
redis7003 | 7003
redis7004 | 7004
redis7005 | 7005
Still the same two hosts, each carrying three redis instances. Three of the six will act as masters and the other three as their slaves; which is which is decided by redis cluster itself. Building the cluster uses the "redis-trib.rb" ruby script found in the source tree, so a ruby environment is required.
Each instance directory looks like the following; only one is shown:
redis@master:~$ pwd
/home/redis
redis@master:~$ tree redis7000
redis7000
├── bin
│ ├── redis-benchmark
│ ├── redis-check-aof
│ ├── redis-check-dump
│ ├── redis-cli
│ ├── redis-sentinel -> redis-server
│ ├── redis-server
│ └── redis-trib.rb
└── redis.conf
We only need to edit redis.conf in each instance directory, adjusting it per the table above, like this:
redis@master:~/redis7000$ grep -v "^#" redis.conf | grep -v "^$"
daemonize yes #run in the background
pidfile /home/redis/redis7000/redis7000.pid #pid file
port 7000 #listening port
tcp-backlog 511
bind 192.168.207.128 #bind address
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/home/redis/redis7000/redis7000.log" #log file
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes #log every write operation to the append-only file
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes #enable cluster mode
cluster-config-file nodes-7000.conf #cluster state file, maintained by redis itself
cluster-node-timeout 10000 #timeout for inter-node cluster communication, in milliseconds
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
Note the commented lines above; for each instance only the listening port, bind address and file paths need to be changed to its own values.
Once all 6 instances' redis.conf files are edited per the table, start each redis instance and make sure it is listening on its port.
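Starting them one by one is mechanical; a small loop per host also works (a sketch assuming the layout above; instances 7000-7002 live on 192.168.207.128):
redis@master:~$ for d in redis7000 redis7001 redis7002; do (cd $d && bin/redis-server redis.conf); done
redis@master:~$ ss -tnl | grep 700 #verify all three ports are listening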
Back on the host where redis7000 lives: redis-trib.rb is a ruby program, so install a ruby environment first:
root@master:~# aptitude install -y ruby-full
Then switch the gem source; the default source is blocked from here, you know why:
root@master:~# gem source -l
*** CURRENT SOURCES ***
https://rubygems.org/
root@master:~# gem source --remove https://rubygems.org/
https://rubygems.org/ removed from sources
root@master:~# gem source -a http://ruby.taobao.org/
http://ruby.taobao.org/ added to sources
root@master:~# gem source -l
*** CURRENT SOURCES ***
http://ruby.taobao.org/
Driving the nodes into a cluster with redis-trib.rb also requires ruby's redis bindings, so install the redis gem:
root@master:~# gem install redis #takes only a few seconds
Now join all the nodes into a cluster:
redis@master:~/redis7000$ bin/redis-trib.rb create --replicas 1 192.168.207.128:7000 192.168.207.128:7001 192.168.207.128:7002 192.168.207.130:7003 192.168.207.130:7004 192.168.207.130:7005
##"create" means create a cluster; "--replicas 1" means every master gets one slave
>>> Creating cluster #cluster creation starts
Connecting to node 192.168.207.128:7000: OK
Connecting to node 192.168.207.128:7001: OK
Connecting to node 192.168.207.128:7002: OK
Connecting to node 192.168.207.130:7003: OK
Connecting to node 192.168.207.130:7004: OK
Connecting to node 192.168.207.130:7005: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters: #redis-trib automatically picks 3 instances as masters and 3 as slaves
192.168.207.128:7000
192.168.207.130:7003
192.168.207.128:7001
Adding replica 192.168.207.130:7004 to 192.168.207.128:7000
Adding replica 192.168.207.128:7002 to 192.168.207.130:7003
Adding replica 192.168.207.130:7005 to 192.168.207.128:7001
M: 7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000
slots:0-5460 (5461 slots) master
M: 568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001
slots:10923-16383 (5461 slots) master
S: 0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002
replicates 8490b5e84bf0359871ea3fa55b97bb9877be0512
M: 8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003
slots:5461-10922 (5462 slots) master
S: 9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004
replicates 7dae923773125c5956605fac6159412850b100b3
S: f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005
replicates 568669e03c61b3c4edc31643dcb47e85bc2c3e23
Can I set the above configuration? (type 'yes' to accept): yes #confirm to start configuring the cluster
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.207.128:7000) #check the cluster state; the output shows that only the masters were assigned hash slots
M: 7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000
slots:0-5460 (5461 slots) master
M: 568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001
slots:10923-16383 (5461 slots) master
M: 0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002
slots: (0 slots) master
replicates 8490b5e84bf0359871ea3fa55b97bb9877be0512
M: 8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003
slots:5461-10922 (5462 slots) master
M: 9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004
slots: (0 slots) master
replicates 7dae923773125c5956605fac6159412850b100b3
M: f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005
slots: (0 slots) master
replicates 568669e03c61b3c4edc31643dcb47e85bc2c3e23
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
With that, the redis cluster is created successfully.
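You can also confirm the overall health with "cluster info" (a quick check; only the first few fields are shown):
192.168.207.128:7000> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3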
Now the tests. The simplest way is to use the redis-cli client tool directly:
redis@slave01:~/redis7003$ bin/redis-cli -c -h 192.168.207.130 -p 7003 #"-c" enables cluster mode, so the client follows redirects
192.168.207.130:7003> get foo
-> Redirected to slot [12182] located at 192.168.207.128:7001
(nil)
192.168.207.128:7001> set foo bar
OK
192.168.207.128:7001> get foo
"bar"
192.168.207.128:7001> set key01 value01
OK
192.168.207.128:7001> get key01
"value01"
192.168.207.128:7001> set testkey testvalue
-> Redirected to slot [4757] located at 192.168.207.128:7000
OK
192.168.207.128:7000> get testkey
"testvalue"
192.168.207.128:7000>
From the test above: fetching foo on 7003 redirected us to 7001, and setting testkey on 7001 redirected us to 7000. How do we list the nodes of a cluster? Run the following:
192.168.207.128:7000> cluster nodes
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444268798177 2 connected 10923-16383
9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004 slave 7dae923773125c5956605fac6159412850b100b3 0 1444268797169 5 connected
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444268796159 4 connected
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 myself,master - 0 0 1 connected 0-5460
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444268797169 6 connected
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 master - 0 1444268794141 4 connected 5461-10922
192.168.207.128:7000>
Next, test the cluster's availability. My approach: kill a master, then inspect the cluster state:
redis@master:~/redis7000$ cat redis7000.pid
2184
redis@master:~/redis7000$ kill 2184 #instance 7000 is now dead
redis@slave01:~/redis7003$ bin/redis-cli -c -h 192.168.207.130 -p 7003
192.168.207.130:7003> cluster nodes
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444268931560 6 connected
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 master,fail - 1444268887245 1444268883792 1 disconnected
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444268933596 4 connected
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444268932580 2 connected 10923-16383
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 myself,master - 0 0 4 connected 5461-10922
9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004 master - 0 1444268934613 7 connected 0-5460
The node list shows that 7004, originally the slave of 7000, became a master after 7000 was killed.
Before killing 7000 we set the key testkey to testvalue; with 7000 dead, test whether the key can still be queried:
192.168.207.128:7001> get testkey
-> Redirected to slot [4757] located at 192.168.207.130:7004
"testvalue"
The value of testkey is correctly returned from 7004.
If we now start 7000 again, it comes back as a slave of 7004:
redis@master:~/redis7000$ bin/redis-server redis.conf
192.168.207.130:7004> cluster nodes
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444269002483 6 connected
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444269003498 2 connected 10923-16383
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 master - 0 1444269005530 4 connected 5461-10922
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444269006546 4 connected
9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004 myself,master - 0 0 7 connected 0-5460
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave 9d076e70871fc7291485aba97b2623dc9fb3b3b0 0 1444269004514 7 connected
This validates the cluster's high availability.
4.2 Adding a node to a redis cluster
One day, as the business grows, the current cluster can no longer keep up, and you must add nodes to extend the cluster's capacity. Adding a node to a redis cluster is also a pleasant task. First prepare an empty redis instance; below I have a "redis7006" instance ready:
redis@slave01:~$ ls
redis7003 redis7004 redis7005 redis7006
redis@slave01:~$ tree redis7006
redis7006
├── bin
│ ├── redis-benchmark
│ ├── redis-check-aof
│ ├── redis-check-dump
│ ├── redis-cli
│ ├── redis-sentinel
│ ├── redis-server
│ └── redis-trib.rb
└── redis.conf
The configuration changes are just the port number and the pid/log file suffixes, so they are not shown again.
Then start the new instance:
redis@slave01:~$ cd redis7006/
redis@slave01:~/redis7006$ bin/redis-server redis.conf
The new instance is ready; now add it to the cluster. Will it join as a master or as a slave? That is controlled by arguments; without any, it joins as a master. The command to add a node:
./redis-trib.rb add-node IP:PORT IP:PORT
The first IP:PORT is the new instance's address and port; the second is the address and port of any node already in the cluster, master or slave. To add the new node as a slave instead:
./redis-trib.rb add-node --slave IP:PORT IP:PORT
Again the first IP:PORT is the new instance and the second is any cluster node. Added this way, you cannot control whose slave the new node becomes; to pick its master explicitly:
./redis-trib.rb add-node --slave --master-id ID IP:PORT IP:PORT
This last form follows the same pattern and needs no further explanation.
With the commands understood, add the redis7006 instance to the existing cluster:
redis@master:~/redis7000$ bin/redis-trib.rb add-node 192.168.207.130:7006 192.168.207.128:7002
>>> Adding node 192.168.207.130:7006 to cluster 192.168.207.128:7002
Connecting to node 192.168.207.128:7002: OK
Connecting to node 192.168.207.130:7003: OK
Connecting to node 192.168.207.130:7004: OK
Connecting to node 192.168.207.128:7001: OK
Connecting to node 192.168.207.128:7000: OK
Connecting to node 192.168.207.130:7005: OK
>>> Performing Cluster Check (using node 192.168.207.128:7002)
S: 0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002
slots: (0 slots) slave
replicates 8490b5e84bf0359871ea3fa55b97bb9877be0512
M: 8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000
slots: (0 slots) slave
replicates 9d076e70871fc7291485aba97b2623dc9fb3b3b0
S: f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005
slots: (0 slots) slave
replicates 568669e03c61b3c4edc31643dcb47e85bc2c3e23
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 192.168.207.130:7006: OK
>>> Send CLUSTER MEET to node 192.168.207.130:7006 to make it join the cluster.
[OK] New node added correctly.
With the node added, check the cluster state again:
192.168.207.130:7003> cluster nodes
b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006 master - 0 1444270807484 0 connected
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444270805452 6 connected
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave 9d076e70871fc7291485aba97b2623dc9fb3b3b0 0 1444270806468 7 connected
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444270804437 4 connected
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444270804437 2 connected 10923-16383
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 myself,master - 0 0 4 connected 5461-10922
9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004 master - 0 1444270804437 7 connected 0-5460
The output shows that redis7006 is now a master, but it holds no data, because it has not been assigned any hash slots. To put the node to real work you must reshard, moving part of the hash slots from the other nodes onto the new one; this is also done with redis-trib.rb:
./redis-trib.rb reshard IP:PORT
The reshard argument means re-split the slots; the trailing IP:PORT is any one node of the cluster, and redis-trib.rb finds the other nodes automatically.
Let's move 2000 slots from 7004 onto 7006:
redis@master:~/redis7000$ bin/redis-trib.rb reshard 192.168.207.128:7000
Connecting to node 192.168.207.128:7000: OK
Connecting to node 192.168.207.130:7004: OK
Connecting to node 192.168.207.130:7003: OK
Connecting to node 192.168.207.128:7002: OK
Connecting to node 192.168.207.128:7001: OK
Connecting to node 192.168.207.130:7006: OK
Connecting to node 192.168.207.130:7005: OK
>>> Performing Cluster Check (using node 192.168.207.128:7000)
S: 7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000
slots: (0 slots) slave
replicates 9d076e70871fc7291485aba97b2623dc9fb3b3b0
M: 9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002
slots: (0 slots) slave
replicates 8490b5e84bf0359871ea3fa55b97bb9877be0512
M: 568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006
slots: (0 slots) master
0 additional replica(s)
S: f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005
slots: (0 slots) slave
replicates 568669e03c61b3c4edc31643dcb47e85bc2c3e23
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000 #enter 2000
What is the receiving node ID? b502b7efac704e92ac439933a42edf613f908fea #the node to receive the slots; this is 7006's ID
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:9d076e70871fc7291485aba97b2623dc9fb3b3b0 #the node to take the slots from; I entered 7004's ID
Source node #2:done #after typing done, the exact slots to be moved are listed
Moving slot 0 from 9d076e70871fc7291485aba97b2623dc9fb3b3b0
.........omitted........
Moving slot 1995 from 9d076e70871fc7291485aba97b2623dc9fb3b3b0
Moving slot 1996 from 9d076e70871fc7291485aba97b2623dc9fb3b3b0
Moving slot 1997 from 9d076e70871fc7291485aba97b2623dc9fb3b3b0
Moving slot 1998 from 9d076e70871fc7291485aba97b2623dc9fb3b3b0
Moving slot 1999 from 9d076e70871fc7291485aba97b2623dc9fb3b3b0
Do you want to proceed with the proposed reshard plan (yes/no)? yes #run the plan; the slots actually start moving
Moving slot 0 from 192.168.207.130:7004 to 192.168.207.130:7006:
.....omitted.....
Moving slot 1993 from 192.168.207.130:7004 to 192.168.207.130:7006:
Moving slot 1994 from 192.168.207.130:7004 to 192.168.207.130:7006:
Moving slot 1995 from 192.168.207.130:7004 to 192.168.207.130:7006:
Moving slot 1996 from 192.168.207.130:7004 to 192.168.207.130:7006:
Moving slot 1997 from 192.168.207.130:7004 to 192.168.207.130:7006:
Moving slot 1998 from 192.168.207.130:7004 to 192.168.207.130:7006:
Moving slot 1999 from 192.168.207.130:7004 to 192.168.207.130:7006:
redis@master:~/redis7000$
When the move finishes, check the cluster state again:
192.168.207.130:7003> cluster nodes
b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006 master - 0 1444273978574 8 connected 0-1999 #now owns 2000 slots
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444273977559 6 connected
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave 9d076e70871fc7291485aba97b2623dc9fb3b3b0 0 1444273975530 7 connected
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444273977559 4 connected
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444273975530 2 connected 10923-16383
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 myself,master - 0 0 4 connected 5461-10922
9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004 master - 0 1444273976544 7 connected 2000-5460 #2000 slots fewer
The new node has joined the cluster and is doing real work.
4.3 Removing a node from a redis cluster
Where nodes can be added, they can also be removed: when load drops and the cluster's capacity far exceeds what the business demands, some nodes should be removed and their hosts reclaimed, rather than letting resources go to waste.
To remove a node, use the following command:
./redis-trib.rb del-node IP:PORT 'node-id'
IP:PORT is any node in the cluster; node-id is the ID of the node you want to remove.
Let's test removal by deleting node 7004, with the following command:
redis@master:~/redis7000$ bin/redis-trib.rb del-node 192.168.207.128:7000 '9d076e70871fc7291485aba97b2623dc9fb3b3b0'
>>> Removing node 9d076e70871fc7291485aba97b2623dc9fb3b3b0 from cluster 192.168.207.128:7000
Connecting to node 192.168.207.128:7000: OK
Connecting to node 192.168.207.130:7004: OK
Connecting to node 192.168.207.130:7003: OK
Connecting to node 192.168.207.128:7002: OK
Connecting to node 192.168.207.128:7001: OK
Connecting to node 192.168.207.130:7006: OK
Connecting to node 192.168.207.130:7005: OK
[ERR] Node 192.168.207.130:7004 is not empty! Reshard data away and try again.
That failed: the node is not empty and must be resharded away first. It still holds slots 2000-5460, which have to be moved off before the node can be deleted. Move all of 7004's slots onto 7006, then try the removal again:
redis@master:~/redis7000$ bin/redis-trib.rb reshard --from 9d076e70871fc7291485aba97b2623dc9fb3b3b0 --to b502b7efac704e92ac439933a42edf613f908fea --slots 3461 --yes 192.168.207.128:7000
#This is the non-interactive form of the migration: the "--from ... --to" structure says which node's slots move to which node. The officially documented format is "./redis-trib.rb reshard <host>:<port> --from <node-id> --to <node-id> --slots --yes", but in practice I found that "<host>:<port>" (any one node's address and port) has to be written last.
Run the command, wait a moment, and the migration completes; then look at node 7004's state in the cluster:
192.168.207.130:7004> cluster nodes
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444283310797 6 connected
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444283307772 2 connected 10923-16383
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 master - 0 1444283306763 4 connected 5461-10922
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444283308780 4 connected
9d076e70871fc7291485aba97b2623dc9fb3b3b0 192.168.207.130:7004 myself,master - 0 0 7 connected
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave 9d076e70871fc7291485aba97b2623dc9fb3b3b0 0 1444283308780 7 connected
b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006 master - 0 1444283309788 8 connected 0-5460
As you can see, 7004 has no slots left in the output above; now try removing the node again:
redis@master:~/redis7000$ bin/redis-trib.rb del-node 192.168.207.128:7000 '9d076e70871fc7291485aba97b2623dc9fb3b3b0'
>>> Removing node 9d076e70871fc7291485aba97b2623dc9fb3b3b0 from cluster 192.168.207.128:7000
Connecting to node 192.168.207.128:7000: OK
Connecting to node 192.168.207.130:7004: OK
Connecting to node 192.168.207.130:7003: OK
Connecting to node 192.168.207.128:7002: OK
Connecting to node 192.168.207.128:7001: OK
Connecting to node 192.168.207.130:7006: OK
Connecting to node 192.168.207.130:7005: OK
>>> Sending CLUSTER FORGET messages to the cluster...
>>> 192.168.207.128:7000 as replica of 192.168.207.130:7006
>>> SHUTDOWN the node.
The removal succeeded. After a node is removed, the cluster's master-slave relationships are recomputed and the removed node is shut down; here 7000 became a slave of 7006. The cluster state now looks like this:
redis@slave01:~/redis7004$ bin/redis-cli -c -h 192.168.207.130 -p 7003
192.168.207.130:7003> cluster nodes
b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006 master - 0 1444283726189 8 connected 0-5460
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444283728207 6 connected
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave b502b7efac704e92ac439933a42edf613f908fea 0 1444283725182 8 connected
#note that its master is now 7006
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 1444283729216 4 connected
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444283728207 2 connected 10923-16383
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 myself,master - 0 0 4 connected 5461-10922
4.4 Manual master-slave switchover in a redis cluster
In some situations you may need to swap a master and a slave in the cluster, for example to take a master offline for an upgrade. You could do it simply and brutally: shut the master down, let the cluster notice a master is missing and promote its slave, then start the old node again, which comes back as a slave, after which you can do whatever you planned. But that brute-force route briefly interrupts service while the switchover happens. For planned, controlled maintenance we want something smoother, and the cluster's manual switchover facility meets that need.
First connect to the cluster and look at its state:
redis@slave01:~/redis7004$ bin/redis-cli -c -h 192.168.207.128 -p 7002
#connect to slave 7002
192.168.207.128:7002> cluster nodes
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 master - 0 1444285238614 4 connected 5461-10922
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 myself,slave 8490b5e84bf0359871ea3fa55b97bb9877be0512 0 0 3 connected
#7002's master is 7003
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444285240634 2 connected 10923-16383
b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006 master - 0 1444285237604 8 connected 0-5460
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave b502b7efac704e92ac439933a42edf613f908fea 0 1444285239625 8 connected
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444285239625 6 connected
Now perform the manual switchover, swapping the master-slave roles of 7002 and 7003:
192.168.207.128:7002> cluster failover
#this is the switchover command
OK
192.168.207.128:7002> cluster nodes
8490b5e84bf0359871ea3fa55b97bb9877be0512 192.168.207.130:7003 slave 0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 0 1444285980634 9 connected
#now a slave
0becd52cb1fa1a69f8d3135dd98f87cd8ccf9d78 192.168.207.128:7002 myself,master - 0 0 9 connected 5461-10922
#now a master
568669e03c61b3c4edc31643dcb47e85bc2c3e23 192.168.207.128:7001 master - 0 1444285977604 2 connected 10923-16383
b502b7efac704e92ac439933a42edf613f908fea 192.168.207.130:7006 master - 0 1444285976594 8 connected 0-5460
7dae923773125c5956605fac6159412850b100b3 192.168.207.128:7000 slave b502b7efac704e92ac439933a42edf613f908fea 0 1444285979624 8 connected
f3ba7c62307e0321f5d21310d14027c403c73907 192.168.207.130:7005 slave 568669e03c61b3c4edc31643dcb47e85bc2c3e23 0 1444285978615 6 connected
192.168.207.128:7002>
The switchover above completed almost instantly, with no impact on traffic. Note that "cluster failover" can only be run on a slave node, which is why I connected to 7002 above. Knowing which switchover method to use is perhaps what separates an excellent operations engineer from a rookie!
This post took several days of on-and-off writing. Working through it gave me a deeper understanding of redis and brought me closer to its elegance and power. Keep an inquiring mind and stay on the road.
References:
http://redis.io/topics/replication#how-redis-replication-works
http://redis.io/topics/sentinel