
Building a Sharded Multi-Master SSDB Cluster with Docker and Twemproxy

  • Environment preparation
    • Dependencies
    • Install Docker
    • Install redis-cli
  • Basic single instance
    • Start
    • Test
  • Single instance with a config file
    • Write the config file
    • Start
  • SSDB sharded replica cluster
    • Create directories
    • Write the twemproxy config
    • Write the SSDB configs
    • Create the overlay network
    • Start SSDB
    • Start twemproxy
  • Deploying the cluster as services with Docker Stack
    • Clean up containers and the network
    • Write ssdb.yaml
    • Update the twemproxy config
    • Update the SSDB configs
    • Start
    • Test

Environment Preparation

Dependencies

  • CentOS 7.6

Install Docker

Follow the official Docker installation guide.

Install redis-cli

Install redis-cli ahead of time; it is used below to test connections to SSDB.

yum install -y epel-release
yum install -y redis   # redis-cli is provided by the redis package from EPEL
           

Basic Single Instance

Start

docker pull leobuskin/ssdb-docker
docker run -p 6379:8888 -v /root/volumns/ssdb/var:/ssdb/var --name ssdb  -d leobuskin/ssdb-docker
           

Test

redis-cli set 1 a
redis-cli get 1
           

The connection works.
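
The expected replies (SSDB speaks the Redis protocol, so this is standard redis-cli output):

OK
"a"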

Single Instance with a Config File

Write the config file

vi /root/volumns/ssdb/ssdb.conf
           

Copy the configuration below into the file. When editing, indentation MUST use TAB characters.

It is best to edit the file on Windows (or another local editor) and upload it to the remote host with rz; pasting into a terminal easily introduces stray spaces.

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

The full option reference is in the official configuration docs; a few points worth highlighting:

  • output: can point to a file; here it is set to stdout so logs can be viewed with docker logs.
  • cache_size: set to half of physical memory.
  • write_buffer_size: valid range [4, 128]; bigger is better, and with today's memory sizes 128 is a safe choice.
  • compaction_speed: set according to your disk's real throughput. For SSDs a value in [500, 1000] is reasonable; local NVMe SSDs can go higher. You can also lower this value to throttle writes.
  • compression: use yes in almost all cases; compression lets you store roughly 10x the data for the same disk space.

Also, SSDB has no maximum-memory limit to configure, so this is generally not a concern.
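
As a convenience, the half-of-RAM value for cache_size can be derived from /proc/meminfo; a small illustrative sketch (not from the official docs):

# print half of physical memory in MB, suitable as cache_size for a single instance
awk '/MemTotal/ {printf "cache_size: %d\n", $2/1024/2}' /proc/meminfo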

Start

docker run -p 6379:8888 -v /root/volumns/ssdb/ssdb.conf:/ssdb/ssdb.conf -v /root/volumns/ssdb/var:/ssdb/var --name ssdb  -d leobuskin/ssdb-docker
           

After it starts, run:

docker logs ssdb
           

It should print:

ssdb-server 1.9.7
Copyright (c) 2012-2015 ssdb.io

2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(46): ssdb-server 1.9.7
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(47): conf_file        : /ssdb/ssdb.conf
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(48): log_level        : info
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(49): log_output       : stdout
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(50): log_rotate_size  : 1000000000
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(52): main_db          : /ssdb/var/data
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(53): meta_db          : /ssdb/var/meta
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(54): cache_size       : 8000 MB
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(55): block_size       : 32 KB
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(56): write_buffer     : 64 MB
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(57): max_open_files   : 1000
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(58): compaction_speed : 1000 MB/s
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(59): compression      : yes
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(60): binlog           : yes
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(61): binlog_capacity  : 20000000
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(62): sync_speed       : -1 MB/s
2019-08-12 08:20:59.132 [INFO ] binlog.cpp(179): binlogs capacity: 20000000, min: 0, max: 0
2019-08-12 08:20:59.136 [INFO ] server.cpp(159): server listen on 0.0.0.0:8888
2019-08-12 08:20:59.136 [INFO ] server.cpp(169):     auth    : off
2019-08-12 08:20:59.136 [INFO ] server.cpp(209):     readonly: no
2019-08-12 08:20:59.137 [INFO ] serv.cpp(222): key_range.kv: "", ""
2019-08-12 08:20:59.137 [INFO ] ssdb-server.cpp(85): pidfile: /run/ssdb.pid, pid: 1
2019-08-12 08:20:59.137 [INFO ] ssdb-server.cpp(86): ssdb server started.

           

If you instead see the following, the config file has a whitespace problem:

ssdb-server 1.9.7
Copyright (c) 2012-2015 ssdb.io

error loading conf file: '/ssdb/ssdb.conf'
2019-08-12 08:16:47.861 [ERROR] config.cpp(62): invalid line(33): unexpected whitespace char ' '

           

SSDB Sharded Replica Cluster

  • For sharding, the official site recommends twemproxy. Like mongos, twemproxy can form a multi-node horizontal proxy layer by deploying several proxy nodes.
  • Deploying twemproxy on the client side is possible, but it makes the deployment architecture less clear.
  • On the client side, haproxy can act as a TCP reverse proxy that round-robins over the proxies for availability, or a load balancer can be used, so traffic switches over dynamically when a single proxy fails. Note that an unstable network can cause momentary consistency issues, depending on the implementation. Unlike twemproxy, haproxy does not merge requests; it proxies TCP connections one-to-one, so there is no concern about haproxy reordering messages across multiple instances. twemproxy, by contrast, merges requests, and its transactional consistency is guaranteed by twemproxy itself.
  • Replication is a built-in SSDB feature, available in slave or mirror mode. slave is master-slave, similar to MongoDB's long-retired master/slave replication; mirror is multi-master, similar to a MongoDB replica set. More replicas cost more resources; as a subjective balance, two instances per shard are used here.
  • Deployment plan: 3 machines in dual-master mode with 3 shards, i.e. 6 SSDB instances plus 3 twemproxy instances in total, so each machine runs one twemproxy process and two SSDB processes. Layout:
vm1               vm2               vm3
twemproxy-1       twemproxy-2       twemproxy-3
shard-1-server-1  shard-2-server-1  shard-3-server-1
shard-3-server-2  shard-1-server-2  shard-2-server-2

Create directories

#vm1
mkdir -p /root/volumns/ssdb-twemproxy-1 /root/volumns/ssdb-shard-1-server-1/var /root/volumns/ssdb-shard-3-server-2/var
#vm2
mkdir -p /root/volumns/ssdb-twemproxy-2 /root/volumns/ssdb-shard-2-server-1/var /root/volumns/ssdb-shard-1-server-2/var
#vm3
mkdir -p /root/volumns/ssdb-twemproxy-3 /root/volumns/ssdb-shard-3-server-1/var /root/volumns/ssdb-shard-2-server-2/var
           

Write the twemproxy config

  • Since there is no configsvr-style step for registering replica sets, all SSDB instances are placed directly under the twemproxy proxies. Every twemproxy instance uses exactly the same config:
alpha:
  listen: 0.0.0.0:11211
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - shard-1-server-1:8888:1
    - shard-1-server-2:8888:1
    - shard-2-server-1:8888:1
    - shard-2-server-2:8888:1
    - shard-3-server-1:8888:1
    - shard-3-server-2:8888:1
           
  • See the official twemproxy docs for the full configuration reference.
  • hash: the hash algorithm; one of one_at_a_time, md5, crc16, crc32 (crc32 implementation compatible with libmemcached), crc32a (correct crc32 implementation as per the spec), fnv1_64, fnv1a_64, fnv1_32, fnv1a_32, hsieh, murmur, jenkins.
  • distribution: the sharding strategy; one of ketama, modula, random.
  • auto_eject_hosts: automatically eject a node from the server list when it stops responding, and re-add it once it responds again.
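
The config can be sanity-checked before the proxies go live with nutcracker's built-in test mode (the same check the image's healthcheck runs, shown later); a sketch, assuming the config sits at /root/volumns/ssdb-twemproxy-1/nutcracker.yml:

docker run --rm --entrypoint /usr/sbin/nutcracker \
  -v /root/volumns/ssdb-twemproxy-1/nutcracker.yml:/opt/nutcracker.yml \
  anchorfree/twemproxy --test-conf -c /opt/nutcracker.yml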

Write the SSDB configs

  • When running multiple SSDB instances on one machine, the half-of-physical-memory rule for cache_size no longer applies. With two SSDB processes per machine, each should get about a quarter of physical memory (e.g. on a 16 GB host, roughly cache_size: 4000 per instance). Also mind the host names when adapting each config.

shard-1-server-1 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-1-server-2
		type: mirror
		host: shard-1-server-2
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 64
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

shard-1-server-2 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-1-server-1
		type: mirror
		host: shard-1-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

shard-2-server-1 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-2-server-2
		type: mirror
		host: shard-2-server-2
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 64
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

shard-2-server-2 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-2-server-1
		type: mirror
		host: shard-2-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

shard-3-server-1 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-3-server-2
		type: mirror
		host: shard-3-server-2
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 64
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

shard-3-server-2 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-3-server-1
		type: mirror
		host: shard-3-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes
           

Create the overlay network
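
The standalone containers below join this network by name, so it must be created as an attachable overlay network. A minimal sketch, assuming the swarm has already been initialized (docker swarm init) and this is run on a manager node:

docker network create --driver overlay --attachable overlay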

Start SSDB

Run these commands line by line:

#vm1
docker pull leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8888:8888 \
-v /root/volumns/ssdb-shard-1-server-1/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-1-server-1/var:/ssdb/var \
--name=ssdb-shard-1-server-1 --hostname=ssdb-shard-1-server-1 \
-d leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8889:8888 \
-v /root/volumns/ssdb-shard-3-server-2/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-3-server-2/var:/ssdb/var \
--name=ssdb-shard-3-server-2 --hostname=ssdb-shard-3-server-2 \
-d leobuskin/ssdb-docker
#vm2
docker pull leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8888:8888 \
-v /root/volumns/ssdb-shard-2-server-1/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-2-server-1/var:/ssdb/var \
--name=ssdb-shard-2-server-1 --hostname=ssdb-shard-2-server-1 \
-d leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8889:8888 \
-v /root/volumns/ssdb-shard-1-server-2/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-1-server-2/var:/ssdb/var \
--name=ssdb-shard-1-server-2 --hostname=ssdb-shard-1-server-2 \
-d leobuskin/ssdb-docker
#vm3
docker pull leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8888:8888 \
-v /root/volumns/ssdb-shard-3-server-1/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-3-server-1/var:/ssdb/var \
--name=ssdb-shard-3-server-1 --hostname=ssdb-shard-3-server-1 \
-d leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8889:8888 \
-v /root/volumns/ssdb-shard-2-server-2/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-2-server-2/var:/ssdb/var \
--name=ssdb-shard-2-server-2 --hostname=ssdb-shard-2-server-2 \
-d leobuskin/ssdb-docker

           

Use docker logs to check that each container started successfully.
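
A small loop (an illustrative sketch) spot-checks every SSDB container on the current host:

for c in $(docker ps --format '{{.Names}}' | grep ssdb-shard); do
  echo "== $c =="
  docker logs --tail 5 "$c"
done

A healthy instance logs something like: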

ssdb-server 1.9.7
Copyright (c) 2012-2015 ssdb.io

2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(46): ssdb-server 1.9.7
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(47): conf_file        : /ssdb/ssdb.conf
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(48): log_level        : info
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(49): log_output       : stdout
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(50): log_rotate_size  : 1000000000
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(52): main_db          : /ssdb/var/data
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(53): meta_db          : /ssdb/var/meta
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(54): cache_size       : 500 MB
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(55): block_size       : 32 KB
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(56): write_buffer     : 128 MB
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(57): max_open_files   : 500
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(58): compaction_speed : 1000 MB/s
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(59): compression      : yes
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(60): binlog           : yes
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(61): binlog_capacity  : 20000000
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(62): sync_speed       : -1 MB/s
2019-08-12 12:43:07.952 [INFO ] binlog.cpp(179): binlogs capacity: 20000000, min: 0, max: 0
2019-08-12 12:43:07.953 [INFO ] server.cpp(159): server listen on 0.0.0.0:8888
2019-08-12 12:43:07.953 [INFO ] server.cpp(169):     auth    : off
2019-08-12 12:43:07.953 [INFO ] server.cpp(209):     readonly: no
2019-08-12 12:43:07.953 [INFO ] serv.cpp(207): slaveof: shard-2-server-1:8888, type: mirror
2019-08-12 12:43:07.953 [INFO ] serv.cpp(222): key_range.kv: "", ""
2019-08-12 12:43:07.953 [INFO ] ssdb-server.cpp(85): pidfile: /run/ssdb.pid, pid: 1
2019-08-12 12:43:07.953 [INFO ] ssdb-server.cpp(86): ssdb server started.
2019-08-12 12:43:07.954 [INFO ] slave.cpp(171): [shard-2-server-1][0] connecting to master at shard-2-server-1:8888...
2019-08-12 12:43:07.970 [INFO ] slave.cpp(200): [shard-2-server-1] ready to receive binlogs
2019-08-12 12:43:07.970 [INFO ] backend_sync.cpp(54): fd: 19, accept sync client
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(246): [mirror] 127.0.0.1:38452 fd: 19, copy begin, seq: 0, key: ''
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(260): 127.0.0.1:38452 fd: 19, copy begin
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(291): new iterator, last_key: ''
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(297): iterator created, last_key: ''
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(349): 127.0.0.1:38452 fd: 19, copy end
2019-08-12 12:43:07.972 [INFO ] slave.cpp(349): copy begin
2019-08-12 12:43:07.972 [INFO ] slave.cpp(359): copy end, copy_count: 0, last_seq: 0, seq: 0

           

Start twemproxy

#vm1
docker pull anchorfree/twemproxy
docker run --network overlay -p 6379:6379 -v /root/volumns/ssdb-twemproxy-1/nutcracker.yml:/opt/nutcracker.yml --name=ssdb-twemproxy-1 --hostname=ssdb-twemproxy-1 -d anchorfree/twemproxy
#vm2
docker pull anchorfree/twemproxy
docker run --network overlay -p 6379:6379 -v /root/volumns/ssdb-twemproxy-2/nutcracker.yml:/opt/nutcracker.yml --name=ssdb-twemproxy-2 --hostname=ssdb-twemproxy-2 -d anchorfree/twemproxy
#vm3
docker pull anchorfree/twemproxy
docker run --network overlay -p 6379:6379 -v /root/volumns/ssdb-twemproxy-3/nutcracker.yml:/opt/nutcracker.yml --name=ssdb-twemproxy-3 --hostname=ssdb-twemproxy-3 -d anchorfree/twemproxy
           

After startup, test:

redis-cli set hello 1
           

Query each of the 6 SSDB instances for the key; only the 2 instances of one shard (a mirror pair) will hold it, for example with the loop sketched below.
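
A quick way to check (a sketch; vm1/vm2/vm3 stand for the three hosts, each publishing ports 8888 and 8889):

for h in vm1 vm2 vm3; do
  for p in 8888 8889; do
    echo "$h:$p => $(redis-cli -h $h -p $p get hello)"
  done
done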

docker ps -a shows the twemproxy containers as unhealthy. This does not affect operation; it turns out to be a minor quirk.

Inspect the image's health check script, healthcheck.bats (a bats test suite):

#!/usr/bin/env bats

@test "[INFRA-6245] [nutcracker] Check nutcracker configuration" {
    /usr/sbin/nutcracker --test-conf -c /opt/nutcracker.yml
}

@test "[INFRA-6245] [nc] Test memcache port" {
    run nc -zv localhost 11211
    [ "$status" -eq 0 ]
    [[ "$output"  == *"open"* ]]
}

@test "[INFRA-6245] [nutcracker] Check nutcracker version" {
    run /usr/sbin/nutcracker --version
    [ "$status" -eq 0 ]
    [[ "$output"  == *"This is nutcracker-0.4.1"* ]]
}

           

The memcache-port test expects the proxy to listen on 11211. Change the twemproxy listen port in the config to 11211 and try again; the container then reports "healthy".
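
With the proxy listening on 11211, the host port mapping should follow suit; a sketch for vm1, mirroring the port mapping the stack file uses below:

docker rm -f ssdb-twemproxy-1   # remove the old container first
docker run --network overlay -p 6379:11211 \
-v /root/volumns/ssdb-twemproxy-1/nutcracker.yml:/opt/nutcracker.yml \
--name=ssdb-twemproxy-1 --hostname=ssdb-twemproxy-1 \
-d anchorfree/twemproxy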

Deploying the Cluster as Services with Docker Stack

Testing shows that in stack mode, dual-master replication only syncs from one side to the other, because docker stack/swarm cannot deploy two services that strongly depend on each other at the same time. The stack deployment below therefore uses plain master-slave replication instead.

Clean up containers and the network

docker stop $(docker ps -a -q)
docker container prune
docker network rm overlay
           

Also delete all data under the var directories, as sketched below.
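
A sketch, matching the directory layout created earlier (run on each vm):

rm -rf /root/volumns/ssdb-*/var/*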

Write ssdb.yaml

Note that the service names and hostnames carry an ssdb- prefix, so they match the host names referenced in the twemproxy config and the service names shown by docker stack services later.

version: '3'
services:
  ssdb-shard-1-server-1:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-1-server-1
    networks:
      - overlay
    ports:
      - 8881:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-1-server-1/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-1-server-1/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  ssdb-shard-1-server-2:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-1-server-2
    networks:
      - overlay
    ports:
      - 8882:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-1-server-2/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-1-server-2/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  ssdb-shard-2-server-1:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-2-server-1
    networks:
      - overlay
    ports:
      - 8883:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-2-server-1/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-2-server-1/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  ssdb-shard-2-server-2:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-2-server-2
    networks:
      - overlay
    ports:
      - 8884:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-2-server-2/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-2-server-2/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  ssdb-shard-3-server-1:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-3-server-1
    networks:
      - overlay
    ports:
      - 8885:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-3-server-1/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-3-server-1/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  ssdb-shard-3-server-2:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-3-server-2
    networks:
      - overlay
    ports:
      - 8886:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-3-server-2/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-3-server-2/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1

  ssdb-twemproxy-1:
    image: anchorfree/twemproxy
    hostname: ssdb-twemproxy-1
    networks:
      - overlay
    ports:
      - 6379:11211
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-twemproxy-1/nutcracker.yml:/opt/nutcracker.yml
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  ssdb-twemproxy-2:
    image: anchorfree/twemproxy
    hostname: ssdb-twemproxy-2
    networks:
      - overlay
    ports:
      - 6380:11211
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-twemproxy-2/nutcracker.yml:/opt/nutcracker.yml
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  ssdb-twemproxy-3:
    image: anchorfree/twemproxy
    hostname: ssdb-twemproxy-3
    networks:
      - overlay
    ports:
      - 6381:11211
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-twemproxy-3/nutcracker.yml:/opt/nutcracker.yml
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
networks:
  overlay:
    driver: overlay


           

Update the twemproxy config

Only the three masters are listed now, since the slaves receive their data through SSDB replication:

alpha:
  listen: 0.0.0.0:11211
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - ssdb-shard-1-server-1:8888:1
    - ssdb-shard-2-server-1:8888:1
    - ssdb-shard-3-server-1:8888:1

           

Update the SSDB configs

For the masters, drop the slaveof settings (leaving only the commented defaults); for the slaves, change type from mirror to sync and point them at their masters.

Excerpt of a master config:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

           

Excerpt of a slave config:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: ssdb-shard-1-server-1
		type: sync
		host: ssdb-shard-1-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

           

Start

docker stack deploy -c ssdb.yaml ssdb
           

View the services:

docker stack services ssdb
           
ID                  NAME                         MODE                REPLICAS            IMAGE                          PORTS
1hzg9nau4ek4        ssdb_ssdb-shard-2-server-2   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8884->8888/tcp
8tqpkzmuoz1i        ssdb_ssdb-twemproxy-2        replicated          1/1                 anchorfree/twemproxy:latest    *:6380->11211/tcp
9ffxf3779fvb        ssdb_ssdb-shard-1-server-2   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8882->8888/tcp
larlbx0cizlv        ssdb_ssdb-twemproxy-1        replicated          1/1                 anchorfree/twemproxy:latest    *:6379->11211/tcp
mrez447h81p6        ssdb_ssdb-shard-1-server-1   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8881->8888/tcp
mu561y479nvq        ssdb_ssdb-twemproxy-3        replicated          1/1                 anchorfree/twemproxy:latest    *:6381->11211/tcp
vr7dfuyp7rb1        ssdb_ssdb-shard-3-server-1   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8885->8888/tcp
w8zscndcitku        ssdb_ssdb-shard-2-server-1   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8883->8888/tcp
z4t6ojv4fvn3        ssdb_ssdb-shard-3-server-2   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8886->8888/tcp
           

Test

Test whether data stored through different twemproxy ports is shared:

[root@vm1 ~]# redis-cli
127.0.0.1:6379> set 1 1
OK
127.0.0.1:6379> get 1
"1"
127.0.0.1:6379> 
[root@vm1 ~]# redis-cli -p 6380
127.0.0.1:6380> get 1
"1"
           

Test that replication is configured correctly:

[root@vm1 volumns]# redis-cli -p 8881 set final 1
OK
[root@vm1 volumns]# redis-cli -p 8881 get final
"1"

           

To test single-point failure recovery, force-restart a service:

docker service update --force 1hzg9nau4ek4
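
After the forced update completes, a quick check (a sketch) that the service recovered and the data survived:

docker stack services ssdb   # REPLICAS should return to 1/1
redis-cli get 1              # written through the proxy earlier; should still return "1"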