Background

Redis provides the following technologies — Redis Sentinel (master-slave failover) and Redis Cluster (sharding) — which effectively give Redis high availability, high performance, and high scalability. This article walks through hands-on practice with both.

1. Redis Sentinel (master-slave failover)

  • Monitors the online status of the master and slave nodes and performs failover automatically according to its configuration (based on the Raft protocol).
  • From a capacity standpoint, master-slave replication is still a single machine.

2. Redis Cluster (sharding)

  • Distributes data across multiple server nodes by sharding: 16384 hash slots are defined and allocated across the redis-server instances (a hash-slot scheme, rather than classic consistent hashing).
  • When a key is read or written in Redis Cluster, the Redis client first runs the CRC16 algorithm on the key and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383; the operation is then performed on the node that owns that slot (see the example below).
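
The slot for any given key can be checked directly with the CLUSTER KEYSLOT command, which performs exactly this CRC16-mod-16384 computation (runnable once the cluster from Part III is up; host and port follow this article's setup):

# Ask a cluster node which slot a key maps to
redis-cli -h 172.17.0.1 -p 6390 CLUSTER KEYSLOT mykey
# Returns an integer between 0 and 16383; the same key always maps to the same slot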

I. Master-Slave Replication

Setup details

# The gateway IP is known to be 172.17.0.1

# Start the master node
docker run -it --name redis-6380 -p 6380:6379 redis
docker exec -it redis-6380 /bin/bash
redis-cli -h 172.17.0.1 -p 6380

# Start slave node 1
docker run -it --name redis-6381 -p 6381:6379 redis
docker exec -it redis-6381 /bin/bash
redis-cli -h 172.17.0.1 -p 6381
replicaof 172.17.0.1 6380

# Start slave node 2
docker run -it --name redis-6382 -p 6382:6379 redis
docker exec -it redis-6382 /bin/bash
redis-cli -h 172.17.0.1 -p 6382
replicaof 172.17.0.1 6380

You can then inspect the master node's replication info. On the master, run:

> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6379,state=online,offset=686,lag=0
slave1:ip=172.17.0.1,port=6379,state=online,offset=686,lag=1
master_replid:79187e2241015c2f8ed98ce68caafa765796dff2
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:686
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:686

Writes to the master node are then replicated to the slave nodes automatically.

Running replicaof no one on a slave turns it back into a master.
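
A minimal check that replication works (the key name here is illustrative):

# On the master: write a key
redis-cli -h 172.17.0.1 -p 6380 set greeting hello

# On a slave: the key should be readable almost immediately
redis-cli -h 172.17.0.1 -p 6381 get greeting

# Slaves are read-only by default, so writes against them fail
redis-cli -h 172.17.0.1 -p 6381 set greeting world
# (error) READONLY You can't write against a read only replica.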

Key points

  1. View network-related information:

docker network ls
docker network inspect bridge

  2. Containers can reach each other using either the internal port (6379) or the externally mapped port.
  3. Running docker network inspect bridge shows the gateway IP as well as each container's IP; Redis can be reached via "gateway IP : mapped port" or via "container IP : 6379" (see the sketch below).
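
For example, the gateway and container IPs can be extracted directly with docker network inspect --format templates (a sketch, assuming the default bridge network):

# Print the bridge network's gateway IP (172.17.0.1 in this article)
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'

# Print each attached container's name and IP
docker network inspect bridge --format '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}'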

References

  1. Command: SLAVEOF

II. Sentinel High Availability

Current state:

  1. Gateway IP: 172.17.0.1
  2. Master port: 6390
  3. Slave ports: 6391, 6392

Steps

1. Recreate the Redis Docker containers:

The redis.conf contents are as follows:

# The default port is 6379
port 6390
# Bind address; on an internal network you can bind 127.0.0.1 directly, or omit this; 0.0.0.0 listens externally
bind 0.0.0.0
# Whether to run as a daemon (keep 'no' in Docker so the process stays in the foreground)
daemonize no

Change the listening port for each instance and recreate the Redis containers:

docker run -p 6390:6390 -v D:\develop\shell\docker\redis\conf6390:/usr/local/etc/redis --name redis-conf-6390 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6390 /bin/bash
redis-cli -h 172.17.0.1 -p 6390

docker run -p 6391:6391 -v D:\develop\shell\docker\redis\conf6391:/usr/local/etc/redis --name redis-conf-6391 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6391 /bin/bash
redis-cli -h 172.17.0.1 -p 6391
slaveof 172.17.0.1 6390

docker run -p 6392:6392 -v D:\develop\shell\docker\redis\conf6392:/usr/local/etc/redis --name redis-conf-6392 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6392 /bin/bash
redis-cli -h 172.17.0.1 -p 6392
slaveof 172.17.0.1 6390

You can then inspect the master node's info and see that the slave ports the master reports are now correct (compare with Part I, where both slaves showed the internal port 6379). On the master, run:

> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6391,state=online,offset=84,lag=0
slave1:ip=172.17.0.1,port=6392,state=online,offset=84,lag=0
master_replid:ed2e513ceed2b48a272b97c674c99d82284342a1
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:84
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:84

2. Create the Sentinel configuration file

Create sentinel.conf with the following content:

sentinel monitor bitkylin-master 172.17.0.1 6390 2
sentinel down-after-milliseconds bitkylin-master 5000
sentinel failover-timeout bitkylin-master 10000
sentinel parallel-syncs bitkylin-master 1

Explanation: this instructs Sentinel to monitor a master named bitkylin-master; marking that master as objectively down requires agreement from at least 2 Sentinels (the quorum).
A node that fails to respond within 5 seconds is marked subjectively down; once the master is objectively down, failover begins. The 10-second failover-timeout bounds the failover process (for example, how long must pass before the same failover may be retried).
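
Once the Sentinels are running (step 4 below), their view of the topology can be queried over the Sentinel port with standard commands (a sketch; it assumes port 26379, the Sentinel default, is reachable, e.g. by adding -p 26379:26379 when creating the Sentinel containers):

# Current address of the monitored master
redis-cli -h 172.17.0.1 -p 26379 sentinel get-master-addr-by-name bitkylin-master

# Full state Sentinel tracks for the master and its replicas
redis-cli -h 172.17.0.1 -p 26379 sentinel master bitkylin-master
redis-cli -h 172.17.0.1 -p 26379 sentinel replicas bitkylin-master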

3. Create two more Redis Docker containers

Copy the configuration file into the Docker containers; both containers need it:

docker run -it --name redis-6490 redis
docker run -it --name redis-6491 redis
docker cp ./sentinel.conf dcbd015dbc0e:/data/sentinel.conf
docker cp ./sentinel.conf 7c8307730bcc:/data/sentinel.conf

4. Run the redis-sentinel command

 redis-sentinel sentinel.conf
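
For this article's setup, that means starting one Sentinel per container, pointing at the copied file (a sketch using the container names created above):

docker exec -it redis-6490 redis-sentinel /data/sentinel.conf
docker exec -it redis-6491 redis-sentinel /data/sentinel.conf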

5. Final result

At this point you can stop and start any of the Redis containers and watch Sentinel perform the master-slave failover automatically; no manual reconfiguration of the replication topology is needed.
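
A quick failover demonstration (illustrative; assumes a Sentinel is reachable on 26379 as above):

# Note the current master, then stop its container
redis-cli -h 172.17.0.1 -p 26379 sentinel get-master-addr-by-name bitkylin-master
docker stop redis-conf-6390

# After down-after-milliseconds (5 s) plus the election, ask again:
# one of 6391/6392 should now be reported as the master
redis-cli -h 172.17.0.1 -p 26379 sentinel get-master-addr-by-name bitkylin-master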

References

  1. Redis Sentinel documentation
  2. File operations on Docker containers
  3. > overwrites a file; >> appends to it

III. Cluster

Steps

1. Update the Redis configuration file

Mainly append the cluster settings; an example configuration file:

# The default port is 6379
port 6390
# Bind address; on an internal network you can bind 127.0.0.1 directly, or omit this; 0.0.0.0 listens externally
bind 0.0.0.0
# Whether to run as a daemon (keep 'no' in Docker so the process stays in the foreground)
daemonize no
# Cluster settings
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

2. Create 6 containers

Building on Part II and the updated configuration file, create 6 containers; note the newly added cluster bus port mappings:

docker run -p 6390:6390 -p 16390:16390 -v D:\develop\shell\docker\redis\conf6390:/usr/local/etc/redis --name redis-conf-6390 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6390 /bin/bash
redis-cli -h 172.17.0.1 -p 6390

docker run -p 6391:6391 -p 16391:16391 -v D:\develop\shell\docker\redis\conf6391:/usr/local/etc/redis --name redis-conf-6391 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6391 /bin/bash
redis-cli -h 172.17.0.1 -p 6391

docker run -p 6392:6392 -p 16392:16392 -v D:\develop\shell\docker\redis\conf6392:/usr/local/etc/redis --name redis-conf-6392 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6392 /bin/bash
redis-cli -h 172.17.0.1 -p 6392

docker run -p 6393:6393 -p 16393:16393 -v D:\develop\shell\docker\redis\conf6393:/usr/local/etc/redis --name redis-conf-6393 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6393 /bin/bash
redis-cli -h 172.17.0.1 -p 6393

docker run -p 6394:6394 -p 16394:16394 -v D:\develop\shell\docker\redis\conf6394:/usr/local/etc/redis --name redis-conf-6394 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6394 /bin/bash
redis-cli -h 172.17.0.1 -p 6394

docker run -p 6395:6395 -p 16395:16395 -v D:\develop\shell\docker\redis\conf6395:/usr/local/etc/redis --name redis-conf-6395 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6395 /bin/bash
redis-cli -h 172.17.0.1 -p 6395

3. Create the cluster with a single command

> redis-cli --cluster create 172.17.0.1:6390 172.17.0.1:6391 172.17.0.1:6392 172.17.0.1:6393 172.17.0.1:6394 172.17.0.1:6395 --cluster-replicas 1

# Command output:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.17.0.1:6394 to 172.17.0.1:6390
Adding replica 172.17.0.1:6395 to 172.17.0.1:6391
Adding replica 172.17.0.1:6393 to 172.17.0.1:6392
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: a9678b062663957e59bc3b4beb7be4366fa24adc 172.17.0.1:6390
   slots:[0-5460] (5461 slots) master
M: 41a4976431713cce936220fba8a230627d28d40c 172.17.0.1:6391
   slots:[5461-10922] (5462 slots) master
M: 1bf83414a12bad8f2e25dcea19ccea1c881d28c5 172.17.0.1:6392
   slots:[10923-16383] (5461 slots) master
S: 3d65eadd3321ef34c9413ae8f75d610c4228eda7 172.17.0.1:6393
   replicates 41a4976431713cce936220fba8a230627d28d40c
S: b604356698a5f211823ada4b45a97939744b1d57 172.17.0.1:6394
   replicates 1bf83414a12bad8f2e25dcea19ccea1c881d28c5
S: 2c1cc93221dc3830aa1eb28601ac27e22a6801cc 172.17.0.1:6395
   replicates a9678b062663957e59bc3b4beb7be4366fa24adc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 172.17.0.1:6390)
M: a9678b062663957e59bc3b4beb7be4366fa24adc 172.17.0.1:6390
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b604356698a5f211823ada4b45a97939744b1d57 172.17.0.1:6394
   slots: (0 slots) slave
   replicates 1bf83414a12bad8f2e25dcea19ccea1c881d28c5
M: 41a4976431713cce936220fba8a230627d28d40c 172.17.0.1:6391
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3d65eadd3321ef34c9413ae8f75d610c4228eda7 172.17.0.1:6393
   slots: (0 slots) slave
   replicates 41a4976431713cce936220fba8a230627d28d40c
M: 1bf83414a12bad8f2e25dcea19ccea1c881d28c5 172.17.0.1:6392
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 2c1cc93221dc3830aa1eb28601ac27e22a6801cc 172.17.0.1:6395
   slots: (0 slots) slave
   replicates a9678b062663957e59bc3b4beb7be4366fa24adc
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The cluster has been created successfully.
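
A few standard commands to sanity-check the new cluster (the key name is illustrative):

# Overall cluster health: expect cluster_state:ok and all 16384 slots assigned
redis-cli -h 172.17.0.1 -p 6390 cluster info

# Write and read a key through the cluster; -c makes redis-cli follow
# MOVED redirects to whichever node owns the key's slot
redis-cli -c -h 172.17.0.1 -p 6390 set user:1 bitkylin
redis-cli -c -h 172.17.0.1 -p 6390 get user:1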

Notes

  1. The cluster bus port must be opened; by default it is the client port + 10000 (e.g. 16390 for port 6390).
  2. The cluster reset command removes the current node from the cluster (see the example below).
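
For example (illustrative; run it against the node being detached):

# Forget the other nodes and release any owned slots; a master that
# still holds keys must be emptied (FLUSHALL) before it can be reset
redis-cli -h 172.17.0.1 -p 6395 cluster reset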

References

  1. redis-cluster: installation and status verification
  2. Redis cluster tutorial
  3. Redis Command Reference: cluster tutorial