Docker in Depth: Building a Redis Cluster with Docker — Master/Slave Failover, Scale-Out, and Scale-In


Cluster mode (Docker edition): hash-slot partitioning for data at the hundreds-of-millions scale

1. Scenario

Suppose 100–200 million records need to be cached. How would you design the storage?

A single machine is 100% out of the question; it must be distributed storage. How do you implement that with Redis?

Solutions

1. Hash-modulo partitioning

200 million records means 200 million key/value pairs. A single machine cannot hold them, so we must distribute across multiple machines. Suppose 3 machines form a cluster; every read and write picks a node with the formula
hash(key) % N (where N is the number of machines). The computed value decides which node the data maps to.
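A minimal sketch of hash-modulo routing in Python. This is illustrative only: zlib.crc32 stands in for the hash function, and the key names are made up.

```python
import zlib

def node_for(key: str, n_nodes: int) -> int:
    # Stable hash of the key, then modulo the machine count N
    return zlib.crc32(key.encode()) % n_nodes

# With N = 3, every key maps deterministically to node 0, 1, or 2
print(node_for("user:1", 3))
```

Changing n_nodes changes the result for most keys, which is exactly the reshuffle problem this section goes on to describe.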

Advantages:

Simple and direct. As long as you estimate the data volume and plan the node count up front (say 3, 8, or 10 machines), this carries the data for a good while. Hashing pins a fixed portion of the requests to the same server, so each server handles a fixed share of the load (and keeps the associated state), giving you load balancing plus divide-and-conquer.

Disadvantages:

Expanding or shrinking the planned set of nodes is painful. Whenever the node count changes, every mapping must be recomputed. This is fine while the server count stays fixed, but with elastic scaling or a machine failure the modulo formula changes: Hash(key) % 3 becomes Hash(key) % ?. The remainders, and therefore the target servers, change drastically and become uncontrollable. If one Redis machine goes down, the changed machine count forces a full reshuffle of all data.
2. Consistent-hashing partitioning

Consistent hashing was proposed at MIT in 1997. It was designed to solve the data-movement and mapping problem in distributed caches: when a machine goes down, the divisor changes and plain modulo no longer works.

Purpose:

Minimize the disruption to the client-to-server mapping when the number of servers changes.

Steps:

  1. Build the consistent-hash ring

    (Figure: the consistent-hash ring)

    Consistent hashing needs a hash function that produces values in a fixed range. The set of all possible hash values forms a hash space [0, 2^32 - 1]. This is a linear space, but with a little logic we join its head and tail (0 = 2^32), logically turning it into a ring.

    It still uses modulo, but where the node-modulo scheme above takes the remainder by the number of servers, consistent hashing takes the remainder by 2^32. In short, the whole hash value space is organized as a virtual circle. Suppose hash function H has the value space 0 to 2^32 - 1 (i.e. hash values are 32-bit unsigned integers). The ring is laid out clockwise: the point at the top represents 0, the first point to its right is 1, then 2, 3, 4, ... up to 2^32 - 1, so the first point to the left of 0 represents 2^32 - 1, and 0 and 2^32 - 1 coincide at the top. This circle of 2^32 points is called the hash ring.

  2. Map server IP nodes onto the ring

    (Figure: node mapping)

    Each IP node in the cluster is mapped to a position on the ring.
    Hash each server, typically using its IP address or hostname as the key; that hash fixes the machine's position on the ring. With 4 nodes NodeA, B, C, D, computing the hash of each IP address (hash(ip)) places them on the ring as shown in the original figure.

  3. Key placement rule

    To store a key/value pair, first compute the key's hash, hash(key), using the same hash function, which fixes the data's position on the ring. Then walk clockwise from that position; the first server encountered is the one the key should live on, and the pair is stored on that node.
    For example, with four data objects Object A, Object B, Object C, Object D hashed onto the ring, consistent hashing assigns A to Node A, B to Node B, C to Node C, and D to Node D.
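The clockwise-walk rule above can be sketched in a few lines of Python. This is a sketch under assumptions (MD5 as the ring hash, node names as hash keys, no virtual nodes, which real deployments usually add):

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    # Map any string onto the [0, 2^32) ring via the first 4 bytes of MD5
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

class HashRing:
    def __init__(self, nodes):
        # Each node sits at a fixed point on the ring, sorted clockwise
        self._ring = sorted((ring_hash(n), n) for n in nodes)
        self._points = [p for p, _ in self._ring]

    def locate(self, key: str) -> str:
        # Walk clockwise: first node at or after hash(key), wrapping past 0
        i = bisect.bisect_left(self._points, ring_hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["NodeA", "NodeB", "NodeC", "NodeD"])
print(ring.locate("Object A"))  # one of the four nodes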

Advantages:

1. Fault tolerance

(Figure: fault tolerance)
If Node C goes down, objects A, B, and D are unaffected; only object C is relocated to Node D. In general, when one server becomes unavailable under consistent hashing, the only affected data is the data between that server and the previous server on the ring (the first server found walking counter-clockwise); everything else is untouched. In short: if C dies, only the data between B and C is affected, and it migrates to D for storage.

2. Extensibility

(Figure: extensibility)
If data volume grows and we add a node NodeX between A and B, only the data between A and X is affected; re-ingesting the A-to-X data into X is enough. There is no full reshuffle as with hash-modulo.

Disadvantages:

Data skew

When there are too few service nodes, uneven node placement on the ring easily skews the data (most cached objects pile up on one server).
For example, with only two servers in the system:

Summary:

The goal is to migrate as little data as possible when the node count changes.

All storage nodes are arranged on a hash ring joined end to end; each key, once hashed, walks clockwise to the nearest storage node.
When a node joins or leaves, only its clockwise-adjacent successor on the ring is affected.

Advantage: adding or removing a node only affects its clockwise neighbor on the ring; other nodes are unaffected.

Disadvantage: data distribution depends on node placement. Because the nodes are not spread evenly around the ring, the stored data does not end up evenly distributed.

3. Hash-slot partitioning

A hash-slot table is essentially an array; the array [0, 2^14 - 1] forms the hash-slot space.

Purpose: solve the data-skew problem of consistent hashing.

To get an even distribution, another layer is inserted between the data and the nodes, called hash slots, which manage the relationship between the two. Nodes now hold slots, and slots hold data.

Slots solve the granularity problem: they coarsen the unit of movement, which makes data migration easy.
Hashing solves the mapping problem: the key's hash picks the slot, which makes data placement easy.

Computing the hash slot

A Redis cluster has 16384 built-in hash slots, and Redis maps them roughly evenly onto the nodes. To place a key/value pair in the cluster, Redis first runs the CRC16 algorithm over the key, then takes the result modulo 16384, so every key lands in a slot numbered 0–16383 and therefore maps to some node. In the original example, keys A and B land on Node2 and key C lands on Node3.
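The slot computation can be reproduced in a few lines of Python. This sketch implements CRC16-XMODEM (the CRC16 variant Redis cluster uses) and ignores the hash-tag rule ({...} in key names) that real Redis also honors:

```python
def crc16(data: bytes) -> int:
    # CRC16-XMODEM: polynomial 0x1021, initial value 0
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Slot = CRC16(key) mod 16384, as in Redis cluster
    return crc16(key.encode()) % 16384

print(hash_slot("k1"))  # 12706, matching the MOVED reply shown later in this article
```

The same function explains every MOVED error in the sections below: the cluster is simply telling the client which node owns the computed slot.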

2. Configuring 3 masters and 3 slaves

1. Start docker

systemctl start docker

2. Create 6 docker container instances of redis

docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis --cluster-enabled yes --appendonly yes --port 6381
 
docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis --cluster-enabled yes --appendonly yes --port 6382
 
docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis --cluster-enabled yes --appendonly yes --port 6383
 
docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis --cluster-enabled yes --appendonly yes --port 6384
 
docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis --cluster-enabled yes --appendonly yes --port 6385
 
docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis --cluster-enabled yes --appendonly yes --port 6386
[root@docker ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS     NAMES
d6fc3ef2855b   redis     "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds             redis-node-6
9c8868d69a50   redis     "docker-entrypoint.s…"   3 seconds ago   Up 3 seconds             redis-node-5
7fbb5345951a   redis     "docker-entrypoint.s…"   4 seconds ago   Up 3 seconds             redis-node-4
d53b9d5af1ac   redis     "docker-entrypoint.s…"   4 seconds ago   Up 4 seconds             redis-node-3
fe0e430cb940   redis     "docker-entrypoint.s…"   6 seconds ago   Up 4 seconds             redis-node-2
ee03a7ec212e   redis     "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds             redis-node-1
Step by step:
  • docker run: create and run a docker container instance
  • --name redis-node-6: container name
  • --net host: use the host machine's IP and ports directly
  • --privileged=true: grant host root privileges inside the container
  • -v /data/redis/share/redis-node-6:/data: volume mount, host path:container path
  • redis: the redis image (and version tag, if given)
  • --cluster-enabled yes: enable redis cluster mode
  • --appendonly yes: enable AOF persistence
  • --port 6386: redis port

3. Enter container redis-node-1 and build the cluster across the 6 machines

1. Enter the container
docker exec -it redis-node-1 /bin/bash
2. Build the master/slave relationships

Note: use your own actual IP address.

redis-cli --cluster create 192.168.130.132:6381 192.168.130.132:6382 192.168.130.132:6383 192.168.130.132:6384 192.168.130.132:6385 192.168.130.132:6386 --cluster-replicas 1
  • --cluster-replicas 1: create one slave for each master
[root@docker ~]# docker exec -it redis-node-1 /bin/bash
root@docker:/data# redis-cli --cluster create 192.168.130.132:6381 192.168.130.132:6382 192.168.130.132:6383 192.168.130.132:6384 192.168.130.132:6385 192.168.130.132:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.130.132:6385 to 192.168.130.132:6381
Adding replica 192.168.130.132:6386 to 192.168.130.132:6382
Adding replica 192.168.130.132:6384 to 192.168.130.132:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-5460] (5461 slots) master
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[5461-10922] (5462 slots) master
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[10923-16383] (5461 slots) master
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

At this point, the 3-master / 3-slave cluster is built.

4. Connect to 6381 as the entry point and inspect the cluster state

root@docker:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:663
cluster_stats_messages_pong_sent:671
cluster_stats_messages_sent:1334
cluster_stats_messages_ping_received:666
cluster_stats_messages_pong_received:663
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1334
127.0.0.1:6381> cluster nodes
b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385@16385 slave 8dbe8b347410cf87d62933382b73693405535ba1 0 1651152474000 3 connected
8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381@16381 myself,master - 0 1651152472000 1 connected 0-5460
8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383@16383 master - 0 1651152474000 3 connected 10923-16383
c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384@16384 slave 60fa7e084483feca3af41f269de5a57b526c0ad7 0 1651152476585 2 connected
60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382@16382 master - 0 1651152475573 2 connected 5461-10922
4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386@16386 slave 8335b5349d781c11745ee129f5dbae370dbd3394 0 1651152474566 1 connected
127.0.0.1:6381> 
1. cluster info: show cluster state
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:663
cluster_stats_messages_pong_sent:671
cluster_stats_messages_sent:1334
cluster_stats_messages_ping_received:666
cluster_stats_messages_pong_received:663
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1334
  • cluster_state: ok means the cluster can serve queries normally. fail means at least one hash slot is unbound (not assigned to any node), or is in an error state (a node serving it carries the FAIL flag), or this node cannot reach a majority of masters.
  • cluster_slots_assigned: the number of hash slots assigned to cluster nodes (not the unbound count). All 16384 hash slots being assigned is a precondition for normal cluster operation.
  • cluster_slots_ok: the number of hash slots not in FAIL or PFAIL state.
  • cluster_slots_pfail: the number of hash slots in PFAIL state. As long as they are not escalated to FAIL, these slots are still served normally. PFAIL means we currently cannot talk to the node, but it may be only a transient error.
  • cluster_slots_fail: the number of hash slots in FAIL state. If this is non-zero the node cannot serve queries, unless cluster-require-full-coverage is set to no.
  • cluster_known_nodes: the number of nodes known to the cluster, including nodes still in handshake state that are not yet full members.
  • cluster_size: the number of masters serving at least one hash slot.
  • cluster_current_epoch: the node's local Current Epoch variable. It is used during failover and is always incrementing and unique.
  • cluster_my_epoch: the Config Epoch of the current node, the versioning value tied to this node.
  • cluster_stats_messages_sent: number of messages sent over the node-to-node binary bus.
  • cluster_stats_messages_received: number of messages received over the node-to-node binary bus.
2. cluster nodes: reports the configuration of the cluster the connected node belongs to, in exactly the serialization format Redis cluster uses on disk (the on-disk file just appends some extra information at the end).
127.0.0.1:6381> cluster nodes
b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385@16385 slave 8dbe8b347410cf87d62933382b73693405535ba1 0 1651152474000 3 connected
8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381@16381 myself,master - 0 1651152472000 1 connected 0-5460
8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383@16383 master - 0 1651152474000 3 connected 10923-16383
c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384@16384 slave 60fa7e084483feca3af41f269de5a57b526c0ad7 0 1651152476585 2 connected
60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382@16382 master - 0 1651152475573 2 connected 5461-10922
4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386@16386 slave 8335b5349d781c11745ee129f5dbae370dbd3394 0 1651152474566 1 connected
127.0.0.1:6381> 

Master/slave relationship diagram (figure omitted).

Fields on each line:

<id> <ip:port> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>
  1. id: the node ID, a 40-character random string created when the node first starts and never changed (unless CLUSTER RESET HARD is used).
  2. ip:port: the address clients use to talk to the node.
  3. flags: comma-separated flags; possible values are myself, master, slave, fail?, fail, handshake, noaddr, noflags, detailed below.

    • myself: the node we are currently connected to.
    • master: the node is a master.
    • slave: the node is a slave.
    • fail?: the node is in PFAIL state: the current node cannot reach it, but logically it may still be reachable (not yet FAIL).
    • fail: the node is in FAIL state. A node is promoted from PFAIL to FAIL when a majority of nodes cannot reach it.
    • handshake: the node is not yet trusted; a handshake with it is in progress.
    • noaddr: no address known for this node.
    • noflags: no flags at all.
  4. master: if the node is a slave and its master is known, the master's node ID; otherwise "-".
  5. ping-sent: unix timestamp in milliseconds of the most recent ping sent; 0 means no ping is pending.
  6. pong-recv: unix timestamp of the most recent pong received.
  7. config-epoch: the node's config epoch (or that of its current master, if the node is a slave). A new, unique, increasing epoch is created on every failover. When several nodes claim the same hash slot, the node with the higher epoch wins.
  8. link-state: the state of the link on the node-to-node cluster bus, which we use to talk to the other nodes in the cluster: connected or disconnected.
  9. slot: a hash slot number or range. From the 9th field on there may be up to 16384 entries (a limit never reached in practice), listing all hash slots this node serves. A single value means one slot; a range written as start-end means the node serves every slot from start to end inclusive.

Official docs: http://www.redis.cn/commands/…

3. Master/slave failover and migration

1. Reading and writing data

1. Start the 6-node cluster and enter a node with exec
root@docker:/data# redis-cli -p 6381
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.130.132:6383
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> set k4 v4
(error) MOVED 8455 192.168.130.132:6382
127.0.0.1:6381> 

k1 and k4 were not stored.

(error) MOVED 12706 192.168.130.132:6383: go to the redis at 6383 to store this key.

2. Add the -c flag to enable cluster routing (avoiding MOVED failures) and set two more keys
root@docker:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.130.132:6383
OK
192.168.130.132:6383> set k4 v4
-> Redirected to slot [8455] located at 192.168.130.132:6382
OK
192.168.130.132:6382> get k4
"v4"
192.168.130.132:6382> 

Redirected to slot [8455] located at 192.168.130.132:6382: the client was redirected to 6382.

3. Check the cluster state
redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 5461 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 5461 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

2. Failover and migration

1. Master/slave switchover: first stop master 6381
[root@docker ~]# docker stop redis-node-1
redis-node-1
[root@docker ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED        STATUS        PORTS     NAMES
d6fc3ef2855b   redis     "docker-entrypoint.s…"   47 hours ago   Up 47 hours             redis-node-6
9c8868d69a50   redis     "docker-entrypoint.s…"   47 hours ago   Up 47 hours             redis-node-5
7fbb5345951a   redis     "docker-entrypoint.s…"   47 hours ago   Up 47 hours             redis-node-4
d53b9d5af1ac   redis     "docker-entrypoint.s…"   47 hours ago   Up 47 hours             redis-node-3
fe0e430cb940   redis     "docker-entrypoint.s…"   47 hours ago   Up 47 hours             redis-node-2
[root@docker ~]# 

With master 6381 stopped, its real slave is promoted: node 6 (6386) becomes a master.

2. Check the cluster state
127.0.0.1:6382> cluster nodes
4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386@16386 master - 0 1651158299456 7 connected 0-5460
c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384@16384 slave 60fa7e084483feca3af41f269de5a57b526c0ad7 0 1651158300472 2 connected
60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382@16382 myself,master - 0 1651158298000 2 connected 5461-10922
8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383@16383 master - 0 1651158298444 3 connected 10923-16383
8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381@16381 master,fail - 1651154146064 1651154140969 1 disconnected
b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385@16385 slave 8dbe8b347410cf87d62933382b73693405535ba1 0 1651158297000 3 connected
127.0.0.1:6382> 
3. Restoring 3 masters / 3 slaves

After restarting node 1, node 6 is still the master and node 1 comes back as its slave.

docker start redis-node-1

Stop node 6, then start it again:

docker stop redis-node-6
docker start redis-node-6

Check the cluster state

root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 5461 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 5462 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

4. Scaling out (adding a master/slave pair)

1. Create two new nodes, 6387 and 6388, start them, and confirm there are 8 nodes

docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis --cluster-enabled yes --appendonly yes --port 6388

2. Enter the 6387 container instance

docker exec -it redis-node-7 /bin/bash

3. Add the new node 6387 (with no slots yet) to the existing cluster as a master

Add the new 6387 to the cluster as a master:
redis-cli --cluster add-node your-actual-ip:6387 your-actual-ip:6381
6387 is the new node to be added as a master.
6381 is the introducer already in the cluster; 6387 joins the cluster through it.

redis-cli --cluster add-node 192.168.130.132:6387 192.168.130.132:6381
root@docker:/data# redis-cli --cluster add-node 192.168.130.132:6387 192.168.130.132:6381
>>> Adding node 192.168.130.132:6387 to cluster 192.168.130.132:6381
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.130.132:6387 to make it join the cluster.
[OK] New node added correctly.
root@docker:/data# 

4. Check the cluster state

redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 5461 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 5462 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 5461 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 0 keys | 0 slots | 0 slaves.

Clearly, 6387 has no slots yet.

5. Reassign slots

Reshard command: redis-cli --cluster reshard <ip>:<port>

redis-cli --cluster reshard 192.168.130.132:6381

Enter the number of slots to move; here we enter 4096.

Then enter the target node ID. Only one can be given; since we are moving slots to 6387, enter 6387's ID.


Then enter the source node IDs; redis takes the requested number of slots evenly from those sources and migrates them to 6387. Finally type done to finish.

Then type yes to confirm.

6. Check the cluster state

redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master

Why does 6387 hold 3 new ranges while the old nodes keep contiguous ones?
A full reallocation would cost too much, so the 3 existing masters each carve off a share: 6381, 6382, and 6383 each hand roughly 1365 slots to the new node 6387.
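A quick Python check confirms that the three ranges 6387 received in the cluster check output do add up to the requested 4096 slots:

```python
# Slot ranges 6387 received, as reported by cluster check
ranges = [(0, 1364), (5461, 6826), (10923, 12287)]
sizes = [end - start + 1 for start, end in ranges]
print(sizes, sum(sizes))  # [1365, 1366, 1365] 4096
```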

7. Attach slave 6388 to master 6387

Command: redis-cli --cluster add-node <ip>:<new-slave-port> <ip>:<new-master-port> --cluster-slave --cluster-master-id <new-master-node-id>

redis-cli --cluster add-node 192.168.130.132:6388 192.168.130.132:6387 --cluster-slave --cluster-master-id 34b689b791d9945a0b761349f1bc7b64f0be876f

8. Check the cluster state

redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 4b4b4a8a4d50548e954b46e921ff8085ed555c39 192.168.130.132:6388
   slots: (0 slots) slave
   replicates 34b689b791d9945a0b761349f1bc7b64f0be876f
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

6387 now has one replica attached.

5. Scaling in (removing the master/slave pair)

Goal: take 6387 and 6388 offline.

1. Check the cluster state and obtain 6388's node ID

redis-cli --cluster check 192.168.130.132:6382
S: 4b4b4a8a4d50548e954b46e921ff8085ed555c39 192.168.130.132:6388
   slots: (0 slots) slave
   replicates 34b689b791d9945a0b761349f1bc7b64f0be876f
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

6388's node ID is 4b4b4a8a4d50548e954b46e921ff8085ed555c39.

2. Remove node 6388 from the cluster

Command: redis-cli --cluster del-node <ip>:<slave-port> <slave-6388-node-id>

redis-cli --cluster del-node 192.168.130.132:6388 4b4b4a8a4d50548e954b46e921ff8085ed555c39
root@docker:/data# redis-cli --cluster del-node 192.168.130.132:6388 4b4b4a8a4d50548e954b46e921ff8085ed555c39
>>> Removing node 4b4b4a8a4d50548e954b46e921ff8085ed555c39 from cluster 192.168.130.132:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@docker:/data# 

Check the cluster state

redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

As expected, 6387's slave is gone and 6388 has been removed; only 7 machines remain.

3. Empty 6387's slots and reassign them; in this example all the freed slots go to 6381

redis-cli --cluster reshard 192.168.130.132:6381

(No screenshots were captured for this step; the interactive prompts are the same as the reshard in the scale-out section.)

4. Check the cluster state

redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 8192 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6387 (34b689b7...) -> 0 keys | 0 slots | 0 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 34b689b791d9945a0b761349f1bc7b64f0be876f 192.168.130.132:6387
   slots: (0 slots) master
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6381 now owns 8192 slots.

5. Remove 6387

Command: redis-cli --cluster del-node <ip>:<port> <6387-node-id>

redis-cli --cluster del-node 192.168.130.132:6387 34b689b791d9945a0b761349f1bc7b64f0be876f

Check the cluster state again

redis-cli --cluster check 192.168.130.132:6381
root@docker:/data# redis-cli --cluster check 192.168.130.132:6381
192.168.130.132:6381 (8335b534...) -> 2 keys | 8192 slots | 1 slaves.
192.168.130.132:6382 (60fa7e08...) -> 1 keys | 4096 slots | 1 slaves.
192.168.130.132:6383 (8dbe8b34...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.130.132:6381)
M: 8335b5349d781c11745ee129f5dbae370dbd3394 192.168.130.132:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
S: c366905ca5ec2472275bbea9b2ae9b642b92a737 192.168.130.132:6384
   slots: (0 slots) slave
   replicates 60fa7e084483feca3af41f269de5a57b526c0ad7
S: 4051766aa375f0ed4533cb729afa8daf8649f5d2 192.168.130.132:6386
   slots: (0 slots) slave
   replicates 8335b5349d781c11745ee129f5dbae370dbd3394
M: 60fa7e084483feca3af41f269de5a57b526c0ad7 192.168.130.132:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 8dbe8b347410cf87d62933382b73693405535ba1 192.168.130.132:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: b5fd469dd1f8b5a64cacd5ecaed9dd396e1b9217 192.168.130.132:6385
   slots: (0 slots) slave
   replicates 8dbe8b347410cf87d62933382b73693405535ba1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker:/data# 

6387 has now been removed as well; the cluster is back to 3 masters and 3 slaves.
