Author: Gong Binglai, JD Logistics
1. Master-Slave Replication Overview
- RocketMQ Broker master-slave replication consists of two parts: CommitLog message replication and Broker metadata replication.
- CommitLog message replication happens at message-write time: once a message has been written on the Broker Master, a dedicated thread ships it to the slave, supporting both synchronous and asynchronous replication.
- Broker metadata is replicated by a dedicated thread on the slave Broker, which every 10s fetches the metadata from the master, updates the slave's configuration, and persists it to the corresponding config files.
- An important characteristic of RocketMQ master-slave replication: there is no automatic master-slave failover. When the master goes down, the slave does not take over message writes, but it can still serve message reads.
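As background context (not covered in the article itself), the replication mode is selected per broker in its configuration file. A minimal sketch, using the standard broker config property names:

```properties
# master broker (asynchronous replication; use SYNC_MASTER for synchronous)
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0
brokerRole=ASYNC_MASTER
flushDiskType=ASYNC_FLUSH

# slave broker: same brokerName, brokerId > 0, role SLAVE
# brokerId=1
# brokerRole=SLAVE
```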
2. CommitLog Message Replication
2.1 Overview

The overall CommitLog master-slave replication flow:

1. The Producer sends a message to the Broker Master, which stores it and calls handleHA to kick off master-slave sync;
2. For synchronous replication, see section 2.6;
3. For asynchronous replication, the flow is:
   1. The Broker Master starts and listens on the configured port;
   2. The Broker Slave starts and actively connects to the Broker Master, establishing a TCP connection via Java NIO;
   3. The Broker Slave pulls messages from the master every 5s; on its first pull it reads the max offset of its local CommitLog file and pulls starting from that offset;
   4. The Broker Master parses the request and returns data to the Broker Slave;
   5. After receiving a batch of messages, the Broker Slave writes them to its local CommitLog file, reports the pull progress to the master, and updates the next offset to pull.
Let's walk through asynchronous replication first and come back to synchronous replication at the end. The entry point for asynchronous replication is HAService.start():
public void start() throws Exception {
    // broker master: start listening for and handling slave requests
    this.acceptSocketService.beginAccept();
    this.acceptSocketService.start();
    // start the synchronous-replication thread
    this.groupTransferService.start();
    // broker slave: start the HA client
    this.haClient.start();
}
Each of these steps is explained in detail below.
2.2 HAService Master Startup
public void beginAccept() throws Exception {
    this.serverSocketChannel = ServerSocketChannel.open();
    this.selector = RemotingUtil.openSelector();
    this.serverSocketChannel.socket().setReuseAddress(true);
    this.serverSocketChannel.socket().bind(this.socketAddressListen);
    this.serverSocketChannel.configureBlocking(false);
    this.serverSocketChannel.register(this.selector, SelectionKey.OP_ACCEPT);
}
beginAccept creates the ServerSocketChannel and Selector, sets TCP reuseAddress, binds the listen port, switches the channel to non-blocking mode, and registers it for OP_ACCEPT (connection events). Note that this is implemented with raw Java NIO rather than the Netty framework.
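The accept side can be reduced to a small standalone sketch. This is illustrative code (the class and method names are ours, not RocketMQ's), showing the same ServerSocketChannel + Selector + OP_ACCEPT pattern:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptSketch {
    // Non-blocking server channel registered for OP_ACCEPT, as in beginAccept.
    // Passing port 0 asks the OS for an ephemeral port.
    public static ServerSocketChannel beginAccept(Selector selector, int port) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().setReuseAddress(true);
        server.socket().bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        return server;
    }

    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = beginAccept(selector, 0);
        int port = server.socket().getLocalPort();
        // a client connect makes OP_ACCEPT ready on the selector
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        selector.select(1000);
        boolean accepted = false;
        for (SelectionKey k : selector.selectedKeys()) {
            if ((k.readyOps() & SelectionKey.OP_ACCEPT) != 0) {
                SocketChannel sc = ((ServerSocketChannel) k.channel()).accept();
                accepted = (sc != null);
                if (sc != null) sc.close();
            }
        }
        System.out.println(accepted);
        client.close();
        server.close();
        selector.close();
    }
}
```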
The run loop started by acceptSocketService.start() looks like this:
while (!this.isStopped()) {
    try {
        // wait for events
        this.selector.select(1000);
        Set<SelectionKey> selected = this.selector.selectedKeys();
        if (selected != null) {
            for (SelectionKey k : selected) {
                // handle OP_ACCEPT and create an HAConnection
                if ((k.readyOps() & SelectionKey.OP_ACCEPT) != 0) {
                    SocketChannel sc = ((ServerSocketChannel) k.channel()).accept();
                    if (sc != null) {
                        HAConnection conn = new HAConnection(HAService.this, sc);
                        // starts the readSocketService and writeSocketService threads
                        conn.start();
                        HAService.this.addConnection(conn);
                    }
                }
            }
            selected.clear();
        }
    } catch (Exception e) {
        log.error(this.getServiceName() + " service has exception.", e);
    }
}
The selector polls for ready accept events once per second. When a connection is ready, ServerSocketChannel.accept() creates the SocketChannel that serves as the data channel to that slave. For every connection an HAConnection object is created, and it drives the master-slave sync logic. HAConnection.start is:
public void start() {
    this.readSocketService.start();
    this.writeSocketService.start();
}
2.3 HAClient Startup

The HAClient run loop looks like this:
while (!this.isStopped()) {
    try {
        // connect to the broker master, via Java NIO
        if (this.connectMaster()) {
            // the heartbeat doubles as offset reporting
            if (this.isTimeToReportOffset()) {
                // report the slave's max offset
                boolean result = this.reportSlaveMaxOffset(this.currentReportedOffset);
                if (!result) {
                    this.closeMaster();
                }
            }
            this.selector.select(1000);
            // handle read events, i.e. message data sent back by the master
            boolean ok = this.processReadEvent();
            if (!ok) {
                this.closeMaster();
            }
            if (!reportSlaveMaxOffsetPlus()) {
                continue;
            }
            long interval =
                HAService.this.getDefaultMessageStore().getSystemClock().now()
                    - this.lastWriteTimestamp;
            if (interval > HAService.this.getDefaultMessageStore().getMessageStoreConfig()
                .getHaHousekeepingInterval()) {
                log.warn("HAClient, housekeeping, found this connection[" + this.masterAddress
                    + "] expired, " + interval);
                this.closeMaster();
                log.warn("HAClient, master not response some time, so close connection");
            }
        } else {
            this.waitForRunning(1000 * 5);
        }
    } catch (Exception e) {
        log.warn(this.getServiceName() + " service has exception.", e);
        this.waitForRunning(1000 * 5);
    }
}
2.3.1 Establishing the Master-Slave Connection
If socketChannel is null, the client tries to connect to the master; if no master address is set, connectMaster returns false.
private boolean connectMaster() throws ClosedChannelException {
    if (null == socketChannel) {
        String addr = this.masterAddress.get();
        if (addr != null) {
            SocketAddress socketAddress = RemotingUtil.string2SocketAddress(addr);
            if (socketAddress != null) {
                this.socketChannel = RemotingUtil.connect(socketAddress);
                if (this.socketChannel != null) {
                    // register for read events to receive data from the broker master
                    this.socketChannel.register(this.selector, SelectionKey.OP_READ);
                }
            }
        }
        // initialize the current offset to report
        this.currentReportedOffset = HAService.this.defaultMessageStore.getMaxPhyOffset();
        this.lastWriteTimestamp = System.currentTimeMillis();
    }
    return this.socketChannel != null;
}
- Master-slave connection

The Broker Slave connects to the Broker Master via NIO:
SocketChannel sc = null;
sc = SocketChannel.open();
sc.configureBlocking(true);
sc.socket().setSoLinger(false, -1);
sc.socket().setTcpNoDelay(true);
sc.socket().setReceiveBufferSize(1024 * 64);
sc.socket().setSendBufferSize(1024 * 64);
sc.socket().connect(remote, timeoutMillis);
sc.configureBlocking(false);
- Fetching the slave's current offset
public long getMaxPhyOffset() {
    return this.commitLog.getMaxOffset();
}

public long getMaxOffset() {
    return this.mappedFileQueue.getMaxOffset();
}

public long getMaxOffset() {
    MappedFile mappedFile = getLastMappedFile();
    if (mappedFile != null) {
        return mappedFile.getFileFromOffset() + mappedFile.getReadPosition();
    }
    return 0;
}
So the slave's offset ultimately comes from the read position of the last MappedFile.
2.3.2 Deciding When to Report the Offset
private boolean isTimeToReportOffset() {
    // current time - last write timestamp
    long interval =
        HAService.this.defaultMessageStore.getSystemClock().now() - this.lastWriteTimestamp;
    boolean needHeart = interval > HAService.this.defaultMessageStore.getMessageStoreConfig()
        .getHaSendHeartbeatInterval();
    return needHeart;
}
A heartbeat (carrying the offset) is sent when currentTime - lastWriteTimestamp > haSendHeartbeatInterval. haSendHeartbeatInterval defaults to 5s and is configurable.
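The timing check is pure arithmetic and easy to isolate. A minimal sketch (class and method names are ours), using RocketMQ's 5000 ms default as the interval:

```java
public class HeartbeatTimer {
    // Mirrors isTimeToReportOffset: report when more than heartbeatIntervalMs
    // has elapsed since the last write to the master.
    public static boolean isTimeToReport(long nowMs, long lastWriteMs, long heartbeatIntervalMs) {
        return (nowMs - lastWriteMs) > heartbeatIntervalMs;
    }
}
```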
2.3.3 Reporting the Offset
private boolean reportSlaveMaxOffset(final long maxOffset) {
    this.reportOffset.position(0);
    this.reportOffset.limit(8);
    this.reportOffset.putLong(maxOffset);
    this.reportOffset.position(0);
    this.reportOffset.limit(8);
    // write at most three times, as long as reportOffset still has remaining bytes
    for (int i = 0; i < 3 && this.reportOffset.hasRemaining(); i++) {
        try {
            this.socketChannel.write(this.reportOffset);
        } catch (IOException e) {
            log.error(this.getServiceName()
                + " reportSlaveMaxOffset this.socketChannel.write exception", e);
            return false;
        }
    }
    return !this.reportOffset.hasRemaining();
}
Again, the request is sent via plain NIO.
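The report itself is nothing more than an 8-byte big-endian physical offset. A small sketch of that framing (class name is ours):

```java
import java.nio.ByteBuffer;

public class OffsetReport {
    // Encode the slave's max physical offset as the 8-byte frame
    // reportSlaveMaxOffset writes to the socket.
    public static ByteBuffer encode(long maxOffset) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putLong(maxOffset);
        buf.flip();
        return buf;
    }

    // Decode one frame back into the offset (absolute read, buffer untouched).
    public static long decode(ByteBuffer buf) {
        return buf.getLong(0);
    }
}
```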
2.4 Broker Master Request Handling

The HAConnection created when the connection was established holds two important thread services:
// writer: sends commitlog data to the slave
private WriteSocketService writeSocketService;
// reader: reads the offset reported by the slave, which determines where the
// master reads the commitlog from
private ReadSocketService readSocketService;
2.4.1 ReadSocketService Handling Read Requests

readSocketService.run looks like this:
while (!this.isStopped()) {
    try {
        this.selector.select(1000);
        // handle read events (offsets reported by the slave)
        boolean ok = this.processReadEvent();
        if (!ok) {
            HAConnection.log.error("processReadEvent error");
            break;
        }
        long interval = HAConnection.this.haService.getDefaultMessageStore().getSystemClock().now() - this.lastReadTimestamp;
        if (interval > HAConnection.this.haService.getDefaultMessageStore().getMessageStoreConfig().getHaHousekeepingInterval()) {
            log.warn("ha housekeeping, found this connection[" + HAConnection.this.clientAddr + "] expired, " + interval);
            break;
        }
    } catch (Exception e) {
        HAConnection.log.error(this.getServiceName() + " service has exception.", e);
        break;
    }
}
The logic of processReadEvent:
int readSize = this.socketChannel.read(this.byteBufferRead);
if (readSize > 0) {
    readSizeZeroTimes = 0;
    this.lastReadTimestamp = HAConnection.this.haService.getDefaultMessageStore().getSystemClock().now();
    if ((this.byteBufferRead.position() - this.processPostion) >= 8) {
        int pos = this.byteBufferRead.position() - (this.byteBufferRead.position() % 8);
        // the offset requested by the slave (only the last complete 8-byte frame matters)
        long readOffset = this.byteBufferRead.getLong(pos - 8);
        this.processPostion = pos;
        HAConnection.this.slaveAckOffset = readOffset;
        if (HAConnection.this.slaveRequestOffset < 0) {
            HAConnection.this.slaveRequestOffset = readOffset;
            log.info("slave[" + HAConnection.this.clientAddr + "] request offset " + readOffset);
        }
        // for synchronous replication: if the acked offset has caught up with
        // push2SlaveMaxOffset, wake up the master's GroupTransferService
        HAConnection.this.haService.notifyTransferSome(HAConnection.this.slaveAckOffset);
    }
}
processReadEvent is simple: it parses the offset out of the ByteBuffer, records it as slaveAckOffset, and sets HAConnection.this.slaveRequestOffset on the first report.
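The framing logic above can be reproduced standalone. A sketch (names ours) of "keep only the last complete 8-byte offset frame", which is how the master tolerates several offset reports piling up in one read:

```java
import java.nio.ByteBuffer;

public class SlaveAckParser {
    // The slave may have sent several 8-byte offsets since the last read;
    // the master only cares about the latest complete one. Returns -1 when
    // no complete frame has arrived yet.
    public static long latestOffset(ByteBuffer byteBufferRead, int processPosition) {
        if ((byteBufferRead.position() - processPosition) < 8) {
            return -1;
        }
        // round the write position down to an 8-byte boundary, then take the
        // last full long before it
        int pos = byteBufferRead.position() - (byteBufferRead.position() % 8);
        return byteBufferRead.getLong(pos - 8);
    }
}
```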
2.4.2 WriteSocketService Handling Writes

The Broker Master reads CommitLog data in HAConnection.WriteSocketService; the main logic of its run method is:
this.selector.select(1000);
// nextTransferFromWhere: the start offset of the next commitLog transfer
if (-1 == this.nextTransferFromWhere) {
    if (0 == HAConnection.this.slaveRequestOffset) {
        long masterOffset = HAConnection.this.haService.getDefaultMessageStore().getCommitLog().getMaxOffset();
        masterOffset =
            masterOffset
                - (masterOffset % HAConnection.this.haService.getDefaultMessageStore().getMessageStoreConfig()
                .getMapedFileSizeCommitLog());
        if (masterOffset < 0) {
            masterOffset = 0;
        }
        this.nextTransferFromWhere = masterOffset;
    } else {
        this.nextTransferFromWhere = HAConnection.this.slaveRequestOffset;
    }
    log.info("master transfer data from " + this.nextTransferFromWhere + " to slave[" + HAConnection.this.clientAddr
        + "], and slave request " + HAConnection.this.slaveRequestOffset);
}
// read commitLog data starting at nextTransferFromWhere
SelectMappedBufferResult selectResult =
    HAConnection.this.haService.getDefaultMessageStore().getCommitLogData(this.nextTransferFromWhere);
if (selectResult != null) {
    int size = selectResult.getSize();
    if (size > HAConnection.this.haService.getDefaultMessageStore().getMessageStoreConfig().getHaTransferBatchSize()) {
        size = HAConnection.this.haService.getDefaultMessageStore().getMessageStoreConfig().getHaTransferBatchSize();
    }
    long thisOffset = this.nextTransferFromWhere;
    this.nextTransferFromWhere += size;
    selectResult.getByteBuffer().limit(size);
    this.selectMappedBufferResult = selectResult;
    // Build Header
    this.byteBufferHeader.position(0);
    this.byteBufferHeader.limit(headerSize);
    this.byteBufferHeader.putLong(thisOffset);
    this.byteBufferHeader.putInt(size);
    this.byteBufferHeader.flip();
    // send the commitlog data via NIO
    this.lastWriteOver = this.transferData();
} else {
    // no new commitLog data yet: wait up to 100ms
    HAConnection.this.haService.getWaitNotifyObject().allWaitForRunning(100);
}
This consists of two steps: reading CommitLog data and sending it to the slave.
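The initial-offset rule in the run loop above (a brand-new slave that reports offset 0 starts from the beginning of the master's newest segment rather than from offset 0) boils down to the following arithmetic. A sketch with illustrative names:

```java
public class InitialTransferOffset {
    // When the slave reports 0 (a fresh slave), start the transfer at the
    // master's max physical offset rounded down to the commitlog segment size
    // (mapedFileSizeCommitLog, 1 GB by default); otherwise honor the slave's offset.
    public static long compute(long slaveRequestOffset, long masterMaxOffset, long segmentSize) {
        if (slaveRequestOffset != 0) {
            return slaveRequestOffset;
        }
        long masterOffset = masterMaxOffset - (masterMaxOffset % segmentSize);
        return Math.max(masterOffset, 0);
    }
}
```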
2.4.2.1 Reading CommitLog Data
public SelectMappedBufferResult getData(final long offset, final boolean returnFirstOnNotFound) {
    int mappedFileSize = this.defaultMessageStore.getMessageStoreConfig().getMapedFileSizeCommitLog();
    MappedFile mappedFile = this.mappedFileQueue.findMappedFileByOffset(offset, returnFirstOnNotFound);
    if (mappedFile != null) {
        int pos = (int) (offset % mappedFileSize);
        SelectMappedBufferResult result = mappedFile.selectMappedBuffer(pos);
        return result;
    }
    return null;
}

public SelectMappedBufferResult selectMappedBuffer(int pos) {
    int readPosition = getReadPosition();
    if (pos < readPosition && pos >= 0) {
        if (this.hold()) {
            ByteBuffer byteBuffer = this.mappedByteBuffer.slice();
            byteBuffer.position(pos);
            int size = readPosition - pos;
            ByteBuffer byteBufferNew = byteBuffer.slice();
            byteBufferNew.limit(size);
            return new SelectMappedBufferResult(this.fileFromOffset + pos, byteBufferNew, size, this);
        }
    }
    return null;
}
Ultimately the data is read from the MappedFile at the given offset.
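The offset-to-file mapping behind findMappedFileByOffset is simple arithmetic over fixed-size segments. A sketch (names ours, and it assumes the first file's base offset is known, which can differ after old segments are deleted):

```java
public class CommitLogIndex {
    // With fixed-size commitlog segments, the segment index falls out of a
    // division against the first file's base offset ...
    public static int segmentIndex(long offset, long firstSegmentBaseOffset, long segmentSize) {
        return (int) ((offset - firstSegmentBaseOffset) / segmentSize);
    }

    // ... and the position inside that segment is a modulo, exactly the
    // "pos = offset % mappedFileSize" step in getData above.
    public static int positionInSegment(long offset, long segmentSize) {
        return (int) (offset % segmentSize);
    }
}
```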
2.4.2.2 Sending CommitLog Data

The payload consists of two parts, a header and a body, and is again sent via NIO; the core code:
// Build Header
this.byteBufferHeader.position(0);
this.byteBufferHeader.limit(headerSize);
this.byteBufferHeader.putLong(thisOffset);
this.byteBufferHeader.putInt(size);
this.byteBufferHeader.flip();
int writeSize = this.socketChannel.write(this.byteBufferHeader);

// Write Body
if (!this.byteBufferHeader.hasRemaining()) {
    while (this.selectMappedBufferResult.getByteBuffer().hasRemaining()) {
        int writeSize = this.socketChannel.write(this.selectMappedBufferResult.getByteBuffer());
        if (writeSize > 0) {
            writeSizeZeroTimes = 0;
            this.lastWriteTimestamp = HAConnection.this.haService.getDefaultMessageStore().getSystemClock().now();
        } else if (writeSize == 0) {
            if (++writeSizeZeroTimes >= 3) {
                break;
            }
        } else {
            throw new Exception("ha master write body error < 0");
        }
    }
}
Once the master has sent the CommitLog data, the Broker Slave's read event fires; the slave reads the CommitLog data and writes it locally.
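The wire format sent above is a fixed 12-byte header (8-byte start offset + 4-byte body size) followed by the body. A sketch of the header encoding (class name ours):

```java
import java.nio.ByteBuffer;

public class TransferHeader {
    // Matches the byteBufferHeader layout: putLong(thisOffset) + putInt(size).
    public static final int HEADER_SIZE = 8 + 4;

    public static ByteBuffer build(long fromOffset, int bodySize) {
        ByteBuffer header = ByteBuffer.allocate(HEADER_SIZE);
        header.putLong(fromOffset);
        header.putInt(bodySize);
        header.flip(); // ready to be written to the channel
        return header;
    }
}
```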
2.5 HAClient processReadEvent

When the connection was established, the slave registered for read events precisely so it can read the CommitLog data returned by the Broker Master; the handler is HAClient.processReadEvent:
int readSize = this.socketChannel.read(this.byteBufferRead);
if (readSize > 0) {
    lastWriteTimestamp = HAService.this.defaultMessageStore.getSystemClock().now();
    readSizeZeroTimes = 0;
    boolean result = this.dispatchReadRequest();
    if (!result) {
        log.error("HAClient, dispatchReadRequest error");
        return false;
    }
}
dispatchReadRequest looks like this:
// read the returned body data
byte[] bodyData = new byte[bodySize];
this.byteBufferRead.position(this.dispatchPostion + msgHeaderSize);
this.byteBufferRead.get(bodyData);
// append the data to the slave's commitlog
HAService.this.defaultMessageStore.appendToCommitLog(masterPhyOffset, bodyData);
this.byteBufferRead.position(readSocketPos);
this.dispatchPostion += msgHeaderSize + bodySize;
// report the slave's new offset
if (!reportSlaveMaxOffsetPlus()) {
    return false;
}
The core logic has three steps:

- read the CommitLog data out of byteBufferRead;
- call defaultMessageStore.appendToCommitLog to write the data into the MappedFile:

public boolean appendToCommitLog(long startOffset, byte[] data) {
    // write the data to the commitlog, just like an ordinary message store
    boolean result = this.commitLog.appendData(startOffset, data);
    // wake up reputMessageService to build the consumeQueue and index
    this.reputMessageService.wakeup();
    return result;
}

- report the slave's new offset, again read from the MappedFile, back to the Broker Master.
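The slave-side parsing above can be sketched as a standalone framing function (names ours): return a body only once a complete [offset][size][body] frame has arrived, leaving the buffer's write position untouched:

```java
import java.nio.ByteBuffer;

public class SlaveDispatch {
    private static final int MSG_HEADER_SIZE = 8 + 4; // masterPhyOffset + bodySize

    // Mirrors dispatchReadRequest's framing: returns null while the frame is
    // still incomplete, otherwise copies out the body.
    public static byte[] tryReadBody(ByteBuffer byteBufferRead, int dispatchPosition) {
        int diff = byteBufferRead.position() - dispatchPosition;
        if (diff < MSG_HEADER_SIZE) {
            return null; // header not complete yet
        }
        int bodySize = byteBufferRead.getInt(dispatchPosition + 8);
        if (diff < MSG_HEADER_SIZE + bodySize) {
            return null; // body not complete yet
        }
        byte[] body = new byte[bodySize];
        int savedPos = byteBufferRead.position();
        byteBufferRead.position(dispatchPosition + MSG_HEADER_SIZE);
        byteBufferRead.get(body);
        byteBufferRead.position(savedPos); // restore the socket write position
        return body;
    }
}
```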
2.6 Synchronous Replication
The sections above covered the Broker's asynchronous replication; now let's look at synchronous replication. The overall flow:

- The producer sends a message to the broker, which stores it by writing it into the commitLog;
- The master's message-write thread wakes up the WriteSocketService thread, which reads commitLog data and sends it to the slave. When WriteSocketService finds no new commitLog data it waits up to 100ms, so each new commitLog write wakes it up to read and ship the data;
- The master creates a GroupCommitRequest and synchronously waits for replication to complete;
- The slave receives the new commitLog data, writes it to its own commitLog, and reports its new offset back to the master;
- The master updates push2SlaveMaxOffset and checks whether it has reached the offset required by the replication request; if so, replication is considered complete, CommitLog.handleHA returns success, and so does the message store.

The code entry point is CommitLog.handleHA:
public void handleHA(AppendMessageResult result, PutMessageResult putMessageResult, MessageExt messageExt) {
    // only on a broker master configured for synchronous replication
    if (BrokerRole.SYNC_MASTER == this.defaultMessageStore.getMessageStoreConfig().getBrokerRole()) {
        HAService service = this.defaultMessageStore.getHaService();
        // MessageConst.PROPERTY_WAIT_STORE_MSG_OK on the message;
        // by default the producer waits for replication to complete
        if (messageExt.isWaitStoreMsgOK()) {
            /**
             * Slave availability check: masterOffset - push2SlaveMaxOffset < 1024 * 1024 * 256.
             * If the slave lags too far behind the master, it is considered unavailable.
             * Here result = mappedFile.appendMessage(msg, this.appendMessageCallback);
             */
            if (service.isSlaveOK(result.getWroteOffset() + result.getWroteBytes())) {
                // nextOffset = wroteOffset + wroteBytes: the offset the slave must reach
                GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes());
                /**
                 * Delegates to this.groupTransferService.putRequest(request), adding the
                 * request to the requestsWrite list; HAService holds the
                 * GroupTransferService reference.
                 */
                service.putRequest(request);
                /**
                 * Wakes up WriteSocketService, which reads commitLog data and sends it to
                 * the slave. When there is no new commitLog data, WriteSocketService waits in
                 * HAConnection.this.haService.getWaitNotifyObject().allWaitForRunning(100),
                 * so a fresh commitLog write wakes it up to read and ship the data.
                 */
                service.getWaitNotifyObject().wakeupAll();
                // wait for replication to complete; the completion condition is:
                // HAService.this.push2SlaveMaxOffset.get() >= req.getNextOffset()
                boolean flushOK =
                    request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
                // on timeout, mark the put result as flush-slave-timeout
                if (!flushOK) {
                    log.error("do sync transfer other node, wait return, but failed, topic: " + messageExt.getTopic() + " tags: "
                        + messageExt.getTags() + " client address: " + messageExt.getBornHostNameString());
                    putMessageResult.setPutMessageStatus(PutMessageStatus.FLUSH_SLAVE_TIMEOUT);
                }
            }
            // Slave problem
            else {
                // Tell the producer, slave not available
                putMessageResult.setPutMessageStatus(PutMessageStatus.SLAVE_NOT_AVAILABLE);
            }
        }
    }
}
2.6.1 GroupTransferService Startup

When HAService starts, it also starts the GroupTransferService thread:
public void run() {
    while (!this.isStopped()) {
        this.waitForRunning(10);
        this.doWaitTransfer();
    }
}

private void doWaitTransfer() {
    synchronized (this.requestsRead) {
        if (!this.requestsRead.isEmpty()) {
            for (CommitLog.GroupCommitRequest req : this.requestsRead) {
                // req.getNextOffset() = result.getWroteOffset() + result.getWroteBytes()
                boolean transferOK = HAService.this.push2SlaveMaxOffset.get() >= req.getNextOffset();
                // retry up to 5 times, 1s each, i.e. at most 5s,
                // since the slave heartbeat interval defaults to 5s
                for (int i = 0; !transferOK && i < 5; i++) {
                    this.notifyTransferObject.waitForRunning(1000);
                    transferOK = HAService.this.push2SlaveMaxOffset.get() >= req.getNextOffset();
                }
                if (!transferOK) {
                    log.warn("transfer message to slave timeout, " + req.getNextOffset());
                }
                // replication finished (or timed out): wake up the thread blocked in handleHA
                req.wakeupCustomer(transferOK);
            }
            this.requestsRead.clear();
        }
    }
}
wakeupCustomer:
public void wakeupCustomer(final boolean flushOK) {
    this.flushOK = flushOK;
    this.countDownLatch.countDown();
}
2.6.2 Waking Up WriteSocketService
service.getWaitNotifyObject().wakeupAll();
This wakes up WriteSocketService, which reads commitLog data and sends it to the slave. When WriteSocketService finds no new commitLog data, it waits in HAConnection.this.haService.getWaitNotifyObject().allWaitForRunning(100), so a fresh commitLog write wakes it up to read and ship the data.
2.6.3 Waiting Synchronously Until Replication Completes
boolean flushOK =
    request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());

public boolean waitForFlush(long timeout) {
    try {
        // wait until replication completes (or the timeout expires)
        this.countDownLatch.await(timeout, TimeUnit.MILLISECONDS);
        return this.flushOK;
    } catch (InterruptedException e) {
        log.error("Interrupted", e);
        return false;
    }
}
3. Metadata Replication

Broker metadata replication covers topicConfig, consumerOffset, delayOffset, and subscriptionGroup. A dedicated thread on the slave broker runs the replication every 10s; the code entry point is BrokerController.start -> SlaveSynchronize.syncAll:
slaveSyncFuture = this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        try {
            // master-slave metadata sync, every 10s
            BrokerController.this.slaveSynchronize.syncAll();
        } catch (Throwable e) {
            log.error("ScheduledTask SlaveSynchronize syncAll error.", e);
        }
    }
}, 1000 * 3, 1000 * 10, TimeUnit.MILLISECONDS);
public void syncAll() {
    this.syncTopicConfig();
    this.syncConsumerOffset();
    this.syncDelayOffset();
    this.syncSubscriptionGroupConfig();
}
3.1 syncTopicConfig
// fetch the TopicConfig from the master; on the master side this ends up in
// AdminBrokerProcessor.getAllTopicConfig
TopicConfigSerializeWrapper topicWrapper =
    this.brokerController.getBrokerOuterAPI().getAllTopicConfig(masterAddrBak);
if (!this.brokerController.getTopicConfigManager().getDataVersion()
    .equals(topicWrapper.getDataVersion())) {
    this.brokerController.getTopicConfigManager().getDataVersion()
        .assignNewOne(topicWrapper.getDataVersion());
    this.brokerController.getTopicConfigManager().getTopicConfigTable().clear();
    this.brokerController.getTopicConfigManager().getTopicConfigTable()
        .putAll(topicWrapper.getTopicConfigTable());
    // persist the topicConfig; the corresponding file is topics.json
    this.brokerController.getTopicConfigManager().persist();
    log.info("Update slave topic config from master, {}", masterAddrBak);
}
3.2 syncConsumerOffset
// fetch the ConsumerOffset from the master Broker
ConsumerOffsetSerializeWrapper offsetWrapper =
    this.brokerController.getBrokerOuterAPI().getAllConsumerOffset(masterAddrBak);
// update the slave's offsetTable
this.brokerController.getConsumerOffsetManager().getOffsetTable()
    .putAll(offsetWrapper.getOffsetTable());
// and persist it to the slave's consumerOffset.json file
this.brokerController.getConsumerOffsetManager().persist();
3.3 syncDelayOffset
String delayOffset = this.brokerController.getBrokerOuterAPI().getAllDelayOffset(masterAddrBak);
String fileName = StorePathConfigHelper.getDelayOffsetStorePath(this.brokerController
.getMessageStoreConfig().getStorePathRootDir());
MixAll.string2File(delayOffset, fileName);
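MixAll.string2File is what persists such config strings to disk. A simplified stand-in (the temp-file strategy here is an assumption for illustration, not RocketMQ's exact implementation) showing the write-then-rename idea, so a crash mid-write never leaves a truncated config file:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class SafePersist {
    // Write the content to a sibling temp file first, then rename it over the
    // target, so readers never observe a partially written file.
    public static void string2File(String content, String fileName) throws IOException {
        Path target = Paths.get(fileName);
        if (target.getParent() != null) {
            Files.createDirectories(target.getParent());
        }
        Path tmp = Paths.get(fileName + ".tmp");
        Files.write(tmp, content.getBytes(StandardCharsets.UTF_8));
        Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```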
3.4 syncSubscriptionGroupConfig
SubscriptionGroupWrapper subscriptionWrapper =
    this.brokerController.getBrokerOuterAPI().getAllSubscriptionGroupConfig(masterAddrBak);
SubscriptionGroupManager subscriptionGroupManager =
    this.brokerController.getSubscriptionGroupManager();
subscriptionGroupManager.getDataVersion().assignNewOne(subscriptionWrapper.getDataVersion());
subscriptionGroupManager.getSubscriptionGroupTable().clear();
subscriptionGroupManager.getSubscriptionGroupTable().putAll(subscriptionWrapper.getSubscriptionGroupTable());
subscriptionGroupManager.persist();
4. Reflections and Takeaways

The walkthrough above covers the essentials of RocketMQ master-slave replication. A few of its ideas are worth borrowing:

- Manage metadata and sequential message data separately in the design;
- Master-slave replication generally follows one pattern: the slave requests the master with an offset, the master returns data starting at that offset, and the slave applies it. MySQL and Redis replication work in essentially the same way;
- Replication can be asynchronous or synchronous, chosen by configuration according to the actual business scenario;
- Keep the replication threads separate from the message-write thread and the main thread.

Given limited time and energy there are bound to be omissions and oversights; questions and discussion are welcome.