A Look at nacos's DataSyncer

This article takes a look at the DataSyncer in nacos.

DataSyncer

nacos-1.1.3/naming/src/main/java/com/alibaba/nacos/naming/consistency/ephemeral/distro/DataSyncer.java

@Component
@DependsOn("serverListManager")
public class DataSyncer {

    @Autowired
    private DataStore dataStore;

    @Autowired
    private GlobalConfig partitionConfig;

    @Autowired
    private Serializer serializer;

    @Autowired
    private DistroMapper distroMapper;

    @Autowired
    private ServerListManager serverListManager;

    private Map<String, String> taskMap = new ConcurrentHashMap<>();

    @PostConstruct
    public void init() {
        startTimedSync();
    }

    public void submit(SyncTask task, long delay) {

        // If it's a new task:
        if (task.getRetryCount() == 0) {
            Iterator<String> iterator = task.getKeys().iterator();
            while (iterator.hasNext()) {
                String key = iterator.next();
                if (StringUtils.isNotBlank(taskMap.putIfAbsent(buildKey(key, task.getTargetServer()), key))) {
                    // associated key already exist:
                    if (Loggers.DISTRO.isDebugEnabled()) {
                        Loggers.DISTRO.debug("sync already in process, key: {}", key);
                    }
                    iterator.remove();
                }
            }
        }

        if (task.getKeys().isEmpty()) {
            // all keys are removed:
            return;
        }

        GlobalExecutor.submitDataSync(new Runnable() {
            @Override
            public void run() {

                try {
                    if (getServers() == null || getServers().isEmpty()) {
                        Loggers.SRV_LOG.warn("try to sync data but server list is empty.");
                        return;
                    }

                    List<String> keys = task.getKeys();

                    if (Loggers.DISTRO.isDebugEnabled()) {
                        Loggers.DISTRO.debug("sync keys: {}", keys);
                    }

                    Map<String, Datum> datumMap = dataStore.batchGet(keys);

                    if (datumMap == null || datumMap.isEmpty()) {
                        // clear all flags of this task:
                        for (String key : task.getKeys()) {
                            taskMap.remove(buildKey(key, task.getTargetServer()));
                        }
                        return;
                    }

                    byte[] data = serializer.serialize(datumMap);

                    long timestamp = System.currentTimeMillis();
                    boolean success = NamingProxy.syncData(data, task.getTargetServer());
                    if (!success) {
                        SyncTask syncTask = new SyncTask();
                        syncTask.setKeys(task.getKeys());
                        syncTask.setRetryCount(task.getRetryCount() + 1);
                        syncTask.setLastExecuteTime(timestamp);
                        syncTask.setTargetServer(task.getTargetServer());
                        retrySync(syncTask);
                    } else {
                        // clear all flags of this task:
                        for (String key : task.getKeys()) {
                            taskMap.remove(buildKey(key, task.getTargetServer()));
                        }
                    }

                } catch (Exception e) {
                    Loggers.DISTRO.error("sync data failed.", e);
                }
            }
        }, delay);
    }

    public void retrySync(SyncTask syncTask) {
        Server server = new Server();
        server.setIp(syncTask.getTargetServer().split(":")[0]);
        server.setServePort(Integer.parseInt(syncTask.getTargetServer().split(":")[1]));
        if (!getServers().contains(server)) {
            // if server is no longer in healthy server list, ignore this task:
            return;
        }

        // TODO may choose other retry policy.
        submit(syncTask, partitionConfig.getSyncRetryDelay());
    }

    public void startTimedSync() {
        GlobalExecutor.schedulePartitionDataTimedSync(new TimedSync());
    }

    //......

    public List<Server> getServers() {
        return serverListManager.getHealthyServers();
    }

    public String buildKey(String key, String targetServer) {
        return key + UtilsAndCommons.CACHE_KEY_SPLITER + targetServer;
    }
}
  • DataSyncer defines methods such as submit, retrySync, startTimedSync and getServers; its init method (run via @PostConstruct) calls startTimedSync
  • for a task whose retryCount is 0, submit first registers each key in taskMap with putIfAbsent and drops any key whose sync to that target server is already in flight; it then schedules a job via GlobalExecutor.submitDataSync, which serializes the datums and pushes them with NamingProxy.syncData; on success the task's keys are cleared from taskMap, on failure a new SyncTask with retryCount + 1 is handed to retrySync (a usage sketch follows this list)
  • retrySync rebuilds the target Server from the "ip:port" string, ignores the task if that server is no longer in the healthy server list, and otherwise resubmits it with partitionConfig.getSyncRetryDelay(); startTimedSync schedules a TimedSync task via GlobalExecutor.schedulePartitionDataTimedSync; getServers returns the healthy servers from serverListManager.getHealthyServers()
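
Below is a minimal usage sketch (not taken from the nacos source) of how a caller could hand a batch of keys to DataSyncer. The SyncTask setters mirror the ones used in retrySync above; the key value, the peer address and the class name are made up for illustration, and the imports of the nacos types are omitted here.

import java.util.Arrays;

// imports of DataSyncer and SyncTask from the nacos naming module are omitted in this sketch
public class SubmitSketch {

    public void pushToPeer(DataSyncer dataSyncer) {
        SyncTask task = new SyncTask();
        // keys of the ephemeral datums to push; the key format here is illustrative
        task.setKeys(Arrays.asList("com.alibaba.nacos.naming.iplist.ephemeral.public##DEFAULT_GROUP@@demo"));
        // the cluster member that should receive the data, as "ip:port"
        task.setTargetServer("192.168.0.2:8848");
        // retryCount defaults to 0, so submit() de-duplicates the keys via taskMap
        dataSyncer.submit(task, 0);
    }
}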

TimedSync

nacos-1.1.3/naming/src/main/java/com/alibaba/nacos/naming/consistency/ephemeral/distro/DataSyncer.java

    public class TimedSync implements Runnable {

        @Override
        public void run() {

            try {
                if (Loggers.DISTRO.isDebugEnabled()) {
                    Loggers.DISTRO.debug("server list is: {}", getServers());
                }

                // send local timestamps to other servers:
                Map<String, String> keyChecksums = new HashMap<>(64);
                for (String key : dataStore.keys()) {
                    if (!distroMapper.responsible(KeyBuilder.getServiceName(key))) {
                        continue;
                    }

                    keyChecksums.put(key, dataStore.get(key).value.getChecksum());
                }

                if (keyChecksums.isEmpty()) {
                    return;
                }

                if (Loggers.DISTRO.isDebugEnabled()) {
                    Loggers.DISTRO.debug("sync checksums: {}", keyChecksums);
                }

                for (Server member : getServers()) {
                    if (NetUtils.localServer().equals(member.getKey())) {
                        continue;
                    }
                    NamingProxy.syncCheckSums(keyChecksums, member.getKey());
                }
            } catch (Exception e) {
                Loggers.DISTRO.error("timed sync task failed.", e);
            }
        }
    }
  • TimedSync collects a checksum for every key this node is responsible for (filtered by distroMapper.responsible) and pushes the keyChecksums map to every other healthy server via NamingProxy.syncCheckSums so that peers can verify their copies (a sketch of the comparison idea follows below)
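
The receiving peer's handling of these checksums is outside this class (in nacos 1.1.3 it lives in the distro consistency service, DistroConsistencyServiceImpl#onReceiveChecksums), but the core idea can be sketched as below. This is a simplified sketch, not the nacos implementation; the class and method names are made up: the peer compares the pushed checksums with its local ones and collects the keys it needs to pull again.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ChecksumCompareSketch {

    // returns the keys whose local copy is missing or whose checksum differs,
    // i.e. the datums the peer would have to fetch from the sender again
    public List<String> staleKeys(Map<String, String> remoteChecksums, Map<String, String> localChecksums) {
        List<String> toUpdate = new ArrayList<>();
        for (Map.Entry<String, String> entry : remoteChecksums.entrySet()) {
            String local = localChecksums.get(entry.getKey());
            if (local == null || !local.equals(entry.getValue())) {
                toUpdate.add(entry.getKey());
            }
        }
        return toUpdate;
    }
}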

Summary

  • DataSyncer defines methods such as submit, retrySync, startTimedSync and getServers; its init method calls startTimedSync
  • for a new task (retryCount 0), submit de-duplicates the keys against taskMap and then schedules a sync job via GlobalExecutor.submitDataSync; the job pushes the serialized datums with NamingProxy.syncData, clears the task's keys from taskMap on success, and hands the task to retrySync with retryCount + 1 on failure
  • retrySync rebuilds the target Server and resubmits the task with partitionConfig.getSyncRetryDelay(); startTimedSync schedules the TimedSync task via GlobalExecutor.schedulePartitionDataTimedSync (a scheduling sketch follows below); getServers returns the healthy servers from serverListManager.getHealthyServers()
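
As a rough idea of what GlobalExecutor.schedulePartitionDataTimedSync does for startTimedSync, the sketch below runs the TimedSync runnable periodically on a single-threaded scheduler. This is a simplified stand-in, not the nacos GlobalExecutor; the 5-second interval and the thread name are assumptions for illustration.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimedSyncScheduleSketch {

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "distro.timed.sync.sketch");
            t.setDaemon(true);
            return t;
        });

    // run the checksum push periodically; with a fixed delay the next round
    // only starts after the previous one has finished
    public void start(Runnable timedSync) {
        scheduler.scheduleWithFixedDelay(timedSync, 5, 5, TimeUnit.SECONDS);
    }
}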

doc

  • DataSyncer
