zookeeper
Original article: https://www.baeldung.com/apache-curator
1. Introduction
Apache Curator is a Java client for Apache Zookeeper, the popular coordination service for distributed applications.
In this tutorial, we’ll introduce some of the most relevant features provided by Curator:
- Connection Management – managing connections and retry policies
- Async – enhancing existing client by adding async capabilities and the use of Java 8 lambdas
- Configuration Management – having a centralized configuration for the system
- Strongly-Typed Models – working with typed models
- Recipes – implementing leader election, distributed locks or counters
2. Prerequisites
To start with, it’s recommended to take a quick look at Apache Zookeeper and its features.
For this tutorial, we assume that there’s already a standalone Zookeeper instance running on _127.0.0.1:2181_; here are instructions on how to install and run it, if you’re just getting started.
[[【Zookeeper】基于 3 台 linux 虚拟机搭建 zookeeper 集群]]
First, we’ll need to add the curator-x-async dependency to our _pom.xml_:
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-x-async</artifactId>
    <version>4.0.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
The latest version of Apache Curator, 4.X.X, has a hard dependency on Zookeeper 3.5.X, which is still in beta at the time of writing.
And so, in this article, we’re going to use the latest stable release, Zookeeper 3.4.11, instead.
So we need to exclude the Zookeeper dependency and add the dependency for our Zookeeper version to our _pom.xml_:
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.11</version>
</dependency>
For more information about compatibility, please refer to this link.
3. Connection Management
The basic use case of Apache Curator is connecting to a running Apache Zookeeper instance.
The tool provides a factory to build connections to Zookeeper using retry policies:
int sleepMsBetweenRetries = 100;
int maxRetries = 3;
RetryPolicy retryPolicy = new RetryNTimes(maxRetries, sleepMsBetweenRetries);
CuratorFramework client = CuratorFrameworkFactory
.newClient("127.0.0.1:2181", retryPolicy);
client.start();
assertThat(client.checkExists().forPath("/")).isNotNull();
In this quick example, we’ll retry 3 times and will wait 100 ms between retries in case of connectivity issues.
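To make the retry behavior concrete, here is a minimal, stand-alone sketch of what a fixed-interval policy like _RetryNTimes_ does under the hood. The `callWithRetries` helper is hypothetical, written only for illustration; it is not part of Curator’s API:

```java
import java.util.concurrent.Callable;

public class RetryDemo {
    // Run the operation; on failure, sleep the fixed interval and try
    // again, up to maxRetries additional attempts. If every attempt
    // fails, rethrow the last exception.
    static <T> T callWithRetries(Callable<T> op, int maxRetries, long sleepMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) Thread.sleep(sleepMs);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a server that refuses the first two connections.
        int[] failures = {2};
        String result = callWithRetries(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("connection refused");
            return "connected";
        }, 3, 100);
        System.out.println(result); // prints "connected"
    }
}
```

With `maxRetries = 3` and `sleepMs = 100`, a completely failed operation adds at most 300 ms of waiting on top of the attempts themselves.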
Once connected to Zookeeper using the _CuratorFramework_ client, we can now browse paths, get/set data and essentially interact with the server.
4. Async
The Curator Async module wraps the above _CuratorFramework_ client to provide non-blocking capabilities using the CompletionStage Java 8 API.
Let’s see how the previous example looks using the Async wrapper:
int sleepMsBetweenRetries = 100;
int maxRetries = 3;
RetryPolicy retryPolicy
= new RetryNTimes(maxRetries, sleepMsBetweenRetries);
CuratorFramework client = CuratorFrameworkFactory
.newClient("127.0.0.1:2181", retryPolicy);
client.start();
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
AtomicBoolean exists = new AtomicBoolean(false);
async.checkExists()
.forPath("/")
.thenAcceptAsync(s -> exists.set(s != null));
await().until(() -> assertThat(exists.get()).isTrue());
Now, the _checkExists()_ operation works in asynchronous mode, not blocking the main thread. We can also chain actions one after another using the _thenAcceptAsync()_ method instead, which uses the CompletionStage API.
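The same CompletionStage pattern can be tried out with plain _CompletableFuture_ and no ZooKeeper at all; the plain `Object` below is just a stand-in for the `Stat` result a real existence check would return:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncChaining {
    // Simulate an async existence check: supplyAsync runs on another
    // thread, and thenAcceptAsync consumes its result without blocking
    // the caller -- the same CompletionStage chaining Curator's async
    // wrapper exposes.
    static boolean checkedExists() {
        AtomicBoolean exists = new AtomicBoolean(false);
        CompletableFuture.supplyAsync(() -> new Object() /* stands in for a Stat */)
            .thenAcceptAsync(stat -> exists.set(stat != null))
            .join(); // join only so this demo can observe the outcome
        return exists.get();
    }

    public static void main(String[] args) {
        System.out.println(checkedExists()); // prints true
    }
}
```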
5. Configuration Management
In a distributed environment, one of the most common challenges is to manage shared configuration among many applications. We can use Zookeeper as a data store where to keep our configuration.
Let’s see an example using Apache Curator to get and set data:
CuratorFramework client = newClient();
client.start();
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
String key = getKey();
String expected = "my_value";
client.create().forPath(key);
async.setData()
.forPath(key, expected.getBytes());
AtomicBoolean isEquals = new AtomicBoolean();
async.getData()
.forPath(key)
.thenAccept(data -> isEquals.set(new String(data).equals(expected)));
await().until(() -> assertThat(isEquals.get()).isTrue());
In this example, we create the node path, set the data in Zookeeper, and then read it back, checking that the value is the same. The _key_ field could be a node path like _/config/dev/my_key_.
5.1. Watchers
Another interesting feature in Zookeeper is the ability to watch keys or nodes. It allows us to listen to changes in the configuration and update our applications without needing to redeploy.
Let’s see how the above example looks when using watchers:
CuratorFramework client = newClient();
client.start();
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
String key = getKey();
String expected = "my_value";
async.create().forPath(key);
List<String> changes = new ArrayList<>();
async.watched()
.getData()
.forPath(key)
.event()
.thenAccept(watchedEvent -> {
    try {
        changes.add(new String(client.getData()
            .forPath(watchedEvent.getPath())));
    } catch (Exception e) {
        // fail ...
    }
});
// Set data value for our key
async.setData()
.forPath(key, expected.getBytes());
await()
.until(() -> assertThat(changes.size()).isEqualTo(1));
We configure the watcher, set the data, and then confirm the watched event was triggered. We can watch one node or a set of nodes at once.
6. Strongly Typed Models
Zookeeper primarily works with byte arrays, so we need to serialize and deserialize our data. This allows us some flexibility to work with any serializable instance, but it can be hard to maintain.
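To see that maintenance burden, here is a hand-rolled byte-array round trip using only the JDK. The `hostname:port` encoding is an invented toy format for illustration, not anything Curator or Zookeeper defines:

```java
import java.nio.charset.StandardCharsets;

public class ManualSerialization {
    // Without a typed model, every read and write crosses a byte-array
    // boundary by hand, using some ad-hoc encoding the caller must know.
    static byte[] encode(String hostname, int port) {
        return (hostname + ":" + port).getBytes(StandardCharsets.UTF_8);
    }

    static String[] decode(byte[] raw) {
        // Fragile by design: no schema, no validation -- exactly the
        // kind of code a typed-model layer lets us delete.
        return new String(raw, StandardCharsets.UTF_8).split(":");
    }

    public static void main(String[] args) {
        byte[] data = encode("host-name", 8080);
        String[] parts = decode(data);
        System.out.println(parts[0] + " " + parts[1]); // prints "host-name 8080"
    }
}
```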
To help here, Curator adds the concept of typed models which delegates the serialization/deserialization and allows us to work with our types directly. Let’s see how that works.
First, we need a serializer framework. Curator recommends using the Jackson implementation, so let’s add the Jackson dependency to our _pom.xml_:
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.13.0</version>
</dependency>
Now, let’s try to persist our custom class _HostConfig_:
public class HostConfig {
    private String hostname;
    private int port;

    // constructor, getters and setters
}
We need to provide the model specification mapping from the _HostConfig_ class to a path, and use the modeled framework wrapper provided by Apache Curator:
ModelSpec<HostConfig> mySpec = ModelSpec.builder(ZPath.parseWithIds("/config/dev"),
JacksonModelSerializer.build(HostConfig.class))
.build();
CuratorFramework client = newClient();
client.start();
AsyncCuratorFramework async
= AsyncCuratorFramework.wrap(client);
ModeledFramework<HostConfig> modeledClient
= ModeledFramework.wrap(async, mySpec);
modeledClient.set(new HostConfig("host-name", 8080));
modeledClient.read()
    .whenComplete((value, e) -> {
        if (e != null) {
            fail("Cannot read host config", e);
        } else {
            assertThat(value).isNotNull();
            assertThat(value.getHostname()).isEqualTo("host-name");
            assertThat(value.getPort()).isEqualTo(8080);
        }
    });
When reading the path _/config/dev_, the _whenComplete()_ callback will receive the _HostConfig_ instance stored in Zookeeper.
7. Recipes
Zookeeper provides this guideline to implement high-level solutions or recipes such as leader election, distributed locks or shared counters.
Apache Curator provides an implementation for most of these recipes. To see the full list, visit the Curator Recipes documentation.
All of these recipes are available in a separate module:
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.1</version>
</dependency>
Let’s jump right in and start understanding these with some simple examples.
7.1. Leader Election
In a distributed environment, we may need one master or leader node to coordinate a complex job.
This is how the usage of the Leader Election recipe in Curator looks:
CuratorFramework client = newClient();
client.start();
LeaderSelector leaderSelector = new LeaderSelector(client,
"/mutex/select/leader/for/job/A",
new LeaderSelectorListener() {
@Override
public void stateChanged(
CuratorFramework client,
ConnectionState newState) { }
    @Override
    public void takeLeadership(CuratorFramework client) throws Exception {
    }
});
// join the members group
leaderSelector.start();
// wait until the job A is done among all members
leaderSelector.close();
When we start the leader selector, our node joins a members group within the path _/mutex/select/leader/for/job/A_. Once our node becomes the leader, the _takeLeadership_ method will be invoked, and we as leaders can resume the job.
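As a rough mental model, the "exactly one member wins" guarantee can be sketched within a single JVM; this is only a toy analogue with a local `AtomicReference` standing in for ZooKeeper, not how Curator actually implements election:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ToyElection {
    // Members race to claim leadership with an atomic compare-and-set:
    // only the first caller flips the reference from null, so exactly
    // one node wins and the rest remain followers.
    static boolean takeLeadership(AtomicReference<String> leader, String nodeId) {
        return leader.compareAndSet(null, nodeId);
    }

    public static void main(String[] args) {
        AtomicReference<String> leader = new AtomicReference<>();
        boolean first = takeLeadership(leader, "node-A");
        boolean second = takeLeadership(leader, "node-B");
        System.out.println(first + " " + second + " leader=" + leader.get());
        // prints "true false leader=node-A"
    }
}
```

ZooKeeper provides this same single-winner guarantee across machines, and Curator additionally handles re-election when the leader disconnects.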
7.2. Shared Locks
The Shared Lock recipe is about having a fully distributed lock:
CuratorFramework client = newClient();
client.start();
InterProcessSemaphoreMutex sharedLock = new InterProcessSemaphoreMutex(client, "/mutex/process/A");
sharedLock.acquire();
// do process A
sharedLock.release();
When we acquire the lock, Zookeeper ensures that there’s no other application acquiring the same lock at the same time.
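Within one JVM, the same exclusion guarantee can be sketched with a plain _ReentrantLock_; this is only a local analogue of _acquire()_/_release()_, with no cross-process coordination involved:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LocalLockAnalogy {
    // While one thread holds the lock ("sharedLock.acquire()"), a
    // contender's tryLock fails. Curator's InterProcessSemaphoreMutex
    // extends this guarantee across processes, with ZooKeeper as the
    // arbiter instead of in-process memory.
    static boolean contenderAcquired() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // the current holder takes the lock
        final boolean[] got = new boolean[1];
        Thread contender = new Thread(() -> got[0] = lock.tryLock());
        contender.start();
        contender.join();
        lock.unlock(); // "sharedLock.release()"
        return got[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("contender acquired: " + contenderAcquired());
        // prints "contender acquired: false"
    }
}
```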
7.3. Counters
The Counters recipe coordinates a shared _Integer_ among all the clients:
CuratorFramework client = newClient();
client.start();
SharedCount counter = new SharedCount(client, "/counters/A", 0);
counter.start();
counter.setCount(counter.getCount() + 1);
assertThat(counter.getCount()).isEqualTo(1);
In this example, Zookeeper stores the _Integer_ value in the path _/counters/A_ and initializes the value to _0_ if the path has not been created yet.
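As a local sketch of the shared-counter pattern, here is a compare-and-set increment loop on an _AtomicInteger_; this is a single-JVM analogue only, not the _SharedCount_ API itself (Curator exposes a similar optimistic style through _trySetCount_):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterSketch {
    // Read the current value and publish value + 1 only if nobody
    // changed it in between; on a race, re-read and try again.
    static int incrementWithRetry(AtomicInteger counter) {
        while (true) {
            int current = counter.get();             // like counter.getCount()
            if (counter.compareAndSet(current, current + 1)) {
                return current + 1;                  // our update won
            }
            // another client raced us; loop and retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0); // starts at 0, like /counters/A
        System.out.println(incrementWithRetry(counter)); // prints 1
    }
}
```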
8. Conclusion
In this article, we’ve seen how to use Apache Curator to connect to Apache Zookeeper and take advantage of its main features.
We’ve also introduced a few of the main recipes in Curator.
As usual, sources can be found over on GitHub.