Big Data: ClickHouse Cluster Setup, Part 2

Layer upon layer of tear stains seal the brocade letter; in this life, only love refuses to die.

Distributed Cluster Installation

In the previous chapter we completed the ClickHouse distributed cluster installation and tested it by creating local and distributed tables. But what happens if one node is stopped?

On node03, kill the clickhouse-server process:

[root@node03 ~]# ps -ef | grep clickhouse
clickho+  2233     1 73 13:07 ?        00:00:02 clickhouse-server --daemon --pid-file=/var/run/clickhouse-server/clickhouse-server.pid --config-file=/etc/clickhouse-server/config.xml
root      2306  1751  0 13:07 pts/0    00:00:00 grep --color=auto clickhouse
[root@node03 ~]# service clickhouse-server stop
Stop clickhouse-server service: DONE
[root@node03 ~]# ps -ef | grep clickhouse
root      2337  1751  0 13:07 pts/0    00:00:00 grep --color=auto clickhouse

Query the distributed table on node01:

node01 :) select * from cluster3s1r_all; # before node03 is killed

SELECT *
FROM cluster3s1r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│  3 │ http://www.xxxxx.cn/ │ xxxxx  │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.037 sec. 

node01 :) select * from cluster3s1r_all; # with node03 killed

SELECT *
FROM cluster3s1r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
↘ Progress: 1.00 rows, 59.00 B (8.87 rows/s., 523.62 B/s.)  0%
Received exception from server (version 20.8.3):
Code: 279. DB::Exception: Received from localhost:9000. DB::Exception: All connection tries failed. Log: 

Code: 32, e.displayText() = DB::Exception: Attempt to read after eof (version 20.8.3.18)
Code: 210, e.displayText() = DB::NetException: Connection refused (node03:9000) (version 20.8.3.18)
Code: 210, e.displayText() = DB::NetException: Connection refused (node03:9000) (version 20.8.3.18)

: While executing Remote. 

1 rows in set. Elapsed: 0.114 sec. 

Only the data held on node01 is returned; the two rows stored on node03 are missing from the result.
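
If it is acceptable for the query to succeed while silently dropping the dead shard's rows, ClickHouse offers the query-level setting skip_unavailable_shards. The sketch below is an optional workaround, not part of the original walkthrough, and assumes the default behavior of this setting in version 20.8:

-- Tolerate the unreachable shard at the cost of silently missing its rows.
SELECT *
FROM cluster3s1r_all
SETTINGS skip_unavailable_shards = 1;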

Data Replication

In ClickHouse, however, a replica is attached to a shard, so to use multiple replicas you must first define a shard.

The simplest case: one shard with multiple replicas.

Modify the metrika.xml file

On node01, edit the cluster configuration file /etc/clickhouse-server/metrika.xml:

<yandex>
<!-- cluster configuration -->
<clickhouse_remote_servers>
    <!-- 1 shard, 2 replicas -->
    <cluster_1shards_2replicas>
        <!-- data shard 1 -->
        <shard>
        <!-- false: write to all replicas directly; true: write to one replica and let ZooKeeper-based replication copy the data -->
        <internal_replication>false</internal_replication>
            <replica>
                <host>node01</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>node02</host>
                <port>9000</port>
            </replica>    
        </shard>
    </cluster_1shards_2replicas>
</clickhouse_remote_servers>
</yandex>

Distribute the modified configuration to node02:

[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml                                                                                                                                                  100%  674   618.9KB/s   00:00    

If the configuration file is valid, there is no need to restart clickhouse-server; the file is reloaded automatically. Check the cluster information on node01:

[root@node01 clickhouse-server]# clickhouse-client -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.

node01 :) select * from system.clusters;

SELECT *
FROM system.clusters

┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ cluster_1shards_2replicas         │         1 │            1 │           1 │ node01    │ 192.168.10.100 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ cluster_1shards_2replicas         │         1 │            1 │           2 │ node02    │ 192.168.10.110 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards           │         1 │            1 │           1 │ 127.0.0.1 │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards           │         2 │            1 │           1 │ 127.0.0.2 │ 127.0.0.2      │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost │         2 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost              │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost_secure       │         1 │            1 │           1 │ localhost │ ::1            │ 9440 │        0 │ default │                  │            0 │                       0 │
│ test_unavailable_shard            │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_unavailable_shard            │         2 │            1 │           1 │ localhost │ ::1            │    1 │        0 │ default │                  │            0 │                       0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

10 rows in set. Elapsed: 0.018 sec.
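
To look at only the newly defined cluster instead of the full list, the output can be filtered; this is just a convenience query, not part of the original setup:

-- show only the 1-shard / 2-replica cluster defined above
SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local
FROM system.clusters
WHERE cluster = 'cluster_1shards_2replicas';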

Testing Data Replication

Create the local table cluster1s2r_local on both node01 and node02:

CREATE TABLE default.cluster1s2r_local
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);
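
The statement above uses the legacy MergeTree(date_column, primary_key, index_granularity) syntax. For reference only, an assumed equivalent in the newer explicit syntax (using a hypothetical table name so it does not clash) would look roughly like this:

-- Sketch of the same layout in modern MergeTree syntax (assumed equivalent, hypothetical name)
CREATE TABLE default.cluster1s2r_local_v2
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    `Year` UInt16
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(FlightDate)   -- the legacy form partitions by month of the date column
ORDER BY (Year, FlightDate)
SETTINGS index_granularity = 8192;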

Create the distributed table on node01; note the cluster name:

CREATE TABLE default.cluster1s2r_all AS cluster1s2r_local
ENGINE = Distributed(cluster_1shards_2replicas, default, cluster1s2r_local, rand());

When data is inserted into the distributed table cluster1s2r_all, every row is written to cluster1s2r_local on both node01 and node02.

Insert data:

INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year)values(1,'https://niocoder.com/','java 干货','2020-11-28',2020);
INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year)values(2,'http://www.merryyou.cn/','javaganhuo','2020-11-28',2020);
INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year)values(3,'http://www.xxxxx.cn/','xxxxx','2020-11-28',2020);

Query the distributed table and the local tables:

node01 :) select * from cluster1s2r_all; # query the distributed table

SELECT *
FROM cluster1s2r_all

┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│  3 │ http://www.xxxxx.cn/ │ xxxxx  │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘
┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.018 sec. 

node01 :) select * from cluster1s2r_local; # query the local table on node01

SELECT *
FROM cluster1s2r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│  3 │ http://www.xxxxx.cn/ │ xxxxx  │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.015 sec. 

node02 :)  select * from cluster1s2r_local;  # query the local table on node02

SELECT *
FROM cluster1s2r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│  3 │ http://www.xxxxx.cn/ │ xxxxx  │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘

3 rows in set. Elapsed: 0.007 sec. 

The local table cluster1s2r_local on both node01 and node02 holds the full dataset, so even if one of the nodes is stopped, no data is lost: the data replica is in effect.
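
A quick way to confirm this is to compare row counts. The check below simply assumes the three test rows inserted above; run it on both nodes:

-- expected: 3 from the local table on node01, 3 on node02, and 3 through the distributed table
SELECT count() FROM default.cluster1s2r_local;
SELECT count() FROM default.cluster1s2r_all;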

Replica Consistency Issues

With multiple replicas comes a consistency question: suppose one machine dies while data is being written, what happens?

Simulate one node being down while writing to the distributed table:

  1. Stop the clickhouse-server service on node02:
    service clickhouse-server stop
  2. On node01, insert a row into the distributed table cluster1s2r_all:

    INSERT INTO default.cluster1s2r_all (id,website,wechat,FlightDate,Year)values(4,'http://www.yyyyyy.cn/','yyyyy','2020-11-29',2020);

  3. Start the clickhouse-server service on node02 again.
  4. Verify whether the data was synchronized:
    check cluster1s2r_local and cluster1s2r_all on both node01 and node02; the total row count has increased by 1 everywhere, which shows that in this case the cluster nodes synchronize automatically.

The test above wrote through the distributed table cluster1s2r_all. If the data is written through the local table cluster1s2r_local instead, is it still synchronized?

  1. Insert one row into cluster1s2r_local on node01;
  2. Query cluster1s2r_local on node02: the data has not been synchronized.

In summary, writes through the distributed table are synchronized automatically, while writes through a local table are not; in most cases this is not a big problem.
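
The whole comparison can be condensed into a short check. The sketch below assumes the tables created earlier and uses an illustrative row (id 5 and a placeholder URL that are not in the original data):

-- write through the LOCAL table on node01; this row stays on node01 only
INSERT INTO default.cluster1s2r_local (id,website,wechat,FlightDate,Year) values (5,'http://example.local/','local-only','2020-11-29',2020);

-- on node02: the local count does not change, whereas a write through
-- cluster1s2r_all (as in the previous test) shows up on both nodes
SELECT count() FROM default.cluster1s2r_local;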

However, production is always messier than theory, and the configuration above can lead to inconsistent data.

The official documentation describes it as follows:

Each shard can have the internal_replication parameter defined in the config file.

If this parameter is set to true, the write operation selects the first healthy replica and writes data to it. Use this alternative if the Distributed table "looks at" replicated tables. In other words, if the table where data will be written is going to replicate them itself.

If it is set to false (the default), data is written to all replicas. In essence, this means that the Distributed table replicates data itself. This is worse than using replicated tables, because the consistency of replicas is not checked, and over time they will contain slightly different data.

A simpler way to put it:

When it is set to true, one healthy replica is chosen for the write, and the data is then synchronized to the other replicas in the background (via ZooKeeper and the Replicated table engines).
When it is set to false, the data is written to all replicas at once; this repeated writing is how the replicas are kept in sync.

Automatic Data Replication

Automatic replication is table-level behavior: tables whose engine is Replicated* synchronize automatically.

The Replicated prefix is only available for the MergeTree family (MergeTree being the most commonly used engine).

Important: automatic synchronization of Replicated tables is different from the cluster-level synchronization above. It is a property of the table and has nothing to do with the <clickhouse_remote_servers> section in metrika.xml; a ZooKeeper configuration is all that is required.

Edit the metrika.xml configuration on node01:

<yandex>
<zookeeper-servers>
        <node index="1">
            <host>node01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node03</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>

Distribute the modified configuration to node02:

[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml   

Restart clickhouse-server. The startup fails because the tables created earlier still exist and their metadata references the old cluster name, which is no longer defined. The error log:

[root@node01 clickhouse-server]# tail -f /var/log/clickhouse-server/clickhouse-server.err.log 
7. DB::StorageDistributed::startup() @ 0x10f1bd40 in /usr/bin/clickhouse
8. ? @ 0x1151d922 in /usr/bin/clickhouse
9. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa43d6ad in /usr/bin/clickhouse
10. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa43dd93 in /usr/bin/clickhouse
11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa43cc4d in /usr/bin/clickhouse
12. ? @ 0xa43b3ff in /usr/bin/clickhouse
13. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
14. clone @ 0xfe8dd in /usr/lib64/libc-2.17.so
 (version 20.8.3.18)
2020.11.29 14:43:01.163530 [3643] {} <Error> Application: DB::Exception: Requested cluster 'cluster_1shards_2replicas' not found: while loading database `default` from path /var/lib/clickhouse/metadata/default

Delete the stored CREATE statements of the old tables:

[root@node01 default]# rm -rf /var/lib/clickhouse/metadata/default/*.sql

Start clickhouse-server.

Create the table on both the node01 and node02 nodes:

-- on node01
CREATE TABLE `cluster_zk` 
    (
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
 ENGINE = ReplicatedMergeTree('/clickhouse/tables/cluster_zk', 'replica01', FlightDate, (Year, FlightDate), 8192);

-- on node02
CREATE TABLE `cluster_zk` 
    (
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/cluster_zk', 'replica02', FlightDate, (Year, FlightDate), 8192);

Insert a row on node01:

INSERT INTO default.cluster_zk (id,website,wechat,FlightDate,Year)values(1,'https://niocoder.com/','java 干货','2020-11-28',2020);

Query the data on node01 and node02:


node01 :) select * from cluster_zk; # on node01

SELECT *
FROM cluster_zk

┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

1 rows in set. Elapsed: 0.004 sec. 

node02 :) select * from cluster_zk; # on node02

SELECT *
FROM cluster_zk

┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

1 rows in set. Elapsed: 0.004 sec. 

Check the ZooKeeper metadata:

[zk: localhost:2181(CONNECTED) 2] ls /clickhouse/tables/cluster_zk/replicas
[replica02, replica01]
[zk: localhost:2181(CONNECTED) 3]
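
The same state can also be checked from inside ClickHouse through the system.replicas table; this verification query is an addition here, assuming these columns are available in version 20.8:

SELECT database, table, replica_name, total_replicas, active_replicas
FROM system.replicas
WHERE table = 'cluster_zk';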

Cluster Configuration with Automatic Replication

Edit metrika.xml on node01; note that internal_replication is set to true here:

<yandex>
<clickhouse_remote_servers>
    <perftest_1shards_2replicas>
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>node01</host>
                        <port>9000</port>
                    </replica>
                    <replica>
                        <host>node02</host>
                        <port>9000</port>
                    </replica>
                </shard>
    </perftest_1shards_2replicas>
</clickhouse_remote_servers>
<zookeeper-servers>
        <node index="1">
            <host>node01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node03</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>

Distribute the modified configuration to node02:

[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml   

Check the cluster information:

node01 :) select * from system.clusters;

SELECT *
FROM system.clusters

┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_1shards_2replicas        │         1 │            1 │           1 │ node01    │ 192.168.10.100 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ perftest_1shards_2replicas        │         1 │            1 │           2 │ node02    │ 192.168.10.110 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards           │         1 │            1 │           1 │ 127.0.0.1 │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards           │         2 │            1 │           1 │ 127.0.0.2 │ 127.0.0.2      │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost │         2 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost              │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost_secure       │         1 │            1 │           1 │ localhost │ ::1            │ 9440 │        0 │ default │                  │            0 │                       0 │
│ test_unavailable_shard            │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_unavailable_shard            │         2 │            1 │           1 │ localhost │ ::1            │    1 │        0 │ default │                  │            0 │                       0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

10 rows in set. Elapsed: 0.018 sec. 

Create the distributed table:

CREATE TABLE default.clusterzk_all AS cluster_zk
ENGINE = Distributed(perftest_1shards_2replicas, default, cluster_zk, rand());

Query data through the distributed table:

node01 :) select * from clusterzk_all;

SELECT *
FROM clusterzk_all

┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

1 rows in set. Elapsed: 0.020 sec. 

Writing Through the Distributed Table

As mentioned above, when internal_replication is true, a write through the distributed table automatically picks the "healthiest" replica, and the other replicas then catch up through the table's own replication, so the data eventually becomes consistent.
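
As a quick sanity check, a write through clusterzk_all should end up on both replicas. The sketch below reuses the sample values from earlier sections as a second illustrative row:

-- write through the Distributed table; with internal_replication = true the row goes to one
-- replica and is copied to the other via the ReplicatedMergeTree / ZooKeeper path
INSERT INTO default.clusterzk_all (id,website,wechat,FlightDate,Year) values (2,'http://www.merryyou.cn/','javaganhuo','2020-11-28',2020);

-- run on both node01 and node02; both replicas should (eventually) return the same count
SELECT count() FROM default.cluster_zk;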

Combining Sharding and Replication

IP              Host    ClickHouse port  Shard/Replica
192.168.10.100  node01  9000             01/01
192.168.10.100  node01  9001             03/02
192.168.10.110  node02  9000             02/01
192.168.10.110  node02  9001             01/02
192.168.10.120  node03  9000             03/01
192.168.10.120  node03  9001             02/02

That is, 3 shards with 2 replicas each.

Start a second clickhouse-server instance on port 9001 on each of node01, node02, and node03.

The two replicas of shard 1 live on node01:9000 and node02:9001, the two replicas of shard 2 on node02:9000 and node03:9001, and the two replicas of shard 3 on node03:9000 and node01:9001.

Create and edit config1.xml on node01:

[root@node01 clickhouse-server]# cp /etc/clickhouse-server/config.xml  /etc/clickhouse-server/config1.xml
[root@node01 clickhouse-server]# vim /etc/clickhouse-server/config1.xml

Modify the following settings:

<?xml version="1.0"?>
<yandex>
    <!-- other settings omitted -->
    <http_port>8124</http_port>
    <tcp_port>9001</tcp_port>
    <mysql_port>9005</mysql_port>
    <interserver_http_port>9010</interserver_http_port>
     <log>/var/log/clickhouse-server/clickhouse-server-1.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err-1.log</errorlog>
     <!-- Path to data directory, with trailing slash. -->
    <path>/var/lib/clickhouse1/</path>
    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/var/lib/clickhouse1/tmp/</tmp_path>
     <user_files_path>/var/lib/clickhouse1/user_files/</user_files_path>
     <format_schema_path>/var/lib/clickhouse1/format_schemas/</format_schema_path>
    <include_from>/etc/clickhouse-server/metrika1.xml</include_from>
     <!-- other settings omitted -->
</yandex>

Create and edit metrika.xml on node01:

<yandex>
<!-- ClickHouse cluster nodes -->
<clickhouse_remote_servers>
    <!-- cluster name -->
    <perftest_3shards_2replicas>
        <shard>
             <internal_replication>true</internal_replication>
            <replica>
                <host>node01</host>
                <port>9000</port>
            </replica>
        <replica>
                <host>node02</host>
                <port>9001</port>
            </replica>
        </shard>
        <shard>
       <internal_replication>true</internal_replication>
            <replica>
                <host>node02</host>
                <port>9000</port>
            </replica>
        <replica>
                <host>node03</host>
                <port>9001</port>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>node03</host>
                <port>9000</port>
            </replica>
        <replica>
                <host>node01</host>
                <port>9001</port>
            </replica>
        </shard>
    </perftest_3shards_2replicas>
</clickhouse_remote_servers>

<!-- ZooKeeper configuration -->
<zookeeper-servers>
    <node index="1">
        <host>node01</host>
        <port>2181</port>
    </node>
    <node index="2">
        <host>node02</host>
        <port>2181</port>
    </node>
    <node index="3">
        <host>node03</host>
        <port>2181</port>
    </node>
</zookeeper-servers>

<macros>
    <shard>01</shard>  <!-- shard ID; replicas of the same shard use the same shard ID -->
    <replica>node01</replica> <!-- hostname of the current node -->
</macros>

<networks>
    <ip>::/0</ip>
</networks>

<!-- compression settings -->
<clickhouse_compression>
    <case>
        <min_part_size>10000000000</min_part_size>
        <min_part_size_ratio>0.01</min_part_size_ratio>
        <method>lz4</method> <!-- lz4 compresses faster than zstd but with a lower ratio, so it uses more disk -->
    </case>
</clickhouse_compression>
</yandex>

Copy metrika.xml to metrika1.xml and adjust the macros section. The values for each file are:

node01, metrika.xml:
<macros>
    <shard>01</shard>  <!-- shard ID; replicas of the same shard use the same shard ID -->
    <replica>node01</replica> <!-- hostname of the current node -->
</macros>

node01, metrika1.xml:
<macros>
    <shard>03</shard>
    <replica>node01</replica>
</macros>

node02, metrika.xml:
<macros>
    <shard>02</shard>
    <replica>node02</replica>
</macros>

node02, metrika1.xml:
<macros>
    <shard>01</shard>
    <replica>node02</replica>
</macros>

node03, metrika.xml:
<macros>
    <shard>03</shard>
    <replica>node03</replica>
</macros>

node03, metrika1.xml:
<macros>
    <shard>02</shard>
    <replica>node03</replica>
</macros>

Create and edit the clickhouse-server-1 init script:

[root@node01 clickhouse-server]# cp /etc/rc.d/init.d/clickhouse-server  /etc/rc.d/init.d/clickhouse-server-1
You have new mail in /var/spool/mail/root
[root@node01 clickhouse-server]# vim  /etc/rc.d/init.d/clickhouse-server-1    

Modify the following lines:

CLICKHOUSE_CONFIG=$CLICKHOUSE_CONFDIR/config1.xml
CLICKHOUSE_PIDFILE="$CLICKHOUSE_PIDDIR/$PROGRAM-1.pid"

Distribute the configuration files to node02 and node03:

[root@node01 clickhouse-server]# scp /etc/rc.d/init.d/clickhouse-server-1 node02:/etc/rc.d/init.d/
clickhouse-server-1                                                                                                                                          100%   11KB   4.0MB/s   00:00    
You have new mail in /var/spool/mail/root
[root@node01 clickhouse-server]# scp /etc/rc.d/init.d/clickhouse-server-1 node03:/etc/rc.d/init.d/
clickhouse-server-1                                                                                                                                          100%   11KB   4.0MB/s   00:00    

[root@node01 clickhouse-server]# scp /etc/clickhouse-server/config1.xml node02:$PWD
config1.xml                                                                                                                                                  100%   33KB  10.2MB/s   00:00    
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/config1.xml node03:$PWD
config1.xml                                                                                                                                                  100%   33KB   9.7MB/s   00:00   
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node02:$PWD
metrika.xml                                                                                                                                                  100% 2008     1.0MB/s   00:00    
You have new mail in /var/spool/mail/root
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika.xml node03:$PWD
metrika.xml                                                                                                                                                  100% 2008     1.1MB/s   00:00    
[root@node01 clickhouse-server]# 
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika1.xml node02:$PWD
metrika1.xml                                                                                                                                                 100% 2008     1.0MB/s   00:00    
[root@node01 clickhouse-server]# scp /etc/clickhouse-server/metrika1.xml node03:$PWD
metrika1.xml                  

Update the macros configuration on node02 and node03 (see the values listed above).

Start the ClickHouse instances:

The clickhouse-server-1 instance on node01

The clickhouse-server-1 instance on node02

The clickhouse-server instance on node03

The clickhouse-server-1 instance on node03

service clickhouse-server restart
service clickhouse-server-1 restart
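
Once both instances are running on every node, each instance can confirm the macro values it picked up; the query below is a small verification step, assuming the system.macros table is available in this version:

-- run against each instance, e.g. clickhouse-client --port 9000 and --port 9001
SELECT * FROM system.macros;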

Check the cluster information:

node01 :) select * from system.clusters;

SELECT *
FROM system.clusters

┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_2replicas        │         1 │            1 │           1 │ node01    │ 192.168.10.100 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ perftest_3shards_2replicas        │         1 │            1 │           2 │ node02    │ 192.168.10.110 │ 9001 │        0 │ default │                  │            0 │                       0 │
│ perftest_3shards_2replicas        │         2 │            1 │           1 │ node02    │ 192.168.10.110 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ perftest_3shards_2replicas        │         2 │            1 │           2 │ node03    │ 192.168.10.120 │ 9001 │        0 │ default │                  │            0 │                       0 │
│ perftest_3shards_2replicas        │         3 │            1 │           1 │ node03    │ 192.168.10.120 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ perftest_3shards_2replicas        │         3 │            1 │           2 │ node01    │ 192.168.10.100 │ 9001 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards           │         1 │            1 │           1 │ 127.0.0.1 │ 127.0.0.1      │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards           │         2 │            1 │           1 │ 127.0.0.2 │ 127.0.0.2      │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards_localhost │         2 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost              │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_shard_localhost_secure       │         1 │            1 │           1 │ localhost │ ::1            │ 9440 │        0 │ default │                  │            0 │                       0 │
│ test_unavailable_shard            │         1 │            1 │           1 │ localhost │ ::1            │ 9000 │        1 │ default │                  │            0 │                       0 │
│ test_unavailable_shard            │         2 │            1 │           1 │ localhost │ ::1            │    1 │        0 │ default │                  │            0 │                       0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

14 rows in set. Elapsed: 0.019 sec.

Testing the Sharded and Replicated Cluster

Create the replicated table. Running the statement on node01 is enough; because of ON CLUSTER, the table is created on every other node automatically.

CREATE TABLE `cluster32r_local` ON cluster perftest_3shards_2replicas
    (
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    Year UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime','{replica}', FlightDate, (Year, FlightDate), 8192);

The statement is executed on every host of the cluster, and the client reports the DDL status for each one:

CREATE TABLE cluster32r_local ON CLUSTER perftest_3shards_2replicas
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    `Year` UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime', '{replica}', FlightDate, (Year, FlightDate), 8192)

┌─host───┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ node03 │ 9001 │      0 │       │                   5 │                0 │
│ node03 │ 9000 │      0 │       │                   4 │                0 │
│ node01 │ 9001 │      0 │       │                   3 │                0 │
│ node01 │ 9000 │      0 │       │                   2 │                0 │
│ node02 │ 9000 │      0 │       │                   1 │                0 │
└────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host───┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ node02 │ 9001 │      0 │       │                   0 │                0 │
└────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

6 rows in set. Elapsed: 46.994 sec. 

Create the distributed table:

CREATE TABLE cluster32r_all AS cluster32r_local ENGINE = Distributed(perftest_3shards_2replicas, default, cluster32r_local, rand());
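
Here the distributed table exists only on the node where the statement is executed. A possible variation, not used in the original walkthrough, is to create it ON CLUSTER as well so that every instance can serve distributed queries:

-- variant sketch: define the Distributed table on every host of the cluster
CREATE TABLE cluster32r_all ON CLUSTER perftest_3shards_2replicas
AS cluster32r_local
ENGINE = Distributed(perftest_3shards_2replicas, default, cluster32r_local, rand());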

Insert data into the first replica of shard 1 (node01, port 9000); it can then be read from the second replica (node02, port 9001):

INSERT INTO default.cluster32r_local (id,website,wechat,FlightDate,Year)values(1,'https://niocoder.com/','java 干货','2020-11-28',2020);
INSERT INTO default.cluster32r_local (id,website,wechat,FlightDate,Year)values(2,'http://www.merryyou.cn/','javaganhuo','2020-11-28',2020);

Connect a client to the node02 instance on port 9001 and check:

[root@node02 ~]# clickhouse-client --port 9001 -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9001 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.

node02 :) show tables;

SHOW TABLES

┌─name─────────────┐
│ cluster32r_local │
└──────────────────┘

1 rows in set. Elapsed: 0.010 sec. 

node02 :) select * from cluster32r_local;

SELECT *
FROM cluster32r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

2 rows in set. Elapsed: 0.018 sec. 

Query through the distributed table:

node01 :) select * from cluster32r_all;

SELECT *
FROM cluster32r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat───┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ java 干货 │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴──────────┴────────────┴──────┘

2 rows in set. Elapsed: 0.030 sec. 

All replica nodes can read and write data through both the local table and the distributed table.
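
As a final check, sketched under the assumption of the two rows inserted above, compare the total seen through the distributed table with the share held by each local replica:

-- total across all shards, from any node
SELECT count() FROM default.cluster32r_all;

-- run on each instance (ports 9000 and 9001) to see that replica's local share;
-- replicas of the same shard should report identical counts
SELECT count() FROM default.cluster32r_local;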

Download

Follow the WeChat official account java 干货 and reply with [clickhouse].
