On ClickHouse: Source Code Analysis — ClickHouse and Friends (12): The Magic of Materialized Views and How They Work


First published on 2020-09-03 21:22:14.

The "ClickHouse and Friends" series is reposted from the blog of BohuTANG, a friend in the community; original link:
https://bohutang.me/2020/08/3…
The main text follows.

In ClickHouse, the materialized view (Materialized View) is a magical and powerful feature with an unusually wide range of uses.

This article analyzes the underlying mechanism to see how ClickHouse's Materialized View works, so that we can use it better.

What Is a Materialized View

To most people, the concept of a materialized view sounds abstract. Materialized? View? ...

To understand it better, let's start with a scenario.

Suppose you are a "happy" little programmer at *hub, and one day the product manager brings a requirement: real-time statistics of video downloads per hour.

The user download detail table:

clickhouse> SELECT * FROM download LIMIT 10;
+---------------------+--------+--------+
| when                | userid | bytes  |
+---------------------+--------+--------+
| 2020-08-31 18:22:06 |     19 | 530314 |
| 2020-08-31 18:22:06 |     19 | 872957 |
| 2020-08-31 18:22:06 |     19 | 107047 |
| 2020-08-31 18:22:07 |     19 | 214876 |
| 2020-08-31 18:22:07 |     19 | 820943 |
| 2020-08-31 18:22:07 |     19 | 693959 |
| 2020-08-31 18:22:08 |     19 | 882151 |
| 2020-08-31 18:22:08 |     19 | 644223 |
| 2020-08-31 18:22:08 |     19 | 199800 |
| 2020-08-31 18:22:09 |     19 | 511439 |
...

Compute downloads per hour:

clickhouse> SELECT toStartOfHour(when) AS hour, userid, count() as downloads, sum(bytes) AS bytes FROM download GROUP BY userid, hour ORDER BY userid, hour;
+---------------------+--------+-----------+------------+
| hour                | userid | downloads | bytes      |
+---------------------+--------+-----------+------------+
| 2020-08-31 18:00:00 |     19 |      6822 | 3378623036 |
| 2020-08-31 19:00:00 |     19 |     10800 | 5424173178 |
| 2020-08-31 20:00:00 |     19 |     10800 | 5418656068 |
| 2020-08-31 21:00:00 |     19 |     10800 | 5404309443 |
| 2020-08-31 22:00:00 |     19 |     10800 | 5354077456 |
| 2020-08-31 23:00:00 |     19 |     10800 | 5390852563 |
| 2020-09-01 00:00:00 |     19 |     10800 | 5369839540 |
| 2020-09-01 01:00:00 |     19 |     10800 | 5384161012 |
| 2020-09-01 02:00:00 |     19 |     10800 | 5404581759 |
| 2020-09-01 03:00:00 |     19 |      6778 | 3399557322 |
+---------------------+--------+-----------+------------+
10 rows in set (0.13 sec)

Easy enough. But there is a problem: every query has to recompute from the base download table, and at *hub's data volume that is unbearable.

Here is an idea: pre-aggregate download, store the result in a new table download_hour_mv, and keep it updated in real time as download grows incrementally. Then every query can simply read download_hour_mv.

This new table can be regarded as a materialized view; in ClickHouse it is an ordinary table.
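The pre-aggregation the view will maintain can be sketched in a few lines of Python — a minimal model of the `GROUP BY (userid, hour)` above, with a handful of made-up sample rows (the data is illustrative, not from the real table):

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical sample rows (when, userid, bytes), mirroring the download table.
rows = [
    (datetime(2020, 8, 31, 18, 22, 6), 19, 530314),
    (datetime(2020, 8, 31, 18, 22, 7), 19, 214876),
    (datetime(2020, 8, 31, 19, 1, 0), 19, 100000),
]

def to_start_of_hour(ts):
    # Equivalent of ClickHouse's toStartOfHour().
    return ts.replace(minute=0, second=0, microsecond=0)

# Pre-aggregate: (userid, hour) -> [downloads, bytes]
agg = defaultdict(lambda: [0, 0])
for when, userid, nbytes in rows:
    key = (userid, to_start_of_hour(when))
    agg[key][0] += 1          # count() AS downloads
    agg[key][1] += nbytes     # sum(bytes) AS bytes

for (userid, hour), (downloads, total) in sorted(agg.items()):
    print(userid, hour, downloads, total)
```

Querying this small aggregate is O(number of hours), not O(number of downloads) — which is exactly the point of materializing it.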

Creating the Materialized View

clickhouse> CREATE MATERIALIZED VIEW download_hour_mv
ENGINE = SummingMergeTree
PARTITION BY toYYYYMM(hour) ORDER BY (userid, hour)
AS SELECT
  toStartOfHour(when) AS hour,
  userid,
  count() as downloads,
  sum(bytes) AS bytes
FROM download WHERE when >= toDateTime('2020-09-01 04:00:00')
GROUP BY userid, hour

This statement mainly does the following:

  • Creates a materialized view download_hour_mv with the SummingMergeTree engine
  • The view's data comes from the download table and is "materialized" according to the expressions in the SELECT statement
  • Picks a future time (the current time is 2020-08-31 18:00:00) as the starting point, WHERE when >= toDateTime('2020-09-01 04:00:00'), meaning only data written after 2020-09-01 04:00:00 will be synced to download_hour_mv

For now, download_hour_mv is therefore an empty table:

clickhouse> SELECT * FROM download_hour_mv ORDER BY userid, hour;
Empty set (0.02 sec)

Note: the official syntax has a POPULATE keyword, but it is not recommended: any data written to download while the view is being created would be lost. That is also why we add a WHERE clause as the synchronization cutoff.
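Why does a fixed cutoff avoid both data loss and double counting? A small sketch (timestamps are made up for illustration): rows keep arriving during the backfill, but each row satisfies exactly one of `when < cutoff` (covered by the manual backfill) and `when >= cutoff` (covered by the view's trigger):

```python
# Sketch of the clean hand-off a fixed WHERE cutoff provides.
cutoff = "2020-09-01 04:00:00"

# Rows keep arriving while the view is being created/backfilled.
all_rows = [
    ("2020-09-01 03:59:58", 19),
    ("2020-09-01 03:59:59", 19),
    ("2020-09-01 04:00:00", 19),  # first row the view itself captures
    ("2020-09-01 04:00:01", 19),
]

backfill    = [r for r in all_rows if r[0] <  cutoff]  # manual INSERT ... SELECT
incremental = [r for r in all_rows if r[0] >= cutoff]  # handled by the view trigger

# Every row lands in exactly one path: no loss, no double counting.
assert sorted(backfill + incremental) == sorted(all_rows)
assert not set(backfill) & set(incremental)
```

With POPULATE there is no such boundary, so rows inserted during view creation can fall into neither path.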

So how do we sync the source table's existing data to download_hour_mv consistently?

Materializing the Full Historical Data

After 2020-09-01 04:00:00, we can materialize the historical data of download with an INSERT INTO ... SELECT whose WHERE clause snapshots everything before the cutoff:

clickhouse> INSERT INTO download_hour_mv
SELECT
  toStartOfHour(when) AS hour,
  userid,
  count() as downloads,
  sum(bytes) AS bytes
FROM download WHERE when < toDateTime('2020-09-01 04:00:00')
GROUP BY userid, hour

Query the materialized view:

clickhouse> SELECT * FROM download_hour_mv ORDER BY hour, userid, downloads DESC;
+---------------------+--------+-----------+------------+
| hour                | userid | downloads | bytes      |
+---------------------+--------+-----------+------------+
| 2020-08-31 18:00:00 |     19 |      6822 | 3378623036 |
| 2020-08-31 19:00:00 |     19 |     10800 | 5424173178 |
| 2020-08-31 20:00:00 |     19 |     10800 | 5418656068 |
| 2020-08-31 21:00:00 |     19 |     10800 | 5404309443 |
| 2020-08-31 22:00:00 |     19 |     10800 | 5354077456 |
| 2020-08-31 23:00:00 |     19 |     10800 | 5390852563 |
| 2020-09-01 00:00:00 |     19 |     10800 | 5369839540 |
| 2020-09-01 01:00:00 |     19 |     10800 | 5384161012 |
| 2020-09-01 02:00:00 |     19 |     10800 | 5404581759 |
| 2020-09-01 03:00:00 |     19 |      6778 | 3399557322 |
+---------------------+--------+-----------+------------+
10 rows in set (0.05 sec)

We can see the data has been "materialized" into download_hour_mv.

Materializing Incremental Data

Write some data into the download table:

clickhouse> INSERT INTO download
       SELECT
         toDateTime('2020-09-01 04:00:00') + number*(1/3) as when,
         19,
         rand() % 1000000
       FROM system.numbers
       LIMIT 10;

Query the materialized view download_hour_mv:

clickhouse> SELECT * FROM download_hour_mv ORDER BY hour, userid, downloads;
+---------------------+--------+-----------+------------+
| hour                | userid | downloads | bytes      |
+---------------------+--------+-----------+------------+
| 2020-08-31 18:00:00 |     19 |      6822 | 3378623036 |
| 2020-08-31 19:00:00 |     19 |     10800 | 5424173178 |
| 2020-08-31 20:00:00 |     19 |     10800 | 5418656068 |
| 2020-08-31 21:00:00 |     19 |     10800 | 5404309443 |
| 2020-08-31 22:00:00 |     19 |     10800 | 5354077456 |
| 2020-08-31 23:00:00 |     19 |     10800 | 5390852563 |
| 2020-09-01 00:00:00 |     19 |     10800 | 5369839540 |
| 2020-09-01 01:00:00 |     19 |     10800 | 5384161012 |
| 2020-09-01 02:00:00 |     19 |     10800 | 5404581759 |
| 2020-09-01 03:00:00 |     19 |      6778 | 3399557322 |
| 2020-09-01 04:00:00 |     19 |        10 |    5732600 |
+---------------------+--------+-----------+------------+
11 rows in set (0.00 sec)

The last row is exactly the materialized aggregate of our incremental insert, already synced in real time. How is this done?

How Materialized Views Work

The principle behind ClickHouse materialized views is not complicated: when new data is written to the download table, if materialized views are associated with it, ClickHouse runs the materialization over exactly that batch of inserted rows.

For example, the new rows above were generated by the following SQL:

clickhouse> SELECT
    ->          toDateTime('2020-09-01 04:00:00') + number*(1/3) as when,
    ->          19,
    ->          rand() % 1000000
    ->        FROM system.numbers
    ->        LIMIT 10;
+---------------------+------+-------------------------+
| when                | 19   | modulo(rand(), 1000000) |
+---------------------+------+-------------------------+
| 2020-09-01 04:00:00 |   19 |                  870495 |
| 2020-09-01 04:00:00 |   19 |                  322270 |
| 2020-09-01 04:00:00 |   19 |                  983422 |
| 2020-09-01 04:00:01 |   19 |                  759708 |
| 2020-09-01 04:00:01 |   19 |                  975636 |
| 2020-09-01 04:00:01 |   19 |                  365507 |
| 2020-09-01 04:00:02 |   19 |                  865569 |
| 2020-09-01 04:00:02 |   19 |                  975742 |
| 2020-09-01 04:00:02 |   19 |                   85827 |
| 2020-09-01 04:00:03 |   19 |                  992779 |
+---------------------+------+-------------------------+
10 rows in set (0.02 sec)

The materialized view then executes a statement similar to:

INSERT INTO download_hour_mv
SELECT
  toStartOfHour(when) AS hour,
  userid,
  count() as downloads,
  sum(bytes) AS bytes
FROM [新增的 10 条数据] WHERE when >= toDateTime('2020-09-01 04:00:00')
GROUP BY userid, hour
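The per-block push mechanism above can be modeled in a few lines of Python. Names like `view_transform` and `insert_into_source` are illustrative, not ClickHouse's actual API: each INSERT block is run through the view's SELECT and the result is appended to the view's inner table.

```python
def view_transform(block):
    # Stand-in for "SELECT toStartOfHour(when), userid, count(), sum(bytes)
    # ... GROUP BY userid, hour" applied to one inserted block only.
    agg = {}
    for when, userid, nbytes in block:
        hour = when[:13] + ":00:00"  # crude toStartOfHour on "YYYY-MM-DD HH:MM:SS"
        d = agg.setdefault((userid, hour), [0, 0])
        d[0] += 1
        d[1] += nbytes
    return [(hour, userid, c, b) for (userid, hour), (c, b) in agg.items()]

source_table, mv_table = [], []

def insert_into_source(block):
    source_table.extend(block)              # write to the source table
    mv_table.extend(view_transform(block))  # push the same block through the view

insert_into_source([
    ("2020-09-01 04:00:00", 19, 870495),
    ("2020-09-01 04:00:00", 19, 322270),
])
print(mv_table)  # one pre-aggregated row for hour 04:00:00
```

Note the transform only ever sees the newly inserted block, never the whole source table — merging partial aggregates across blocks is left to the target table's engine.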

Code navigation:

  1. Add the view OutputStream, InterpreterInsertQuery.cpp

    if (table->noPushingToViews() && !no_destination)
        out = table->write(query_ptr, metadata_snapshot, context);
    else
        out = std::make_shared<PushingToViewsBlockOutputStream>(table, metadata_snapshot, context, query_ptr, no_destination);
  2. Construct the Insert, PushingToViewsBlockOutputStream.cpp

    ASTPtr insert_query_ptr(insert.release());
    InterpreterInsertQuery interpreter(insert_query_ptr, *insert_context);
    BlockIO io = interpreter.execute();
    out = io.out;
  3. Materialize the newly inserted data, PushingToViewsBlockOutputStream.cpp

    Context local_context = *select_context;
    local_context.addViewSource(
        StorageValues::create(storage->getStorageID(), metadata_snapshot->getColumns(), block, storage->getVirtuals()));
    select.emplace(view.query, local_context, SelectQueryOptions());
    in = std::make_shared<MaterializingBlockInputStream>(select->execute().getInputStream());

Summary

Materialized views have many uses.

For example, they can work around table-index limitations: we can use a materialized view to maintain the data in another physical order, serving queries the base table's order cannot.

Their real-time sync capability also lets us change table structures much more flexibly.

Even more powerful, combined with the MergeTree family of engines (SummingMergeTree, AggregatingMergeTree, etc.), they give us real-time pre-aggregation for fast queries.

The principle: incremental data is processed according to the AS SELECT ... definition and written into the materialized view's table. A materialized view is an ordinary table that can be read from and written to directly.
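The division of labor with SummingMergeTree can be sketched as well: the view writes one partial aggregate per inserted block, and the engine sums rows sharing the same sorting key (userid, hour) when parts are merged. A minimal model (the merge trigger and on-the-fly part handling are simplified away):

```python
from collections import defaultdict

def summing_merge(parts):
    # Rows with the same sorting key (userid, hour) have their
    # numeric columns (downloads, bytes) summed together.
    merged = defaultdict(lambda: [0, 0])
    for part in parts:
        for userid, hour, downloads, nbytes in part:
            merged[(userid, hour)][0] += downloads
            merged[(userid, hour)][1] += nbytes
    return [(u, h, d, b) for (u, h), (d, b) in merged.items()]

# Two inserts into the same hour produce two parts; a merge collapses them.
part1 = [(19, "2020-09-01 04:00:00", 10, 5732600)]
part2 = [(19, "2020-09-01 04:00:00", 5, 1000000)]
print(summing_merge([part1, part2]))
```

This is also why queries against a SummingMergeTree target often still need a GROUP BY (or the `-Merge` combinators with AggregatingMergeTree): merges are asynchronous, so not-yet-merged parts may hold several partial rows per key.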


Welcome to follow my WeChat official account 【数据库内核】 (Database Kernel), where I share technology on mainstream open-source databases and storage engines.

Site URL
GitHub https://dbkernel.github.io
Zhihu https://www.zhihu.com/people/…
SegmentFault https://segmentfault.com/u/db…
Juejin https://juejin.im/user/5e9d3e…
OSChina https://my.oschina.net/dbkernel
Cnblogs https://www.cnblogs.com/dbkernel
