When you saw the title of this post, you already knew the following:

(1) What went wrong? OK: disk usage on all of the cluster's KV nodes exceeded 80%, triggering a flood of alerts in the middle of the night. Can we DBAs still get a good night's sleep?

(2) Who did it? OK: GC stopped doing its job.

(3) What was the actual root cause? OK: TiCDC takes the blame.

So that's it, the whole article in three lines; will everyone leave now? Please don't go just yet, and let me walk you through the details. There is real substance ahead: what exactly I did to bring disk usage down, why GC stopped working, how GC works in the first place, and why TiCDC is the one to blame.
The late-night alerts and the first response
At 11 p.m. I received several disk-space alerts for the TiKV cluster, so I got up and opened my laptop to see what was going on. When TiKV eats disk space, it is usually for one of the following reasons:

- Disk space was already tight to begin with; did some business line "quietly" load a batch of data overnight again? This can usually be spotted on the Grafana overview dashboard.
- TiKV logs blew up. I once hit a case where, after a cluster upgrade, enabling hibernate regions caused excessive TiKV log volume. Check how the TiKV logs are growing; if logs are the problem, simply rm them. Note that TiKV also supports a log retention policy that deletes old logs automatically.
- Something else.

This incident fell into the "something else" bucket: the cluster already had many users, there was no write spike, and the TiKV logs showed nothing unusual. Since I knew it was a test cluster, I simply deleted some TiKV logs from 2021 to bring KV storage back under the 80% alert threshold.
The next day.......
Time to give the test TiDB cluster a proper cleanup and fix the TiKV storage problem, by one of two approaches:

(1) Find resources and scale out TiKV.

(2) Check whether some table is oversized or something else is abnormal.

Since it is a test cluster, there was no rush to scale out: if there happened to be big tables whose data could be cleaned up, the space problem would solve itself. So I first used the statements below to survey space usage per table (not very accurate, but a useful reference):
For non-partitioned tables, check table usage like this:

```sql
select TABLE_SCHEMA, TABLE_NAME, TABLE_ROWS,
       (DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024 as table_size
from information_schema.tables
order by table_size desc
limit 20;
```

For partitioned tables (we use them a lot), information_schema.PARTITIONS reports usage for both partitioned and non-partitioned tables:

```sql
select TABLE_SCHEMA, TABLE_NAME, PARTITION_NAME, TABLE_ROWS,
       (DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024 as table_size
from information_schema.PARTITIONS
order by table_size desc
limit 20;
```
The official tooling also provides an interface for table usage statistics (more accurate):

```
tidb-ctl table disk-usage --database(-d) [database name] --table(-t) [table name]
```
Looking at the SQL results first: some tables had several hundred million rows, and some tables reported 0 rows yet still had a size. Since I had often seen stale statistics make the information_schema tables unreliable, I went as far as re-collecting statistics for every table in the cluster.

The next step was to talk to the application owners, drop an unused partition, truncate some table data, and then wait for the space to be reclaimed. But after more than half an hour, with hardly any new writes, neither the region count nor the disk usage budged at all. Could GC be broken?

OK, first stop: the GC status in the mysql.tidb table. What a shock: the gc safe point was still sitting in April 2021. How many MVCC versions must have piled up in this cluster?? Good grief.
Diagnosing and fixing the GC problem
The screenshot above already shows this is a GC problem; now we need to find out what exactly is wrong with GC, which usually means logs and monitoring. Start with the tidb-server log:
```
{"level":"INFO","time":"2022/03/16 22:55:06.268 +08:00","caller":"gc_worker.go:230","message":"[gc worker] quit","uuid":"5fdb15468e00003"}
{"level":"INFO","time":"2022/03/16 22:55:22.603 +08:00","caller":"gc_worker.go:202","message":"[gc worker] start","uuid":"5fe4ce7c7200002"}
{"level":"INFO","time":"2022/03/16 22:58:22.629 +08:00","caller":"gc_worker.go:608","message":"[gc worker] there's another service in the cluster requires an earlier safe point. gc will continue with the earlier one","uuid":"5fe4ce7c7200002","ourSafePoint":431867062548692992,"minSafePoint":424280964185194498}
{"level":"INFO","time":"2022/03/16 22:58:22.637 +08:00","caller":"gc_worker.go:329","message":"[gc worker] starts the whole job","uuid":"5fe4ce7c7200002","safePoint":424280964185194498,"concurrency":5}
{"level":"INFO","time":"2022/03/16 22:58:22.639 +08:00","caller":"gc_worker.go:1036","message":"[gc worker] start resolve locks","uuid":"5fe4ce7c7200002","safePoint":424280964185194498,"concurrency":5}
{"level":"INFO","time":"2022/03/16 22:58:41.268 +08:00","caller":"gc_worker.go:1057","message":"[gc worker] finish resolve locks","uuid":"5fe4ce7c7200002","safePoint":424280964185194498,"regions":81662}
{"level":"INFO","time":"2022/03/16 22:59:22.617 +08:00","caller":"gc_worker.go:300","message":"[gc worker] there's already a gc job running, skipped","leaderTick on":"5fe4ce7c7200002"}
{"level":"INFO","time":"2022/03/16 23:00:21.331 +08:00","caller":"gc_worker.go:701","message":"[gc worker] start delete ranges","uuid":"5fe4ce7c7200002","ranges":0}
{"level":"INFO","time":"2022/03/16 23:00:21.331 +08:00","caller":"gc_worker.go:750","message":"[gc worker] finish delete ranges","uuid":"5fe4ce7c7200002","num of ranges":0,"cost time":"677ns"}
{"level":"INFO","time":"2022/03/16 23:00:21.336 +08:00","caller":"gc_worker.go:773","message":"[gc worker] start redo-delete ranges","uuid":"5fe4ce7c7200002","num of ranges":0}
{"level":"INFO","time":"2022/03/16 23:00:21.336 +08:00","caller":"gc_worker.go:802","message":"[gc worker] finish redo-delete ranges","uuid":"5fe4ce7c7200002","num of ranges":0,"cost time":"4.016µs"}
{"level":"INFO","time":"2022/03/16 23:00:21.339 +08:00","caller":"gc_worker.go:1562","message":"[gc worker] sent safe point to PD","uuid":"5fe4ce7c7200002","safe point":424280964185194498}
{"level":"INFO","time":"2022/03/16 23:08:22.638 +08:00","caller":"gc_worker.go:608","message":"[gc worker] there's another service in the cluster requires an earlier safe point. gc will continue with the earlier one","uuid":"5fe4ce7c7200002","ourSafePoint":431867219835092992,"minSafePoint":424280964185194498}
{"level":"INFO","time":"2022/03/16 23:08:22.647 +08:00","caller":"gc_worker.go:329","message":"[gc worker] starts the whole job","uuid":"5fe4ce7c7200002","safePoint":424280964185194498,"concurrency":5}
{"level":"INFO","time":"2022/03/16 23:08:22.648 +08:00","caller":"gc_worker.go:1036","message":"[gc worker] start resolve locks","uuid":"5fe4ce7c7200002","safePoint":424280964185194498,"concurrency":5}
{"level":"INFO","time":"2022/03/16 23:08:43.659 +08:00","caller":"gc_worker.go:1057","message":"[gc worker] finish resolve locks","uuid":"5fe4ce7c7200002","safePoint":424280964185194498,"regions":81647}
{"level":"INFO","time":"2022/03/16 23:09:22.616 +08:00","caller":"gc_worker.go:300","message":"[gc worker] there's already a gc job running, skipped","leaderTick on":"5fe4ce7c7200002"}
{"level":"INFO","time":"2022/03/16 23:10:22.616 +08:00","caller":"gc_worker.go:300","message":"[gc worker] there's already a gc job running, skipped","leaderTick on":"5fe4ce7c7200002"}
{"level":"INFO","time":"2022/03/16 23:10:23.727 +08:00","caller":"gc_worker.go:701","message":"[gc worker] start delete ranges","uuid":"5fe4ce7c7200002","ranges":0}
{"level":"INFO","time":"2022/03/16 23:10:23.727 +08:00","caller":"gc_worker.go:750","message":"[gc worker] finish delete ranges","uuid":"5fe4ce7c7200002","num of ranges":0,"cost time":"701ns"}
{"level":"INFO","time":"2022/03/16 23:10:23.729 +08:00","caller":"gc_worker.go:773","message":"[gc worker] start redo-delete ranges","uuid":"5fe4ce7c7200002","num of ranges":0}
{"level":"INFO","time":"2022/03/16 23:10:23.730 +08:00","caller":"gc_worker.go:802","message":"[gc worker] finish redo-delete ranges","uuid":"5fe4ce7c7200002","num of ranges":0,"cost time":"710ns"}
{"level":"INFO","time":"2022/03/16 23:10:23.733 +08:00","caller":"gc_worker.go:1562","message":"[gc worker] sent safe point to PD","uuid":"5fe4ce7c7200002","safe point":424280964185194498}
{"level":"INFO","time":"2022/03/16 23:18:22.647 +08:00","caller":"gc_worker.go:608","message":"[gc worker] there's another service in the cluster requires an earlier safe point. gc will continue with the earlier one","uuid":"5fe4ce7c7200002","ourSafePoint":431867377121492992,"minSafePoint":424280964185194498}
```
The log shows that GC kept falling back to an early safe point, 424280964185194498:

```
there's another service in the cluster requires an earlier safe point. gc will continue with the earlier one","uuid":"5fe4ce7c7200002","ourSafePoint":431870050977185792,"minSafePoint":424280964185194498
```
Converting that TSO to wall-clock time shows it matches the tikv_gc_safe_point time in the mysql.tidb screenshot above, 2021-04-16 00:17:13:

```
$ tiup ctl:v5.4.0 pd -u http://10.203.178.96:2379 tso 424280964185194498
system:  2021-04-16 00:17:13.934 +0800 CST
logic:   2
```
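The conversion that `pd-ctl tso` does can be reproduced by hand: a TiDB TSO is a 64-bit integer whose high bits are a physical timestamp in milliseconds and whose low 18 bits are a logical counter. A minimal sketch (plain Python, the helper name is mine):

```python
from datetime import datetime, timedelta, timezone

TSO_LOGICAL_BITS = 18  # low 18 bits of a TSO hold the logical counter

def decode_tso(tso: int):
    """Split a TiDB TSO into (wall-clock datetime, logical counter)."""
    physical_ms = tso >> TSO_LOGICAL_BITS          # milliseconds since Unix epoch
    logical = tso & ((1 << TSO_LOGICAL_BITS) - 1)  # in-millisecond sequence number
    cst = timezone(timedelta(hours=8))             # +0800, as pd-ctl prints it
    return datetime.fromtimestamp(physical_ms / 1000, tz=cst), logical

system, logic = decode_tso(424280964185194498)
print(system, logic)  # 2021-04-16 00:17:13.934000+08:00 2
```

This also explains why ticdc's `expired_at` of 1618503433 seen later is the same instant: it is exactly the physical part of this TSO in seconds.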
Next, run pd-ctl's `service-gc-safepoint --pd` command to query the current GC safepoint and the service GC safepoints. The stuck safe point turns out to match ticdc's safe_point exactly: 424280964185194498.

```
$ tiup ctl:v5.4.0 pd -u http://10.xxxx.96:2379 service-gc-safepoint
{
  "service_gc_safe_points": [
    {
      "service_id": "gc_worker",
      "expired_at": 9223372036854775807,
      "safe_point": 431880793651412992
    },
    {
      "service_id": "ticdc",
      "expired_at": 1618503433,
      "safe_point": 424280964185194498
    }
  ],
  "gc_safe_point": 424280964185194498
}
```
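When several services register safe points, PD can only advance gc_safe_point to the minimum of all of them. A small sketch that, given the JSON above, picks out the service holding GC back (field names are exactly those in the pd-ctl output):

```python
import json

# service-gc-safepoint output as captured from pd-ctl above
output = """
{ "service_gc_safe_points": [
    { "service_id": "gc_worker", "expired_at": 9223372036854775807, "safe_point": 431880793651412992 },
    { "service_id": "ticdc",     "expired_at": 1618503433,          "safe_point": 424280964185194498 }
  ],
  "gc_safe_point": 424280964185194498 }
"""

data = json.loads(output)
# The effective GC safe point is the minimum over all registered services,
# so the service with the smallest safe_point is the one blocking GC.
blocker = min(data["service_gc_safe_points"], key=lambda s: s["safe_point"])
print(blocker["service_id"], blocker["safe_point"])  # ticdc 424280964185194498
```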
Checking the cluster's TiCDC tasks showed that the stuck safe point matched the TSO of the changefeed with id dxl-xxxx-task:

So the cause of the GC problem was finally found: a TiCDC replication task that had never been removed. The fix: delete that task from TiCDC.

```
tiup cdc:v5.4.0 cli changefeed remove --pd='http://10.xxxx.96:2379' --changefeed-id=dxl-replication-task
```
After waiting a while and watching the safe point, GC indeed started to move:

But it then got stuck on another replication task (ticdc-demo-xxxx-test-shbt-core), so I removed that one too:

```
tiup cdc:v5.4.0 cli changefeed remove --pd='http://10.xxxx.96:2379' --changefeed-id=ticdc-demo-xxxx-test-shbt-core
```
After the removal and a fairly long wait (a few hours), the GC safe point finally returned to normal:

Once GC finished reclaiming, storage per KV node dropped from the previous 800 GB to around 200 GB. You can also see it in the CF size panel on the right of the chart below: the write CF shrank from 3.2 TB to about 790 GB, the biggest drop by far. As you know, among TiKV's column families the write CF is the one that stores version (MVCC) information, so this drop confirms that TiCDC had caused the MVCC version pile-up. The problem was now completely resolved:

Although the problem was solved, a question remained: TiCDC by default holds back GC for at most 24 hours, after which it should no longer block GC, so why did the above happen? I asked PingCAP's R&D folks. First, the verdict: "it's a bug", fixed in versions 4.0.13, 5.0.2 and 5.1.0. Since this test cluster had been upgraded all the way from 3.0, we most likely hit the bug while on an old version. The fix PR: https://github.com/pingcap/ti...
How GC works

Having explained the ticdc -> gc -> disk-alert incident end to end, we still owe GC itself a brief explanation.

The GC module in the TiDB server architecture

As a module of TiDB server, GC's main job is to clean up old data that is no longer needed.

As you know, TiKV's underlying storage is an LSM-based architecture. In this model an update never modifies data in place: a fresh copy is written instead, and every copy is stamped with a transaction timestamp. This is the MVCC (multi-version concurrency control) mechanism we often talk about. To keep any one record from accumulating too many versions, TiDB uses tikv_gc_life_time to bound how long versions are retained (default 10min), and every tikv_gc_run_interval it triggers a GC run to clear out historical versions. Let's go through the GC flow in detail.
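To make the retention rule concrete, here is a toy model (plain Python, not TiKV code; all names are mine) of an MVCC store where GC drops every version of a key older than the safe point except the newest one at or below it, so even a never-updated table keeps exactly one readable version:

```python
from collections import defaultdict

class ToyMVCCStore:
    """Toy multi-version KV: each put() appends a (commit_ts, value) version."""

    def __init__(self):
        self.versions = defaultdict(list)  # key -> [(ts, value), ...] in ts order

    def put(self, key, ts, value):
        self.versions[key].append((ts, value))

    def gc(self, safe_point):
        # Keep every version newer than safe_point, plus the single newest
        # version at or below it, so reads at safe_point still succeed.
        for key, vers in self.versions.items():
            older = [v for v in vers if v[0] <= safe_point]
            newer = [v for v in vers if v[0] > safe_point]
            self.versions[key] = (older[-1:] if older else []) + newer

store = ToyMVCCStore()
for ts in (10, 20, 30):
    store.put("k", ts, f"v{ts}")
store.gc(safe_point=25)
print(store.versions["k"])  # [(20, 'v20'), (30, 'v30')]
```

The version written at ts=10 is garbage (ts=20 supersedes it below the safe point), while ts=20 survives because it is the last write before the safe point.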
The GC flow

GC is always initiated by the GC leader. A TiDB server cluster elects one GC leader (the mysql.tidb table shows which node currently holds the role). The leader is only a role; the GC is actually run by that leader node's gc worker. The stages of the flow can also be seen in the logs quoted earlier in this article.

- (1) Resolve Locks

The name is self-explanatory: this stage cleans up locks. The GC leader log shows start resolve locks and finish resolve locks entries, each carrying a safepoint, which is simply a timestamp. This stage scans all regions, and every lock left by 2PC before that timestamp is committed or rolled back as appropriate.

- (2) Delete Ranges

This stage handles the deletion of the large contiguous ranges produced by operations such as drop/truncate. TiDB first records the ranges to be deleted in mysql.gc_delete_range, then maps the recorded ranges to their SST files and calls RocksDB's UnsafeDestroyRange interface for physical deletion; finished ranges are recorded in mysql.gc_delete_range_done. The log shows the corresponding start delete ranges and finish delete ranges entries.

- (3) Do GC

This stage cleans up the masses of historical versions produced by DML. With the default retention of 10min, you might worry: my metadata table has not been updated in a year, will its data be cleaned away too? Relax: for each key, GC keeps the data of the last write before the safe point (in other words, even if a table's data never changes, GC will not "mistakenly clean" it). Also, the garbage produced by DML does not free space immediately; actual reclamation requires RocksDB compaction, and it may additionally involve merging large numbers of empty regions (the mechanism for reclaiming empty regions). The log entries for this stage include: start redo-delete ranges; finish redo-delete ranges.

- (4) Send the safe point to PD to end this round of GC. Log entry: sent safe point to PD.
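The four stages above can be sketched as a single round of the gc worker's loop (illustrative pseudocode in Python, not TiDB's actual code; the phase callbacks are placeholders):

```python
def run_gc_round(safe_point, resolve_locks, delete_ranges, do_gc, upload_safe_point):
    """One GC round, in the order the gc worker logs show:
    resolve locks -> delete ranges (incl. redo) -> do gc -> report to PD."""
    resolve_locks(safe_point)      # commit/rollback 2PC locks older than safe_point
    delete_ranges(safe_point)      # UnsafeDestroyRange for drop/truncate ranges
    do_gc(safe_point)              # drop stale MVCC versions key by key
    upload_safe_point(safe_point)  # "sent safe point to PD" ends the round

calls = []
run_gc_round(
    424280964185194498,
    resolve_locks=lambda sp: calls.append("resolve locks"),
    delete_ranges=lambda sp: calls.append("delete ranges"),
    do_gc=lambda sp: calls.append("do gc"),
    upload_safe_point=lambda sp: calls.append("sent safe point to PD"),
)
print(calls)
```

This ordering also explains the incident: the round can only run at the minimum service safe point, so a stale ticdc safe point freezes every stage at once.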
GC monitoring

For GC, the main panels to watch are the following:

tikv-detail, GC module -> GC tasks: as described above, after a drop/truncate check whether the total-unsafe-destroy-range metric shows values; for everyday DML-driven GC, look at the total-gc-keys task counts.

tikv-detail, GC module -> GC task duration:

Deletion time spent per kind of gc task.

tikv-detail, GC module -> GC speed:

The rate at which old MVCC versions are deleted, in keys/s.

PS: for more on GC, see the official documentation, or read the detailed three-part GC series by Kun and Jiao on AskTUG: part one, "GC 原理浅析" (a primer on GC internals); part two, "GC 监控及日志解读" (GC monitoring and log interpretation); and part three, "GC 处理案例 & FAQ" (GC cases & FAQ).
Original author: @代晓磊_360, published on 2022-03-28

Original link: https://tidb.io/blog/36c58d32