MySQL troubleshooting: a case analysis of data that cannot be updated in an MGR multi-primary cluster


Author: Fu Xiang

Based in Zhuhai, mainly responsible for Oracle, MySQL, MongoDB, and Redis maintenance.

Source: original contribution

* Produced by the Aikesheng open source community. Original content may not be used without authorization; to reprint, please contact the editor and credit the source.


1. Symptoms

A three-node MGR multi-primary environment (5.7.25) had been running for almost two years. After a batch of data was successfully imported on node 1, developers reported that the application got an error when modifying that data:

update match_equip set name = ?, type = ?, equips = ?,score = ? where id = ? and
person_id = ?
Received #3101 error from MySQL server: "Plugin instructed the server to rollback
the current transaction."

1.1. Recovery attempt 1

Preliminary analysis showed that the imported batch could be updated on node 1, where it was imported, but updates failed on the other nodes, so node 1 was suspected to be the problem. Following the principle of restoring service as quickly as possible, and after confirming with the developers that node 1 could be restarted, we restarted it. After the restart, however, node 1 could not rejoin group replication, so the restart trick did not help either. The error was:

2021-05-27T07:37:53.290267Z 0 [ERROR] Plugin group_replication reported: 'This
member has more executed transactions than those present in the group. Local
transactions: 91f9d301-c234-11e9-b15f-fa163e13423a:1-156817757:156843131-157503127:158192163-158412212,
a71d98a2-c234-11e9-b6db-fa163e3407f8:1-92,
eba21052-c250-11e9-b0d0-fa163e134234:1-3 > Group transactions: 91f9d301-c234-11e9-b15f-fa163e13423a:1-156817825:156843131-157503172:158192163-158412212,
eba21052-c250-11e9-b0d0-fa163e134234:1-3'

2021-05-27T07:37:53.290348Z 0 [ERROR] Plugin group_replication reported: 'The
member contains transactions not present in the group. The member will now exit
the group.'

Local transactions:

91f9d301-c234-11e9-b15f-fa163e13423a:1-156817757:156843131-157503127:158192163-158412212,
a71d98a2-c234-11e9-b6db-fa163e3407f8:1-92,
eba21052-c250-11e9-b0d0-fa163e134234:1-3

Group transactions:

91f9d301-c234-11e9-b15f-fa163e13423a:1-156817825:156843131-157503172:158192163-158412212,
eba21052-c250-11e9-b0d0-fa163e134234:1-3

Comparing the two sets, the GTID range a71d98a2-c234-11e9-b6db-fa163e3407f8:1-92 in the local node's executed GTID set does not appear in the group transactions; that is, the local transactions are not a subset of the group transactions, so the node failed validation and could not join the cluster.

1.2. Recovery attempt 2

We then tried to rejoin the cluster by restoring the node from a physical backup taken on a healthy node, but it still failed:

2021-05-27T08:35:01.331704Z 13 [ERROR] Plugin group_replication reported: 'The
certification information could not be set in this server: 'Certification
information is too large for transmission.''

2021-05-27T08:35:01.331752Z 13 [ERROR] Plugin group_replication reported: 'Error
when processing Certification information in the Recovery process'

2021-05-27T08:35:01.331762Z 13 [ERROR] Plugin group_replication reported: 'Fatal
error during the Recovery process of Group Replication. The server will leave the
group.'

The node failed to join because the conflict-certification information to be transmitted within the cluster was too large:

Certification information is too large for transmission

Check the current cluster's conflict-detection information:

root@3306 performance_schema> select COUNT_TRANSACTIONS_ROWS_VALIDATING from
performance_schema.replication_group_member_stats;

+------------------------------------+
| COUNT_TRANSACTIONS_ROWS_VALIDATING |
+------------------------------------+
| 11239426 |
+------------------------------------+
1 row in set (0.00 sec)

The conflict-detection database currently holds more than 11 million records, and it is precisely this oversized certification information that makes joining the cluster fail. At this point one might suspect a MySQL bug, but that is not the case; to solve the problem, we need to start with MGR's transaction certification (conflict-detection) mechanism.

2. Analysis and reproduction

2.1. Transaction certification mechanism

Reference article: https://zhuanlan.zhihu.com/p/…

MGR's transaction certification module decides whether a transaction entering MGR may commit or must be rolled back. The decision is made by checking, for each primary key the transaction updates, whether the gtid_set stored for that key in the conflict-detection database (certification_info) is a subset of the transaction's snapshot version (snapshot_version): if so, the transaction commits; otherwise it is rolled back.

A transaction that may commit is assigned a GTID, and each primary key in its writeset is then inserted into or updated in certification_info. Predictably, as more and more transactions are certified, certification_info accumulates more and more records.

How are records in certification_info purged? Clearly, once a transaction A has passed certification and has been committed on every node of the MGR cluster, i.e. every node's gtid_executed contains A's GTID, then, because transactions are globally ordered within the group, any transaction not yet certified must have started after transaction A (or concurrently with it, with no mutual dependency), so no transaction still to be certified can conflict with A. Therefore, all certification_info records whose snapshot version (snapshot_version) is a subset of transaction A's snapshot version can be purged. Every 60 seconds each MGR node broadcasts its own gtid_executed; once a node has collected a full round of gtid_executed sets from all members, it takes their intersection (stable_gtid_set) and performs the purge based on that intersection.

When certification_info is purged, every record is scanned and its GTID_SET is compared with stable_gtid_set; a record can be purged if its GTID_SET is a subset of stable_gtid_set.
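To watch this purge behaviour in practice, the member statistics table exposes both the current size of certification_info and the stable GTID set the purge is based on. A minimal monitoring sketch (column names as in MySQL 5.7):

SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_ROWS_VALIDATING,   -- rows currently held in certification_info
       TRANSACTIONS_COMMITTED_ALL_MEMBERS    -- stable GTID set used as the purge threshold
FROM performance_schema.replication_group_member_stats;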

2.2. Analysis

2.2.1. certification_info currently holds 11,239,426 records even though MGR purges it every 60 seconds; why is it so large?

Node 1 carries an independent local transaction set, a71d98a2-c234-11e9-b6db-fa163e3407f8:1-92. Whenever DML runs on node 1 and passes conflict certification, the primary keys in the transaction's writeset are inserted into or updated in certification_info, and their snapshot_version carries this local GTID range. When certification_info is purged, the intersection of all nodes' gtid_executed will never contain a71d98a2-c234-11e9-b6db-fa163e3407f8:1-92, so the certification_info entries produced by DML on node 1 are never purged. Accumulating day after day, certification_info grew to more than 11 million records.
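One hedged way to spot this situation on a live member is to subtract the stable group GTID set from the local gtid_executed: any remainder is a GTID range that the purge intersection will never cover. On node 1 in this case the remainder would be the a71d98a2-...:1-92 range:

SELECT GTID_SUBTRACT(
         @@GLOBAL.gtid_executed,
         (SELECT TRANSACTIONS_COMMITTED_ALL_MEMBERS
            FROM performance_schema.replication_group_member_stats
           LIMIT 1)
       ) AS gtids_never_purged;
-- Expected on node 1 (illustrative): a71d98a2-c234-11e9-b6db-fa163e3407f8:1-92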

2.2.2. Why can data committed by DML on node 1 not be updated on the other nodes?

Assume:

The server_uuid of nodes 1, 2, and 3 is A, B, and C respectively, and the group (global) UUID is G.

Node 1 has an independent local transaction set: A:1-XXX

Node 1 Executed_Gtid_Set: G:1-XXX,A:1-XXX

Node 2 Executed_Gtid_Set: G:1-XXX

Node 3 Executed_Gtid_Set: G:1-XXX

DML on node 1 produces certification_info entries whose snapshot_version is G:1-XXX,A:1-XXX. When another node then updates data that node 1 has modified, the snapshot_version recorded in certification_info for those rows is G:1-XXX,A:1-XXX, while the updating transaction's own snapshot version is only G:1-XXX. Since G:1-XXX,A:1-XXX can never be a subset of G:1-XXX, certification fails and the transaction is rolled back:

Plugin instructed the server to rollback the current transaction
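The certification decision is in essence the same subset test exposed by MySQL's GTID_SUBSET() function; a small illustration with made-up UUIDs standing in for A and G shows why it always fails here:

-- set 1: snapshot_version stored for the row by node 1 (local UUID "A" plus group UUID "G")
-- set 2: the updating transaction's snapshot version (group UUID "G" only)
SELECT GTID_SUBSET(
         'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-92,bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb:1-100',
         'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb:1-100'
       ) AS certification_passes;   -- returns 0, so the transaction is rolled back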

2.3. Reproducing the failure

2.3.1. Environment preparation: give node 3 a local transaction

On node 2 and node 3 we deliberately change the initial root password without first disabling sql_log_bin, so that each node generates a local GTID:

[root@mysql.sock][(none)]> show master status;

ERROR 1820 (HY000): You must reset your password using ALTER USER statement
before executing this statement.

[root@mysql.sock][(none)]>

[root@mysql.sock][(none)]>

[root@mysql.sock][(none)]> alter user root@'localhost' identified by '***';

Query OK, 0 rows affected (0.05 sec)

[root@mysql.sock][(none)]> show master status;

+------------------+----------+--------------+------------------+----------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                      |
+------------------+----------+--------------+------------------+----------------------------------------+
| MySql-bin.000002 |      391 |              |                  | f1b251a9-bf54-11eb-a4fd-000c299b4d6c:1 |
+------------------+----------+--------------+------------------+----------------------------------------+

1 row in set (0.00 sec)
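For contrast, the local GTID could have been avoided by disabling binary logging for the session before the maintenance statement, assuming the session is allowed to run SET at that point; a minimal sketch:

SET SQL_LOG_BIN = 0;   -- following statements are not written to the binlog, so no GTID is generated
ALTER USER root@'localhost' IDENTIFIED BY '***';
SET SQL_LOG_BIN = 1;   -- re-enable binary logging for this session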

Next, each node in turn runs some initialization statements:

SET SQL_LOG_BIN=0;

CREATE USER rpl_user@'%' IDENTIFIED BY '*****';

GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';

FLUSH PRIVILEGES;

SET SQL_LOG_BIN=1;

CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='*****' FOR CHANNEL
'group_replication_recovery';

INSTALL PLUGIN group_replication SONAME 'group_replication.so';

2.3.1.1. Start the MGR cluster with node 1 as the bootstrap node

SET GLOBAL group_replication_bootstrap_group=ON;

START GROUP_REPLICATION;

SET GLOBAL group_replication_bootstrap_group=OFF;

2.3.1.2. Node 2 joins the cluster:

Node 2 has a local transaction, so start group_replication fails:

[ERROR] Plugin group_replication reported: 'This member has more executed
transactions than those present in the group. Local transactions: 332ee50a-bf55-11eb-b407-000c293fd895:1 > Group transactions: 91f9d301-c234-11e9-b15f-fa163e13423a:1,
f1b251a9-bf54-11eb-a4fd-000c299b4d6c:1-2'

After running reset master to clear the binlog, node 2 joined the group successfully.
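The fix applied to node 2 was roughly the following, which is safe here only because the stray local GTID recorded nothing but the throwaway password change:

RESET MASTER;              -- discard local binlogs and clear the local gtid_executed
START GROUP_REPLICATION;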

2.3.1.3. Node 3 joins the cluster:

Likewise, node 3 also fails to join the cluster. By setting the parameter:

set global group_replication_allow_local_disjoint_gtids_join=1

which skips the check for local transactions, node 3 then joined the cluster successfully.

2.3.2. Failure simulation and recovery

2.3.2.1. Current environment: node 3 contains a local transaction

[root@mysql.sock][fxtest]> select * from performance_schema.replication_group_members;

+----------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME               | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+----------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier  | 332ee50a-bf55-11eb-b407-000c293fd895 | fxtest02    |        3307 | ONLINE       |
| group_replication_applier  | 34668704-bf55-11eb-b120-000c29ed3768 | fxtest03    |        3307 | ONLINE       |
| group_replication_applier  | f1b251a9-bf54-11eb-a4fd-000c299b4d6c | fxtest01    |        3307 | ONLINE       |
+----------------------------+--------------------------------------+-------------+-------------+--------------+

Check the Executed_Gtid_Set on each node:

# Node 1:
[root@mysql.sock][fxtest]> show master status;

+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| MySql-bin.000001 |     1152 |              |                  | 91f9d301-c234-11e9-b15f-fa163e13423a:1-29 |
+------------------+----------+--------------+------------------+-------------------------------------------+

# Node 2:
[root@mysql.sock][fxtest]> show master status;

+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| MySql-bin.000001 |     1152 |              |                  | 91f9d301-c234-11e9-b15f-fa163e13423a:1-29 |
+------------------+----------+--------------+------------------+-------------------------------------------+

1 row in set (0.02 sec)

# Node 3:
[root@mysql.sock][performance_schema]> show master status;

+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| MySql-bin.000001 |     1319 |              |                  | 34668704-bf55-11eb-b120-000c29ed3768:1,
91f9d301-c234-11e9-b15f-fa163e13423a:1-29 |
+------------------+----------+--------------+------------------+-------------------------------------------+

1 row in set (0.00 sec)

We can see that node 3 has an extra local transaction: 34668704-bf55-11eb-b120-000c29ed3768:1.

2.3.2.2. Scenario 1: node 3 inserts data, then the other nodes try to update it

Insert data on node 3:

[root@mysql.sock][fxtest]> create table mgr_test1(id int primary key, name varchar(10));

Query OK, 0 rows affected (0.02 sec)

[root@mysql.sock][fxtest]> insert into mgr_test1 values(1,'a');

Query OK, 1 row affected (0.00 sec)

Run the update on node 1:

[root@mysql.sock][fxtest]> update mgr_test1 set name='aa' where id=1;

ERROR 3101 (HY000): Plugin instructed the server to rollback the current
transaction.

[root@mysql.sock][fxtest]>

Run the update on node 2:

[root@mysql.sock][fxtest]> update mgr_test1 set name='aa' where id=1;

ERROR 3101 (HY000): Plugin instructed the server to rollback the current
transaction.

[root@mysql.sock][fxtest]>

Conclusion: neither node 1 nor node 2 can modify the data inserted by node 3.
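If needed, these rollbacks can be confirmed to be certification conflicts (rather than some other error) from the member statistics; a hedged check:

SELECT COUNT_TRANSACTIONS_CHECKED,   -- transactions that went through certification
       COUNT_CONFLICTS_DETECTED      -- transactions that failed certification and were rolled back
FROM performance_schema.replication_group_member_stats;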

2.3.2.3. Scenario 2: the other nodes insert data, node 3 updates it, then the other nodes update it again

Node 1:

[root@mysql.sock][fxtest]> insert into mgr_test1 values(11,'aa');

Query OK, 1 row affected (0.01 sec)

Node 2:

[root@mysql.sock][fxtest]> insert into mgr_test1 values(22,'aa');

Query OK, 1 row affected (0.00 sec)

Node 3:

[root@mysql.sock][fxtest]> update mgr_test1 set name='11' where id=11;

Query OK, 1 row affected (0.01 sec)

Rows matched: 1 Changed: 1 Warnings: 0

[root@mysql.sock][fxtest]> update mgr_test1 set name='22' where id=22;

Query OK, 1 row affected (0.01 sec)

Rows matched: 1 Changed: 1 Warnings: 0

Afterwards, updates on node 1 and node 2 fail:

[root@mysql.sock][fxtest]> update mgr_test1 set name='aaa' where id=11;

ERROR 3101 (HY000): Plugin instructed the server to rollback the current
transaction.

[root@mysql.sock][fxtest]> update mgr_test1 set name='aaa' where id=22;

ERROR 3101 (HY000): Plugin instructed the server to rollback the current
transaction.

[root@mysql.sock][fxtest]>
2.3.2.4. Fixing the failure

2.3.2.5. Scenario 1: the binlog containing node 3's local transaction still exists; how to fix it

We only need to restart group replication on nodes 1 and 2; node 3's local transaction will then be replicated over. The following demonstrates restarting node 1:

[root@mysql.sock][fxtest]> show master status;

+------------------+----------+--------------+------------------+------------------------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                                          |
+------------------+----------+--------------+------------------+------------------------------------------------------------+
| MySql-bin.000001 |     3386 |              |                  | 91f9d301-c234-11e9-b15f-fa163e13423a:1-35:1000029:2000029 |
+------------------+----------+--------------+------------------+------------------------------------------------------------+

1 row in set (0.03 sec)

[root@mysql.sock][fxtest]> stop group_replication;

Query OK, 0 rows affected (9.73 sec)

[root@mysql.sock][fxtest]> start group_replication;

Query OK, 0 rows affected (3.24 sec)

[root@mysql.sock][fxtest]> show master status;

+------------------+----------+--------------+------------------+------------------------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                                          |
+------------------+----------+--------------+------------------+------------------------------------------------------------+
| MySql-bin.000001 |     4935 |              |                  | 34668704-bf55-11eb-b120-000c29ed3768:1,
91f9d301-c234-11e9-b15f-fa163e13423a:1-36:1000029:2000029 |
+------------------+----------+--------------+------------------+------------------------------------------------------------+

1 row in set (0.00 sec)

Restarting group replication on node 1 succeeded, and node 3's local transaction 34668704-bf55-11eb-b120-000c29ed3768:1 was replicated over as well.

2.3.2.6. Scenario 2: the binlog containing node 3's local transaction has been purged; how to fix it

Purge node 3's binlogs to simulate the local transaction's binlog having been deleted, stop group replication on node 1, and then restart group replication on node 2 so that it can only recover from node 3.

# Purge the binlog on node 3 that contains the local transaction

[root@mysql.sock][fxtest]> show binary logs;

+------------------+-----------+
| Log_name | File_size |
+------------------+-----------+
| MySql-bin.000001 | 4968 |
+------------------+-----------+

1 row in set (0.00 sec)

[root@mysql.sock][fxtest]> flush logs;

Query OK, 0 rows affected (0.00 sec)

[root@mysql.sock][fxtest]> show binary logs;

+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| MySql-bin.000001 |      5011 |
| MySql-bin.000002 |       262 |
+------------------+-----------+

2 rows in set (0.00 sec)

[root@mysql.sock][fxtest]> purge binary logs to 'MySql-bin.000002';

Query OK, 0 rows affected (0.01 sec)

[root@mysql.sock][fxtest]> show binary logs;

+------------------+-----------+
| Log_name | File_size |
+------------------+-----------+
| MySql-bin.000002 | 262 |
+------------------+-----------+

1 row in set (0.00 sec)

Stop group replication on node 1:

[root@mysql.sock][fxtest]> stop group_replication;

Query OK, 0 rows affected (9.57 sec)

Restarting group replication on node 2 fails: node 2 recovers from node 3, but the binlog containing node 3's local transaction has been purged, so node 2 cannot join the cluster and stays in RECOVERING state. Node 2's log:

2021-05-28T09:58:28.091303Z 276 [ERROR] Error reading packet from server for
channel 'group_replication_recovery': The slave is connecting using CHANGE MASTER
TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing
GTIDs that the slave requires. Replicate the missing transactions from elsewhere,
or provision a new slave from backup. Consider increasing the master's binary log
expiration period. The GTID set sent by the slave is
'91f9d301-c234-11e9-b15f-fa163e13423a:1-36:1000029:2000029', and the missing
transactions are '34668704-bf55-11eb-b120-000c29ed3768:1'. (server_errno=1236)

[root@mysql.sock][fxtest]> select * from performance_schema.replication_group_members;

+----------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME               | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+----------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier  | 332ee50a-bf55-11eb-b407-000c293fd895 | fxtest02    |        3307 | RECOVERING   |
| group_replication_applier  | 34668704-bf55-11eb-b120-000c29ed3768 | fxtest03    |        3307 | ONLINE       |
+----------------------------+--------------------------------------+-------------+-------------+--------------+

So node 2 can only join the cluster via reset master and set global gtid_purged:

stop group_replication;

reset master;

set global gtid_purged='91f9d301-c234-11e9-b15f-fa163e13423a:1-36:1000029:2000029,34668704-bf55-11eb-b120-000c29ed3768:1';

start group_replication;

3. Summary

When building an MGR cluster:

1. Make sure the MGR environment is clean, i.e. it contains only group GTIDs and no local GTIDs. In the initialization phase before starting MGR, disable binary logging (SET SQL_LOG_BIN=0) before running setup statements.

2. If a member fails to join the cluster and starting group replication reports:

This member has more executed transactions than those present in the group

this means the environment is not clean and the node carries local transactions. Do not set group_replication_allow_local_disjoint_gtids_join=1 to force the join; it only leaves problems for later. The MySQL team apparently recognized how serious this is, since the variable was removed after 8.0.4 (a simple pre-flight check is sketched after the quoted note below):

group_replication_allow_local_disjoint_gtids_join: Allow current server to join
group even if it has transactions not present in group. Removed in MySQL 8.0.4.
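As a pre-flight check before a node (re)joins, verify on the candidate that its gtid_executed is a subset of the executed GTID set of an ONLINE member. A hedged sketch (the quoted set is a placeholder you would paste in):

-- Run on the candidate node; obtain the group set with SELECT @@GLOBAL.gtid_executed on an ONLINE member.
-- 1 = clean and safe to join, 0 = the candidate carries local GTIDs.
SELECT GTID_SUBSET(@@GLOBAL.gtid_executed,
                   '<gtid_executed copied from an ONLINE member>') AS safe_to_join;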
