Author: 梁行, DBA at 万里数据库, skilled in diagnosing database performance problems and analyzing transaction and locking issues. He handles customers' day-to-day MySQL operational problems and is keenly interested in open-source database technologies.
- Original GreatSQL community content may not be used without authorization. To reprint it, please contact the editors and credit the source.
1. Symptoms
While investigating an issue, we found that a MySQL primary/standby switchover had taken place, even though the MySQL service looked healthy, no DBA had performed a switchover, and no maintenance had been done on the servers. Fortunately the business was not seriously affected. So why did the switchover happen?
Assume the original primary server is 172.16.87.72 and the original standby is 172.16.87.123.
2. Troubleshooting Approach
- Check the MySQL metrics in the monitoring system
- Check the dual-primary (keepalived) failover logs
- Check the MySQL error log
3. Investigation
3.1 Use the monitoring system's MySQL metrics to pin down when the failure occurred (estimate the rough switchover time from traffic)
Look at the monitoring metrics of the current primary MySQL (the instance serving the business) over one day and over fifteen days, using queries per second as the example metric.
The QPS rises sharply starting on the 22nd, so our initial suspicion was that a primary/standby switchover happened around then; the standby's chart confirms this further. Next, check the dual-primary (keepalived) failover logs to determine the exact switchover time.
3.2 Check keepalived to find out why the failover happened
First log in to 87.72 and look at keepalived's switchover log, shown below. Note: the system is Ubuntu and keepalived has no dedicated log output configured, so it writes its logs to the default system log (syslog).
shell> cat syslog.6|grep "vrrp"
Oct 23 15:50:34 bjuc-mysql1 Keepalived_vrrp: VRRP_Script(check_mysql) failed
Oct 23 15:50:34 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 Keepalived_vrrp: VRRP_Instance(VI_74) Received higher prio advert
Oct 23 15:50:35 bjuc-mysql1 Keepalived_vrrp: VRRP_Instance(VI_74) Entering BACKUP STATE
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:56 bjuc-mysql1 Keepalived_vrrp: VRRP_Script(check_mysql) succeeded
Log analysis:
From the log we can see that at Oct 23 15:50:34 the health-check script failed to verify the MySQL service, which caused the VIP to fail over. Next, look at the keepalived log on 87.123 to confirm the switchover time:
Oct 23 15:50:35 prod-UC-yewu-db-slave2 Keepalived_vrrp: VRRP_Instance(VI_74) forcing a new MASTER election
Oct 23 15:50:36 prod-UC-yewu-db-slave2 Keepalived_vrrp: VRRP_Instance(VI_74) Transition to MASTER STATE
Oct 23 15:50:37 prod-UC-yewu-db-slave2 Keepalived_vrrp: VRRP_Instance(VI_74) Entering MASTER STATE
From the logs above, we can see that the dual-primary switchover happened at Oct 23 15:50:35.
3.3 Question: why did the health-check script fail?
Check the contents of the health-check script:
shell> cat check_mysql.sh
#!/bin/bash
export PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER=xxx
PASSWORD=xxx
PORT=3306
IP=127.0.0.1
mysql --connect-timeout=1 -u$USER -p$PASSWORD -h$IP -P$PORT -e "select 1;"
exit $?
The health-check script is very simple: it runs "select 1" with a 1-second connection timeout. This SQL does not interfere with anything MySQL is doing and costs almost nothing, so why would it still fail? Let's look at the MySQL error log and see what turns up.
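For reference, keepalived only looks at this script's exit status: exit 0 counts as success, and any non-zero exit marks VRRP_Script(check_mysql) as failed. You can reproduce exactly what keepalived evaluates by running the probe by hand (a minimal sketch; the credentials below are placeholders):
shell> mysql --connect-timeout=1 -uxxx -pxxx -h127.0.0.1 -P3306 -e "select 1;"; echo "exit code: $?"
# 0 means healthy; any non-zero exit (for example when the 1s connect timeout is hit) is what keepalived counts as a check failure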
3.4 Check the MySQL error log
191023 15:50:54 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:55 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:55 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:56 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:56 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:57 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:58 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:58 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:59 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:51:28 [ERROR] Slave SQL: Error 'Lock wait timeout exceeded; try restarting transaction' on query. Default database: 'statusnet'. Query: 'UPDATE ticket SET max_id=max_id+step WHERE biz_tag='p2p_file'', Error_code: 1205
191023 15:54:43 [ERROR] Slave SQL thread retried transaction 10 time(s) in vain, giving up. Consider raising the value of the slave_transaction_retries variable.
191023 15:54:43 [Warning] Slave: Lock wait timeout exceeded; try restarting transaction Error_code: 1205
191023 15:54:43 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.001432' position 973809430
191023 16:04:13 [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)
191023 16:04:13 [Note] Slave I/O thread killed while reading event
191023 16:04:13 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.001432', position 998926454
191023 16:04:20 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.001432' at position 973809430, relay log './mysql-relay-bin.000188' position: 54422
191023 16:04:20 [Note] 'SQL_SLAVE_SKIP_COUNTER=1' executed at relay_log_file='./mysql-relay-bin.000188', relay_log_pos='54422', master_log_name='mysql-bin.001432', master_log_pos='973809430' and new position at relay_log_file='./mysql-relay-bin.000188', relay_log_pos='54661', master_log_name='mysql-bin.001432', master_log_pos='973809669'
191023 16:04:20 [Note] Slave I/O thread: connected to master 'zhu@x:3306',replication started in log 'mysql-bin.001432' at position 998926454
From the log we can see that MySQL started reporting "Server shutdown in progress" errors at 15:50:54, replication broke at 15:54:43, and replication came back to normal at 16:04:20. So what is this error, and what does "Server shutdown in progress" mean here? In MySQL 5.5, if you actively kill a long-running slow query and that SQL is also using the sort buffer, the server produces symptoms that look like a MySQL restart, and connections get closed as well.
Official bug report: https://bugs.mysql.com/bug.ph…
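As a rough illustration of the bug (a hypothetical sketch for a disposable 5.5 test instance only; the table, credentials and error-log path are made up), kill a query while it is still in its filesort phase and then check the error log:
shell> mysql -utest -pxxx -e "SELECT * FROM testdb.big_table ORDER BY non_indexed_col LIMIT 1000000;" &   # session 1: forces a large filesort
shell> mysql -utest -pxxx -N -e "SELECT id FROM information_schema.processlist WHERE info LIKE 'SELECT * FROM testdb.big_table%';"
shell> mysql -utest -pxxx -e "KILL 12345;"   # session 2: kill the id returned above while the sort is still running
shell> tail -n 20 /var/log/mysql/error.log   # on 5.5 this is where "Sort aborted: Server shutdown in progress" shows up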
Let's quickly check whether our own production environment is indeed MySQL 5.5:
mysql> \s
--------------
mysql Ver 14.14 Distrib 5.5.22, for debian-linux-gnu (x86_64) using readline 6.2
Connection id: 76593111
Current database:
Current user: xxx@127.0.0.1
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 5.5.22-0ubuntu1-log (Ubuntu)
Protocol version: 10
Connection: 127.0.0.1 via TCP/IP
Server characterset: utf8
Db characterset: utf8
Client characterset: utf8
Conn. characterset: utf8
TCP port: 3306
Uptime: 462 days 11 hours 50 min 11 sec
Threads: 9 Questions: 14266409536 Slow queries: 216936 Opens: 581411 Flush tables: 605 Open tables: 825 Queries per second avg: 357.022
We can see that production is indeed MySQL 5.5, with an uptime of 462 days 11 hours 50 min 11 sec, which proves the mysqld process itself was never restarted. Together with the error log we just looked at (the replication threads being restarted), this confirms we hit the MySQL bug described above: keepalived found MySQL unavailable at that moment, the check failed, and the VIP switched over.
3.5 Digging deeper: which slow SQL got killed?
Let's keep going and look at the slow log around that point in time:
# Time: 191023 15:50:39
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 35.571088 Lock_time: 0.742899 Rows_sent: 1 Rows_examined: 72
SET timestamp=1571817039;
SELECT g.id,g.group_pinyin, g.external_type,g.external_group_id,g.mount_id,g.is_display as group_type,g.nickname,g.site_id,g.group_logo,g.created,g.modified,g.misc,g.name_update_flag as name_flag, g.logo_level,g.group_status,g.conversation, g.only_admin_invite,g.reach_limit_count,g.water_mark FROM user_group AS g LEFT JOIN group_member as gm ON g.id = gm.group_id WHERE g.is_display in (1,2,3) AND gm.profile_id = 7382782 AND gm.join_state != 2 AND UNIX_TIMESTAMP(g.modified) > 1571722305 ORDER BY g.modified ASC LIMIT 0, 1000;
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 34.276504 Lock_time: 8.440094 Rows_sent: 1 Rows_examined: 1
SET timestamp=1571817039;
SELECT conversation, is_display as group_type, group_status FROM user_group WHERE id=295414;
# Time: 191023 15:50:40
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 31.410245 Lock_time: 0.561083 Rows_sent: 0 Rows_examined: 7
SET timestamp=1571817040;
SELECT id, clientScope, msgTop FROM uc_application where site_id=106762 AND msgTop=1;
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 38.442446 Lock_time: 2.762945 Rows_sent: 2 Rows_examined: 38
SET timestamp=1571817040;
SELECT g.id,g.group_pinyin, g.external_type,g.external_group_id,g.mount_id,g.is_display as group_type,g.nickname,g.site_id,g.group_logo,g.created,g.modified,g.misc,g.name_update_flag as name_flag, g.logo_level,g.group_status,g.conversation, g.only_admin_invite,g.reach_limit_count,g.water_mark FROM user_group AS g LEFT JOIN group_member as gm ON g.id = gm.group_id WHERE g.is_display in (1,2,3) AND gm.profile_id = 9867566 AND gm.join_state != 2 AND UNIX_TIMESTAMP(g.modified) > 1571796860 ORDER BY g.modified ASC LIMIT 0, 1000;
# Time: 191023 15:50:46
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 40.481602 Lock_time: 8.530303 Rows_sent: 0 Rows_examined: 1
SET timestamp=1571817046;
update heartbeat set modify=1571817004 where session_id='89b480936c1384305e35b531221b705332a4aebf';
# Time: 191023 15:50:41
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 32.748755 Lock_time: 0.000140 Rows_sent: 0 Rows_examined: 3
SET timestamp=1571817041;
SELECT id,nickname,mount_id,is_display,external_type,misc,site_id,conversation as conv_id,only_admin_invite,reach_limit_count,water_mark, group_logo,logo_level,created,modified,name_update_flag as name_flag, group_pinyin, group_status,external_group_id,event_id FROM user_group WHERE id in(1316262,1352179,1338753) AND UNIX_TIMESTAMP(modified) > 1571023021 ORDER BY created ASC;
# Time: 191023 15:50:46
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 40.764872 Lock_time: 4.829707 Rows_sent: 0 Rows_examined: 7
SET timestamp=1571817046;
SELECT id,nickname,mount_id,is_display,external_type,misc,site_id,conversation as conv_id,only_admin_invite,reach_limit_count,water_mark, group_logo,logo_level,created,modified,name_update_flag as name_flag, group_pinyin, group_status,external_group_id,event_id FROM user_group WHERE id in(673950,1261923,1261748,1262047,1038545,1184675,1261825) AND UNIX_TIMESTAMP(modified) > 1571744114 ORDER BY created ASC;
As shown above, this SQL was executed many times in that window, each run taking somewhere between 30 and 60 seconds, which makes it a seriously slow query.
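To tie these slow statements to the sort-buffer code path, it is worth checking whether they actually need a filesort. A quick hedged check (credentials are placeholders) is to EXPLAIN one of the captured statements and look for "Using filesort" in the Extra column:
shell> mysql -uxxx -pxxx statusnet -e "EXPLAIN SELECT g.id, g.modified FROM user_group AS g LEFT JOIN group_member AS gm ON g.id = gm.group_id WHERE g.is_display IN (1,2,3) AND gm.profile_id = 7382782 AND gm.join_state != 2 AND UNIX_TIMESTAMP(g.modified) > 1571722305 ORDER BY g.modified ASC LIMIT 0, 1000\G"
# "Using filesort" in Extra means the ORDER BY is resolved through the sort buffer, which is the
# code path the 5.5 bug is triggered from when such a query gets killed. Note also that wrapping
# the column in UNIX_TIMESTAMP() prevents any index on user_group(modified) from serving this predicate.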
3.6 So what issued the KILL?
By asking around and checking further, we found that we use pt-kill's automatic kill feature, configured with a query timeout of 30 seconds:
shell> ps -aux|grep 87.72
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root 3813 0.0 0.0 218988 8072 ? Ss Oct19 9:54 perl /usr/bin/pt-kill --busy-time 30 --interval 10 --match-command Query --ignore-user rep|repl|dba|ptkill --ignore-info 40001 SQL_NO_CACHE --victims all --print --kill --daemonize --charset utf8 --log=/tmp/ptkill.log.kill_____xxx_____172.16.87.72 -h172.16.87.72 -uxxxx -pxxxx
shell> cat /tmp/ptkill.log.kill_____xxx_____172.16.87.72
2019-10-23T15:45:37 KILL 75278028 (Query 39 sec) statusnet SELECT t.conv_id, t.notice_id, t.thread_type, t.parent_conv_id, t.seq, t.reply_count, t.last_reply_seq1, t.last_reply_seq2, t.revocation_seq, t.creator_id, t.status, t.operator_id, t.extra1 FROM notice_thread t WHERE parent_conv_id = 3075773 and notice_id in (1571816367355,1571814620251,1571814611880,1571814601809,1571814589604,1571814578823,1571814568507,1571814559399,1571812785487,1571812769810)
2019-10-23T15:45:37 KILL 75277932 (Query 39 sec) statusnet SELECT t.conv_id, t.notice_id, t.thread_type, t.parent_conv_id, t.seq, t.reply_count, t.last_reply_seq1, t.last_reply_seq2, t.revocation_seq, t.creator_id, t.status, t.operator_id, t.extra1 FROM notice_thread t WHERE parent_conv_id = 6396698 and notice_id in (18606128)
2019-10-23T15:45:37 KILL 75191686 (Query 39 sec) statusnet SELECT id,nickname,mount_id,is_display,external_type,misc,site_id,conversation as conv_id,only_admin_invite,reach_limit_count,water_mark, group_logo,logo_level,created,modified,name_update_flag as name_flag, group_pinyin, group_status,external_group_id,event_id FROM user_group WHERE id in(988091,736931,843788,739447,737242,1252702,736180,1150776) AND UNIX_TIMESTAMP(modified) > 1571740050 ORDER BY created ASC
So it was indeed pt-kill that actively killed the queries. Combined with the earlier findings, the whole picture is now clear.
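For context, the options doing the work in that command line are --busy-time 30 (match queries that have been running for more than 30 seconds), --match-command Query, and --kill. When tuning such a setup, pt-kill can also be run with --print and without --kill as a dry run that only reports what it would kill, for example (a sketch; credentials are placeholders):
shell> pt-kill --busy-time 30 --interval 10 --match-command Query --victims all --print -h172.16.87.72 -uxxxx -pxxxx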
3.7 Why did this SQL run for so long, and why only at this particular time?
Let's compare slow-log entries from other time periods with the ones from the incident window:
# User@Host: dstatusnet[dstatusnet] @ [x]
# Query_time: 2.046068 Lock_time: 0.000079 Rows_sent: 1 Rows_examined: 1
SET timestamp=1572306496;
SELECT ... FROM user_group WHERE id in(693790) AND UNIX_TIMESTAMP(modified) > 0 ORDER BY created ASC;
We noticed a pattern: when "where id in(...)" contains a single value, the query takes roughly two seconds; with multiple values the execution time is not fixed. This conclusion still needs further evidence to confirm.
4. How do we fix it?
Solutions and prevention:
- Upgrade MySQL (to the latest 5.7 release)
- Work with the business side to optimize the SQL, as sketched below
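On the SQL side, one candidate is to make the time filter sargable: UNIX_TIMESTAMP(g.modified) > N wraps the column in a function, so no index on modified can be used for that predicate, while the equivalent comparison against FROM_UNIXTIME(N) can. A hedged sketch (assuming an index on user_group(modified) exists or can be added; credentials are placeholders):
shell> mysql -uxxx -pxxx statusnet -e "EXPLAIN SELECT g.id FROM user_group AS g WHERE g.is_display IN (1,2,3) AND g.modified > FROM_UNIXTIME(1571722305) ORDER BY g.modified ASC LIMIT 1000\G"
# For in-range values, g.modified > FROM_UNIXTIME(1571722305) is equivalent to
# UNIX_TIMESTAMP(g.modified) > 1571722305, but unlike the original form it lets an index on
# user_group(modified) serve both the range filter and the ORDER BY, which can avoid the filesort.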
Enjoy GreatSQL :)
This article was published via OpenWrite, a multi-platform blog publishing tool.