Author: Liang Hang, DBA at 万里数据库 (GreatDB), skilled at diagnosing database performance problems and analyzing transaction and locking issues. He handles customers' day-to-day MySQL operations problems and is keenly interested in open-source database technology.

  • Original content from the GreatSQL community may not be used without authorization. For reprints, please contact the editors and credit the source.

1. The Symptom

While investigating an issue, we discovered that a MySQL master/standby switchover had taken place, even though the MySQL service checked out healthy, no DBA had performed a switchover, and no maintenance had been done on the servers. Fortunately, the business was not seriously affected. So why did the switchover happen?

Assume the original master's address is 172.16.87.72 and the original standby master's address is 172.16.87.123.

2. Troubleshooting Approach

  • Check MySQL's metrics in the monitoring system
  • Check the dual-master (keepalived) switchover logs
  • Check the MySQL error log

3. Investigation

3.1 Check MySQL's metrics in the monitoring system to determine when the failure occurred (estimate the approximate switchover time from traffic)

Using the monitoring system, look at the metrics of the current master (the node serving the business) over one day and over fifteen days, taking the queries-per-second rate as the example.

The rate surges sharply starting on the 22nd, so the initial suspicion is that a master/standby switchover happened around then; the standby's graph confirms the mirror-image change. To pin down the exact time, the next step is the dual-master (keepalived) switchover log.
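If the monitoring graphs are too coarse, the underlying counter can also be sampled directly on each node. A minimal sketch (credentials are placeholders): mysqladmin with -r (relative) and -i (interval) prints the per-interval delta of the Questions counter, so dividing by the interval gives QPS.

shell> mysqladmin -uxxx -p -r -i 10 extended-status | grep -w Questions

Comparing this figure on 87.72 and 87.123 shows at a glance which node is actually carrying the traffic.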

3.2 Check keepalived to determine why the switchover happened

First log in to 87.72 and inspect keepalived's switchover log. Note: the system is Ubuntu and keepalived was not configured with its own log output, so keepalived writes to the default system log (syslog). The log reads as follows:

shell> cat syslog.6|grep "vrrp"
Oct 23 15:50:34 bjuc-mysql1 Keepalived_vrrp: VRRP_Script(check_mysql) failed
Oct 23 15:50:34 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 Keepalived_vrrp: VRRP_Instance(VI_74) Received higher prio advert
Oct 23 15:50:35 bjuc-mysql1 Keepalived_vrrp: VRRP_Instance(VI_74) Entering BACKUP STATE
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:35 bjuc-mysql1 syslog-ng[31634]: Error opening file for writing; filename='/var/log/tang/Keepalived_vrrp.log', error='No such file or directory (2)'
Oct 23 15:50:56 bjuc-mysql1 Keepalived_vrrp: VRRP_Script(check_mysql) succeeded

Log analysis

The log shows that at Oct 23 15:50:34 the check script failed to reach the MySQL service, which caused the VIP to move. Next, check keepalived's log on 87.123 to confirm the switchover time:

Oct 23 15:50:35 prod-UC-yewu-db-slave2 Keepalived_vrrp: VRRP_Instance(VI_74) forcing a new MASTER election
Oct 23 15:50:36 prod-UC-yewu-db-slave2 Keepalived_vrrp: VRRP_Instance(VI_74) Transition to MASTER STATE
Oct 23 15:50:37 prod-UC-yewu-db-slave2 Keepalived_vrrp: VRRP_Instance(VI_74) Entering MASTER STATE

The two logs together show that the dual-master pair switched over at Oct 23 15:50:35.
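For context, the failover behavior comes from the keepalived configuration. The actual config file was not captured here, so the following is only a sketch of how a check script is typically wired into VRRP instance VI_74 (interface name, priority, and VIP are placeholders):

vrrp_script check_mysql {
    script "/etc/keepalived/check_mysql.sh"   # the probe examined in 3.3
    interval 2                                # run the probe every 2 seconds
}

vrrp_instance VI_74 {
    state MASTER
    interface eth0                # placeholder NIC
    virtual_router_id 74
    priority 150                  # the peer runs a lower priority
    advert_int 1
    virtual_ipaddress {
        172.16.87.200             # placeholder VIP
    }
    track_script {
        check_mysql               # a failed probe demotes this node
    }
}

Once check_mysql fails, this node stops claiming the VIP and the peer forces a new MASTER election, which is exactly the sequence the two logs above record.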

3.3 Question: why did the check script fail?

Look at the script's contents:

shell> cat check_mysql.sh
#!/bin/bash
export PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER=xxx
PASSWORD=xxx
PORT=3306
IP=127.0.0.1
mysql --connect-timeout=1 -u$USER -p$PASSWORD -h$IP -P$PORT -e "select 1;"
exit $?

The check script is very simple: it runs select 1 with a connection timeout of 1 second. This statement does not interfere with anything MySQL is doing and costs almost nothing, so why would it fail? Let's check the MySQL logs for clues.
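As an aside, a probe this strict declares failure after a single stalled connection attempt. A more tolerant variant, shown below as a sketch only (not the production script), could retry a couple of times before telling keepalived that MySQL is down, at the cost of a slower failover when the server really is dead:

#!/bin/bash
# Hypothetical variant of check_mysql.sh: retry the probe up to 3 times
# so that one transient stall does not immediately trigger a failover.
USER=xxx
PASSWORD=xxx
PORT=3306
IP=127.0.0.1
for attempt in 1 2 3; do
    if mysql --connect-timeout=1 -u$USER -p$PASSWORD -h$IP -P$PORT \
        -e "select 1;" >/dev/null 2>&1; then
        exit 0   # MySQL answered: report healthy
    fi
    sleep 1      # brief pause before the next attempt
done
exit 1           # three consecutive failures: let keepalived act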

3.4 Check the MySQL error log

191023 15:50:54 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:55 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:55 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:56 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:56 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:57 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:58 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:58 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:50:59 [ERROR] /usr/sbin/mysqld: Sort aborted: Server shutdown in progress
191023 15:51:28 [ERROR] Slave SQL: Error 'Lock wait timeout exceeded; try restarting transaction' on query. Default database: 'statusnet'. Query: 'UPDATE ticket SET max_id=max_id+step WHERE biz_tag='p2p_file'', Error_code: 1205
191023 15:54:43 [ERROR] Slave SQL thread retried transaction 10 time(s) in vain, giving up. Consider raising the value of the slave_transaction_retries variable.
191023 15:54:43 [Warning] Slave: Lock wait timeout exceeded; try restarting transaction Error_code: 1205
191023 15:54:43 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.001432' position 973809430
191023 16:04:13 [ERROR] Error reading packet from server: Lost connection to MySQL server during query ( server_errno=2013)
191023 16:04:13 [Note] Slave I/O thread killed while reading event
191023 16:04:13 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.001432', position 998926454
191023 16:04:20 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.001432' at position 973809430, relay log './mysql-relay-bin.000188' position: 54422
191023 16:04:20 [Note] 'SQL_SLAVE_SKIP_COUNTER=1' executed at relay_log_file='./mysql-relay-bin.000188', relay_log_pos='54422', master_log_name='mysql-bin.001432', master_log_pos='973809430' and new position at relay_log_file='./mysql-relay-bin.000188', relay_log_pos='54661', master_log_name='mysql-bin.001432', master_log_pos='973809669'
191023 16:04:20 [Note] Slave I/O thread: connected to master 'zhu@x:3306',replication started in log 'mysql-bin.001432' at position 998926454

The log shows MySQL reporting a Server shutdown in progress error starting at 15:50:54; at 15:54:43 replication broke, and at 16:04:20 replication recovered on its own. So what is this error, and what does Server shutdown in progress mean here? It comes from a MySQL 5.5 bug: if you actively kill a long-running slow query while that SQL is using the sort buffer, the server behaves as if MySQL had restarted, and connections are closed.

Official bug report: https://bugs.mysql.com/bug.ph...
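The behavior is straightforward to try out on a disposable MySQL 5.5 test instance; the table and column names below are made up for illustration, and the error-log path is an assumption:

-- session 1: force a filesort big enough to still be sorting when killed
mysql> SELECT * FROM big_table ORDER BY non_indexed_col;

-- session 2: kill it while the sort is running
mysql> SHOW PROCESSLIST;
mysql> KILL 12345;   -- the id of session 1 from the processlist

shell> grep "Sort aborted" /var/log/mysql/error.log

On affected 5.5 builds the error log then shows the same misleading "Sort aborted: Server shutdown in progress" message even though the server never shut down.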

Let's quickly check whether our production environment is running MySQL 5.5:

mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.5.22, for debian-linux-gnu (x86_64) using readline 6.2

Connection id:  76593111
Current database:
Current user:  xxx@127.0.0.1
SSL:   Not in use
Current pager:  stdout
Using outfile:  ''
Using delimiter: ;
Server version:  5.5.22-0ubuntu1-log (Ubuntu)
Protocol version: 10
Connection:  127.0.0.1 via TCP/IP
Server characterset: utf8
Db     characterset: utf8
Client characterset: utf8
Conn.  characterset: utf8
TCP port:  3306
Uptime:   462 days 11 hours 50 min 11 sec

Threads: 9  Questions: 14266409536  Slow queries: 216936  Opens: 581411  Flush tables: 605  Open tables: 825  Queries per second avg: 357.022

The production server runs MySQL 5.5 (5.5.22) with an uptime of 462 days 11 hours 50 min 11 sec, which proves the mysqld process never actually restarted. Together with the error log we just examined (replication breaking and then recovering), this matches the MySQL bug: keepalived's probe found MySQL unavailable at that moment, so the check failed.

3.5 Digging deeper: which slow SQL was being killed?

Keep looking through the slow log around the relevant time:

# Time: 191023 15:50:39
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 35.571088  Lock_time: 0.742899 Rows_sent: 1  Rows_examined: 72
SET timestamp=1571817039;
SELECT g.id,g.group_pinyin, g.external_type,g.external_group_id,g.mount_id,g.is_display as group_type,g.nickname,g.site_id,g.group_logo,g.created,g.modified,g.misc,g.name_update_flag as name_flag, g.logo_level,g.group_status,g.conversation, g.only_admin_invite,g.reach_limit_count,g.water_mark  FROM user_group AS g LEFT JOIN group_member as gm ON g.id = gm.group_id WHERE g.is_display in (1,2,3) AND gm.profile_id = 7382782 AND gm.join_state != 2 AND UNIX_TIMESTAMP(g.modified) > 1571722305 ORDER BY g.modified ASC LIMIT 0, 1000;
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 34.276504  Lock_time: 8.440094 Rows_sent: 1  Rows_examined: 1
SET timestamp=1571817039;
SELECT conversation, is_display as group_type, group_status FROM user_group WHERE id=295414;
# Time: 191023 15:50:40
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 31.410245  Lock_time: 0.561083 Rows_sent: 0  Rows_examined: 7
SET timestamp=1571817040;
SELECT id, clientScope, msgTop FROM uc_application where site_id=106762 AND msgTop=1;
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 38.442446  Lock_time: 2.762945 Rows_sent: 2  Rows_examined: 38
SET timestamp=1571817040;
SELECT g.id,g.group_pinyin, g.external_type,g.external_group_id,g.mount_id,g.is_display as group_type,g.nickname,g.site_id,g.group_logo,g.created,g.modified,g.misc,g.name_update_flag as name_flag, g.logo_level,g.group_status,g.conversation, g.only_admin_invite,g.reach_limit_count,g.water_mark  FROM user_group AS g LEFT JOIN group_member as gm ON g.id = gm.group_id WHERE g.is_display in (1,2,3) AND gm.profile_id = 9867566 AND gm.join_state != 2 AND UNIX_TIMESTAMP(g.modified) > 1571796860 ORDER BY g.modified ASC LIMIT 0, 1000;
# Time: 191023 15:50:46
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 40.481602  Lock_time: 8.530303 Rows_sent: 0  Rows_examined: 1
SET timestamp=1571817046;
update heartbeat set modify=1571817004 where session_id='89b480936c1384305e35b531221b705332a4aebf';
# Time: 191023 15:50:41
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 32.748755  Lock_time: 0.000140 Rows_sent: 0  Rows_examined: 3
SET timestamp=1571817041;
SELECT id,nickname,mount_id,is_display,external_type,misc,site_id,conversation as conv_id,only_admin_invite,reach_limit_count,water_mark, group_logo,logo_level,created,modified,name_update_flag as name_flag, group_pinyin, group_status,external_group_id,event_id FROM user_group WHERE id in(1316262,1352179,1338753) AND UNIX_TIMESTAMP(modified) > 1571023021 ORDER BY created ASC;
# Time: 191023 15:50:46
# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 40.764872  Lock_time: 4.829707 Rows_sent: 0  Rows_examined: 7
SET timestamp=1571817046;
SELECT id,nickname,mount_id,is_display,external_type,misc,site_id,conversation as conv_id,only_admin_invite,reach_limit_count,water_mark, group_logo,logo_level,created,modified,name_update_flag as name_flag, group_pinyin, group_status,external_group_id,event_id FROM user_group WHERE id in(673950,1261923,1261748,1262047,1038545,1184675,1261825) AND UNIX_TIMESTAMP(modified) > 1571744114 ORDER BY created ASC;

As shown above, these queries ran many times in that window, each taking 30 to 60 seconds, which makes them seriously slow SQL.
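Since Percona Toolkit is already deployed here (pt-kill appears below), the slow log for that window can also be summarized rather than read line by line. A sketch, assuming the slow-log path:

shell> pt-query-digest --since '2019-10-23 15:40:00' --until '2019-10-23 16:00:00' /path/to/mysql-slow.log

pt-query-digest groups statements into fingerprints and reports per-query counts and response-time distributions, which makes "the same query ran many times at 30-60s" visible at a glance.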

3.6 So what performed the kill?

After asking around and digging further, we found that pt-kill was running with automatic killing enabled, configured to kill queries that stay busy for more than 30s:

shell> ps -aux|grep 87.72
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root      3813  0.0  0.0 218988  8072 ?        Ss   Oct19   9:54 perl /usr/bin/pt-kill --busy-time 30 --interval 10 --match-command Query --ignore-user rep|repl|dba|ptkill --ignore-info 40001 SQL_NO_CACHE --victims all --print --kill --daemonize --charset utf8 --log=/tmp/ptkill.log.kill_____xxx_____172.16.87.72 -h172.16.87.72 -uxxxx -pxxxx

shell> cat /tmp/ptkill.log.kill_____xxx_____172.16.87.72
2019-10-23T15:45:37 KILL 75278028 (Query 39 sec) statusnet SELECT t.conv_id, t.notice_id, t.thread_type, t.parent_conv_id, t.seq, t.reply_count, t.last_reply_seq1, t.last_reply_seq2, t.revocation_seq, t.creator_id, t.status, t.operator_id, t.extra1 FROM notice_thread t WHERE parent_conv_id = 3075773 and notice_id in (1571816367355,1571814620251,1571814611880,1571814601809,1571814589604,1571814578823,1571814568507,1571814559399,1571812785487,1571812769810)
2019-10-23T15:45:37 KILL 75277932 (Query 39 sec) statusnet SELECT t.conv_id, t.notice_id, t.thread_type, t.parent_conv_id, t.seq, t.reply_count, t.last_reply_seq1, t.last_reply_seq2, t.revocation_seq, t.creator_id, t.status, t.operator_id, t.extra1 FROM notice_thread t WHERE parent_conv_id = 6396698 and notice_id in (18606128)
2019-10-23T15:45:37 KILL 75191686 (Query 39 sec) statusnet SELECT id,nickname,mount_id,is_display,external_type,misc,site_id,conversation as conv_id,only_admin_invite,reach_limit_count,water_mark, group_logo,logo_level,created,modified,name_update_flag as name_flag, group_pinyin, group_status,external_group_id,event_id FROM user_group WHERE id in(988091,736931,843788,739447,737242,1252702,736180,1150776) AND UNIX_TIMESTAMP(modified) > 1571740050 ORDER BY created ASC

So it was indeed pt-kill that actively killed the queries. Combined with the earlier findings, the full picture is now in front of us.
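As an aside, pt-kill can also run in observe-only mode: keeping --print but dropping --kill logs the would-be victims without terminating them, which is a safer way to validate a --busy-time threshold before enforcing it. A sketch based on the command line above (credentials and log path are placeholders):

shell> pt-kill --busy-time 30 --interval 10 --match-command Query --victims all --print --daemonize --log=/tmp/ptkill.dryrun.log -h172.16.87.72 -uxxxx -pxxxx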

3.7 Why did this SQL run so long, and why did it only cause trouble at this particular time?

Compare slow-log entries from other times with those from the incident window:

# User@Host: dstatusnet[dstatusnet] @  [x]
# Query_time: 2.046068  Lock_time: 0.000079 Rows_sent: 1  Rows_examined: 1
SET timestamp=1572306496;
SELECT ... FROM user_group WHERE id in(693790) AND UNIX_TIMESTAMP(modified) > 0 ORDER BY created ASC;

A pattern emerges: when the WHERE id in(...) clause contains a single value, the query takes roughly two seconds; with multiple values the time varies widely. This conclusion still needs further evidence to be confirmed.
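One way to gather that evidence would be to compare the execution plans for the two shapes of the query (a sketch; the column list is abbreviated):

mysql> EXPLAIN SELECT id, nickname, modified FROM user_group WHERE id in(693790) AND UNIX_TIMESTAMP(modified) > 0 ORDER BY created ASC\G
mysql> EXPLAIN SELECT id, nickname, modified FROM user_group WHERE id in(1316262,1352179,1338753) AND UNIX_TIMESTAMP(modified) > 0 ORDER BY created ASC\G

A single-value IN can be treated as an equality lookup on the primary key, while a multi-value list becomes a range scan; if the plans or row estimates differ between the two, that would support the pattern observed above.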

4. So How Do We Fix It?

Fixes and preventive measures:

  • Upgrade MySQL (to the latest 5.7 release)
  • Work with the business side to optimize the SQL (see the sketch below)
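On the SQL side, one concrete rewrite suggests itself from the slow log. UNIX_TIMESTAMP(modified) > N applies a function to the column, which prevents MySQL from using an index on modified; moving the conversion to the constant side keeps the predicate index-friendly. A sketch (whether an index on modified actually exists here is an assumption):

-- before: the function on the column defeats any index on `modified`
SELECT ... FROM user_group
WHERE id in(693790) AND UNIX_TIMESTAMP(modified) > 1571722305
ORDER BY created ASC;

-- after: compare the column directly; FROM_UNIXTIME is evaluated once
SELECT ... FROM user_group
WHERE id in(693790) AND modified > FROM_UNIXTIME(1571722305)
ORDER BY created ASC;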

Enjoy GreatSQL :)

Published via OpenWrite, a write-once-publish-everywhere blogging platform.