Today we continue looking at how to use ASH, again with the help of an example.
Customer report: in a RAC environment, processing started to slow down at around 11:20.
Materials provided: AWR Report, ASH
Request: identify the root cause and provide a solution.
First, let's take a quick look at the overall picture in the AWR Report.
・Node1 DB Time: 967.74 (mins)
・Node2 DB Time: 414.41 (mins)
・Node3 DB Time: 354.11 (mins)
・Node4 DB Time: 460.29 (mins)
・Node5 DB Time: 551.66 (mins)
From these figures we can see that Node1's DB Time (967.74 mins) is roughly twice that of Node2 through Node5.
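(As a side note, if the AWR reports themselves were not at hand, the same per-instance DB Time could be pulled straight from the workload repository. The following is only a sketch, assuming access to the standard DBA_HIST_SYS_TIME_MODEL view; &begin_snap and &end_snap stand for the snapshot IDs that bound the problem window and are not from the original analysis.)

-- Sketch: per-instance DB Time (minutes) between two AWR snapshots
select e.instance_number,
       round((e.value - b.value)/1000000/60, 2) as db_time_min
from   dba_hist_sys_time_model b,
       dba_hist_sys_time_model e
where  b.stat_name = 'DB time'
and    e.stat_name = 'DB time'
and    b.instance_number = e.instance_number
and    b.snap_id = &begin_snap
and    e.snap_id = &end_snap
order  by e.instance_number;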
So let's look at the "Top 5 Timed Foreground Events" section of each instance.
・Node1
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                      Avg wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read           4,893,299      26,714      5   46.0 User I/O
log file sync                       476,854       6,132     13   10.6 Commit
DB CPU                                            5,009            8.6
Disk file operations I/O            163,640       4,128     25    7.1 User I/O
gc current block 3-way            2,866,969       3,163      1    5.4 Cluster

・Node2
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                      Avg wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
gc cr block busy                    256,891       4,668     18   18.8 Cluster
DB CPU                                            4,613           18.6
db file sequential read           3,089,328       3,822      1   15.4 User I/O
gc current block 3-way            2,533,718       2,717      1   10.9 Cluster
gc cr grant 2-way                 2,424,954       1,698      1    6.8 Cluster

・Node3
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                      Avg wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                            4,368           20.6
gc cr block busy                    241,547       4,166     17   19.6 Cluster
db file sequential read           2,272,733       2,652      1   12.5 User I/O
gc current block 3-way            2,123,690       2,157      1   10.2 Cluster
Disk file operations I/O            227,537       1,604      7    7.5 User I/O

・Node4
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                      Avg wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read           5,651,562       6,052      1   21.9 User I/O
DB CPU                                            5,315           19.2
gc cr block busy                    195,097       3,457     18   12.5 Cluster
gc cr grant 2-way                 4,318,113       2,704      1    9.8 Cluster
gc current block 3-way            2,451,795       2,571      1    9.3 Cluster

・Node5
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                      Avg wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read           5,631,340       6,850      1   20.7 User I/O
DB CPU                                            5,586           16.9
gc cr block busy                    230,530       4,129     18   12.5 Cluster
gc current block 3-way            3,321,250       3,498      1   10.6 Cluster
gc cr grant 2-way                 4,412,873       2,792      1    8.4 Cluster
Looking at this, since the top wait events on every instance are I/O-related, we can basically judge this to be a problem of the business workload being too concentrated.
Next, we need to dig into the ASH data to find evidence that supports this conclusion.
First, let's look at the number of active sessions per minute.
◆SQL statement
SQL> select to_char(sample_time,'yyyy/mm/dd hh24:mi'),count(*)
     from m_dba_hist_active_sess_history
     where instance_number = &instance_number
     group by to_char(sample_time,'yyyy/mm/dd hh24:mi')
     order by to_char(sample_time,'yyyy/mm/dd hh24:mi');
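(To get all five instances into a single result set, e.g. to plot the data behind the chart below, a variant of the query above can also group by instance_number. The following is only a sketch of one possible way to do it, not the query actually used.)

-- Sketch: active session count per minute, broken down by instance
select instance_number,
       to_char(sample_time,'yyyy/mm/dd hh24:mi') as sample_min,
       count(*) as active_sessions
from   m_dba_hist_active_sess_history
group  by instance_number, to_char(sample_time,'yyyy/mm/dd hh24:mi')
order  by sample_min, instance_number;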
From the chart above, we can clearly see that at 11:18 the number of active sessions on every instance increased to some degree.
Next, let's dig deeper into Node1, which shows the largest increase. We start with the distribution of active sessions by PROGRAM.
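(The PROGRAM distribution itself can be checked with something like the sketch below against the same m_dba_hist_active_sess_history copy; this query is not taken from the original analysis, it is just one way to get the breakdown for Node1.)

-- Sketch: how many ASH samples each PROGRAM accounts for on Node1
select program, count(*) as samples
from   m_dba_hist_active_sess_history
where  instance_number = 1
group  by program
order  by count(*) desc;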
A quick analysis shows that sessions connecting through sqlplus and JDBC Thin Client are both the most numerous and the ones that grew the most.
SQL> select to_char(sample_time,'yyyy/mm/dd hh24:mi'),count(*)
     from m_dba_hist_active_sess_history
     where instance_number=1
     and PROGRAM like 'sqlplus%'
     group by to_char(sample_time,'yyyy/mm/dd hh24:mi')
     having count(*)>10
     order by to_char(sample_time,'yyyy/mm/dd hh24:mi');

TO_CHAR(SAMPLE_TIME,'YYYY/MM/DDHH24:MI')           COUNT(*)
------------------------------------------------ ----------
...                                                     ...
2021/03/22 11:14                                         12
2021/03/22 11:15                                         12
2021/03/22 11:16                                         12
2021/03/22 11:17                                         12
2021/03/22 11:18                                         24  ★ session count doubled
2021/03/22 11:19                                         22
2021/03/22 11:20                                         26
2021/03/22 11:21                                         19
2021/03/22 11:22                                         16
2021/03/22 11:23                                         18
2021/03/22 11:24                                         17
2021/03/22 11:25                                         18
2021/03/22 11:26                                         18
2021/03/22 11:27                                         20
2021/03/22 11:28                                         22
2021/03/22 11:29                                         21
2021/03/22 11:30                                         23
...                                                     ...

SQL> select to_char(sample_time,'yyyy/mm/dd hh24:mi'),count(*)
     from m_dba_hist_active_sess_history
     where instance_number=1
     and PROGRAM like 'JDBC Thin Client'
     group by to_char(sample_time,'yyyy/mm/dd hh24:mi')
     having count(*)>10
     order by to_char(sample_time,'yyyy/mm/dd hh24:mi');

TO_CHAR(SAMPLE_TIME,'YYYY/MM/DDHH24:MI')           COUNT(*)
------------------------------------------------ ----------
...                                                     ...
2021/03/22 11:13                                         99
2021/03/22 11:14                                        104
2021/03/22 11:15                                        116
2021/03/22 11:16                                        118
2021/03/22 11:17                                        119
2021/03/22 11:18                                        168  ★ increased
2021/03/22 11:19                                        100
2021/03/22 11:20                                        100
2021/03/22 11:21                                        137
2021/03/22 11:22                                        121
2021/03/22 11:23                                        134
2021/03/22 11:24                                        113
2021/03/22 11:25                                        108
2021/03/22 11:26                                        119
2021/03/22 11:27                                         89
2021/03/22 11:28                                        107
2021/03/22 11:29                                        100
...                                                     ...
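(To double-check what these sqlplus / JDBC Thin Client sessions were actually waiting on, their samples can be broken down by wait event. The query below is only a sketch against the same copy table; the 11:15-11:30 window is an illustrative filter, not taken from the original analysis.)

-- Sketch: wait-event breakdown for the sqlplus / JDBC Thin Client sessions on Node1
select nvl(event,'ON CPU') as event, count(*) as samples
from   m_dba_hist_active_sess_history
where  instance_number = 1
and    (program like 'sqlplus%' or program like 'JDBC Thin Client%')
and    to_char(sample_time,'yyyy/mm/dd hh24:mi')
           between '2021/03/22 11:15' and '2021/03/22 11:30'
group  by nvl(event,'ON CPU')
order  by count(*) desc;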
Up to this point, we can basically conclude that at around 11:18 the sessions connecting through sqlplus and JDBC Thin Client became too concentrated, which drove up I/O and caused the slowdown.
So how should this kind of problem be handled?
There are two possible directions:
1. Spread out the business processing so it is not so concentrated.
2. Identify the specific processing that generates the heavy I/O and see whether it can be done with less I/O.
The second direction involves SQL tuning, which is beyond the scope of this post.
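(That said, ASH can at least show where to start with the second direction. The sketch below, again hypothetical and against the same copy table, lists the SQL_IDs on Node1 with the most User I/O samples; those would be the first candidates to examine.)

-- Sketch: top 10 SQL_IDs by User I/O samples on Node1
select *
from (
      select sql_id, count(*) as io_samples
      from   m_dba_hist_active_sess_history
      where  instance_number = 1
      and    wait_class = 'User I/O'
      and    sql_id is not null
      group  by sql_id
      order  by count(*) desc
     )
where rownum <= 10;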
2021/03/23 @ Dalian