One: Background

1. The story

In the middle of this month, a friend added me on WeChat asking for help: his program's thread count was abnormally high, and he wanted to know how to fix it. Screenshot below:

To be honest, chatting with programmers from other industries is quite interesting; making friends also widens my own circle. He told me this bug had even cost him a project … 😂😂😂

Haha, apparently the customer wasn't convinced and refused to accept the delivery. Ah well... if he had found me earlier, that customer might have been won back. Perhaps that is the value of technical skill! 😁😁😁

Since he came to me, let's make this hang disappear for good. Time to have a chat with windbg.
Two: Windbg Analysis

1. Checking the threads

Since my friend said the thread count was high, let's start with the threads, using the !t command.
0:000> !t
ThreadCount: 1006
UnstartedThread: 0
BackgroundThread: 1005
PendingThread: 0
DeadThread: 0
Hosted Runtime: no
Lock
DBG ID OSID ThreadOBJ State GC Mode GC Alloc Context Domain Count Apt Exception
0 1 10c8 00000000004D89A0 2a020 Preemptive 0000000000000000:0000000000000000 00000000002f1070 0 MTA
2 2 13c0 000000000031FF70 2b220 Preemptive 0000000000000000:0000000000000000 00000000002f1070 0 MTA (Finalizer)
3 3 12cc 000000000032B780 102a220 Preemptive 0000000000000000:0000000000000000 00000000002f1070 0 MTA (Threadpool Worker)
4 5 138c 000000000039E3C0 8029220 Preemptive 00000000B6D3CCA0:00000000B6D3D260 00000000002f1070 0 MTA (Threadpool Completion Port)
6 6 106c 0000000019E562A0 3029220 Preemptive 0000000000000000:0000000000000000 00000000002f1070 0 MTA (Threadpool Worker)
8 11 7f0 0000000019F8F9E0 20220 Preemptive 0000000000000000:0000000000000000 00000000002f1070 0 Ukn
9 1949 323c 000000009AA69E40 8029220 Preemptive 00000000B6BB8AD0:00000000B6BB94E0 00000000002f1070 0 MTA (Threadpool Completion Port)
10 1637 b3c 000000009AA1C260 8029220 Preemptive 00000000B6CD4220:00000000B6CD47E0 00000000002f1070 0 MTA (Threadpool Completion Port)
11 1947 223c 000000009ADB72E0 8029220 Preemptive 00000000B6D88D68:00000000B6D89550 00000000002f1070 0 MTA (Threadpool Completion Port)
12 1968 2e74 000000009AA1E330 8029220 Preemptive 00000000B6A8CD40:00000000B6A8D300 00000000002f1070 0 MTA (Threadpool Completion Port)
...
994 313 1fa4 000000009A81FFC0 8029220 Preemptive 00000000B6BFC1B8:00000000B6BFC410 00000000002f1070 0 MTA (Threadpool Completion Port)
995 1564 18ec 000000009A835510 8029220 Preemptive 00000000B6AC1ED0:00000000B6AC2490 00000000002f1070 0 MTA (Threadpool Completion Port)
996 1581 4ac 000000001C2E36E0 8029220 Preemptive 00000000B6C51500:00000000B6C51AC0 00000000002f1070 0 MTA (Threadpool Completion Port)
997 814 2acc 000000009A73B5E0 8029220 Preemptive 00000000B6D67BF8:00000000B6D683E0 00000000002f1070 0 MTA (Threadpool Completion Port)
998 517 25dc 000000009A838990 8029220 Preemptive 00000000B6D2CA10:00000000B6D2CFD0 00000000002f1070 0 MTA (Threadpool Completion Port)
999 670 2a10 000000001C2E4400 8029220 Preemptive 00000000B6CD0490:00000000B6CD0A50 00000000002f1070 0 MTA (Threadpool Completion Port)
1000 183 1704 000000009A81F930 8029220 Preemptive 00000000B6AE8670:00000000B6AE8C30 00000000002f1070 0 MTA (Threadpool Completion Port)
1001 117 1bcc 000000009A73BC70 8029220 Preemptive 00000000B6B92780:00000000B6B92D40 00000000002f1070 0 MTA (Threadpool Completion Port)
1002 1855 1d68 000000009A81E580 8029220 Preemptive 00000000B6B28460:00000000B6B28A20 00000000002f1070 0 MTA (Threadpool Completion Port)
1003 1070 2ef0 000000009A73C300 8029220 Preemptive 00000000B6B8F640:00000000B6B8FC00 00000000002f1070 0 MTA (Threadpool Completion Port)
1004 1429 210c 000000001C2E4A90 8029220 Preemptive 00000000B6D5F488:00000000B6D5FC70 00000000002f1070 0 MTA (Threadpool Completion Port)
1005 1252 2f38 000000009A838300 8029220 Preemptive 00000000B6A99240:00000000B6A99800 00000000002f1070 0 MTA (Threadpool Completion Port)
1006 1317 3118 000000001C2E5120 8029220 Preemptive 00000000B6DA3A30:00000000B6DA4440 00000000002f1070 0 MTA (Threadpool Completion Port)
1007 1837 3120 000000009A8375E0 8029220 Preemptive 00000000B6D38F10:00000000B6D394D0 00000000002f1070 0 MTA (Threadpool Completion Port)
1009 1964 2f64 000000009A81DEF0 1029220 Preemptive 0000000000000000:0000000000000000 00000000002f1070 0 MTA (Threadpool Worker)
We can see the process currently has 1006 threads, 1000 of which are Threadpool Completion Port threads. This is the first time I've seen this many IO threads stuck — impressive 🐂👃.

Honestly, Threadpool Completion Port immediately tells me these are callbacks of asynchronous operations. But why would so many IO threads get stuck? To find the answer, let's pick one thread and take a look.
0:1000> ~1000s
ntdll!NtNotifyChangeDirectoryFile+0xa:
00000000`77c7a75a c3 ret
0:1000> !clrstack
OS Thread Id: 0x1704 (1000)
Child SP IP Call Site
00000000A99FF4C0 0000000077c7a75a [InlinedCallFrame: 00000000a99ff4c0] Interop+Kernel32.ReadDirectoryChangesW(Microsoft.Win32.SafeHandles.SafeFileHandle, Byte[], Int32, Boolean, Int32, Int32 ByRef, System.Threading.NativeOverlapped*, IntPtr)
00000000A99FF4C0 000007fe8e87bd20 [InlinedCallFrame: 00000000a99ff4c0] Interop+Kernel32.ReadDirectoryChangesW(Microsoft.Win32.SafeHandles.SafeFileHandle, Byte[], Int32, Boolean, Int32, Int32 ByRef, System.Threading.NativeOverlapped*, IntPtr)
00000000A99FF470 000007fe8e87bd20 DomainBoundILStubClass.IL_STUB_PInvoke(Microsoft.Win32.SafeHandles.SafeFileHandle, Byte[], Int32, Boolean, Int32, Int32 ByRef, System.Threading.NativeOverlapped*, IntPtr)
00000000A99FF560 000007fef19dab6e System.IO.FileSystemWatcher.Monitor(AsyncReadState) [E:\A\_work\322\s\corefx\src\System.IO.FileSystem.Watcher\src\System\IO\FileSystemWatcher.Win32.cs @ 141]
00000000A99FF5E0 000007fef19dae6c System.IO.FileSystemWatcher.ReadDirectoryChangesCallback(UInt32, UInt32, System.Threading.NativeOverlapped*) [E:\A\_work\322\s\corefx\src\System.IO.FileSystem.Watcher\src\System\IO\FileSystemWatcher.Win32.cs @ 227]
00000000A99FF630 000007feedbe0af9 System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) [E:\A\_work\191\s\src\mscorlib\shared\System\Threading\ExecutionContext.cs @ 167]
00000000A99FF6B0 000007feede094dc System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*) [E:\A\_work\191\s\src\mscorlib\src\System\Threading\Overlapped.cs @ 108]
00000000A99FF7F0 000007feee359ed3 [GCFrame: 00000000a99ff7f0]
00000000A99FF9D0 000007feee359ed3 [DebuggerU2MCatchHandlerFrame: 00000000a99ff9d0]
Well, well — FileSystemWatcher again. Readers following this series may remember that last month I wrote 记一次 .NET 某流媒体独角兽 API 句柄泄露分析 (an analysis of a file-handle leak at a streaming-media unicorn's API), in which a FileSystemWatcher drove the file handle count through the roof. The root cause there was periodically rebuilding appsettings with reloadOnChange=true. What a small world — could this be the same collision? Let's focus on it next.
2. Exploring FileSystemWatcher

To dig deeper, first use the !dso command to look at the objects on the current thread stack.
0:1000> !dso
OS Thread Id: 0x1704 (1000)
RSP/REG Object Name
00000000A99FF508 00000000263285d8 System.Byte[]
00000000A99FF518 00000000242aeb10 System.Threading._IOCompletionCallback
00000000A99FF560 00000000242ae1b0 Microsoft.Win32.SafeHandles.SafeFileHandle
00000000A99FF568 00000000242aeaa8 System.Threading.PreAllocatedOverlapped
00000000A99FF578 00000000242aeb10 System.Threading._IOCompletionCallback
00000000A99FF5E0 00000000242a8538 System.IO.FileSystemWatcher
00000000A99FF5E8 00000000242aea10 System.IO.FileSystemWatcher+AsyncReadState
00000000A99FF608 00000000242aea10 System.IO.FileSystemWatcher+AsyncReadState
00000000A99FF610 0000000023206e30 System.Threading.ExecutionContext
00000000A99FF618 0000000001032928 System.Threading.ContextCallback
00000000A99FF630 00000000242a8538 System.IO.FileSystemWatcher
00000000A99FF678 00000000b6a69a40 System.Threading.Thread
00000000A99FF688 00000000242aeb10 System.Threading._IOCompletionCallback
00000000A99FF690 0000000023206e30 System.Threading.ExecutionContext
00000000A99FF6C0 0000000021fa55d8 System.Threading._IOCompletionCallback
00000000A99FF6C8 000000002052e6e0 System.Threading.ExecutionContext
00000000A99FF7E0 000000000560d2b0 System.Threading.OverlappedData
Since the thread stack grows toward lower addresses, dumping the System.Byte[] at the lowest address tells us what the current callback is processing. Screenshot below:

With the experience from the previous battle, at this point I had basically figured it out: this is another classic case of repeatedly building a ConfigurationRoot configured with reloadOnChange: true. The consequence is that a huge number of FileSystemWatcher and ConfigurationRoot instances accumulate in memory and can never be released, and the trigger is the constant changes to the log files shown in the screenshot above, firing a massive number of callbacks that hang the program. As for the details … let me break it down slowly. First, let's verify the counts of these two classes on the managed heap.
0:1000> !dumpheap -stat -type FileSystemWatcher
Statistics:
MT Count TotalSize Class Name
000007fe8ed5bc90 2 160 System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.IO.FileSystemWatcher, System.IO.FileSystem.Watcher]]
000007fe8e9f11a0 34480 1930880 System.IO.FileSystemWatcher+AsyncReadState
000007fe8e9d69c8 34480 4137600 System.IO.FileSystemWatcher
Total 68962 objects
0:1000> !dumpheap -stat -type ConfigurationRoot
Statistics:
MT Count TotalSize Class Name
000007fe8e9f1e70 34480 827520 Microsoft.Extensions.Configuration.ConfigurationRoot+<>c__DisplayClass2_0
000007fe8e999560 34480 1103360 Microsoft.Extensions.Configuration.ConfigurationRoot
Total 68960 objects
Sure enough, the managed heap holds about 34,480 FileSystemWatcher and ConfigurationRoot instances each. Time to talk to my friend.
3. What code is causing this?

I asked my friend why there were 34,480 ConfigurationRoot objects when, in theory, a program should only ever have one. With this information he quickly located the offending code. Screenshot below:

It is precisely the reloadOnChange: true in the screenshot above that makes the underlying framework build a FileSystemWatcher to monitor appsettings.json in real time, leaving 34,480 objects in memory that cannot be released.
Three: Why do log file changes hang the program?

1. My initial confusion

To be honest, at this point in the analysis I was confused too. Even with 34,480 FileSystemWatcher instances in memory, they are all just watching appsettings.json; as long as that file never changes, the 34,480 callbacks should never fire, right? But after I finished reading the ConfigurationRoot source code, I realized how naive I had been.
2. Finding the answer in the source code

First, what exactly does the FileSystemWatcher monitor? We can set a breakpoint in its constructor, as shown below:

Clearly, it watches the application's root directory, not the single file. That explains why any change to a log file triggers the file-change callback. To verify this, I set a breakpoint in the ReadDirectoryChangesCallback method, then dropped a log file into the root directory to see whether it fires. Screenshot below:
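The same behavior can be reproduced outside windbg with a minimal console sketch: a FileSystemWatcher is constructed over a directory, so the underlying ReadDirectoryChangesW completion fires for every file in that directory (the JSON provider only filters by name after the callback). The demo below is my own illustration, not the framework's code:

```csharp
using System;
using System.IO;

class WatchDemo
{
    static void Main()
    {
        var dir = AppContext.BaseDirectory;
        using var watcher = new FileSystemWatcher(dir)
        {
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
            EnableRaisingEvents = true
        };

        // Fires for ANY file in the directory -- including log files --
        // not just appsettings.json.
        watcher.Created += (_, e) => Console.WriteLine($"Created: {e.Name}");
        watcher.Changed += (_, e) => Console.WriteLine($"Changed: {e.Name}");

        Console.WriteLine($"Watching {dir}; drop a log file here ...");
        Console.ReadLine();
    }
}
```

Drop any file into the watched directory and the events fire, mirroring what the breakpoint in ReadDirectoryChangesCallback showed.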
Back to our case: every time a log file changes, 34,480 callbacks fire; a hundred changes mean about 3.4 million callbacks. And since the logs never stop being written, the flood of callbacks naturally grinds the program to death... right?
Four: Summary

This incident probably happened because my friend took a shortcut: instead of injecting Configuration or IOptions, he rebuilt a ConfigurationRoot each time to fetch the ConnectionString, and mistakenly configured reloadOnChange: true. As a result, the IO threads could not keep up with the flood of callbacks triggered by log file changes, and the program hung.

Once the cause and effect are clear, the fix is simple. Here are two options:
- Change reloadOnChange: true to reloadOnChange: false.
- Find a way to inject Configuration into the DataBaseConfig class, or make it a static variable. 😁😁😁
Finally, an Easter egg — my friend was far too kind.