OOM Notes

Recently my project kept getting OOM killed, but I could not find any OOM log in /var/log/messages.
In fact, different Linux distributions write the OOM log to different places, but you can always see it with dmesg:

[33350932.058517] Task in /kubepods/burstable/podbed004f6-87f6-415f-8b42-696e12d6096a/851f02a6960c8b872559de9e29feda1b8ed41d397096d95d3dc8d6f74e9de061 killed as a result of limit of /kubepods/burstable/podbed004f6-87f6-415f-8b42-696e12d6096a
[33350932.058524] memory: usage 14680064kB, limit 14680064kB, failcnt 811012
[33350932.058525] memory+swap: usage 14680064kB, limit 9007199254740988kB, failcnt 0
[33350932.058526] kmem: usage 194960kB, limit 9007199254740988kB, failcnt 0
[33350932.058526] Memory cgroup stats for /kubepods/burstable/podbed004f6-87f6-415f-8b42-696e12d6096a: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[33350932.058540] Memory cgroup stats for /kubepods/burstable/podbed004f6-87f6-415f-8b42-696e12d6096a/3bf687dcb3254c8e7b66ad7b5a93918f65076cae647d84424f6f590ef13ba451: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:48KB inactive_file:0KB active_file:0KB unevictable:0KB
[33350932.058550] Memory cgroup stats for /kubepods/burstable/podbed004f6-87f6-415f-8b42-696e12d6096a/851f02a6960c8b872559de9e29feda1b8ed41d397096d95d3dc8d6f74e9de061: cache:0KB rss:14482160KB rss_huge:0KB shmem:0KB mapped_file:528KB dirty:0KB writeback:3960KB swap:0KB inactive_anon:0KB active_anon:14485012KB inactive_file:16KB active_file:8KB unevictable:0KB
[33350932.058559] Tasks state (memory values in pages):
[33350932.058560] [pid]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[33350932.059085] [53131]     0 53131      256        1    32768        0          -998 pause
[33350932.059090] [53484]     0 53484     3804      826    73728        0           945 sh
[33350932.059093] [53511]     0 53511    27556      708   253952        0           945 sshd
[33350932.059096] [53565]     0 53565    21333     1104   212992        0           945 su
[33350932.059099] [53567]     0 53567    27538      711    86016        0           945 opagent
[33350932.059101] [53579]  1602 53579 12617494  3622901 34770944        0           945 java
[33350932.059104] [59796]     0 59796     1933      180    57344        0           945 sleep
[33350932.059248] Memory cgroup out of memory: Kill process 53579 (java) score 1934 or sacrifice child
[33350932.065172] Killed process 53579 (java) total-vm:50469976kB, anon-rss:14454940kB, file-rss:36664kB, shmem-rss:0kB
[33350933.005245] oom_reaper: reaped process 53579 (java), now anon-rss:0kB, file-rss:8kB, shmem-rss:0kB

Docker gave the container 15G of memory, the JVM was allocated a 12G heap, and with the various buffers and caches on top of that the total exceeded the limit, so the process was OOM killed. The log above bears this out: the java process was killed with an anon-rss of 14454940 kB, essentially at the cgroup limit of 14680064 kB.
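
The 12G heap is only part of the JVM's footprint: metaspace, thread stacks, the JIT code cache and direct (off-heap) buffers all count against the container limit as well. Below is a minimal sketch of that effect (the class name and sizes are made up for illustration, not taken from the incident above): direct buffers grow the process RSS while the heap usage barely moves.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class OffHeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Heap max (-Xmx): %d MB%n", rt.maxMemory() / (1024 * 1024));

        // Direct buffers live outside the Java heap. They count toward the
        // container's memory limit, but not toward -Xmx, so the heap can look
        // healthy while the cgroup is running out of memory.
        List<ByteBuffer> offHeap = new ArrayList<>();
        for (int i = 1; i <= 8; i++) {
            offHeap.add(ByteBuffer.allocateDirect(64 * 1024 * 1024)); // 64 MB each
            long heapUsedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.printf("off-heap: %d MB, heap used: %d MB%n", i * 64, heapUsedMb);
        }
    }
}

Watching the process with top while this runs shows the resident size climbing by roughly 512 MB even though the reported heap usage stays small.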

When colleagues hear "OOM kill", the first reaction of most of them is: did the JVM leak memory?

In fact, an OOM kill means the memory available to the Linux machine itself (here, the container's cgroup limit) is exhausted, not that the JVM heap is full. A full heap shows up as a java.lang.OutOfMemoryError inside the JVM; a cgroup OOM kill is a SIGKILL from the kernel, which the JVM cannot catch or log.
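
One quick way to see which budget you are actually up against is to compare the heap ceiling the JVM reports with the limit the kernel enforces. This is a minimal sketch assuming the cgroup v1 layout seen in the log above (cgroup v2 exposes /sys/fs/cgroup/memory.max instead); the class name is made up:

import java.nio.file.Files;
import java.nio.file.Path;

public class ContainerLimitCheck {
    public static void main(String[] args) throws Exception {
        // What the JVM will allow the heap to grow to (-Xmx / MaxRAMPercentage).
        long heapMaxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);

        // What the kernel will actually enforce for the whole process tree (cgroup v1 path).
        Path limitFile = Path.of("/sys/fs/cgroup/memory/memory.limit_in_bytes");
        String cgroupLimit = Files.exists(limitFile)
                ? (Long.parseLong(Files.readString(limitFile).trim()) / (1024 * 1024)) + " MB"
                : "not found (possibly cgroup v2)";

        System.out.println("JVM heap max : " + heapMaxMb + " MB");
        System.out.println("cgroup limit : " + cgroupLimit);
        // Heap + metaspace + thread stacks + direct buffers must all fit under
        // the cgroup limit, otherwise the kernel OOM killer sends SIGKILL.
    }
}

In practice, instead of a fixed -Xmx, a flag such as -XX:MaxRAMPercentage (JDK 10+, backported to 8u191) lets the JVM derive the heap size from the detected container limit, which leaves explicit headroom for the non-heap memory.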

References
https://stackoverflow.com/que…
