Out Of Memory Error - Kill Process or Sacrifice Child!!



Sacrifices happen  ;) and, well, I think half of my memory has disappeared, causing this issue.

If someone versed in looking at this kind of info can confirm, I'd appreciate it.

If I reboot, I'm nearly certain all will be well again for a while.

 

Likely related to the other memory issues I've been having, I've recently started getting OOM errors.

My primary VM was shut down, and looking at the syslog I see this:

Aug  1 03:01:15 Server kernel: Out of memory: Kill process 24203 (qemu-system-x86) score 267 or sacrifice child
Aug  1 03:01:15 Server kernel: Killed process 24203 (qemu-system-x86) total-vm:9391912kB, anon-rss:9009808kB, file-rss:22508kB
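
For anyone checking their own box, this is roughly how I've been spotting these events. Nothing unRAID-specific here; the syslog path and the qemu process-name pattern are just what applies on my setup:

# list any OOM-killer events since boot (syslog path may differ on other setups)
grep -i "out of memory" /var/log/syslog

# show how "attractive" a running qemu VM currently looks to the OOM killer
# (higher oom_score = more likely to be picked next)
cat /proc/$(pgrep -f qemu-system | head -n 1)/oom_score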

 

I have 32GB installed, with only 20GB assigned to VMs, approximately 5GB used for Docker, and the rest for UnRAID.

 

Looking at the Dashboard I see 32.082GB allocated and 32GB installed, with usage at 88%.

 

Here are some memory-related outputs:

root@Server:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          32081       16102         418       11261       15561        3764
Swap:             0           0           0

This is interesting, since it shows 32081 MB total, 16102 used, and somehow only 418 free...  ???

Edit: The rest is likely in buff/cache and shared, so maybe I'm mistaken.
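
If I'm reading the free(1) columns right (and I may not be), the numbers do add up, and the big "shared" chunk would be tmpfs/shmem rather than any process:

# sanity check on the free -m numbers above:
#   16102 used + 418 free + 15561 buff/cache = 32081 total
# the ~11GB "shared" column is tmpfs/shmem, which is counted inside buff/cache

# list the RAM-backed (tmpfs) mounts to see where that shared memory lives
df -h -t tmpfs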

 

root@Server:~# cat /proc/meminfo
MemTotal:       32851508 kB
MemFree:          638380 kB
MemAvailable:    4001440 kB
Buffers:             140 kB
Cached:         14123816 kB
SwapCached:            0 kB
Active:          8227844 kB
Inactive:       12607492 kB
Active(anon):    7091536 kB
Inactive(anon): 11151660 kB
Active(file):    1136308 kB
Inactive(file):  1455832 kB
Unevictable:     9309128 kB
Mlocked:         9309128 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:             12732 kB
Writeback:           436 kB
AnonPages:      16021112 kB
Mapped:           118628 kB
Shmem:          11531304 kB
Slab:            1747484 kB
SReclaimable:    1277764 kB
SUnreclaim:       469720 kB
KernelStack:       15168 kB
PageTables:        47964 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16425752 kB
Committed_AS:   28726276 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
AnonHugePages:   5226496 kB
DirectMap4k:       11668 kB
DirectMap2M:     1916928 kB
DirectMap1G:    31457280 kB

root@Server:~# vmstat -s
     32851508 K total memory
     16161984 K used memory
      8056764 K active memory
     12634340 K inactive memory
       782400 K free memory
          140 K buffer memory
     15906984 K swap cache
            0 K total swap
            0 K used swap
            0 K free swap
     20001635 non-nice user cpu ticks
          656 nice user cpu ticks
      9901986 system cpu ticks
    268785745 idle cpu ticks
      3960507 IO-wait cpu ticks
            0 IRQ cpu ticks
       206487 softirq cpu ticks
            0 stolen cpu ticks
   1794507562 pages paged in
   1879844435 pages paged out
            0 pages swapped in
            0 pages swapped out
   1553446086 interrupts
    229120367 CPU context switches
   1469832426 boot time
      4671468 forks

root@Server:~# top
top - 16:20:34 up 2 days, 22:33,  1 user,  load average: 1.10, 1.36, 1.09
Tasks: 457 total,   1 running, 456 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.5 us,  5.7 sy,  0.0 ni, 90.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32851508 total,   761320 free, 16161548 used, 15928640 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  4182392 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
7956 root      20   0 5013540 4.472g  14964 S  12.9 14.3 869:04.77 qemu-syste+
24351 root      20   0 4918704 4.434g  22484 S   5.9 14.2 594:54.11 qemu-syste+
21145 root      20   0 4949424 4.464g  22484 S   5.6 14.2 109:46.80 qemu-syste+
20806 nobody    20   0 7412476 1.046g  15568 S   3.0  3.3  90:47.51 mono-sgen
9585 root      20   0   91468   5400   3036 S   1.7  0.0  18:16.84 emhttp
10148 root      20   0   10144   3052   2124 S   1.3  0.0  19:36.68 cache_dirs
    3 root      20   0       0      0      0 S   1.0  0.0   3:56.64 ksoftirqd/0
26015 root      20   0   16760   3100   2328 R   1.0  0.0   0:00.20 top
7959 root      20   0       0      0      0 S   0.7  0.0  34:56.63 vhost-7956
    7 root      20   0       0      0      0 S   0.3  0.0   9:08.15 rcu_preempt
2558 root      20   0    9684   2576   2124 S   0.3  0.0  12:57.14 cpuload
19971 nobody    20   0 5848452 301296   4212 S   0.3  0.9   1:11.33 java
20457 nobody    20   0  500920 155172   3848 S   0.3  0.5   1:18.91 mysqld
21007 nobody    20   0 3395164  61004   3568 S   0.3  0.2   0:47.48 python
21151 nobody    20   0 1936056  21472   1708 S   0.3  0.1   1:25.06 kodi.bin
    1 root      20   0    4372   1548   1440 S   0.0  0.0   0:07.54 init
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.02 kthreadd
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:+
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_sched
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.01 rcu_bh
   10 root      rt   0       0      0      0 S   0.0  0.0   0:01.38 migration/0
   11 root      rt   0       0      0      0 S   0.0  0.0   0:01.73 migration/1
   12 root      20   0       0      0      0 S   0.0  0.0   0:23.63 ksoftirqd/1
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0
   14 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/1:+
   15 root      rt   0       0      0      0 S   0.0  0.0   0:01.58 migration/2
   16 root      20   0       0      0      0 S   0.0  0.0   2:14.49 ksoftirqd/2
   19 root      rt   0       0      0      0 S   0.0  0.0   0:01.60 migration/3
   20 root      20   0       0      0      0 S   0.0  0.0   1:04.77 ksoftirqd/3
   22 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/3:+
   23 root      rt   0       0      0      0 S   0.0  0.0   0:01.36 migration/4
   24 root      20   0       0      0      0 S   0.0  0.0   1:05.02 ksoftirqd/4
   25 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/4:0
   26 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/4:+
   27 root      rt   0       0      0      0 S   0.0  0.0   0:01.49 migration/5
   28 root      20   0       0      0      0 S   0.0  0.0   1:04.93 ksoftirqd/5
   30 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/5:+
   31 root      rt   0       0      0      0 S   0.0  0.0   0:02.65 migration/6
   32 root      20   0       0      0      0 S   0.0  0.0   1:32.13 ksoftirqd/6
   33 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/6:0
   35 root      rt   0       0      0      0 S   0.0  0.0   0:02.00 migration/7
   36 root      20   0       0      0      0 S   0.0  0.0   0:10.35 ksoftirqd/7
   37 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/7:0
   38 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/7:+
   39 root      rt   0       0      0      0 S   0.0  0.0   0:02.86 migration/8
   40 root      20   0       0      0      0 S   0.0  0.0   1:10.84 ksoftirqd/8

 

So what's the verdict? It looks to me like half of it just ran away... I don't know.

I have a replacement motherboard arriving soon that I plan to swap to, which will (hopefully) fix all the issues I've been experiencing.

server-diagnostics-20160801-1615.zip

server-syslog-20160801-1616.zip

Link to comment

I've decided to completely disable all running VMs; this results in a drop from 88% showing on the Dashboard to 42%.

That was a total of 12GB assigned across the 3 VMs that were active.

The ~46% drop seems about right for freeing the 12GB assigned out of the 32GB installed.

Roughly 15% was freed after each one shut down.
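
Rough back-of-the-envelope math on that drop (the per-VM overhead figure is just my assumption):

# 12GB assigned / 32GB installed        ≈ 37.5%
# observed drop: 3 VMs x ~15% each      ≈ 46%   (~4.8GB freed per VM)
# so each qemu process seems to hold ~0.8GB beyond its assigned RAM,
# which is plausible for qemu/device/VNC overhead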

 

So if there really is 32GB available and I'm not dropping half of it for no good damn reason, what in the heck is using 42% with very little running?

 

Completely disabling Docker leads to an additional 2-3% drop in memory usage, with the Dashboard then reading 40%.

I removed the Cache Directories plugin and it made no difference, so I put it back.

 

I have the following optional plugins installed:

Unassigned Devices
Community Applications
Dynamix Cache Directories
Dynamix Schedules
Dynamix SSD Trim
Dynamix System Buttons
Dynamix System Information
Dynamix System Temperature
Fix Common Problems
Nerd Tools
Open Files
Powerdown
Preclear Disks
Server Layout
Tips and Tweaks
UnBalance

 

Do any of the memory outputs or diagnostics show a specific PID or process consuming far more memory than normal?

I'm not seeing anything in particular, and ~40% with no VMs or Dockers running is a LOT!
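
For what it's worth, this is how I've been trying to spot a single fat process (the per-command roll-up is just a quick awk I threw together), but nothing stands out:

# top 15 processes by resident memory
ps aux --sort=-rss | head -n 16

# same idea, but summed per command name (RSS is in KiB)
ps -eo comm,rss --no-headers --sort=-rss | awk '{sum[$1]+=$2} END {for (c in sum) printf "%10d KiB  %s\n", sum[c], c}' | sort -rn | head -n 15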

 

Edit:

 

Here is the output of top with no Dockers or VMs running:

root@Server:~# top
top - 17:05:03 up 2 days, 23:17,  1 user,  load average: 0.65, 0.71, 0.78
Tasks: 361 total,   2 running, 358 sleeping,   0 stopped,   1 zombie
%Cpu(s):  3.6 us,  6.7 sy,  0.0 ni, 89.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32851508 total, 13708548 free,   460912 used, 18682048 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 19863664 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
8074 root      20   0   11960   2332   2028 R  55.6  0.0   0:01.68 find
9585 root      20   0   91600   5436   3036 S   3.6  0.0  19:34.14 emhttp
   12 root      20   0       0      0      0 S   0.7  0.0   0:24.60 ksoftirqd/1
7910 root      20   0   16772   3256   2344 R   0.7  0.0   0:00.05 top
10903 root      20   0    9832   2732   2120 S   0.7  0.0   0:04.47 cache_dirs
    3 root      20   0       0      0      0 S   0.3  0.0   3:59.42 ksoftirqd/0
    7 root      20   0       0      0      0 S   0.3  0.0   9:14.45 rcu_preempt
   20 root      20   0       0      0      0 S   0.3  0.0   1:05.60 ksoftirqd/3
   24 root      20   0       0      0      0 S   0.3  0.0   1:06.63 ksoftirqd/4
   40 root      20   0       0      0      0 S   0.3  0.0   1:11.43 ksoftirqd/8
2529 root      20   0  297196  15552  13028 S   0.3  0.0   0:11.46 smbd
2558 root      20   0    9684   2576   2124 S   0.3  0.0  13:05.95 cpuload
9683 avahi     20   0   34496   3180   2764 S   0.3  0.0   0:20.01 avahi-daemon
13852 nobody    20   0   15232   7648   6476 S   0.3  0.0   0:03.88 unbalance
    1 root      20   0    4372   1548   1440 S   0.0  0.0   0:07.56 init
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.02 kthreadd
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.01 rcu_sched
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.01 rcu_bh
   10 root      rt   0       0      0      0 S   0.0  0.0   0:01.41 migration/0
   11 root      rt   0       0      0      0 S   0.0  0.0   0:01.76 migration/1
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0
   14 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0H
   15 root      rt   0       0      0      0 S   0.0  0.0   0:01.60 migration/2
   16 root      20   0       0      0      0 S   0.0  0.0   2:15.15 ksoftirqd/2
   19 root      rt   0       0      0      0 S   0.0  0.0   0:01.62 migration/3
   22 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/3:0H
   23 root      rt   0       0      0      0 S   0.0  0.0   0:01.38 migration/4
   26 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/4:0H
   27 root      rt   0       0      0      0 S   0.0  0.0   0:01.52 migration/5
   28 root      20   0       0      0      0 S   0.0  0.0   1:06.10 ksoftirqd/5
   30 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/5:0H
   31 root      rt   0       0      0      0 S   0.0  0.0   0:02.70 migration/6
   32 root      20   0       0      0      0 S   0.0  0.0   1:33.22 ksoftirqd/6
   35 root      rt   0       0      0      0 S   0.0  0.0   0:02.05 migration/7
   36 root      20   0       0      0      0 S   0.0  0.0   0:11.15 ksoftirqd/7
   37 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/7:0
   38 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/7:0H
   39 root      rt   0       0      0      0 S   0.0  0.0   0:02.90 migration/8
   41 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/8:0
   42 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/8:0H
   43 root      rt   0       0      0      0 S   0.0  0.0   0:02.71 migration/9
   44 root      20   0       0      0      0 S   0.0  0.0   0:48.48 ksoftirqd/9
   46 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/9:0H
   47 root      rt   0       0      0      0 S   0.0  0.0   0:02.18 migration/10
   48 root      20   0       0      0      0 S   0.0  0.0   0:39.52 ksoftirqd/10
   49 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/10:0
   50 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/10:0H
   51 root      rt   0       0      0      0 S   0.0  0.0   0:02.36 migration/11
   52 root      20   0       0      0      0 S   0.0  0.0   0:41.12 ksoftirqd/11
   53 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/11:0
   55 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kdevtmpfs
   56 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 netns
   59 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 perf
  313 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 writeback
  315 root      25   5       0      0      0 S   0.0  0.0   0:00.00 ksmd

 

root@Server:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          32081         446       13392       11261       18243       19402
Swap:             0           0           0

 

root@Server:~# cat /proc/meminfo
MemTotal:       32851508 kB
MemFree:        13713264 kB
MemAvailable:   19868064 kB
Buffers:            1420 kB
Cached:         16295608 kB
SwapCached:            0 kB
Active:          2330584 kB
Inactive:       14018372 kB
Active(anon):     432144 kB
Inactive(anon): 11151056 kB
Active(file):    1898440 kB
Inactive(file):  2867316 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         51348 kB
Mapped:            36480 kB
Shmem:          11531644 kB
Slab:            2383884 kB
SReclaimable:    1895888 kB
SUnreclaim:       487996 kB
KernelStack:        6592 kB
PageTables:         5844 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16425752 kB
Committed_AS:   11707996 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
AnonHugePages:      8192 kB
DirectMap4k:       11668 kB
DirectMap2M:     1916928 kB
DirectMap1G:    31457280 kB

 

 

Also odd... This popped up just now while SSH'd into my server in the terminal window:

Message from syslogd@Server at Aug  1 17:05:18 ...
kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1

Message from syslogd@Server at Aug  1 17:05:28 ...
kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1

 

 

 

Link to comment

 

I don't see an awful lot wrong at first glance. I raised an issue a while back about memory usage and was advised (quite glibly) that unRAID requires a "fair" amount of memory with the Docker and VM services running, even without much being hosted by them. Also, with the number of plugins you are running, I would expect the memory usage to be up there.

 

As for the unregister_netdevice message, I get this too. I have also seen others posting about it. It seems the prevailing opinion is that it is benign and harmless. I don't know if it has made its way over to 6.2, but I just ignore it.

Link to comment

I don't see an awful lot wrong at first glance. I raised an issue a while back about memory usage and was advised (quite glibly) that unRAID requires a "fair" amount of memory with the Docker and VM services running, even without much being hosted by them.

 

As for the unregister_netdevice message, I get this too. I have also seen others posting about it. It seems the prevailing opinion is that it is benign and harmless. I don't know if it has made its way over to 6.2, but I just ignore it.

 

This is actually a new issue for me; I've never had OOM errors before.

However, I certainly have some form of memory issue lately (which does not show up in Memtest) that I hope to resolve soon. I have another thread related to that, and everyone I've talked to about it seems stumped. I think there may be a memory-related issue with my motherboard; I'll know soon when I evaluate the new one.

 

What makes zero sense is this: okay, I have 40% used with minimal things running (UnRAID and some rather lightweight plugins), so why doesn't any output show me what is using that 40% (roughly 12.8GB) of RAM? That's a decent amount allocated somewhere other than buffers or cache! I'm nearly certain that if I reboot I'll be back in the 10-20% range with no Dockers/VMs loaded.
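
My current theory (and I could easily be wrong) is that whatever it is doesn't belong to any process at all: in the meminfo output above, Shmem is ~11.5GB and Slab is ~2.3GB, and neither shows up as RSS in top. These should show where that is sitting; the paths are just the usual suspects on my box:

# memory that is never attributed to a process's RSS
grep -E 'Shmem|Slab|SReclaimable|SUnreclaim' /proc/meminfo

# size of the RAM-backed filesystems (adjust paths as needed)
du -sh /dev/shm /tmp /var/log 2>/dev/null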

 

When I first started my primary VM (8GB assigned) after updating to RC3 two days ago, it took a LONG time to start. When it finally did, I received this message:

Jul 30 06:07:32 Server kernel: pmd_set_huge: Cannot satisfy [mem 0x383fb0000000-0x383fb0200000] with a huge-page mapping due to MTRR override.

I did some searching and didn't find anything too helpful, but I thought I should mention it at this point.
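
For the record, the MTRR layout that message refers to can be dumped directly; I don't know enough to interpret it myself, but maybe someone else can:

# current MTRR ranges (what the pmd_set_huge message is complaining about)
cat /proc/mtrr

# huge-page related counters for comparison
grep -i huge /proc/meminfo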

 

As for the unregister_netdevice message, I get this too. I have also seen others posting about it. It seems the prevailing opinion is that it is benign and harmless. I don't know if it has made its way over to 6.2, but I just ignore it.

 

Well, I'm on 6.2, so I'd say it arrived...  ::)  It just happened to pop up while I had the terminal open, so I figured I'd report it.

 

 

I'm hoping to find whatever is using this extra RAM before just rebooting.

Link to comment

One more update, with interesting results!!

 

See attached pic.

I'm showing 9.38GB used, 20.02GB cached, with 3.46GB free.

I'm currently showing memory usage at 59% with all plugins loaded back up (reloading them made almost zero difference) and one 4GB VM running now, as the wife was complaining.

 

The 9.38GB used sounds very reasonable: 4GB for the VM leaves ~5GB for Docker, plugins, and UnRAID itself.

So then, why am I showing 59% memory usage?

 

Does System Stats use a different reporting method than the Dashboard memory usage?
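
My guess (and it is only a guess; I don't know which formula either page actually uses) is that one counts cache as used and the other doesn't. A quick awk over free shows both ways of slicing it:

# percentage with cache counted as "used" vs. excluded
free -m | awk '/^Mem:/ {printf "incl. cache: %.0f%%   excl. cache: %.0f%%\n", ($2-$4)*100/$2, ($2-$4-$6)*100/$2}'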

 

 

system_stats.png

Link to comment

If you want to see the amount of memory any particular Docker app is using, check out CA's resource monitor (or cAdvisor). Note that at any given moment a Docker app can use all available (cached/free) memory for its own purposes unless you outright limit it.
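
A minimal sketch of what that looks like on the command line (the 2g figure is just an example; I believe the container template's "Extra Parameters" field is where the flag would go on unRAID):

# see what each container is using right now
docker stats --no-stream

# hard-cap a container's memory with docker's --memory flag, e.g.
#   --memory=2g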

 

Sent from my LG-D852 using Tapatalk

 

 

Link to comment

If you want to see the amount of memory any particular Docker app is using, check out CA's resource monitor (or cAdvisor). Note that at any given moment a Docker app can use all available (cached/free) memory for its own purposes unless you outright limit it.

 

Sent from my LG-D852 using Tapatalk

 

Yep, I get it... You da man and all, but not too much going on in my Docker world (see pic).

I've even disabled the Docker service and it barely changed anything; at most a 5% drop reported on the Dashboard.

 

Edit: Well, not exactly "not too much going on", but ~4.8GB or so total for Docker, which agrees with the System Stats total used amount of ~10GB.

That in no way agrees with the ~60% used I now see on the Dashboard.
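
In case it helps, the kernel's own cgroup accounting gives per-container numbers straight from the source; the path below is the usual Docker default, though it may differ depending on the cgroup layout:

# per-container memory from cgroup accounting (values converted to MiB)
for f in /sys/fs/cgroup/memory/docker/*/memory.usage_in_bytes; do
    printf "%8d MiB  %s\n" $(( $(cat "$f") / 1048576 )) "$f"
done
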

resource_monitor.png

Link to comment
