Out of memory



In low-memory situations, the Linux OOM killer terminates processes that are consuming a lot of memory. A few further conditions are checked before the victim process is selected (I won't list them all here). This has happened 6 times on a Docker container, in your case the one running Sonarr.

 

    Line 3560: Jan 10 19:59:13 ffs2 kernel: Killed process 17778 (mono) total-vm:3574628kB, anon-rss:2057924kB, file-rss:0kB, shmem-rss:4kB
    Line 3778: Jan 14 21:09:07 ffs2 kernel: Killed process 15431 (mono) total-vm:3158360kB, anon-rss:2034704kB, file-rss:0kB, shmem-rss:4kB
    Line 4029: Jan 17 21:53:00 ffs2 kernel: Killed process 6146 (mono) total-vm:3016800kB, anon-rss:2049280kB, file-rss:0kB, shmem-rss:4kB
    Line 4260: Jan 21 19:31:30 ffs2 kernel: Killed process 25086 (mono) total-vm:3102052kB, anon-rss:2041428kB, file-rss:0kB, shmem-rss:4kB
    Line 4479: Jan 24 17:25:16 ffs2 kernel: Killed process 13894 (mono) total-vm:3638768kB, anon-rss:2048868kB, file-rss:0kB, shmem-rss:4kB
    Line 4619: Jan 27 00:18:22 ffs2 kernel: Killed process 24288 (mono) total-vm:3546380kB, anon-rss:2043724kB, file-rss:0kB, shmem-rss:4kB
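If you want to check for these entries yourself, the OOM kills show up in the syslog and in the kernel ring buffer. A quick sketch (the syslog path is the usual unRAID location and may differ on other setups):

    # search the syslog for OOM kills
    grep -i "killed process" /var/log/syslog
    # or read the kernel ring buffer directly
    dmesg | grep -i "killed process"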

 

There are plenty of complaints on the internet about the mono + Sonarr combination eating up memory. You could try restarting the Docker container on a regular basis, before the system runs out of memory or mono hogs too much of it. Trying a different Sonarr docker template could be an option too. It could also be a bug in Sonarr / mono itself that only shows up with your configuration or content.
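If you go the scheduled-restart route, a minimal sketch is a cron entry on the host (the container name "sonarr" is an assumption, use whatever your template named it):

    # restart the Sonarr container every night at 04:00, before memory builds up
    0 4 * * * docker restart sonarr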

 

 

 


@tucansam I have just had a little trawl of the support forum to see if anyone else was complaining about RAM usage. My lowly little server only has 8GB of ECC RAM, but this has always been ample until about the last month, and I now seem to be constantly running at 90%+ capacity. I've started limiting the amount of RAM my dockers use, but the two things I don't limit are unRAID itself (because you can't) and my Plex docker, as that is the primary use of the server. I am using the Linuxserver.io docker template for Plex, and I am noticing that if I restart that docker it looks like it is responsible for at least 25% of the RAM utilisation. So I'm not sure if it is a change within this docker or unRAID itself causing the additional RAM usage. Are you using the Plex docker?
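For reference, limiting a container's RAM comes down to a Docker runtime flag. A rough sketch (the 2G cap and the "plex" container name are just placeholders, adjust both): either add "--memory=2G" to the container's Extra Parameters field (Advanced View in the container settings), or apply it to a running container from the console:

    # cap the container at 2 GiB of RAM (and the same for RAM + swap)
    docker update --memory=2G --memory-swap=2G plex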


@Poprin Are you not confusing used with cached memory? You could run the following in the console:

"top -o %MEM" (without the quotes); it sorts processes by memory usage. With the "e" key you can cycle through the unit size (KiB, MiB, GiB).

The values under RES correspond with %MEM and are an accurate representation of how much physical memory a process is actually consuming.

 


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
19019 nobody    20   0 2272.0m 908.7m  16.8m S   1.7   5.7 136:40.05 mono
11112 nobody    20   0 2335.4m 436.6m  41.1m S   0.3   2.8   1:19.72 Plex Media Serv
12899 nobody    20   0 2412.6m 372.0m   6.7m S   0.3   2.3   8:15.03 python2
 4878 nobody    20   0 3751.8m 316.2m  19.9m S   0.0   2.0   0:25.01 java

 

In my case Radarr and Plex are the top consumers, but most of it is cache.
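A quick way to confirm how much of that is reclaimable cache is free; the buff/cache column is memory the kernel hands back as soon as applications need it, so "available" is a better indicator of real headroom than "free":

    # show memory in mebibytes; check the buff/cache and available columns
    free -m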

 

 

 

 

 

    Jan 14 21:09:07 ffs2 kernel: Task in /docker/dd8723fd3523efe1c260023f12c740e1125a79b095d62f1e5b724c606ee33b05 killed as a result of limit of /docker/dd8723fd3523efe1c260023f12c740e1125a79b095d62f1e5b724c606ee33b05

It's a task within one of the docker apps that is being killed off because of the memory limit placed on the container.
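You can check which container has a limit set and how close it runs to it; a quick sketch (replace the container name with your own):

    # configured memory limit in bytes, 0 means unlimited
    docker inspect -f '{{.HostConfig.Memory}}' sonarr
    # live memory usage / limit per container
    docker stats --no-stream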

 

My best guess, with very limited info, is that Unifi is logging damn near everything:

--logpath /usr/lib/unifi/logs/mongod.log

(And it runs via mono, which has been known to be a pig on resources)
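One way to test that guess is to check how big the log directory actually gets, a sketch assuming the container is named "unifi" (the path comes from the --logpath above):

    # size of the Unifi log directory inside the container
    docker exec unifi du -sh /usr/lib/unifi/logs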


@SiNtEnEl Yes, you are correct, in fairness it is mostly cached rather than utilised. I installed the cAdvisor plugin to get a closer look at what was going on. In conclusion, I think my utilisation of the server has increased, and I'm popping another 4GB in today to see if it gives me enough headroom. Failing that, I still have room for another 4GB... failing that, it's new server time!! It is getting a bit long in the tooth.

