Jaytie Posted February 10, 2023

Good evening, I'm following the advice and opening a thread after seeing an "Out of memory" error in my Fix Common Problems plugin. After looking at the logs, it might have something to do with rsync. Found this in the logfile:

Feb 9 16:05:25 Dockerbox kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=de0cfc44e0f23b22886296a3fdf0b85379af75a1e42099b33f9043bbcc605304,mems_allowed=0,global_oom,task_memcg=/,task=avahi-daemon,pid=21551,uid=61
Feb 9 16:05:25 Dockerbox kernel: Out of memory: Killed process 21551 (avahi-daemon) total-vm:3607048kB, anon-rss:3601876kB, file-rss:4kB, shmem-rss:2804kB, UID:61 pgtables:7092kB oom_score_adj:0

I started/configured an rsync daemon following this post: I want my Unraid server to be an rsync target for Synology Hyper Backup, which worked and is actually running right now. Logfiles attached. Can you give me some further information?

diagnostics-20230210-0057.zip
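[For readers landing here from search: a minimal rsyncd.conf for use as a Hyper Backup target typically looks something like the following. The module name, uid/gid, path, and subnet are illustrative, not taken from the guide linked above.]

```conf
# /boot/custom/etc/rsyncd.conf -- illustrative sketch only
uid = nobody
gid = users
use chroot = no

[hyperbackup]
    path = /mnt/user/backups      # share that receives the Synology backups
    read only = no
    hosts allow = 192.168.10.0/24 # limit access to the local subnet
```

The daemon is then started with `rsync --daemon --config=/boot/custom/etc/rsyncd.conf`.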
Jaytie Posted February 13, 2023 (Author)

Hello again, no ideas yet? Just had another "out of memory" error in my logs, which killed one docker container. I'm not 100% sure, but it might have happened as I accessed a folder with a lot of pictures in my Nextcloud via the NC web interface. Added the diagnostic files again. Help would be much appreciated!

Greetings
diagnostics-20230213-1650.zip
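[A quick way to see which process each OOM kill hit and how much resident memory it held is to filter the kernel's "Killed process" lines. A rough sketch; the sample line is the one from the first post, and on the server you would grep the syslog instead:]

```shell
# Sketch: extract victim process and resident memory from a kernel OOM line.
# On the server, feed it real lines, e.g.:
#   grep 'Out of memory: Killed process' /var/log/syslog
# Here the line from the first post is used as sample input.
line='Feb 9 16:05:25 Dockerbox kernel: Out of memory: Killed process 21551 (avahi-daemon) total-vm:3607048kB, anon-rss:3601876kB, file-rss:4kB, shmem-rss:2804kB, UID:61 pgtables:7092kB oom_score_adj:0'
echo "$line" | awk '{
  for (i = 1; i <= NF; i++) {
    if ($i == "process") name = $(i + 2)             # e.g. "(avahi-daemon)"
    if ($i ~ /^anon-rss:/) {
      sub(/anon-rss:/, "", $i); sub(/kB,?/, "", $i)  # strip label and unit
      printf "%s %.0f MiB resident\n", name, $i / 1024
    }
  }
}'
```

For the line above this prints `(avahi-daemon) 3517 MiB resident`, i.e. avahi-daemon alone held roughly 3.5 GiB when it was killed.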
JorgeB Posted February 13, 2023

If it keeps happening, try further limiting the RAM for VMs and/or docker containers; alternatively, a small swap file on disk might help. You can use the swapfile plugin: https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
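[For context, the swapfile plugin essentially automates the standard manual steps, which look roughly like this. Size and path are examples, the commands need root, and on a btrfs pool the file additionally needs to be created with NOCOW (chattr +C) before use:]

```shell
# Manual swap file sketch (the plugin linked above automates this on Unraid).
fallocate -l 4G /mnt/cache/swapfile   # preallocate the file
chmod 600 /mnt/cache/swapfile         # swap files must not be world-readable
mkswap /mnt/cache/swapfile            # write the swap signature
swapon /mnt/cache/swapfile            # enable it
swapon --show                         # verify it is active
```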
Jaytie Posted February 13, 2023 Author Share Posted February 13, 2023 (edited) I was now able to reproduce it. When accessing a nextcloud folder with a lot of thumbnails to create, memory usage of the nextcloud container bumps up and the 'out of memory' error occurs. 39 minutes ago, JorgeB said: If it keeps happening try limiting more the RAM for VMs and/or docker containers Okay thanks! Whats the best way to do this? Should I set a parameter here: Like: "--memory=2G" Will have a look at the swapfile, thanks! Edit: I'm a little bit confused here. As I wrote in my initial post, it also runs out of memory, when a client reaches the Unraid-Server via rsync. Does somebody have a clue, why that's the case? Edited February 13, 2023 by Jaytie Quote Link to comment
Jaytie Posted February 14, 2023 (Author)

The Swapfile plugin helps with the symptoms, but the problem is still present. It's the "avahi-daemon" that uses all the memory.
Jaytie Posted February 14, 2023 (Author, marked as Solution)

Another update and hopefully the solution for my issue: as mentioned above, the issue reproducibly occurred when connecting with the Hyper Backup client via rsync to Unraid. I guess that because of failing reverse lookups, the avahi-daemon used all the RAM and most of the CPU:

Feb 10 00:55:03 Dockerbox rsync[2997]: connect from 192.168.10.2 (192.168.10.2)
Feb 10 00:55:03 Dockerbox rsyncd[2997]: name lookup failed for 192.168.10.2: Name or service not known
Feb 10 00:55:03 Dockerbox rsyncd[2997]: connect from UNKNOWN (192.168.10.2)

Also, Hyper Backup wasn't able to connect during that time. Eventually the system ran out of memory, killed a process (not always the same one), and Hyper Backup connected.

Solution: I added the line "reverse lookup = no" to my /boot/custom/etc/rsyncd.conf. This results in no more failed name lookups, Hyper Backup connects instantly, and RAM usage stays as it should. I hope this was the main issue. Maybe this will help someone in the future.
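[For clarity, "reverse lookup" is a global rsyncd.conf option (available in recent rsync versions); it goes in the section before any module definitions. The relevant part of the config after the fix would look like this:]

```conf
# /boot/custom/etc/rsyncd.conf
# Global section: skip PTR lookups so a missing reverse DNS record
# for the client can no longer stall or bloat incoming connections.
reverse lookup = no
```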