casperse Posted February 18
Hi all, I just successfully moved my Unraid installation to a brand-new platform (I love Unraid!). But my happiness was short-lived. On my old server I sometimes got the "Your server has run out of memory" error, but I could never find the cause, and I always ended up rebooting the server when it happened. My new server has 128 GB of RAM (twice that of the old one), and after running for only a couple of hours I got the error again 😞 (I did a burn-in and an extended memtest of the new platform and everything is 100% OK.) So: two different platforms, same error. What in my Unraid setup is causing this? Diagnostics attached. Out-of-memory.zip
JorgeB Posted February 19
If it's a one-time thing you can ignore it; if it keeps happening, try limiting the RAM for VMs and/or Docker containers further. The problem is usually not just too little RAM but fragmented RAM. Alternatively, a small swap file on disk might help; you can use the swapfile plugin: https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
casperse Posted February 19
This is happening with no Docker containers and no VMs running! I just did a reboot, which cleared the error message from Fix Common Problems. The only thing I am doing right now is moving my appdata back to the cache drive, and looking at the RAM on the dashboard I don't see high usage. Why does it say maximum size 256 GiB? (I only have 128 GiB.) ZFS usage is high (but only about 15.6 G). Is ZFS capped at a certain percentage of total RAM? I don't have any special logging enabled, and I'm not using syslog either. Also, I only have one browser window open, to Unraid Connect.
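On the ARC question above: on Linux the ARC ceiling is exposed as the zfs_arc_max module parameter (0 means the OpenZFS default of half of physical RAM; some Unraid releases reportedly set a lower cap, which would fit the ~16 G seen here on a 128 GiB box). A small sketch for inspecting it; the sysfs path is the standard OpenZFS location:

```shell
# Sketch: report the ZFS ARC cap (given in bytes) in GiB.
# A value of 0 means "module default" (half of physical RAM on stock OpenZFS).
arc_max_gib() {
    awk -v b="$1" 'BEGIN { printf "%.1f\n", b / (1024*1024*1024) }'
}

# On the server:
#   arc_max_gib "$(cat /sys/module/zfs/parameters/zfs_arc_max)"
```

If the printed value roughly matches the ZFS figure on the dashboard, the memory is ARC cache, which the kernel can reclaim under pressure, so it is unlikely to be the OOM cause by itself.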
JorgeB Posted February 19
Try booting in safe mode and/or closing any browser windows that are open to the GUI; only open it when you need to use it, then close it again.
casperse Posted February 19
Sure, but I'm currently waiting for the mover to finish (very slow); I will try that afterwards and see if it makes a difference. Thanks for helping! The swapfile plugin seems to be aimed at systems with little RAM and btrfs? (I don't have any btrfs drives anymore; I'm just in the process of moving all my cache drives to ZFS so I can use snapshots etc.) So would it still be a viable solution?
JorgeB Posted February 19
This issue should be unrelated to btrfs, but you can try it with ZFS and see if it makes a difference.
casperse Posted February 23
The btrfs question was about the swapfile plugin you suggested; it looks like it needs btrfs-formatted drives? I also got a new error I haven't seen before, resulting in the same memory error message. I have not installed any new Docker containers (I just moved everything to the new server, now with a ZFS cache).
JorgeB Posted February 24
That looks like Plex getting killed because the server was running low on RAM. You can try limiting the RAM for that container, and also make sure Plex is up to date, since I believe at some point it had a memory leak, and possibly still has.
casperse Posted February 24
Thanks JorgeB, I can see my Plex is one update behind. Will update right away! mgutt helped me a long time ago setting up a RAM scratch folder for Plex at boot (script below), but I guess you're talking about the memory limit in the Docker advanced settings? I was told it was best to remove those, but that was in 2022 🙂

#!/bin/bash
mkdir /tmp/PlexRamScratch
chmod -R 777 /tmp/PlexRamScratch
mount -t tmpfs -o size=40g tmpfs /tmp/PlexRamScratch

(The 40g size is to accommodate the download feature in Plex.) I did install the swapfile plugin and created the swap file on a single U3 cache drive with btrfs. Any recommendation on the size? I went with the default values (size: 20G). I still think it's strange that after upgrading from 64 GB to 128 GB of newer and faster RAM I have these low-on-RAM problems. Is it fragmentation, or is some kernel memory limit causing the OOM kills in Docker?
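For what it's worth, a slightly hardened sketch of that boot script (not mgutt's original, same mount): mkdir -p avoids an error if the folder already exists, and a mountpoint check makes the script safe to re-run without stacking a second tmpfs on top of the first:

```shell
#!/bin/bash
# Sketch: re-runnable version of the Plex RAM scratch boot script above.
DIR=/tmp/PlexRamScratch

mkdir -p "$DIR"    # no error if the folder already exists
chmod 777 "$DIR"

# Mount only if not already mounted, so re-running doesn't stack tmpfs mounts.
if ! mountpoint -q "$DIR"; then
    mount -t tmpfs -o size=40g tmpfs "$DIR" || echo "mount failed (needs root)"
fi
```

The size=40g here only reserves an upper bound; tmpfs allocates RAM lazily as files are written, so an empty scratch folder costs almost nothing.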
JorgeB Posted February 24 Share Posted February 24 1 hour ago, casperse said: Is it fragmentation It can be. Quote Link to comment
casperse Posted February 24
I just discovered I have both the script and the advanced settings for the RAM scratch folder:

--no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592

plus the script above that creates the RAM scratch folder 🙂 Any recommendation on using one over the other? (I can't remember if one solution cleans up RAM better than the other.) And did you want me to add "--memory=8G" to the Plex container? Any size recommendation for the swap file? Sorry, I have read many of the (really old) posts on this and I'm curious whether it has any effect on my memory problems. (I also now have ZFS drives, and I can see they allocate more RAM too.)
JorgeB Posted February 24
Sorry, I've never used Plex so I'm not really sure which would be better; you can try both and see.
casperse Posted February 24
OK, so my troubleshooting continues 🙂 I have installed the swapfile plugin you recommended above (the standard size setting is around 2G). I have moved the RAM scratch settings into each of the containers and also set memory limits:

(PLEX) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/PlexRamScratch,tmpfs-size=68719476736 --memory=64G

(EMBY) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/EmbyRamScratch,tmpfs-size=8589934592 --memory=8G

(JELLYFIN) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/JellyRamScratch,tmpfs-size=8589934592 --memory=8G

After doing this I get a Docker warning: "Your kernel does not support swap limit capabilities"? (Running the RAM scratch as a script I never saw any warnings like this; did I do something wrong?) I can see that the memory limit is applied on the Docker page, but I still see the error, and again today. If these are safe to ignore, it would be nice not to have them in RED letters 🙂
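For reference, the tmpfs-size byte values above are whole GiB; a quick sketch to double-check them. One thing worth verifying here (hedged, but consistent with Docker's documentation of tmpfs mounts): pages written into a container tmpfs are charged against that container's memory cgroup, so a 64 GiB tmpfs inside a 64 GiB --memory limit can hit the limit on its own once the scratch folder fills up.

```shell
# Sketch: convert GiB to the byte values used in the tmpfs-size flags above.
gib_to_bytes() {
    echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 64   # 68719476736 -> the Plex tmpfs-size
gib_to_bytes 8    # 8589934592  -> the Emby/Jellyfin tmpfs-size
```

The "swap limit capabilities" warning is separate: it only means the kernel's cgroup swap accounting is off, so Docker cannot enforce a swap limit; the plain --memory limit still works.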
JorgeB Posted February 25
You may have other apps causing the server to run out of memory. You can disable everything and start enabling them one by one, then retest.
casperse Posted February 25
1 hour ago, JorgeB said: "You may have other apps causing the server to run OOM, you can disable everything and start enabling one by one, then retest." Yes, from the above error I can see a container ID starting with c9e4ebfe; searching for this I get the culprit. But I can see that this container already has a limit of 1G:

--memory=1G --no-healthcheck --log-opt max-size=50m

/dev/shm will always be set to 50% of the available memory, right? So any input on what to do next? Any other settings to limit memory for specific containers?
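For anyone else matching a short ID like c9e4ebfe to a container name: plain `docker ps` truncates IDs, so a sketch along these lines (the helper name is made up) filters the untruncated listing instead:

```shell
# Sketch: keep lines from `docker ps --no-trunc` whose full container ID
# starts with a given prefix (the header line is kept for readability).
match_container_id() {
    awk -v p="$1" 'NR == 1 || index($1, p) == 1'
}

# On the server:
#   docker ps --no-trunc | match_container_id c9e4ebfe
```

`docker inspect c9e4ebfe` also accepts an unambiguous ID prefix directly, if only one container matches.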
JorgeB Posted February 25
Can't really help with this; Docker is not my specialty. Maybe someone else can.
trurl Posted February 25
On 2/24/2024 at 5:34 AM, casperse said: "The 40g size is to accommodate the download feature in Plex" If you're going to use that feature, it's probably better to transcode to a path on an SSD.
casperse Posted February 25
4 hours ago, trurl said: "If you're going to use that feature probably better to transcode to a path on ssd" After Plex removed the Sync feature and replaced it with the new Download feature it's not so bad, and the RAM scratch folder empties pretty quickly. The number and size of downloaded files should still be limited, but my initial tests have been OK. I'm not using the feature right now anyway; I'm focusing on eliminating the memory error, so it has no impact on the errors I currently get. I have now set a memory limit on all my containers and hope to see a difference. No more logs since 16:00. What does this mean?
casperse Posted February 26
Please, can anyone help me? I have installed the swapfile plugin and set a memory limit of 1G on all my containers (if all containers obey the limit, I shouldn't see any more errors, right?). I have tried stopping all containers, and only some of them, but I still get the memory error. Is there any way to find out what is causing this? Would syslog help? It happened again today: the systems are still running, but the error results in Unraid killing random processes.
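Syslog can indeed help here: the kernel logs every OOM kill, including which process (and cgroup) it chose, so grepping the log around the event usually names the culprit. A sketch, with the log path as an assumption (Unraid keeps a local syslog at /var/log/syslog):

```shell
# Sketch: pull recent OOM-killer lines out of a log file; the kernel's
# "Out of memory: Killed process ..." entries name the victim process.
find_oom_events() {
    grep -iE 'oom-killer|out of memory|killed process' "$1" | tail -n 20
}

# On the server:
#   find_oom_events /var/log/syslog
```

`dmesg` works as a source too, but it only covers the current boot, so the persistent syslog is the safer place to look after a reboot.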
JorgeB Posted February 26
See if you can find out which container that is; alternatively, boot in safe mode and stop all containers/VMs, then start enabling them one by one and retest, to see if you can find the culprit.
casperse Posted February 26
My problem is that it occurs every 10-12 hours, so with the number of containers I have this would be very hard to do. Update: so this could be caused by a single container hitting its own memory limit? Any way to identify the container from the error message?
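Since the event only recurs every 10-12 hours, a periodic snapshot of per-container memory use avoids babysitting the server: log it once a minute and, after the next OOM, check which container was ballooning just beforehand. A sketch (the log path and interval are arbitrary choices; on Unraid this could be scheduled via the User Scripts plugin):

```shell
# Sketch: append one timestamped snapshot of per-container memory use
# to a log file; run it on a schedule (e.g. every minute).
log_mem_snapshot() {
    local out="$1"
    date '+%F %T' >> "$out"
    docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}' >> "$out" 2>/dev/null || true
}

# On the server:
#   log_mem_snapshot /tmp/docker-mem.log
```

Writing the log to array or cache storage rather than /tmp would let it survive a reboot forced by the OOM itself.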
JorgeB Posted February 26 Share Posted February 26 45 minutes ago, casperse said: So this could be caused by a single docker with a memory limit that breaks it? It's possible. 46 minutes ago, casperse said: Anyway to identify the docker, from the error message? Not that I know of. 46 minutes ago, casperse said: My problem is that i occurs every 10-12 hours so with the amount of dockers I have this would be very hard to do. It can be a pain, but it's still the best suggestion I have, extremely unlike this is un Unraid issue. Quote Link to comment