silasfelinus Posted October 6, 2022

Unraid 6.11. I'm getting out-of-memory issues if I run all my apps for more than 1-2 days. The server was last up for 2 days, 18 hours, and I woke up to the "Out of memory errors detected on your server" error and instructions to post to this forum [definitely not the first time, but this time I'm following the advice; apparently it takes me a while to ask for help]. 1.5 days ago, I had every app running and my CPU load spiked to 100%; it settled once I killed Komga (a go-to troubleshooting step, unfortunately; I love the app, but it clearly has a problem independent of this one). I kept Komga off, and the CPU load dropped to normal operating levels and stayed there, as far as I know, until sometime last night. Diagnostics attached; please let me know if I can offer any more info. Thanks!

alexandria-diagnostics-20221006-0635.zip
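A quick way to confirm what the notification is reporting is to search the syslog for kernel OOM-killer entries, which name the process that was killed. A minimal sketch; `/var/log/syslog` is the stock Unraid location, so adjust the path if yours differs:

```shell
# Look for kernel OOM-killer activity and which process it chose to kill.
# The path is the stock Unraid syslog location; adjust if yours differs.
LOG="/var/log/syslog"
if [ -f "$LOG" ]; then
    # Matching lines look like "Out of memory: Killed process 1234 (java)"
    # or "<name> invoked oom-killer"; show the 20 most recent hits.
    grep -iE "out of memory|oom-killer" "$LOG" | tail -n 20
fi
```

The killed process name is the fastest pointer to which container (or plugin) was ballooning when the kernel ran out of headroom.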
dlandon Posted October 6, 2022

Remove NerdPack and reboot; it's not compatible with 6.11. It looks like you are installing the mcelog package, but that is already included in Unraid. Your best bet would be to boot in safe mode, install Community Applications (CA) manually, and start adding plugins and Docker containers back one at a time to see if one of them is causing the problem.
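While adding containers back one at a time, a per-container memory snapshot makes the culprit visible before the OOM killer fires. A minimal sketch, assuming the Docker CLI is on the PATH (it is on stock Unraid); the log path is only an illustration:

```shell
# Take one sample of per-container memory use; --no-stream prints a single
# snapshot instead of the live updating view.
snapshot_container_memory() {
    docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'
}

# Example use: append a snapshot to a log between each re-add, so the first
# container whose usage keeps climbing stands out.
# snapshot_container_memory >> /boot/logs/container-mem.log
```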
silasfelinus Posted October 6, 2022 (Author)

I have removed NerdPack, and disabled deep scan and anything that said it could be resource-intensive on my Komga libraries. I'm unclear on what you meant by the mcelog package; was it something in NerdPack?

> Your best bet would be to boot in safe mode, install ca manually and start adding back plugins and docker containers to see if one of them is causing the problem.

I really wish I could create a more structured environment to test, but the problems take even longer to appear when fewer apps are running, and running a hobbled server for the length of time it would take to test feels untenable. Thank you for the wise advice I may one day rue ignoring. At this point I have the network running and everything seems stable. I'm going to keep monitoring, watch for the next spike, and see what I see in the logs.
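For the monitoring step, logging overall headroom on a schedule turns a sudden spike into a visible trend. A sketch using the kernel's own accounting from `/proc/meminfo` (Linux-only; how you schedule it, e.g. via cron or the User Scripts plugin, is up to you):

```shell
# MemAvailable is the kernel's estimate of memory still usable by new
# workloads, in kB, accounting for reclaimable caches.
mem_available_kb() {
    awk '/^MemAvailable:/ {print $2}' /proc/meminfo
}

# One timestamped sample; run periodically and append to a file to build
# a trend line you can check after the next spike.
echo "$(date '+%F %T') $(mem_available_kb) kB available"
```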
dlandon Posted October 6, 2022 (Solution)

10 minutes ago, silasfelinus said:
> I'm unclear on what you meant by the mcelog package; was it something in NerdPack?

Yes.

> I really wish I could create a more structured environment to test, but the problems take even longer to appear when fewer apps are running, and running a hobbled server for the length of time it would take to test feels untenable. Thank you for the wise advice I may one day rue ignoring.

Add more memory.
silasfelinus Posted October 8, 2022 (Author)

On 10/6/2022 at 1:46 PM, dlandon said:
> Yes. Add more memory.

Excellent advice. I'll be doing so right after I print this out and show it to my wife. Just kidding (mostly). I hadn't actually thought my 32 GB of DDR4 could be a bottleneck, but it makes sense that my 50+ containers could be overtaxing it. That's a remarkably simple solution. I'll throttle down the containers as my default, and report back if the problems persist after I scrape together the upgrade. I'd honestly missed your last line at first, not realizing it probably held the fix. Thanks for the help.
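One way to "throttle down" without turning containers off is to cap each container's memory, so a leaky app hits its own ceiling instead of the host's. In Unraid this goes in a container's Extra Parameters field (advanced view of the container edit page). A config fragment; the 2g figure is purely illustrative:

```
--memory=2g --memory-swap=2g
```

`--memory` caps the container's RAM; setting `--memory-swap` to the same value disallows swap on top of that cap, so a runaway container gets OOM-killed on its own rather than dragging the whole server down.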