Drewster727 Posted April 15, 2017

When running the mover, I noticed it will sometimes (usually) freeze the whole system: the web UI won't load and I can't SSH in. I've run into this before and it has prevented me from collecting diagnostics, so this time I ran "tail -f /var/log/syslog" from a remote box on the same network and wrote the output to a file. Here's the tail of the log: putty.log

It starts at about 17:26:24, right after the mover finishes and cache_dirs runs; coincidentally, that's when it all hangs.

Apr 15 17:26:24 Tower kernel: cache_dirs: page allocation stalls for 54081ms, order:0, mode:0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK)

Now, it's possible the tail I was running (from another machine) didn't capture the line that shows what actually caused the hang, but the timing lines up with the cache_dirs (and related) entries. I may need to tail to a file on the unRAID box itself. Any ideas? Having to hard-reboot the box whenever this happens is super annoying. Even with a keyboard and monitor attached locally afterward, the console is unresponsive.
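The remote-capture trick above can be sketched as follows. The `ssh root@tower` source is a hypothetical hostname from this setup; in the sketch a `printf` stands in for the remote stream so the snippet runs anywhere:

```shell
# Stream the server's syslog from a second machine, appending to a local
# file so the last lines before a hard lockup survive on this box.
# In practice the source would be:  ssh root@tower 'tail -f /var/log/syslog'
# (hostname is an assumption); a printf stands in here so the sketch runs.
printf 'Apr 15 17:26:24 Tower kernel: cache_dirs: page allocation stalls\n' \
  | tee -a captured-syslog.log
```

`tee -a` appends, so repeated capture sessions accumulate in one file instead of overwriting earlier evidence.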
JorgeB Posted April 16, 2017

You're getting an OOM error; see if this helps:
Drewster727 Posted April 16, 2017

Thanks @johnnie.black, I'll give that a shot. It's strange: I have 16GB of RAM. The cache uses up most of that, but my understanding was that if the system needed that memory, it could reclaim it without issue.
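To see how much of that RAM is actually tied up in unflushed writes at any given moment, the kernel's counters in /proc/meminfo are worth watching while the mover runs (a generic Linux check, not unRAID-specific):

```shell
# Dirty     = pages written to the page cache but not yet flushed to disk.
# Writeback = pages currently being flushed.
# Large values here during a big move are what the vm.dirty_* sysctls cap.
grep -E '^(MemTotal|MemFree|Dirty|Writeback):' /proc/meminfo
```

Watching this alongside the mover makes it obvious whether dirty pages are piling up faster than the disks can absorb them.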
JorgeB Posted April 16, 2017

It should. From what I've read, it's a Linux problem on kernels 4.8 and 4.9, supposedly fixed in 4.10.
Drewster727 Posted April 16, 2017

I changed the following values:

vm.dirty_background_ratio: was 10, set to 5
vm.dirty_ratio: was 20, set to 10

Hopefully that helps... will report back with results. Thanks again @johnnie.black. Will be waiting anxiously for the next unRAID version with the newer kernel.
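For reference, changes like these can be applied at runtime with sysctl (needs root; both values are percentages of total RAM). Persisting them via /boot/config/go is an assumption based on the usual unRAID customization hook, since unRAID rebuilds its root filesystem on every boot:

```shell
# Start background flushing once 5% of RAM is dirty...
sysctl -w vm.dirty_background_ratio=5
# ...and throttle writing processes once dirty pages reach 10%, so the
# kernel never faces one enormous flush when the mover finishes.
sysctl -w vm.dirty_ratio=10

# To survive a reboot, append the same two lines to /boot/config/go
# (assumed location for startup tweaks; adjust to your setup).
```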
Drewster727 Posted April 19, 2017

@johnnie.black well, it crashed again when the mover ran, with the same out-of-memory error. Pushing those values down to 1 and 2.
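A side note not from this thread: on a 16GB box even 1% is roughly 160MB, so percentage units are coarse. The kernel also accepts absolute byte limits, which override the ratio settings when set, giving finer control:

```shell
# Absolute limits instead of percentages (example values, needs root):
sysctl -w vm.dirty_background_bytes=67108864   # begin flushing at 64 MiB dirty
sysctl -w vm.dirty_bytes=268435456             # throttle writers at 256 MiB dirty
```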
Drewster727 Posted April 19, 2017

Anyone know if this is normal behavior for the cache amount? It was using this much both before and after tweaking my 'dirty' cache settings... just curious.
Drewster727 Posted April 26, 2017

@johnnie.black hey man -- after adjusting those vm.dirty_ values down to 1 and 2, everything has been very stable. I have kicked off the mover several times over the past week with zero issues. Thanks again!