Tuftuf Posted April 1, 2018 (edited)
Any ideas how to figure out the cause of this spike in memory usage? nginx shouldn't be using close to 40%... (^^^ the last line is nginx.) Restart nginx and the memory usage drops. There's a previous thread related to this.
tower-diagnostics-20180401-2019.zip
BRiT Posted April 1, 2018
@bonienl @limetech @jonp More nginx-related memory issues.
Tuftuf Posted April 1, 2018
This has been happening since the upgrade to 6.5.0. If memory usage reaches close to 99% it freezes all Dockers, though VMs continue OK. Restarting nginx fixes it for a time, but I often lose access to the Docker/VM icons on the dashboard after restarting nginx. Not always, though.
limetech Posted April 1, 2018
30 minutes ago, BRiT said: @bonienl @limetech @jonp More nginx-related memory issues.
I don't see how we could possibly continue as a company without you, BRiT.
BRiT Posted April 1, 2018
2 minutes ago, limetech said: I don't see how we could possibly continue as a company without you, BRiT.
Happy Easter to you too.
limetech Posted April 1, 2018
37 minutes ago, Tuftuf said: This has been happening since the upgrade to 6.5.0. If memory usage reaches close to 99% it freezes all Dockers, though VMs continue OK. Restarting nginx fixes it for a time, but I often lose access to the Docker/VM icons on the dashboard after restarting nginx. Not always, though.
In your current config, are you able to run in 'safe mode' (to prevent plugins from installing)?
Tuftuf Posted April 1, 2018
4 minutes ago, limetech said: In your current config, are you able to run in 'safe mode' (to prevent plugins from installing)?
I could, but I'm not eager to try it. What did you have in mind? I was thinking of updating to the latest beta. It took 34 hours from my last reboot for the issue to occur; I can't sit in safe mode for that long.
limetech Posted April 1, 2018
7 minutes ago, Tuftuf said: I could, but I'm not eager to try it. What did you have in mind?
It's possible that a plugin is installing code that is down-rev or otherwise incompatible with the running OS. It's also possible a plugin is accessing nginx in a way we have not anticipated. We have several servers running and never see the nginx memory footprint grow in an unbounded manner, which means there is something running on your server that is different from our servers (obviously).
10 minutes ago, Tuftuf said: I was thinking of updating to the latest beta.
For unexplained issues it's always best to run the latest unRAID OS available, since that is what we are running.
11 minutes ago, Tuftuf said: It took 34 hours from my last reboot for the issue to occur; I can't sit in safe mode for that long.
Maybe you only have to run long enough to see whether memory is steadily increasing?
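If it helps, here's a minimal sketch of that kind of check, left running in the background or a screen session. The log path and one-minute interval are just examples, not anything built into unRAID:

#!/bin/bash
# Log the combined resident memory (RSS) of all nginx processes once a minute,
# so any unbounded growth shows up without having to watch the dashboard.
LOG=/var/log/nginx-mem.log
while true; do
    # ps reports RSS in kB; sum it across the master and worker processes.
    total_kb=$(ps -C nginx -o rss= | awk '{sum += $1} END {print sum + 0}')
    echo "$(date '+%F %T') nginx total RSS: ${total_kb} kB" >> "$LOG"
    sleep 60
done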
John_M Posted April 1, 2018
44 minutes ago, limetech said: In your current config, are you able to run in 'safe mode' (to prevent plugins from installing)?
What is a typical ballpark figure for the amount of memory you'd expect an nginx worker process to use? I'm seeing around 700 MB, which is much less than @Tuftuf, but it still seems quite a lot to me. I'm not running any Dockers or VMs on the server I just checked. Possibly related, but possibly not: I'm seeing a lot of nginx-related error messages in my syslog about failing to allocate memory. I saw it on all three of my servers during the monthly parity check today. I don't understand what it's trying to do that fails, but I'll start up in safe mode, run another parity check and see if it continues to happen.
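For anyone wanting to compare numbers, one quick way to snapshot the per-process figures is something like the line below; the column choice is just a suggestion:

# Show each nginx process with its resident (RSS) and virtual (VSZ) memory, in kB
ps -C nginx -o pid,user,rss,vsz,args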
limetech Posted April 1, 2018
2 minutes ago, John_M said: I don't understand what it's trying to do that fails, but I'll start up in safe mode, run another parity check and see if it continues to happen.
Also I'd urge you to use the latest 6.5.1-rc prerelease.
John_M Posted April 1, 2018
4 minutes ago, limetech said: Also I'd urge you to use the latest 6.5.1-rc prerelease.
I don't want to hijack this thread in case the two aren't related, but I'll update to the RC, run a parity check in safe mode and report any findings in the other thread.
Tuftuf Posted April 1, 2018
In my previous thread it was around 6 GB of usage, but I generally run my system close to its memory limit. I've stopped a few things recently to avoid this issue, which is why I've been seeing larger memory usage numbers; it's grown to fill the gap. I will update to the latest prerelease later today. Currently the system is providing my internet connection while I rebuild something else, so I'm trying to avoid too much downtime at the moment.
Tuftuf Posted April 1, 2018
Just now, John_M said: I don't want to hijack this thread in case the two aren't related, but I'll update to the RC, run a parity check in safe mode and report any findings in the other thread.
I was running a parity check for 20 hours of the 32 hours of uptime.
John_M Posted April 1, 2018
2 minutes ago, Tuftuf said: I was running a parity check for 20 hours of the 32 hours of uptime.
Do you see any nginx-related errors in your syslog, like the ones I'm seeing?
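Something along these lines should surface them, assuming the live log is in the usual place at /var/log/syslog; the search terms are just a guess at what to match, not the exact error text:

# Pull out nginx lines that mention memory or allocation failures
grep -i nginx /var/log/syslog | grep -iE 'alloc|memory'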
BRiT Posted April 1, 2018
For additional reference, on 6.5.1-RC2 after 8 days of uptime and taken at 98% through a monthly parity check, with only 3 Dockers running (EggDrop, NzbGets and PyTivo):

top - 17:38:38 up 8 days, 8:42, 1 user, load average: 1.50, 1.44, 1.44
Tasks: 445 total, 2 running, 219 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 1.0 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st

  PID USER    PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
 5967 root    20   0   64164   3636     24 S   0.0  0.0  0:00.00 `- nginx
 5968 nobody  20   0   70912   7932   3664 S   0.3  0.0  3:08.11 `- nginx
 6015 root    20   0    9852   2872   2612 S   0.0  0.0  5:54.55 `- diskload
19360 root    20   0    4504    780    720 S   0.0  0.0  0:00.00 `- sleep
 6115 root    20   0  153508    532    112 S   0.0  0.0  0:00.00 `- shfs
 6128 root    20   0  365624  15308    612 S   0.7  0.0 81:54.38 `- shfs
 6237 root    20   0   26440  11456   2144 S   0.0  0.0 28:20.11 `- cache_dirs
19183 root    20   0    4504    732    668 S   0.0  0.0  0:00.00 `- sleep
 6676 root    20   0 2435848  51276  30932 S   0.3  0.0 23:43.29 `- dockerd
Tuftuf Posted April 1, 2018
2 minutes ago, John_M said: Do you see any nginx-related errors in your syslog, like the ones I'm seeing?
I just searched the syslogs in my older diags, but I don't see any. They are linked in the first post of the other thread. I did see some OOM/memory errors when this occurred a few days ago, but I believe they were from Docker attempting to write something. I can't seem to find them now.
John_M Posted April 2, 2018 (edited)
14 hours ago, limetech said: Also I'd urge you to use the latest 6.5.1-rc prerelease.
I still get nginx-related errors in Safe Mode. Details and diagnostics here.
UPDATE: The parity check completed successfully, but my syslog has many nginx-related errors. Latest diagnostics here.
DZMM Posted April 2, 2018
I've been having the same problem; it occurred with 6.5rc6 and now with 6.5.1rc3. My memory usage is creeping up slowly: it's currently at 90%, whereas normal is around 57-67%.
highlander-diagnostics-20180402-0848.zip
How do I safely restart nginx rather than rebooting my server?
Tuftuf Posted April 2, 2018 (edited)
To restart nginx I've been having to use both of these:

/etc/rc.d/rc.nginx restart
/etc/rc.d/rc.nginx stop

Quite often checking its status will show it's still running, so make sure it's really closed; it doesn't want to close gracefully:

/etc/rc.d/rc.nginx status

I've been running 6.5.1rc3 for around 12 hours now, but my system is still under 50%. I think I need to be starting new Dockers, checking the app store, using Docker Hub etc. before I see the memory grow. Will see how the next day or two goes.
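Put together, a minimal sketch of that sequence looks like this; the pgrep/pkill fallback is just a belt-and-braces addition for when the rc script leaves processes behind:

#!/bin/bash
# Stop nginx, make sure it is really gone, then bring it back up.
/etc/rc.d/rc.nginx stop
sleep 5
# rc.nginx status sometimes still reports it running; force-kill any leftovers.
if pgrep -x nginx > /dev/null; then
    pkill -x nginx
    sleep 2
fi
# With nothing left running, 'restart' effectively performs a clean start.
/etc/rc.d/rc.nginx restart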
DZMM Posted April 2, 2018 (edited)
3 hours ago, Tuftuf said: To restart nginx I've been having to use both /etc/rc.d/rc.nginx restart and /etc/rc.d/rc.nginx stop. Quite often checking its status will show it's still running, so make sure it's really closed; it doesn't want to close gracefully: /etc/rc.d/rc.nginx status. I've been running 6.5.1rc3 for around 12 hours now, but my system is still under 50%. I think I need to be starting new Dockers, checking the app store, using Docker Hub etc. before I see the memory grow. Will see how the next day or two goes.
Thanks, /etc/rc.d/rc.nginx restart worked and took me from 98% to 58%!!!! I'm going to add that to an overnight script to stop things getting out of control.
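For anyone wanting to do the same, one way is a nightly cron entry like the one below; the 04:00 time and log path are arbitrary examples, and a scheduled user script would work just as well:

# Root crontab entry (crontab -e): restart nginx every night at 04:00
0 4 * * * /etc/rc.d/rc.nginx restart >> /var/log/nginx-nightly-restart.log 2>&1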
limetech Posted April 7, 2018
For those affected by this issue: please try out the next release, 6.5.1-rc4. If the issue persists, please post a report in Prerelease Support.