Everything posted by LilDrunkenSmurf

  1. Changing to IPv4 only seems to have resolved it: 4 days with no crash, and I leave the tab open on multiple machines constantly. Hilariously, it also fixed my Minecraft Docker containers failing to start (2 of 3), because the addons were failing to resolve.
  2. I'll try it out. Is this still an issue with `/var/log` filling up and nginx crashing as a result?
  3. Still happening. I SSH'd in, but `diagnostics` hangs via the CLI. Here's a diagnostics file captured after restarting nginx: smurf-raid-diagnostics-20230630-0719.zip
  4. It also seems to impact NFS mounts when I restart nginx, so I need to restart the server anyway, or NFS refuses all connections.
  5. Any updates? I'm basically rebooting this daily so that I don't have to constantly restart nginx.
  6. Is there a way to delay the plugin's startup on server boot? Currently I'm using the `custom -> apcupsd-ups` driver to pull from the apcupsd daemon, since it gives me more information, but because both start at the same time, NUT fails to start. This means I have to go into the plugin, stop it, and manually start it again before I get any data from it.
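For anyone trying the same setup, a minimal `ups.conf` entry for this driver might look like the sketch below. The UPS name, host, and port here are assumptions (3551 is apcupsd's default NIS port); adjust for your system.

```ini
# /etc/nut/ups.conf -- hypothetical entry; name/host/port are placeholders
[apc]
    driver = apcupsd-ups
    port = localhost:3551   # address of the running apcupsd NIS daemon
    desc = "APC UPS, data relayed from apcupsd"
```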
  7. So what was the NFS change that wasn't in the release notes?
  8. Just came to confirm that this is still happening; it's just longer between crashes. I should also note that the `diagnostics` CLI command hangs over SSH after nginx has crashed, so I can't get a reliable one unless I manually restart nginx and then pull it. I'm not sure if that would have the same information or not. It feels odd that this thread isn't really getting any traction on a fix.
  9. I also had an issue with this. I upgraded to 6.12.1 and my NFS mounts were no longer available. This is the diagnostics file I grabbed before rolling back. I don't have any custom NFS settings, other than running `exportfs -ra` on a schedule because of stale file handle issues when the mover runs. smurf-raid-diagnostics-20230621-0816.zip
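For reference, the scheduled re-export is just a cron entry along these lines; the hourly schedule and binary path are assumptions (yours may differ):

```
# Hypothetical root crontab entry: re-export all NFS shares hourly to clear
# stale file handles left behind after the mover runs
0 * * * * /usr/sbin/exportfs -ra
```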
  10. Ironic that you say that. I did update my cloudflared today, and my WebUI is still up; however, I haven't rebooted my server since manually killing nginx and restarting it via the CLI. I can try to reboot it and keep an eye on it. Mine typically dies after ~6 hours.
  11. That link doesn't have a solution; in fact, they came back saying it doesn't work. You're basically asking us to lose functionality (cloudflared) because Unraid doesn't properly rotate logs and effectively crashes nginx?
  12. Interesting. I'm hosting cloudflared in k8s, and I have k8s doing reverse proxy for my services (including the WebUI via split DNS).
  13. I'm having a similar issue. My server is available for ~5-6 hours and then nginx crashes. Killing nginx and restarting it restores access to the WebGUI. My server is behind a reverse proxy, so I wonder if the calls from nginx-ingress are what's overwhelming it? But this wasn't an issue in 6.11.5; it only started after the upgrade to 6.12. I get a few hundred of these:

     Jun 18 18:45:31 <SERVER> nginx: 2023/06/18 18:45:31 [alert] 15440#15440: worker process 18572 exited on signal 6
     Jun 18 18:45:31 <SERVER> nginx: 2023/06/18 18:45:31 [alert] 15440#15440: shared memory zone "memstore" was locked by 18572
  14. Apparently, the `apcupsd-ups` driver pulls from the apcupsd daemon, so I can run apcupsd and pull the data in, which is working. Works for me, for now.
  15. I just installed your plugin (which is how I found this thread). I'm trying to use the `apcupsd-ups` driver, but I guess it might not be installed. I can't find where the NUT version would be listed.
  16. I have a similar issue. When I run apcupsd, I get all the information, including load, nominal power, etc. With NUT, I get a red W for nominal and no data for load or load%. I'm trying to run NUT so I can scrape it with Prometheus, since I'm already running it on an RPi hooked up to another UPS.
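For context, the scrape side is just a standard Prometheus job pointed at a NUT exporter. This is a rough sketch only: the exporter address, upsd address, metrics path, and `target` parameter follow one common community NUT exporter's conventions and are all assumptions here.

```yaml
# Hypothetical Prometheus scrape job for a NUT exporter; all addresses
# and the /nut path / target parameter are placeholders for my setup
scrape_configs:
  - job_name: "nut"
    metrics_path: /nut
    params:
      target: ["192.168.1.10:3493"]   # upsd on the Unraid box
    static_configs:
      - targets: ["nut-exporter:9995"] # the exporter itself
```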