jbuszkie

Members
  • Posts: 651

Converted

  • Gender: Male
  • Location: Westminster, MA


jbuszkie's Achievements

Enthusiast (6/14)

Reputation: 7

  1. When I had this (I believe), I stopped my dockers one by one until the message stopped coming.. and for me it was just the first one... A reboot will work as well...
  2. Sorry.. but then it's beyond my help.. I'm nowhere near the expert here!
  3. I don't remember the details of the problem or the fix... and this might be useless.. but could you try a full reboot? Not just restarting the dockers?
  4. Did you undo any "fixes" you implemented before the upgrade?
  5. @limetech Is there any way you guys can start looking at this? More and more folks are seeing this. For the rest of us... maybe we should start listing our active dockers to see if one of them is triggering the bug. Maybe there is one common to us that's the cause. If we can give limetech a place to start so they can trigger this condition more often, it would probably help them. For me I have:
     Home_assist bunch
     ESPHome
     stuckless-sagetv-server-java16
     CrashPlanPRO
     crazifuzzy-opendct
     Grafana
     Influxdb
     MQTT
     I have no running VMs. My last two fails did not have any web terminals open at all. I may have had a PuTTY terminal, but I don't think that would cause it? I do have the dashboard open on several machines (randomly powered on) at the same time. Jim
  6. You posted the space available for your cache and others.. but how much space do you have left in /var? Was it full? The first thing I do is delete syslog.1 to make some space so it can write the log, then I restart nginx, then I tail the syslog to see if the writes stop. My syslog.1 is usually huge, so deleting it frees up a lot of space for the syslog. (A rough sketch of this sequence is at the end of this list.) The time before last, I still had some of those errors in the syslog after the restart, so I was going to stop my dockers one by one and see if it stopped. And it did, with my first one. Two days ago when this happened to me, I didn't have to do that; the restart was all I needed...
  7. Strange that /etc/rc.d/rc.nginx restart didn't fix it. I assume you made room in /var/log for more messages to come through? After the restart, did you still have stuff spewing into the log file? I do recall a time where I had to do a full reboot to fix it completely. Jim
  8. This happened to me again last night. I've been really good about not leaving web terminal windows open, and I didn't have any open last night or for a while. What I did have different was that I had set my grafana window to auto-refresh every 10s. I wonder if that had anything to do with this problem? Also, the restart command didn't completely fix it this time. I was still getting a bunch of these...
     Aug 23 09:09:49 Tower nginx: 2021/08/23 09:09:49 [alert] 25382#25382: worker process 1014 exited on signal 6
     Aug 23 09:09:50 Tower nginx: 2021/08/23 09:09:50 [alert] 25382#25382: worker process 1044 exited on signal 6
     Aug 23 09:09:51 Tower nginx: 2021/08/23 09:09:51 [alert] 25382#25382: worker process 1202 exited on signal 6
     Aug 23 09:09:53 Tower nginx: 2021/08/23 09:09:53 [alert] 25382#25382: worker process 1243 exited on signal 6
     Aug 23 09:09:54 Tower nginx: 2021/08/23 09:09:54 [alert] 25382#25382: worker process 1275 exited on signal 6
     Aug 23 09:09:55 Tower nginx: 2021/08/23 09:09:55 [alert] 25382#25382: worker process 1311 exited on signal 6
     Aug 23 09:09:56 Tower nginx: 2021/08/23 09:09:56 [alert] 25382#25382: worker process 1342 exited on signal 6
     Aug 23 09:09:57 Tower nginx: 2021/08/23 09:09:57 [alert] 25382#25382: worker process 1390 exited on signal 6
     Aug 23 09:09:58 Tower nginx: 2021/08/23 09:09:58 [alert] 25382#25382: worker process 1424 exited on signal 6
     Aug 23 09:09:59 Tower nginx: 2021/08/23 09:09:59 [alert] 25382#25382: worker process 1455 exited on signal 6
     I started to kill my dockers, and after I stopped the HASSIO group, that message stopped (the stop-one-at-a-time approach is sketched at the end of this list). I restarted the docker group and it hasn't come back. I really wish we could get to the bottom of this!! FYI, I'm now on 6.9.2.
  9. Yeah.. It works fine now! Can't wait for it to be able to send encrypted flash images! I have to upgrade my flash drive though! It's only one Gig.... And maybe 11 years old! Might be time anyway!! lol
  10. Chrome. I tried that with the same result. Sure.. I looked in the logs and saw nothing. This is the only spot where I get this. If I change "Use SSL/TLS" to No, that change happens just fine. It's only if I muck with the "My servers" section. Not sure what the Update DNS is supposed to do? (Generate a new certificate?) But that never seems to come back either. tower-diagnostics-20210810-1401.zip
  11. Yeah.. I figured that out. I was able to set up a dnsmasq option to allow unraid.net in my Tomato config (the likely shape of that line is sketched at the end of this list). Thanks!
  12. Update.. I do get the spinning circle every time I try to apply a change in the My servers area. If I disable remote access, I get the spinning circle, but when I go to something else or refresh, I see the new setting. Same thing if I enable it: I get the spinning "applying" that never goes away, but if I refresh, it sticks. However... if I try to change the port, it never sticks; I always see the 443 port. Double however... if I actually try the remote access, I see the new port being used, and it works fine even though the webpage settings still say port 443. I do have the flash backup disabled, if that has anything to do with this... Bug? Something with my setup?
  13. Trying to update the WAN port on My servers, and I just get the spinning circle on the "applying" button. If I go to another page and then come back, I still see the default port of 443. Second question.. do I have to keep "prevent DNS rebind attacks" unchecked on my router once I've provisioned?
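A rough sketch of the log-recovery sequence from posts 6 and 7, assuming the stock Unraid paths (/var/log/syslog and its rotated copy /var/log/syslog.1) and the rc script quoted in post 7; adjust the paths if your box differs:

     # remove the rotated log to free space in /var/log so the syslog can be written again
     rm /var/log/syslog.1
     # restart nginx via the stock rc script
     /etc/rc.d/rc.nginx restart
     # watch whether the "worker process ... exited on signal 6" messages keep coming
     tail -f /var/log/syslog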
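And a minimal sketch of the stop-one-docker-at-a-time hunt from posts 1, 6, and 8; the container names are whatever docker ps reports on your own box, and <container-name> below is just a placeholder:

     # list the running containers so you know the candidates
     docker ps --format '{{.Names}}'
     # stop one suspect at a time (<container-name> is a placeholder)...
     docker stop <container-name>
     # ...and check after each stop whether the alerts have stopped
     tail -n 20 /var/log/syslog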
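For the rebind question in posts 11 and 13: post 11 doesn't name the exact dnsmasq option, but rebind-domain-ok is the standard dnsmasq directive for exempting a domain from rebind protection, so a line like the following in Tomato's custom dnsmasq configuration box is the likely shape (whether your router exposes that box is an assumption):

     # allow answers for unraid.net even with DNS rebind protection enabled
     rebind-domain-ok=/unraid.net/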