TheDon

Members
  • Posts: 27
  • Joined
  • Last visited

TheDon's Achievements

Noob (1/14)

Reputation: 3
Community Answers: 1

  1. So my Unraid server was acting similar to what was described here, so I wanted to post a solution in case those watching this post (as I did for weeks) don't see their problems solved with the recent Docker networking changes in 6.12.4. A symptom I missed from this whole ordeal was that the Unraid GUI was crashing (Docker containers all appeared fine, but Deluge was ALWAYS not functioning). If this sounds similar to you, check out the post below. TL;DR for below: the binhex torrent container has a special release tag (used like :latest) to help prevent this issue from happening: ":libtorrentv1" (see the tag sketch after this list).
  2. @JorgeB Thanks again for the share, I have been stable ever since! Marked your response as the solution.
  3. I was linked to this, and it appears to have solved my problem. TL;DR: I had to switch the version of a Docker container I run to a special release tag to prevent an issue with libtorrent 2.x.
  4. @JorgeB Thanks for linking me to this one, I had not yet made the connection to Deluge. I have also found the first command that lets me get my web GUI back without having to force an unclean shutdown, "/etc/rc.d/rc.docker restart"; that on its own is a huge relief (see the restart sketch after this list). I have updated the repo for my binhex container and will be keeping an eye on stability for the next few days. Thanks for the forum link, this seems promising!
  5. Man, it really felt like it was finally solved, but it eventually crashed out Saturday (so I grabbed the first diags below), and then between then and Sunday it did it again (second diag). I am having to SSH into the server, run diagnostics, and then send the reboot command twice in order to get the box to restart so I can get the GUI going again (see the SSH sketch after this list). Any advice on what to try next? oxygen-diagnostics-20230909-2153.zip oxygen-diagnostics-20230910-1711.zip
  6. After: Post reboot: @ljm42 What do the call traces look like in the logs? (See the log-check sketch after this list.)
  7. @ljm42 These are my current settings: I will enable bridging and make sure the system is set to ipvlan.
  8. @ljm42 Good idea on the other thread, I don't want to bog down the release thread. I have been following the other thread (the one where you notified everyone of 6.12.4-rcXX) for the Docker stuff, and this entire time I was thinking that was my issue (especially since I was getting the nginx 500 if I let the page try to load for long enough).
  9. I am also still seeing crashes, seemingly more frequently than before. IPv4 only, ipvlan since 6.12.3, bridging and bonding set to No. oxygen-diagnostics-20230903-2316.zip oxygen-diagnostics-20230904-2039.zip
  10. I am still seeing my UI crash, even with the update to 6.12.4. I have been set to ipvlan since 6.12.3, with bridging and bonding set to No. Any ideas what else might be causing this, since Lime seems to have resolved the Docker networking issues (originally what I thought was causing the issue)? oxygen-diagnostics-20230903-2316.zip oxygen-diagnostics-20230904-2039.zip
  11. Actually @Mainfrezzer I have achieved 2 days, 10 hours of stability with safe mode on (v6.11.5), so I guess this might mean my issue is plugin related?
  12. I have seen a lot of people mention IPv6, but I started out seeing this issue with IPv6 already disabled, and it has been disabled for a long time. When I found this thread and read through it, I went to check whether I had IPv6 enabled, but it was completely disabled; I think I did that pretty early on (see the IPv6 check sketch after this list). I am running 6.11.5 now and still can't keep the GUI running for more than 12-24 hours. I am attempting safe mode right now to see if it helps at all. oxygen-diagnostics-20230810-1137.zip syslog-10.0.0.8-20230810.log
  13. The problem still exists in 6.11.5 for me; I am really not sure what else I can do at this point. On the next GUI crash I can try restarting nginx; that didn't work for me in 6.12.x, but maybe it will now? oxygen-diagnostics-20230804-2248.zip
  14. I have been performing downgrades:
      [6.12.3] - "current stable", this is where I started. I don't think my issues started on this version, but it's when I started dealing with it.
      [6.12.2] - Issue was still present.
      [6.12.0] - Issue seemed to take longer to present itself; I got to over 19 hours of runtime.
      [6.11.5] - This broke all of my Docker containers from starting automatically; I had to change the network to something else and then back to the correct setting to launch all my Docker containers. Just booted into 6.11.5, so I haven't been able to give it a 24-hour stability test.
  15. root@oxygen:~# ps -aux | grep nginx
      root      1104  0.0  0.0   7928  5016 ?      Ss   Jul29   0:00 nginx: master process /usr/sbin/nginx
      nobody    1129  0.0  0.0   8520  4852 ?      S    Jul29   0:00 nginx: worker process
      nobody    1130  0.0  0.0   8520  4852 ?      S    Jul29   0:00 nginx: worker process
      nobody    1131  0.0  0.0   8520  4776 ?      S    Jul29   0:00 nginx: worker process
      nobody    1132  0.0  0.0   8520  4780 ?      S    Jul29   0:00 nginx: worker process
      root      9633  0.0  0.0 147024  4016 ?      Ss   Jul29   0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
      root      9634  0.0  0.0 148236  8096 ?      S    Jul29   0:13 nginx: worker process
      root     13064  0.0  0.0   4052  2224 pts/0  S+   13:16   0:00 grep nginx
      root     15478  0.0  0.0    212    20 ?      S    Jul29   0:00 s6-supervise svc-nginx
      root     15826  0.0  0.0   7812  3932 ?      Ss   Jul29   0:00 nginx: master process /usr/sbin/nginx
      nobody   15932  0.0  0.0   8160  2988 ?      S    Jul29   0:00 nginx: worker process
      nobody   15933  0.0  0.0   8160  2160 ?      S    Jul29   0:00 nginx: worker process
      nobody   15934  0.0  0.0   8160  2984 ?      S    Jul29   0:00 nginx: worker process
      nobody   15935  0.0  0.0   8160  2984 ?      S    Jul29   0:00 nginx: worker process
      nobody   21461  0.0  0.0  48488 11428 pts/0  Ss+  Jul29   0:00 nginx: master process nginx
      nobody   26207  0.0  0.0  49152  9440 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26208  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26209  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26210  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26211  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26212  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26214  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26215  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
      nobody   26216  0.0  0.0  47956  6604 pts/0  S+   12:20   0:00 nginx: cache manager process

      @srirams From the list of many nginx processes that I have going, do I need to kill all that say master (1104, 9633, 15826, 19862, 21461)?

      /etc/rc.d/rc.nginx stop

      ^ just hangs on "Shutdown Nginx gracefully..." (See the cleanup sketch after this list.)
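
Tag sketch: a minimal example of the ":libtorrentv1" tag mentioned above, assuming the container in question is binhex/arch-delugevpn (substitute whatever image your own Unraid template actually references).

    # Pull the libtorrent 1.x build instead of :latest. In the Unraid template
    # this is the same as editing the "Repository" field from ...:latest to
    # ...:libtorrentv1 and applying the change.
    docker pull binhex/arch-delugevpn:libtorrentv1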
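
Restart sketch: the "/etc/rc.d/rc.docker restart" recovery mentioned above, run from an SSH session. This assumes the stock rc.docker script and is not an official procedure.

    # Restart the Docker service; on my box this brought the web GUI back
    # without forcing an unclean shutdown.
    /etc/rc.d/rc.docker restart

    # Confirm the containers came back up.
    docker ps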
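
SSH sketch: the recovery routine described above. The address is just the one from my syslog filename, and the doubled reboot is simply what it took on my hardware.

    # From another machine, open a shell on the server.
    ssh root@10.0.0.8

    # On the server: grab a diagnostics zip first (it is saved to the flash
    # drive, normally under /boot/logs), then reboot. On my box the reboot
    # command had to be sent twice before the server actually went down.
    diagnostics
    reboot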
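
Log-check sketch: for the call-trace question above, one way to look through the logs, assuming the standard /var/log/syslog location.

    # Print any kernel call traces with 20 lines of context after each hit;
    # in this thread the macvlan-related traces were the interesting ones.
    grep -i -A 20 "call trace" /var/log/syslog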
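
IPv6 check sketch: two generic Linux commands (not Unraid-specific settings) that confirm IPv6 really is off at the kernel level.

    # 1 means IPv6 is disabled for all interfaces, 0 means it is still on.
    cat /proc/sys/net/ipv6/conf/all/disable_ipv6

    # Should print nothing if no interface has an IPv6 address assigned.
    ip -6 addr show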
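
Cleanup sketch: a possible path for the hung "Shutdown Nginx gracefully..." case above. Killing the duplicate master processes by PID is a guess on my part, not an official procedure, and the PIDs are the ones from my ps listing; they will differ on any other box.

    # Try the supported stop first; in my case this hung.
    /etc/rc.d/rc.nginx stop

    # If it hangs, send TERM to the duplicate master processes by PID.
    kill 1104 9633 15826 21461

    # Then bring the web server back up.
    /etc/rc.d/rc.nginx start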