Everything posted by TheDon

  1. So my Unraid server was acting similar to what was described here, so I wanted to post a solution in case those watching this post (as I did for weeks) don't see their problems solved by the recent Docker networking changes in 6.12.4. A symptom I missed from this whole ordeal was that the Unraid GUI was crashing (the Docker containers all appeared fine, but Deluge was ALWAYS not functioning). If this sounds similar to you, check out the post below. TL;DR: the binhex torrent container has a special release tag (like :latest) to help prevent this issue from happening: ":libtorrentv1"
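     In case it's not obvious where that tag goes, a rough sketch, assuming your torrent container is binhex/arch-delugevpn (swap in whichever binhex torrent image you actually run):

         # In the Unraid Docker tab, edit the container and change the Repository field to:
         #   binhex/arch-delugevpn:libtorrentv1
         # or verify the tag pulls cleanly from a shell first:
         docker pull binhex/arch-delugevpn:libtorrentv1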
  2. @JorgeB Thanks again for the share, I have been stable ever since! Marked your response as the solution.
  3. I was linked to this, and it appears to have solved my problem. TL;DR: I had to switch the version of a Docker container I run to a special release tag, to prevent an issue with libtorrent 2.x.
  4. @JorgeB Thanks for linking me to this one, I had not yet made the connection to Deluge. I have also found the first command that lets me get my webgui back without having to force an unclean shutdown, "/etc/rc.d/rc.docker restart"; that on its own is a huge relief. I have updated the repo for my binhex container and will be keeping an eye on stability for the next few days. Thanks for the forum link, this seems promising!
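     For anyone else stuck at this point, the recovery sequence looks roughly like this (just a sketch; rc.docker is the stock Unraid rc script from the command above):

         # from an SSH session while the webgui is unresponsive:
         /etc/rc.d/rc.docker restart
         # then confirm the containers came back up:
         docker ps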
  5. Man, it really felt like it was finally solved, but it eventually crashed out Saturday (so I grabbed the first diags below), and then between then and Sunday it did it again (second diag). I am having to SSH into the server, run diagnostics, and then send the reboot command twice in order to get the box to restart (so I can get the GUI going again). Any advice on what to try next? oxygen-diagnostics-20230909-2153.zip oxygen-diagnostics-20230910-1711.zip
  6. After: (screenshot) Post reboot: (screenshot) @ljm42 What do the call traces look like in the logs?
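     For anyone searching later, a standard way to pull call traces out of the log (assuming the usual /var/log/syslog path) is:

         # print each kernel call trace plus the 20 lines that follow it
         grep -iA 20 "call trace" /var/log/syslog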
  7. @ljm42 These are my current settings: (screenshot) I will enable bridging and make sure the system is set to ipvlan.
  8. @ljm42 Good idea on the other thread, don't want to bog down the release thread. I have been following the other thread (the one where you notified everyone of 6.12.4-rcXX) for the Docker stuff, and this entire time I was thinking this was my issue (especially since I was getting the nginx 500 if I let the page try to load for long enough).
  9. I am also still seeing crashes, seemingly more frequently than before. IPv4 only, ipvlan since 6.12.3, bridging and bonding set to No. oxygen-diagnostics-20230903-2316.zip oxygen-diagnostics-20230904-2039.zip
  10. I am still seeing my UI crash, even with the update to 6.12.4. I have been set to ipvlan since 6.12.3, with bridging and bonding set to No. Any ideas what else might be causing this, since Lime seems to have resolved the Docker networking issues (originally what I thought was causing the issue)? oxygen-diagnostics-20230903-2316.zip oxygen-diagnostics-20230904-2039.zip
  11. Actually @Mainfrezzer, I have achieved 2 days, 10 hours of stability with safe mode on (v6.11.5), so I guess this might mean my issue is plugin-related?
  12. I have seen a lot of people mention IPv6, but I started out seeing this issue with IPv6 already disabled, and it has been disabled for a long time. When I found this thread and read through it, I went to check whether I had IPv6 enabled, but it was completely disabled; I think I did that pretty early on. I am running 6.11.5 now and still can't keep the GUI running for more than 12-24 hours. I am attempting safe mode right now to see if it helps at all. oxygen-diagnostics-20230810-1137.zip syslog-10.0.0.8-20230810.log
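     For anyone double-checking the same thing, these are the stock Linux commands to confirm IPv6 is really off (nothing Unraid-specific assumed):

         # 1 means IPv6 is disabled at the kernel level
         sysctl net.ipv6.conf.all.disable_ipv6
         # should list no addresses if IPv6 is fully off
         ip -6 addr show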
  13. The problem still exists in 6.11.5 for me; I'm really not sure what else I can do at this point. Next GUI crash I can try restarting nginx; that didn't work for me in 6.12.x, but maybe now? oxygen-diagnostics-20230804-2248.zip
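     For reference, this is what I plan to try after the next crash (the rc script path is the stock one on my box):

         /etc/rc.d/rc.nginx restart
         # afterwards, check that only one master process is left
         ps aux | grep '[n]ginx'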
  14. I have been performing downgrades:
     [6.12.3] - "current stable"; this is where I started. I don't think my issues started on this version, but it's when I started dealing with it.
     [6.12.2] - Issue was still present.
     [6.12.0] - Issue seemed to take longer to present itself; I got to over 19 hours of runtime.
     [6.11.5] - This broke all of my Docker containers' automatic start; I had to change the network to something else and then back to the correct setting to launch all my Docker containers. Just booted into 6.11.5, so I haven't been able to give it a 24-hour stability test.
  15. root@oxygen:~# ps -aux | grep nginx
     root      1104  0.0  0.0   7928  5016 ?      Ss   Jul29   0:00 nginx: master process /usr/sbin/nginx
     nobody    1129  0.0  0.0   8520  4852 ?      S    Jul29   0:00 nginx: worker process
     nobody    1130  0.0  0.0   8520  4852 ?      S    Jul29   0:00 nginx: worker process
     nobody    1131  0.0  0.0   8520  4776 ?      S    Jul29   0:00 nginx: worker process
     nobody    1132  0.0  0.0   8520  4780 ?      S    Jul29   0:00 nginx: worker process
     root      9633  0.0  0.0 147024  4016 ?      Ss   Jul29   0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
     root      9634  0.0  0.0 148236  8096 ?      S    Jul29   0:13 nginx: worker process
     root     13064  0.0  0.0   4052  2224 pts/0  S+   13:16   0:00 grep nginx
     root     15478  0.0  0.0    212    20 ?      S    Jul29   0:00 s6-supervise svc-nginx
     root     15826  0.0  0.0   7812  3932 ?      Ss   Jul29   0:00 nginx: master process /usr/sbin/nginx
     nobody   15932  0.0  0.0   8160  2988 ?      S    Jul29   0:00 nginx: worker process
     nobody   15933  0.0  0.0   8160  2160 ?      S    Jul29   0:00 nginx: worker process
     nobody   15934  0.0  0.0   8160  2984 ?      S    Jul29   0:00 nginx: worker process
     nobody   15935  0.0  0.0   8160  2984 ?      S    Jul29   0:00 nginx: worker process
     nobody   21461  0.0  0.0  48488 11428 pts/0  Ss+  Jul29   0:00 nginx: master process nginx
     nobody   26207  0.0  0.0  49152  9440 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26208  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26209  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26210  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26211  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26212  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26214  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26215  0.0  0.0  48724  6464 pts/0  S+   12:20   0:00 nginx: worker process
     nobody   26216  0.0  0.0  47956  6604 pts/0  S+   12:20   0:00 nginx: cache manager process
     @srirams From the list of many nginx processes that I have going, do I need to kill all that say master? (1104, 9633, 15826, 19862, 21461)
     /etc/rc.d/rc.nginx stop
     ^ just hangs on "Shutdown Nginx gracefully..."
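     In case it helps the next person, one possible cleanup (purely a sketch on my part, assuming every master process outside the s6-supervised one is stale; the PIDs are from the listing above):

         # stop the duplicate master processes by PID
         kill 1104 9633 15826 21461
         # escalate only if they survive
         kill -9 1104 9633 15826 21461
         # then let the rc script bring nginx back cleanly
         /etc/rc.d/rc.nginx restart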
  16. I am extremely jealous. I have no clue what's happening with my server, and it's very frustrating having to perform a reboot at minimum once a day. Thanks for directing me to that forum post; it seems like the one to follow for this issue.
  17. Can anyone provide guidance on how to work around this issue without shutting down? My current solution has been to SSH in, capture diagnostics, and then run "poweroff". Sometimes I'll wait, nothing seems to happen, and I send poweroff again; the machine then seems to shut down way too quickly, and on boot (when I push the power button) it reports an unclean shutdown. I cancel the parity check, and then rinse and repeat in 12-24 hours. Am I using the wrong command? Is there a better way to get nginx to restart properly so I don't have to perform this every day? oxygen-diagnostics-20230729-2004.zip
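     For completeness, the exact sequence I run over SSH (both are stock commands; whether poweroff is the right tool here is exactly what I'm asking):

         # capture a diagnostics zip first
         diagnostics
         # then attempt the shutdown
         poweroff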
  18. @david279 As far as I can tell from the thread, there is still no solution? Are you having the issue as well?
  19. My Unraid server is experiencing a weird issue where the UI eventually stops responding. To resolve this I usually have to SSH in and then run the poweroff command twice. When I try to load the webui, sometimes the top of the UI will load (nothing under the area in line with the server version number and tabs; this may just be webpage caching though). Eventually the webpage stops trying to load and shows "500 Internal Server Error nginx". My Docker containers seem to run as normal (although one does seem to have issues), and most of my other container web interfaces can be interacted with when the server is like this.
     I have attempted the /etc/rc.d/rc.php-fpm restart command, and while that seems to run successfully, it appears as if nothing is resolved. I ran the diagnostics over SSH, but I wasn't sure if I needed to add a special flag or something to make them anonymous like you do in the UI (a checkbox for anonymous).
     I basically run the restart/poweroff command on my server every day, and by the end of the work day or a bit later, the Unraid UI no longer functions. So I rinse and repeat. Parity checks trigger every time on boot, so the poweroff command isn't performing the clean shutdown I hoped for. Anyone have any ideas what I could look into to try and diagnose this kind of system crash, or what I'm looking for in my diagnostics files?
  20. Does the application still have the issue requiring fresh databases often? Also, I had not heard about requiring separate instances for eBooks vs. audiobooks yet; that's unfortunate. Thanks for your template work on this, @binhex! @awediohead I have a large collection of audiobooks too, but haven't really taken the plunge of shoving it into Plex. Got any guides or advice on how to tackle that? I've seen a couple of guides around from the last time I looked into it; curious what you tried.
  21. That explains my situation then; I'll wait for that to hit master. Thanks @Squid!
  22. I ran into this same problem, but even after clearing the containers in an attempt to start fresh, I still don't have the default config, logs, etc. Is there some way to make sure the templates are updated? Also, if you are setting up tdarr and it asks for the server IP, do you put the host (bridge mode) or reference itself? Same question for the node: it asks for both a server IP and a node IP. I assume I put the Unraid IP (bridge again) for the server, but for the node, is it referring to itself?
  23. The pull-down of the container was fine, but I get the same error as others mention above. The webUI wouldn't load in Chrome; each attempt to load appeared to create more of the error below in the logs. But I loaded it in MSFT Edge (new), and the page loaded fine (and I was able to log in with admin:admin). "invalid HTTP request size (max 4096)...skip" I can provide more info if you think it would be helpful.
  24. I can confirm that I and a colleague are also having an issue with exit code '56' using ca-vancouver, as @jedimstr indicates above.
  25. I don't think this is necessarily true, since with the paths I stated, the locations should be on my cache drive in my appdata share. I confirmed the directories did get created, but they are empty (I have only run the Docker startup, so that might be normal if I have yet to import an account or anything like that). My 2x 512 NVMe cache drives are pooled and contain all my appdata. I don't recall where, but I think I have read that it's better to use the file path /mnt/user/sharename rather than /mnt/cache or /mnt/disk? I still may not be understanding Squid's post 100%.
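     For what it's worth, this is roughly how I checked (assuming the share is named appdata; adjust for your setup):

         # the user-share view and the direct cache view should show the same directories
         ls -la /mnt/user/appdata/
         ls -la /mnt/cache/appdata/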