binhex

Community Developer

Everything posted by binhex

  1. most probably cookie related, try clearing cookies in your browser next time.
  2. it could be a transcode issue, i have seen the transcode binaries become corrupt sometimes, but more likely the audio transcode is maxing out your CPU and causing the playback issue - bear in mind audio transcoding is done using the CPU, NOT the GPU. For the permissions issue, post an issue here and tell them exactly what you did (point at the guide or detail the commands you ran).
  3. what 'update error'? this has a freeware licence for another 50 days yet:-
  4. When it's out of beta it will be built. Sent from my 22021211RG using Tapatalk
  5. This also looks like a neat and fast solution, but PLEASE for the love of god change the password from adminadmin after doing this:- https://forums.unraid.net/topic/75539-support-binhex-qbittorrentvpn/?do=findComment&comment=1330952
  6. in a word, no, VPN_OUTPUT_PORTS is not used for that purpose, see the guide:- https://github.com/binhex/documentation/blob/master/docker/guides/vpn.md - you do not need to open any additional outbound ports, all outbound connections are permitted via the vpn connection (there is an example run command after this list).
  7. FYI regarding no password shown in the log, it's a qbittorrent bug - https://github.com/binhex/arch-qbittorrentvpn/issues/208#issuecomment-1824091412
  8. it doesn't exist in qbittorrent-nox (web ui), only the gui version of qbittorrent has the creator.
  9. To be clear, from the qbittorrent news post:- so if adminadmin doesn't work then you know the reason why, check the supervisord.log file and use the randomised password (see the one-liner after this list), or better yet stop using the default password and set one.
  10. from your logs it looks like you have hit the server's defined inotify instances limit, from your log:- and a quick google reveals this unraid post (instances matches watches btw):- alternatively simply restart your server (not the container) and i suspect the issue will go away, but it may come back if you hit your inotify watches limit again (see the sysctl commands after this list).
  11. A quick post on this: i see from the issue raised that several people are having zero crashes after updating to kernel 6.5, and as kernel 6.6 has just got LTS approval i am hopeful unraid @SpencerJ will move to this new kernel in the future (6.13 perhaps?), which will finally put an end to this bug. https://github.com/arvidn/libtorrent/issues/6952#issuecomment-1719203595 https://github.com/arvidn/libtorrent/issues/6952#issuecomment-1719224295 https://github.com/arvidn/libtorrent/issues/6952#issuecomment-1770925098 https://github.com/arvidn/libtorrent/issues/6952#issuecomment-1770936648
  12. If you can tell me which bits are not clear then I can improve the faq. Sent from my 22021211RG using Tapatalk
  13. both are available, use tag name 'libtorrentv1' if you want to use v1 (pull example after this list).
  14. done, please pull down 'latest' tagged image.
  15. OK guys, i have been chin wagging with @ich777 and he tells me he was also seeing hangs with unrar in the sabnzbd docker image he produces, and switching over to the unrar v7 beta seems to have fixed the issue. so with that in mind i have pushed out a new image with unrar v7 included, please pull down the docker image with tag name 'unrar-compile' and let me know the outcome, if it's all good (it will take several days to verify) then i will merge into main/master.
  16. this is an interesting read: https://forums.sabnzbd.org/viewtopic.php?t=23923 - this is the conclusion the sabnzbd dev comes to, so it looks like a possible bug in unrar:- https://forums.sabnzbd.org/viewtopic.php?p=118851#p118851
  17. i guess i MIGHT be able to detect a stall condition and then kill all processes in the container, thus forcing the container to stop; if you had it set to auto restart then it would start back up and unpack the stalled queued item (rough sketch of the idea after this list) - worth investigating?
  18. OK been playing with this, so to review your findings and mine, the issue is two-fold:-
      1. deluged and deluge-web fork (the default action), and thus the dumb-init sigterm never reaches deluged or deluge-web as they are not child processes of the script.
      2. dumb-init does not wait for processes to exit after the process is sent sigterm.
      To fix 1. we simply prevent deluged and deluge-web from running daemonised (but we do background deluged), this ensures the processes are run as child processes of the script and thus get sent sigterm - see https://github.com/binhex/arch-deluge/commit/8117d7e6286d41f2de6a3a3f35004367b3a432ef
      To fix 2. i have created a script which traps sigterm and then waits for all child processes to exit - see https://github.com/binhex/scripts/blob/master/shell/arch/docker/waitproc.sh (a minimal sketch of the pattern is after this list).
      Once these two are in place you can tick and untick plugins and this state is saved; i haven't looked at torrent states so i will leave that up to you guys, but it should also fix this.
      To test please pull down the image with tag name 'wait-proc' and let me know if it works, as i will need to merge into master/main if it's good - note this is JUST for arch-deluge so far, so arch-delugevpn does NOT have this tag.
  19. Thanks to you both @mhertz @ambipro - i have added the -v flag for info in the image, this will be in the next image build. In the meantime i shall take a look at dumb-init again and see what is going on, quite disappointed it's not working as intended, process management and zombie reaping are a PITA in docker :-(.
  20. Hmm, i was digging around a bit and happened to come across this post on reddit:- which then led me to this repo; as you can see it was last updated 2 days ago and it looks like the guy has releases working correctly:- https://github.com/nzbgetcom/nzbget
  21. no idea without a log file, do this:- https://github.com/binhex/documentation/blob/master/docker/faq/help.md
  22. updates are now switched off in the latest image, please pull at your convenience. btw i did take a look at the tagged versions of nzbget-ng but they are in a poor state and do not compile at present, so we are stuck on the develop branch until the dev produces 'releases'. p.s. still no unpacking issues?
  23. OK i see your issue, you have given the container a set ip address, this then causes a clash as the docker network and the lan are the same, from your log:-
      2023-11-08 06:32:02,593 DEBG 'start-script' stdout output: [debug] Docker IP defined as 192.168.4.5
      2023-11-08 06:32:02,599 DEBG 'start-script' stdout output: [debug] Docker netmask defined as 255.255.252.0
      2023-11-08 06:32:02,935 DEBG 'start-script' stdout output: [info] Docker network defined as 192.168.4.0/22
      the fix is either to set up another docker network separate to your lan and use that (see the example after this list), or simply switch it back to 'bridge'.
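
Sketches referenced in the posts above (rough examples only, none of these are official configs):

Re post 6 (VPN_OUTPUT_PORTS): a minimal docker run sketch, assuming the standard env var names from the vpn guide (VPN_ENABLED, VPN_CLIENT, VPN_PROV, VPN_USER, VPN_PASS, LAN_NETWORK, NAME_SERVERS); the image name, ports, paths and values are placeholders, check the guide for what your provider actually needs. The point is simply that no extra outbound port settings are required - everything outbound goes via the tunnel.

    # placeholder values throughout - adjust for your own provider and lan
    docker run -d \
      --name=qbittorrentvpn \
      --cap-add=NET_ADMIN \
      -p 8080:8080 \
      -v /mnt/user/appdata/qbittorrentvpn:/config \
      -v /mnt/user/data:/data \
      -e VPN_ENABLED=yes \
      -e VPN_CLIENT=openvpn \
      -e VPN_PROV=custom \
      -e VPN_USER=vpnuser \
      -e VPN_PASS=vpnpass \
      -e LAN_NETWORK=192.168.1.0/24 \
      -e NAME_SERVERS=1.1.1.1,8.8.8.8 \
      binhex/arch-qbittorrentvpn
    # note: no VPN_OUTPUT_PORTS set - outbound connections are all permitted via the vpn connection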
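Re post 9 (randomised password): a quick way to fish the password out of the log, assuming the usual /config volume mapping and container name (both are just examples here) - and per post 7, if it is missing from the log entirely that is the qbittorrent bug linked there.

    # from the host, via the appdata share (path is an example)
    grep -i password /mnt/user/appdata/binhex-qbittorrentvpn/supervisord.log

    # or without knowing the appdata path (container name is an example)
    docker exec binhex-qbittorrentvpn grep -i password /config/supervisord.log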
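Re post 10 (inotify limits): checking and raising the limits on the host - these are standard linux sysctls, the values are just examples, and changes made this way do not survive a reboot so you would need to re-apply them (e.g. via the go file or a user script on unraid).

    # check the current limits
    cat /proc/sys/fs/inotify/max_user_instances
    cat /proc/sys/fs/inotify/max_user_watches

    # raise them for the running system (example values)
    sysctl fs.inotify.max_user_instances=512
    sysctl fs.inotify.max_user_watches=524288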
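Re post 13 (libtorrent v1): pulling the v1 build by tag - the image name below is only an example, substitute whichever of my images you actually run.

    docker pull binhex/arch-qbittorrentvpn:libtorrentv1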
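Re post 17 (stall detection): the rough idea only, nothing that ships in any image - it assumes 'no file activity in the incomplete folder for N minutes' means stalled (a real version would also need to check the queue actually has items), then signals PID 1 so the container stops and, with a restart policy set, comes back up. Paths and timings are placeholders.

    #!/bin/bash
    # hypothetical stall watchdog - just an idea, not part of any image
    WATCH_DIR="/data/incomplete"   # placeholder path
    STALL_MINS=30                  # how long with no activity counts as stalled

    while true; do
      sleep 300
      # nothing modified in the watch dir recently? treat it as stalled
      if [[ -z "$(find "${WATCH_DIR}" -type f -mmin -${STALL_MINS} 2>/dev/null)" ]]; then
        echo "[warn] no activity for ${STALL_MINS} mins, stopping container processes"
        kill -TERM 1   # with --restart=always docker then brings the container back up
      fi
    done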
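Re post 18 (waitproc): a minimal illustration of the 'trap sigterm, then wait for the children' pattern - the real script is waitproc.sh linked above, this is just the shape of it, and the flags assume deluge 2.x where -d means do-not-daemonize.

    #!/bin/bash
    # forward sigterm to our child processes when docker stops the container
    _term() {
      echo "[info] sigterm received, forwarding to children..."
      pkill -TERM -P $$
    }
    trap _term SIGTERM

    # run both non-daemonised so they remain child processes of this script,
    # backgrounded in the shell so both can be started
    deluged -d -c /config &
    deluge-web -d -c /config &

    # the first wait is interrupted when the trap fires, the second wait blocks
    # until the children have actually finished shutting down (so state is flushed)
    wait
    wait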
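Re post 23 (network clash): create a separate docker bridge network whose subnet does not overlap the lan (192.168.4.0/22 in that log) and attach the container to it - the network name, subnet and image are all examples.

    # create a bridge network on a subnet that does NOT overlap your lan
    docker network create --subnet=172.30.0.0/24 proxynet

    # run the container on that network instead of giving it a lan ip
    docker run -d --name=binhex-qbittorrentvpn --network=proxynet binhex/arch-qbittorrentvpn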