ados

Everything posted by ados

  1. Tried a few apps and then found this one; I really like its simplicity, as my family are not technical. I have set it up in Docker but cannot get large downloads to work. If the download takes more than roughly 8 minutes it hits a network error in Chrome, and resuming starts the download from the beginning. As long as the same file(s) download in less than 5 minutes it's fine; it's like something is timing out. I have NPM running as normal and other Docker containers are fine, but I see this container comes bundled with NGINX itself and I assume that is the issue. Is there a solution? I have checked the logs and, apart from the initial GET entry, there is nothing. Additional 19/02/23: I have tested and confirmed the issue is having it behind NPM. If port forwarding is configured directly to the container it works fine, but I need it to work behind NPM. I can see there is a GitHub issue for this exact problem that is a year old, so is this project still being developed?
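If the cutoff really is NPM timing out the proxied connection, the usual first thing to try is raising NGINX's proxy timeouts for that proxy host (NPM exposes a custom-config box on the proxy host's Advanced tab). A minimal sketch; the one-hour values are arbitrary examples to tune, though the directives themselves are standard NGINX:

```nginx
# Allow long-running transfers through the proxy.
# 3600s is an example value; size it to your largest expected download.
proxy_read_timeout  3600s;  # how long NGINX waits for the upstream to send data
proxy_send_timeout  3600s;  # how long NGINX waits while sending to the upstream
send_timeout        3600s;  # how long NGINX waits on the client connection
proxy_buffering     off;    # stream large responses instead of buffering them
```

Whether this fixes the NPM-in-front case here is unverified; it only addresses the "something is timing out after ~8 minutes" symptom described above.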
  2. Had to upgrade the main system because the license upgrade function wouldn't work (the button did nothing) and I needed Pro, and it's broken everything. Docker containers fail in assorted ways, and I've spent 20+ hours troubleshooting over the past two weeks. Hours of reading forum suggestions haven't worked; what did work was restoring 250,000 file permissions from backups. I set no custom Docker permissions; everything was template based. I thought this was the end of the issues, but now every time Unraid is shut down or rebooted all the permissions break. I have to restore them from backup to get Docker containers full of errors working again. 6.11.5 has caused massive issues and downgrading is next on the list unless anyone has a solution?
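For anyone stuck in the same spot, a rough console sketch of what restoring template-default permissions looks like (the path is an example; Unraid's built-in "Docker Safe New Perms" tool does something similar for shares it considers safe, so prefer that where it applies):

```shell
# Example appdata path -- point this at the affected share.
# nobody:users is the default owner Unraid templates expect.
chown -R nobody:users /mnt/user/appdata/nextcloud
# Restore read/write for owner and group; capital X only adds
# execute on directories (and files already executable).
chmod -R u+rwX,g+rwX /mnt/user/appdata/nextcloud
```

This is a blunt instrument: some containers (notably databases) deliberately run as a different UID, so only apply it to containers that actually used the template defaults.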
  3. Yeah, not sure what is happening. Just tested again with a download at 79% and rebooted, which dropped it to 39%. It then continued to slowly download from there as if the earlier progress had never happened. Forced a recheck and it corrected itself; will jump back to Deluge. Thank you for assisting me. 👍
  4. It was a fresh install of 4.5.0. I deleted the instance and installed 4.4.3.1-3-01. This did not resolve the issues: torrents sometimes get stuck on "stalled", whereby you need to pause and resume to fix them. The other issue, where they restart from the beginning unless you force a recheck, is there too; bummer.
  5. Just switched back to qBittorrent and hit a never-before-seen issue. Restart the Docker container and torrents start at 0% every time, increasing from there at the download rate. Say it gets to 5% and then you restart the container: it's 0% again, then 0.1%, etc. Pause the download, force a recheck, and it jumps to the correct downloaded amount and then resumes from there. Restart again and it's 0% again, which makes the container unusable. Is this a bug? Version 4.5.0.
  6. Shame, because it's so much better than Jackett. I will give it a go, but I've tried SOCKS proxies in the past and my ISP blocks that approach, as it only hides the DNS queries. I had to switch to Jackett VPN, which works, but I'm trying to switch to Prowlarr.
  7. Is there a roadmap to support a VPN directly within the Docker container? Right now I have to use passthroughvpn, which sucks because the container then cannot communicate with other containers via the host. I have to route via the internal Docker 172.x IP, which is not guaranteed to remain the same, and ensure the containers start with time delays so they get IPs in the expected sequence. Having VPN support would be amazing.
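Until in-container VPN support lands, one workaround for the shifting 172.x addresses is a user-defined Docker network with pinned IPs, which removes the need for start-order delays entirely. A sketch; the network name, subnet, container name and image are all made-up examples:

```shell
# Create a user-defined bridge with a known subnet (example values).
docker network create --subnet=172.30.0.0/24 vpnnet

# Pin each container's address so cross-container config never goes stale.
# "myrepo/passthroughvpn" is a placeholder image name.
docker run -d --name passthroughvpn --network vpnnet --ip 172.30.0.2 myrepo/passthroughvpn
```

As a bonus, containers on a user-defined bridge can reach each other by container name through Docker's embedded DNS, so in many cases you can drop fixed IPs from the other containers' configs altogether.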
  8. Went from 6.9 to 6.11.5 and had issues with the smaller Unraid system, particularly Docker; rolled back and everything was perfect. Had to upgrade the main, bigger system because the license upgrade function wouldn't work and I needed Pro, and it's basically broken everything. Docker containers fail in assorted ways, and I've spent a few hours reading forum suggestions which haven't worked. I set no custom Docker permissions; everything was template based. The suggested appdata permissions reset has broken more containers, mainly Nextcloud, OpenProject and anything SQL. Will try rolling back to 6.9 and staying there 😫
  9. +1 for multiple arrays. I left FreeNAS for Unraid years ago, but this is the only thing calling me back. Right now I need to merge two Unraid servers into one, primarily to reduce costs: the larger server can handle the workload of the smaller system, cutting electricity costs once the smaller one is decommissioned. However, the data on the smaller system is more critical and thus has two parity drives, but smaller drives. The larger system relies on new drives, so multiple rapid failures are unlikely, and it has backups to another system. If I bring the data into the larger system I decrease parity protection; if I increase the parity protection I lose another large drive. We need multiple-array support so we have the flexibility to match drive capacity and parity to user needs.
  10. I have a few Unraid systems and stagger their updates from least priority to most. I recently updated one system from 6.9.2 to 6.11.5 and Nextcloud took a dive. Standard reboots and a forced update of the Docker container hit the same issues. Reverted the upgrade and everything worked again. It seems the permissions in one of the later Unraid builds tighten security in a way that breaks Nextcloud. Do we have a fix for this? At the moment the only reason I have to update is the new SSL certificate system.
  11. UniFi has finally added custom DNS support, closed
  12. Apologies if this is more network related than Unraid, but I think it's worth investigating. I recently switched from consumer routing hardware to a more enterprise solution from Ubiquiti, the UniFi Dream Machine. Once implemented, the Unraid interface (GUI) was no longer accessible. Since I had set up different VLANs and firewall restrictions following least-privilege access, I was quick to blame my implementation. I slowly peeled back the restrictions and nothing worked; it still resulted in this: Eventually I lowered all security until there was none, and while I could ping the server, the GUI would not load. I confirmed using GUI mode that the server was fine and that both DHCP and DNS matched the testing desktop devices. Finally I identified that https://IP:443 worked once the untrusted-certificate warning was dismissed. This led me to the "solution": add the host URL Unraid redirects to into the PC hosts file, which then works as before. This confirms it's the combination of UDM and Unraid, not Unraid itself, but it raises the question of how to fix it at the router or Unraid level. The most obvious solution would be a custom DNS record on the UDM but, unknown to me before purchasing the device, it doesn't support custom DNS records despite years of cries from customers. Most use a secondary DNS device such as Pi-hole, but that just adds extra complexity and devices to manage. Are there any UDM owners out there with a solution to this? Thanks 😃
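For anyone else hitting this, the hosts-file workaround looks like the following. Both the IP and the hostname are placeholders; use your server's LAN IP and the exact host the GUI redirects to:

```shell
# Linux/macOS: append the redirect host to /etc/hosts (needs root).
# On Windows, edit C:\Windows\System32\drivers\etc\hosts instead.
echo "192.168.1.50  your-server-hash.unraid.net" | sudo tee -a /etc/hosts
```

This has to be repeated on every client device, which is exactly why a router-level DNS record (or Pi-hole) is the cleaner fix when available.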
  13. I have updated the guide to remove many errors, sorry. 🙁 I have also found a compatibility issue with newer versions of NGINX Proxy Manager. By comparing differences between previous backups and the new versions I have resolved the issue and added the fix to the guide.
  14. I cannot seem to get this to work for me. Based on what has been said, it does not wait before starting the container but instead starts it and then applies the wait afterwards. If that's the case it would explain why it's not working for me: I'm trying to control the internal IP order because I need containers to communicate directly, and thus need each IP to remain the same. I am aware of the custom IP option but cannot use it; am I up a creek without a paddle?
  15. The main issue with that application is support, which has decreased over time. You need to stay on older versions if you wish to have plugin support. Keeping it short as this belongs over on the Deluge forum: you need to be on a 2.0.x version; 2.0.4 is a good one. 😉
  16. Ha ha, let me know when virtual beers are a thing. Sorry for the delay, work has been keeping me busy. Good to hear you're liking Organizr; it does incorporate all your instances into one place for easy management. Since you're using Swag, and I don't have much knowledge of it other than that it's also NGINX, I might not be able to provide much direction. The "include" shouldn't be needed. As for the "auth_request" part, you can either specify it for each sub domain or in the main section of the config file. The reason you would split it out per sub domain is to allow greater control over which sub domains are authorisation controlled; alternatively, you can just specify "auth_request off;" in a sub domain to omit it from authentication. As for Ombi, I didn't have that issue, as I stopped using it in favour of another platform and never liked the mobile app. All my interaction was from the web interface, which I found better; pinned to the mobile home screen it more or less functions as an app. Since the Ombi app makes its own HTTP(S) requests to the Docker instance, it has its own cookies and authentication; this causes an issue because those requests never contain the Organizr authentication cookie that a web browser would pass. I don't think there will be an easy way around this without using a firewall.
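To make the auth_request split concrete, here is a rough NGINX sketch. The `auth_request` mechanics are standard NGINX (`ngx_http_auth_request_module`), but the Organizr endpoint path, the group number and the upstream names are assumptions from my own notes, so verify them against the Organizr documentation:

```nginx
# Internal-only auth endpoint backed by Organizr.
# The path and the trailing "1" (minimum user group) are assumptions.
location = /organizr-auth {
    internal;
    proxy_pass http://organizr/api/v2/auth/1;
    proxy_pass_request_body off;         # the auth check needs no body
    proxy_set_header Content-Length "";
}

# Protected app: every request must pass the Organizr auth check first.
location / {
    auth_request /organizr-auth;
    proxy_pass http://protected-app:8080;
}
```

To exempt a particular sub domain from authentication, drop `auth_request off;` into that server block instead of the check above.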
  17. Sorry, I don't know what you're asking. Are you wanting to know if your NGINX is secure?
  18. Don't apologise; looking for improvements and better systems is the idea, but for your use case OpenVPN is the way to go. You already have PIA, which is the best place to start for port-forwarding servers.
  19. WireGuard is a new VPN protocol written from the ground up as an alternative to the OpenVPN standard. However, unless it's changed fundamentally, it works only over UDP and not TCP; for what you're doing, the slightly slower but more reliable TCP mode of OpenVPN will be better. You should also be focusing on support, which is weaker with WireGuard since it's new: fewer server nodes support it, and you need nodes with port-forwarding support, which are already rare before even looking at WireGuard.
  20. WireGuard is not required; if you're in this forum then speed is not going to improve anything when you could be using TCP rather than UDP.
  21. @wobblewoo, if you're not too committed to your VPN provider, switch to PIA. They are affordable, fast and support port forwarding, which you generally need for what you're doing.
  22. @tetrapod you might find Organizr interesting, check out the new guide.
  23. For those wanting to restrict login access to their sub domains or sub-folders from public access through NGINX Proxy Manager, I have created a guide using Organizr. It's a powerful SSO-based platform for accessing multiple resources, and it allows configuration via its API so that URL resources are restricted unless the user is authenticated to Organizr. Guide: