Everything posted by Jorgen

  1. Glad you worked it out! It can be confusing with the two different methods of using the VPN tunnel, each one with its own quirks on how to set it up. Sent from my iPhone using Tapatalk
  2. Wireguard is supported and has been for a while, but maybe I’m misunderstanding your question?
  3. I ended up adding all of the download apps to the delugeVPN network, including NzbGet (the non-VPN version from binhex). I am seeing slower download speeds for nzbget this way, but at least it’s working and all apps can talk to each other again. Binhex is working on a secure solution to allow apps inside the VPN network to talk to apps outside it, so it might be possible to run NzbGet on the outside again in the future. Your only other option is to configure sonarr/radarr to talk to NzbGet on the internal docker IP, e.g. 172.x.x.x, but beware that the actual IP is dynamic for each container and may change on restarts. Edit: yeah, jackett is of no use for NZBs, you need to set them up as separate indexers in radarr/sonarr. Which shouldn’t be a problem, as both apps can talk to the internet freely over the VPN tunnel. Edit2: sorry, ignore the first bit of my reply, you’re using privoxy, not network binding. Sorry for the confusion.
  4. I see, you learn something new every day. This container is based on Arch, which seems to support firewalld, but that’s obviously up to binhex what to use. Given all the effort he’s put into the iptables rules, I’d hazard a guess that he’s not keen to change them anytime soon.
  5. Just out of curiosity, what do you think it should use instead of iptables? It seems well suited to the task at hand: stopping any data from leaking outside the VPN tunnel.
  6. This doesn't answer your question directly, but in Sonarr/Radarr you can also set them up to use all your configured indexers in Jackett; see the example from Radarr below. That way you only have to set up one indexer, once, in sonarr/radarr. If you add new indexers to Jackett, they will be automatically included by the other apps. It doesn't work if you need different indexers for the different apps, though...
  7. I have the same problem, where everything works as expected (after adding ADDITIONAL PORTS and adjusting application settings to use localhost) except being able to connect to bridge containers from any of the containers with network binding to delugeVPN. Jackett, Radarr and Sonarr are all bound to the DelugeVPN network. Proxy/Privoxy is not used by any application. NzbGet is using the normal bridge network. I can access all application UIs and the VPN tunnel is up. Each application can communicate with the internet. In Sonarr and Radarr: I can connect to all configured indexers, both Jackett (localhost) and nzbgeek directly (public DNS name). I can connect to delugeVPN as a download client (using localhost). I CAN NOT connect to NzbGet as a download client using <unraidIP>:6790; the connection times out. I CAN connect to NzbGet using its docker bridge IP (172.x.x.x:6790). It's my understanding that the docker bridge IP is dynamic and may change on container restart, so I don't really want to use that. @binhex it seems like the new iptables tightening is preventing delugeVPN (and other containers sharing its network) from communicating with containers running on the bridge network on the same host? Here's curl output from the DelugeVPN console to the same NzbGet container, first using the unraid host IP ( and then the docker network IP (:

     sh-5.1# curl -v
     * Trying
     * connect to port 6789 failed: Connection timed out
     * Failed to connect to port 6789: Connection timed out
     * Closing connection 0
     curl: (28) Failed to connect to port 6789: Connection timed out
     sh-5.1# curl -v
     * Trying
     * Connected to ( port 6789 (#0)
     > GET / HTTP/1.1
     > Host:
     > User-Agent: curl/7.75.0
     > Accept: */*
     >
     * Mark bundle as not supporting multiuse
     < HTTP/1.1 401 Unauthorized
     < WWW-Authenticate: Basic realm="NZBGet"
     < Connection: close
     < Content-Type: text/plain
     < Content-Length: 0
     < Server: nzbget-21.0
     <
     * Closing connection 0
     sh-5.1#

     Edit: retested after resetting nzbget port numbers back to defaults.
Raised issue on github:
  8. No. You can use privoxy from any other docker or computer on your network by simply configuring the proxy settings of the application/computer to point to the privoxy address:port. For example, you would do this under settings in the Radarr Web UI, or in the proxy settings of Firefox on your normal PC. But for dockers running on your unraid server, like radarr/sonarr/lidarr, there is an alternative way: you can make the other docker use the same network as deluge, by adding net=container... to the extra parameters. This has the benefit that all of the docker application's traffic is guaranteed to go via the VPN. When using privoxy, only http traffic is routed via the VPN, and only if the application itself has implemented the proxy function properly. If you go the net=container way, you shouldn't also use the proxy function in the application itself. So one or the other, depending on your use case and needs, but not both.
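As a sketch of the net=container approach described above (the container name binhex-delugevpn is an assumption; substitute the actual name of your VPN container), the unraid Extra Parameters field for the bound container would look something like:

```conf
# Hypothetical example: make this container share the VPN container's network stack.
# Replace binhex-delugevpn with the actual name of your VPN container.
--net=container:binhex-delugevpn
```

With this in place the bound container no longer has its own port mappings; its web UI is reached via the VPN container's address, with the port added under the VPN container's ADDITIONAL PORTS as mentioned earlier in the thread.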
  9. I'm not familiar with PrivateVPN, so not sure how much help I can offer on this. Have you considered using PIA instead? Either way, I think we need to see more detailed logs at this point, can you follow this guide and post the results please? Remember to redact your user name and password from the logs before posting!
  10. This is almost certainly the main problem. If surfshark doesn't support port forwarding your speeds will be slow, sorry. Maybe someone else is using surfshark and can confirm whether they have managed to get good speeds despite this? Strict_port_forward is only used with port forwarding, so you might as well leave it at "no". It has definitely helped others, so it's worth a shot. But you really are fighting an uphill battle without port forwarding. So this is worth pursuing for sure, due to the error you're seeing. The defaults include PIA name servers, which I think have been deprecated by now. There are also other considerations, e.g. don't use google, see Notes at the bottom of this page: Try replacing the name servers with this:,,,,, If the error persists, it's likely something wrong with the DNS settings for unraid itself. No idea about this one, sorry
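For reference, the binhex VPN containers take their resolvers from the NAME_SERVERS environment variable as a comma-separated list. A minimal sketch, assuming you want non-Google public resolvers (the specific choice of Cloudflare and Quad9 here is my example, not a recommendation from this thread):

```conf
# Hedged example for the NAME_SERVERS container variable.
# and are Cloudflare;, are Quad9.
```

Change this in the container template (or docker run -e flag) and restart the container for it to take effect.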
  11. If you haven't done so already, work through all the suggestions under Q6 here:
  12. Ok, I'm really guessing here, @binhex will need to chime in with the real answer, but I think you need to: 1. Remove the ipv6 reference. 2. Remove the DNS entry (maybe; it might also be ignored already. Either way it would be better to move it to the DNS settings of the docker). 3. Add the wireguard up and down scripts. 4. Ensure the endpoint address is correct. "" does not resolve to a public IP for me, and I'm pretty sure it needs to for wireguard to be able to connect to the endpoint and establish the tunnel. 5. Try removing the /16 postfix from the Address line. So apart from #4, try the below as wg0.conf. Although I'm pretty sure it will still fail because of #4.

     [Interface]
     PrivateKey = fffff
     Address =
     PostUp = '/root/'
     PostDown = '/root/'

     [Peer]
     PublicKey = fffff
     AllowedIPs =
     Endpoint =
  13. @Nimrad this is what the wg0.conf file looks like when using PIA. I assume it needs to have the same info when using other VPN providers as well. The error in your log specifically states that the Endpoint line is missing from your .conf file, but I don't know what it needs to be set to for your provider/endpoint.

     [Interface]
     Address =
     PrivateKey = <redacted>
     PostUp = '/root/'
     PostDown = '/root/'

     [Peer]
     PublicKey = <redacted>
     AllowedIPs =
     Endpoint =
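Since the address values in the quoted conf were lost to the forum scrape, here is a complete wg0.conf skeleton for illustration. Every value below is a hypothetical placeholder, not data from this thread or from any provider; substitute what your VPN provider gives you:

```conf
# Hypothetical skeleton only -- all values are placeholders.
[Interface]
Address =                  # tunnel address assigned by your provider
PrivateKey = <your-private-key>
PostUp = '<path-to-up-script>'       # scripts the container provides/expects
PostDown = '<path-to-down-script>'

[Peer]
PublicKey = <provider-public-key>
AllowedIPs =                # route everything through the tunnel
Endpoint = <provider-host-or-ip>:51820  # 51820 is the conventional WireGuard port
```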
  14. Can you post your wg0.conf file (but redact any sensitive info)? Which VPN provider are you using?
  15. It should be Bridge network, and you have already found the correct application setting to fix AP auto-adoption when running inside a Docker container (in Bridge mode). Set Controller Hostname/IP to the IP of your Unraid server, then tick the box next to "Override inform host with controller hostname/IP". See for more details, especially this:
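If an AP still refuses to adopt after the override, a common fallback is to SSH into the AP and point it at the controller manually. A sketch with placeholder addresses (angle-bracket values are mine, not from this thread):

```shell
# <ap-ip> is the access point, <unraid-ip> is the host running the controller.
ssh ubnt@<ap-ip>                             # default login on an unadopted AP: ubnt/ubnt
set-inform http://<unraid-ip>:8080/inform    # 8080 is the controller's default inform port
```

After running set-inform, the AP should appear as pending adoption in the controller UI.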
  16. Thanks for this, just confirming that method 2 pulls down the correct installer for me as well.
  17. @SpaceInvaderOne I just pulled down your latest container, and when trying to install Big Sur it actually runs the Catalina installer. I've deleted everything and started over a few times (with an unraid reboot in between for good measure) with the same result; is there a bug in the latest image? The installer is correctly named BigSur-install.img, but when you run it from the new VM it is definitely trying to install Catalina:
  18. Possibly, not sure how you would find out though. Can you ask the tracker?
  19. I don’t think you can, sorry. The port is randomly given out by PIA from its pool for the particular server you are connecting to. The container uses a script that asks for a new (random) port on each reconnection to the VPN server.
  20. Using iCloud is the way to go in my experience. Using the same library from two different Macs is not safe; Apple recommends against it and there are many reports of corruption from users doing it. An alternative would be to share the library from within Photos on the VM. Other Macs can then access it in read-only mode on the same network.
  21. For backups, yes, if you have enough space on your Mac to store the full library in the first place (I don’t). There’s also a live database in the folder structure, so syncing that without corruption will need consideration. It’s probably easier to use Time Machine instead of a custom script, as Apple has already worked it out for you.
  22. Should cut down on the support requests in this thread... Just out of interest, how did you manage to do this? I thought this was a limitation in unraid’s VM implementation?
  23. My MediaCover directory is in appdata on my cache disk, not inside the docker.img. As far as I know I haven't done anything special to put it there, wasn't even aware of it until I saw your post. It should be inside /config from the container perspective, which should be mapped to your appdata share.
  24. I’m successfully doing exactly this, using a vdisk on a user share added to a Mac VM XML as a disk. It has been running for 1-2 years on two separate VMs without a single problem, one for me, one for the wife. I use iCloud Photo Library and only sync the latest photos (or whatever the OS thinks I can afford to store locally) to the local storage on my phone, laptop and iPad. I can still access all photos, and if the high-res original is not on the device it will be synced down from iCloud automatically when required. The VM has a full copy of the library, you know, just in case iCloud loses all my data. I only spin up the VM once a week to sync down the latest photos. I also use Time Machine in the VM to back up the whole photo library to another array disk. I really don’t want to lose my photos. Or my wife’s, which would be much worse. I figure cloud + 2 local copies should be enough. Edit: just realized it’s actually not exactly the same. I don’t mount the disk image from the share inside the VM, I mount it as an extra disk via the VM XML. I think I decided it was easier and more reliable to get the disk mounted this way, since I start and stop the VM automatically on a schedule.
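The "extra disk via the VM XML" approach above can be sketched as a libvirt disk element. The file path, target dev and cache mode below are illustrative placeholders, not the exact values from my setup:

```xml
<!-- Hypothetical example: attach a raw vdisk from a user share as a second disk. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/photos/vdisk2.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

This goes inside the devices section of the VM's XML (Edit XML view in unraid); the disk then shows up inside the guest like any other drive.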
  25. Have a look at the WireGuard section here: Looks like you are missing --sysctl="net.ipv4.conf.all.src_valid_mark=1" Also, from your volume mappings it looks like you’re running this on a Windows host? I know wireguard relies on support in the Linux kernel, so I'm not sure if that would cause problems when running on Windows.
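For context, the sysctl flag goes on the docker run command (or in unraid's Extra Parameters field). A sketch of its placement; the image name and the other flags shown are illustrative, only the --sysctl line is the fix from the post above:

```shell
# Illustrative placement only; image name and other options are placeholders.
docker run -d \
  --cap-add=NET_ADMIN \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  binhex/arch-delugevpn
```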