Jorgen

Members
  • Posts: 269

Everything posted by Jorgen

  1. I’m successfully doing exactly this, using a vdisk on a user share added to a Mac VM XML as a disk. It has been running for 1-2 years on two separate VMs without a single problem, one for me, one for the wife. I use iCloud Photo Library and only sync the latest photos (or whatever the OS thinks I can afford to store locally) to the local storage on my phone, laptop and iPad. I can still access all photos, and if the high-res original is not on the device it will be synced down from iCloud automatically when required. The VM has a full copy of the library, you know, just in case iCloud loses all my data. I only spin up the VM once a week to sync down the latest photos. I also use Time Machine in the VM to back up the whole photo library to another array disk. I really don’t want to lose my photos. Or my wife’s, which would be much worse. I figure cloud + 2 local copies should be enough. Edit: just realized it’s actually not exactly the same. I don’t mount the disk image from the share inside the VM, I mount it as an extra disk via the VM XML. I think I decided it was easier and more reliable to get the disk mounted this way, since I start and stop the VM automatically on a schedule.
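     For reference, the extra disk ends up as an entry like the sketch below in the VM XML (virsh edit, or the XML view in the unraid VM manager). The path, disk format and target dev are just examples, and from memory the bus may need to be sata rather than virtio for a macOS guest, so treat it as a starting point only:
       <disk type='file' device='disk'>
         <driver name='qemu' type='raw' cache='writeback'/>
         <source file='/mnt/user/vdisks/photos/photos.img'/>
         <target dev='sdb' bus='sata'/>
       </disk>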
  2. Have a look at the WireGuard section here: https://github.com/binhex/arch-delugevpn Looks like you are missing --sysctl="net.ipv4.conf.all.src_valid_mark=1" Also, from your volume mappings it looks like you’re running this on a Windows host? I know WireGuard relies on support in the Linux kernel, so not sure if that would cause problems when running on Windows. Sent from my iPhone using Tapatalk
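     In case it helps, a rough sketch of what the run command can look like with WireGuard selected. Ports, paths and credentials are placeholders and I’ve trimmed a few parameters (LAN_NETWORK etc.), so check the README in that repo for the full list:
       docker run -d \
         --cap-add=NET_ADMIN \
         --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
         -p 8112:8112 -p 8118:8118 \
         -e VPN_ENABLED=yes \
         -e VPN_CLIENT=wireguard \
         -e VPN_PROV=pia \
         -e VPN_USER=<pia username> \
         -e VPN_PASS=<pia password> \
         -v /path/to/config:/config \
         -v /path/to/data:/data \
         binhex/arch-delugevpn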
  3. I think you’re mixing Bytes and bits there. 80MiB/s is pretty close to the maximum practical speed of a 1Gb/s line. Sent from my iPhone using Tapatalk
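     Rough numbers, assuming the 80 figure really is MiB/s as reported by the client:
       1 Gb/s = 1000 Mb/s ÷ 8 ≈ 125 MB/s ≈ 119 MiB/s (theoretical line rate, before any overhead)
       80 MiB/s ≈ 84 MB/s ≈ 670 Mb/s on the wire
     Once you subtract protocol/VPN overhead and whatever the other end can sustain, a figure like that is not unusual for a 1 Gb/s connection in practice.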
  4. Yeah me neither, but I’m out of my depth here. Hopefully someone more knowledgeable can jump in and help. Sent from my iPhone using Tapatalk
  5. DNS lookups are one example of traffic that normally bypasses the proxy, so routing them through the tunnel hides them from your ISP. But the main use case is to use the VPN tunnel for docker apps that don’t support the use of a proxy at all. NZBget would be one example, but there are many more. Sent from my iPhone using Tapatalk
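     To make that concrete, one way to do it is to attach the other container to the VPN container’s network stack, so all of its traffic goes through the tunnel. A minimal sketch only; the container and image names and the paths are placeholders, and the app’s web UI port then has to be published on the VPN container rather than on the app container:
       docker run -d \
         --name=nzbget \
         --net=container:binhex-delugevpn \
         -v /mnt/user/appdata/nzbget:/config \
         -v /mnt/user/downloads:/downloads \
         binhex/arch-nzbget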
  6. Try rebooting unraid to shake out any leftover port usage. But what is this in your run command? --net=eth1 That looks non-standard to me, and might be part of the problem? Sent from my iPhone using Tapatalk
  7. Glad it's working for you again, but you probably shouldn't be using the :test tag anymore. It was only temporary for testing new functionality when it was first introduced. Just removing ":test" from the repository field and saving the changes should get you back on the latest normal release of the container.
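     For example, assuming it's the DelugeVPN container, the repository field would change from something like:
       binhex/arch-delugevpn:test
     to:
       binhex/arch-delugevpn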
  8. Lots of people are running successfully on the nextgen servers. If you post your logs (remove username and password first) we should be able to help you get it working. The logs also contain a list of all endpoints that support port forwarding. Sent from my iPhone using Tapatalk
  9. It won’t help in this case: the VM-to-unRAID share networking is not affected by the NIC speed, since it uses a virtual network. You should already be able to write to the cache drive at full disk speed from the VM. Sent from my iPhone using Tapatalk
  10. Just adding to your learnings: enabling turbo-write has no effect when you also disable the parity disk. There is a good explanation in the wiki about how normal vs turbo-write parity calculation is achieved and why the latter is faster. It comes at the expense of needing all your drives spun up though. I’m on my phone so won’t even attempt to find the article and link it, but I’m sure you can find it yourself if you want to dig deeper. Sent from my iPhone using Tapatalk
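      (From memory, the gist of it: parity is the XOR of all data disks. A normal write only spins up the target disk and parity, and updates parity as P_new = P_old xor D_old xor D_new, which forces a read-modify-write on both disks. Turbo/reconstruct write instead reads all the other data disks and recalculates P = D1 xor D2 xor ... xor Dn directly, so the data disk and parity both get straight sequential writes - faster, but every drive has to be spinning. The wiki article has the authoritative version.)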
  11. @jwoolen sorry, I don’t know what might be wrong in that case. Hopefully someone with better networking knowledge than me can chip in. Sent from my iPhone using Tapatalk
  12. I assume the screenshots are from a browser on your local PC and you have configured the browser to use Privoxy as the proxy server? In that case the browser routes http traffic via the proxy server and VPN tunnel. However, the browser will use the OS mechanism for DNS resolution (DNS is different to http). Since your OS doesn't use the Privoxy proxy, it will fail the DNS leak test. The DNS servers you set in the container settings have no effect on the browser behaviour in this case. I believe they are only used by the container before the VPN tunnel is established, but maybe @binhex can confirm this? When you are using the PIA app on the other hand, all internet traffic is routed via the VPN tunnel on an OS level, including DNS resolution. So DNS passes the leak test.
      So how do you get the results you want? Two options that I know of that should work (but see note below):
      1. If your browser supports it, set DNS resolution to go over HTTPS. In Firefox this is called "Enable DNS over HTTPS" under the proxy configuration settings. I assume other browsers have something similar.
      2. Enable the SOCKS v5 proxy in Privoxy and set up your browser to use that. See here for details on how that works: https://stackoverflow.com/questions/33099569/how-does-sock-5-proxy-ing-of-dns-work-in-browsers
      Now, I just tested both methods and could not get the browser to pass the DNS leak test for either. Not sure what I'm doing wrong, but I'm not that worried about it as I use the PIA app on my PC anyway. Maybe this will point you in the right direction, though. Please report back if you try it and get it to work for you. Actually, you might also be able to set up your OS to use Privoxy as the proxy, but I have not tested that at all.
      Edit: looks like I need to use the FoxyProxy extension for Firefox to be able to pass the username/password when using SOCKS. Hopefully other browsers have better support for SOCKS...
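      One browser-independent way to check whether DNS actually goes through the SOCKS proxy is curl: the socks5h:// scheme tells curl to hand the hostname to the proxy, so the lookup happens on the VPN side rather than on your OS. Host, port and credentials below are placeholders for whatever you've configured:
        curl -x "socks5h://user:pass@192.168.1.10:9118" https://ifconfig.io
      If that prints the VPN endpoint's IP, the proxying (including the name lookup) is working; with plain socks5:// the lookup would be done locally instead.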
  13. Ok, had a look at earlier posts and I take it you will buy new 8TB drives. Assuming the 10TB spare has data on it, this is what I would do:
      1. Assign the 8TB drives to the array and let unraid clear and format them. Do not assign a parity drive yet. And either don't assign a cache drive at all, or make sure your user share(s) are set up without cache, as Trurl mentions above.
      2. Mount the share on your Windows box.
      3. Use TeraCopy to copy all data to the share(s). At this point you have two copies of your data: one on the original drives, one on the unraid share.
      4. Add the spare 10TB drive to unraid and assign it as parity drive. Unraid will start building the parity. This will take quite some time, 1-2 days most likely. During this process any data that was previously on the spare 10TB drive will be wiped and only exist on unraid, but it will be unprotected as the parity has not been built yet. You need to decide if you are willing to live with this risk.
      5. Once parity is built, you can delete any duplicate data from the non-spare 10TB drive that is still in your Win box.
      6. Add or enable cache for the share etc. Optionally add another parity drive if that was your plan.
      If you are not comfortable with the risk in step 4, your only option is to add another (new) disk as parity at the start of the process to ensure the data is protected at all times. But it will slow down the data transfer.
  14. So you have a spare 10TB drive you want to re-use for unraid, right? Is it empty or does it have data on it? Is that the only existing drive you will be re-using? And how many other new drives (and what size) are you planning to buy? Sent from my iPhone using Tapatalk
  15. If you want to run the copy from your win box over the network, TeraCopy will let you verify file integrity and has other good features for large copy jobs: https://www.codesector.com/teracopy You could also mount the drive in unraid using the unassigned devices plugin and use something like rsync from the unraid terminal. This would likely be faster (no network bottleneck) but is more advanced so the scope for process errors goes up. And while we’re on the subject of large data migrations. Some people like to transfer the initial data without a parity drive assigned. This way the writes are much quicker. Then once the data has been copied, assign a parity drive and let it build parity for protection against disk failures. Just something for you to consider. Sent from my iPhone using Tapatalk
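      If you do go the unassigned devices route, the copy would be something along these lines, run from the unraid terminal. Source and destination are examples only - adjust to wherever UD mounts your disk and whichever share you're filling:
        rsync -avh --progress /mnt/disks/old_drive/ /mnt/user/media/
      Running it a second time after it finishes will only re-copy anything that changed or was missed; add --checksum on that second pass if you want a (slower) full content verification.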
  16. So is the VPN still up when the download drops to 0? I guess it must be if you can access the deluge web UI. You have tried other endpoints? Debug logs might reveal something, but I agree that this seems to be a problem outside the container, especially since the other container has the same problem. Sent from my iPhone using Tapatalk
  17. Maybe try the WireGuard option instead of OpenVPN? It’s working very well for me, none of these cipher problems. Although your problem seems unrelated if it actually connects successfully at first. Are your trackers blocking you? Have you run out of space on any disks that deluge is using? Sent from my iPhone using Tapatalk
  18. The cache is only used for writing to the array (if you’re not using VMs and dockers) so won’t give you any performance benefit for Kodi reading from the shares. From your use case I don’t think you need one. On another note, that motherboard has a Realtek NIC, which can cause problems with unRAID and is not recommended. Safer to use an Intel-based card. But you can add that later IF you have problems with the onboard NIC. Other than that I don’t see any problems with using your gear as a pure NAS. Sent from my iPhone using Tapatalk
  19. Did you add the VNC port to the Privoxy container? Sent from my iPhone using Tapatalk
  20. PIA offers three connection options:
      1. legacy servers via OpenVPN
      2. next-gen servers via OpenVPN
      3. next-gen servers via WireGuard
      Only 1 and 2 are currently supported by this container. And 2 is the recommended option. Support for 3 is being worked on. Sent from my iPhone using Tapatalk
  21. WireGuard is not (yet) supported by the container. Use the next-gen OpenVPN files instead. Sent from my iPhone using Tapatalk
  22. Looking very good here. Successfully acquired a port within 21 seconds of starting the new container. As far as I can see it only took one try, no re-tries. But then again I don't have debug logs on, so not sure if the re-tries would show?
  23. Very happy with the next-gen testing. Had to throttle the download bandwidth in deluge because downloading an Ubuntu ISO maxed out my internet connection!
  24. I was getting the same, but left it for a few minutes and it came good on its own. Getting great speeds finally from an endpoint on my own continent (au-sydney)! @binhex do you need anything specific from us guinea pigs?
  25. Port forwarding does not work with any nextgen servers. PIA does not support it (yet?). You have to use one of the supported “current gen” servers, and from there it’s hit-and-miss whether you get a port or not. Changing the server endpoints and/or restarting the container normally gets it going after a while. FWIW I just connected (also from AU) to DE Berlin with a working port forward and good speeds. Sent from my iPhone using Tapatalk