Posts posted by Jorgen
-
Confused, what traffic besides web traffic would you want to route through the tunnel? Or rather, what traffic would not be routed through the proxy? If I shut down my delugevpn container I can get to the other apps' web UIs, but they can't get out to anything else.
Hiding DNS lookups from your ISP would be one example of traffic that normally bypasses the proxy.
But the main use case is to use the VPN tunnel for docker apps that don’t support the use of a proxy at all. NZBget would be one example, but there are many more.
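For reference, here's a sketch of how that looks on the docker side: joining another container to the VPN container's network stack so all its traffic goes through the tunnel. Container and image names are assumptions, adjust to your setup.

# Route all of nzbget's traffic through the delugevpn container's network
docker run -d \
  --name=nzbget \
  --net=container:binhex-delugevpn \
  -v /mnt/user/appdata/nzbget:/config \
  binhex/arch-nzbget

Note that with --net=container the joined container can't publish its own ports; you'd map the extra web UI port on the VPN container instead.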
Sent from my iPhone using Tapatalk
-
Thanks for the info about my password... I erased all my dockers, so no port should be in use, I guess.
Try rebooting unraid to shake out any leftover port usage.
But what is this in your run command? --net=eth1
That looks non-standard to me, and might be part of the problem?
Sent from my iPhone using Tapatalk
-
53 minutes ago, themoose said:
Hi, I've been using the binhex/arch-delugevpn:test container for some time
Glad it's working for you again, but you probably shouldn't be using the :test tag anymore. It was only temporary for testing new functionality when it was first introduced.
Just removing ":test" from the repository field and saving the changes should get you back on the latest normal release of the container.
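If you'd rather do it from the terminal, something like this should pull the normal release (assuming the default image name):

docker pull binhex/arch-delugevpn:latest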
-
So I have tried everything and I still can't get my DelugeVPN back up and running with the next-gen PIA OpenVPN. I also can't find the list of servers which allow port forwarding. Has anyone been successful?
Lots of people are running successfully on the nextgen servers.
If you post your logs (remove username and password first) we should be able to help you get it working.
The logs also contain a list of all endpoints that support port forwarding.
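If it helps, here's one way to scrub credentials before posting. The log path and placeholder values are assumptions, substitute your own:

# Replace your PIA credentials with REDACTED before sharing the log
sed -e 's/yourPiaUser/REDACTED/g' -e 's/yourPiaPass/REDACTED/g' \
  /mnt/user/appdata/binhex-delugevpn/supervisord.log > /tmp/supervisord-clean.log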
Sent from my iPhone using Tapatalk
-
Gotcha! I'll turn on turbo-write for large transfers.
I recall reading that the virtio driver emulates a 10G NIC, but my motherboard only has a 1G NIC. Am I technically bottlenecking myself at 1G because of it?
Specifically, if I change the NIC to a 10G card (and network switch), would I transfer to my cache drive at 10G instead?
It won't help in this case. VM-to-unRAID share networking is not affected by the physical NIC speed; it uses a virtual network. You should already be able to write to the cache drive at full disk speed from the VM.
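You can verify this yourself with iperf3 if you're curious. A minimal sketch, assuming iperf3 is installed on both ends and the unRAID host is at 192.168.1.10:

# On the unRAID host, start a listener:
iperf3 -s
# From the VM, measure throughput to the host:
iperf3 -c 192.168.1.10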
Sent from my iPhone using Tapatalk
-
Just adding to your learnings: enabling turbo-write has no effect when you also disable the parity disk.
There is a good explanation in the wiki about how normal vs turbo-write parity calculation is achieved and why the latter is faster. It comes at the expense of needing all your drives spun up though.
I’m on my phone so won’t even attempt to find the article and link it, but I’m sure you can find it yourself if you want to dig deeper.
Sent from my iPhone using Tapatalk
-
@jwoolen sorry, I don't know what might be wrong in that case. Hopefully someone with better networking knowledge than me can chip in.
Sent from my iPhone using Tapatalk
-
2 hours ago, jwoolen said:
Anyone else having issues with DNS leaks since the changeover with PIA? My DNS settings are set to the PIA DNS servers: 209.222.18.218, 209.222.18.222. No matter what, it defaults to the one shown below.
I assume the screenshots are from a browser on your local PC and you have configured the browser to use Privoxy as the proxy server?
In that case the browser routes HTTP traffic via the proxy server and VPN tunnel. However, the browser will use the OS mechanism for DNS resolution (DNS is different to HTTP). Since your OS doesn't use the Privoxy proxy, it will fail the DNS leak test. The DNS servers you set in the container settings have no effect on the browser behaviour in this case. I believe they are only used by the container before the VPN tunnel is established, but maybe @binhex can confirm this?
When you are using the PIA app on the other hand, all internet traffic is routed via the VPN tunnel on an OS level, including DNS resolution. So DNS passes the leak test.
So how do you get the results you want? Two options that I know of that should work (but see note below):
1. If your browser supports it, set DNS resolution to use the HTTPS protocol. In Firefox this is called "Enable DNS over HTTPS" under the proxy configuration settings. I assume other browsers have something similar.
2. Enable SOCKS v5 proxy in Privoxy and set up your browser to use that. See here for details on how that works: https://stackoverflow.com/questions/33099569/how-does-sock-5-proxy-ing-of-dns-work-in-browsers
Now, I just tested both methods and could not get the browser to pass the DNS leak test for either. Not sure what I'm doing wrong but I'm not that worried about it as I use the PIA app on my PC anyway. But maybe this will point you in the right direction. Please report back if you try it and get it to work for you.
Actually, you might also be able to set up your OS to use Privoxy as the proxy, but I have not tested that at all.
Edit: looks like I need to use the FoxyProxy extension for Firefox to be able to pass the username/password when using SOCKS. Hopefully other browsers have better support for SOCKS...
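One browser-independent sanity check from any shell, assuming the default Privoxy port of 8118 and a host name of unraid-host:

# Compare your apparent IP with and without the proxy
curl https://ifconfig.co
curl -x http://unraid-host:8118 https://ifconfig.co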
-
Ok, had a look at earlier posts and I take it you will buy new 8TB drives. Assuming the 10TB spare has data on it, this is what I would do:
1. Assign the 8TB drives to the array and let unraid clear and format them. Do not assign a parity drive yet. And either don't assign a cache drive at all, or make sure your user share(s) are set up without cache as Trurl mentions above.
2. Mount the share on your Windows box.
3. Use TeraCopy to copy all data to the share(s). At this point you have two copies of your data: one on the original drives, one on the unraid share.
4. Add the spare 10TB drive to unraid and assign it as the parity drive. Unraid will start building parity. This will take quite some time, 1-2 days most likely. During this process any data that was previously on the spare 10TB drive will be wiped and only exist on unraid, but it will be unprotected as the parity has not been built yet. You need to decide if you are willing to live with this risk.
5. Once parity is built, you can delete any duplicate data from the non-spare 10TB drive that is still in your Win box.
6. Add or enable cache for the share etc.
7. Optionally add another parity drive if that was your plan.
If you are not comfortable with the risk in step 4, your only option is to add another (new) disk as parity at the start of the process to ensure the data is protected at all times. But it will slow down the data transfer.
-
So you have a spare 10TB drive you want to re-use for unraid, right?
Is it empty or does it have data on it?
Is that the only existing drive you will be re-using?
And how many other new drives (and what size) are you planning to buy?
Sent from my iPhone using Tapatalk
-
One other thing: has anyone got software they'd recommend for handling the transfer of data from my existing system drives to the NAS once it's built? Windows copy-paste is cool, but there's no way to check nothing got corrupted in the move, and it's not exactly ideal for very large amounts of data.
If you want to run the copy from your win box over the network, TeraCopy will let you verify file integrity and has other good features for large copy jobs: https://www.codesector.com/teracopy
You could also mount the drive in unraid using the unassigned devices plugin and use something like rsync from the unraid terminal. This would likely be faster (no network bottleneck) but is more advanced so the scope for process errors goes up.
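A minimal sketch of that route, assuming the old drive is mounted by Unassigned Devices at /mnt/disks/old_drive and the target share is called media:

# -a preserves attributes, -v and -h give readable progress output
rsync -avh --progress /mnt/disks/old_drive/ /mnt/user/media/
# Optional second pass comparing checksums to verify the copy
rsync -avhc --dry-run /mnt/disks/old_drive/ /mnt/user/media/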
And while we’re on the subject of large data migrations. Some people like to transfer the initial data without a parity drive assigned. This way the writes are much quicker. Then once the data has been copied, assign a parity drive and let it build parity for protection against disk failures. Just something for you to consider.
Sent from my iPhone using Tapatalk
-
Couldn't find much on the Deluge forums - only that people had success switching to qBittorrent. I switched to qBittorrent (by binhex) and I'm getting the same result. The torrents are added to the client, then almost immediately drop to 0 KiB/s after starting (upon container reboot). I'm at a loss here, not sure what else I can try. Keep in mind, everything worked fine just a few days ago.
So is the VPN still up when the download drops to 0? I guess it must be if you can access the deluge web UI.
You have tried other endpoints?
Debug logs might reveal something, but I agree that this seems to be a problem outside the container, especially since the other container has the same problem.
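For the debug logs: binhex containers take a DEBUG environment variable, so set it to true in the container template, restart, and watch the output while you reproduce the problem (container name assumed here):

docker logs -f binhex-qbittorrentvpn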
Sent from my iPhone using Tapatalk
-
This behavior is the same after each deluge reboot, no matter what I change, it seems. Any ideas?
Maybe try the WireGuard option instead of OpenVPN? It’s working very well for me, none of these cipher problems.
Although your problem seems unrelated if it actually connects successfully at first.
Are your trackers blocking you? Have you run out of space on any disks that deluge is using?
Sent from my iPhone using Tapatalk
-
Do I need an (SSD) disk to function as cache? I would want the Kodi client to be as fluid as possible (saw a lot of topics about Kodi hanging and stalling).
The cache is only used for writing to the array (if you’re not using VMs and dockers) so won’t give you any performance benefit for Kodi reading from the shares.
From your use case I don’t think you need one.
On another note, that motherboard has a Realtek NIC, which can cause problems with unRAID and is not recommended. Safer to use an Intel-based card. But you can add that later IF you have problems with the onboard NIC.
Other than that I don’t see any problems with using your gear as a pure NAS.
Sent from my iPhone using Tapatalk
-
Thanks for the reply guys. My container runs on the same IP as my Unraid server in bridge mode, so the VNC URL should be the same (if I'm not wrong).
Tried with VNC Viewer too with the correct ports and it's still not working
Did you add the VNC port to the Privoxy container?
Sent from my iPhone using Tapatalk
-
Did I misunderstand binhex's comment right above mine?
PIA offers three connection options:
1. legacy servers via OpenVPN
2. next-gen servers via OpenVPN
3. next-gen servers via WireGuard
Only 1 and 2 are currently supported by this container. And 2 is the recommended option.
Support for 3 is being worked on.
Sent from my iPhone using Tapatalk
-
I did also copy in a new ovpn file to use the new next-gen wireguard.
WireGuard is not (yet) supported by the container. Use the next-gen OpenVPN files instead.
Sent from my iPhone using Tapatalk
-
1 hour ago, binhex said:
Evening furry guinea pigs! :-), there will be a new test tagged image available in the next hour from now. This one includes a fix for the multiple retry issue due to login failure, so you should now get assigned an incoming port on first run for next-gen! Let me know how you get on.
Looking very good here. Successfully acquired a port within 21 seconds of starting the new container. As far as I can see it only took one try, no re-tries. But then again I don't have debug logs on, so not sure if the re-tries would show?
-
Very happy with the next-gen testing. Had to throttle the download bandwidth in deluge because downloading an Ubuntu ISO maxed out my internet connection!
-
30 minutes ago, MisterOrange said:
I am getting an error when it is trying to retrieve a token from PIA. "parse error: Invalid numeric literal at line 4, column 0", and it is unable to get the payload from PIA (bottom of the log). It then goes into a loop re-connecting to PIA and trying again.
I was getting the same, but left it for a few minutes and it came good on its own.
Getting great speeds finally from an endpoint on my own continent (au-sydney)!
@binhex do you need anything specific from us guinea pigs?
-
I couldn't get any 'port forwarding' server working here in AU with the latest PIA openvpn-nextgen files.
I went with wgstarks' suggestion for now until it's fixed.
Port forwarding does not work with any nextgen servers. PIA does not support it (yet?)
You have to use one of the supported “current gen” servers and from there it’s hit-and-miss whether you get a port or not. Changing the server endpoints and/or restarting the container normally gets it going after a while.
FWIW I just connected (also from AU) to DE Berlin with a working port forward and good speeds.
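And if you get tired of clicking through the GUI, the restart part is quick from the terminal (container name assumed):

docker restart binhex-delugevpn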
Sent from my iPhone using Tapatalk
-
Hmm, I don't actually read/write to the shares much from my Mac (Catalina), especially big files. But I just did a test and the speeds aren't as fast as I expected for me either. I'm getting 10-20MB/s write, and a bit less on reads (single large file).
There are some SMB config tweaks floating around on the forum specifically for Mac transfer speeds, I will look into to this to see if it makes a difference and report back.
In the meantime, have you enabled "Enhanced macOS interoperability" under Settings/SMB Settings in unraid? If not, do that and see if it helps (probably requires a restart of the array at least).
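One tweak that comes up a lot is stopping macOS from littering the shares with .DS_Store and AppleDouble (._*) files, via the Samba extra configuration field under Settings/SMB Settings. A sketch only, it may or may not help in your case:

[global]
veto files = /._*/.DS_Store/
delete veto files = yes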
-
I use macOS and my Unraid server is really, really slow, I mean a lot. Sometimes it goes down to 2-3 Mbps. What can I do? Does a cache drive fix this?
jonny-diagnostics-20200821-2143.zip
I haven’t looked at your diagnostics yet, but I think you need to give us a bit more info about the scenario.
- What version of Mac OS are you using?
- How are you connecting to the unRAID server? WiFi or ethernet? Are you sure your network is ok?
- Have you tested if you are getting the same or better speeds from a different computer/OS?
- How are you mounting the share on the Mac? I assume via SMB?
- What exactly are you doing when you see those speeds? E.g. copying one 10GB file from your Mac to the unRAID share, or copying lots of small files from unRAID to the Mac?
Sent from my iPhone using Tapatalk
-
Are you using iCloud to sync photos from your phone? In that case you can do what I do. I have a Mac VM on unRAID that runs for a few hours per week (scheduled starts and stops). On the VM I run Photos, which syncs with iCloud. The storage is a disk image on the unRAID array.
So any new photos/videos on my iPhone are immediately synced to iCloud, then synced to unRAID once a week.
I also run TimeMachine to a separate unRAID disk, which is probably overkill. But I’ve lost irreplaceable photos in the past from phones dying and really don’t want it happening again.
In the unlikely scenario that my phone dies AND Apple somehow loses all my iCloud storage at the same time, I'm now only losing a maximum of one week of recent photos. Which I'm OK with.
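For the scheduled starts and stops, the User Scripts plugin plus virsh works. A minimal sketch, assuming the VM is named MacOS in unRAID:

# Start script, scheduled e.g. weekly via User Scripts
virsh start "MacOS"
# Stop script a few hours later, graceful shutdown
virsh shutdown "MacOS"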
Sent from my iPhone using Tapatalk
-
Yeah me neither, but I’m out of my depth here. Hopefully someone more knowledgeable can jump in and help.
Sent from my iPhone using Tapatalk