
[Support] binhex - PrivoxyVPN



Posted (edited)
26 minutes ago, binhex said:

ok let's check iptables isn't blocking, can you do the following:-

  1. start the container
  2. left click container and click on 'console'
  3. type 'iptables -S' and paste the result here.
[root@eff812cdf72f /]# iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 53 -j ACCEPT


Couple notes if they help:

1. I'm not running UNRAID; I could move this to github if it's more appropriate. This is `6.7.9-amd64` Debian on a generic desktop build, run via docker compose.

2. I'm using pihole as a DNS sink, and queries that aren't to my internal DNS over port 53 get blocked.

With 3.0.34-3-02 and before, this combo played well with `VPN_NAMESERVERS=84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1`, even though my router will only allow port 53 traffic to 10.2.10.10/10.2.10.20.

 

Seeing the iptables output had me thinking I could point NAME_SERVERS at my network's internal DNS. That worked better (VPN_NAMESERVERS=10.2.10.10,...): wg came up in the privoxyVPN container, but some services still don't reverse proxy as expected. I don't know what's different between the two containers; this also happens with your other images. I'd explicitly prefer not to use my internal DNS for any of this, if possible.

Edited by vocoder
5 minutes ago, vocoder said:

I'm using pihole as a DNS sink, and queries that aren't to my internal DNS get blocked.

Isn't this your issue then? Remove this block and you should then be able to use public NS.
 

6 minutes ago, vocoder said:

VPN_NAMESERVERS

What is this? VPN_NAMESERVERS is an env var not defined for any of my images, so it will do nothing; the correct name is NAME_SERVERS.
 

8 minutes ago, vocoder said:

I could use the internal DNS.

No you can't; I actively block internal DNS to prevent IP leakage.

Posted (edited)

1. I could do that, but like I said, it worked better in the old version (wish I knew what changed), so unfortunately I'm sticking with that for now. I have no desire to let clients on my network go to cloudflare directly to resolve DNS.

2. VPN_NAMESERVERS is just what I called NAME_SERVERS in my `docker-compose.yaml > .env`. Sorry for the confusion. The variable is not being transposed or declared wrong; it's just redeclared in my .env (NAME_SERVERS: ${VPN_NAMESERVERS}).
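To illustrate the indirection, my setup is roughly as follows (the values here are examples, not my actual servers):

```yaml
# .env (example values only)
# VPN_NAMESERVERS=84.200.69.80,1.1.1.1

# docker-compose.yaml
services:
  privoxyvpn:
    image: binhex/arch-privoxyvpn
    environment:
      # the image itself only reads NAME_SERVERS; VPN_NAMESERVERS is just
      # the name used in my .env to populate it
      NAME_SERVERS: ${VPN_NAMESERVERS}
```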

3. I'm not saying I know how this whole thing works; I'm trying to provide you useful debugging information. Before `:latest`, I was able to bring up any of your containers without changing my firewall rules.

 

Thanks again.

 

Edited by vocoder
4 minutes ago, vocoder said:

1. I could do that, but like I said, it worked better in the old version, so unfortunately I'm sticking with that for now. I have no desire to let clients on my network go to cloudflare to resolve DNS.

I have no idea how the previous version worked for you; if you are blocking port 53 then name resolution should only happen via your pihole, which will be blocked once the VPN tunnel is established.
 

6 minutes ago, vocoder said:

2. VPN_NAMESERVERS is just what I called NAME_SERVERS in my `docker-compose.yaml > .env`. Sorry for the confusion. The variable is not being transposed or declared wrong; it's just redeclared in my .env.

Ahh, fair enough!

7 minutes ago, vocoder said:

3. I'm not saying I know how this works, I'm trying to provide you useful debugging information.

I appreciate the info! I'm just letting you know that local name resolution will not work.
 

Posted (edited)
Quote

i have no idea how the previous version worked for you, if you are blocking port 53 then name resolution should only happen by your pihole, which will be blocked once the vpn tunnel is established.

I had this exact same thought, lol.

 

According to my internal DNS logs, the container was reaching out to my DNS servers properly in the previous build, but it cannot in this new one. Specifically, it could resolve `us-newjersey.privacy.network` and then move on. I know for a fact it doesn't use my DNS after that, because I internally block every DNS name I use over the VPN, yet all of that still works on the container network once wg is up.

 

My hypothesis on why it worked before: I assume that (in previous builds) the container initially resolved names using the host's base DNS settings, and only after setting up the iptables rules did it rewrite resolv.conf.

 

Now it might start with an already-modified resolv.conf and never be able to resolve, if that makes sense.
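A minimal sketch of the kind of rewrite I'm hypothesizing (this is illustrative shell, not the image's actual startup code; RESOLV defaults to a local file here instead of /etc/resolv.conf for safety, and the NAME_SERVERS values are examples):

```shell
#!/bin/sh
# Illustrative only: rewrite resolv.conf from a comma-separated
# NAME_SERVERS list, discarding whatever resolvers the host provided.
# In the real container this would target /etc/resolv.conf.
RESOLV="${RESOLV:-./resolv.conf}"
NAME_SERVERS="${NAME_SERVERS:-84.200.69.80,1.1.1.1}"

: > "$RESOLV"   # truncate: drop the host-supplied entries
for ns in $(printf '%s' "$NAME_SERVERS" | tr ',' ' '); do
    printf 'nameserver %s\n' "$ns" >> "$RESOLV"
done
```

If this ran before the tunnel came up, any lookup from that point on could only use the listed servers, which would match the behaviour I'm seeing.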

 

But it might be something totally different, hope that helps.

Edited by vocoder
7 minutes ago, vocoder said:

I assume (in previous builds) it knows where to go based on the host container base settings, then once it does the iptables or whatever, it rewrites the resolv.conf. My hypothesis on why it worked before.

That is an interesting idea and would explain it. I have had to be more aggressive with blocking due to this issue, and thus name resolution is now forced straight away and will only use the defined NAME_SERVERS (a good thing in my opinion), whereas before that code was further down the chain, so a name lookup could potentially happen using the host's defined name servers before the rewrite of resolv.conf. So yeah, I think this probably is the case; sadly for you, though, I will not be reversing this change due to the linked issue.

Posted (edited)

Any chance there could be an ENV var for a DNS server used only to resolve the WireGuard endpoint? I know that's asking a lot... Maybe resolve & cache, then flush & launch the container?

 

Is it even possible? Going to study that issue now.

 

Yep, that's most certainly what caused it. There has to be a way to cache DNS, though: with nslookup or dig you could resolve using whatever DNS you want, and once you're happy with the endpoint IP, pin it (e.g. cache the PIA VPN endpoint's IP in /etc/hosts); then, if the connection fails, remove it from /etc/hosts to force re-resolution.
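Something like this sketch is what I have in mind (a hypothetical helper, not anything the image ships: the endpoint name is from my config, the IP is a placeholder, and HOSTS_FILE defaults to a local file here rather than /etc/hosts):

```shell
#!/bin/sh
# Illustrative sketch: resolve the VPN endpoint once with any resolver you
# trust, pin the result before the tunnel comes up, and unpin it on failure
# so the next attempt re-resolves.
HOSTS_FILE="${HOSTS_FILE:-./hosts}"   # the real container would use /etc/hosts
ENDPOINT="us-newjersey.privacy.network"

pin_endpoint() {    # $1 = resolved IP; append a hosts-file entry
    printf '%s %s\n' "$1" "$ENDPOINT" >> "$HOSTS_FILE"
}

unpin_endpoint() {  # drop the pin to force re-resolution on retry
    sed -i "/ ${ENDPOINT}\$/d" "$HOSTS_FILE"
}

# A real startup flow (not run here) might be:
#   IP=$(dig +short "@10.2.10.10" "$ENDPOINT" | head -n1)
#   pin_endpoint "$IP"; bring wg up; on failure: unpin_endpoint
pin_endpoint "192.0.2.10"   # placeholder IP for demonstration
```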

 

But alas, I appreciate all of your continued efforts on this project, and well, it's not a simple issue and I understand your stance so if you can't fix it, thanks anyhow!

 

Edit #2: never mind, I see. Thinking now.

Edited by vocoder

Has something changed so that privoxy now requires that a port forwarding endpoint be selected? 

 

I am using PIA, using wireguard.  Yesterday my privoxy stopped working and the only indication was in the logs where it said my selected endpoint did not support port forwarding.  When I changed to an endpoint that supports port forwarding it worked again.

1 minute ago, mattekure said:

Has something changed so that privoxy now requires that a port forwarding endpoint be selected? 

 

I am using PIA, using wireguard.  Yesterday my privoxy stopped working and the only indication was in the logs where it said my selected endpoint did not support port forwarding.  When I changed to an endpoint that supports port forwarding it worked again.

Port forwarding support has been added to all VPN images; if you don't want to have to connect to a port-forward-enabled endpoint, then set STRICT_PORT_FORWARD to 'no'.
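For compose users, that setting might look like this (illustrative snippet):

```yaml
services:
  privoxyvpn:
    image: binhex/arch-privoxyvpn
    environment:
      # permit endpoints that do not support port forwarding
      - STRICT_PORT_FORWARD=no
```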

3 minutes ago, mattekure said:

excellent, thanks. I wasn't using port forwarding, so it didn't make sense that it stopped working all of a sudden. Appreciate the quick response.

No probs. If you are thinking 'why would I want port forwarding for a proxy server in any case?', the answer is that people do share networking from other containers with VPN-enabled containers such as this, and those other containers may require a working incoming port, e.g. a torrent client, soulseek, etc.

Posted (edited)

Well, food for thought: a docker healthcheck that determines whether WG is up could be made. In any case, I personally would love such a feature.

 

If you implemented this, a reasonable docker compose file would prevent the situation in the linked issue (any linked container service could declare, within docker-compose.yaml, `depends_on` with a `service_healthy` condition on arch-privoxyvpn), and perhaps there could be an override to let it work 'the old way', with the DNS caveat at the beginning, or some kind of adaptation to let it work?

 

In all honesty, I do this with almost every other container with dependencies, and it feels like the right way to go instead of a footrace. Docker does a great job of making sure containers respect their dependencies; it will start/restart/stop things if it needs to.

 

But again, I guess you'd be solving my issue and maybe some other edge cases. In reality, most people should have dependency chains for something like this anyway. The fact that it works without them is cool, but it leads to stuff starting with errors that refresh themselves away (e.g. .arr programs trying to connect to something too quickly).
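A sketch of what that could look like in docker-compose.yaml; the healthcheck command here is my assumption of how one could test for wg being up, not something the image ships, and the dependent service is hypothetical:

```yaml
services:
  privoxyvpn:
    image: binhex/arch-privoxyvpn
    healthcheck:
      # hypothetical check: report healthy once the wg0 interface exists
      test: ["CMD-SHELL", "ip link show wg0 || exit 1"]
      interval: 30s
      retries: 3

  downstream-app:   # hypothetical service sharing the VPN's network
    network_mode: "service:privoxyvpn"
    depends_on:
      privoxyvpn:
        condition: service_healthy
```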

Edited by vocoder
Posted (edited)
6 hours ago, vocoder said:

Well, food for thought: A docker healthcheck that determines if WG is up could be made. In any case, I personally would love such a feature.

 

If you implemented this, a reasonable docker compose file would prevent the situation in the linked issue (any linked container service could declare, within docker-compose.yaml, `depends_on` with a `service_healthy` condition on arch-privoxyvpn), and perhaps there could be an override to let it work 'the old way', with the DNS caveat at the beginning, or some kind of adaptation to let it work?

 

In all honesty, I do this with almost every other container with dependencies, and it feels like the right way to go instead of a footrace. Docker does a great job of making sure containers respect their dependencies; it will start/restart/stop things if it needs to.

 

But again, I guess you'd be solving my issue, and maybe some other edge cases. But in reality, most people should have dependency chains for something like this anyway. The fact that it works without it, is cool, but it leads to stuff starting with errors that refresh themselves away. (e.g. maybe .arr programs trying to connect to something too quickly)

You're not alone in this. I too block port 53 on my network, and I block any DoH/DoT servers I can, to force devices to use DNS through my pihole servers, which then make calls out to predefined DNS servers.

I also agree that having a LOCAL_NAME_SERVERS variable would be helpful, to do the initial resolution of whatever you need and then use NAME_SERVERS after the tunnel is created.

For now I added my local name servers to the NAME_SERVERS list, and while it takes longer to start the container, it seems to work.

Edited by ekalp
Additional thought.
1 minute ago, tazire said:

rtorrent

lol, it won't have; that is not maintained any more.

I'm aware of the issue, guys, and I think I have a fix; I shall post back when I have built a fixed image.


Ok guys, a new image has been produced. For anybody with the name resolution issue, please do the following:-
 

  1. log in to the unraid web ui
  2. go to docker tab
  3. toggle switch top right to 'advanced view'
  4. click on 'force update' for the container
  5. check logs for errors.
1 hour ago, binhex said:

lol, it won't have, that is not maintained any more.

i'm aware of the issue guys and i think i have a fix, i shall post back when i have built a fixed image.

 

Haha. And it's been rock solid for me. XD
