bonienl

Dynamix WireGuard VPN


4 minutes ago, ljm42 said:

You can run the underlying wg commands if you want, but you can fully monitor everything right from the Unraid dashboard. Pretty cool.


That's probably a little off topic, but I use Check_MK to keep stats on all my Dockers/VMs/servers, so I do need the API if I want a nice chart of active tunnels and activity over 400 days. With the wg command I can easily write a local check in shell or bash.
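Such a local check could be sketched roughly like this (a hypothetical example, not an actual Check_MK plugin from this thread; the service name, the 3-minute threshold, and the install location are made up):

```shell
#!/bin/bash
# Hypothetical Check_MK local check: drop it into the agent's local/
# directory. It summarizes WireGuard peer handshakes reported by
# `wg show` into a single local-check line.

summarize() {
    # Input lines look like: <interface> <peer-public-key> <epoch-seconds>
    local now ts active=0 total=0
    now=$(date +%s)
    while read -r _iface _peer ts; do
        [ -z "$ts" ] && continue
        total=$((total + 1))
        # Count a peer as active if it completed a handshake within
        # the last 3 minutes (an arbitrary threshold for this sketch).
        if [ "$ts" -gt 0 ] && [ $((now - ts)) -lt 180 ]; then
            active=$((active + 1))
        fi
    done
    # Check_MK local check output: <state> <item> <perfdata> <summary>
    echo "0 WireGuard_Peers active=$active;;;0;$total $active of $total peers active"
}

wg show all latest-handshakes | summarize
```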

 

Edit: Just realized I need 6.8 to get this working; I'm still on 6.7 with over 200 days of uptime. I will wait a little bit but will definitely try it.

Edited by dnLL

16 hours ago, Korshakov said:

Would it be possible to add an inverted option? E.g. allow only the IPs specified and block everything else.

See updated version 2020.01.02


Is it intended behaviour that adding a new peer switches the WireGuard status to inactive or is this a bug?   

 

EDIT: On further investigation, this seems to only happen sometimes and I have not yet discerned a pattern.

4 hours ago, itimpi said:

Is it intended behaviour that adding a new peer switches the WireGuard status to inactive or is this a bug?   

A configuration change is made effective by deactivating the tunnel with the current (old) configuration and then reactivating it with the updated (new) configuration.

If a tunnel stays inactive, it usually indicates a configuration conflict.
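At the command level this cycle amounts to something like the following (a sketch assuming the tunnel is named wg0 and is managed with wg-quick; Unraid's actual service scripts may differ):

```shell
# Bounce the tunnel so an edited configuration takes effect.
wg-quick down wg0
wg-quick up wg0

# Confirm the tunnel and its peers came back up.
wg show wg0
```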

15 hours ago, pmcnano said:

@bonienl not sure if you missed my post on the previous page? WireGuard is crashing my server.

Please try with bonding disabled and only use bridging for eth0.

 

Your attached diagnostics are from AFTER rebooting the system; we need diagnostics from BEFORE you reboot, while the system is having the issue.

8 hours ago, bonienl said:

Please try with bonding disabled and only use bridging for eth0.

 

Your attached diagnostics are from AFTER rebooting the system; we need diagnostics from BEFORE you reboot, while the system is having the issue.

I will try that. Question though: how would I get the diagnostics beforehand if I can't access the server?

 

Thanks!

18 minutes ago, pmcnano said:

I will try that. Question though: how would I get the diagnostics beforehand if I can't access the server?

 

Thanks!

If the remote connection is completely lost, a solution can be to enable syslog mirroring to the flash device. See Settings -> Syslog Server.

This saves a copy of your syslog information on the flash device in the /logs folder.

If you can still SSH (or telnet) into your system, you can manually start diagnostics by typing

diagnostics

 

7 hours ago, bonienl said:

If the remote connection is completely lost, a solution can be to enable syslog mirroring to the flash device. See Settings -> Syslog Server.

This saves a copy of your syslog information on the flash device in the /logs folder.

If you can still SSH (or telnet) into your system, you can manually start diagnostics by typing


diagnostics

 

 

Hey, so I disabled my bond and just used eth0, with bridging enabled only for eth0 too. The same thing happened. I tried grabbing diagnostics, but it just said (https://share.getcloudapp.com/WnuE0Ljd):

Starting diagnostics collection...

I waited for 40 minutes and nothing, just more kernel panics in the logs. Here are the logs (pastebin, as it's pretty large; the same KP over and over again as far as I can tell): https://pastebin.com/gQd8P7Zt

 

Thanks!

 

Edit: I created a new diagnostics set from the console after the restart and it took 5 seconds, so yeah. Attaching it in case it helps with anything.

tower-diagnostics-20200105-1932.zip

Edited by pmcnano


First of all, thanks for the WireGuard GUI: creating a VPN has never been easier.

 

Like a lot of people here, I couldn't access my Docker containers on custom IP addresses using the default macvlan network that Unraid creates.

However, there seems to be a workaround. I found this blog post by Lars Kellogg-Stedman which describes the problem and a solution.

 

Instead of letting Unraid create the Docker network, create it yourself and use the --aux-address option.

Then create another macvlan interface to communicate with the containers.

 

This is what I did.

I deleted the network that the Unraid GUI made, then set up my Docker network with the following:

docker network create -d macvlan -o parent=br0 --subnet 192.168.1.0/24 --gateway 192.168.1.1 --ip-range 192.168.1.128/28 --aux-address 'host=192.168.1.223' mynet

Then I added the other macvlan interface and these IP routes. I also added them to the go file so they persist across reboots.

# Create a macvlan "shim" interface on top of br0
ip link add mynet-shim link br0 type macvlan mode bridge
# Assign it the host address reserved with --aux-address
ip addr add 192.168.1.223/32 dev mynet-shim
ip link set mynet-shim up
# Route the containers' ip-range through the shim interface
ip route add 192.168.1.128/28 dev mynet-shim

Now I can access all my dockers :)

Hope this helps people, and thanks to Lars for his blog.

Edited by Selmak


Interesting workaround. I'll have a look at how to integrate it with Unraid.

 


Does using WireGuard interfere with other VPNs like DelugeVPN (or vice versa)?

5 minutes ago, ozboss said:

Does using WireGuard interfere with other VPNs like DelugeVPN (or vice versa)?

Most VPN services run on OpenVPN's port 1194. WireGuard uses a different port (51820 by default) as well as a different protocol, so they should not interfere.


Hey, I have a problem...

WireGuard randomly stops working (disables itself) and I have to manually re-enable it. An additional problem is that adding multiple peers (8 users) seems to crash the VPN itself: I just get a blank screen, and neither that screen nor the tunnel ever changes; the tunnel never starts again. I can connect, but the handshake fails and keeps failing.

 

I have attached a screenshot.

Capture.PNG

kodeljevo-diagnostics-20200108-0910.zip


Your diagnostics are from AFTER a reboot and don't show anything that can help.

 

A couple of observations:

1. Try to correct the items which Fix Common Problems reports

2. You are using a single interface (eth0); disable the bond function and see if that makes a difference

3. Your interface eth0 is set to a speed of 100M while it supports 1000M. Check for cable issues or replace the cable
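Point 3 can be verified from the console with ethtool (assuming the standard ethtool utility is present on the system):

```shell
# Show the negotiated speed/duplex of eth0; "Speed: 100Mb/s" on a
# gigabit NIC usually points at a bad cable, port, or switch.
ethtool eth0 | grep -E 'Speed|Duplex|Link detected'
```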

 

As a test I created 10 peers on the same tunnel, which works fine for me.

In your case are all peers active at the same time?

 



The screenshot is from after the reboot. But I have found out that the "no tunnels showing" problem was down to a VM not showing them, so it's not a bug in WireGuard.

Only one user is active (me); the rest were just created. Even if I change just the peer name, the software disables itself.

I have changed the IP address to another subnet and I'll see if it helps, although this subnet should be perfectly fine.

23 minutes ago, gxs said:

Only one user is active (me); the rest were just created. Even if I change just the peer name, the software disables itself.

Any time you make a change, the WireGuard service is stopped and then restarted. I myself have noticed that it "sometimes" seems to leave the service inactive, but I have not found a pattern to reliably reproduce this for further investigation.

37 minutes ago, itimpi said:

Any time you make a change, the WireGuard service is stopped and then restarted. I myself have noticed that it "sometimes" seems to leave the service inactive, but I have not found a pattern to reliably reproduce this for further investigation.

That is normal, as it happens on all my servers. What happens only on this one server is that it stops but then doesn't restart.

The funny thing is that it's a barebones server, so there is no reason for it not to work.

What I did notice (half an hour ago), and will check whether it matters, is that I have bonding enabled and one of my links is not working due to faulty cables or a bad router (one port is down and another is running at 100 Mb/s). I wonder if that is the cause. I have ordered the cables to be changed and I'll see if it's a weird combination of bonding and wiring problems.

 

On 1/6/2020 at 7:34 AM, Selmak said:

However there seems to be a workaround.

With Unraid, containers may have either fixed or dynamic addresses when used on a custom (macvlan) network.

To ensure that "any" container can be accessed by the host, I took the approach described in the blog and modified it to the needs of Unraid.

 

Instead of defining a subnet associated with the DHCP pool for containers, the complete macvlan subnet is split into two smaller subnets, e.g. 1 x /24 becomes 2 x /25, and these subnets are used to set up a "shim" network which allows the host (Unraid) to access any container in the associated macvlan network.
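The split is plain subnet arithmetic, which Python's ipaddress module can illustrate (only an illustration of the math, not Unraid's implementation; the /24 network matches the ping examples below):

```python
import ipaddress

# Split the /24 macvlan subnet into two /25 halves, as described above.
net = ipaddress.ip_network("10.0.101.0/24")
lower, upper = net.subnets(prefixlen_diff=1)

print(lower)  # 10.0.101.0/25   -> e.g. fixed-address containers (Pi-hole at .100)
print(upper)  # 10.0.101.128/25 -> e.g. the dynamic pool (Tautulli got .128)
```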

 

To make use of this feature, it is a simple matter of enabling it in the Docker settings page (a new setting which defaults to "disabled").

 


 

Now I can ping the container "Pi-hole" with (fixed) address 10.0.101.100 on custom network br0

root@vesta:/# ping 10.0.101.100
PING 10.0.101.100 (10.0.101.100) 56(84) bytes of data.
64 bytes from 10.0.101.100: icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from 10.0.101.100: icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from 10.0.101.100: icmp_seq=3 ttl=64 time=0.032 ms
64 bytes from 10.0.101.100: icmp_seq=4 ttl=64 time=0.020 ms

And I can ping the container "Tautulli" with (dynamic) address 10.0.101.128 on custom network br0

root@vesta:/# ping 10.0.101.128
PING 10.0.101.128 (10.0.101.128) 56(84) bytes of data.
64 bytes from 10.0.101.128: icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from 10.0.101.128: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 10.0.101.128: icmp_seq=3 ttl=64 time=0.026 ms
64 bytes from 10.0.101.128: icmp_seq=4 ttl=64 time=0.024 ms

 

There is ONE CAVEAT ...

 

When remotely accessing a container on a custom network over a WireGuard tunnel, you MUST define a route on your router (gateway) which points back to the tunnel on the server. E.g. route 10.253.0.0/24 ==> 192.168.1.2 (Unraid server)

 

This is required because it is not possible to use NAT between a custom network and a WG tunnel: everything is handled internally on the server and never leaves the physical interface, so NAT is never in the picture here.
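On a Linux-based router the static route from the example above could be added like this (a sketch using the example addresses from this thread; most consumer routers expose the same thing through a "static routes" page in their UI):

```shell
# Send traffic for the WireGuard tunnel subnet to the Unraid server,
# which terminates the tunnel (example addresses, adjust to your LAN).
ip route add 10.253.0.0/24 via 192.168.1.2
```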

 


Awesome! Thanks.

 

P.S. Did you read my reply with new information some posts back? :(

2 minutes ago, pmcnano said:

Did you read my reply with new information some posts back?

Sorry I am a little lost here.

Can you give a description of your current situation? Are there still crashes?

14 minutes ago, bonienl said:

Sorry I am a little lost here.

Can you give a description of your current situation? Are there still crashes?

Here :)

 

53 minutes ago, bonienl said:

Instead of defining a subnet associated with the DHCP pool for containers, the complete macvlan subnet is split into two smaller subnets, e.g. 1 x /24 becomes 2 x /25, and these subnets are used to set up a "shim" network which allows the host (Unraid) to access any container in the associated macvlan network.

 

To make use of this feature, it is a simple matter of enabling it in the Docker settings page (a new setting which defaults to "disabled").

Wow this sounds great!

 

53 minutes ago, bonienl said:

There is ONE CAVEAT ...

 

When remotely accessing a container on a custom network over a WireGuard tunnel, you MUST define a route on your router (gateway) which points back to the tunnel on the server. E.g. route 10.253.0.0/24 ==> 192.168.1.2 (Unraid server)

 

This is required because it is not possible to use NAT between a custom network and a WG tunnel: everything is handled internally on the server and never leaves the physical interface, so NAT is never in the picture here.

 

Does "Local server uses NAT" have any effect on whether WG can access these Docker networks, or does it work regardless?

 

When "Local server uses NAT" is set to "No", the GUI tells you what static route you need to add to your router. I'm wondering if we should show a similar message when it is set to "Yes"? It isn't always required, but it would be helpful in this case where there are custom Docker networks.

 

 

 

5 minutes ago, ljm42 said:

Does "Local server uses NAT" have any effect on whether WG can access these Docker networks, or does it work regardless?

This setting has no effect when talking to custom (macvlan) networks, but it is used when talking to other devices on your LAN.

 

With NAT enabled, all other devices in your LAN "think" they are talking to the server instead of the WG tunnel and hence don't require additional routing.
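This kind of source NAT is typically implemented with an iptables masquerade rule; a sketch with the example tunnel subnet from earlier in the thread (not necessarily the exact rule Unraid generates):

```shell
# Rewrite the source address of packets leaving br0 that originated
# from the WG tunnel subnet, so LAN devices see the server's address
# and need no route back to 10.253.0.0/24.
iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE
```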

 

9 minutes ago, ljm42 said:

When "Local server uses NAT" is set to "No", the GUI tells you what static route you need to add to your router. I'm wondering if we should show a similar message when it is set to "Yes"?

Yes, I need to make that clearer on the WG configuration page. Not done yet.

