Dynamix WireGuard VPN


bonienl


21 minutes ago, bonienl said:

WireGuard works with "peers" and there is no such thing as server/client.

In other words your Unraid server can connect as a peer to any other system running WireGuard.

Yes, I am aware of this, but none of the available peer access types seemed to fit what I was looking for. However, I just compared the config I built myself with the one the plugin built using the "VPN tunneled access" type. Nothing major seems to be different, so I'm going with that.

 

Thanks for your help and for the plugin!

Link to comment
6 hours ago, bonienl said:

There is also an "import" function.

If you have a WireGuard configuration from a 3rd party (e.g. a VPN provider), you can simply import the settings, give it a name and it should be ready to go.

Oh man, I completely missed that!

 

Now to see if I can force my Docker traffic across the WireGuard interface.
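For anyone else who missed it: a third-party config is just a standard wg-quick file, so the import is basically a copy/paste of something like this (the keys, addresses and endpoint below are placeholders, not real provider values):

[Interface]
PrivateKey=<private key supplied by the provider>
Address=10.64.12.34/32
DNS=10.64.0.1

[Peer]
PublicKey=<provider public key>
AllowedIPs=0.0.0.0/0
Endpoint=vpn.example-provider.com:51820

Import that, give it a name, and the plugin fills in the rest.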

Link to comment

Hi Guys,

 

So I've found a solution to my issue with the pfSense router running as a VM on Unraid.

This, however, requires a NIC with at least 3 ports and passthrough properly enabled. Overall: I've moved Unraid onto a completely separate (physical) network.

 

I am pretty sure this solution applies to OpenVPN etc. as well.

 

Network Setup:

My pfSense VM has an Intel NIC with 4 ports.

Port 1 - WAN connection from the ISP

Port 2 - LAN1 (192.168.1.0/24), on this network I have pfSense (DHCP enabled)

Port 3 - LAN2 (10.0.0.0/24), on this network I have Unraid (DHCP disabled)

Port 4 - currently empty

 

VLAN 20 (DHCP Disabled) on LAN2 (10.0.20.0/24) - Used for Unraid Dockers

VLAN 30 (DHCP Disabled) on LAN2 (10.0.30.0/24) - Used for Unraid VMs

 

I literally have a short patch cable going from the Intel NIC to my Unraid motherboard.

 

pfSense - if you are NOT using NAT in WireGuard:

1. Settings to enable:

   1.1 - Go to: System -> Advanced -> Firewall & NAT tab

   1.2 - Check the "Static route filtering" checkbox.

2. Create a gateway with your Unraid server's static IP (in my case 10.0.0.10).

3. Create a static route:

    3.1 - Destination: the WireGuard tunnel network (by default 10.253.0.0/24)

    3.2 - Gateway: the one created in step 2 (10.0.0.10)

Issue to overcome: no internet access at all, although everything on the network is accessible and works (Dockers, VMs, everything).

The solution to that is to enable NAT in WireGuard (everything then works perfectly).
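For reference, enabling NAT in the plugin essentially means the tunnel subnet gets masqueraded on the outgoing interface; conceptually something like this on the Unraid side (just a sketch, assuming br0 is the LAN bridge and 10.253.0.0/24 the tunnel pool; the plugin generates its own rules, so yours may differ):

PostUp=iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE
PostDown=iptables -t nat -D POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE

With masquerading, pfSense only ever sees traffic coming from Unraid's own LAN IP, which is why the static route is no longer needed.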

 

If you don't want NAT enabled in WireGuard:

I think it is a matter of finding the right setting in pfSense to hook up the WireGuard network so it has access to the internet.

 

Unraid:

I've followed guides from posts written here in the past; credit for this goes to @Can0nfan and @craftsman when it comes to setting up Dockers and VMs.

 

Unraid IP is 10.0.0.10

Enable VLANs in Network Settings and set them up; in my case VLAN tag 20 (Dockers) and 30 (VMs).

Go to Docker settings and set up the VLAN network there; do not set a DHCP pool, leave it blank (see the sketch after these steps for what that network amounts to).

Go to VM settings and set up the VLAN network there.
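For reference, the custom Docker network Unraid creates for VLAN 20 above is essentially a macvlan network; doing the same by hand would look roughly like this (a sketch with my subnets; br0.20 as the parent interface and pfSense at 10.0.20.1 are assumptions about my setup, adjust to yours):

docker network create -d macvlan \
  --subnet=10.0.20.0/24 \
  --gateway=10.0.20.1 \
  -o parent=br0.20 br0.20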

 

 

NOW, I won't lie to you: there is a much easier way to get VPN access to your network, by simply using the built-in pfSense OpenVPN, and you don't have to go through any of these issues at all. But for the sake of experimenting I think this was fun, and it gives me a backup option if for some reason OpenVPN fails, because WireGuard is a completely different solution than OpenVPN and I want to avoid using L2TP with IPsec (I just don't like L2TP with IPsec, personal preference).

Edited by Korshakov
Link to comment
17 minutes ago, alael said:

One more problem,

I suppose you mean "Remote access to server"?

When this setting is chosen, it will generate a peer configuration with the tunnel address of Unraid as Allowed IP address.

This is on purpose and allows users to reach the server on its tunnel address, even when the peer side uses the same network subnet as Unraid.

(recommendation though is that both peers use different network subnets and avoid overlap)

 

21 minutes ago, alael said:

the GUI set the wrong "allowed IP"

This is working okay for me. What steps did you take?

Link to comment
6 minutes ago, alael said:

I just created a new test user and the allowed IP is only set to x.x.0.1, while the assigned client address ends with .3. This configuration is unusable.

Post a screenshot

 

Btw make sure you are running the latest version...

Edited by bonienl
Link to comment

I followed your screenshot which has "Remote tunneled access" set.

 

"remote server access'' does not exist, do you mean "Remote access to server"?

 

Btw it is not explicitly stated or denied in the GUI, but a peer set up with "Remote tunneled access" must run in its own tunnel and cannot be shared with peers using a different type of access.

 

Sorry, I mixed that up with "VPN tunneled access", which is the one that needs to be on its own tunnel.

Edited by bonienl
Link to comment

I created a new tunnel, then added one new peer with "Remote access to server" (default) and applied the configuration.

Next, I downloaded the zip file for the Unraid configuration and the zip for the peer configuration.

 

Both zip files are correct.

 

Note: each new tunnel will use a new network pool to assign addresses for the peers.

Tunnel 0 = 10.253.0.0/24

Tunnel 1 = 10.253.1.0/24

etc

Link to comment

The config in the GUI shows the Unraid configuration.

 

Unraid = 10.253.0.1

Peer = 10.253.0.4 (=allowed IP)

 

This means for the peer configuration:

Peer = 10.253.0.4

Unraid = 10.253.0.1 (=allowed IP)

 

The zip file configuration is correct.

 

At the peer side, you need to reach the server on address 10.253.0.1
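Spelled out, the peer's config file ends up along these lines (keys masked, the endpoint and port are placeholders for your own WAN address and forwarded port):

[Interface]
#Peer
PrivateKey=****
Address=10.253.0.4

[Peer]
#Unraid server
PublicKey=****
Endpoint=<your WAN address or DDNS name>:51820
AllowedIPs=10.253.0.1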

Edited by bonienl
Link to comment

The generated configurations are correct; I can't explain your situation.

My own tests confirm your situation happens when the peer and server are in the same network.

 

It doesn't make sense to set the AllowedIPs address to the local tunnel address, because WireGuard must add a route for reaching the remote side.

WG does this using the AllowedIPs address(es), which need to be the tunnel address of the remote Unraid server.

Here is an example of my routing table with 2 active WG peers

root@vesta:/tmp# ip -4 route
default via 10.0.101.1 dev br0
10.0.101.0/24 dev br0 proto kernel scope link src 10.0.101.5
10.253.0.2 dev wg0 scope link
10.253.0.3 dev wg0 scope link

Unraid itself is 10.253.0.1

 

For completeness, here is the WG configuration:

[Interface]
#Main server wg0
PrivateKey=****
Address=10.253.0.1
ListenPort=51821
PostUp=logger -t wireguard 'Tunnel WireGuard-wg0 started'
PostDown=logger -t wireguard 'Tunnel WireGuard-wg0 stopped'

[Peer]
#Apple iPad
PublicKey=****
AllowedIPs=10.253.0.2

[Peer]
#Oppo phone
PublicKey=****
AllowedIPs=10.253.0.3
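You can verify what is actually loaded with the wg command on the console; the allowed IPs it lists per peer are exactly what ends up in the routing table above (keys masked, output illustrative):

root@vesta:/tmp# wg show wg0 allowed-ips
****    10.253.0.2/32
****    10.253.0.3/32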

 

Edited by bonienl
Link to comment

Today's update adds a local tunnel firewall function.

This allows the user to specify one or more IP addresses which will be blocked from remote access over the WireGuard tunnel.

This can be useful when "Remote access to LAN" or "Remote tunneled access" is configured, and you want certain systems/networks to be inaccessible (protected).
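Conceptually a blocked address amounts to a drop rule on the tunnel interface, in the spirit of the line below (a sketch only, not necessarily the exact rule the plugin generates; 10.0.101.50 is just an example LAN host to protect):

iptables -I FORWARD -i wg0 -d 10.0.101.50 -j DROP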

Support for IPv6 is further enhanced in this version too.

 

Link to comment
On 10/17/2019 at 8:53 PM, Hoopster said:

Thanks, this worked beautifully for me. 

 

As with my local LAN, I want WireGuard-connected clients to go through Pi-hole as my DNS. I am running Pi-hole on a Raspberry Pi on the local LAN, and setting its IP address as the DNS in the peer configurations (as mentioned above) and regenerating the QR code works.
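For anyone wondering where that goes: it is simply the DNS line in the peer's [Interface] section, e.g. (192.168.1.2 here is an example address, use your own Pi-hole's LAN IP):

[Interface]
PrivateKey=****
Address=10.253.0.2
DNS=192.168.1.2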

 

Now the only problem left to resolve (there are posts on this already in these forums that I just need to read) is access via WireGuard to docker container webUIs that have IP addresses on a VLAN different from the unRAID server LAN subnet.

Yeah, regenerating the QR code and setting the peer to "Remote access to LAN" resolved my issues with not being able to reach the Dockers for some reason.

Link to comment
On 12/30/2019 at 5:38 PM, bonienl said:

Today's update adds a local tunnel firewall function.

This allows the user to specify one or more IP addresses which will be blocked from remote access over the WireGuard tunnel.

This can be useful when "Remote access to LAN" or "Remote tunneled access" is configured, and you want certain systems/networks to be inaccessible (protected).

Support for IPv6 is further enhanced in this version too.

 

Would it be possible to add an inverted option? E.g. allow only the IPs specified and block everything else.
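In the meantime I guess it could be approximated by hand with PostUp/PostDown rules, accepting the few hosts that should be reachable and dropping the rest, something along these lines (untested sketch, 192.168.1.10 standing in for the one host I'd allow):

PostUp=iptables -A FORWARD -i wg0 -d 192.168.1.10 -j ACCEPT; iptables -A FORWARD -i wg0 -j DROP
PostDown=iptables -D FORWARD -i wg0 -d 192.168.1.10 -j ACCEPT; iptables -D FORWARD -i wg0 -j DROP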

 

Link to comment

For this to work you do have to forward one port from your router directly to your unRAID server. It's been established in the past that it's better not to forward any port to your unRAID server because unRAID was never really built around security. Did I misunderstand something or did something change?

 

Aside from performance (and assuming this is not an issue), what are the advantages of using WireGuard instead of a dedicated Docker/VM with OpenVPN/SoftEther, where you can get better isolation from your unRAID server?

 

Also, I'm no network expert, but I have a SoftEther server installed in a Windows VM and never had to add any rule to my router besides forwarding 1 port to that VM, and my Dockers are perfectly accessible from the VPN. My router is still the DHCP server within that VPN tunnel, so the peripheral just behaves like anything else on the local network (which is, from my perspective, what people usually try to achieve with a VPN).

Edited by dnLL
Link to comment
59 minutes ago, dnLL said:

For this to work you do have to forward one port from your router directly to your unRAID server. It's been established in the past that it's better not to forward any port to your unRAID server because unRAID was never really built around security. Did I misunderstand something or did something change?

 

Aside from performance (and assuming this is not an issue), what are the advantages of using WireGuard instead of a dedicated Docker/VM with OpenVPN/SoftEther, where you can get better isolation from your unRAID server?

 

Also, I'm no network expert, but I have a SoftEther server installed in a Windows VM and never had to add any rule to my router besides forwarding 1 port to that VM, and my Dockers are perfectly accessible from the VPN. My router is still the DHCP server within that VPN tunnel, so the peripheral just behaves like anything else on the local network (which is, from my perspective, what people usually try to achieve with a VPN).

Forwarding a port to WireGuard means you have to trust WireGuard; the underlying OS doesn't really matter. You probably shouldn't forward other ports to Unraid, but once you have VPN there is less of a need for that anyway.

 

WireGuard is in the base OS so that it will start when the server boots, whether or not the array is started. You can start and stop dockers and VMs, start and stop the array, and adjust most of Unraid's config (other than the network), all while connected over VPN.

 

If you are happy with your current solution that is great! Nobody is saying you have to switch. If you do try it out I think you will be amazed at how quickly your remote devices make a connection as compared to other solutions - it is almost instantaneous. And the throughput will likely be better as well.

Link to comment
25 minutes ago, ljm42 said:

If you are happy with your current solution that is great! Nobody is saying you have to switch. If you do try it out I think you will be amazed at how quickly your remote devices make a connection as compared to other solutions - it is almost instantaneous. And the throughput will likely be better as well.

Seems like there is extra setup required to properly connect to Dockers from the VPN with WireGuard, which just shows how clueless I am about networking in general, since I don't understand exactly how it differs from my current setup.

 

You are right though, I should give it a try. I assume there is an API that comes with Wireguard to monitor who is connected or the number of connections? I will give it a look.

Link to comment

Hello, I started testing this yesterday. I added 3 different tunnels. I have 2 WANs, so:

- 1 "Remote access to LAN" through WAN1

- 1 "Remote access to LAN" through WAN2

- 1 "Remote tunneled access" through WAN1

 

I can connect and everything works perfectly fine, but it looks like when I disconnect and reconnect, or just connect with another tunnel, my server's network dies. I can't ping it anymore and there are kernel panic traces in the logs:

 

Jan 1 21:09:34 Tower kernel: ------------[ cut here ]------------
Jan 1 21:09:34 Tower kernel: NETDEV WATCHDOG: eth1 (igb): transmit queue 1 timed out
Jan 1 21:09:34 Tower kernel: WARNING: CPU: 23 PID: 0 at net/sched/sch_generic.c:465 dev_watchdog+0x161/0x1bb
Jan 1 21:09:34 Tower kernel: Modules linked in: wireguard ip6_udp_tunnel udp_tunnel nvidia_uvm(O) arc4 ecb md4 sha512_ssse3 sha512_generic cmac cifs ccm xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 iptable_mangle ip6table_filter ip6_tables xt_nat vhost_net tun vhost tap veth ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables xfs md_mod nct6775 hwmon_vid k10temp bonding igb i2c_algo_bit edac_mce_amd kvm_amd nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) ipmi_ssif kvm crct10dif_pclmul rsnvme(PO) crc32_pclmul crc32c_intel drm_kms_helper ghash_clmulni_intel drm pcbc aesni_intel agpgart wmi_bmof syscopyarea aes_x86_64 crypto_simd cryptd ahci sysfillrect sysimgblt ccp sr_mod pcc_cpufreq fb_sys_fops i2c_piix4 glue_helper libahci button ipmi_si cdrom acpi_cpufreq nvme i2c_core nvme_core
Jan 1 21:09:34 Tower kernel: wmi [last unloaded: i2c_algo_bit]
Jan 1 21:09:34 Tower kernel: CPU: 23 PID: 0 Comm: swapper/23 Tainted: P O 4.19.88-Unraid #1
Jan 1 21:09:34 Tower kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X470D4U, BIOS P3.20 08/12/2019
Jan 1 21:09:34 Tower kernel: RIP: 0010:dev_watchdog+0x161/0x1bb
Jan 1 21:09:34 Tower kernel: Code: 71 94 00 00 75 39 48 89 ef c6 05 38 71 94 00 01 e8 85 a8 fd ff 44 89 e9 48 89 ee 48 c7 c7 49 1f da 81 48 89 c2 e8 8f 1b af ff <0f> 0b eb 11 41 ff c5 48 81 c2 40 01 00 00 41 39 cd 75 95 eb 13 48
Jan 1 21:09:34 Tower kernel: RSP: 0018:ffff888ffebc3ea0 EFLAGS: 00010286
Jan 1 21:09:34 Tower kernel: RAX: 0000000000000000 RBX: ffff888ff679c438 RCX: 0000000000000007
Jan 1 21:09:34 Tower kernel: RDX: 00000000000005be RSI: 0000000000000002 RDI: ffff888ffebd64f0
Jan 1 21:09:34 Tower kernel: RBP: ffff888ff679c000 R08: 0000000000000003 R09: 000000000001cd00
Jan 1 21:09:34 Tower kernel: R10: 0000000000000000 R11: 0000000000000058 R12: ffff888ff679c41c
Jan 1 21:09:34 Tower kernel: R13: 0000000000000001 R14: ffff888ff39ee940 R15: 0000000000000017
Jan 1 21:09:34 Tower kernel: FS: 0000000000000000(0000) GS:ffff888ffebc0000(0000) knlGS:0000000000000000
Jan 1 21:09:34 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 1 21:09:34 Tower kernel: CR2: 00007ffbffff8fc8 CR3: 0000000f93166000 CR4: 0000000000340ee0
Jan 1 21:09:34 Tower kernel: Call Trace:
Jan 1 21:09:34 Tower kernel: <IRQ>
Jan 1 21:09:34 Tower kernel: call_timer_fn+0x18/0x7b
Jan 1 21:09:34 Tower kernel: ? qdisc_reset+0xc0/0xc0
Jan 1 21:09:34 Tower kernel: expire_timers+0x7e/0x8d
Jan 1 21:09:34 Tower kernel: run_timer_softirq+0x72/0x120
Jan 1 21:09:34 Tower kernel: ? enqueue_hrtimer.isra.0+0x23/0x27
Jan 1 21:09:34 Tower kernel: ? __hrtimer_run_queues+0xdd/0x10b
Jan 1 21:09:34 Tower kernel: ? ktime_get+0x44/0x95
Jan 1 21:09:34 Tower kernel: __do_softirq+0xc9/0x1d7
Jan 1 21:09:34 Tower kernel: irq_exit+0x5e/0x9d
Jan 1 21:09:34 Tower kernel: smp_apic_timer_interrupt+0x80/0x93
Jan 1 21:09:34 Tower kernel: apic_timer_interrupt+0xf/0x20
Jan 1 21:09:34 Tower kernel: </IRQ>
Jan 1 21:09:34 Tower kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141
Jan 1 21:09:34 Tower kernel: Code: ff 45 84 f6 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 e0 99 bb ff fb 66 0f 1f 44 00 00 <48> 2b 2c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cd
Jan 1 21:09:34 Tower kernel: RSP: 0018:ffffc90006423e98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Jan 1 21:09:34 Tower kernel: RAX: ffff888ffebdfac0 RBX: ffff888ff1fd6400 RCX: 000000000000001f
Jan 1 21:09:34 Tower kernel: RDX: 0000000000000000 RSI: 0000000021bf5be5 RDI: 0000000000000000
Jan 1 21:09:34 Tower kernel: RBP: 000038ac8d05b99b R08: 000038ac8d05b99b R09: 00000000000003b2
Jan 1 21:09:34 Tower kernel: R10: 00000000000ac1b8 R11: 071c71c71c71c71c R12: 0000000000000001
Jan 1 21:09:34 Tower kernel: R13: ffffffff81e5e260 R14: 0000000000000000 R15: ffffffff81e5e2d8
Jan 1 21:09:34 Tower kernel: ? cpuidle_enter_state+0xbf/0x141
Jan 1 21:09:34 Tower kernel: do_idle+0x17e/0x1fc
Jan 1 21:09:34 Tower kernel: cpu_startup_entry+0x6a/0x6c
Jan 1 21:09:34 Tower kernel: start_secondary+0x197/0x1b2
Jan 1 21:09:34 Tower kernel: secondary_startup_64+0xa4/0xb0
Jan 1 21:09:34 Tower kernel: ---[ end trace 47c27e2823999dc7 ]---
Jan 1 21:09:34 Tower kernel: igb 0000:24:00.0 eth1: Reset adapter

 

If I wait a bit, it ends up resetting the adapter and I get a ping response again; however, I can't access the UI or anything, and there's another kernel panic in the logs immediately.

 

Jan 1 21:09:34 Tower kernel: igb 0000:24:00.0 eth1: Reset adapter
Jan 1 21:09:35 Tower kernel: igb 0000:24:00.0 eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Jan 1 21:09:37 Tower kernel: igb 0000:24:00.0: exceed max 2 second
Jan 1 21:10:25 Tower kernel: rcu: INFO: rcu_sched self-detected stall on CPU
Jan 1 21:10:25 Tower kernel: rcu: 1-....: (60028 ticks this GP) idle=3d2/1/0x4000000000000002 softirq=7551290/7551290 fqs=14516
Jan 1 21:10:25 Tower kernel: rcu: (t=60000 jiffies g=22102957 q=73124)
Jan 1 21:10:25 Tower kernel: NMI backtrace for cpu 1
Jan 1 21:10:25 Tower kernel: CPU: 1 PID: 8154 Comm: kworker/1:2 Tainted: P W O 4.19.88-Unraid #1
Jan 1 21:10:25 Tower kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X470D4U, BIOS P3.20 08/12/2019
Jan 1 21:10:25 Tower kernel: Workqueue: wg-kex-wg2 wg_packet_handshake_receive_worker [wireguard]
Jan 1 21:10:25 Tower kernel: Call Trace:
Jan 1 21:10:25 Tower kernel: <IRQ>
Jan 1 21:10:25 Tower kernel: dump_stack+0x67/0x83
Jan 1 21:10:25 Tower kernel: nmi_cpu_backtrace+0x71/0x83
Jan 1 21:10:25 Tower kernel: ? lapic_can_unplug_cpu+0x97/0x97
Jan 1 21:10:25 Tower kernel: nmi_trigger_cpumask_backtrace+0x57/0xd4
Jan 1 21:10:25 Tower kernel: rcu_dump_cpu_stacks+0x8b/0xb4
Jan 1 21:10:25 Tower kernel: rcu_check_callbacks+0x296/0x5a0
Jan 1 21:10:25 Tower kernel: update_process_times+0x24/0x47
Jan 1 21:10:25 Tower kernel: tick_sched_timer+0x36/0x64
Jan 1 21:10:25 Tower kernel: __hrtimer_run_queues+0xb7/0x10b
Jan 1 21:10:25 Tower kernel: ? tick_sched_handle.isra.0+0x2f/0x2f
Jan 1 21:10:25 Tower kernel: hrtimer_interrupt+0xf4/0x20e
Jan 1 21:10:25 Tower kernel: smp_apic_timer_interrupt+0x7b/0x93
Jan 1 21:10:25 Tower kernel: apic_timer_interrupt+0xf/0x20
Jan 1 21:10:25 Tower kernel: </IRQ>
Jan 1 21:10:25 Tower kernel: RIP: 0010:get_random_u32+0xd/0x89
Jan 1 21:10:25 Tower kernel: Code: 50 01 89 53 40 48 8b 04 c3 48 89 04 24 e8 0b 7d 28 00 48 8b 04 24 5a 5b 5d 41 5c c3 c3 0f 1f 44 00 00 ba 0a 00 00 00 0f c7 f0 <72> 79 ff ca 75 f7 41 54 48 c7 c2 b0 9f 29 82 48 c7 c7 80 66 c6 81
Jan 1 21:10:25 Tower kernel: RSP: 0018:ffffc9000d4cbd28 EFLAGS: 00000203 ORIG_RAX: ffffffffffffff13
Jan 1 21:10:25 Tower kernel: RAX: 00000000ffffffff RBX: ffff8889c7b7b2c8 RCX: 0000000000000000
Jan 1 21:10:25 Tower kernel: RDX: 000000000000000a RSI: 00000000fffffe01 RDI: ffffffffa02a1f38
Jan 1 21:10:25 Tower kernel: RBP: ffff8889bb300000 R08: ffffffffa02c89e0 R09: ffffc9000d4cbd18
Jan 1 21:10:25 Tower kernel: R10: ffffc9000d4cbe18 R11: 007e315c0363396c R12: ffff8889bb310000
Jan 1 21:10:25 Tower kernel: R13: ffff8889c7b7b2d0 R14: ffffc9000d4cbd68 R15: ffff8889c7b7b3a0
Jan 1 21:10:25 Tower kernel: ? wg_index_hashtable_insert+0x48/0x100 [wireguard]
Jan 1 21:10:25 Tower kernel: wg_index_hashtable_insert+0x58/0x100 [wireguard]
Jan 1 21:10:25 Tower kernel: wg_noise_handshake_create_response+0x23e/0x260 [wireguard]
Jan 1 21:10:25 Tower kernel: wg_packet_send_handshake_response+0x3f/0xd0 [wireguard]
Jan 1 21:10:25 Tower kernel: wg_packet_handshake_receive_worker+0x93/0x290 [wireguard]
Jan 1 21:10:25 Tower kernel: process_one_work+0x16e/0x24f
Jan 1 21:10:25 Tower kernel: worker_thread+0x1e2/0x2b8
Jan 1 21:10:25 Tower kernel: ? rescuer_thread+0x29e/0x29e
Jan 1 21:10:25 Tower kernel: kthread+0x10c/0x114
Jan 1 21:10:25 Tower kernel: ? kthread_park+0x89/0x89
Jan 1 21:10:25 Tower kernel: ret_from_fork+0x22/0x40

 

I know the build is not stable, but I just wanted to know whether it's just that, or whether there is something else going on.

 

Also attaching my diagnostics.

 

Thank you.

tower-diagnostics-20200101-2117.zip

Link to comment
1 hour ago, dnLL said:

You are right though, I should give it a try. I assume there is an API that comes with Wireguard to monitor who is connected or the number of connections? I will give it a look.

You can run the underlying wg commands if you want, but you can fully monitor everything right from the Unraid dashboard. Pretty cool.
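If you do want to script something, the underlying wg command already exposes the connection state in a parseable form, for example (peer keys masked, numbers illustrative):

# wg show wg0 latest-handshakes
****    1577922574
# wg show wg0 transfer
****    1234567    7654321

latest-handshakes prints a Unix timestamp of the last handshake per peer, and transfer prints bytes received/sent per peer.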

(screenshot: WireGuard tunnel status on the Unraid dashboard)

Link to comment
