Dynamix WireGuard VPN


bonienl


7 hours ago, bonienl said:

With Unraid, containers on a custom (macvlan) network may have either fixed or dynamic addresses.

To ensure that "any" container can be accessed by the host, I took the approach as described in the blog, and modified it to the needs for Unraid.

 

Instead of defining a subnet associated with the DHCP pool for containers, the complete macvlan subnet is split into two smaller subnets, e.g. one /24 becomes two /25s. These subnets are used to set up a "shim" network, which allows the host (Unraid) to access any container on the associated macvlan network.
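
In iproute2 terms, the shim idea boils down to something like the following sketch. The interface name, host address, and routes here are illustrative assumptions for the 10.0.101.0/24 example network, not necessarily Unraid's exact implementation:

# A macvlan parent (br0) cannot talk to its own child containers,
# so create a host-side macvlan "shim" on the same parent:
ip link add shim-br0 link br0 type macvlan mode bridge
ip link set shim-br0 up

# Give the host an address to source traffic from (example address):
ip addr add 10.0.101.1/32 dev shim-br0

# Route the two /25 halves via the shim; being more specific than
# the /24 connected route on br0, they steer all container-bound
# traffic through the shim interface:
ip route add 10.0.101.0/25 dev shim-br0
ip route add 10.0.101.128/25 dev shim-br0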

 

To make use of this feature, it is a simple matter of enabling it on the Docker settings page (a new setting, which defaults to "disabled").

 

[Screenshot: Docker settings page with the new host access setting]

 

Now I can ping the container "Pi-hole" with (fixed) address 10.0.101.100 on custom network br0:


root@vesta:/# ping 10.0.101.100
PING 10.0.101.100 (10.0.101.100) 56(84) bytes of data.
64 bytes from 10.0.101.100: icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from 10.0.101.100: icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from 10.0.101.100: icmp_seq=3 ttl=64 time=0.032 ms
64 bytes from 10.0.101.100: icmp_seq=4 ttl=64 time=0.020 ms

And I can ping the container "Tautulli" with (dynamic) address 10.0.101.128 on custom network br0:


root@vesta:/# ping 10.0.101.128
PING 10.0.101.128 (10.0.101.128) 56(84) bytes of data.
64 bytes from 10.0.101.128: icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from 10.0.101.128: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 10.0.101.128: icmp_seq=3 ttl=64 time=0.026 ms
64 bytes from 10.0.101.128: icmp_seq=4 ttl=64 time=0.024 ms

 

There is ONE CAVEAT ...

 

When remotely accessing a container on a custom network over a WireGuard tunnel, you MUST define a route on your router (gateway) which points back to the tunnel on the server, e.g. route 10.253.0.0/24 ==> 192.168.1.2 (the Unraid server).
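
On a Linux-based router this is a single static route, shown below with the example addresses above; most consumer routers expose the same thing as a "static route" form in their admin UI:

# send WireGuard tunnel traffic back to the Unraid server
ip route add 10.253.0.0/24 via 192.168.1.2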

 

This is required because it is not possible to use NAT between a custom network and a WireGuard tunnel: everything is handled internally on the server and never leaves the physical interface, so NAT never comes into the picture.

 

Awesome work mate :)


Hey guys, I have an issue accessing the Pi-hole container from outside my network over WireGuard. I believe this is a known issue, but I can't seem to find a definitive answer.

 

My Unraid server is 192.168.1.110.

Pi-hole is on custom network br0 with address 192.168.1.111.

No other containers use a custom br0.

 

When outside my network using WireGuard, I can access Unraid and all other containers, as well as all other devices on my LAN (router, Windows clients, etc.). However, I am unable to access Pi-hole. Can someone confirm whether this is a known issue and whether it will be fixed in a future release?

On 1/8/2020 at 6:22 PM, bonienl said:

With Unraid, containers on a custom (macvlan) network may have either fixed or dynamic addresses. [...] When remotely accessing a container on a custom network over a WireGuard tunnel, you MUST define a route on your router (gateway) which points back to the tunnel on the server. [...]

 

Thank you very much, this works for me too.


For the 3rd time now, I can no longer start WireGuard.
I deleted a peer, hit Apply, and now it will not start. There's no log output for WireGuard anywhere, so it's impossible to troubleshoot; I know this is by design. The only thing in the syslog is "Tunnel WireGuard-wg0 started", but refreshing the page reveals that it is not running.

Before I delete ALL of my peers again and start over, is there anything I should be looking at?

I don't like having to resend a new config every time this happens, but that's usually what I end up having to do.

11 minutes ago, bonienl said:

Open a terminal window and type:


wg-quick up wg0

This is assuming you want to start tunnel wg0. Check the output of the command above.


wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.253.0.1 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 10.253.0.6/32 dev wg0
[#] ip -4 route add 10.253.0.5/32 dev wg0
[#] ip -4 route add 10.253.0.4/32 dev wg0
[#] ip -4 route add 10.253.0.3/32 dev wg0
[#] ip -4 route add 10.253.0.2/32 dev wg0
[#] logger -t wireguard 'Tunnel WireGuard-wg0 started'
[#] iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE;iptables -N WIREGUARD_DROP_WG0;iptables -A WIREGUARD -o br0 -j WIREGUARD_DROP_WG0;iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.254 -j ACCEPT,iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.74 -j ACCEPT,iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.93 -j ACCEPT;iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -j DROP;iptables -A WIREGUARD_DROP_WG0 -j RETURN
iptables: Chain already exists.
[#] ip link delete dev wg0

 


It looks like the iptables chain is somehow already present?

Looking at the Routing Table in the network settings, I don't see it: [screenshot of routing table]

Listing iptables rules (iptables -L -v -n):


iptables -L -v -n
Chain INPUT (policy ACCEPT 32857 packets, 4938K bytes)
 pkts bytes target     prot opt in     out     source               destination         
58614   11M LIBVIRT_INP  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
6239K 6624M LIBVIRT_FWX  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
6239K 6624M LIBVIRT_FWI  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
6239K 6624M LIBVIRT_FWO  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
6239K 6624M DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
6239K 6624M DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
3432K 1589M ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
 1297 72041 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
2804K 5034M ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
  147 10731 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      br-0bee9d2a6b71  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-0bee9d2a6b71  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-0bee9d2a6b71 !br-0bee9d2a6b71  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-0bee9d2a6b71 br-0bee9d2a6b71  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      br-e5c922ad1d14  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-e5c922ad1d14  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-e5c922ad1d14 !br-e5c922ad1d14  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-e5c922ad1d14 br-e5c922ad1d14  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      pterodactyl0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      pterodactyl0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  pterodactyl0 !pterodactyl0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  pterodactyl0 pterodactyl0  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 29406 packets, 6627K bytes)
 pkts bytes target     prot opt in     out     source               destination         
52058   15M LIBVIRT_OUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (4 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.2           tcp dpt:80
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.3           tcp dpt:80
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.4           tcp dpt:3306
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.6           tcp dpt:9443
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.6           tcp dpt:22
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.6           tcp dpt:9080
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25575
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25574
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25573
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25572
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25571
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25570
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25569
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25568
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25567
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25566
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:25565
    0     0 ACCEPT     udp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           udp dpt:25565
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8443
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8126
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8125
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8124
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8123
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8122
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8121
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:8120
 1141 60854 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.8           tcp dpt:51413
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.8           tcp dpt:6881
    8   416 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.8           tcp dpt:80
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.9           tcp dpt:8181
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.9           tcp dpt:8080
    1    40 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.9           tcp dpt:4443

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
2804K 5034M DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  br-0bee9d2a6b71 !br-0bee9d2a6b71  0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  br-e5c922ad1d14 !br-e5c922ad1d14  0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  pterodactyl0 !pterodactyl0  0.0.0.0/0            0.0.0.0/0           
6239K 6624M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (4 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 DROP       all  --  *      br-0bee9d2a6b71  0.0.0.0/0            0.0.0.0/0           
    0     0 DROP       all  --  *      br-e5c922ad1d14  0.0.0.0/0            0.0.0.0/0           
    0     0 DROP       all  --  *      pterodactyl0  0.0.0.0/0            0.0.0.0/0           
2804K 5034M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
6239K 6624M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain LIBVIRT_FWI (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain LIBVIRT_FWO (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain LIBVIRT_FWX (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain LIBVIRT_INP (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain LIBVIRT_OUT (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain WIREGUARD_DROP_WG0 (0 references)
 pkts bytes target     prot opt in     out     source               destination   

 

Should I just delete the chain WIREGUARD_DROP_WG0?

 

6 minutes ago, bonienl said:

In the settings, disable NAT and remove the firewall rules. Starting and stopping the tunnel should remove the iptables entries.

 

Next, re-apply NAT and the firewall rules, and start the tunnel.

I disabled NAT, changed the firewall rules back to deny, and deleted all my IPs.
Hit Apply.
Afterward I checked iptables -L -v -n; the chain was still there.
Changed the rule back to allow, added my IPs back in, and hit Apply.
Tried to start the tunnel, which failed.
Checked iptables again and the chain was still there.

This is the latest version of the plugin on 6.8.1.

EDIT: I need to read better. Let me retry with a start and stop of the tunnel in between, haha.

EDIT2:

Okay, so that didn't work either. The chain remains with the tunnel stopped, so it vomits when trying to add the chain.

7 minutes ago, bonienl said:

Okay, then you need to delete it manually. Not sure why it isn't removed.
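
For anyone hitting the same state, a hand-removal along these lines should work; the chain and interface names are taken from the wg-quick output above, and the -D line may fail harmlessly since the listing shows the chain with 0 references:

# detach the chain from its parent (ignore an error if no reference exists)
iptables -D WIREGUARD -o br0 -j WIREGUARD_DROP_WG0 2>/dev/null
# flush the chain's rules, then delete the now-empty chain
iptables -F WIREGUARD_DROP_WG0
iptables -X WIREGUARD_DROP_WG0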

I think iptables is confused:


wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.253.0.1 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 10.253.0.6/32 dev wg0
[#] ip -4 route add 10.253.0.5/32 dev wg0
[#] ip -4 route add 10.253.0.4/32 dev wg0
[#] ip -4 route add 10.253.0.3/32 dev wg0
[#] ip -4 route add 10.253.0.2/32 dev wg0
[#] logger -t wireguard 'Tunnel WireGuard-wg0 started'
[#] iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE;iptables -N WIREGUARD_DROP_WG0;iptables -A WIREGUARD -o br0 -j WIREGUARD_DROP_WG0;iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.254 -j ACCEPT,iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.74 -j ACCEPT,iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.93 -j ACCEPT;iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -j DROP;iptables -A WIREGUARD_DROP_WG0 -j RETURN
iptables v1.8.4 (legacy): Cannot use -A with -A

Try `iptables -h' or 'iptables --help' for more information.
[#] ip link delete dev wg0



Was iptables updated in 6.8.1? That's a pretty silly error. For now I'll stop using the allow filter, since I think it's the source of my frustration.
50 minutes ago, bonienl said:

The syntax isn't right. I need to check.

I found the mistake:

iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE;iptables -N WIREGUARD_DROP_WG0;iptables -A WIREGUARD -o br0 -j WIREGUARD_DROP_WG0;iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.254 -j ACCEPT,iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.74 -j ACCEPT,iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.93 -j ACCEPT;iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -j DROP;iptables -A WIREGUARD_DROP_WG0 -j RETURN

The ACCEPT rules for the "Allow" filter are separated by a comma instead of a semicolon.
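
Written out as separate commands, with those two commas corrected to semicolon separators, the intended sequence is:

iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE
iptables -N WIREGUARD_DROP_WG0
iptables -A WIREGUARD -o br0 -j WIREGUARD_DROP_WG0
iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.254 -j ACCEPT
iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.74 -j ACCEPT
iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 192.168.1.93 -j ACCEPT
iptables -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -j DROP
iptables -A WIREGUARD_DROP_WG0 -j RETURN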
 

18 minutes ago, bonienl said:

Thanks, I made a correction; see version 2020.01.25.

Excellent, it is working as intended now.
I do wonder how this wasn't a problem before; perhaps I had set this up manually prior to it being a supported feature and just never touched it until now, haha.


I'm running into an issue where, after I edit a peer config (or add a peer) and hit the Save/Apply button, the WireGuard service just stops.
This has happened both times when I was doing this remotely while connected via WireGuard to my server.

 

Obviously I'd understand if the service needed to restart to apply the changes, but it never comes back up. I have to manually start it again when I'm back home.

 

When I make changes while I'm not connected over WireGuard, however, the service seems to stay up and running.

 

Is this a known issue?

On 1/28/2020 at 1:58 AM, xorinzor said:

I'm running into an issue where, after I edit a peer config (or add a peer) and hit the Save/Apply button, the WireGuard service just stops. [...] Is this a known issue?

This happens because of the way the Unraid webGUI works: your connection is interrupted when you make the change, so you can't send the next request to start the service. My suggestion would be to have a dedicated management profile on a separate tunnel; that way you keep a path for changing settings on the tunnel that actually does the work. Currently I have a Windows 10 VM running Chrome Remote Desktop in curtain mode that I use for making remote changes, but if you want to keep resource consumption low and still manage things over the VPN connection, a dedicated management profile on a different tunnel would be needed (see the sketch below). I'm not sure if it's possible to work around this on Unraid's side.
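
For illustration, a dedicated management-only tunnel in plain WireGuard syntax might look like the sketch below; the keys, addresses, and port are placeholders, not values from this thread:

# wg1.conf -- hypothetical management-only tunnel
[Interface]
PrivateKey = <server-private-key>
Address = 10.254.0.1/24
ListenPort = 51821          # a different UDP port than the main tunnel

[Peer]
# the admin device is the only peer on this tunnel
PublicKey = <admin-device-public-key>
AllowedIPs = 10.254.0.2/32

Leaving this tunnel untouched while editing wg0 keeps a working path back into the webGUI across any wg0 restart.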

1 minute ago, Xaero said:

This happens because of the way the Unraid webGUI works: your connection is interrupted when you make the change, so you can't send the next request to start the service. [...]

Ah, I wasn't aware that the "start" request was sent via the GUI as well. I assumed this was handled by the plugin on the backend.

I'm not sure how the plugin handles these requests, but perhaps it could send the current state along with any request made, and restore the original state if needed.


Possibly, but I'd still recommend a secondary dedicated management connection. If you make a configuration change that breaks things, you have to wait until you have physical access to the server to fix it. With a dedicated remote management option, you can still get in.

For example, in my case I have the remote desktop VM, which is only OK as long as it can access the internet. If I screw that up, I'm done. Except I have IPMI 2.0 KVM. That isn't accessible from outside my network, so I still need a way in. For that I have a Raspberry Pi with SSH and key-only authentication; I use it to create a tunnel when needed, which isn't often.
It's nice being able to reset stuff when I'm out of the state, haha.


Hello,

 

I was trying to add a peer to set up remote tunneled access, and now I can no longer access my Unraid server from within my LAN from my desktop or any other computer on the same network. I lost access right after adding 192.168.1.0/24 to the peer's allowed IPs (I entered it after the peer IP and a comma) and clicking Apply. The admin page hung and eventually showed an error that the page cannot be reached. I rebooted the Unraid server and still cannot access it from within the LAN from my desktop. I'm also seeing that the Unraid server can no longer access the internet.

 

I have been making regular backups of my server, so I was wondering if there is a way to restore settings or somehow get my server accessible again. Thanks for any help.

 

 


1 minute ago, jfs9112 said:

I was trying to add a peer to set up remote tunneled access, and now I can no longer access my Unraid server from within my LAN. I lost access right after adding 192.168.1.0/24 to the peer's allowed IPs. [...]

If you put the USB stick with the Unraid installation in your computer and edit the config file at /boot/config/wg0.cfg, you can remove the network from the peer's allowed IPs again to see if that fixes it.

 

I'm not sure why it would cause your admin panel to become unreachable, but either way, this gives you access to all the Unraid config files, as well as some log files.
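
A likely reason for the lockout: wg-quick installs a route for every entry in a peer's AllowedIPs, so adding 192.168.1.0/24 there makes the server route its own LAN through the tunnel. Assuming the plugin file uses standard WireGuard syntax, the edit would look something like this (key and addresses are placeholders):

[Peer]
PublicKey = <peer-public-key>
# before -- the LAN subnet is routed into the tunnel:
# AllowedIPs=10.253.0.2/32, 192.168.1.0/24
# after -- only the peer's tunnel address remains:
AllowedIPs=10.253.0.2/32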

6 minutes ago, xorinzor said:

If you put the USB stick with the Unraid installation in your computer and edit the config file at /boot/config/wg0.cfg, you can remove the network from the peer's allowed IPs again to see if that fixes it. [...]

You're a lifesaver, xorinzor!! I removed the network from the peer allowed IPs, rebooted, and now the server is accessible again. Thank you!!


I need to use port 8000 for Splunk. Is there a way I can change the UDP port WireGuard uses from 8000 to something else?

 

 

Edit: I figured out how to change the port used by Splunk. This fixed the issue, but it would still be nice to have an option in Unraid to adjust the UDP port used by WireGuard.
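
For reference, in plain WireGuard syntax the tunnel's UDP port comes from the ListenPort line in the [Interface] section; whether and where the Unraid GUI exposes this may depend on the plugin version:

[Interface]
# the UDP port the tunnel listens on; peers must point their
# Endpoint setting at this same port
ListenPort = 51820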
