Setting up an unraid VM for LAN-Only access


Dav3


Hi,

 

I'm stuck setting up unraid and am getting no help on the forums.  I've been working on this problem for over a month now.  I've also searched extensively for days and can't find answers, so I'm hoping to get some attention in the pre-sales support section:

 

I'm trying to create a 'software-assured' environment where my proprietary IP can't leave the LAN.  So I've set up a Windows VM in unraid and am trying to firewall its networking to allow access inside the LAN but block access to the WAN.

 

I tried installing pfSense into a VM following Spaceinvader One's instructions, but when I try to stub either NIC (one was purchased for this purpose) it breaks unraid's access to the WAN for some reason. (?)

 

So I've turned to trying to use unraid's built-in iptables support to firewall the VM since it seems ideally suited to this simple task.  My plan is to add a bash script that's called by unraid's /boot/config/go script to add a few simple firewall rules.  This script currently is:

 

#!/bin/bash
# Allow only LAN traffic on eth1
iptables -A OUTPUT -o eth1 -d 192.168.1.0/24 -j ACCEPT
iptables -A OUTPUT -o eth1 -j REJECT
iptables -A INPUT -i eth1 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -i eth1 -j REJECT
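
For reference, the way I'm hooking this in is roughly as follows (firewall.sh is just the name I gave the script above, and the emhttp line is the stock one that should already be in the go file, so treat this as a sketch of my setup rather than gospel):

#!/bin/bash
# /boot/config/go (excerpt)
# stock line that starts the Unraid management interface
/usr/local/sbin/emhttp &
# my addition: apply the firewall rules above at boot
bash /boot/config/firewall.sh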

 

I have the network settings for both NICs set to bonding: no, bridging: yes, with eth0 in br0 and eth1 in br1.  I set the Windows VM to use br1, which should route the VM's network traffic through eth1 and make it obey the iptables rules that allow LAN / block WAN traffic (right?)

 

The problem is that when I run the above script in a terminal and then start the Windows VM, the VM can still access the internet.  What am I doing wrong?

 

I admit I'm not a deep Linux or networking guy. Can anyone help me out?  If someone could just point me towards whatever tip / post / guide I need to figure this out, I'd be grateful.  Thanks.

Link to comment

The normal approach would be to make firewall rules on your router, but it is possible using iptables.

You must have the bridge function enabled for eth1 and let the VM(s) use br1 as their interface.

Do the following

# Allow local traffic, deny internet
iptables -A FORWARD -o br1 -d 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -o br1 -j DROP
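
Note that traffic from a bridged VM is forwarded by the unraid host rather than generated by it, so it traverses the FORWARD chain; that is why the OUTPUT/INPUT rules in your script never match it. To confirm the new rules are active after adding them to your go script, you can list the FORWARD chain, something like this (exact output will vary):

# list the active FORWARD rules with packet/byte counters
iptables -L FORWARD -v -n
# or print them in rule-specification form
iptables -S FORWARD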

 

Link to comment

Thanks for the help, @bonienl.  Your instructions helped me get past my issue.  Actually, I had to leave the eth1 bridge function disabled to get things working, but at least now I can move beyond the problem.

 

You're right about blocking at the router, but unfortunately my AT&T ISP router has to be the gateway device, and I've witnessed their techs log into the router from the WAN & reset it back to defaults (thus no outbound blocking) -->AFTER<-- I had changed admin passwords, etc., so I don't really trust the router.  I'm probably going to have to double-NAT the thing to regain confidence in my LAN security.  Then I'll block the relevant NIC MACs & IPs.

 

Link to comment

This solution works great as far as my VM is concerned, @bonienl - there's just one problem.  I can no longer use the unraid manager's WebUI to access my dockers.  Sometimes the connection is refused, other times it just fails to respond.  I think the problem might be that unraid is setting the default gateway to route through br1, which now has drop rules that weren't anticipated for its default-gateway traffic.

 

Here's the current Network Settings routing table:


Protocol    Route             Gateway                  Metric
IPv4        default           192.168.1.254 via br1    1
IPv4        172.17.0.0/16     docker0                  1
IPv4        192.168.1.0/24    br0                      1
IPv4        192.168.1.0/24    br1                      1
IPv6        ::1               lo                       256

This doesn't entirely make sense to me unless the problem is that lo is blocked from br1 (?)

Regardless, is there a way to set unraid to use br0 as its default gateway?
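
I assume the command-line equivalent would be something like the lines below (192.168.1.254 being my router), but I'd rather change it properly in the settings if there's a way:

# show the current routing table
ip route
# move the default route onto br0 (guessing this is the right syntax)
ip route replace default via 192.168.1.254 dev br0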

I've been looking around but am currently stumped.

Other than that this has been a day of great progress. Thanks for your help.

UPDATE:

No, something funny is going on.  I removed the iptables commands from the /boot/config/go file & rebooted.  Both NICs are set to bond=no, bridge=yes, so unraid should be back to a known-working config, yet the problem persists.

Any ideas?

 

 

Link to comment

Good news!  The unraid default gateway is now on br0.

However, although I'm now able to access dockers via the WebUI in one client scenario, I can't from the two others I actually need.  (Sorry to complicate things.)
Here are the three scenarios I've tested, one of which is working:

1: Accessing from a separate windows machine on the LAN:
   Unraid web interface (192.168.1.201:80):  Works
   Unraid docker WebUI (192.168.1.201:6080): Fails "Unable to connect"

2: Accessing from a GPU-passthru/virtio-nic unraid guest windows VM that is using br1 (i.e. firewalled to 192.168.1.0/24) set to use IP 192.168.1.210:
   Unraid web interface (192.168.1.201:80):  Works
   Unraid docker WebUI (192.168.1.201:6080): Fails "The connection was reset"

3: Using the GPU-passthru/virtio-nic guest VM to access a 2nd unraid guest windows VM via RDP, which is set to use 192.168.1.211, regardless of whether it's booted with its virtio NIC set to br0 or br1:
   Unraid web interface (192.168.1.201:80):  Works
   Unraid docker WebUI (192.168.1.201:6080): Works (I don't currently understand why)

Details that may or may not matter:
* All of the above use the ISP router at 192.168.1.254 as the gateway and have static IP addresses.
* I do have a Windows Server 2012 R2 box on 192.168.1.200 that acts as a DNS server, but I don't think it does DHCP.
* I'm addressing by IP address, not NetBIOS name, DNS, etc.

I need to manage the unraid server including docker apps from the LAN (obviously).
My guess is that some assumption about localhost is colliding with my atypical attempt to firewall br1 (?)
But I'm not strong on the networking side (obviously).
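
For what it's worth, the quick check I've been running from each client is just hitting the two ports directly (curl ships with recent Windows 10; a browser shows the same result):

# reachability check from a LAN client (IPs/ports as listed above)
curl -I http://192.168.1.201:80
curl -I http://192.168.1.201:6080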

Thanks again for the help.

Link to comment

Docker has a built-in security feature which prevents containers from talking directly to the host.

 

Unraid 6.8.2 has a new setting "Host access to custom networks" which bypasses the Docker security scheme, see Docker settings.

 

Change this to "enabled" and it should give you access to the Docker containers.

 

Link to comment

Thanks for your continued patience, @bonienl; I very much appreciate you taking the time.  Unfortunately the news is not good.  I've tried the following with no change in the results:

 

* Commented out custom iptables commands in /boot/config/go.  The only network configuration I have is that both interfaces are set to bonding=no, bridging=yes & eth1's IP is set to 0.0.0.0, gateway="".  So I may be wrong but I don't think the problem is due to my customization.

 

* Updated to Unraid 6.8.2, set "Host access to custom networks" = "Enabled".  Same problem.

 

* Tried setting "Preserve user defined networks" = "Yes".  Same problem.

 

* Tried unchecking "IPv4 custom network on interface br0 (optional)" (default is checked).  Same problem.  Note: defaults are currently br0: Checked, Subnet: 192.168.1.0/24, Gateway: 192.168.1.254, DHCP pool: unchecked; br1: Unchecked.

 

* Tried several permutations of the above, attempting access both from br0 (a different machine on the LAN) and from the VM on br1 (now not firewalled).

 

I'm using binhex-krusader for my docker WebUI access testing.  Typically I'm seeing "handler exception: [Errno 104] Connection reset by peer" messages in krusader's docker log when access fails.

 

To confuse the issue a bit, I'm also intermittently seeing "error 3: BadWindow (invalid Window parameter) request 20 minor 0 serial 504".  However if I stop and start the docker container, the message doesn't reappear and it appears to start normally.

 

Thanks again for the suggestions.  I'm still stuck.

 

Link to comment

Hmm, poking around, I wonder if a server reboot is required for these docker interface changes to take effect?  I've been just disabling and re-enabling docker.  The reason I ask is that when I disable docker I see the docker0 interface go away, but the DOCKER firewall rules persist until I reboot.  I expected the docker firewall rules to be removed along with the docker interface.
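
For reference, this is how I've been checking for the leftover rules (just listing, not deleting anything, since I'm not sure what's safe to remove):

# list any iptables rules that mention docker
iptables -S | grep -i docker
# the nat table has DOCKER chains as well
iptables -t nat -S | grep -i docker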

 

I'm still a newb so confidence is low, but I'd suggest the docker firewall rules should be cleaned up when the docker interface is removed.

 

 

 

Link to comment

Hey @bonienl, I think I might have figured it out.  And I think it may be a corner-case bug having to do with my issue where br1 somehow became the unraid default gateway.

 

I noticed that "iptables -S FORWARD -v" returns "-A FORWARD -d 192.168.1.0/24 -o br1 -j ACCEPT" but not the expected "-A FORWARD -d 192.168.1.0/24 -o br0 -j ACCEPT".  (br1 is no longer useful to unraid)

 

Executing "iptables -A FORWARD -d 192.168.1.0/24 -o br0 -j ACCEPT" gave me access to the docker WebUI.

 

Meaning that when I removed the IP address from br1, unraid reset the default gateway but docker did not.

 

Can you suggest how to fix this?  I can add it to the unraid go script but disabling & enabling docker seems to lose the rule.
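
For what it's worth, this is the form I've put in the go script; the -C check is only there to avoid adding the rule twice if the script runs more than once (it doesn't explain why the rule disappears, though):

# re-add the br0 accept rule only if it isn't already present
iptables -C FORWARD -d 192.168.1.0/24 -o br0 -j ACCEPT 2>/dev/null || \
iptables -A FORWARD -d 192.168.1.0/24 -o br0 -j ACCEPT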

 

Update: Uh, no.  It was working, but after adding the above iptables statement to the go file and rebooting, it's no longer working...

Link to comment
9 hours ago, Dav3 said:

eth1's IP is set to 0.0.0.0, gateway=""

This isn't right. eth1 should be set to 'None', like this (in your case it can be IPv4 only)

 

[screenshot: eth1 network settings with the IPv4 address assignment set to 'None']

 

Let's try to get the network configuration correct first.

You can make an explicit br0 rule, like this:

# Allow local traffic, deny internet
iptables -A FORWARD -o br0 -j ACCEPT
iptables -A FORWARD -o br1 -d 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -o br1 -j DROP

 

Link to comment
20 minutes ago, Dav3 said:

docker WebUI still doesn't work...

I did a quick test in my lab.

 

Any container which has a "bridge" network is NOT accessible by the VM, regardless of the iptables rules.

I guess it's some limitation in how Docker handles port mappings. These don't seem to work when internal communication takes place. Remember: the VM and the container are both on the same server.

 

Containers with a "host" or "custom" network are all reachable with the iptables rules I gave at the very beginning.
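
For anyone trying to reproduce this outside the Unraid template, the rough command-line equivalent of putting a container on the custom br0 network with a fixed LAN address is something like the line below (image name and IP are placeholders only; on Unraid you would normally just pick the network and address in the container template):

# hypothetical example: attach a container to the custom br0 network with a static LAN IP
docker run -d --name example --network br0 --ip 192.168.1.220 nginx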

 

Link to comment

Hey I just wanted to say thanks for your help.

 

Just FYI, I did get some weird results when switching a container to the "host" network; at first it worked, but after a server reboot, starting a docker container & selecting WebUI actually opened a VNC session to a running VM (!)  I think that had to do with DHCP assignment cruft.  I ended up putting all VMs on static IP assignments since that was a to-do anyway.

 

I also ended up putting dockers on a "custom" network with static IP addresses.  That made accessing dockers from br1 reliable, which is my primary need.  I still can't seem to get dockers to be accessible from the LAN (via br0), but as long as I can still access the main unraid web interface externally, I'm going to back-burner this task and come back to it.

 

Link to comment
On 1/27/2020 at 1:47 AM, Dav3 said:

You're right about blocking at the router, but unfortunately my AT&T ISP router has to be the gateway device, and I've witnessed their techs log into the router from the WAN & reset it back to defaults (thus no outbound blocking) -->AFTER<-- I had changed admin passwords, etc., so I don't really trust the router.  I'm probably going to have to double-NAT the thing to regain confidence in my LAN security.  Then I'll block the relevant NIC MACs & IPs.

 

This is off-topic, but IME it is a really bad idea to ever use ISP-provided equipment on the customer side if you're doing anything more complex than connecting a single computer with consumer-type defaults -- and any setup using Unraid fits that bill. If your ISP doesn't allow you to provide your own router, I'd switch providers just because of that (using their physical-layer modem only is OK).

Link to comment
3 hours ago, GreenDolphin said:

This is off-topic, but IME it is a really bad idea to ever use ISP-provided equipment on the customer side if you're doing anything more complex than connecting a single computer with consumer-type defaults -- and any setup using Unraid fits that bill. If your ISP doesn't allow you to provide your own router, I'd switch providers just because of that (using their physical-layer modem only is OK).

I agree, but in this monopoly-dominated world we need to work with what we get.  Until recently I was using Charter cable, until suddenly, without warning, AT&T deployed fiber to my area.  Yay!  Being 40% cheaper & 3x faster, I made the jump.  With 1 Gb + fixed IP address blocks available, this is everything I always wanted.  I was pretty happy.  Then after deployment, examining the router (which isn't actually half bad), I realized it was a bit of a Faustian bargain.  However, it looks like I can put the router into "passthrough" mode, turning it into more of a physical bridge device, where I can put my own router in front of it and filter LAN traffic away from it.  This little task is on my to-do list...

Link to comment
