bonienl Posted December 14, 2017

By default, unRAID, the VMs and the Docker containers all run within the same network. This is a straightforward solution: it does not require any special network setup, and for most users it is a suitable one. Sometimes more isolation is required, for example letting VMs and Docker containers run in their own network environments, completely separated from the unRAID server. Setting up such an environment requires changes to the unRAID network settings, but it also requires your switch and router to support the additional networks.

The example here makes use of VLANs. VLANs allow a single physical cable to be split into two or more logical connections, which can run fully isolated from each other. If your switch does not support VLANs, the same can be achieved by connecting multiple physical ports (this does, however, require more ports on the unRAID server).

The following assignments are used:

network 10.0.101.0/24 = unRAID management connection. It runs on the default link (untagged).
network 10.0.104.0/24 = isolated network for VMs. It runs on VLAN 4 (tagged).
network 10.0.105.0/24 = isolated network for Docker containers. It runs on VLAN 5 (tagged).

UNRAID NETWORK SETTINGS

We start with the main interface. Make sure the bridge function is enabled (this is required for VMs and Docker). In this example both IPv4 and IPv6 are used, but this is not mandatory; IPv4 only is a good starting choice. Here a static IPv4 address is used, but automatic assignment can be used too. In that case it is recommended that your router (DHCP server) always hands out the same IP address to the unRAID server. Lastly, enable VLANs for this interface.

VM NETWORK SETTINGS

VMs will operate on VLAN 4, which corresponds to interface br0.4. Here again IPv4 and IPv6 are enabled, but it may be limited to IPv4 only, without any IP assignment for unRAID itself. On the router, DHCP can be configured to allow VMs to obtain an IP address automatically.

DOCKER NETWORK SETTINGS

Docker containers operate on VLAN 5, which corresponds to interface br0.5. We need to assign IP addresses on this interface to ensure that Docker "sees" the interface and offers it as a choice in the network selection of a container. Assignment can be automatic if you have a DHCP server running on this interface, or static otherwise.

VM CONFIGURATION

We can set interface br0.4 as the default interface for the VMs we are going to create (existing VMs need to be changed individually). Here a new VM gets interface br0.4 assigned.

DOCKER CONFIGURATION

Docker uses its own built-in DHCP server to assign addresses to containers operating on interface br0.5. This DHCP server, however, isn't aware of any other DHCP servers (such as your router). Therefore it is recommended to give the Docker DHCP server an IP range outside the range used by your router (if any) to avoid conflicts. This is done in the Docker settings while the service is stopped.

When a Docker container is created, the network type br0.5 is selected. This lets the container run on the isolated network. IP addresses can be assigned automatically out of the DHCP pool defined earlier; leave the field "Fixed IP address" empty in this case. Alternatively, a container can use a static address; fill in the field "Fixed IP address" in that case.

This completes the configuration on the unRAID server.
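For reference, what the GUI configures for the container network corresponds roughly to a Docker macvlan network attached to br0.5. The sketch below shows the CLI equivalent, using the addresses from this example and placeholder image/container names; the pool and addresses are illustrative only, and normally the unRAID GUI takes care of all of this for you.

# macvlan network on the VLAN 5 bridge (sketch; unRAID sets this up for you)
docker network create -d macvlan \
  --subnet=10.0.105.0/24 --gateway=10.0.105.1 \
  --ip-range=10.0.105.128/25 \
  -o parent=br0.5 br0.5

# container with an automatically assigned address from the pool
docker run -d --network=br0.5 --name=app1 example/image

# container with a fixed address
docker run -d --network=br0.5 --ip=10.0.105.20 --name=app2 example/image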
Next we have to set up the switch and router to support the new networks we just created on the server.

SWITCH CONFIGURATION

The switch must be able to assign VLANs to its different ports. Below is a picture of a TP-LINK switch; other brands should have something similar.

ROUTER CONFIGURATION

The final piece is the router. Remember that all connections eventually terminate on the router, and this device makes communication between the different networks possible. If you want to allow or deny certain traffic between the networks, firewall rules need to be created on the router. This is, however, out of scope for this tutorial. Below is an example of a Ubiquiti USG router; again, other brands should offer something similar.

That's it. All components are configured and able to handle the different communications. Now you need to create VMs and containers which make use of them. Good luck.
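A final tip: once the switch and router are in place, a quick sanity check from the unRAID console can confirm the new interfaces are up and the router is reachable on each network. A minimal sketch, assuming the addresses from this example and that the router holds the .1 address on each network:

ip addr show br0.4     # VM network interface
ip addr show br0.5     # Docker network interface
ping -c 3 10.0.104.1   # router on the VM network (assumed address)
ping -c 3 10.0.105.1   # router on the Docker network (assumed address)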
1812 Posted December 14, 2017

I didn't read this all, but it looks fairly comprehensive and I'm 100% sure that I will need it in the future. Thanks!
DZMM Posted December 15, 2017

I've added a link in this post to a full guide to setting up VLANs in pfSense.
ken-ji Posted December 17, 2017

Wait, in 6.4, an IP must be assigned to the Docker VLAN for it to function with the GUI? Hmm... Dunno if that will break things for me...
unevent Posted December 17, 2017

Is this going to be 'the new way' of doing things or just one possible way for 6.4? I have no need or desire to segregate Dockers from VMs using VLANs and then juggle rules on my router whenever I need communication between VLANs. I do have a need and desire to give each Docker and VM its own IP and MAC, which I can do just fine in 6.3.5 on the same subnet.
DZMM Posted December 18, 2017

4 hours ago, unevent said: Is this going to be 'the new way' of doing things or just one possible way for 6.4?

Just one possible way. Unless you decide to create VLANs, all Dockers and VMs are on the same subnet, and Dockers only have unique IPs if you assign them.
bonienl Posted December 18, 2017 Author

On 12/18/2017 at 8:29 AM, DZMM said: Just one possible way. Unless you decide to create VLANs, all Dockers and VMs are on the same subnet, and Dockers only have unique IPs if you assign them.

Correct, this is an addition, not a replacement. Everything defined under unRAID 6.3 remains working under unRAID 6.4.
unevent Posted December 18, 2017

5 hours ago, bonienl said: Correct, this is an addition, not a replacement. Everything defined under unRAID 6.3 remains working under unRAID 6.4.

Thanks, and nice work.
Dephcon Posted January 15, 2018

I'm having some issues with this...

If I set up a VLAN (4) tagged interface as per the VM example, with address assignment set to None:
- I can ping my VLAN gateway just fine from my unRAID CLI.
- My Ubuntu VM on br0.4 cannot ping its gateway.

When I try to follow the Docker example:
- I can ping the VLAN IP set on the unRAID box, but not the gateway.
- Docker containers on br0.4 are not reachable.

I'm also using a UniFi router; I set up the VLAN network as "corporate", so inter-VLAN routing is enabled by default, just like in your example. Pretty sure everything is in order; the only difference from your examples is that my main unRAID interface is a 2Gb LACP bond.
bonienl Posted January 16, 2018 Author

Do you have a switch in between, and is it configured to allow VLAN tagged traffic?
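One way to check whether tagged frames actually leave and arrive on the unRAID side is to capture on the parent interface. A minimal sketch, assuming tcpdump is available on your system (it is not part of every unRAID install) and that the parent is eth0 (or bond0 on a bonded setup):

tcpdump -i eth0 -e -nn vlan 4    # should show frames carrying 802.1Q tag 4 when a VM on br0.4 sends traffic
tcpdump -i eth0 -e -nn vlan 5    # same check for the Docker VLAN

If nothing tagged shows up on the wire, the problem is on the server side; if tagged frames go out but never come back, look at the switch port or router configuration.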
Dephcon Posted January 16, 2018

9 hours ago, bonienl said: Do you have a switch in between, and is it configured to allow VLAN tagged traffic?

I do have a UniFi switch, and by default it trunks all VLANs to all ports. I'll take some screenshots to illustrate where I'm at.

Edit: I basically nuked all the routes and started from scratch, and now at least the containers seem to work. It's possible that my USG didn't get all the config changes, as it sometimes starts to provision before finishing the changes. I forced a provision, which may also have helped.

Anyway, this is a super cool feature and I'm glad to have it working. Now I just need to work backwards to open up all the ports I need for my containers, then slap a big DENY ALL at the end of the list to keep the Docker VLAN from accessing my main LAN.

I ended up taking all those screenshots anyway, so I might compile a UniFi-specific guide for users looking to do this and how to harden the stack, as all VLANs are wide open by default. Thanks!
dave Posted January 16, 2018

6 hours ago, Dephcon said: I might compile a UniFi-specific guide for users looking to do this and how to harden the stack, as all VLANs are wide open by default.

I certainly would appreciate that! I've been thinking of trying this for my Dockers but do not have a managed switch. However, I can connect my server directly to a port on my EdgeRouter X. That would remove the managed-switch dependency, right?
Dephcon Posted January 17, 2018

18 hours ago, dave said: However, I can connect my server directly to a port on my EdgeRouter X. That would remove the managed-switch dependency, right?

Not sure about the ER line, but if you use the second LAN port on the USG Pro, I think it disables hardware offload or something, so I don't think that would be ideal.
dave Posted January 17, 2018

3 hours ago, Dephcon said: Not sure about the ER line, but if you use the second LAN port on the USG Pro, I think it disables hardware offload or something, so I don't think that would be ideal.

As far as I can tell, from the GUI I am able to assign a VLAN to any port. With that said, plugging my server directly into that port should work? I will have to watch some tutorials on setting up VLANs tonight to figure this out.
Dephcon Posted January 18, 2018

Just make sure that routing between physical ports on your ER doesn't bypass hardware offload; that would be less than ideal.
Diggewuff Posted January 19, 2018

On 1/16/2018 at 5:40 PM, Dephcon said: I might compile a UniFi-specific guide for users looking to do this and how to harden the stack, as all VLANs are wide open by default.

I would be very interested in that guide.
Dephcon Posted January 19, 2018

So it turns out the firewall in UniFi is awful: no bi-directional rules, no protocol specification in port groups. I've defaulted back to wide open between my container VLAN and the main LAN for now. I might dig into the ER config guide and see if it's easier to just configure the firewall via the CLI and export it to JSON. However, I might be able to throw something together as an example of what needs to be done to permit one app and then deny all; I already had it working for Plex before I started piling more apps into the VLAN.
bonienl Posted January 20, 2018 Author

10 hours ago, Dephcon said: configure the firewall via the CLI and export it to JSON

I had to do the same for IPv6 configuration, which is not available in the GUI, but the CLI allows (most of) what the EdgeRouter can do. It is a pain to keep track of the JSON files though. Any customization done in the CLI needs to be saved to JSON, otherwise it will be lost the next time a device is provisioned through the GUI.
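As an illustration of the kind of CLI customization meant here, the sketch below keeps the Docker VLAN from reaching the main LAN while still allowing return traffic and internet access. It uses EdgeOS/USG CLI syntax with assumed names and addresses (VLAN 5 on eth1, main LAN on 10.0.101.0/24, ruleset name DOCKER_TO_LAN); adapt it to your own setup, and on a USG remember it has to end up in config.gateway.json or the controller will wipe it on the next provision.

configure
set firewall name DOCKER_TO_LAN default-action accept
set firewall name DOCKER_TO_LAN rule 10 action accept
set firewall name DOCKER_TO_LAN rule 10 description "allow established/related"
set firewall name DOCKER_TO_LAN rule 10 state established enable
set firewall name DOCKER_TO_LAN rule 10 state related enable
set firewall name DOCKER_TO_LAN rule 20 action drop
set firewall name DOCKER_TO_LAN rule 20 description "block Docker VLAN to main LAN"
set firewall name DOCKER_TO_LAN rule 20 destination address 10.0.101.0/24
set interfaces ethernet eth1 vif 5 firewall in name DOCKER_TO_LAN
commit
save
exit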
DZMM Posted January 20, 2018

@bonienl I was a bit surprised by the extra addresses set up on the unRAID server by the VLANs, which affected my transfer speeds. Is this normal?
bonienl Posted January 20, 2018 Author

See my answer in your other topic.
dave Posted January 23, 2018

Got VLAN (2) set up on my EdgeRouter X, added the VLAN in Network Settings for unRAID, and added br0.2 to the Docker settings, but when I edit a container only br0 is shown in the drop-down. There is no option for br0.2. Any ideas?
dave Posted January 23, 2018

Ok, got this working! Now, is there a way for my LetsEncrypt docker to hand traffic over to another docker? I have my firewall forwarding the port to LetsEncrypt, but now I can't figure out how to get it to pass traffic to the final destination. Previously this all worked because it was a single IP and traffic was passed across ports. Thanks!
DZMM Posted January 23, 2018

2 hours ago, dave said: Ok, got this working! Now, is there a way for my LetsEncrypt docker to hand traffic over to another docker?

If you've assigned an IP to LE (e.g. mine is 192.168.50.80), then you have to assign an IP to every docker you want it to connect to, e.g. I have 192.168.30.86 for nzbget. Then you reference that IP in the config file:

location /nzbget {
    proxy_pass http://192.168.30.86:6789;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

You can't use dockers on Bridge anymore; they need a unique IP to be able to communicate with each other.
dave Posted January 23, 2018

1 hour ago, DZMM said: If you've assigned an IP to LE (e.g. mine is 192.168.50.80), then you have to assign an IP to every docker you want it to connect to, e.g. I have 192.168.30.86 for nzbget. Then you reference that IP in the config file.

Ah, yes, you're right. I updated that IP in the config file and all is working! Thanks!
mifronte Posted January 29, 2018

I went ahead and created a VLAN (30) for my UniFi docker. The VLAN is on a separate network from my main LAN. The only issue I have with this setup is that the unRAID server gets an address and is accessible from the VLAN too! I was hoping that by putting the docker apps in a VLAN on a separate network, the docker apps would be segregated from my unRAID server. Is there any way to prevent unRAID from being on the VLAN network too?