Can I Assign a Docker Container to eth1?



Couldn't help myself from having a go at this again as it has been bugging me.

 

I had some success using a "home-assistant" docker and my second, unused NIC, "eth1".

 

Background network info:

- physical router/gateway: 192.168.2.1

- subnet mask: 255.255.255.0 or /24

- docker container static ip: 192.168.2.106

 

I used the following commands to get my docker network up:

docker network create --subnet 192.168.2.0/24 --aux-address "DefaultGatewayIPv4=192.168.2.1" --gateway=192.168.2.2 -o com.docker.network.bridge.name=br-home-net homenet
brctl addif br-home-net eth1
ip a del 192.168.2.2/24 dev br-home-net

Deleting the gateway address may not be necessary, but I was following someone else's steps so I kept it the same for the time being.

 

Then, to give the container its IP, I added the following extra parameters:

--net homenet --ip 192.168.2.106

Make sure the container's network type is set to "None".

I was concerned that the --net option showing up twice, once for "None" and then again for our custom network, might cause issues, but it didn't in this case.
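Putting the pieces together, the full run command might look like the sketch below. Only the --net/--ip flags come from this post; the container name, volume mapping, and image are illustrative. The command is stored in a variable and printed so it can be reviewed before running:

```shell
# Hypothetical full command line; --net/--ip are the parameters described
# above, everything else (name, volume, image) is an example only.
CMD='docker run -d --name home-assistant \
  --net homenet --ip 192.168.2.106 \
  -v /mnt/user/appdata/home-assistant:/config \
  homeassistant/home-assistant'
echo "$CMD"
```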

 

Next steps will be:

- Test some other containers and see how they go.

- Figure out a way for those with only one interface to use this feature.

- Automate the creation of the docker network.

- Let others test the process and see how it goes.

- If it works, suggest that it be built into the unraid GUI.
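The "automate the creation of the docker network" step above could be sketched as a single script, assuming the same subnet, NIC, and bridge names as this post. By default it only echoes the commands so they can be checked first; set APPLY=1 to actually execute them (brctl and ip need root):

```shell
#!/bin/sh
# Sketch: automate the docker network setup from the post.
# Adjust SUBNET/NIC/addresses for your LAN.
SUBNET="192.168.2.0/24"
REAL_GW="192.168.2.1"    # physical router
DUMMY_GW="192.168.2.2"   # docker-side gateway; its address is removed again below
BRIDGE="br-home-net"
NIC="eth1"

# Print each command; only execute it when APPLY=1 is set.
run() {
  echo "$@"
  if [ -n "$APPLY" ]; then "$@"; fi
}

run docker network create --subnet "$SUBNET" \
  --aux-address "DefaultGatewayIPv4=$REAL_GW" \
  --gateway "$DUMMY_GW" \
  -o com.docker.network.bridge.name="$BRIDGE" homenet
run brctl addif "$BRIDGE" "$NIC"
run ip a del "$DUMMY_GW/24" dev "$BRIDGE"
```

Something like this could eventually be called at boot so the bridge attachment survives a reboot.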

 

A quick note on something I did that isn't standard: I had tried using the inbuilt bridge feature suggested in the earlier post to make br1. This is not what you need to do. So I manually deleted my bridge, because I didn't want to take down my VM to do it through the network settings. I hope that doesn't impact my results, but I will test this later. I used the following commands to do so:

ip link set br1 down
brctl delbr br1

 

 


I knew this could be done!  :)

 

How did you configure eth1 in the unraid network settings?  Did you have to turn on bridging?

 

The one thing that is nice is that after you create the new network, it survives a reboot.  Now I wonder if there is a file somewhere in Dynamix (or somewhere else) that we can edit to make the new network available in the dropdown on the container editing page.

 

I just crawled out of bed but will play with this later.  My whole reason for wanting this is to be able to force my Deluge and SAB containers through my VPN.  Right now I send the entire unraid server down that path.  For this purpose, I will also pass

--dns=xxx.xxx.xx.xx

to those two containers to avoid DNS leaks, since they obtain their DNS entries from the host and anything set at the router level via DHCP is not applied.
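As a sketch, the extra parameters for one of those containers might look like this (the DNS server IP and image name are placeholders, not from the post; the command is printed rather than executed so it can be adapted first):

```shell
# Hypothetical run command with a pinned DNS server to avoid leaks.
# 10.8.0.1 stands in for the VPN provider's resolver.
CMD='docker run -d --name deluge \
  --net homenet --ip 192.168.2.110 \
  --dns 10.8.0.1 \
  linuxserver/deluge'
echo "$CMD"
```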

 

John


I had turned on bridging but wanted it off, so rather than reboot I just deleted the bridge. I will turn bridging off properly next time I shut my array down.

 

The one thing that is nice is that after you create the new network, it survives a reboot.  Now I wonder if there is a file somewhere in Dynamix (or somewhere else) that we can edit to make the new network available in the dropdown on the container editing page.

I didn't realise the network setting persisted after a reboot. The bridge between the docker bridge and the NIC won't though, as those commands are run in unraid, not docker.

 

There is a lot more to do, especially around the NIC bridging, before this would be ready for inclusion in the GUI. Also, at the moment the docker network option is only available to those with a spare NIC.

 

 

I just crawled out of bed but will play with this later.  My whole reason for wanting this is to be able to force my Deluge and SAB containers through my VPN.  Right now I send the entire unraid server down that path.  For this purpose, I will also pass

--dns=xxx.xxx.xx.xx

to those two containers to avoid DNS leaks, since they obtain their DNS entries from the host and anything set at the router level via DHCP is not applied.

 

This is already possible using pipework, and it is much simpler than the docker networking method.

At the moment docker networking has too many moving parts for me to trust it with anything but my development dockers.


I just crawled out of bed but will play with this later.  My whole reason for wanting this is to be able to force my Deluge and SAB containers through my VPN.  Right now I send the entire unraid server down that path.  For this purpose, I will also pass

--dns=xxx.xxx.xx.xx

to those two containers to avoid DNS leaks, since they obtain their DNS entries from the host and anything set at the router level via DHCP is not applied.

 

Just to clarify: I have not tried DNS settings with pipework. I use pipework to give dockers their own static IPs, and then use my router to manage traffic priority or push the traffic through a VPN when necessary.
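For reference, a pipework invocation for that setup might look like the line below (the container name is illustrative; the argument order of host interface, container, then IP/mask with an optional @gateway follows the jpetazzo/pipework README). Printed for preview:

```shell
# Hypothetical pipework call: attach eth1 to the container and give it a
# static IP, with the physical router as gateway.
CMD='pipework eth1 home-assistant 192.168.2.106/24@192.168.2.1'
echo "$CMD"
```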


FYI... found the PHP file that needs to be modified to allow selecting the custom network:

 

/usr/local/emhttp/plugins/dynamix.docker.manager/include/CreateDocker.php

 

The relevant section...

 

        <td>Network Type:</td>
        <td>
          <select name="contNetwork" class="narrow">
            <option value="bridge">Bridge</option>
            <option value="host">Host</option>
            <option value="none">None</option>
          </select>
        </td>

 

I imagine adding a line to the go file to either inject the custom network or overwrite the file with a modified copy can be done.  I may also put in a feature request to dynamically build the network list on the container config page from 'docker network ls' (or some other way).
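As a sketch of the go-file approach, a sed one-liner could inject a custom entry into that dropdown. The network name "homenet" and the assumption that the "none" option line appears exactly once in the file are mine; note also that /usr/local is rebuilt at boot, so this would have to be re-applied from the go file every time:

```shell
# Insert a "homenet" option after the existing "none" option in the
# Network Type <select>. $1 is the path to CreateDocker.php.
patch_network_dropdown() {
  sed -i 's|<option value="none">None</option>|&\n            <option value="homenet">homenet</option>|' "$1"
}

# e.g. from the go file:
# patch_network_dropdown /usr/local/emhttp/plugins/dynamix.docker.manager/include/CreateDocker.php
```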

 

John



 

Just to clarify: I have not tried DNS settings with pipework. I use pipework to give dockers their own static IPs, and then use my router to manage traffic priority or push the traffic through a VPN when necessary.

 

Be careful of this...

 

Even though I was telling pfSense to assign my containers the DNS servers of my VPN provider, it was being overwritten by Docker, which tells the container to get them from the host.


Did you have to assign an IP (either static or dynamic) to eth1 in the unraid NIC settings?  Or does the newly created docker network take care of that?

I did not assign anything to the NIC; I just left it up.

 

I have stopped testing docker networking with my server for now, as I started to get very strange results.

After manually creating the bridge between the docker network and eth1 using brctl, all of my traffic from VMs and dockers (even those using pipework; not sure about docker networking) started to come from my unraid host IP on eth0. This caused havoc in my system, as low-priority traffic was treated as high priority because it appeared to come from my unraid IP.

 

I will put the Docker networking testing on hold again as the trouble is not worth it while I have a solution that currently works.

 

