Networking - Multiple Ports


Defylimits


Hey everyone, just wondering if someone can help me out, as I'm a bit green with networking in general and on Unraid.

Just added a new network card to my server (10GbE, as it was cheap) and I'm wondering how to configure my Dockers and VMs to use it.

Currently I have

eth0 - 10.10.10.208 - Motherboard port 1 - Unraid host connection, the only connection I've been using up until now
eth1 - Port down - Motherboard port 2
eth2 - 10.10.10.210 - Asus 10GbE network card - The connection I want to use for all Dockers and VMs

I disabled bridging on eth0 and enabled bridging on eth2, and a custom br2 was created. Reassigning the VMs to this bridge, with their IP addresses set to 10.10.10.211 and 10.10.10.213, appears to have got them working.

Now on to the Dockers, which are running on the host network and so use the 10.10.10.208 address. I want these to use the 10.10.10.210 address instead - how do I point them in that direction so they use the 10GbE port?

If I bridge the Docker I get something like "172.17.0.3:9987/UDP --> 10.10.10.208:9987" (TeamSpeak), and if I use br2 like the VMs I get "10.10.10.209:80/TCP --> 10.10.10.209:80" (web service).

I feel like I need to create a separate Host2 or Bridge2 network type that I can select when assigning a Docker, but I'm unsure how to go about this - anyone got any pointers?
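(Editor's note: one common way to do this is a custom macvlan Docker network pinned to the second bridge. The sketch below is an assumption based on the addresses in this thread - the subnet, gateway, IP range, and container names are examples, not the poster's actual config.)

```shell
# Sketch: create a macvlan Docker network attached to the 10GbE bridge (br2).
# Subnet/gateway assumed from the addresses in this thread; --ip-range keeps
# Docker from handing out addresses already used by the host and VMs.
docker network create -d macvlan \
  --subnet=10.10.10.0/24 \
  --gateway=10.10.10.1 \
  --ip-range=10.10.10.224/27 \
  -o parent=br2 \
  tengig

# Run a container on that network with a fixed address (names are examples):
docker run -d --network tengig --ip 10.10.10.230 --name ts teamspeak
```

Once created, the network also shows up in the Unraid Docker template's Network Type drop-down, so containers can be assigned to it from the WebUI.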


That's great, thanks. How do I access the network shares through this connection as well?

To be honest, I'd prefer to just run everything off the 10GbE connection if possible - maybe with the 1-gig connection for management access?

6 hours ago, Defylimits said:

That's great, thanks. How do I access the network shares through this connection as well?

Via the Unraid IP on eth0.
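(Editor's note: in practice that means pointing clients at the eth0 address explicitly rather than the server's hostname. A sketch from a Linux client - the IP is taken from this thread, the share and user names are examples:)

```shell
# List the Unraid shares by connecting to the eth0 IP directly,
# so the traffic uses that specific interface.
smbclient -L //10.10.10.208 -U youruser

# Or mount a share from that address (share/user names are examples):
mount -t cifs //10.10.10.208/share /mnt/share -o username=youruser
```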

6 hours ago, Defylimits said:

To be honest, I'd prefer to just run everything off the 10GbE connection if possible - maybe with the 1-gig connection for management access?

This works by reordering the interfaces in Settings | Network (while the array is down) so that eth0 is the 10GbE interface. I'm not sure if there could be a problem, but bridging everything together might work for you too.
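(Editor's note: before reordering, it helps to confirm which physical NIC currently holds which ethX name. A sketch - interface names, MACs, and speeds will differ on your system:)

```shell
# One line per interface: name, state, MAC address. Match the MACs against
# the ones shown in Unraid's Settings | Network to identify each port.
ip -br link show

# ethtool distinguishes the 10GbE card from the onboard 1GbE ports
# (eth0 here is just an example name):
ethtool eth0 | grep -i speed   # link speed
ethtool -i eth0                # driver name, e.g. add-in card vs onboard
```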


Thanks! I hadn't realised that you could reorder the interfaces. So just an update for you:

eth0 - 10GbE network card ---->> br0 (Dockers and VMs)
eth1 - 1GbE onboard (no IP address)
eth2 - 1GbE onboard (port down)

 

So that's how I've got it working so far, and I've changed my strategy. Now I want a second bridge connection linked to eth1, so that I can assign some Dockers to use just that port.

However, whenever I follow the setup in your guide below, my Docker image becomes orphaned as soon as the Docker is started with this -

"Modify any Docker via the WebUI in Advanced mode

Set Network to None

Remove any port mappings

Fill in the Extra Parameters with: --network docker1

Apply and start the docker

The docker is assigned an IP from the pool 10.0.3.128 - 10.0.3.254; typically the first docker gets the first IP address"
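(Editor's note: the quoted steps assume a Docker network named docker1 already exists. A sketch of how such a network might be created - the subnet, gateway, and parent interface are assumptions inferred from the 10.0.3.128 - 10.0.3.254 pool mentioned in the quote:)

```shell
# Sketch: a macvlan network named docker1 on the second NIC (parent=eth1
# is an assumption). The --ip-range covers the pool quoted above, so
# containers get addresses starting from 10.0.3.128.
docker network create -d macvlan \
  --subnet=10.0.3.0/24 \
  --gateway=10.0.3.1 \
  --ip-range=10.0.3.128/25 \
  -o parent=eth1 \
  docker1
```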

 

Not sure what the cause of this would be?

On 3/2/2020 at 12:04 PM, ken-ji said:

Refer to these for more details:

 

 


Can you see docker1 in the drop down list of networks?
Can you see docker1 in the list of docker networks under Settings | Docker ?

 

Please read both threads thoroughly. There are gotchas, like not being allowed to have more than one Docker network with the same gateway address.
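(Editor's note: both checks above can also be done from the command line, assuming the network is named docker1:)

```shell
# List all Docker networks; docker1 should appear here if it was created.
docker network ls

# Inspect it to confirm the driver, subnet, gateway, and parent interface
# match what you intended.
docker network inspect docker1
```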

  • 1 year later...

I recently installed a MikroTik CRS305-1G-4S+IN in my home network, powered from my EdgeRouter X. Previously my port forwarding was set up to route to a static IP assigned to my server - a local 192.168.1.15 address over a 1GbE connection straight from the EdgeRouter.

What I have done is statically assign the 1GbE connection as 192.168.1.14 and the new SFP1 port on the MikroTik as 192.168.1.15. In other words, the 10GbE is now the .15 address and the 1GbE is the .14 address.

My EdgeRouter's port-forward setup has not changed and is still set to 192.168.1.15. However, if I unplug my 1GbE connection (currently set to 192.168.1.14), external connections are rejected and not routed.

 

I did more research and I think this is an issue on the Unraid side. What kind of network settings are needed to achieve this? I currently have the 10GbE port set to .15 as eth0 with bridging enabled, and the 1GbE port set to .14 as eth1, also with bridging enabled. Is there something wrong in my config?

 

Fixed by editing the default route to point at the correct Ethernet port: IPv4 default via 192.168.1.1 on br0 instead of br1.
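(Editor's note: a sketch of checking and changing the default route from the shell. The gateway and bridge names are taken from this post as assumptions; on Unraid the persistent setting lives under Settings | Network, so a CLI change alone won't survive a reboot:)

```shell
# Show the routing table; check which device the default route uses.
ip route show

# Move the default route from the 1GbE bridge (br1) to the 10GbE bridge (br0).
ip route del default
ip route add default via 192.168.1.1 dev br0
```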

Edited by Waddoo
Solved.
