Should I Utilize My Second NIC?



I am running unRAID 6.0.1 and just got a Windows 7 Pro VM working under KVM.  My motherboard has two NICs (Intel® 82573V + Intel® 82573L), both plugged into my unmanaged Zyxel GS1100-16 R switch.  I have not enabled bonding yet, but I have enabled bridging (br0) for the VM.

 

It appears unRAID is only using eth0, and I have no idea what eth1 is doing.  Up to now I have been running unRAID on one machine (with the second NIC unplugged) and Windows 7 on another.  Now that both are on the same machine, at peak times my household tends to have two BD streams and two HD OTA recordings from my network tuners (HDHomeRun) going at the same time.

 

1. Should I enable bonding to utilize my second NIC?

2. If no to bonding, should I have the second ethernet port plugged into my switch?

3. If yes to bonding, which mode would work best with my unmanaged dumb switch, and do I need to do any configuration with the Windows VM?

4. Other than bonding, is there a better way to utilize my second NIC?

 

BTW, big kudos to all who made getting VMs so easy to set up on unRAID. This is my first foray into virtualization and I am impressed!

Link to comment

I still don't know the right answer, but I have done some digging.

 

Pros and cons to both, it seems.

 

A pro to teaming/bonding/whatever: if you instead pass the second NIC through to the VM, the VM and unRAID then do all of their communication over physical Ethernet: out of your unRAID box, across your network, and back into the same box. When you use unRAID's internal networking, transfers are smart and never touch your physical network, so you get less latency, no gigabit speed limit, no network congestion issues, etc.
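One rough way to see this for yourself (just a sketch; it assumes eth0 is the NIC sitting behind br0 on your box) is to watch the physical NIC's counters while copying a file between the VM and an unRAID share. A transfer that stays on the internal bridge should barely move them:

# note the physical NIC's transmit counter before and after the copy
cat /sys/class/net/eth0/statistics/tx_bytes
# or watch the counters live while the transfer runs
watch -n 1 'ip -s link show eth0'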

 

But still, I don't know what is better.

 

I will say, when my one VM is maxing my internet connection (30 Mb/s), for some reason my unRAID webGUI loads quite slowly. And that is loading it from a computer on my internal network, so it shouldn't matter that my WAN is maxed. My router is beefy and is not the limiting factor there.

 

A second issue I see with bonding is QoS. I don't want a file transfer to my unRAID box from another PC on the network, maxing out unRAID's network speed, to interfere with my VMs, Dockers, etc. trying to access the internet. That is one use case for assigning the NICs individually rather than bonding them. I think... haha

Link to comment

I currently have a dual Intel NIC card, with both NICs on the same subnet. I used the web interface to configure eth0 (statically) for unRAID administration and Dockers, then configured eth1 as a bridge for VMs. Configuring eth1 was not obvious and cannot be done via the web interface (as it stands now, you can only configure one interface there).

 

I had to put the appropriate commands in the go file to configure eth1 at boot. This is what I have:

 

# create a dedicated bridge for VM traffic and enslave eth1 to it
brctl addbr in0
brctl addif in0 eth1
# bring up the physical interface and the bridge
ifconfig eth1 up
ifconfig in0 up
# route the local subnet via the bridge so VMs can reach the Docker apps
route add -net 192.168.1.0 netmask 255.255.255.0 dev in0

 

I needed to add the route command because I ran into an issue where the VMs couldn't communicate with the Docker apps. I can't say if this is optimal or not, but it seems to be fine for my needs. Hope it helps somebody.
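After a reboot, a quick way to sanity-check that the go-file commands took effect (just a verification sketch using the same interface and subnet as above):

brctl show in0        # eth1 should be listed as the bridge's only interface
ip link show in0      # the bridge should be UP
ip route | grep in0   # the 192.168.1.0/24 route should point at in0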

 

Beyond the hardware required to get bonding to work, might it be overkill to some extent in some situations? I guess redundancy would be cool, but I can't imagine the chances of a good-quality NIC dying are high.

 

Link to comment

I believe that for NIC bonding to have any effect, your switch needs to support bonding as well.

That is what I always believed as well.  I've seen things that would lead me to believe the code has progressed enough to get around it without a switch upgrade.  But I could easily have been reading too much into things as well.
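Whatever the switch is doing, you can at least see what the bonding driver itself thinks is going on by reading its status file (bond0 is unRAID's default bond name):

cat /proc/net/bonding/bond0   # shows the bonding mode, the currently active slave, and per-slave link status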
Link to comment

I decided to use adaptive load balancing (mode 6) bonding, since there's no point in having a second NIC sitting around doing nothing.  At the very least, iperf3 does not show any difference between having bonding disabled and enabled.

 

On another note, I even ran iperf3 between a Windows VM and my Windows desktop, and the throughput is the same as between unRAID and my Windows desktop.  So the unRAID internal bridge (br0) for VMs has no negative impact on my system.
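For anyone who wants to repeat the test, the iperf3 runs look roughly like this (a sketch: the server IP is a placeholder, and iperf3 may need to be installed on the unRAID side, e.g. via a plugin):

iperf3 -s                        # on the unRAID host (or inside the Windows VM)
iperf3 -c 192.168.1.10 -t 30     # on the desktop: 30-second throughput test toward that server
iperf3 -c 192.168.1.10 -t 30 -R  # same test with the direction reversed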

Link to comment

I decided to use adaptive load balancing (mode 6) bonding, since there's no point in having a second NIC sitting around doing nothing.  At the very least, iperf3 does not show any difference between having bonding disabled and enabled.

 

On another note, I even ran iperf3 between a Windows VM and my Windows desktop, and the throughput is the same as between unRAID and my Windows desktop.  So the unRAID internal bridge (br0) for VMs has no negative impact on my system.

 

That is currently what I am using. Here's hoping it is actually a benefit to the system? haha

Link to comment

Monitoring Dashboard -> System Status with the dropdown set to Errors, I see packet drops on bond0 (from the second NIC).

 

Does anyone know if this is normal?  I've been googling but have yet to come to a definite conclusion.

 

Hmm. I see the same thing here, though none of my services seem to be having issues: Plex, game servers, voice server, file transfers...

 

bond0
RX:  Errors: 0   Drops: 72680   Overruns: 0
TX:  Errors: 0   Drops: 123     Overruns: 0

Link to comment

I started up four BD streams from different clients to see if both NICs were being utilized.  Sure enough, the transmitted packet counts were almost evenly divided between the two NICs. Before bonding, my second NIC was not active at all (zero packets transmitted or received).

 

However, the dropped-packets counter on the second NIC keeps going up.  Surely this is not normal.

 

Edit:

I should clarify that all the packet drops are on the receiving side of the second NIC (the non-active slave in the bond).  There are no packet drops on transmit.
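Reading the per-slave counters directly is an easy way to confirm which interface and which direction is dropping (a sketch assuming eth0/eth1 are the two slaves):

cat /sys/class/net/eth1/statistics/rx_dropped   # receive drops on the non-active slave keep climbing
cat /sys/class/net/eth1/statistics/tx_dropped   # transmit drops stay at zero
ip -s link show bond0                           # aggregate RX/TX statistics for the bond itself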

Link to comment

I believe that for NIC bonding to have any effect, your switch needs to support bonding as well.

 

This is true. For any sort of bonding where you basically get 2 Gbps, you need support from the switch as well; otherwise you'd end up with two IP addresses.

 

If your switch does not have this ability, I'd just turn it off.

Link to comment

I found this via dd-wrt:

 

balance-alb

 

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
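If you're curious, you can watch that ARP behaviour on the wire. Capturing ARP on each slave with the link-level headers shown lets you see which MAC address replies go out with; this is only a sketch, run from the unRAID console, and assumes tcpdump is available (it may need to be added via a plugin):

tcpdump -e -n -i eth0 arp   # -e prints the Ethernet headers, so the source MAC of each ARP reply is visible
tcpdump -e -n -i eth1 arp   # run in a second shell to compare the other slave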

 

However, I do note that my logs are full of:

 

Sep 2 21:08:44 BigBang kernel: net_ratelimit: 7426 callbacks suppressed

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

 

 

And I do mean, like, thousands of entries.

Link to comment

I don't have any errors in my syslog related to the bonding.

 

My second NIC does not receive an IP address (no record in my DHCP server).  The dropped packets may be common with these bonding modes.  According to this document, under the Duplicated Incoming Packets section:

 

Duplicated Incoming Packets

It is not uncommon to observe a short burst of duplicated
traffic when the bonding device is first used, or after it has been
idle for some period of time. This is most easily observed by issuing
a "ping" to some other host on the network, and noticing that the
output from ping flags duplicates (typically one per slave).

For example, on a bond in active-backup mode with five slaves
all connected to one switch, the output may appear as follows:

# ping -n 10.0.4.2
PING 10.0.4.2 (10.0.4.2) from 10.0.3.10 : 56(84) bytes of data.
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.7 ms
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=2 ttl=64 time=0.216 ms
64 bytes from 10.0.4.2: icmp_seq=3 ttl=64 time=0.267 ms
64 bytes from 10.0.4.2: icmp_seq=4 ttl=64 time=0.222 ms
This is not due to an error in the bonding driver, rather, it
is a side effect of how many switches update their MAC forwarding
tables. Initially, the switch does not associate the MAC address in
the packet with a particular switch port, and so it may send the
traffic to all ports until its MAC forwarding table is updated. Since
the interfaces attached to the bond may occupy multiple ports on a
single switch, when the switch (temporarily) floods the traffic to all
ports, the bond device receives multiple copies of the same packet
(one per slave device).

The duplicated packet behavior is switch dependent, some
switches exhibit this, and some do not. On switches that display this
behavior, it can be induced by clearing the MAC forwarding table (on
most Cisco switches, the privileged command "clear mac address-table
dynamic" will accomplish this).

 

Edit:

Until I get a real managed switch, I will leave the bonding at balance-alb.  It appears to distribute packets equally between the two NICs, and so far I have not noticed any dramatic side effects other than the dropped packets.

Link to comment

However, I do note that my logs are full of:

 

Sep 2 21:08:44 BigBang kernel: net_ratelimit: 7426 callbacks suppressed

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

 

 

And I do mean, like, thousands of entries.

Keep an eye on the free space where the logs are kept; you could easily fill it up and end up not logging anything once it's full. Unless you've modified it, I'm pretty sure the log volume is still only 128 MB. Run df at the console and see what your used and available space are for the tmpfs mounted at /var/log.
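Concretely, that check is just (a small sketch of the commands mentioned above):

df -h /var/log     # size, used and available space of the log tmpfs
du -sh /var/log/*  # which log file is actually eating the space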
Link to comment

However, I do note that my logs are full of:

 

Sep 2 21:08:44 BigBang kernel: net_ratelimit: 7426 callbacks suppressed

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

 

 

And I do mean, like, thousands of entries.

Keep an eye on the free space where the logs are kept; you could easily fill it up and end up not logging anything once it's full. Unless you've modified it, I'm pretty sure the log volume is still only 128 MB. Run df at the console and see what your used and available space are for the tmpfs mounted at /var/log.

 

 

Good call. I'm already at 38 MB out of 128 MB, and I've only been bonded like this for a couple of days. Will need to investigate, I suppose!

Link to comment
  • 4 years later...

I'd be curious to know if you can use the second NIC to isolate Unraid traffic to a VPN connection. Since WireGuard is not currently supported natively in the Linux kernel on Unraid 6.8.3, and I am trying to be more resource-efficient with my Dockers, I was thinking of having my router connect to my VPN as a client and then route all traffic from the second NIC through the VPN.

 

Sure, @binhex has built VPN support into some of their Dockers, but if I can free up some system resources by not having to download those dependencies and use extra processes and RAM, and instead offload the VPN work to the router at the expense of VPN speed, that is something I'm willing to explore. Just trying to streamline my Dockers and have my cake and eat it too. Although, I only want to be able to designate specific Dockers to use the second NIC; the others can remain in the clear.

 

I was trying to see if you could specify a particular NIC for individual Dockers, but it does not look like you are able to do so:

 

[screenshot: Docker container network settings]
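For what it's worth, plain Docker can create a network that is pinned to a specific parent interface, which might be one way to push selected containers out through the second NIC. This is only a sketch; the eth1 parent, the subnet/gateway, and the network and image names are all placeholders for my setup:

docker network create -d macvlan \
  --subnet=192.168.2.0/24 --gateway=192.168.2.1 \
  -o parent=eth1 vpn_net                                       # a network whose traffic leaves via the second NIC
docker run -d --name=test-app --network=vpn_net some/image     # attach one container to it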

 

I tried to get the WireGuard plugin to connect to my VPN provider, but it won't connect using the configuration files I was given. That being said, I'd be curious to know if we can do split tunneling once the new version of Unraid comes out and WireGuard is baked into the kernel.

 

Otherwise, I was thinking... maybe I can set up one WireGuard Docker container and then simply route the individual Dockers through that one container for all of my VPN needs on Unraid. But I don't know how I would go about doing that, and besides, there are other threads discussing this matter.
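For reference, the usual pattern people use for that is to have the app container join the VPN container's network namespace. A minimal sketch, assuming a VPN container named "wireguard" is already up and connected (the app name and image are placeholders):

docker run -d --name=my-app --network=container:wireguard some/app-image
# my-app now shares wireguard's network namespace, so all of its traffic rides the tunnel;
# any ports my-app listens on have to be published on the wireguard container instead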

 

Anyway, thanks to anyone reading this. Just thinking aloud for a moment to see if anyone else may know the answer. Until then, I'll continue searching in my free time. Oh, and if anyone knows of some "Networking IP Tables for Dummies" type of tutorials, let me know. 🙂

Link to comment
