mifronte Posted August 29, 2015

I am running unRAID 6.0.1 and just got a Windows 7 Pro VM working under KVM. My motherboard has two NICs (Intel® 82573V + Intel® 82573L), both plugged into my unmanaged Zyxel GS1100-16 R switch. I have not enabled bonding yet but have enabled bridging (br0) for the VM. It appears unRAID is only using eth0, and I have no idea what eth1 is doing. Up to now I have been running unRAID on one machine (with the second NIC unplugged) and Windows 7 on another. Now that both are on the same machine, at peak times my household tends to have two BD streams and two HD OTA recordings from my network tuners (HD HomeRun) going at the same time.

- Should I enable bonding to utilize my second NIC?
- If no to bonding, should I still have the second Ethernet port plugged into my switch?
- If yes to bonding, which mode would work best with my unmanaged dumb switch, and do I need to do any configuration in the Windows VM?
- Other than bonding, is there a better way to utilize my second NIC?

BTW, big kudos to all who made VMs so easy to set up on unRAID. This is my first foray into virtualization and I am impressed!
NotYetRated Posted August 30, 2015

Also curious what others think; I have a similar situation. I set mine to balance-alb just to see what would happen. No problems so far, I think, but I'm not sure it's actually benefiting me.
BobPhoenix Posted August 31, 2015

Curious myself. I passed my 2nd NIC through to the Windows VM, but if it helps to use bonding and share the NICs with unRAID, I would try it. Can you tell I'm not too familiar with networking, even though I've had a network since 10Mb was the norm and 100Mb was over the horizon?
archedraft Posted August 31, 2015

Curious myself. Include me on the list as well.
NotYetRated Posted August 31, 2015

I still don't know the right answer, but I have done some digging. There seem to be pros and cons to both.

One pro for bonding: when you instead pass the second NIC through to the VM, the VM and unRAID do all of their communication over physical Ethernet, out of your unRAID box, across your network, and back into your unRAID box. When you use unRAID's internal networking, transfers are smart and never leave the machine, so you get lower latency, no gigabit speed limit, no added network traffic, etc.

But still, I don't know which is better. I will say that when one of my VMs is maxing my internet connection (30 Mb/s), my unRAID webGUI loads quite slowly for some reason, and that is loading it from a computer on the internal network, so it shouldn't matter that my WAN is maxed. My router is beefy and is not the limiting factor there.

A second issue I see with bonding is QoS. I don't want a file transfer to unRAID from another PC on the network, maxing out unRAID's network speed, to interfere with my VMs, dockers, etc. trying to access the internet. That is one use case for assigning the NICs individually rather than bonding them. I think... haha
meep Posted August 31, 2015

I believe that for NIC bonding to have any effect, your switch needs to support bonding as well.
joelones Posted August 31, 2015

I currently have a dual-port Intel NIC, with both NICs on the same subnet. I used the web interface to configure eth0 (statically) for unRAID administration and Dockers, then configured eth1 as a bridge for VMs. Configuring eth1 was not obvious and cannot be done via the web interface (as it stands now, you can only configure one interface there), so I had to put the appropriate commands in the go file to set up eth1 at boot. This is what I have:

brctl addbr in0
brctl addif in0 eth1
ifconfig eth1 up
ifconfig in0 up
route add -net 192.168.1.0 netmask 255.255.255.0 dev in0

I needed to add the route command because I ran into an issue where the VMs couldn't communicate with the Docker apps. I can't say whether this is optimal, but it seems fine for my needs. Hope it helps somebody.

Besides the hardware required to make bonding work, might it be overkill in some situations? I guess redundancy would be cool, but I can't imagine the chances of a good-quality NIC dying are high.
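For anyone copying a setup like joelones describes, a rough way to sanity-check it from the console is to confirm the new bridge exists with eth1 attached, then point the VM's network interface at that bridge. This is only a sketch assuming the bridge is named in0 as above; the libvirt XML fragment is illustrative, and the model/MAC details in a real VM definition will differ.

# list all bridges and their member interfaces; in0 should show eth1 attached
brctl show

# in the VM's libvirt XML, the network interface would then reference the custom bridge, e.g.:
#   <interface type='bridge'>
#     <source bridge='in0'/>
#     <model type='virtio'/>
#   </interface>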
BobPhoenix Posted September 1, 2015

I believe that for NIC bonding to have any effect, your switch needs to support bonding as well.

That is what I always believed as well. I've seen things that would lead me to believe the software has progressed enough to get around it without a switch upgrade, but I could easily have been reading too much into it.
NotYetRated Posted September 1, 2015

The bottom two options in unRAID's bonding settings (balance-tlb and balance-alb) do not require a managed switch.
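As a generic check (not specific to the unRAID GUI), the Linux bonding driver reports which mode is actually in effect and which slaves are enslaved under /proc. Assuming the bond is named bond0, as it is later in this thread, something like the following should work from the console:

# show the active bonding mode
grep "Bonding Mode" /proc/net/bonding/bond0

# list the slave interfaces and their link status
grep -A3 "Slave Interface" /proc/net/bonding/bond0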
mifronte Posted September 2, 2015 (Author)

I decided to use adaptive load balancing (mode 6, balance-alb) bonding, since there's no point having a second NIC sitting around doing nothing. At the very least, iperf3 shows no difference between having bonding disabled or enabled. On another note, I also tested iperf3 between a Windows VM and my Windows desktop, and the throughput is the same as between unRAID and my Windows desktop. So the unRAID internal bridge (br0) for VMs has no negative impact on my system.
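For anyone wanting to repeat this kind of test, a typical iperf3 run looks roughly like the following; the IP address is a placeholder for the unRAID (or VM) address on your own network.

# on the unRAID box or inside the VM: start an iperf3 server
iperf3 -s

# from the Windows desktop: run a 30-second throughput test against it
iperf3 -c 192.168.1.10 -t 30

# same test with the direction reversed (server transmits to the client)
iperf3 -c 192.168.1.10 -t 30 -R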
NotYetRated Posted September 2, 2015

I decided to use adaptive load balancing (mode 6, balance-alb) bonding, since there's no point having a second NIC sitting around doing nothing. At the very least, iperf3 shows no difference between having bonding disabled or enabled. On another note, I also tested iperf3 between a Windows VM and my Windows desktop, and the throughput is the same as between unRAID and my Windows desktop. So the unRAID internal bridge (br0) for VMs has no negative impact on my system.

That is currently what I am using too; here's hoping it actually benefits the system? haha
mifronte Posted September 2, 2015 (Author)

Monitoring Dashboard -> System Status with the dropdown set to Errors, I see packet drops on bond0 (from the second NIC). Does anyone know whether this is normal? I've been googling but have yet to reach a definite conclusion.
NotYetRated Posted September 2, 2015

Monitoring Dashboard -> System Status with the dropdown set to Errors, I see packet drops on bond0 (from the second NIC). Does anyone know whether this is normal? I've been googling but have yet to reach a definite conclusion.

Hmm. I see the same thing here, though none of my services seem to be having issues: Plex, game servers, voice server, file transfers...

bond0
Errors: 0  Drops: 72680  Overruns: 0
Errors: 0  Drops: 123  Overruns: 0
mifronte Posted September 2, 2015 (Author)

I started up four BD streams from different clients to see whether both NICs were being utilized. Sure enough, the transmitted-packet counts were almost evenly divided between the two NICs; before bonding, my 2nd NIC was not active at all (zero packets transmitted or received). However, the dropped-packet counter on the second NIC keeps going up. Surely this is not normal.

Edit: I should clarify that all the packet drops are on the receive side of the 2nd NIC (the non-active slave in the bond). There are no drops on transmit.
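For reference, the same counters can be read per interface from the console, which makes it easier to see which slave is accumulating the drops. A minimal sketch, assuming the slaves are eth0 and eth1 under bond0:

# RX/TX statistics (including drops) for the bond and each slave
ip -s link show bond0
ip -s link show eth0
ip -s link show eth1

# the bonding driver's own status: currently active slave, per-slave link failure counts
cat /proc/net/bonding/bond0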
mr-hexen Posted September 2, 2015

I believe that for NIC bonding to have any effect, your switch needs to support bonding as well.

This is true: for any sort of bonding where you effectively get 2 Gbps on a single link, you need support from the switch as well; otherwise you'd end up with two IP addresses. If your switch does not have this ability, I'd just turn bonding off.
NotYetRated Posted September 3, 2015

I found this via the dd-wrt docs:

balance-alb (adaptive load balancing): includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.

However, I do note that my logs are full of:

Sep 2 21:08:44 BigBang kernel: net_ratelimit: 7426 callbacks suppressed
Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address
Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address
Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

And I do mean thousands of entries.
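Since balance-alb works by handing out different slave MAC addresses to different peers via ARP, it can help, when chasing the "own address as source address" message, to see which MAC each interface is currently presenting. A generic check from the console (interface names assume the usual eth0/eth1 slaves):

# MAC address currently assigned to the bond and to each slave
ip link show bond0
ip link show eth0
ip link show eth1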
mifronte Posted September 3, 2015 (Author)

I don't have any errors in my syslog related to the bonding, and my second NIC does not receive an IP address (there is no record of it in my DHCP server). The dropped packets may be common when using these bonding modes. According to this document, under the "Duplicated Incoming Packets" section:

It is not uncommon to observe a short burst of duplicated traffic when the bonding device is first used, or after it has been idle for some period of time. This is most easily observed by issuing a "ping" to some other host on the network, and noticing that the output from ping flags duplicates (typically one per slave). For example, on a bond in active-backup mode with five slaves all connected to one switch, the output may appear as follows:

# ping -n 10.0.4.2
PING 10.0.4.2 (10.0.4.2) from 10.0.3.10 : 56(84) bytes of data.
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.7 ms
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=1 ttl=64 time=13.8 ms (DUP!)
64 bytes from 10.0.4.2: icmp_seq=2 ttl=64 time=0.216 ms
64 bytes from 10.0.4.2: icmp_seq=3 ttl=64 time=0.267 ms
64 bytes from 10.0.4.2: icmp_seq=4 ttl=64 time=0.222 ms

This is not due to an error in the bonding driver, rather, it is a side effect of how many switches update their MAC forwarding tables. Initially, the switch does not associate the MAC address in the packet with a particular switch port, and so it may send the traffic to all ports until its MAC forwarding table is updated. Since the interfaces attached to the bond may occupy multiple ports on a single switch, when the switch (temporarily) floods the traffic to all ports, the bond device receives multiple copies of the same packet (one per slave device). The duplicated packet behavior is switch dependent, some switches exhibit this, and some do not. On switches that display this behavior, it can be induced by clearing the MAC forwarding table (on most Cisco switches, the privileged command "clear mac address-table dynamic" will accomplish this).

Edit: Until I get a real managed switch, I will leave the bonding at balance-alb. It appears to distribute packets fairly evenly between the two NICs, and so far I have not noticed any dramatic side effects other than the dropped packets.
JonathanM Posted September 3, 2015

However, I do note that my logs are full of:

Sep 2 21:08:44 BigBang kernel: net_ratelimit: 7426 callbacks suppressed
Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address
Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address
Sep 2 21:08:44 BigBang kernel: br0: received packet on bond0 with own address as source address

And I do mean thousands of entries.

Keep an eye on the free space where the logs are kept; you could easily fill it up and end up not logging anything once it's full. Unless you've modified it, I'm pretty sure the log volume is still only 128 MB. Run df at the console and check the used and available space on the tmpfs mounted at /var/log.
NotYetRated Posted September 3, 2015

Keep an eye on the free space where the logs are kept; you could easily fill it up and end up not logging anything once it's full. Unless you've modified it, I'm pretty sure the log volume is still only 128 MB. Run df at the console and check the used and available space on the tmpfs mounted at /var/log.

Good call: already at 38 MB out of 128 MB, and I've only been bonded like this for a couple of days. Will need to investigate, I suppose!
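A quick way to do both checks at once is to look at the tmpfs usage and count how many of the log entries are that one bridge message. The grep pattern is just the string from the log lines above, and the path assumes unRAID's default syslog location:

# free/used space on the log filesystem (tmpfs mounted at /var/log)
df -h /var/log

# count how many syslog lines are the "own address as source address" message
grep -c "own address as source address" /var/log/syslog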
Zer0Nin3r Posted July 24, 2020

I'd be curious to know whether you can use the 2nd NIC to isolate Unraid traffic to a VPN connection. Since WireGuard is not currently supported natively in the Linux kernel on Unraid 6.8.3, and I am trying to be more resource-efficient with my Dockers, I was thinking of having my router connect to my VPN as a client and then route all traffic from the second NIC through the VPN. Sure, @binhex has built VPN support into some of their Dockers, but if I can free up some system resources by not downloading those dependencies and running the extra processes and RAM usage, and instead offload the VPN work to the router at the expense of VPN speed, that is something I'm willing to explore. Just trying to streamline my Dockers and have my cake and eat it too. I only want to designate specific Dockers to use the second NIC; the others can remain in the clear. I was trying to see whether you can specify a particular NIC for individual Dockers, but it does not look like you can.

I tried to get the WireGuard plugin to connect to my VPN provider, but it won't connect using the configuration files I was given. That being said, I'd be curious to know whether we can do split tunneling when the new version of Unraid comes out and WireGuard is baked into the kernel. Otherwise, I was thinking maybe I could set up one WireGuard docker and simply route the individual Dockers through that one docker for all of my VPN needs on Unraid. But I don't know how I would go about doing that, and there are other threads discussing this matter.

Anyway, thanks to anyone reading this. Just thinking aloud to see if anyone else may know the answer; until then, I'll continue searching in my free time. Oh, and if anyone knows of a "Networking IP Tables for Dummies" type of tutorial, let me know. 🙂
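On that last idea, routing one container's traffic through another container is a standard Docker capability (sharing a network namespace), so the approach is at least plausible. This is only a rough sketch with placeholder names and images, not an Unraid-specific recipe:

# start a VPN container first (image name is a placeholder; it needs NET_ADMIN to manage the tunnel)
docker run -d --name=vpn --cap-add=NET_ADMIN some/wireguard-image

# attach another container to the vpn container's network stack,
# so all of its traffic enters and leaves through that tunnel
docker run -d --name=downloader --net=container:vpn some/downloader-image

# note: with --net=container:vpn, any published ports must be declared on the vpn container, not on the downloader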