Ndgame Posted January 18, 2016
Is there a guide for bonding 2 NICs in unRAID 6.1.7? I have LAG enabled on my managed Netgear switch, and I have 2 Intel Pro cards installed. At times I have 5 different TVs in the house streaming, and I am trying to team the NICs for more bandwidth. Any instruction on how to do this from the GUI would be greatly appreciated. Thanks in advance.
jonp Posted January 18, 2016
Go under the Settings -> Network Settings tab and turn on help. There is a section about enabling bonding and the bonding modes. You should be able to figure it out from there.
Ndgame Posted January 19, 2016
I did this, and the help did not explain all the different mode options or which one is better to use. Can anyone explain the difference between them, or tell me which to use? On Server 2012 I just teamed the NICs together to get a 4 Gb connection, along with the matching settings on my switch. All I want to do is create a 2 Gb pipe by teaming 2 NICs.
ken-ji Posted January 19, 2016
If the teaming enabled on your switch is LACP, pick mode=4 (802.3ad) in the unRAID network settings for the bond. Otherwise, you'll probably have to trial-and-error it to see which mode works for you and which ones cause a loop. My switch supports LACP, so I'm using mode=4.
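For reference, the mode numbers map to the standard Linux bonding driver modes. A minimal sketch in plain POSIX shell — the function name `bond_mode_name` is just for illustration, but the mode-number-to-name mapping is the kernel bonding driver's standard one:

```shell
# Map a Linux bonding driver mode number to its name, per the kernel
# bonding driver documentation. Reference sketch, not unRAID-specific.
bond_mode_name() {
    case "$1" in
        0) echo "balance-rr" ;;      # round-robin across slaves
        1) echo "active-backup" ;;   # one active slave, failover only
        2) echo "balance-xor" ;;     # hash-based transmit balancing
        3) echo "broadcast" ;;       # transmit on all slaves
        4) echo "802.3ad" ;;         # LACP; requires switch support
        5) echo "balance-tlb" ;;     # adaptive transmit load balancing
        6) echo "balance-alb" ;;     # adaptive load balancing (tx + rx)
        *) echo "unknown" ;;
    esac
}

bond_mode_name 4   # prints 802.3ad
```

Only mode 4 needs the switch configured for LACP; the others either work with any switch or, like balance-rr, need static link aggregation on the switch side.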
Ndgame Posted January 19, 2016
Thank you for the response. I have another question: where can I confirm that it even sees my second NIC?
Wimpie Posted January 19, 2016
It was some time ago, but I seem to remember I also needed to enable br0 (it worked and I didn't want to investigate any further at that time...). See network settings (capture1.png). Configure the bond in the switch (capture2-4.png; I use a TP-Link). (I have 2 servers.) Hope this helps...

CLI output for ifconfig (on the unRAID server):
**********************************
bond0: flags=5443<UP,BROADCAST,RUNNING,PROMISC,MASTER,MULTICAST>  mtu 1500
        ether 0c:c4:7a:05:46:5e  txqueuelen 0  (Ethernet)
        RX packets 5395740  bytes 465190370 (443.6 MiB)
        RX errors 0  dropped 3  overruns 0  frame 0
        TX packets 9117279  bytes 1811085194 (1.6 GiB)
        TX errors 0  dropped 2  overruns 0  carrier 0  collisions 0

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.87.51.78  netmask 255.255.255.0  broadcast 10.87.51.255
        ether 0c:c4:7a:05:46:5e  txqueuelen 0  (Ethernet)
        RX packets 5386191  bytes 359554170 (342.8 MiB)
        RX errors 0  dropped 71390  overruns 0  frame 0
        TX packets 9033136  bytes 1805955114 (1.6 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.42.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether be:d8:d9:1d:8f:33  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2499  bytes 282510 (275.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 0c:c4:7a:05:46:5e  txqueuelen 1000  (Ethernet)
        RX packets 41411  bytes 3192730 (3.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2348  bytes 290940 (284.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xdf560000-df57ffff

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 0c:c4:7a:05:46:5e  txqueuelen 1000  (Ethernet)
        RX packets 172385  bytes 18834907 (17.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 310391  bytes 78224131 (74.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xdf540000-df55ffff

eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 0c:c4:7a:05:46:5e  txqueuelen 1000  (Ethernet)
        RX packets 4935800  bytes 421827537 (402.2 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 37465  bytes 2117020 (2.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xdf520000-df53ffff

eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 0c:c4:7a:05:46:5e  txqueuelen 1000  (Ethernet)
        RX packets 246144  bytes 21335196 (20.3 MiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 8767075  bytes 1730453103 (1.6 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xdf500000-df51ffff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 0  (Local Loopback)
        RX packets 430  bytes 80546 (78.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 430  bytes 80546 (78.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
***********************************
ken-ji Posted January 19, 2016
I think /sbin/ip link gives a better result than ifconfig:

MediaStore:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 46:be:f0:f3:73:2a brd ff:ff:ff:ff:ff:ff

On my TP-Link, I just enabled LACP on the specific ports from the LACP Config tab and picked passive, as this will allow the server to have a network connection in case you are manually debugging or similar.

Then a quick check on the server:

root@MediaStore:~# ethtool bond0
Settings for bond0:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 2000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes

Mine is 2Gbps since I only have two linked ports; you should be getting 4Gbps in your case.

Finally, my switch logs this:

98  2016-01-16 16:36:44 LAG  level_6 Added new Link Aggregation Group 1, members: Port 7-8.
99  2016-01-16 16:36:40 Link level_3 port 7, changed state to up.
100 2016-01-16 16:36:39 Link level_3 port 8, changed state to up.
Wimpie Posted January 20, 2016
Thank you, ken-ji, for posting your settings. I got to these settings (with lots of frustration) through trial and error. So when it finally worked, I just left those settings as they were...

Do you know why br0 needs to be enabled for this to work? I (as a non-expert) find it very strange that the IP is granted to the br0 interface; I would expect the IP address to go to the bond0 interface.

Also, what do you mean by this: "and picked passive - as this will allow the server to have a network connection in case you are manually debugging or similar"? Thanks!
ken-ji Posted January 21, 2016
I use passive LACP rather than active LACP, as this should allow the switch to disable the link aggregation if the server is not linking the ports together (i.e., during server startup, or when booted into an alternate OS, recovery OS, etc.).
Wimpie Posted January 22, 2016
"I use passive LACP rather than active LACP, as this should allow the switch to disable the link aggregation if the server is not linking the ports together (i.e., during server startup, or when booted into an alternate OS, recovery OS, etc.)."
Thanks, I'll keep it in mind.
klamath Posted January 22, 2016
You can get more info by looking at /proc/net/bonding/bond0. I'm doing LACP with layer 2+3 load balancing:

root@orion:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 2
        Number of ports: 2
        Actor Key: 9
        Partner Key: 1003
        Partner Mac Address: 1c:de:a7:30:aa:03

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:d5:17:34
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 0
    port key: 9
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 1
    oper key: 1003
    port priority: 1
    port number: 52
    port state: 61

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:d5:17:35
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 0
    port key: 9
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 1
    oper key: 1003
    port priority: 1
    port number: 51
    port state: 61
root@orion:~#
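As a quick sanity check, the per-slave "Speed:" lines in /proc/net/bonding/bond0 can be summed to estimate the aggregate bond bandwidth. A sketch in POSIX shell — `bond_total_mbps` is a made-up helper name, and it assumes the kernel's usual `Speed: N Mbps` line format for slaves that are up:

```shell
# Sum the per-slave "Speed: N Mbps" lines from bonding status text on
# stdin. Down slaves report "Speed: Unknown" and are skipped by the
# numeric match.
bond_total_mbps() {
    awk '/^Speed: [0-9]+ Mbps/ { total += $2 } END { print total + 0 }'
}

# Usage against a sample with two 1 Gbps slaves:
printf 'Speed: 1000 Mbps\nSpeed: 1000 Mbps\n' | bond_total_mbps
# prints 2000

# On a live system you would feed it the real file:
# bond_total_mbps < /proc/net/bonding/bond0
```

With a working two-port 802.3ad bond like the one above, this should agree with the 2000Mb/s figure ethtool reports for bond0.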
dereitz Posted June 9, 2022 (edited)
I recently used this thread as a guide to enable link aggregation on my unRAID box. I just wanted to say "thank you" to everyone who contributed to this thread, as it was extremely helpful!
Edited June 9, 2022 by dereitz