demonmaestro Posted January 10, 2016
So I am trying to get two of my NICs running a LAG to the NAS for storage, and then the other two NICs running their own LAG to support the VMs. You are probably asking yourself why I need two 2Gb connections. For one, while running backups of the NAS and transferring large files I am maxing out the 1Gb connection. As for the VMs, they need the 2Gb of bandwidth for IP camera intake and other automation. I already know how to configure my switch for LAG.
ken-ji Posted January 13, 2016
For the main pair, you can set that up under Settings using bonding mode=4. For the secondary pair, you probably want to manually add commands to your go script to create the bond, then the bridge, kinda like:

/sbin/ip link add dev bond1 type bond
# /sbin/ip link set dev bond1 addr <mac address>   # Optional; to control the MAC address if you need this
/sbin/ip link set dev bond1 up
/sbin/ip link set eth2 up master bond1
/sbin/ip link set eth3 up master bond1
/usr/sbin/brctl add br1
/usr/sbin/brctl addif br1 bond1
# add bridge tweaks like stp, aging, and forwarding
# /sbin/ip addr add <bridge-ip>/<netmask> brd + dev br1   # Optional; it adds an additional IP for unRAID to confuse you with

Then set the KVM to use this bridge (br1) rather than virbr0 or br0. Code might not be completely correct as I haven't messed with manual bonds in a long while.
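After running commands like the above (or rebooting), the bond can be sanity-checked from the console. This is a generic sketch, not something from the thread; `bond1` matches the name used above, and the mode line is worth checking, since `ip link add ... type bond` without an explicit mode creates the bond in the driver's default mode rather than 802.3ad/LACP:

```shell
# Show the bonding driver's view of bond1: mode, slaves, and link state
cat /proc/net/bonding/bond1

# Confirm both NICs were enslaved to the bond
/sbin/ip link show master bond1

# List bridges and their attached interfaces
/usr/sbin/brctl show
```

If the mode reported is not "IEEE 802.3ad Dynamic link aggregation" while the switch ports are configured for LACP, the link may not aggregate as expected.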
demonmaestro Posted January 16, 2016
Thanks for the script. Can anyone tell me where that script goes? The place where I thought it went I cannot find anymore on the main page. I'm probably just losing my mind.
BRiT Posted January 16, 2016
/boot/config/go
That's the script that was referenced. You should have one already; otherwise emhttp won't be started.
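For context, the go file is a plain shell script that unRAID runs at boot; a stock one does little more than start emhttp, and custom commands like the bond setup above get appended after it. A rough sketch of what a typical go file looks like (contents may differ slightly between unRAID versions):

```shell
#!/bin/bash
# Start the unRAID management web server
/usr/local/sbin/emhttp &

# Custom commands (e.g. the bond1/br1 setup) go below this line
```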
demonmaestro Posted January 16, 2016
I had gone to that file over PuTTY with nano and put the commands in, but shouldn't br1 show in the drop-down option for networking on the VMs?
ken-ji Posted January 17, 2016
Did you reboot the server, or manually run the commands on the command line?
demonmaestro Posted January 17, 2016
I rebooted the server, but nothing showed up.
ken-ji Posted January 18, 2016
Odd. Using PuTTY, what does /usr/sbin/brctl show give you?
demonmaestro Posted February 8, 2016
I noticed this on the console: I am showing two bonds when I type /usr/sbin/brctl show.
ken-ji Posted February 8, 2016
The error about br1: what's in your go file? And why does your UI report two bonds? Could you also show the results of:
# ip link
Mine shows:

root@MediaStore:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 0a:ec:12:78:a8:8a brd ff:ff:ff:ff:ff:ff
40: vmbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
    link/ether 72:7e:dc:19:1f:f3 brd ff:ff:ff:ff:ff:ff
demonmaestro Posted February 8, 2016
Here you go.

ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
8: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
9: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
13: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff
14: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
    link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff

It might show that because I have my switch configured for LAG on eth2 & eth3.
ken-ji Posted February 8, 2016
Like I said, it's been a while. Change
/usr/sbin/brctl add br1
to
/usr/sbin/brctl addbr br1
in your go file. The last line you can run manually, I think, and it should work without restarting...
demonmaestro Posted February 11, 2016
That got br1 to show in the drop-down; however, now the VMs are showing no connectivity.
ken-ji Posted February 11, 2016
Did you assign an IP to br1? In any case, please post the output of:
# /sbin/ip link
Edit: Just realized that for VMs, br1 doesn't need an IP at all.
demonmaestro Posted February 12, 2016 Author Share Posted February 12, 2016 /sbin/ip link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT link/ipip 0.0.0.0 brd 0.0.0.0 3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT link/gre 0.0.0.0 brd 0.0.0.0 4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT link/ipip 0.0.0.0 brd 0.0.0.0 6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000 link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff 7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000 link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff 8: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000 link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff 9: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000 link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff 10: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff 11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff 12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP mode DEFAULT link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff 13: br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff 14: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff 15: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500 link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff 17: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UNKNOWN mode DEFAULT qlen 500 link/ether fe:54:00:3f:19:a9 brd ff:ff:ff:ff:ff:ff Quote Link to comment
ken-ji Posted February 12, 2016
OK, easy enough... you'll need to enable br1 (the interface is still marked down) with:
/sbin/ip link set dev br1 up
and you might also need to enable promiscuous mode on bond1 with:
/sbin/ip link set dev bond1 promisc on
Try them manually, then add them to the go script.
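Putting the thread's corrections together, the secondary-bond section of /boot/config/go ends up looking something like this. This is a sketch assuming the same eth2/eth3 interface names used above; adjust for your own hardware:

```shell
# Create the second bond and enslave the two spare NICs
/sbin/ip link add dev bond1 type bond
/sbin/ip link set dev bond1 up
/sbin/ip link set eth2 up master bond1
/sbin/ip link set eth3 up master bond1

# Build the bridge for the VMs and attach the bond to it
# (note: addbr, not add)
/usr/sbin/brctl addbr br1
/usr/sbin/brctl addif br1 bond1

# Bring the bridge up and put the bond in promiscuous mode
# so VM traffic passes through
/sbin/ip link set dev br1 up
/sbin/ip link set dev bond1 promisc on
```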
demonmaestro Posted February 14, 2016
That worked, good sir! Thank you. Something else I have noticed is that I am still not hitting the full transfer rate. Transferring a 32GB file from an SSD on the NAS to an SSD on my computer, I am getting a 113MB/s transfer rate, and I have LAG set up on my computer as well.
Side note: transferring from SSD (Intel 730 Series 450GB, SATA 6Gb/s) to SSD (Intel 750 Series 400GB, PCIe) within my computer averages 505MB/s.
ken-ji Posted February 14, 2016
I do believe that's normal. Unless your PC also has multiple uplinks with LACP support, the PC will be limited to around 125MB/s tops (including any protocol overhead).
Edit: You do have LAG, but I'm not sure a Windows SMB client talking to a Linux Samba server can saturate the 2Gbps link. Maybe someone with actual experience can tell us. Your server, on the other hand, should now be able to handle two (or three) clients at maximum speed simultaneously.
demonmaestro Posted February 14, 2016
It has Intel NICs and I am running LACP on it as well.
ken-ji Posted February 14, 2016
Just saw this, and it might be the answer: https://www.reddit.com/r/sysadmin/comments/3jx8bt/isnt_lacp_supposed_to_increase_overall_effective/
demonmaestro Posted February 14, 2016
Yeah, and it's working that way. When doing two different transfers to two different disks, they both transfer at around 80MB/s to 90MB/s, so in total I'm utilizing about 70%. That is more than 1Gb worth of transfer, so LACP is working. But there has to be a way to get more transfer out of this.
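The behavior described here is how LACP hashing works: each flow is pinned to a single slave link, so one file copy never exceeds one link's speed. On the Linux bonding side, the transmit hash policy at least controls how flows are spread across slaves. A generic sketch, not something from the thread:

```shell
# Inspect the current transmit hash policy for bond1
cat /sys/class/net/bond1/bonding/xmit_hash_policy

# Hash on layer 3+4 (IP addresses plus TCP/UDP ports) so that multiple
# flows between the same two hosts can land on different slave links;
# a single flow still uses only one link
echo layer3+4 > /sys/class/net/bond1/bonding/xmit_hash_policy
```

Note that the switch applies its own hash policy on the return path, so both ends need to distribute flows for the aggregate bandwidth to show up in both directions.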
BRiT Posted February 14, 2016
The only way when unRAID is involved is for it to support SMB 3 multichannel. Currently that doesn't seem to be supported in the Unix versions (Samba), only between Windows versions.
demonmaestro Posted February 14, 2016
Well, the same thing happens transferring to my Windows Server 2012: a 91MB/s transfer speed.
JorgeB Posted February 14, 2016
For Windows SMB multichannel you don't need LACP, just more than one NIC connected to a dumb switch. The only requirement is that the link speeds for each computer are all the same, and it only works between Windows 8/10 and Server 2012.