Two 802.3ad LAGs with 4 NICs [Help]



So I am trying to get it where 2 of my NICs run a LAG to the NAS for storage, and then the other 2 NICs run their own LAG to support the VMs.

 

So you are probably asking yourself why I need two 2Gb connections. For one, while running backups of the NAS and transferring large files, I am maxing out the 1Gb connection. As for the VMs, they need the 2Gb of bandwidth for IP camera intake and other automation.

 

I know how to configure my switch for LAG.


For the main pair, you can set that up under Settings using bonding mode=4 (802.3ad).

For the secondary pair, you probably want to manually add commands to your go script to create the bond, then the bridge.

Kinda like:

/sbin/ip link add dev bond1 type bond mode 802.3ad  # match the 802.3ad (LACP) LAG on the switch
# /sbin/ip link set dev bond1 address <mac address>  # Optional; to control the MAC address if you need this
/sbin/ip link set dev bond1 up
/sbin/ip link set eth2 down  # slaves must be down before enslaving
/sbin/ip link set eth2 master bond1
/sbin/ip link set eth3 down
/sbin/ip link set eth3 master bond1
/sbin/ip link set eth2 up
/sbin/ip link set eth3 up

/usr/sbin/brctl addbr br1
/usr/sbin/brctl addif br1 bond1
/sbin/ip link set dev br1 up  # don't forget this, or the bridge stays DOWN
# add bridge tweaks like stp, aging, and forwarding here

# /sbin/ip addr add <bridge-ip>/<prefix> brd + dev br1  # Optional; adds an additional IP for unRAID to confuse you with

Then set the KVM guest to use this bridge (br1) rather than virbr0 or br0.
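If the VM is libvirt-managed, pointing it at br1 is just a small stanza in the domain XML (a sketch; the model/driver settings will vary with your guest):

```xml
<interface type='bridge'>
  <source bridge='br1'/>
  <model type='virtio'/>
</interface>
```

You can apply it with `virsh edit <vm-name>` or through the unRAID VM settings page, which edits the same XML for you.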

 

The code might not be completely correct, as I haven't messed with manual bonds in a long while.

3 weeks later...

The error about br1 - what's in your go file? And why does your UI report two bonds? Could you also show the results of

# ip link

 

Mine shows

root@MediaStore:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:e0:b6:17:76:84 brd ff:ff:ff:ff:ff:ff
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 0a:ec:12:78:a8:8a brd ff:ff:ff:ff:ff:ff
40: vmbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
    link/ether 72:7e:dc:19:1f:f3 brd ff:ff:ff:ff:ff:ff


Here you go.

ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
8: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
9: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
13: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff
14: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
    link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff

It might show that because I have my switch configured for LAG on eth2 & eth3.

2-8-2016-0912.fw.png

2-8-2016-0913.fw.png

/sbin/ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
8: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
9: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6a brd ff:ff:ff:ff:ff:ff
12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UP mode DEFAULT
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
13: br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
    link/ether 00:25:90:f5:18:6c brd ff:ff:ff:ff:ff:ff
14: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff
15: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
    link/ether 52:54:00:48:b5:15 brd ff:ff:ff:ff:ff:ff
17: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UNKNOWN mode DEFAULT qlen 500
    link/ether fe:54:00:3f:19:a9 brd ff:ff:ff:ff:ff:ff


That worked, good sir! Thank you.

 

Also, something I have noticed is that I am still not reaching the full transfer rate. When transferring a 32GB file from the SSD on the NAS to the SSD on my computer, I am getting a 113MB/second transfer rate.

 

I have LAG set up on my computer as well.

 

Side note: transferring from an SSD (Intel 730 Series 450GB, SATA 6Gb/s) to an SSD (Intel 750 Series 400GB, PCIe) on my computer averages a 505MB/second transfer rate.


I do believe that's normal. Unless your PC also has multiple uplinks with LACP support, the PC will be limited to around 125MB/s tops (including any protocol overhead).

Edit: You do have LAG, but I'm not sure a Windows SMB client talking to a Linux Samba server can saturate the 2Gbps link. Maybe someone with actual experience can tell us.

Your server, on the other hand, should be able to handle two (or three) clients at maximum speed simultaneously now.
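The single-transfer cap comes from how the bonding driver assigns each conversation to one slave link. Here's a rough sketch of the default "layer2" transmit hash (an assumption; the actual behavior depends on your xmit_hash_policy setting), using MACs taken from the ip link dumps above:

```python
def layer2_slave(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    """Sketch of the Linux bonding 'layer2' xmit hash: XOR of the
    last octet of the source and destination MAC, modulo slave count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_slaves

# A single PC <-> server conversation always hashes to the same slave,
# so one transfer can never use more than one 1Gb link: ~125MB/s raw,
# ~113MB/s after Ethernet/TCP overhead -- exactly the rate reported above.
pc_mac     = "00:25:90:f5:18:6a"
client_mac = "00:e0:b6:17:76:84"
print(layer2_slave(pc_mac, client_mac, 2))  # same slave index every time
```

With multiple clients (different MAC pairs), the hashes spread across both slaves, which is why the server side can still serve two or three clients at full speed.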

Link to comment


It has Intel NICs, and I am running LACP with it as well.

Link to comment

 

Yeah, and it's working that way. When doing 2 different transfers to 2 different disks, they both transfer around 80MB/s to 90MB/s, so in total I'm utilizing about 70%. That is more than 1Gb worth of transfer, so LACP is working. But there has to be a way to get more transfer out of this.
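The ~70% figure checks out as back-of-the-envelope arithmetic (assuming two ~85MB/s streams, the midpoint of the reported range):

```python
link_MBps = 1_000_000_000 / 8 / 1_000_000  # one 1Gb link = 125.0 MB/s raw
aggregate = 2 * link_MBps                  # 2Gb LAG = 250.0 MB/s raw
observed  = 85 + 85                        # two concurrent ~85 MB/s transfers
print(round(observed / aggregate, 2))      # -> 0.68, i.e. about 70% utilization
```

The missing ~30% is mostly protocol overhead plus whatever each single stream loses to SMB/disk latency, since each flow is still pinned to one physical link by the hash.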

