[SOLVED] Not able to trunk ports for link aggregation across multiple switches.


Chezro


I've been having a lot of trouble lately trying to get a 2-gigabit connection going from my home computer to my unRAID server, and I'm spending too much money trying to figure it out.

First to start with specs: 

Home PC: EVGA Z87 Classified, i7-4770K, 32GB RAM, SSD @ ~500MB/s read/write. Intel i217/i210 dual-gigabit Ethernet on the motherboard, with the latest Intel drivers installed.
NIC teaming set up as static.

unRAID server: Z270-AR motherboard, i5-7600, 32GB RAM, one onboard NIC, two TP-Link PCIe x1 NICs.

 

I'm using the two Intel NICs set in static link aggregation.

The two TP-Link NICs in unRAID are set to balance-rr (though I've tried all the bonding modes).

 

Switch setup is as follows: 

 

8-port TP-Link TL-SG108E with static port trunking:

 

Ports 1 & 2 trunked to the computer.

Ports 5 & 6 trunked to the D-Link smart switch.

Ports 3, 4, 7, and 8 are just living-room components (TV, Xbox, etc.).

 

D-Link smart switch:

 

D-Link 10-Port Gigabit Web Smart Switch with 2 Gigabit SFP ports (DGS-1210-10)

 

Ports 1 & 2 going to ports 5 & 6 on the TP-Link.

Ports 5 & 6 going to the unRAID server's TP-Link NICs, with the switch ports set to static.

Ports 7 & 8 going to the Netgear Nighthawk X10's link-aggregation ports, trunked as LACP.

 

When I transfer files between unRAID and my computer (SSD to SSD over Ethernet), or vice versa, Windows 10 only shows 50% usage of my 2Gb bandwidth. unRAID only shows traffic on eth0; eth1 traffic is negligible.
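
(A minimal sketch for quantifying that eth0/eth1 split on the unRAID side, in case anyone wants to reproduce the observation: it samples the kernel's per-interface byte counters twice during a transfer. The interface names are assumptions; adjust them to match your server.)

```python
#!/usr/bin/env python3
# Samples /sys/class/net/<iface>/statistics twice and prints the per-NIC
# rate, so you can see exactly how a transfer splits across bond members.
import time

IFACES = ["eth0", "eth1"]  # assumed bond member names
INTERVAL = 5               # seconds; start a big copy while this runs

def counters(iface):
    base = f"/sys/class/net/{iface}/statistics"
    with open(f"{base}/rx_bytes") as rx, open(f"{base}/tx_bytes") as tx:
        return int(rx.read()), int(tx.read())

before = {i: counters(i) for i in IFACES}
time.sleep(INTERVAL)
after = {i: counters(i) for i in IFACES}

for i in IFACES:
    rx = (after[i][0] - before[i][0]) / INTERVAL / 1e6
    tx = (after[i][1] - before[i][1]) / INTERVAL / 1e6
    print(f"{i}: rx {rx:.1f} MB/s, tx {tx:.1f} MB/s")
```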

 

I ordered this: D-Link 20-Port SmartPro Stackable Switch with 2 Gigabit SFP ports and 2 10GbE SFP+ ports (DGS-1510-20). I've been eyeing it for a while, and it has 10GbE ports that I want to connect to my Netgear and, in the future, to unRAID through a 10GbE NIC.

 

 

Any ideas on where to start troubleshooting this would be appreciated. I'm at a loss.

 

I'm not sure where the issue is. The switching is built this way because of how equipment is spread throughout the house, but I'd like to take advantage of the aggregate bandwidth for file transfers and iSCSI.

 

So far I've tested SMB and iSCSI, and I installed SMB Direct in Windows 10.


I had problems getting round-robin working in unRAID and eventually gave up. I could never figure out if it was the switch or unRAID. I could get all the other bonding modes to work, no problem.

 

Now I use a 10GbE Mellanox card and a Quanta 48-port + 2x 10GbE switch. I have a little more tuning to do, but getting it all going was fairly easy.

 

Since you ordered the 10GbE-capable switch, spend another 40-60 bucks, buy two compatible used 10GbE cards with DAC cables, and call it a day.

 

Save yourself the frustration.


I was able to get my unRAID host trunked (LACP round-robin) but also gave up on it. First of all, if you are expecting to see 2Gb speeds end-to-end, forget it. The LACP algorithm decides which physical port to use based on MAC or IP addressing, so a transmission pins to a single gigabit link and stays there. If you have multiple hosts connecting, this can be advantageous (think more lanes on the highway, not a faster speed limit), but in my experience, for a single host-to-server connection you will not see an increase in throughput. In fact, my testing showed file-copy performance slightly degraded in a trunking configuration. I was using an HP switch and trunking on both the client and server ends of the connection. Trunking provides link redundancy, but not much more, IMO.
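
To make the pinning concrete, here's a toy sketch in the spirit of a layer2 transmit hash (XOR of the two MAC addresses, modulo the number of member links). It's not any vendor's exact code, and the MACs are hypothetical:

```python
NUM_LINKS = 2  # members in the trunk

def xmit_hash_layer2(src_mac: str, dst_mac: str) -> int:
    """Pick a member link from the two MAC addresses, layer2-style."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % NUM_LINKS

pc     = "aa:bb:cc:00:00:01"  # hypothetical client MAC
server = "aa:bb:cc:00:00:02"  # hypothetical server MAC

# Every frame between this pair hashes identically, so every transfer
# between them rides the same single gigabit link:
print(xmit_hash_layer2(pc, server))  # same link index every time
print(xmit_hash_layer2(server, pc))  # XOR is symmetric: same link back
```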


Wow. I can definitely get my unRAID to communicate with the switch at 10Gb, but I'd like to get it working through gigabit Ethernet for now if possible, if only to make sure it works. lol. I wonder if that would be enough to take care of things on unRAID's side so that my computer can get its full 2Gb of bandwidth. I already have Cat6 running through the house... I just haven't figured out how that works with SFP+ yet.

 

I really would like to trunk ports over Ethernet, but so far I think I'll have to settle for 1Gb performance. Maybe I'll install an embedded version of Windows or something on the unRAID server and see if that works using NIC teaming.

 

Could anyone point me toward topics related to getting trunking to work on unRAID?

 

Thank you for all the help. I appreciate it. 


There's currently no way to get a single file transfer between Windows clients and unRAID to use link aggregation. It works between unRAID and other Linux clients; with Windows it should work in the future, once Samba has better support for SMB Multichannel.


My configuration was pretty simple:

 

unRAID host set to Bonding: yes, with the mode set to balance-rr (0).

Network switch ports configured for trunking: admin mode, static (I don't believe dynamic mode would negotiate).

 

The link comes up bonded, but the network switch shows one interface primarily in use.

 

From what I read, your configuration sounds like it is working. 50% of a 2Gb link = 1Gb of throughput. You will not exceed the speed of a single member link (1Gb) between these hosts, and your unRAID network statistics are showing exactly that. Google "2Gb trunk throughput" for more information. The bonded interface is functioning properly.

 

1 minute ago, johnnie.black said:

I didn't say 10Gbit doesn't work, I use it myself, I said link aggregation for a single file transfer won't work, it does work for multiple simultaneous transfers.

That's true for me too. I have an Intel 4-port gigabit Ethernet adapter and trunked it on the ESXi side plus on my main HP switch. The only benefit is for multiple clients connecting to the server at once: since the link inside ESXi is 10Gbit, if up to four clients are connected to unRAID, each one gets 1Gbit.
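
If anyone wants to see the one-stream vs. many-streams difference without lining up multiple physical clients, here's a crude sketch of a throughput test (not iperf; the port number is an arbitrary assumption). Run it with `server` on one host, then `client <ip> <streams>` on the other. Whether parallel streams actually spread depends on the hash policy: with layer2 they still pin to one link, while layer3+4 can spread them.

```python
#!/usr/bin/env python3
# Crude one-vs-many TCP stream throughput test (a sketch, not iperf).
import socket, sys, threading, time

PORT = 5201            # assumed free port
CHUNK = b"\0" * 65536  # 64 KiB writes
SECONDS = 10

def serve():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def drain(conn):
    # Discard everything the client sends.
    while conn.recv(65536):
        pass

def client(host, streams):
    sent = [0] * streams
    def push(i):
        s = socket.create_connection((host, PORT))
        deadline = time.time() + SECONDS
        while time.time() < deadline:
            s.sendall(CHUNK)
            sent[i] += len(CHUNK)
        s.close()
    threads = [threading.Thread(target=push, args=(i,)) for i in range(streams)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"{streams} stream(s): {sum(sent) * 8 / SECONDS / 1e9:.2f} Gbit/s")

if sys.argv[1] == "server":
    serve()
else:
    client(sys.argv[1], int(sys.argv[2]))
```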

1 hour ago, johnnie.black said:

I didn't say 10Gbit doesn't work, I use it myself, I said link aggregation for a single file transfer won't work, it does work for multiple simultaneous transfers.

 

I didn't experience this. When I started copying another file at the same time, my transfer speeds were cut in half. Googling 2Gb trunking now.

 

Is there any solution out there that could achieve the results I desire? 


To achieve a faster point-to-point connection you need a faster pipe -> 10Gb.

 

Just keep in mind that you will need this connectivity all the way through the client-to-host chain. Otherwise, any slower links in between (including your switch's capabilities) could easily make your 10Gb investment a significant waste of hardware dollars.


And regarding your test with two client PCs, this is another part of the trunking issue. The trunk decides which physical port to use, based on an algorithm over the source and destination addresses. So, very likely, both of your test PCs have landed on the same physical connection. You might be able to change an IP address to see if it changes the link-usage behavior.
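
For example, with a toy layer2+3-style pick (real implementations also fold the MACs in; the addresses here are hypothetical), two test PCs can easily hash to the same member, and changing one octet moves the flow:

```python
NUM_LINKS = 2  # members in the trunk

def pick_link(src_ip: str, dst_ip: str) -> int:
    # XOR the last octets, modulo the trunk width -- just the idea,
    # not any switch's exact algorithm.
    src = int(src_ip.rsplit(".", 1)[1])
    dst = int(dst_ip.rsplit(".", 1)[1])
    return (src ^ dst) % NUM_LINKS

server = "192.168.1.10"
print(pick_link("192.168.1.20", server))  # PC #1 -> link 0
print(pick_link("192.168.1.22", server))  # PC #2 -> link 0 again (collision)
print(pick_link("192.168.1.21", server))  # change one octet -> link 1
```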

5 hours ago, Chezro said:

I've been having a lot of trouble lately trying to get a 2-gigabit connection going from my home computer to my unRAID server, and I'm spending too much money trying to figure it out. [...]

Windows 7 and above do not support LACP... you have to go to Windows Server, so you will only get single-link speeds. Linux supports it, and I can confirm it works as it is supposed to. I run Xubuntu on my PC with a dual-gigabit Intel card and the same TP-Link switch you have, along with a dual-gigabit Intel card in unRAID. The configuration is balance-rr, and I got dual-gigabit speeds when I ran tests transferring RAM drive to RAM drive (PC to unRAID using tmpfs); otherwise the speeds are whatever my cache drive can write at, which is around 140-170MB/s (from the PC's SSD to unRAID's spinner cache drive). Speeds vary because my cache holds a VM and eleven Dockers, so it always has some reads/writes going on.
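
For anyone wanting to repeat that kind of test with the disks taken out of the picture, here's a sketch that times a large sequential write to a share mount. The mount point is an assumption, and ideally the share is backed by tmpfs on the server side so only the network is measured:

```python
#!/usr/bin/env python3
# Times a 2 GiB sequential write to a mounted share to measure the wire,
# not the disks. DEST is an assumed mount point -- change it to yours.
import os, time

DEST = "/mnt/unraid/speedtest.bin"  # assumed share mount (tmpfs-backed)
SIZE = 2 * 1024**3                  # 2 GiB total
CHUNK = b"\0" * (4 * 1024**2)       # 4 MiB per write

start = time.time()
with open(DEST, "wb") as f:
    written = 0
    while written < SIZE:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # force the data onto the wire
elapsed = time.time() - start
os.remove(DEST)

print(f"{SIZE / elapsed / 1e6:.0f} MB/s "
      f"({SIZE * 8 / elapsed / 1e9:.2f} Gbit/s)")
```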


https://blogs.technet.microsoft.com/josebda/2012/06/28/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0/

I imagine this might work well in the way I want. It seems most link aggregation doesn't behave the way I thought. However, I did notice that iSCSI off a FreeNAS VM, combined with an SMB transfer, did use my entire trunk. The iSCSI LUN is on a separate network on the same LAN, no VLAN.

 

It seems the load is spread evenly as long as the device is considered a different destination according to the hash that's calculated.

 

The short of it is that I need a 10Gb NIC, unless I go wireless... and buy an 802.11ad wireless device for some reason. What even needs that much wireless bandwidth? ^___^

