tocho666 Posted February 11, 2016
I'm new to NAS and networking, so bear with me. I would like to set up a system with 4x4TB HDDs, a 1x500GB SSD cache, an 8-core Intel CPU, an LSI 9260-8i RAID controller, and a 4-port gigabit PCIe NIC. I'd connect the four gigabit Ethernet ports to my gigabit switch and out to my PC, which also has a 4-port gigabit NIC. Assuming I set up the link aggregation correctly, could I get write speeds of up to 400 MB/s? Has anyone tried this setup?
JorgeB Posted February 11, 2016
Afaik, and I'm referring to a single TCP transfer, i.e. copying a single file using more than one NIC: it's possible Linux to Linux using balance-rr together with a smart switch with trunk support. It's also possible Windows to Windows (8/10 and Server 2012) with SMB3 multichannel and any switch. It's not possible at the moment between Windows and Linux; maybe in the near future, when Samba 4 supports SMB multichannel.
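(Editor's note: later Samba releases did add experimental SMB3 multichannel support. A minimal smb.conf sketch, assuming a Samba version new enough to include the option; the share name and path are placeholders:)

```ini
[global]
    # Experimental in early versions that shipped it; verify with your release notes.
    server multi channel support = yes

[share]
    path = /mnt/user/share
    read only = no
```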
mr-hexen Posted February 11, 2016
You should be able to bond 4 on both ends using the 802.3ad standard. But your switch would have to be a "smart" switch, meaning it has its own user interface for configuring it. https://en.wikipedia.org/wiki/Link_aggregation#Linux_bonding_driver
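(Editor's note: for reference, an 802.3ad bond can be sketched with iproute2 roughly as below. This is a configuration sketch only; interface names eth0/eth1, the bond name, and the address are assumptions, the commands need root, and the matching switch ports must be configured for LACP:)

```shell
# Create an 802.3ad (LACP) bond and enslave two NICs to it.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
# Assign an address to the bond, not the member interfaces.
ip addr add 192.168.1.10/24 dev bond0
```

Note that even with the bond up, a single TCP stream still hashes onto one member link in 802.3ad mode; aggregation helps with multiple concurrent streams.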
tocho666 (Author) Posted February 11, 2016
So even if I apply 802.3ad on the unRAID server as well as my switch, I cannot use the full bandwidth of 4Gbps when transferring a single file from one client?
JorgeB Posted February 11, 2016
Not from Windows to Linux (or Linux to Windows).
tocho666 (Author) Posted February 11, 2016
What if I can afford a 10Gb LAN NIC? Would I see an exponential increase in write speed from Windows to Linux?
JorgeB Posted February 11, 2016
Yes, limited by the HDD/SSD write speed.
gundamguy Posted February 11, 2016
That would work, because it's not relying on SMB 3.0 Multichannel. Hopefully Samba will get SMB 3.0 Multichannel support pushed out soon.
gubbgnutten Posted February 11, 2016
No, you would not see an exponential increase in write speed, more like a linear increase (until something else becomes a bottleneck). If you get one 10Gb NIC for the server and one for the client and connect them either directly or through a 10Gb-capable switch, writes would likely be as fast as your cache SSD will allow (provided your source drive can read that fast). Writes directly to the array (not using the cache drive), on the other hand, are unlikely to be any faster than with 1Gb networking.
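(Editor's note: to put rough numbers on that "linear increase", a quick back-of-the-envelope sketch of the ceilings involved; the 500 MB/s SSD figure is an assumption, and protocol overhead is ignored:)

```shell
# Link ceilings in MB/s (decimal units: 1000 Mb/s per 1 Gb/s, 8 bits per byte).
gbe_mbs=$((1000 / 8))         # ~125 MB/s for a single gigabit link
tengbe_mbs=$((10000 / 8))     # ~1250 MB/s for 10GbE
ssd_mbs=500                   # assumed sequential write speed of a SATA cache SSD

echo "1GbE ceiling:  ${gbe_mbs} MB/s"
echo "10GbE ceiling: ${tengbe_mbs} MB/s"
echo "Cache SSD:     ${ssd_mbs} MB/s"
# With 1GbE the network is the bottleneck (~125 < 500);
# with 10GbE the cache SSD becomes the bottleneck (500 < ~1250).
```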