unRAID Server Release 5.0-rc12 Available



The other thing here is network bonding support.  This lets you use multiple ethernet connections.  There's a good overview here:

http://serverfault.com/questions/446911/what-are-the-differences-between-channel-bonding-modes-in-linux

I have been using mainly mode 1 (active-backup) and have not thoroughly tested the other modes to see what throughput increase, if any, they provide.  Some of the modes require switch support.

 

Edit: I should add: to enable network bonding, just go to Settings/Network Settings and you'll see "Bonding" - set it to "Yes" and choose your mode.  It should take effect without a reboot.  Finally, make use of your extra ethernet ports!
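If you want to confirm what the bond is actually doing from the command line, the kernel exposes its status under /proc/net/bonding/.  Here's a minimal Python sketch that parses it - this assumes the bond interface comes up as "bond0", which may differ on your box:

# Minimal sketch: parse the kernel's bonding status file for mode,
# active slave, and member interfaces. Assumes the bond is named "bond0";
# adjust the name if your system uses something else.
from pathlib import Path

def bond_status(bond: str = "bond0") -> dict:
    text = Path(f"/proc/net/bonding/{bond}").read_text()
    info = {"mode": None, "active_slave": None, "slaves": []}
    for line in text.splitlines():
        if line.startswith("Bonding Mode:"):
            info["mode"] = line.split(":", 1)[1].strip()
        elif line.startswith("Currently Active Slave:"):
            info["active_slave"] = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            info["slaves"].append(line.split(":", 1)[1].strip())
    return info

if __name__ == "__main__":
    print(bond_status())   # e.g. {'mode': 'fault-tolerance (active-backup)', ...}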

 

Has anyone tried the Network Bonding feature in rc12a?  I have a 2nd NIC onboard my SuperMicro C2SEA and was thinking about connecting it up to improve transfer speeds.  I did notice that the network bonding option under Settings is not there.  I guess it shows up after it detects two active NICs.

Link to comment

Has anyone tried the Network Bonding feature in rc12a?  I have a 2nd NIC onboard my SuperMicro C2SEA and was thinking about connecting it up to improve transfer speeds.  I did notice that the network bonding option under Settings is not there.  I guess it shows up after it detects two active NICs.

 

I only have one network card and the network bonding option is available under Settings -> Network Settings: Enable Bonding / Bonding Mode.  My NIC is a Realtek RTL8111DL.

Link to comment

What does the bonding option do? Can it increase overall transfer speeds? I have 3 NICs on my X9SCM: one is for IPMI, the other 2 can be used for network access, and at the moment I have only one connected. The bonding option does show up in my settings.

Link to comment

What does the bonding option do? Can it increase overall transfer speeds? I have 3 NICs on my X9SCM: one is for IPMI, the other 2 can be used for network access, and at the moment I have only one connected. The bonding option does show up in my settings.

 

Channel bonding can be done in a few different ways.  One way is to have an active/passive link - in other words, only one NIC is active and the other sits in standby, waiting to take over in the event the first NIC fails.  It is used for redundancy.  This method usually only requires OS support; switch support is not needed.

 

Another way is "link aggregation", in which multiple NICs are bonded together to increase throughput.  So if you bonded two 1Gbps NICs, you'd have a theoretical throughput of 2Gbps to the server.  This can also be done a few different ways, but for simplicity think of it as RAID 0 striping for NICs.  This method requires switch-level support as well as OS support.
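To put rough numbers on that theoretical aggregate - just a back-of-the-envelope sketch that ignores Ethernet/TCP overhead:

# Back-of-the-envelope only: convert link speeds to MB/s and compare a
# single gigabit link with a two-port bond. Ignores protocol overhead.
def gbps_to_mbytes(gbps: float) -> float:
    return gbps * 1000 / 8            # 1 Gbps ~= 125 MB/s (decimal megabytes)

single_link = gbps_to_mbytes(1.0)     # 125 MB/s
two_port_bond = gbps_to_mbytes(2.0)   # 250 MB/s theoretical aggregate
print(f"single 1Gbps link: {single_link:.0f} MB/s")
print(f"two bonded 1Gbps links: {two_port_bond:.0f} MB/s (theoretical)")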

 

If I recall correctly, Tom stated that he has tried active/passive with success, but has not actually tried link aggregation.

Link to comment

Just to chip in here, as this may be a bug that perhaps only affects certain NICs: I have only one NIC (Realtek RTL8168F/8111F) and the Enable Bonding/Bonding Mode selection DOES NOT show up under Settings -> Network Settings.

 

Ya, I have one NIC active and one inactive, and I see no option under Settings to enable bonding.  Yes, I tried clearing the cache and refreshing the screen in the browser; still no option shows up.

Link to comment

While active/passive bonding would be nice (although the reality is that NICs RARELY fail, so it's probably not a big deal), I can think of no advantage to link aggregation with unRAID.  A single gigabit link is FAR faster than any modern disk can transfer data, so you'd not gain any speed for copying files, etc.

 

It is true that if you were copying from multiple disks at the same time the aggregate bandwidth could exceed 1Gbps, but with most unRAID applications this is not a likely scenario.  Much more likely is multiple video streams - and a single gigabit link is plenty fast enough for those.  Link aggregation is much more useful on striped arrays, where the data rate from the array is higher than a gigabit link can support.
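For a sense of scale on the video-stream point, here's a quick sketch - the 40Mbps figure is just an assumed Blu-ray-class worst case for illustration, not something measured in this thread:

# Sketch: how many high-bitrate video streams fit in one gigabit link.
# 40 Mbps is an assumed Blu-ray-class worst case, used only for illustration.
LINK_MBPS = 1000      # theoretical 1GbE
STREAM_MBPS = 40      # assumed per-stream bitrate

streams = LINK_MBPS // STREAM_MBPS
print(f"~{streams} concurrent {STREAM_MBPS}Mbps streams fit on one gigabit link")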

 

Link to comment

While active/passive bonding would be nice (although the reality is that NICs RARELY fail, so it's probably not a big deal), I can think of no advantage to link aggregation with unRAID.  A single gigabit link is FAR faster than any modern disk can transfer data, so you'd not gain any speed for copying files, etc.

 

This isn't really true. Some of the latest 3TB and 4TB 7200rpm spinners can do 150-175MBps reads vs. the 1GbE theoretical max of 125MBps.

 

Not to mention we have quite a few people out there using SSDs as cache drives, and those can easily hit 500MBps reads and writes on a high-end drive on a SATA III port.

 

With my virtualized unRAID and Ubuntu guests in ESXi using the 10GbE vmxnet3 driver, I can hit 140MBps reads and 120MBps writes on my WD Black 7200rpm cache drive over NFS.

 

tl;dr Gigabit Ethernet is, in fact, too slow for many of today's modern drives.

Link to comment

That's all nice, but isn't unRAID's (parity) nature just too slow to benefit from all this in the first place? If I can't copy anything to unRAID over 50MB/s at the moment with my gigabit network, I don't think link aggregation will improve anything...?

Link to comment

That's all nice, but isn't unRAID's (parity) nature just too slow to benefit from all this in the first place? If I can't copy anything to unRAID over 50MB/s at the moment with my gigabit network, I don't think link aggregation will improve anything...?

 

Yes, for writes, but parity is not involved in reads.  Reads can happen from unRAID at speeds in excess of GigE if you have fast enough drives, even spinners. 

Link to comment

This isn't really true. Some of the latest 3TB and 4TB 7200rpm spinners can do 150-175MBps reads vs. the 1GbE theoretical max of 125MBps.

 

Not to mention we have quite a few people out there using SSDs as cache drives, and those can easily hit 500MBps reads and writes on a high-end drive on a SATA III port.

 

With my virtualized unRAID and Ubuntu guests in ESXi using the 10GbE vmxnet3 driver, I can hit 140MBps reads and 120MBps writes on my WD Black 7200rpm cache drive over NFS.

 

tl;dr Gigabit Ethernet is, in fact, too slow for many of today's modern drives.

 

Totally agree.  Not to mention that GigE real-world speeds usually top out around 800Mbps (100MBps).  The drives in my server can easily do sustained reads in excess of 100MBps, even though they are spinners.
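Putting the figures quoted above side by side - a rough sketch using only the numbers from this thread:

# Rough comparison of the figures quoted above against gigabit Ethernet.
GBE_THEORETICAL = 125   # MB/s
GBE_REAL_WORLD = 100    # MB/s (~800 Mbps, as noted above)

sources = {
    "recent 3/4TB 7200rpm spinner": 175,   # ~150-175 MB/s quoted earlier
    "high-end SSD cache drive": 500,       # quoted earlier
}

for name, mb_per_s in sources.items():
    verdict = "exceeds" if mb_per_s > GBE_REAL_WORLD else "fits within"
    print(f"{name}: ~{mb_per_s} MB/s ({verdict} real-world GbE of ~{GBE_REAL_WORLD} MB/s)")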

Link to comment

Ok, I may try it.  First, I want to understand the network switch requirement for this to work.  I have a smart layer 3 switch (Linksys SLM2008).  Do I need to change the default port settings, or just connect my 2nd NIC to the switch?

 

After connecting, I sure hope the bonding feature appears in the GUI, otherwise this will be a waste of time.

Link to comment

Ok, I may try it.  First, I want to understand the network switch requirement for this to work.  I have a smart layer 3 switch (Linksys SLM2008).  Do I need to change the default port settings, or just connect my 2nd NIC to the switch?

 

After connecting, I sure hope the bonding feature appears in the GUI, otherwise this will be a waste of time.

The network bonding feature was apparently added in rc12 as per the release notes. 

http://lime-technology.com/wiki/index.php?title=UnRAID_Server_Version_5.0-beta_Release_Notes

 

Are you still running rc11 as shown in your signature line?

Link to comment

Ok, I may try it.  First, I want to understand the network switch requirement for this to work.  I have a smart layer 3 switch (Linksys SLM2008).  Do I need to change the default port settings, or just connect my 2nd NIC to the switch?

 

After connecting, I sure hope the bonding feature appears in the GUI, otherwise this will be a waste of time.

The network bonding feature was apparently added in rc12 as per the release notes. 

http://lime-technology.com/wiki/index.php?title=UnRAID_Server_Version_5.0-beta_Release_Notes

 

Are you still running rc11 as shown in your signature line?

Hi Joe - yes, I'm running rc12a; I just forgot to update my sig.  I know it was added in rc12 and I read the release notes.  However, I don't see the option in my GUI.  Also, in another post a page or two back, somebody said that the OS and the network switch had to support this feature, which is why I was asking for more information.

 

Link to comment

Also, in another post a page or two back, somebody said that the OS and the network switch had to support this feature, which is why I was asking for more information.

 

Yes, switch support is necessary for link aggregation.  I have no experience with the Linksys SLM2008, so I can't tell you how it needs to be configured (I've only done link aggregation on Cisco IOS switches).  Assuming the switch uses the 802.3ad protocol for link aggregation and not a proprietary standard, it should be fairly straightforward - assuming the switch can even do link aggregation.  Your best bet is to consult the manual, unless someone here can speak specifically to that switch.

 

EDIT:  I looked up your switch (was curious if it was Cisco/Linksys) and supposedly it does do 802.3ad link aggregation.  The feature in the switch management interface is called LAG (link aggregation group).

Link to comment

Thanks - confirmed, I found it in the online manual.  Funny, the paper manual that came with it doesn't say anything about it, but the online version for advanced features does.

 

Linksys SLM2008 (Cisco)

Port > Static Link Aggregation

You can create multiple links between devices that work as one virtual, aggregate link (LAG). An aggregated link offers a dramatic increase in bandwidth for network segments where bottlenecks exist, as well as providing a fault-tolerant link between two devices. You can create up to two LAGs on the Switch. Each LAG can contain up to five/eight ports.

 

Going to give this a try tomorrow.  :)

Link to comment

Thanks - confirmed, I found it in the online manual.  Funny, the paper manual that came with it doesn't say anything about it, but the online version for advanced features does.

 

Linksys SLM2008 (Cisco)

Port > Static Link Aggregation

You can create multiple links between devices that work as one virtual, aggregate link (LAG). An aggregated link offers a dramatic increase in bandwidth for network segments where bottlenecks exist, as well as providing a fault-tolerant link between two devices. You can create up to two LAGs on the Switch. Each LAG can contain up to five/eight ports.

 

Going to give this a try tomorrow.  :)

 

Yeah, just found the same and was about to post for you.  ;D  Good luck, I hope you get it going.  You should probably create a separate thread with your results and/or troubleshooting to get it to work so it's easier to find for those looking to do so in the future.

Link to comment

That's all nice, but isn't unRAID's (parity) nature just too slow to benefit from all this in the first place? If I can't copy anything to unRAID over 50MB/s at the moment with my gigabit network, I don't think link aggregation will improve anything...?

 

Not sure who you were responding to, but read my post. With fast spinning or SSD cache drives, writes can exceed even the theoretical max of 1GbE, let alone the real-world speeds you see with 1GbE. The same goes for reads, whether from the cache or the array.

 

Some of the fastest SSDs on the market right now can saturate even a 6Gbps SATA port.
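For reference on that last point: SATA III's 6Gbps line rate works out to roughly 600MB/s of usable bandwidth once 8b/10b encoding is accounted for, so a ~500MB/s SSD is already close to the ceiling.  A quick sketch of the arithmetic:

# SATA III line rate vs. usable bandwidth once 8b/10b encoding is accounted for.
LINE_RATE_GBPS = 6.0
usable_mb_per_s = LINE_RATE_GBPS * 1000 / 10   # 10 bits on the wire per data byte
ssd_mb_per_s = 500                             # figure quoted earlier in the thread

print(f"usable SATA III bandwidth: ~{usable_mb_per_s:.0f} MB/s")       # ~600 MB/s
print(f"headroom over a {ssd_mb_per_s} MB/s SSD: ~{usable_mb_per_s - ssd_mb_per_s:.0f} MB/s")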

Link to comment
I did notice that the network bonding option under Settings is not there.  I guess it shows up after it detects two active NICs.

 

I have one NIC disabled in the BIOS but the bonding option is shown.  Perhaps the dedicated IPMI port is taken into account?

Link to comment

802.3ad Link Aggregation will not likely help you.

 

That will depend on your requirements.  A couple of ordinary GbE clients, simultaneously reading from different drives, would surely benefit from link aggregation at the server.

 

However, as you say, building a 10GbE network would be much more effective, but also much more expensive.

Link to comment

802.3ad Link Aggregation will not likely help you.

 

http://lime-technology.com/forum/index.php?topic=16887.msg153977

 

Get a couple of 10GbE cards instead.

 

http://lime-technology.com/forum/index.php?topic=24418.msg212095

 

Thanks bubbaQ, hadn't seen that thread before. 

 

As PeterB said, link aggregation can still be beneficial at the server in a few situations, but if you're after pure throughput between the server and a single Windows client, 10GbE looks to be the way to go.  Man are the NICs expensive though, not to mention the necessity of moving to a 10Gb switch.

Link to comment

I hadn't considered SSDs in my earlier comment - that is certainly a good reason to use link aggregation, which would benefit both cached writes and reads.  There's only a bit of benefit with outer-cylinder reads/writes on spinning drives, but definitely a nice benefit with SSDs.

 

As for using 10Gb NICs => certainly that's an even better idea.  And a 10Gb NIC isn't really all that bad (~ $350) ... but unless you're simply connecting two of them back-to-back it can get expensive REAL fast [check out the cost of 10Gb switches  :) ]

 

Link to comment

I hadn't considered SSDs in my earlier comment - that is certainly a good reason to use link aggregation, which would benefit both cached writes and reads.  There's only a bit of benefit with outer-cylinder reads/writes on spinning drives, but definitely a nice benefit with SSDs.

 

As for using 10Gb NICs => certainly that's an even better idea.  And a 10Gb NIC isn't really all that bad (~ $350) ... but unless you're simply connecting two of them back-to-back it can get expensive REAL fast [check out the cost of 10Gb switches  :) ]

 

With all due respect, with the latest spinners (the Seagate ST3000DM001 and ST2000DM001, for example) link aggregation can be very useful.  The ST3000DM001 does sustained reads at ~175MB/s, and the ST2000DM001 does ~170MB/s, both far in excess of GbE's real-world max throughput of ~100MB/s.  I have 1x ST3000DM001 and 2x ST2000DM001 in my unRAID server.  Imagine 3 concurrent reads, each to a separate workstation, each from a separate drive (any of which by itself can EASILY saturate GbE), and the benefits are clear.  A rough worked example of that scenario, using the figures above and ignoring protocol overhead, follows.
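# Three concurrent reads, each from a different drive to a different workstation,
# using the sustained-read figures quoted above. Protocol overhead ignored.
drive_reads = [175, 170, 170]        # MB/s: 1x ST3000DM001 + 2x ST2000DM001
demand = sum(drive_reads)            # 515 MB/s aggregate demand
single_gbe = 125                     # MB/s theoretical
two_port_lag = 2 * single_gbe        # 250 MB/s theoretical across both links

print(f"aggregate read demand: {demand} MB/s")
print(f"single GbE link covers ~{single_gbe / demand:.0%}")
print(f"two-port LAG covers ~{two_port_lag / demand:.0%}")

Even two bonded ports can't serve all three drives at full speed, but because the reads go to different clients, the flows can be spread across the links, roughly doubling what a single link delivers.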

Link to comment