unRAID Server Release 5.0-rc12 Available



both far in excess of GbE real world max throughput of ~100MB/s.

 

If you only get 100MB/s you are doing something wrong, or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted on different servers (specifically a 9500s under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable.

 

Edit: To be clear, I am not picking on you specifically; I just used your message. :) No offense meant.


both far in excess of GbE real world max throughput of ~100MB/s.

 

If you only get 100MB/s you are doing something wrong, or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted on different servers (specifically a 9500s under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable.

 

Edit: To be clear, I am not picking on you specifically; I just used your message. :) No offense meant.

 

No worries, none taken. While I have witnessed close-to-theoretical 125MB/s transfers many times on enterprise-grade gear, I have rarely seen that on consumer-grade gear. I should have been clearer: I was stating a real-world throughput of ~100MB/s on consumer-grade gear, which in my experience is where most home/small-business GbE networks effectively top out.

 

EDIT: To clarify further, my assumption is that the vast majority of unRAID installs are in homes/small businesses on consumer-grade networking gear.


both far in excess of GbE real world max throughput of ~100MB/s.

 

If you only get 100MB/s you are doing something wrong, or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted on different servers (specifically a 9500s under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable.

 

Edit: To be clear, I am not picking on you specifically; I just used your message. :) No offense meant.

 

No worries, none taken. While I have witnessed close-to-theoretical 125MB/s transfers many times on enterprise-grade gear, I have rarely seen that on consumer-grade gear. I should have been clearer: I was stating a real-world throughput of ~100MB/s on consumer-grade gear, which in my experience is where most home/small-business GbE networks effectively top out.

 

EDIT: To clarify further, my assumption is that the vast majority of unRAID installs are in homes/small businesses on consumer-grade networking gear.

 

I didn't use anything too crazy, just some Intel PCIe NICs and a Trendnet Gb switch. I can be picky about my NICs; I find Intel gives me the least trouble and the best performance. I'd love to have a nice managed switch, but I don't think my wife would be happy with the noise. :) I envy all of you who have a basement; no chance of that here in Florida...


I didn't use anything too crazy, just some Intel PCIe NICs and a Trendnet Gb switch. I can be picky about my NICs; I find Intel gives me the least trouble and the best performance. I'd love to have a nice managed switch, but I don't think my wife would be happy with the noise. :) I envy all of you who have a basement; no chance of that here in Florida...

 

Good to know. Congrats on the awesome speeds! CAT5e, CAT6, CAT6e? Any long runs, or is everything under, say, 10 meters?

 

Totally agree regarding Intel NICs being least trouble/best performance.


"... The ST3000DM001 does sustained reads at ~175MB/s " ==>  Yes, the new 1TB/platter 3 and 4 TB drives can get very good outer-cylinder speeds.    But these quickly deteriorate about mid-platter to speeds below the 125MB threshold of a Gb link.    Link aggregation is certainly nice -- but few setups will really benefit significantly from these speeds with rotating platter drives.    You'd not only have to be reading from the outer cylinders of a high-speed drive;  but also writing to the outer cylinders on the destination drive.    Of course, if you're doing multiple reads at the same time, then the cumulative bandwidth is much more beneficial;  but again, unless you're copying files around, it doesn't matter => if you're just streaming media, the bandwidth requirements simply aren't there for Gb+ speeds.

 

On the other hand, if you're doing a lot of SSD -> SSD transfers, where the speeds are notably higher for both the reads and the writes, then link aggregation would really be beneficial.

 


LAGs do not allow you to go faster than a single member in a single session. That is not how they work.

 

If you can get 100MB/s before a LAG, that is what you will get after.

 

You will just be able to have more than one thread at that speed. This benefits multi-seat networks or multithreaded transfers.
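To illustrate why that is, here is a toy sketch in Python of the kind of flow hashing a LAG uses to pick an egress link. The hash and the tuple are simplified stand-ins, not the actual 802.3ad hash policy, but the behaviour is the point: one session always lands on one member.

# Toy model of LAG behaviour: each flow is hashed to exactly one member link,
# so a single transfer never spans both links, but multiple flows can spread.
# The CRC32-of-tuple hash below is a simplified stand-in for a real policy.

from zlib import crc32

MEMBERS = ["eth0", "eth1"]   # two bonded 1GbE ports

def egress(src_ip, dst_ip, src_port, dst_port):
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return MEMBERS[crc32(flow) % len(MEMBERS)]

# One big SMB copy = one flow = one link, every time it is looked up:
print(egress("192.168.1.10", "192.168.1.20", 49152, 445))

# Several clients hitting the server can land on different links:
for client in ("192.168.1.11", "192.168.1.12", "192.168.1.13"):
    print(client, "->", egress(client, "192.168.1.20", 49152, 445))

Whether the spread is actually even depends on the hash policy (layer 2, layer 3+4, etc.) and on how many distinct flows you have.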


I can't write any faster than about 40MB/s to the server over the network these days, and reading from it won't get faster than about 55MB/s.

Sustained writing to the SSD disk on unRAID over the network does a nice 95MB/s, but reading from the SSD also won't go faster than 55MB/s... which is weird, right?


both far in excess of GbE real world max throughput of ~100MB/s.

 

If you only get 100MB/s you are doing something wrong, or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted on different servers (specifically a 9500s under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable.

 

Edit: To be clear, I am not picking on you specifically; I just used your message. :) No offense meant.

 

Not quite. It isn't the NICs/switches. iSCSI to a multi-spindle RAID array is not the same as SMB to a single spinner. SMB is chatty, and you will never get the throughput on SMB that you can get with iSCSI. Plus, even for a good spinner that delivers 175MB/sec, that is SEQUENTIAL throughput, and seek times will still kill you.

 

If you want to test your infrastructure and hardware, test it by copying RAMdisk to RAMdisk. Compare that to using physical disks (SSDs, spinners, or a multi-spindle RAID) and you will always see a performance drop. Compare iSCSI or FTP to SMB and SMB2 using RAMdisks and you will see drastic differences based on the transport. The point is that no matter HOW spiffy your disk subsystem is, it is slow compared to a RAMdisk and adds latency, so it is not a good way to test infrastructure.
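A minimal sketch of that idea in Python, in case anyone wants to try it: time the same large write against a RAM-backed path and a disk-backed path and compare. The two mount points are assumptions for illustration (a tmpfs mount and an array disk); substitute whatever applies to your setup.

# Minimal write-throughput comparison: identical payload to a RAM-backed path
# and a disk-backed path. The paths are assumptions -- point them at a tmpfs
# mount and a real disk/share on your own system.

import os
import time

PAYLOAD = os.urandom(64 * 1024 * 1024)    # 64 MiB of incompressible data
TARGETS = {
    "ramdisk": "/mnt/ramdisk/testfile",   # e.g. a tmpfs mount (assumed path)
    "disk":    "/mnt/disk1/testfile",     # e.g. an array disk (assumed path)
}

for name, path in TARGETS.items():
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(16):               # ~1 GiB total
            f.write(PAYLOAD)
        f.flush()
        os.fsync(f.fileno())              # make sure it really reached the target
    elapsed = time.time() - start
    print(f"{name}: {16 * 64 / elapsed:.0f} MB/s")
    os.remove(path)

Run it against a network share as well and the transport overhead described above becomes visible too.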


I can't write any faster than about 40MB/s to the server over the network these days, and reading from it won't get faster than about 55MB/s.

Sustained writing to the SSD disk on unRAID over the network does a nice 95MB/s, but reading from the SSD also won't go faster than 55MB/s... which is weird, right?

 

What are you doing with the 'read' on the other end? If you are writing it to a spinner... Transfer-speed testing at high bit rates requires a lot of thought and specialized software written just for that purpose.


For network bonding, see the attached image. It's a new option in Network Settings. If you only have one NIC the option still appears, and it's possible to use it with only one NIC, but there is no point in doing so. The main reason this was put in was to support Supermicro motherboards that have two on-board NICs. In this case the bonding mode is "active backup", which simply lets you plug an ethernet cable into either port (or both ports). Using bonding to increase throughput is possible, but it will be configuration-specific.

[Attached image: Network Settings screen showing the new bonding option]
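If you want to confirm which mode the bond actually came up in, the Linux bonding driver exposes its state under /proc/net/bonding. A small sketch, assuming the interface is named bond0 (the exact name and the fields present can vary):

# Print the interesting lines from the bonding driver's status file.
# Assumes the bond interface is called bond0; the file only exists once the
# bonding driver has created the interface.

BOND_STATUS = "/proc/net/bonding/bond0"

try:
    with open(BOND_STATUS) as f:
        for line in f:
            if line.startswith(("Bonding Mode", "Currently Active Slave",
                                "MII Status", "Slave Interface")):
                print(line.rstrip())
except FileNotFoundError:
    print("bond0 not found -- bonding not enabled, or the interface has another name")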


Yes, the new 1TB/platter 3 and 4TB drives can get very good outer-cylinder speeds. But these quickly deteriorate about mid-platter to speeds below the 125MB/s threshold of a Gb link.

 

Sure, if you consider "mid-platter" to be roughly 2.4TB into a 3TB drive.  The ST3000DM001 transfer rate stays above theoretical GbE speeds until approximately the 2.4TB mark.  So yes, the last 600GB of the disk fall below theoretical GbE speeds.

 

Link aggregation is certainly nice -- but few setups will really benefit significantly from these speeds with rotating platter drives. 

 

Agreed, this is why I stated link aggregation could be beneficial in certain situations.

 

You'd not only have to be reading from the outer cylinders of a high-speed drive, but also writing to the outer cylinders of the destination drive.

 

Again, I personally don't consider 80% of a given drive (the ST3000DM001) to be "outer cylinders".  I'd consider that to be the majority of the drive.

 

Of course, if you're doing multiple reads at the same time, then the cumulative bandwidth is much more beneficial;  but again, unless you're copying files around, it doesn't matter => if you're just streaming media, the bandwidth requirements simply aren't there for Gb+ speeds.

 

The first half of your statement describes one of those "situations" I was referring to in previous posts. As for the end of your statement regarding streaming media, I fully agree.

 

if you're doing a lot of SSD -> SSD transfers, where the speeds are notably higher for both the reads and the writes, then link aggregation would really be beneficial.

 

Agree completely.

 

It depends entirely on how one uses unRAID.  Not everyone uses it ONLY as a streaming media server.


Bonding 2 NICs is only beneficial for multiple IP streams from different sources, and it gets even more complicated with multiple streams depending on how the traffic is distributed across the 2 links (IP hashing, MAC address hashing, round robin, etc.). A single PC, even with 10Gb Ethernet, talking to an unRAID box with 2 bonded 1Gb links is still only going to see throughput equivalent to a single 1Gb Ethernet link. Bonding will provide redundancy, and possibly better overall throughput with multiple clients talking at the same time, but it is not going to provide any better throughput from your PC, even if you have 2 bonded NICs in your PC as well.

 

Until Multipath TCP is live (http://linux.slashdot.org/story/13/03/23/0054252/a-50-gbps-connection-with-multipath-tcp), you are pretty much stuck unless you actually start to use 10Gb NICs.


Monthly parity check ran:

 

Last checked on Mon Apr 1 11:50:31 2013 PDT (yesterday), finding 0 errors.

> Duration: 11 hours, 50 minutes, 28 seconds. Average speed: 70.4 MB/sec

 

good / ok / or bad?

 

This is ok. With newer drives you could get the time below 8 hours.


My results for a 24TB array with a 4TB parity disk:

 

Last checked on Tue Apr 2 07:34:47 2013 CEST (today), finding 0 errors. 
* Duration: 8 hours, 23 minutes, 34 seconds. Average speed: 132.4 MB/sec

 

This is with v5.0-rc5... will test rc12a tonight.


Monthly parity check ran:

 

Last checked on Mon Apr 1 11:50:31 2013 PDT (yesterday), finding 0 errors.

> Duration: 11 hours, 50 minutes, 28 seconds. Average speed: 70.4 MB/sec

 

good / ok / or bad?

 

As noted already, this is a reasonable speed. It would be notably quicker without the mixed sizes. Modern zoned-sector drives have their best performance on the outer cylinders, then begin to slow down as the heads move inward. The performance generally stays good for about 70-80% of the drive, then slows significantly in the last 30% or so. When you're doing a parity check, the speed is always limited by the slowest drive -- so when you have mixed drive sizes you'll encounter this slowdown for each different size. In your specific case, the transfer speeds were likely well over 100MB/s at first; then slowed down at around the 750GB point (due to the 1TB drive); then sped back up after 1TB (with that drive out of the picture); then slowed down again around 1.6TB (due to the 2TB drives); then sped back up again after 2TB; then slowed down for the third time around the 2.5TB point (inner cylinders of the 3TB drive).

 

Exactly where the slowdown occurs depends primarily on the architecture of the drives -- how many platters; density of the platters; and exactly where on the platter the active cylinders are.  But the last 20-30% is always the slowest part of the drive.

 

My system takes just over 8 hrs with all 3TB WD Reds.    I'd expect 4TB drives to perform about the same, since they simply add another 1TB platter, so can transfer data 33% faster.
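The "limited by the slowest drive at each point" behaviour is easy to model. A rough sketch in Python, using made-up linear speed curves and an example drive mix like the one discussed above, purely to show where the staircase of slowdowns comes from:

# Rough model of a parity check over mixed drive sizes: at every position the
# check runs only as fast as the slowest drive still in play. The linear
# outer-to-inner speed curves are made-up approximations for illustration.

def speed_mb_s(position_tb, size_tb, outer=200.0, inner=100.0):
    # Assumed MB/s of a drive of `size_tb` at `position_tb` into the check.
    return outer + (inner - outer) * (position_tb / size_tb)

drives_tb = [1, 2, 2, 3, 3]          # example mix of data + parity drives
parity_tb = max(drives_tb)

STEP_TB = 0.1                        # integrate in 0.1 TB slices
slices  = round(parity_tb / STEP_TB)
hours   = 0.0
for i in range(slices):
    position = i * STEP_TB                                    # start of this slice
    active   = [s for s in drives_tb if s > position]         # drives still being read
    slowest  = min(speed_mb_s(position, s) for s in active)   # bottleneck right now
    hours   += (STEP_TB * 1e6) / slowest / 3600               # TB -> MB, seconds -> hours

print(f"Estimated parity check time: {hours:.1f} hours")

Print slowest at each step and you can see the speed-up/slow-down pattern described above as each smaller drive drops out.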

 

This topic is now closed to further replies.