
I love me lots of ECC RAM and x64 unRAID! :)


johnodon


We don't need no stinkin' cache drives.  :)

 

Reading from array over Gbit...

 

[Screenshot: Ybtd5ya.png]

 

Writing to array over Gbit...

 

[Screenshot: u1HKGQs.png]

 

Prior to beefing up my server and unRAID moving to x64, my high-water mark for writing to the parity-protected array was ~35MB/s.

Link to comment

Two thoughts ...

 

Regardless of your comment ["... We don't need no stinkin' cache drives. "], your configuration shows a cache pool of 3 SSDs  :)

 

I assume you did these writes to non-cached shares (or directly to a disk share).    Also I presume the write was roughly 4GB or so ... which is easily buffered in your 48GB of RAM, so the REAL write speed likely hasn't improved at all => it just looks faster from the Windows side because UnRAID has the whole file buffered in RAM.

 

With that much RAM, most writes are probably going to have the same "feel" => but if you really want to see your current actual write speed you need to do a BIG copy to the array -- and be sure it's not using the cache pool.  Probably 100GB or more, to actually see how much it slows down once UnRAID can't "hide" the true speed by buffering.
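For anyone who wants to run that kind of test, here's a minimal sketch of a timed big copy. The paths are hypothetical placeholders: point SRC at a large local file (ideally well beyond the server's RAM) and DST at a mapped or mounted share that does NOT use the cache pool.

```python
# Rough timing of a large copy to the array -- a sketch only.
# SRC/DST are hypothetical placeholders, not real paths from this thread.
import os
import shutil
import time

SRC = r"D:\temp\big_test_file.bin"           # hypothetical local source file
DST = r"Z:\nocache_share\big_test_file.bin"  # hypothetical array share (cache disabled)

start = time.time()
shutil.copyfile(SRC, DST)                    # blocks until the client-side copy finishes
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{size_mb:.0f} MB in {elapsed:.0f} s  ->  {size_mb / elapsed:.1f} MB/s sustained")
```

The reported figure is still the client-side view, but for a file much larger than the server's RAM it converges on the real array write speed, since buffering can no longer hide it.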

 

I agree, however, that this is a superb setup that, with typical use, will likely make all of your writes "feel" very fast, as few of us write more than 10-20GB at a time to our arrays (except when initially loading them) ... so your array will likely "catch up" in-between writes.

 

 

Link to comment

How does this work?

I have 24GB on my unraid machine, but I don't think it's used to buffer file transfers like this.

 

It is used on both writes and reads, and it's automatic.  For reads it helps as well, but you're still limited by the drive's read speed, since you can't send a file over the network faster than it can be read from disk.

Link to comment

It is used on both writes and reads, and it's automatic.  For reads it helps as well, but you're still limited by the drive's read speed, since you can't send a file over the network faster than it can be read from disk.

 

I'm OK with the current read speed, but write buffering would be nice. I've never seen it go above 30-40MB/s when copying to the unRAID machine over the network (from Windows XP, Windows 8.1 and Server 2012). BTW, I don't use any cache drive (well, I do, but only for Dockers and VMs).

Link to comment

I'm surprised with 24GB that you're not seeing a bump in the initial write speed due to local buffering.

 

But I suspect the main reason is you need to adjust the disk "tunables" on the Settings - Disk Settings tab.

 

You can run this utility to get an idea what might work best with your system:

http://lime-technology.com/forum/index.php?topic=29009.0

 

... or you can just make the md_num_stripes and the md_write_limit very large (leave the write stripes smaller than the total stripes or you'll destroy your read performance during write activity)

 

Each "stripe" takes 4k ... so the default 1280 for md_num_stripes only requires 5MB for these buffers, so you can make this number MUCH larger with 24GB of RAM.    The md_write_limit tells UnRAID how many of those stripes can be used for writes.

 

Johnodon =>  What values do you have set for your disk tunables ?? 

 

 

Another possibility is that your drives may all be very full, and there is some file system activity (basically deciding where to write the new data) that's slowing things down somewhat.  But I suspect the issue is much more likely simply that you need to allow UnRAID more buffer space by increasing the stripe limits.

 

 

Note:  One important thing to remember ==> If you set these limits so high that you are buffering a LOT of GB of data, remember that this puts the system somewhat "at risk" for what could be a moderately long time after you THINK a write has finished.    For example, assuming the file johnodon showed in the first post here is a 4GB file, it "appears" to write at 113MB/s, which would take about 35 seconds until it was "done."  BUT the ACTUAL write speed to the array is likely 35-40MB/s (possibly a bit higher, but not by a lot) ... so assuming it's 40MB/s the actual write time is 100 seconds => this means the system is still writing for 65 seconds after it appeared to be done.
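The arithmetic behind that, as a quick sketch using the same round numbers as the post (the 40MB/s array speed is the assumption stated above):

```python
# "Apparent" vs. actual write time for a ~4GB file (numbers from the post above).
FILE_MB      = 4000   # ~4 GB file
NETWORK_MBPS = 113    # speed Windows reports (Gbit line rate)
ARRAY_MBPS   = 40     # assumed real parity-protected write speed

apparent_s = FILE_MB / NETWORK_MBPS   # ~35 s until Windows says it's "done"
actual_s   = FILE_MB / ARRAY_MBPS     # ~100 s until the array has really finished
print(f"catch-up window: ~{actual_s - apparent_s:.0f} s")   # ~65 s still writing
```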

 

As long as it's UPS protected that shouldn't be an issue, but clearly a power loss during that time would NOT be good  :'(

 

And if you make the buffer dramatically larger where you're buffering 10-20GB of writes that "catch-up" time could be 5-10 minutes.    So think carefully about just how high you want to set the stripe limits.

 

 

Link to comment

wish i had the cabbage for that setup ::)

 

nice array 8)

 

Definitely a nice array ... but not all that much "cabbage" when you use older technology => you can buy Xeon 5530's these days for ~ 1/10th of what they were when they were "current" [ http://www.amazon.com/Intel-BX80602E5530-Quad-Core-E5530/dp/B001Q8Q8GK ]    A motherboard to support them still costs a good bit ... and 48GB of registered RAM will set you back ~ $400 or so.    But ~ $1000 could get you the motherboard/CPUs/memory ... not at all bad for a system with that much "oomph" !!

 

Link to comment

My concern with that setup is that you'll "think" the write is complete when it really isn't. If you're moving something that means it will be deleted from source before the write is really complete. A write failure, crash, hardware failure, power outage etc means now you've lost the source AND the "copy".

 

By writing to a cache drive you know the source is safe until the copy is confirmed. Then the only risk for data loss is the failure of the cache drive until Mover is run.

 

Just my thoughts [shrug].

Link to comment

And you're on a version 6 RC build?

 

Yes, I've been following the RCs and today I just updated to rc6a.

 

I'm surprised with 24GB that you're not seeing a bump in the initial write speed due to local buffering.

 

But I suspect the main reason is you need to adjust the disk "tunables" on the Settings - Disk Settings tab.

 

You can run this utility to get an idea what might work best with your system:

http://lime-technology.com/forum/index.php?topic=29009.0

 

... or you can just make the md_num_stripes and the md_write_limit very large (leave the write stripes smaller than the total stripes or you'll destroy your read performance during write activity)

 

Each "stripe" takes 4k ... so the default 1280 for md_num_stripes only requires 5MB for these buffers, so you can make this number MUCH larger with 24GB of RAM.    The md_write_limit tells UnRAID how many of those stripes can be used for writes.

 

Thanks, I shall try those out.  I wonder if these could cause resource 'conflicts' with Dockers/KVM if set too high?

 

Another possibility is that your drives may all be very full, and there is some file system activity (basically deciding where to write the new data) that's slowing things down somewhat.  But I suspect the issue is much more likely simply that you need to allow UnRAID more buffer space by increasing the stripe limits.

 

I think I've seen that (a LOT) when I had a few full ReiserFS drives. They could stall for 10-30 seconds when about to write files. My current drives have 200GB or more free and I've converted 70% of them to XFS... I don't think I'm seeing any filesystem stalling with my current unRAID setup *knocks on wood*.

 

 

Note:  One important thing to remember ==> If you set these limits so high that you are buffering a LOT of GB of data, remember that this puts the system somewhat "at risk" for what could be a moderately long time after you THINK a write has finished.    For example, assuming the file johnodon showed in the first post here is a 4GB file, it "appears" to write at 113MB/s, which would take about 35 seconds until it was "done."  BUT the ACTUAL write speed to the array is likely 35-40MB/s (possibly a bit higher, but not by a lot) ... so assuming it's 40MB/s the actual write time is 100 seconds => this means the system is still writing for 65 seconds after it appeared to be done.

 

Going back to cache drives and disk speed... wouldn't an SSD be able to sustain 100MB/s writes anyway?

I wish unRAID supported multiple cache pools, because I use a pair of mirrored 2TB drives (btrfs) for Docker/KVM images... and I'd love to add a small SSD just for caching file transfers.

Link to comment

Two thoughts ...

 

Regardless of your comment ["... We don't need no stinkin' cache drives. "], your configuration shows a cache pool of 3 SSDs  :)

 

I assume you did these writes to non-cached shares (or directly to a disk share).    Also I presume the write was roughly 4GB or so ... which is easily buffered in your 48GB of RAM, so the REAL write speed likely hasn't improved at all => it just looks faster from the Windows side because UnRAID has the whole file buffered in RAM.

 

With that much RAM, most writes are probably going to have the same "feel" => but if you really want to see your current actual write speed you need to do a BIG copy to the array -- and be sure it's not using the cache pool.  Probably 100GB or more, to actually see how much it slows down once UnRAID can't "hide" the true speed by buffering.

 

I agree, however, that this is a superb setup that, with typical use, will likely make all of your writes "feel" very fast, as few of us write more than 10-20GB at a time to our arrays (except when initially loading them) ... so your array will likely "catch up" in-between writes.

 

Correct.  I do have a cache pool but do not use it for caching... only VM and container storage via cache-only user shares.

 

Correct.  Writes were directly to the parity protected array.  No cache drive was harmed in this example. :)

 

Agreed.  A very large write would give me a true disk/network performance benchmark.  However, since I have 48GB of ECC, I have yet to find a need to transfer a file of that size.  In fact, I can think of only 3 that I have... the double-Blu-ray extended versions of LOTR are each in the ~65GB range when merged into a single MKV.  As most of my largest files are normal BD rips (~20GB), my RAM handles them just fine.  :)

 

John

Link to comment

Going back to cache drives and disk speed... wouldn't an SSD be able to sustain 100MB/s writes anyway?

I wish unRAID supported multiple cache pools, because I use a pair of mirrored 2TB drives (btrfs) for Docker/KVM images... and I'd love to add a small SSD just for caching file transfers.

 

Yes, an SSD could sustain writes that were limited only by your network speed ... so you should see the same kind of speed noted in the first post here.  AND with a cache pool the data is fault tolerant as well (unlike a single cache drive).    You don't really need multiple cache pools ... you just need a cache that's big enough to both cache your writes and keep your application images.    I'd think your pair of 2TB drives easily meets that criteria ... and assuming these are 1TB/platter drives they should be able to sustain a write speed that's at least double what it would be to the protected array (but not full network speed).    Easy enough to see what they can do -- just enable cache for one of your shares and do a few writes to it  :)
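To give a feel for the difference, here's a rough sketch using the speeds mentioned earlier in the thread; the 20GB file size is just the "typical BD rip" example from above.

```python
# Time to move a ~20GB file at gigabit/SMB line rate vs. directly to the
# parity-protected array (speeds taken from earlier posts; approximate).
FILE_MB      = 20 * 1024
NETWORK_MBPS = 113    # cached / RAM-buffered write, limited by the network
ARRAY_MBPS   = 40     # direct write to the parity-protected array

print(f"cached or buffered: ~{FILE_MB / NETWORK_MBPS / 60:.1f} min")  # ~3.0 min
print(f"direct to array:    ~{FILE_MB / ARRAY_MBPS / 60:.1f} min")    # ~8.5 min
```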

Link to comment

My concern with that setup is that you'll "think" the write is complete when it really isn't. If you're moving something that means it will be deleted from source before the write is really complete. A write failure, crash, hardware failure, power outage etc means now you've lost the source AND the "copy".

 

By writing to a cache drive you know the source is safe until the copy is confirmed. Then the only risk for data loss is the failure of the cache drive until Mover is run.

 

Just my thoughts [shrug].

 

Actually, I did test that.  As soon as Windows thought the transfer was complete (writing to the array from my WIN8 desktop), I yanked the Ethernet cable from the WIN8 desktop.  No corruption on the array side and the file was completely intact.

 

John

Link to comment

wish i had the cabbage for that setup ::)

 

nice array 8)

 

Definitely a nice array ... but not all that much "cabbage" when you use older technology => you can buy Xeon 5530's these days for ~ 1/10th of what they were when they were

 

 

Exactly. I think I paid $70 for a matched pair of 5530s.  I paid more for the f'ing heatsinks.

 

I also really lucked out on the MB...$155 system pull.

Link to comment

Johnodon =>  What values do you have set for your disk tunables ?? 

 

 

I left them at the defaults:  1800/1280/384.

 

Very interesting.  I wonder why your system is (apparently) buffering your entire write while ysss's isn't.  He doesn't have 48GB, but he does have 24GB, which should be PLENTY to cache most files.    I thought perhaps you had adjusted the number of stripes significantly upward and that accounted for the difference, but apparently that's not the issue.

 

Have you by any chance run pauven's "tunables tester" ??

Link to comment

Johnodon =>  What values do you have set for your disk tunables ?? 

 

 

I left them at the defaults:  1800/1280/384.

 

Very interesting.  I wonder why your system is (apparently) buffering your entire write while ysss's isn't.  He doesn't have 48GB, but he does have 24GB, which should be PLENTY to cache most files.    I thought perhaps you had adjusted the number of stripes significantly upward and that accounted for the difference, but apparently that's not the issue.

 

Have you by any chance run pauven's "tunables tester" ??

 

I ran his util about a year ago but wasn't floored by the results it provided so I didn't change anything.  No one setting was that much better than the rest.

Link to comment

My concern with that setup is that you'll "think" the write is complete when it really isn't. If you're moving something that means it will be deleted from source before the write is really complete. A write failure, crash, hardware failure, power outage etc means now you've lost the source AND the "copy".

 

By writing to a cache drive you know the source is safe until the copy is confirmed. Then the only risk for data loss is the failure of the cache drive until Mover is run.

 

Just my thoughts [shrug].

 

Actually, I did test that.  As soon as Windows thought the transfer was complete (writing to the array from my WIN8 desktop), I yanked the Ethernet cable from the WIN8 desktop.  No corruption on the array side and the file was completely intact.

 

John

 

Yanking the network cable wouldn't cause any problem -- once the transfer is "done" from Windows' perspective, it's all buffered in the server's RAM.    The issue would be if you had a system failure between that point and the completion of the actual writes.    As I noted earlier, as long as it's UPS protected, that's not really an issue ... but it IS a "window of vulnerability" that wouldn't exist if the writes were going directly to the disks.

 

Link to comment

wish i had the cabbage for that setup ::)

 

nice array 8)

 

Definitely a nice array ... but not all that much "cabbage" when you use older technology => you can buy Xeon 5530's these days for ~ 1/10th of what they were when they were

 

 

Exactly. I think I paid $70 for a matched pair of 5530s.  I paid more for the f'ing heatsinks.

 

I also really lucked out on the MB...$155 system pull.

 

You did indeed luck out on the board.  They're still over $400 most places, and even on e-bay they're generally over $300.    I assume you had to actually pay "real" prices for the memory ... or did you luck into a deal on that as well?

 

Link to comment

The other thing to try on network writes is bypassing the user file system and shares and using the disk shares directly ( \\tower\disk# ) instead, to see if that gives you speeds higher than 30-40MB/s.

 

I always write directly to disk shares...

Link to comment

The other thing to try on network writes is bypassing the user file system and shares and using the disk shares directly ( \\tower\disk# ) instead, to see if that gives you speeds higher than 30-40MB/s.

 

I always write directly to disk shares...

 

Perhaps that's the issue ... maybe it's only buffering writes to user shares.  Have you tried that to see if it makes a difference in the apparent speed?

 

Link to comment

wish i had the cabbage for that setup ::)

 

nice array 8)

 

Definitely a nice array ... but not all that much "cabbage" when you use older technology => you can buy Xeon 5530's these days for ~ 1/10th of what they were when they were

 

 

Exactly. I think I paid $70 for a matched pair of 5530s.  I paid more for the f'ing heatsinks.

 

I also really lucked out on the MB...$155 system pull.

 

You did indeed luck out on the board.  They're still over $400 most places, and even on e-bay they're generally over $300.    I assume you had to actually pay "real" prices for the memory ... or did you luck into a deal on that as well?

 

12 sticks 4GB 2Rx4 PC3-10600R 24GB Hynix HMT151R7BFR4C-H9 FBD Server RAM ECC Reg (again...system pull) = $210 shipped.

Link to comment

The other thing to try on network writes is bypassing the user file system and shares and using the disk shares directly ( \\tower\disk# ) instead, to see if that gives you speeds higher than 30-40MB/s.

 

I always write directly to disk shares...

 

Perhaps that's the issue ... maybe it's only buffering writes to user shares.  Have you tried that to see if it makes a difference in the apparent speed?

 

I see the same speed writing to disk share.

Link to comment
