unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



However ... in real life, writes are done over the network. And if the writes are at a speed significantly below what the network can sustain (which is certainly the case with Gb networks), then there's no benefit to using md_write_limit settings higher than whatever maximizes your writes across the network.

 

Not all of us do these writes over a physical network. There are people virtualizing with 10Gb virtual switches.

There are people who do writes directly on the machine, e.g. downloads via torrents and/or newsreaders.

People are using mysql for data warehousing. Many times I will move files via rsync from one area to another or decompress large archives on the array.  I'm sure there are people who assemble files from newsgroups and would like it to occur as fast as possible.

 

One of the things I learned years ago when experimenting with unRAID on a P4 and a Xeon: when you benchmark a 1Gb network from application to application, without physical disk I/O or a filesystem, you can achieve over 90MB/s. So the potential to move data at very high speed does exist on the network, and the potential to write data locally at high speed exists with internal applications.

 

Limiting writes to your network speed is probably only useful if an unRAID design has issues with low memory and the goal is to preserve that as much as possible.



... using mysql for data where housing.

 

Limiting writes to your network speed is probably only useful if an unRAID design has issues with low memory and the goal is to preserve that as much as possible.

 

data warehousing perhaps ??  :)

 

A Gb network with UnRAID is NOT "... limiting writes to your network speed ..."

(except possibly to the cache drive if it's an SSD or a fast high areal density unit)

 

That would certainly be true with a hardware RAID-5 or -6 controller with a high drive count where the bandwidth can easily get well over 1GB (not Gb) per second ... indeed they can saturate even a 10Gb network.    But UnRAID's writes -- unless your array is composed of all SSDs -- aren't going to come even close to Gb network speed.

 


A Gb network with UnRAID is NOT "... limiting writes to your network speed ..."

(except possibly to the cache drive if it's an SSD or a fast high areal density unit)

 

This isn't what I said (or meant).

 

What I said was in response to...

There's no benefit to using md_write_limit settings higher than whatever maximizes your writes across the network.

 

And my point is, people do writes locally, so there is a benefit to maximizing md_write_limit to whatever gives you the fastest speed for your application, network or not. In most cases it's network related; in others it may not be.

 

If I had the ability to get higher write speeds by raising md_write_limit above network speeds, there is a benefit, especially if you have to do some full-disk operation. Even housekeeping chores such as updating directory entries and/or superblocks would benefit from the highest speed possible.

 

Remember when everyone saw a big speed boost in one of the RCs? It later turned out the superblock wasn't being updated properly. Regular housekeeping does take time and resources. There is a benefit to having the fastest write speed possible, even if it exceeds your network.

 

You would probably want to use lower md_write_limit settings if you want to preserve low memory.


Agree.  It certainly makes sense to tune your write parameters for the highest possible speeds if you're doing writes internal to the array.    I'm not sure that isn't the same thing I was doing, however ... since the network can easily exceed the speed that writes maxed out at -- indeed reading the same file back to my PC transfers at over double the speed it was written at (well over 100MB/s).

 

If I had a spare SATA port (I don't), I'd pop an SSD in as a cache for testing and see if it writes to the array any faster.

My expectation is that it would not.

 

 


If I had a spare SATA port (I don't), I'd pop an SSD in as a cache for testing and see if it writes to the array any faster.

My expectation is that it would not.

At the current time it will not. Even with SSDs as parity and data, it's still not what you would expect.

However, with SATA III speeds and high-quality SSDs, it could come pretty close.


Thanks so much for this script!  Just tried it and the lowest was the fastest for me also:

 

VERY interesting.

 

It would be very interesting if you would:

 

(a)  Post your system's hardware configuration details;

 

and

 

(b)  Download the new v2.2 of Paul's script and run it, posting the results here. [It will also pin down your optimal value even better, since it looks "downward" for those systems that have their best performance at the low end of these values.]

 

I finally had a chance to run v2.2 and got similar results. Each test returns pretty much the same speed, except for a few which jump up to ~101. All the others are extremely consistent.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  89.7 MB/s 
   2  |    1536     |     768     |     640     |  87.1 MB/s 
   3  |    1664     |     768     |     768     |  90.8 MB/s 
   4  |    1920     |     896     |     896     |  91.3 MB/s 
   5  |    2176     |    1024     |    1024     |  62.5 MB/s 
   6  |    2560     |    1152     |    1152     | 101.6 MB/s 
   7  |    2816     |    1280     |    1280     | 101.8 MB/s 
   8  |    3072     |    1408     |    1408     | 101.2 MB/s 
   9  |    3328     |    1536     |    1536     |  94.0 MB/s 
  10  |    3584     |    1664     |    1664     |  91.9 MB/s 
  11  |    3968     |    1792     |    1792     |  91.7 MB/s 
  12  |    4224     |    1920     |    1920     |  91.7 MB/s 
  13  |    4480     |    2048     |    2048     |  90.8 MB/s 
  14  |    4736     |    2176     |    2176     |  91.6 MB/s 
  15  |    5120     |    2304     |    2304     |  91.7 MB/s 
  16  |    5376     |    2432     |    2432     |  91.7 MB/s 
  17  |    5632     |    2560     |    2560     |  91.7 MB/s 
  18  |    5888     |    2688     |    2688     |  91.7 MB/s 
  19  |    6144     |    2816     |    2816     |  91.7 MB/s 
  20  |    6528     |    2944     |    2944     |  91.7 MB/s 
--- Targeting Fastest Result of md_sync_window 1280 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    2576     |    1160     |    1160     |  91.5 MB/s 
  22  |    2592     |    1168     |    1168     |  91.5 MB/s 
  23  |    2608     |    1176     |    1176     |  91.4 MB/s 
  24  |    2624     |    1184     |    1184     |  91.7 MB/s 
  25  |    2648     |    1192     |    1192     |  91.5 MB/s 
  26  |    2664     |    1200     |    1200     |  91.5 MB/s 
  27  |    2680     |    1208     |    1208     |  91.7 MB/s 
  28  |    2696     |    1216     |    1216     |  91.6 MB/s 
  29  |    2720     |    1224     |    1224     |  91.6 MB/s 
  30  |    2736     |    1232     |    1232     |  91.7 MB/s 
  31  |    2752     |    1240     |    1240     |  91.6 MB/s 
  32  |    2768     |    1248     |    1248     |  91.7 MB/s 
  33  |    2784     |    1256     |    1256     |  91.6 MB/s 
  34  |    2808     |    1264     |    1264     |  91.6 MB/s 
  35  |    2824     |    1272     |    1272     |  91.6 MB/s 
  36  |    2840     |    1280     |    1280     |  91.6 MB/s 

Completed: 2 Hrs 10 Min 54 Sec.

Best Bang for the Buck: Test 1 with a speed of 89.7 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 66MB of RAM on your hardware.


Unthrottled values for your server came from Test 24 with a speed of 91.7 MB/s

     Tunable (md_num_stripes): 2624
     Tunable (md_write_limit): 1184
     Tunable (md_sync_window): 1184

These settings will consume 123MB of RAM on your hardware.
This is 57MB more than your current utilization of 66MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

 

After the initial run I changed the settings to the "Best Bang for the Buck" values it showed at 101 MB/s and then ran a parity check, but it stayed at my usual 89 MB/s. So no change in performance with those new settings.


I'm doing different kinds of performance tests with a new unRAID build. I just posted some of my findings to the Jumbo frame topic: http://lime-technology.com/forum/index.php?topic=29124.msg260786#msg260786. When I built my first unRAID system back in 2009 I performed quite extensive testing using IOZone: http://lime-technology.com/forum/index.php?topic=3958.0. All the scripts are there so you could easily utilise them. There are also quite a few hints for caveats to look for like how to prevent cached reads/writes.

 

I also executed the test script from this thread, with the results at the end of this message. For me the parity check speed is irrelevant as long as the total time is less than 8 hours, so it can be performed at night. Even if it took longer there would be no real problem, since a parity check does not affect my normal usage. I can easily stream multiple high-bitrate mkv's through Plex in direct mode while a parity check is in progress. I fully understand that this might not be the case with older hardware, but even my 2009-built system (details in sig) can manage multiple streams while a parity check is in progress. Media streaming is small potatoes to an unRAID system; in theory you could run ~20 streams at 50Mbps bitrate, especially if there were multiple disks involved.

 

Since unRAID with modern disks and interfaces can easily saturate 1Gbps when reading, the only thing that matters is write speed. Before performing the unraid-tunables test I tested WeeboTech's settings found here: http://lime-technology.com/forum/index.php?topic=29009.msg260640#msg260640. On the new system I was using stock settings and getting ~40MBps write speed over the network and locally. When I applied the new settings the write speed jumped to ~50MBps. But the speed is fluctuating, sometimes going over 80MBps and sometimes dropping below 30MBps. I will do some more experimenting this evening and report back.

 

From the results below you can see that there is basically no change whatsoever. It makes me wonder whether there is something wrong with the script, or whether my non-stock settings somehow interfered with the test. If there is no problem with the results, then the whole concept of optimising these settings based on the parity sync speed is a totally wrong approach, and you should be focusing only on the real read/write performance of the system. I did read through the whole thread and I think that is the way you are already going.

 

If you want to measure only raw throughput then dd is fine, but even then you must take care to avoid cached reads/writes. This means using very large data files (> 3-5 times system memory size) and flushing the caches between tests. My scripts in the IOZone thread contain the commands for flushing. If real-life usage is more than just raw transfers (e.g. databases), then IOZone is perhaps the easiest benchmarking tool.
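A minimal sketch of such a cache-safe dd throughput test. The file path and size here are illustrative placeholders, not values from the IOZone scripts mentioned above:

```shell
# Cache-safe dd throughput sketch. TESTFILE and SIZE_MB are hypothetical
# defaults; point TESTFILE at an array disk and size it well beyond RAM
# (3-5x system memory) so the page cache can't satisfy the read test.
TESTFILE=${TESTFILE:-/mnt/disk1/dd-test.bin}
SIZE_MB=${SIZE_MB:-10240}

# Write test: conv=fdatasync makes dd include the final flush in its timing,
# so the reported MB/s reflects the disk, not buffered writes
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync

# Drop the page cache so the read test hits the disks, not RAM (needs root)
sync
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

# Read test
dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```

Without the `conv=fdatasync` and the `drop_caches` step, small test files can report RAM speed rather than disk speed.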

 

As a small note on the test script: it seemed to force applying either the Best Bang or Unthrottled settings. I couldn't see an option for keeping the original settings, so I cancelled the script with ctrl-c at that point.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     | 144.2 MB/s 
   2  |    1536     |     768     |     640     | 144.7 MB/s 
   3  |    1664     |     768     |     768     | 144.7 MB/s 
   4  |    1920     |     896     |     896     | 144.7 MB/s 
   5  |    2176     |    1024     |    1024     | 144.7 MB/s 
   6  |    2560     |    1152     |    1152     | 144.7 MB/s 
   7  |    2816     |    1280     |    1280     | 144.7 MB/s 
   8  |    3072     |    1408     |    1408     | 144.7 MB/s 
   9  |    3328     |    1536     |    1536     | 144.7 MB/s 
  10  |    3584     |    1664     |    1664     | 144.7 MB/s 
  11  |    3968     |    1792     |    1792     | 144.7 MB/s 
  12  |    4224     |    1920     |    1920     | 144.7 MB/s 
  13  |    4480     |    2048     |    2048     | 144.7 MB/s 
  14  |    4736     |    2176     |    2176     | 144.7 MB/s 
  15  |    5120     |    2304     |    2304     | 144.7 MB/s 
  16  |    5376     |    2432     |    2432     | 144.7 MB/s 
  17  |    5632     |    2560     |    2560     | 144.7 MB/s 
  18  |    5888     |    2688     |    2688     | 144.7 MB/s 
  19  |    6144     |    2816     |    2816     | 144.7 MB/s 
  20  |    6528     |    2944     |    2944     | 144.7 MB/s 
--- Targeting Fastest Result of md_sync_window 640 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    1424     |     768     |     520     | 144.7 MB/s 
  22  |    1440     |     768     |     528     | 144.7 MB/s 
  23  |    1448     |     768     |     536     | 144.7 MB/s 
  24  |    1456     |     768     |     544     | 144.7 MB/s 
  25  |    1464     |     768     |     552     | 144.7 MB/s 
  26  |    1472     |     768     |     560     | 144.7 MB/s 
  27  |    1480     |     768     |     568     | 144.7 MB/s 
  28  |    1488     |     768     |     576     | 144.7 MB/s 
  29  |    1496     |     768     |     584     | 144.7 MB/s 
  30  |    1504     |     768     |     592     | 144.7 MB/s 
  31  |    1520     |     768     |     600     | 144.7 MB/s 
  32  |    1528     |     768     |     608     | 144.7 MB/s 
  33  |    1536     |     768     |     616     | 144.7 MB/s 
  34  |    1544     |     768     |     624     | 144.7 MB/s 
  35  |    1552     |     768     |     632     | 144.7 MB/s 
  36  |    1560     |     768     |     640     | 144.7 MB/s 

Completed: 2 Hrs 7 Min 4 Sec.

Best Bang for the Buck: Test 1 with a speed of 144.2 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 5MB of RAM on your hardware.


Unthrottled values for your server came from Test 21 with a speed of 144.7 MB/s

     Tunable (md_num_stripes): 1424
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 520

These settings will consume 5MB of RAM on your hardware.
This is -11MB less than your current utilization of 16MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.


As a teaser of what's coming in the next version, here's my write test results.  Big thanks to WeeboTech for sharing his knowledge.  The dd command is fantastic, and this was easy enough to program that I will be doing both read tests and write tests.  I still have quite a bit of programming to do, so it won't be out for a few days.

 

I think I hit peak write performance just shy of 50MB/s, which is a nice 25%+ boost over stock unRAID settings.  This was a 6 hour test, with 10GB written for each pass.

 

This was conducted on a WD Red 3TB NAS drive (5400RPM) that was 48% full.  Unfortunately it was my emptiest drive.  I might have to move some data around to be able to test on the fast part of the drive.

 

-Paul

 

md_write_limit=128, 301.676 s, 33.9 MB/s
md_write_limit=192, 317.47 s, 32.3 MB/s
md_write_limit=256, 283.058 s, 36.2 MB/s
md_write_limit=320, 298.919 s, 34.3 MB/s
md_write_limit=384, 297.67 s, 34.4 MB/s
md_write_limit=448, 302.064 s, 33.9 MB/s
md_write_limit=512, 297.018 s, 34.5 MB/s
md_write_limit=576, 306.282 s, 33.4 MB/s
md_write_limit=640, 290.022 s, 35.3 MB/s
md_write_limit=704, 273.539 s, 37.4 MB/s
md_write_limit=768, 261.84 s, 39.1 MB/s
md_write_limit=832, 269.1 s, 38.1 MB/s
md_write_limit=896, 272.334 s, 37.6 MB/s
md_write_limit=960, 276.062 s, 37.1 MB/s
md_write_limit=1024, 268.788 s, 38.1 MB/s
md_write_limit=1088, 267.094 s, 38.3 MB/s
md_write_limit=1152, 260.519 s, 39.3 MB/s
md_write_limit=1216, 245.766 s, 41.7 MB/s
md_write_limit=1280, 236.244 s, 43.3 MB/s
md_write_limit=1344, 235.364 s, 43.5 MB/s
md_write_limit=1408, 238.502 s, 42.9 MB/s
md_write_limit=1472, 239.953 s, 42.7 MB/s
md_write_limit=1536, 236.889 s, 43.2 MB/s
md_write_limit=1600, 234.248 s, 43.7 MB/s
md_write_limit=1664, 232.905 s, 44.0 MB/s
md_write_limit=1728, 232.573 s, 44.0 MB/s
md_write_limit=1792, 226.406 s, 45.2 MB/s
md_write_limit=1856, 224.929 s, 45.5 MB/s
md_write_limit=1920, 227.07 s, 45.1 MB/s
md_write_limit=1984, 230.45 s, 44.4 MB/s
md_write_limit=2048, 226.18 s, 45.3 MB/s
md_write_limit=2112, 226.925 s, 45.1 MB/s
md_write_limit=2176, 223.399 s, 45.8 MB/s
md_write_limit=2240, 223.267 s, 45.9 MB/s
md_write_limit=2304, 223.151 s, 45.9 MB/s
md_write_limit=2368, 219.552 s, 46.6 MB/s
md_write_limit=2432, 223.168 s, 45.9 MB/s
md_write_limit=2496, 220.98 s, 46.3 MB/s
md_write_limit=2560, 220.718 s, 46.4 MB/s
md_write_limit=2624, 218.409 s, 46.9 MB/s
md_write_limit=2688, 216.448 s, 47.3 MB/s
md_write_limit=2752, 216.608 s, 47.3 MB/s
md_write_limit=2816, 217.598 s, 47.1 MB/s
md_write_limit=2880, 216.989 s, 47.2 MB/s
md_write_limit=2944, 216.578 s, 47.3 MB/s
md_write_limit=3008, 216.179 s, 47.4 MB/s
md_write_limit=3072, 213.888 s, 47.9 MB/s
md_write_limit=3136, 212.857 s, 48.1 MB/s
md_write_limit=3200, 211.932 s, 48.3 MB/s
md_write_limit=3264, 212.881 s, 48.1 MB/s
md_write_limit=3328, 213.732 s, 47.9 MB/s
md_write_limit=3392, 214.099 s, 47.8 MB/s
md_write_limit=3456, 214.81 s, 47.7 MB/s
md_write_limit=3520, 212.939 s, 48.1 MB/s
md_write_limit=3584, 212.713 s, 48.1 MB/s
md_write_limit=3648, 211.289 s, 48.5 MB/s
md_write_limit=3712, 210.06 s, 48.7 MB/s
md_write_limit=3776, 211.82 s, 48.3 MB/s
md_write_limit=3840, 209.016 s, 49.0 MB/s
md_write_limit=3904, 210.22 s, 48.7 MB/s
md_write_limit=3968, 212.187 s, 48.3 MB/s
md_write_limit=4032, 210.77 s, 48.6 MB/s
md_write_limit=4096, 208.326 s, 49.2 MB/s
md_write_limit=4160, 209.331 s, 48.9 MB/s
md_write_limit=4224, 208.988 s, 49.0 MB/s
md_write_limit=4288, 207.678 s, 49.3 MB/s
md_write_limit=4352, 210.226 s, 48.7 MB/s
md_write_limit=4416, 209.887 s, 48.8 MB/s
md_write_limit=4480, 210.053 s, 48.7 MB/s
md_write_limit=4544, 210.036 s, 48.8 MB/s
md_write_limit=4608, 209.409 s, 48.9 MB/s
md_write_limit=4672, 210.029 s, 48.8 MB/s
md_write_limit=4736, 206.886 s, 49.5 MB/s
md_write_limit=4800, 208.978 s, 49.0 MB/s
md_write_limit=4864, 206.788 s, 49.5 MB/s
md_write_limit=4928, 207.969 s, 49.2 MB/s
md_write_limit=4992, 208.136 s, 49.2 MB/s
md_write_limit=5056, 207.687 s, 49.3 MB/s
md_write_limit=5120, 207.471 s, 49.4 MB/s
md_write_limit=5184, 206.618 s, 49.6 MB/s
md_write_limit=5248, 210.428 s, 48.7 MB/s
md_write_limit=5312, 210.248 s, 48.7 MB/s
md_write_limit=5376, 208.4 s, 49.1 MB/s
md_write_limit=5440, 209.255 s, 48.9 MB/s
md_write_limit=5504, 207.281 s, 49.4 MB/s
md_write_limit=5568, 205.834 s, 49.7 MB/s
md_write_limit=5632, 207.783 s, 49.3 MB/s
md_write_limit=5696, 209.198 s, 48.9 MB/s
md_write_limit=5760, 208.117 s, 49.2 MB/s
md_write_limit=5824, 208.329 s, 49.2 MB/s
md_write_limit=5888, 208.249 s, 49.2 MB/s
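A sweep like the one above (128 through 5888 in steps of 64, timing a 10GB dd write at each setting) could be scripted roughly like this. The mdcmd path, target disk, and file size are assumptions from context, not the actual test code:

```shell
# Hypothetical sketch of the md_write_limit write sweep shown above.
# /root/mdcmd and /mnt/disk1 are assumed unRAID-console paths.
for limit in $(seq 128 64 5888); do
    /root/mdcmd set md_write_limit "$limit"   # apply the tunable

    # avoid measuring the page cache instead of the disk (needs root)
    sync
    [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

    # time a 10GB write; dd's summary line carries the seconds and MB/s
    dd if=/dev/zero of=/mnt/disk1/wtest.bin bs=1M count=10240 conv=fdatasync 2>&1 \
        | awk -v l="$limit" 'END { print "md_write_limit=" l ", " $0 }'

    rm -f /mnt/disk1/wtest.bin
done
```

At ~200-300 seconds per pass, 91 sample points works out to roughly the 6-hour run Paul describes.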


From results below you can see that there is basically no change what so ever. It makes me wonder whether there is something wrong with script or did my non-stock settings some how interfere with the test. If there is no problem with the results then the whole concept of optimising these settings based on the parity sync speed is a totally wrong approach and you should be focusing only on the real read/write performance of the system. I did read through the whole thread and I think that is the way you are already going.

 

Nothing wrong with the script, or your build, but you have only two drives in your test build:  1 parity and 1 data.  You have effectively created a RAID 0 using unRAID.  This means unRAID doesn't have to do any calculations or anything.  When building or checking parity, unRAID simply has to make parity=datadrive.  Mirrors are fast.

 

That is why you are seeing parity check speeds at full disk speeds.  That is also why these parameters are not doing anything on this specific build.

 

As a small note for the test script, it seemed to force to apply either the Best for Buck or Unthrottled settings. I couldn't see an option for using original settings so I cancelled the script with ctrl-c at that point.

Had you selected either Unthrottled or Best Bang, the very next prompt would have allowed you to revert to the original values. In fact, I programmed it so that if you fail to select either Unthrottled or Best Bang, it automatically reverts to the original values. Had you simply hit ENTER instead of CTRL-C, you would be back at the original settings; but since you did a CTRL-C, your settings were left at the last test values.

 

You can always hit APPLY again on the unRAID Disk Settings page to get your settings back to the configured values.

 

I'll try and make the options friendlier in the next version.


I finally had a chance to run v2.2 and got similar results.  Each test returns pretty much the same speed except for a few which jump up to ~101.  All the others are extremely consistent.

 

Thanks for sharing, RockDawg. There's something wonky in your results: the speed increase seen in the 1152 to 1408 range was not repeated in Pass 2, and from test 10 on, your results appear to be stuck at about 91.5 MB/s, all the way through the end of Pass 2.

 

Does anyone have any ideas on what may be causing this behavior, and/or any suggestions on how to prevent it? I wouldn't think cache would come into play on a parity check, especially one running for 3-4 minutes, as that is a lot of data from a lot of drives, but I really don't know.

 

-Paul


Nothing wrong with the script, or your build, but you have only two drives in your test build:  1 parity and 1 data.  You have effectively created a RAID 0 using unRAID.  This means unRAID doesn't have to do any calculations or anything.  When building or checking parity, unRAID simply has to make parity=datadrive.  Mirrors are fast.

 

That is why you are seeing parity check speeds at full disk speeds.  That is also why these parameters are not doing anything on this specific build.

Hmmm, are you sure that the single-data-disk situation is handled in a special way? The only thing which could be optimised (i.e. left out) here is the XOR operation over the bits/blocks of the data disks, since there is only one. I would expect my parity check speeds to be identical if I added a second or third WD Green 3TB data disk. In my 2009 build I had four identical Samsung F1 1TB drives, and I saw identical parity check speeds regardless of the number of data disks. Even now with 13 drives I'm seeing similar speeds. Everything is done in parallel, and if you don't have any bottlenecks in the interfaces the speed should not change. I've always taken care not to cause any bottlenecks and used appropriate HBAs and MB ports.

 

But I will add more disks to the test system anyway and get back with the results.


Hmmm, are you sure that the single-data-disk situation is handled in a special way? The only thing which could be optimised (i.e. left out) here is the XOR operation over the bits/blocks of the data disks, since there is only one. I would expect my parity check speeds to be identical if I added a second or third WD Green 3TB data disk. In my 2009 build I had four identical Samsung F1 1TB drives, and I saw identical parity check speeds regardless of the number of data disks. Even now with 13 drives I'm seeing similar speeds. Everything is done in parallel, and if you don't have any bottlenecks in the interfaces the speed should not change. I've always taken care not to cause any bottlenecks and used appropriate HBAs and MB ports.

 

But I will add more disks to the test system anyway and get back with the results.

 

I can't say for sure that adding additional drives will have an impact.  Only one way to find out.

 

It's certainly possible that your system works great at any combination of md_* values, possibly as a result of using the 6Gbps ports of the ASRock FM2A85X-ITX. We are seeing that different HD controllers have an impact on which set of values brings optimum performance. I know that this is an SFF build, but if you were adding more drives than the MB could handle, forcing you to include an add-in card, most likely these parameters would have an impact.

 

Since your system works well with stock values, you're in great shape.


I'm looking for Best Bang feedback.  Below are a few sample points from my write test.  Which would you consider the Best Bang for the MB? Or do you have another criteria in mind?

 

A) 100%:  49.7 MB/s @ md_write_limit=5568 uses 435MB  <-- Fastest recorded write speed

B)  95%:  47.3 MB/s @ md_write_limit=2688 uses 210MB  <-- Half the memory usage with 95% of the speed

C)  90%:  45.2 MB/s @ md_write_limit=1792 uses 140MB  <-- 1/3rd the memory usage with 90% of the speed

D)  85%:  43.3 MB/s @ md_write_limit=1280 uses 100MB  <-- 1/4th the memory usage with 85% of the speed

E)  79%:  39.1 MB/s @ md_write_limit= 768 uses  60MB  <-- unRAID Stock

 

Keep in mind that the memory usage above is for the writes only; total memory usage would be this plus sync plus an overhead for reads. Also, the memory usage above is calculated for my server, with 20 drives. Each server is different.
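For a rough idea of where such numbers come from: the figures above are consistent with each stripe costing one 4KiB page per drive. That per-stripe cost is an inference from the reported numbers, not documented unRAID internals:

```shell
# Rough memory estimate: stripes x drives x 4KiB page.
# The 4KiB-per-stripe-per-drive cost is inferred from the figures quoted
# above, not from unRAID documentation.
estimate_mb() {
    stripes=$1
    drives=$2
    echo $(( stripes * drives * 4096 / 1048576 ))
}

estimate_mb 5568 20   # -> 435, matching option A's "uses 435MB" on 20 drives
estimate_mb 1280 20   # -> 100, matching option D
estimate_mb 768 20    # -> 60, matching stock option E
```

The same formula also reproduces the 66MB and 123MB figures from the earlier v2.2 report if that server has around 12 drives, which is some evidence the inference is close.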


I can't comment on best bang for the buck, but I can say, I would choose the highest speed.

At the 90% option, does using up double the amount of low memory make the extra 6MB/s worth it?

 

For me, does using up the extra 500MB of memory make up for 10MB/s? Yes. Probably good only on lower drive count systems, at least until we are 64-bit.

 

Great job man!!!


Please add md_num_stripes= to this line, or at least to the start of the line group.

 

A) 100%:  49.7 MB/s @ md_write_limit=5568 uses 435MB

 

I have an empty HP MicroServer with unRAID under ESX that I can run tests on.

I have a 4TB parity drive, plus a 4TB data drive and a 3TB data drive.


Please add md_num_stripes= to this line, or at least to the start of the line group.

 

A) 100%:  49.7 MB/s @ md_write_limit=5568 uses 435MB

 

I have an empty HP MicroServer with unRAID under ESX that I can run tests on.

I have a 4TB parity drive, plus a 4TB data drive and a 3TB data drive.

 

Don't worry, I will have nice output.  What I showed above wasn't actual output, just some data I cobbled together to get forum feedback.  Formatting the data and performing the results analysis takes more coding than actually running the write tests!

 

Choosing 100% performance will always be an option, but I also want to make it easy for people to choose options that still give great results with much lower memory usage.  That's the best bang.


It's certainly possible that your system works great at any combination of md_* values, possibly as a result of using the 6Gbps ports of the ASRock FM2A85X-ITX. We are seeing that different HD controllers have an impact on which set of values brings optimum performance. I know that this is an SFF build, but if you were adding more drives than the MB could handle, forcing you to include an add-in card, most likely these parameters would have an impact.

I forgot the main point from my first post, or at least I forgot to emphasise it  :-[  Even though I did not see any change in parity check speed, the actual write speed definitely benefited from the changed disk settings (40 -> 50MBps write speed over the network).

 

Regarding the add-in cards, I'd recommend using well-proven models like the Supermicro AOC-SAS2LP-MV8. It has, for instance, a capacity of 600MBps per channel, so you have 150MBps per drive. The card is PCI-e x8 v2.0, so it has a maximum throughput of 4000MBps (8 x 500MBps), which is 500MBps per port. Even if you attach it to an x4 slot you will still be getting the max performance of 150MBps per drive. I will be building a 24-drive system soon, so I will have a chance to test this further.
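The lane arithmetic is easy to sanity-check, taking the usual ~500MBps effective rate per PCIe 2.0 lane as given:

```shell
# PCIe 2.0 bandwidth sanity check for an 8-port HBA (~500MBps per lane)
echo "x8 slot, total:    $(( 8 * 500 )) MBps"     # 4000 MBps across 8 ports
echo "x4 slot, per port: $(( 4 * 500 / 8 )) MBps" # 250 MBps per port
```

Even in the x4 case, 250MBps per port is still comfortably above the ~150MBps a fast spinning drive needs.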

 

And most importantly, keep up the good work!


I'm looking for Best Bang feedback.  Below are a few sample points from my write test.  Which would you consider the Best Bang for the MB? Or do you have another criteria in mind?

 

A) 100%:  49.7 MB/s @ md_write_limit=5568 uses 435MB  <-- Fastest recorded write speed

 

Keep in mind that the memory usage above is for the writes only - total memory usage would be this plus sync plus an overage for reads.

I wonder if those stripes are using lowmem by any chance.  If so, then that's asking for some serious trouble.  Do you dare test with values twice that high?  Tail the syslog from another console and watch if hell breaks loose.

 

From what I remember, it uses low memory.

A while back when I was testing this, if I was too aggressive with my increases I would see the OOM issues crop up more frequently. The more drives you have, the more prone you are to an OOM condition when expanding these too aggressively.
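A simple way to keep an eye on those OOM issues during a test, assuming the stock /var/log/syslog location:

```shell
# Count OOM-related events in the syslog after an aggressive test run.
# /var/log/syslog is assumed to be the stock location; adjust if needed.
PATTERN='oom|out of memory|page allocation failure'
count=$(grep -icE "$PATTERN" /var/log/syslog 2>/dev/null) || true
echo "OOM-related syslog lines: ${count:-0}"

# Or watch live from a second console while the test runs:
#   tail -f /var/log/syslog | grep -iE "$PATTERN"
```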

 

For my 4-drive HP MicroServer, I'm going to try to be as aggressive as possible for write speed purposes once the new script is fleshed out. I'm at 2048 now with good results; I didn't think about expanding it further.

Color me interested!!!!


Nothing wrong with the script, or your build, but you have only two drives in your test build:  1 parity and 1 data.  You have effectively created a RAID 0 using unRAID.  This means unRAID doesn't have to do any calculations or anything.  When building or checking parity, unRAID simply has to make parity=datadrive.  Mirrors are fast.

 

Actually it's effectively a RAID-1, not a RAID-0

 

... and while the effect of that is a mirror (since UnRAID uses even parity), it's NOT built that way, i.e. UnRAID doesn't simply "make parity = datadrive". It still does the parity calculations, so writes aren't any faster than they would be with more drives. But parity checks are very fast because there are only 2 drives involved, so the calculation is very quick.

 


I wonder if those stripes are using lowmem by any chance.  If so, then that's asking for some serious trouble.  Do you dare test with values twice that high?  Tail the syslog from another console and watch if hell breaks loose.

My syslog is spotless so far.  :)

 

Are you asking me to double again, to 11776?!!!

Link to post

Actually it's effectively a RAID-1, not a RAID-0

 

... and while the impact of that is a mirror (since UnRAID uses even parity), it's NOT built that way, i.e. UnRAID doesn't simply "make parity = datadrive".  It still does the parity calculations, so writes aren't any faster than they would be with more drives.  But parity checks are very fast because there are only 2 drives involved, so the calculation is very quick.

 

RAID1, correct.  I'm prone to reverse the two in my head.

 

I didn't mean to imply that unRAID is built to do something different when you have a single-drive parity-protected array; my point was that the math collapses to the point that it's effectively mirroring, so the calculation is extremely fast.
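That collapse is easy to see with a toy sketch of even (XOR) parity: XOR over a single data byte is just that byte, while more drives make it a real calculation.

```shell
#!/bin/bash
# Toy illustration of even (XOR) parity across the corresponding
# bytes of each data drive.
xor_parity() {
  local p=0 b
  for b in "$@"; do p=$(( p ^ b )); done
  echo "$p"
}

xor_parity 167          # one data drive: parity == the data byte (167)
xor_parity 167 60 85    # three data drives: a genuine calculation (206)
```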

Link to post

I wonder if those stripes are using lowmem by any chance.  If so, then that's asking for some serious trouble.  Do you dare test with values twice that high?  Tail the syslog from another console and watch if hell breaks loose.

My syslog is spotless so far.  :)

 

Are you asking me to double again, to 11776?!!!

 

How many drives do you have?

How many files do you have?

 

If you are using cache_dirs, then it could hit some limit.

 

Where I would see issues is using the locate package.

Locate would go through the whole system tree and capture every file.

 

Everything would be fine until I ran that or did a massive rsync on my backup directory.

I had 17 drives, but millions of files.

Link to post

Thanks for creating this, I've been following this thread with interest  ;)

 

Just in case it proves useful to you, here is my output after running the FULLAUTO option. All my drives are 3TB WD Reds, except one, which is a Hitachi 3TB.  They are all connected to an M1015:

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     | 118.5 MB/s 
   2  |    1536     |     768     |     640     | 118.5 MB/s 
   3  |    1664     |     768     |     768     | 120.3 MB/s 
   4  |    1920     |     896     |     896     | 118.5 MB/s 
   5  |    2176     |    1024     |    1024     | 120.3 MB/s 
   6  |    2560     |    1152     |    1152     | 118.5 MB/s 
   7  |    2816     |    1280     |    1280     | 118.6 MB/s 
   8  |    3072     |    1408     |    1408     | 120.3 MB/s 
   9  |    3328     |    1536     |    1536     | 118.5 MB/s 
  10  |    3584     |    1664     |    1664     | 120.3 MB/s 
  11  |    3968     |    1792     |    1792     | 118.5 MB/s 
  12  |    4224     |    1920     |    1920     | 118.6 MB/s 
  13  |    4480     |    2048     |    2048     | 120.3 MB/s 
  14  |    4736     |    2176     |    2176     | 118.6 MB/s 
  15  |    5120     |    2304     |    2304     | 118.6 MB/s 
  16  |    5376     |    2432     |    2432     | 120.3 MB/s 
  17  |    5632     |    2560     |    2560     | 118.6 MB/s 
  18  |    5888     |    2688     |    2688     | 120.3 MB/s 
  19  |    6144     |    2816     |    2816     | 118.5 MB/s 
  20  |    6528     |    2944     |    2944     | 118.6 MB/s 
--- Targeting Fastest Result of md_sync_window 768 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    1568     |     768     |     648     | 120.3 MB/s 
  22  |    1576     |     768     |     656     | 119.0 MB/s 
  23  |    1584     |     768     |     664     | 119.0 MB/s 
  24  |    1600     |     768     |     672     | 119.0 MB/s 
  25  |    1608     |     768     |     680     | 119.0 MB/s 
  26  |    1616     |     768     |     688     | 120.1 MB/s 
  27  |    1624     |     768     |     696     | 120.3 MB/s 
  28  |    1632     |     768     |     704     | 119.0 MB/s 
  29  |    1640     |     768     |     712     | 119.0 MB/s 
  30  |    1648     |     768     |     720     | 119.0 MB/s 
  31  |    1656     |     768     |     728     | 119.0 MB/s 
  32  |    1664     |     768     |     736     | 120.3 MB/s 
  33  |    1680     |     768     |     744     | 119.0 MB/s 
  34  |    1688     |     768     |     752     | 119.0 MB/s 
  35  |    1696     |     768     |     760     | 119.0 MB/s 
  36  |    1704     |     768     |     768     | 119.0 MB/s 

Completed: 2 Hrs 7 Min 32 Sec.

Best Bang for the Buck: Test 1 with a speed of 118.5 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 126MB of RAM on your hardware.


Unthrottled values for your server came from Test 21 with a speed of 120.3 MB/s

     Tunable (md_num_stripes): 1568
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 648

These settings will consume 140MB of RAM on your hardware.
This is 25MB more than your current utilization of 115MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
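If you prefer the console, the same tunables can be applied live with mdcmd (the tester script sets them this way on unRAID 5.x) while the array is started. A sketch using the "Best Bang for the Buck" values from the report above; note values set this way do not persist across a reboot unless also saved under Disk Settings:

```shell
# Apply the example values from the report above (sketch only:
# assumes unRAID 5.x mdcmd tunable names and a started array).
mdcmd set md_num_stripes 1408
mdcmd set md_write_limit 768
mdcmd set md_sync_window 512
```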

Link to post

I forgot the main point from my first post, or at least I forgot to emphasise it  :-[  Even though I did not see any change in parity check speed, the actual write speed definitely benefited from the changed disk settings (40 -> 50 MB/s write speed over the network).

I find it interesting that one parameter showed benefit while the other did not.  Glad to hear you got some benefit!  When the new version comes out, you'll be able to tune your writes much more effectively.

 

Regarding the add-in cards, I'd recommend using well-proven models like the Supermicro AOC-SAS2LP-MV8. For instance, it has a capacity of 600 MB/s per channel, so you have 150 MB/s per drive. The card is PCIe x8 v2.0, so it has a maximum throughput of 4000 MB/s (8 x 500 MB/s), which is 500 MB/s per port. Even if you attach it to a x4 slot you will still get the full 150 MB/s per drive. I will be building a 24-drive system soon, so I will have a chance to test this further.
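The per-port arithmetic in that recommendation can be sanity-checked with a quick sketch, assuming roughly 500 MB/s of usable PCIe 2.0 bandwidth per lane and 8 drive ports sharing the link evenly:

```shell
#!/bin/bash
# Usable bandwidth per drive port for a PCIe 2.0 HBA.
# ASSUMPTION: ~500 MB/s per lane, ports share the link evenly.
pcie_mb_per_port() {
  local lanes=$1 ports=$2
  echo $(( lanes * 500 / ports ))
}

pcie_mb_per_port 8 8   # x8 slot: 500 MB/s per port
pcie_mb_per_port 4 8   # x4 slot: 250 MB/s per port, still above
                       # the ~150 MB/s a spinning drive needs
```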

Interestingly, the AOC-SAS2LP-MV8 uses the same chipset as my HighPoint 2760A (and if you think the SAS2 is special, you really should take a gander at the 2760A).  This chipset is highly influenced by the md_sync_window value, which caused users to complain about performance for a long, long time, and the discovery that the solution is adjusting the md_* tunables became the foundation of this utility.

 

So while your motherboard might be immune to the parameters, the SAS2 most definitely is not!

 

And most importantly, keep up the good work!

I appreciate it!

 

-Paul

Link to post

Thanks for creating this, I've been following this thread with interest  ;)

 

Just in case it proves useful to you, here is my output after running the FULLAUTO option. All my drives are 3TB WD Reds, except one, which is a Hitachi 3TB.  They are all connected to an M1015:

 

Thanks for sharing jack0w.  Looks like your server is also immune to influence from the md_sync_window.

 

I really need to add 384 back into the FULLAUTO routine so I can see if there's any improvement over stock.  I was skipping it based upon the assumption that higher values are faster, but so many results have proven that assumption wrong.

 

-Paul

Link to post
