unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



Interesting. Seems the lowest possible setting is the best setting for me.

 

Tunables Report from  unRAID Tunables Tester v2.1 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
---------------------------------------------------------------------------
   1  |     896     |     768     |     128     |  83.5 MB/s 
   2  |    1024     |     768     |     256     |  82.9 MB/s 
   3  |    1280     |     768     |     384     |  80.1 MB/s 
   4  |    1408     |     768     |     512     |  74.6 MB/s 
   5  |    1536     |     768     |     640     |  66.9 MB/s 

Completed: 0 Hrs 52 Min 20 Sec.

Best Bang for the Buck: Test 1 with a speed of 83.5 MB/s

     Tunable (md_num_stripes): 896
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 128

These settings will consume 35MB of RAM on your hardware.


Unthrottled values for your server came from Test 1 with a speed of 83.5 MB/s

     Tunable (md_num_stripes): 896
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 128

These settings will consume 35MB of RAM on your hardware.
This is -15MB less than your current utilization of 50MB.
NOTE: Adding additional drives will increase memory consumption.
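The RAM figures in the report are consistent with a simple per-drive buffer estimate. Here is a sketch of that arithmetic; the 4KiB-per-stripe buffer size and the 10-drive count are assumptions back-figured from the 35MB figure, not anything the report states:

```shell
#!/bin/bash
# Rough md_* memory estimate: one 4KiB stripe buffer per stripe,
# per array drive. Buffer size and drive count are assumptions
# inferred from the 35MB shown for 896 stripes.
num_stripes=896
num_drives=10
stripe_bytes=4096

ram_mb=$(( num_stripes * stripe_bytes * num_drives / 1024 / 1024 ))
echo "${ram_mb} MB"
```

This also makes the NOTE above concrete: each additional drive adds roughly num_stripes x 4KiB of memory use.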

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
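For command-line users, the same values can also be applied without the GUI. A sketch, assuming the stock mdcmd interface at /root/mdcmd (the path the tester script itself uses); it only echoes by default so you can dry-run it:

```shell
#!/bin/bash
# Apply the chosen tunables at runtime via mdcmd. The /root/mdcmd
# path is an assumption based on a typical unRAID install; the
# mdcmd line is left commented so this is safe to dry-run.
NUM_STRIPES=896
WRITE_LIMIT=768
SYNC_WINDOW=128

for pair in md_num_stripes=$NUM_STRIPES md_write_limit=$WRITE_LIMIT md_sync_window=$SYNC_WINDOW; do
  name=${pair%%=*}
  value=${pair##*=}
  echo "set $name $value"
  # /root/mdcmd set "$name" "$value"   # uncomment on a live server
done
```

Note that values set this way do not survive a reboot; use Settings > Disk Settings to persist them.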


I had an array filled with these WD10EACS.  Slowest 1TB drives I've owned.

 

Not at all surprising.  The 1TB EACS drives were originally released with 4 platters holding 250GB/platter;  later versions had 3 platters with 333GB/platter.    In either case, compared to more recent drives with much higher areal density platters, those are very slow.    The newer EZRX units, for example, are all 1TB/platter drives ... so you get 3 to 4 times as much data per revolution from the platter !!

 


The drives include the following models from Hitachi, Seagate & WD:

 

HDS5C4040ALE630

ST3000DM001

WD10EACS

WD10EAVS

WD10EADS

WD15EARS

WD20EARS

WD30EZRX

 

Jardo => You have a VERY nice build, but have two significant "flaws" that are slowing things way down.

 

As I noted above (and others have as well), your PCI-X controller in a PCI slot is a MAJOR bottleneck for performance.    In addition, you have several drives that only have 250GB or 333GB platters -- note that this means they provide 1/4th to 1/3rd as much data per revolution as a new 1TB/platter drive !!

 

Your parity drive, Seagate 3TB drive, and probably your 3TB EZRX drive are all 1TB/platter units (there was an early version of the EZRX that used 750GB platters, but assuming yours is recent it will be 1TB/platter).

 

Your WD15EARS is a 500GB/platter unit.

The WD20EARS could be either 500GB/platter or 667GB/platter (depending on specific version).

Your WD10EACS is either 250GB/platter or 333GB/platter (very slow).

Your WD10EAVS is 333GB/platter.

Your WD10EADS is either 333GB/platter or 500GB/platter (again, depending on the specific version).

 

The first thing you want to do is absolutely replace your controller card.

 

But the next big performance gain you could do is to replace your 3 1TB drives with a single 3 or 4TB 1TB/platter drive.    That would get rid of all 250GB and 333GB/platter bottlenecks.    Then if you replaced your two EARS units with a 4TB 1TB/platter drive you'd probably have all 1TB/platter drives => and with the other characteristics of your system (very high-end Haswell setup) you'd have a VERY nicely performing system !!

 


Interesting. Seems the lowest possible setting is the best setting for me.

 

Fascinating!  This was completely unexpected! 

 

I've now got a few thoughts:

 

  • a) I should have opened up even lower values below 128... but how low should I go?  64?  32?  16?  8?
  • b) John, what crazy hardware are you running?  You should put your build in your profile for convenience.
  • c) If higher numbers are slowing down your syncs, are higher numbers also slowing down your writes?

 

For c), I don't ever drop the md_write_limit below the unRAID stock of 768, but my theory is that the value that works best for syncs is the value that works best for writes too.  At 768 bytes, your Parity Check speed is limited to 60.6 MB/s, climbing 38% to 83.5 MB/s at 128 bytes.  I can't help but wonder if writing to your array would see similar levels of gain?

 

Of course, to test this you would need a gigabit connection; anything slower and your network will skew the results.  You also need a source that can send the file faster than unRAID can write it.  You would also want to write to an empty drive to accurately measure the impact of the parameter changes, preferably the fastest model drive in your build.
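One way to sidestep the network variable entirely is a direct dd write on the server itself. A rough sketch; the /mnt/disk1 mount point is an assumption, so point TARGET at whichever empty data drive you want to measure:

```shell
#!/bin/bash
# Crude local write-speed probe. TARGET should be an empty array
# disk (e.g. /mnt/disk1) to exercise the md layer; /tmp is only a
# harmless default for a dry run. conv=fsync flushes the data to
# disk so the reported rate is honest.
TARGET=${1:-/tmp}
testfile="$TARGET/speedtest.bin"

dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$testfile"
```

Run it once per md_write_limit value you want to compare, and use a count large enough that the transfer runs for at least several seconds.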

 

-Paul


Fascinating!  This was completely unexpected! 

 

An understatement  :)

 

jbartlett ==> Please post your configuration details !!  I suspect you have very little memory in your system ... is that correct?    Or perhaps a LOT of plugins that are using significant RAM.    In any event, the specifics would be very interesting.

 


As I noted above (and others have as well), your PCI-X controller in a PCI slot is a MAJOR bottleneck for performance.    In addition, you have several drives that only have 250GB or 333GB platters -- note that this means they provide 1/4th to 1/3rd as much data per revolution as a new 1TB/platter drive !!

 

JarDo, garycase is spot-on in his assessment.  Areal density makes a huge impact on performance.  In my desktop I have two different versions of 4TB drives, one is a 5 platter 800GB/platter 7200RPM beast, and the other is a 4 platter 1TB/platter 5400RPM eco type drive.  The 5400 rpm drive is the faster drive.  Go figure.

 

I recently upgraded all my unRAID drives to 3TB units, and I am very pleased with the consistent level of performance I get, plus a much faster parity check.

 

That said, swapping your drives will primarily only affect parity checks at the 'whole system level'.  Otherwise you would only experience slower performance when reading or writing to one of the slow drives.  If these are drives you don't use much, you may not find much benefit from replacing them, especially if you only run a monthly parity check.


jbartlett ==>... I suspect you have very little memory in your system ... is that correct?    Or perhaps a LOT of plugins that are using significant RAM. 

 

That's a possibility I hadn't considered, and I think it is quite likely. 

 

I was thinking that maybe the hd controller could only accept so many bytes per request.  At 128 stripes times 4k, that's 512k per drive.  I was thinking that too much data at once overwhelms the controller, and I think that every controller has that limit at some point (be it a limit per drive or a limit upon the sum of all drives), and the limit is unique to that controller.  My controller seems to hit that limit around 24 MB per drive, or 480 MB as a sum of all drives I have installed. 
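The arithmetic in that theory checks out. Here it is as a quick sanity-check; the 6144-stripe and 20-drive figures are back-figured from the quoted 24MB and 480MB, not stated outright in the post:

```shell
#!/bin/bash
# Buffer arithmetic from the paragraph above: outstanding data per
# drive is stripes x 4KiB. The 6144-stripe / 20-drive figures are
# inferred from the quoted 24MB and 480MB limits.
per_drive_kb() { echo $(( $1 * 4 )); }

echo "$(per_drive_kb 128) KB per drive at 128 stripes"

stripes=6144
drives=20
mb=$(( $(per_drive_kb $stripes) / 1024 ))
echo "${mb} MB per drive, $(( mb * drives )) MB across ${drives} drives"
```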

 

This idea also helps explain some of the sporadic/spikey results some users have reported - I've noticed that these users appear to have multiple controllers (mb + add-in), and each controller has different limits/needs, and what makes one controller faster might be making the other controller slower.  My results have been smooth as butter, but I only have the one controller.  What a nightmare if you have one controller that wants 128, and another that wants 2600.  It's impossible to satisfy both at once.

 

Maybe I'm way off base.


As far as making a statement in the initial post that this utility has no regard for the md_num_stripes and md_write_limit parameters, I will make no such statement, since I have clearly put consideration into how all three values are set.  I freely described my methodology, and everyone is free to use or ignore this utility as they see fit.

 

Completely agree.    As your long post in Reply #94 above notes, the methodology you used here does a nice job of showing the impact at various settings for the sync_window, while still maintaining reasonable stripes and write_limit values.    Folks are certainly free to adjust any of those parameters as they see fit to see what impact it has on their system.    As you may recall, I did a LOT of testing on this a few months ago ... probably ran 30 parity checks in a 10-12 day period ... to find a reasonably optimum setting for my system.    I was curious whether this utility would do better ... and it did indeed find a range that shaved another 8 minutes off my parity checks !!  (at a cost of a bit of additional RAM)

 

I think any read impact that optimizing parity check performance has is just as likely to be due to disk thrashing (as the read request(s) are satisfied) as it is from any stripe settings.  The best parity check setting should keep the disks fully engaged, so any read or write activity is going to thrash the disks involved (and simultaneously "halt" the parity check while that read/write request is satisfied).  It's easy to see how this may cause a few "hiccups" in video streaming during a parity check ... but that's not really due to "optimizing for parity" => it's a byproduct of the fact that if you ARE optimized for parity, the disks are VERY busy, and any attempt to do a high-bandwidth transfer at the same time will result in thrashing that may cause a few minor delays.    That could happen during any parity check ... whether or not you're optimized for its performance.

 

Bottom line:  While not a panacea ... and certainly not a utility that really "optimizes" parity checks, as it doesn't do any real analyses of all possible ranges (i.e. it certainly didn't find the weird anomaly in jbartlett's system),  your utility does a VERY good job of finding the likely best ranges ... AND providing a simple tool for a user whose system is outside of those parameters to look at the data and rerun the test to look at ranges that work best for his/her system.    I can't imagine needing to go any lower than 128 -- but on the other hand, allowing even lower values in "Manual" mode wouldn't hurt anything ... so perhaps you should set that threshold to 32 or 64.  I can't imagine that a value that low will be "optimal" for anyone => but then again I'd have never anticipated the results jbartlett had  :)


I have two different versions of 4TB drives, one is a 5 platter 800GB/platter 7200RPM beast, and the other is a 4 platter 1TB/platter 5400RPM eco type drive.  The 5400 rpm drive is the faster drive.  Go figure.

 

I'll take a 5400/5900rpm 1TB/platter drive over a 7200rpm lower-density unit anytime.    There IS, however, one area where the 7200rpm units still easily outperform lower rpm drives ... they have better access times.  So STARTING a transfer will be quicker on a 7200rpm unit, even though the transfer rate will be faster on the drive with higher areal density (assuming the difference is enough to make up for the rotational speed difference).  So, for example, on a desktop with both Seagate DX and DM units (I assume that's what you have), the DX drive would be a better place for the Windows page file, since these are all small transfers where the quicker access would be beneficial.

 

Nevertheless (as I noted above), when buying new drives I ONLY buy 1TB/platter units  :)

 

... there's a lot of conjecture about whether WD's new 5TB WD Reds (due in January) are going to be 5 platter 1TB/platter units or 4 platter 1.25TB/platter units.    I sure hope they're the latter !!

 


4 platter 1.25TB/platter units

DROOOL.  :)

 

You're spot on about the DX and DM's.  I actually have 4, two of each, in a RAID 10.  Maybe not the best to mix different rpms in a RAID 10, but I got the two new drives as partial payment for my old server, so I couldn't be too picky.  Works a charm, though.  8TB at up to 286MB/s does wonders for editing 1080p video.


Everyone keeps talking about watching video during parity checks and I have seen numerous complaints about how this is a problem. On my stock settings I can't even watch a video during a parity check. I have always accepted that as a necessary evil during parity checks. If I can shave time off my parity check and do it safely using this utility then that is fantastic and I am all for it. Right now a parity check takes me about 9.5 hours give or take. One question though, would this have any effect on the rebuild speed if a disk fails?


One question though, would this have any effect on the rebuild speed if a disk fails?

 

Yes, I believe so.  Hopefully it would be a positive effect.

 

I first documented an issue where parity checks took significantly longer than parity rebuilds.  I took my case to Tom, and he pointed me to these md_* tunables.

 

I "believe" that since a rebuild is writing data, it is limited by the md_write_limit instead of the md_sync_window.  If I am correct, and if increasing your md_sync_window beyond 768 bytes improves your parity check speed, then also increasing the md_write_limit beyond 768 would improve your rebuild speed.

 

Or maybe Tom has some other control in place on rebuilds...

 

Obviously a rebuild is much harder to test than a parity check (I don't think you want me aborting and restarting a rebuild a few dozen times to test it, do ya?).

 

Hopefully someone with a couple disk upgrades to perform could run each with different values for md_write_limit and report back.  I just went through a bunch of these myself, and it didn't even occur to me to test this.  Too bad, since all my drives are upgraded now.


I have updated the utility to version 2.2 (see the main post).  The changes are very minor, so feel free to skip this version unless you need one of the three changes.

 

New Features in v2.2:

  • Added support for starting position byte values down to 8 bytes (previously 128).  You will have to select (M)anual on the Starting Position Override screen, then you can manually enter the starting value.  You can also select down to 8 on the Ending Position Override screen too.
  • Updated the FULLAUTO routine to add a special pass if the fastest value after Pass 1 is 512 bytes.  The extra pass basically extends Pass 1 down to 128 bytes, and adds 12 minutes to the running time.  If 128 bytes is the new fastest value after special pass 1b, then the final pass (unmodified) will effectively test values from 8 bytes to 128.  This allows the FULLAUTO routine to correctly handle 'special' servers like John's, which for reasons unknown work better with lower values.
  • I fixed a small bug that caused the utility to not restore the original configured parameters after the test when the user failed to select either the (U)nthrottled or (B)est Bang for the Buck values.
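The new pass-1b branch can be paraphrased in a few lines. This is an illustrative sketch only, with made-up variable names; the real script's internals may differ:

```shell
#!/bin/bash
# Illustrative paraphrase of the v2.2 FULLAUTO flow described in
# the release notes. Names here are invented for clarity and are
# not the script's actual variables.
fastest=512              # md_sync_window of the fastest Pass 1 test

if [ "$fastest" -eq 512 ]; then
  echo "special pass 1b: extending tests down to 128"
  fastest=128            # suppose the low end wins again
fi

if [ "$fastest" -eq 128 ]; then
  echo "final pass: testing values from 8 through 128"
fi
```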

 

You can download the file from post #1:  http://lime-technology.com/forum/index.php?topic=29009.msg259087#msg259087

 

-Paul

 

John - this version is for you.  While you can run the ultra low values in Manual mode, I would appreciate it if you could also run a FULLAUTO, as I added logic to handle your server.  I am not able to test this logic since my server doesn't behave better with lower values.

 

 


jbartlett => Please post the detailed results of your run of Fullauto with the new v2.2 that Paul created especially for you.

 

Most interesting to see just where your system optimizes.    And please post the full system configuration details (as I already asked for earlier).    Both Paul & I are certainly very interested to see just what the system config is that optimizes at such low values  :)


I am absolutely amazed at the effort and detail that you guys put into your replies to my questions.  Thank you so much for all the good information.  I feel very confident now with my future server upgrade plans.

 

No problem.    When you replace the controller card, post your new "tunables tester" results so we can see just how much things improved (I suspect you'll be amazed)  :)

 

... if you get rid of those 250/333GB platters you'll see yet-another big jump.

 

Difficult to really predict, but I'll go out on a limb and estimate that the new controller card will put you in the 60-70MB/s range;  and if you then get rid of the small platters you'll be well over 100MB/s (although that may also require that you shed the 500GB platter units as well)

 


It seems that your idea of tuning for playback enjoyment is to starve the parity check, leaving excess processing capacity on the hard drives so that reads have no competition.  While that is a valid proposition, I don't think purposely de-tuning one process to get performance on another process is the greatest of solutions.  I think your assertion that Tom de-tuned parity checks on purpose is conjecture, as we can see in many of these test results that optimum performance comes from unRAID stock values.  If anything, Tom chose nice default values that work great on many servers, and work decently on all servers, while being sized for the smallest of servers.

 

I would be interested to know how Tom prioritizes reads vs. writes vs. syncs:  is this something hard coded into mdcmd, or is it controlled by the allocation % of these three tunable parameters.  Perhaps someone would care to peruse the mdcmd.c source file and see if they can see a methodology. 

 

I thought it was rather clear I was commenting on leaving enough resources available for other services which require reads/writes to the array - a more balanced approach.  Regardless, no one fully understands these parameters yet, so there's no point in debating testing methodology.  One thing to note is the source is available on every unRAID installation as md.c and unraid.c, which are heavily modified from the Linux open source md.c and raid5.c.  Since they both fall under GNU v2 they are required to be included in the distribution.  The only post I found where Tom explains these parameters is http://lime-technology.com/forum/index.php?topic=4625.msg42091#msg42091.

 

 

 


Thanks so much for this script!  Just tried it and the lowest was the fastest for me also:

 

Tunables Report from  unRAID Tunables Tester v2.0 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     | 101.9 MB/s 
   2  |    1536     |     768     |     640     | 101.9 MB/s 
   3  |    1664     |     768     |     768     |  95.8 MB/s 
   4  |    1920     |     896     |     896     |  92.1 MB/s 
   5  |    2176     |    1024     |    1024     |  91.7 MB/s 
   6  |    2560     |    1152     |    1152     |  91.7 MB/s 
   7  |    2816     |    1280     |    1280     |  91.7 MB/s 
   8  |    3072     |    1408     |    1408     |  91.7 MB/s 
   9  |    3328     |    1536     |    1536     |  91.7 MB/s 
  10  |    3584     |    1664     |    1664     |  91.7 MB/s 
  11  |    3968     |    1792     |    1792     |  91.7 MB/s 
  12  |    4224     |    1920     |    1920     |  91.7 MB/s 
  13  |    4480     |    2048     |    2048     |  91.6 MB/s 
  14  |    4736     |    2176     |    2176     |  91.7 MB/s 
  15  |    5120     |    2304     |    2304     |  91.7 MB/s 
  16  |    5376     |    2432     |    2432     |  91.7 MB/s 
  17  |    5632     |    2560     |    2560     |  91.7 MB/s 
  18  |    5888     |    2688     |    2688     |  91.7 MB/s 
  19  |    6144     |    2816     |    2816     |  91.7 MB/s 
  20  |    6528     |    2944     |    2944     |  91.7 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Medium Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    1288     |     768     |     392     |  91.6 MB/s 
  22  |    1296     |     768     |     400     |  91.7 MB/s 
  23  |    1304     |     768     |     408     |  91.7 MB/s 
  24  |    1312     |     768     |     416     |  91.6 MB/s 
  25  |    1320     |     768     |     424     |  91.3 MB/s 
  26  |    1328     |     768     |     432     |  91.6 MB/s 
  27  |    1336     |     768     |     440     |  91.7 MB/s 
  28  |    1344     |     768     |     448     |  91.6 MB/s 
  29  |    1360     |     768     |     456     |  91.6 MB/s 
  30  |    1368     |     768     |     464     |  91.7 MB/s 
  31  |    1376     |     768     |     472     |  91.6 MB/s 
  32  |    1384     |     768     |     480     |  91.3 MB/s 
  33  |    1392     |     768     |     488     |  91.7 MB/s 
  34  |    1400     |     768     |     496     |  91.6 MB/s 
  35  |    1408     |     768     |     504     |  91.7 MB/s 
  36  |    1416     |     768     |     512     |  91.6 MB/s 

Completed: 2 Hrs 9 Min 48 Sec.

Best Bang for the Buck: Test 1 with a speed of 101.9 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 66MB of RAM on your hardware.


Unthrottled values for your server came from Test 22 with a speed of 91.7 MB/s

     Tunable (md_num_stripes): 1296
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 400

These settings will consume 60MB of RAM on your hardware.
This is -36MB less than your current utilization of 96MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.


Thanks so much for this script!  Just tried it and the lowest was the fastest for me also:

 

VERY interesting.

 

It would be very interesting if you would:

 

(a)  Post your system's hardware configuration details;

 

and

 

(b)  download the new v2.2 of Paul's script and run it ... posting the results here.  [it should also home in on your optimal value better, since it now looks "downward" for those systems that have their best performance at the low end of these values]

 


Interesting point in the results from RockDawg's test.  It appears the "unthrottled" value is the best found during pass 2, ignoring pass 1.

 

Noted something else:  Look at the results of test #1 compared to the almost identical test #36:

 

1  |    1408    |    768    |    512    | 101.9 MB/s

 

36  |    1416    |    768    |    512    |  91.6 MB/s

 

Really strange.    The only difference (aside from an insignificant 8-stripe difference in the total allowed) is the 4-minute duration vs. 3 minutes.

 

I have to wonder if this is related to the virtualization.

 

RockDawg:  I assume your controller is passed through -- correct?


I just kicked off a fullauto with version 2.2 with the following changes: setting TestLen to 599 instead of 239 (in 3 places in the full-auto logic) to run each test for 10 minutes, in order to get past my 32GB SSD drive, which seems to skew the results.
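For anyone wanting to repeat John's edit, a sed one-liner does it. The TestLen name and the 239/599 values come from his post, but the exact assignments in the script are assumptions; check with grep before editing:

```shell
# Bump every test from roughly 4 minutes (TestLen=239 seconds) to
# roughly 10 minutes (TestLen=599), keeping a .bak copy of the
# original. TestLen is the variable John names; the occurrence
# count in the script is assumed.
sed -i.bak 's/TestLen=239/TestLen=599/g' unraid-tunables-tester.sh
grep -c 'TestLen=599' unraid-tunables-tester.sh   # should match the 3 places John mentions
```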

 

System specs

MB: ASUSTeK Computer INC. - M4A88TD-V EVO/USB3

CPU: AMD Phenom II X4 965 - 3.4 GHz

RAM: 4GB

HD: 1.5TB WD EARS (3, jumpered), 2TB WD (1), 3TB Seagate (3), 4TB Seagate (2+parity), 32GB SSD (1), 500GB WD (Cache)

 

Hard drives are connected to MB and to two PCI-E SATA extenders.

 

Been replacing the EARS drives, one a month, with the 4TB Seagates. Preclearing another 4TB now.

 

-John


Seeing differing results on 5.0 final: a lot higher and steadier. Going to restart with the script unmodified.

 

Test 1  - md_sync_window=512  - Completed in 604.779 seconds =  94.0 MB/s

Test 2  - md_sync_window=640  - Completed in 605.112 seconds =  89.1 MB/s

Test 3  - md_sync_window=768  - Completed in 605.048 seconds =  89.5 MB/s

Test 4  - md_sync_window=896  - Completed in 605.122 seconds =  88.6 MB/s

Test 5  - md_sync_window=1024 - Completed in 604.995 seconds =  89.2 MB/s

 

 
