unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



You can get all three values in one pass:

eval $(egrep 'num_stripes|write_limit|sync_window' /boot/config/disk.cfg | tr -d '\r')

I assume you were looking for the default values, before you start changing things.

Remember, those values can be superseded by /boot/config/extra.cfg,

as per: http://lime-technology.com/forum/index.php?topic=4625.msg42091#msg42091
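
As a quick illustration, after running the eval the three values are available as ordinary shell variables (this assumes the disk.cfg keys are named md_num_stripes, md_write_limit and md_sync_window, which is what the stock file uses):

# Example only: print the values picked up from disk.cfg
eval $(egrep 'num_stripes|write_limit|sync_window' /boot/config/disk.cfg | tr -d '\r')
echo "num_stripes=$md_num_stripes write_limit=$md_write_limit sync_window=$md_sync_window"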

 

My thinking was that when Tom originally rolled out these tunables, the only way to adjust them was in extra.cfg, and that at some point later he made them a core part of disk.cfg, tunable through the GUI.

 

But you make a good point: some may still be using extra.cfg to override the values.

 

When I allow the changes to be saved, should I be only writing to extra.cfg, or is it okay to make the changes in disk.cfg?  Since the GUI changes disk.cfg, I was following that model.


When I allow the changes to be saved, should I be only writing to extra.cfg, or is it okay to make the changes in disk.cfg?  Since the GUI changes disk.cfg, I was following that model.

You're getting carried away trying to make a tool that will do EVERYTHING plus make you some coffee.  Focus on the core functionality, and let the users change their default settings if they like.

 

Ever seen the movie Idiocracy?  Remember the receptionist's computer in the hospital?  That movie invented the Android OS years ahead of its time!  :D

 

Let the users make the change they want.

Put in the documentation where to make the change, in the GUI or in the file.

 

Later on, after there are many tests and if you feel like revisiting it, you can update the file automatically with an option switch, or via some webGUI presentation.

 

I would not do it at this point.

 

Example. I want to run the benchmark without touching my own customized values.

What I have works very well for me, with no pauses and high burst write speed (which is very important to me).



 

Perhaps neither of you has tried any of the recent versions.  For a while now, there has been a 'SAVE' option that presents the user with the choice to write the values to the disk.cfg file.

 

If the user wants the values written to the file, they simply type SAVE and it's done.  The user gets to choose between the fastest and best-bang values.

 

If they don't want the new values, they simply don't type SAVE, and their original values are preserved.

 

My question wasn't whether I should do this or not; I've been doing it for a while, and will continue to do it.  This is the type of program I write.

 

My question was simply whether I should maintain the values in disk.cfg, or extra.cfg.
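
For context, the SAVE step isn't anything exotic; conceptually it just rewrites the three md_* lines in whichever file we settle on.  A simplified sketch against disk.cfg (not the script's exact code) would look like this:

# Simplified sketch only -- the utility's actual SAVE logic may differ.
# Note: disk.cfg on the flash drive uses DOS line endings; a robust version should preserve them.
read -p "Type SAVE to write these values to /boot/config/disk.cfg: " answer
if [ "$answer" = "SAVE" ]; then
  for key in md_num_stripes md_write_limit md_sync_window; do
    # ${!key} expands to the chosen value held in a variable of the same name
    sed -i "s/^${key}=.*/${key}=\"${!key}\"/" /boot/config/disk.cfg
  done
  echo "Values written to disk.cfg"
fi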


Agree with WeeboTech ... if you're going to save them, save them in disk.cfg like the GUI does.

 

If someone's got an "extra.cfg" file, then it's up to them to change the values if they want to.  If you want to be "extra helpful", you could check for the presence of that file and remind them that they have it and that it overrides the values in disk.cfg => but let the knowledgeable user make that change.
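
Something along these lines would do it (a rough sketch, assuming the file lives at /boot/config/extra.cfg as discussed above):

# Rough sketch of the 'extra helpful' reminder suggested above.
if [ -f /boot/config/extra.cfg ]; then
  echo "NOTE: /boot/config/extra.cfg exists and may override the md_* values in disk.cfg:"
  egrep 'num_stripes|write_limit|sync_window' /boot/config/extra.cfg | tr -d '\r'
fi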

 

By the way, I don't see anywhere in the utility to tell it how strong you like your coffee !!  :)

 


Am I doing something wrong in my Read tests? 

 

I'm asking because all of my results, from md_num_stripes=128 to md_num_stripes=2816, had the same read speed (as calculated by dd).

 

I tested with a 5GB file (my server has 4GB of RAM).  The file was all zeros (this was a file I had created with the write test).

 

Here's the command I issued for the test: dd if=/mnt/disk$TestDisk/testfile5gbzeros.txt bs=64k of=/dev/null

 

I also drop the caches: sync && echo 3 > /proc/sys/vm/drop_caches

 

I set the write_limit and sync_window both to 128 for the whole read test - I don't think these values affect read performance so I push them way down and out of the way (probably not necessary, but figured someone would ask about it).
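
In case it helps spot a mistake, the whole test boils down to something like this (a simplified sketch of what I'm doing, not the utility's exact code; it assumes the stock mdcmd interface for setting the tunables, and $TestDisk is the disk number holding the test file):

# Simplified sketch of the read-test loop.
TestDisk=1                                 # assumption: disk holding the 5GB test file
mdcmd set md_write_limit 128               # pushed down out of the way for the whole test
mdcmd set md_sync_window 128
for stripes in $(seq 128 128 2816); do
    mdcmd set md_num_stripes $stripes      # the value under test
    sync && echo 3 > /proc/sys/vm/drop_caches   # flush the page cache before each pass
    dd if=/mnt/disk$TestDisk/testfile5gbzeros.txt bs=64k of=/dev/null
done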

 

Here's my results:

 Test | num_stripes | write_limit | sync_window |   Speed   
------------------------------------------------------------
   1  |      128    |      128    |     128    | 156 MB/s
   2  |      256    |      128    |     128    | 155 MB/s
   3  |      384    |      128    |     128    | 156 MB/s
   4  |      512    |      128    |     128    | 156 MB/s
   5  |      640    |      128    |     128    | 156 MB/s
   6  |      768    |      128    |     128    | 156 MB/s
   7  |      896    |      128    |     128    | 156 MB/s
   8  |     1024    |      128    |     128    | 156 MB/s
   9  |     1152    |      128    |     128    | 156 MB/s
  10  |     1280    |      128    |     128    | 156 MB/s
  11  |     1408    |      128    |     128    | 155 MB/s
  12  |     1536    |      128    |     128    | 156 MB/s
  13  |     1664    |      128    |     128    | 155 MB/s
  14  |     1792    |      128    |     128    | 156 MB/s
  15  |     1920    |      128    |     128    | 156 MB/s
  16  |     2048    |      128    |     128    | 156 MB/s
  17  |     2176    |      128    |     128    | 156 MB/s
  18  |     2304    |      128    |     128    | 156 MB/s
  19  |     2432    |      128    |     128    | 155 MB/s
  20  |     2560    |      128    |     128    | 156 MB/s
  21  |     2688    |      128    |     128    | 156 MB/s
  22  |     2816    |      128    |     128    | 156 MB/s

 

I'm planning on testing with both a random file and a real DVD ISO file, but the results above caught me off guard, so I thought I would ask the experts in case I had completely missed a vital step or formed my dd test wrong.

 

Thanks,

Paul


I did not expect the numbers to have much effect on read, but I thought it would have 'some' effect on writes.

 

Tuning md_write_limit has had a huge effect on writes - as you expected.

 

Tuning md_num_stripes has had zero effect on reads, at least on my server.  Several users have commented about how they improved read performance by tuning num_stripes.  I expected to see something, especially with values as low as 128, 10% of the unRAID stock value.

 

I tried writing a 10GB file of random data (instead of zero data) to use for the test, but hours later the file still wasn't complete.

 

Now I'm reading a 7.1GB DVD ISO, and I'm getting a constant 128 MB/s regardless of md_num_stripes.

 

I know not everyone's server has shown a response to tuning md_sync_window; perhaps my server just doesn't show a response to md_num_stripes.  I guess we'll find out when I release v3.0.

 


So, you're doing one sustained read from one disk, while nothing else is happening. Why does the result surprise you?

 

The result surprises me because I've set the md_num_stripes to a very low value, purposely trying to starve the reads, and even a value of 8 (yes, eight) didn't starve it.

 

A few users in this thread have made a point of the importance of increasing md_num_stripes to values well above sync+writes, and here I am playing with tiny values that have no impact whatsoever.

 

I thought you'd do those read tests while you're running a parity check.

 

Just trying to make sure the read test is working correctly before I get crazy complex by running multiple tests simultaneously.  I wasn't trusting the results, so I asked for input.


I thought you'd do those read tests while you're running a parity check.

 

Here are my results from reading a 7.1 GB DVD ISO while running a Parity Check.

 

No matter what value I used for md_num_stripes, read performance was not altered.  It appears that unRAID 5.0 prioritizes the Parity Sync over Read traffic.

 

I found that reporting comparable results was also problematic, as the further the Parity Check progressed, the more my Read speeds dropped, most likely because the HD had to seek further and further to find the data to Read as the Parity Sync position got further away from the Read data.  You can see this in test results 1-7 below.

 

To 'level set' the results, I did two Test 1's, both at the same Parity Sync position: one with md_num_stripes a measly 4 stripes larger than md_sync_window, and a second with md_num_stripes at 3x the value of md_sync_window.  There was no read speed improvement.

  • Test  1 - md_num_stripes=2820 - (7.1 GB) took 2896.03s averaging 2.5 MB/s  <--Num Stripes only 4 bigger than Sync Window
  • Test  1 - md_num_stripes=8320 - (7.1 GB) took 2869.10s averaging 2.5 MB/s  <--Num Stripes set to 3x the Sync Window

 

Parity Check Speed was barely impacted by running the Read tests simultaneously.  My full Parity Check time increased less than 5 minutes, from 7h35m to 7h39m.

 

In these tests, you can see the gradual slowdown of the Read results as the Parity Check progressed, and that the Parity Check finished during Test 8:

 Test | num_stripes | write_limit | sync_window |   Speed   
------------------------------------------------------------
   1  |     2820    |      128    |     2816    | 2.5 MB/s
   2  |     2948    |      128    |     2816    | 2.3 MB/s
   3  |     3076    |      128    |     2816    | 1.9 MB/s
   4  |     3204    |      128    |     2816    | 1.9 MB/s
   5  |     3332    |      128    |     2816    | 1.7 MB/s
   6  |     3460    |      128    |     2816    | 1.6 MB/s
   7  |     3588    |      128    |     2816    | 1.6 MB/s
   8  |     3716    |      128    |     2816    | 24.7 MB/s
   9  |     3844    |      128    |     2816    | 129 MB/s
  10  |     3972    |      128    |     2816    | 128 MB/s

 

Here's Test 1 again (from the same Parity Check position as the Test 1 above), but with the higher md_num_stripes:

 Test | num_stripes | write_limit | sync_window |   Speed   
------------------------------------------------------------
   1  |     8320    |      128    |     2816    | 2.5 MB/s

 

The read-time of the DVD during a Parity Check was anywhere from 48 minutes (should be watchable) at the beginning of my Parity Check to 75 minutes (would probably have stuttering) at the end of my Parity Check.  I don't think a Blu-Ray would have been watchable at all.  Most likely the movie was located at the beginning of the drive, and the results might have been different otherwise.

 

Any questions, thoughts or suggestions?

 

I might run another test with lower sync_window values, to see if I can artificially prioritize Reads by starving the Parity Sync.

 

I'm planning on releasing v3.0 of this utility with the Read test included, so others can see if it has any impact on their system, but personally I don't see much value in this test.

 

-Paul

 

  • 2 weeks later...

Hi

 

Ran your script and it produced the values below.

 

I decided to go with the Best Bang for the Buck values and haven't been disappointed.

 

I just watched a 720p stream with XBMC via SMB while a parity check was also running (I just rebuilt a disk and wanted to run a check).  To top it off, the mover kicked in whilst I was watching; there was buffering for a few seconds, but everything carried on as normal.  Parity check speed is currently at 29.6 MB/s.

 

I was not able to do this before and just wanted to say thanks.

 

Note: The results listed below are from when I was using a WD 3TB Green as parity.  I've just upgraded to a 3TB Toshiba 7200rpm, so I will need to rerun the script to see if the values change.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  81.6 MB/s 
   2  |    1536     |     768     |     640     |  81.5 MB/s 
   3  |    1664     |     768     |     768     |  81.6 MB/s 
   4  |    1920     |     896     |     896     |  81.5 MB/s 
   5  |    2176     |    1024     |    1024     |  81.5 MB/s 
   6  |    2560     |    1152     |    1152     |  81.5 MB/s 
   7  |    2816     |    1280     |    1280     |  81.5 MB/s 
   8  |    3072     |    1408     |    1408     |  81.6 MB/s 
   9  |    3328     |    1536     |    1536     |  81.5 MB/s 
  10  |    3584     |    1664     |    1664     |  81.5 MB/s 
  11  |    3968     |    1792     |    1792     |  81.4 MB/s 
  12  |    4224     |    1920     |    1920     |  81.6 MB/s 
  13  |    4480     |    2048     |    2048     |  81.6 MB/s 
  14  |    4736     |    2176     |    2176     |  81.5 MB/s 
  15  |    5120     |    2304     |    2304     |  81.6 MB/s 
  16  |    5376     |    2432     |    2432     |  81.4 MB/s 
  17  |    5632     |    2560     |    2560     |  81.3 MB/s 
  18  |    5888     |    2688     |    2688     |  81.4 MB/s 
  19  |    6144     |    2816     |    2816     |  81.4 MB/s 
  20  |    6528     |    2944     |    2944     |  81.4 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Special Pass ---
--- FULLY AUTOMATIC TEST PASS 1b (Rough - 4 Sample Points @ 3min Duration)---
  21  |    896     |     768     |     128     |  50.9 MB/s 
  22  |    1024     |     768     |     256     |  79.9 MB/s 
  23  |    1280     |     768     |     384     |  81.2 MB/s 
  24  |    1408     |     768     |     512     |  81.4 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  25  |    1288     |     768     |     392     |  80.1 MB/s 
  26  |    1296     |     768     |     400     |  80.0 MB/s 
  27  |    1304     |     768     |     408     |  80.2 MB/s 
  28  |    1312     |     768     |     416     |  80.1 MB/s 
  29  |    1320     |     768     |     424     |  80.0 MB/s 
  30  |    1328     |     768     |     432     |  80.1 MB/s 
  31  |    1336     |     768     |     440     |  80.1 MB/s 
  32  |    1344     |     768     |     448     |  80.1 MB/s 
  33  |    1360     |     768     |     456     |  80.0 MB/s 
  34  |    1368     |     768     |     464     |  80.2 MB/s 
  35  |    1376     |     768     |     472     |  80.2 MB/s 
  36  |    1384     |     768     |     480     |  80.1 MB/s 
  37  |    1392     |     768     |     488     |  80.2 MB/s 
  38  |    1400     |     768     |     496     |  80.2 MB/s 
  39  |    1408     |     768     |     504     |  80.1 MB/s 
  40  |    1416     |     768     |     512     |  80.2 MB/s 

Completed: 2 Hrs 21 Min 34 Sec.

Best Bang for the Buck: Test 1 with a speed of 81.6 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 38MB of RAM on your hardware.


Unthrottled values for your server came from Test 27 with a speed of 80.2 MB/s

     Tunable (md_num_stripes): 1304
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 408

These settings will consume 35MB of RAM on your hardware.
This is -4MB less than your current utilization of 39MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
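
For what it's worth, the chosen values can also be applied to the running array without a reboot (this assumes the stock mdcmd tool and a started array; anything set this way is lost at reboot unless it is also entered in the GUI or saved to disk.cfg):

# Apply the Best Bang for the Buck values from the report above.
mdcmd set md_num_stripes 1408
mdcmd set md_write_limit 768
mdcmd set md_sync_window 512
# Assumption: 'mdcmd status' echoes the md_* tunables, so this confirms they took effect.
mdcmd status | egrep 'md_num_stripes|md_write_limit|md_sync_window'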


Well, after many, many changes (got rid of all 1.5TB drives, upgraded MV8 to a pair of M1015s, removed Areca controller, added Intel 525 SSD cache) and re-running this script a couple of times, I'm finally happy with my parity check speeds again!

 

Duration: 10 hours, 27 minutes, 11 seconds. Average speed: 106.3 MB/sec

 

Previously, my parity check times were around 15 hours.

 

I do regret breaking up my RAID0 parity drive though. It was NOT the root cause of my slow parity checks. I may do that again in the future, but with a 6 Gbps (SATA 3) controller. The Areca I had was only a 3 Gbps (SATA 2) controller.

 

I have no idea if it will make any improvement or not, but I want to try this again with "vm.highmem_is_dirtyable=1" set and unRAID memory limited to 4095MB.
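
For reference, here is roughly what I plan to try (assuming the stock /boot/config/go and /boot/syslinux/syslinux.cfg locations; this is just my plan, not something the tester script does):

# Add to /boot/config/go so it is applied at every boot:
sysctl -w vm.highmem_is_dirtyable=1

# Limit usable RAM by editing the kernel line in /boot/syslinux/syslinux.cfg, e.g.:
#   append initrd=bzroot mem=4095M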

 

 

Here are the results from the Tunables script:

 

unRAID Tunables Tester v2.2 by Pauven

--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---

Test 1   - md_sync_window=512  - Completed in 179.433 seconds = 118.1 MB/s
Test 2   - md_sync_window=640  - Completed in 181.565 seconds = 123.8 MB/s
Test 3   - md_sync_window=768  - Completed in 179.441 seconds = 124.9 MB/s
Test 4   - md_sync_window=896  - Completed in 179.459 seconds = 125.6 MB/s
Test 5   - md_sync_window=1024 - Completed in 179.456 seconds = 125.8 MB/s
Test 6   - md_sync_window=1152 - Completed in 179.438 seconds = 125.8 MB/s
Test 7   - md_sync_window=1280 - Completed in 179.435 seconds = 125.9 MB/s
Test 8   - md_sync_window=1408 - Completed in 179.451 seconds = 125.8 MB/s
Test 9   - md_sync_window=1536 - Completed in 179.448 seconds = 125.8 MB/s
Test 10  - md_sync_window=1664 - Completed in 179.446 seconds = 125.8 MB/s
Test 11  - md_sync_window=1792 - Completed in 179.454 seconds = 125.8 MB/s
Test 12  - md_sync_window=1920 - Completed in 179.469 seconds = 125.8 MB/s
Test 13  - md_sync_window=2048 - Completed in 179.482 seconds = 125.8 MB/s
Test 14  - md_sync_window=2176 - Completed in 179.442 seconds = 125.8 MB/s
Test 15  - md_sync_window=2304 - Completed in 179.448 seconds = 125.8 MB/s
Test 16  - md_sync_window=2432 - Completed in 179.451 seconds = 125.8 MB/s
Test 17  - md_sync_window=2560 - Completed in 179.441 seconds = 125.8 MB/s
Test 18  - md_sync_window=2688 - Completed in 179.436 seconds = 125.8 MB/s
Test 19  - md_sync_window=2816 - Completed in 179.444 seconds = 125.8 MB/s
Test 20  - md_sync_window=2944 - Completed in 179.447 seconds = 125.6 MB/s
--- Targeting Fastest Result of md_sync_window 1280 bytes for Final Pass ---

--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---

Test 21  - md_sync_window=1160 - Completed in 239.569 seconds =  86.2 MB/s
Test 22  - md_sync_window=1168 - Completed in 239.567 seconds =  84.2 MB/s
Test 23  - md_sync_window=1176 - Completed in 239.579 seconds =  86.4 MB/s
Test 24  - md_sync_window=1184 - Completed in 239.627 seconds =  84.0 MB/s
Test 25  - md_sync_window=1192 - Completed in 239.584 seconds =  97.0 MB/s
Test 26  - md_sync_window=1200 - Completed in 239.587 seconds = 125.7 MB/s
Test 27  - md_sync_window=1208 - Completed in 239.598 seconds = 125.7 MB/s
Test 28  - md_sync_window=1216 - Completed in 239.603 seconds = 125.7 MB/s
Test 29  - md_sync_window=1224 - Completed in 239.605 seconds = 125.8 MB/s
Test 30  - md_sync_window=1232 - Completed in 239.593 seconds = 125.8 MB/s
Test 31  - md_sync_window=1240 - Completed in 239.600 seconds = 125.8 MB/s
Test 32  - md_sync_window=1248 - Completed in 239.584 seconds = 125.7 MB/s
Test 33  - md_sync_window=1256 - Completed in 239.595 seconds = 125.8 MB/s
Test 34  - md_sync_window=1264 - Completed in 239.587 seconds = 125.8 MB/s
Test 35  - md_sync_window=1272 - Completed in 239.592 seconds = 125.8 MB/s
Test 36  - md_sync_window=1280 - Completed in 239.595 seconds = 125.8 MB/s

Completed: 2 Hrs 7 Min 53 Sec.

Press ENTER To Continue
Best Bang for the Buck: Test 4 with a speed of 125.6 MB/s

     Tunable (md_num_stripes): 1920
     Tunable (md_write_limit): 896
     Tunable (md_sync_window): 896

These settings will consume 82MB of RAM on your hardware.


Unthrottled values for your server came from Test 29 with a speed of 125.8 MB/s

     Tunable (md_num_stripes): 2720
     Tunable (md_write_limit): 1224
     Tunable (md_sync_window): 1224

These settings will consume 116MB of RAM on your hardware.
This is -27MB less than your current utilization of 143MB.
NOTE: Adding additional drives will increase memory consumption.

 

 

Thanks again, Pauven!!

  • 2 weeks later...

I upgraded my 7200rpm 3TB parity to a 7200rpm 4TB parity drive, removed my 250GB drive altogether, and replaced it with the previous parity drive.

 

This broke the 100 MB/s barrier for my system.  In my original post I forgot to mention that unRAID runs virtualized, with a 4GB RAM reservation and a 16-port LSI controller with an LSI expander hanging off 2 of the ports.  I have a mixture of drives.  Data: (7) 2TB green drives, (7) 3TB green drives, (1) 2TB 7200rpm / Parity: 4TB 7200rpm / Cache: 2TB 7200rpm.

 

I do wish the script ran a test with the default unRAID values first to see the difference.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     | 100.3 MB/s 
   2  |    1536     |     768     |     640     | 108.1 MB/s 
   3  |    1664     |     768     |     768     | 109.4 MB/s 
   4  |    1920     |     896     |     896     | 110.8 MB/s 
   5  |    2176     |    1024     |    1024     | 111.4 MB/s 
   6  |    2560     |    1152     |    1152     | 111.7 MB/s 
   7  |    2816     |    1280     |    1280     | 112.0 MB/s 
   8  |    3072     |    1408     |    1408     | 112.0 MB/s 
   9  |    3328     |    1536     |    1536     | 112.1 MB/s 
  10  |    3584     |    1664     |    1664     | 112.2 MB/s 
  11  |    3968     |    1792     |    1792     | 111.9 MB/s 
  12  |    4224     |    1920     |    1920     | 111.9 MB/s 
  13  |    4480     |    2048     |    2048     | 111.9 MB/s 
  14  |    4736     |    2176     |    2176     | 112.1 MB/s 
  15  |    5120     |    2304     |    2304     | 111.8 MB/s 
  16  |    5376     |    2432     |    2432     | 112.0 MB/s 
  17  |    5632     |    2560     |    2560     | 112.1 MB/s 
  18  |    5888     |    2688     |    2688     | 112.1 MB/s 
  19  |    6144     |    2816     |    2816     | 111.9 MB/s 
  20  |    6528     |    2944     |    2944     | 111.9 MB/s 
--- Targeting Fastest Result of md_sync_window 1664 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    3424     |    1544     |    1544     | 112.1 MB/s 
  22  |    3448     |    1552     |    1552     | 112.0 MB/s 
  23  |    3464     |    1560     |    1560     | 112.2 MB/s 
  24  |    3480     |    1568     |    1568     | 112.1 MB/s 
  25  |    3496     |    1576     |    1576     | 112.2 MB/s 
  26  |    3520     |    1584     |    1584     | 112.2 MB/s 
  27  |    3536     |    1592     |    1592     | 112.0 MB/s 
  28  |    3552     |    1600     |    1600     | 112.1 MB/s 
  29  |    3568     |    1608     |    1608     | 112.1 MB/s 
  30  |    3584     |    1616     |    1616     | 112.1 MB/s 
  31  |    3608     |    1624     |    1624     | 112.1 MB/s 
  32  |    3624     |    1632     |    1632     | 112.1 MB/s 
  33  |    3640     |    1640     |    1640     | 112.1 MB/s 
  34  |    3656     |    1648     |    1648     | 112.0 MB/s 
  35  |    3680     |    1656     |    1656     | 112.0 MB/s 
  36  |    3696     |    1664     |    1664     | 112.0 MB/s 

Completed: 2 Hrs 8 Min 17 Sec.

Best Bang for the Buck: Test 5 with a speed of 111.4 MB/s

     Tunable (md_num_stripes): 2176
     Tunable (md_write_limit): 1024
     Tunable (md_sync_window): 1024

These settings will consume 153MB of RAM on your hardware.


Unthrottled values for your server came from Test 23 with a speed of 112.2 MB/s

     Tunable (md_num_stripes): 3464
     Tunable (md_write_limit): 1560
     Tunable (md_sync_window): 1560

These settings will consume 243MB of RAM on your hardware.
This is 153MB more than your current utilization of 90MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

 

Again, thanks for all the work.  I will run a parity check to see if it's less than 10 hours (which is what I had with the 250GB drive in the array previously).

 

Previous results with the 250GB drive and 3TB parity drive (all else was the same), using v2.0 of the script back then:

 

Tunables Report from  unRAID Tunables Tester v2.0 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  84.4 MB/s 
   2  |    1536     |     768     |     640     |  82.4 MB/s 
   3  |    1664     |     768     |     768     |  84.7 MB/s 
   4  |    1920     |     896     |     896     |  84.9 MB/s 
   5  |    2176     |    1024     |    1024     |  85.0 MB/s 
   6  |    2560     |    1152     |    1152     |  84.8 MB/s 
   7  |    2816     |    1280     |    1280     |  84.9 MB/s 
   8  |    3072     |    1408     |    1408     |  85.0 MB/s 
   9  |    3328     |    1536     |    1536     |  84.9 MB/s 
  10  |    3584     |    1664     |    1664     |  85.0 MB/s 
  11  |    3968     |    1792     |    1792     |  85.0 MB/s 
  12  |    4224     |    1920     |    1920     |  84.4 MB/s 
  13  |    4480     |    2048     |    2048     |  84.9 MB/s 
  14  |    4736     |    2176     |    2176     |  85.0 MB/s 
  15  |    5120     |    2304     |    2304     |  84.8 MB/s 
  16  |    5376     |    2432     |    2432     |  85.0 MB/s 
  17  |    5632     |    2560     |    2560     |  84.9 MB/s 
  18  |    5888     |    2688     |    2688     |  85.0 MB/s 
  19  |    6144     |    2816     |    2816     |  85.0 MB/s 
  20  |    6528     |    2944     |    2944     |  85.0 MB/s 
--- Targeting Fastest Result of md_sync_window 1024 bytes for Medium Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    2008     |     904     |     904     |  84.8 MB/s 
  22  |    2024     |     912     |     912     |  84.8 MB/s 
  23  |    2040     |     920     |     920     |  84.7 MB/s 
  24  |    2056     |     928     |     928     |  84.7 MB/s 
  25  |    2080     |     936     |     936     |  82.9 MB/s 
  26  |    2096     |     944     |     944     |  84.7 MB/s 
  27  |    2112     |     952     |     952     |  84.8 MB/s 
  28  |    2128     |     960     |     960     |  84.3 MB/s 
  29  |    2144     |     968     |     968     |  84.4 MB/s 
  30  |    2168     |     976     |     976     |  84.8 MB/s 
  31  |    2184     |     984     |     984     |  84.2 MB/s 
  32  |    2200     |     992     |     992     |  84.9 MB/s 
  33  |    2216     |    1000     |    1000     |  84.8 MB/s 
  34  |    2240     |    1008     |    1008     |  84.7 MB/s 
  35  |    2256     |    1016     |    1016     |  84.7 MB/s 
  36  |    2272     |    1024     |    1024     |  73.7 MB/s 

Completed: 2 Hrs 8 Min 12 Sec.

Best Bang for the Buck: Test 1 with a speed of 84.4 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 99MB of RAM on your hardware.


Unthrottled values for your server came from Test 32 with a speed of 84.9 MB/s

     Tunable (md_num_stripes): 2200
     Tunable (md_write_limit): 992
     Tunable (md_sync_window): 992

These settings will consume 154MB of RAM on your hardware.
This is 64MB more than your current utilization of 90MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

 

P.S. I always have NCQ enabled.


Well, unfortunately my parity check is longer: "kernel: md: sync done. time=44969sec" = 12.49 hrs.

 

What I noticed was that the last TB of the new parity drive (4TB) took the additional 2 hrs to complete.  My largest data drives are 3TB, so even after all the data drives completed their reads, it still took this additional time (which I previously didn't have, as the parity drive was a 3TB drive).  Guess that's how it goes.  Go big, go long  :)

So taking the last 2 hours out of the equation (as if the parity drive were a 3TB drive) would put the parity check at 10 hrs, which is the same amount of time I had with the 250GB drive installed.  That's surprising, as the script showed a bump of 27 MB/s from the previous Best Bang for the Buck to the new one.

 

I also decided to take a deeper look at my green drives: the 2TB units are 5940 rpm, and the 3TB units are 5700 rpm (all drives are Hitachi).

 

  • 2 weeks later...
  • 3 weeks later...

So I have a question. 

 

I have a diverse mix of drives: 160GB, 320GB, 1TB (all 2.5"), 2TB, 3TB and 4TB (3.5").  Obviously, the slowest drives are the smallest.  So what would be the best policy for tuning my parity check speed?  This utility, though awesome (thanks Pauven!), doesn't even get past my 160GB drives, which is probably why my tuning has resulted in parity checks upwards of 15 hours!  It is certainly true that your speed is limited by the slowest drive, but that only holds while the slowest drive is being checked!

 

My supposition is this: I assume that, barring any extreme circumstances, one should tune one's parity check to the values that are most beneficial for the drive(s) on which the largest(?) portion of the parity check is spent.  But I have six 2TB drives; should I tune for those, or for the four 3TB drives (12 is 12 is 12, so to speak)?  And then, supposing there is actually an answer to that question, is there a better way than trial and error to tune this array?  Can I use options to this script to tune for that?

 

Just throwing it out there.

 

EDIT:  this is actually several questions...

 

TIA,

 

P


Upgraded all my 1.5TB EARS drives to Seagate 4TB, ran the script again.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  78.5 MB/s 
   2  |    1536     |     768     |     640     |  67.5 MB/s 
   3  |    1664     |     768     |     768     |  60.7 MB/s 
   4  |    1920     |     896     |     896     |  67.3 MB/s 
   5  |    2176     |    1024     |    1024     |  64.4 MB/s 
   6  |    2560     |    1152     |    1152     |  60.9 MB/s 
   7  |    2816     |    1280     |    1280     |  62.2 MB/s 
   8  |    3072     |    1408     |    1408     |  56.9 MB/s 
   9  |    3328     |    1536     |    1536     |  63.7 MB/s 
  10  |    3584     |    1664     |    1664     |  65.9 MB/s 
  11  |    3968     |    1792     |    1792     |  61.0 MB/s 
  12  |    4224     |    1920     |    1920     |  61.0 MB/s 
  13  |    4480     |    2048     |    2048     |  65.2 MB/s 
  14  |    4736     |    2176     |    2176     |  69.9 MB/s 
  15  |    5120     |    2304     |    2304     |  64.0 MB/s 
  16  |    5376     |    2432     |    2432     |  64.1 MB/s 
  17  |    5632     |    2560     |    2560     |  63.3 MB/s 
  18  |    5888     |    2688     |    2688     |  66.8 MB/s 
  19  |    6144     |    2816     |    2816     |  63.8 MB/s 
  20  |    6528     |    2944     |    2944     |  62.2 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Special Pass ---
--- FULLY AUTOMATIC TEST PASS 1b (Rough - 4 Sample Points @ 3min Duration)---
  21  |    896     |     768     |     128     |  88.8 MB/s 
  22  |    1024     |     768     |     256     |  81.7 MB/s 
  23  |    1280     |     768     |     384     |  83.1 MB/s 
  24  |    1408     |     768     |     512     |  76.6 MB/s 
--- Targeting Fastest Result of md_sync_window 128 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  25  |    856     |     768     |     8     |  74.4 MB/s 
  26  |    864     |     768     |     16     |  78.8 MB/s 
  27  |    880     |     768     |     24     |  72.0 MB/s 
  28  |    888     |     768     |     32     |  77.1 MB/s 
  29  |    896     |     768     |     40     |  74.2 MB/s 
  30  |    904     |     768     |     48     |  79.1 MB/s 
  31  |    912     |     768     |     56     |  80.9 MB/s 
  32  |    920     |     768     |     64     |  83.5 MB/s 
  33  |    928     |     768     |     72     |  84.9 MB/s 
  34  |    936     |     768     |     80     |  84.7 MB/s 
  35  |    944     |     768     |     88     |  86.0 MB/s 
  36  |    960     |     768     |     96     |  85.1 MB/s 
  37  |    968     |     768     |     104     |  88.5 MB/s 
  38  |    976     |     768     |     112     |  87.0 MB/s 
  39  |    984     |     768     |     120     |  85.9 MB/s 
  40  |    992     |     768     |     128     |  87.3 MB/s 

Completed: 2 Hrs 21 Min 36 Sec.

Best Bang for the Buck: Test 1 with a speed of 78.5 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 55MB of RAM on your hardware.


Unthrottled values for your server came from Test 37 with a speed of 88.5 MB/s

Test 21 had the highest result.  Notably, it had a higher md_sync_window.  Maybe an additional test pass to test differing md_sync_window values on the best setting?

     Tunable (md_num_stripes): 968
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 104

These settings will consume 37MB of RAM on your hardware.
This is -2MB less than your current utilization of 39MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

 

I suspect my 2TB WD drive is still holding me back with speeds hovering around 80 MB/sec but really jumping when it gets past the 2TB mark.

  • 2 weeks later...

First off, extremely good work.  I haven't really looked at this again since the very first page of posts, so I thought I would take time to play with it again today.

 

There is an elegance to your approach that is to be commended.

 

I do however have a question/observation.

 

The speed values reported by your tuning tool seem, for me at least, to have no relation to the values reported by unRAID.

 

I will keep updating this post as I eyeball values:

 

...  34  |    3376     |    1520     |    1520     | 106.6 MB/s
  35  |    3392     |    1528     |    1528     | 106.7 MB/s
  36  |    3408     |    1536     |    1536     | 106.6 MB/s

Completed: 2 Hrs 8 Min 38 Sec.

Best Bang for the Buck: Test 1 with a speed of 105.3 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 49MB of RAM on your hardware.

 

unRAID v5 r3 webgui

Parity-Sync: 1.2 % 48.3 MB/sec
Parity-Sync: 1.4 % 34.7 MB/sec
Parity-Sync: 1.6 % 112.4 MB/sec
Parity-Sync: 2.5% 112.1 MB/sec
Parity-Sync: 3.0% 111.9 MB/sec
Parity-Sync: 23.1% 93.4 MB/sec
Parity-Sync: 31.1% 82.9 MB/sec
Parity-Sync: 31.1% 74.3 MB/sec
Parity-Sync: 42.7% 67.1 MB/sec
Parity-Sync: 48.7% 54.3 MB/sec
Parity-Sync: 57.7%  99.4 MB/sec
Parity-Sync: 78.8%  95.7 MB/sec

 

Update: it just sped up, very odd indeed.  I can reproduce this by cancelling and starting again.

Update 2: numbers > 50% are less meaningful as I have a mix of 2, 3 and 4TB disks; however, for completeness I will keep posting.

 

