Pauven

unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables
Another thing in version 1.1:

The function HighestDiskInArray() is not doing you any good.  Suppose I have only two disks assigned in my array: one is disk1 and the other is disk20.  The way that function works now, it will think that I have 20 disks in my array.

 

If you want to know the number of disks in the array, you can do something as simple as this:

NumberOfDisks=`ls /dev/md[0-9]* | wc -l`

 

I did this on purpose, as my understanding is that if you have two disks, one as disk1, and one as disk20, the md driver will allocate memory for 20 drives.

 

I guess I would need Tom to chime in here and let me know if my understanding is incorrect, but for now I will leave it as coded, as I believe it is reporting the true impact of the setting.

 

On my server I only have 15 drives, but the way I have them spaced out for cooling means I have a drive in slot 23.
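To make the distinction concrete, here is a minimal Bash sketch (the helper names are hypothetical, not from either script) showing how the two interpretations differ when disks sit in slots 1 and 20:

```shell
# Two ways to summarize an array from its md device names (hypothetical
# helpers, not the utility's code). With disks in slots 1 and 20, the
# first reports 20, the second reports 2.
highest_slot() {
  printf '%s\n' "$@" | sed 's/.*md//' | sort -n | tail -n 1
}
disk_count() {
  printf '%s\n' "$@" | wc -l
}

highest_slot /dev/md1 /dev/md20   # -> 20
disk_count  /dev/md1 /dev/md20    # -> 2
```

If the md driver really does allocate memory based on the highest assigned slot, the first number is the one that matters for the memory calculation.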


That may actually have made the quality of the results even worse: you are waiting inside a very tight loop, which now consumes even more CPU resources that could otherwise be spent on the actual parity calculations.  This approach is wrong.  Your test-ending criterion should be time elapsed, instead of megabytes passed.  That way you won't need to cycle in a loop.  You can just sleep for a predefined amount of time, and then base the calculations on how many megabytes of work were done during that time period.  We want to minimize the effect of the observation influencing what is being observed.
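The time-based approach described above can be sketched in a few lines of Bash.  This is a sketch only, not either script's actual code; it assumes `mdcmd status` reports a `mdResyncPos` field measured in 1K blocks, and the helper name is made up:

```shell
# Time-based sampling sketch: instead of looping until N megabytes have
# passed, sleep a fixed interval and compute MB/s from how far the resync
# position advanced. Positions are assumed to be in 1K blocks.
calc_speed() {   # calc_speed POS_BEFORE POS_AFTER SECONDS -> whole MB/s
  echo $(( ($2 - $1) / 1024 / $3 ))
}

# On a live unRAID server it might be used like this (sketch only):
#   pos1=$(mdcmd status | grep mdResyncPos | sed 's/.*=//')
#   sleep 180
#   pos2=$(mdcmd status | grep mdResyncPos | sed 's/.*=//')
#   echo "$(calc_speed "$pos1" "$pos2" 180) MB/s"
```

The key point is that nothing runs between the two position reads except a single sleep, so the measurement itself adds essentially no CPU load.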

 

Another problem with your loop is that you can't possibly achieve a precision of 1 millisecond in a Bash loop that is shelling out to six external commands (sleep, mdcmd, grep, sed, date, awk) inside the loop.  Such a thing is very time-consuming, not to mention that one of your external commands is another Bash script.  I timed the meat inside your loop on my toy server, and it took about 350 milliseconds while a parity check was running.  So the accuracy is not really 1000x higher; it's nowhere near that.  But discussion of the loop precision is moot, as you should just get rid of that loop altogether, for the reasons in the previous paragraph.

 

You make some very good points, but I think your critique of the current method is overly harsh.  The current method works, and works fantastically well at that.  Your proposed changes make for better code, but may not actually make for better results.

 

You're right about the precision not being 1000x higher, I think I am seeing about 250ms processing time on my server.  I could probably remove the sleep statement altogether as I don't think it is accomplishing anything anymore in the current loop.

 

All said, I think I will implement your suggestions for the next version.  Thanks for sharing your feedback.

 

-Paul

 


It seems very unreasonable that the memory unRAID uses would depend on the way you assign the drives to the slots.  But yes, only Tom can clear that up.  Don't rely on him seeing this thread; just email him with that simple question.

 

Tom's busy with more pressing matters at the moment.  We all need 5.0 to go gold.

 

I got my understanding of the memory allocation from this post by mikejp, which reportedly sourced information that Tom provided:  http://lime-technology.com/forum/index.php?topic=15224.msg142767#msg142767

 

I can't imagine someone would write "highest disk number in array" by accident instead of simply writing "disks in array", but once Tom surfaces again and things calm down, I can ping him.


Thanks for this, currently running it.  Also thanks to you and Patilan for not letting your conversation devolve into a petty argument.

 

Sent from a phone, sorry for any typos

 

 


instead of harping on an inefficient loop, just run the program and enjoy the free fruits of my labor.

No need to get so cross, I was just trying to help.

 

I am attaching here the script that I used to test my tunables.  It does basically the same thing, but in only 60 lines of code.  See if you like it, and cannibalize it for your purposes any way you see fit.

 

Enjoy the free fruits of my labor. :)

Your script makes the webGUI unreachable and forces unRAID to start a parity check... I would advise NOT using it.

Pauven's script can be run without any weird effects.


You sound very irritated and stressed out. Take a breath or two and try to react normally.

FYI, until I upgraded to rc16c a few days ago, my server ran 24/7 for over 130 days without any problems.


Never mind, not interested anymore. My extremely unstable and soon-to-be-crashing server, which is running all sorts of weird shit, just can't handle it.


Thanks a lot Pauven

 

Really like the idea of this script, and very excited to eke every last bit of transfer rate out of the box.

 

Just finished an upgrade to 5RC16 and running the FULLAUTO test now. Getting addicted to checking the terminal for the new biggest number  :P

 

I have sort of a general question: what would cause these values to need to be changed?  What I am trying to get at is, with normal use (not major hardware changes), such as adds, deletes, drive upgrades, and replacements, would there be a need to find a new optimal value?  Like maybe run this once every x months?  After a hard drive change?


Hey Jonathan,

 

I know what you mean about addictively watching the console, I'm a number watcher just like you!

 

I think you would have to make a major change: adding a new HD controller, or possibly new, faster HDs (like adding a couple of 4TB 7200 RPM drives to your mix of 2TB 5400 RPM drives).  Basically, any kind of change that would directly affect throughput rates, as these memory parameters can be considered a throttle on throughput.  Parameters that work well for 100MB/s drives could be too low to allow 140MB/s and 180MB/s drives to reach their full potential.

 

It's also possible that simply adding drives is enough to warrant a change in parameters, but since memory is allocated on a per-drive basis, adding more drives automatically allocates more memory, so I don't think that is likely.

 

But in general, once you've done this once, I think it would be extremely rare to ever need to do it again.

 

As an example, I had manually tested and chosen 1024 bytes for my md_sync_window a few months back.  This dropped my Parity Check from around 11 hours to 7:40.  Using this utility, I found that 2600 bytes is the threshold of unthrottled performance on my hardware, and it gave me an extra 3.5MB/s on the top end.  But this only dropped my Parity Check by 5 minutes, to 7:35.  The moral of the story is that, after you've tuned once and replaced the unRAID stock values, retuning again will probably show no real-world benefit.

 

By the way, I programmed version 2.0 of the utility over the weekend, and plan to post it today.  The most notable change is that I got the FULLAUTO routine down to 2.1 hours.

 

-Paul


By the way, I programmed version 2.0 of the utility over the weekend, and plan to post it today.  The most notable change is that I got the FULLAUTO routine down to 2.1 hours.

 

-Paul

 

Awesome!  I had planned on running this last night, but was too tired to come upstairs to kick it off. I will run it tonight after the new version is posted.


Thanks for the info Pauven, makes sense that the numbers would be mostly affected by major hardware changes.

 

 

By the way, I programmed version 2.0 of the utility over the weekend, and plan to post it today.  The most notable change is that I got the FULLAUTO routine down to 2.1 hours.

 

-Paul

 

What what what?? Down from 1.5x Parity time to only 2.1 hours? For me parity is about 8 hours so with version 2 I'd save about 10 hours...  :o

 

Any loss in samples taken or just a much more efficient routine?


Basically just a much more efficient routine.

 

I eliminated pass 3 altogether.  I originally suspected that there might be one single value that hit the sweet spot, so to speak, and pass 3 was trying to find that single value.  After running way too many tests for my server's own good, I finally came to the conclusion that performance gradually increases to a plateau, levels off, and sometimes takes a dive after that plateau.

 

So my new routine is all about finding the leading edge of that plateau with a simplified search pattern.  The first pass gets me in the neighborhood, and the second pass finds it down to the nearest increment of 8.

 

Fewer total samples are being taken, but many of the samples are longer, more accurate samples.  I have found that shorter samples lower the sample quality, so I think 2.1 hours is about as short as I want to make the FULLAUTO routine while still delivering any promise of accuracy.


Basically just a much more efficient routine.

 

I eliminated pass 3 altogether.  I originally suspected that there might be one single value that hit the sweet spot, so to speak, and pass 3 was trying to find that single value.  After running way too many tests for my server's own good, I finally came to the conclusion that performance gradually increases to a plateau, levels off, and sometimes takes a dive after that plateau.

 

So my new routine is all about finding the leading edge of that plateau with a simplified search pattern.  The first pass gets me in the neighborhood, and the second pass finds it down to the nearest increment of 8.

 

Fewer total samples are being taken, but many of the samples are longer, more accurate samples.  I have found that shorter samples lower the sample quality, so I think 2.1 hours is about as short as I want to make the FULLAUTO routine while still delivering any promise of accuracy.

 

have you looked into "sysctl vm.highmem_is_dirtyable=1" ?

I know that if I enable this command I do see higher speeds with stock nn values.

 

Read this post about it: http://lime-technology.com/forum/index.php?topic=25431.msg240538#msg240538

 

I wonder if once things are tuned with your script how that command would affect it?


Hey zoggy,

 

No, I've not looked into that before.

 

My hunch is that the md_* values are the proper way to tune the md subsystem for proper performance, and that allocating additional highmem for caching was somewhat masking the issue.

 

It's certainly possible that both forms of tuning may work wonders when combined.  Though keep in mind nothing will ever be faster than the transfer speed of your drives, and I feel that I am already achieving that level of performance from this tuning utility alone.  Since I don't have any experience with the highmem_is_dirtyable parameter, perhaps you would be willing to test and report back on your experiences after tuning the md_* values?

 

-Paul


have you looked into "sysctl vm.highmem_is_dirtyable=1" ?

I know that if I enable this command I do see higher speeds with stock nn values.

 

This works great for regular writes to the filesystems.

I'm not sure it comes into play for the parity check. I believe the md buffers are in low memory.

At least that was my experience when tuning them years ago.


Thanks again Pauven.

 

Testing out the new version 2.0 now.

 

The countdown timer in v2 looks slicker than the progression number of version 1.

 

Now I know how long before I need to look at the terminal for the next value to appear  ;D


fyi, in 2.0 your fullauto states at the command prompt that it's going to take 1.5x the length of a full parity check. if you continue, the next screen mentions that it's going to take 2.1 hours.

 

also, shouldn't a user do a non-correcting parity check BEFORE any test is done, to make sure everything is good before continuing?


As Jonathan just alluded to, I have updated the utility to version 2.0 (see the main post) and I highly recommend upgrading to this new version.

 

New Features in v2.0:

  • As Patilan kindly pointed out, the original utility was causing high CPU loading due to a program loop that was constantly measuring the progress of the test by polling mdcmd status.  Not only was this code inefficient, it had the potential to influence the results due to high CPU usage.  As Patilan recommended, I changed the code to instead wait a predefined period of time, then measure how much data was processed during the test.  This has, as expected, lowered CPU utilization to normal levels.  It also appears, to my eyes at least, to have slightly improved the overall results.
  • Even though I made the above change, I still have a loop to update the GUI with a nice countdown timer.  Hopefully there are no concerns this time with the CPU load incurred by simply decrementing a timer variable and echoing a status update.  But if there are concerns, Patilan has made his lightweight version available, which makes a nice alternative to this utility.
  • Probably more important than the algorithm change, I have further refined the FULLAUTO routine, and it now completes in 2.1 hours, hopefully with as much accuracy as before.  I now recommend this option for everyone.
  • I updated the menu prompts to reflect the new time based options.
  • Also noteworthy, I added a new 'Best Bang for the Buck' recommendation, which basically reports the values that will deliver 'good enough' performance while minimizing memory use.  I added this because I found that more than doubling my md_sync_window value from 1024 to 2600 only improved my Parity Check performance by 1%.  Unless you care about extracting every last iota of performance, I recommend using the new Best Bang for the Buck values.  That goes doubly so if you use a lot of 3rd party add-ons that want their share of memory too.
  • Running the utility will now automatically cancel a Parity Check (or data rebuild) if one is in progress.  I give a warning to this effect on the first menu, so you do have a chance to cancel the utility instead of auto-cancelling the check/rebuild.  I figured if you are doing those types of tasks, you shouldn't be playing with tools like this utility, and any running Parity Check was more likely a result of an aborted previous test run.
  • I also added a check to make sure the array is started. You know, since it's required and all.
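For the curious, the array-state and auto-cancel checks might look roughly like the sketch below.  This is not the utility's actual code, and the `mdState`/`mdResync` field names are assumptions about what `mdcmd status` reports; the helper name is made up:

```shell
# Hypothetical helper: pull one NAME=value field out of mdcmd status output.
md_field() {   # usage: mdcmd status | md_field NAME
  grep -o "$1=[^ ]*" | head -n 1 | cut -d= -f2
}

# Sketch of the two safety checks (commented out; needs a live unRAID server):
#   state=$(mdcmd status | md_field mdState)
#   [ "$state" = "STARTED" ] || { echo "Array must be started."; exit 1; }
#
#   resync=$(mdcmd status | md_field mdResync)
#   [ "$resync" -gt 0 ] && mdcmd nocheck   # cancel a running check/rebuild
```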

 

You can download the file from post #1:  http://lime-technology.com/forum/index.php?topic=29009.msg259087#msg259087

 

Some Test Notes:

I have been using the utility almost constantly over the past week, primarily to test my code changes.  From all my testing, I've noticed a few details worth sharing.

 

Primarily, shorter tests seem to have a lot of variability to them, and I think some of this may be due to caching and burst transfers.  Results start stabilizing around the 1-2 minute mark, and 3 minutes provides nice, consistent results.  I almost released the new FULLAUTO routine with a 2-min first pass, as it produced identical answers to the 3-min first-pass version (at least on my server).  But it appeared to me that the results hadn't fully settled down, and I didn't think saving 20 minutes was worth the increase in variability.

 

Very small improvements are to be had for going beyond an md_sync_window of 1024, at least on my server with 3TB 5400RPM drives, though I theorize that faster 4TB drives and 7200RPM drives might drive that number up a bit.

 

If you are running a mixed assortment of drives, your overall Parity Check performance is limited by your slowest drive.  This utility will correctly report lower md_* values for your array in this situation.  If you upgrade your drives later and replace your slow drives with fast drives, it's probably worth running this utility again as the recommendation has probably changed.

 

The faster numbers reported by higher md_sync_window values are only realizable at the beginning of a Parity Check, not the entirety of it.  For example, my drives max out at about 139MB/s, and I had to set my md_sync_window up to 2600 to hit it.  But by 10% in the Parity Check, drive performance had already naturally dropped below 136MB/s, the same level of performance provided with an md_sync_window of 1024 on my server.  So more than doubling the memory allocated to mdcmd only improved performance a small percentage, and even then for only a small portion of the drive.  This is why I added the Best Bang for the Buck recommendation.

 

The exception to the above statement is if you're still using the unRAID stock value of 384, which on my server limited performance to about 60 MB/s, low enough to throttle the entirety of the Parity Check.  If your server is being throttled by the unRAID stock values, any increase at all will pay huge dividends.

 

-Paul

 

 


fyi, in 2.0 your fullauto states at the command prompt that it's going to take 1.5x the length of a full parity check. if you continue, the next screen mentions that it's going to take 2.1 hours.

 

Good catch zoggy, thanks!  I don't think that's worthy of dropping a new version, so I'll address that in the next version (assuming there is one).  2.1 hours is the correct length.

 

also, shouldn't a user do a non-correcting parity check BEFORE any test is done, to make sure everything is good before continuing?

 

Hmmm... probably not necessary.  After all, this utility is merely running non-correcting parity checks.  I don't think running a non-correcting parity check is required before running a bunch of partial non-correcting parity checks.

 

In general, you should be running regular, correcting type parity checks.  But if it puts your mind at ease to run an extra one first (correcting or non-correcting), then by all means.  If your regular parity checks are finding errors (corrected or not), then you've got issues you need to address before using this tool.

 

Also important is drive health.  If your SMART reports are showing issues or hints of a drive on the way out, or if any drive balls are not green, then don't play with this tool.  Get your issues fixed first.
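If you have smartctl available (it comes from the smartmontools package), a quick pre-flight sweep might look like this sketch.  The device names and the helper are placeholders, not part of the utility:

```shell
# Hypothetical helper: check whether a `smartctl -H` report (read on stdin)
# shows an overall-health result of PASSED.
health_ok() {
  grep -q 'PASSED'
}

# Example sweep (sketch only; adjust the device list for your server):
#   for dev in /dev/sd[a-z]; do
#     smartctl -H "$dev" | health_ok || echo "Check $dev before running the tester"
#   done
```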

 

-Paul


just started a full auto test.. seems like you could use some intelligence here to see the trend and exit out early, rather than wasting time watching the speeds diminish.

 

Since PASS 1 (3min duration with 20 points = 1hr), I could have saved 45mins (if stopping after the 5th test) and just gone on to the next part.

 

Note that I did not have the stock tunables set.. do you ever record what the user had initially?

 

I made a copy of my unraid drive before doing this test just in case. Here were my values before running this test.

boot/config/disk.cfg

md_num_stripes="2560"

md_write_limit="1536"

md_sync_window="1024"

 

Grabbed the output from TunablesReport.txt, can see that the 2nd pass appears to be going the same way.

Tunables Report from  unRAID Tunables Tester v2.0 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  88.0 MB/s 
   2  |    1536     |     768     |     640     |  87.8 MB/s 
   3  |    1664     |     768     |     768     |  87.4 MB/s 
   4  |    1920     |     896     |     896     |  87.0 MB/s 
   5  |    2176     |    1024     |    1024     |  87.2 MB/s 
   6  |    2560     |    1152     |    1152     |  86.8 MB/s 
   7  |    2816     |    1280     |    1280     |  86.6 MB/s 
   8  |    3072     |    1408     |    1408     |  86.2 MB/s 
   9  |    3328     |    1536     |    1536     |  86.0 MB/s 
  10  |    3584     |    1664     |    1664     |  85.7 MB/s 
  11  |    3968     |    1792     |    1792     |  85.7 MB/s 
  12  |    4224     |    1920     |    1920     |  86.1 MB/s 
  13  |    4480     |    2048     |    2048     |  86.2 MB/s 
  14  |    4736     |    2176     |    2176     |  85.7 MB/s 
  15  |    5120     |    2304     |    2304     |  85.3 MB/s 
  16  |    5376     |    2432     |    2432     |  85.3 MB/s 
  17  |    5632     |    2560     |    2560     |  85.1 MB/s 
  18  |    5888     |    2688     |    2688     |  85.1 MB/s 
  19  |    6144     |    2816     |    2816     |  84.8 MB/s 
  20  |    6528     |    2944     |    2944     |  84.8 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Medium Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    1288     |     768     |     392     |  84.9 MB/s 
  22  |    1296     |     768     |     400     |  84.8 MB/s 
  23  |    1304     |     768     |     408     |  84.7 MB/s 
  24  |    1312     |     768     |     416     |  84.7 MB/s 
  25  |    1320     |     768     |     424     |  84.4 MB/s 
  26  |    1328     |     768     |     432     |  84.7 MB/s 
  27  |    1336     |     768     |     440     |  84.7 MB/s 
  28  |    1344     |     768     |     448     |  84.4 MB/s 
  29  |    1360     |     768     |     456     |  84.6 MB/s 
  30  |    1368     |     768     |     464     |  84.7 MB/s 
  31  |    1376     |     768     |     472     |  84.3 MB/s 
  32  |    1384     |     768     |     480     |  84.5 MB/s 
  33  |    1392     |     768     |     488     |  84.7 MB/s 
  34  |    1400     |     768     |     496     |  84.6 MB/s 
  35  |    1408     |     768     |     504     |  84.5 MB/s 
  36  |    1416     |     768     |     512     |  84.7 MB/s 

Completed: 2 Hrs 8 Min 16 Sec.

Best Bang for the Buck: Test 1 with a speed of 88.0 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 77MB of RAM on your hardware.


Unthrottled values for your server came from Test 21 with a speed of 84.9 MB/s

     Tunable (md_num_stripes): 1288
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 392

These settings will consume 70MB of RAM on your hardware.
This is -70MB less than your current utilization of 140MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

 

Pauven, if you wanna chat you can jump on the unraid irc channel.

#unraid on irc.freenode.net

If you dont have a irc client, you can connect via the web: http://webchat.freenode.net/


Just finished a run with v2.0.  I rebooted into "Safe Mode" and ran the utility from the console.  I was also running top in another window, and I never saw the CPU go over 3%.

 

I'm thinking I need to get rid of the 1.5TB Seagates to pick up any more speed.

 

Thanks Paul!

 

Tunables Report from  unRAID Tunables Tester v2.0 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  67.5 MB/s 
   2  |    1536     |     768     |     640     |  74.4 MB/s 
   3  |    1664     |     768     |     768     |  75.5 MB/s 
   4  |    1920     |     896     |     896     |  69.8 MB/s 
   5  |    2176     |    1024     |    1024     |  77.2 MB/s 
   6  |    2560     |    1152     |    1152     |  72.9 MB/s 
   7  |    2816     |    1280     |    1280     |  78.7 MB/s 
   8  |    3072     |    1408     |    1408     |  75.2 MB/s 
   9  |    3328     |    1536     |    1536     |  75.6 MB/s 
  10  |    3584     |    1664     |    1664     |  79.2 MB/s 
  11  |    3968     |    1792     |    1792     |  74.8 MB/s 
  12  |    4224     |    1920     |    1920     |  79.7 MB/s 
  13  |    4480     |    2048     |    2048     |  75.3 MB/s 
  14  |    4736     |    2176     |    2176     |  79.5 MB/s 
  15  |    5120     |    2304     |    2304     |  78.9 MB/s 
  16  |    5376     |    2432     |    2432     |  75.7 MB/s 
  17  |    5632     |    2560     |    2560     |  80.8 MB/s 
  18  |    5888     |    2688     |    2688     |  76.0 MB/s 
  19  |    6144     |    2816     |    2816     |  79.5 MB/s 
  20  |    6528     |    2944     |    2944     |  79.7 MB/s 
--- Targeting Fastest Result of md_sync_window 2560 bytes for Medium Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    5416     |    2440     |    2440     |  76.5 MB/s 
  22  |    5440     |    2448     |    2448     |  79.3 MB/s 
  23  |    5456     |    2456     |    2456     |  78.6 MB/s 
  24  |    5472     |    2464     |    2464     |  79.5 MB/s 
  25  |    5488     |    2472     |    2472     |  79.3 MB/s 
  26  |    5504     |    2480     |    2480     |  79.5 MB/s 
  27  |    5528     |    2488     |    2488     |  79.2 MB/s 
  28  |    5544     |    2496     |    2496     |  79.5 MB/s 
  29  |    5560     |    2504     |    2504     |  79.2 MB/s 
  30  |    5576     |    2512     |    2512     |  80.8 MB/s 
  31  |    5600     |    2520     |    2520     |  79.5 MB/s 
  32  |    5616     |    2528     |    2528     |  79.6 MB/s 
  33  |    5632     |    2536     |    2536     |  80.6 MB/s 
  34  |    5648     |    2544     |    2544     |  81.5 MB/s 
  35  |    5664     |    2552     |    2552     |  80.7 MB/s 
  36  |    5688     |    2560     |    2560     |  79.5 MB/s 

Completed: 2 Hrs 10 Min 56 Sec.

Best Bang for the Buck: Test 3 with a speed of 75.5 MB/s

     Tunable (md_num_stripes): 1664
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 768

These settings will consume 71MB of RAM on your hardware.


Unthrottled values for your server came from Test 34 with a speed of 81.5 MB/s

     Tunable (md_num_stripes): 5648
     Tunable (md_write_limit): 2544
     Tunable (md_sync_window): 2544

These settings will consume 242MB of RAM on your hardware.
This is 99MB more than your current utilization of 143MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.


Just finished a run with v2.0.  I rebooted into "Safe Mode" and ran the utility from the console.  I was also running top in another window, and I never saw the CPU go over 3%.

 

I'm thinking I need to get rid of the 1.5TB Seagates to pick up any more speed.

 

Thanks Paul!

 

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  67.5 MB/s 
   2  |    1536     |     768     |     640     |  74.4 MB/s 
   3  |    1664     |     768     |     768     |  75.5 MB/s 
   4  |    1920     |     896     |     896     |  69.8 MB/s 
   5  |    2176     |    1024     |    1024     |  77.2 MB/s 
   6  |    2560     |    1152     |    1152     |  72.9 MB/s 
   7  |    2816     |    1280     |    1280     |  78.7 MB/s 

 

Thanks for posting your results Steven.  Test result #5, at 77.2 MB/s probably would be your 'Best Bang for the Buck' value, but test result #4 tripped up my logic which stops checking after the first value that doesn't show enough improvement.  I'll have to think about a way to address that scenario.

 

I find your results intriguing, as they seem cyclical in nature, with recurrent spikes at regular intervals.  This is something I expected to occur, but was not present on my server.  I think you would be well served to run another test with more granularity, a bit more similar to the previous FULLAUTO routine.

 

If you have the time and are willing, run a (N)ormal length test with a 32-byte interval - option (3).  Start it from 512, 640, or 768, and let it run until end pos 2944.  I think this might provide more clarity on how your hardware is responding to these values.  This test will take about 2.5 hours.

 

-Paul


just started a full auto test.. seems like you could just use some intelligence here to see the trend and just exit out early rather than waste the time to see the speeds diminished.

Your results are very interesting.  I had wondered if something like this might turn up.  If you look at StevenD's results, you'll see the problem with trying to implement logic that determines further tests are not necessary, as sometimes the values jump back up.  Are you still running the highmem settings?

 

If you don't like the way the tests are proceeding, you can always abort with CTRL-C.  You can cancel any running Parity Check manually, or by rerunning the utility.

 

Note that I did not have the stock tunables set.. do you ever record what the user had initially?

I have not found a way to read the currently running values from memory, but I do read what is stored in the config file, and this gets reported after the test is complete. 

 

I also don't actually change what is in the config file during the tests, but I do give you the option to update the configuration file after the test is complete.  As long as you don't type 'SAVE' to that prompt, it won't permanently change anything.
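For anyone who wants to eyeball their stored values directly, reading them back out of the flash config is a one-liner per key.  This is a sketch (the helper name is made up; the path is unRAID's standard disk.cfg location, as shown in zoggy's post above):

```shell
# Hypothetical helper: read one stored tunable from an unRAID-style config
# file, stripping the surrounding quotes (lines look like: md_sync_window="1024").
read_tunable() {   # usage: read_tunable KEY FILE
  grep "^$1=" "$2" | sed 's/.*="\(.*\)"/\1/'
}

# Sketch usage on a live server:
#   read_tunable md_sync_window /boot/config/disk.cfg
```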

 

Pauven, if you wanna chat you can jump on the unraid irc channel.

 

I'll jump on in a few.

 

-Paul

