
unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables

1037 posts in this topic Last Reply


Just now, garycase said:

I rarely get to the Atlanta area, but next time I'm there I wanna play !! :D

 

For you, definitely.


Just a small question:

When I start the script, I get this error:

 

/usr/local/sbin/mdcmd: line 11: echo: write error: Invalid argument

 

I have installed the script on my flash drive as suggested.

Thanks for your help ^_^

 

EDIT: After finishing the test I got a parameter which is not present in v6.3.5: md_write_limit - it seems it's an old parameter from a previous version.

Edited by Zonediver


Did you download the modified script so it works under 6.2+?

 

14 minutes ago, Zonediver said:
14 minutes ago, Zonediver said:

I use Version 2.2 - is there a new one?

Not yet.  But the stock script doesn't work on 6.2+.  There's a patched version in the linked post above.

Just now, Squid said:

Not yet.  But the stock script doesn't work on 6.2+.  There's a patched version in the linked post above.

 

Ok thanks - then I will download the modified version and see what happens xD


@Pauven I hope all is going well with you. Any thoughts on releasing the last version you had under private release to the public, so the rest of us can benefit from it?

 

On 03/08/2017 at 5:16 PM, Zonediver said:

Just a small question:

When I start the script, I get this error:

 

/usr/local/sbin/mdcmd: line 11: echo: write error: Invalid argument

 

I have installed the script on my flash drive as suggested.

Thanks for your help ^_^

 

EDIT: After finishing the test I got a parameter which is not present in v6.3.5: md_write_limit - it seems it's an old parameter from a previous version.

I get this error as well:

/root/mdcmd: line 11: echo: write error: Invalid argument

after every test.  Should I be worried or is this normal?

 

Thanks


There's a revised script posted that removes a couple of the lines to fix this.

On 8/24/2016 at 6:34 PM, Pauven said:

I've selected the beta testers.  Apologies to those who weren't selected; the list below is a little bigger than I wanted to go, but with so many good choices I had a hard time excluding anyone.

 

 

  • ljm42 (because you have a nice boring server ;-) with drives on the motherboard)
     
  • StevenD (because you have 13 identical drives on a dual Xeon box with tons of memory, dual M1015's, and running ESXi)
     
  • jack0w (because you have single parity and both a HBA and an extender)
     
  • Squid (because you have AMD cpus, a SAS2LP, and a nice mix of drives)
     
  • scottc (because you have AMD CPU's, a 9211-8i and a H310, and a nice mix of drives)
     
  • johnnie.black (because of all your servers, you're like 7 beta testers in one!!!)

 

 

I'm thinking the above will provide a nice cross section of hardware.

 

I'll send PM's to each of you with download instructions later today.  I'll provide testing guidelines in a post at that time.

 

If you weren't included in the beta, please be patient.  I believe we're very close to a public release.

 

Thanks,

Paul

 

Dear Paul

 

I am considering using unRAID for a large archive (write once, read rarely) system, typically 15-25 drives per server. I already clarified with johnnie.black the failure management for any kind of failure on an unRAID+btrfs system with single and dual parity. I am now considering the power and thermal aspects, and since you mentioned you worked on the Staggered Spin-up (SSU) matter, I dare to contact you directly:

The core idea with SSU is to spread the peak load of drive spin-up over time, and this can basically be driven by the drives and/or the OS.

The normal use of unRAID within an archive system is to deal with one drive at a time (plus the parity drive(s) during writes).

    Question 1 : How does unRAID correctly handle this at system start-up and array mount: do all drives remain off until a read or write request is sent, or do all the drives have to be spun up at array mount? What are the relevant parameters for unRAID? For each drive?

The worst-case scenario for SSU is then a parity check or a drive failure:

      Question 2 : How does unRAID deal with SSU during a parity check, with all drives in operation?

      Question 3 : in the case of single parity:

         - with a parity drive failure: please confirm only the data drive is then spun up on a read/write request

         - with a data drive failure: please confirm all data and parity drives are spun up, observing SSU, on a read/write request

      Question 4 : in the case of dual parity:

         - with one parity drive failure: please confirm only the data drive (plus the remaining parity drive for writes) is then spun up on a read/write request

         - with a single data drive failure: please confirm all data and parity drives are spun up, observing SSU, on a read/write request

Your detailed experience is much appreciated.

vinski

On 9/20/2017 at 6:39 PM, BRiT said:

@Pauven I hope all is going well with you. Any thoughts on releasing the last version you had under private release to the public, so the rest of us can benefit from it?

 

 

It's 8 months later and the same post applies...


Hey gang, sorry I've been absent.

 

Luckily I've been healthy, so no concerns there.

 

Instead, I've been sidetracked by work and other projects.  My biggest sidetrack has been a new program I wrote to replace the old My Movies Media Center plugin.  That may be of some interest to unRAID users like me who store their large movie collections on their server.  If you're interested, you can check it out here:  MM Browser

 

MM Browser was supposed to be a quick little 2-4 week programming project, just for myself, but then I went crazy and decided to sell it online, which required a ton more programming and a website.  User support has been much more time consuming than I ever fathomed.  I've easily spent 6 months full time on MM Browser.

 

MM Browser pulled me away from my other project, the Chameleon Pinball Engine.  I had planned to have it ready for the next big Southern Fried Gameroom Expo here in Atlanta.  Somehow the time slipped away.  The show is in 4 weeks, and I'm realizing that there's too much remaining work to be ready in time for the show.  That's a big disappointment for me.

 

I've also got a small enterprise software suite that I've worked on for the past decade, and I'm currently working on my first big sale.  Trying to sell enterprise software to, uhm, big enterprises has been eye-opening to say the least.  So many hurdles, and I'm spending more time doing documentation than anything else.  Right now this is my biggest priority.

 

Plus I've got a full time consulting gig at the moment.

 

Long story short, I just haven't had a moment to spare.

 

I would release the private beta, but to be honest it just didn't work well, so that version was scrapped.

 

I have documented plans for a new version that hopefully would fix the problems of the private beta.  Every so often I think about trying to knock it out, and I've come close to working on it a few times, but it just fell too low on my priority list.  There's always a chance I may get to it soon, but I can't make any promises.

 

I know that this isn't the answer anyone was looking for.  Sorry.

 

If anyone else wants to run with it, please feel free.  You have my blessing.

 

Paul

 

Edited by Pauven


I've been trying to figure out the performance issues I've been having with unRAID since I started using it. Running a cross-flashed IBM M1015 with 8 array drives, single parity, and a single drive cache. The parity and cache drives are attached to my motherboard's 6Gb connectors. I'm running the latest version of unRAID and used the modified 2.2 script.

 

The problem I've been having is that during times of high disk usage, like copying files between drives, Docker application and VM performance drops significantly, to the point where they become unresponsive until the disk operation is finished.

 

Would it be normal to run the standard 2-minute test from 384 to 2944 without any discernible difference in resulting performance? Apart from a few low outliers, it reported a pretty consistent 116.7-116.9 MB/s.


Is this script no longer maintained?

 

Quote

Test 9   - md_sync_window=1408 - Test Range Entered - Time Remaining: 1s ./unraid-tunables-tester.sh.v2_2.sh: line 425: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 429: /root/mdcmd: No such file or directory
Test 9   - md_sync_window=1408 - Completed in 4.011 seconds   =   0.0 MB/s
./unraid-tunables-tester.sh.v2_2.sh: line 388: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 389: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 390: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 394: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 397: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 400: [: : integer expression expected
Test 10   - md_sync_window=1536 - Test Range Entered - Time Remaining: 1s ./unraid-tunables-tester.sh.v2_2.sh: line 425: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 429: /root/mdcmd: No such file or directory
Test 10  - md_sync_window=1536 - Completed in 4.011 seconds   =   0.0 MB/s
./unraid-tunables-tester.sh.v2_2.sh: line 388: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 389: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 390: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 394: /root/mdcmd: No such file or directory
./unraid-tunables-tester.sh.v2_2.sh: line 397: /root/mdcmd: No such file or directory

 

On 8/28/2016 at 10:44 AM, Pauven said:

I'm not 100% sure how v6.2 behaves now that md_write_limit is gone, but here is what we uncovered with v5 - hopefully this will give you some ideas for testing v6.2:

 

md_num_stripes is the total # of stripes available for reading/writing/syncing.  It must always remain the highest number.

 

md_sync_window is the total # of stripes that parity syncs are limited to.  While parity checks are underway, md_num_stripes-md_sync_window = the # of remaining stripes available to handle read/write requests.  So for example, if md_num_stripes=5000 and md_sync_window=1000, then during a parity check there are 4000 stripes remaining to handle reads/writes.

 

md_write_limit is the total # of stripes that array writes were limited to on v5.x.  While writing is underway, md_num_stripes-md_write_limit = the # of remaining stripes available to handle read/sync requests.  So for example, if md_num_stripes=5000 and md_write_limit=1500, then during writing, 3500 stripes remain to handle reads/syncs.

 

Using the above examples, if you were reading, writing and running a parity sync all at the same time, 5000-1000-1500=2500, so parity checks would get 1000 stripes, writes would get 1500 stripes, and reads would get the remaining 2500 stripes.

 

Lime-Tech never revealed any priority of reads vs. writes vs syncs, but if I had to guess, on v5 syncs had priority because of how much people complained about stuttering while watching movies during parity checks.  That's just a guess, though, plus I haven't been on the forums enough to know if this general behavior has changed in 6.x.

 

Quoting these bits to point this out to @Marshalleq
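
The arithmetic in the quoted explanation can be sketched as a quick shell check. These are the illustrative numbers from the post above, not values read from a live array:

```shell
# Stripe-allocation arithmetic from the quoted post (v5-era behavior).
# Example numbers from the post, not live tunables.
md_num_stripes=5000   # total stripes available for reads/writes/syncs
md_sync_window=1000   # stripes a parity sync is limited to
md_write_limit=1500   # stripes writes were limited to on v5.x

# During a parity check, reads/writes get what's left:
echo "reads/writes during sync: $(( md_num_stripes - md_sync_window ))"

# During writes, reads/syncs get what's left:
echo "reads/syncs during write: $(( md_num_stripes - md_write_limit ))"

# Reading + writing + syncing all at once, reads get the remainder:
echo "reads during both: $(( md_num_stripes - md_sync_window - md_write_limit ))"
```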


This seems to be working fine in unRAID 6.7.2.

 

I'm currently running it on all three of my unRAID setups right now.

unRAID1a with 14 array and 2 parity drives.

unRAID2 with 23 array and 1 parity drive.

unRAID3 with 19 array and 2 parity drives.

 

49TB unRAID1a--53TB unRAID2--73TB unRAID3

Edited by aaronwt


Attached is a debugged version of this script modified by me.
I've eliminated almost all of the extraneous echo calls.

Many of the block outputs have been replaced with heredocs.

All of the references to md_write_limit have either been commented out or removed outright.

All legacy command substitution has been replaced with modern command substitution.

The script locates mdcmd on its own.

https://paste.ee/p/wcwWV
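
For anyone curious what a couple of those changes look like in practice, here's a minimal sketch (my own illustration, not an excerpt from the linked paste): locating mdcmd instead of hard-coding /root/mdcmd, and a heredoc standing in for a run of echo calls.

```shell
# Find mdcmd wherever this unRAID release put it, instead of assuming /root/mdcmd.
# (The paths below are common locations; adjust if yours differs.)
MDCMD=$(command -v mdcmd || true)
for p in /usr/local/sbin/mdcmd /root/mdcmd; do
    [ -z "$MDCMD" ] && [ -x "$p" ] && MDCMD=$p
done

# A heredoc replaces a block of individual echo statements:
cat <<EOF
mdcmd located at: ${MDCMD:-not found}
Command substitution uses \$(...) rather than legacy backticks.
EOF
```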

 

Edited by Xaero
Removed attached copy due to issues, linking paste instead


@Xaero Do you have something installed that we need to add?  I'm getting command not found errors, bad interpreter errors, and file not found errors.

27 minutes ago, Marshalleq said:

@Xaero Do you have something installed that we need to add?  I'm getting command not found errors, bad interpreter errors, and file not found errors.

Open the file in a good editor (such as Notepad++, not Notepad) and save it with Unix line endings (LF), not Windows line endings (CR LF).
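
If you'd rather fix it on the server itself, stripping the carriage returns with sed does the same thing. A minimal sketch, using a throwaway demo file in place of the real script path:

```shell
# Create a demo file with Windows (CR LF) line endings.
# Swap /tmp/tunables-demo.sh for your actual script path.
printf 'echo hello\r\n' > /tmp/tunables-demo.sh

# Strip the trailing CR from every line, leaving Unix (LF) endings.
sed -i 's/\r$//' /tmp/tunables-demo.sh
```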


I actually hadn't edited it.  I use vi and nano, so I don't think there's much chance of it coming from my end - are you saying it was published to this site with Windows line endings and I need to change them?  In which case I'd need to do that with something (I run a Mac), but this would be a first in 20 years!

