Everything posted by Pauven

  1. Hey jowi, I'm confused. You said you ran the first parity check at stock values, but the report indicates you already had an md_num_stripes of about 4460 - which definitely is not stock. The small increase from 4460 to 5984 (assuming that is the value you tested with) would have only provided a few seconds reduction in parity check time (as you reported). But had you really tested with stock unRAID values (1280/768/384) I think the processing time would have been much longer. Are you sure you tested correctly with stock unRAID values?
  2. jowi, that's a great time for a 4TB drive! Looks like your server runs fine with stock values. Many do. Would you mind posting your TunablesReport.txt file?
  3. zoggy, I've been giving some more thought to your results. On the one hand, the utility did as designed and helped identify the correct values for your server - it is just that they were unexpectedly low, and lower than where you had them configured. On the other hand, I'm suspicious of the results. I would have expected that Test 1 and Test 36, which both sampled the 512 byte value, would have had nearly the same result. Instead, it almost appears that your server is experiencing some kind of issue that is affecting the results. Two things: 1) Anything in your log file, error-wise? 2) If you have the time, can you run another test: (N)ormal length, 16 Byte Interval, 384 Byte Start and 768 Byte End? This test should take about 50 minutes.
  4. Aaaarggghh! Cut-n-paste, you foil me again! Good catch, updated my post to correct.
  5. There is a way! Your scheduled parity check is just a cron job. The crontab format is:
#minute hour mday month wday command
So if you used the following values:
0 0 1-7 * * test $(date +%u) -eq 1 && /root/mdcmd check CORRECT
then the job would run at midnight (0 0), on a day between the 1st and 7th of each month (1-7), in every month (*), on any day of the week (*), but only if the test shows it is a Monday (test $(date +%u) -eq 1 &&), and it calls the parity check directly. I add this to my go file so it's always in my cron job list after a reboot:
# Add unRAID Fan Control & Monthly Parity Check Cron Jobs
crontab -l >/tmp/crontab
echo "# Run unRAID Fan Speed script every 5 minutes" >>/tmp/crontab
echo "*/5 * * * * /boot/unraid-fan-speed.sh >/dev/null" >>/tmp/crontab
echo "# Run a Parity Check on the First Monday of each Month at 12:00am" >>/tmp/crontab
echo "0 0 1-7 * * test \$(date +%u) -eq 1 && /root/mdcmd check CORRECT" >>/tmp/crontab
crontab /tmp/crontab
(Note the escaped \$ in the last echo: without it, $(date +%u) would be expanded when the go file runs at boot, so the crontab entry would test a fixed day number forever instead of evaluating the day of the week when cron fires.)
Note: I also add my unraid-fan-speed.sh script as a cron job; you don't need that part. Doing it this way doesn't require any packages, scripts, or add-ons - just a few lines in your go file. -Paul
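(A quick sanity check, not part of the original post: after the go file has run at boot, you can confirm the parity check entry actually landed in root's crontab.)
crontab -l | grep mdcmd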
  6. That is a nice goal. I know others share it. Personally, I just run my Parity Checks during the middle of the night when I'm sleeping and household temperatures are cooler. I would also much rather have a parity check take less time, so my drives are spun up for a shorter period of time.
Yes, I hard coded to 11% of that sum, same as you hard coded in your script to 15% of that sum. My rationale was maintaining the same percentage of overhead that Tom ships unRAID with. Your 4-year-old quote doesn't represent the values currently shipping with unRAID. They were changed years ago. The current md_num_stripes is 11% bigger than the other two. Sound familiar?
Agreed, testing of all three tunables would be meaningful, but the methodology for such a test has not been defined. Since one of the tunables involves writing, the only way to test all three would be to run a parity check, read files, and write files, all simultaneously. Oh, and you would want to do that while excluding any outside variables like LAN connectivity and desktop performance. If you want to go ahead and code it, I'll be happy to test with it. After all: [ $Your_bash_programming_skills -gt (( $My_bash_programming_skills**2 )) ];
You are correct, Tom has never clarified his statement. I suspect he doesn't know the answer any more than we do. Tests are the only way to determine the appropriate values, which is the goal of this utility - primarily testing md_sync_window against a known, quantifiable touch point. But before you can test, you have to identify your touch points for the tests. What are the known, quantifiable touch points for md_write_limit and md_num_stripes? If you take a look at zoggy's results, you will see that his server got slower with any values beyond md_sync_window=512. At this point I don't think we understand enough about these values to make any generalized assumptions about what is right, as what works for one server doesn't necessarily work for another.
Regarding the relationship between the 3 tunables, it can be viewed in multiple ways. For example, Tom ships unRAID with md_num_stripes = md_write_limit + md_sync_window + 11%. But another way to look at those numbers is as a balanced equation: 60% writes, 30% syncs, 10% reads. Keep in mind that the reads get whatever is left over, up to the maximum of md_num_stripes (since there is no md_read_limit). So if all you are doing is reading, you get 100% reads. If you are reading while doing a parity sync, you get 70% reads vs. 30% sync. If you are reading while writing, you get 40% reads vs. 60% writes.
My formula basically adjusts the values so the ratio (for values 768 and beyond) is 45% syncs, 45% writes and 10% reads. If you are reading while syncing (your 'watching a Blu-Ray' example), you get 55% reads vs. 45% syncs. In your example values (33% syncs, 33% writes, 33% reads), if you are reading while syncing, you get 66% reads vs. 33% syncs. Those numbers are not that far off from 55% reads vs. 45% syncs. Interestingly enough, even at 50% bigger, your read allocation is worse than stock unRAID.
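(To illustrate the ratio described above, here is a minimal bash sketch - not the utility's actual code. It assumes md_write_limit is simply set equal to md_sync_window for values of 768 and beyond, with md_num_stripes carrying the same ~11% overhead Tom ships with.)
#!/bin/bash
# Illustrative only: derive the other two tunables from a chosen md_sync_window,
# using the 45% sync / 45% write / ~10% read split described above.
md_sync_window=1024                          # hypothetical value under test
md_write_limit=$md_sync_window               # writes get the same share as syncs
md_num_stripes=$(( (md_sync_window + md_write_limit) * 111 / 100 ))   # +11% headroom left for reads
echo "num_stripes=$md_num_stripes write_limit=$md_write_limit sync_window=$md_sync_window"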
  7. 120.5 and 121.4 are pretty much a full megabyte apart, but both round to 121. The extra decimal allows my script to have greater awareness of throughput trends. Since you don't care about extracting the last iota of performance, the new 'Best Bang for the Buck' recommendation addresses your desire to have good enough performance at the lowest possible setting.
  8. Your results are very interesting. I had wondered if something like this might turn up. If you look at StevenD's results, you'll see the problem with trying to implement logic that determines further tests are not necessary, as sometimes the values jump back up. Are you still running the highmem settings? If you don't like the way the tests are proceeding, you can always abort with CTRL-C. You can cancel any running Parity Check manually, or by rerunning the utility. I have not found a way to read the currently running values from memory, but I do read what is stored in the config file, and this gets reported after the test is complete. I also don't actually change what is in the config file during the tests, but I do give you the option to update the configuration file after the test is complete. As long as you don't type 'SAVE' at that prompt, it won't permanently change anything. I'll jump on in a few. -Paul
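(For reference, "reading what is stored in the config file" amounts to something like the line below - a minimal sketch, assuming the stock unRAID location of /boot/config/disk.cfg for the disk settings file.)
grep -E "md_(num_stripes|write_limit|sync_window)" /boot/config/disk.cfg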
  9. Thanks for posting your results Steven. Test result #5, at 77.2 MB/s probably would be your 'Best Bang for the Buck' value, but test result #4 tripped up my logic which stops checking after the first value that doesn't show enough improvement. I'll have to think about a way to address that scenario. I find your results intriguing, as they seem cyclical in nature, with recurrent spikes at regular intervals. This is something I expected to occur, but was not present on my server. I think you would be well served to run another test with more granularity, a bit more similar to the previous FULLAUTO routine. If you have the time and are willing, run a (N)ormal length test with a 32 byte interval - option (3). Let it run starting from 512 or 640 or 768, and let it run until end pos 2944. I think this might provide more clarity on how your hardware is responding to these values. This test will take about 2.5 hours. -Paul
  10. Good catch zoggy, thanks! I don't think that's worthy of dropping a new version, so I'll address it in the next version (assuming there is one). 2.1 hours is the correct length. Hmmm... probably not necessary. After all, this utility is merely running non-correcting parity checks. I don't think running a non-correcting parity check is required before running a bunch of partial non-correcting parity checks. In general, you should be running regular, correcting type parity checks. But if it puts your mind at ease to run an extra one first (correcting or non-correcting), then by all means. If your regular parity checks are finding errors (corrected or not), then you've got issues you need to address before using this tool. Also important is drive health. If your SMART reports are showing issues or hints of a drive on the way out, or if any drive balls are not green, then don't play with this tool. Get your issues fixed first. -Paul
  11. As Jonathan just alluded to, I have updated the utility to version 2.0 (see the main post) and I highly recommend upgrading to this new version.
New Features in v2.0:
As Patilan kindly pointed out, the original utility was causing high CPU loading due to a program loop that was constantly measuring the progress of the test by polling mdcmd status. Not only was this code inefficient, it had the potential to influence the results due to high CPU usage. As Patilan recommended, I changed the code to instead wait a predefined period of time, then measure how much data was processed during the test. This has, as expected, lowered CPU utilization to normal levels. It also appears, to my eyes at least, to have slightly improved the overall results.
Even though I made the above change, I still have a loop to update the GUI with a nice countdown timer. Hopefully there are no concerns this time with the CPU load incurred by simply decrementing a timer variable and echoing a status update. But if there are concerns, Patilan has made his lightweight version available, which makes a nice alternative to this utility.
Probably more important than the algorithm change, I have further refined the FULLAUTO routine, and it now completes in 2.1 hours, hopefully with as much accuracy as before. I now recommend this option for everyone. I updated the menu prompts to reflect the new time-based options.
Also noteworthy, I added a new 'Best Bang for the Buck' recommendation, which basically reports the values that will deliver 'good enough' performance while minimizing memory use. I added this because I found that more than doubling my md_sync_window value from 1024 to 2600 only improved my Parity Check performance by 1%. Unless you care about extracting every last iota of performance, I recommend using the new Best Bang for the Buck values. That goes doubly so if you use a lot of 3rd party add-ons that want their share of memory too.
Running the utility will now automatically cancel a Parity Check (or data rebuild) if one is in progress. I give a warning to this effect on the first menu, so you do have a chance to cancel the utility instead of auto-cancelling the check/rebuild. I figured if you are doing those types of tasks, you shouldn't be playing with tools like this utility, and any running Parity Check was more likely a result of an aborted previous test run. I also added a check to make sure the array is started. You know, since it's required and all.
You can download the file from post #1: http://lime-technology.com/forum/index.php?topic=29009.msg259087#msg259087
Some Test Notes:
I have been using the utility almost constantly over the past week, primarily to test my code changes. From all my testing, I've noticed a few details worth sharing. Primarily, shorter tests seem to have a lot of variability to them, and I think some of this may be due to caching and burst transfers. Results start stabilizing around the 1-2 minute mark, and 3 minutes provides nice, consistent results. I almost released the new FULLAUTO routine with a 2-min first pass, as it produced identical answers to the 3-min first pass version (at least on my server). But it appeared to me that the results hadn't fully settled down, and I didn't think saving 20 minutes was worth the increase in variability.
Very small improvements are to be had for going beyond an md_sync_window of 1024, at least on my server with 3TB 5400RPM drives, though I theorize that faster 4TB drives and 7200RPM drives might drive that number up a small bit.
If you are running a mixed assortment of drives, your overall Parity Check performance is limited by your slowest drive. This utility will correctly report lower md_* values for your array in this situation. If you upgrade your drives later and replace your slow drives with fast drives, it's probably worth running this utility again, as the recommendation has probably changed.
The faster numbers reported by higher md_sync_window values are only realizable at the beginning of a Parity Check, not the entirety of it. For example, my drives max out at about 139MB/s, and I had to set my md_sync_window up to 2600 to hit it. But by 10% into the Parity Check, drive performance had already naturally dropped below 136MB/s, the same level of performance provided with an md_sync_window of 1024 on my server. So more than doubling the memory allocated to mdcmd only improved performance by a small percentage, and even then for only a small portion of the drive. This is why I added the Best Bang for the Buck recommendation.
The exception to the above statement is if you're still using the unRAID stock value of 384, which on my server limited performance to about 60 MB/s - low enough to throttle the entirety of the Parity Check. If your server is being throttled by the unRAID stock values, any increase at all will pay huge dividends. -Paul
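(A minimal sketch of the v2.0 'wait, then measure' approach described above - not the utility's actual code. It assumes the md driver reports the current resync position as an mdResyncPos line, in roughly 1K blocks, in the output of /root/mdcmd status; both the field name and the unit are assumptions worth verifying on your own build.)
#!/bin/bash
# Read the parity-check position, wait a fixed window, read it again, and compute MB/s.
get_pos() { /root/mdcmd status | grep mdResyncPos | cut -d= -f2; }
start_pos=$(get_pos)
sleep 180                                   # a 3-minute sample window, as in the first pass
end_pos=$(get_pos)
awk -v s="$start_pos" -v e="$end_pos" 'BEGIN { printf "%.1f MB/s\n", (e - s) * 1024 / 180 / 1000000 }'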
  12. Hey zoggy, No, I've not looked into that before. My hunch is that the md_* values are the proper way to tune the md subsystem for performance, and that allocating additional highmem for caching was somewhat masking the issue. It's certainly possible that both forms of tuning may work wonders when combined. Though keep in mind nothing will ever be faster than the transfer speed of your drives, and I feel that I am already achieving that level of performance from this tuning utility alone. Since I don't have any experience with the highmem_is_dirtyable parameter, perhaps you would be willing to test and report back on your experiences after tuning the md_* values? -Paul
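(For anyone who does want to experiment with it alongside the md_* tuning, a minimal sketch - assuming your 32-bit kernel exposes the setting at /proc/sys/vm/highmem_is_dirtyable; the tunables utility does not touch this.)
cat /proc/sys/vm/highmem_is_dirtyable          # show the current value
echo 1 > /proc/sys/vm/highmem_is_dirtyable     # allow highmem pages to count as dirtyable (reverts on reboot)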
  13. Basically just a much more efficient routine. I eliminated pass 3 altogether. I originally suspected that there might be one single value that hit the sweet spot, so to speak, and pass 3 was trying to find that single value. After running way too many tests for my server's own good, I finally came to the conclusion that performance gradually increases to a plateau, levels off, and sometimes takes a dive after that plateau. So my new routine is all about finding the leading edge of that plateau with a simplified search pattern. The first pass gets me in the neighborhood, and the 2nd pass narrows it down to the nearest increment of 8. Fewer total samples are being taken, but many of the samples are longer, more accurate samples. I have found that shorter samples lower the sample quality, so I think 2.1 hours is about as short as I want to make the FULLAUTO routine while still delivering any promise of accuracy.
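(A minimal sketch of that two-pass search pattern - not the utility's actual code. The test_window function is a hypothetical stand-in for "run a timed partial parity check at this md_sync_window and print the resulting MB/s".)
#!/bin/bash
best=0; best_speed=0
# Pass 1: coarse scan in 128-byte steps to find the neighborhood of the plateau.
for win in $(seq 512 128 2944); do
    speed=$(test_window "$win")                        # hypothetical timed partial parity check
    if awk -v a="$speed" -v b="$best_speed" 'BEGIN { exit !(a > b) }'; then
        best=$win; best_speed=$speed
    fi
done
# Pass 2: fine scan in 8-byte steps, starting 120 below the pass-1 winner.
for win in $(seq $((best - 120)) 8 "$best"); do
    speed=$(test_window "$win")
    echo "md_sync_window=$win -> $speed MB/s"
done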
  14. Hey Jonathan, I know what you mean about addictively watching the console, I'm a number watcher just like you! I think you would have to do a major change: adding a new HD controller, or possibly new, faster HDs (like adding a couple of 4TB 7200 RPM drives to your mix of 2TB 5400 RPM drives). Basically, any kind of change that would directly affect throughput rates, as these memory parameters can be considered a throttle on throughput. Parameters that work well for 100MB/s drives could be too low to allow 140MB/s and 180MB/s drives to reach their full potential. It's also possible that simply adding drives is enough to warrant a change in parameters, but since memory is allocated on a per-drive basis, adding more drives automatically allocates more memory, so I don't think that is likely. But in general, once you do this once, I think it would be extremely rare to ever need to do it again. As an example, I had manually tested and chosen 1024 bytes for my md_sync_window a few months back. This dropped my Parity Check from around 11 hours to 7:40. Using this utility, I found that 2600 bytes is the threshold of unthrottled performance on my hardware, and it gave me an extra 3.5MB/s on the top end. But this only dropped my Parity Check by 5 minutes, to 7:35. The moral of the story is that, after you've tuned once and replaced the unRAID stock values, retuning again will probably show no real-world benefit. By the way, I programmed version 2.0 of the utility over the weekend, and plan to post it today. The most notable change is that I got the FULLAUTO routine down to 2.1 hours. -Paul
  15. Tom's busy with more pressing matters at the moment. We all need 5.0 to go gold. I got my understanding of the memory allocation from this post by mikejp, which reportedly sourced information that Tom provided: http://lime-technology.com/forum/index.php?topic=15224.msg142767#msg142767 I can't imagine someone would write "highest disk number in array" by accident instead of simply writing "disks in array", but once Tom surfaces again and things calm down, I can ping him.
  16. You make some very good points, but I think your critique of the current method is overly harsh. The current method works, and works fantastically well at that. Your proposed changes make for better code, but may not actually make for better results. You're right about the precision not being 1000x higher, I think I am seeing about 250ms processing time on my server. I could probably remove the sleep statement altogether as I don't think it is accomplishing anything anymore in the current loop. All said, I think I will implement your suggestions for the next version. Thanks for sharing your feedback. -Paul
  17. I did this on purpose, as my understanding is that if you have two disks, one as disk1, and one as disk20, the md driver will allocate memory for 20 drives. I guess I would need Tom to chime in here and let me know if my understanding is incorrect, but for now I will leave it as coded, as I believe it is reporting the true impact of the setting. On my server I only have 15 drives, but the way I have them spaced out for cooling means I have a drive in slot 23.
  18. While I like that idea, I'm not sure it is feasible. When I was originally performing md_* tuning, I intentionally tried to get my server to run out of memory. I cranked up the tunables to high values, ran a non-correcting parity check, pre-cleared multiple drives at the same time, and read/streamed multiple files from the server all at once. The problem was that even though memory got very low, and the server got very slow, it never actually crashed. Credit where credit is due, unRAID is pretty solid.
While I've certainly seen many reports of people complaining about kernel OOPS and Out of Memory issues when increasing the md_* tunables, from my observation every person who complained was also running one or more plug-ins like Plex and SF. So the problems with trying to identify that the server is running out of memory are that 1) I've seen what I thought was dangerously low, and it wasn't actually dangerous, and 2) plug-in developers need to be responsible for their own apps and how they utilize memory. There are too many apps for me to try and figure out how much memory each one needs to run without crashing.
I just added a feature to v1.1 (now available!) that reports how much memory the recommended settings will consume. The amount of memory consumption really isn't that bad, considering how much memory is available. Here are some examples:
Stock (md_num_stripes=1280) with 3 drives: 15 MB
Stock (md_num_stripes=1280) with 7 drives: 35 MB
Stock (md_num_stripes=1280) with 24 drives: 120 MB
My Server: Tuned (md_num_stripes=5968) with 24 drives: 560 MB
Since my server has 4GB, I don't really see any problem with allocating an extra 440MB to unRAID for maximizing performance. Since unRAID is sized out of the box for servers with 512MB of memory, giving it an extra 440MB still leaves a good 3GB of RAM for plug-ins and add-ons. And once unRAID goes 64-bit, we won't even have to think about this anymore.
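(A minimal sketch of where those figures come from - my reading of the numbers, not a confirmed internal of the md driver. It assumes roughly 4 KiB of stripe buffer per stripe per drive slot, which is consistent with the examples above.)
#!/bin/bash
# memory footprint ≈ md_num_stripes * drive_slots * 4 KiB (assumed per-stripe, per-slot cost)
num_stripes=5968
drive_slots=24        # highest assigned slot, not the count of populated disks
echo "$(( num_stripes * drive_slots * 4096 / 1024 / 1024 )) MB"   # prints 559 MB, matching ~560 MB above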
  19. I have updated the utility to version 1.1 (see the main post) and I highly recommend upgrading to this new version.
New Features in v1.1:
I found an issue where some results would have a 1 second variance in the reported time, leading to false measurements. This was caused by a sleep statement that I had in the code which was sleeping 1 second between samples. I decreased the sleep time during the test run to 0.001 seconds (1 millisecond) so the accuracy is 1000x higher. This has dramatically improved the quality of the results.
Originally I was choosing the recommended result by which test had the lowest time. The problem with this approach is that a variance of 1 ms was enough to cause bigger values to be recommended even though there was no real-world benefit. I changed my logic to compare the MB/s result instead of the elapsed time, choosing the lowest test number that has the highest speed, which should now represent the best possible value.
After the test is run, I now report how much memory the recommended values will consume on your server. This is a complex calculation that takes into account the number of stripes, the highest drive assigned to your array (not the number of drives), and the memory footprint of each stripe.
I added the option to go ahead and apply the recommended settings to your server. This is a non-permanent change, as a reboot will go back to your normally configured values. I also added the option to write the recommended settings to your disk.cfg file. This makes the change permanent, so it will apply after reboot.
I extended the FULLAUTO test range all the way up to 2944. I have seen interesting results in the 2600-2800 range, so I figured 'why not!'.
I added a (M)ANUAL OVERRIDE option on the Start/End Override screens. I did this because I interrupted a FULLAUTO test in the third pass, and wanted a way to manually restart where it left off. Also, I allowed manually entered values up to 5888 for anyone that is crazy enough to try it. 2944 is as high as I've tried, so I have no idea if those super high values are okay to play with.
I polished the formatting of the data that is produced, both during the test and in the TunablesReport.txt file. I also made a few minor tweaks to the user interface.
You can download the file from post #1: http://lime-technology.com/forum/index.php?topic=29009.msg259087#msg259087
  20. Hopefully you mean your desktop crashed and not your unRAID server, right? If you lost your connection during the test, that probably means two things: a Parity Check was still running (feel free to cancel it) and the last set of tested values is still in use. You can still see the accumulated results in the TunablesReport.txt file, as it is written to as the test progresses. This will also give you a clue as to which set of values was being tested. If you want to get back to your normal values, there are multiple ways, but the easiest is probably just to restart your server. I highly recommend using screen, especially if you are using a remote connection like Telnet. After you log onto the server, you run screen, and then you can run one or more console windows through screen. If you get disconnected for any reason, you telnet back onto the server and run screen -r to reconnect. Everything remained running while you were disconnected. I use screen when doing pre-clears. I'll telnet to the server, open up several screen console windows, start up multiple pre-clears, then close my telnet connection. I then monitor everything through unMenu's MyMain status page, which shows pre-clear progress. -Paul
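(A minimal example of that workflow - the session name 'tunables' is just illustrative.)
screen -S tunables                  # start a named screen session on the server
./unraid-tunables-tester.sh         # run the utility (or a pre-clear) inside the session
# detach with Ctrl-A then D, or simply let the telnet connection drop
screen -r tunables                  # reattach later from a new telnet session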
  21. I think we are putting the cart before the horse with any talk of making this utility a package or plug-in. I haven't even seen reports of positive results from using the utility yet. I advise safe mode, though I've been testing with unMenu, cachedirs, screen, ups, and a few others without issues. I've also tested some very high values (higher than what I've released in v1.0 of the utility) without issue. The situation that I am concerned about is that you have a plug-in that is writing data to your array (or you are writing data to your array manually) and the server crashes due to an out of memory condition which resulted from the very high memory values being tested. This could result in some data loss (probably limited to what was being written). This test is accomplishing two goals: primarily finding the set of values that produces unhindered performance, but secondly helping to identify any limits to how high these values can go (by causing out of memory errors) - each server may be different. Everyone needs to remember it isn't exactly safe to be writing data in the middle of a test - not to mention that inconsistent background reads/writes may skew the results. So the best advice is to avoid allowing anything to write data to your array during these tests. After the tests are done and you've selected your new values, I still recommend caution, especially if the new values are significantly higher than unRAID stock values. I would start off with lots of reads, and reads while running a non-correcting parity check - basically try to crash your server by doing a lot of things that read simultaneously. If that goes well, introduce some writes into the equation. Ultimately you are responsible for your server's stability. The array has to be started. The utility is running a whole bunch of Parity Checks, after all. Can't do that with the array stopped. -Paul
  22. I knew someone would ask... I had to go look it up myself. I only chmod once in a blue moon. Luckily when I copy files onto my flash over the network they seem to already have the correct permissions, so that probably holds true for most unRAID users. I updated the instructions to clarify.
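(For reference, and as my assumption about what the clarified instructions cover: the permissions step is simply marking the script executable after copying it to the flash drive.)
chmod +x /boot/unraid-tunables-tester.sh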
  23. RUNNING THE UTILITY: The utility is run from the server's console, and is not accessible from the unRAID GUI. I like to use TELNET and SCREEN to manage my console connections, but use whatever you like best. You should always run this utility interactively. It is not designed to be something you put in your go file or in a cron job. To run, simply cd to the folder where you placed the file (e.g. cd /boot) then run the program (i.e. type: unraid-tunables-tester.sh, or ./unraid-tunables-tester.sh for those that like that convention).

FULLY AUTOMATIC MODE: Fully Automatic mode is easy to select. At the main screen, enter Y (uppercase only) to accept the warning prompt, then type FULLAUTO or fullauto as the Test Type. You will then have to enter Y or y on the Fully Automatic Mode warning screen. At this point, the test is running. The FULLAUTO routine takes 2.1 hours to run, unless your server responds well to low values, in which case the test will be extended by 12 minutes to test extra low values (below unRAID stock values). Also, the FULLAUTO test allocates very large amounts of memory to the md_* tunable parameters. At the end of the first pass, it has allocated more than 5 times the memory that the stock values allocate. Considering the stock values were appropriate for servers with only 512MB RAM, these amounts should be safe for servers with 4GB of RAM, but any plug-ins and add-ons you've installed will be competing for that same memory, so beware.

Fully Automatic mode makes 2 passes. The first pass tests md_sync_window values ranging from 512 to 2944 with a Byte Increment of 128 and a Test Length of 3 minutes (somewhere between a Normal and a Thorough Test Type). The fastest speed is recorded, and the second pass is centered on the corresponding md_sync_window with a test range of 120 (starting 120 values below the fastest md_sync_window) and a Byte Increment of 8. The second pass has a Test Length of 4 minutes (a Thorough level test).

Interestingly enough, the FULLAUTO mode revealed to me that my server runs best with md_sync_window values around 2668, significantly higher than the stock unRAID value, and also more than double any value I had tested manually (back before I wrote this utility). My new value is about 4MB/s faster than my old value of 1024. I never would have discovered this improvement without the utility. I have yet to run a Parity Check, so I can't say for sure that my times will be reduced, and I don't know about long term server stability, so I will have to report back on that in the future.

MANUAL MODE: Manual Mode isn't so much a mode as just the selection of various options you are presented with when you don't select FULLAUTO.

TEST TYPE: Your first option (on the same FULLAUTO selection screen) is the Test Type. The tests are listed in order from quickest (V for Veryfast) to slowest (E for Extreme). These settings control how far the Parity Check is allowed to proceed before it is cancelled and restarted with the next set of test values.

unRAID Tunables Tester v1.0 by Pauven
Please select what type of test you would like to perform:
 (V) Veryfast  - Tests 0.02% of your array, produces inaccurate results
 (F) Fast      - Tests 0.10% of your array, produces rough results
*(N) Normal    - Tests 0.25% of your array, produces good results
 (T) Thorough  - Tests 1.00% of your array, produces great results
 (E) Extreme   - Tests 4.00% of your array, produces accurate results
 (FULLAUTO)    - 1.5x Length as Full Parity Check! Fantastic results
 (C) Cancel
Enter V F N T E or FULLAUTO to continue or C to cancel:

For your very first test, I would suggest using Veryfast so you can get comfortable with how the test runs. The downside with these quicker test types is that they are less accurate. Minor server hiccups can cause the results to skew badly. Longer tests collect more real data and squelch this noise. Also, quicker tests simply have a smaller sample from which to extrapolate performance. The Veryfast test stops the Parity Check at 0.02% complete, which takes only a couple seconds on my server. That's not much data to base decisions on, but it comes in handy for performing a quick scan of all test values to see if there is a range you want to hone in on. Conversely, longer tests take... longer. Sometimes painfully so. The longest test is the Extreme, which allows the Parity Check to get to 4% for each set of test values. On my server, it takes about 15 minutes per test. You get very accurate results, but you need to be picky about how many different values you test.

BYTE INCREMENTS: The Byte Increment value directly affects how many individual tests are run. The Byte Increment is the interval of values that will be tested for md_sync_window.

unRAID Tunables Tester v1.0 by Pauven
Please select what tunable value byte increment you would like to test with.
NOTE: Smaller increments will cause additional test iterations to run. For example, an increment of 128 will run 14 tests, while an increment of 64 will run 27. Each smaller increment will run double the number of tests as the one before it. An increment of 1 will run 1665 tests. Increments below 64 are not recommended, but have been made available to you in case Curious George is your hero and the phrase 'Curiosity Killed The Cat' means nothing to you.
CAUTION: You may only want to test with small intervals when running a (F)ast type test, otherwise this test may take days...
*(1) 128 bytes ( 14 Test Iterations)    (5) 8 bytes ( 209 Test Iterations)
 (2)  64 bytes ( 27 Test Iterations)    (6) 4 bytes ( 417 Test Iterations)
 (3)  32 bytes ( 53 Test Iterations)    (7) 2 bytes ( 833 Test Iterations)
 (4)  16 bytes (105 Test Iterations)    (8) 1 bytes (1665 Test Iterations)
 (C) Cancel
Enter 1-8 to continue or C to cancel:

For example, if you select the default Byte Increment of 128, md_sync_window values of 384, 512, 640, etc. will be tested - each new test value is 128 higher than the previous. A Byte Increment of 1 will test values 384, 385, 386, etc. - each new test value is 1 higher than the previous. For your very first test, I would suggest using the default 128 so you can get comfortable with how the test runs. Remember that smaller increments mean more tests, which means longer overall testing time. It would be unwise to combine an Extreme Test Type with a 1 Byte Increment, as that could take a few weeks to run! The downside to larger increments is that large ranges of values go untested, and one of those values may be the sweet spot for your server. I like to zero in by first running a Fast or Normal Test Type with a medium-large increment, like 64. Looking at the results, I might see a smaller range I want to test further, so I might run a Thorough or Extreme test with a smaller increment, but only over that smaller range.

START POSITION OVERRIDE: By default, the test is designed to start at an md_sync_window value of 384 bytes (the unRAID stock value).
This is fine for quicker Test Types and larger Byte Increments, but once you've run your preliminary tests you might want to zoom in on a particular value range. For example, my server responded very well to values around 1280, so I might set the Start Position Override to 1152, skipping over all the test values from 384 to 1151.

Would you like to override the STARTING position of this test? This is helpful if you have run previous tests at faster speeds and larger byte increments, and you would now like to hone in on a smaller test range. The default starting position is the unRAID stock md_sync_window of 384 bytes.
*(N)  384 bytes    (5) 1024 bytes    (10) 1664 bytes    (15) 2304 bytes
 (1)  512 bytes    (6) 1152 bytes    (11) 1792 bytes    (16) 2432 bytes
 (2)  640 bytes    (7) 1280 bytes    (12) 1920 bytes    (17) 2560 bytes
 (3)  768 bytes    (8) 1408 bytes    (13) 2048 bytes    (18) 2688 bytes
 (4)  896 bytes    (9) 1536 bytes    (14) 2176 bytes    (19) 2816 bytes
 (C) Cancel
Enter N or 1-14 to continue, or C to cancel:

END POSITION OVERRIDE: By default, the test is designed to end at an md_sync_window value of 2048 bytes (a somewhat arbitrary value). This is fine for quicker Test Types and larger Byte Increments, but once you've run your preliminary tests you might want to zoom in on a particular value range. For example, my server responded very well to values around 1280, so I might set the End Position Override to 1408, skipping all the test values beyond that point.

Would you like to override the ENDING position of this test? This is helpful if you have run previous tests at faster speeds and larger byte increments, and you would now like to hone in on a smaller test range. The default ending position of this test is 2048 bytes. The value you choose must be greater than or equal to 384 bytes.
*(N) 2048 bytes    (5) 1024 bytes    (10) 1664 bytes    (16) 2432 bytes
 (1)  512 bytes    (6) 1152 bytes    (11) 1792 bytes    (17) 2560 bytes
 (2)  640 bytes    (7) 1280 bytes    (12) 1920 bytes    (18) 2688 bytes
 (3)  768 bytes    (8) 1408 bytes    (14) 2176 bytes    (19) 2816 bytes
 (4)  896 bytes    (9) 1536 bytes    (15) 2304 bytes    (20) 2944 bytes
 (C) Cancel
Enter N or 1-14 to continue, or C to cancel:

Combined with my Start Position Override, I've now focused my tests on a much smaller range of values, from 1152 to 1408. I can now increase my Test Type to a longer test, and/or lower my Byte Increment to a smaller interval to hit more test points. One other use of the End Position Override is to test values beyond 2048. I've provided options all the way up to 2944 (again, a somewhat arbitrary number, but we're getting pretty big and silly at that point). If there's a need for higher values, I'll consider adding them in the future, but for now I think this is a safe limit.

MONITORING THE TEST RUN: Some of these tests can take a long time to run, especially in Extreme mode. I couldn't imagine waiting 15 minutes for a status update; I would go bonkers! So I designed the GUI to update every second. As each test progresses, you can see the current StopWatch elapsed time for the test, as well as the current position in the Parity Check (the same position data you would see in the unRAID GUI). For the previously completed tests, you can see the tested md_sync_window value, the test duration time, and the calculated MB/s.
SAMPLE OF MY SCREEN WHILE RUNNING PASS 2 IN A FULLAUTO TEST:
Test 79 With md_sync_window=2728 Completed in 412.109 seconds at 138.9 MB/s
Test 80 With md_sync_window=2736 Completed in 412.201 seconds at 138.8 MB/s
Test 81 With md_sync_window=2744 Completed in 412.200 seconds at 138.8 MB/s
Test 82 With md_sync_window=2752 Completed in 412.192 seconds at 138.8 MB/s
Test 83 With md_sync_window=2760 Completed in 412.134 seconds at 138.9 MB/s
Test 84 With md_sync_window=2768 Completed in 412.128 seconds at 138.9 MB/s
Test 85 With md_sync_window=2776 Completed in 412.204 seconds at 138.8 MB/s
Test 86 With md_sync_window=2784 Completed in 412.136 seconds at 138.9 MB/s
Test 87 With md_sync_window=2792 Completed in 413.145 seconds at 138.5 MB/s
Test 88 With md_sync_window=2800 Completed in 412.209 seconds at 138.8 MB/s
Test Range Entered - Stopwatch: 342.02s - Current Position: 49288492

ABORTING A TEST RUN: If you need to stop a test run for any reason, the easiest way is to simply press the keyboard dynamic duo CTRL-C (which means cancel here, not copy). Keep in mind that this cancels just the utility program, nothing else. If the utility was currently running a Parity Check, which is very likely, that was not cancelled. You could cancel the Parity Check the normal way through the GUI, or type /root/mdcmd nocheck at a command prompt.

REVIEWING TEST RESULTS: After the test run is complete, detailed test results are written to TunablesReport.txt. Summary test results are presented in the console window, and you are also able to press Y to view a copy of the TunablesReport.txt file right in the console. The TunablesReport.txt file lives wherever you copied the unraid-tunables-tester.sh utility. If you need to save any results before running another test, you should rename or move this file, otherwise it will be overwritten by the new test run.

Tunables Report from unRAID Tunables Tester v1.0 by Pauven
NOTE: Use the smallest set of values that produce good results. Larger values increase server memory use, and may cause stability issues with unRAID, especially if you have any add-ons or plug-ins installed.
Test | num_stripes | write_limit | sync_window |      Time | Speed
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 73 Sample Points @ 0.8% Duration) ---
   1 |        1408 |         768 |         512 |  222.345s | 103.0 MB/s
   2 |        1440 |         768 |         544 |  208.967s | 109.5 MB/s
   3 |        1472 |         768 |         576 |  195.582s | 117.0 MB/s
   4 |        1504 |         768 |         608 |  188.421s | 121.5 MB/s
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
  55 |        2833 |        1275 |        1275 | 1765.231s | 135.0 MB/s
  56 |        2835 |        1276 |        1276 | 1766.537s | 135.0 MB/s
  57 |        2837 |        1277 |        1277 | 1764.631s | 135.1 MB/s
Completed: 7 Hrs 51 Min 36 Sec.
Recommended values for your server came from Test # 57 with a time of 1764.631s:
Tunable (md_num_stripes): 2837
Tunable (md_write_limit): 1277
Tunable (md_sync_window): 1277
In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Edited 08/29/2013 - Updated the FULLAUTO test description to reflect v2.2.
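(If you want to pull the fastest row out of a saved report without re-running anything, a quick one-liner works - a minimal sketch, assuming the report keeps the pipe-delimited 'Test | num_stripes | write_limit | sync_window | Time | Speed' layout shown above.)
awk -F'|' '/MB\/s/ { gsub(/ MB\/s/, "", $6); if ($6 + 0 > best) { best = $6 + 0; line = $0 } } END { print line }' TunablesReport.txt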