Popular Content

Showing content with the highest reputation on 08/11/19 in all areas

  1. 1 point
NEW! For Unraid 6.x, this utility is named: unraid6x-tunables-tester.sh
For Unraid 5.x, this utility is named: unraid-tunables-tester.sh

The current version is 4.1 for Unraid 6.x and is attached at the bottom of this post. I will maintain this post with future versions (if there are any). The legacy version 2.2 for Unraid 5.x is also attached at the bottom of this post; that version is no longer maintained.

VERSION HISTORY

V4.1: Added a function to use the first result with 99.8% max speed for Pass 2
      Fixed Server Name in Notification messages (was hardcoded TOWER)
      Many fixes to the SCSI Host Controllers and Connected Drives report
      Added a function to check the lsscsi version and optionally upgrade to v0.30
      Added a function to archive unprocessed Notifications
      Updated report to show RAM usage in KB if below 0.5 MB
      Cosmetic menu tweaks
      - by Pauven 08/14/2019

V4.0: Updated to work with Unraid 6.6.x and newer
      No longer compatible with Unraid 5.x servers (still supported by UTT v2.2)
      Changed /root/mdcmd to just mdcmd
      Added an Unraid version check and compatibility warnings
      Added an Unraid Mover warning if Mover has file moves queued
      Removed md_write_limit
      Added new tests for md_sync_thresh and nr_requests
      Updated logic to handle dual parity servers
      Refined test points and test running time
      Added new logic to only test low, higher or ultra high points if necessary
      Added a staggered spin-up of any sleeping array drives before running tests
      Added lots of new server info to the output file
      Added RAM consumption of md_num_stripes info on every test point
      Added a separate CSV file for charting the results
      Added Baseline test to measure performance of the current configuration
      Added Default tests to measure performance of stock Unraid values
      Replaced B4B with new Thriftiest - all-new algorithm, 95% of fastest speed
      Added a new Recommended Sync Speed at 99% of fastest speed
      Removed Read/Write tests - not compatible w/ Unraid v6
      Menu polishing, code cleanup and bug fixes
      Reformatted report output and added system info to reports
      Added an Unraid Notifications script wrapper to block notifications
      Added a trap to handle script aborting to run cleanup routines
      - by Pauven 08/05/2019

V3.0: Internal version only, never publicly released
      Added download/install/usage of lshw for hd info
      Added a write test for checking md_write_limit values
      Added a read test for checking md_num_stripe values
      - by Pauven 09/06/2013

V2.2: Added support for md_sync_window values down to 8 bytes
      Added an extra-low value special pass to FULLAUTO
      Fixed a bug that didn't restore values after testing
      - by Pauven 08/28/2013

V2.1: Added support for md_sync_window values down to 128 bytes
      Fixed a typo on the FULLAUTO option
      - by Pauven 08/28/2013

V2.0: Changed the test method from a "time to process bytes" to a "bytes processed in time" algorithm, lowering CPU utilization during testing
      Updated menus to reflect time-based options instead of % options
      Revamped FULLAUTO with an optimized 2-hour, 2-pass process
      Added Best Bang for the Buck sizing recommendation
      Added logic to auto-cancel Parity Checks in progress
      Added a check to make sure the array is Started
      - by Pauven 08/25/2013

V1.1: Increased update frequency 1000x to improve result accuracy
      Polished the formatting of the output data
      Various menu improvements and minor logic tweaks
      Added menu options for Manual Start/End Overrides
      Updated logic for identifying the best result
      Extended the range of the FULLAUTO test to 2944
      Added a memory use calculation to the report
      Added option to write new params to disk.cfg
      - by Pauven 08/23/2013

V1.0: Initial Version
      - by Pauven 08/21/2013

EXECUTIVE SUMMARY

Unraid Tunables Tester (UTT) is a utility that runs dozens of partial, non-correcting parity checks with different values for the Unraid Tunable parameters and reports on the relative performance for each set of
values. Adjusting these values can improve system performance, particularly Parity Check speed, and this utility helps you find the right values for your system.

On the new UTT v4.1 for Unraid v6.x, users can select from predefined Normal, Thorough, Long, and Xtra-Long tests. There are no manual controls. This version tests md_sync_window, md_num_stripes, md_sync_thresh, and optionally nr_requests (in the Thorough and Xtra-Long tests).

On the legacy UTT v2.2 for Unraid v5.x, users can either manually select the test value ranges and test types, or alternatively choose a Fully Automatic mode that runs an algorithm designed to zero in on the best values for your system. This version tests the Unraid Tunables md_sync_window, md_write_limit, and md_num_stripes.

Users don't need to know any command line parameters, as all prompts are provided at runtime with friendly guidance and some safety checks.

SUMMARY

Since Unraid servers can be built in a limitless number of configurations, it is impossible for Lime-Technology to know what tunable parameters are correct for each system. Different amounts of memory and various hardware components (especially HDD controllers and the drives themselves) directly affect what values work best for your system. To play it safe, Lime-Technology delivers Unraid with 'safe' stock values that should work with any system, including servers with limited RAM. But how is a user to know what values to use on their server? This utility addresses that problem by testing the available tunable parameters:

UTT v4.1 for Unraid 6.x    UTT v2.2 for Unraid 5.x
md_num_stripes             md_num_stripes
md_sync_window             md_sync_window
md_sync_thresh             md_write_limit
nr_requests

Each test is performed by automatically setting the values and running a partial, Non-Correcting Parity Check, typically less than 1% of a full Parity Check.
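As a side note on the arithmetic: since v2.0, the utility measures each run with a "bytes processed in time" approach rather than timing a fixed amount of data. A minimal sketch of that math, assuming the parity check position is reported in 1 KiB blocks and elapsed time is captured in milliseconds (the function name and units here are illustrative, not UTT's actual code):

```shell
# Illustrative only - not UTT's actual code. Assumes positions are in
# 1 KiB blocks and elapsed time is in milliseconds.
speed_mb_s() {
  start_kib=$1; end_kib=$2; elapsed_ms=$3
  # MB/s = (KiB processed * 1024 bytes) / (elapsed seconds) / 1,000,000
  awk -v s="$start_kib" -v e="$end_kib" -v ms="$elapsed_ms" \
    'BEGIN { printf "%.1f\n", (e - s) * 1024 / (ms / 1000) / 1000000 }'
}

# Example: 14,648,437 KiB covered in 120,000 ms
speed_mb_s 0 14648437 120000   # prints 125.0
```

Measuring position deltas this way keeps CPU overhead low, since the script only needs to sample the check's progress at the start and end of each timed window.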
By running just a short section of a Parity Check before stopping it, this utility can test multiple values in relatively quick succession (certainly quicker than running a full Parity Check or doing this process manually). Depending upon the test chosen, the UTT script will try dozens or even hundreds of combinations of values, finding the combination that works best for your particular server hardware. There are no command line parameters; the entire utility is driven through a user prompt system. Each test is timed down to the millisecond, which is important when running shorter tests, so you can determine which set of values is appropriate for your system.

NOTES on the New UTT v4.1 for Unraid 6.x

For the new UTT v4.1, the output is saved to the current directory when you launch UTT. The report will be named based upon the test type you chose (i.e. NormalSyncTestReport_<datestamp>.txt). There is also an identically named CSV file generated that has all of the test results in spreadsheet format, making for easier charting of results.

When the tests complete, you are provided the option to apply the Fastest, Thriftiest, or Recommended values that were discovered, or revert back to the previous values. In addition, if you apply new values, you are also given the option to SAVE the chosen values to the server's configuration, so they will re-apply after a reboot. You can also manually apply your chosen values later by going to Settings > Disk Settings.

The Fastest values represent the combination of Tunables that resulted in the fastest measured Parity Check speeds. If the fastest speed was observed in multiple combinations of values, then the combination that has the lowest memory utilization is chosen as the Fastest. The Recommended values are the combination of values with the lowest memory utilization that achieves at least 99% of the Fastest speed.
Often this provides a nice memory savings over the Fastest values, while only adding a few seconds or minutes to a full Parity Check. The Thriftiest values are the combination of values with the lowest memory utilization that achieves at least 95% of the Fastest speed. These usually provide a significant memory savings over the Fastest values, but might make your Parity Checks noticeably longer.

In case you're wondering, the formula for assigning the four Tunables values is of my own design. It tests a whole bunch of values for md_sync_window, assigns md_num_stripes as 2x md_sync_window, and tests various methods of assigning md_sync_thresh (md_sync_window -1, -4, -8, -12, -16, -20, -24, -28, -32, -36, -40, -44, -48, -52, -56, -60, -64, and md_sync_window/2), plus optionally various values for nr_requests (128, 16, and 8). If nr_requests is not tested, then all tests use nr_requests=128, which typically provides both the fastest speeds and the most obvious speed curves, making it easier to find the best values for the other three Tunables.

It should be noted that low values for nr_requests (i.e. nr_requests=8) seem to make all values for the other Tunables perform really well, perhaps 90-95% of maximum possible speeds, but in our testing we have always found that the maximum possible speeds come from nr_requests=128. For this reason, all of the default tests are performed at nr_requests=128, and we make the nr_requests tests optional (Thorough or Xtra-Long tests). In our experience, after the other values have been properly tuned for your server, these optional nr_requests tests of lower values will only show slower speeds. That said, it is possible that your server hardware responds differently, and the only way to know for sure is to run these optional tests.
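Sketched in shell, the v4.1 enumeration described above might look like this for a single md_sync_window candidate. This is a hypothetical sketch, not code extracted from UTT, and the variable names are mine:

```shell
# Hypothetical sketch of the v4.1 formula described above (not UTT's code).
window=6144                  # one md_sync_window candidate
stripes=$(( window * 2 ))    # md_num_stripes = 2x md_sync_window

# md_sync_thresh candidates: window-1, then window-4 down to window-64
# in steps of 4, and finally window/2
threshes=$(( window - 1 ))
for off in 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64; do
  threshes="$threshes $(( window - off ))"
done
threshes="$threshes $(( window / 2 ))"

# Default tests pin nr_requests=128; Thorough/Xtra-Long also try 16 and 8
for thresh in $threshes; do
  echo "md_sync_window=$window md_num_stripes=$stripes md_sync_thresh=$thresh nr_requests=128"
done
```

For md_sync_window=6144 this yields 18 md_sync_thresh candidates per nr_requests value, which is why the full test matrix grows into dozens or hundreds of parity check runs.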
NOTES on the Legacy UTT v2.2 for Unraid 5.x

For the legacy UTT v2.2, regardless of what type of tests you run, the output is saved to a file named TunablesReport.txt, which lives in the same directory where you installed the utility. No CSV is generated for this version.

While this utility tests changes to all three tunable parameters, these changes are not permanent. If you wish to make the settings permanent, you have to choose your preferred values from the report, and manually enter them on the Settings > Disk Settings menu page in unRAID. Additionally, after the tests are completed, the utility sets the tunable values back to unRAID stock values (for safety, in case you forget about setting them). A reboot will return you to your previously selected values, as will hitting Apply on the Settings > Disk Settings menu page.

In case you're wondering, the formula for assigning the three values is of my own design. It assigns md_num_stripes as approximately 11% bigger than md_write_limit + md_sync_window, rounded to the nearest testing interval. md_write_limit is also set to the same value as md_sync_window for all values beyond 768 bytes. Based upon your test parameters (primarily the Interval setting), the md_num_stripes value will calculate differently. As far as I am aware my logic works okay, but this may have to be revisited in the future if new understandings are gained on how these three values correlate. There are no published hard and fast rules for how to set the three values together.

OBLIGATORY WARNINGS

Yup, here's my CYA prose, but since it is for your benefit, I suggest you read it. Outside of writing the results report file (most likely to your flash drive), this utility does not do any writing to the server. The Parity Checks are all performed in a read-only, non-correcting fashion. But that doesn't mean something can't go horribly wrong. For one, simply using this utility may stress your server to the breaking point. Weak hardware may meet an early demise.
All array drives will be spinning simultaneously (smaller drives won't spin down like a normal Parity Check permits) and heat will build up in your system. Ensure you have good cooling. Running these tests, especially Fully Automatic, may be harder on your system than a full Parity Check. You have to decide for yourself which tests are appropriate for your server and your comfort level. If you are unsure, the default values are a pretty safe way to go. And if you decide after starting a test that you want to abort it, just hit CTRL-C on your keyboard. If you do this, the Parity Check will most likely still be running, but you can Cancel it through the GUI. (Note: the new UTT v4 has built-in functionality to stop any running Parity Checks and to restore original values if you perform a CTRL-C and abort the test. Yay!!!)

Another issue that can crop up is out-of-memory errors. The three Unraid Tunable values are directly related to memory allocation in the md subsystem. Some users have reported kernel OOPS and Out Of Memory conditions when adjusting the Unraid Tunables, though it seems these users are often running many add-ons and plug-ins that compete for memory. This utility is capable of pushing memory utilization extremely high, especially in Fully Automatic mode, which scans a very large range of assignable values beyond what you may rationally consider assigning. Typically, running out of memory is not a fatal event as long as you are not writing to your array. If you are writing to your array when a memory error occurs, data loss may occur! The best advice is to not use your server at all during the test, and to disable 3rd party scripts, plug-ins, add-ons and yes, even GUI/menu replacements - something made easier with unRAID's new Safe Boot feature. On Unraid 6.x, it is also important to stop any VMs and Dockers.

One last caution: If you have less than 4GB of RAM, this utility may not be for you.
That goes doubly if you are running a barebones, lightweight 512MB server, which should probably stay at the default Tunable values. This utility was designed and tested on a server with 4GB, and ran there without any issues, but you may run out of memory faster and easier if you have less memory to start with.

NEW UTT V4.1 INSTALLATION

Installation is simple.

1. Download the file unraid6x-tunables-tester.sh.v4_1.txt (current version at the bottom of this post)
2. Rename the file to remove the .v4_1.txt extension - the name should be unraid6x-tunables-tester.sh
3. Create a new folder for the script, for example \\<servername>\flash\utt (or /boot/utt from the Unraid console)
4. Copy the file to the folder you created
5. Check to see if the file is executable by running ls -l in the install directory:
   -rwxrwxrwx 1 root root 21599 2013-08-22 12:54 unraid6x-tunables-tester.sh*
6. If you don't see -rwxrwxrwx (for Read Write Execute), use the command chmod 777 unraid6x-tunables-tester.sh to make it executable

LEGACY UTT V2.2 INSTALLATION

Installation is simple.

1. Download the file unraid-tunables-tester.sh.v2_2.txt (current version at the bottom of this post)
2. Rename the file to remove the .v2_2.txt extension - the name should be unraid-tunables-tester.sh
3. Copy the file onto your flash drive (I put it in the root of the flash for convenience)
4. Check to see if the file is executable by running ls -l in the install directory:
   -rwxrwxrwx 1 root root 21599 2013-08-22 12:54 unraid-tunables-tester.sh*
5. If you don't see -rwxrwxrwx (for Read Write Execute), use the command chmod 777 unraid-tunables-tester.sh to make it executable

RUNNING THE UTILITY

The utility is run from the server's console, and is not accessible from the unRAID GUI. I like to use PuTTY or TELNET, plus SCREEN, to manage my console connections, but use whatever you like best. You should always run this utility interactively. It is not designed to be something you put in your go file or in a cron job.
To run, simply cd to the folder where you placed the file (i.e. cd /boot/utt), then run the program (i.e. type unraid6x-tunables-tester.sh, or ./unraid6x-tunables-tester.sh for those who prefer that convention). For the new UTT v4.1, remember that the name has 6x in it, unraid6x-tunables-tester.sh, while the legacy UTT v2.2 is just unraid-tunables-tester.sh.

Edited 08/22/2013 - Added chmod instructions
Edited 08/23/2013 - Updated to version 1.1
Edited 08/26/2013 - Updated to version 2.0
Edited 08/28/2013 - Updated to version 2.1
Edited 08/28/2013 - Updated to version 2.2
Edited 08/05/2019 - Added new version 4.0 for Unraid 6.x
Edited 08/14/2019 - Updated to version 4.1 for Unraid 6.x

CONTINUED IN NEXT POST...

Download legacy UTT v2.2 for Unraid 5.x: unraid-tunables-tester.sh.v2_2.txt
Download the new UTT v4.1 for Unraid 6.x: unraid6x-tunables-tester.sh.v4_1.txt
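For convenience, the v4.1 installation steps can be collected into a small helper function. This is purely a sketch of the steps above; the function name and example paths are mine, not part of UTT:

```shell
# Sketch of the v4.1 install steps (function name and paths are examples).
install_utt() {
  src=$1        # e.g. /boot/unraid6x-tunables-tester.sh.v4_1.txt
  dest_dir=$2   # e.g. /boot/utt
  mkdir -p "$dest_dir"
  # copy with the final name (drops the .v4_1.txt suffix)
  cp "$src" "$dest_dir/unraid6x-tunables-tester.sh"
  # make it executable (rwxrwxrwx, as recommended above)
  chmod 777 "$dest_dir/unraid6x-tunables-tester.sh"
}

# Usage: install_utt /boot/unraid6x-tunables-tester.sh.v4_1.txt /boot/utt
```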
  2. 1 point
    Thanks @jbartlett, that's exactly what I needed. I wanted to make sure that Cache2 was IDX 31. I'll post Beta 1 of UTT v4.1 here shortly for testing.
  3. 1 point
    Much better. Looks like the accuracy is +/- 0.2 MB/s. The new logic in UTT v4.1 would have used md_sync_window 6144 (from TEST PASS 1_HIGH) for Pass 2, and tested from 3072 - 9216. All things considered, I think the v4.1 results would be identical to these results for you, as your server has a really flat curve that starts extremely low, and the new logic won't really affect those results.
  4. 1 point
    I just went through something similar - failure of an unRAID array due to old/defective SATA backplanes on my Norco RPC-4220 storage enclosure. I ended up with one drive that was repaired by the Check Filesystems procedure, but the other didn't recover properly. I had made a disk image using dd/ddrescue before trying Check Filesystems in maintenance mode. Glad I did, as I was able to use data recovery software (UFS Explorer Standard) on the unmountable drive (image), and it let me recover everything I needed/wanted. I looked at assorted recovery platforms but chose UFS Explorer Standard as it ran natively on my Ubuntu main system. There are a few more details in this thread:
  5. 1 point
    Thanks for all the data @StevenD, you've been very helpful today.
  6. 1 point
    There isn't. RAID1 only allows you to lose up to half of your disks. Losing 2 in a 3-disk RAID 1 = more than half = lost data.
  7. 1 point
    I got it working now! I needed to uninstall the Nvidia version of Unraid (back to stock) and that did the trick! Thank you for helping.
  8. 1 point
    I am having the same problem. After the container got updated during the night, autodl stopped working. Then I have to go to the IRC server settings and press OK, and it starts to work again.
  9. 1 point
    Can answer this one, should be plug 'n play.
  10. 1 point
    I get the same thing with my custom image. I tried storing it on the flash drive but that didn't help.
  11. 1 point
    Not discounting the value of the suggestion, which I personally have no need for but certainly can see a valid use case, but I'm wondering about your workflow. Normally vdisk files are sparse, so no matter how large or small you allocate, they only take up as much space as actually used by the files inside. So, why not just set up the base images with the size you need to begin with? It's not going to change the amount of space they occupy.
  12. 1 point
    I can confirm this bug - but with a different conclusion.

    1. cp from cache to disk2 (using console) reaches about 200MB/s; read from disk3 (via SMB) drops to 5MB/s. Once the disk2 write is done, read from disk3 immediately goes back up to 197MB/s.
    2. cp from cache to an unassigned device (using console) reaches 500MB/s; read from disk3 (via SMB) stays high, around 172MB/s.
    3. To remove SMB as a variable, I repeated the test using the console only (2 simultaneous connections), with similar results.
    4. To remove the console as a variable, I repeated the test using SMB only, and I can see write speed about 2x-3x read speed, but the frequent fluctuation makes it hard to judge. However, it's clear the read speed is in the double digits (i.e. faster than case 1 above).
    5. To remove write as a variable, I tested read (via SMB) from 3 disks, 2 disks and 1 disk and got 96-95-97, 141-143 and 210.
    6. To remove read as a variable, I tested write (via SMB) to 3 disks, 2 disks and 1 disk and got similarly even splits.

    No parity. All mitigations disabled via Squid's plugin.

    So it sounds to me like it's not necessarily an issue with concurrent performance, but rather that there's a speed limit on array IO with incorrect prioritisation of write vs read. For read/write to a single disk, throughput is limited by the maximum speed of the device, usually an HDD, which is usually lower than this overall speed limit. When reading/writing to multiple disks, the total speed of the devices exceeds the speed limit, making the overall limit apparent. If only reading or only writing, the limit is divided evenly across the disks. If reading + writing, there appears to be significantly higher priority (and/or resources) given to writes, crippling read speed.
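    For anyone who wants to reproduce the console-only variant of this test, here is a minimal sketch. The function name is mine, the /mnt paths in the usage line are examples for an Unraid system, and it relies on GNU dd printing a throughput summary on its final status line:

```shell
# Hypothetical sketch of the console-only concurrent read/write test.
# Arguments: a source file to write from, a destination path on another
# disk, and a file on a third disk to read while the write runs.
concurrent_rw() {
  dd if="$1" of="$2" bs=1M 2>/dev/null &           # background writer
  writer=$!
  dd if="$3" of=/dev/null bs=1M 2>&1 | tail -n 1   # reader throughput line
  wait "$writer"
}

# e.g. concurrent_rw /mnt/cache/big.bin /mnt/disk2/big.bin /mnt/disk3/other.bin
```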
  13. 1 point
    The current Plex server doesn't use NVDEC for the decoding part. Even when you enable hardware transcoding in Plex settings, it only uses NVENC (encoding) and still decodes using the CPU. The NVDEC script is a wrapper that tricks the Plex Transcoder by passing through the decoding command in its parameters, so that the ffmpeg underneath will use NVDEC too. It's not perfect in its current state, as the ffmpeg in Plex is very old; the newest ffmpeg supports NVDEC a lot better. Hopefully, Plex will update it soon. When you use the NVDEC script, it lowers your CPU usage because, if it can, it will also decode with the Nvidia card.
  14. 1 point
    If, as you try to access an unsecured unRAID server, you see this panel, enter a backslash (\) for the user ID and click OK - you're in.