unRAID Server Release 6.1.3 Available


limetech


A couple more data points (see server specs below):

 

  My Media Server did the parity check (non-correcting) in 7 hours, 44 minutes, 57 seconds. Average speed: 107.6 MB/sec.

 

  My Test Bed Server did the Parity Check in 7 hours, 50 minutes, 39 seconds. Average speed: 106.3 MB/sec.

 

As I recall, these times are close to what they have always been, give or take half an hour.

 

I decided to do a bit of testing on my Test Bed server.  My reasoning is that it is a system with a low-powered CPU that does not have any real issues with parity check speed.

 

Many of you are aware that a shell program was written by Pauven to optimize the md_* tunables.  This program can be found in the following thread:

 

        http://lime-technology.com/forum/index.php?topic=29009.0

 

However, it does not work with version 6.X because the location of the mdcmd command was changed in that release.  Squid provided a solution to this issue in the following post:

 

        http://lime-technology.com/forum/index.php?topic=29009.msg402678#msg402678
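
The change itself is just a path substitution.  Something like the following one-liner captures the idea; this is a sketch only, since the old /root/mdcmd path, the new /usr/local/sbin/mdcmd path, and the script filename shown here are from memory rather than taken from Squid's post, so check that post for the exact edits:

    # Point the tester at the v6 location of mdcmd (sketch; verify the paths
    # against Squid's post before running)
    sed -i 's|/root/mdcmd|/usr/local/sbin/mdcmd|g' unraid-tunables-tester.sh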

 

After making those search-and-replace edits (I believe there were 23 of them), I ran the script on my Test Bed server.  This is the output file from that session:

 

 

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  98.8 MB/s 
   2  |    1536     |     768     |     640     |  98.8 MB/s 
   3  |    1664     |     768     |     768     | 101.3 MB/s 
   4  |    1920     |     896     |     896     | 101.6 MB/s 
   5  |    2176     |    1024     |    1024     | 103.6 MB/s 
   6  |    2560     |    1152     |    1152     | 103.9 MB/s 
   7  |    2816     |    1280     |    1280     | 103.2 MB/s 
   8  |    3072     |    1408     |    1408     | 105.2 MB/s 
   9  |    3328     |    1536     |    1536     | 103.4 MB/s 
  10  |    3584     |    1664     |    1664     | 102.9 MB/s 
  11  |    3968     |    1792     |    1792     | 104.4 MB/s 
  12  |    4224     |    1920     |    1920     | 104.4 MB/s 
  13  |    4480     |    2048     |    2048     | 104.7 MB/s 
  14  |    4736     |    2176     |    2176     | 104.4 MB/s 
  15  |    5120     |    2304     |    2304     | 106.0 MB/s 
  16  |    5376     |    2432     |    2432     | 105.9 MB/s 
  17  |    5632     |    2560     |    2560     | 105.5 MB/s 
  18  |    5888     |    2688     |    2688     | 104.6 MB/s 
  19  |    6144     |    2816     |    2816     | 106.9 MB/s 
  20  |    6528     |    2944     |    2944     | 106.5 MB/s 
--- Targeting Fastest Result of md_sync_window 2816 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    5984     |    2696     |    2696     | 108.6 MB/s 
  22  |    6008     |    2704     |    2704     | 109.9 MB/s 
  23  |    6024     |    2712     |    2712     | 110.3 MB/s 
  24  |    6040     |    2720     |    2720     | 110.2 MB/s 
  25  |    6056     |    2728     |    2728     | 110.7 MB/s 
  26  |    6080     |    2736     |    2736     | 110.0 MB/s 
  27  |    6096     |    2744     |    2744     | 110.9 MB/s 
  28  |    6112     |    2752     |    2752     | 110.4 MB/s 
  29  |    6128     |    2760     |    2760     | 108.6 MB/s 
  30  |    6144     |    2768     |    2768     | 109.4 MB/s 
  31  |    6168     |    2776     |    2776     | 108.6 MB/s 
  32  |    6184     |    2784     |    2784     | 108.4 MB/s 
  33  |    6200     |    2792     |    2792     | 110.1 MB/s 
  34  |    6216     |    2800     |    2800     | 108.8 MB/s 
  35  |    6240     |    2808     |    2808     | 108.5 MB/s 
  36  |    6256     |    2816     |    2816     | 108.0 MB/s 

Completed: 2 Hrs 7 Min 41 Sec.

Best Bang for the Buck: Test 1 with a speed of 98.8 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 16MB of RAM on your hardware.


Unthrottled values for your server came from Test 27 with a speed of 110.9 MB/s

     Tunable (md_num_stripes): 6096
     Tunable (md_write_limit): 2744
     Tunable (md_sync_window): 2744

These settings will consume 71MB of RAM on your hardware.
This is 56MB more than your current utilization of 15MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
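
As an aside, under the hood each test pass simply applies a candidate set of values on the fly and then times a short parity check.  A sketch of the idea follows; the "mdcmd set" syntax shown here is assumed from the v6-adjusted script rather than verified against it:

    # Apply one candidate set of tunables on the fly (sketch; syntax assumed
    # from the v6-adjusted tester script)
    mdcmd set md_num_stripes 6096
    mdcmd set md_write_limit 2744
    mdcmd set md_sync_window 2744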

 

As you look over the results from the first twenty runs, you will observe that they form an almost flat curve once the parameters start to climb.  So I decided to see what the real-world results would be if I were to use the Unthrottled parameter values from the test.

 

You can change md_write_limit and md_sync_window with the GUI.  To change md_num_stripes, you have to edit /flash/config/disk.cfg and change that parameter.  (You can edit this file with almost any plain text editor.  Note that it uses Windows CR-LF line endings, but this is not an issue as you should only be changing the parameter value and not adding any new lines.)  After making the changes, I rebooted the server just to be sure that the new parameters took effect.
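
For reference, after the edit the corresponding entries in /flash/config/disk.cfg looked something like this (an illustrative excerpt; the key="value" layout is how I recall the file being laid out, so check your own copy, and leave the rest of the file untouched):

    md_num_stripes="6096"
    md_write_limit="2744"
    md_sync_window="2744"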

 

I then ran a full parity check (without error correcting); the results are below:

 

 

                                  Original Default Values    Unthrottled Values

    Tunable (md_num_stripes):              1280                     6096
    Tunable (md_write_limit):               768                     2744
    Tunable (md_sync_window):               384                     2744

    Parity check time:         7 hr, 50 min, 39 sec       7 hr, 47 min, 7 sec
    Average speed:                  106.3 MB/sec               107.1 MB/sec

 

Interesting results!  But after I reflected on them for a bit, I realized that they were not that unexpected.  Remember that the speed curve is almost flat for my hardware, and the parity check time under version 6.X is approximately the same as I was experiencing with version 5.X.  Thus, changing these parameters will probably not affect the time required for a parity check in my situation.

 

It would be interesting to see what the results might be if some of the folks who have experienced significant slowdowns in parity check speed were to perform the same experiment.

 


I really would like to see all the posts related to the performance of parity check operations split out into a proper thread. These posts will all be lost once the next release hits, 6.1.4 or 6.2.0.

 

Good idea.  Next time I add any info on this, I'll start a new thread altogether.  There's already a thread dedicated to the SAS2LP issue, but I'd like to see a unified collection of posts on the specific issue I've seen: a reduction in speed due SOLELY to the unRAID version, i.e. NO changes to the hardware, and no Dockers, plugins, or VMs running that might impact the test.  It's fairly clear that v6.1.3 is using FAR more CPU resources than v5 did to do a parity check ... it'd be very interesting to know WHY.

 


I really would like to see all the posts related to the performance of parity check operations split out into a proper thread. These posts will all be lost once the next release hits, 6.1.4 or 6.2.0.

 

Good idea.  Next time I add any info on this, I'll start a new thread altogether.  There's already a thread dedicated to the SAS2LP issue, but I'd like to see a unified collection of posts on the specific issue I've seen: a reduction in speed due SOLELY to the unRAID version, i.e. NO changes to the hardware, and no Dockers, plugins, or VMs running that might impact the test.  It's fairly clear that v6.1.3 is using FAR more CPU resources than v5 did to do a parity check ... it'd be very interesting to know WHY.

 

The problem may revolve around hardware drivers for chipsets or SATA cards not being optimized for the current version of the Linux kernel.  And, perhaps, certain parameter tunings might help with the problem.  But we will never know until folks with issues start trying things with the current version rather than testing old versions to 'prove' that things are slower.  I seriously doubt that LimeTech is about to roll back the kernel version or drop the preemptive options that were added to allow VMs and Dockers to function smoothly...


The preemptive option was enabled to stop crashes (watchdog) during parity checks, parity syncs, and normal array operations. Nothing specifically for VMs or Dockers.

The preemptive options do help make multitasking smoother under heavy I/O.  But the initial impetus was to fix the rcu_sched warnings.

Improving the multitasking experience was certainly a good thing.  However ... if the ONLY ongoing task is a parity check (which thus shouldn't be losing cycles to pre-emptions), it's still perplexing why the CPU % is so much higher with v6 ... especially since v6 is 64-bit, which SHOULD be much more efficient than the older 32-bit code.

 


The problem may revolve around hardware drivers for chipsets or SATA cards not being optimized for the current version of the Linux kernel.  And, perhaps, certain parameter tunings might help with the problem.  But we will never know until folks with issues start trying things with the current version rather than testing old versions to 'prove' that things are slower.  I seriously doubt that LimeTech is about to roll back the kernel version or drop the preemptive options that were added to allow VMs and Dockers to function smoothly...

 

While this may be true, it would also be nice to see some direction from LT on this. People are randomly trying things to find the cause of their issue; having LT provide a framework for what people could do that would actually be useful to LT for diagnosis would benefit everyone.

 

There are obviously a number of people reporting issues (either SAS2LP or other). We need someone to point them in a common direction so that hopefully the findings can be applied to a new build (using current or newer kernel). That sort of direction can only come from LT.


The problem may revolve around hardware drivers for chipsets or SATA cards not being optimized for the current version of the Linux kernel.  And, perhaps, certain parameter tunings might help with the problem.  But we will never know until folks with issues start trying things with the current version rather than testing old versions to 'prove' that things are slower.  I seriously doubt that LimeTech is about to roll back the kernel version or drop the preemptive options that were added to allow VMs and Dockers to function smoothly...

 

While this may be true, it would also be nice to see some direction from LT on this. People are randomly trying things to find the cause of their issue; having LT provide a framework for what people could do that would actually be useful to LT for diagnosis would benefit everyone.

 

There are obviously a number of people reporting issues (either SAS2LP or other). We need someone to point them in a common direction so that hopefully the findings can be applied to a new build (using current or newer kernel). That sort of direction can only come from LT.

 

Agree.  There are, for example, folks buying new SATA controllers to replace SAS2LPs, even though the SAS2LPs are excellent controllers ... but they have this sudden issue with 6.1.3.  It would be nice to know if Limetech has had any contact with SuperMicro regarding this, to see if there may be a solution forthcoming.  And while my 31% increase in parity check time between 5.0.6 and 6.1.3 was really disappointing, it wasn't as bad as the user who had a 69% increase with a CPU that has 33% more "horsepower" than mine!

 

It would be really nice to know if there are a few parameters that could be adjusted to help with this  [e.g. is there a "hidden" parameter that sets the priority of the parity check task ??].

 


Just wanted to add that I'm seeing a serious slowdown with my server.  I can't say whether it was from this release or not (I've not noticed it before, though).

 

When the mover is running, it copies from my SSD cache drive to the array at only 10 MB/s max, and if I try doing anything else with the server while it's running, such as streaming a movie, it stutters and hangs.

 

I've also noticed copying to my cache drive seems to top out at 65-70MB/s whereas previously I've seen over 100MB/s.

 

I've tried disabling cache dirs and my VM, but that didn't help.

 

Can anyone advise how to revert to the previous version?  I see there is a "previous" folder on the flash drive; does that equate to the previous version?


The problem may revolve around hardware drivers for chipsets or SATA cards not being optimized for the current version of the Linux kernel.  And, perhaps, certain parameter tunings might help with the problem.  But we will never know until folks with issues start trying things with the current version rather than testing old versions to 'prove' that things are slower.  I seriously doubt that LimeTech is about to roll back the kernel version or drop the preemptive options that were added to allow VMs and Dockers to function smoothly...

 

While this may be true, it would also be nice to see some direction from LT on this. People are randomly trying things to find the cause of their issue; having LT provide a framework for what people could do that would actually be useful to LT for diagnosis would benefit everyone.

 

There are obviously a number of people reporting issues (either SAS2LP or other). We need someone to point them in a common direction so that hopefully the findings can be applied to a new build (using current or newer kernel). That sort of direction can only come from LT.

 

Agree.  There are, for example, folks buying new SATA controllers to replace SAS2LPs, even though the SAS2LPs are excellent controllers ... but they have this sudden issue with 6.1.3.  It would be nice to know if Limetech has had any contact with SuperMicro regarding this, to see if there may be a solution forthcoming.  And while my 31% increase in parity check time between 5.0.6 and 6.1.3 was really disappointing, it wasn't as bad as the user who had a 69% increase with a CPU that has 33% more "horsepower" than mine!

 

It would be really nice to know if there are a few parameters that could be adjusted to help with this  [e.g. is there a "hidden" parameter that sets the priority of the parity check task ??].

 

I am one of those who swapped the card out for a couple M1015s, but this was due to my SAS2LP card crashing, and causing disks to fall offline. I've actually taken a 10MB/sec hit on parity checks by doing so, but I trust my infrastructure is more stable, which is way more important to me.


I am one of those who swapped the card out for a couple M1015s, but this was due to my SAS2LP card crashing, and causing disks to fall offline. I've actually taken a 10MB/sec hit on parity checks by doing so, but I trust my infrastructure is more stable, which is way more important to me.

 

Same here. My unRAID was crashing during parity check with SAS2LP redballing the disk and causing data corruption because it was trying to fix the "errors".


... I've actually taken a 10MB/sec hit on parity checks by doing so, but I trust my infrastructure is more stable, which is way more important to me.

 

Definitely agree that stability is more important than the speed of a parity check.  If the parity checks have been improved in some way and that's why they require higher CPU resources, that's fine ... it'd just be nice to know that.  But I'd have certainly thought that the move to 64-bits would REDUCE the % of the CPU needed for these checks -- not increase it.

 


It all depends on how the resources are configured. In previous versions of Linux it was quite possible for a process to completely hog all the system resources and produce a non-responsive system. It remains to be seen whether the newer versions of unRAID can configure resources and control groups to improve measured system response. They may only be running with default configurations, which may not be as efficient as they could be.
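
As a generic Linux illustration of that kind of resource shaping (not something unRAID does out of the box, and note that the md parity check itself runs in a kernel thread, so this only shows the mechanism; the PID and device numbers are placeholders):

    # Lower a process's CPU and I/O scheduling priority (1234 is a placeholder PID)
    renice +10 -p 1234
    ionice -c3 -p 1234      # idle I/O class: it only gets disk time when the disk is otherwise idle

    # Or cap its read bandwidth on one disk with a cgroup v1 blkio throttle
    mkdir -p /sys/fs/cgroup/blkio/lowprio
    echo "8:0 10485760" > /sys/fs/cgroup/blkio/lowprio/blkio.throttle.read_bps_device   # 10 MB/s on /dev/sda (8:0)
    echo 1234 > /sys/fs/cgroup/blkio/lowprio/tasks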


I am one of those who swapped the card out for a couple M1015s, but this was due to my SAS2LP card crashing, and causing disks to fall offline. I've actually taken a 10MB/sec hit on parity checks by doing so, but I trust my infrastructure is more stable, which is way more important to me.

 

Same here. My unRAID was crashing during parity check with SAS2LP redballing the disk and causing data corruption because it was trying to fix the "errors".

 

Is this only on unRAID 6.1.3 (or 6.x?)  I'm running two SAS2LP controllers on 5.0.6 and it's been rock solid for a couple of years now.  Was gonna experiment with 6.x on my test server after the weather cooled down a bit.  Too hot to run a test in here right now.  My main system is outlined in my .sig.

 

PS:  In case it matters, I'm guessing I'm on the 4.0.0.1808 SAS2LP firmware?  Just saw 4.0.0.1812, will need to build a way to boot into DOS to upgrade the cards.  Dunno if that's a factor or not in any of the issues others are seeing.


Is this only on unRAID 6.1.3 (or 6.x?)  I'm running two SAS2LP controllers on 5.0.6 and it's been rock solid for a couple of years now.  Was gonna experiment with 6.x on my test server after the weather cooled down a bit.  Too hot to run a test in here right now.  My main system is outlined in my .sig.

It's a v6 problem, not specific to 6.1.3 or any other v6 release.


Is this only on unRAID 6.1.3 (or 6.x?)  I'm running two SAS2LP controllers on 5.0.6 and it's been rock solid for a couple of years now.  Was gonna experiment with 6.x on my test server after the weather cooled down a bit.  Too hot to run a test in here right now.  My main system is outlined in my .sig.

It's a v6 problem, not specific to 6.1.3 or any other v6 release.

 

Thanks, good to know.  My test rig has three Adaptec RAID 1430SA controllers, so it would not have exhibited the SAS2LP issues.  Guess I'll stick with 5.0.6 on the main server for the foreseeable future.  Will still try to experiment with 6.x on the test rig, though.  Be good to learn all the new stuff.  :-)


I ran one-hour parity check tests with 6.1.3 and 6.1.2.

Docker disabled, VM disabled, cache dirs disabled, no webGUI, no other usage.

But I realized that I forgot to uncheck write corrections for the 6.1.3 test; does that affect speed?

 

Results:

6.1.3: 326 GB (10.9%), average: 92.7 MB/s

6.1.2: 377 GB (12.6%), average: 107.2 MB/s

I will try to retest 6.1.3 with nocorrect, and other versions, when I find some time.
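
For reference, a non-correcting check can also be started from the command line.  This is a sketch that assumes the same mdcmd syntax the tunables tester script uses, so verify it against that script on your release:

    /usr/local/sbin/mdcmd check NOCORRECT   # read-only parity check; no corrections are written

A plain "mdcmd check" (or "mdcmd check CORRECT") should start a correcting check.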

 

Although 92 MB/s is a fair speed, there has to be some explanation for the roughly 15% slowdown. And it is odd that Frank1940 is not having this slowdown issue, since we have pretty much the same hardware. Maybe it is related to the SASLP, or maybe it is just me :)

[Attached screenshots: 612.jpg, 613.jpg]

If you installed dockerman files or the VM Manager plugin, they must be uninstalled.

 

Does that mean this: IP Address/Extensions/Dockers, or the Docker Manager plugin? I cannot remember if I installed VM Manager; where would I find it?

 

 

If you created a docker.img file with an earlier v6 beta, you may have to delete and recreate it.

 

Same questions


If you installed dockerman files or the VM Manager plugin, they must be uninstalled.

 

Does that mean this: IP Address/Extensions/Dockers, or the Docker Manager plugin? I cannot remember if I installed VM Manager; where would I find it?

 

 

If you created a docker.img file with an earlier v6 beta, you may have to delete and recreate it.

 

Same questions

Check on the Plugins tab.  If you have them installed there, then you should remove them, as they are now built in.  Having precursors to the built-in versions installed can cause problems.

