unRAID Server Release 6.1.4 Available


limetech


Updated the test server (lab) to 6.1.4 and saw a definite improvement.  This machine uses Adaptec RAID 1430SA controllers.  All tests were run with default settings and without the nr_requests fix.

 

6.1.3:

Last checked on Fri 06 Nov 2015 04:46:34 AM PST (today), finding 0 errors.

Duration: 8 hours, 21 minutes, 28 seconds. Average speed: 66.5 MB/sec

 

6.1.4:

Last checked on Tue 17 Nov 2015 09:14:44 PM PST (today), finding 0 errors.

Duration: 5 hours, 38 minutes, 46 seconds. Average speed: 98.4 MB/sec
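As a rough sanity check on those two runs (my own arithmetic, not from the report above), duration times average speed should give the same parity size for both passes if the figures are consistent:

    # Cross-check (assumption: each reported duration covers one full pass over parity).
    runs = {
        "6.1.3": (8 * 3600 + 21 * 60 + 28, 66.5),   # seconds, MB/s
        "6.1.4": (5 * 3600 + 38 * 60 + 46, 98.4),
    }
    for version, (seconds, mb_per_s) in runs.items():
        print(f"{version}: ~{seconds * mb_per_s / 1_000_000:.2f} TB checked")

Both work out to roughly 2.0 TB, so the two runs cover the same full pass and the speed-up is genuine.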

 

Not sure when I'll get to test on my main server (Cortex), but I'll post those numbers when I do.

Link to comment

Parity is definitely faster, and I would like to try to improve it some more. On V5 I used this tool a long time ago. Is it still the tool to use to find better tunable values?

 

http://lime-technology.com/forum/index.php?topic=29009.0

 

I found that running tunables-tester makes much more of a difference on Marvell controllers, especially the SAS2LP, where it can more than double the speed. Naturally there's no harm in trying, but at least for me there's little to no difference using it on, for example, the Dell H310.

 

For the SAS2LP try it with the default md_sync_thresh value of 192 and a higher value, e.g. 1024, as it should improve speeds with md_sync_window values above 1024.
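If you want to experiment with md_sync_thresh outside the GUI first, a sketch like the one below is one way to do it. It assumes the mdcmd utility on unRAID accepts "set md_sync_thresh <value>" the same way it does for the other md_* tunables, so verify that on your release before relying on it:

    # Hypothetical helper: try candidate md_sync_thresh values before a parity check.
    # Assumes /usr/local/sbin/mdcmd accepts "set md_sync_thresh <value>" like the
    # other md_* tunables -- check this on your unRAID version first.
    import subprocess

    def set_sync_thresh(value: int) -> None:
        subprocess.run(
            ["/usr/local/sbin/mdcmd", "set", "md_sync_thresh", str(value)],
            check=True,
        )

    for candidate in (192, 1024):
        set_sync_thresh(candidate)
        input(f"md_sync_thresh={candidate}: run a timed parity check, then press Enter")

Settings applied this way don't survive a reboot, so note the value you settle on.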

 

I let it run and it seems I got very consistent results across all tests. The drive is a WD 3TB Red on an SAS2LP.

 

 Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     | 110.8 MB/s 
   2  |    1536     |     768     |     640     | 110.6 MB/s 
   3  |    1664     |     768     |     768     | 109.9 MB/s 
   4  |    1920     |     896     |     896     | 110.3 MB/s 
   5  |    2176     |    1024     |    1024     | 110.5 MB/s 
   6  |    2560     |    1152     |    1152     | 110.6 MB/s 
   7  |    2816     |    1280     |    1280     | 110.8 MB/s 
   8  |    3072     |    1408     |    1408     | 110.8 MB/s 
   9  |    3328     |    1536     |    1536     | 110.9 MB/s 
  10  |    3584     |    1664     |    1664     | 110.7 MB/s 
  11  |    3968     |    1792     |    1792     | 110.7 MB/s 
  12  |    4224     |    1920     |    1920     | 110.3 MB/s 
  13  |    4480     |    2048     |    2048     | 110.3 MB/s 
  14  |    4736     |    2176     |    2176     | 110.5 MB/s 
  15  |    5120     |    2304     |    2304     | 110.6 MB/s 
  16  |    5376     |    2432     |    2432     | 110.7 MB/s 
  17  |    5632     |    2560     |    2560     | 110.8 MB/s 
  18  |    5888     |    2688     |    2688     | 110.5 MB/s 
  19  |    6144     |    2816     |    2816     | 110.5 MB/s 
  20  |    6528     |    2944     |    2944     | 110.6 MB/s 
--- Targeting Fastest Result of md_sync_window 1536 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    3144     |    1416     |    1416     | 110.7 MB/s 
  22  |    3160     |    1424     |    1424     | 110.4 MB/s 
  23  |    3176     |    1432     |    1432     | 110.5 MB/s 
  24  |    3200     |    1440     |    1440     | 110.7 MB/s 
  25  |    3216     |    1448     |    1448     | 110.8 MB/s 
  26  |    3232     |    1456     |    1456     | 110.7 MB/s 
  27  |    3248     |    1464     |    1464     | 110.7 MB/s 
  28  |    3264     |    1472     |    1472     | 110.8 MB/s 
  29  |    3288     |    1480     |    1480     | 110.4 MB/s 
  30  |    3304     |    1488     |    1488     | 110.3 MB/s 
  31  |    3320     |    1496     |    1496     | 110.7 MB/s 
  32  |    3336     |    1504     |    1504     | 110.6 MB/s 
  33  |    3360     |    1512     |    1512     | 110.8 MB/s 
  34  |    3376     |    1520     |    1520     | 110.6 MB/s 
  35  |    3392     |    1528     |    1528     | 110.7 MB/s 
  36  |    3408     |    1536     |    1536     | 110.8 MB/s 

Completed: 2 Hrs 7 Min 6 Sec.

Best Bang for the Buck: Test 1 with a speed of 110.8 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512
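For anyone who wants to post-process a run like this, here is a small sketch (my own, not part of tunables-tester) that reads a pasted results table and reports the outright fastest row plus the cheapest row within 1% of that speed, which is roughly what the script's "Best Bang for the Buck" line reflects:

    # Sketch: parse a tunables-tester table saved to results.txt (not part of the tool).
    import re

    rows = []
    with open("results.txt") as fh:
        for line in fh:
            m = re.match(r"\s*\d+\s*\|\s*(\d+)\s*\|\s*(\d+)\s*\|\s*(\d+)\s*\|\s*([\d.]+) MB/s", line)
            if m:
                stripes, write_limit, sync_window, speed = m.groups()
                rows.append((int(stripes), int(write_limit), int(sync_window), float(speed)))

    best = max(rows, key=lambda r: r[3])
    # "Bang for the buck": smallest md_num_stripes (least RAM) within 1% of the peak speed.
    cheap = min((r for r in rows if r[3] >= best[3] * 0.99), key=lambda r: r[0])

    print(f"Fastest: md_sync_window={best[2]} at {best[3]} MB/s")
    print(f"Bang for the buck: md_num_stripes={cheap[0]}, md_sync_window={cheap[2]} at {cheap[3]} MB/s")

On the table above this picks test 9 as the fastest and test 1 as the bang-for-the-buck row, matching what the script reported.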

 

I just had a look at my flash drive backups and came across this test I did back in Sept 2013. It would seem that V6 is back up to speeds similar to V5.

 

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  90.9 MB/s 
   2  |    1536     |     768     |     640     | 102.3 MB/s 
   3  |    1664     |     768     |     768     | 110.7 MB/s 
   4  |    1920     |     896     |     896     | 112.0 MB/s 
   5  |    2176     |    1024     |    1024     | 112.0 MB/s 
   6  |    2560     |    1152     |    1152     | 112.1 MB/s 
   7  |    2816     |    1280     |    1280     | 112.3 MB/s 
   8  |    3072     |    1408     |    1408     | 112.3 MB/s 
   9  |    3328     |    1536     |    1536     | 112.1 MB/s 
  10  |    3584     |    1664     |    1664     | 112.3 MB/s 
  11  |    3968     |    1792     |    1792     | 112.2 MB/s 
  12  |    4224     |    1920     |    1920     | 112.2 MB/s 
  13  |    4480     |    2048     |    2048     | 112.2 MB/s 
  14  |    4736     |    2176     |    2176     | 112.2 MB/s 
  15  |    5120     |    2304     |    2304     | 112.2 MB/s 
  16  |    5376     |    2432     |    2432     | 112.1 MB/s 
  17  |    5632     |    2560     |    2560     | 112.3 MB/s 
  18  |    5888     |    2688     |    2688     | 112.1 MB/s 
  19  |    6144     |    2816     |    2816     | 112.3 MB/s 
  20  |    6528     |    2944     |    2944     | 112.5 MB/s 
--- Targeting Fastest Result of md_sync_window 2944 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    6272     |    2824     |    2824     | 112.3 MB/s 
  22  |    6288     |    2832     |    2832     | 112.3 MB/s 
  23  |    6304     |    2840     |    2840     | 112.5 MB/s 
  24  |    6328     |    2848     |    2848     | 112.2 MB/s 
  25  |    6344     |    2856     |    2856     | 112.2 MB/s 
  26  |    6360     |    2864     |    2864     | 112.3 MB/s 
  27  |    6376     |    2872     |    2872     | 112.3 MB/s 
  28  |    6400     |    2880     |    2880     | 112.3 MB/s 
  29  |    6416     |    2888     |    2888     | 112.4 MB/s 
  30  |    6432     |    2896     |    2896     | 112.2 MB/s 
  31  |    6448     |    2904     |    2904     | 112.3 MB/s 
  32  |    6464     |    2912     |    2912     | 112.2 MB/s 
  33  |    6488     |    2920     |    2920     | 112.4 MB/s 
  34  |    6504     |    2928     |    2928     | 112.3 MB/s 
  35  |    6520     |    2936     |    2936     | 112.2 MB/s 
  36  |    6536     |    2944     |    2944     | 112.3 MB/s 

Completed: 2 Hrs 7 Min 25 Sec.

Best Bang for the Buck: Test 4 with a speed of 112.0 MB/s

     Tunable (md_num_stripes): 1920
     Tunable (md_write_limit): 896
     Tunable (md_sync_window): 896

Link to comment

I let it run and it seems I got very consistent results across all tests. The drive is a WD 3TB Red on an SAS2LP.

 

Are all the drives WD Reds? A parity check is only as fast as your slowest disk. If they are all WD Reds I would expect a better starting speed of ~145 MB/s.

 

Interesting, I didn't know that.

 

13 are Reds and 2 are Greens.

Link to comment

I let it run and it seems I got very consistent results across all tests. The drive is a WD 3TB Red on an SAS2LP.

 

Are all the drives WD Reds? A parity check is only as fast as your slowest disk. If they are all WD Reds I would expect a better starting speed of ~145 MB/s.

 

Interesting, I didn't know that.

 

13 are Reds and 2 are Greens.

 

You probably have a WD Green with lower-density platters. Run diskspeed from this thread; it will display the speed of all your disks. If there aren't other bottlenecks, parity check speed should closely follow the slowest disk.

 

Note that the most recent WD Green and Red drives have the same performance, but as far as I know all Red drives are at minimum 1 TB/platter, while older Green drives can be 667 or 500 GB/platter; those disks will have a starting speed consistent with your results.
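As a back-of-the-envelope illustration of both points (my own rule of thumb, not from the posts above): at the same diameter and spindle speed, sequential throughput scales roughly with linear bit density, i.e. about the square root of the per-platter capacity, and the check as a whole then runs at the pace of the slowest member:

    # Rough model (assumptions): outer-track speed ~145 MB/s for a 1000 GB/platter Red,
    # scaling with sqrt(platter capacity); the array checks at the speed of the slowest disk.
    from math import sqrt

    def start_speed(platter_gb, ref_speed=145.0, ref_platter_gb=1000.0):
        return ref_speed * sqrt(platter_gb / ref_platter_gb)

    reds = [start_speed(1000)] * 13                 # ~145 MB/s each
    greens = [start_speed(667), start_speed(500)]   # ~118 and ~103 MB/s
    print(f"Estimated parity check starting speed: {min(reds + greens):.0f} MB/s")

That lands around 103-118 MB/s depending on which older Green is in the array, in the same ballpark as the ~110 MB/s plateau in the tunables results above.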

 

Link to comment

Just updated from 6.0.1 to 6.1.4 and now my cache disk is showing as unmountable and unRAID would like to format it :(

 

[edit]  Rebooted the system and it came up with the cache mounted but no data on the drive; rebooted a second time and it looks like there is some file system corruption.  I can see total and used space accurately, but I cannot browse the shares on the cache.

 

[edit 2] Rebooted again and now I can see all directories on the cache drive but not all files within the directories.  Some errors in the syslog.

CACHE.png

CACHE1.png

tower-syslog-20151119-0618.zip

log-latest.zip

Link to comment

Just updated from 6.0.1 to 6.1.4 and now my cache disk is showing as unmountable and unRAID would like to format it :(

 

[edit]  Rebooted the system and it came up with the cache mounted but no data on the drive; rebooted a second time and it looks like there is some file system corruption.  I can see total and used space accurately, but I cannot browse the shares on the cache.

 

[edit 2] Rebooted again and now I can see all directories on the cache drive but not all files within the directories.  Some errors in the syslog.

 

Please try booting into safe mode and see what happens.  I also see that you had a failure on ata7 (device sdh), so it looks like you had a hardware issue.  I'm guessing you also had your Docker loopback image on that device, which is why I'm seeing all these btrfs /dev/loop0 errors; that loop device is the Docker image.

 

This looks like a hardware and/or cabling issue.

Link to comment

Worth mentioning that I have already powered down and checked cabling.  The cache drive is a brand new mSATA drive.  I have noticed ata7 getting reset during power-on in the past, but it has never caused any actual problems.  It just strikes me as a bit of a coincidence that everything can be running 100% smoothly and then, after a software update, suddenly I have a corrupt cache drive.

Link to comment

Worth mentioning that I have already powered down and checked cabling.  The cache drive is a brand new mSATA drive.  I have noticed ata7 getting reset during power-on in the past, but it has never caused any actual problems.  It just strikes me as a bit of a coincidence that everything can be running 100% smoothly and then, after a software update, suddenly I have a corrupt cache drive.

 

Well, it's not just a software update.  You also rebooted the host.  You also have S3 sleep installed (we don't support sleep).  Your log shows ATA7 disabled.

 

Roll back to 6.1.3 and see what happens.

Link to comment

- Replaced the SATA cable to the cache drive; the error in the log regarding ata7 has gone away (though I've had this in the past to no effect)

- After booting, the cache drive appeared OK at first, but after starting Docker it lost connectivity through Windows and started generating lots of errors again

- Removed all plugins and rebooted

- Still the same

- Got frustrated :)

- Reverted to 6.0.1; all running smoothly

Link to comment

I let it run and it seems I got very consistent results across all tests. The drive is a WD 3TB Red on an SAS2LP.

 

Are all the drives WD Reds? A parity check is only as fast as your slowest disk. If they are all WD Reds I would expect a better starting speed of ~145 MB/s.

 

Interesting, I didn't know that.

 

13 are Reds and 2 are Greens.

 

You probably have a WD Green with lower-density platters. Run diskspeed from this thread; it will display the speed of all your disks. If there aren't other bottlenecks, parity check speed should closely follow the slowest disk.

 

Note that the most recent WD Green and Red drives have the same performance, but as far as I know all Red drives are at minimum 1 TB/platter, while older Green drives can be 667 or 500 GB/platter; those disks will have a starting speed consistent with your results.

 

The slowest drives in my array are 3 TB WD Reds, which are probably held back a little by their spindle speed (5900 rpm?), and a 5400 rpm 4 TB Hitachi.

 

I plan to replace some of the 3 TB Reds with 5 or 6 TB ones. What do you recommend, 5 or 6 TB? Are the Toshiba X300s any good? What platter size do they have? And what about warranty and replacement: should I deal with the dealer or directly with Toshiba? I am in the Netherlands and only WD offers advance replacement, but it takes close to 2 weeks before I get a refurb. I already have a 6 TB parity.

 

Sorry for hijacking this thread, please continue here:

 

http://lime-technology.com/forum/index.php?topic=44109.msg420967#msg420967

Link to comment

I'm afraid this is a little off topic; I wish there were a thread where we could discuss everything related to parity check speeds.

 

The slowest drives in my array are 3 TB WD Reds, which are probably held back a little by their spindle speed (5900 rpm?), and a 5400 rpm 4 TB Hitachi.

 

I plan to replace some of the 3 TB Reds with 5 or 6 TB ones. What do you recommend, 5 or 6 TB? Are the Toshiba X300s any good? What platter size do they have? And what about warranty and replacement: should I deal with the dealer or directly with Toshiba? I am in the Netherlands and only WD offers advance replacement, but it takes close to 2 weeks before I get a refurb. I already have a 6 TB parity.

 

AFAIK the 3, 4, and 5 TB Reds are all 1 TB/platter and should have similar performance; the 6 TB is faster because it's 1.2 TB/platter. Note, however, that for best parity check performance you should try to have as few different disk sizes as possible.

 

I don't know the platter size of the new Toshiba X300s, but I suspect it's the same as the WD Reds'; performance will be faster since they are 7200 rpm. I can't help with how the warranty process works in the Netherlands.

 

 

 

Link to comment

I don't recall exact numbers but under 6.1.3, my previous parity check speed was under 20MB/sec and took way too many days.  Was just about to run a monthly parity check (if numbers came in similarly, was going to bring to LT's attention in the forum) when 6.1.4 was released.  Installed, ran parity check, and happy to report speeds have gone up again:

 

Estimated speed: 83.1 MB/sec

Elapsed time:         19 hours, 50 minutes

 

Thanks, team!  :)

 


Link to comment

I don't recall exact numbers but under 6.1.3, my previous parity check speed was under 20MB/sec and took way too many days.  Was just about to run a monthly parity check (if numbers came in similarly, was going to bring to LT's attention in the forum) when 6.1.4 was released.  Installed, ran parity check, and happy to report speeds have gone up again:

 

Estimated speed: 83.1 MB/sec

Elapsed time:         19 hours, 50 minutes

 

Thanks, team!  :)

 


 

Which 6 TB disks do you use? WD reds?

Link to comment

I don't recall exact numbers but under 6.1.3, my previous parity check speed was under 20MB/sec and took way too many days.  Was just about to run a monthly parity check (if numbers came in similarly, was going to bring to LT's attention in the forum) when 6.1.4 was released.  Installed, ran parity check, and happy to report speeds have gone up again:

 

Estimated speed: 83.1 MB/sec

Elapsed time:         19 hours, 50 minutes

 

Thanks, team!  :)

 


 

Which 6 TB disks do you use? WD reds?

 

WD Green

Link to comment

I had 5 drives show up after the upgrade with SMART timeout errors.  After running for an hour or so, I got 2 notifications that 2 drives had returned to normal?

 

I did not have this on the previous version.

 

For Command Timeout errors, please see the discussion here.  For further SMART monitoring discussion, please see this.
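If you want to keep an eye on the counter behind those alerts yourself, a quick sketch along these lines works; it assumes smartctl (smartmontools) is available, as it is on unRAID 6, and that the notification is driven by the Command_Timeout SMART attribute (ID 188):

    # Read the raw Command_Timeout (attribute 188) counter for one disk.
    # Assumes smartctl from smartmontools is installed on the host.
    import subprocess

    def command_timeouts(device="/dev/sdb"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Command_Timeout" in line:
                return line.split()[-1]   # raw value is the last column
        return None

    print(command_timeouts())

The raw value here is cumulative, so what matters is whether it keeps climbing between checks.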

Link to comment

Just upgraded and had one small odd issue.  I have a single non-SSD disk as a cache drive, formatted as XFS.  After the upgrade it reset my cache disk slots from 1 to 4 and set the default cache format to btrfs, which resulted in it wanting to format my cache drive.  I stopped the array, changed cache disk slots back to 1, set the cache format to XFS, and then the array started and my cache drive was fine.

 

Hope this helps anyone else that encounters the same issue.

Link to comment

Just updated from 6.0.1 to 6.1.4 and now my cache disk is showing as unmountable and unRAID would like to format it :(

 

[edit]  Rebooted the system and it came up with the cache mounted but no data on the drive; rebooted a second time and it looks like there is some file system corruption.  I can see total and used space accurately, but I cannot browse the shares on the cache.

 

[edit 2] Rebooted again and now I can see all directories on the cache drive but not all files within the directories.  Some errors in the syslog.

I have to agree with JonP: this is a hardware issue with the interface to your new mSATA drive.  Can you provide a syslog from your current v6.0.1, for comparison?

 

In the syslogs, the drive was set up and mounted fine, but interface issues arose.  It copes at first, but things rapidly get worse.  There is occasionally a BadCRC flag, indicating packets still getting through but corrupted; then afterward there is no evidence of packets and, even after resets, no response, so the drive is dropped by the kernel (causing all the rest of the errors, including the loss of the btrfs loop0 file system on it when Docker is enabled).
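If it helps to quantify that, here is a small helper (my own, not an unRAID tool) that tallies the usual libata trouble lines per port in a saved syslog, so you can see at a glance whether ata7 dominates:

    # Tally ATA link problems per port in a saved copy of the syslog.
    import re
    from collections import Counter

    hits = Counter()
    with open("syslog") as fh:
        for line in fh:
            m = re.search(r"(ata\d+(?:\.\d+)?).*?(BadCRC|failed command|hard resetting link)", line)
            if m:
                hits[(m.group(1), m.group(2))] += 1

    for (port, kind), count in sorted(hits.items()):
        print(f"{port}: {count} x {kind}")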

 

Try reseating the card, perhaps it's loose?

 

It's very hard for me to believe you aren't seeing the same issues on all versions, for that matter on any OS you might try, because this is a hardware interface issue.  If the drive stops responding, it doesn't matter what software is loaded.  It *is* possible that one software may use different timings and procedures, trying to maintain communications with a device, but that would mean it was just dumb luck that any particular release appeared to work better.  You have indicated you saw resets before.  Those are a big problem even if it appeared to work after them, and I'm not sure I'd trust the drive if it's occasionally resetting.  Sooner or later, even in the best of conditions, there's going to be file system corruption.

 

I know you find it too coincidental that this happened with the update, but I have noticed that invariably with almost every update there are one or two users that find a hardware issue, right after performing the update.  It's natural then to blame the update, as it's the only thing you know of that changed.

 

There's a small compatibility issue with your mSATA drive, but it may be harmless.

 

You indicated you replaced the cable to the drive, but it's an mSATA drive (which doesn't have cables)?

Link to comment

Here is my syslog after rolling back to 6.0.1.

 

Some observations -

 

* In both syslogs there is evidence that one or more remnants of v5 were not cleaned out; it finds an old copy of the dynamix plugin.  Please review Files on flash drive.  This has nothing to do with your main issue, but it can cause strange problems.

 

* The v6.0.1 and v6.1.4 systems are a little different: the v6.1.4 system loads Community Applications, CacheDirs, and S3_sleep, while the v6.0.1 system does not.  But this too should have no effect on a hardware issue.  It does mean, though, that we're slightly comparing apples to oranges, or perhaps green apples to red apples, and I have to wonder what else might be different in the config.  You do install S3_sleep later on the v6.0.1 system, and use it.

 

* There's no evidence of the interface issues in this v6.0.1 syslog.  I don't know why.  There are what I think are very minor differences in the kernel between the two versions, nothing that (I think) LimeTech would have control over.  So I don't know what to tell you yet.  I really believe you will see problems with that card.  Any chance you have re-seated it?

Link to comment
