unRAID Server Release 6.1.4 Available


limetech


Thank you for the feedback, RobJ.

 

Here is my syslog after rolling back to 6.0.1.

 

Some observations -

 

* In both syslogs, there is evidence that one or more remnants of v5 were not cleaned out: the system finds an old copy of the Dynamix plugin. Please review the files on your flash drive. This has nothing to do with your main issue, but it can cause strange behavior.

 

* The v6.0.1 and v6.1.4 systems are a little different: the v6.1.4 system loads Community Applications, CacheDirs, and S3_sleep, while the v6.0.1 system does not. This too should have no effect on a hardware issue, but it does mean we're slightly comparing apples to oranges, or perhaps green apples to red apples. And I have to wonder what else might be different in the config. You do install S3_sleep later on the v6.0.1 system and use it.

 

* There's no evidence of the interface issues in this v6.0.1 syslog. I don't know why. There are what I think are very minor differences in the kernel between the two versions, nothing that (I think) LimeTech would have control over. So I don't know what to tell you yet. I really believe you will see problems with that card. Any chance you have re-seated it?

 

* I'm surprised there is an old Dynamix lurking in the syslog, as I did a clean install when moving to v6, but I will track this down.

 

* It's worth noting that the plugins mentioned were all running happily in 6.0.1 for quite some time before the syslogs that were posted. So I know it's not a straight comparison, but whichever plugins were previously running in 6.0.1 were then updated in 6.1.4. I also ran 6.1.4 without any plugins installed to make sure it wasn't a plugin causing the issue.

 

* I did replace the SATA cable as a precaution during my troubleshooting of 6.1.4. After replacing the cable the errors for ATA7 went away, but I was still having all of the other errors in the syslog. It wasn't until rolling back to 6.0.1 that my problems went away. So I'm still not sure; to me it feels as though there is an issue in 6.1.4 for my hardware. Maybe if I get a chance on the weekend I will try moving to 6.1.3.
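
For anyone doing the same kind of comparison, a quick tally of which ATA ports are throwing errors makes it easier to pin the blame on one cable or port. A rough Python sketch (the stock /var/log/syslog path is an assumption; point it at a saved copy if you've already rebooted):

import re
from collections import Counter

# Tally ATA error lines per port so a flaky cable/port stands out.
counts = Counter()
with open("/var/log/syslog") as log:
    for line in log:
        m = re.search(r"\b(ata\d+)(?:\.\d+)?:", line)
        if m and ("error" in line or "failed command" in line or "exception" in line):
            counts[m.group(1)] += 1

for port, n in counts.most_common():
    print(port, n, "error lines")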

Link to comment

Updated to 6.1.4 and I'm finding the window that pops up when updating a docker is blank. It finishes the update; there's just no progress displayed to the user.

 

EDIT: On update of the third docker it was blank until completion, then the information was populated. Started the fourth docker update and that window is blank too.

Link to comment

Updated to 6.1.4 and I'm finding the window that pops up when updating a docker is blank. It finishes the update; there's just no progress displayed to the user.

 

EDIT: On update of the third docker it was blank until completion, then the information was populated. Started the fourth docker update and that window is blank too.

 

I have seen this behavior in numerous versions and just always knew that it would eventually finish.

Link to comment

Updated to 6.1.4 and I'm finding the window that pops up when updating a docker is blank. It finishes the update; there's just no progress displayed to the user.

 

EDIT: On update of the third docker it was blank until completion, then the information was populated. Started the fourth docker update and that window is blank too.

Yup, this is a bug we are aware of; we didn't have time to fix it for 6.1.4. It is actually planned to remain as a known issue for the initial 6.2 beta, but we plan to patch it after that release.

 

As for sharing what features are coming, well, you'll just have to wait and see, but I think everyone should get ready to put on a pair of diapers/depends because I think pants may be soiled once you guys see all the stuff we added ;-)

Link to comment

Just a small observation which might help some people with blank GUI pages. After upgrading to 6.1.4 the "user shares" page would not show any shares, even though everything worked fine, all shares were accessible through SMB, and they would also show on the dashboard page. It turned out that the culprit was Adblock Plus; after disabling Adblock for my local NAS, everything showed again just like it should.

Link to comment

This release is showing all the disks connected to my Areca RAID card as SMART warning red. It is reading them as spun down when they are in fact all spun up.

 

In addition, Plex refuses to play any of my titles, saying to check if the server is running. I am able to stream the MKV using VLC straight from the share just fine, and I watched a title earlier tonight (which won't play now). Diagnostics attached. Screenshot too.

 

PS: The Plex forum shows that 0.9.14.2 Plex for Linux is borked, so that probably isn't an unRAID thing. Now to try and roll back a version. :P

tower-diagnostics-20151120-0014.zip

Screenshot_1.png

Link to comment

I am running 3 of the SAS2LP cards, and mine have been running well without any issues. After upgrading to 6.1.4 I started getting reallocated sectors on one of my drives, so I started migrating data off the drive and ordered a replacement. I usually like to get the data off just as a failsafe in case another drive has an issue. I have tried twice so far to rsync the data to other drives, and I start seeing errors such as:

 

ata25.00: exception Emask 0x0 SAct 0x300000 SErr 0x0 action 0x6

ata25.00: failed command: WRITE FPDMA QUEUED

ata25.00: cmd 61/00:00:00:97:6d/04:00:26:00:00/40 tag 20 ncq 524288 out

        res 41/84:00:00:5f:6d/00:04:26:00:00/00 Emask 0x410 (ATA bus error) <F>

ata25.00: status: { DRDY ERR }

ata25.00: error: { ICRC ABRT }

ata25.00: failed command: WRITE FPDMA QUEUED

ata25.00: cmd 61/08:00:00:9b:6d/00:00:26:00:00/40 tag 21 ncq 4096 out

        res 01/04:a8:00:9b:6d/00:00:26:00:00/40 Emask 0x2 (HSM violation)

ata25.00: status: { ERR }

ata25.00: error: { ABRT }

 

on each drive I try to copy the data to. I have reverted back to 6.1.3 and started the copy again, and haven't seen any issues so far, but it hasn't been running very long. I am not sure if this is really related to the 6.1.4 upgrade or not, but since there are notes about tweaking that card I figured I would post. I don't really have the log, since I rebooted back into 6.1.3 to try to get the data saved, but I am including a cut/paste of part of the log from a screen capture.
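
For next time, a small watcher like this could copy just the ATA error lines to the flash as they happen, so they survive a reboot. An untested Python sketch; the /var/log/syslog path and /boot/logs directory are assumptions about a stock unRAID layout:

import time

# Follow the live syslog and append any ATA error lines to the flash
# drive (/boot on unRAID), so they survive a reboot.
def follow(path):
    with open(path) as f:
        f.seek(0, 2)               # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

with open("/boot/logs/ata-errors.txt", "a") as out:
    for line in follow("/var/log/syslog"):
        if "ata" in line and ("error" in line or "failed command" in line):
            out.write(line)
            out.flush()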

 

Thanks,

Jeff

output.zip

Link to comment

This release is showing all the disks connected to my Areca RAID card as SMART warning red. It is reading them as spun down when they are in fact all spun up.

 

I'm seeing the same thing. I'm really hoping they'll add some logic to handle the Areca cards better, as they are a nice, cheap, large-density controller for unRAID boxes.

 

Also glad I read about the problem with the .14 version of Plex; I was getting ready to upgrade mine to that version, but I'll hold off for now...

areca_drives.jpg

Link to comment

Since upgrading to 6.1.4 I've run two parity checks: one just using the default settings, and one with md_sync_thresh set to one less than sync_window. I am also including my Nov 1 parity check results from 6.1.3 for comparison. This is using 3 M1015 cards:

 

6.1.3 (Nov 1):

Subject: Notice [CYDSTORAGE] - Parity check finished (0 errors)

Description: Duration: 18 hours, 1 minute, 11 seconds. Average speed: 92.5 MB/sec

Importance: normal

 

6.1.4 (no thresh set):

Subject: Notice [CYDSTORAGE] - Parity check finished (0 errors)

Description: Duration: 18 hours, 18 minutes, 27 seconds. Average speed: 91.1 MB/sec

Importance: normal

 

6.1.4 (thresh set):

Subject: Notice [CYDSTORAGE] - Parity check finished (0 errors)

Description: Duration: 17 hours, 39 minutes, 43 seconds. Average speed: 94.4 MB/sec

Importance: normal

 

So, 6.1.4 seemed to slow me down a bit, but setting the thresh value got back nearly 40 minutes on the parity check, which isn't bad.
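
For anyone else experimenting, the tunables can also be poked at runtime with unRAID's mdcmd. A minimal Python sketch of the "thresh = sync_window - 1" setting above; the 384 value is only an assumed example, so check your own Settings -> Disk Settings first:

import subprocess

# Set md_sync_thresh to one less than md_sync_window via unRAID's mdcmd.
# SYNC_WINDOW is an assumed example value -- use whatever yours is set to.
SYNC_WINDOW = 384

subprocess.run(["/root/mdcmd", "set", "md_sync_window", str(SYNC_WINDOW)], check=True)
subprocess.run(["/root/mdcmd", "set", "md_sync_thresh", str(SYNC_WINDOW - 1)], check=True)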

Link to comment

I am running 3 of the SAS2LP cards, and mine have been running well without any issues. After upgrading to 6.1.4 I started getting reallocated sectors on one of my drives, so I started migrating data off the drive and ordered a replacement. ...

 

Looks like it is doing it with 6.1.3, so it may be just some issue I am having with the other drives instead of the 6.1.4 update...

Link to comment

I'm running a SAS2LP card and have been experiencing the red ball(s) during parity sync/check issues described elsewhere.

 

After installing 6.1.4 and subsequently performing a parity check, it again resulted in a "failed" disk. unRAID alerted that a disk was in an error state; however, 20 minutes later it reported "array turned good" and "Array has 0 disks with read errors".

 

The drive is still 'red balled' though.  Is this expected?

 

UNRAID-disk5.jpg

Link to comment

This release is showing all the disks connected to my Areca RAID card as SMART warning red. It is reading them as spun down when they are in fact all spun up.

In the attached picture at least, they do appear spun up, but Areca-attached drives require special handling for spin detection and SMART access. I suspect it is showing a problem icon when it can't access SMART. Perhaps a third icon could be introduced, a grey one for when SMART is inaccessible.
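
For what it's worth, smartmontools already knows how to reach drives behind an Areca, so a sketch of what that special handling looks like from user space is below. The /dev/sg0 node and the 8-slot range are assumptions for illustration; adjust both for your controller:

import subprocess

# Query SMART health for each drive slot behind an Areca controller,
# using smartctl's areca device type.
for slot in range(1, 9):           # assumed 8 drive slots
    result = subprocess.run(
        ["smartctl", "-H", "-d", "areca,%d" % slot, "/dev/sg0"],
        capture_output=True, text=True)
    status = "OK" if "PASSED" in result.stdout else "check output"
    print("slot %d: %s" % (slot, status))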

Link to comment

This release is showing all the disks connected to my Areca RAID card as SMART warning red. It is reading them as spun down when they are in fact all spun up.

In the attached picture at least, they do appear spun up, but Areca-attached drives require special handling for spin detection and SMART access. I suspect it is showing a problem icon when it can't access SMART. Perhaps a third icon could be introduced, a grey one for when SMART is inaccessible.

 

Previous version(s) just had empty space. I'd prefer that to having to mentally inventory the red icons each time to see if somebody outside the Areca is having issues. :P

 

ps. "special handling" sounds like the fix. Can haz?

Link to comment

I am running 3 of the SAS2LP cards, and mine have been running well without any issues. After upgrading to 6.1.4 I started getting reallocated sectors on one of my drives, so I started migrating data off the drive and ordered a replacement. ...

Looks like it is doing it with 6.1.3, so it may be just some issue I am having with the other drives instead of the 6.1.4 update...

 

There are 2 drive issues, so this is not related to the update. I can't conclude much from the excerpt, but here are a few observations.

 

There are no clues as to the identity of the drive associated with ata25.00, except that it is connected to the 3rd port of a 4-port SAS card. Twice it has ICRC errors, which are usually from faulty SATA cables but could also be caused by faulty power. After each, there are one or more HSM violations. It appears one side of the link is retrying writes while the other side is not expecting them (it is expecting something else), so they are clearly out of sync. A hard reset fixes it. You probably only have to identify the drive and replace its SATA cable.

 

The drive associated with ata8.00 has a bad sector (perhaps it's the one you're replacing?). It's probably sdi, connected to the 2nd port of an 8-port card.
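
If it helps identify the drives: on most libata setups the "ataN" name appears in each disk's sysfs path, so something like this rough Python sketch can map ataN to /dev/sdX. This may not hold for every SAS HBA driver, so treat it as a starting point rather than a guarantee:

import os, re

# Map kernel "ataN" names to /dev/sdX block devices by resolving each
# disk's sysfs link and looking for the ataN path component.
for dev in sorted(os.listdir("/sys/block")):
    if not dev.startswith("sd"):
        continue
    path = os.path.realpath("/sys/block/" + dev)
    m = re.search(r"/(ata\d+)/", path)
    if m:
        print(m.group(1), "->", "/dev/" + dev)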

Link to comment
