unRAID Server Release 6.1.1 Available


limetech


Ran the update and now all my shares have disappeared except for a new one, "Documentos". Rolling back to 6.1 does not restore my shares.  How do I get my shares back?

 

I have removed the VFS_Recycle plugin and got everything back.  Goes to show I need to watch those plugins more carefully!

 

That didn't come from the vfs_recycle plugin.  It came from here: http://lime-technology.com/forum/index.php?topic=38635.msg407383#msg407383

I have never had Auto Mount USB installed.  In my case it was the VFS plugin.

 

It's unassigned devices, not Auto Mount USB.  Did you install unassigned devices and then remove it today just to see what it was?  There was an issue in gfjardim's first release today.  I did notice in your log it wasn't installed.

 

My test/development server has a copy of unassigned devices, but there is no way that any user share stuff is intentionally put into the vfs_recycle distribution.

 

I'm at a total loss how vfs_recycle could possibly install a user share full of gfjardim's unassigned devices stuff.

 

Do you use the powerdown plugin?  If you do, can you post the last few logs?  The log history is in /boot/logs, and each log has a time stamp.
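A quick way to list that log history from the console, newest first (log directory taken from the path above):

```shell
# List the powerdown log history, newest first (directory assumed
# to be /boot/logs as described above)
LOGDIR=/boot/logs
if [ -d "$LOGDIR" ]; then
  ls -lt "$LOGDIR" | head -n 10
else
  echo "$LOGDIR not found (not running on the unRAID flash drive?)"
fi
```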

 

I use powerdown, but with this issue powerdown would freeze, so there was no clean powerdown.  The plugins listed are the only ones I have had installed for the longest time.  I have never had unassigned devices that I can remember; if I did, it was more than a year ago.  The only thing I have added in the last few weeks was the VFS plugin.  I have been doing a lot of file moves over the last week and have rebooted at least 3 times a day for the last 5 days.  I did not have the problem until the 6.1.1 release.  My last reboot before the upgrade was this morning, without incident.  However, after 6.1.1 is when the issue occurred.  I have since removed the VFS plugin and have rebooted twice with the issue gone.

Link to comment


From console or telnet, what do you get with this?
ls -lah /boot/config/plugins

Link to comment


Please post your /boot/config/smb-extra.conf file.
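For anyone comparing, a typical vfs_recycle stanza in smb-extra.conf looks something like this (the share name and paths below are illustrative, not taken from this system):

```ini
# Illustrative only: share name and paths are made up.
[Documents]
  path = /mnt/user/Documents
  vfs objects = recycle
  recycle:repository = .Recycle.Bin
  recycle:keeptree = Yes
  recycle:versions = Yes
```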

Link to comment


Please post your /boot/config/smb-extra.conf file.

I do not have an smb-extra file, as I have removed the VFS plugin.  When I did have it installed, I never started, configured, or used it.  I was still moving files around when I installed it and figured I would look at it later.

 

drwxrwxrwx  2 root root  32K Sep  7 11:56 Transmission/

-rwxrwxrwx  1 root root  19K Aug  8 08:21 Transmission.plg*

drwxrwxrwx  2 root root  32K Sep  7 07:34 community.applications/

-rwxrwxrwx  1 root root 6.7K Sep  7 07:34 community.applications.plg*

drwxrwxrwx  5 root root  32K Jun 19 16:51 dockerMan/

drwxrwxrwx  2 root root  32K May 28 19:16 docker_search/

drwxrwxrwx  3 root root  32K Sep  7 08:56 dynamix/

drwxrwxrwx  2 root root  32K May 30 13:37 dynamix.apcupsd/

drwxrwxrwx  2 root root  32K Apr 18 07:56 dynamix.kvm.manager/

drwxrwxrwx  2 root root  32K Apr 14 14:02 filebot/

drwxrwxrwx  2 root root  32K Apr 14 14:02 images/

drwxrwxrwx  3 root root  32K Sep  7 07:34 powerdown/

-rwxrwxrwx  1 root root 5.2K Sep  7 07:34 powerdown-x86_64.plg*

drwxrwxrwx  2 root root  32K Sep  7 08:51 preclear.disk/

-rwxrwxrwx  1 root root 3.5K Sep  7 08:51 preclear.disk.plg*

drwxrwxrwx  2 root root  32K Jun 14 20:41 serverlayout/

-rwxrwxrwx  1 root root 7.3K Jul 26 20:23 serverlayout.plg*

drwxrwxrwx  2 root root  32K Nov 15  2014 webGui/

 

 

 

Link to comment

I just upgraded this morning.  I have Docker disabled and see this error in the syslog:

 

Sep 8 08:56:21 Tower logger: /usr/bin/docker not enabled

Sep 8 08:56:22 Tower avahi-daemon[9272]: Service "Tower" (/services/smb.service) successfully established.

Sep 8 08:56:24 Tower logger: Updating templates... Updating info...

Sep 8 08:56:24 Tower logger: Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (No such file or directory) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 490

Sep 8 08:56:24 Tower logger: Couldn't create socket: [2] No such file or directory Done
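With Docker disabled, that socket is never created, so these messages should be benign.  A quick console check (socket path taken from the warning above) confirms whether the socket is simply absent:

```shell
# Docker socket path taken from the warning above; with Docker
# disabled it is expected to be missing.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  echo "Docker socket present at $SOCK"
else
  echo "No socket at $SOCK (expected while Docker is disabled)"
fi
```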

Link to comment

I can't update via the plugin page.  I get:

plugin: updating: unRAIDServer.plg

plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.1.1-x86_64.zip ... done

plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.1.1-x86_64.md5 ... done

unzip error 0

plugin: run failed: /bin/bash retval: 1

 

Done

 

Any ideas?

 

Update: I rebooted and the update worked.  Thank you.
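If the in-place update ever fails like that again, the downloaded zip can be checked by hand against the published md5 before unzipping (filenames taken from the plugin log above; run this from whatever directory the plugin downloaded them into):

```shell
# Sketch: verify the release zip against its published checksum
# (filenames taken from the update log above).
MD5FILE=unRAIDServer-6.1.1-x86_64.md5
if [ -f "$MD5FILE" ]; then
  md5sum -c "$MD5FILE"
else
  echo "$MD5FILE not found in $(pwd)"
fi
```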

Link to comment

Upgraded via the web gui from 6.1 to 6.1.1

 

All drives missing from the array. Rebooted a few times to see if any change. None.

 

Downgraded to 6.1 and the drives are present & the array started.

 

Note: this is a virtualised server (ESXi 5.5); drives are connected via a SAS2LP controller (I'm following the thread on this SAS controller and parity check speed).

$R3BGWD6.zip

Link to comment

If you can, take the flash drive and run a check disk on it in a PC.  I would then do a manual update and see if it works.  It looks like the file system can't be mounted; it might be a corrupt file.

 

Hope this helps

Thornwood

You were right.  Check disk showed everything was OK, but after a manual update the server booted.  Updating through the GUI took longer than normal; perhaps that introduced a corrupted file.

 

Link to comment

I don't think I am being notified of Docker container updates.

 

I have it set to check four times a day, but every few days I 'Check for Updates' and quite often the Emby Server container reports back that an update is available (even though it said 'up-to-date').  I have seen this enough times that it can't be attributed to coincidental timing.

 

Is anyone else seeing the same?  How can I troubleshoot?

 

Diagnostics attached...

 

John

 

unraid-diagnostics-20150908-0953.zip

Link to comment

Actually, I just started getting Docker update notifications again. Maybe this was because my dockers hadn't been updated in a while.
Link to comment

Are you getting other notifications?  You might disable and re-enable notifications, see if that makes any difference.

Link to comment


Are you getting other notifications?  You might disable and re-enable notifications, see if that makes any difference.

 

I'm getting plugin notifications.

 

I'll disable/re-enable docker notifications and see if that helps.  I think I tried that along the way but can't remember (fyi...I think I first saw this issue in one of the later 6.1 RCs).

 

John

Link to comment

It's using the 9485 card, so it's certainly a candidate for both the SAS2LP problem and the Marvell bug, but your issue is different; there is no evidence of either of the known issues.  The card is identified and set up correctly, no issues, then the 5 drives are identified and set up correctly too, no issues either.  And then comes the following:

Sep  8 07:52:47 unraid-backup kernel: hpet1: lost 3 rtc interrupts

This is unusual; it's the first time I've seen this message, and it probably indicates a problem.  HPET (the High Precision Event Timer) is critical, and you don't want to miss interrupts.  I don't know enough to advise you, but I understand it can be disabled in the BIOS, and the system will then use the older timers.  Apparently, system performance can be worse or better, depending on the combination of CPU, motherboard, and OS.  Worth researching ...
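To see which timer the kernel is actually using (and what alternatives exist before touching the BIOS), the clocksource sysfs entries can be inspected from the console:

```shell
# Show the kernel's current and available clocksources; if HPET is
# suspect, compare behaviour after switching or disabling it in BIOS.
CS=/sys/devices/system/clocksource/clocksource0
if [ -d "$CS" ]; then
  echo "current:   $(cat "$CS/current_clocksource")"
  echo "available: $(cat "$CS/available_clocksource")"
else
  echo "clocksource sysfs entry not found"
fi
```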

 

The big issue here, though, is that this is followed immediately by a 5-second pause (nothing happening)!  As we all know, that's like an eternity to a computer!  What follows is this:

Sep  8 07:52:52 unraid-backup kernel: irq 19: nobody cared (try booting with the "irqpoll" option)

Sep  8 07:52:52 unraid-backup kernel: CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.1.5-unRAID #5

Sep  8 07:52:52 unraid-backup kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/30/2013

...

Sep  8 07:52:52 unraid-backup kernel: Call Trace:

Sep  8 07:52:52 unraid-backup kernel: <IRQ>  [<ffffffff816105e3>] dump_stack+0x4c/0x6e

Sep  8 07:52:52 unraid-backup kernel: [<ffffffff8107afcb>] __report_bad_irq+0x2b/0xbe

...

Sep  8 07:52:52 unraid-backup kernel: handlers:

Sep  8 07:52:52 unraid-backup kernel: [<ffffffff814b2ae4>] usb_hcd_irq

Sep  8 07:52:52 unraid-backup kernel: Disabling IRQ #19

Sep  8 07:53:21 unraid-backup kernel: sas: Enter sas_scsi_recover_host busy: 5 failed: 5

This is fatal to mvsas and the 9485 card.  After a 30-second timeout, the SAS tasks are all aborted and the drives are unresponsive.  Regrettably, and unlike other controllers, mvsas doesn't announce the IRQ it's using, but since it immediately crashes it was very obviously using 19.  All of its drives are then lost and disabled.

 

You might want to compare a syslog where it all worked, with this syslog.

 

A side note: invariably, when there's an "irq ##: nobody cared" error, that usb_hcd_irq handler is found to be sharing the same IRQ!  I don't know the significance, but it's far too coincidental!  I would never want to share an interrupt with that thing!
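The sharing is easy to confirm: each row of /proc/interrupts ends with the list of handlers registered on that IRQ, so one grep shows whether the USB host controller and mvsas really sit on the same line:

```shell
# Show every handler registered on the IRQ flagged in the syslog above.
IRQ=19
grep -E "^ *$IRQ:" /proc/interrupts || echo "IRQ $IRQ not listed"
```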

 

An additional observation: your docker.img is stored at /mnt/user/docker/docker.img.  I don't remember the details, but I believe the instructions now say not to use the /mnt/user path for it; use /mnt/cache instead.  I *think* all you have to do is disable Docker, edit the path, and re-enable Docker (with the array stopped, of course).
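The move itself could be sketched roughly like this (source path taken from the diagnostics above, destination an assumption based on the /mnt/cache advice; this is not an official procedure, so stop the array and disable Docker in the webGui first):

```shell
# Sketch only: disable Docker and stop the array in the webGui before
# moving the image, then point the Docker settings at the new path.
SRC=/mnt/user/docker/docker.img   # current location, from the diagnostics
DST=/mnt/cache/docker/docker.img  # assumed cache-only destination
if [ -f "$SRC" ]; then
  mkdir -p "$(dirname "$DST")"
  mv "$SRC" "$DST"
else
  echo "$SRC not found - nothing to move"
fi
```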

Link to comment

I've removed Docker (I didn't have anything running in it) from the config.

 

I've attached 2 diagnostic files; each filename contains the unRAID version, 6.1 versus 6.1.1.

 

Both diagnostics contain an "hpet1: lost X rtc interrupts" message.

 

I reconfigured how the PCI devices are passed through to the VM (this is an ESXi server with unRAID as a guest), and the errors related to IRQs seem to have now disappeared.

 

6.1.1 boots with no disks present.  The same hardware config on 6.1 boots fine.

 

Any thoughts?

unraid-backup-diagnostics-6.1.0-20150908-1841.zip

unraid-backup-diagnostics-6.1.1-20150908-1827.zip

Link to comment


For 6.1.1, it presents as if the card moved to a different slot, and IRQ 17 is used this time, not IRQ 19, but there is absolutely no change at all!  Same error (irq 17: nobody cared), same messages, same lost controller and drives.

 

For 6.1.0, it is identical, except the IRQ is fine and there are no crashes; the controller and drives are fine and stay fine, just as you said.  The hpet message is identical in the working and non-working syslogs, so it must be harmless and unrelated.  I can see no clues at all as to what is different, so that's the limit of my ability to help.  I have no explanation.

 

Edit: an additional observation: that usb_hcd_irq handler is again sharing the same IRQ, 17 this time.

Link to comment

 

I have a Dell H310 ready to be converted to IT mode to replace this card. Thanks for looking into the problem nonetheless!

Link to comment

Hi, sorry to ask a stupid question, but I found conflicting information across the wiki, various boards, outdated walkthroughs, etc., and thought this would be easier.

 

 

I'm running v5.0.5 x86; can I upgrade to v6.1.1 (CPU is 64-bit compatible) or do I have to blow everything away?

 

 

Link to comment

I'm running v5.0.5 x86; can I upgrade to v6.1.1 (CPU is 64-bit compatible) or do I have to blow everything away?

It doesn't matter which v6 version you upgrade to, but the latest is best.  All of the docs should say that.

 

Hi, sorry to ask a stupid question, but I found conflicting information across the wiki, various boards, outdated walkthroughs, etc., and thought this would be easier.

Can I ask where the conflicting info is, so we can correct it?

Link to comment
