unRAID Server Release 6.1-rc3 Available



To install this release, navigate to Plugins/Install Plugin and paste the following link into the box and click Install:

 

https://raw.githubusercontent.com/limetech/unRAIDServer-6.1-rc/master/unRAIDServer.plg
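
For anyone who prefers the command line, the same .plg can likely be installed with unRAID's 'plugin' helper script. A hedged sketch, assuming the 'plugin' command on your 6.x build supports an install action (verify before relying on it):

plugin install https://raw.githubusercontent.com/limetech/unRAIDServer-6.1-rc/master/unRAIDServer.plg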

 

 

I thought the process was that once you had downloaded a beta/RC, you should be able to check for updates on the Plugins page and find/install the latest RC. I understand having to manually enter yourself into the RC stream, but once there, why would the auto-update not be enabled? Or is this something unique to RC3?

 

Something unique to RC3.

Link to comment

I'm seeing this message every 6 hours.  Is it normal?

 

Aug 12 18:10:01 unRAID crond[1881]: exit status 127 from user root /usr/local/sbin/plugincheck &> /dev/null
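
For context, exit status 127 is the shell's "command not found" code, so crond is reporting that the script it was told to run no longer exists at that path. An illustrative terminal session (the exact error text depends on your shell):

root@unRAID:~# /usr/local/sbin/plugincheck
-bash: /usr/local/sbin/plugincheck: No such file or directory
root@unRAID:~# echo $?
127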

 

From the OP:

 

A side-effect of these changes is that any Notifications you have enabled won't work until you disable the Notification, hit Apply, and then enable it again, and hit Apply.

 

 

I almost said again (as I told Jon before) that I do not have notifications enabled.  I decided to check, but I do in fact have Docker and Plugin update notifications enabled, set to run every 4 hours.

 

Sorry and thanks for the clarification!

 

John

 

Perhaps this error message should be added to the first post so that users can put two and two together themselves :)

 

Added, although I don't really agree.  The post flat-out says that if you had notifications enabled, you will need to do X, Y, and Z to fix it.  If folks don't read that, then they aren't going to read the new blurb I put in about the crond log event.  That said, it's easier to comply than argue on this one...

 

I always read the release notes but just forgot that I had enabled those notifications.

Link to comment

Something unique to RC3.

 

Not true.  I posed this question RE: rc2 and it was not answered.

 

http://lime-technology.com/forum/index.php?topic=41783.msg396850#msg396850

 

I have had to manually install rc2 and rc3.

 

John

Link to comment

I have had to manually install rc2 and rc3.

 

Actually, the answer is given in the release notes of v6.1-rc1.

 

We are handling -beta and -rc releases a little differently.

 

It is a manual action to install or upgrade non-final releases.

 

Link to comment

It is a manual action to install or upgrade non-final releases.

 

Once you install this plugin, further "check for updates" will check for new 6.1-rc releases, though I don't expect there to be another beyond -rc1 for 6.1.  Once 6.1 'stable' is released, the "check for updates" will return you back onto the 'stable' release path.

 

Personally, it doesn't bother me that it hasn't worked (at least not for me).

Link to comment

Personally, it doesn't bother me that it hasn't worked (at least not for me).

 

Yes, there was a bug in the unRAIDServer.plg file for 6.1-rc that prevented automatic update via 'check for updates'.  This should be fixed in -rc3, meaning when -rc4 comes out it "should work" to use 'check for updates' to get it.

Link to comment

Personally, it doesn't bother me that it hasn't worked (at least not for me).

 

Agree.  It does not bother me either, but it is also not what was communicated.  And when asked if that functionality was removed, the question was not answered.  When asked again, the question was answered incorrectly.

 

What does bother me is some of the snide remarks I have seen which basically say "RTFM".

 

John

Link to comment

This is the URL of the "unRAIDServer-6.1-rc.plg" file out on github.  Once you install this plugin, further "check for updates" will check for new 6.1-rc releases, though I don't expect there to be another beyond -rc1 for 6.1.

 

Yes, there was a bug in the unRAIDServer.plg file for 6.1-rc that prevented automatic update via 'check for updates'.  This should be fixed in -rc3, meaning when -rc4 comes out it "should work" to use 'check for updates' to get it.

 

You'd think you'd have learned not to make statements like the first one by now, Tom.  ;)

 

But I do admire the continued optimism of it!  ;D

Link to comment

What does bother me is some of the snide remarks I have seen which basically say "RTFM".

 

My fault if anything got out of hand about that... in this particular case, the 'check for updates' was intended to work, and someone (maybe you?) reported it didn't.  I also noticed that, then finally got around to figuring out what was wrong, but I fixed it without posting back here, and I don't even think I made a change log entry for it.  Probably got distracted and forgot all about it - that happens sometimes.

Link to comment

You'd think you'd have learned not to make statements like the first one by now, Tom.  ;)

 

You'd think so, right?

Link to comment

Guys,

 

Given you're implementing new features, such as a customisable banner, shouldn't we look to future-proof the code? It would be great if the images could be HiDPI, for example; currently the stock banner and graphical elements look pretty ugly on my Retina displays.

 

I have noticed that most plugins now include @2x images, so change is afoot.

 

Just a thought.

Link to comment

Your issue is that the share that you use for docker is NOT set to use the cache.  Log in to the unRAID webGui and click on the "Shares" tab at the top.  Then click on docker.  Change the "use cache" option from No to Only.  This should resolve your issue going forward.

 

This problem stems from creating folders off the root path of a device (e.g. /mnt/cache).  You really shouldn't be doing that unless you don't intend to use shares at all.  Rather, you should be creating shares.  If you want a /mnt/cache/docker and don't want that share to ever contain data on array devices, then you simply create a Cache Only share.

 

Thank you Jon.  I did as you suggested.  But I'm left with a couple of questions:

 

1) I have another directory that sits at the same level as my Docker directory on the cache drive and it too has its "use cache" setting set to "No".  Why doesn't that directory get deleted but the Docker directory does?

 

2) What is the benefit of a "use cache" setting of "No" if it will result in directories being unintentionally deleted?

 

Link to comment


1) I have another directory that sits at the same level as my Docker directory on the cache drive and it too has its "use cache" setting set to "No".  Why doesn't that directory get deleted but the Docker directory does?

 

2) What is the benefit of a "use cache" setting of "No" if it will result in directories being unintentionally deleted?

I would expect a folder at the top level of cache that was set to anything other than cache-only to be moved to the array. That is what happened to the many users' appdata folders I have been involved with.

 

Was mover changed in this release? I haven't installed this release yet so I can't examine the script.

 

Link to comment


 

1) I have another directory that sits at the same level as my Docker directory on the cache drive and it too has its "use cache" setting set to "No".  Why doesn't that directory get deleted but the Docker directory does?

 

The next time this happens, check /mnt/user to see if the docker folder still exists.  I don't believe it's getting "deleted" but rather, moved.  There is a good chance that you have a "docker" folder on one of your array disks that contains the last copy of your docker.img file.  I would be very shocked if the file truly just "disappeared."  unRAID never deletes files automatically like that, but rather, it will try to move them if they aren't set to use a cache.

 

As far as the other share you have that doesn't ever "disappear," browse to the root of each of your array devices.  Do you see a "docker" directory on any of those?  How about the other folder you mentioned that doesn't disappear?

 

2) What is the benefit of a "use cache" setting of "No" if it will result in directories being unintentionally deleted?

 

Nothing was unintentionally deleted unless a plugin or other culprit exists on your system.  unRAID does not delete user data without your knowledge or intention...ever...  It may move the data because it is configured to do so, but it wouldn't delete it.

 

The share setting "Use Cache" can be set to one of three options:  Yes, No, or Only.

 

Yes means that NEW FILES written to the share will go to the cache first, and then be automatically moved to the array at a schedule you can define under Settings -> Scheduler.

 

No means that NEW FILES written to the share will go directly to the array.

 

Only means that NEW FILES written to the share will go to the cache and never be moved.
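
To make the three modes concrete, here is a minimal sketch of the per-share decision the mover has to make. This is not the actual unRAID mover script; it assumes each share's settings can be read as a shareUseCache="yes|no|only" line in /boot/config/shares/<name>.cfg, which is an assumption about the config layout:

#!/bin/bash
# Hypothetical sketch of the mover's per-share decision -- NOT the real script.
# Assumption: /boot/config/shares/<name>.cfg holds a line like shareUseCache="yes"
# (the real location and format may differ on your release).
for dir in /mnt/cache/*/; do
    share=$(basename "$dir")
    cfg="/boot/config/shares/$share.cfg"
    use=$(sed -n 's/^shareUseCache="\(.*\)"$/\1/p' "$cfg" 2>/dev/null)
    case "$use" in
        only) echo "$share: Cache Only - mover leaves it on the cache" ;;
        yes)  echo "$share: cached files get moved to the array on schedule" ;;
        *)    echo "$share: 'No' or unset - new writes bypass the cache, but files already on the cache still get moved" ;;
    esac
done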

Link to comment


Was mover changed in this release? I haven't installed this release yet so I can't examine the script.

 

No changes to the mover.  This is a one-off issue that JarDo is having.

Link to comment

No changes to the mover.  This is a one-off issue that JarDo is having.

That's certainly debatable.

Link to comment

That's certainly debatable.

 

If you create a share that is cache-only, the mover won't move it.  Can you point me to an example of someone recently that has that setup and it moved it anyway?

Link to comment

If you create a share that is cache-only, the mover won't move it.  Can you point me to an example of someone recently that has that setup and it moved it anyway?

 

I think trurl is referring to the oft-recurring issue where people DON'T make their "appdata" share cache-only and then post about it.  I think trurl has responded to these queries so many times he has Ctrl-V mapped to his response...

 

Link to comment

Every minute, I'm getting the line below in my log. I'm not sure where the heck it's coming from or how to check. It seems to have arisen only after the rc3 update/reboot.

 

No plugins installed. Only 4 dockers and 2 VMs.

 

Aug 13 14:40:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
Aug 13 14:41:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
Aug 13 14:42:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
Aug 13 14:43:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
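
To track down which crontab entry is firing, a generic search of the usual cron locations works (standard Linux paths; adjust as needed):

grep -r "monitor" /etc/cron* /var/spool/cron 2>/dev/null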

Link to comment


If you create a share that is cache-only, the mover won't move it.  Can you point me to an example of someone recently that has that setup and it moved it anyway?

What is debatable about this is that it is a one-off issue. Please see the debate that I linked.

Link to comment

Aug 13 14:40:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
Aug 13 14:41:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
Aug 13 14:42:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null
Aug 13 14:43:01 Unraid crond[1572]: exit status 127 from user root /usr/local/sbin/monitor &> /dev/null

It seems you have to disable and re-enable notifications to get it working again. It's explained in the first post.

Link to comment

It seems you have to disable and re-enable notifications to get it working again. It's explained in the first post.

 

OOPS!!!  Thought I did that!

EDIT: Had to disable and re-enable system notifications as well, not just third-party notifications.

Link to comment
What is debatable about this is that it is a one-off issue. Please see the debate that I linked.

It would solve one part of this issue if, when a new share is detected and its only folder is on the cache drive, that share were automatically set to cache-only. Or, if there is no share config defined, the mover should just ignore it.

 

It is not intuitive for a folder that was manually created on the cache drive to be automatically moved.

 

The intuitive behaviour is to operate only on user shares that have explicitly been created and set to use the cache, and even then the creation of the files on the cache drive and the subsequent move to the array are transparent.
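
A sketch of the guard being proposed, as a hypothetical addition to a mover-style loop (it reuses the assumed /boot/config/shares/<name>.cfg layout from the earlier sketch):

# Proposed: the mover skips any top-level cache folder with no share config at all.
if [ ! -f "/boot/config/shares/$share.cfg" ]; then
    echo "$share: no share config defined - mover ignores it"
    continue
fi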

Link to comment
