unRAID Server Release 6.0-beta14-x86_64 Available


limetech


Download

 

Disclaimer: This is beta software.  While every effort has been made to ensure no data loss, use at your own risk!

 

Clicking 'Check for Updates' on the Plugins page should permit upgrade as well.

 

Some notes on this release:

  • Important: dlandon's excellent apcupsd plugin is now 'built-in' to the base webGui.  Please remove the apcupsd plugin if you are using it.
  • System notifications will be disabled after the update and must be enabled on the Notification Settings page.  Once enabled, the setting will be preserved across reboots.
  • The 'mover' will now completely skip moving files off the cache disk/pool for shares marked "cache-only".  However, it will move files off the cache disk/pool for any share not explicitly configured "cache-only" on the share's Share Settings page; even for shares that only exist on the cache!  This is different behavior from previous releases!
  • We are still investigating a CPU scaling issue with Haswell CPUs, and have removed a 'patch' that seems to have been responsible for the kernel crashes some users reported.
  • You may see a rather dramatic message pop up on the Docker page which reads: "Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"  This means your image file does not have the NOCOW attribute set on it and may get corrupted if the device it's located on runs completely out of free space.  Refer to the discussion a few posts down for more details.
  • As of beta14, pointing to a docker image through a user share is not supported.  Please update your docker image location field to point to the actual disk device used for your docker image file (e.g. /mnt/cache/docker.img or /mnt/disk#/docker.img; substitute # for the actual disk number that the image is on).
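
For example (illustrative paths only; your disk numbers and share names will differ):

# not supported: a path through the user share file system
/mnt/user/appdata/docker.img
# supported: paths on an actual disk device or the cache
/mnt/cache/docker.img
/mnt/disk1/docker.img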

 

If you installed -beta13 and it resulted in clobbering your cache disk, then restoring the partition table you previously set up should restore proper operation.  For example, if you followed this guide to set up a btrfs cache disk back in -beta6:

http://lime-technology.com/forum/index.php?topic=33806.0

 

Then you should be able to restore the proper partition table using this command sequence:

 

# substitute your cache disk device for 'sdX' in these commands:
sgdisk -Z /dev/sdX
sgdisk -g -N 1 /dev/sdX
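
To sanity-check the result (not part of the original guide; just a standard sgdisk option), you can print the new partition table afterwards:

# confirm that partition 1 now spans the disk:
sgdisk -p /dev/sdX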

 

I want to extend a special Thank You! to bonienl for his continued improvements to dynamix and to eschultz for his programming refinements in multiple areas of the code.

 

Summary of changes from 6.0-beta13 to 6.0-beta14
------------------------------------------------
- apcups: update to 3.14.13
- emhttp: don't clobber cache device MBR unless explicitly formatting (to handle non-unraid-standard partition layouts)
- emhttp: change term "Unformatted" to "Unmountable"
- emhttp: use 'update_cron' script to handle crontab updating for plugins
- linux: remove intel_pstate driver patch
- mover: only skip moving shares explicitly configured "cache-only"
- plugin: check md5sum of downloaded unRAID Server OS zip file
- webGui: integrated apcupsd webGui page, thanks dlandon!

 

Additional Notes from bonienl:

 

- Those using the dynamix band-aid plugin should remove it BEFORE upgrading to B14. If it is removed AFTER the upgrade to B14, a system reboot is required to unload the existing cron settings, which conflict with the new cron solution.

 

- All notifications are OFF by default, and these need to be enabled on the notification settings page. Once enabled, they will survive a system reboot.

 

- Settings for the scheduler (parity check) need to be re-applied to become active; this again is due to the new cron solution introduced in this version of unRAID.

 

- And ... clear your browser's cache to ensure the GUI is updated completely

 

Additional Notes from JonP:

 

Regarding the Docker virtual disk image and needing to recreate it, there has been some confusion as to who will need to do this.  The only folks that will need to recreate their Docker image in Beta14 are those that have it stored on a BTRFS-formatted device.  If your image is stored on a device formatted with XFS or ReiserFS, you do NOT need to recreate your Docker image.

 

However, all users, regardless of filesystem type, will need to make sure their Docker image path in the webGui points to an actual disk device or cache pool, and not the unRAID user share file system.  This rule only applies to the path to your docker image itself.  Hope this clarifies things.
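
If you are not sure which filesystem the device holding your image uses, one quick check from the console (the /mnt/cache mount point is an assumption; adjust for your setup):

# print the filesystem type of the device mounted at /mnt/cache:
df -T /mnt/cache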

 

For a guide on how to recreate your Docker image without losing any application settings / data, please see this post.


My first attempt to update unRAID via the built-in plugin manager resulted in:

 

/usr/local/sbin/plugin update unRAIDServer.plg 2>&1
plugin: running: 'anonymous'
invalid install: deleting /boot/config/plugins/unRAIDServer.plg
plugin: run failed: /bin/bash retval: 1

 

Second attempt seems to go through ok.

 


My first attempt to update unRAID via the built-in plugin manager resulted in:

 

/usr/local/sbin/plugin update unRAIDServer.plg 2>&1
plugin: running: 'anonymous'
invalid install: deleting /boot/config/plugins/unRAIDServer.plg
plugin: run failed: /bin/bash retval: 1

 

Second attempt seems to go through ok.

I got exactly the same.

 

If I remember correctly, this is also what happened when installing beta 13 via the GUI.


I'm scared!

Have I lost my docker.img?

 

"Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"

 

A bit dramatic, right?  What it means is that the docker image file does not have the "NOCOW" (No Copy-on-Write) flag set on it.  If you get into a situation where the underlying btrfs file system that the docker image file exists on fills up completely (zero free space), it is possible that docker operations inside the image file will cause corruption, because new extents cannot be allocated for COW operations.

 

We now set the NOCOW bit when a new docker image file is created in order to avoid this situation.
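
If you want to verify the flag yourself, here is a rough sketch using standard btrfs tooling (the /mnt/cache/docker.img path is an assumption):

# a 'C' in the attribute flags means NOCOW is set:
lsattr /mnt/cache/docker.img
# NOCOW only takes effect if set while the file is empty, which is why
# the image must be recreated rather than fixed in place:
touch /mnt/cache/new-docker.img
chattr +C /mnt/cache/new-docker.img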


My first attempt to update unRAID via the built-in plugin manager resulted in:

 

/usr/local/sbin/plugin update unRAIDServer.plg 2>&1
plugin: running: 'anonymous'
invalid install: deleting /boot/config/plugins/unRAIDServer.plg
plugin: run failed: /bin/bash retval: 1

 

Second attempt seems to go through ok.

 

Ok, I see now what is happening with that.  It is "harmless"; that is, certain circumstances will lead to this.  We'll fix it in the next release.


I'm scared!

Have I lost my docker.img?

 

"Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"

There were posts (I think in the docker section of the forum) about the fact that existing docker image files can get corrupted due to a BTRFS issue, and a change was introduced in beta 13 that handles this for new docker image files.  The easiest way to fix it was to stop Docker, remove the image file, and start Docker again to repopulate it.

 

As long as you have (as is the recommended procedure) stored all your docker configuration and user data external to the docker image file, this should not be too painful.  It just involves the relevant Docker containers being re-downloaded.
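
As a rough sketch of that sequence (the /mnt/cache/docker.img path is an assumption; use the Docker settings page to stop and re-enable the service):

# 1. disable Docker on the Docker settings page, then remove the image:
rm /mnt/cache/docker.img
# 2. re-enable Docker on the settings page; a fresh image (with NOCOW set)
#    is created, and your containers will be re-downloaded when re-added.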


I'm scared!

Have I lost my docker.img?

 

"Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"

 

A bit dramatic, right?

 

Thanks for clarifying, I'll re-create the file.


Nice to see the UPS support is now integrated.  That seems to be the last of the major features that was planned for the 6.0 release?  Does that mean we are finally about to reach the feature-freeze state and move into the RC phase? 

 

I guess that there could still be changes around the virtualisation area - in particular the latest docker release?  However, although that is a significant change, it would probably not be seen as a feature change?


Nice to see the UPS support is now integrated.  That seems to be the last of the major features that was planned for the 6.0 release?  Does that mean we are finally about to reach the feature-freeze state and move into the RC phase? 

 

I guess that there could still be changes around the virtualisation area - in particular the latest docker release?  However, although that is a significant change, it would probably not be seen as a feature change?

 

Yes, the next release very well could be -rc1, which will include an update to Docker and hopefully Xen.  We will be introducing a virtual machine manager before 6.0 final, but that is entirely a webGui component whose development is somewhat independent from the base unRAID Server OS.  The one other feature we wanted to get in is block-level encryption; that may have to wait for 6.1.


Just upgraded from 10a; went smoothly. I removed apcupsd prior to rebooting to be safe. Dockers are up and responsive. I had already moved to XFS just as a precaution earlier today.

 

The UI is much more responsive than back when I tried b12 (downgraded due to disk spin-down issues; hoping those got fixed).

 

Thanks to everyone for all the work!


Nice to see the UPS support is now integrated.  That seems to be the last of the major features that was planned for the 6.0 release?  Does that mean we are finally about to reach the feature-freeze state and move into the RC phase? 

 

I guess that there could still be changes around the virtualisation area - in particular the latest docker release?  However, although that is a significant change, it would probably not be seen as a feature change?

 

I'd imagine they want to find and fix the disk spinup issue before 6.0 RC.


I'd imagine they want to find and fix the disk spinup issue before 6.0 RC.

I agree - but that is not a new feature! 

 

Since it only seems to affect a proportion of users, if fixing it proves a bit intractable then I can see this being something that could still be outstanding as a "Known Issue" even when v6 goes final.  After all, it is not something that can lead to data loss.  The downside is the cost of the power (and possibly extra noise/heat) due to disks not spinning down.  Some have worried about disk lifetime due to this issue, but much of the evidence seems to point to it increasing disk lifetime, as spinning disks up and down is harder on them than continuous spinning.


Is there any way to get the UPS plugin to work with 2 servers connected to a single UPS? It seems like there is only 1 USB out on my UPS.

 

http://www.amazon.com/CyberPower-CP1500PFCLCD-Sinewave-Compatible-Mini-Tower/dp/B00429N19W/ref=sr_1_2?ie=UTF8&qid=1424514153&sr=8-2&keywords=cyberpower+UPS

 

I'd imagine they want to find and fix the disk spinup issue before 6.0 RC.

I agree - but that is not a new feature! 

 

Since it only seems to affect a proportion of users, if fixing it proves a bit intractable then I can see this being something that could still be outstanding as a "Known Issue" even when v6 goes final.  After all, it is not something that can lead to data loss.  The downside is the cost of the power (and possibly noise) due to disks not spinning down.  Some have worried about disk lifetime due to this issue, but much of the evidence seems to point to it increasing disk lifetime, as spinning disks up and down is harder on them than continuous spinning.

 

I have 2 servers, each containing about 20 drives, in a smallish closet. Temps are fine until I get the majority of drives spun up for an extended period and the closet gets hot. My servers were designed to be very quiet as they are right next to a home theater, but when I have drives refusing to spin down, temps start to build up. I am basically forced to crank the case fans up, but then I can easily hear my servers. Normally I only turn them up when doing parity checks.

 

It's a pretty annoying issue for me; I've debated downgrading... but I've always thought the fix was right around the corner. Sadly it seems no one can figure out what is causing it.  :-\


Been looking through the list of 6.0 features under the Roadmap section of the forum, and one thing I see mentioned there that is still missing is the option to boot with Docker support but no VM (KVM or Xen) support.  Is it still intended to supply this option?  I assume that it will require a different Linux kernel configuration, so it would definitely need some widespread exposure to ensure there are no unexpected side-effects.

 

There should probably also be some GUI support at some point to select boot options but I can see this being low priority and possibly for after the 6.0 release.


There is an interaction between Xen and apcupsd when Xen VMs are set to auto-start.  Apcupsd loses communication with the UPS and won't start if the Xen VMs are set to auto-start.

 

This situation occurs when I pass through an Intel iGPU in Xen.  The USB devices go offline and I have to power-cycle the server.

 

The solution is to not auto-start any Xen VMs and start them manually after a reboot.

 

This situation started when the apcupsd package was included in b13.

 

I posted a defect report.


My first attempt to update unRAID via the built-in plugin manager resulted in:

 

/usr/local/sbin/plugin update unRAIDServer.plg 2>&1
plugin: running: 'anonymous'
invalid install: deleting /boot/config/plugins/unRAIDServer.plg
plugin: run failed: /bin/bash retval: 1

 

Second attempt seems to go through ok.

 

 

Ok, I see now what is happening with that.  It is "harmless"; that is, certain circumstances will lead to this.  We'll fix it in the next release.

 

It would be really nice to have some kind of download progress indicator here as well.

 

The b14 update seems to have gone smoothly - I'd previously had the cache unformatted issue but re-formatted the cache under b13.

 


Is there any way to get the UPS plugin to work with 2 servers connected to a single UPS? It seems like there is only 1 USB out on my UPS.

 

You connect the UPS to one of the servers and then set up the other server as a client of the one connected to the UPS.  There is a NIS server included in the apcupsd package, and you can configure the client to use the server connected to the UPS for notification of a power failure.  Click 'Help' on the UPS Settings page and it should help you configure the client for UPS notifications.
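
As a rough sketch (standard apcupsd directives, not from the original post; the 192.168.1.10 address is an assumption), the relevant apcupsd.conf settings would look like this:

# on the server physically connected to the UPS:
NETSERVER on               # enable the built-in NIS server
NISIP 0.0.0.0              # listen on all interfaces (port 3551 by default)

# on the client server:
UPSCABLE ether
UPSTYPE net
DEVICE 192.168.1.10:3551   # address of the UPS-connected server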


 

If you installed -beta13 and it resulted in clobbering your cache disk, then restoring the partition table you previously set up should restore proper operation.  For example, if you followed this guide to set up a btrfs cache disk back in -beta6:

http://lime-technology.com/forum/index.php?topic=33806.0

 

Then you should be able to restore the proper partition table using this command sequence:

 

# substitute your cache disk device for 'sdX' in these commands:
sgdisk -Z /dev/sdX
sgdisk -g -N 1 /dev/sdX

 

 

 

I wasn't sure if I needed to do this or not, so I tried it just in case it applied to me, and now my cache disk won't mount; it says the disk is unmountable. Is there anything I can do to fix it?

 

 


Upgrade went smoothly - as with b13, I had to click twice to upgrade ... I see that this is expected to be fixed in the next release.

 

I removed the apcupsd plugin before the upgrade, reconfigured the UPS after the upgrade, and all seems well, including a slave machine being able to access the NIS service.  Pleased to see the upgrade to 3.14.13.

 

Didn't have any scary message about the docker image, but was surprised that the system came up with Docker stopped, which did give me a scare.  Enabled Docker and everything started up.  Did I miss something in the documentation?  Can I take it, since the scary message didn't appear, that NOCOW is already set?  How can I prove that?


 

If you installed -beta13 and it resulted in clobbering your cache disk, then restoring the partition table you previously set up should restore proper operation.  For example, if you followed this guide to set up a btrfs cache disk back in -beta6:

http://lime-technology.com/forum/index.php?topic=33806.0

 

Then you should be able to restore the proper partition table using this command sequence:

 

# substitute your cache disk device for 'sdX' in these commands:
sgdisk -Z /dev/sdX
sgdisk -g -N 1 /dev/sdX

 

 

 

I wasn't sure if I needed to do this or not, so I tried it just in case it applied to me, and now my cache disk won't mount; it says the disk is unmountable. Is there anything I can do to fix it?

 

Oh dear!


Upgrade went smoothly - as with b13, I had to click twice to upgrade ... I see that this is expected to be fixed in the next release.

 

I removed the apcupsd plugin before the upgrade, reconfigured the UPS after the upgrade, and all seems well, including a slave machine being able to access the NIS service.  Pleased to see the upgrade to 3.14.13.

 

Didn't have any scary message about the docker image, but was surprised that the system came up with Docker stopped, which did give me a scare.  Enabled Docker and everything started up.  Did I miss something in the documentation?  Can I take it, since the scary message didn't appear, that NOCOW is already set?  How can I prove that?

 

 

On my first reboot Docker was also stopped. I had to re-apply a patch to syslinux for CPU scaling that I took out prior to updating to b14, and on the second reboot my Docker started automatically.

