unRAID Server Release 6.0-beta14b-x86_64 Available


limetech


Download

 

Clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade.  Note: because we are managing dynamix in parallel with the unRAID Server base OS, you may see that the dynamix plugin has an update as well.  You do not need to install that update; just install the new version of unRAID OS and you'll get the latest dynamix.  The next upgrade after this release will not exhibit this behavior.  Also: unfortunately the upgrade progress display is broken; let it run.  We are working on a fix.

 

Disclaimer: This is beta software.  While every effort has been made to ensure no data loss, use at your own risk!

 

This is another patch release of -beta14, meaning just a couple of bug fixes; this one primarily fixes the spinning status display.  However, please do read the following notes, since the original post has been modified.

 

Some notes on this release:

  • Device spinning status is now maintained internally and the actual devices are no longer interrogated to get their spinning state.  This means that if you manually use
    hdparm -y

    to spin down devices, or use a 3rd party plugin that does so, the webGui will likely get out of sync with the actual device state.  So don't do that.

  • Important: dlandon's excellent apcupsd plugin is now 'built-in' to the base webGui.  Please remove the apcupsd plugin if you are using it.
  • System notifications may be disabled after update and must be enabled on the Notifications Settings page.  The setting will be preserved across reboot.
  • The 'mover' will now completely skip moving files off the cache disk/pool for shares marked "cache-only".  However it will move files off the cache disk/pool for shares not explicitly configured "cache-only" in the share's Share Settings page; even for shares that only exist on the cache!  This is different behavior from previous releases!
  • We are still investigating a CPU scaling issue with Haswell CPUs, and have removed a 'patch' that appears to have been responsible for kernel crashes some users reported.
  • You may see a rather dramatic message pop up on the Docker page which reads: "Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"  This means your image file does not have the NOCOW attribute set and may become corrupted if the device it's located on runs completely out of free space.  Refer to the discussion a few posts down.
  • As of beta14, pointing to a docker image through a user share is not supported.  Please update your docker image location field to point to the actual disk device used for your docker image file (e.g. /mnt/cache/docker.img or /mnt/disk#/docker.img; substitute # for the actual disk number that the image is on).
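
The docker image location rule above (a real device path, never a user share) can be expressed as a small sanity check.  This is a hypothetical helper, not part of unRAID; the function name is illustrative:

```shell
#!/bin/bash
# Hypothetical validator for the docker image location rule described above:
# the path must point at an actual device (/mnt/cache or /mnt/diskN),
# never at the user share file system (/mnt/user).
valid_docker_img_path() {
  case "$1" in
    /mnt/cache/*)      return 0 ;;  # cache disk/pool: OK
    /mnt/disk[0-9]*/*) return 0 ;;  # specific array disk: OK
    *)                 return 1 ;;  # anything else (e.g. /mnt/user/...): reject
  esac
}

# Example:
# valid_docker_img_path /mnt/user/system/docker.img || echo "fix your path"
```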

 

If you installed -beta13 and it clobbered your cache disk, restoring the partition table you previously set up will restore proper operation.  For example, if you followed this guide to set up a btrfs cache disk back in -beta6:

http://lime-technology.com/forum/index.php?topic=33806.0

 

Then you should be able to restore the proper partition table using this command sequence:

 

# substitute your cache disk device for 'sdX' in these commands:
sgdisk -Z /dev/sdX
sgdisk -g -N 1 /dev/sdX

 

I want to extend a special Thank You! to bonienl for his continued improvements to dynamix and to eschultz for his programming refinements in multiple areas of the code.

 

Additional Notes from bonienl:

 

- Those using the dynamix band-aid plugin should remove it BEFORE upgrading to B14. If removed AFTER the upgrade to B14, a system reboot is required to unload the existing cron settings, which conflict with the new cron solution.

 

- All notifications are OFF by default and need to be enabled on the notification settings page. Once enabled, they will survive a system reboot.

 

- Settings for the scheduler (parity check) need to be re-applied to become active; this again is due to the new cron solution introduced in this version of unRAID.

 

- And ... clear your browser's cache to ensure the GUI is updated completely

 

Additional notes from JonP

 

Regarding the Docker virtual disk image and needing to recreate it, there has been some confusion as to who will need to do this.  The only folks that will need to recreate their Docker image in Beta14 are those that have it stored on a BTRFS-formatted device.  If your image is stored on a device formatted with XFS or ReiserFS, you do NOT need to recreate your Docker image.

 

However, all users, regardless of filesystem type, will need to make sure their Docker image path in the webGui points to an actual disk device or cache pool, and not the unRAID user share file system.  This rule only applies to the path to your docker image itself.  Hope this clarifies things.

 

For a guide on how to recreate your Docker image without losing any application settings / data, please see this post.

Link to comment

Feb 26 03:04:16 Unraid-Nas kernel: mdcmd (67): spindown 0
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: renewing lease of 192.168.1.10
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: rebind in 1350 seconds, expire in 1800 seconds
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: sending REQUEST (xid 0x363228b6), next in 3.69 seconds
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: acknowledged 192.168.1.10 from 192.168.1.30
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: leased 192.168.1.10 for 3600 seconds
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: renew in 1800 seconds, rebind in 3150 seconds
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: adding IP address 192.168.1.10/24
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: writing lease `/var/lib/dhcpcd/dhcpcd-br0.lease'
Feb 26 03:04:41 Unraid-Nas dhcpcd[1693]: br0: executing `/lib/dhcpcd/dhcpcd-run-hooks' RENEW
Feb 26 03:04:41 Unraid-Nas ntpd[5721]: ntpd exiting on signal 1 (Hangup)
Feb 26 03:04:42 Unraid-Nas ntpd[28552]: ntpd [email protected] Sun Dec 21 02:01:49 UTC 2014 (1): Starting
Feb 26 03:04:42 Unraid-Nas ntpd[28552]: Command line: /usr/sbin/ntpd -g -p /var/run/ntpd.pid
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: proto: precision = 0.076 usec (-24)
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: Listen and drop on 0 v4wildcard 0.0.0.0:123
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: Listen normally on 1 lo 127.0.0.1:123
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: Listen normally on 2 br0 192.168.1.10:123
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: Listen normally on 3 docker0 172.17.42.1:123
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: Listening on routing socket on fd #20 for interface updates
Feb 26 03:04:42 Unraid-Nas ntpd[28555]: restrict default: KOD does nothing without LIMITED.
Feb 26 03:04:42 Unraid-Nas dhcpcd[1693]: br0: sending ARP announce (1 of 2), next in 2.00 seconds
Feb 26 03:04:44 Unraid-Nas dhcpcd[1693]: br0: sending ARP announce (2 of 2)
Feb 26 03:16:14 Unraid-Nas kernel: mdcmd (68): spindown 1
Feb 26 03:16:41 Unraid-Nas kernel: mdcmd (69): spindown 10
Feb 26 03:18:02 Unraid-Nas kernel: mdcmd (70): spindown 13
Feb 26 03:18:29 Unraid-Nas kernel: mdcmd (71): spindown 14
Feb 26 03:19:05 Unraid-Nas kernel: mdcmd (72): spindown 15
Feb 26 03:25:35 Unraid-Nas kernel: mdcmd (73): spindown 11
Feb 26 03:25:36 Unraid-Nas kernel: mdcmd (74): spindown 12
Feb 26 03:29:13 Unraid-Nas kernel: mdcmd (75): spindown 2
Feb 26 03:29:43 Unraid-Nas kernel: mdcmd (76): spindown 8
Feb 26 03:31:29 Unraid-Nas kernel: mdcmd (77): spindown 3

Link to comment

regards gui spin status, i've got temps on drives that are showing as grey.

is that right, cos i've not seen it do that before.

Had the same thing, did some playing around and this is what it looks like is happening:

 

poll_attributes runs every 1800 seconds (30 minutes).  Until the next polling period, Dynamix will show whatever value is currently sitting in disks.ini (from the last polling period).  If the drive was spun down at that time, when it spins up it will show "*" for a temperature until the next polling period... and vice versa.

 

I changed poll_attributes to 10 seconds, and everything is working the way we remember from pre-b14 releases. 

 

You're just going to have to play with poll_attributes and find a value which you like

 

Maybe a refresh button should get put back onto Main that will poll the attributes right away
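
Changing poll_attributes as suggested above could be scripted along these lines.  The config path and key format here are assumptions based on the usual Dynamix layout; verify against your own flash drive before using:

```shell
#!/bin/bash
# Hypothetical helper to change the Dynamix SMART polling interval in place.
# The path /boot/config/plugins/dynamix/dynamix.cfg and the key format
# poll_attributes="NNNN" are assumptions, not confirmed by this thread.

set_poll_interval() {
  local cfg=$1 seconds=$2
  # rewrite the poll_attributes line, keeping the rest of the file untouched
  sed -i "s/^poll_attributes=.*/poll_attributes=\"$seconds\"/" "$cfg"
}

# Example (path is an assumption):
# set_poll_interval /boot/config/plugins/dynamix/dynamix.cfg 300
```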

 

 

 

Link to comment

@limetech, besides reporting any strange behavior when I upgrade from one beta to the next, are there any specific tests I could run on my system after I upgrade that would help you?

 

The truth is that general usage is itself really very good testing.  What's important is that when an issue is discovered, you document the exact steps taken up to and through the point the incident occurred.  Think of any settings you may have recently changed, or modifications you may have recently made to related equipment (e.g. networking).  The key to solving bugs is getting the steps necessary to consistently reproduce them.

Link to comment

regards gui spin status, i've got temps on drives that are showing as grey.

is that right, cos i've not seen it do that before.

Had the same thing, did some playing around and this is what it looks like is happening:

 

poll_attributes runs every 1800 seconds.  Until the next polling period, Dynamix will show whatever value is currently sitting in disks.ini.  I had the same issue as you on one server, and my other server was showing "*" for disks that were spun up (that value is only updated every 1800 seconds).

 

I changed poll_attributes to 10 seconds, and everything is working the way we remember from pre-b14 releases. 

 

You're just going to have to play with poll_attributes and find a value which you like

 

Maybe a refresh button should get put back onto Main that will poll the attributes right away

 

it's late (or really damn early, depending on how you look at it) so i may be well out of skew here, but i thought they changed that polling to fix the spin down issue in the first place.

Link to comment

it's late (or really damn early, depending on how you look at it) so i may be well out of skew here, but i thought they changed that polling to fix the spin down issue in the first place.

I don't believe so.  I think they played around elsewhere for the spindown, but the side effect of poll_attributes is that temperatures on drives are now only updated by default every 30 minutes.  If the drive spins up or spins down before the next polling period the gui will display the result from the last poll - which is obviously wrong.

 

Now that I'm thinking more about it, when a drive spins down, emhttp should update the temperature entry for the drive to be a "*".  When the drive spins up, it should automatically poll the attributes for THAT drive, regardless of the poll_attributes settings.
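
The behavior proposed above could be sketched as follows.  All names here (DISPLAY_TEMP, on_spin_event, poll_one_drive) are illustrative, not actual emhttp code:

```shell
#!/bin/bash
# Sketch of the proposal above: clear a drive's displayed temperature on
# spin-down, and re-poll attributes for just that one drive on spin-up.

declare -A DISPLAY_TEMP   # drive name -> temperature shown in the webGui

poll_one_drive() {
  # Stand-in for reading SMART from one drive; real code would run smartctl.
  # Here the "polled" temperature is passed in so the sketch stays self-contained.
  DISPLAY_TEMP[$1]=$2
}

on_spin_event() {
  local drive=$1 state=$2 temp=$3
  if [ "$state" = down ]; then
    DISPLAY_TEMP[$drive]='*'          # spun down: show '*' instead of a stale temp
  else
    poll_one_drive "$drive" "$temp"   # spun up: refresh only this drive
  fi
}
```

This way the global poll_attributes interval only governs background refreshes, while spin transitions keep the display honest.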

 

You can always change poll_attributes back down to the pre-b14 setting of 10 seconds.  The setting was made configurable only to give users a little more control over parity check speeds, because of the overhead involved in polling all of the drives every 10 seconds.

 

 

Link to comment

root@Unraid-Nas:/boot# showsize
4.0K	./make_bootable.bat
4.0K	./make_bootable_mac
4.0K	./sense.sh
8.0K	./license.txt
20K	./readme.txt
44K	./preclear_reports
68K	./ldlinux.sys
120K	./ldlinux.c32
148K	./memtest
148K	./scripts-etc
208K	./sensors-detect
828K	./xen
864K	./syslinux
3.8M	./bzimage
11M	./config
88M	./bzroot
92M	./v6.0-beta12
93M	./v6.0-beta13
93M	./v6.0-beta14
93M	./v6.0-beta14a

 

 

i'm getting quite a collection of beta folders on the flash drive, can i delete those ?

Link to comment

regards gui spin status, i've got temps on drives that are showing as grey.

is that right, cos i've not seen it do that before.

Had the same thing, did some playing around and this is what it looks like is happening:

 

poll_attributes runs every 1800 seconds (30 minutes).  Until the next polling period, Dynamix will show whatever value is currently sitting in disks.ini (from the last polling period).  If the drive was spun down at that time, when it spins up it will show "*" for a temperature until the next polling period... and vice versa.

 

I changed poll_attributes to 10 seconds, and everything is working the way we remember from pre-b14 releases. 

 

You're just going to have to play with poll_attributes and find a value which you like

 

Maybe a refresh button should get put back onto Main that will poll the attributes right away

 

Huh well I guess that's a surprising result given history of how it worked in the past.

What has changed is that we separated "check if the hdd is spinning" from "read the hdd SMART attributes".  This is because reading SMART data is a very "expensive" operation (as explained earlier), especially if you have a large number of drives.  You can't just have a monitor process running in the background polling SMART every second, or every 10 seconds, or higher.  It slows everything down: not only webGui page refreshes, but potentially a big impact on I/O and parity operations.  On the other hand, we do want to monitor SMART sometimes; this is the only way you can get any kind of warning that one of your drive cage fans has failed, other than noticing the melted plastic and smoke coming out of your server, that is.

I guess it might take a few iterations of tweaking to get it right.

Link to comment

Huh well I guess that's a surprising result given history of how it worked in the past.

What has changed is that we separated "check if the hdd is spinning" from "read the hdd SMART attributes".  This is because reading SMART data is a very "expensive" operation (as explained earlier), especially if you have a large number of drives.  You can't just have a monitor process running in the background polling SMART every second, or every 10 seconds, or higher.  It slows everything down: not only webGui page refreshes, but potentially a big impact on I/O and parity operations.  On the other hand, we do want to monitor SMART sometimes; this is the only way you can get any kind of warning that one of your drive cage fans has failed, other than noticing the melted plastic and smoke coming out of your server, that is.

I guess it might take a few iterations of tweaking to get it right.

Poll attributes on spinup, clear temperature on spindown.  I've adjusted my settings down to 300 (5 minutes)
Link to comment

Poll attributes on spinup, clear temperature on spindown.  I've adjusted my settings down to 300 (5 minutes)

 

What if a device spins up due to I/O?  Maybe that's not so bad, because it would just add a bit more delay after spinup... lemme think about that.

 

Something useful would be to see what impact, if any, a "high" SMART polling rate has on media streams.  Different players may have different results.

 

 

Link to comment

We've been running with a 10 second polling rate for how many years now?  I personally have never had any streaming issues, but you're right - different players might not buffer enough and that delay for smart might result in a dropped frame.

 

Polling at spin up shouldn't have any ill effects other than delaying access a hair longer.  Setting the temp to * at spin down will only be an issue if the user manually issues hdparm commands to spin the drives down or up - everything would be out of sync - but you already warned about that in the OP

 

Link to comment

We've been running with a 10 second polling rate for how many years now?  I personally have never had any streaming issues, but you're right - different players might not buffer enough and that delay for smart might result in a dropped frame.

Actually, all releases pre-dynamix used 'hdparm' only to get spinning state in the background.  Temperatures and other SMART data were only read "on-demand" as a result of the user navigating the webGui.  In analyzing where time is spent in each page refresh, this operation accounted for most of the time, especially on larger arrays.  Go back and boot unRAID 5 and you'll see the page navigation is not as "snappy"; this is entirely due to reading SMART.

 

Polling at spin up shouldn't have any ill effects other than delaying access a hair longer.  Setting the temp to * at spin down will only be an issue if the user manually issues hdparm commands to spin the drives down or up - everything would be out of sync - but you already warned about that in the OP

So I just added the code to do the above; not so bad.  What's left is to detect the transition from spun-down to spun-up due to I/O and trigger a SMART poll when it happens.

Link to comment

I successfully used the gui to update from 14a to 14b, but it wasn't easy...

 

After applying the update I wasn't able to stop Docker: when I changed "enable docker" to "no" and hit apply, nothing happened.

 

So I used the gui to stop the array, hoping it would stop the dockers on its way down, but the gui just hung while trying to unmount the drives.

 

So I SSH'd to the server and ran "docker ps" to list running dockers and then "docker stop [dockername]" to shut them off.

 

Then I ran "powerdown -r" to reboot.

 

It came back up and appears to be fine.
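
The manual workaround above can be bundled into a small script.  `docker ps`, `docker stop`, and `powerdown -r` are the actual commands used in the steps; wrapping them in a function this way is just an illustration:

```shell
#!/bin/bash
# Sketch of the manual workaround described above: cleanly stop every
# running container, then reboot. The function name is illustrative.
graceful_reboot() {
  local id
  for id in $(docker ps -q); do   # list IDs of all running containers
    docker stop "$id"             # ask each container to stop cleanly
  done
  powerdown -r                    # then reboot the server
}

# Example:
# graceful_reboot
```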

 

 

This experience makes me wonder... since an OS update requires an immediate reboot, should we require the array to be offline before the OS update can be applied? 

Link to comment
