unRAID Server Release 6.2.0-beta23 Available



 

I am hoping that we get to final before the end of July.  I have a license of win7 pro that I want to upgrade to win10 for free.  I have been holding off as unraid 6.2 switched to ovmf and my current win7 VM is seabios.  I usually don't install betas.... but as the free upgrade date gets closer....maybe I'll change my mind.  Just trying to avoid activation issues with Microsoft.

 

Never say never so I'll give you a less than 1% chance that 6.2 final will be out by the end of July.

 

Which July?  lol

I wouldn't be surprised if Microsoft extended the upgrade window anyway. They really want to push everyone to 10.


I'm having trouble getting the server into S3 (using the S3.sh script available in the forum). It worked properly before, but it seems that after beta 23 the parity disk shows as spun down, although there's still a temperature reading.

 

root@LITTLEMAMBA:~# hdparm -C /dev/sdb
/dev/sdb: drive state is:  active/idle
root@LITTLEMAMBA:~# hdparm -C /dev/sdc
/dev/sdc: drive state is:  standby

The parity drive is active, not spun down.  You might check its spin down settings, see if they have changed.
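If you want to see what the drives themselves report, independent of the Dashboard, a small script like this works (a sketch: it assumes your drives show up as /dev/sd?, and spin_state is just a helper name I made up):

```shell
#!/bin/bash
# Check the real spin state of every drive, bypassing the Dashboard.
# Assumes your drives appear as /dev/sd?; adjust the glob for your controller.

spin_state() {
  # pull the state word out of "hdparm -C" output, e.g. "standby" or "active/idle"
  printf '%s\n' "$1" | awk -F': *' '/drive state is/ {print $2}'
}

for dev in /dev/sd?; do
  out=$(hdparm -C "$dev" 2>/dev/null)
  echo "$dev: $(spin_state "$out")"
done
```

Run that any time the Dashboard looks suspicious and compare with what the GUI shows.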


I'm having trouble getting the server into S3 (using the S3.sh script available in the forum). It worked properly before, but it seems that after beta 23 the parity disk shows as spun down, although there's still a temperature reading.

 

root@LITTLEMAMBA:~# hdparm -C /dev/sdb
/dev/sdb: drive state is:  active/idle
root@LITTLEMAMBA:~# hdparm -C /dev/sdc
/dev/sdc: drive state is:  standby

The parity drive is active, not spun down.  You might check its spin down settings, see if they have changed.

 

Yep, that's the issue. Have a look at the screenshot in the previous post. Dashboard shows the disk spun down, although it still reports temperatures.

 

I have the default settings (will share them later on).

 

Cheers,

Anthonws.


I'm having trouble getting the server into S3 (using the S3.sh script available in the forum). It worked properly before, but it seems that after beta 23 the parity disk shows as spun down, although there's still a temperature reading.

 

root@LITTLEMAMBA:~# hdparm -C /dev/sdb
/dev/sdb: drive state is:  active/idle
root@LITTLEMAMBA:~# hdparm -C /dev/sdc
/dev/sdc: drive state is:  standby

The parity drive is active, not spun down.  You might check its spin down settings, see if they have changed.

 

Yep, that's the issue. Have a look at the screenshot in the previous post. Dashboard shows the disk spun down, although it still reports temperatures.

 

I have the default settings (will share them later on).

 

Cheers,

Anthonws.

 

I have been having this problem for months now with both of my servers.  (If my somewhat faulty memory is correct, it began when I moved to version 6.X.)  So it can affect more than just this beta.  What I do is have a status report sent once a day, and I check it to see if a disk (it always seems to be just one disk) has a temperature listed.  If I see one, I manually spin all of the disks down from the 'Array Operation' section of the 'Main' page.  While it might seem like a really big issue, I can't see where it causes any real problem.  Sure, the server might use a bit more power for a few hours, but that is only a very small amount in the larger picture.  I also believe this has been reported in the past and no one has ever really figured out what causes it or found a solution.  The logs show that the disk in question was spun down, and nothing to indicate that it was spun up for any reason...


Can confirm this isn't a beta issue. I see the same thing and I'm on 6.1.9.

 

However, I thought this is not a defect, as per Joe L.'s post here.

Some drives can return a temperature without spinning up.  The existence of a temperature is NOT a definitive indication of the spin status.

Can confirm this isn't a beta issue. I see the same thing and I'm on 6.1.9.

 

However, I thought this is not a defect, as per Joe L.'s post here.

Some drives can return a temperature without spinning up.  The existence of a temperature is NOT a definitive indication of the spin status.

 

The problem is that the temperature is always a few degrees above what the other drives report when they are spun up (spun up as a 'check' to see the actual status of the drive in question), but not quite as high as one might expect if the drive had been actively spinning for several hours.  So it appears that something is going on, but I am not quite sure what.  I can also vaguely recall that a major change was made in the way drive status was detected, or drives were spun down, as a solution to another larger related issue, but I can't recall any details.


Your drive assignments are actually stored in super.dat ...

 

A comment for Tom or Eric - there's been a change in how super.dat is modified, sometime recently, so probably a part of 6.2 development.  My guess is that pre-6.2, super.dat was modified by seeking then reading or writing in place, whereas now it is modified by clearing the file then writing out the whole file.  The problem is that there is now a window of opportunity that wasn't there before, as we have seen 3 to 5 cases already where the super.dat file exists but is zero bytes (a loss of all array assignments).  I don't remember that ever happening before this.  It's my impression that each of the cases is different, which implies it's the general super.dat update routine that has changed, and is clearing the file to zero bytes well before it is rewritten, a window of time that's vulnerable to power outage or system crash.  Hopefully that can be improved, either by never clearing it or by clearing it only immediately before the write.
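For what it's worth, the standard cure for this class of bug is an atomic replace: build the complete new file alongside the old one, flush it, then rename it into place, so a crash at any moment leaves either the old copy or the new copy, never an empty file. A minimal shell sketch of the idea (the file name and contents are illustrative, this is not unRAID's actual code):

```shell
#!/bin/bash
# Illustrative atomic rewrite: a crash between any two steps leaves either the
# complete old file or the complete new file, never a zero-byte super.dat.
FILE=/tmp/super.dat.demo
printf 'old assignments' > "$FILE"      # stand-in for the existing file

TMP="$FILE.tmp.$$"
printf 'new assignments' > "$TMP"       # 1. write the full new copy elsewhere
sync                                    # 2. flush it to disk
mv -f "$TMP" "$FILE"                    # 3. rename over the original (atomic)
cat "$FILE"
```

The key property is that rename() within one filesystem is atomic, so there is no window where the target file is empty.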

 

Is there a system log somewhere from a server that exhibited this issue?


I'm having trouble getting the server into S3 (using the S3.sh script available in the forum). It worked properly before, but it seems that after beta 23 the parity disk shows as spun down, although there's still a temperature reading.

 

root@LITTLEMAMBA:~# hdparm -C /dev/sdb
/dev/sdb: drive state is:  active/idle
root@LITTLEMAMBA:~# hdparm -C /dev/sdc
/dev/sdc: drive state is:  standby

The parity drive is active, not spun down.  You might check its spin down settings, see if they have changed.

 

Yep, that's the issue. Have a look at the screenshot in the previous post. Dashboard shows the disk spun down, although it still reports temperatures.

As I said, the drive is active, it's not spun down; the Dashboard is wrong.  So the issue is not that it's showing temps, but that something is spinning it up without notifying the main unRAID module.  Let's call that main controller 'Central Command'.  The Dashboard just asks it for the current states, what Central Command thinks they are.

 

Computer systems nowadays are complex systems of complex subsystems of complex subsystems, layers upon layers.  Communication between major and minor parts is often not what it should be.  A long-time example is our drive systems, with quite a few subsystems involved.  In particular, we have 'Central Command' mounting and reading and writing drives, and various low-level parts of the kernel doing the work of communicating and performing the low-level tasks behind the high-level mounts, reads, and writes.  Both start out knowing similar facts about the drive, but then the drive faults in some way, and low-level routines attempt recovery (completely unknown to the high-level routines).  Those kernel routines may finally give up on the drive and disable it, effectively removing it from the system.  But they have no way to communicate that to Central Command, so it continues issuing reads and writes, and doesn't know why they are not succeeding.  It continues trying to read from a drive that isn't there, because it has no way of knowing the drive is no longer there.

 

Our case is similar, temps and spin states get out of sync.  Central Command has a board showing the spin states of all drives, and assumes it will be notified of any changes, but that doesn't happen in certain cases.  In earlier v6 days, I knew of 3 or 4 ways to fool it, but with all the fixes since then, many of those may no longer be true.  One I can remember was simple, just click on the drive to access its config settings and SMART data.  A message would appear indicating that the SMART data wasn't available because the drive was spun down, but then the SMART data sections would begin populating.  That meant that a part of that function detected the drive spun down, and wouldn't access SMART info, but another part went ahead and accessed it anyway (a lack of communication?), and thereby spun the drive up.  When you went back to the Main page, the drive would still show as spun down, but the temp would now appear.  I don't remember the other ways to do it, but obviously there are still one or more ways to trigger a spin up, without notifying Central Command.

 

Now when you issue a spin up or down command, whether it's to a single drive or to all of the drives, that goes through Central Command, so it updates the spin board accordingly, and at that point the drive or drives are all in sync.  As stated by someone above, that 'fixes' the problem, at least temporarily.

 

The problem of out-of-sync temps and spin states used to be worse, but has improved a lot.  One issue that made it worse in early v6 was that the update of the display (Display Settings->Tunable (poll_attributes)) was greatly delayed, for good performance reasons.  It was set to 30 minutes, which meant you could potentially go that long before they were back in sync.  Some of us then recommended it be changed to something much shorter, 60 to 180 seconds at the most.  I like 60 or 120 (any shorter and you can make parity checks slower).  I can't guarantee this is the only factor though.

 

It's true that there are certain drives that can be accessed for temps without spinning up, used to be just certain WD drives.  It would be interesting to know the current list.  One nice feature of the MyMain screen of UnMenu was that it knew which drives those were, and would show their temps even if the drive was spun down.  The current Dynamix Main screen does not attempt that, might be nice to add that since it does separate the display of temp from spin state.

 

The fact that the temps are correctly showing leads me to make one recommendation, a feature request - since the function displaying the drive temps must be checking the active/standby state, why not communicate that to 'Central Command', thereby quickly correcting the out-of-sync condition?  Even if we catch every known function that is altering spin state and make them notify accordingly, new ways can arise through other functions or plugins, or through command line actions.  This way, it's corrected fairly quickly, no matter who did it, now or in the future.  My apologies for the post length.


Your drive assignments are actually stored in super.dat ...

 

A comment for Tom or Eric - there's been a change in how super.dat is modified, sometime recently, so probably a part of 6.2 development.  My guess is that pre-6.2, super.dat was modified by seeking then reading or writing in place, whereas now it is modified by clearing the file then writing out the whole file.  The problem is that there is now a window of opportunity that wasn't there before, as we have seen 3 to 5 cases already where the super.dat file exists but is zero bytes (a loss of all array assignments).  I don't remember that ever happening before this.  It's my impression that each of the cases is different, which implies it's the general super.dat update routine that has changed, and is clearing the file to zero bytes well before it is rewritten, a window of time that's vulnerable to power outage or system crash.  Hopefully that can be improved, either by never clearing it or by clearing it only immediately before the write.

 

Is there a system log somewhere from a server that exhibited this issue?

 

Naturally you'd put me on the spot, make me document my assertions, prove what I say - which is perfectly valid, you want facts over anecdotes.  ;)

 

My assertions are based on my noticing 3 or 4 cases of apparently zero-byte super.dat files, and thinking I don't remember this EVER happening before recently.  Here are a few cases I can find, but I'm going to have to rely on others to recall and locate the other ones.

 

1. Here's one, from v6.1.6, with diagnostics.

 

2. Probably another, from v6.1.9, with syslog.  This one is confusing for multiple reasons.  It never says super.dat was zero bytes, but it was suddenly empty.  The timeline is confusing: he describes it as empty, then shows a pic where it must be full, then in the next post says it's empty (perhaps the pic is from before).  (Be aware the syslog is doubled.  I don't know how, but in copying it the user appended it to itself, so the first half is identical to the second half.)

 

3. Here's another, from v6.1.7, but no syslogs!  Definitely a zero byte super.dat though.

 

4. Possibly another, from v6.1.6, with syslog.  But this one may be a red herring, as there was file system corruption on the flash drive.

 

If there is an issue with rewriting super.dat, then it appears it began around v6.1.6, at least from the data here.  I *think* there are a couple of other cases, perhaps others will find them.


Our case is similar, temps and spin states get out of sync.  Central Command has a board showing the spin states of all drives, and assumes it will be notified of any changes, but that doesn't happen in certain cases.  In earlier v6 days, I knew of 3 or 4 ways to fool it, but with all the fixes since then, many of those may no longer be true.  One I can remember was simple, just click on the drive to access its config settings and SMART data.  A message would appear indicating that the SMART data wasn't available because the drive was spun down, but then the SMART data sections would begin populating.  That meant that a part of that function detected the drive spun down, and wouldn't access SMART info, but another part went ahead and accessed it anyway (a lack of communication?), and thereby spun the drive up.  When you went back to the Main page, the drive would still show as spun down, but the temp would now appear.  I don't remember the other ways to do it, but obviously there are still one or more ways to trigger a spin up, without notifying Central Command.

 

One note.... later versions of smartctl will tell you the spin state of a drive if you use the -n standby option: if the drive is spun down you get a message that the drive is in standby mode, and if it is not, you get the data.
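To illustrate, here is roughly how that looks on the command line (a sketch: check_drive is just my helper name, and it assumes a smartctl recent enough to support -n; with -n standby, smartctl exits with status 2 when the drive is in a low-power mode):

```shell
#!/bin/bash
# One smartctl call covers both cases: SMART attributes if the drive is awake,
# a standby notice (exit status 2) if it's spun down, without waking it up.
check_drive() {
  smartctl -n standby -A "$1"
  case $? in
    0) echo "$1: spun up (SMART data above)" ;;
    2) echo "$1: in standby, left undisturbed" ;;
    *) echo "$1: smartctl error or not available" ;;
  esac
}

check_drive /dev/sdb
```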

 

This has 2 advantages over checking spin state via hdparm:

 

- hdparm doesn't work properly on some controllers.

- with hdparm you have to do 2 steps (check hdparm, then run smartctl).

 

And one more future advantage.... if you get to the point where you can properly query Areca or other HW RAID controllers via smartctl -d custom parameters, you can also get spin status from them, even though hdparm won't work.

 

Plus, if you had a process that checked spin status every so often to resync the GUI indicators, it would help situations where the controllers are doing the spindown themselves without unRAID knowing it (i.e. Areca, Adaptec, etc.)

 

A checkbox in the drive config page "Drive returns SMART data when spun down" would be nice too ;)
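A resync process like that could be quite small. Here is a sketch of the idea (the state directory, the up/down encoding, and the poll_once name are all mine; a real implementation would notify the GUI instead of just echoing):

```shell
#!/bin/bash
# Poll each drive's true power state via smartctl's exit code and report any
# drive whose state differs from the last poll: the "resync" the GUI needs.
STATE_DIR=/tmp/spinstate
mkdir -p "$STATE_DIR"

poll_once() {
  for dev in "$@"; do
    if smartctl -n standby -i "$dev" >/dev/null 2>&1; then
      new=up          # exit 0: drive answered, so it is spun up
    else
      new=down        # exit 2 (standby) or an error: treat as spun down
    fi
    f="$STATE_DIR/$(basename "$dev")"
    old=$(cat "$f" 2>/dev/null)
    if [ "$new" != "$old" ]; then
      echo "$dev changed: ${old:-unknown} -> $new"   # notify the GUI here
    fi
    echo "$new" > "$f"
  done
}

poll_once /dev/sd?    # run this from cron every minute or two
```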

 


One note.... later versions of smartctl will tell you the spin state of a drive if you use the -n standby option: if the drive is spun down you get a message that the drive is in standby mode, and if it is not, you get the data.

 

This has 2 advantages over checking spin state via hdparm:

 

- hdparm doesn't work properly on some controllers.

- with hdparm you have to do 2 steps (check hdparm, then run smartctl).

 

And one more future advantage.... if you get to the point where you can properly query Areca or other HW RAID controllers via smartctl -d custom parameters, you can also get spin status from them, even though hdparm won't work.

 

Plus, if you had a process that checked spin status every so often to resync the GUI indicators, it would help situations where the controllers are doing the spindown themselves without unRAID knowing it (i.e. Areca, Adaptec, etc.)

 

A checkbox in the drive config page "Drive returns SMART data when spun down" would be nice too ;)

Heartily agree with the above, on all points!


Is there any way I can disable dual parity?  I upgraded to the latest beta and it shows parity as invalid with no disk assigned to the parity 2 slot, but I don't want to use dual parity.

 

Am I missing a setting somewhere, or is this a bug / expected behaviour?

I think your parity must be invalid for some other reason. A screenshot of Main - Array Operation might tell, or just post your diagnostics.

Is there any way I can disable dual parity?  I upgraded to the latest beta and it shows parity as invalid with no disk assigned to the parity 2 slot, but I don't want to use dual parity.

 

Am I missing a setting somewhere, or is this a bug / expected behaviour?

 

There's something funky with your config: on the dashboard view there should be an empty parity2 column, parity should be just "parity", not "parity 0", and the cache disk should appear as "cache", not "cache 30".


I'm having issues running the 6.2.0 beta 23. I've installed a TBS DVB-S2 card to make use of the tvheadend docker, so I used the unRAID DVB plugin to install the TBS 6.2.0 beta 23 image.

 

Upon reboot, my Docker and VM tabs had disappeared. I've attempted reinstalling the stock beta 23 and even downgrading to 6.1.9, with multiple reboots, but still the same.

 

I currently have the TBS 6.2.0 beta 23 image installed.

 

I've attached my diagnostics. Help please :)

server-diagnostics-20160624-1210.zip


Was playing around with VMs on my crappy hardware, and had the system just up and reset on me.  Not worried about this.  I have cheap hardware.

 

But, upon restart the system did NOT start up a parity check.  Fix Common Problems knew that it was an unclean shutdown (it handles this in its own way), but unRAID didn't realize it for some reason.

 

I've had this same behavior on the last couple of betas; I think I mentioned it at the time.

Looking at the console at bootup, I believe I also saw the entry for the dirty bit being detected (or whatever it flags) on the USB drive, yet no parity check once booted, just "parity is valid".

I have not had to perform a forced reset on this release, however.


I've posted this in other places but is it possible to get 6.2 updated so that the plugins page sorts in an acceptable manner?

I figure this might be the time to get this fixed.

 

Please see this: https://lime-technology.com/forum/index.php?topic=49780.0

 

and dmacias's comment:

A newer version of tablesorter and sortReset: true and widgets['saveSort'] would allow this. Save sort would save your choice between page refreshes and sort reset will reset the sort order to default on the third click. That's what I use in my plugins with tables.

 

Thank you!

 

 


There is a minor error in the "Help" system.  Go to 'Shares', 'User Shares', and click on any User Share.  Look at the 'NFS Security Settings' tab and you will find no 'Help' for those settings.  There is 'Help' for both the 'Share Setting' and 'SMB Security Setting' tabs.

 

Looks like a minor oversight that should probably be easy to correct. 


Is there any way I can disable dual parity?  I upgraded to the latest beta and it shows parity as invalid with no disk assigned to the parity 2 slot, but I don't want to use dual parity.

 

Am I missing a setting somewhere, or is this a bug / expected behaviour?

I think your parity must be invalid for some other reason. A screenshot of Main - Array Operation might tell, or just post your diagnostics.

unraid-master-diagnostics-20160624-1727.zip


Small suggestion: add a little space between the parity disks and the data disks in the device assignments.  It has happened to me more than once, after doing a new config and planning to use single parity, that I assigned a data disk to the parity2 slot.  No data loss, because it was on my test server, and I would be more careful on a production server, but IMO as more people move to v6.2 it's just a matter of time before this happens to someone using single parity after a new config.  A small and easy change like the mock-up below, or any other way of better separating parity from data disks, would be enough to avoid this.

mockup.png.7cbc4125fdf48db8d2cdbf4c099d49b6.png


I just upgraded to b23 from 6.1.9. I am not assigning a second parity drive right away, and I have something I don't see any other posts on.

 

I don't have a way to initiate a parity check. I've been looking for a setting I missed, but I'm lost.

 

ADDED:

Also, the main tab shows Parity2 as unassigned after starting up. Am I supposed to do a new config after upgrading?

 

Also, the start array button says the array will be unprotected, but if I start I get a green ball. So....?


Was playing around with VMs on my crappy hardware, and had the system just up and reset on me.  Not worried about this.  I have cheap hardware.

 

But, upon restart the system did NOT start up a parity check.  Fix Common Problems knew that it was an unclean shutdown (it handles this in its own way), but unRAID didn't realize it for some reason.

 

I've had this same behavior on the last couple of betas; I think I mentioned it at the time.

Looking at the console at bootup, I believe I also saw the entry for the dirty bit being detected (or whatever it flags) on the USB drive, yet no parity check once booted, just "parity is valid".

I have not had to perform a forced reset on this release, however.

 

I have the same issue.

This topic is now closed to further replies.