unRAID Server Release 6.1-rc2 Available



I was thinking of making it relative to $_SERVER['DOCUMENT_ROOT'], which is /usr/local/emhttp.

 

In the openvpn case, the 'openvpnserver' script would be moved from /etc/rc.d to /usr/local/emhttp/plugins/openvpn/scripts, and its form would look like this:

 

<form name="stop_openvpnserver" method="POST" action="/update.htm" target="progressFrame">
  <input type="hidden" name="cmd" value="plugins/openvpn/scripts/openvpnserver stop">
  <input type="submit" name="runCmd" value="Stop">
</form>

 

I like the idea of having all plugin dependencies contained within the plugin folder itself, but it may break some conventions.

 

When a script is called to start or stop a service (e.g. as part of the array event functionality), convention recommends that such a script live under /etc/rc.d; it would now be under plugins/plugin-name/scripts. Of course that's not a show stopper, but it is something to be aware of.
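
For illustration, a relocated script could keep the familiar rc.d-style start/stop interface, so both the form above and any array event hooks can call it the same way. This is only a sketch; the config path and openvpn options below are assumptions, not taken from the actual plugin:

#!/bin/bash
# Hypothetical /usr/local/emhttp/plugins/openvpn/scripts/openvpnserver
# (config location and openvpn options are examples only)
CONF=/boot/config/plugins/openvpn/server.conf

case "$1" in
  start)
    /usr/sbin/openvpn --daemon --config "$CONF"
    ;;
  stop)
    killall openvpn 2>/dev/null
    ;;
  restart)
    "$0" stop
    sleep 1
    "$0" start
    ;;
  *)
    echo "Usage: $0 start|stop|restart"
    exit 1
    ;;
esac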

 

Link to comment

In my case, that's back where I started... lol
Link to comment

So maybe you were right all the time  ;)

Link to comment

 


When a script is called to start or stop a service (e.g. as part of the array event functionality), convention recommends that such a script live under /etc/rc.d; it would now be under plugins/plugin-name/scripts. Of course that's not a show stopper, but it is something to be aware of.

 

That is a de facto convention of convenience, but as long as we are going to support plugins, we have to be more security conscious.  In the past I didn't care too much about this kind of stuff because unRaid has traditionally been installed in "trusted" networks.  But "The Times They Are a-Changin'".

 

Note that restricting the root of where update.php/update.htm can invoke scripts helps, but is not perfect.  Plugins in general have to be scrutinized because they could still do something like this:

 

GET update.htm?cmd=plugins/myplugin/scripts/remove-stuff+myfile

 

And then inside /usr/local/emhttp/plugins/myplugin/scripts/remove-stuff we have:

 

#!/bin/bash

rm -r $1

 

But at least this is a bit more obvious.
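
To make that scrutiny concrete, a plugin author can at least validate the argument inside the script itself. Here is a hypothetical hardened version of the same remove-stuff script (the base directory is an assumption, purely for illustration):

#!/bin/bash
# Hypothetical hardened remove-stuff: only delete paths inside the plugin's
# own data directory, and reject absolute paths or anything containing "..".
BASE=/boot/config/plugins/myplugin/data   # example location, not a real convention

case "$1" in
  ''|/*|*..*)
    echo "refusing suspicious argument: $1" >&2
    exit 1
    ;;
esac

rm -r -- "$BASE/$1"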

 

 

Link to comment

What's coming in 6.2?  After 6.1 'stable' is released, the next significant feature for 6.2 will be double-parity protection using a Reed-Solomon algorithm for the second "parity" disk (a.k.a. "P+Q").  We plan on a brief closed beta and then a (hopefully) brief public beta.  This will be a true beta, meaning use it on a test server because data could be at risk - though not really, because I always write bug-free code  ::)  More on this later....

 

HUGE news guys, great momentum happening with unRAID.

 

Thought: should this really be a 6.x point release? Why not 7.0, to help would-be beta testers clearly see that this is a completely new idea and a beta?

 

We'll give that some thought; we can then make it a paid upgrade as well.

 

Or make it a Pro only feature! 

 

In my opinion, dual parity is not really that necessary.  There have been very few times when two disks have failed at the same time or even during a rebuild.  And I would bet that most of those were in systems where the sysop has not been watching things and performing periodic parity checks.  In those cases, dual parity would probably not have prevented an issue, because no action would be taken until after a third disk had failed.  The biggest advantage to dual parity is that the exact disk with the issue can be quickly determined, provided there is only a single failure point.

 

If I can use dual parity to tell me which disk is problematic, I'd like to use it in the Plus version.  It's annoying when I get 3 parity errors per parity check and the SMART reports for the drives indicate all is well.
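
For background, the reason a second parity can point at a single bad disk is visible in the standard RAID-6 style "P+Q" construction (whether unRAID's implementation will expose it this way is not stated here):

P = D0 xor D1 xor ... xor Dn-1
Q = g^0*D0 xor g^1*D1 xor ... xor g^(n-1)*Dn-1   (multiplication in GF(2^8), g a generator element)

If exactly one disk Di holds a wrong value differing by e, the recomputed syndromes are Sp = e and Sq = g^i * e, so g^i = Sq / Sp and the index i of the bad disk falls out. If the P disk itself is the one that's wrong, Sq stays zero while Sp does not (and vice versa for Q), which is exactly the "it was parity, not data" case discussed above. With single parity, a non-zero Sp only tells you that something, somewhere, disagrees.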

Link to comment

Even if we don't get official vanilla support for it, though, with the community we've got, someone will probably be able to come up with a plugin solution.

 

I don't think so. If the technical choice limetech makes allows pinpointing the offending drive, I have no doubt they will show it.  Potentially the chosen path (remember, there were not a lot of paths) does not make this possible, in which case a plugin will not be able to do it either.

 

I must say, however, that I have been running unraid for many years and have never had a parity error, with the exception of when I had to reset the array after a dirty shutdown. In those cases parity needs to be rebuilt to reflect the disk state, so there is no need to know which disk is causing problems.

 

Maybe others have these more frequently..

 

I can, however, imagine that it is better to see parity as the way to get back failed disks (and soon to get back two: YEAH!). It is my feeling that we need some kind of "bitrot protection" to see if files have changed where they should not have. I can see that as a future expansion to unraid.

Link to comment

If I can use dual parity to tell me which disk is problematic, I'd like to use it in the Plus version.  It's annoying when I get 3 parity errors per parity check and the SMART reports for the drives indicate all is well.

 

SMART reports aren't related to parity; they are two very different things.  While it's possible that a drive issue or a power issue could cause both, it would be rather coincidental to see both a SMART issue and a parity error at the same time.

 

In addition (and not everyone will agree!), if you get parity errors, the odds are extremely high that it was NOT a data drive, but the parity drive that was wrong.  It always has been that way, and even though both drive capacities and platter densities are increasing, I believe it will still be that way for some time to come.  We do keep talking about bitrot and other causes, but by far, in my opinion, the main reason for a parity correction is an interrupted write.  Data was written, but the parity write was interrupted, by power outage or crash or ... (so the data is correct, parity is not).  User error is another (rare) possibility (writing to the drive in a way that is not parity protected), but again the data is correct, the parity is not.  I do believe that bit rot is more than just a theoretical thing, but try looking for actual reports of proven bit rot among us.  I'm sure they exist, but it is still very rare.

 

If the coming dual parity implementation does happen to indicate the specific drive that has wrong bits, I am expecting it to indicate that it's one of the parity drives over 99% of the time, perhaps over 99.9% of the time.  But that's just my opinion.  And that's why I have never bothered with a non-correcting parity check, but I know not everyone agrees.  And I know I shouldn't touch off that discussion all over again!

Link to comment

I have the situation that a number of my disks do not spin down. In the GUI this is presented as spun-down state WITH temperature (see attachment).

 

Forcing all disks to spin-up and then forcing all disks to spin-down doesn't resolve the situation (I verify the actual disk state with hdparm).
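
(For reference, the kind of check meant here is presumably something like the following; the device name is just a placeholder:)

# Ask the drive for its real power state; "active/idle" means spinning,
# "standby" means spun down.
hdparm -C /dev/sdX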

 

Anyone else has the same observation in v6.1rc2 ?

[Attachment: disks-not-spun-down.png]

Link to comment

I see the same behavior on my server.

 

Plus, my cache disk/pool never spins down any more. Right now I'm shutting down some dockers to see whether they are forcing these disks to stay spun up; this was OK on 6.0.

 

//Peter

[Attachment: disks.JPG]

Link to comment

It's annoying when I get 3 parity errors per parity check and the SMART reports for the drives indicate all is well.

Does this happen every time you do a parity check?

 

It doesn't happen every time, but I do get a couple parity errors almost every time.  It's been rare that I get 0 errors after running a parity check.

Link to comment

 

It sounded like the feature would support pinpointing which drive had the incorrect bits.  It's good to know that the parity drives usually contain the wrong bit instead of the data drives.

Link to comment


That's his opinion... Granted, it should carry more weight than mine here, but I'm not convinced.  Personally, the only time I have lost data with UnRAID was when I had a failing data disk and did a correcting parity check...  So in my anecdotal experience, it is very ill-advised to blindly run a correcting parity check without having a solid reason to expect corrupted parity but not corrupted data.

 

I don't experiment with virtualization, plugins, etc.  My UnRAID server sits in the basement on a big UPS and just serves files.  It only gets tinkered with when I add a disk or there is a new stable release, so uptime has historically been very, very, VERY long  ;D  Barring a bug in UnRAID itself, there should not be any interrupted writes!

Link to comment

It doesn't happen every time, but I do get a couple parity errors almost every time.  It's been rare that I get 0 errors after running a parity check.

Something is wrong with your server. Power or RAM would be my first concerns.

 

When was the last time you ran a 24 hour memtest?

 

My strangest problems have always been power supply related.

Link to comment

Agree. The only time I have ever had a parity error was when I had a dirty shutdown. Zero parity errors should not be rare. Non-zero parity errors should be investigated.
Link to comment

All disks with a temperature shown are in state idle; the others are in state standby.

My cache disk/pool is now spun down, but it was also in idle mode even when the GUI showed it as spun down.

//Peter

Link to comment

I've been running my server for about 1 year now, and I don't think I've ever had a parity error. Having recurring errors would point me to believing that something is not right with the server, and my guess would be memory or power supply (if the disks aren't showing anything weird in the SMART reports).

Link to comment

I have the situation that a number of my disks do not spin down. In the GUI this is presented as spun-down state WITH temperature (see attachment).

 

Forcing all disks to spin-up and then forcing all disks to spin-down doesn't resolve the situation (I verify the actual disk state with hdparm).

 

Anyone else has the same observation in v6.1rc2 ?

I see the same thing, but I am running version 6.0.1.  The problem popped up for me (I believe) in the 6.0.0 final release.  I do not recall seeing the problem in the RCs.  I could be wrong on that, but I know it exists for me in 6.0.1.  Same story here: spinning up all drives and then spinning them all down does not resolve the situation.

 

Perhaps as mentioned above, it could be related to a particular docker.

 

EDIT: In my case, it is related to S3 sleep.  Sleeping the server messes up disk status and temp reporting.

[Attachment: Drive_Temp_Spun_Down.png]

Link to comment

I like the idea of adding dual parity to the Pro version.  It would give some users a reason to upgrade.  However I have no problem paying for this feature if the price is reasonable.  I love what LT has done and will continue to support them.

 

See I'm happy with my Plus license and the 12 disks. I'll never need more than that. I'd rather it either get bolted to that license, or be a reasonable cost upgrade if it goes the route of paid. I'd like to use it without getting a full blown Pro license.

Link to comment

Wow, dual parity is big!! This should be v7 equivalent and not a puny .x release :)

 

I am intrigued by what this can achieve. Will this provide data recovery for any two disks failing (2x data, data+parity, or 2x parity)? How will the read/write speeds be affected?

 


 

Link to comment
