unRAID Server Release 6.1-rc2 Available



I like the idea of adding dual parity to the Pro version.  It would give some users a reason to upgrade.  However, I have no problem paying for this feature if the price is reasonable.  I love what LT has done and will continue to support them.

 

See, I'm happy with my Plus license and the 12 disks. I'll never need more than that. I'd rather it either be bolted to that license, or be a reasonably priced upgrade if it goes the paid route. I'd like to use it without getting a full-blown Pro license.

 

I must admit one thing at the beginning of this post.  It has been forty years since I took a one-semester course in data theory in grad school back at Ohio State, and I have forgotten more than I can remember from that course.  But once you get beyond detecting and correcting a single error, the mathematics of the problem becomes much more complex.

 

I would imagine that the cost of upgrading the hardware to implement a double-correcting scheme may well outweigh the cost of upgrading to a Pro license.  There will be the cost of at least one more hard drive, and many servers may also need a CPU upgrade to deliver satisfactory performance.

 

If you want some confirmation of the issues involved, just Google terms like error detection and correction.  Take a look at the math behind these schemes.  While there may be a simple and cheap answer to how to implement a 'dual parity' scheme (and I hope there is), I suspect that this feature will involve much more than most of us can visualize at this point.
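
For anyone curious what "beyond one error" looks like in practice, here is a minimal Python sketch (toy byte values, not unRAID's actual on-disk layout) of why a single XOR parity can rebuild exactly one missing disk, and why a second parity drive has to be computed from mathematically independent equations:

```python
# Toy illustration: XOR (P) parity can rebuild exactly one missing disk.
# The byte values are made up; this is not unRAID's real on-disk layout.

data = [0x3C, 0xA7, 0x55]           # three "data disks", one byte each
p = 0
for d in data:
    p ^= d                          # P parity = XOR of all data bytes

# Lose any single data disk: XOR of P with the survivors restores it.
missing = 1
rebuilt = p
for i, d in enumerate(data):
    if i != missing:
        rebuilt ^= d
assert rebuilt == data[missing]

# Lose two disks and XOR alone gives one equation with two unknowns:
# P = d0 ^ d1 ^ d2 cannot be solved for both missing bytes.  A second
# parity (Q) has to be computed differently (e.g. Reed-Solomon over
# GF(2^8)) so that the two equations are independent.
```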

Link to comment


 

Agreed. That being said, I do like the prospect of having that extra layer of redundancy. While the important data on my box is backed up, my media isn't, since at this point backing it up just isn't feasible. I could "live without it" in the event of a serious failure, but if I have the option of making that set of data more redundant, reducing (not eliminating, obviously) the chances of data loss if more than one of my drives decides to bite the dust before I can rebuild, then I like the thought of having a higher disk-failure tolerance.

 

As far as cost goes, I'm already planning on the extra drive, SATA card, and enclosure I'm going to need, as I've filled my current box. A $50 (or whatever) upgrade to Pro makes that just a bit harder. A smaller fee to get just that feature would be doable, and obviously including it in Plus and Pro would be nice, but I get why that might not happen at the Plus level. As far as error correction goes, that would be nice, but I have no plans to upgrade silicon right now and I can get by without it.

Link to comment

Or make it a Pro-only feature!

 

Certainly possible, but I'd hope not.  With the expanded device limits, Plus arrays are large enough that it's a good idea for them to have dual fault-tolerance as well.    Even a Basic user may want to have that added level of fault tolerance ... especially if they're choosing to roll the dice and not maintain backups.

 

 

In my opinion, dual parity is not really that necessary.  There have been very few times when two disks have failed at the same time or even during a rebuild.  And I would bet that most of those were in systems where the sysop had not been watching things and performing periodic parity checks.  In those cases, dual parity would probably not have prevented an issue, because no action would have been taken until after a third disk had failed.

 

With the size of modern arrays, the likelihood of a 2nd failure during a rebuild is actually fairly high.  In fact, some data integrity academics have warned that RAID-6 is rapidly approaching the point where it's not enough protection for the size of modern RAID arrays.    But it's certainly far better than single fault tolerance.    ... as for waiting for a 3rd drive to fail => I'd certainly hope not.    That'd be the equivalent of not bothering to replace a failed drive today until a 2nd drive failed ... at which point it's no longer possible to do a rebuild.
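
As a back-of-the-envelope illustration of that risk (using the 1-per-10^14-bits unrecoverable-read-error rate commonly quoted for consumer drives and a hypothetical 12-disk array of 4TB drives; both numbers are assumptions, so plug in your own):

```python
# Rough odds of hitting an unrecoverable read error (URE) while rebuilding
# one failed disk, i.e. while reading every surviving disk end to end.
# The URE rate and array size below are assumptions, not measurements.
import math

ure_rate = 1e-14          # URE probability per bit read (typical consumer-drive spec)
disk_tb = 4               # size of each surviving disk, in TB
surviving_disks = 11      # disks that must be read in full during the rebuild

bits_read = surviving_disks * disk_tb * 1e12 * 8
p_ure = 1 - math.exp(-ure_rate * bits_read)       # Poisson approximation
print(f"Chance of at least one URE during the rebuild: {p_ure:.0%}")
# ~97% with these numbers -- exactly the gap a second parity drive covers.
```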

 

 

The biggest advantage of dual parity is that the exact disk with the issue can be quickly determined, provided there is only a single failure point.

 

It's mathematically possible to do this, but whether or not this capability will be included in the UnRAID implementation isn't clear.
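
For what it's worth, here is a toy sketch of how P and Q syndromes can point at a single disk that returned bad data. This is generic P+Q arithmetic over GF(2^8), offered only as an illustration of the idea, not a description of what unRAID will actually ship:

```python
# Toy P+Q example: with one silently corrupted data disk, the two syndromes
# identify which disk it was.  Generic GF(2^8) math, not unRAID's code.

def gf_mul(a, b):                   # multiply in GF(2^8), polynomial 0x11d
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

data = [0x3C, 0xA7, 0x55, 0x0F]                 # toy data bytes, one per disk
P = Q = 0
for i, d in enumerate(data):
    P ^= d                                      # P = plain XOR of the data
    Q ^= gf_mul(gf_pow(2, i), d)                # Q = sum of g^i * d_i

bad = data[:]
bad[2] ^= 0x5A                                  # disk 2 silently returns bad data

Sp, Sq = P, Q                                   # syndromes from stored P/Q and read data
for i, d in enumerate(bad):
    Sp ^= d
    Sq ^= gf_mul(gf_pow(2, i), d)

# A single bad data disk i means Sp = error and Sq = g^i * error,
# so the index where g^i * Sp matches Sq is the culprit.
which = next(i for i in range(len(data)) if gf_mul(gf_pow(2, i), Sp) == Sq)
print(f"Corrupted disk index: {which}")         # -> 2
```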

Link to comment

It's annoying when I get 3 parity errors per parity check and the SMART reports for the drives indicate all is well.

Does this happen every time you do a parity check?

 

It doesn't happen every time, but I do get a couple parity errors almost every time.  It's been rare that I get 0 errors after running a parity check.

Something is wrong with your server. Power or RAM would be my first concerns.

 

When was the last time you ran a 24 hour memtest?

 

My strangest problems have always been power supply related.

 

The last time I ran a 24 hour memtest was about a year ago when I assembled the server.  The 4 DIMMs are also getting old (~5 years IIRC).  I should run a memtest.

 

The power supply was new and kept under low load (~40W total power consumption for my server and it frequently enters hibernate).

Link to comment

A question was asked about performance. The short answer is we don't know yet. We know it will not be faster than single-drive parity (unless Tom finds a new optimization), but Tom has said it won't be much slower. Just as a parity check doesn't get longer every time you add a disk (if that disk is the same size and speed as another disk in the array), writing to 2 parity drives in parallel is the same speed as writing to one. And the mathematics for Q parity is very efficient. So the conjecture is that speed will be similar to, or a little slower than, single parity.

 

I guess we all know that the largest cost of dual parity is the second parity drive, which can easily cost over $200 depending on parity size. It will be up to Tom to answer about licensing cost, which I expect will be considerably less than that, if there is a cost at all.

Link to comment

... And the mathematics for Q parity is very efficient ...

 

Everything's relative.  Compared to a basic parity calculation, the polynomial math for Reed-Solomon is MUCH more complex.  I wrote assembly language routines to do this ~30 years ago (for a special-purpose embedded system), and it was indeed fairly fast, but still many times slower than a simple XOR parity calculation.    That's why hardware RAID controllers use special-purpose RAID chips for these calculations instead of the host CPU (and why the less expensive controllers don't support RAID-6).
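
To give a feel for the extra per-byte work, here is a rough Python sketch of generating P versus a table-driven Q over GF(2^8). Real RAID-6 code uses hand-tuned, vectorised routines, so this only shows the shape of the computation, not its actual cost:

```python
# P is one XOR per byte; Q multiplies each byte by a per-disk constant in
# GF(2^8), typically via log/exp lookup tables in software implementations.
# Illustrative only -- real RAID-6 code is vectorised and far more careful.

# Build log/exp tables for GF(2^8), generator 2, polynomial 0x11d.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]           # wrap so LOG[a] + LOG[b] never overflows

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]     # two lookups, one add, one lookup

def p_and_q(stripes):
    """stripes: equal-length byte strings, one per data disk."""
    n = len(stripes[0])
    p, q = bytearray(n), bytearray(n)
    for disk, chunk in enumerate(stripes):
        coeff = EXP[disk]                       # g^disk, the per-disk Q constant
        for j, byte in enumerate(chunk):
            p[j] ^= byte                        # P: a single XOR
            q[j] ^= gf_mul(coeff, byte)         # Q: table multiply plus XOR
    return bytes(p), bytes(q)

p, q = p_and_q([b"\x3c\xa7", b"\x55\x0f", b"\x10\x20"])
print(p.hex(), q.hex())
```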

 

Having said that, with the speed of modern processors, most systems shouldn't have a problem doing the calculations in at least near real time, since the speed of disks is very slow compared to the processor capabilities.    But the complexity is why I'm not sure LT will implement a "which bit failed" feature ... at least not initially.  The math's not difficult to do that; but it would be relatively time-consuming, so it's not likely they'll do it "real time" during a parity check => a dedicated routine to isolate it after the check is more likely ... which would be a separate utility.

 

Link to comment

Based on the whitepaper I read, RAID-6 is implemented with mostly XORs (XORing different things) as well; 3-drive parity is where it gets into more complex math. Like I said, there's a good chance it will perform very similarly to single-drive parity. But we have to wait and see.

Link to comment

I said earlier that the biggest cost of dual parity is the new drive, but I may back away from that, because many users have that drive already. Many keep a spare drive either in their server (warm spare) or offline (cold spare), precleared and ready to go in case of a failure. Dual parity would put that drive to much better use as Q parity, providing, in essence, the same function (restoring redundancy after a disk failure), only immediately and with no user action. Of course it would be getting wear and tear, but that's still better than having the drive sit on the sidelines for a very long time waiting to get in the game. And you would not have to be in quite such a rush to replace the failed drive, knowing you are still protected.

Link to comment


 

I think it is important that Tom find a way to add a new optimization.

 

Hard drives are getting bigger every year, and in five years' time 20TB disks may well be out.

 

 

Link to comment

I doubt it. Maybe a small tweak. If drives get faster and interfaces get faster, writes will get faster, although there is still the full-rotation delay in the write process.

 

That's assuming the mechanics of the drives remain the same. There's always SSD, and possibly HoloCubes.

Link to comment

In RC1, I tried to install the OE virtual machine. When I clicked the download button, it changed to "Downloading". I just upgraded to RC2 and it still says downloading. How can I clear this? I've tried 2 browsers and cleared cache. Nothing has been downloaded to the specified download location.

 

splnut,

 

Sorry for the delay in getting back to you on this.  Can you confirm this is still an issue?  If so, please try this: delete the openelec.cfg file from your flash device.  It's located under the /config/plugins/dynamix.vm.manager folder.  After deleting it, revisit the Add VM page and select the OpenELEC template again from the drop-down.
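
If you'd rather script the cleanup, something like the sketch below should do it. It assumes the flash device is mounted at /boot, which is the usual unRAID mount point:

```python
# Remove the stale OpenELEC download-state file, assuming the flash device
# is mounted at /boot (the usual unRAID mount point).
import os

cfg = "/boot/config/plugins/dynamix.vm.manager/openelec.cfg"
if os.path.exists(cfg):
    os.remove(cfg)
    print(f"Removed {cfg}")
else:
    print(f"{cfg} not found")
```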

 

I haven't seen anyone else report this issue yet.  Let me know if this doesn't work for you.

Link to comment

Since upgrading to RC2, I can't access my cache drive directly via file explorer, e.g. \\tower\cache.

 

My docker applications work, so the OS is able to access it without an issue.  I'm also able to copy to user shares that specify cache disk only.

 

Is this expected behavior with the new release?

Link to comment


Yes. See here.
Link to comment


 

Thanks for the response. I deleted the config file and tried to download again. Same result. I also stopped Docker in case something there was causing an issue. Same response.

 

Any logs I can provide?

Link to comment


Diagnostics would be good. Are you even seeing the progress indicator that displays a percentage complete?  Can you screenshot where it's frozen?

Link to comment


 

No progress indicator. Just says "Downloading".

 

Syslog - http://1drv.ms/1JQvRa4

 

Scren.jpg

 

Link to comment

I have the situation that a number of my disks do not spin down. In the GUI this is presented as spun-down state WITH temperature (see attachment).

 

Forcing all disks to spin-up and then forcing all disks to spin-down doesn't resolve the situation (I verify the actual disk state with hdparm).

 

Does anyone else see the same in v6.1-rc2?

I can confirm I'm seeing the same.

While the unRAID GUI shows all disks spun down, with a temperature on some of the drives and not on others, unmenu's myMain shows that none of them are spinning.

 

While this is completely unrelated to this issue, may I ask why myMain is able to show disk temperature even when a disk is spun down, while unRAID's GUI cannot? I vaguely remember that the ability to read disk temperature while a drive is spun down depends on the hard disk model/manufacturer. In my case I have all WD drives, which DO support reading temps while the drive is not spinning, so it would be nice to have the unRAID GUI support that too. Is this possible? As a side note: to get a temperature reading, I have to add "smartopt -A" to some of the drives' parameter settings, depending on which controller they're connected to.
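
For reference, smartctl has a "-n standby" option that skips the query when a drive is already in standby, which is one way a tool can poll temperatures without forcing spin-ups. Below is a rough sketch of that approach; it is not what myMain or the unRAID GUI actually do, and attribute 194/190 is simply the usual place temperature is reported:

```python
# Poll a drive's temperature while trying not to force a spin-up, using
# smartctl's "-n standby" (skip the query if the drive is already asleep).
# Whether a SMART query wakes a given drive is model/controller dependent;
# this is only an illustration, not what myMain or the unRAID GUI do.
import subprocess

def drive_temp(dev):
    out = subprocess.run(["smartctl", "-n", "standby", "-A", dev],
                         capture_output=True, text=True)
    for line in out.stdout.splitlines():
        fields = line.split()
        # Attribute 194 (Temperature_Celsius) or 190 (Airflow_Temperature_Cel)
        if fields and fields[0] in ("194", "190"):
            return int(fields[9])       # raw value column
    return None                         # drive in standby (skipped) or no temp attribute

print(drive_temp("/dev/sdb"))
```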

Link to comment


Have you tried to change the destination folder to one without a space in the name?
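
(If that turns out to be the cause, the usual failure mode is an unquoted path being spliced into a shell command, where the space splits it into two arguments. The snippet below is a hypothetical illustration of that pattern and of the standard shlex.quote() fix, not a statement about what the dynamix.vm.manager plugin actually does.)

```python
# Hypothetical illustration: how a space in an unquoted destination path can
# break a shell command, and how quoting fixes it.  The path, URL, and wget
# invocation here are placeholders, not the plugin's real download command.
import shlex

dest = "/mnt/user/VM images/openelec"           # note the space
url = "http://example.com/openelec.img"         # placeholder URL

broken = f"wget -O {dest}/openelec.img {url}"                   # 'VM' and 'images/...' split apart
fixed  = f"wget -O {shlex.quote(dest + '/openelec.img')} {url}" # path passed as one argument

print(broken)
print(fixed)
```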

Link to comment


 

I tried multiple destinations, but they all had spaces. This fixed the issue.

 

Thanks

Link to comment
