unRAID Server Release 6.2.0-beta18 Available



 

So, does dual parity allow single bit error detection and correction?

 

Ohhh, good question.

 

I think the question(s) has to be expanded even further.  Does it provide for single-bit error detection as to the true location of the fault (i.e., the disk with the failure on it), and provide for correction of said error?  In the case of two failures, does it allow the ability to rebuild both failures, and what is the level of error identification?  What is the rebuild procedure in the case of two devices failing? (Replace one disk at a time with two rebuilds, or replace both disks with a single rebuild?)

 

Remember that the more failures there are in an array (thinking of disks here), the more difficult it is going to be to figure out which ones are bad.

 

Curious as well. That was one of the things Btrfs brought to the table that I was hoping would eventually be implemented, along with deduplication.

 

From what I can see, and I'm sure LT will correct me if I'm wrong, it doesn't identify the disk with the error. You can, however, know if a sync error is on a disk or on parity: sync errors are detected as P (parity), Q (parity2), or PQ (both). So if on a parity check the log indicates a PQ sync error, it's a disk; if a P or Q sync error, it's that specific parity.

 

Users need realistic expectations here.  Each parity drive provides a single bit of information across the array at any given bit position.  The first parity bit is a simple even-parity summation; the second is a fancier calculation, but still a single bit.  Together that's 2 bits only.  Either one can be used to detect a failure, but that's about it.  A failure is detected because the calculation predicts one bit state but the other is found instead; no other information can be determined from a 2-state bit.  To know which data bit is wrong, you would have to store address information with it (with EACH bit), perhaps in a compressed form, but still taking up a considerable number of extra bits.  Say it takes 30 extra bits for a 30-drive array, and your first parity drive is 8TB; then your second parity drive has to be 8TB x 30 = 240TB!  You might as well mirror the entire array.  With only 2 bits, you can't derive info that just isn't there.

 

Having 2 bits does help though, in that as johnnie.black said, you now know if it's one of the parity drives at fault or a data drive (but not which one).  If it's only one parity failure, then that parity drive is wrong and can be corrected.  You know which one it is, because it's the one that failed, at that bit position.  If both parity bits are wrong, then you know the bad bit is on a data drive, but you don't know which one, and therefore it can't be corrected.
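To make that P / Q / PQ classification concrete, here is a minimal Python sketch.  It assumes a textbook RAID-6-style pair (P = XOR of the data bytes, Q = a GF(2^8) weighted sum with generator 2); that construction is an assumption for illustration, not necessarily unRAID's actual implementation, but the classification logic is the one described above: only P mismatching points at parity1, only Q at parity2, and both at some data disk whose identity the two parities alone cannot give you.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the common RAID-6 polynomial 0x11D."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def gf_pow(base, exp):
    """base ** exp in GF(2^8)."""
    r = 1
    for _ in range(exp):
        r = gf_mul(r, base)
    return r

def compute_p(data):
    """P parity: plain XOR of the byte at this position on every data disk."""
    p = 0
    for d in data:
        p ^= d
    return p

def compute_q(data):
    """Q parity: GF(2^8) weighted sum, the byte on disk i weighted by 2**i."""
    q = 0
    for i, d in enumerate(data):
        q ^= gf_mul(gf_pow(2, i), d)
    return q

def classify_sync_error(data, stored_p, stored_q):
    """Report which parity bytes disagree, as described in the posts above."""
    p_bad = compute_p(data) != stored_p
    q_bad = compute_q(data) != stored_q
    if p_bad and q_bad:
        return "PQ: some data disk is wrong (P/Q alone don't say which one)"
    if p_bad:
        return "P: parity1 is wrong and can be corrected"
    if q_bad:
        return "Q: parity2 is wrong and can be corrected"
    return "in sync"

# Corrupting one data byte makes both recomputed parities disagree -> "PQ".
stripe = [0x11, 0x22, 0x33, 0x44]
p, q = compute_p(stripe), compute_q(stripe)
bad = list(stripe)
bad[2] ^= 0x01
print(classify_sync_error(bad, p, q))

With this kind of Q, a corrupted data byte always disturbs both P and Q (there are no zero divisors in the field), which is why a PQ error points at the data disks as a group rather than at either parity.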


Parity sync just completed. Now running with parity1 and parity2 (both WD RED 6TB).

 

Parity sync completed without an issue, but the status that came back was "cancelled". Since I am still running all my plugins, that could also be the notification plugin.

My notification also said cancelled.

Users need realistic expectations here.  Each parity drive provides a single bit of information across the array at any given bit position.  The first parity bit is a simple even-parity summation; the second is a fancier calculation, but still a single bit.  Together that's 2 bits only.  Either one can be used to detect a failure, but that's about it.  A failure is detected because the calculation predicts one bit state but the other is found instead; no other information can be determined from a 2-state bit.  To know which data bit is wrong, you would have to store address information with it (with EACH bit), perhaps in a compressed form, but still taking up a considerable number of extra bits.  Say it takes 30 extra bits for a 30-drive array, and your first parity drive is 8TB; then your second parity drive has to be 8TB x 30 = 240TB!  You might as well mirror the entire array.  With only 2 bits, you can't derive info that just isn't there.

 

Having 2 bits does help though, in that as johnnie.black said, you now know if it's one of the parity drives at fault or a data drive (but not which one).  If it's only one parity failure, then that parity drive is wrong and can be corrected.  You know which one it is, because it's the one that failed, at that bit position.  If both parity bits are wrong, then you know the bad bit is on a data drive, but you don't know which one, and therefore it can't be corrected.

Although this may well be true of the way that Limetech has elected to implement dual parity, it is not necessarily so with all dual-parity schemes.  Some of them take the disk position into account as part of the calculation of the second parity, CAN identify which disk has the error if a single bit goes wrong, and can apply the correct auto-correction.

Having said that, the ability to re-order data disks without invalidating parity is a good plus for the simpler scheme that LimeTech seems to have adopted.  It does, however, mean that something like the File Integrity plugin has an important place in detecting WHICH files might be corrupt.
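Here is a sketch of that position-aware idea, again assuming the textbook RAID-6 construction rather than whatever unRAID actually ships.  Because Q weights the byte on data disk i by 2^i, the two syndromes produced by a single bad data byte give both the error value and the disk index, so the error can be located and repaired.  The function name locate_and_fix and the sample stripe are made up for illustration; it reuses gf_mul, gf_pow, compute_p and compute_q from the sketch earlier in the thread.

def locate_and_fix(data, stored_p, stored_q):
    """Locate and repair a single wrong data byte from the P and Q syndromes."""
    s_p = compute_p(data) ^ stored_p      # equals the error value e
    s_q = compute_q(data) ^ stored_q      # equals (2**z) * e in GF(2^8)
    if s_p == 0 or s_q == 0:
        return None, data                 # not a single-data-disk error
    # Find the disk index z such that 2**z * s_p == s_q in GF(2^8).
    for z in range(len(data)):
        if gf_mul(gf_pow(2, z), s_p) == s_q:
            fixed = list(data)
            fixed[z] ^= s_p               # undo the error
            return z, fixed
    return None, data

# Corrupt disk 3 of a 6-disk stripe, then locate and repair it.
stripe = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66]
p, q = compute_p(stripe), compute_q(stripe)
bad = list(stripe)
bad[3] ^= 0x5A
z, repaired = locate_and_fix(bad, p, q)
assert z == 3 and repaired == stripe

The trade-off is as described above: a Q that encodes the disk position is tied to the disk order, whereas a scheme whose parities do not care about position can have its data disks re-ordered without a rebuild.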


Hi guys.

 

I'm having a couple of issues with the beta..

 

1. Array will not auto start at boot, not even with "/boot/unmenu/uu" added to the go file. Tried disabling auto start in Disk Settings, rebooting, enabling it again, and rebooting, but no luck.

 

2. Locations pointing to a mounted SMB share in Docker containers show up as empty from within the Docker app (Plex -> path: /mnt/disks/my smb mount point/ -> container path: /movies). The SMB mount is functional from unRAID, as I can see its content.

 

3. How can I downgrade back to 6.1? Tried to install using the link below from the Plugins page, but it stops with the message "will install older version".

https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg

 

Thanks.


1. unMenu has nothing to do with any of this.

 

2. Dockers cannot see things that were mounted after the docker service starts. Has always been this way, not just this beta.


Automatic System Shares

 

When starting the array with at least one cache device, a share called "system" will be automatically created.  Inside, two subfolders will be created (docker and libvirt respectively).  Each of these folders will contain a loopback image file for each service. 

 

NOTE:  If you do NOT have a cache device, you will need to add those three shares manually if you wish to utilize apps or VMs on unRAID.

 

...

 

- support NVMe storage devices assignable to array and cache/pool

Is the existence of a cache disk checked on every start, or just the first one after the upgrade? I assume only the first, hence the need to manually add them later if a cache disk gets added.

 

I would like to test the NVMe support; as of now, my cache is a SATA SSD and the NVMe disk is manually mounted for VMs/games outside of the array.

Would the addition of the "system" share and its creation on the first array start affect the procedure of removing the current cache drive to add the new cache drive?

 

Last time I tried, preClear didn't work with NVMe disks. Would it be an issue to add the NVMe disk as an unformatted/uncleared disk as the only cache device prior to the first start of the array?

Or should I just remove the cache disk before the update, add the NVMe cache after the first start, and create the "system" share manually?

 

What would be a good/easy/safe way to change the cache disk after the "system" share has been created?

Can I move all files from the "system" share to a disk in the array (as long as all VMs/Docker are shut down), or would it still be in use by the system (which the name somewhat implies)?

 

I would assume: change all shares (including "system") to no cache -> stop all VMs/Docker that may use the cache -> run mover (and manually move anything mover ignored to the array) -> stop array -> remove old cache -> add unformatted NVMe disk -> start array -> wait for the cache disk to be formatted -> move everything back to the cache... -> start Docker/VMs.

 

 

Maybe I am reading too much into the system share, but to me "system" implies something important, in which case it would be strange to put it on the unprotected cache...


I've been a user of unRAID for quite a while now, and today I've been trying to set up the new 6.2.0 beta since I have 2 NVMe drives.

 

My issue is an ongoing problem: I can't delete my SHARES, and the name of the machine on the network is still TOWER. I can't really connect to the machine anymore.

 

Even when I've formatted the USB and the drives, and switched out the drives, the SHARES and PC name (TOWER) still remain.

 

 

My question is:

 

- How do I do a complete reset so I can start from scratch (without switching out all the hardware)?

 

- Where does unRAID store that information, even when I have formatted all drives and tried different USB sticks and new drives?

 


I got it working by switching the NVMe disk and the HDD around between the cache and data disk slots, letting unRAID reformat.

But at the moment I have the following error:

"SG_IO: questionable sense data, results may be incorrect"

 

Other forums do mention that the Samsung 950 PRO NVMe disk may not be formatted properly.

 

unRAID still remembers when it was formatted as a cache drive and as a data drive.

 

- I have no idea where unRAID hides those settings.

 

Do others have problems with the new Samsung 950 PRO? Mine hangs and BSODs while updating/upgrading Windows 10, with a driver issue.

 


Just curious regarding the 2nd parity.

Before, if I wanted a bigger drive, I would upgrade my parity drive and then add the bigger drive.

Will I have to replace 2 parity drives now to install 1 bigger data drive?

Do both parity drives have to be bigger than the data drive, or just 1?


- I have no idea where unRAID hides those settings.

 


Not clear what you're asking, but unRAID saves configuration on the flash. Most of the configuration files are text so you can read them yourself. It only remembers the current disk assignments.

Do both parity drives have to be bigger than the data drive, or just 1?

 

Both parity disks have to be equal to or bigger than all data disks.


 

Do both parity drives have to be the same size (assuming that the size of each is equal to or larger than any data drive)?  I have a 3TB parity and three 1TB data drives, and would like to use a spare 1TB drive as the second parity drive to 'test drive' the 6.2 beta.


 

No, e.g., you can have an array with 2TB parity and data disks and add one 3TB disk as second parity.
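A small illustration of the sizing rule from these answers: each parity disk only has to be at least as large as the largest data disk, and the two parity disks do not have to match each other.  The helper name parity_assignment_ok is made up; the sizes are the TB figures used in the examples above.

def parity_assignment_ok(parity_sizes_tb, data_sizes_tb):
    """True if every parity disk is at least as large as the largest data disk."""
    largest_data = max(data_sizes_tb)
    return all(p >= largest_data for p in parity_sizes_tb)

# 3TB parity1 plus a spare 1TB as parity2 over three 1TB data disks: fine.
print(parity_assignment_ok([3, 1], [1, 1, 1]))   # True

# 2TB parity over 2TB data disks, adding a 3TB disk as second parity: fine.
print(parity_assignment_ok([2, 3], [2, 2]))      # True

# Upgrading a data disk to 4TB while both parities are 3TB: not allowed.
print(parity_assignment_ok([3, 3], [4, 1, 1]))   # False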


No IPv6 in unRAID, yet ;) ... that stock dnsmasq package we use was compiled by a slackware developer who had IPv6 support on their build machine.

 

What is unRAID using dnsmasq for?

In 6.1 I installed dnsmasq myself to use it as a DHCP/DNS server on my local network. I'm worried my custom configuration will disable some new functionality. (I'll compare it against the default unRAID config file once I reboot the server.)


 

Has anyone determined if the parity drives have to be the same size? Or, if you have an array of all 4TB drives, can you have 5TB and 6TB drives for parity?


 

Thanks for the quick response; however, I'm just trying to make sense of this as I'm just starting to learn unRAID.

 

1 - In 6.1, when my array wouldn't auto start, it was suggested in a post (sorry, can't find the actual post, it's been too long) to add that line to my go file, and my array all of a sudden started to auto start at boot. But regardless, is there a possible fix to have my array auto start?

 

2 - I did have this working in 6.1; do you think maybe it's because my array is not auto starting? Also, my docker image is not located on a cache drive, so maybe my SMB mounts got a chance to start before my docker containers. However, is there a solution to have docker apps start after the array?

 

 


 

Have you set a 2nd parity? Not having a 2nd parity is causing the array not to autostart.


 

LT is aware of the issue.  They're working on a setting to disable the second parity, which will fix the autostart and error notification issues when only using 1 parity disk.


Check Settings > Disk Settings to make sure your autostart setting is enabled.

Mine was.  I even toggled the setting to no avail.  It was also stuck on the registration page (showing I had a Pro key).  No matter how many times I hit Done, it would just reload that page and bring it back up on the next boot.


 

Have you set a 2nd parity? Not having a 2nd parity is causing the array not to autostart.

 

Nope. Didn't set a second parity. I guess that explains it.

This topic is now closed to further replies.