
6.2 Series Parity Question


magic144


Haven't stumbled upon this exact info - wondering if it's easy to answer.

 

I'm running unRAID 6.1.9. If I upgrade to 6.2.x series, but carry on with a single parity drive configuration, will parity check/calculation times be at all affected? In fact is there practically any difference as far as parity is concerned with such an upgrade path - or any reason to hesitate going to 6.2.x?

 

I have one of the old MD-1510/LI chassis with E7300 Core 2 Duo and 4GB RAM, so I don't believe it supports the AVX2 processor extensions.

 

Thanks in advance...


When I upgraded from 6.1.9 to 6.2 my parity check times increased a VERY small amount -- it added less than 5 minutes to an ~8-hour check time.

 

The one thing that IS different probably won't have any impact on you: with v6.2 the ORDER of the disks matters, whereas with 6.1.9 it does not. This would only be an issue if you did a "New Config" with the "parity is already valid" option checked -- with 6.1.9 you could freely re-order the data disks and all would be well; with 6.2.x you could not.

 


@ALL - thanks for the feedback - the disk ordering nugget is good to know too

Having said that, I think Tom said he intends to remove this restriction in the 6.3 series for systems that have only a single parity drive.

 

The reason disk ordering is enforced for single-parity arrays is that the 'slot number' of a data disk forms its mountpoint/share name.  For example, suppose you have device ID "SEAGATE-XXXX_YYYY" assigned to disk1.  Then one day you decide you don't like that device assigned as disk1, so you assign it as disk2, and maybe some other device is assigned as disk1.  But elsewhere in your server, maybe you assigned the Docker image file to /mnt/disk1/docker.img.  Well, now Docker won't work, because the image file is now on disk2 (but other s/w does not know you changed slots around).  This is just one example of the unexpected side-effects of arbitrarily changing slot numbers.  So unRAID makes you think about it, and go into the New Config utility if you really want to rearrange slots.
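The pitfall above can be sketched with a few shell commands. This is a hypothetical simulation only -- it uses temporary directories to stand in for the /mnt/diskN mountpoints, not a real unRAID array, and the docker.img path is just the illustrative example from the post:

```shell
#!/bin/sh
# Simulate two array slots as directories (stand-ins for /mnt/disk1, /mnt/disk2)
MNT=$(mktemp -d)
mkdir -p "$MNT/disk1" "$MNT/disk2"

# The Docker image file lives on the device currently assigned to slot 1,
# and that slot-based path is what gets stored in the configuration
touch "$MNT/disk1/docker.img"
CONFIGURED_PATH="$MNT/disk1/docker.img"

[ -f "$CONFIGURED_PATH" ] && echo "before reorder: image found"

# "Reorder" the slots: the physical device (and its files) now sits in slot 2
mv "$MNT/disk1/docker.img" "$MNT/disk2/docker.img"

# The configured path still says disk1, so the image appears to be gone
[ -f "$CONFIGURED_PATH" ] || echo "after reorder: image missing at $CONFIGURED_PATH"

rm -rf "$MNT"
```

The file never moved on its physical disk; only the slot-to-mountpoint mapping changed, which is exactly why any software holding a /mnt/diskN path breaks.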



Interesting -- I thought the ordering was simply due to the structure of the 2nd parity computation.  Didn't realize there were other reasons.  But this raises a question => does that mean that even though 6.1.9 doesn't enforce drive ordering, you'd have an issue with Docker if you re-ordered the disks?

 

Seems like a largely moot point, since MOST users aren't ever going to do this anyway -- but just curious if the issue you described is a known problem with pre-6.2 versions.

 


Archived

This topic is now archived and is closed to further replies.
