Upgrading Parity Drive



Hey, 

I want to upgrade my 3TB parity drive with a 4TB drive.

 

I am currently running 6.7.2, and my 4TB drive is connected in the server and unassigned.

 

I tried to find some documentation but I'm not sure if it is current or outdated. 

 

Do I just want to stop the array and select the new drive as the parity drive, or do I want to add it as a second parity drive and then assign the 3TB as a data drive after the process is done?

Any help would be greatly appreciated. 

 

Thanks!


If you only want to end up with one parity drive then:

  • Stop the array
  • Unassign the current parity drive
  • Start the array to make Unraid ‘forget’ the current parity drive.  Not sure this step is necessary, but it will not do any harm
  • Stop the array
  • Assign the new 4TB drive as parity
  • Start the array to build parity on the 4TB drive

I would suggest keeping the old parity drive untouched until the new parity is built, just in case another drive fails before the parity build completes.  Once the new parity is built you can stop the array and assign the old 3TB drive as a new data drive.  When you start the array, Unraid will start a Clear on the drive you have just added.  When the Clear completes you will be able to format and then start using the newly added drive.


Here is the basic instruction set for doing so:

 

   https://wiki.unraid.net/UnRAID_6/Storage_Management#Upgrading_parity_disk.28s.29

 

Now for my thoughts...

 

First, run a non-correcting parity check.  This will assure that you don't have any issues that would prevent rebuilding parity on a new disk.

 

Second, I really question your decision to use only a 4TB drive to upgrade your parity drive.  My suggestion would be to go to (at least) a 6TB drive.  This will give you a lot more 'head room' when you replace a data drive just because you need more storage space.  (Basically, the cost of that one additional TB is the cost of that 4TB drive!)

 

Third, do not assign the new drive as Parity 2!  Parity 1 is a very simple boolean operation that any CPU can easily handle.  Parity 2 is a complex matrix operation that requires either (1) a lot of CPU horsepower or (2) a recent CPU with a special instruction to significantly reduce  the CPU overhead.  (Having only Parity 2 will still only protect against the failure of a single drive.) 
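
To make the difference concrete, here is a rough Python sketch (illustration only, not Unraid's actual code): the Parity 1 calculation is a plain XOR across the data bytes, while a RAID-6-style Parity 2 multiplies each byte in a Galois field first, which is the extra work those special CPU instructions exist to speed up.

# Rough illustration only -- not Unraid's implementation.
def p_parity(data_bytes):
    # Parity 1: plain XOR of one byte taken from each data disk.
    p = 0
    for b in data_bytes:
        p ^= b
    return p

def gf_mul2(x):
    # Multiply by 2 in GF(2^8) using the reduction polynomial 0x11d (standard RAID-6 maths).
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xff

def q_parity(data_bytes):
    # Parity-2-style Q: D0 ^ 2*D1 ^ 4*D2 ^ ... in GF(2^8) -- far more work per byte.
    q = 0
    for b in reversed(data_bytes):
        q = gf_mul2(q) ^ b
    return q

print(hex(p_parity([0x0f, 0xf0, 0xaa])))   # 0x55
print(hex(q_parity([0x0f, 0xf0, 0xaa])))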

 


I have a query about parity drives that are much larger than all the data drives. My system consisted of all 3TB WD Reds and a 3TB parity check would take something like 7-ish hours.

 

I recently upgraded my parity to an 8TB WD Red and am using the old 3TB as a data drive, the idea being that any future additions will likely be 8TB data drives.

Now a parity check takes around 17 hours.

 

So my question: after the first 3TB, where parity is actually being checked against the array, why does it continue with the (unused by data) last 5TB of the disc? Why is it necessary? I thought parity was only written to when the corresponding data address is written to. Unless I add a data disc bigger than 3TB, that last 5TB should not be touched.


Because if/when you upgrade a data drive from 3TB to 8TB, the parity information has to reflect that.

 

I.e., if it doesn't currently confirm that the remaining 5TB of the parity drive is all zeroes, then when you upgrade a data drive to 8TB the parity information may or may not be correct.

 

The system is working as it should.
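
A toy Python sketch of why that works (single bytes standing in for whole drives, purely illustrative): the tail end of parity was built against 'no data', i.e. zeros, so a new drive that has been cleared to zeros XORs in without changing anything and parity stays correct with no adjustment.

# Region of parity covered by the existing data disks.
d1, d2 = 0b1010, 0b0110
parity = d1 ^ d2                     # what the parity build stores for this address

# Region of the 8TB parity drive beyond every data disk: there is no data,
# which is treated as zeros, so the parity built there is zero.
parity_beyond = 0 ^ 0                # == 0

# Later an 8TB data drive is added, after Unraid has cleared it to zeros.
new_drive_region = 0
assert parity_beyond ^ new_drive_region == parity_beyond   # still correct, no rebuild needed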

16 minutes ago, boragthung said:

So my question: after the first 3TB, where parity is actually being checked against the array, why does it continue with the (unused by data) last 5TB of the disc? Why is it necessary?

It is a peace-of-mind check that every byte on the parity disk can be read.  You really want to do this while the disk is still covered under warranty.  However, I will admit that it is seldom that a disk will fail in this manner.  The advantage of using larger data disks is that large-capacity disks appear to have approximately the same annual failure rate as much smaller-capacity disks.  Thus, an array with fewer, larger-capacity data disks will have a lower probability of disk failure problems than one storing the same data on more small-capacity disks.
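
As a back-of-the-envelope illustration of that last point (hypothetical 2% annual failure rate, failures assumed independent):

# Hypothetical figures: same annual failure rate per drive regardless of capacity.
afr = 0.02
for n in (4, 10):                    # e.g. 4 x 8TB vs 10 x 3TB of roughly similar space
    p_any = 1 - (1 - afr) ** n       # chance that at least one drive fails in a year
    print(f"{n} disks: {p_any:.1%}")  # 4 disks: 7.8%, 10 disks: 18.3%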

3 hours ago, Frank1940 said:

Second, I really question your decision to use only a 4TB drive to upgrade your parity drive.  My suggestion would be to go to (at least) a 6TB drive. [...]

 

Have to agree, if upgrading Parity disk - go with the largest capacity that your budget will allow. 

 

(to stretch, consider shucking an external disk)

2 hours ago, Squid said:

Because if/when you upgrade a data drive from 3TB to 8TB, the parity information has to reflect that. [...]

 

1 hour ago, Frank1940 said:

It is a peace-of-mind check that every byte on the parity disk can be read. [...]

 

Just to add... (and to my very basic understanding) parity is an "equation" that needs to be "solved for".  If you did not include the full capacity of the parity drive in the calculation, you would not be able to "solve for" (rebuild data) in the case of failure or increased capacity.

57 minutes ago, J.Nerdy said:

Just to add... (and to my very basic understanding) parity is an "equation" that needs to be "solved for".  If you did not include the full capacity of the parity drive in the calculation, you would not be able to "solve for" (rebuild data) in the case of failure or increased capacity.

Not quite, here is the explanation of how parity is calculated:

 

     https://wiki.unraid.net/index.php/UnRAID_Manual_6#Network_Attached_Storage

 

Let's assume that you have an 8TB parity drive.  Your array consists of several different-size data drives (all 8TB or smaller).  One of these data disks is a 500GB hard drive, and that HD has its drive motor fail.  To rebuild this disk, only the first 500GB of the parity data is used (or needed).  (A small bit of trivia knowledge for you: the actual calculation that is performed on the data to get the parity bit is the XOR operator.  The XOR operator has been a member of the basic microprocessor instruction set since the 8008 days in 1972.)

 

While you may think that building parity (by writing 'zeros' to the portion of the parity drive that is not actively being used for calculating data parity) is a waste of time and resources, it makes sense from the software development standpoint of what has to happen when you add a data drive that has a larger capacity than any of the currently installed data disks. When this happens, you don't have to 'adjust' parity if you write all zeros to the drive being installed.  Parity will always be correct.  This 'zeroing' of the new drive is the first thing that Unraid will do.  Then, if it finishes without error, Unraid will add the disk to the array and format it (can't remember if it asks permission or not).  As this formatting (adding the basic file system structure) occurs, parity is updated as well (the file system structure amounts to less than 1% of the data disk's capacity).
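
A minimal Python sketch of the XOR rebuild described above (single bytes standing in for whole disks, illustration only):

# Three data disks plus one parity disk, one byte each for illustration.
d0, d1, d2 = 0b0011, 0b0101, 0b1111
parity = d0 ^ d1 ^ d2                # stored on the parity drive

# Disk 1 fails (motor dies).  XORing parity with the surviving disks recovers its contents.
rebuilt = parity ^ d0 ^ d2
assert rebuilt == d1

# For the failed 500GB disk in the example above, this only ever runs over the
# first 500GB of addresses; the rest of the 8TB parity drive is never consulted.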

5 minutes ago, Frank1940 said:

Not quite, here is the explanation of how parity is calculated: [...]

Cheers.

 

Great explanation!


I understand the basic principles involved, but when the parity disc was first rebuilt after replacing the smaller drive, it was my assumption that the last 5TB was all written as zeros. From this point on, that last 5TB of the parity disc would not actually need to be read or written to again until another >3TB data drive is added.

 

So does this mean that, even though no reads or writes need to take place yet on that last 5TB, errors could still occur there? A mysterious 1 appearing instead of a zero when nothing is being written to the area under normal operation? What would cause this, heat or some other interference maybe corrupting the magnetised area? Sorry to sound a bit thick - my brain certainly doesn't work like it did when I was younger :(.

5 minutes ago, boragthung said:

So does this mean that, even though no reads or writes need to take place yet on that last 5TB, errors could still occur there?

Yes, the disk could fail even though it's not being used. This would manifest as a read failure, where the disk would report it couldn't return the data from that address.

7 minutes ago, boragthung said:

A mysterious 1 appearing instead of a zero when nothing is being written to the area under normal operation?

The chances of that happening are vanishingly slim. Much more likely for it to give an error when trying to read the 0 that was placed there.

8 minutes ago, boragthung said:

OK thanks, so really in my example the (extra) parity check is a test of that disc's reliability beyond the current array size, just reading to confirm it is still zero.

Yes, the primary reason for parity checks is to confirm that the array is capable of reconstructing a failed disk accurately. That covers both mathematical accuracy and disk reliability.

 

In a "normal" RAID setup, all disks are spinning all the time and pretty much participating equally to some extent. In unraid, however, it's perfectly plausible to have disks that are NEVER accessed during day to day activities, due to them not containing any data that someone needs currently. If one of those drives fails, you wouldn't know it until it was too late, when you were trying to reconstruct a different failed drive. Parity checks provide a way to keep up with the health of those seldom used drives.


Sorry, I think you might have misunderstood what I was saying there. 8TB parity with all data drives 3TB. The first 3TB of the parity check reads all discs (the actual parity of the array), but the last 5TB only checks the parity disc itself for errors, and runs slightly quicker as it's just reading that one disc. In my case, when I had a 3TB parity disc the check took over 7 hours, whereas now with an 8TB parity disc it takes over 17 hours, so the extra 10 hours is reading the parity disc alone.
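
Those timings work out to roughly the sequential read rates you would expect (back-of-the-envelope, using the figures above):

tb = 1e12
print(3 * tb / (7 * 3600) / 1e6)     # ~119 MB/s over the first 3TB, all discs read together
print(5 * tb / (10 * 3600) / 1e6)    # ~139 MB/s over the last 5TB, reading the parity disc alone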


The extra time for the parity check operation (as currently implemented) when the parity disk is considerably larger than the data disks in the array is more of a perceptual issue than a significant factor in actual operation.  Yes, it will slow down data writes that are made directly to the array, but it should not affect the reading of data from the array.

Granted, the parity disk will be spun up an extra ten hours on each parity check, and that would result in some additional power usage (perhaps as much as 0.1 kWh).  I should not think that the extra time spent spun up would seriously impact its time-to-failure.  (Let's assume a maximum expected life of seven years (61,320 hours) for a hard drive; doing one parity check per month would only add a maximum of 840 hours (less than two months) of potential shortened life.)  Plus, the 'extra' time would not be a factor over the entire lifetime of the disk: the next time you either (1) replace a failed disk or (2) add a new disk for more capacity, you should be using an 8TB disk.

Some of your current 3TB disks are probably four to six years old.  How much longer do you think they are going to last?  (BTW, I have no clue as to how long a specific HD is going to last, and neither does anyone else!  All of the data that I have seen is inconclusive at best and misleading at worst.  The best information that I have found is from Backblaze, and its disk usage profile is not how Unraid uses its disks.)
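
For what it's worth, the arithmetic behind these figures (rounded, with an assumed rough power draw for a spinning 3.5" drive):

extra_hours_per_check = 10           # ~17h check now versus ~7h before
checks_per_year = 12                 # one per month
life_years = 7                       # assumed maximum drive life
drive_watts = 8                      # rough power draw while spinning -- an assumption

extra_hours = extra_hours_per_check * checks_per_year * life_years
print(extra_hours)                                        # 840 hours over the drive's life
print(round(extra_hours / (life_years * 8760) * 100, 1))  # ~1.4% of its 61,320-hour life
print(extra_hours_per_check * drive_watts / 1000)         # ~0.08 kWh extra per check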


Interesting. I was really wondering whether the extra time spent parity checking beyond the size of the data array was all that necessary, although as you say it does not stop normal usage of the system. I had a passing thought that maybe I had missed something in the settings that would distinguish between a full parity disc check and a full data size check. I suppose the extra time taken gives some peace of mind as to the reliability of that bigger parity disc.

 

My only concern is that the other week I actually got a warning about the temperature of the 8TB WD Red because it hit around 45C (normal use, not a parity check). It seems to generally run 8-10C hotter than the 3TB WD Reds, which I find surprising seeing as it has helium rather than air inside. Some net searching suggests this is a common complaint. The weather was very hot that week in the UK though, hitting 38C in places.

 

Since then (on another thread, because I was having issues with a satellite card), while I had the case open I added another two inlet fans to my Fractal Design Node 804 case - which does make it noisier despite the BIOS quiet fan setting. The disappointment is that I discovered the only way to add extra fans at the top of this case is to remove the hard disc cage holders, which seems to defeat the purpose of providing extra cooling to that side of the case. A good case otherwise.

