
(SOLVED) Replace 4 data drives while retaining dual parity (with prior zeroing of old and new drives)



Hi all,

 

I am planning to replace 4x4TB old data drives with 4x8TB new data drives. I have 2 parity drives and would like to perform the replacement in the most efficient and secure manner (i.e. with minimal writes to parity).

 

My understanding of how unRaid works leads me to think that the best approach would be to:

 

  1. preclear the 4x8TB new data drives as unassigned devices
  2. empty the 4x4TB old data drives of all data and format them
  3. "zero out" the 4x4TB old data drives with this script: http://lime-technology.com/forum/index.php?topic=50416.msg494968#msg494968
  4. stop the array and make a New Config (retaining current configuration)

  5. unassign the 4x4TB old data drives and physically remove them

  6. start array with "Parity is already valid" box checked

  7. do a parity check (which in theory should not detect errors since the removed drives were "zeroed out")

  8. stop the array

  9. add the precleared 4x8TB new data drives

  10. start the array
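
In case it helps others, the zeroing in step 3 boils down to something like the sketch below. This is only an illustration of the idea, not the linked script: the device name `/dev/md1` is an assumption (Unraid's md device for a data disk, so parity is updated as you write), and a scratch file stands in here so the commands are safe to try as-is.

```shell
# Sketch of step 3: fill the emptied disk with zeros so removing it
# later leaves parity untouched. On Unraid you would target the md
# device (e.g. /dev/md1, an assumed disk number) so parity is updated
# as you write; a scratch file stands in here for safety.
DEV=$(mktemp)                                        # replace with /dev/md1
dd if=/dev/zero of="$DEV" bs=1M count=8 status=none  # drop count= to zero the whole drive
cmp -n $((8*1024*1024)) /dev/zero "$DEV" && echo "zeroed"
```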

 

What are your thoughts? Would that work with dual parity? Do you guys find that realistic? Is there any better way?

 

Many thanks,

OP 

 

 

 


Effectively, step 3 writes the full capacity of parity, so I don't see any benefit over just doing a new config and swapping the 4 drives as they are, without emptying, formatting, or zeroing the 4 old drives; each of those operations writes to parity.

 

Here are the steps I would use.

Copy the content of the 4x4TB drives to other array drives. Moving is just a copy plus a delete, and the delete writes to parity as well, so copy instead of move.

Shut down the array, physically swap the 4 drives.

Power up, do a new config keeping all, go to main page and put the 4 new drives into the "missing" slots.

Start the array and build parity.

 

If you want to stress the 4x8TB with preclear, go right ahead, I personally do, but it's not necessary, as building parity will read the entire surface, and a subsequent extended smart test should uncover any issues. I like to preclear because it's easier to deal with a bad drive before it's in the array.

 

Your method will work, the methodology is sound, but it will take FOREVER compared to what I outlined, and realistically it will probably result in double or triple the total writes to the parity drives compared to just building parity with the new drives.
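
To put rough numbers on "FOREVER" (assuming ~150 MB/s sustained sequential throughput and the drives zeroed one at a time; both are assumptions, real arrays vary):

```shell
# Zeroing 4x4TB pushes 16 TB through the parity drives before the
# preclears, the parity check, and the final rebuild even start:
echo "$(( 16 * 1000000 / 150 / 3600 )) hours just to zero the old drives"
# versus a single parity build over the new 8 TB drives:
echo "$(( 8 * 1000000 / 150 / 3600 )) hours for one parity rebuild"
```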

 

Honestly, I can see your method taking a week or more of array time, and it's filled with potential data-loss landmines if you fat-finger or misunderstand a command. Leaving the data on the old drives and copying to other drives in the array means you have a backup of that data, at least until you repurpose the old drives.

 


Hi @jonathanm,

 

Thank you very much for your detailed reply. I really appreciate the time you took to help me (and hopefully others in the same situation).

 

You raised shortcomings in the process I described which I had not thought of. I will therefore probably end up doing the swap the way you suggest, which is indeed much quicker and less prone to human error.

 

The drawback, however (if I understand correctly), would be that because parity will become invalid, I would lose data if one of the non-swapped data drives failed during the process, although this is unlikely, I guess (and hope :) ).

 

Indeed, by "zeroing out" the old data drives before replacing them, my intuition was that parity would remain valid. But I am not even sure about that, because (i) I am in a dual-parity environment and (ii) the new drives would be larger, so I wonder whether parity2 would become invalid anyway. I'd be curious to know.

 

Thanks again.

Best,

OP

4 minutes ago, Opawesome said:

Indeed, by "zeroing out" the old data drives before replacing them, I had the intuition that parity would have remained valid. But I am not even sure about that because: (i) I am in a dual parity environment and; (ii) the new drives would be of a larger size; and therefore I wonder if parity2 would not become invalid anyway. I'd be curious to know

As long as the drives are completely zeroed, they will not affect either parity drive. Size is not a factor, as Unraid treats any space beyond the physical size of an array drive as if it were zero-filled.
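
The point about zeroed drives can be seen in the parity arithmetic itself: parity 1 is a byte-wise XOR across data drives, and XOR with a zero operand is a no-op; parity 2 (Reed-Solomon) multiplies each data byte by a per-drive coefficient, so a zero byte likewise contributes zero. A toy illustration with single bytes:

```shell
# Parity 1 is a byte-wise XOR across data drives, so a zeroed drive
# contributes nothing to it. (Parity 2 multiplies each byte by a
# per-drive coefficient in GF(2^8), so zero also contributes zero.)
d1=0xA5; d2=0x3C; zeroed=0x00
with_drive=$(( d1 ^ d2 ^ zeroed ))
without_drive=$(( d1 ^ d2 ))
[ "$with_drive" -eq "$without_drive" ] && echo "parity unchanged"
```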

