Opawesome Posted December 8, 2020 (edited)

Hi all,

I am planning to replace 4x4TB old data drives with 4x8TB new data drives. I have 2 parity drives and would like to perform the replacement in the most efficient and secure manner (i.e. with minimal operations on parity). My understanding of how Unraid works leads me to think the best approach would be to:

1. Preclear the 4x8TB new data drives as unassigned devices
2. Empty the 4x4TB old data drives of all data and format them
3. "Zero out" the 4x4TB old data drives with this script: http://lime-technology.com/forum/index.php?topic=50416.msg494968#msg494968
4. Stop the array and make a New Config (retaining the current configuration)
5. Unassign the 4x4TB old data drives and physically remove them
6. Start the array with the "Parity is already valid" box checked
7. Run a parity check (which in theory should not detect errors, since the removed drives were zeroed out)
8. Stop the array
9. Add the precleared 4x8TB new data drives
10. Start the array

What are your thoughts? Would that work with dual parity? Do you find it realistic? Is there a better way?

Many thanks,
OP

Edited December 10, 2020 by Opawesome (marked as solved)
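A minimal sketch of what the zeroing step (step 3) accomplishes, demonstrated on a scratch file rather than a real device so it is safe to run. On an actual Unraid system the linked script writes to the md device (e.g. /dev/md1, used here only as a placeholder name) so that parity is updated as the zeros are written; the verification step at the end matters because parity only stays valid if every byte really is zero before the drive is removed.

```shell
# Stand-in for /dev/md1 -- a scratch file, NOT a real array device
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=64K count=4 status=none              # simulate old data
dd if=/dev/zero   of="$disk" bs=64K count=4 conv=notrunc status=none  # zero it out
# Verify every byte is zero before "removing the drive from the array"
size=$(wc -c < "$disk")
if head -c "$size" /dev/zero | cmp -s - "$disk"; then
  zeroed=yes
else
  zeroed=no
fi
echo "fully zeroed: $zeroed"
rm -f "$disk"
```

Run against a real array drive, the same `cmp` check is a cheap sanity test before ticking "Parity is already valid".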
JonathanM Posted December 9, 2020

Effectively, step 3 writes the full capacity of parity, so I don't see any benefit over just doing a New Config and swapping the 4 drives as they are, without emptying, formatting, or zeroing the 4 old drives, each of which writes to parity. Here are the steps I would use:

1. Copy the contents of the 4x4TB drives to other array drives. Moving is just a copy plus a delete, and the delete writes to parity as well, so copy instead of move.
2. Shut down the array and physically swap the 4 drives.
3. Power up, do a New Config keeping all assignments, go to the Main page, and put the 4 new drives into the "missing" slots.
4. Start the array and build parity.

If you want to stress-test the 4x8TB drives with preclear, go right ahead; I personally do, but it's not necessary, as building parity reads the entire surface, and a subsequent extended SMART test should uncover any issues. I like to preclear because it's easier to deal with a bad drive before it's in the array.

Your method will work, and the methodology is sound, but it will take FOREVER compared to what I outlined, and realistically it will result in double or triple the total writes to the parity drives compared to just building parity with the new drives. Honestly, I can see your method taking a week or more of array time, and it's filled with potential data-loss landmines if you fat-finger or misunderstand a command. Leaving the data on the old drives and copying to other drives in the array means you have a backup of that data, at least until you repurpose the old drives.
Opawesome Posted December 9, 2020

Hi @jonathanm,

Thank you very much for your detailed reply. I really appreciate the time you took to help me (and hopefully others in the same situation). You raised shortcomings in the process I described which I had not thought of. I will therefore probably end up doing the swap the way you suggest, which is indeed much quicker and less prone to human error. The drawback, however (if I understand correctly), is that because parity will become invalid during the rebuild, I would lose data if one of the "non-swapped" data drives failed during the process, although this is unlikely I guess (and hope).

Indeed, by "zeroing out" the old data drives before replacing them, my intuition was that parity would have remained valid. But I am not even sure about that, because (i) I am in a dual-parity environment and (ii) the new drives would be of a larger size, so I wonder whether parity2 would become invalid anyway. I'd be curious to know.

Thanks again.

Best,
OP
itimpi Posted December 9, 2020

4 minutes ago, Opawesome said: "...I wonder whether parity2 would become invalid anyway. I'd be curious to know."

As long as the drives are completely zeroed, they will not affect either parity drive. Size is not a factor, as Unraid treats any space beyond the physical size of an array drive as if it were zero filled.
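A toy illustration of itimpi's point for the simple case, assuming single parity computed as the bitwise XOR of the data drives (Unraid's parity1; parity2 uses a different formula but zeros likewise contribute nothing there). Because P = A ^ B ^ 0 = A ^ B, a fully zeroed drive, or the zero-filled region beyond a smaller drive's end, leaves parity unchanged:

```shell
# One byte at the same offset on each data drive (arbitrary example values)
a=200
b=33
zeroed_drive=0   # the drive that was zeroed out (or absent entirely)

parity_with=$(( a ^ b ^ zeroed_drive ))   # parity while the zeroed drive is present
parity_without=$(( a ^ b ))               # parity after the zeroed drive is removed

echo "with zeroed drive: $parity_with, without: $parity_without"
```

Both expressions yield the same byte, which is why a removed, fully zeroed drive does not trigger parity-check errors.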
Opawesome Posted December 9, 2020

Hi @itimpi,

Many thanks for the confirmation.

Best,
OP