replacing parity drive with larger capacity


Recommended Posts

Sorry for asking almost the same question as the OP, but I just wanted to clarify something... I'm trying to do the same thing (replace parity with a larger drive and, once it's done rebuilding, add the original parity drive to the array). I'm not using the same port, and both drives will always be connected at the same time. So, when I get to Step 5 and change the Parity 1 drive to the larger one, I get a warning that it's the wrong drive (see attached pic). It looks like it'll let me start the array anyway, but that warning got me worried that I might trash my array by doing it this way.

 

Can you guys please let me know if that's exactly what I'm supposed to be doing?

parity.png

Link to comment
  • 1 month later...

I have tried to upgrade my parity to a larger drive (12TB) and then add the old parity (8TB) to the array, but it is not working for me following the suggestions in this post.

After preclearing the new drive and stopping the array, I added the new drive as Parity 2, brought the array back up and let the parity sync. I was able to disable VMs and Docker while I did this, and the sync process took approximately 24 hours. I then stopped the array and tried to change the old parity drive (P1) to "no device" so that it would become unassigned and I could preclear it, ready to add to the array, but it would not accept this: after the restart the old parity resumed as P1. I tried setting P1 to "no device" and then adding it to the array, but after a reboot the old parity resumes as P1 again.

I'm not sure if this is relevant, but when I unassign the old parity drive, it is listed in the Unassigned Devices as Dev 1, but there is already another Dev 1 device in that set of disks.

I'm not sure what I'm missing here and I've attached my diagnostics.

Thanks

nabu-diagnostics-20230511-1421.zip

Link to comment
7 minutes ago, Boyturtle said:

I have tried to upgrade my parity to a larger drive (12TB) and then add the old parity (8TB) to the array, but it is not working for me following the suggestions in this post.

After preclearing the new drive and stopping the array, I added the new drive as Parity 2, brought the array back up and let the parity sync. I was able to disable VMs and Docker while I did this, and the sync process took approximately 24 hours. I then stopped the array and tried to change the old parity drive (P1) to "no device" so that it would become unassigned and I could preclear it, ready to add to the array, but it would not accept this: after the restart the old parity resumed as P1. I tried setting P1 to "no device" and then adding it to the array, but after a reboot the old parity resumes as P1 again.

I'm not sure if this is relevant, but when I unassign the old parity drive, it is listed in the Unassigned Devices as Dev 1, but there is already another Dev 1 device in that set of disks.

I'm not sure what I'm missing here and I've attached my diagnostics.

Thanks

nabu-diagnostics-20230511-1421.zip

Are you saying the following sequence of steps does not work to remove parity1?

  • Stop array
  • unassign parity1
  • start array without parity1 assigned to commit its removal?

Note you cannot assign the old parity1 drive to the array until you have successfully started the array without parity1 (i.e. it has to be done in 2 stages).

Link to comment
13 minutes ago, itimpi said:

Are you saying the following sequence of steps does not work to remove parity1?

  • Stop array
  • unassign parity1
  • start array without parity1 assigned to commit its removal?

Note you cannot assign the old parity1 drive to the array until you have successfully started the array without parity1 (i.e. it has to be done in 2 stages).

Using the process you mention above, I am unable to start the array after unassigning parity 1; the Start button is greyed out and the disk is labelled as missing. My only option is to reboot the server, and when I do, parity 1 is back again.

Link to comment
3 minutes ago, Boyturtle said:

Using the process you mention above, I am unable to start the array after unassigning parity 1; the Start button is greyed out and the disk is labelled as missing. My only option is to reboot the server, and when I do, parity 1 is back again.

Are you sure there is not a checkbox to confirm you want to start the array without that drive (and that checking it enables the Start button)? I would expect a reboot to bring the drive back, as the change is not committed until you start the array without the parity1 drive.

  • Thanks 1
Link to comment
43 minutes ago, Boyturtle said:

I then stopped the array and tried to change the old parity drive (P1) to "no device" so that it would become unassigned, and I could preclear it,

BTW:  there is no need to Preclear the drive as Unraid’s built-in Clear is much faster.    The only reason you normally run pre-clear is to stress test a new drive before adding it to the array, and since this drive has been performing OK as parity1 it does not need to be stress tested.

Link to comment
8 minutes ago, itimpi said:

Are you sure there is not a checkbox to confirm you want to start the array

I completely missed this. I found the box and checked it and now it is all working, thanks. I will preclear the old parity 1 before adding it to the array.

Thanks again for your assistance.

Link to comment
4 minutes ago, itimpi said:

BTW:  there is no need to Preclear the drive as Unraid’s built-in Clear is much faster.    The only reason you normally run pre-clear is to stress test a new drive before adding it to the array, and since this drive has been performing OK as parity1 it does not need to be stress tested.

Ok, I'll just add it to the array then :-)

Link to comment
  • 6 months later...

I am stuck/confused on what to do in my case with dual parity. Seems like most of these scenarios are with a single parity drive setup. 

My setup: 
14TB Parity 1
14TB Parity 2

I bought two new 20TB drives to replace both parity drives. Though, what is the most efficient way to do so? Am I going to have to rebuild parity twice? 

1. Stop Array

2. Remove Parity 1
3. Assign new 20TB drive to Parity 1
4. Start array and let it rebuild 

Then repeat these steps for Parity 2?
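
For a rough idea of the time involved: each parity sync is one sequential pass over the whole parity drive, so a back-of-the-envelope estimate only needs the drive size and an assumed average throughput. The sketch below assumes roughly 150 MB/s sustained (an assumed figure, not a measurement on this hardware); if Unraid builds both parity drives in the same sync pass when both are assigned together, the total stays around one pass, but the array is unprotected while it runs since neither new drive holds valid parity yet.

```python
# Rough parity-sync time estimate: a sync is one sequential pass over the
# parity drive, so time ~ capacity / average throughput. 150 MB/s is an
# assumed average; real drives are faster on outer tracks, slower on inner.

def sync_hours(capacity_tb: float, avg_mb_per_s: float = 150.0) -> float:
    bytes_total = capacity_tb * 1e12                  # drive vendors use decimal TB
    return bytes_total / (avg_mb_per_s * 1e6) / 3600

one_pass = sync_hours(20)
print(f"one 20TB sync           : ~{one_pass:.0f} h")       # ~37 h
print(f"P1 then P2 sequentially : ~{2 * one_pass:.0f} h")   # two separate passes
print(f"both assigned together  : ~{one_pass:.0f} h")       # if built in a single pass
```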

Link to comment
  • 1 month later...

Sorry to resurrect this thread but I have a couple of additional questions regarding a similar procedure I want to perform.

Here's my current setup: double parity, all drives are 8TB. I now have purchased two new 18TB drives and want to use those as new parity drives and subsequently reuse the old 8TB parity drives as data drives.

 

Here's what I was thinking of doing:

 

1. Uncheck autostart for all dockers, stop all dockers

2. Stop array and power down system

3. Remove old parity drive 1 from SATA port

4. Install new 18TB drive on the same SATA port as the old parity 1 drive which I have just removed

5. Boot system

6. Run pre-clear on new 18TB drive (to stress test as this is a brand new drive) > is this step necessary?

7. Assign new drive as parity

8. Start array and let it rebuild parity

9. When parity is rebuilt, stop the array

10. Power down system

11. Add old parity drive to a free SATA port

12. Boot system and assign old parity drive as a data drive > this will automatically clear all existing data on the disk, correct?

13. Start array

14. Repeat same procedure but for parity drive 2

 

The questions I have are:

  • is a preclear using the plugin recommended / possible, seeing as the 18TB drives are brand new? (see step 6)
  • during any time, do I need to keep all dockers turned off or can I turn some of them on? I'm thinking of turning the PLEX docker on while rebuilding the parity so I can keep watching, for example. I would only run dockers that read from the array but don't write, such as PLEX. Would this be an issue?
  • By replacing one parity drive at a time, I believe my array will still be protected, since at any time I will have a fully built and active parity drive (see the sketch after this post). So in the worst case, if a random drive happens to fail during this procedure, I would still be protected, correct?
  • Is this a good way in general to go about it, or are there other (better) ways? Ideally, I would like to keep the parity drives seated on the same SATA ports as the former ones were and keep at least one parity drive intact at all times.

Thank you very much for any helpful insights you can give me!

Unraid1.png
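
Regarding the protection question in the third bullet: Unraid's parity1 is a bitwise XOR across the data disks, so as long as one parity drive stays valid, any single missing disk can be recomputed from parity plus the surviving disks. A toy illustration with made-up byte values (parity2 uses a different calculation and isn't modelled here):

```python
from functools import reduce
from operator import xor

# Toy model of Unraid's parity1 (bitwise XOR across all data disks).
# Values are made-up sample bytes at one offset; parity2 is not modelled.
data_disks = {"disk1": 0b1011_0010, "disk2": 0b0100_1111, "disk3": 0b1110_0001}
parity1 = reduce(xor, data_disks.values())

# Simulate losing one data disk while parity1 is still valid:
failed = "disk2"
survivors = [v for k, v in data_disks.items() if k != failed]
rebuilt = reduce(xor, survivors, parity1)   # XOR of parity with the remaining disks

assert rebuilt == data_disks[failed]
print(f"{failed} rebuilt from parity1 + remaining disks: OK")
```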

 

Link to comment

If you have the ports to spare, you can preclear the disks while keeping the original disks installed, and then replace. It doesn't matter whether you use the same port as the original or not; it only matters that you assign the disk to the slot you are replacing. So there is really no need to remove anything if you are just going to reuse them anyway.

 

If you really want to keep the ports corresponding to certain slots for some reason, you can change the ports around however you want, anytime you want; Unraid only cares about the serial numbers for keeping assignments straight. Of course, you should always be careful of connections when mucking about. Bad connections are the main reason for problems.
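
To see what Unraid keys on, the /dev/disk/by-id entries on any Linux system (Unraid included) embed the model and serial and point at whatever device node the drive currently has, regardless of which port it is plugged into. A small sketch that just lists them (entry naming varies by controller and driver, so treat it as illustrative):

```python
import os

# List /dev/disk/by-id symlinks: names embed model/serial, targets are the
# current device nodes. This is why moving a drive to a different SATA port
# does not disturb array assignments. Naming varies by controller/driver.
by_id = "/dev/disk/by-id"
if os.path.isdir(by_id):
    for name in sorted(os.listdir(by_id)):
        if name.startswith(("ata-", "scsi-", "nvme-")) and "-part" not in name:
            target = os.path.realpath(os.path.join(by_id, name))
            print(f"{name:55s} -> {target}")
```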

 

When you say you want to reuse the old parity drives, do you mean assign them to new data slots in the array? If so, then yes, Unraid will clear the disk if it isn't already clear so parity will remain valid.
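
The reason a cleared (all-zero) disk can join without invalidating parity is the same XOR arithmetic as in the earlier sketch: XOR-ing zeros into the existing parity changes nothing, so no rebuild is needed. A minimal illustration with made-up values (parity2's calculation also tolerates a zeroed disk, though it isn't modelled here):

```python
from functools import reduce
from operator import xor

# A cleared disk contributes only zero bytes, and x ^ 0 == x, so the
# existing parity1 stays correct when the zeroed disk is added.
existing = [0b1011_0010, 0b0100_1111, 0b1110_0001]   # sample bytes from current data disks
parity_before = reduce(xor, existing)
parity_after = reduce(xor, existing + [0x00])        # same offset on the newly added, cleared disk

assert parity_before == parity_after
print("parity1 unchanged after adding an all-zero disk")
```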

  • Like 1
Link to comment

Thanks for the helpful comment, trurl, I'll get started on the procedure tomorrow.

 

I do have SATA slots free and will indeed reuse the old parity drives as data drives, so I'll go with your suggestion, leave them in the SATA port I have put them in and just reassign one by one, starting with each new parity drive and then add the old parity drives as data drives.

Link to comment
On 1/11/2024 at 6:26 PM, Kyo28 said:

add the old parity drives as data drives.

Once that is complete, download a new flash drive backup, and DELETE any previous backups. If your flash dies and you accidentally use an old backup that had those data drives in the parity slots, Unraid will assume it needs to rebuild parity on those drives, permanently wiping any data you have on them.

Link to comment
  • 2 weeks later...
On 3/27/2023 at 1:48 AM, BestITGuys said:

Sorry for asking almost the same question as the OP, but I just wanted to clarify something... I'm trying to do the same thing (replace parity with a larger drive and, once it's done rebuilding, add the original parity drive to the array). I'm not using the same port, and both drives will always be connected at the same time. So, when I get to Step 5 and change the Parity 1 drive to the larger one, I get a warning that it's the wrong drive (see attached pic). It looks like it'll let me start the array anyway, but that warning got me worried that I might trash my array by doing it this way.

 

Can you guys please let me know if that's exactly what I'm supposed to be doing?

parity.png

 

On 3/27/2023 at 4:40 AM, JorgeB said:

The warning is normal; start the array and it will re-sync the new parity. Once that's done you can add the old parity as a new data disk.

 

I've been reading through this thread and wanted to confirm, as my situation is similar to the one quoted above. I'm going to replace a 12TB parity drive with a new larger parity drive. Once the new parity is established I will then add the old 12TB parity drive as a data drive.

 

Questions:

1. Should I do a parity check (either correcting or non-correcting) before I start the whole process?

2. The first time I connect the new drive can I start up the array as normal without the new drive assigned and preclear it?  I don't remember the preclear process but am hoping I can do that while the server runs.

3. After the new drive is precleared and I start the array with the old drive excluded and the new drive set as the parity drive, do I need to do a New Config?

4. After everything is complete (new drive as parity and old drive as data drive) do I need to download a new flash backup?  It is mentioned above but I'm not sure if their use case is different from mine.

 

Thanks!

Link to comment

If you add old parity to a new data slot after you have already built new parity, Unraid will have to clear new data (old parity) so parity remains valid.

 

If instead, you New Config new parity into parity slot, old parity into new data slot, and rebuild parity, then new parity will be in sync with existing contents of new data (old parity). Then you can format new data (old parity) so it is ready to accept folders and files.

Link to comment
3 hours ago, trurl said:

If instead, you New Config new parity into parity slot, old parity into new data slot, and rebuild parity, then new parity will be in sync with existing contents of new data (old parity). Then you can format new data (old parity) so it is ready to accept folders and files.

Ok, so if I understand correctly, doing it this way means I don't have to fully zero out the new data (old parity). If something goes wrong in the process, will the data of the old parity still be valid, in theory, to load up the old config and essentially recreate the pool? I basically don't wipe the old parity data until I've confirmed the new pool is valid with the parity drive?

Link to comment
1 hour ago, ks-man said:

If something goes wrong in the process, will the data of the old parity still be valid

Might be slightly out-of-sync even if you don't write anything to the array. Unless you do it all in maintenance mode so none of the disks mount, which effectively takes it all offline until you start it in normal mode.

 

But that will be the case even if you assign it later.

Link to comment
12 hours ago, trurl said:

Might be slightly out-of-sync even if you don't write anything to the array. Unless you do it all in maintenance mode so none of the disks mount, which effectively takes it all offline until you start it in normal mode.

 

But that will be the case even if you assign it later.

Ok, thanks.  

Link to comment
  • 2 months later...

Similar question as a few others, but slightly different in that I want to go from single parity to dual parity AND swap the current parity drive. The previous examples/questions were either replacing parity 1 with a new parity 1, or parity 2 with a new parity 2. Just want to make sure I don't do anything (too) stupid...

 

What would be the safest and quickest method to go from 1 x 14TB parity drive to 2 x 18TB (and add the 14TB to the data pool)?

- Unassign 14TB parity, assign BOTH 18TB drives and build,

- Unassign 14TB parity, assign ONE 18TB and build, then assign the other 18TB and build,

- Leave 14TB parity, assign ONE 18TB to parity 2, build, then unassign 14TB parity and assign the other 18TB to parity 1 and build,

- Option D I don't know about

(then when any of the above are completed, add the 14TB to the data pool)

 

I feel like I see merit with each option, e.g. the first two options keep the 14TB parity 'safe' by not being assigned and therefore not at risk of issue while writing to one or two 18TB drives.
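
A rough way to weigh those options is by the number of full 18TB sync passes and whether a valid parity drive exists while each pass runs. The sketch below assumes roughly 150 MB/s average throughput and that Unraid builds every assigned-but-unsynced parity drive in the same pass (both are assumptions, not measurements):

```python
# Compare the options above by total sync passes and protection during them.
# ~150 MB/s average throughput is an assumed figure; adjust to taste.
HOURS_PER_PASS = 18e12 / (150e6 * 3600)   # one full 18TB pass, ~33 h

options = [
    ("assign both 18TB at once, build once",          1, "unprotected during the single pass"),
    ("build one 18TB, then add the second and build", 2, "unprotected during the first pass only"),
    ("add 18TB as parity2 first, then swap parity1",  2, "a valid parity drive exists throughout"),
]
for name, passes, protection in options:
    print(f"{name}: ~{passes * HOURS_PER_PASS:.0f} h total; {protection}")
```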

Link to comment
