
(SOLVED) 2 Parity back to 1 Parity



Hello everyone,


I have been reading through the forum but I still need to get a confirmation on this. Here is my current setup:


Parity 1

Parity 2

Disk 1

Disk 2


I want to go back to using just 1 parity drive, and use the 2nd parity drive as a data disk. Is it OK for me to do a New Config and assign Parity 2 as Disk 1, current Disk 1 as Disk 2, and current Disk 2 as Disk 3?


Do I check the "Parity is valid" so Parity 1 does not need to rebuild? 


Please let me know what I do is correct. Thank you!


During the "Assign old parity2 as a new disk" step, can I assign it as Disk 1? And assign my current Disk 1 as Disk 2, etc.?


Parity 1

Disk 3 (used to be parity 2)

Disk 1 (current)

Disk 2 (current)


Or do I assign it as Disk 3? Once clearing is done, can I stop the array and reassign the slots to:


Parity 1

Disk 1 (used to be parity2/Disk3)

Disk 2 (used to be disk 1)

Disk 3 (used to be disk 2)



26 minutes ago, mgsvr said:

During the "Assign old parity2 as a new disk" step, can I assign it as Disk 1? And assign my current Disk 1 as Disk 2, etc.?

You can but only if you do a new config:


Stop array
New config, assign parity as parity, disk1 as disk2, disk2 as disk3, leave old parity unassigned for now (disk1 slot empty)

Check parity is already valid and start array

Stop array
Assign old parity2 as a new disk1
Start array to begin clearing
When done, format the new disk
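For what it's worth, the "parity is already valid" step in this procedure works because Unraid's single parity (parity1) is a byte-wise XOR across all data disks, and XOR doesn't depend on slot order, so shuffling the data disks between slots leaves parity1 correct. A toy Python sketch with made-up byte arrays (not Unraid's actual code):

```python
from functools import reduce
from operator import xor

def parity1(disks):
    # Byte-wise XOR of all data disks; the result is the same
    # no matter which order (slot) the disks are in.
    return [reduce(xor, column) for column in zip(*disks)]

disk1 = [0x12, 0xA0, 0xFF]   # hypothetical disk contents
disk2 = [0x34, 0x0B, 0x00]

# Reassigning the disks to different slots does not change parity1:
assert parity1([disk1, disk2]) == parity1([disk2, disk1])
print("parity1 unchanged by slot order")
```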



3 hours ago, mgsvr said:

Thank you very much!


I believe you could do this without a new config.


- Unassign parity2 

- Start / stop array

- Unassign disk from disk2 slot

- Assign that disk to slot 3

- Start / stop array

- Unassign disk from disk1 slot

- Assign that disk to slot2

- Start / stop array

- Assign the old parity2 disk to slot1

- Start array (the disk in slot1 will begin zeroing; when done, the disk will be empty and usable)
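The zeroing step in the list above matters because an all-zeros disk is the XOR identity: adding a zero-filled disk to the array leaves the existing parity1 unchanged, so no parity rebuild is needed. A toy sketch (illustrative byte arrays, not Unraid code):

```python
from functools import reduce
from operator import xor

# Existing array: two hypothetical data disks and their XOR parity.
disks = [[0x12, 0x34], [0xAB, 0xCD]]
parity = [reduce(xor, column) for column in zip(*disks)]

# A freshly cleared (zero-filled) disk contributes nothing to the XOR,
# so parity computed over the enlarged array is identical.
zeroed = [0x00, 0x00]
new_parity = [reduce(xor, column) for column in zip(*disks, zeroed)]

assert new_parity == parity
print("parity unchanged after adding a zeroed disk")
```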


90% sure this will work. I know you used to be able to exchange 2 disks in the array, and when the array was restarted it would record the slot change. But you couldn't exchange 4 disks; instead you'd have to do 2, start/stop the array, then do the other 2 and start the array.


BTW, slot exchange would not work with parity2 disk assigned.
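The reason slot exchange breaks with parity2 assigned: parity1 (P) is a plain XOR, but parity2 (Q) is a Reed-Solomon style syndrome where each data disk is multiplied by a per-slot coefficient in GF(2^8), so moving a disk to a different slot changes Q even though P is unaffected. A simplified sketch (one byte per disk; not Unraid's actual implementation):

```python
def gf_mul(a, b):
    # Multiply in GF(2^8) with the reducing polynomial 0x11d,
    # the same field commonly used for RAID-6 style Q parity.
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def p_parity(disks):
    # P: plain XOR, slot-order independent.
    p = 0
    for byte in disks:
        p ^= byte
    return p

def q_parity(disks):
    # Q: XOR of (g**slot * byte) with generator g = 2,
    # so the result depends on which slot each disk occupies.
    q, coeff = 0, 1
    for byte in disks:
        q ^= gf_mul(coeff, byte)
        coeff = gf_mul(coeff, 2)
    return q

original = [0x12, 0x34]
swapped = [0x34, 0x12]
assert p_parity(original) == p_parity(swapped)   # P survives a slot swap
assert q_parity(original) != q_parity(swapped)   # Q does not
print("P is slot-order independent; Q is not")
```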

13 hours ago, mgsvr said:

Are there any disadvantages to going with a New Config with "Retain settings" turned on? Do I have to set up all the system and docker/VM settings again?



No, no disadvantage, except it's maybe slightly easier to make a mistake with a new config.


But it seems that is the only easy way to do it given Johnnie's update.


Thank you, guys. So many ways to do stuff with unRAID. I'll make sure I capture an image of the main page before I do a new config.


I'll post this here from user itimpi in case I need to upgrade to larger drive:




The process in unRAID for upgrading a disk to a larger one (assuming you have a parity drive) is:

  • stop the array
  • change the slot with the disk to be removed to Unassigned
  • start the array.   It will start with the missing disk being emulated by the combination of the other drives plus parity.   This step simulates a disk failing, but its main purpose in this case is to get unRAID to ‘forget’ the serial number of the old disk.
  • stop the array
  • assign the replacement disk to the slot where the old disk was
  • start the array

this will cause unRAID to automatically rebuild the disk contents to the new disk and on completion of the rebuild expand the file system to fill the new larger disk.   You will be able to use the system as normal while this is going on although any significant amount of I/O will slow down the rebuild process.   Although the rebuild normally works fine it is a sensible precaution to keep the disk being replaced unchanged until the rebuild completes just in case.


In fact, I am not sure you even need the steps to 'forget' the old drive, but they only take a minute or so.
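The emulation/rebuild in the quoted steps is the XOR parity trick in reverse: with single parity, the missing disk's bytes are recovered as the XOR of the parity disk and all surviving data disks. A toy sketch with made-up byte arrays (not Unraid code):

```python
from functools import reduce
from operator import xor

# Hypothetical three-disk array and its single-parity disk.
disks = [[0x11, 0x22], [0x33, 0x44], [0x55, 0x66]]
parity = [reduce(xor, column) for column in zip(*disks)]

# Pretend disk 2 failed or was unassigned; the array emulates it by
# XOR-ing parity with every surviving disk, byte by byte. A rebuild
# simply writes these emulated bytes onto the replacement disk.
missing = 1
survivors = [d for i, d in enumerate(disks) if i != missing]
rebuilt = [reduce(xor, column) for column in zip(parity, *survivors)]

assert rebuilt == disks[missing]
print("emulated disk matches the original contents")
```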





This topic is now archived and is closed to further replies.
