
HD Upgrade with no Parity


KLSsandman


Hi,

 

I have 4 x 3TB disks with a 1TB SSD cache but no parity.  I have purchased 4 x 6TB disks, so I'd like to use 3 for capacity and the fourth for parity.  As far as I can tell, the only way is to copy the data off to another location, remove the disks, and re-create the array with the new ones.

 

Can anyone think of an alternative?

 

One thought I had was to move the data off the old array's fourth disk and remove it, put in a new 6TB disk, and assign it as parity.  I could then remove each of the other 3 x 3TB disks one by one, upgrade it, and wait for the rebuild before repeating.

 

Thanks

Simon


If you wanted to follow @trurl's method, you could, for the duration of the upgrade, disable the VM and Docker services and remove the cache SSD to free up that port for the extra array drive needed. You will still need to manually move data from the drive slot you wish to vacate in the final configuration, however.

 

If it were me, here's how I would do it. Since you don't already have parity, I see no need to build parity just to rebuild drives and then have to break it to remove the extra drive at the end.

1. Disable docker and vm services

2. Remove cache drive and temporarily attach 1 new drive

3. Set new config and assign new drive as array slot 5

4. Make sure the only unformatted drive is the fresh drive, and format the new drive as desired, XFS or BTRFS

5. Use rsync at the console to copy the contents of array slots 1 and 2 to the new drive in slot 5 (see the rsync sketch after this list)

6. Remove the old drives assigned to slots 1 and 2, and replace them with a fresh drive

7. Set new config; assign the drive with the contents of old slots 1 and 2 to slot 1, the fresh drive to slot 2, and the 2 remaining old drives to slots 3 and 4

8. Make sure the only unformatted drive is the fresh drive, format as desired

9. Use rsync at the console to copy the contents of slots 3 and 4 to the new drive in slot 2

10. Remove remaining old drives, physically mount all new drives and cache drive in desired locations

11. Set new config, assign remaining 2 fresh drives as parity and slot 3, assign 2 drives full of copied data to slots 1 and 2, and assign cache drive

12. Build parity

13. Check parity

14. Reenable docker and vm
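
For steps 5 and 9, here's a minimal rsync sketch, assuming the array drives are mounted at /mnt/disk1, /mnt/disk2, and /mnt/disk5 as unraid normally does (adjust the disk numbers to match your actual slots):

    # copy the contents of slots 1 and 2 onto the drive in slot 5,
    # preserving permissions, ownership, and timestamps (-a)
    rsync -av /mnt/disk1/ /mnt/disk5/
    rsync -av /mnt/disk2/ /mnt/disk5/

The trailing slashes matter: they tell rsync to copy the contents of each source directory rather than the directory itself. Spot-check a few files on the target before doing the next new config.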

 

At each step that involves doing a new config, check and double check the serial numbers of the drives you are adding and removing. I recommend either putting a piece of tape on the drive where you can keep notes, or keeping a notepad and mapping the serial numbers to slots, along with intended and current content.
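
If you want to double-check serials from the console, the /dev/disk/by-id/ links (standard Linux, not unraid-specific) embed the model and serial in each name:

    # each symlink name includes the drive model and serial number
    ls -l /dev/disk/by-id/ | grep -v part

The Main page in the unraid GUI also shows the serial next to each slot assignment, which is usually the easier reference.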

 

The advantage of doing it my way is speed, and you get a fresh format on the new drives. Rebuilding doesn't allow reformatting, so any built-up fragmentation is transferred to the new drives.
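
If you're curious how fragmented an existing disk actually is, XFS can report it read-only from the console; a sketch assuming disk 1 is XFS and uses the usual /dev/md1 array device (safe to run on a mounted disk):

    # report actual vs. ideal extent counts and a fragmentation factor
    xfs_db -r -c frag /dev/md1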

41 minutes ago, FlorinB said:

No preclear needed?

Preclear has never been "needed". If it were, limetech would have included it as part of unraid a LONG time ago.

 

Originally, when you added a new slot assignment to an already parity-protected array, the entire array would be unavailable for the time needed to write zeroes to the drive so parity would remain valid. That downtime prompted a few forum members to collaborate with limetech to produce a script that would zero the drive and then put a proprietary bit of data on it that would allow unraid to trust that the rest of the drive was already zeroes, so it could be added immediately and then formatted.
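
Conceptually, the zeroing step is just a sequential write of zeroes across the whole device, along these lines (illustrative only; sdX is a placeholder, this destroys everything on the drive, and it is not a substitute for the preclear script, which also writes the signature unraid checks for):

    # write zeroes across the entire drive -- WIPES all data on sdX
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress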

 

Logic was added to the script that would test the drive extensively, and only declare success when zeroes were successfully read back from the entire drive. Over the years, this was determined to be a REALLY GOOD way of testing and erasing drives, regardless of where the drive was intended to be used.
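
The read-back verification amounts to comparing the whole device against a stream of zeroes, something like this sketch (illustrative; sdX is a placeholder for the drive being tested):

    # cmp reads until the end of the drive; if every byte was zero,
    # the only output is an EOF message for /dev/sdX
    cmp /dev/zero /dev/sdX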

 

Fast forward several years, and limetech changed the timing of the array start when adding a new drive to a protected array: the writing of zeroes is now done in the background while the array is allowed to start normally. Then, once the clearing is done, you are given the option to format the drive and start using it.

 

So, the original purpose of preclear hasn't been needed for MANY years, but it's still a very good testing tool.

 

In the OP's case, he is going to have a full backup of all his files on the original drives, so as long as parity builds and checks OK, and the SMART reports look good, I'd be satisfied the drives are fine.
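
To pull a SMART report from the console (sdX is a placeholder; the same data is available from each drive's page in the unraid GUI):

    # full SMART report; watch the reallocated, pending,
    # and offline-uncorrectable sector counts in particular
    smartctl -a /dev/sdX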

3 minutes ago, jonathanm said:

Originally [...] the entire array would be unavailable for the time needed to write zeroes to the drive so parity would remain valid

Good reason for preclearing earlier.

4 minutes ago, jonathanm said:

limetech changed the timing of the array start with the addition of a new drive to a protected array, now the writing of zeroes is done in the background while the array is allowed to start normally

Now the array starts immediately.

 

Preclear might be used optionally for drive testing purposes.

 

Thank you very much for the detailed clarifications, jonathanm.


