
Rebuilding dual parity drives


Drewster727


I've currently got 2x6TB WD Red drives as my parity disks (dual parity).

I recently purchased 2x8TB HGST Deskstar drives to replace them (so that I can start adding 8TB drives to my array).

 

I've never had to rebuild dual parity before, let alone replace the disks. I assume it's exactly the same process as for a single parity disk.

In other words, my plan to upgrade them is:

 

  • Preclear new 8TB disks (already done)
  • Stop the array
  • Shut down the server
  • Swap the current parity disks with the new ones
  • Boot up server
  • Re-assign parity slots to the new drives
  • Turn on the array and just let it rebuild

 

Is this a correct procedure for dual parity rebuilds?

 

Thanks!


Well, the only reason I wasn't considering doing them one at a time is that parity checks are slow and cause performance issues with my array during the sync, which I'm trying to avoid. Question -- if I do it one at a time, is unRAID smart enough to rebuild parity from the existing parity disk, or does it still have to read from the entire array during the sync process?

Just now, Drewster727 said:

if I do it one at a time, is unRAID smart enough to rebuild parity from the existing parity disk or does it still have to read from the entire array during the sync process

 

The parity disks are different from each other, so unRAID can't just read the other one; it needs to read all the data disks.
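A sketch of why neither parity can be derived from the other, assuming unRAID's dual parity follows the standard RAID-6 P+Q scheme (P is a plain XOR, Q is a Reed-Solomon sum over GF(2^8)); this is an illustrative implementation, not unRAID's actual code:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the RAID-6 polynomial 0x11d."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return r

def parity(data_bytes):
    """Return (P, Q) for one byte position across the data disks."""
    p = q = 0
    g = 1                       # generator power g^i, starting at g^0 = 1
    for d in data_bytes:
        p ^= d                  # P parity: a simple XOR
        q ^= gf_mul(g, d)       # Q parity: each disk weighted by g^i
        g = gf_mul(g, 2)        # advance to the next generator power
    return p, q

# P and Q disagree, so neither can be computed from the other alone:
print(parity([0xAA, 0x55, 0x0F]))  # → (240, 60)
```

Because Q weights every data disk differently, rebuilding either parity drive always means reading the full width of the array.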

1 minute ago, johnnie.black said:

 

The parity disks are different from each other, so unRAID can't just read the other one; it needs to read all the data disks.

 

Ok, figured that was probably the case. I may take the risk and just do a full rebuild on both of them at once, to minimize the time I'm putting pressure on the array.


If you want the array to remain valid against the old parity drives (2x 6TB), you should run in maintenance mode and rebuild both new drives at the same time. The drawback is that the whole file system will be unavailable during the rebuild, but you keep two-drive failure protection throughout.


Ok, so just to clarify:

  1. Turn off the array
  2. Switch to maintenance mode (ensures no writes?)
  3. Swap the parity disks in the GUI
  4. Let it rebuild
  5. Once complete, exit maintenance mode

If anything fails, pop the old 6TB parity disks back in to resolve the issues.

 

Is this correct?

4 minutes ago, Drewster727 said:

Is this correct?

Yes

 

Here's a better sequence:

  • Preclear the new 8TB disks (already done)

  • Stop the array
  • Shut down the server
  • Swap the current parity disks with the new ones
  • Boot up server
  • Re-assign parity slots to the new drives
  • Turn on the array in maintenance mode and just let it rebuild
10 months later...

Question: will it make much difference? I am planning to go from 1 parity disk to 2x 10TB drives as parity, using the steps @Benson listed before. I am running unRAID OS version 6.5.

Any idea how long it will take to get the 2x 10TB drives done? It took 22h 30min to go from a 4TB single parity to a single 8TB parity drive, with only 500GB of data.

The new setup is going to have 26TB of space before I move over Plex with 24.21TB of data. Then I am taking 2x of the IronWolf 10TB drives and making them the new parity drives, and adding the remaining 4x 10TB drives to the array, giving me 36TB more storage: a total of 62TB in the end.
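The totals above are internally consistent (parity drives add no usable space); a quick sanity check, treating the 36TB as the stated net gain:

```python
# Planned usable capacities in TB, as stated above.
before_move = 26     # array space before the 10TB drives go in
net_gain = 36        # net usable space gained from the remaining 10TB drives
total = before_move + net_gain
print(total)  # → 62, the stated final capacity
```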

  • Sync the array once before the change, then:
  • Stop the array
  • Shut down the server
  • Swap the current parity disks with the new ones
  • Boot up server
  • Re-assign parity slots to the new drives
  • Turn on the array in maintenance mode and just let it rebuild

 

thank you in advance for all the helpful info in this post.

3 hours ago, Leon_CC said:

Question: will it make much difference? I am planning to go from 1 parity disk to 2x 10TB drives as parity.

No difference, but please disable the array's "auto start" option before making the change.

 

3 hours ago, Leon_CC said:

Any idea how long it will take to get the 2x 10TB drives done?

It depends on the performance of all the existing disks and the new disks (e.g. 7200rpm is faster than 5400rpm, and different hard disk models can differ a lot even at the same capacity).

It also depends on whether the system has a bottleneck (e.g. if controller bandwidth hits its ceiling, 10 disks may be limited to 100MB/s per disk, while 5 disks would get the full 200MB/s each).

On a well-set-up system with 7200rpm 10TB hard disks, e.g. the ST10000DM0004 (not DM004), I would expect less than 19 hours to finish the rebuild.
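That estimate can be reproduced with simple arithmetic; ~150 MB/s is an assumed sustained average for a 7200rpm 10TB drive, not a measured figure:

```python
def rebuild_hours(capacity_tb, avg_mb_per_s):
    """Rough rebuild time: drive capacity divided by sustained speed."""
    return capacity_tb * 1e12 / (avg_mb_per_s * 1e6) / 3600

# A 10TB parity drive averaging ~150 MB/s over the whole rebuild:
print(round(rebuild_hours(10, 150), 1))  # → 18.5
```

The rebuild runs at the speed of the slowest disk at each point, so one old, slow data drive drags the whole estimate up.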

 

3 hours ago, Leon_CC said:

It took 22h 30min to go from a 4TB single parity to a single 8TB parity drive, with only 500GB of data.

That is a bit slow. Is that the actual finish time, or the estimate shown during the rebuild? BTW, the amount of data makes no difference, because a rebuild is a block-level operation that doesn't involve the file system.

 

3 hours ago, Leon_CC said:

adding the remaining 4x 10TB drives to the array, giving me 36TB more storage

Do you mean replacing four 1TB disks with four 10TB disks by rebuilding twice? I keep wondering whether something could be done to save a lot of time, but it may be risky.


I just rebuilt my parity with two 8TB WD Reds (well, white-label WD80EZZX which is basically a Red), and I got:

 

Parity is valid
Last checked on Mon 26 Mar 2018 11:34:45 AM BST (today), finding 0 errors.
Duration: 17 hours, 58 minutes, 58 seconds. Average speed: 123.6 MB/sec

 

I wouldn't expect a 10TB to run much more than 20-22 hours, depending on the speed of your other drives. If you have an array full of old 2TB Greens, then expect things to be slower.

 

My array is an elderly 4TB Seagate Desktop plus 2x 8TB Seagate Archive v2 drives, so none of them are quick units. The machine is a Ryzen R5-1600 on an X370 board.
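The figures reported above are self-consistent, which is a quick way to confirm the sync ran flat out the whole time:

```python
# Cross-check the reported parity sync: 8TB at an average of 123.6 MB/s.
capacity_bytes = 8e12            # 8TB parity drive
avg_bytes_per_s = 123.6e6        # reported average speed
hours = capacity_bytes / avg_bytes_per_s / 3600
print(round(hours, 2))  # → 17.98, matching the reported 17h 58m
```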

7 hours ago, Leon_CC said:

That's the current system: my old NAS, an RN516. I am dumping all the data to the unRAID server, then adding 4x of these drives to the array and using the other 2x as the dual parity setup.


That makes it clearer.

Big job: migrating to unRAID, with all the data moving out and back in.

 

 

 

For your case, I would propose a different way (if your plan is an 11+2 disk array instead of 7+2).

 

- Your old NAS is RAID-5, so if you pull out 2 disks, the data on those 10TB disks is already invalid.

 

So I would do this (protection is still maintained by the old 8TB parity disk and the 2TB/4TB data disks):

  • Copy all data to unRAID
  • Disable array auto start
  • Shut down unRAID
  • Pull out the 8TB parity disk
  • Plug in all the 10TB disks
  • Start unRAID and click New Config, retaining the data disk assignments
  • Start the array in maintenance mode
  • Stop the array and assign the 10TB disks as 2 parity and 4 data disks
  • Just let unRAID build the new parity
  • If all is normal, start the array in normal mode and let unRAID format the four 10TB data disks
  • Finished

 

 

 

But if possible, I would suggest adding a 4TB disk as a 2nd parity before moving in the 10TB disks; then you keep 2-disk protection the whole time (not sure whether unRAID allows different-size parity drives).


I was thinking of getting an extra 8TB before I do the migration anyway, after more reading I have been doing. And yes, they have to be the same size, from what I was reading. Just for the backup; in the end I would have an extra 18TB in total, with a setup of 2x 10TB parity drives and an array of 4x 10TB + 2x 8TB drives + 6x 4TB drives and 1x 2TB drive, for a total of 40+16+24+2 = 82TB with 2x 10TB parity drives, aka 12+2 :) And thank you again for the info @Benson
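The arithmetic in that final layout checks out; a quick sketch (data drives only, since the 2x 10TB parity drives add no usable space):

```python
# Final array composition: drive size in TB -> count (parity excluded).
drives = {10: 4, 8: 2, 4: 6, 2: 1}
total_tb = sum(size * count for size, count in drives.items())
print(total_tb)  # → 82, i.e. 40 + 16 + 24 + 2
```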


Sounds good, adding an extra 8TB parity during the migration. That's a big capacity increase.

But those 2x 8TB drives need to be added after the migration is complete.

Would you consider not adding everything into a 12+2 array? Say 10+2, so 2 disks (e.g. 2x 4TB) could be kept as spares.

3 weeks later...

Archived

This topic is now archived and is closed to further replies.
