
Process to remove failed hdd (as part of move from RFS to XFS FS)



Hi,

 

It's been a while, as I've had my server down during decorating for around 6 months. On powering everything back up I had OS upgrades to do, apps to update etc., and as part of that I set up the common issues / fix app. On its advice I'm working on converting all my RFS disks to XFS.

 

At the start my parity is 4TB, with one 4TB data disk and the rest (5) 2TB.

 

I've installed a new pre-cleared 4TB to let me copy from the existing 4TB, then I'd work down the 2TB drives as follows:

 

Disk 7 (new disk, formatted as XFS)

Disk 4 (RFS 4TB) -> Disk 7 (4TB XFS)

Disk 2 (RFS 2TB) -> Disk 4 (now XFS 4TB)

Disk 1 (RFS 2TB) -> Disk 4 (now XFS 4TB)

My plan was to replace Disk 1, as it is fairly old, so I moved data from Disk 2 and Disk 1 onto the 4TB, thinking Disk 1 could then be removed and Disk 2 formatted to XFS to carry on the cycle.

 

Towards the end of copying, Disk 1 red-balled on me, and as it was copying overnight I didn't notice until the morning, so the copy process completed onto Disk 4 using emulated data. I've checked the hashes of the data copied from Disk 1 to Disk 4 and everything looks OK, so I don't believe it's caused an issue, but I'm not really sure how best to get back to being protected without emulation and carry on the move to XFS.
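A hash check along those lines can be sketched as below. Again, temp directories stand in for the emulated Disk 1 and the new Disk 4, so the sketch is safe to run as-is; swap in the real mount points for an actual verification.

```shell
# Sketch: verify a copy by generating checksums on the source and
# re-checking those same relative paths under the destination.
SRC=$(mktemp -d)   # stands in for the (emulated) source disk
DST=$(mktemp -d)   # stands in for the destination disk
SUMS=$(mktemp)
echo "payload" > "$SRC/file.bin"
cp "$SRC/file.bin" "$DST/file.bin"

# Record MD5 sums relative to the source root...
(cd "$SRC" && find . -type f -exec md5sum {} + > "$SUMS")
# ...then verify them against the destination; any mismatch is reported.
(cd "$DST" && md5sum -c "$SUMS")
```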

 

Can I remove Disk 1 completely from the setup, allow it to rebuild whatever parity it needs, then stop the array, convert Disk 2 to XFS and carry on? If so, can someone outline the steps please? I've seen a few threads mention using the New Config tool, but I didn't want to start hitting buttons before getting advice on what I'm doing.

 

I've been pretty lucky with Unraid over the years and have needed to do very little to keep it going. I've only had a couple of failures, which were straight replacements, but this ease of use has left me with little in the way of troubleshooting / fixing skills.


You could use the New Config tool and rebuild parity based on the remaining disks (whose data will remain intact). Doing this, you are unprotected until the parity rebuild has completed.

 

An alternative would be to replace disk1 with a new drive and rebuild its contents. It would be rebuilt with reiserfs, but on completion you could follow the process to create an empty XFS file system on it (which only takes moments) and continue from there.

 

What is not clear to me (as you did not mention it) is whether, on completion of this conversion exercise, you want to end up with 6 or 7 disks in the array. The answer to that might favour one approach over the other.

 

BTW: disks frequently red-ball for reasons other than the drive actually failing. I just mention it in case you want to consider testing the 'failed' 2TB disk1 after removing it, with a view to repurposing it. If so, you could run a pre-clear cycle on it as a test.

 


I do have another few 4TB drives which I could pre-clear and use to rebuild disk1, but these disks are destined for a 2nd server I'm building, with the plan of keeping it offsite and just bringing it around once a month. So if I use up a 4TB drive to rebuild disk1, I would ideally want it back at some point for the new server. I'm presuming I would then just need to use the New Config tool to remove it down the line, and would be in the same position of being unprotected until the rebuild completed?

 

As I've added 4TB of storage (I needed somewhere to copy my existing RFS 4TB data to), I had planned by the end to remove 2x 2TB drives, getting rid of some of the oldest disks and leaving me with 1x 4TB parity, 2x 4TB data disks and 3x 2TB, giving 14TB total, which is the same as I started with. I have about 3.5TB free before starting this process, so that arrangement would work fine, and if I need more space in the future I can upgrade both the main server and the backup server at the same time to keep the totals matched. Sorry, I never thought to mention the plans for the 2nd server and what impact they might have on my options. The 2nd server will have a 6TB parity, 1x 6TB data disk and 2x 4TB data disks.
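Quick sanity check on that total (a throwaway snippet, nothing Unraid-specific): the parity disk sits outside the usable total, so only the data disks count, and 2x 4TB plus 3x 2TB does come to 14TB.

```shell
# End-state usable capacity: parity excluded, data disks only.
data_tb=$((2*4 + 3*2))
echo "${data_tb}TB usable"   # prints "14TB usable"
```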

 

Sadly disk1 isn't detecting at BIOS level, so I think it's just packed in completely. It makes start-up sounds when powered but isn't detected, and I've tried different cables, ports etc. It's about 10 years old, so it hasn't done too badly. I think I'll just bid it farewell, as I'm not sure I'd trust it even if it did start detecting again.

