
Parity Swap, Followed By Data Rebuild - Disk Now "Unmountable", Even Though It's Green


Solved by JorgeB


Hello! I'm having some issues, here's the quick summary:

  • A 4TB drive in my array failed (Disk 6 in my array)
  • I decided to replace this with a 6TB drive. This was larger than my parity, so I did a Parity Swap, as outlined here
  • Once it completed, I started the array, which also initiated a Data-Rebuild
  • Parity sync / Data rebuild finished
  • My array is running. Both the Parity Drive & Disk 6 have green balls next to them.
    • The new 6TB is now the Parity, and the old Parity is now Disk 6.
    • If I hover over the green ball next to Disk 6, it says "Normal operation, disk is active"
    • However, on the right, Disk 6 says "Unmountable: not mounted"

 

This is where I'm currently at. I'm a little confused, because Disk 6 doesn't say it's emulated, but it also says it's not mounted. Did it delete the data in the Data-Rebuild? What exactly is going on here? 

 

I'm also confused about how best to proceed. My ultimate goal is to replace Disk 6 with a second 6TB drive I have. Can I just remove Disk 6 at this point, install the new drive, and initiate another Data Rebuild? Or do I need to get the array back into a fully functional state with the current drives first?

 


 

Thanks for any advice. Diagnostics are attached.

unraid-diagnostics-20231206-2352.zip


I ran xfs_repair on Disk 6 and got the following response:

 


Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 

Looks like I need to run with -L? I've read that can cause data loss: if running "xfs_repair -L" causes me to lose data, is that data unrecoverable?
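For what it's worth, this is the order the error message itself recommends. A rough sketch of the sequence (the device name /dev/md6 and mount point are assumptions; check how your array maps Disk 6 before running anything):

```shell
# Sketch only: device name and mount point are assumptions, not verified paths.
# 1. Try mounting first, so the XFS log is replayed (the safe path):
mount -t xfs /dev/md6 /mnt/test && umount /mnt/test

# 2. If the mount succeeded, re-run the repair WITHOUT -L:
xfs_repair /dev/md6

# 3. Only if the mount fails should the log be zeroed, as a last resort,
#    since discarding un-replayed metadata can cause corruption:
xfs_repair -L /dev/md6
```

The -L flag throws away whatever metadata updates are still sitting in the log, which is why the mount-first attempt matters: a successful mount replays those updates instead of discarding them.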

 

I'm also curious whether Disk 6 is properly backed up on the Parity Disk. Is there any way to check that?

22 minutes ago, RikkiTikkiTavi said:

I'm also curious whether Disk 6 is properly backed up on the Parity Disk. Is there any way to check that?

Parity does not 'back up' anything. It merely provides a way to reconstruct the sectors of a failed drive, as described here in the online documentation accessible via the 'Manual' link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
 

Parity is real-time on Unraid, so by default it is kept in sync. There is an option under Settings->Scheduler to run a periodic housekeeping task that checks nothing has happened (e.g. crashes, power cuts) that might mean it is not perfectly synced, and (optionally) corrects any discrepancies found.
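To make the "reconstruct, not back up" point concrete, here is a toy sketch of single-parity XOR, with one byte standing in for each data disk (a real array does this per sector, across every disk):

```shell
# Toy model: the parity byte is the XOR of the corresponding byte on every data disk.
d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0x0F ))
parity=$(( d1 ^ d2 ^ d3 ))

# If "disk 2" fails, its contents are rebuilt from parity plus the surviving disks:
rebuilt=$(( parity ^ d1 ^ d3 ))
printf 'rebuilt=0x%02X original=0x%02X\n' "$rebuilt" "$d2"   # rebuilt=0x3C original=0x3C
```

This is also why parity cannot recover files deleted or corrupted on a healthy disk: parity only holds enough information to regenerate one missing disk from all the others, not a second copy of the data.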

 


Disk 6 is mounted and running, thank you! One final question, just to make sure I'm not being an idiot: if I want to replace Disk 6 with a larger disk now, do I just do this as normal, by shutting down the server, replacing Disk 6, and then doing a Data Rebuild?

 

Or should I run a Parity Check first? I'm not sure how xfs_repair affects the parity sync.

Edited by RikkiTikkiTavi
