
Accidentally Mounted Data Drive as Parity Drive after Reverting to Old Config


rynojvr


Hello,

 

I was attempting to upgrade my Unraid server to a newer version. It failed to shut down for over 40 minutes, and I made things worse with a hard shutdown, which broke the flash drive. I was able to get a new flash drive working, but the only config backup I had was from a couple of months ago, with a previous disk layout. Since then, I had upgraded the parity drive and moved the old parity disk into the array as a data disk. With the old config restored, the disk that now holds data was assigned as parity again (as it had been in the old layout), and through an oversight I started the array. Not long after, I noticed there was an unassigned drive (the one holding the current parity data, which the old config didn't know about) and immediately realized my mistake.

 

Now I'm unable to mount that data drive at all. Based on some of the errors, I'm hoping the data in the middle of the drive is relatively intact, and that it's mainly the filesystem headers that are broken.

 

For debugging: the drive failing to mount is `/dev/sde`, the 6 TB drive.

 

dmesg when mounting: 

[140844.853987] XFS (sde1): Failed to read root inode 0x80, error 117
[140868.791678] XFS (sde1): Mounting V5 Filesystem
[140868.891545] XFS (sde1): Metadata corruption detected at xfs_dir2_sf_verify+0xbf/0x1d8 [xfs], inode 0x80 data fork
[140868.892386] XFS (sde1): Unmount and run xfs_repair
[140868.893073] XFS (sde1): First 117 bytes of corrupted metadata buffer:
[140868.893752] 00000000: 0a 04 00 00 00 00 00 00 00 80 05 00 60 4d 65 64  ............`Med
[140868.894404] 00000010: 69 61 02 00 00 00 00 00 00 00 83 04 00 78 52 79  ia...........xRy
[140868.895125] 00000020: 6e 6f 02 00 00 00 02 06 74 5c 4d 05 00 88 47 61  no......t\M...Ga
[140868.895825] 00000030: 6d 65 73 02 00 00 00 01 8b 36 02 a4 03 00 a0 76  mes......6.....v
[140868.896536] 00000040: 6d 73 02 00 00 00 02 8f 90 aa 48 11 00 b0 72 65  ms........H...re
[140868.897228] 00000050: 73 69 6c 69 6f 5f 64 6f 77 6e 6c 6f 61 64 73 02  silio_downloads.
[140868.897927] 00000060: 00 00 00 01 01 e1 9b 07 00 00 00 00 00 00 00 00  ................
[140868.898632] 00000070: 00 00 00 00 00                                   .....
[140868.899342] XFS (sde1): Failed to read root inode 0x80, error 117
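
For context, error 117 in those messages is EUCLEAN ("Structure needs cleaning"), the code the kernel returns when XFS metadata fails its verifier checks. If the `errno` utility from the moreutils package happens to be available, the number can be looked up directly (shown only as an illustration):

errno 117
# prints something like: EUCLEAN 117 Structure needs cleaning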

 

The final output of running `xfs_repair` on the drive:

.......................................................................................................................................................................Sorry, could not find valid secondary superblock
Exiting now.
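
For anyone following along: `xfs_repair` has a no-modify mode that only reports what it would change, which is the safer first step before letting it write anything. The long run of dots above is xfs_repair scanning the whole device for a backup superblock, and failing to find one suggests the overwritten region also covers the superblock copies. A minimal sketch, assuming `/dev/sde1` is still the partition in question and it is not mounted or part of a started array:

# No-modify mode: report problems without writing anything to the disk
xfs_repair -n /dev/sde1

# Verbose repair attempt, only after reviewing the no-modify output
xfs_repair -v /dev/sde1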

 

If there are any other actions I should run or logs I should attach, please let me know.

 

vault-diagnostics-20231123-1111.zip


IF the actual parity drive was not formatted, nothing was written to the array between start and stop, and the server is left untouched for now, then a last resort after a recovery attempt with UFS Explorer could be to have Unraid rebuild that drive. It will likely be corrupted too, but maybe less, or in a different way that could allow a more complete recovery.


@Kilrah's suggestion might just work. The steps involved would be:

 

  • Use Tools -> New Config. I would suggest selecting the option to retain all current assignments.
  • Return to the Main tab and correct the data drive and parity drive assignments to what they should be.
  • Tick the "Parity is valid" checkbox to stop Unraid from trying to recalculate parity. You will still get a warning, as the warning does not take the ticked checkbox into account.
  • Start the array to commit these assignments. I would expect the problem drive to show as unmountable, but ignore that for now.
  • Stop the array.
  • Unassign the disk you incorrectly assigned to parity earlier.
  • Start the array, and this disk should now be emulated.
  • Whatever now shows up on the emulated disk is what you would end up with if you rebuilt onto a physical drive. It is possible the disk will still show as 'unmountable', but it still might be possible to repair its file system (see the command sketch at the end of this post).

At this point I would suggest you stop, take diagnostics, and post them here so we can see the current state of things. Keep the disk you have just removed intact for now, just in case the UFS Explorer type route is still needed.
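
If the emulated disk does show as unmountable, the file-system check mentioned above is normally run against the emulated device rather than the physical disk, with the array started in maintenance mode. A rough sketch, assuming the disk is XFS and ends up in slot 1 (Unraid exposes array slots as /dev/mdX; on recent releases the device may appear as /dev/md1p1, so adjust to match your system):

# Check the emulated disk without modifying it (array in maintenance mode)
xfs_repair -n /dev/md1

# If the report looks reasonable, run the actual repair against the emulated device
xfs_repair -v /dev/md1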

