Force Read Only Array Mount with Old Parity Drive


You'd need to get a small disk; it cannot be smaller than the old disk was, but it also cannot be larger than parity. To test if the emulated disk mounts you can even use a bad/failing disk, as long as it's detected for the initial config that's enough, then if you want to rebuild you'll need a good one of course.

Link to comment

Well, the old Parity and Disk 1 were both 5TB, so I don't really have many options on the size.


What about some of the other things I mentioned, like cloning the Parity to a larger drive, or the parity swap, which, looking at the page on it (https://wiki.unraid.net/Manual/Storage_Management#Parity_Swap), seems to be somewhat like cloning the parity drive?

 

If I can just get it to emulate, I was thinking of copying the data to an external drive instead of trying to rebuild it.

Link to comment
On 10/1/2022 at 3:43 AM, JorgeB said:

-Unassign the disk you want to emulate
-Start array (in normal mode now); ideally the emulated disk will now mount and contents look correct, if it doesn't you should run a filesystem check on the emulated disk
-If the emulated disk mounts and contents look correct you can access it or even rebuild; to rebuild, stop the array

 

I ordered a 5TB drive that will arrive in a couple of days so I can attempt to use my original 5TB Parity drive, but regarding the new 8TB Parity that didn't emulate the failed Disk 1, you mentioned running a filesystem check if it doesn't mount.  So if it's saying Disk 1 is unmountable, is there something I can try in the meantime working off the 8TB Parity to possibly make Disk 1 mountable?

 

The other question I had that I mentioned previously but didn't get an answer on: can a Parity drive be cloned to a different (larger) drive and used instead, and if a data drive is cloned, would the cloned drive still work or would it throw a parity error?  I didn't know if Parity is calculated from the data only or if it uses hardware information as well for its calculations.

Link to comment
3 hours ago, Kevin T said:

So if it's saying Disk 1 is unmountable, is there something I can try in the meantime working off the 8TB Parity to possibly make Disk 1 mountable?

No valid filesystem was being detected on the emulated disk, but you can try running xfs_repair directly. Assuming the filesystem was xfs, start the array in maintenance mode and type:

xfs_repair -v /dev/md1
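
(For reference, xfs_repair also has a read-only "no modify" mode that only reports what it would change, which can be worth running first — a minimal sketch, using the same emulated-disk device as above:)

# no-modify mode: scan and report problems without writing any changes
xfs_repair -n -v /dev/md1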

 

3 hours ago, Kevin T said:

if a Parity drive can be cloned to a different (larger) drive and used instead

It can, though it might not be valid beyond the old drive's capacity; that would not be a problem since the array would not have any drives going over that size.
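
(A minimal sketch of such a clone from the console — the device names below are placeholders and must be verified first, since reversing them would destroy the source:)

# sector-level clone of the old parity drive onto the larger drive
# /dev/sdX = old parity, /dev/sdY = new larger drive (placeholders - verify with lsblk before running)
dd if=/dev/sdX of=/dev/sdY bs=1M status=progress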

 

3 hours ago, Kevin T said:

or if a data drive is cloned, would the cloned drive still work or would it throw a parity error?

Don't understand the question.

Link to comment

I guess with all the swapping of the Parity drives, I swapped back to the newer Parity drive to do the xfs_repair and now none of the drives are showing as configured.  Sorry for the questions, but I'd rather be safe than sorry: since I have the "Parity is already valid" option at the bottom, do I still have to do a New Config again?  Also I noticed at the top right of the Parity drive it says "All existing data on this device will be OVERWRITTEN when array is Started", do I need to worry about that?  Screenshot is below.

 

Web capture_7-10-2022_22436_192.168.0.20.jpg

Link to comment
On 10/1/2022 at 11:43 AM, JorgeB said:

-IMPORTANT - Check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on parity disk(s) will be overwritten, this is normal as it doesn't account for the checkbox, but it won't be as long as it's checked)

 

Link to comment

Oh ok, the wording was different.

 

I got this error popping up:

 

Unraid Disk 1 error: 07-10-2022 02:59

Alert [UNRAID] - Disk 1 in error state (disk dsbl)
No device identification ()

 

And this is what I'm getting now with xfs_repair in Maintenance mode:

 

root@unRAID:~# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
bad primary superblock - inconsistent filesystem geometry information !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
        - block cache size set to 142328 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 181991 tail block 181938
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
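
(What the message is asking for, sketched as console commands — a minimal sketch assuming the array is still in maintenance mode so /dev/md1 exists but nothing is mounted; if the mount itself fails, -L remains the fallback exactly as the output says:)

# mount the emulated disk once so the XFS journal gets replayed, then unmount and repair
mkdir -p /mnt/repairtest        # temporary mount point, name is arbitrary
mount /dev/md1 /mnt/repairtest
umount /mnt/repairtest
xfs_repair -v /dev/md1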
 

Web capture_7-10-2022_22436_192.168.0.20.jpg

Link to comment

It's saying to mount the filesystem, but Maintenance mode doesn't mount, so should I stop and restart the array in normal mode or just remain in Maintenance mode as it is right now?  Also, what's the exact command line please? I don't want to screw anything up.

 

UPDATE: Running xfs_repair -L /dev/md1 - hopefully that is correct.

Getting tons of messages and writes to the Parity drive.

A lot of "resetting inode" and "Metadata corruption detected" messages.

 

Edited by Kevin T
Link to comment
On 10/1/2022 at 3:43 AM, JorgeB said:

-IMPORTANT - Check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on parity disk(s) will be overwritten, this is normal as it doesn't account for the checkbox, but it won't be as long as it's checked)
-Stop array
-Unassign the disk you want to emulate
-Start array (in normal mode now); ideally the emulated disk will now mount and contents look correct, if it doesn't you should run a filesystem check on the emulated disk
-If the emulated disk mounts and contents look correct you can access it or even rebuild; to rebuild, stop the array

 

I tried mounting it previously and it didn't mount, so I was going to move on to my other Parity drive, but I have to wait for another 5TB to come in.  So this is the 2nd attempt on the newer Parity, but I never did the xfs_repair before.

 

I just realized that I am still in Maintenance mode after unassigning the disk to emulate, and that is where xfs_repair is running.  I'm hoping that's not an issue; there were some similar postings I came across in the forums and they were in Maintenance mode while running xfs_repair.

 

This is the tail end of xfs_repair when it completed:

resetting inode 5524570396 nlinks from 3 to 2
resetting inode 5524595356 nlinks from 3 to 2
resetting inode 5524595374 nlinks from 3 to 2
resetting inode 5524603471 nlinks from 3 to 2
resetting inode 5524694124 nlinks from 28 to 12
resetting inode 5524765084 nlinks from 3 to 2
resetting inode 5532130247 nlinks from 3 to 2
resetting inode 5532130255 nlinks from 3 to 2
resetting inode 5532130258 nlinks from 4 to 3
resetting inode 5532130330 nlinks from 3 to 2
resetting inode 5532298543 nlinks from 3 to 2
resetting inode 5532298547 nlinks from 3 to 2
resetting inode 5532298558 nlinks from 5 to 4
resetting inode 5532298562 nlinks from 3 to 2
Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d1c11b30/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d1c11b30/0x8
Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d21689b8/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d21689b8/0x8
Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d216c4c8/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d216c4c8/0x8
Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d216a188/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d216a188/0x8
Metadata corruption detected at 0x451c80, xfs_bmbt block 0x132170f70/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x132170f70/0x8
Metadata corruption detected at 0x457850, xfs_bmbt block 0x1d1e5ac88/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d1e5ac88/0x8
Maximum metadata LSN (1976772755:33719779) is ahead of log (1:2).
Format log to cycle 1976772758.
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Refusing to write a corrupt buffer to the data device!
xfs_repair: Lost a write to the data device!

fatal error -- File system metadata writeout failed, err=117.  Re-run xfs_repair.

 

 

This is what it's showing now:

 

Web capture_7-10-2022_35112_192.168.0.20.jpg

Edited by Kevin T
Link to comment

I mentioned before that you should repeat the complete procedure again; the end of that is this part:

 

Quote

-Start array (in normal mode now); ideally the emulated disk will now mount and contents look correct, if it doesn't you should run a filesystem check on the emulated disk
-If the emulated disk mounts and contents look correct you can access it or even rebuild; to rebuild, stop the array

 

If you did not do that, do it now first; the disk likely won't mount but you should confirm.

Link to comment

It appears to be emulating Disk 1 (see below), but there is a lot of free space and I believe the disk was pretty full before.  Weird that it's showing Disk 1 as 6TB when the original one was 5TB; is that because I originally assigned the new drive, which is 6TB, and removed it?

 

There's tons of folders/files in the lost+found directory on Disk 1; it's showing 29226 objects: 23985 directories, 5241 files (38.1 PB total).
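
(In case it helps with sorting through it, a minimal sketch for taking stock of lost+found from the console, assuming the standard /mnt/disk1 mount point:)

# count the recovered files and see how much space they actually use
find /mnt/disk1/lost+found -type f | wc -l
du -sh /mnt/disk1/lost+found
# file types can help identify what the orphaned items used to be
find /mnt/disk1/lost+found -type f -exec file {} + | less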

 

Do you think that's the best I can do on recovery from this parity, or are there other options?  Also, should the files that are still present and not in lost+found be okay?

 

I'm thinking to still try the other parity when I get the additional drive in a couple of days, and I'll probably just back up what is on here if I'm out of options with this parity.

 

Web capture_7-10-2022_35112_192.168.0.20.jpg

Link to comment
28 minutes ago, Kevin T said:

Weird it's showing the Disk 1 as 6TB but the original one was 5TB, is that because I originally assigned the new drive which is 6TB and removed it?

Yes.

 

29 minutes ago, Kevin T said:

Do you think that's the best I can do on recovery from this parity or are there other options?  Also should the files be okay that are still present and not in lost+found?

Files outside lost+found should be OK, and yes, this is probably the best we can do with this parity.

 

30 minutes ago, Kevin T said:

I'm thinking to still try the other parity when I get the additional drive in a couple of days and will probably just backup what is on here if I'm out of options with this parity.

Agree.

Link to comment

Very good, at least I'm able to recover some data so far and I'm hoping for better results on the other one.  I really appreciate all your help thus far, thank you very much.

 

Should I go ahead and rebuild the new drive?  Also wondering if I should rebuild this parity if I'm not going to need it for further recovery?

 

 

Link to comment
1 hour ago, Kevin T said:

Should I go ahead and rebuild the new drive? 

It's up to you; you can copy the data from the emulated disk, just make sure nothing is written to the array.
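
(A minimal sketch of that kind of copy, assuming the destination is an external drive mounted outside the array, e.g. under /mnt/disks via Unassigned Devices — the destination path is a placeholder:)

# copy everything off the emulated disk; this only reads from the array
rsync -avh --progress /mnt/disk1/ /mnt/disks/external_backup/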

 

1 hour ago, Kevin T said:

Also wondering if I should rebuild this parity if I'm not going to need it for further recovery?

Not sure what you mean; parity is valid for the current config, so if you rebuild the disk and then check parity it will already be in sync.

Link to comment
6 hours ago, JorgeB said:

Not sure what you mean, parity is valid for the current config, if you rebuild the disk and check parity it will already be in sync.

 

Disk 4 failed first during a rebuild while upgrading to a larger drive, so the Disk 4 in it now is the original one, and I wasn't sure if the Parity would be completely valid and in sync with that.  And while that was sitting in a degraded state, the original Disk 1 failed when trying to copy a file off of it (the system froze), but that was the original drive in that slot.

 

That's why I didn't know if I should rebuild the parity to match what's on the drives and maybe run a filesystem check on the other drives as well, just to be safe.

Link to comment

Hi, well now I'm completely confused.  I got another 5TB drive as a Disk 1 placeholder so I can use my original Parity drive.  I put all my old original working drives back in (which are all the same models and sizes except Disk 4), ran the New Config, and now I'm getting this message:

 

"Disk in parity slot is not biggest"

 

It seems to be a bug.  I just upgraded to 6.11.1 from 6.11.0 earlier today, so I thought it might be a bug in the new version, but downgrading didn't resolve it.  Here are screenshots of all my drives; the new 5TB is an external and it is showing a larger partition size, but even if I exclude it I still get the same error.  I included screenshots with it both included and excluded, with the same message, as well as the drive info on all the drives.
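
(One way to compare the exact sizes Unraid is seeing, a minimal sketch from the console — the fdisk device name is a placeholder:)

# list each physical drive with its exact size in bytes to see which one is really "biggest"
lsblk -b -d -o NAME,SIZE,MODEL
# partition layout of a specific drive, e.g. the new external
fdisk -l /dev/sdX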

 

Any thoughts on how to circumvent this?

Web capture_15-10-2022_184113_unraid.local.jpg

Web capture_15-10-2022_184143_unraid.local.jpg

Web capture_15-10-2022_184217_unraid.local.jpg

Web capture_15-10-2022_184242_unraid.local.jpg

Web capture_15-10-2022_184270_unraid.local.jpg

Web capture_15-10-2022_184332_unraid.local.jpg

Web capture_15-10-2022_184400_unraid.local.jpg

Link to comment
