version 6.9.2 - most likely all data lost / why is parity disk not emulating unmountable/missing drive?



Hi!

 

I tested Unraid with one 8TB btrfs data disk and one 8TB parity disk in a 5-bay Orico USB case for nearly a month, and everything went very well.
Yesterday I decided to add another 4TB btrfs data disk, since the 8TB was nearly full. I had rebuilt the parity from the 8TB data drive just the day before.

 

Now - after starting the array with "Parity is already valid" - the 8TB data disk was not mountable, with these errors:

 

BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
BTRFS warning (device sdb1): couldn't read tree root
BTRFS error (device sdb1): open_ctree failed

 

To repair the 8TB data disk I have tried several things to recover the btrfs partition, so far without success.

 

Next I followed steps 1-3 of "Rebuilding a drive onto itself" in the docs to let the parity disk emulate the missing disk, but after starting the array with the 8TB disk now missing, I couldn't find the expected data under /mnt/user/[...]; only some directories, but no files, were visible.
I tried this both with and without the new 4TB drive attached.

 

BTW: I always had to create a "New Config" because the assignment of disks to slots was always mixed up.

 

Currently I'm running "btrfs rescue chunk-recover" on the 8TB disk, and since this takes a very long time I decided to ask for help/alternatives in the Unraid forum.

 

My questions are: shouldn't the parity drive emulate the missing data drive after I removed the 8TB drive as described above, and what does it tell me if I can only see the directory structure on the emulated drive - most likely all data lost?
Is there any way to recover the data from the parity disk back to the 8TB data disk?

 

Any help is very appreciated.

Link to comment

Are you

17 minutes ago, jehan said:

after starting the array with "Parity is already valid" the 8TB data disk was not mountable with  these errors:

Did you add a new drive and click "parity is already valid"? If you are adding a new disk that has not been precleared, then that would not be the case. Most users use XFS for the array. Post your diagnostics for the gurus (Tools tab, click Diagnostics and attach the file). Here is the procedure for adding disks.

Link to comment

about those usb hard disk docks with the multiple bays... how do they actually work with unraid?

is unraid able to detect each disk individually? is unraid able to correctly detect the individual disk serial numbers?

 

just curious.

Link to comment

In my case I had no problems with the dock: all disks are detected individually, and information like the serial numbers is also detected correctly. Attached you can find an example.

disk identity.png

Link to comment
15 hours ago, Gragorg said:

Are you

Did you add a new drive and click "parity is already valid"? If you are adding a new disk that has not been precleared, then that would not be the case. Most users use XFS for the array. Post your diagnostics for the gurus (Tools tab, click Diagnostics and attach the file). Here is the procedure for adding disks.

 

I added a new drive and started the array with "parity is already valid". Unfortunately the disk was not precleared but formatted. I'm unsure what this does to the parity drive - if I understand you correctly, I have no chance to restore the data from parity, right?

 

Concerning posting to the gurus: could you give me a hint how to reach them?

Link to comment
1 hour ago, jehan said:

I added a new drive and started the array with "parity is already valid". Unfortunately the disk was not precleared but formatted. I'm unsure what this does to the parity drive - if I understand you correctly, I have no chance to restore the data from parity, right?

That's not how you add a new disk, and by doing that you invalidated parity.

 

Please post the diagnostics: Tools -> Diagnostics

Link to comment

I wanted to see the diagnostics from when the disk became unmountable, before rebooting, but unless you saved them they will be lost.

 

19 hours ago, jehan said:

parent transid verify failed on 7053440794624 wanted 12999 found 13003

This error means writes to the device were lost, very likely because of using USB, which as mentioned is not recommended for array drives. btrfs restore might be able to recover some data; see option 2 here:

https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
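For reference, that option boils down to running btrfs restore against the old device. A minimal sketch, assuming /dev/sdb1 is the failed partition and /mnt/rescue is a destination with enough free space (both placeholders); the commands are only echoed here, since they need the real disk:

```shell
# Placeholders: adjust to the actual failed partition and a destination with free space.
DEV=/dev/sdb1
DEST=/mnt/rescue
# btrfs restore copies files out read-only; it never writes to the source device.
echo "btrfs restore -vi $DEV $DEST"
# If the current tree root is unreadable, list older roots and retry with -t:
echo "btrfs-find-root $DEV"
echo "btrfs restore -t <bytenr> -vi $DEV $DEST"
```

Because restore is read-only with respect to the source, it is safe to try before more invasive rescue steps.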

 

Link to comment
7 hours ago, limawaken said:

about those usb hard disk docks with the multiple bays... how do they actually work with unraid?

is unraid able to detect each disk individually? is unraid able to correctly detect the individual disk serial numbers?

 

just curious.

USB connections are often unreliable and if a disk disconnects it will have to be rebuilt. This may be a frequent occurrence.

 

Also, parity operations need to happen in parallel, so if disks are sharing a single connection performance will be severely impacted.

Link to comment

I did not save the diagnostics before rebooting.

I tried several things to get the data off the btrfs disk, but without success. At the moment I'm running "btrfs rescue chunk-recover -v".

 

Here is what I tried so far:

 

"btrfs restore -vi"

parent transid verify failed on 7053440794624 wanted 12999 found 13003
parent transid verify failed on 7053440794624 wanted 12999 found 13003
parent transid verify failed on 7053440794624 wanted 12999 found 13003
Ignoring transid failure
WARNING: could not setup extent tree, skipping it
Couldn't setup device tree
Could not open root, trying backup super

 

"btrfs rescue super-recover"

All supers are valid, no need to recover

 

"btrfs rescue zero-log"

parent transid verify failed on 7053440794624 wanted 12999 found 13003
parent transid verify failed on 7053440794624 wanted 12999 found 13003
parent transid verify failed on 7053440794624 wanted 12999 found 13003
Ignoring transid failure
WARNING: could not setup extent tree, skipping it
Couldn't setup device tree
ERROR: could not open ctree

 

"mount -t btrfs -o usebackuproot"

wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.

 

dmesg while mounting with "usebackuproot":

BTRFS warning (device sdb1): 'usebackuproot' is deprecated, use 'rescue=usebackuproot' instead
BTRFS info (device sdb1): trying to use backup root at mount time
BTRFS info (device sdb1): disk space caching is enabled
BTRFS info (device sdb1): has skinny extents
BTRFS info (device sdb1): flagging fs with big metadata feature
BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
BTRFS warning (device sdb1): couldn't read tree root
BTRFS error (device sdb1): open_ctree failed

 

 

Link to comment
6 minutes ago, jehan said:

No chance to restore from parity?

Parity isn't valid, so it can't help; also note that parity generally can't help with filesystem corruption anyway.

 

If btrfs restore is not working on the old disk (not the emulated disk), there's not a great chance of recovery, but the btrfs maintainers might still be able to help.
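As background on why the emulated disk showed garbage (standard single-parity behaviour, not specific to this thread): with one parity disk, parity is the bitwise XOR of all data disks, so a missing disk is reconstructed by XOR-ing parity with the remaining disks. That only yields the old contents while parity still matches the real on-disk data; formatting a newly added disk outside the parity scheme breaks exactly that. A byte-level sketch with made-up values:

```shell
# Single parity is bitwise XOR across the data disks: P = D1 ^ D2.
# A missing D1 is emulated as P ^ D2 -- valid only while P matches the disks.
d1=$(( 0xA5 ))                  # byte from the 8TB data disk (made-up value)
d2=$(( 0x3C ))                  # byte from the 4TB data disk (made-up value)
p=$(( d1 ^ d2 ))                # parity byte as it would be computed
emulated=$(( p ^ d2 ))          # reconstructing the missing d1 from parity
printf 'parity=0x%02X emulated=0x%02X\n' "$p" "$emulated"   # emulated == 0xA5
```

If d2 changes on disk (a format) without parity being updated, `p ^ d2` no longer equals d1, which is why the emulated disk no longer mounts.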

Link to comment
