Unable to mount disk3 in array after upgrade to 6.9.1 from 6.8.3



Originally the system was on 6.8.3 and running just fine. I haven't had any issues with it lately and had recently run a parity check; all the drives have been fine. All the drives in the array are btrfs. I upgraded to 6.9.1, and after rebooting, disk3 shows as "Unmountable: not mounted". Looking through the system logs, I only saw the errors below for md3 (disk3).

Attached the diagnostic file.

 

 

Do I need to get rid of the corrupt leaf block on the disk before it will mount?

Should I restore back to 6.8.3 and see if it mounts it properly?

Or

Do you think it might be best to turn the system off, remove the drive that couldn't be mounted, boot the system and let the parity disk step in for the missing disk, format the drive, and then re-add it to the array and have parity rebuild the data onto it?

 

 

 

Apr  7 15:19:46 TJs-Unraid emhttpd: shcmd (43): mkdir -p /mnt/disk3
Apr  7 15:19:47 TJs-Unraid emhttpd: shcmd (44): mount -t btrfs -o noatime,space_cache=v2 /dev/md3 /mnt/disk3
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS info (device md3): enabling free space tree
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS info (device md3): using free space tree
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS info (device md3): has skinny extents
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS critical (device md3): corrupt leaf: block=244514816 slot=137 extent bytenr=141763051520 len=45056 invalid generation, have 16777096 expect (0, 23174]
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS error (device md3): block=244514816 read time tree block corruption detected
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS critical (device md3): corrupt leaf: block=244514816 slot=137 extent bytenr=141763051520 len=45056 invalid generation, have 16777096 expect (0, 23174]
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS error (device md3): block=244514816 read time tree block corruption detected
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS error (device md3): failed to read block groups: -5
Apr  7 15:19:47 TJs-Unraid root: mount: /mnt/disk3: wrong fs type, bad option, bad superblock on /dev/md3, missing codepage or helper program, or other error.
Apr  7 15:19:47 TJs-Unraid emhttpd: shcmd (44): exit status: 32
Apr  7 15:19:47 TJs-Unraid emhttpd: /mnt/disk3 mount error: not mounted
Apr  7 15:19:47 TJs-Unraid emhttpd: shcmd (45): umount /mnt/disk3
Apr  7 15:19:47 TJs-Unraid kernel: BTRFS error (device md3): open_ctree failed
Apr  7 15:19:47 TJs-Unraid root: umount: /mnt/disk3: not mounted.
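For context, the failed mount can be probed non-destructively before attempting anything irreversible. A sketch, assuming the device and mountpoint from the log above (run from the Unraid console with the array stopped or in maintenance mode; `rescue=all` requires kernel 5.9+, which 6.9.x ships):

```shell
# Hypothetical read-only inspection of the failing disk. Device and
# mountpoint are taken from the log above; nothing here is a repair step.
inspect_disk3() {
  # Read-only metadata check -- never start with 'btrfs check --repair'
  btrfs check --readonly /dev/md3
  # Attempt a read-only mount with all rescue options enabled (kernel 5.9+),
  # which may allow copying data off even with a damaged extent tree
  mount -t btrfs -o ro,rescue=all /dev/md3 /mnt/disk3
}
```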

 


tjs-unraid-diagnostics-20210407-1721.zip


I haven't looked at diagnostics yet because I wanted to jump in and keep you from making any mistakes.

 

Typically rebuild won't fix unmountable, and there would be no point in formatting a disk you are going to rebuild since rebuild will overwrite the entire disk.

 

More importantly, be very careful with that idea of format. If you format a disk while it is in the parity array, parity will be updated to agree that the disk is now empty, and so nothing could be recovered from rebuild.

 

I don't have any experience with fixing btrfs filesystems; maybe someone else will step in. In the meantime, read this and wait for further advice:

 

 

 

47 minutes ago, trurl said:

I haven't looked at diagnostics yet because I wanted to jump in and keep you from making any mistakes.

 

Typically rebuild won't fix unmountable, and there would be no point in formatting a disk you are going to rebuild since rebuild will overwrite the entire disk.

 

More importantly, be very careful with that idea of format. If you format a disk while it is in the parity array, parity will be updated to agree that the disk is now empty, and so nothing could be recovered from rebuild.

 

I don't have any experience with fixing btrfs filesystem, maybe someone else will step in. In the meantime, read this and wait for further advice:

 

 

 

Thanks for your insight. I am reading through that article. I didn't want to try things and make it worse. *By "format" I meant take it to another system, format it there, then plug it in like a replacement disk. If a disk is unmountable, the data is still technically intact due to parity, right? Meaning if I simulated a failure, like unplugging the drive that couldn't be mounted, parity would already emulate what's on that drive until a replacement drive gets re-added.

3 hours ago, neighborhdtechgeek said:

*Format meaning take it to another system and format it then plug it in like a replacement disk.

I knew what you intended to do. There would be no point in formatting a replacement disk outside the array since the disk would be completely overwritten during rebuild.

 

Many people seem to have a very vague idea of what "format" means. Format means write an empty filesystem to this disk. That is what it has always meant on every operating system you have ever used.

 

The filesystem with its contents is part of the complete overwrite of the disk during rebuild. So no point in writing an empty filesystem before using a disk as a replacement since that empty filesystem will be completely overwritten with a filesystem that has contents. Except...

 

3 hours ago, neighborhdtechgeek said:

If a disk is unmountable the data is still intact technically due to parity, right?

If a data disk in the parity array is unmountable, that typically means its filesystem is corrupt. Since parity is in sync with all the data disks, rebuilding an unmountable disk usually results in an unmountable disk.
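That point can be seen in a toy model of single parity (a sketch, not Unraid's actual implementation): parity is XOR computed over raw disk blocks, so a rebuild reproduces whatever bytes were on the disk, corrupt filesystem and all.

```python
from functools import reduce

def parity_of(disks):
    """XOR each byte position across all disks (toy single-parity model)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

disk1 = bytes([0x01, 0x02, 0x03])
disk2 = bytes([0xFF, 0x00, 0xAA])   # pretend this holds a corrupt fs block
parity = parity_of([disk1, disk2])

# "Rebuilding" disk2 from parity plus all remaining disks returns its raw
# bytes exactly -- including the corruption. Parity knows nothing of files.
rebuilt = parity_of([disk1, parity])
assert rebuilt == disk2
```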

 

 


Instead of rebuilding, repairing the filesystem is usually the fix for unmountable. And if these had been XFS disks I wouldn't have hesitated to guide you through that process.

 

A disk can be disabled (or missing), in which case it is emulated from parity by reading parity PLUS ALL OTHER disks. Unraid disables a disk (kicks it out of the array) when a write to it fails. An emulated disk can be read and can even be written by updating parity as if the data had been written to the missing disk.

 

A disk can be unmountable. For a data disk in the parity array, that means it has a corrupt filesystem. Disks that don't have filesystems at all are also unmountable: a disk that hasn't been formatted is unmountable, and parity is unmountable because it doesn't have a filesystem.

 

Disabled and unmountable are independent conditions. A disk can be neither, either, or both. A disabled disk is emulated from parity as mentioned, but if it was unmountable, the emulated disk is also unmountable.

18 hours ago, JorgeB said:

Yes, newer kernel can detect previously undetected corruption.

I believe I found the corrupted file that was on the disk. I'm going to remove it, since the corrupted file thankfully wasn't used for anything, and scrub the disk again to see if everything looks clear. If it does, I'll try upgrading again to 6.9.1, or better yet 6.9.2. Thanks @JorgeB & @trurl for your assistance!

 

Apr 7 23:02:55 TJs-Unraid kernel: BTRFS warning (device md3): checksum error at logical 84725248000 on dev /dev/md3, physical 85807378432, root 5, inode 56689, offset 6307840, length 4096, links 1 (path: TJs-Unraid/......../..... Path to file
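The cleanup described above might look like the following sketch (the mountpoint is assumed, and the file path is a placeholder for the elided path reported in the checksum-error line):

```shell
# Hypothetical cleanup: delete the corrupt file, then re-scrub the disk.
clean_and_rescrub() {
  bad_file="/mnt/disk3/<path-from-scrub-log>"   # placeholder, not a real path
  rm "$bad_file"
  # -B keeps the scrub in the foreground so its exit status reflects errors
  btrfs scrub start -B /mnt/disk3
  btrfs scrub status /mnt/disk3
}
```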

