Harlequin42 Posted March 4, 2021 (edited)

Hey, yesterday I noticed some issues with my Docker containers: they suddenly stopped because of a "server error". I rebooted Unraid and it looked fine. Later I couldn't connect to Docker again and realized my whole server was unreachable, even via ping; a connected monitor showed nothing either. So I had to power it off and turn it on again. After that Unraid started, but my cache pool (2x 1 TB SSD as RAID 1, btrfs, encrypted) showed up as "Unmountable: no file system".

I searched through the forum and found some topics that looked helpful. I started with this one and tried only the first and second step. Then I stopped the array, removed one cache drive from the pool, started the array again, and tried to mount the removed disk via Unassigned Devices using the tips from the link above (steps 1 and 2 only). That also didn't help. After another restart I am, obviously, not able to add the second cache drive back to the pool, and that drive no longer shows the encrypted logo or a filesystem in Unassigned Devices.

Now I am a bit nervous, because there are many files I need and I thought I would be safe with RAID 1. The drives are sdb and sdc. I really hope you can help me access the data.

diagnostics-20210304-0208.zip
JorgeB Posted March 4, 2021

If the first two options in the FAQ didn't help, there's not much more I can add: the filesystem is corrupt. You can try asking for help on IRC, or try btrfs fsck. Also, assigning just one device won't help; both drives carry the same filesystem, so both are corrupt.
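Before any destructive repair attempt, a read-only extraction with btrfs restore can pull files off a damaged filesystem without writing to it. A minimal sketch, assuming the decrypted device appears as /dev/mapper/sdb1 (as in this thread's log); the target path /mnt/disk1/recovered is a hypothetical example and should be a healthy disk with enough free space:

```shell
# Hypothetical recovery sketch -- read-only, does not modify the damaged filesystem.
# Device and target names are examples; adjust to your own system before running.
recover_cache() {
    src=/dev/mapper/sdb1        # decrypted cache member (both members hold the same fs)
    dest=/mnt/disk1/recovered   # hypothetical target on a healthy array disk

    mkdir -p "$dest"
    # btrfs restore copies whatever files it can still reach, without mounting,
    # so it is safe to try before any --repair attempt.
    btrfs restore -v "$src" "$dest"
}
```

Because restore never writes to the source device, it can be retried freely and leaves all later repair options open.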
Energen Posted March 4, 2021

The root of your problem is here:

Quote
Mar 4 01:47:07 SN-HOME01 kernel: BTRFS critical (device dm-5): corrupt leaf: root=5 block=373812412416 slot=170 ino=1297036692682703108, invalid previous key objectid, have 260 expect 1297036692682703108
Mar 4 01:47:07 SN-HOME01 kernel: BTRFS error (device dm-5): block=373812412416 read time tree block corruption detected
Mar 4 01:47:07 SN-HOME01 root: mount: /mnt/cache: can't read superblock on /dev/mapper/sdb1.

So, as JorgeB said, your filesystem is corrupt. You can try to repair the btrfs superblock, though I'm not sure of the exact steps; if you google "can't read superblock on /dev/mapper" or "btrfs repair superblock" or similar terms, you may find some help.
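Those searches typically lead to btrfs rescue super-recover, which replaces a damaged primary superblock from one of the backup copies btrfs keeps at fixed offsets. A sketch of the usual sequence (inspect read-only first, recover only if a backup copy looks good); the device name is taken from this thread's log, everything else is an assumption:

```shell
# Hypothetical superblock workflow -- inspection is read-only, recovery writes.
inspect_superblocks() {
    dev=/dev/mapper/sdb1   # decrypted device, as seen in the thread's log

    # Dump all superblock copies (-a), not just the primary, so they can
    # be compared before deciding whether recovery is worth attempting.
    btrfs inspect-internal dump-super -a "$dev"
}

recover_superblock() {
    dev=/dev/mapper/sdb1
    # Overwrites bad superblock copies from a good one -- run only after
    # inspection, ideally with an image/backup of the device taken first.
    btrfs rescue super-recover -v "$dev"
}
```

Note that in this thread the later btrfs check output points at tree-block corruption rather than a bad superblock, so super-recover may simply report that all copies are fine.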
Harlequin42 (author) Posted March 4, 2021

Sounds bad. I thought I would be safe with RAID 1, but obviously not in this case... Because I moved recently, my backup is not up to date. I could restore 90-95% of the data from separate sources, but that would take much more time than fixing the filesystem, and I don't know how to do that. The other benefit of fixing the filesystem is that the recovered data would be exactly where it was, so I wouldn't have to guess what is missing. My plan now is to add a completely new cache drive and restore the data from the different sources; meanwhile I will try to fix the filesystem on the old cache drives with the tips you mentioned.

Is it possible to fix a broken filesystem on an encrypted drive? And what are your suggestions for the future? I read that btrfs is not the best for this, and I also haven't found the optimal backup strategy yet; maybe there are some suggestions for that too.
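On the encryption question: in principle yes, because LUKS sits below the filesystem. Once the container is opened, the btrfs tools operate on the decrypted mapper device exactly as they would on a plain disk; running them against the raw partition will fail, since that device only contains LUKS headers and ciphertext. A sketch, assuming the raw partition is /dev/sdb1 as in this thread; the mapper name here is a made-up example (Unraid's own mapping showed up as /dev/mapper/sdb1 in the log):

```shell
# Hypothetical sketch: open the LUKS container so btrfs tools can see the fs.
# The mapper name "cache_crypt" is an arbitrary example.
open_encrypted_member() {
    raw=/dev/sdb1        # raw partition -- shows no filesystem because it is LUKS
    name=cache_crypt     # hypothetical name for the decrypted mapping

    # Prompts for the passphrase; on success the filesystem becomes
    # visible at /dev/mapper/$name and can be checked or mounted there.
    cryptsetup luksOpen "$raw" "$name"
}
```

All subsequent btrfs check / restore / mount commands should then target /dev/mapper/&lt;name&gt;, never the raw /dev/sdX1 device.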
Harlequin42 (author) Posted March 6, 2021

I tried the repair but it didn't help. First I tried "btrfs check --repair /dev/sdb1", but got this message:

Quote
enabling repair mode
WARNING: Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some software or hardware bugs can fatally damage a volume.
The operation will start in 10 seconds. Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
No valid Btrfs found on /dev/sdc1
ERROR: cannot open file system

Then I tried "btrfs check --repair /dev/mapper/sdb1" and it started, but it doesn't seem to be working well:

Quote
enabling repair mode
WARNING: Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some software or hardware bugs can fatally damage a volume.
The operation will start in 10 seconds. Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
warning, device 2 is missing
Checking filesystem on /dev/mapper/sdb1
UUID: c2dbd0b4-de7c-4663-a6c2-b97dbd2ea2b3
[1/7] checking root items
Fixed 0 roots.
[2/7] checking extents
bad key ordering 170 171
bad key ordering 170 171
bad key ordering 170 171
Unable to find block group for 0
Unable to find block group for 0
Unable to find block group for 0
Unable to find block group for 0
Unable to find block group for 0
Unable to find block group for 0
bad block 373812412416
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space tree
cache and super generation don't match, space cache will be invalidated
[4/7] checking fs roots
bad key ordering 170 171
ERROR: commit_root already set when starting transaction
ERROR: errors found in fs roots
found 849140412416 bytes used, error(s) found
total csum bytes: 0
total tree bytes: 66895872
total fs tree bytes: 9961472
total extent tree bytes: 56295424
btree space waste bytes: 18634369
file data blocks allocated: 235431047168
 referenced 36227096576
ERROR: attempt to start transaction over already running one
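With --repair stalling on tree corruption like this, a gentler option worth trying before giving up on the data is a read-only mount using one of the backup tree roots, which does not modify the filesystem at all. A sketch, assuming the LUKS mapping is already open at /dev/mapper/sdb1 as in the output above; the mount point is a hypothetical example:

```shell
# Hypothetical last-resort mount attempt -- read-only, non-destructive.
try_recovery_mount() {
    dev=/dev/mapper/sdb1   # decrypted device from the check output above
    mnt=/mnt/recovery      # hypothetical temporary mount point
    mkdir -p "$mnt"

    # ro          : never write to the damaged filesystem
    # degraded    : allow mounting with the second pool member missing
    # usebackuproot: fall back to an older copy of the tree root if the
    #                current one is corrupt
    mount -o ro,degraded,usebackuproot "$dev" "$mnt"
}
```

If this mount succeeds, the files can be copied off normally; if it fails too, btrfs restore against the unmounted device remains the fallback.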