wmcneil Posted December 8, 2022

The system log reports the following:

Dec 1 16:51:36 lily root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.
Dec 1 16:51:36 lily emhttpd: /mnt/cache mount error: No file system
Dec 1 17:01:00 lily root: Fix Common Problems: Error: cache (TEAM_T253X1120G_AA000000000000004401) has file system errors ()

This disk is part of a two-disk btrfs cache pool. Unraid reports "Unmountable disk present". Since scrub requires the disk to be mounted, I don't know how to proceed. I have attached the output of Tools/Diagnostics. Thanks in advance for any help.

lily-diagnostics-20221208-0900.zip
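For anyone landing here with the same error: before attempting any repair, you can inspect the filesystem without writing to it. This is a hedged sketch, not part of the original exchange; the device node /dev/sde1 is taken from the log above and may differ on your system.

```shell
# Read-only inspection of the unmountable pool member; btrfs check
# does not modify the device unless --repair is given.
btrfs check --readonly /dev/sde1

# Show the filesystem as btrfs sees it (pool membership, UUIDs),
# useful for confirming which devices belong to the cache pool.
btrfs filesystem show
```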
JorgeB Posted December 8, 2022

If the problem is only the log tree, zeroing it might fix it. If you don't have backups, I recommend first trying these recovery options to see if you can make one, then type:

btrfs-rescue zero-log /dev/sdd1

Then stop and re-start the array and post new diags.
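The "recovery options" linked above are not quoted in this thread, but the usual first step they describe is a read-only mount using btrfs rescue options so data can be copied off before any repair. A minimal sketch, assuming a temporary mount point /x and a destination on the array (both illustrative names):

```shell
# Try a read-only mount with a backup tree root; this never writes
# to the damaged device. Kernel 5.11+ uses rescue=usebackuproot,
# older kernels use -o ro,usebackuproot instead.
mkdir -p /x
mount -o ro,rescue=usebackuproot /dev/sdd1 /x

# If the mount succeeds, copy everything somewhere safe before
# attempting repairs (destination path is an assumption):
rsync -a /x/ /mnt/disk1/cache_backup/

umount /x
```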
wmcneil Posted December 8, 2022 (Author)

I tried the recovery options in an attempt to create a backup, but was not successful. I then tried the command you posted, with the following result:

root@lily:~# btrfs-rescue zero-log /dev/sdd1
-bash: btrfs-rescue: command not found

I assumed the hyphen between "btrfs" and "rescue" was a typo, and ran this instead:

root@lily:~# btrfs rescue zero-log /dev/sdd1
parent transid verify failed on 169771008 wanted 13150122 found 13150103
Couldn't read tree root
ERROR: could not open ctree

I then stopped the array, restarted it, and have attached the diagnostics.

lily-diagnostics-20221208-1227.zip
JorgeB Posted December 8, 2022

Sorry for the typo. If that doesn't work, the filesystem is likely beyond repair: that "parent transid verify failed" error is fatal. It means some writes were lost, which can happen when a storage device lies about flushing its write cache, most often caused by bad drive (or controller) firmware. The second option in the recovery options, btrfs restore, is usually the best for this, but if you've already tried that and it didn't work, there's not much more you can do other than reformatting the pool and restoring from backups.
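For reference, btrfs restore reads files off a filesystem that cannot be mounted, without writing to the damaged device. A minimal sketch; the destination directory is an assumption for illustration and must live on a different, healthy filesystem with enough free space:

```shell
# Copy whatever is recoverable to a safe location; -v prints each
# file as it is restored. The source device is never written to.
mkdir -p /mnt/disk1/cache_restore
btrfs restore -v /dev/sdd1 /mnt/disk1/cache_restore/

# If restore aborts on errors, -i ignores them and keeps going;
# -D performs a dry run that only lists what would be recovered.
btrfs restore -vi /dev/sdd1 /mnt/disk1/cache_restore/
```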
wmcneil Posted December 8, 2022 (Author, marked as Solution)

I tried having Unraid format the unmountable drives, and that was not successful; the system log was now complaining about both disks in the pool. I powered down the machine and found that the SATA connector on the drive of interest was a little less than fully plugged in. After reseating it and powering on, the format was successful. At this point it looks like the SATA cable connection was the problem. @JorgeB, thanks for your help, appreciate it!
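A note for future readers: after reseating a suspect SATA cable, it is worth confirming the link is actually healthy before trusting the drive again. A hedged sketch; the device name /dev/sde is an assumption, and smartctl comes from the smartmontools package:

```shell
# Kernel messages reveal link resets and bus errors on a bad cable.
dmesg | grep -iE 'ata[0-9]|sde'

# The UDMA_CRC_Error_Count SMART attribute increments on cable/
# connector problems; a rising raw value points at the link, not
# the drive media itself.
smartctl -a /dev/sde | grep -i crc
```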