gm6147 Posted September 24, 2023

6 drives total: two drives in one pool (xfs), two drives in another pool (btrfs), a cache disk, and a parity disk. Everything has been running stable for months. The only recent major event was updating Unraid. The issue may have existed before that, but I only noticed it after the update, while looking around for new features and such. Unsure how to proceed; diagnostic info attached to start. I really need to recover some of the files off either one of the drives, so reformatting or recreating the filesystem may not be an option, since they were both part of a pool and neither will mount.

fbserver-diagnostics-20230923-2242.zip
JorgeB Posted September 25, 2023

Are you sure disks 1 and 3 were btrfs? Post the output of:

btrfs fi show
blkid
gm6147 Posted September 25, 2023 (Author)

11 hours ago, JorgeB said:
Are you sure disks 1 and 3 were btrfs? Post the output of btrfs fi show and blkid

Pretty sure, yeah. The Main tab in the GUI is still reporting btrfs, along with the unmountable error. I'm no expert in Linux or the CLI, but the output and a Main tab screenshot are attached for your reference.

command output.txt
JorgeB Posted September 26, 2023

The partitions do report btrfs, but no valid btrfs filesystem exists. Post the output of:

btrfs check /dev/md1p1
gm6147 Posted September 26, 2023 (Author)

7 hours ago, JorgeB said:
The partitions do report btrfs, but no valid btrfs filesystem exists, post the output of: btrfs check /dev/md1p1

****@FBServer:~# btrfs check /dev/md1p1
Opening filesystem to check...
Checking filesystem on /dev/md1p1
UUID: 2bcde74f-b59d-4324-a47f-b08c7d64403f
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space tree
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 739187720192 bytes used, no error found
total csum bytes: 719988480
total tree bytes: 845774848
total fs tree bytes: 9355264
total extent tree bytes: 8781824
btree space waste bytes: 100989142
file data blocks allocated: 750168219648
 referenced 738341859328
JorgeB Posted September 26, 2023

That's very strange. Stop the array and see if the disk mounts manually (read-only):

mkdir /x
mount -v -t btrfs -o ro /dev/sdc1 /x

If it mounts, see if you can browse the data under /x. To unmount:

umount /x
gm6147 Posted September 26, 2023 (Author)

3 hours ago, JorgeB said:
That's very strange, stop the array and see if the disks mounts manual (read-only) mkdir /x mount -v -t btrfs -o ro /dev/sdc1 /x

To reiterate my novice knowledge here: thank you for the detailed instructions! And good to know I'm not crazy. I thought those two btrfs drives had all my Plex media on them, and that's confirmed now that I've made my best attempt at browsing the directories on the disk. I really just need that Photos folder recovered somehow; as you can see, it has been my pseudo-backup solution for photos for the last 8 years or so. Not ideal, I know...

Read-Only Manual Mount Browse.txt
JorgeB Posted September 27, 2023

Doh! Missed this before:

Label: none  uuid: 2bcde74f-b59d-4324-a47f-b08c7d64403f
    Total devices 2 FS bytes used 688.42GiB
    devid 1 size 1.82TiB used 345.03GiB path /dev/md3p1
    devid 2 size 1.82TiB used 345.03GiB path /dev/md1p1

Both disks 1 and 3 are members of the same btrfs pool; that's why they don't mount on the array (they would mount with a previous Unraid release). You basically have two options:

1) recreate the array without those two disks and mount them as a pool instead; you can then leave them as a pool or move the data, or
2) mount the disks manually with the instructions above, move the data elsewhere, then reformat them as individual array disks.
gm6147 Posted September 27, 2023 (Author)

Woah woah woah, wait... a couple of things. Are you saying that at some point Unraid stopped supporting btrfs as a filesystem?? I feel like there should have been much stronger announcements around this, possibly even going so far as emailing those with licenses and contact info in the system, just as a heads-up. Granted, I don't go out of my way to find and read changelogs and usually just hit the upgrade button, but still, that's huge!

1) Is there no way to roll back the OS to a previous version where these drives would mount again?

2) If not, I believe I would prefer the second option:

4 hours ago, JorgeB said:
mount the disks manually with the instructions above and move the data elsewhere, then reformat them as individual array disks.

but I would need some specific instruction on how to do this. I'm fine with being pointed to a previous thread, so long as it is highly applicable to my situation and detailed enough.
itimpi Posted September 27, 2023

17 minutes ago, gm6147 said:
Are you saying that at some point UnRAID no longer supports btrfs as a file system?? I feel like there should have been much stronger announcements around this.

Not sure how you inferred this. There have been no indications that btrfs support will be removed. The only filesystem slated for removal (and it has been announced for some time) is reiserfs, which will be dropped from Unraid when support for it is removed from the Linux kernel.
gm6147 Posted September 27, 2023 (Author)

Sorry, I inferred it from this statement:

5 hours ago, JorgeB said:
Both disks 1 and 3 are members of the same btrfs pool, that's why they don't mount on the array (they would mount with a previous Unraid release)

So is it the fact that they are part of a pool that is the problem? My single cache disk is btrfs and it still mounts just fine after the update.

Since my previous response, I found some other threads that explained how to restore the previous version, since I did upgrade via the GUI. Did that, and both btrfs drives re-mounted and are working perfectly. I would like to continue upgrading when possible, so what are my best options for pools and btrfs?
JorgeB Posted September 27, 2023 (Solution)

22 minutes ago, gm6147 said:
So is it the fact that they are part of a pool that is a problem?

Correct. That could only happen if the disks were previously assigned to a pool and then later assigned to the array without wiping them first; it can be considered a bug, and the current release doesn't allow it. You probably didn't notice, but disks 1 and 3 would show the exact same stats in the GUI, since they would both link to the same pool with the same data.