adgilcan Posted May 6, 2023

So, don't shout at me. You can't be harder on me than I have been! I have (had) a pool of 4 cache drives on BTRFS. For some unfathomable reason I reformatted the first one (because it came up on unassigned drives with a different label). Now the entire cache pool is coming up as "Unmountable: no file system".

I have removed the reformatted disk and restarted the array, hoping the remaining three will rebalance. The rebalance button is greyed out and says it is only available when the array has started, which it has. In addition, every so often the array Stop button goes greyed out and the message changes to "unavailable: --btrfs operation is running" (see screenshots). Is anything actually happening?

My question, basically, is "Am I fu*ked?" Is there any prospect of recovering the data on this cache pool? I do know it didn't contain any actual server "media data", but of course it does contain all the appdata, VMs etc., so all Dockers and VMs are disabled.

Can anyone advise my best next steps before I go nuclear, reformat the entire cache pool and try to rebuild it from scratch?

Many thanks
D
JonathanM Posted May 6, 2023

25 minutes ago, adgilcan said:
"best next steps"

Attach diagnostics from the current session to your NEXT post in this thread.
adgilcan Posted May 6, 2023 (Author)

Here they are:

tower-diagnostics-20230506-2040.zip
syslog.txt
JorgeB Posted May 7, 2023

The pool cannot mount without the missing device because it wasn't redundant, at least not for all chunks:

May 6 17:55:04 Tower kernel: BTRFS warning (device sdp1): devid 1 uuid d6e771ff-5581-44c5-9aa2-c57bc3405efc is missing
May 6 17:55:04 Tower kernel: BTRFS warning (device sdp1): chunk 3949479329792 missing 1 devices, max tolerance is 0 for writable mount

Was the formatted device reformatted as btrfs or with a different filesystem?
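For anyone following along: the "max tolerance is 0" warning above means at least some chunks were allocated with the single profile, which tolerates zero missing devices. On a pool that still mounts, you can check how each chunk type is allocated before anything goes wrong. A sketch (mount point and device names are illustrative, not from the diagnostics):

```shell
# On a mounted pool: show the allocation profile per chunk type.
# "single" chunks tolerate no missing devices; "RAID1" chunks
# tolerate one missing device.
btrfs filesystem df /mnt/cache

# Works whether or not the pool is mounted: list the member
# devices btrfs knows about and spot which one is missing.
btrfs filesystem show /dev/sdX1
```

A mixed pool (e.g. RAID1 metadata but single data, which is what happens if a balance to RAID1 never finished) still fails a writable mount when a device disappears, which is what happened here.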
adgilcan Posted May 7, 2023 (Author)

Hi, it was reformatted as XFS. I can put it back.
adgilcan Posted May 7, 2023 (Author)

And I get the message as shown:
adgilcan Posted May 7, 2023 (Author)

Shall I start the array?
JorgeB Posted May 7, 2023

9 minutes ago, adgilcan said:
"Shall I start the array?"

No, don't start it like that. Post the output of:

btrfs-select-super -s 1 /dev/sds1
adgilcan Posted May 7, 2023 (Author)

root@Tower:~# btrfs-select-super -s 1 /dev/sds1
No valid Btrfs found on /dev/sds1
ERROR: open ctree failed
JorgeB Posted May 7, 2023

That backup superblock is no longer available; let's try the 2nd and last backup:

btrfs-select-super -s 2 /dev/sds1
adgilcan Posted May 7, 2023 (Author)

root@Tower:~# btrfs-select-super -s 2 /dev/sds1
No valid Btrfs found on /dev/sds1
ERROR: open ctree failed
😪
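Background for anyone hitting this thread later: btrfs keeps up to three superblock copies per device, at fixed offsets (primary at 64 KiB, mirror 1 at 64 MiB, mirror 2 at 256 GiB, the last only on large enough devices). `btrfs-select-super -s N` overwrites the primary with copy N, so it is worth inspecting each copy read-only first. A sketch, with an illustrative device name, substitute your own:

```shell
# Dump each superblock copy without modifying anything.
# -s selects which copy to print: 0 (primary), 1, or 2.
btrfs inspect-internal dump-super -s 0 /dev/sdX1
btrfs inspect-internal dump-super -s 1 /dev/sdX1
btrfs inspect-internal dump-super -s 2 /dev/sdX1

# Only if one of the mirrors is intact: copy it over the primary.
btrfs-select-super -s 1 /dev/sdX1
```

In this case the XFS reformat wrote its own metadata over the start of the disk and mkfs had invalidated all the btrfs copies, which is why both attempts reported "No valid Btrfs found".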
JorgeB Posted May 7, 2023

I'm afraid it's gone. You can try these recovery options using the remaining drives, but since the pool was not redundant, at least not 100%, some data will likely be missing. First try option #1, a read-only mount with the degraded parameter.
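The read-only degraded mount that option #1 describes can be sketched as follows (mount points and destination paths are illustrative; run against your own remaining pool member):

```shell
# Mount the surviving members read-only, tolerating the missing
# device. Read-only matters: a degraded writable mount can make
# later recovery harder.
mkdir -p /mnt/recovery
mount -o ro,degraded /dev/sdp1 /mnt/recovery

# If the mount succeeds, copy off whatever is readable, e.g.:
# rsync -av /mnt/recovery/appdata/ /mnt/disk1/rescue/appdata/

# If the degraded mount fails, btrfs restore can scrape files
# from the raw device without mounting at all:
btrfs restore -v /dev/sdp1 /mnt/disk1/rescue/
```

Files whose extents lived in chunks on the wiped device will come back truncated or not at all, which matches JorgeB's warning that some data will likely be missing.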
adgilcan Posted May 7, 2023 (Author)

Thank you so much, Jorge. I will try as you suggest. This morning (UK) I discovered a cache backup I made in 2020, and nothing of huge importance has changed since then, so if all else fails the appdata directory from that might be a good starting point?

Once again, I really appreciate the help you have offered.

Duncan