HomerJ Posted May 24, 2023

Hi all. I'm on version 6.12.0-rc6 and plugins etc. are up to date. I've just noticed that my 3 cache drives, which are formatted btrfs, are unmountable and unRAID is telling me to format them, but I have data on them. Is this because, with all the recent updates, I need to select something so the btrfs drives are recognised? I don't really know what to do next, because everything was working fine a couple of days ago and it only changed after the updates. Any help will be much appreciated.
HomerJ (Author) Posted May 24, 2023

Diagnostic file is attached. tower-diagnostics-20230525-0932.zip
JorgeB Posted May 25, 2023 (Solution)

Try this; if the log tree is the only problem it may help:

btrfs rescue zero-log /dev/nvme1n1p1
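[Editor's note: for readers landing here from search, a cautious sequence around the command above might look like the following sketch. The device path /dev/nvme1n1p1 comes from this thread's diagnostics and will differ on other systems; run these with the pool stopped so the device is unmounted. Note that zero-log discards the write-ahead log, i.e. the last few seconds of unsynced writes, so a read-only inspection first is prudent.]

```shell
# Read-only check: reports problems without modifying the filesystem.
# (btrfs check is read-only by default; --readonly makes that explicit.)
btrfs check --readonly /dev/nvme1n1p1

# Look for log-tree errors in the kernel log, e.g. "failed to read log tree".
dmesg | grep -i btrfs

# If the log tree is the only problem, clearing it may make the fs mountable.
# WARNING: this discards the most recent unsynced writes.
btrfs rescue zero-log /dev/nvme1n1p1
```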
HomerJ (Author) Posted May 26, 2023

17 hours ago, JorgeB said: "Try this; if the log tree is the only problem it may help: btrfs rescue zero-log /dev/nvme1n1p1"

This fixed it! Thanks for your help! If you get a spare moment, would you be able to explain how you found the problem? Should I change the btrfs file system to something else?
JorgeB Posted May 26, 2023

The log shows that the log tree failed to be read.

6 hours ago, HomerJ said: "Should I change the btrfs file system to something else?"

Usually something else causes this issue, so it's not directly btrfs related, but you can try a different fs if you prefer. Also note that this issue tends to re-occur; if it does, I would recommend re-formatting the fs.
HomerJ (Author) Posted May 27, 2023

16 hours ago, JorgeB said: "The log shows that the log tree failed to be read. Usually something else causes this issue, so it's not directly btrfs related, but you can try a different fs if you prefer. Also note that this issue tends to re-occur; if it does, I would recommend re-formatting the fs."

Thank you so much for your help! If I change file systems, which do you think I should choose? NB it's cache.
JorgeB Posted May 27, 2023

I assume the current pool is raid1 with the 3 devices?
HomerJ (Author) Posted May 28, 2023

On 5/27/2023 at 6:02 PM, JorgeB said: "I assume the current pool is raid1 with the 3 devices?"

Yes
JorgeB Posted May 28, 2023

For multi-device pools you can use btrfs or zfs, so you could change to zfs. But note that there's no 3-device raid1 option with zfs: you can have 2 or 4 devices in raid1, while with 3 devices you can only make a 3-way mirror.
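[Editor's note: in zfs terms, the 3-way mirror mentioned above is a single mirror vdev containing all three devices. As a minimal command-line sketch (device names are placeholders, and on unRAID the pool would normally be created through the GUI rather than by hand):]

```shell
# Hypothetical device names. A 3-way mirror keeps a full copy on each device,
# so usable capacity equals one device and the pool survives two failures.
zpool create cache mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# Verify the layout: all three devices should appear under one "mirror" vdev.
zpool status cache
```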
HomerJ (Author) Posted December 8, 2023

On 5/25/2023 at 6:02 PM, JorgeB said: "Try this; if the log tree is the only problem it may help: btrfs rescue zero-log /dev/nvme1n1p1"

G'day again! I've had this issue occur a few times since it first happened, and this solution has fixed it each time. But what would actually cause this to happen?
JorgeB Posted December 8, 2023

If it keeps happening I would recommend backing up and reformatting the pool.