xyseer Posted August 30, 2023
I have only one parity drive and 2 data disks in my system. Recently one of my disks (disk1) had read errors and the system disabled it. I had no choice but to replace the bad disk with a new one. As many topics here have said, you just replace the disk in the array slot and the system rebuilds the data automatically, with no other procedure needed. But tragedy struck here: when the rebuild finished, the disk showed the notice "Unmountable: no or wrong filesystem." The old disk used btrfs, so I ran "btrfs check" to see whether there was a filesystem error. Unfortunately, the tool reported "cannot read chunk-tree". I tried lots of things, including btrfs rescue, but nothing helped; it is still unmountable. Is there any way to bring my data back? There are no useful logs since I've rebooted the system many times.
JorgeB Posted August 30, 2023
Please post the diagnostics after array start.
xyseer Posted August 31, 2023
Checking the btrfs status of disk1 reported the following:

Couldn't read tree root
Could not open root, trying backup super
parent transid verify failed on 240418816 wanted 33573 found 33543
parent transid verify failed on 240418816 wanted 33573 found 33543
parent transid verify failed on 240418816 wanted 33573 found 33543
Ignoring transid failure
ERROR: root [1 0] level 2 does not match 0
xyseer Posted August 31, 2023
Here are the diagnostics from my Unraid: xynas-diagnostics-20230831-1023.zip
JorgeB Posted August 31, 2023
6 hours ago, xyseer said: parent transid verify failed on 240418816 wanted 33573 found 33543
This error is fatal; it means some writes were lost. It can happen if a storage device lies about flushing its write cache, which is usually a drive (or controller) firmware problem. You can try btrfs restore (option #2 here), but given the state of the filesystem I doubt it will work. If it does, the device will then need to be formatted and the data restored.
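For reference, the btrfs restore attempt suggested above is a read-only extraction of whatever files are still reachable, copied to a separate healthy destination. This is only a sketch: the device name /dev/md1 (how Unraid typically exposes disk1) and the destination /mnt/recovery are placeholders for the poster's actual paths.

```shell
# Read-only recovery: btrfs restore never writes to the damaged device,
# so it is safe to try before any invasive repair.
# /dev/md1 and /mnt/recovery are illustrative names.
mkdir -p /mnt/recovery

# -v: list files as they are restored
# -i: ignore errors and keep going instead of aborting
btrfs restore -v -i /dev/md1 /mnt/recovery
```

Because it is non-destructive, this is normally the first thing to try on a filesystem that no longer mounts.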
xyseer Posted September 1, 2023
So you mean that if 'btrfs restore' cannot bring the files back, then all my data is lost? (I tried restore yesterday, but it could not recover anything. Even using btrfs-find-root together with btrfs restore to manually recover some data failed 😭)
Solution JorgeB Posted September 1, 2023
7 hours ago, xyseer said: So, you mean that if 'btrfs restore' cannot bring the files back, then all my data will be lost?
Most likely, but you might want to check the old disk; if it's not completely dead, most data should be recoverable. ddrescue may help.
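A typical GNU ddrescue workflow for imaging a failing disk is a fast first pass that skips the bad areas, followed by retries of just the bad sectors, with a mapfile so runs can be interrupted and resumed. Device names below are placeholders; the destination can be a spare disk or an image file.

```shell
# First pass: copy the good areas quickly, skipping the scraping phase.
# -f is required when writing directly to a block device.
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Second pass: go back and retry the bad sectors up to 3 times.
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map
```

Always rescue to a copy and run filesystem repair tools against the copy, never against the failing original.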
xyseer Posted September 2, 2023 Author Share Posted September 2, 2023 OK, Thanks for your answer and solutions. I'll try ddrescue to bring data. Quote Link to comment
xyseer Posted September 2, 2023 Author Share Posted September 2, 2023 However, this parity is so sucks and I'm not considering use this raw backup method again. It's so unreliable! Unraid is so that worse than Raid! I trusted in Unraid so much and it treated my data like that inversely. Quote Link to comment
JorgeB Posted September 2, 2023
What happened is not normal; it suggests parity was not in sync, or that the controller/disk firmware lost some writes when the disk got disabled. It's probably best to use xfs with your hardware; it's usually more robust.