Civrock Posted May 26, 2020
Something caused my server to restart this morning, and I woke up to find two drives with the "Unmountable: No file system" status. Here is where I messed up big time: I stopped the array to restart in maintenance mode, planning to run a SMART check of the drives and go from there, but I accidentally started in regular mode, which initiated a parity rebuild... I have found a few threads with this same issue, and they all point towards the filesystem needing to be repaired. Assuming I have not already jacked everything up, will letting the parity rebuild finish overwrite data on those drives or corrupt the parity data? Should I stop it? It also has an unusually long estimated time of approximately 6 days vs. the normal 1 day. I attached logs and a screenshot. Both drives are on the same power/SATA hot-swap bay, so I think the original issue may be related to that; it seems weird for two drives to fail at the same time without something common between them. mediavault-diagnostics-20200526-1054.zip
JorgeB Posted May 26, 2020
11 minutes ago, Civrock said: Should I stop it?
The damage (if any) is already done, so you'll need to run xfs_repair once it finishes. You can also cancel now, run xfs_repair, then start the rebuild again from the beginning; that way you'll know whether it's worth rebuilding and can also access the data during it.
12 minutes ago, Civrock said: It also seems to have an unusually long estimated time at approximately 6 days vs. the normal 1 day.
There's something else using the array: in the screenshot you can see extra reads on disk6, and in the diags there are extra reads on disk3.
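For reference, the usual way to do this is to start the array in maintenance mode and run xfs_repair against the md device of each affected disk from a console or SSH session. A minimal sketch, assuming the unmountable drives are disk 1 and disk 2 (substitute your own disk numbers):

    # Start the array in maintenance mode first (Main -> Start in the webGUI),
    # then repair each affected disk:
    xfs_repair -v /dev/md1    # disk1
    xfs_repair -v /dev/md2    # disk2

Running against /dev/mdX (rather than the raw /dev/sdX device) keeps parity in sync with whatever changes the repair makes.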
Vr2Io Posted May 26, 2020 (edited)
19 minutes ago, Civrock said: Should I stop it?
Yes. I would unassign all the other disks (data and parity) and retain only those two, so you can repair the filesystems. If the repair doesn't succeed, then restore all the assignments and rebuild.
19 minutes ago, Civrock said: estimated time at approximately 6 days
There is extra reading on disk6; as you can see, the read rate was 91.1 MB/s.
Edited May 26, 2020 by Benson
Civrock Posted May 26, 2020 (edited)
Thanks for your time, guys. I will go ahead and stop it now and attempt to repair the filesystems. I'll update if I run into more issues or make it through.
Edited May 26, 2020 by Civrock
Civrock Posted May 26, 2020
Okay, so I ran xfs_repair -v on both disks, and they both had issues with the primary CRC; the tool said there were pending changes in the log and that I should mount the filesystems before re-running the repair. When I mounted them, both showed up normally and the rebuild started. The wording of the repair report made it seem (at least to me, as a layperson in this area) like I needed to mount and then immediately run the repair again. Since both disks show up normally, should I just let the rebuild run, or do I need to stop and run the repair again? I feel like the right answer here is to let the rebuild run now, but I just want to be sure. Thanks again for your time.
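For anyone hitting the same message, the sequence looked roughly like this (device names from memory, and the error text paraphrased): xfs_repair refuses to modify a filesystem whose journal still has unreplayed entries, and it tells you to mount once so the log gets replayed, then unmount and repair again.

    xfs_repair -v /dev/md1
    # -> ERROR: The filesystem has valuable metadata changes in a log which
    #    needs to be replayed. Mount the filesystem to replay the log...

    # Mounting replays the log (on Unraid, starting the array normally does this).
    # Then stop the array, restart in maintenance mode, and re-run:
    xfs_repair -v /dev/md1

    # Only if the log can never be replayed, -L zeroes it -- a last resort
    # that can discard recent metadata changes:
    # xfs_repair -L /dev/md1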
JorgeB Posted May 27, 2020
If they are mounting correctly they should be fine, but it's also OK to run xfs_repair again to make sure.
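If you want to check without risking any further changes, xfs_repair has a no-modify mode (disk numbers below are placeholders):

    # Array in maintenance mode; -n reports problems without touching the disk:
    xfs_repair -n /dev/md1
    xfs_repair -n /dev/md2

If it finds nothing, the filesystems are clean and you can let the rebuild finish.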