M41k — Posted March 21

Hello everyone,

One of my two array devices reports the error "Unmountable: Unsupported or no file system". I have already searched the forum for a solution and read that formatting and restoring should not necessarily be the first step, but I have not found a good answer as to what else I should do at this point. I ran Check Filesystem Status, which told me there is an error and that I should look at the log. I also started the array once without the affected disk and then restarted it so that the array would emulate the disk. I suspect the error occurred because the NAS briefly lost power. Can someone give me useful tips on how to fix this? Thank you very much for your effort.

Attachment: cloud-diagnostics-20240227-1757.zip
JorgeB — Posted March 21

19 minutes ago, M41k said:
> that formatting and restoring should not necessarily be the first solution

That is not a solution, unless you want to delete everything on the disk. There is filesystem corruption on both the xfs and btrfs filesystems, and this btrfs error:

> write time tree block corruption detected

usually means a RAM issue, so start by running memtest.
M41k — Posted March 21 (Author)

Thank you very much for the quick response. The test is currently underway. I have an unused RAM module here anyway; does it make sense to replace the old memory right away, or is it better to wait until the test has finished?
JorgeB — Posted March 21

I would wait for the test, but since memtest is only definitive when errors are found, it may still be worth replacing the RAM even if none are.
M41k — Posted March 22 (Author)

The test completed without finding any errors. I replaced the memory anyway. What can I do next to solve my problem?
JorgeB — Posted March 22 (marked as Solution)

Check the filesystem on disk1, run it without -n, and also run a correcting scrub on the pool.
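For readers following along, the two steps above look roughly like this from the command line. The device name /dev/md1 for disk1 and the mount point /mnt/cache for the btrfs pool are assumptions; substitute your own, and note that on Unraid the filesystem check is normally started from the GUI with the array in maintenance mode:

```shell
# Assumed device/mount names -- verify against your own system first.

# XFS check on disk1 (on Unraid, array disk1 is typically /dev/md1):
xfs_repair -n /dev/md1   # -n = read-only check, reports problems only
xfs_repair /dev/md1      # without -n, actually writes repairs to disk

# Correcting scrub on a mounted btrfs pool (assumed mount: /mnt/cache).
# A scrub reads all data/metadata, verifies checksums, and repairs
# bad copies from a good copy where redundancy allows:
btrfs scrub start /mnt/cache
btrfs scrub status /mnt/cache   # progress and error counts
```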
M41k — Posted March 22 (Author)

The check immediately throws the following error:

> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - zero log...
> ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

Should I really run the check with the -L argument, or are there other suggestions?
JorgeB — Posted March 22

58 minutes ago, M41k said:
> Should I really run the check with the -L argument

Yep.
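For reference, the destructive log-zeroing step looks like this. The device name /dev/md1 is an assumption (use your disk1 device, with the array in maintenance mode), and as the error message says, attempt a normal mount first:

```shell
# WARNING: -L zeroes the XFS log. Metadata changes that existed only
# in the log are discarded, which can damage recently written files;
# orphaned files may end up in the lost+found directory afterwards.
# Only use this after a mount attempt has failed.
xfs_repair -L /dev/md1   # /dev/md1 is an assumed disk1 device
```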
M41k — Posted March 23 (Author)

I'm not sure whether this has made my problem worse. After the process finished, all my shares are gone, but the hard drives are still 76% full. What should I do now?
M41k — Posted March 24 (Author)

I take everything back: after a restart, everything is back.