Rorororororororo Posted March 22, 2022 (edited)

Hey all, looks like I have a similar issue as: --- ---

Symptoms:
- I/O errors over NFS and SMB
- "Input/output error" when SSH'd into the Unraid server and writing to the /mnt/user/ shfs shares
- Going direct to /mnt/disk#/ did allow a write (i.e. touch)

Notes: there were about 85 errors on one of my array disks, which is starting to fail. I've attached my diag zip. ---

I attached some screenshots showing:
- the failed touch against /mnt/user/
- the successful touch against /mnt/disk1/
- the xfs_repair no-changes dry run
- xfs_repair for real, showing the same error seen in the attached threads
- xfs_repair for real running without error **

---

** I started the array in maintenance mode in order to get xfs_repair to run without error.

I'm restarting the array back into normal mode now, but I wanted to see if anyone could help me through the diag logs to find the file with the best lead on my error. Device sdg is my failing disk, but I didn't think a working-but-failing drive would be able to take the entire array offline. The array came back online without issue, and the touch test passes now (final screenshot).

Unraid details:
- 6.9.2
- 8 disks + 2 parity

Edit 1: I noticed that one of my disks (not the failing disk) was showing as "Unmountable: not mounted".

backbone-diagnostics-20220321-2214.zip

Edited March 22, 2022 by Rorororororororo
fix version; organize SSs
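For anyone hitting the same symptom, the touch test described above can be sketched like this (a minimal sketch; "some-share" is a hypothetical share name, and /mnt/disk1 is assumed to be the underlying array disk that backs it):

```shell
# Write test through the shfs user share -- this is what failed for me
# with "touch: cannot touch '...': Input/output error"
touch /mnt/user/some-share/test-file

# Same test directly against the underlying disk -- this succeeded,
# which suggests the problem is in shfs/the filesystem, not the drive hardware
touch /mnt/disk1/some-share/test-file

# Clean up the probe file afterwards
rm -f /mnt/disk1/some-share/test-file
```

If the user-share write fails while the direct disk write works, that points at a filesystem problem on one of the disks behind the share rather than a dead drive.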
JorgeB Posted March 22, 2022

Check filesystem on disk5.
JorgeB Posted March 22, 2022

You have to do it without -n, or nothing will be done.
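A sketch of what that looks like from the console, assuming the array is in maintenance mode and disk5 corresponds to /dev/md5 (on Unraid, repairing the mdX device rather than the raw sdX device keeps parity in sync; the device number here is an assumption based on the disk slot):

```shell
# Dry run first: -n only reports problems, it modifies nothing
xfs_repair -n /dev/md5

# Actual repair: the same command without -n
xfs_repair /dev/md5
```

The GUI equivalent is clicking the disk on the Main tab and running the filesystem check there, first with -n and then without.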
Rorororororororo Posted March 22, 2022 (Author)

3 hours ago, JorgeB said:
> Check filesystem on disk5.

fsck or xfs_repair?
JorgeB Posted March 22, 2022

Just now, Rorororororororo said:
> fsck or xfs_repair?

Click on the link and you'll see the instructions.
Rorororororororo Posted March 22, 2022 (Author)

Just now, Rorororororororo said:
> fsck or xfs_repair?

I had the screenshots mixed up. I ran without the -n twice: before maintenance mode there was a failure; after, no failure. But this was for a different disk than the one showing as unmounted. I'll try to rerun it on that disk today.
Rorororororororo Posted March 22, 2022 (Author)

10 minutes ago, JorgeB said:
> Click on the link and you'll see the instructions.

Didn't notice the link. I'll blame it on being on mobile, or it being too early in the a.m. Thanks!
Rorororororororo Posted March 24, 2022 (Author)

On 3/22/2022 at 4:20 AM, JorgeB said:
> You have to do it without -n, or nothing will be done.

Bit of the opposite issue with the disk5/sdc1 disk: I could run the repair in dry-run mode, but got the same "valuable metadata changes" error when trying to run it live, and couldn't mount the disk as described. I opted for the last-resort option and used -L.

After restarting the array, it looks like sdc is alive again. No idea what happened. It's a bit concerning that whatever this issue was put the array into read-only mode.
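For reference, the sequence I hit looks roughly like this (a sketch, assuming disk5 maps to /dev/md5 as above; -L is destructive, so only use it when the mount-to-replay-the-log step genuinely fails):

```shell
# Live repair refuses to run because the journal has unreplayed changes:
#   "ERROR: The filesystem has valuable metadata changes in a log which
#    needs to be replayed. Mount the filesystem to replay the log..."
xfs_repair /dev/md5

# If mounting to replay the log is impossible, the last resort is to
# zero (discard) the log. Recent metadata changes in the log are lost.
xfs_repair -L /dev/md5
```

After an -L repair it's worth checking the disk's lost+found directory, since orphaned files from the discarded log entries can end up there.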
JorgeB Posted March 24, 2022

5 hours ago, Rorororororororo said:
> I opted for the last-resort option and used -L

That's common and usually fine.