MooTheKow Posted December 25, 2022
sdc is the original disk 3 (which failed its SMART health check and was shown as unmountable when I originally started this thread).
MooTheKow Posted December 25, 2022
Another update -- I rebooted and restarted the array, and now disk 1 is showing up as mounted. Should I still attempt those steps on disk 3 (which is currently 'not installed')? Or was that what I should do after adding the (old parity) drive as disk 3? (UPDATE: guessing that is what you meant, as clicking on disk 3 when it is 'not installed' doesn't allow me to do anything, best I can tell.)
kowunraid-diagnostics-20221225-1347.zip
trurl Posted December 25, 2022
3 minutes ago, MooTheKow said:
"Another update -- I rebooted and restarted the array and now disk 1 is showing up as mounted."
OK, it didn't mount before because of a duplicate UUID. Not sure why a reboot fixed that, but let's continue. DO NOT reassign any disk as disk 3 yet.
11 minutes ago, trurl said:
"Click on Disk 3 to get to its page, set its filesystem to xfs, then check filesystem on emulated disk 3. Be sure to capture the output so you can post it."
If it still won't let you check the filesystem from the webUI that way, we can try from the command line.
MooTheKow Posted December 25, 2022
This is what I see on disk 3 settings. I can change it to 'XFS' and click 'Done' - but then it just reverts back to auto.
MooTheKow Posted December 25, 2022
Hey - sorry, reading that link - do I need to be in maintenance mode? Right now my array is just running normally.
trurl Posted December 25, 2022
12 minutes ago, MooTheKow said:
"guessing that is what you meant"
Please don't act on any guesses. From the command line:
xfs_repair -nv /dev/md3
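For anyone following along, here is trurl's dry-run check annotated as a sketch. It assumes Unraid's /dev/mdX naming for array devices (as used in this thread) and that the array is started in Maintenance mode so the filesystem is not mounted:

```shell
# Read-only check of the emulated disk 3 filesystem.
#   -n : no modify - report problems but change nothing on disk
#   -v : verbose output
# With -n, xfs_repair exits with status 1 when it detects corruption
# and 0 when the filesystem is clean, so the result can be tested:
xfs_repair -nv /dev/md3 && echo "filesystem clean" || echo "corruption found"
```

The -n pass is what makes this safe to run first: it tells you how bad things look before anything is written to the disk.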
trurl Posted December 25, 2022
1 minute ago, MooTheKow said:
"do I need to be in maintenance mode?"
Sorry, yes - maintenance mode.
itimpi Posted December 25, 2022
5 minutes ago, MooTheKow said:
"I can change it to 'XFS' and click 'Done' - but then it just reverts back to auto."
Did you click Apply after changing the setting? Simply clicking Done will not make any change.
MooTheKow Posted December 25, 2022
1) Going to switch to maintenance mode -- 'apply changes' was not available as an option after I switched from 'auto' to 'xfs' -- UPDATE: apply changes is still greyed out even in maintenance mode - attempting command line now.
2) Oh - absolutely not going to act on any assumptions --- you've devoted way too much time to helping me for me to hose things up by acting on something I'm not 100% certain of at this point. 🙂
itimpi Posted December 25, 2022
3 minutes ago, MooTheKow said:
"'apply changes' was not available as an option after I switched from 'auto' to 'xfs' -- UPDATE: still greyed out even in maintenance mode"
You can only make changes to the disk settings if the array is stopped.
MooTheKow Posted December 25, 2022
Here is the output from the command line xfs_repair: xfs_repair_output_disk_3.txt
Do I still need to stop the array and change the file system on that disk 3?
trurl Posted December 25, 2022
1 minute ago, MooTheKow said:
"Here is the output from the command line xfs_repair"
You were in Maintenance mode? Looks kind of messy, but let's see what we get without -n. From the command line:
xfs_repair /dev/md3
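The same command without -n actually writes the repairs. A sketch, again assuming the device path used in this thread and the array started in Maintenance mode:

```shell
# Destructive (repairing) run - tee the output into a file so it can
# be posted to the thread afterwards:
xfs_repair /dev/md3 2>&1 | tee xfs_repair_output_disk_3.txt

# If xfs_repair stops and complains about a dirty log, it will ask you
# to mount the filesystem first so the log gets replayed. Running
# xfs_repair -L (zero the log) is the last resort: it lets the repair
# proceed but can discard recently logged metadata updates.
```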
MooTheKow Posted December 25, 2022
Ok - live run of the utility completed - here is the output: xfs_repair_output_disk_3_live.txt
trurl Posted December 25, 2022
Start the array in Normal (not Maintenance) mode and post new diagnostics.
MooTheKow Posted December 25, 2022
Done: kowunraid-diagnostics-20221225-1425.zip
trurl Posted December 25, 2022
Emulated disk 3 mounts and appears to be about half full. You should be able to see files on emulated disk 3 now.
Disk 3 also has a new lost+found share on it, where the repair put things it couldn't figure out. Don't know how much trouble that will be. If you go to the User Shares page and click Compute... for that lost+found share, it will show you how much is in it. Check it out and let us know. More instructions to come.
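The Compute... button sizes the share from the webUI; a rough command-line equivalent, assuming Unraid's standard /mnt/diskN mount points (adjust the disk number to match your array):

```shell
# Total size of everything the repair moved into lost+found:
du -sh /mnt/disk3/lost+found

# Per-directory breakdown, largest first, to triage what got recovered:
du -h --max-depth=1 /mnt/disk3/lost+found | sort -rh
```

Files in lost+found are usually named after their original inode numbers when the directory entry was lost, which is why recognizable file names inside the recovered folders (as reported below) make the cleanup much easier.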
MooTheKow Posted December 25, 2022
Thanks... lost+found size is about 60 GB -- which, relative to everything else, isn't all that much -- and the files in the folders it lists largely have file names that'd be relatively easy for me to sort out. Only like 8 folders - and not looking like anything mission critical so far. All things considered - not bad.
FYI - placed an order for another 14 TB drive to add - should be here in 2 days hopefully.
trurl Posted December 25, 2022
OK. Looks like disk 1 worked well enough to mount, and perhaps more importantly, disk 1 worked well enough to participate in emulating disk 3. Rebuilding disk 3 to the former parity disk should result in the same contents as emulated disk 3, if everything continues to work well enough. If there is anything critical you want to try to copy off the array before attempting the rebuild, do it now - but maybe best if you don't exercise disk 1 too much, since it still has issues, and any access to disk 1 or emulated disk 3 is going to involve disk 1.
trurl Posted December 25, 2022
And in case it isn't obvious, rebuilding disk 3 is going to require disk 1 to continue working well enough that the emulated disk 3 data can be calculated from parity and all other disks.
MooTheKow Posted December 25, 2022
Yup - so what is my next step? Stop the array - and just assign my old parity drive to disk 3? Then start the array -- at which point it will automatically attempt to rebuild?
trurl Posted December 25, 2022
Just now, MooTheKow said:
"Stop the array - and just assign my old parity drive to disk 3? Then start the array -- at which point it will automatically attempt to rebuild?"
Correct.
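The confirmed procedure in order, plus a way to watch the rebuild from a shell. mdcmd is Unraid's array-control tool; the exact status field names here are an assumption and may differ between Unraid versions - the Main page of the webUI shows the same progress:

```shell
# 1. Stop the array (Main -> Stop in the webUI).
# 2. Assign the old parity drive to the disk 3 slot.
# 3. Start the array - Unraid begins rebuilding the new disk 3 from
#    parity plus the contents of all other data disks.

# Poll rebuild progress every 60 seconds (assumed field names):
watch -n 60 'mdcmd status | grep -iE "mdResync|mdState"'
```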
MooTheKow Posted December 25, 2022
Huzzah. I understand I'm not out of the woods yet - but the fact that I've gotten this far at least is so awesome. I can't tell you how much of a difference your help has made and how much I appreciate it.
trurl Posted December 25, 2022
22 minutes ago, MooTheKow said:
"FYI - placed an order for another 14 TB drive to add - should be here in 2 days hopefully."
You will be using that to REPLACE disk 1? I always like to reserve the word ADD to mean a new slot in the array.
MooTheKow Posted December 25, 2022
Sorry - yes, REPLACE :-). Also - rebuild started... 17 hours of holding my breath now 🙂
MooTheKow Posted December 26, 2022
Well --- dare I say it completed successfully? That drive is clearly struggling and needs replacing - but it actually finished: kowunraid-diagnostics-20221226-1058.zip