MooTheKow

Everything posted by MooTheKow

  1. Huzzah: Last check completed on Fri 30 Dec 2022 09:18:26 PM EST (today), finding 0 errors. Duration: 1 day, 5 hours, 21 minutes, 30 seconds. Replacement drive now up and running. Thanks again for all the help!
  2. Just an update -- so far, so good. The replacement hard drive finally arrived today (lousy holidays :-)). So far I've only found a minor amount of missing data, and I've been able to recover it from other sources, so life is good there. Now I just have to wait 22 hours for the 14 TB rebuild to complete, and I think I'll be in the clear. Thanks again for your help -- it made my holiday season here a lot less depressing :). After the rebuild finishes, I'm thinking of updating to the latest version of unRaid (currently on version 6.9.2) -- any gotchas or anything I should be aware of before attempting it?
  3. Unclear - I just ran it a second time because the first run ended with this in the log: fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair
  4. Sorry - "We usually try to get emulated disk mountable and with no corruption before rebuilding. Especially when rebuilding to the same disk, but you aren't. In your case, you can repair before or after rebuild since the original disk (such as it is) will still have its contents (if they can be read)." Right now the emulated disk is mounted, isn't it? And is there still corruption? I thought the check-filesystem utility took care of that, and that's why I have a bunch of stuff in lost+found now. Sorry for all the questions - but after all the work you've put into helping me with this, I want to make sure I get it right down the stretch :-).
  5. Sorry - for clarity: I had already started going through the lost+found on the emulated disk 1. Is this a bad idea/waste of time at this point, or might it potentially hose things up further? Are you saying I shouldn't do that until I actually have the replacement disk installed? When I install the replacement disk, it should rebuild everything on the emulated disk 1 (including the lost+found files) onto the replacement drive, correct?
  6. Welp, a lot more in lost+found now 🙂, but the share is available again. (595 objects: 317 directories, 278 files, 102 GB total in the lost+found folder on emulated disk 1.) So far it looks like it's not too bad: browsing into the folders, I'll find that all the contents were a single folder like 'Season 02' of House with all the filenames intact, so for now I'm just creating a 'House' folder, moving the folder into it, then renaming it to 'Season 02'. Thank you so much for the help. Is there a best-practice way for me to move files from lost+found to my sorted folders in 'KowData'? It looks like Windows wants to actually copy the files if I try to move them between folders (see the move sketch after this list). Disk 1 NonTrial - Attempt 2.txt Disk 1 NonTrial - Attempt 1.txt kowunraid-diagnostics-20221226-1937.zip
  7. Ok - ran it: results.txt Still the same behavior, though. Here are the diagnostics I just ran after that check-filesystem without -n: kowunraid-diagnostics-20221226-1416.zip
  8. Haven’t had a chance to yet, will do that as soon as I get home, thanks
  9. Here are the diagnostics: kowunraid-diagnostics-20221226-1311.zip Also attaching a screenshot of the shares and examples of browsing the individual disks.
  10. Correct, it listed nothing in user shares at all. I did reboot; trying again. In Windows I used to be able to access \\kowunraid\Kowdata, but now the only thing I'm seeing is \\kowunraid\flash. I can use the browse button next to the individual drives, see a "Kowdata" in them, and list (and download) files (see the share-check sketch after this list). I have some family obligations to attend to now, but will be back later this evening to investigate further.
  11. Will do. I just realized that after doing that 'new config' (I think?) I lost my share. How can I add it back? I tried going to 'Shares' and adding a new share with the same name I used to have, but it doesn't seem to be sticking. After clicking 'Add Share' it just goes back to a blank list of shares. It's been a long time since I set up the original one, so what else do I need to be doing?
  12. Ah - that makes a lot more sense (re: the timing of it being disabled). As for the check filesystem on disk 3 -- here are the results. Anything look noteworthy? Disk3 Check.txt Also - even assuming there is some bad data on the rebuilt disk 3, I'm still calling it a win that it wasn't a complete loss.
  13. Thanks for the info. So - I think this is a successful result? :-) At this point, all that is left/can be done is to wait for my replacement drive to arrive, remove disk 1, and then add the new drive to the array in the disk 1 position, correct? At that point it will again do a rebuild.
  14. And immediately after posting that/running diagnostics -- it did this. Would running the diagnostics utility actually have triggered it being disabled? Just sheer curiosity at this point. Glad it waited until just after the 19-hour rebuild finished.
  15. Well --- dare I say it completed successfully? That drive is clearly struggling and needs replacing - but it actually finished: kowunraid-diagnostics-20221226-1058.zip
  16. Sorry - yes, REPLACE :-). Also - rebuild started.. 17 hours of holding my breath now 🙂
  17. Huzzah. I understand I'm not out of the woods yet - but the fact that I've gotten this far at least is so awesome. I can't tell you how much of a difference your help has made and how much I appreciate it.
  18. Yup - so what is my next step? Stop the array - and just assign my old parity drive to disk 3? Then start the array -- at which point it will automatically attempt to rebuild?
  19. Thanks... the lost+found size is about 60 GB, which relative to everything else isn't all that much, and the files in the folders it lists largely have filenames that would be relatively easy for me to sort out. Only about 8 folders, and nothing mission-critical so far. All things considered, not bad. FYI - I placed an order for another 14 TB drive to add; it should hopefully be here in 2 days.
  20. Ok - live run of the utility completed - here is the output: xfs_repair_output_disk_3_live.txt
  21. Here is the output from the command line xfs_repair: xfs_repair_output_disk_3.txt Do I still need to stop the array and change the file system on that disk 3?
  22. 1) Going to switch to maintenance mode -- 'Apply changes' was not available as an option after I switched from 'auto' to 'xfs' -- UPDATE: 'Apply changes' is still greyed out even in maintenance mode - attempting the command line now (see the xfs_repair sketch after this list). 2) Oh - absolutely not going to act on any assumptions --- you've devoted way too much time to helping me for me to hose things up by acting on something I'm not 100% certain about at this point. 🙂
  23. Hey - sorry, reading that link: do I need to be in maintenance mode? Right now my array is just running normally.
  24. This is what I see in the disk 3 settings. I can change it to 'xfs' and click 'Done', but then it just reverts back to 'auto'.
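
A minimal sketch of the command-line xfs_repair flow referenced in posts 3, 7, and 20-22, assuming the array is started in maintenance mode and that disk 3 is exposed as the parity-protected device /dev/md3 (the device name is an assumption for a 6.9.x system; verify it on the Main page before running anything):

    # Dry run first: -n reports problems without changing anything on disk
    xfs_repair -n /dev/md3

    # Live repair once the dry-run output has been reviewed
    xfs_repair /dev/md3

    # If the run ends with "File system metadata writeout failed, err=117",
    # the message itself asks for another pass, so run the same command again
    xfs_repair /dev/md3

Repairing the /dev/mdX device rather than the raw /dev/sdX device is what keeps parity in line with the changes xfs_repair writes.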
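
On the question in post 6 about moving recovered files out of lost+found: a minimal sketch of doing the move in a terminal on the server rather than over SMB from Windows, assuming the disk is mounted at /mnt/disk1 and the share folder is named KowData (the numeric lost+found directory and the House/Season 02 target below are placeholders):

    # A move within the same disk/filesystem is a rename, so it is near-instant;
    # a move through the Windows share copies the data and then deletes the source.
    mkdir -p "/mnt/disk1/KowData/House/Season 02"
    mv "/mnt/disk1/lost+found/12345/"* "/mnt/disk1/KowData/House/Season 02/"

Keeping the source and destination on the same /mnt/disk1 path (rather than mixing /mnt/diskX and /mnt/user paths in one command) also sidesteps a commonly cited Unraid pitfall of a file being copied onto itself.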
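
And on the missing KowData share in posts 10 and 11: user shares are just the top-level folders on the array disks presented as one merged view, so a quick sanity check from the Unraid terminal (share and disk names taken from the posts; the disk count is an assumption) is:

    # Top-level folders on each data disk
    ls /mnt/disk1 /mnt/disk2 /mnt/disk3

    # The merged user-share view; KowData should show up here if the folder
    # still exists on any disk and user shares are enabled
    ls /mnt/user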